Commit

Github action: auto-update.
github-actions[bot] committed Feb 3, 2025
1 parent ca9fbba commit 9f6595f
Showing 56 changed files with 269 additions and 253 deletions.
13 binary files changed (contents not shown).
2 changes: 1 addition & 1 deletion dev/_modules/index.html
@@ -117,7 +117,7 @@ <h1>All modules for which code is available</h1>
 <li><a href="neuralop/data/datasets/darcy.html">neuralop.data.datasets.darcy</a></li>
 <li><a href="neuralop/data/datasets/navier_stokes.html">neuralop.data.datasets.navier_stokes</a></li>
 <li><a href="neuralop/data/transforms/data_processors.html">neuralop.data.transforms.data_processors</a></li>
-<li><a href="neuralop/layers/coda_blocks.html">neuralop.layers.coda_blocks</a></li>
+<li><a href="neuralop/layers/coda_layer.html">neuralop.layers.coda_layer</a></li>
 <li><a href="neuralop/layers/differential_conv.html">neuralop.layers.differential_conv</a></li>
 <li><a href="neuralop/layers/discrete_continuous_convolution.html">neuralop.layers.discrete_continuous_convolution</a></li>
 <li><a href="neuralop/layers/embeddings.html">neuralop.layers.embeddings</a></li>

Large diffs are not rendered by default.

2 changes: 1 addition & 1 deletion dev/_sources/auto_examples/data/plot_darcy_flow.rst.txt
@@ -161,7 +161,7 @@ Visualizing the data
 .. rst-class:: sphx-glr-timing

-**Total running time of the script:** (0 minutes 0.325 seconds)
+**Total running time of the script:** (0 minutes 0.375 seconds)


 .. _sphx_glr_download_auto_examples_data_plot_darcy_flow.py:
@@ -216,7 +216,7 @@ Loading the Navier-Stokes dataset in 128x128 resolution

 .. rst-class:: sphx-glr-timing

-**Total running time of the script:** (0 minutes 0.186 seconds)
+**Total running time of the script:** (0 minutes 0.188 seconds)


 .. _sphx_glr_download_auto_examples_data_plot_darcy_flow_spectrum.py:
6 changes: 3 additions & 3 deletions dev/_sources/auto_examples/data/sg_execution_times.rst.txt
@@ -6,7 +6,7 @@

 Computation times
 =================
-**00:00.512** total execution time for 2 files **from auto_examples/data**:
+**00:00.563** total execution time for 2 files **from auto_examples/data**:

 .. container::

@@ -33,8 +33,8 @@ Computation times
     - Time
     - Mem (MB)
   * - :ref:`sphx_glr_auto_examples_data_plot_darcy_flow.py` (``plot_darcy_flow.py``)
-    - 00:00.325
+    - 00:00.375
     - 0.0
   * - :ref:`sphx_glr_auto_examples_data_plot_darcy_flow_spectrum.py` (``plot_darcy_flow_spectrum.py``)
-    - 00:00.186
+    - 00:00.188
     - 0.0
@@ -356,7 +356,7 @@ in order to compute the convolved image, we need to first bring it into the righ
 .. code-block:: none
-    <matplotlib.colorbar.Colorbar object at 0x7f123d6a8cd0>
+    <matplotlib.colorbar.Colorbar object at 0x7fb5a93cd1d0>
@@ -447,7 +447,7 @@ in order to compute the convolved image, we need to first bring it into the righ
 .. rst-class:: sphx-glr-timing

-**Total running time of the script:** (0 minutes 30.600 seconds)
+**Total running time of the script:** (0 minutes 30.558 seconds)


 .. _sphx_glr_download_auto_examples_layers_plot_DISCO_convolutions.py:
4 changes: 2 additions & 2 deletions dev/_sources/auto_examples/layers/plot_embeddings.rst.txt
@@ -73,7 +73,7 @@ Let's walk through its use. We start with a function that gives the coordinates
 .. code-block:: none
-    <matplotlib.legend.Legend object at 0x7f123dedd310>
+    <matplotlib.legend.Legend object at 0x7fb5a9c69d10>
@@ -244,7 +244,7 @@ Assuming we have one channel of data discretized on a 5x5x5 cube:
 .. rst-class:: sphx-glr-timing

-**Total running time of the script:** (0 minutes 0.590 seconds)
+**Total running time of the script:** (0 minutes 0.595 seconds)


 .. _sphx_glr_download_auto_examples_layers_plot_embeddings.py:
@@ -85,7 +85,7 @@ The first step of this process is a neighbor search:
 .. code-block:: none
-    <matplotlib.legend.Legend object at 0x7f123dc302d0>
+    <matplotlib.legend.Legend object at 0x7fb5a99d3c50>
@@ -136,7 +136,7 @@ Now, let's select a point in the output and visualize its neighbors.
 .. rst-class:: sphx-glr-timing

-**Total running time of the script:** (0 minutes 0.127 seconds)
+**Total running time of the script:** (0 minutes 0.130 seconds)


 .. _sphx_glr_download_auto_examples_layers_plot_neighbor_search.py:
8 changes: 4 additions & 4 deletions dev/_sources/auto_examples/layers/sg_execution_times.rst.txt
@@ -6,7 +6,7 @@

 Computation times
 =================
-**00:31.317** total execution time for 3 files **from auto_examples/layers**:
+**00:31.282** total execution time for 3 files **from auto_examples/layers**:

 .. container::

@@ -33,11 +33,11 @@ Computation times
     - Time
     - Mem (MB)
   * - :ref:`sphx_glr_auto_examples_layers_plot_DISCO_convolutions.py` (``plot_DISCO_convolutions.py``)
-    - 00:30.600
+    - 00:30.558
     - 0.0
   * - :ref:`sphx_glr_auto_examples_layers_plot_embeddings.py` (``plot_embeddings.py``)
-    - 00:00.590
+    - 00:00.595
     - 0.0
   * - :ref:`sphx_glr_auto_examples_layers_plot_neighbor_search.py` (``plot_neighbor_search.py``)
-    - 00:00.127
+    - 00:00.130
     - 0.0
18 changes: 9 additions & 9 deletions dev/_sources/auto_examples/models/plot_FNO_darcy.rst.txt
@@ -243,13 +243,13 @@ Training the model
 )
 ### SCHEDULER ###
-<torch.optim.lr_scheduler.CosineAnnealingLR object at 0x7f123db047d0>
+<torch.optim.lr_scheduler.CosineAnnealingLR object at 0x7fb5a9aef9d0>
 ### LOSSES ###
-* Train: <neuralop.losses.data_losses.H1Loss object at 0x7f123db04050>
+* Train: <neuralop.losses.data_losses.H1Loss object at 0x7fb5a9aee710>
-* Test: {'h1': <neuralop.losses.data_losses.H1Loss object at 0x7f123db04050>, 'l2': <neuralop.losses.data_losses.LpLoss object at 0x7f123db04550>}
+* Test: {'h1': <neuralop.losses.data_losses.H1Loss object at 0x7fb5a9aee710>, 'l2': <neuralop.losses.data_losses.LpLoss object at 0x7fb5a9aed310>}
@@ -306,22 +306,22 @@ Then train the model on our small Darcy-Flow dataset:
 Training on 1000 samples
 Testing on [50, 50] samples on resolutions [16, 32].
 Raw outputs of shape torch.Size([32, 1, 16, 16])
-[0] time=2.48, avg_loss=0.5903, train_err=18.4462
+[0] time=2.49, avg_loss=0.5903, train_err=18.4462
 Eval: 16_h1=0.3639, 16_l2=0.2512, 32_h1=0.5425, 32_l2=0.2628
 [3] time=2.46, avg_loss=0.2077, train_err=6.4899
 Eval: 16_h1=0.2191, 16_l2=0.1505, 32_h1=0.4976, 32_l2=0.1840
 [6] time=2.47, avg_loss=0.1768, train_err=5.5240
 Eval: 16_h1=0.2902, 16_l2=0.2180, 32_h1=0.5098, 32_l2=0.2476
-[9] time=2.46, avg_loss=0.1399, train_err=4.3722
+[9] time=2.50, avg_loss=0.1399, train_err=4.3722
 Eval: 16_h1=0.2089, 16_l2=0.1298, 32_h1=0.5057, 32_l2=0.1746
-[12] time=2.54, avg_loss=0.1350, train_err=4.2196
+[12] time=2.47, avg_loss=0.1350, train_err=4.2196
 Eval: 16_h1=0.2000, 16_l2=0.1225, 32_h1=0.4881, 32_l2=0.1636
-[15] time=2.50, avg_loss=0.1203, train_err=3.7584
+[15] time=2.47, avg_loss=0.1203, train_err=3.7584
 Eval: 16_h1=0.2045, 16_l2=0.1263, 32_h1=0.5199, 32_l2=0.1752
 [18] time=2.46, avg_loss=0.0929, train_err=2.9023
 Eval: 16_h1=0.1954, 16_l2=0.1193, 32_h1=0.5096, 32_l2=0.1704
-{'train_err': 2.8530129194259644, 'avg_loss': 0.09129641342163086, 'avg_lasso_loss': None, 'epoch_train_time': 2.4674222869999767}
+{'train_err': 2.8530129194259644, 'avg_loss': 0.09129641342163086, 'avg_lasso_loss': None, 'epoch_train_time': 2.475966762999974}
@@ -471,7 +471,7 @@ are other ways to scale the outputs of the FNO to train a true super-resolution

 .. rst-class:: sphx-glr-timing

-**Total running time of the script:** (0 minutes 50.546 seconds)
+**Total running time of the script:** (0 minutes 50.582 seconds)


 .. _sphx_glr_download_auto_examples_models_plot_FNO_darcy.py:
24 changes: 12 additions & 12 deletions dev/_sources/auto_examples/models/plot_SFNO_swe.rst.txt
@@ -234,13 +234,13 @@ Creating the losses
 )
 ### SCHEDULER ###
-<torch.optim.lr_scheduler.CosineAnnealingLR object at 0x7f123d025160>
+<torch.optim.lr_scheduler.CosineAnnealingLR object at 0x7fb5a8d51160>
 ### LOSSES ###
-* Train: <neuralop.losses.data_losses.LpLoss object at 0x7f123d0252b0>
+* Train: <neuralop.losses.data_losses.LpLoss object at 0x7fb5a8d512b0>
-* Test: {'l2': <neuralop.losses.data_losses.LpLoss object at 0x7f123d0252b0>}
+* Test: {'l2': <neuralop.losses.data_losses.LpLoss object at 0x7fb5a8d512b0>}
@@ -297,22 +297,22 @@ Train the model on the spherical SWE dataset
 Training on 200 samples
 Testing on [50, 50] samples on resolutions [(32, 64), (64, 128)].
 Raw outputs of shape torch.Size([4, 3, 32, 64])
-[0] time=3.57, avg_loss=2.5862, train_err=10.3447
+[0] time=3.52, avg_loss=2.5862, train_err=10.3447
 Eval: (32, 64)_l2=1.9413, (64, 128)_l2=2.3896
-[3] time=3.47, avg_loss=0.3843, train_err=1.5371
+[3] time=3.42, avg_loss=0.3843, train_err=1.5371
 Eval: (32, 64)_l2=0.5457, (64, 128)_l2=2.4473
-[6] time=3.42, avg_loss=0.2661, train_err=1.0643
+[6] time=3.39, avg_loss=0.2661, train_err=1.0643
 Eval: (32, 64)_l2=0.4888, (64, 128)_l2=2.3835
-[9] time=3.41, avg_loss=0.2174, train_err=0.8696
+[9] time=3.38, avg_loss=0.2174, train_err=0.8696
 Eval: (32, 64)_l2=0.3596, (64, 128)_l2=2.3757
-[12] time=3.42, avg_loss=0.1884, train_err=0.7538
+[12] time=3.39, avg_loss=0.1884, train_err=0.7538
 Eval: (32, 64)_l2=0.3351, (64, 128)_l2=2.3888
-[15] time=3.41, avg_loss=0.1575, train_err=0.6301
+[15] time=3.37, avg_loss=0.1575, train_err=0.6301
 Eval: (32, 64)_l2=0.2902, (64, 128)_l2=2.3897
-[18] time=3.47, avg_loss=0.1381, train_err=0.5524
+[18] time=3.40, avg_loss=0.1381, train_err=0.5524
 Eval: (32, 64)_l2=0.2958, (64, 128)_l2=2.3930
-{'train_err': 0.5088733458518981, 'avg_loss': 0.12721833646297454, 'avg_lasso_loss': None, 'epoch_train_time': 3.4227862730000425}
+{'train_err': 0.5088733458518981, 'avg_loss': 0.12721833646297454, 'avg_lasso_loss': None, 'epoch_train_time': 3.35410853999997}
@@ -383,7 +383,7 @@ In practice we would train a Neural Operator on one or multiple GPUs

 .. rst-class:: sphx-glr-timing

-**Total running time of the script:** (1 minutes 25.123 seconds)
+**Total running time of the script:** (1 minutes 23.685 seconds)


 .. _sphx_glr_download_auto_examples_models_plot_SFNO_swe.py:
24 changes: 12 additions & 12 deletions dev/_sources/auto_examples/models/plot_UNO_darcy.rst.txt
@@ -340,13 +340,13 @@ Creating the losses
 )
 ### SCHEDULER ###
-<torch.optim.lr_scheduler.CosineAnnealingLR object at 0x7f123db05d10>
+<torch.optim.lr_scheduler.CosineAnnealingLR object at 0x7fb5a9aee210>
 ### LOSSES ###
-* Train: <neuralop.losses.data_losses.H1Loss object at 0x7f123dff8440>
+* Train: <neuralop.losses.data_losses.H1Loss object at 0x7fb5a9a74440>
-* Test: {'h1': <neuralop.losses.data_losses.H1Loss object at 0x7f123dff8440>, 'l2': <neuralop.losses.data_losses.LpLoss object at 0x7f123db07250>}
+* Test: {'h1': <neuralop.losses.data_losses.H1Loss object at 0x7fb5a9a74440>, 'l2': <neuralop.losses.data_losses.LpLoss object at 0x7fb5a9aef890>}
@@ -405,22 +405,22 @@ Actually train the model on our small Darcy-Flow dataset
 Training on 1000 samples
 Testing on [50, 50] samples on resolutions [16, 32].
 Raw outputs of shape torch.Size([32, 1, 16, 16])
-[0] time=9.94, avg_loss=0.7052, train_err=22.0363
+[0] time=9.89, avg_loss=0.7052, train_err=22.0363
 Eval: 16_h1=0.4284, 16_l2=0.2792, 32_h1=0.9563, 32_l2=0.6271
-[3] time=9.87, avg_loss=0.2614, train_err=8.1678
+[3] time=9.84, avg_loss=0.2614, train_err=8.1678
 Eval: 16_h1=0.3142, 16_l2=0.2201, 32_h1=0.8163, 32_l2=0.5611
-[6] time=9.90, avg_loss=0.2037, train_err=6.3663
+[6] time=9.92, avg_loss=0.2037, train_err=6.3663
 Eval: 16_h1=0.2780, 16_l2=0.1813, 32_h1=0.7817, 32_l2=0.5216
-[9] time=9.88, avg_loss=0.2029, train_err=6.3391
+[9] time=9.90, avg_loss=0.2029, train_err=6.3391
 Eval: 16_h1=0.2832, 16_l2=0.1915, 32_h1=0.7754, 32_l2=0.5527
-[12] time=9.91, avg_loss=0.1943, train_err=6.0723
+[12] time=9.94, avg_loss=0.1943, train_err=6.0723
 Eval: 16_h1=0.2996, 16_l2=0.2057, 32_h1=0.7592, 32_l2=0.5096
-[15] time=9.89, avg_loss=0.1562, train_err=4.8821
+[15] time=9.97, avg_loss=0.1562, train_err=4.8821
 Eval: 16_h1=0.2614, 16_l2=0.1652, 32_h1=0.7594, 32_l2=0.4691
-[18] time=9.91, avg_loss=0.1626, train_err=5.0821
+[18] time=9.92, avg_loss=0.1626, train_err=5.0821
 Eval: 16_h1=0.2597, 16_l2=0.1639, 32_h1=0.7471, 32_l2=0.5076
-{'train_err': 4.352840971201658, 'avg_loss': 0.13929091107845307, 'avg_lasso_loss': None, 'epoch_train_time': 9.923147400000005}
+{'train_err': 4.352840971201658, 'avg_loss': 0.13929091107845307, 'avg_lasso_loss': None, 'epoch_train_time': 9.90717456599998}
@@ -494,7 +494,7 @@ In practice we would train a Neural Operator on one or multiple GPUs

 .. rst-class:: sphx-glr-timing

-**Total running time of the script:** (3 minutes 21.955 seconds)
+**Total running time of the script:** (3 minutes 21.599 seconds)


 .. _sphx_glr_download_auto_examples_models_plot_UNO_darcy.py:
8 changes: 4 additions & 4 deletions dev/_sources/auto_examples/models/sg_execution_times.rst.txt
@@ -6,7 +6,7 @@

 Computation times
 =================
-**05:37.624** total execution time for 3 files **from auto_examples/models**:
+**05:35.867** total execution time for 3 files **from auto_examples/models**:

 .. container::

@@ -33,11 +33,11 @@ Computation times
     - Time
     - Mem (MB)
   * - :ref:`sphx_glr_auto_examples_models_plot_UNO_darcy.py` (``plot_UNO_darcy.py``)
-    - 03:21.955
+    - 03:21.599
     - 0.0
   * - :ref:`sphx_glr_auto_examples_models_plot_SFNO_swe.py` (``plot_SFNO_swe.py``)
-    - 01:25.123
+    - 01:23.685
     - 0.0
   * - :ref:`sphx_glr_auto_examples_models_plot_FNO_darcy.py` (``plot_FNO_darcy.py``)
-    - 00:50.546
+    - 00:50.582
    - 0.0
4 changes: 2 additions & 2 deletions dev/_sources/auto_examples/training/plot_count_flops.rst.txt
@@ -80,7 +80,7 @@ This output is organized as a defaultdict object that counts the FLOPS used in e

 .. code-block:: none
-defaultdict(<function FlopTensorDispatchMode.__init__.<locals>.<lambda> at 0x7f122d902d40>, {'': defaultdict(<class 'int'>, {'convolution.default': 2982150144, 'bmm.default': 138412032}), 'lifting': defaultdict(<class 'int'>, {'convolution.default': 562036736}), 'lifting.fcs.0': defaultdict(<class 'int'>, {'convolution.default': 25165824}), 'lifting.fcs.1': defaultdict(<class 'int'>, {'convolution.default': 536870912}), 'fno_blocks': defaultdict(<class 'int'>, {'convolution.default': 2147483648, 'bmm.default': 138412032}), 'fno_blocks.fno_skips.0': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.fno_skips.0.conv': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.convs.0': defaultdict(<class 'int'>, {'bmm.default': 34603008}), 'fno_blocks.channel_mlp.0': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.channel_mlp.0.fcs.0': defaultdict(<class 'int'>, {'convolution.default': 134217728}), 'fno_blocks.channel_mlp.0.fcs.1': defaultdict(<class 'int'>, {'convolution.default': 134217728}), 'fno_blocks.fno_skips.1': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.fno_skips.1.conv': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.convs.1': defaultdict(<class 'int'>, {'bmm.default': 34603008}), 'fno_blocks.channel_mlp.1': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.channel_mlp.1.fcs.0': defaultdict(<class 'int'>, {'convolution.default': 134217728}), 'fno_blocks.channel_mlp.1.fcs.1': defaultdict(<class 'int'>, {'convolution.default': 134217728}), 'fno_blocks.fno_skips.2': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.fno_skips.2.conv': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.convs.2': defaultdict(<class 'int'>, {'bmm.default': 34603008}), 'fno_blocks.channel_mlp.2': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.channel_mlp.2.fcs.0': defaultdict(<class 'int'>, {'convolution.default': 134217728}), 'fno_blocks.channel_mlp.2.fcs.1': defaultdict(<class 'int'>, {'convolution.default': 134217728}), 'fno_blocks.fno_skips.3': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.fno_skips.3.conv': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.convs.3': defaultdict(<class 'int'>, {'bmm.default': 34603008}), 'fno_blocks.channel_mlp.3': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.channel_mlp.3.fcs.0': defaultdict(<class 'int'>, {'convolution.default': 134217728}), 'fno_blocks.channel_mlp.3.fcs.1': defaultdict(<class 'int'>, {'convolution.default': 134217728}), 'projection': defaultdict(<class 'int'>, {'convolution.default': 272629760}), 'projection.fcs.0': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'projection.fcs.1': defaultdict(<class 'int'>, {'convolution.default': 4194304})})
+defaultdict(<function FlopTensorDispatchMode.__init__.<locals>.<lambda> at 0x7fb5996bb560>, {'': defaultdict(<class 'int'>, {'convolution.default': 2982150144, 'bmm.default': 138412032}), 'lifting': defaultdict(<class 'int'>, {'convolution.default': 562036736}), 'lifting.fcs.0': defaultdict(<class 'int'>, {'convolution.default': 25165824}), 'lifting.fcs.1': defaultdict(<class 'int'>, {'convolution.default': 536870912}), 'fno_blocks': defaultdict(<class 'int'>, {'convolution.default': 2147483648, 'bmm.default': 138412032}), 'fno_blocks.fno_skips.0': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.fno_skips.0.conv': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.convs.0': defaultdict(<class 'int'>, {'bmm.default': 34603008}), 'fno_blocks.channel_mlp.0': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.channel_mlp.0.fcs.0': defaultdict(<class 'int'>, {'convolution.default': 134217728}), 'fno_blocks.channel_mlp.0.fcs.1': defaultdict(<class 'int'>, {'convolution.default': 134217728}), 'fno_blocks.fno_skips.1': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.fno_skips.1.conv': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.convs.1': defaultdict(<class 'int'>, {'bmm.default': 34603008}), 'fno_blocks.channel_mlp.1': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.channel_mlp.1.fcs.0': defaultdict(<class 'int'>, {'convolution.default': 134217728}), 'fno_blocks.channel_mlp.1.fcs.1': defaultdict(<class 'int'>, {'convolution.default': 134217728}), 'fno_blocks.fno_skips.2': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.fno_skips.2.conv': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.convs.2': defaultdict(<class 'int'>, {'bmm.default': 34603008}), 'fno_blocks.channel_mlp.2': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.channel_mlp.2.fcs.0': defaultdict(<class 'int'>, {'convolution.default': 134217728}), 'fno_blocks.channel_mlp.2.fcs.1': defaultdict(<class 'int'>, {'convolution.default': 134217728}), 'fno_blocks.fno_skips.3': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.fno_skips.3.conv': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.convs.3': defaultdict(<class 'int'>, {'bmm.default': 34603008}), 'fno_blocks.channel_mlp.3': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.channel_mlp.3.fcs.0': defaultdict(<class 'int'>, {'convolution.default': 134217728}), 'fno_blocks.channel_mlp.3.fcs.1': defaultdict(<class 'int'>, {'convolution.default': 134217728}), 'projection': defaultdict(<class 'int'>, {'convolution.default': 272629760}), 'projection.fcs.0': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'projection.fcs.1': defaultdict(<class 'int'>, {'convolution.default': 4194304})})
@@ -125,7 +125,7 @@ To check the maximum FLOPS used during the forward pass, let's create a recursiv

 .. rst-class:: sphx-glr-timing

-**Total running time of the script:** (0 minutes 2.546 seconds)
+**Total running time of the script:** (0 minutes 2.600 seconds)


 .. _sphx_glr_download_auto_examples_training_plot_count_flops.py: