Commit 8be73b1

Github action: auto-update.

github-actions[bot] committed Dec 16, 2024
1 parent a798c0e

Showing 36 changed files with 181 additions and 181 deletions.
11 binary files not shown.
Binary file modified dev/_images/sphx_glr_plot_SFNO_swe_001.png
Binary file modified dev/_images/sphx_glr_plot_SFNO_swe_thumb.png
Binary file modified dev/_images/sphx_glr_plot_UNO_darcy_001.png
Binary file modified dev/_images/sphx_glr_plot_UNO_darcy_thumb.png
4 changes: 2 additions & 2 deletions dev/_sources/auto_examples/plot_DISCO_convolutions.rst.txt
@@ -357,7 +357,7 @@ in order to compute the convolved image, we need to first bring it into the right
.. code-block:: none
-<matplotlib.colorbar.Colorbar object at 0x7f052afd4880>
+<matplotlib.colorbar.Colorbar object at 0x7fc32d0138b0>
@@ -448,7 +448,7 @@ in order to compute the convolved image, we need to first bring it into the right
.. rst-class:: sphx-glr-timing

-**Total running time of the script:** (0 minutes 37.420 seconds)
+**Total running time of the script:** (0 minutes 36.824 seconds)


.. _sphx_glr_download_auto_examples_plot_DISCO_convolutions.py:
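The hunk context above comes from the DISCO convolutions example, which notes that the image must first be brought into the right shape before the convolved image can be computed. As a hypothetical sketch of that step for a PyTorch convolution-style operator (the example's actual tensor names and sizes may differ):

.. code-block:: python

    import torch

    # Illustrative only: conv-style operators expect (batch, channels, height, width),
    # so an H x W x C image is permuted and given a leading batch dimension.
    img = torch.rand(120, 90, 3)            # H x W x C image
    x = img.permute(2, 0, 1).unsqueeze(0)   # -> shape (1, 3, 120, 90)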
24 changes: 12 additions & 12 deletions dev/_sources/auto_examples/plot_FNO_darcy.rst.txt
@@ -248,13 +248,13 @@ Training the model
)
### SCHEDULER ###
-<torch.optim.lr_scheduler.CosineAnnealingLR object at 0x7f0560afcee0>
+<torch.optim.lr_scheduler.CosineAnnealingLR object at 0x7fc348be4d90>
### LOSSES ###
-* Train: <neuralop.losses.data_losses.H1Loss object at 0x7f0546730640>
+* Train: <neuralop.losses.data_losses.H1Loss object at 0x7fc348be4f40>
-* Test: {'h1': <neuralop.losses.data_losses.H1Loss object at 0x7f0546730640>, 'l2': <neuralop.losses.data_losses.LpLoss object at 0x7f05606759a0>}
+* Test: {'h1': <neuralop.losses.data_losses.H1Loss object at 0x7fc348be4f40>, 'l2': <neuralop.losses.data_losses.LpLoss object at 0x7fc348c035b0>}
@@ -311,22 +311,22 @@ Then train the model on our small Darcy-Flow dataset:
Training on 1000 samples
Testing on [50, 50] samples on resolutions [16, 32].
Raw outputs of shape torch.Size([32, 1, 16, 16])
-[0] time=2.60, avg_loss=0.6956, train_err=21.7383
+[0] time=2.55, avg_loss=0.6956, train_err=21.7383
Eval: 16_h1=0.4298, 16_l2=0.3487, 32_h1=0.5847, 32_l2=0.3542
-[3] time=2.57, avg_loss=0.2103, train_err=6.5705
+[3] time=2.52, avg_loss=0.2103, train_err=6.5705
Eval: 16_h1=0.2030, 16_l2=0.1384, 32_h1=0.5075, 32_l2=0.1774
-[6] time=2.57, avg_loss=0.1911, train_err=5.9721
+[6] time=2.54, avg_loss=0.1911, train_err=5.9721
Eval: 16_h1=0.2099, 16_l2=0.1374, 32_h1=0.4907, 32_l2=0.1783
-[9] time=2.57, avg_loss=0.1410, train_err=4.4073
+[9] time=2.53, avg_loss=0.1410, train_err=4.4073
Eval: 16_h1=0.2052, 16_l2=0.1201, 32_h1=0.5268, 32_l2=0.1615
-[12] time=2.58, avg_loss=0.1422, train_err=4.4434
+[12] time=2.53, avg_loss=0.1422, train_err=4.4434
Eval: 16_h1=0.2131, 16_l2=0.1285, 32_h1=0.5413, 32_l2=0.1741
-[15] time=2.58, avg_loss=0.1198, train_err=3.7424
+[15] time=2.52, avg_loss=0.1198, train_err=3.7424
Eval: 16_h1=0.1984, 16_l2=0.1137, 32_h1=0.5255, 32_l2=0.1569
-[18] time=2.56, avg_loss=0.1104, train_err=3.4502
+[18] time=2.54, avg_loss=0.1104, train_err=3.4502
Eval: 16_h1=0.2039, 16_l2=0.1195, 32_h1=0.5062, 32_l2=0.1603
-{'train_err': 2.9605126455426216, 'avg_loss': 0.0947364046573639, 'avg_lasso_loss': None, 'epoch_train_time': 2.5626969950000102}
+{'train_err': 2.9605126455426216, 'avg_loss': 0.0947364046573639, 'avg_lasso_loss': None, 'epoch_train_time': 2.5328369260000727}
@@ -476,7 +476,7 @@ are other ways to scale the outputs of the FNO to train a true super-resolution

.. rst-class:: sphx-glr-timing

-**Total running time of the script:** (0 minutes 52.899 seconds)
+**Total running time of the script:** (0 minutes 51.949 seconds)


.. _sphx_glr_download_auto_examples_plot_FNO_darcy.py:
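Only the object addresses and run-to-run numbers change in the hunks above; the configuration printed under ### SCHEDULER ### and ### LOSSES ### is the same. A minimal sketch of how such a setup is typically constructed (the import path follows the module names in the printout, but the hyperparameters and the stand-in model are assumptions, not the example's exact values):

.. code-block:: python

    import torch
    from neuralop.losses.data_losses import H1Loss, LpLoss

    model = torch.nn.Linear(16, 16)  # stand-in for the FNO model
    optimizer = torch.optim.Adam(model.parameters(), lr=8e-3, weight_decay=1e-4)
    scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=30)

    train_loss = H1Loss(d=2)                                  # H1 loss on 2D fields
    eval_losses = {"h1": train_loss, "l2": LpLoss(d=2, p=2)}  # as in the printout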
38 changes: 19 additions & 19 deletions dev/_sources/auto_examples/plot_SFNO_swe.rst.txt
@@ -234,13 +234,13 @@ Creating the losses
)
### SCHEDULER ###
-<torch.optim.lr_scheduler.CosineAnnealingLR object at 0x7f05467c8520>
+<torch.optim.lr_scheduler.CosineAnnealingLR object at 0x7fc348b03700>
### LOSSES ###
-* Train: <neuralop.losses.data_losses.LpLoss object at 0x7f05467c8c10>
+* Train: <neuralop.losses.data_losses.LpLoss object at 0x7fc348b035e0>
-* Test: {'l2': <neuralop.losses.data_losses.LpLoss object at 0x7f05467c8c10>}
+* Test: {'l2': <neuralop.losses.data_losses.LpLoss object at 0x7fc348b035e0>}
@@ -297,22 +297,22 @@ Train the model on the spherical SWE dataset
Training on 200 samples
Testing on [50, 50] samples on resolutions [(32, 64), (64, 128)].
Raw outputs of shape torch.Size([4, 3, 32, 64])
-[0] time=3.55, avg_loss=2.5143, train_err=10.0572
-Eval: (32, 64)_l2=1.9155, (64, 128)_l2=2.4213
-[3] time=3.55, avg_loss=0.4193, train_err=1.6771
-Eval: (32, 64)_l2=0.4481, (64, 128)_l2=2.5097
-[6] time=3.49, avg_loss=0.2898, train_err=1.1590
-Eval: (32, 64)_l2=0.3605, (64, 128)_l2=2.4361
-[9] time=3.53, avg_loss=0.2269, train_err=0.9075
-Eval: (32, 64)_l2=0.3382, (64, 128)_l2=2.4815
-[12] time=3.52, avg_loss=0.1930, train_err=0.7721
-Eval: (32, 64)_l2=0.3280, (64, 128)_l2=2.4038
-[15] time=3.54, avg_loss=0.1609, train_err=0.6438
-Eval: (32, 64)_l2=0.3029, (64, 128)_l2=2.4056
-[18] time=3.57, avg_loss=0.1422, train_err=0.5687
-Eval: (32, 64)_l2=0.2940, (64, 128)_l2=2.3771
+[0] time=3.52, avg_loss=2.5951, train_err=10.3803
+Eval: (32, 64)_l2=1.7616, (64, 128)_l2=2.4268
+[3] time=3.52, avg_loss=0.4486, train_err=1.7945
+Eval: (32, 64)_l2=0.4180, (64, 128)_l2=2.3742
+[6] time=3.47, avg_loss=0.2711, train_err=1.0843
+Eval: (32, 64)_l2=0.3335, (64, 128)_l2=2.3245
+[9] time=3.48, avg_loss=0.2257, train_err=0.9030
+Eval: (32, 64)_l2=0.3089, (64, 128)_l2=2.3506
+[12] time=3.47, avg_loss=0.1931, train_err=0.7726
+Eval: (32, 64)_l2=0.2473, (64, 128)_l2=2.3245
+[15] time=3.46, avg_loss=0.1670, train_err=0.6680
+Eval: (32, 64)_l2=0.2365, (64, 128)_l2=2.3338
+[18] time=3.46, avg_loss=0.1504, train_err=0.6015
+Eval: (32, 64)_l2=0.2184, (64, 128)_l2=2.3164
-{'train_err': 0.5653003603219986, 'avg_loss': 0.14132509008049965, 'avg_lasso_loss': None, 'epoch_train_time': 3.540815240000029}
+{'train_err': 0.5853043901920318, 'avg_loss': 0.14632609754800796, 'avg_lasso_loss': None, 'epoch_train_time': 3.4730680470000834}
@@ -383,7 +383,7 @@ In practice we would train a Neural Operator on one or multiple GPUs

.. rst-class:: sphx-glr-timing

-**Total running time of the script:** (1 minutes 26.316 seconds)
+**Total running time of the script:** (1 minutes 25.227 seconds)


.. _sphx_glr_download_auto_examples_plot_SFNO_swe.py:
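The SFNO log above evaluates the same weights at the training resolution (32, 64) and, zero-shot, at the higher resolution (64, 128); the persistently high (64, 128) error in both runs shows this short training run does not yet generalize across grids. A rough sketch of such a multi-resolution evaluation (the model class is from neuralop, but the constructor arguments here are assumptions; check your neuralop version):

.. code-block:: python

    import torch
    from neuralop.models import SFNO  # import path assumed

    # Operators parameterized in the frequency domain accept either grid
    # without retraining, since weights are tied to modes, not pixels.
    model = SFNO(n_modes=(16, 16), in_channels=3, out_channels=3, hidden_channels=32)
    low = model(torch.randn(4, 3, 32, 64))     # training resolution
    high = model(torch.randn(4, 3, 64, 128))   # zero-shot higher resolution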
38 changes: 19 additions & 19 deletions dev/_sources/auto_examples/plot_UNO_darcy.rst.txt
@@ -345,13 +345,13 @@ Creating the losses
)
### SCHEDULER ###
-<torch.optim.lr_scheduler.CosineAnnealingLR object at 0x7f0546796370>
+<torch.optim.lr_scheduler.CosineAnnealingLR object at 0x7fc348ad6460>
### LOSSES ###
-* Train: <neuralop.losses.data_losses.H1Loss object at 0x7f0546796040>
+* Train: <neuralop.losses.data_losses.H1Loss object at 0x7fc348ad6820>
-* Test: {'h1': <neuralop.losses.data_losses.H1Loss object at 0x7f0546796040>, 'l2': <neuralop.losses.data_losses.LpLoss object at 0x7f05467962e0>}
+* Test: {'h1': <neuralop.losses.data_losses.H1Loss object at 0x7fc348ad6820>, 'l2': <neuralop.losses.data_losses.LpLoss object at 0x7fc348ad6be0>}
@@ -410,22 +410,22 @@ Actually train the model on our small Darcy-Flow dataset
Training on 1000 samples
Testing on [50, 50] samples on resolutions [16, 32].
Raw outputs of shape torch.Size([32, 1, 16, 16])
-[0] time=10.26, avg_loss=0.7033, train_err=21.9780
-Eval: 16_h1=0.4504, 16_l2=0.2891, 32_h1=0.8744, 32_l2=0.5683
-[3] time=10.22, avg_loss=0.2421, train_err=7.5659
-Eval: 16_h1=0.2374, 16_l2=0.1541, 32_h1=0.7924, 32_l2=0.5027
-[6] time=10.14, avg_loss=0.2417, train_err=7.5527
-Eval: 16_h1=0.2790, 16_l2=0.1725, 32_h1=0.8010, 32_l2=0.4774
-[9] time=10.14, avg_loss=0.2402, train_err=7.5063
-Eval: 16_h1=0.2619, 16_l2=0.1642, 32_h1=0.7790, 32_l2=0.4700
-[12] time=10.11, avg_loss=0.2047, train_err=6.3980
-Eval: 16_h1=0.2947, 16_l2=0.1913, 32_h1=0.7847, 32_l2=0.4989
-[15] time=10.14, avg_loss=0.1683, train_err=5.2603
-Eval: 16_h1=0.2925, 16_l2=0.1709, 32_h1=0.7781, 32_l2=0.4702
-[18] time=10.13, avg_loss=0.1472, train_err=4.5987
-Eval: 16_h1=0.2312, 16_l2=0.1432, 32_h1=0.7338, 32_l2=0.4617
+[0] time=10.09, avg_loss=0.6742, train_err=21.0681
+Eval: 16_h1=0.3964, 16_l2=0.2586, 32_h1=0.9592, 32_l2=0.6138
+[3] time=9.98, avg_loss=0.2421, train_err=7.5670
+Eval: 16_h1=0.2568, 16_l2=0.1667, 32_h1=0.8420, 32_l2=0.5287
+[6] time=10.09, avg_loss=0.2110, train_err=6.5937
+Eval: 16_h1=0.2970, 16_l2=0.1961, 32_h1=0.8152, 32_l2=0.5072
+[9] time=10.02, avg_loss=0.1948, train_err=6.0878
+Eval: 16_h1=0.2791, 16_l2=0.1853, 32_h1=0.8134, 32_l2=0.4969
+[12] time=10.03, avg_loss=0.2015, train_err=6.2963
+Eval: 16_h1=0.2579, 16_l2=0.1594, 32_h1=0.8114, 32_l2=0.4822
+[15] time=10.01, avg_loss=0.1410, train_err=4.4049
+Eval: 16_h1=0.2554, 16_l2=0.1777, 32_h1=0.8068, 32_l2=0.5007
+[18] time=10.00, avg_loss=0.1490, train_err=4.6550
+Eval: 16_h1=0.2411, 16_l2=0.1463, 32_h1=0.7796, 32_l2=0.4553
-{'train_err': 3.8512582033872604, 'avg_loss': 0.12324026250839233, 'avg_lasso_loss': None, 'epoch_train_time': 10.164662551999982}
+{'train_err': 4.482214251533151, 'avg_loss': 0.14343085604906083, 'avg_lasso_loss': None, 'epoch_train_time': 9.988673681000023}
@@ -499,7 +499,7 @@ In practice we would train a Neural Operator on one or multiple GPUs

.. rst-class:: sphx-glr-timing

-**Total running time of the script:** (3 minutes 26.647 seconds)
+**Total running time of the script:** (3 minutes 23.629 seconds)


.. _sphx_glr_download_auto_examples_plot_UNO_darcy.py:
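A consistency check on the logged quantities, which holds for the numbers on both sides of this diff: the raw output shape torch.Size([32, 1, 16, 16]) implies a batch size of 32, so the 1000 training samples span 1000 / 32 = 31.25 batch steps per epoch, and each train_err equals avg_loss times that factor (for example, 0.6742 × 31.25 ≈ 21.07). train_err therefore appears to be the accumulated per-batch loss, with avg_loss its per-batch average.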
4 changes: 2 additions & 2 deletions dev/_sources/auto_examples/plot_count_flops.rst.txt
@@ -80,7 +80,7 @@ This output is organized as a defaultdict object that counts the FLOPS used in each

.. code-block:: none
-defaultdict(<function FlopTensorDispatchMode.__init__.<locals>.<lambda> at 0x7f055d093ca0>, {'': defaultdict(<class 'int'>, {'convolution.default': 2982150144, 'bmm.default': 138412032}), 'lifting': defaultdict(<class 'int'>, {'convolution.default': 562036736}), 'lifting.fcs.0': defaultdict(<class 'int'>, {'convolution.default': 25165824}), 'lifting.fcs.1': defaultdict(<class 'int'>, {'convolution.default': 536870912}), 'fno_blocks': defaultdict(<class 'int'>, {'convolution.default': 2147483648, 'bmm.default': 138412032}), 'fno_blocks.fno_skips.0': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.fno_skips.0.conv': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.convs.0': defaultdict(<class 'int'>, {'bmm.default': 34603008}), 'fno_blocks.channel_mlp.0': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.channel_mlp.0.fcs.0': defaultdict(<class 'int'>, {'convolution.default': 134217728}), 'fno_blocks.channel_mlp.0.fcs.1': defaultdict(<class 'int'>, {'convolution.default': 134217728}), 'fno_blocks.fno_skips.1': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.fno_skips.1.conv': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.convs.1': defaultdict(<class 'int'>, {'bmm.default': 34603008}), 'fno_blocks.channel_mlp.1': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.channel_mlp.1.fcs.0': defaultdict(<class 'int'>, {'convolution.default': 134217728}), 'fno_blocks.channel_mlp.1.fcs.1': defaultdict(<class 'int'>, {'convolution.default': 134217728}), 'fno_blocks.fno_skips.2': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.fno_skips.2.conv': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.convs.2': defaultdict(<class 'int'>, {'bmm.default': 34603008}), 'fno_blocks.channel_mlp.2': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.channel_mlp.2.fcs.0': defaultdict(<class 'int'>, {'convolution.default': 134217728}), 'fno_blocks.channel_mlp.2.fcs.1': defaultdict(<class 'int'>, {'convolution.default': 134217728}), 'fno_blocks.fno_skips.3': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.fno_skips.3.conv': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.convs.3': defaultdict(<class 'int'>, {'bmm.default': 34603008}), 'fno_blocks.channel_mlp.3': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.channel_mlp.3.fcs.0': defaultdict(<class 'int'>, {'convolution.default': 134217728}), 'fno_blocks.channel_mlp.3.fcs.1': defaultdict(<class 'int'>, {'convolution.default': 134217728}), 'projection': defaultdict(<class 'int'>, {'convolution.default': 272629760}), 'projection.fcs.0': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'projection.fcs.1': defaultdict(<class 'int'>, {'convolution.default': 4194304})})
+defaultdict(<function FlopTensorDispatchMode.__init__.<locals>.<lambda> at 0x7fc35f094ca0>, {'': defaultdict(<class 'int'>, {'convolution.default': 2982150144, 'bmm.default': 138412032}), 'lifting': defaultdict(<class 'int'>, {'convolution.default': 562036736}), 'lifting.fcs.0': defaultdict(<class 'int'>, {'convolution.default': 25165824}), 'lifting.fcs.1': defaultdict(<class 'int'>, {'convolution.default': 536870912}), 'fno_blocks': defaultdict(<class 'int'>, {'convolution.default': 2147483648, 'bmm.default': 138412032}), 'fno_blocks.fno_skips.0': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.fno_skips.0.conv': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.convs.0': defaultdict(<class 'int'>, {'bmm.default': 34603008}), 'fno_blocks.channel_mlp.0': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.channel_mlp.0.fcs.0': defaultdict(<class 'int'>, {'convolution.default': 134217728}), 'fno_blocks.channel_mlp.0.fcs.1': defaultdict(<class 'int'>, {'convolution.default': 134217728}), 'fno_blocks.fno_skips.1': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.fno_skips.1.conv': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.convs.1': defaultdict(<class 'int'>, {'bmm.default': 34603008}), 'fno_blocks.channel_mlp.1': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.channel_mlp.1.fcs.0': defaultdict(<class 'int'>, {'convolution.default': 134217728}), 'fno_blocks.channel_mlp.1.fcs.1': defaultdict(<class 'int'>, {'convolution.default': 134217728}), 'fno_blocks.fno_skips.2': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.fno_skips.2.conv': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.convs.2': defaultdict(<class 'int'>, {'bmm.default': 34603008}), 'fno_blocks.channel_mlp.2': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.channel_mlp.2.fcs.0': defaultdict(<class 'int'>, {'convolution.default': 134217728}), 'fno_blocks.channel_mlp.2.fcs.1': defaultdict(<class 'int'>, {'convolution.default': 134217728}), 'fno_blocks.fno_skips.3': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.fno_skips.3.conv': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.convs.3': defaultdict(<class 'int'>, {'bmm.default': 34603008}), 'fno_blocks.channel_mlp.3': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.channel_mlp.3.fcs.0': defaultdict(<class 'int'>, {'convolution.default': 134217728}), 'fno_blocks.channel_mlp.3.fcs.1': defaultdict(<class 'int'>, {'convolution.default': 134217728}), 'projection': defaultdict(<class 'int'>, {'convolution.default': 272629760}), 'projection.fcs.0': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'projection.fcs.1': defaultdict(<class 'int'>, {'convolution.default': 4194304})})
@@ -125,7 +125,7 @@ To check the maximum FLOPS used during the forward pass, let's create a recursive
.. rst-class:: sphx-glr-timing

-**Total running time of the script:** (0 minutes 3.530 seconds)
+**Total running time of the script:** (0 minutes 3.946 seconds)


.. _sphx_glr_download_auto_examples_plot_count_flops.py:
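The truncated hunk context above ("let's create a recursive...") refers to walking the nested defaultdict shown in this diff. A sketch of such a traversal, assuming only the {module: {op: count}} structure visible in the printout (not necessarily the example's own function):

.. code-block:: python

    def max_flops(counts):
        # Recurse through nested dicts of op -> FLOP count, returning the largest leaf.
        best = 0
        for value in counts.values():
            if isinstance(value, dict):
                best = max(best, max_flops(value))
            else:
                best = max(best, value)
        return best

On the printout above this returns 2982150144, the root-level 'convolution.default' total.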
2 changes: 1 addition & 1 deletion dev/_sources/auto_examples/plot_darcy_flow.rst.txt
@@ -163,7 +163,7 @@ Visualizing the data
.. rst-class:: sphx-glr-timing

-**Total running time of the script:** (0 minutes 0.390 seconds)
+**Total running time of the script:** (0 minutes 0.408 seconds)


.. _sphx_glr_download_auto_examples_plot_darcy_flow.py:
dev/_sources/auto_examples/plot_darcy_flow_spectrum.rst.txt
@@ -219,7 +219,7 @@ Loading the Navier-Stokes dataset in 128x128 resolution
.. rst-class:: sphx-glr-timing

-**Total running time of the script:** (0 minutes 0.278 seconds)
+**Total running time of the script:** (0 minutes 0.276 seconds)


.. _sphx_glr_download_auto_examples_plot_darcy_flow_spectrum.py:

0 comments on commit 8be73b1
