Commit

Github action: auto-update.
github-actions[bot] committed Jan 14, 2025
1 parent 7616357 commit cd17408
Showing 38 changed files with 179 additions and 179 deletions.
Binary file not shown.
Binary file modified dev/_images/sphx_glr_plot_SFNO_swe_001.png
Binary file modified dev/_images/sphx_glr_plot_SFNO_swe_thumb.png
Binary file modified dev/_images/sphx_glr_plot_UNO_darcy_001.png
Binary file modified dev/_images/sphx_glr_plot_UNO_darcy_thumb.png
Binary file modified dev/_images/sphx_glr_plot_darcy_flow_spectrum_001.png
Binary file modified dev/_images/sphx_glr_plot_darcy_flow_spectrum_thumb.png
4 changes: 2 additions & 2 deletions dev/_sources/auto_examples/plot_DISCO_convolutions.rst.txt
@@ -357,7 +357,7 @@ in order to compute the convolved image, we need to first bring it into the righ
.. code-block:: none
- <matplotlib.colorbar.Colorbar object at 0x7f44118dad50>
+ <matplotlib.colorbar.Colorbar object at 0x7f5bc149ed50>
@@ -448,7 +448,7 @@ in order to compute the convolved image, we need to first bring it into the righ
.. rst-class:: sphx-glr-timing

- **Total running time of the script:** (0 minutes 33.625 seconds)
+ **Total running time of the script:** (0 minutes 33.080 seconds)


.. _sphx_glr_download_auto_examples_plot_DISCO_convolutions.py:
22 changes: 11 additions & 11 deletions dev/_sources/auto_examples/plot_FNO_darcy.rst.txt
@@ -248,13 +248,13 @@ Training the model
)
### SCHEDULER ###
- <torch.optim.lr_scheduler.CosineAnnealingLR object at 0x7f44176c56d0>
+ <torch.optim.lr_scheduler.CosineAnnealingLR object at 0x7f5bbea88910>
### LOSSES ###
- * Train: <neuralop.losses.data_losses.H1Loss object at 0x7f44176c6490>
+ * Train: <neuralop.losses.data_losses.H1Loss object at 0x7f5bbea8a0d0>
- * Test: {'h1': <neuralop.losses.data_losses.H1Loss object at 0x7f44176c6490>, 'l2': <neuralop.losses.data_losses.LpLoss object at 0x7f44176c7c50>}
+ * Test: {'h1': <neuralop.losses.data_losses.H1Loss object at 0x7f5bbea8a0d0>, 'l2': <neuralop.losses.data_losses.LpLoss object at 0x7f5bbea88690>}
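The scheduler whose repr changes in this hunk is PyTorch's `CosineAnnealingLR`. As a rough pure-Python sketch of the schedule it implements (the closed-form from the PyTorch docs, with a made-up base learning rate — the example's actual hyperparameters are set elsewhere in the script):

```python
import math

def cosine_annealing_lr(base_lr, t, t_max, eta_min=0.0):
    # eta_t = eta_min + (base_lr - eta_min) * (1 + cos(pi * t / t_max)) / 2
    return eta_min + (base_lr - eta_min) * (1 + math.cos(math.pi * t / t_max)) / 2

# Hypothetical base LR of 8e-3 annealed over 30 steps.
lrs = [cosine_annealing_lr(8e-3, t, t_max=30) for t in range(31)]
print(lrs[0], lrs[15], lrs[30])  # starts at 8e-3, halves at t_max/2, reaches ~0
```

The schedule decays monotonically from `base_lr` to `eta_min` over `t_max` steps, which is why it pairs well with a fixed training budget like the 20-epoch runs in these examples.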
@@ -311,22 +311,22 @@ Then train the model on our small Darcy-Flow dataset:
Training on 1000 samples
Testing on [50, 50] samples on resolutions [16, 32].
Raw outputs of shape torch.Size([32, 1, 16, 16])
- [0] time=2.48, avg_loss=0.6956, train_err=21.7383
+ [0] time=2.44, avg_loss=0.6956, train_err=21.7383
Eval: 16_h1=0.4298, 16_l2=0.3487, 32_h1=0.5847, 32_l2=0.3542
- [3] time=2.49, avg_loss=0.2103, train_err=6.5705
+ [3] time=2.44, avg_loss=0.2103, train_err=6.5705
Eval: 16_h1=0.2030, 16_l2=0.1384, 32_h1=0.5075, 32_l2=0.1774
- [6] time=2.48, avg_loss=0.1911, train_err=5.9721
+ [6] time=2.44, avg_loss=0.1911, train_err=5.9721
Eval: 16_h1=0.2099, 16_l2=0.1374, 32_h1=0.4907, 32_l2=0.1783
- [9] time=2.48, avg_loss=0.1410, train_err=4.4073
+ [9] time=2.45, avg_loss=0.1410, train_err=4.4073
Eval: 16_h1=0.2052, 16_l2=0.1201, 32_h1=0.5268, 32_l2=0.1615
[12] time=2.47, avg_loss=0.1422, train_err=4.4434
Eval: 16_h1=0.2131, 16_l2=0.1285, 32_h1=0.5413, 32_l2=0.1741
- [15] time=2.54, avg_loss=0.1198, train_err=3.7424
+ [15] time=2.45, avg_loss=0.1198, train_err=3.7424
Eval: 16_h1=0.1984, 16_l2=0.1137, 32_h1=0.5255, 32_l2=0.1569
- [18] time=2.51, avg_loss=0.1104, train_err=3.4502
+ [18] time=2.47, avg_loss=0.1104, train_err=3.4502
Eval: 16_h1=0.2039, 16_l2=0.1195, 32_h1=0.5062, 32_l2=0.1603
- {'train_err': 2.9605126455426216, 'avg_loss': 0.0947364046573639, 'avg_lasso_loss': None, 'epoch_train_time': 2.4854582740000524}
+ {'train_err': 2.9605126455426216, 'avg_loss': 0.0947364046573639, 'avg_lasso_loss': None, 'epoch_train_time': 2.4835143990000006}
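The epoch lines the trainer prints follow a fixed format, which is what makes diffs like this one easy to compare run-to-run. A small hypothetical helper (not part of neuralop) for pulling the numbers out of such logs:

```python
import re

EPOCH_RE = re.compile(
    r"\[(?P<epoch>\d+)\] time=(?P<time>[\d.]+), "
    r"avg_loss=(?P<avg_loss>[\d.]+), train_err=(?P<train_err>[\d.]+)"
)

def parse_epoch_line(line):
    """Parse one '[N] time=..., avg_loss=..., train_err=...' line into a dict."""
    m = EPOCH_RE.match(line.strip())
    if m is None:  # e.g. an 'Eval: ...' line
        return None
    d = m.groupdict()
    return {"epoch": int(d["epoch"]), "time": float(d["time"]),
            "avg_loss": float(d["avg_loss"]), "train_err": float(d["train_err"])}

rec = parse_epoch_line("[0] time=2.44, avg_loss=0.6956, train_err=21.7383")
print(rec)
```

Collecting these dicts per run would let you plot the two loss curves in this diff side by side instead of eyeballing the changed lines.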
@@ -476,7 +476,7 @@ are other ways to scale the outputs of the FNO to train a true super-resolution

.. rst-class:: sphx-glr-timing

- **Total running time of the script:** (0 minutes 50.864 seconds)
+ **Total running time of the script:** (0 minutes 50.349 seconds)


.. _sphx_glr_download_auto_examples_plot_FNO_darcy.py:
38 changes: 19 additions & 19 deletions dev/_sources/auto_examples/plot_SFNO_swe.rst.txt
@@ -234,13 +234,13 @@ Creating the losses
)
### SCHEDULER ###
- <torch.optim.lr_scheduler.CosineAnnealingLR object at 0x7f4424acd550>
+ <torch.optim.lr_scheduler.CosineAnnealingLR object at 0x7f5bcfdd1550>
### LOSSES ###
- * Train: <neuralop.losses.data_losses.LpLoss object at 0x7f4424acef90>
+ * Train: <neuralop.losses.data_losses.LpLoss object at 0x7f5bcfdd2f90>
- * Test: {'l2': <neuralop.losses.data_losses.LpLoss object at 0x7f4424acef90>}
+ * Test: {'l2': <neuralop.losses.data_losses.LpLoss object at 0x7f5bcfdd2f90>}
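The `LpLoss` used here is, as I understand the neuralop loss module, a *relative* Lp error; the `_l2` numbers in the training log below are of this kind. A minimal pure-Python sketch of the p=2 case on flat lists (the real class operates on batched tensors and supports quadrature weighting, so treat this only as the underlying formula):

```python
import math

def relative_l2(pred, true):
    """Relative L2 error: ||pred - true||_2 / ||true||_2."""
    diff = math.sqrt(sum((p - t) ** 2 for p, t in zip(pred, true)))
    norm = math.sqrt(sum(t ** 2 for t in true))
    return diff / norm

err = relative_l2([1.0, 2.0], [1.0, 1.0])  # sqrt(1) / sqrt(2)
print(err)
```

Because the error is normalized by the target's norm, a value like `(32, 64)_l2=0.20` reads directly as roughly 20% relative error, independent of the field's physical scale.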
@@ -297,22 +297,22 @@ Train the model on the spherical SWE dataset
Training on 200 samples
Testing on [50, 50] samples on resolutions [(32, 64), (64, 128)].
Raw outputs of shape torch.Size([4, 3, 32, 64])
- [0] time=3.56, avg_loss=2.6878, train_err=10.7511
- Eval: (32, 64)_l2=2.0013, (64, 128)_l2=2.4464
- [3] time=3.57, avg_loss=0.4203, train_err=1.6812
- Eval: (32, 64)_l2=0.4162, (64, 128)_l2=2.7454
- [6] time=3.50, avg_loss=0.2803, train_err=1.1213
- Eval: (32, 64)_l2=0.3612, (64, 128)_l2=2.6383
- [9] time=3.53, avg_loss=0.2307, train_err=0.9230
- Eval: (32, 64)_l2=0.3089, (64, 128)_l2=2.6235
- [12] time=3.52, avg_loss=0.2006, train_err=0.8024
- Eval: (32, 64)_l2=0.2687, (64, 128)_l2=2.5915
- [15] time=3.47, avg_loss=0.1704, train_err=0.6818
- Eval: (32, 64)_l2=0.2401, (64, 128)_l2=2.5568
- [18] time=3.53, avg_loss=0.1463, train_err=0.5853
- Eval: (32, 64)_l2=0.2272, (64, 128)_l2=2.5580
+ [0] time=3.43, avg_loss=2.6140, train_err=10.4559
+ Eval: (32, 64)_l2=1.5768, (64, 128)_l2=2.4202
+ [3] time=3.45, avg_loss=0.3724, train_err=1.4896
+ Eval: (32, 64)_l2=0.3977, (64, 128)_l2=2.4189
+ [6] time=3.38, avg_loss=0.2915, train_err=1.1659
+ Eval: (32, 64)_l2=0.3378, (64, 128)_l2=2.3679
+ [9] time=3.39, avg_loss=0.2507, train_err=1.0026
+ Eval: (32, 64)_l2=0.2562, (64, 128)_l2=2.3166
+ [12] time=3.38, avg_loss=0.2074, train_err=0.8296
+ Eval: (32, 64)_l2=0.2287, (64, 128)_l2=2.3166
+ [15] time=3.38, avg_loss=0.1902, train_err=0.7608
+ Eval: (32, 64)_l2=0.1995, (64, 128)_l2=2.2783
+ [18] time=3.36, avg_loss=0.1740, train_err=0.6961
+ Eval: (32, 64)_l2=0.2096, (64, 128)_l2=2.2698
- {'train_err': 0.55643718957901, 'avg_loss': 0.1391092973947525, 'avg_lasso_loss': None, 'epoch_train_time': 3.4575020479999807}
+ {'train_err': 0.6447638177871704, 'avg_loss': 0.1611909544467926, 'avg_lasso_loss': None, 'epoch_train_time': 3.366909286000009}
@@ -383,7 +383,7 @@ In practice we would train a Neural Operator on one or multiple GPUs

.. rst-class:: sphx-glr-timing

- **Total running time of the script:** (1 minutes 25.958 seconds)
+ **Total running time of the script:** (1 minutes 23.215 seconds)


.. _sphx_glr_download_auto_examples_plot_SFNO_swe.py:
38 changes: 19 additions & 19 deletions dev/_sources/auto_examples/plot_UNO_darcy.rst.txt
@@ -345,13 +345,13 @@ Creating the losses
)
### SCHEDULER ###
- <torch.optim.lr_scheduler.CosineAnnealingLR object at 0x7f441fd34410>
+ <torch.optim.lr_scheduler.CosineAnnealingLR object at 0x7f5bcfb34410>
### LOSSES ###
- * Train: <neuralop.losses.data_losses.H1Loss object at 0x7f4424bc2f90>
+ * Train: <neuralop.losses.data_losses.H1Loss object at 0x7f5bcfec6a50>
- * Test: {'h1': <neuralop.losses.data_losses.H1Loss object at 0x7f4424bc2f90>, 'l2': <neuralop.losses.data_losses.LpLoss object at 0x7f441fd351d0>}
+ * Test: {'h1': <neuralop.losses.data_losses.H1Loss object at 0x7f5bcfec6a50>, 'l2': <neuralop.losses.data_losses.LpLoss object at 0x7f5bcfb351d0>}
@@ -410,22 +410,22 @@ Actually train the model on our small Darcy-Flow dataset
Training on 1000 samples
Testing on [50, 50] samples on resolutions [16, 32].
Raw outputs of shape torch.Size([32, 1, 16, 16])
- [0] time=10.13, avg_loss=0.6277, train_err=19.6169
- Eval: 16_h1=0.3908, 16_l2=0.2531, 32_h1=0.8497, 32_l2=0.5628
- [3] time=10.01, avg_loss=0.2390, train_err=7.4677
- Eval: 16_h1=0.2661, 16_l2=0.1821, 32_h1=0.7915, 32_l2=0.5232
- [6] time=10.06, avg_loss=0.2445, train_err=7.6407
- Eval: 16_h1=0.2785, 16_l2=0.1834, 32_h1=0.7741, 32_l2=0.4839
- [9] time=9.96, avg_loss=0.2181, train_err=6.8151
- Eval: 16_h1=0.2662, 16_l2=0.1616, 32_h1=0.7805, 32_l2=0.4683
- [12] time=10.12, avg_loss=0.1823, train_err=5.6964
- Eval: 16_h1=0.3160, 16_l2=0.2223, 32_h1=0.7456, 32_l2=0.4743
- [15] time=9.97, avg_loss=0.1574, train_err=4.9193
- Eval: 16_h1=0.2520, 16_l2=0.1663, 32_h1=0.7680, 32_l2=0.4271
- [18] time=10.01, avg_loss=0.1625, train_err=5.0769
- Eval: 16_h1=0.2389, 16_l2=0.1537, 32_h1=0.7389, 32_l2=0.4555
+ [0] time=9.94, avg_loss=0.6462, train_err=20.1926
+ Eval: 16_h1=0.3830, 16_l2=0.2502, 32_h1=0.8540, 32_l2=0.6235
+ [3] time=9.82, avg_loss=0.2409, train_err=7.5296
+ Eval: 16_h1=0.2677, 16_l2=0.1713, 32_h1=0.8255, 32_l2=0.5751
+ [6] time=9.82, avg_loss=0.2110, train_err=6.5926
+ Eval: 16_h1=0.2473, 16_l2=0.1585, 32_h1=0.8126, 32_l2=0.5537
+ [9] time=9.85, avg_loss=0.2067, train_err=6.4586
+ Eval: 16_h1=0.3001, 16_l2=0.2035, 32_h1=0.8125, 32_l2=0.5389
+ [12] time=9.87, avg_loss=0.2028, train_err=6.3389
+ Eval: 16_h1=0.3074, 16_l2=0.1954, 32_h1=0.8299, 32_l2=0.4869
+ [15] time=9.84, avg_loss=0.1606, train_err=5.0182
+ Eval: 16_h1=0.2506, 16_l2=0.1702, 32_h1=0.7811, 32_l2=0.5287
+ [18] time=9.89, avg_loss=0.1676, train_err=5.2381
+ Eval: 16_h1=0.2629, 16_l2=0.1638, 32_h1=0.8032, 32_l2=0.4799
- {'train_err': 4.322231197729707, 'avg_loss': 0.13831139832735062, 'avg_lasso_loss': None, 'epoch_train_time': 10.025138579000043}
+ {'train_err': 4.385094230994582, 'avg_loss': 0.14032301539182662, 'avg_lasso_loss': None, 'epoch_train_time': 9.902361848000055}
@@ -499,7 +499,7 @@ In practice we would train a Neural Operator on one or multiple GPUs

.. rst-class:: sphx-glr-timing

- **Total running time of the script:** (3 minutes 23.580 seconds)
+ **Total running time of the script:** (3 minutes 20.840 seconds)


.. _sphx_glr_download_auto_examples_plot_UNO_darcy.py:
4 changes: 2 additions & 2 deletions dev/_sources/auto_examples/plot_count_flops.rst.txt
@@ -80,7 +80,7 @@ This output is organized as a defaultdict object that counts the FLOPS used in e

.. code-block:: none
defaultdict(<function FlopTensorDispatchMode.__init__.<locals>.<lambda> at 0x7f4424a76d40>, {'': defaultdict(<class 'int'>, {'convolution.default': 2982150144, 'bmm.default': 138412032}), 'lifting': defaultdict(<class 'int'>, {'convolution.default': 562036736}), 'lifting.fcs.0': defaultdict(<class 'int'>, {'convolution.default': 25165824}), 'lifting.fcs.1': defaultdict(<class 'int'>, {'convolution.default': 536870912}), 'fno_blocks': defaultdict(<class 'int'>, {'convolution.default': 2147483648, 'bmm.default': 138412032}), 'fno_blocks.fno_skips.0': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.fno_skips.0.conv': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.convs.0': defaultdict(<class 'int'>, {'bmm.default': 34603008}), 'fno_blocks.channel_mlp.0': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.channel_mlp.0.fcs.0': defaultdict(<class 'int'>, {'convolution.default': 134217728}), 'fno_blocks.channel_mlp.0.fcs.1': defaultdict(<class 'int'>, {'convolution.default': 134217728}), 'fno_blocks.fno_skips.1': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.fno_skips.1.conv': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.convs.1': defaultdict(<class 'int'>, {'bmm.default': 34603008}), 'fno_blocks.channel_mlp.1': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.channel_mlp.1.fcs.0': defaultdict(<class 'int'>, {'convolution.default': 134217728}), 'fno_blocks.channel_mlp.1.fcs.1': defaultdict(<class 'int'>, {'convolution.default': 134217728}), 'fno_blocks.fno_skips.2': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.fno_skips.2.conv': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.convs.2': defaultdict(<class 'int'>, {'bmm.default': 34603008}), 'fno_blocks.channel_mlp.2': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 
'fno_blocks.channel_mlp.2.fcs.0': defaultdict(<class 'int'>, {'convolution.default': 134217728}), 'fno_blocks.channel_mlp.2.fcs.1': defaultdict(<class 'int'>, {'convolution.default': 134217728}), 'fno_blocks.fno_skips.3': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.fno_skips.3.conv': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.convs.3': defaultdict(<class 'int'>, {'bmm.default': 34603008}), 'fno_blocks.channel_mlp.3': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.channel_mlp.3.fcs.0': defaultdict(<class 'int'>, {'convolution.default': 134217728}), 'fno_blocks.channel_mlp.3.fcs.1': defaultdict(<class 'int'>, {'convolution.default': 134217728}), 'projection': defaultdict(<class 'int'>, {'convolution.default': 272629760}), 'projection.fcs.0': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'projection.fcs.1': defaultdict(<class 'int'>, {'convolution.default': 4194304})})
defaultdict(<function FlopTensorDispatchMode.__init__.<locals>.<lambda> at 0x7f5bcfd7ad40>, {'': defaultdict(<class 'int'>, {'convolution.default': 2982150144, 'bmm.default': 138412032}), 'lifting': defaultdict(<class 'int'>, {'convolution.default': 562036736}), 'lifting.fcs.0': defaultdict(<class 'int'>, {'convolution.default': 25165824}), 'lifting.fcs.1': defaultdict(<class 'int'>, {'convolution.default': 536870912}), 'fno_blocks': defaultdict(<class 'int'>, {'convolution.default': 2147483648, 'bmm.default': 138412032}), 'fno_blocks.fno_skips.0': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.fno_skips.0.conv': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.convs.0': defaultdict(<class 'int'>, {'bmm.default': 34603008}), 'fno_blocks.channel_mlp.0': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.channel_mlp.0.fcs.0': defaultdict(<class 'int'>, {'convolution.default': 134217728}), 'fno_blocks.channel_mlp.0.fcs.1': defaultdict(<class 'int'>, {'convolution.default': 134217728}), 'fno_blocks.fno_skips.1': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.fno_skips.1.conv': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.convs.1': defaultdict(<class 'int'>, {'bmm.default': 34603008}), 'fno_blocks.channel_mlp.1': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.channel_mlp.1.fcs.0': defaultdict(<class 'int'>, {'convolution.default': 134217728}), 'fno_blocks.channel_mlp.1.fcs.1': defaultdict(<class 'int'>, {'convolution.default': 134217728}), 'fno_blocks.fno_skips.2': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.fno_skips.2.conv': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.convs.2': defaultdict(<class 'int'>, {'bmm.default': 34603008}), 'fno_blocks.channel_mlp.2': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 
'fno_blocks.channel_mlp.2.fcs.0': defaultdict(<class 'int'>, {'convolution.default': 134217728}), 'fno_blocks.channel_mlp.2.fcs.1': defaultdict(<class 'int'>, {'convolution.default': 134217728}), 'fno_blocks.fno_skips.3': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.fno_skips.3.conv': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.convs.3': defaultdict(<class 'int'>, {'bmm.default': 34603008}), 'fno_blocks.channel_mlp.3': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.channel_mlp.3.fcs.0': defaultdict(<class 'int'>, {'convolution.default': 134217728}), 'fno_blocks.channel_mlp.3.fcs.1': defaultdict(<class 'int'>, {'convolution.default': 134217728}), 'projection': defaultdict(<class 'int'>, {'convolution.default': 272629760}), 'projection.fcs.0': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'projection.fcs.1': defaultdict(<class 'int'>, {'convolution.default': 4194304})})
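The dump above maps module names to per-op FLOP counts, and child-module counts sum to their parent's. A small sketch of aggregating such a structure, using a hand-copied subset of the counts shown above (copied for illustration, not re-measured):

```python
# Subset of the per-module, per-op FLOP counts from the dump above.
flops = {
    "lifting": {"convolution.default": 562036736},
    "lifting.fcs.0": {"convolution.default": 25165824},
    "lifting.fcs.1": {"convolution.default": 536870912},
    "fno_blocks.convs.0": {"bmm.default": 34603008},
}

def total_flops(per_op):
    """Sum the op-level counts for one module."""
    return sum(per_op.values())

# The two lifting sub-layers account for exactly the parent's total.
parent = total_flops(flops["lifting"])
children = total_flops(flops["lifting.fcs.0"]) + total_flops(flops["lifting.fcs.1"])
print(parent, children)  # 562036736 562036736
```

The same summation over the `''` (root) entry gives the model-wide figure, which is how a headline "total FLOPs per forward pass" would be derived from this output.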
@@ -125,7 +125,7 @@ To check the maximum FLOPS used during the forward pass, let's create a recursiv
.. rst-class:: sphx-glr-timing

- **Total running time of the script:** (0 minutes 3.150 seconds)
+ **Total running time of the script:** (0 minutes 3.098 seconds)


.. _sphx_glr_download_auto_examples_plot_count_flops.py:
2 changes: 1 addition & 1 deletion dev/_sources/auto_examples/plot_darcy_flow.rst.txt
@@ -163,7 +163,7 @@ Visualizing the data
.. rst-class:: sphx-glr-timing

- **Total running time of the script:** (0 minutes 0.317 seconds)
+ **Total running time of the script:** (0 minutes 0.319 seconds)


.. _sphx_glr_download_auto_examples_plot_darcy_flow.py:
@@ -219,7 +219,7 @@ Loading the Navier-Stokes dataset in 128x128 resolution
.. rst-class:: sphx-glr-timing

- **Total running time of the script:** (0 minutes 0.186 seconds)
+ **Total running time of the script:** (0 minutes 0.164 seconds)


.. _sphx_glr_download_auto_examples_plot_darcy_flow_spectrum.py:
30 changes: 15 additions & 15 deletions dev/_sources/auto_examples/plot_incremental_FNO_darcy.rst.txt
@@ -240,15 +240,15 @@ Set up the losses
)
### SCHEDULER ###
- <torch.optim.lr_scheduler.CosineAnnealingLR object at 0x7f442548c2b0>
+ <torch.optim.lr_scheduler.CosineAnnealingLR object at 0x7f5bd617fce0>
### LOSSES ###
### INCREMENTAL RESOLUTION + GRADIENT EXPLAINED ###
- * Train: <neuralop.losses.data_losses.H1Loss object at 0x7f44176ea5d0>
+ * Train: <neuralop.losses.data_losses.H1Loss object at 0x7f5bbead2850>
- * Test: {'h1': <neuralop.losses.data_losses.H1Loss object at 0x7f44176ea5d0>, 'l2': <neuralop.losses.data_losses.LpLoss object at 0x7f4425d7bce0>}
+ * Test: {'h1': <neuralop.losses.data_losses.H1Loss object at 0x7f5bbead2850>, 'l2': <neuralop.losses.data_losses.LpLoss object at 0x7f5bd617ed70>}
@@ -333,7 +333,7 @@ Train the model
Eval: 16_h1=0.8702, 16_l2=0.5536, 32_h1=0.9397, 32_l2=0.5586
[1] time=0.21, avg_loss=0.7589, train_err=10.8413
Eval: 16_h1=0.7880, 16_l2=0.5167, 32_h1=0.9011, 32_l2=0.5234
- [2] time=0.21, avg_loss=0.6684, train_err=9.5484
+ [2] time=0.20, avg_loss=0.6684, train_err=9.5484
Eval: 16_h1=0.7698, 16_l2=0.4235, 32_h1=0.9175, 32_l2=0.4405
[3] time=0.21, avg_loss=0.6082, train_err=8.6886
Eval: 16_h1=0.7600, 16_l2=0.4570, 32_h1=1.0126, 32_l2=0.5005
@@ -354,26 +354,26 @@ Train the model
Incre Res Update: change res to 16
[10] time=0.27, avg_loss=0.5158, train_err=7.3681
Eval: 16_h1=0.5133, 16_l2=0.3167, 32_h1=0.6120, 32_l2=0.3022
- [11] time=0.26, avg_loss=0.4536, train_err=6.4795
+ [11] time=0.25, avg_loss=0.4536, train_err=6.4795
Eval: 16_h1=0.4680, 16_l2=0.3422, 32_h1=0.6436, 32_l2=0.3659
- [12] time=0.27, avg_loss=0.4155, train_err=5.9358
+ [12] time=0.26, avg_loss=0.4155, train_err=5.9358
Eval: 16_h1=0.4119, 16_l2=0.2692, 32_h1=0.5285, 32_l2=0.2692
- [13] time=0.27, avg_loss=0.3724, train_err=5.3195
+ [13] time=0.26, avg_loss=0.3724, train_err=5.3195
Eval: 16_h1=0.4045, 16_l2=0.2569, 32_h1=0.5620, 32_l2=0.2747
- [14] time=0.27, avg_loss=0.3555, train_err=5.0783
+ [14] time=0.26, avg_loss=0.3555, train_err=5.0783
Eval: 16_h1=0.3946, 16_l2=0.2485, 32_h1=0.5278, 32_l2=0.2523
- [15] time=0.27, avg_loss=0.3430, train_err=4.8998
+ [15] time=0.26, avg_loss=0.3430, train_err=4.8998
Eval: 16_h1=0.3611, 16_l2=0.2323, 32_h1=0.5079, 32_l2=0.2455
- [16] time=0.27, avg_loss=0.3251, train_err=4.6438
+ [16] time=0.26, avg_loss=0.3251, train_err=4.6438
Eval: 16_h1=0.3433, 16_l2=0.2224, 32_h1=0.4757, 32_l2=0.2351
- [17] time=0.27, avg_loss=0.3072, train_err=4.3888
+ [17] time=0.26, avg_loss=0.3072, train_err=4.3888
Eval: 16_h1=0.3458, 16_l2=0.2226, 32_h1=0.4776, 32_l2=0.2371
- [18] time=0.27, avg_loss=0.2982, train_err=4.2593
+ [18] time=0.26, avg_loss=0.2982, train_err=4.2593
Eval: 16_h1=0.3251, 16_l2=0.2116, 32_h1=0.4519, 32_l2=0.2245
- [19] time=0.27, avg_loss=0.2802, train_err=4.0024
+ [19] time=0.26, avg_loss=0.2802, train_err=4.0024
Eval: 16_h1=0.3201, 16_l2=0.2110, 32_h1=0.4533, 32_l2=0.2245
- {'train_err': 4.002395851271493, 'avg_loss': 0.2801677095890045, 'avg_lasso_loss': None, 'epoch_train_time': 0.27010723700004746, '16_h1': tensor(0.3201), '16_l2': tensor(0.2110), '32_h1': tensor(0.4533), '32_l2': tensor(0.2245)}
+ {'train_err': 4.002395851271493, 'avg_loss': 0.2801677095890045, 'avg_lasso_loss': None, 'epoch_train_time': 0.26419909200001257, '16_h1': tensor(0.3201), '16_l2': tensor(0.2110), '32_h1': tensor(0.4533), '32_l2': tensor(0.2245)}
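The "Incre Res Update: change res to 16" line earlier in this log comes from incremental-resolution training: the model starts on coarsely subsampled inputs and the resolution is raised as training progresses. A toy sketch of such a schedule (the epoch thresholds and resolutions here are made up for illustration, not the ones the example uses):

```python
def resolution_for_epoch(epoch, schedule=((0, 8), (5, 12), (10, 16))):
    """Return the training resolution for a given epoch.

    `schedule` is a sequence of (start_epoch, resolution) pairs in
    increasing order; we take the last entry whose start_epoch has
    been reached.
    """
    res = schedule[0][1]
    for start, r in schedule:
        if epoch >= start:
            res = r
    return res

print([resolution_for_epoch(e) for e in (0, 5, 9, 10, 19)])  # [8, 12, 12, 16, 16]
```

This matches the shape of the log above, where early epochs train fast on low-resolution data and the per-epoch time jumps once the resolution is bumped.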
Expand Down Expand Up @@ -447,7 +447,7 @@ In practice we would train a Neural Operator on one or multiple GPUs

.. rst-class:: sphx-glr-timing

- **Total running time of the script:** (0 minutes 6.625 seconds)
+ **Total running time of the script:** (0 minutes 6.578 seconds)


.. _sphx_glr_download_auto_examples_plot_incremental_FNO_darcy.py:
