Commit 6aa2f70

Github action: auto-update.

github-actions[bot] committed Nov 14, 2024
1 parent 5d77994 commit 6aa2f70
Showing 36 changed files with 167 additions and 167 deletions.
Binary file not shown.
Binary file modified dev/_images/sphx_glr_plot_SFNO_swe_001.png
Binary file modified dev/_images/sphx_glr_plot_SFNO_swe_thumb.png
Binary file modified dev/_images/sphx_glr_plot_UNO_darcy_001.png
Binary file modified dev/_images/sphx_glr_plot_UNO_darcy_thumb.png
4 changes: 2 additions & 2 deletions dev/_sources/auto_examples/plot_DISCO_convolutions.rst.txt
@@ -357,7 +357,7 @@ in order to compute the convolved image, we need to first bring it into the righ
.. code-block:: none

- <matplotlib.colorbar.Colorbar object at 0x7f0e461dc7f0>
+ <matplotlib.colorbar.Colorbar object at 0x7f8cc01feb50>
@@ -448,7 +448,7 @@ in order to compute the convolved image, we need to first bring it into the righ
.. rst-class:: sphx-glr-timing

-**Total running time of the script:** (0 minutes 36.353 seconds)
+**Total running time of the script:** (0 minutes 36.596 seconds)


.. _sphx_glr_download_auto_examples_plot_DISCO_convolutions.py:
16 changes: 8 additions & 8 deletions dev/_sources/auto_examples/plot_FNO_darcy.rst.txt
@@ -248,13 +248,13 @@ Training the model
)
### SCHEDULER ###
-<torch.optim.lr_scheduler.CosineAnnealingLR object at 0x7f0e776c2fa0>
+<torch.optim.lr_scheduler.CosineAnnealingLR object at 0x7f8cf16b82b0>
### LOSSES ###
-* Train: <neuralop.losses.data_losses.H1Loss object at 0x7f0e776c2f40>
+* Train: <neuralop.losses.data_losses.H1Loss object at 0x7f8cf16b8820>
-* Test: {'h1': <neuralop.losses.data_losses.H1Loss object at 0x7f0e776c2f40>, 'l2': <neuralop.losses.data_losses.LpLoss object at 0x7f0e7769fbe0>}
+* Test: {'h1': <neuralop.losses.data_losses.H1Loss object at 0x7f8cf16b8820>, 'l2': <neuralop.losses.data_losses.LpLoss object at 0x7f8cd0cf9940>}
@@ -317,16 +317,16 @@ Then train the model on our small Darcy-Flow dataset:
Eval: 16_h1=0.2058, 16_l2=0.1716, 32_h1=0.3869, 32_l2=0.2037
[6] time=2.54, avg_loss=0.1687, train_err=5.2734
Eval: 16_h1=0.1864, 16_l2=0.1414, 32_h1=0.3874, 32_l2=0.1798
-[9] time=2.55, avg_loss=0.1457, train_err=4.5546
+[9] time=2.54, avg_loss=0.1457, train_err=4.5546
Eval: 16_h1=0.1864, 16_l2=0.1451, 32_h1=0.4279, 32_l2=0.1923
-[12] time=2.55, avg_loss=0.1348, train_err=4.2138
+[12] time=2.53, avg_loss=0.1348, train_err=4.2138
Eval: 16_h1=0.1892, 16_l2=0.1436, 32_h1=0.4446, 32_l2=0.1909
-[15] time=2.55, avg_loss=0.1176, train_err=3.6743
+[15] time=2.54, avg_loss=0.1176, train_err=3.6743
Eval: 16_h1=0.1565, 16_l2=0.1118, 32_h1=0.3807, 32_l2=0.1519
[18] time=2.54, avg_loss=0.0866, train_err=2.7047
Eval: 16_h1=0.1576, 16_l2=0.1159, 32_h1=0.4055, 32_l2=0.1698
-{'train_err': 2.8488178942352533, 'avg_loss': 0.0911621726155281, 'avg_lasso_loss': None, 'epoch_train_time': 2.5436050599998907}
+{'train_err': 2.8488178942352533, 'avg_loss': 0.0911621726155281, 'avg_lasso_loss': None, 'epoch_train_time': 2.536003271000027}
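The `16_l2` / `32_l2` entries in the log above are relative L2 errors at each test resolution. A minimal torch-free sketch of that metric, assuming the usual relative-norm definition ||pred − true||₂ / ||true||₂ (the convention `neuralop.losses.data_losses.LpLoss` is commonly configured with; check the library docs for its exact defaults):

```python
import math

def relative_l2(pred, true):
    """Relative L2 error: ||pred - true||_2 / ||true||_2."""
    num = math.sqrt(sum((p - t) ** 2 for p, t in zip(pred, true)))
    den = math.sqrt(sum(t ** 2 for t in true))
    return num / den

# Toy example: a prediction that is uniformly 10% high has relative error 0.1.
true = [1.0, 2.0, 3.0]
pred = [1.1, 2.2, 3.3]
print(relative_l2(pred, true))  # ~0.1
```

Because the error is normalized by ||true||, it is comparable across grid sizes, which is why the log can report 16×16 and 32×32 evaluations side by side.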
@@ -476,7 +476,7 @@ are other ways to scale the outputs of the FNO to train a true super-resolution

.. rst-class:: sphx-glr-timing

-**Total running time of the script:** (0 minutes 52.263 seconds)
+**Total running time of the script:** (0 minutes 52.108 seconds)


.. _sphx_glr_download_auto_examples_plot_FNO_darcy.py:
38 changes: 19 additions & 19 deletions dev/_sources/auto_examples/plot_SFNO_swe.rst.txt
@@ -239,13 +239,13 @@ Creating the losses
)
### SCHEDULER ###
-<torch.optim.lr_scheduler.CosineAnnealingLR object at 0x7f0e61677be0>
+<torch.optim.lr_scheduler.CosineAnnealingLR object at 0x7f8cccc411f0>
### LOSSES ###
-* Train: <neuralop.losses.data_losses.LpLoss object at 0x7f0e6169a6d0>
+* Train: <neuralop.losses.data_losses.LpLoss object at 0x7f8cccc41ac0>
-* Test: {'l2': <neuralop.losses.data_losses.LpLoss object at 0x7f0e6169a6d0>}
+* Test: {'l2': <neuralop.losses.data_losses.LpLoss object at 0x7f8cccc41ac0>}
@@ -302,22 +302,22 @@ Train the model on the spherical SWE dataset
Training on 200 samples
Testing on [50, 50] samples on resolutions [(32, 64), (64, 128)].
Raw outputs of shape torch.Size([4, 3, 32, 64])
-[0] time=3.51, avg_loss=2.5811, train_err=10.3245
-Eval: (32, 64)_l2=1.7888, (64, 128)_l2=2.3886
-[3] time=3.49, avg_loss=0.4420, train_err=1.7682
-Eval: (32, 64)_l2=0.4180, (64, 128)_l2=2.5558
-[6] time=3.46, avg_loss=0.2738, train_err=1.0950
-Eval: (32, 64)_l2=0.3758, (64, 128)_l2=2.4626
-[9] time=3.47, avg_loss=0.2376, train_err=0.9506
-Eval: (32, 64)_l2=0.2817, (64, 128)_l2=2.4057
-[12] time=3.50, avg_loss=0.2004, train_err=0.8015
-Eval: (32, 64)_l2=0.2895, (64, 128)_l2=2.3442
-[15] time=3.46, avg_loss=0.1736, train_err=0.6945
-Eval: (32, 64)_l2=0.2295, (64, 128)_l2=2.3143
-[18] time=3.46, avg_loss=0.1513, train_err=0.6052
-Eval: (32, 64)_l2=0.2300, (64, 128)_l2=2.2962
+[0] time=3.62, avg_loss=2.5873, train_err=10.3491
+Eval: (32, 64)_l2=1.6625, (64, 128)_l2=2.3805
+[3] time=3.59, avg_loss=0.4299, train_err=1.7195
+Eval: (32, 64)_l2=0.3705, (64, 128)_l2=2.3807
+[6] time=3.56, avg_loss=0.2634, train_err=1.0535
+Eval: (32, 64)_l2=0.3257, (64, 128)_l2=2.3691
+[9] time=3.56, avg_loss=0.2119, train_err=0.8477
+Eval: (32, 64)_l2=0.2609, (64, 128)_l2=2.3360
+[12] time=3.55, avg_loss=0.1822, train_err=0.7286
+Eval: (32, 64)_l2=0.2307, (64, 128)_l2=2.3128
+[15] time=3.56, avg_loss=0.1581, train_err=0.6322
+Eval: (32, 64)_l2=0.1898, (64, 128)_l2=2.3094
+[18] time=3.53, avg_loss=0.1431, train_err=0.5725
+Eval: (32, 64)_l2=0.1784, (64, 128)_l2=2.3245
-{'train_err': 0.601245778799057, 'avg_loss': 0.15031144469976426, 'avg_lasso_loss': None, 'epoch_train_time': 3.4630227890000356}
+{'train_err': 0.5559453934431076, 'avg_loss': 0.1389863483607769, 'avg_lasso_loss': None, 'epoch_train_time': 3.525268703999984}
@@ -388,7 +388,7 @@ In practice we would train a Neural Operator on one or multiple GPUs

.. rst-class:: sphx-glr-timing

-**Total running time of the script:** (1 minutes 25.195 seconds)
+**Total running time of the script:** (1 minutes 27.116 seconds)


.. _sphx_glr_download_auto_examples_plot_SFNO_swe.py:
38 changes: 19 additions & 19 deletions dev/_sources/auto_examples/plot_UNO_darcy.rst.txt
@@ -345,13 +345,13 @@ Creating the losses
)
### SCHEDULER ###
-<torch.optim.lr_scheduler.CosineAnnealingLR object at 0x7f0e616613a0>
+<torch.optim.lr_scheduler.CosineAnnealingLR object at 0x7f8cccbf1640>
### LOSSES ###
-* Train: <neuralop.losses.data_losses.H1Loss object at 0x7f0e616611f0>
+* Train: <neuralop.losses.data_losses.H1Loss object at 0x7f8cccbf1610>
-* Test: {'h1': <neuralop.losses.data_losses.H1Loss object at 0x7f0e616611f0>, 'l2': <neuralop.losses.data_losses.LpLoss object at 0x7f0e616612b0>}
+* Test: {'h1': <neuralop.losses.data_losses.H1Loss object at 0x7f8cccbf1610>, 'l2': <neuralop.losses.data_losses.LpLoss object at 0x7f8cccbf12e0>}
@@ -410,22 +410,22 @@ Actually train the model on our small Darcy-Flow dataset
Training on 1000 samples
Testing on [50, 50] samples on resolutions [16, 32].
Raw outputs of shape torch.Size([32, 1, 16, 16])
-[0] time=10.17, avg_loss=0.6441, train_err=20.1282
-Eval: 16_h1=0.3095, 16_l2=0.2449, 32_h1=0.7229, 32_l2=0.5358
-[3] time=10.14, avg_loss=0.2428, train_err=7.5868
-Eval: 16_h1=0.2699, 16_l2=0.2214, 32_h1=0.7097, 32_l2=0.5759
-[6] time=10.13, avg_loss=0.2352, train_err=7.3495
-Eval: 16_h1=0.2115, 16_l2=0.1576, 32_h1=0.7179, 32_l2=0.5758
-[9] time=10.13, avg_loss=0.2030, train_err=6.3450
-Eval: 16_h1=0.2484, 16_l2=0.1970, 32_h1=0.6780, 32_l2=0.5503
-[12] time=10.10, avg_loss=0.1893, train_err=5.9155
-Eval: 16_h1=0.2512, 16_l2=0.1867, 32_h1=0.6899, 32_l2=0.5352
-[15] time=10.07, avg_loss=0.1575, train_err=4.9215
-Eval: 16_h1=0.1997, 16_l2=0.1529, 32_h1=0.6739, 32_l2=0.5376
-[18] time=10.10, avg_loss=0.1334, train_err=4.1696
-Eval: 16_h1=0.1973, 16_l2=0.1490, 32_h1=0.6727, 32_l2=0.5054
+[0] time=10.16, avg_loss=0.6551, train_err=20.4733
+Eval: 16_h1=0.3185, 16_l2=0.2467, 32_h1=0.8165, 32_l2=0.6623
+[3] time=10.08, avg_loss=0.2450, train_err=7.6547
+Eval: 16_h1=0.2639, 16_l2=0.2082, 32_h1=0.7555, 32_l2=0.6193
+[6] time=10.11, avg_loss=0.2239, train_err=6.9981
+Eval: 16_h1=0.2054, 16_l2=0.1505, 32_h1=0.7867, 32_l2=0.6405
+[9] time=10.19, avg_loss=0.2389, train_err=7.4641
+Eval: 16_h1=0.2084, 16_l2=0.1527, 32_h1=0.7668, 32_l2=0.6469
+[12] time=10.10, avg_loss=0.1708, train_err=5.3378
+Eval: 16_h1=0.2286, 16_l2=0.1798, 32_h1=0.7274, 32_l2=0.5666
+[15] time=10.11, avg_loss=0.1493, train_err=4.6665
+Eval: 16_h1=0.2102, 16_l2=0.1550, 32_h1=0.7123, 32_l2=0.5348
+[18] time=10.16, avg_loss=0.1667, train_err=5.2080
+Eval: 16_h1=0.1794, 16_l2=0.1285, 32_h1=0.6974, 32_l2=0.5106
-{'train_err': 4.134030694141984, 'avg_loss': 0.13228898221254348, 'avg_lasso_loss': None, 'epoch_train_time': 10.076509817999977}
+{'train_err': 4.332731641829014, 'avg_loss': 0.13864741253852844, 'avg_lasso_loss': None, 'epoch_train_time': 10.153018183000086}
@@ -499,7 +499,7 @@ In practice we would train a Neural Operator on one or multiple GPUs

.. rst-class:: sphx-glr-timing

-**Total running time of the script:** (3 minutes 25.421 seconds)
+**Total running time of the script:** (3 minutes 26.027 seconds)


.. _sphx_glr_download_auto_examples_plot_UNO_darcy.py:
4 changes: 2 additions & 2 deletions dev/_sources/auto_examples/plot_count_flops.rst.txt
@@ -80,7 +80,7 @@ This output is organized as a defaultdict object that counts the FLOPS used in e

.. code-block:: none
-defaultdict(<function FlopTensorDispatchMode.__init__.<locals>.<lambda> at 0x7f0e7769c310>, {'': defaultdict(<class 'int'>, {'convolution.default': 2982150144, 'bmm.default': 138412032}), 'lifting': defaultdict(<class 'int'>, {'convolution.default': 562036736}), 'lifting.fcs.0': defaultdict(<class 'int'>, {'convolution.default': 25165824}), 'lifting.fcs.1': defaultdict(<class 'int'>, {'convolution.default': 536870912}), 'fno_blocks': defaultdict(<class 'int'>, {'convolution.default': 2147483648, 'bmm.default': 138412032}), 'fno_blocks.fno_skips.0': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.fno_skips.0.conv': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.convs.0': defaultdict(<class 'int'>, {'bmm.default': 34603008}), 'fno_blocks.channel_mlp.0': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.channel_mlp.0.fcs.0': defaultdict(<class 'int'>, {'convolution.default': 134217728}), 'fno_blocks.channel_mlp.0.fcs.1': defaultdict(<class 'int'>, {'convolution.default': 134217728}), 'fno_blocks.fno_skips.1': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.fno_skips.1.conv': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.convs.1': defaultdict(<class 'int'>, {'bmm.default': 34603008}), 'fno_blocks.channel_mlp.1': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.channel_mlp.1.fcs.0': defaultdict(<class 'int'>, {'convolution.default': 134217728}), 'fno_blocks.channel_mlp.1.fcs.1': defaultdict(<class 'int'>, {'convolution.default': 134217728}), 'fno_blocks.fno_skips.2': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.fno_skips.2.conv': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.convs.2': defaultdict(<class 'int'>, {'bmm.default': 34603008}), 'fno_blocks.channel_mlp.2': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.channel_mlp.2.fcs.0': defaultdict(<class 'int'>, {'convolution.default': 134217728}), 'fno_blocks.channel_mlp.2.fcs.1': defaultdict(<class 'int'>, {'convolution.default': 134217728}), 'fno_blocks.fno_skips.3': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.fno_skips.3.conv': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.convs.3': defaultdict(<class 'int'>, {'bmm.default': 34603008}), 'fno_blocks.channel_mlp.3': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.channel_mlp.3.fcs.0': defaultdict(<class 'int'>, {'convolution.default': 134217728}), 'fno_blocks.channel_mlp.3.fcs.1': defaultdict(<class 'int'>, {'convolution.default': 134217728}), 'projection': defaultdict(<class 'int'>, {'convolution.default': 272629760}), 'projection.fcs.0': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'projection.fcs.1': defaultdict(<class 'int'>, {'convolution.default': 4194304})})
+defaultdict(<function FlopTensorDispatchMode.__init__.<locals>.<lambda> at 0x7f8cf1695310>, {'': defaultdict(<class 'int'>, {'convolution.default': 2982150144, 'bmm.default': 138412032}), 'lifting': defaultdict(<class 'int'>, {'convolution.default': 562036736}), 'lifting.fcs.0': defaultdict(<class 'int'>, {'convolution.default': 25165824}), 'lifting.fcs.1': defaultdict(<class 'int'>, {'convolution.default': 536870912}), 'fno_blocks': defaultdict(<class 'int'>, {'convolution.default': 2147483648, 'bmm.default': 138412032}), 'fno_blocks.fno_skips.0': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.fno_skips.0.conv': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.convs.0': defaultdict(<class 'int'>, {'bmm.default': 34603008}), 'fno_blocks.channel_mlp.0': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.channel_mlp.0.fcs.0': defaultdict(<class 'int'>, {'convolution.default': 134217728}), 'fno_blocks.channel_mlp.0.fcs.1': defaultdict(<class 'int'>, {'convolution.default': 134217728}), 'fno_blocks.fno_skips.1': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.fno_skips.1.conv': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.convs.1': defaultdict(<class 'int'>, {'bmm.default': 34603008}), 'fno_blocks.channel_mlp.1': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.channel_mlp.1.fcs.0': defaultdict(<class 'int'>, {'convolution.default': 134217728}), 'fno_blocks.channel_mlp.1.fcs.1': defaultdict(<class 'int'>, {'convolution.default': 134217728}), 'fno_blocks.fno_skips.2': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.fno_skips.2.conv': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.convs.2': defaultdict(<class 'int'>, {'bmm.default': 34603008}), 'fno_blocks.channel_mlp.2': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.channel_mlp.2.fcs.0': defaultdict(<class 'int'>, {'convolution.default': 134217728}), 'fno_blocks.channel_mlp.2.fcs.1': defaultdict(<class 'int'>, {'convolution.default': 134217728}), 'fno_blocks.fno_skips.3': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.fno_skips.3.conv': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.convs.3': defaultdict(<class 'int'>, {'bmm.default': 34603008}), 'fno_blocks.channel_mlp.3': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.channel_mlp.3.fcs.0': defaultdict(<class 'int'>, {'convolution.default': 134217728}), 'fno_blocks.channel_mlp.3.fcs.1': defaultdict(<class 'int'>, {'convolution.default': 134217728}), 'projection': defaultdict(<class 'int'>, {'convolution.default': 272629760}), 'projection.fcs.0': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'projection.fcs.1': defaultdict(<class 'int'>, {'convolution.default': 4194304})})
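Each entry in the output above maps a module path to a per-op FLOP count (`''` is the whole model). A hypothetical, torch-free helper showing how such a table can be reduced to per-module totals, in the spirit of the recursive maximum-FLOP check described below; the numbers are a small excerpt copied from the output, and `total_flops` is an illustrative sketch, not part of `neuralop` or `FlopTensorDispatchMode`:

```python
def total_flops(table):
    """Sum the op counts for each module; return {module_name: total}."""
    return {mod: sum(ops.values()) for mod, ops in table.items()}

# Excerpt of the per-module table printed above ('' is the root module).
flop_table = {
    "": {"convolution.default": 2982150144, "bmm.default": 138412032},
    "lifting": {"convolution.default": 562036736},
    "fno_blocks": {"convolution.default": 2147483648, "bmm.default": 138412032},
    "projection": {"convolution.default": 272629760},
}

totals = total_flops(flop_table)
busiest = max(totals, key=totals.get)
print(busiest, totals[busiest])  # root module '' carries the grand total
```

Note that parent modules (like `''` and `fno_blocks`) already include their children's counts, so the maximum is always attained at the root.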
@@ -125,7 +125,7 @@ To check the maximum FLOPS used during the forward pass, let's create a recursiv
.. rst-class:: sphx-glr-timing

-**Total running time of the script:** (0 minutes 3.686 seconds)
+**Total running time of the script:** (0 minutes 3.795 seconds)


.. _sphx_glr_download_auto_examples_plot_count_flops.py:
2 changes: 1 addition & 1 deletion dev/_sources/auto_examples/plot_darcy_flow.rst.txt
@@ -163,7 +163,7 @@ Visualizing the data
.. rst-class:: sphx-glr-timing

-**Total running time of the script:** (0 minutes 0.399 seconds)
+**Total running time of the script:** (0 minutes 0.406 seconds)


.. _sphx_glr_download_auto_examples_plot_darcy_flow.py:
@@ -219,7 +219,7 @@ Loading the Navier-Stokes dataset in 128x128 resolution
.. rst-class:: sphx-glr-timing

-**Total running time of the script:** (0 minutes 0.276 seconds)
+**Total running time of the script:** (0 minutes 0.278 seconds)


.. _sphx_glr_download_auto_examples_plot_darcy_flow_spectrum.py:
24 changes: 12 additions & 12 deletions dev/_sources/auto_examples/plot_incremental_FNO_darcy.rst.txt
@@ -240,15 +240,15 @@ Set up the losses
)
### SCHEDULER ###
-<torch.optim.lr_scheduler.CosineAnnealingLR object at 0x7f0e6277eac0>
+<torch.optim.lr_scheduler.CosineAnnealingLR object at 0x7f8cdc613910>
### LOSSES ###
### INCREMENTAL RESOLUTION + GRADIENT EXPLAINED ###
-* Train: <neuralop.losses.data_losses.H1Loss object at 0x7f0e776900d0>
+* Train: <neuralop.losses.data_losses.H1Loss object at 0x7f8cc39b83a0>
-* Test: {'h1': <neuralop.losses.data_losses.H1Loss object at 0x7f0e776900d0>, 'l2': <neuralop.losses.data_losses.LpLoss object at 0x7f0e77690a00>}
+* Test: {'h1': <neuralop.losses.data_losses.H1Loss object at 0x7f8cc39b83a0>, 'l2': <neuralop.losses.data_losses.LpLoss object at 0x7f8cc39b80d0>}
@@ -329,13 +329,13 @@ Train the model
Training on 100 samples
Testing on [50, 50] samples on resolutions [16, 32].
Raw outputs of shape torch.Size([16, 1, 8, 8])
-[0] time=0.24, avg_loss=0.8115, train_err=11.5929
+[0] time=0.25, avg_loss=0.8115, train_err=11.5929
Eval: 16_h1=0.7332, 16_l2=0.5739, 32_h1=0.7863, 32_l2=0.5720
-[1] time=0.23, avg_loss=0.6661, train_err=9.5159
+[1] time=0.22, avg_loss=0.6661, train_err=9.5159
Eval: 16_h1=0.8889, 16_l2=0.7005, 32_h1=1.1195, 32_l2=0.7407
[2] time=0.22, avg_loss=0.6497, train_err=9.2815
Eval: 16_h1=0.6372, 16_l2=0.4883, 32_h1=0.6967, 32_l2=0.5000
-[3] time=0.22, avg_loss=0.5559, train_err=7.9411
+[3] time=0.23, avg_loss=0.5559, train_err=7.9411
Eval: 16_h1=0.6112, 16_l2=0.4348, 32_h1=0.7432, 32_l2=0.4530
[4] time=0.22, avg_loss=0.4852, train_err=6.9312
Eval: 16_h1=0.5762, 16_l2=0.4037, 32_h1=0.7138, 32_l2=0.4262
@@ -352,28 +352,28 @@ Train the model
Incre Res Update: change index to 1
Incre Res Update: change sub to 1
Incre Res Update: change res to 16
-[10] time=0.28, avg_loss=0.4253, train_err=6.0757
+[10] time=0.29, avg_loss=0.4253, train_err=6.0757
Eval: 16_h1=0.4069, 16_l2=0.2959, 32_h1=0.4904, 32_l2=0.2928
-[11] time=0.27, avg_loss=0.3745, train_err=5.3500
+[11] time=0.28, avg_loss=0.3745, train_err=5.3500
Eval: 16_h1=0.3820, 16_l2=0.2869, 32_h1=0.4769, 32_l2=0.3026
[12] time=0.28, avg_loss=0.3405, train_err=4.8636
Eval: 16_h1=0.3404, 16_l2=0.2598, 32_h1=0.4410, 32_l2=0.2731
[13] time=0.28, avg_loss=0.3090, train_err=4.4136
Eval: 16_h1=0.3231, 16_l2=0.2452, 32_h1=0.4245, 32_l2=0.2586
[14] time=0.28, avg_loss=0.2896, train_err=4.1368
Eval: 16_h1=0.3130, 16_l2=0.2380, 32_h1=0.4161, 32_l2=0.2522
-[15] time=0.28, avg_loss=0.2789, train_err=3.9843
+[15] time=0.29, avg_loss=0.2789, train_err=3.9843
Eval: 16_h1=0.3072, 16_l2=0.2324, 32_h1=0.4151, 32_l2=0.2455
[16] time=0.28, avg_loss=0.2690, train_err=3.8434
Eval: 16_h1=0.3042, 16_l2=0.2305, 32_h1=0.4100, 32_l2=0.2425
[17] time=0.28, avg_loss=0.2637, train_err=3.7674
Eval: 16_h1=0.2954, 16_l2=0.2229, 32_h1=0.4023, 32_l2=0.2354
-[18] time=0.28, avg_loss=0.2557, train_err=3.6533
+[18] time=0.29, avg_loss=0.2557, train_err=3.6533
Eval: 16_h1=0.2756, 16_l2=0.2105, 32_h1=0.3780, 32_l2=0.2269
[19] time=0.28, avg_loss=0.2395, train_err=3.4208
Eval: 16_h1=0.2735, 16_l2=0.2106, 32_h1=0.3738, 32_l2=0.2303
-{'train_err': 3.4207682268960133, 'avg_loss': 0.23945377588272096, 'avg_lasso_loss': None, 'epoch_train_time': 0.2815485109999827, '16_h1': tensor(0.2735), '16_l2': tensor(0.2106), '32_h1': tensor(0.3738), '32_l2': tensor(0.2303)}
+{'train_err': 3.4207682268960133, 'avg_loss': 0.23945377588272096, 'avg_lasso_loss': None, 'epoch_train_time': 0.2821420150000904, '16_h1': tensor(0.2735), '16_l2': tensor(0.2106), '32_h1': tensor(0.3738), '32_l2': tensor(0.2303)}
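The `Incre Res Update` lines in the log record the incremental-resolution strategy: training starts on subsampled 8×8 inputs and the subsampling factor drops to 1 (full 16×16 resolution) partway through. A toy sketch of such a schedule, assuming a fixed epoch step; the real incremental trainer decides when to update dynamically, so `step` here is a made-up simplification, not a trainer parameter:

```python
def resolution_schedule(full_res, n_epochs, step):
    """Return the training resolution for each epoch: start coarse and,
    every `step` epochs, halve the subsampling factor (doubling the
    resolution) until `full_res` is reached."""
    schedule = []
    sub = 2  # initial subsampling factor, i.e. train at full_res // 2
    for epoch in range(n_epochs):
        if epoch > 0 and epoch % step == 0 and sub > 1:
            sub //= 2  # mirrors "change sub to 1 ... change res to 16"
        schedule.append(full_res // sub)
    return schedule

# First 10 epochs at 8x8 (cf. "Raw outputs of shape [16, 1, 8, 8]"),
# then full 16x16 after the update at epoch 10, as in the log above.
print(resolution_schedule(16, 20, 10))
```

Training the early epochs at coarse resolution is what makes the per-epoch times jump from ~0.22s to ~0.28s once the resolution switches to 16×16.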
@@ -447,7 +447,7 @@ In practice we would train a Neural Operator on one or multiple GPUs

.. rst-class:: sphx-glr-timing

-**Total running time of the script:** (0 minutes 7.019 seconds)
+**Total running time of the script:** (0 minutes 7.106 seconds)


.. _sphx_glr_download_auto_examples_plot_incremental_FNO_darcy.py: