Commit 420e0b5

Github action: auto-update.

github-actions[bot] committed Dec 19, 2024
1 parent e1a6663
Showing 37 changed files with 233 additions and 172 deletions.
11 binary files not shown.
Binary file modified dev/_images/sphx_glr_plot_SFNO_swe_001.png
Binary file modified dev/_images/sphx_glr_plot_SFNO_swe_thumb.png
Binary file modified dev/_images/sphx_glr_plot_UNO_darcy_001.png
Binary file modified dev/_images/sphx_glr_plot_UNO_darcy_thumb.png
63 changes: 62 additions & 1 deletion dev/_modules/neuralop/models/base_model.html
@@ -191,7 +191,68 @@ Source code for neuralop.models.base_model
        instance._init_kwargs = kwargs

        return instance

+    def state_dict(self, destination: dict = None, prefix: str = '', keep_vars: bool = False):
+        """
+        state_dict overrides nn.Module.state_dict() and adds a metadata field
+        to track the model version and ensure only compatible saves are loaded.
+
+        Parameters
+        ----------
+        destination : dict, optional
+            If provided, the state of the module will be updated into the dict
+            and the same object is returned. Otherwise, an OrderedDict will be
+            created and returned, by default None
+        prefix : str, optional
+            a prefix added to parameter and buffer names to compose the keys
+            in state_dict, by default ``''``
+        keep_vars : bool, optional
+            by default the torch.Tensors returned in the state dict are
+            detached from autograd. If True, detaching will not be performed,
+            by default False
+        """
+        state_dict = super().state_dict(destination=destination, prefix=prefix, keep_vars=keep_vars)
+        if state_dict.get('_metadata') is None:
+            state_dict['_metadata'] = self._init_kwargs
+        else:
+            warnings.warn("Attempting to update metadata for a module with metadata already in self.state_dict()")
+        return state_dict
+
+    def load_state_dict(self, state_dict, strict=True, assign=False):
+        """load_state_dict overrides nn.Module.load_state_dict() and adds a metadata field
+        to track the model version and ensure only compatible saves are loaded.
+
+        Parameters
+        ----------
+        state_dict : dict
+            state dictionary generated by ``nn.Module.state_dict()``
+        strict : bool, optional
+            whether to strictly enforce that the keys in ``state_dict``
+            match the keys returned by this module's ``state_dict()``, by default True
+        assign : bool, optional
+            whether to assign items in the state dict to their corresponding keys
+            in the module instead of copying them in-place into the module's current
+            parameters and buffers. When False, the properties of the tensors in the
+            current module are preserved; when True, the properties of the Tensors
+            in the state dict are preserved, by default False
+
+        Returns
+        -------
+        NamedTuple
+            with ``missing_keys`` and ``unexpected_keys`` fields, as returned
+            by ``nn.Module.load_state_dict()``
+        """
+        # remove the state dict metadata first to ensure proper loading
+        # with the underlying PyTorch module
+        metadata = state_dict.pop('_metadata', None)
+
+        if metadata is not None:
+            saved_version = metadata.get('_version', None)
+            if saved_version is None:
+                warnings.warn(f"Saved instance of {self.__class__} has no stored version attribute.")
+            if saved_version != self._version:
+                warnings.warn(f"Attempting to load a {self.__class__} of version {saved_version}, "
+                              f"but the current version of {self.__class__} is {self._version}.")
+        return super().load_state_dict(state_dict, strict=strict, assign=assign)

    def save_checkpoint(self, save_folder, save_name):
        """Saves the model state and init param in the given folder under the given name
        """
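For orientation: the additions above record each model's constructor kwargs under a '_metadata' entry of the state dict, and on load they compare the stored '_version' against the class's current version before handing off to PyTorch. Below is a minimal sketch of that round trip; the FNO constructor and its kwargs are illustrative assumptions, not part of this commit.

.. code-block:: python

    from neuralop.models import FNO  # FNO derives from BaseModel

    # BaseModel.__new__ stores the constructor kwargs as _init_kwargs;
    # state_dict() then embeds them under the '_metadata' key.
    model = FNO(n_modes=(16, 16), hidden_channels=32,
                in_channels=1, out_channels=1)

    state = model.state_dict()
    print(state['_metadata'])  # the recorded init kwargs

    # load_state_dict() pops '_metadata' before delegating to
    # nn.Module.load_state_dict(), warning if the saved '_version'
    # is missing or differs from the class's current _version.
    model.load_state_dict(state)

Popping '_metadata' first matters: with strict=True, a plain nn.Module.load_state_dict() call would otherwise reject the extra key as unexpected.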
4 changes: 2 additions & 2 deletions dev/_sources/auto_examples/plot_DISCO_convolutions.rst.txt
@@ -357,7 +357,7 @@ in order to compute the convolved image, we need to first bring it into the righ

.. code-block:: none

-    <matplotlib.colorbar.Colorbar object at 0x7f568db98550>
+    <matplotlib.colorbar.Colorbar object at 0x7f790614c550>
@@ -448,7 +448,7 @@ in order to compute the convolved image, we need to first bring it into the righ
.. rst-class:: sphx-glr-timing

-**Total running time of the script:** (0 minutes 31.674 seconds)
+**Total running time of the script:** (0 minutes 31.281 seconds)


.. _sphx_glr_download_auto_examples_plot_DISCO_convolutions.py:
22 changes: 11 additions & 11 deletions dev/_sources/auto_examples/plot_FNO_darcy.rst.txt
@@ -248,13 +248,13 @@ Training the model
)
### SCHEDULER ###
-<torch.optim.lr_scheduler.CosineAnnealingLR object at 0x7f568b636ad0>
+<torch.optim.lr_scheduler.CosineAnnealingLR object at 0x7f791941ead0>
### LOSSES ###
-* Train: <neuralop.losses.data_losses.H1Loss object at 0x7f568b637d90>
+* Train: <neuralop.losses.data_losses.H1Loss object at 0x7f791941e850>
-* Test: {'h1': <neuralop.losses.data_losses.H1Loss object at 0x7f568b637d90>, 'l2': <neuralop.losses.data_losses.LpLoss object at 0x7f568b637b10>}
+* Test: {'h1': <neuralop.losses.data_losses.H1Loss object at 0x7f791941e850>, 'l2': <neuralop.losses.data_losses.LpLoss object at 0x7f791941e490>}
@@ -311,22 +311,22 @@ Then train the model on our small Darcy-Flow dataset:
Training on 1000 samples
Testing on [50, 50] samples on resolutions [16, 32].
Raw outputs of shape torch.Size([32, 1, 16, 16])
-[0] time=2.59, avg_loss=0.6956, train_err=21.7383
+[0] time=2.57, avg_loss=0.6956, train_err=21.7383
Eval: 16_h1=0.4298, 16_l2=0.3487, 32_h1=0.5847, 32_l2=0.3542
[3] time=2.57, avg_loss=0.2103, train_err=6.5705
Eval: 16_h1=0.2030, 16_l2=0.1384, 32_h1=0.5075, 32_l2=0.1774
-[6] time=2.58, avg_loss=0.1911, train_err=5.9721
+[6] time=2.55, avg_loss=0.1911, train_err=5.9721
Eval: 16_h1=0.2099, 16_l2=0.1374, 32_h1=0.4907, 32_l2=0.1783
-[9] time=2.64, avg_loss=0.1410, train_err=4.4073
+[9] time=2.56, avg_loss=0.1410, train_err=4.4073
Eval: 16_h1=0.2052, 16_l2=0.1201, 32_h1=0.5268, 32_l2=0.1615
-[12] time=2.63, avg_loss=0.1422, train_err=4.4434
+[12] time=2.57, avg_loss=0.1422, train_err=4.4434
Eval: 16_h1=0.2131, 16_l2=0.1285, 32_h1=0.5413, 32_l2=0.1741
-[15] time=2.61, avg_loss=0.1198, train_err=3.7424
+[15] time=2.56, avg_loss=0.1198, train_err=3.7424
Eval: 16_h1=0.1984, 16_l2=0.1137, 32_h1=0.5255, 32_l2=0.1569
-[18] time=2.59, avg_loss=0.1104, train_err=3.4502
+[18] time=2.55, avg_loss=0.1104, train_err=3.4502
Eval: 16_h1=0.2039, 16_l2=0.1195, 32_h1=0.5062, 32_l2=0.1603
-{'train_err': 2.9605126455426216, 'avg_loss': 0.0947364046573639, 'avg_lasso_loss': None, 'epoch_train_time': 2.582470736999994}
+{'train_err': 2.9605126455426216, 'avg_loss': 0.0947364046573639, 'avg_lasso_loss': None, 'epoch_train_time': 2.5690329919999613}
@@ -476,7 +476,7 @@ are other ways to scale the outputs of the FNO to train a true super-resolution

.. rst-class:: sphx-glr-timing

-**Total running time of the script:** (0 minutes 53.239 seconds)
+**Total running time of the script:** (0 minutes 52.571 seconds)


.. _sphx_glr_download_auto_examples_plot_FNO_darcy.py:
38 changes: 19 additions & 19 deletions dev/_sources/auto_examples/plot_SFNO_swe.rst.txt
@@ -234,13 +234,13 @@ Creating the losses
)
### SCHEDULER ###
-<torch.optim.lr_scheduler.CosineAnnealingLR object at 0x7f569bfdf770>
+<torch.optim.lr_scheduler.CosineAnnealingLR object at 0x7f79195ab230>
### LOSSES ###
-* Train: <neuralop.losses.data_losses.LpLoss object at 0x7f569bfdf620>
+* Train: <neuralop.losses.data_losses.LpLoss object at 0x7f79195aacf0>
-* Test: {'l2': <neuralop.losses.data_losses.LpLoss object at 0x7f569bfdf620>}
+* Test: {'l2': <neuralop.losses.data_losses.LpLoss object at 0x7f79195aacf0>}
@@ -297,22 +297,22 @@ Train the model on the spherical SWE dataset
Training on 200 samples
Testing on [50, 50] samples on resolutions [(32, 64), (64, 128)].
Raw outputs of shape torch.Size([4, 3, 32, 64])
-[0] time=3.71, avg_loss=2.5890, train_err=10.3559
-Eval: (32, 64)_l2=2.0514, (64, 128)_l2=2.5055
-[3] time=3.64, avg_loss=0.4467, train_err=1.7867
-Eval: (32, 64)_l2=0.6580, (64, 128)_l2=2.5187
-[6] time=3.60, avg_loss=0.2807, train_err=1.1226
-Eval: (32, 64)_l2=0.5458, (64, 128)_l2=2.4767
-[9] time=3.59, avg_loss=0.2355, train_err=0.9419
-Eval: (32, 64)_l2=0.5549, (64, 128)_l2=2.4920
-[12] time=3.65, avg_loss=0.2170, train_err=0.8680
-Eval: (32, 64)_l2=0.5206, (64, 128)_l2=2.4921
-[15] time=3.68, avg_loss=0.1824, train_err=0.7297
-Eval: (32, 64)_l2=0.4906, (64, 128)_l2=2.4846
-[18] time=3.62, avg_loss=0.1764, train_err=0.7054
-Eval: (32, 64)_l2=0.4640, (64, 128)_l2=2.4902
+[0] time=3.61, avg_loss=2.5783, train_err=10.3132
+Eval: (32, 64)_l2=1.7929, (64, 128)_l2=2.3944
+[3] time=3.61, avg_loss=0.3960, train_err=1.5841
+Eval: (32, 64)_l2=0.4045, (64, 128)_l2=2.6128
+[6] time=3.61, avg_loss=0.2645, train_err=1.0581
+Eval: (32, 64)_l2=0.2855, (64, 128)_l2=2.5574
+[9] time=3.57, avg_loss=0.2303, train_err=0.9211
+Eval: (32, 64)_l2=0.3216, (64, 128)_l2=2.5180
+[12] time=3.58, avg_loss=0.1810, train_err=0.7240
+Eval: (32, 64)_l2=0.2103, (64, 128)_l2=2.5075
+[15] time=3.57, avg_loss=0.1538, train_err=0.6152
+Eval: (32, 64)_l2=0.1890, (64, 128)_l2=2.4402
+[18] time=3.57, avg_loss=0.1323, train_err=0.5294
+Eval: (32, 64)_l2=0.1798, (64, 128)_l2=2.4265
-{'train_err': 0.6676547992229461, 'avg_loss': 0.16691369980573653, 'avg_lasso_loss': None, 'epoch_train_time': 3.599016188000064}
+{'train_err': 0.5195050823688507, 'avg_loss': 0.12987627059221268, 'avg_lasso_loss': None, 'epoch_train_time': 3.607363260999989}
@@ -383,7 +383,7 @@ In practice we would train a Neural Operator on one or multiple GPUs

.. rst-class:: sphx-glr-timing

-**Total running time of the script:** (1 minutes 28.830 seconds)
+**Total running time of the script:** (1 minutes 28.039 seconds)


.. _sphx_glr_download_auto_examples_plot_SFNO_swe.py:
38 changes: 19 additions & 19 deletions dev/_sources/auto_examples/plot_UNO_darcy.rst.txt
@@ -345,13 +345,13 @@ Creating the losses
)
### SCHEDULER ###
-<torch.optim.lr_scheduler.CosineAnnealingLR object at 0x7f5699c9e490>
+<torch.optim.lr_scheduler.CosineAnnealingLR object at 0x7f79182ee490>
### LOSSES ###
-* Train: <neuralop.losses.data_losses.H1Loss object at 0x7f56a11f5e80>
+* Train: <neuralop.losses.data_losses.H1Loss object at 0x7f79301ff620>
-* Test: {'h1': <neuralop.losses.data_losses.H1Loss object at 0x7f56a11f5e80>, 'l2': <neuralop.losses.data_losses.LpLoss object at 0x7f5699c9e210>}
+* Test: {'h1': <neuralop.losses.data_losses.H1Loss object at 0x7f79301ff620>, 'l2': <neuralop.losses.data_losses.LpLoss object at 0x7f79182ee210>}
@@ -410,22 +410,22 @@ Actually train the model on our small Darcy-Flow dataset
Training on 1000 samples
Testing on [50, 50] samples on resolutions [16, 32].
Raw outputs of shape torch.Size([32, 1, 16, 16])
-[0] time=10.19, avg_loss=0.6679, train_err=20.8720
-Eval: 16_h1=0.3981, 16_l2=0.2711, 32_h1=0.9217, 32_l2=0.6494
-[3] time=10.13, avg_loss=0.2426, train_err=7.5825
-Eval: 16_h1=0.2854, 16_l2=0.1799, 32_h1=0.7671, 32_l2=0.4978
-[6] time=10.09, avg_loss=0.2373, train_err=7.4160
-Eval: 16_h1=0.3061, 16_l2=0.1973, 32_h1=0.7619, 32_l2=0.4908
-[9] time=10.08, avg_loss=0.2195, train_err=6.8592
-Eval: 16_h1=0.2422, 16_l2=0.1575, 32_h1=0.7381, 32_l2=0.4485
-[12] time=10.11, avg_loss=0.1690, train_err=5.2798
-Eval: 16_h1=0.2518, 16_l2=0.1580, 32_h1=0.7492, 32_l2=0.4722
-[15] time=10.12, avg_loss=0.1825, train_err=5.7038
-Eval: 16_h1=0.2699, 16_l2=0.1664, 32_h1=0.7599, 32_l2=0.4589
-[18] time=10.08, avg_loss=0.1676, train_err=5.2361
-Eval: 16_h1=0.2356, 16_l2=0.1478, 32_h1=0.7352, 32_l2=0.4557
+[0] time=10.15, avg_loss=0.6631, train_err=20.7222
+Eval: 16_h1=0.4579, 16_l2=0.2992, 32_h1=0.9470, 32_l2=0.6763
+[3] time=10.05, avg_loss=0.2476, train_err=7.7385
+Eval: 16_h1=0.2439, 16_l2=0.1618, 32_h1=0.9045, 32_l2=0.6343
+[6] time=10.04, avg_loss=0.2285, train_err=7.1392
+Eval: 16_h1=0.2579, 16_l2=0.1738, 32_h1=0.8590, 32_l2=0.6153
+[9] time=10.07, avg_loss=0.1985, train_err=6.2036
+Eval: 16_h1=0.2429, 16_l2=0.1520, 32_h1=0.8739, 32_l2=0.5978
+[12] time=10.04, avg_loss=0.1856, train_err=5.8014
+Eval: 16_h1=0.2410, 16_l2=0.1467, 32_h1=0.8608, 32_l2=0.5604
+[15] time=10.05, avg_loss=0.1546, train_err=4.8301
+Eval: 16_h1=0.3156, 16_l2=0.2123, 32_h1=0.8466, 32_l2=0.6000
+[18] time=10.03, avg_loss=0.1247, train_err=3.8961
+Eval: 16_h1=0.2340, 16_l2=0.1354, 32_h1=0.8477, 32_l2=0.5822
-{'train_err': 4.981805760413408, 'avg_loss': 0.15941778433322906, 'avg_lasso_loss': None, 'epoch_train_time': 10.092775133000032}
+{'train_err': 4.671443767845631, 'avg_loss': 0.14948620057106018, 'avg_lasso_loss': None, 'epoch_train_time': 10.053496939999945}
@@ -499,7 +499,7 @@ In practice we would train a Neural Operator on one or multiple GPUs

.. rst-class:: sphx-glr-timing

-**Total running time of the script:** (3 minutes 26.114 seconds)
+**Total running time of the script:** (3 minutes 24.779 seconds)


.. _sphx_glr_download_auto_examples_plot_UNO_darcy.py:
0 comments on commit 420e0b5