diff --git a/dev/_downloads/07fcc19ba03226cd3d83d4e40ec44385/auto_examples_python.zip b/dev/_downloads/07fcc19ba03226cd3d83d4e40ec44385/auto_examples_python.zip
index f5cbab6..276f634 100644
Binary files a/dev/_downloads/07fcc19ba03226cd3d83d4e40ec44385/auto_examples_python.zip and b/dev/_downloads/07fcc19ba03226cd3d83d4e40ec44385/auto_examples_python.zip differ
diff --git a/dev/_downloads/0d78e075dd52a34e158d7f5f710dfe89/plot_incremental_FNO_darcy.zip b/dev/_downloads/0d78e075dd52a34e158d7f5f710dfe89/plot_incremental_FNO_darcy.zip
index 97451cf..6fb3e60 100644
Binary files a/dev/_downloads/0d78e075dd52a34e158d7f5f710dfe89/plot_incremental_FNO_darcy.zip and b/dev/_downloads/0d78e075dd52a34e158d7f5f710dfe89/plot_incremental_FNO_darcy.zip differ
diff --git a/dev/_downloads/20c43dd37baf603889c4dc23e93bdb60/plot_count_flops.zip b/dev/_downloads/20c43dd37baf603889c4dc23e93bdb60/plot_count_flops.zip
index 2773fbc..ccf5f21 100644
Binary files a/dev/_downloads/20c43dd37baf603889c4dc23e93bdb60/plot_count_flops.zip and b/dev/_downloads/20c43dd37baf603889c4dc23e93bdb60/plot_count_flops.zip differ
diff --git a/dev/_downloads/3864a2d85c7ce11adeac9580559229ab/plot_darcy_flow.zip b/dev/_downloads/3864a2d85c7ce11adeac9580559229ab/plot_darcy_flow.zip
index 62aa398..1fe66e2 100644
Binary files a/dev/_downloads/3864a2d85c7ce11adeac9580559229ab/plot_darcy_flow.zip and b/dev/_downloads/3864a2d85c7ce11adeac9580559229ab/plot_darcy_flow.zip differ
diff --git a/dev/_downloads/3faf9d2eaee5cc8e9f1c631c002ce544/plot_darcy_flow_spectrum.zip b/dev/_downloads/3faf9d2eaee5cc8e9f1c631c002ce544/plot_darcy_flow_spectrum.zip
index c0f8990..11986db 100644
Binary files a/dev/_downloads/3faf9d2eaee5cc8e9f1c631c002ce544/plot_darcy_flow_spectrum.zip and b/dev/_downloads/3faf9d2eaee5cc8e9f1c631c002ce544/plot_darcy_flow_spectrum.zip differ
diff --git a/dev/_downloads/4e1cba46bb51062a073424e1efbaad57/plot_DISCO_convolutions.zip b/dev/_downloads/4e1cba46bb51062a073424e1efbaad57/plot_DISCO_convolutions.zip
index 4d2a4fc..e0d110a 100644
Binary files a/dev/_downloads/4e1cba46bb51062a073424e1efbaad57/plot_DISCO_convolutions.zip and b/dev/_downloads/4e1cba46bb51062a073424e1efbaad57/plot_DISCO_convolutions.zip differ
diff --git a/dev/_downloads/5e60095ce99919773daa83384f767e02/plot_SFNO_swe.zip b/dev/_downloads/5e60095ce99919773daa83384f767e02/plot_SFNO_swe.zip
index 61d5b86..31a02ea 100644
Binary files a/dev/_downloads/5e60095ce99919773daa83384f767e02/plot_SFNO_swe.zip and b/dev/_downloads/5e60095ce99919773daa83384f767e02/plot_SFNO_swe.zip differ
diff --git a/dev/_downloads/645da00b8fbbb9bb5cae877fd0f31635/plot_FNO_darcy.zip b/dev/_downloads/645da00b8fbbb9bb5cae877fd0f31635/plot_FNO_darcy.zip
index 007e64e..20acecf 100644
Binary files a/dev/_downloads/645da00b8fbbb9bb5cae877fd0f31635/plot_FNO_darcy.zip and b/dev/_downloads/645da00b8fbbb9bb5cae877fd0f31635/plot_FNO_darcy.zip differ
diff --git a/dev/_downloads/6f1e7a639e0699d6164445b55e6c116d/auto_examples_jupyter.zip b/dev/_downloads/6f1e7a639e0699d6164445b55e6c116d/auto_examples_jupyter.zip
index d734c5b..78781fd 100644
Binary files a/dev/_downloads/6f1e7a639e0699d6164445b55e6c116d/auto_examples_jupyter.zip and b/dev/_downloads/6f1e7a639e0699d6164445b55e6c116d/auto_examples_jupyter.zip differ
diff --git a/dev/_downloads/7296405f6df7c2cfe184e9b258cee33e/checkpoint_FNO_darcy.zip b/dev/_downloads/7296405f6df7c2cfe184e9b258cee33e/checkpoint_FNO_darcy.zip
index cb4123d..1347728 100644
Binary files a/dev/_downloads/7296405f6df7c2cfe184e9b258cee33e/checkpoint_FNO_darcy.zip and b/dev/_downloads/7296405f6df7c2cfe184e9b258cee33e/checkpoint_FNO_darcy.zip differ
diff --git a/dev/_downloads/cefc537c5730a6b3e916b83c1fd313d6/plot_UNO_darcy.zip b/dev/_downloads/cefc537c5730a6b3e916b83c1fd313d6/plot_UNO_darcy.zip
index 8c67d2a..5790219 100644
Binary files a/dev/_downloads/cefc537c5730a6b3e916b83c1fd313d6/plot_UNO_darcy.zip and b/dev/_downloads/cefc537c5730a6b3e916b83c1fd313d6/plot_UNO_darcy.zip differ
diff --git a/dev/_images/sphx_glr_plot_SFNO_swe_001.png b/dev/_images/sphx_glr_plot_SFNO_swe_001.png
index 234d688..2c67ee8 100644
Binary files a/dev/_images/sphx_glr_plot_SFNO_swe_001.png and b/dev/_images/sphx_glr_plot_SFNO_swe_001.png differ
diff --git a/dev/_images/sphx_glr_plot_SFNO_swe_thumb.png b/dev/_images/sphx_glr_plot_SFNO_swe_thumb.png
index 69ac86b..20e8a79 100644
Binary files a/dev/_images/sphx_glr_plot_SFNO_swe_thumb.png and b/dev/_images/sphx_glr_plot_SFNO_swe_thumb.png differ
diff --git a/dev/_images/sphx_glr_plot_UNO_darcy_001.png b/dev/_images/sphx_glr_plot_UNO_darcy_001.png
index 8a27389..7e5409a 100644
Binary files a/dev/_images/sphx_glr_plot_UNO_darcy_001.png and b/dev/_images/sphx_glr_plot_UNO_darcy_001.png differ
diff --git a/dev/_images/sphx_glr_plot_UNO_darcy_thumb.png b/dev/_images/sphx_glr_plot_UNO_darcy_thumb.png
index 6e31866..dfb8914 100644
Binary files a/dev/_images/sphx_glr_plot_UNO_darcy_thumb.png and b/dev/_images/sphx_glr_plot_UNO_darcy_thumb.png differ
diff --git a/dev/_sources/auto_examples/plot_DISCO_convolutions.rst.txt b/dev/_sources/auto_examples/plot_DISCO_convolutions.rst.txt
index 0e6f3f0..68ad30a 100644
--- a/dev/_sources/auto_examples/plot_DISCO_convolutions.rst.txt
+++ b/dev/_sources/auto_examples/plot_DISCO_convolutions.rst.txt
@@ -357,7 +357,7 @@ in order to compute the convolved image, we need to first bring it into the righ
 .. code-block:: none

-    
+    
@@ -448,7 +448,7 @@ in order to compute the convolved image, we need to first bring it into the righ
 .. rst-class:: sphx-glr-timing

-   **Total running time of the script:** (0 minutes 36.353 seconds)
+   **Total running time of the script:** (0 minutes 36.596 seconds)

.. _sphx_glr_download_auto_examples_plot_DISCO_convolutions.py:
diff --git a/dev/_sources/auto_examples/plot_FNO_darcy.rst.txt b/dev/_sources/auto_examples/plot_FNO_darcy.rst.txt
index 4aa45fc..5d9eda8 100644
--- a/dev/_sources/auto_examples/plot_FNO_darcy.rst.txt
+++ b/dev/_sources/auto_examples/plot_FNO_darcy.rst.txt
@@ -248,13 +248,13 @@ Training the model
     )
     ### SCHEDULER ###
-    
+    
     ### LOSSES ###
-    * Train: 
+    * Train: 
-    * Test: {'h1': , 'l2': }
+    * Test: {'h1': , 'l2': }
@@ -317,16 +317,16 @@ Then train the model on our small Darcy-Flow dataset:
     Eval: 16_h1=0.2058, 16_l2=0.1716, 32_h1=0.3869, 32_l2=0.2037
     [6] time=2.54, avg_loss=0.1687, train_err=5.2734
     Eval: 16_h1=0.1864, 16_l2=0.1414, 32_h1=0.3874, 32_l2=0.1798
-    [9] time=2.55, avg_loss=0.1457, train_err=4.5546
+    [9] time=2.54, avg_loss=0.1457, train_err=4.5546
     Eval: 16_h1=0.1864, 16_l2=0.1451, 32_h1=0.4279, 32_l2=0.1923
-    [12] time=2.55, avg_loss=0.1348, train_err=4.2138
+    [12] time=2.53, avg_loss=0.1348, train_err=4.2138
     Eval: 16_h1=0.1892, 16_l2=0.1436, 32_h1=0.4446, 32_l2=0.1909
-    [15] time=2.55, avg_loss=0.1176, train_err=3.6743
+    [15] time=2.54, avg_loss=0.1176, train_err=3.6743
     Eval: 16_h1=0.1565, 16_l2=0.1118, 32_h1=0.3807, 32_l2=0.1519
     [18] time=2.54, avg_loss=0.0866, train_err=2.7047
     Eval: 16_h1=0.1576, 16_l2=0.1159, 32_h1=0.4055, 32_l2=0.1698
-    {'train_err': 2.8488178942352533, 'avg_loss': 0.0911621726155281, 'avg_lasso_loss': None, 'epoch_train_time': 2.5436050599998907}
+    {'train_err': 2.8488178942352533, 'avg_loss': 0.0911621726155281, 'avg_lasso_loss': None, 'epoch_train_time': 2.536003271000027}
@@ -476,7 +476,7 @@ are other ways to scale the outputs of the FNO to train a true super-resolution
 .. rst-class:: sphx-glr-timing

-   **Total running time of the script:** (0 minutes 52.263 seconds)
+   **Total running time of the script:** (0 minutes 52.108 seconds)

.. _sphx_glr_download_auto_examples_plot_FNO_darcy.py:
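Note on the hunk above: lines such as ``[9] time=2.54, avg_loss=0.1457, train_err=4.5546`` are emitted by neuralop's ``Trainer`` while fitting a small FNO on the Darcy-Flow sample data. A minimal sketch of that loop follows, for orientation only — the helper ``load_darcy_flow_small``, the exact import paths, and the hyperparameters are assumptions based on the library's published examples, not values pinned to this build.

.. code-block:: python

    import torch
    from neuralop.models import FNO
    from neuralop import Trainer
    from neuralop.losses import LpLoss, H1Loss                 # path varies by version (assumption)
    from neuralop.data.datasets import load_darcy_flow_small   # helper name is an assumption

    # Small Darcy-Flow dataset: 1000 training samples, eval at 16x16 and 32x32
    train_loader, test_loaders, data_processor = load_darcy_flow_small(
        n_train=1000, batch_size=32,
        test_resolutions=[16, 32], n_tests=[50, 50], test_batch_sizes=[32, 32],
    )

    model = FNO(n_modes=(16, 16), hidden_channels=32, in_channels=1, out_channels=1)
    optimizer = torch.optim.AdamW(model.parameters(), lr=8e-3, weight_decay=1e-4)
    scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=30)

    train_loss = H1Loss(d=2)                                   # the "* Train:" entry above
    eval_losses = {"h1": H1Loss(d=2), "l2": LpLoss(d=2, p=2)}  # the "* Test:" dict above

    trainer = Trainer(model=model, n_epochs=20, data_processor=data_processor, verbose=True)
    trainer.train(train_loader=train_loader, test_loaders=test_loaders,
                  optimizer=optimizer, scheduler=scheduler, regularizer=False,
                  training_loss=train_loss, eval_losses=eval_losses)

The per-epoch ``time=...`` and ``epoch_train_time`` fields are wall-clock measurements, which is why they churn on every docs rebuild even when the loss trajectory is unchanged.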
diff --git a/dev/_sources/auto_examples/plot_SFNO_swe.rst.txt b/dev/_sources/auto_examples/plot_SFNO_swe.rst.txt
index 26e3b33..6ee4c80 100644
--- a/dev/_sources/auto_examples/plot_SFNO_swe.rst.txt
+++ b/dev/_sources/auto_examples/plot_SFNO_swe.rst.txt
@@ -239,13 +239,13 @@ Creating the losses
     )
     ### SCHEDULER ###
-    
+    
     ### LOSSES ###
-    * Train: 
+    * Train: 
-    * Test: {'l2': }
+    * Test: {'l2': }
@@ -302,22 +302,22 @@ Train the model on the spherical SWE dataset
     Training on 200 samples
     Testing on [50, 50] samples         on resolutions [(32, 64), (64, 128)].
     Raw outputs of shape torch.Size([4, 3, 32, 64])
-    [0] time=3.51, avg_loss=2.5811, train_err=10.3245
-    Eval: (32, 64)_l2=1.7888, (64, 128)_l2=2.3886
-    [3] time=3.49, avg_loss=0.4420, train_err=1.7682
-    Eval: (32, 64)_l2=0.4180, (64, 128)_l2=2.5558
-    [6] time=3.46, avg_loss=0.2738, train_err=1.0950
-    Eval: (32, 64)_l2=0.3758, (64, 128)_l2=2.4626
-    [9] time=3.47, avg_loss=0.2376, train_err=0.9506
-    Eval: (32, 64)_l2=0.2817, (64, 128)_l2=2.4057
-    [12] time=3.50, avg_loss=0.2004, train_err=0.8015
-    Eval: (32, 64)_l2=0.2895, (64, 128)_l2=2.3442
-    [15] time=3.46, avg_loss=0.1736, train_err=0.6945
-    Eval: (32, 64)_l2=0.2295, (64, 128)_l2=2.3143
-    [18] time=3.46, avg_loss=0.1513, train_err=0.6052
-    Eval: (32, 64)_l2=0.2300, (64, 128)_l2=2.2962
-
-    {'train_err': 0.601245778799057, 'avg_loss': 0.15031144469976426, 'avg_lasso_loss': None, 'epoch_train_time': 3.4630227890000356}
+    [0] time=3.62, avg_loss=2.5873, train_err=10.3491
+    Eval: (32, 64)_l2=1.6625, (64, 128)_l2=2.3805
+    [3] time=3.59, avg_loss=0.4299, train_err=1.7195
+    Eval: (32, 64)_l2=0.3705, (64, 128)_l2=2.3807
+    [6] time=3.56, avg_loss=0.2634, train_err=1.0535
+    Eval: (32, 64)_l2=0.3257, (64, 128)_l2=2.3691
+    [9] time=3.56, avg_loss=0.2119, train_err=0.8477
+    Eval: (32, 64)_l2=0.2609, (64, 128)_l2=2.3360
+    [12] time=3.55, avg_loss=0.1822, train_err=0.7286
+    Eval: (32, 64)_l2=0.2307, (64, 128)_l2=2.3128
+    [15] time=3.56, avg_loss=0.1581, train_err=0.6322
+    Eval: (32, 64)_l2=0.1898, (64, 128)_l2=2.3094
+    [18] time=3.53, avg_loss=0.1431, train_err=0.5725
+    Eval: (32, 64)_l2=0.1784, (64, 128)_l2=2.3245
+
+    {'train_err': 0.5559453934431076, 'avg_loss': 0.1389863483607769, 'avg_lasso_loss': None, 'epoch_train_time': 3.525268703999984}
@@ -388,7 +388,7 @@ In practice we would train a Neural Operator on one or multiple GPUs
 .. rst-class:: sphx-glr-timing

-   **Total running time of the script:** (1 minutes 25.195 seconds)
+   **Total running time of the script:** (1 minutes 27.116 seconds)

.. _sphx_glr_download_auto_examples_plot_SFNO_swe.py:
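The spherical shallow-water run above follows the same Trainer pattern, swapping the FNO for a spherical FNO that convolves on the sphere rather than the torus. A construction-only sketch — ``SFNO`` lives in ``neuralop.models`` in recent releases, and the channel and mode counts below are illustrative assumptions:

.. code-block:: python

    import torch
    from neuralop.models import SFNO

    # Three input/output channels on a 32x64 equiangular grid,
    # matching the shapes in the log above (counts are assumptions).
    model = SFNO(n_modes=(32, 64), in_channels=3, out_channels=3, hidden_channels=32)

    x = torch.randn(4, 3, 32, 64)  # cf. "Raw outputs of shape torch.Size([4, 3, 32, 64])"
    print(model(x).shape)          # torch.Size([4, 3, 32, 64])

The stubbornly high ``(64, 128)_l2`` column in the log reflects zero-shot evaluation at a finer resolution than the 32x64 training grid.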
diff --git a/dev/_sources/auto_examples/plot_UNO_darcy.rst.txt b/dev/_sources/auto_examples/plot_UNO_darcy.rst.txt
index c21f191..487850f 100644
--- a/dev/_sources/auto_examples/plot_UNO_darcy.rst.txt
+++ b/dev/_sources/auto_examples/plot_UNO_darcy.rst.txt
@@ -345,13 +345,13 @@ Creating the losses
     )
     ### SCHEDULER ###
-    
+    
     ### LOSSES ###
-    * Train: 
+    * Train: 
-    * Test: {'h1': , 'l2': }
+    * Test: {'h1': , 'l2': }
@@ -410,22 +410,22 @@ Actually train the model on our small Darcy-Flow dataset
     Training on 1000 samples
     Testing on [50, 50] samples         on resolutions [16, 32].
     Raw outputs of shape torch.Size([32, 1, 16, 16])
-    [0] time=10.17, avg_loss=0.6441, train_err=20.1282
-    Eval: 16_h1=0.3095, 16_l2=0.2449, 32_h1=0.7229, 32_l2=0.5358
-    [3] time=10.14, avg_loss=0.2428, train_err=7.5868
-    Eval: 16_h1=0.2699, 16_l2=0.2214, 32_h1=0.7097, 32_l2=0.5759
-    [6] time=10.13, avg_loss=0.2352, train_err=7.3495
-    Eval: 16_h1=0.2115, 16_l2=0.1576, 32_h1=0.7179, 32_l2=0.5758
-    [9] time=10.13, avg_loss=0.2030, train_err=6.3450
-    Eval: 16_h1=0.2484, 16_l2=0.1970, 32_h1=0.6780, 32_l2=0.5503
-    [12] time=10.10, avg_loss=0.1893, train_err=5.9155
-    Eval: 16_h1=0.2512, 16_l2=0.1867, 32_h1=0.6899, 32_l2=0.5352
-    [15] time=10.07, avg_loss=0.1575, train_err=4.9215
-    Eval: 16_h1=0.1997, 16_l2=0.1529, 32_h1=0.6739, 32_l2=0.5376
-    [18] time=10.10, avg_loss=0.1334, train_err=4.1696
-    Eval: 16_h1=0.1973, 16_l2=0.1490, 32_h1=0.6727, 32_l2=0.5054
-
-    {'train_err': 4.134030694141984, 'avg_loss': 0.13228898221254348, 'avg_lasso_loss': None, 'epoch_train_time': 10.076509817999977}
+    [0] time=10.16, avg_loss=0.6551, train_err=20.4733
+    Eval: 16_h1=0.3185, 16_l2=0.2467, 32_h1=0.8165, 32_l2=0.6623
+    [3] time=10.08, avg_loss=0.2450, train_err=7.6547
+    Eval: 16_h1=0.2639, 16_l2=0.2082, 32_h1=0.7555, 32_l2=0.6193
+    [6] time=10.11, avg_loss=0.2239, train_err=6.9981
+    Eval: 16_h1=0.2054, 16_l2=0.1505, 32_h1=0.7867, 32_l2=0.6405
+    [9] time=10.19, avg_loss=0.2389, train_err=7.4641
+    Eval: 16_h1=0.2084, 16_l2=0.1527, 32_h1=0.7668, 32_l2=0.6469
+    [12] time=10.10, avg_loss=0.1708, train_err=5.3378
+    Eval: 16_h1=0.2286, 16_l2=0.1798, 32_h1=0.7274, 32_l2=0.5666
+    [15] time=10.11, avg_loss=0.1493, train_err=4.6665
+    Eval: 16_h1=0.2102, 16_l2=0.1550, 32_h1=0.7123, 32_l2=0.5348
+    [18] time=10.16, avg_loss=0.1667, train_err=5.2080
+    Eval: 16_h1=0.1794, 16_l2=0.1285, 32_h1=0.6974, 32_l2=0.5106
+
+    {'train_err': 4.332731641829014, 'avg_loss': 0.13864741253852844, 'avg_lasso_loss': None, 'epoch_train_time': 10.153018183000086}
@@ -499,7 +499,7 @@ In practice we would train a Neural Operator on one or multiple GPUs
 .. rst-class:: sphx-glr-timing

-   **Total running time of the script:** (3 minutes 25.421 seconds)
+   **Total running time of the script:** (3 minutes 26.027 seconds)

.. _sphx_glr_download_auto_examples_plot_UNO_darcy.py:
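The UNO run above uses the same training harness; what changes is the model, a U-shaped neural operator whose width, Fourier modes, and resolution scaling are set per layer. A construction sketch with values adapted loosely from the library's ``plot_UNO_darcy`` example — treat every argument name and number here as an assumption:

.. code-block:: python

    from neuralop.models import UNO

    # Five operator layers in a U shape: the middle of the network contracts
    # the resolution (scaling 0.5) and later layers expand it back (scaling 2).
    model = UNO(
        in_channels=1, out_channels=1,
        hidden_channels=64, projection_channels=64,
        uno_out_channels=[32, 64, 64, 64, 32],
        uno_n_modes=[[16, 16], [8, 8], [8, 8], [8, 8], [16, 16]],
        uno_scalings=[[1.0, 1.0], [0.5, 0.5], [1.0, 1.0], [2.0, 2.0], [1.0, 1.0]],
        n_layers=5,
    )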
diff --git a/dev/_sources/auto_examples/plot_count_flops.rst.txt b/dev/_sources/auto_examples/plot_count_flops.rst.txt
index 25bb451..da27595 100644
--- a/dev/_sources/auto_examples/plot_count_flops.rst.txt
+++ b/dev/_sources/auto_examples/plot_count_flops.rst.txt
@@ -80,7 +80,7 @@ This output is organized as a defaultdict object that counts the FLOPS used in e
 .. code-block:: none

-    defaultdict(. at 0x7f0e7769c310>, {'': defaultdict(, {'convolution.default': 2982150144, 'bmm.default': 138412032}), 'lifting': defaultdict(, {'convolution.default': 562036736}), 'lifting.fcs.0': defaultdict(, {'convolution.default': 25165824}), 'lifting.fcs.1': defaultdict(, {'convolution.default': 536870912}), 'fno_blocks': defaultdict(, {'convolution.default': 2147483648, 'bmm.default': 138412032}), 'fno_blocks.fno_skips.0': defaultdict(, {'convolution.default': 268435456}), 'fno_blocks.fno_skips.0.conv': defaultdict(, {'convolution.default': 268435456}), 'fno_blocks.convs.0': defaultdict(, {'bmm.default': 34603008}), 'fno_blocks.channel_mlp.0': defaultdict(, {'convolution.default': 268435456}), 'fno_blocks.channel_mlp.0.fcs.0': defaultdict(, {'convolution.default': 134217728}), 'fno_blocks.channel_mlp.0.fcs.1': defaultdict(, {'convolution.default': 134217728}), 'fno_blocks.fno_skips.1': defaultdict(, {'convolution.default': 268435456}), 'fno_blocks.fno_skips.1.conv': defaultdict(, {'convolution.default': 268435456}), 'fno_blocks.convs.1': defaultdict(, {'bmm.default': 34603008}), 'fno_blocks.channel_mlp.1': defaultdict(, {'convolution.default': 268435456}), 'fno_blocks.channel_mlp.1.fcs.0': defaultdict(, {'convolution.default': 134217728}), 'fno_blocks.channel_mlp.1.fcs.1': defaultdict(, {'convolution.default': 134217728}), 'fno_blocks.fno_skips.2': defaultdict(, {'convolution.default': 268435456}), 'fno_blocks.fno_skips.2.conv': defaultdict(, {'convolution.default': 268435456}), 'fno_blocks.convs.2': defaultdict(, {'bmm.default': 34603008}), 'fno_blocks.channel_mlp.2': defaultdict(, {'convolution.default': 268435456}), 'fno_blocks.channel_mlp.2.fcs.0': defaultdict(, {'convolution.default': 134217728}), 'fno_blocks.channel_mlp.2.fcs.1': defaultdict(, {'convolution.default': 134217728}), 'fno_blocks.fno_skips.3': defaultdict(, {'convolution.default': 268435456}), 'fno_blocks.fno_skips.3.conv': defaultdict(, {'convolution.default': 268435456}), 'fno_blocks.convs.3': defaultdict(, {'bmm.default': 34603008}), 'fno_blocks.channel_mlp.3': defaultdict(, {'convolution.default': 268435456}), 'fno_blocks.channel_mlp.3.fcs.0': defaultdict(, {'convolution.default': 134217728}), 'fno_blocks.channel_mlp.3.fcs.1': defaultdict(, {'convolution.default': 134217728}), 'projection': defaultdict(, {'convolution.default': 272629760}), 'projection.fcs.0': defaultdict(, {'convolution.default': 268435456}), 'projection.fcs.1': defaultdict(, {'convolution.default': 4194304})})
+    defaultdict(. at 0x7f8cf1695310>, {'': defaultdict(, {'convolution.default': 2982150144, 'bmm.default': 138412032}), 'lifting': defaultdict(, {'convolution.default': 562036736}), 'lifting.fcs.0': defaultdict(, {'convolution.default': 25165824}), 'lifting.fcs.1': defaultdict(, {'convolution.default': 536870912}), 'fno_blocks': defaultdict(, {'convolution.default': 2147483648, 'bmm.default': 138412032}), 'fno_blocks.fno_skips.0': defaultdict(, {'convolution.default': 268435456}), 'fno_blocks.fno_skips.0.conv': defaultdict(, {'convolution.default': 268435456}), 'fno_blocks.convs.0': defaultdict(, {'bmm.default': 34603008}), 'fno_blocks.channel_mlp.0': defaultdict(, {'convolution.default': 268435456}), 'fno_blocks.channel_mlp.0.fcs.0': defaultdict(, {'convolution.default': 134217728}), 'fno_blocks.channel_mlp.0.fcs.1': defaultdict(, {'convolution.default': 134217728}), 'fno_blocks.fno_skips.1': defaultdict(, {'convolution.default': 268435456}), 'fno_blocks.fno_skips.1.conv': defaultdict(, {'convolution.default': 268435456}), 'fno_blocks.convs.1': defaultdict(, {'bmm.default': 34603008}), 'fno_blocks.channel_mlp.1': defaultdict(, {'convolution.default': 268435456}), 'fno_blocks.channel_mlp.1.fcs.0': defaultdict(, {'convolution.default': 134217728}), 'fno_blocks.channel_mlp.1.fcs.1': defaultdict(, {'convolution.default': 134217728}), 'fno_blocks.fno_skips.2': defaultdict(, {'convolution.default': 268435456}), 'fno_blocks.fno_skips.2.conv': defaultdict(, {'convolution.default': 268435456}), 'fno_blocks.convs.2': defaultdict(, {'bmm.default': 34603008}), 'fno_blocks.channel_mlp.2': defaultdict(, {'convolution.default': 268435456}), 'fno_blocks.channel_mlp.2.fcs.0': defaultdict(, {'convolution.default': 134217728}), 'fno_blocks.channel_mlp.2.fcs.1': defaultdict(, {'convolution.default': 134217728}), 'fno_blocks.fno_skips.3': defaultdict(, {'convolution.default': 268435456}), 'fno_blocks.fno_skips.3.conv': defaultdict(, {'convolution.default': 268435456}), 'fno_blocks.convs.3': defaultdict(, {'bmm.default': 34603008}), 'fno_blocks.channel_mlp.3': defaultdict(, {'convolution.default': 268435456}), 'fno_blocks.channel_mlp.3.fcs.0': defaultdict(, {'convolution.default': 134217728}), 'fno_blocks.channel_mlp.3.fcs.1': defaultdict(, {'convolution.default': 134217728}), 'projection': defaultdict(, {'convolution.default': 272629760}), 'projection.fcs.0': defaultdict(, {'convolution.default': 268435456}), 'projection.fcs.1': defaultdict(, {'convolution.default': 4194304})})
@@ -125,7 +125,7 @@ To check the maximum FLOPS used during the forward pass, let's create a recursiv
 .. rst-class:: sphx-glr-timing

-   **Total running time of the script:** (0 minutes 3.686 seconds)
+   **Total running time of the script:** (0 minutes 3.795 seconds)

.. _sphx_glr_download_auto_examples_plot_count_flops.py:
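The nested ``defaultdict`` in the hunk above — FLOP counts keyed first by submodule (``lifting``, ``fno_blocks``, ``projection``, ...) and then by op (``convolution.default``, ``bmm.default``) — is the structure PyTorch's ``FlopCounterMode`` returns; only the lambda's memory address differs between builds, which is why the whole line shows as changed. A sketch of how such counts are gathered; the FNO sizes here are assumptions:

.. code-block:: python

    import torch
    from torch.utils.flop_counter import FlopCounterMode
    from neuralop.models import FNO

    model = FNO(n_modes=(64, 64), hidden_channels=64, in_channels=1, out_channels=1)
    x = torch.randn(1, 1, 128, 128)

    # FlopCounterMode intercepts aten ops during the forward pass and
    # accumulates per-module counts; display=False suppresses the summary table.
    with FlopCounterMode(display=False) as flop_count:
        model(x)

    print(flop_count.get_total_flops())  # single integer total
    print(flop_count.get_flop_counts())  # module name -> {op: count}, as printed above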
diff --git a/dev/_sources/auto_examples/plot_darcy_flow.rst.txt b/dev/_sources/auto_examples/plot_darcy_flow.rst.txt
index 39afcdd..bae8f3d 100644
--- a/dev/_sources/auto_examples/plot_darcy_flow.rst.txt
+++ b/dev/_sources/auto_examples/plot_darcy_flow.rst.txt
@@ -163,7 +163,7 @@ Visualizing the data
 .. rst-class:: sphx-glr-timing

-   **Total running time of the script:** (0 minutes 0.399 seconds)
+   **Total running time of the script:** (0 minutes 0.406 seconds)

.. _sphx_glr_download_auto_examples_plot_darcy_flow.py:
diff --git a/dev/_sources/auto_examples/plot_darcy_flow_spectrum.rst.txt b/dev/_sources/auto_examples/plot_darcy_flow_spectrum.rst.txt
index 49cbd73..b062f57 100644
--- a/dev/_sources/auto_examples/plot_darcy_flow_spectrum.rst.txt
+++ b/dev/_sources/auto_examples/plot_darcy_flow_spectrum.rst.txt
@@ -219,7 +219,7 @@ Loading the Navier-Stokes dataset in 128x128 resolution
 .. rst-class:: sphx-glr-timing

-   **Total running time of the script:** (0 minutes 0.276 seconds)
+   **Total running time of the script:** (0 minutes 0.278 seconds)

.. _sphx_glr_download_auto_examples_plot_darcy_flow_spectrum.py:
diff --git a/dev/_sources/auto_examples/plot_incremental_FNO_darcy.rst.txt b/dev/_sources/auto_examples/plot_incremental_FNO_darcy.rst.txt
index 302ec3c..9670c70 100644
--- a/dev/_sources/auto_examples/plot_incremental_FNO_darcy.rst.txt
+++ b/dev/_sources/auto_examples/plot_incremental_FNO_darcy.rst.txt
@@ -240,15 +240,15 @@ Set up the losses
     )
     ### SCHEDULER ###
-    
+    
     ### LOSSES ###
     ### INCREMENTAL RESOLUTION + GRADIENT EXPLAINED ###
-    * Train: 
+    * Train: 
-    * Test: {'h1': , 'l2': }
+    * Test: {'h1': , 'l2': }
@@ -329,13 +329,13 @@ Train the model
     Training on 100 samples
     Testing on [50, 50] samples         on resolutions [16, 32].
     Raw outputs of shape torch.Size([16, 1, 8, 8])
-    [0] time=0.24, avg_loss=0.8115, train_err=11.5929
+    [0] time=0.25, avg_loss=0.8115, train_err=11.5929
     Eval: 16_h1=0.7332, 16_l2=0.5739, 32_h1=0.7863, 32_l2=0.5720
-    [1] time=0.23, avg_loss=0.6661, train_err=9.5159
+    [1] time=0.22, avg_loss=0.6661, train_err=9.5159
     Eval: 16_h1=0.8889, 16_l2=0.7005, 32_h1=1.1195, 32_l2=0.7407
     [2] time=0.22, avg_loss=0.6497, train_err=9.2815
     Eval: 16_h1=0.6372, 16_l2=0.4883, 32_h1=0.6967, 32_l2=0.5000
-    [3] time=0.22, avg_loss=0.5559, train_err=7.9411
+    [3] time=0.23, avg_loss=0.5559, train_err=7.9411
     Eval: 16_h1=0.6112, 16_l2=0.4348, 32_h1=0.7432, 32_l2=0.4530
     [4] time=0.22, avg_loss=0.4852, train_err=6.9312
     Eval: 16_h1=0.5762, 16_l2=0.4037, 32_h1=0.7138, 32_l2=0.4262
@@ -352,9 +352,9 @@ Train the model
     Incre Res Update: change index to 1
     Incre Res Update: change sub to 1
     Incre Res Update: change res to 16
-    [10] time=0.28, avg_loss=0.4253, train_err=6.0757
+    [10] time=0.29, avg_loss=0.4253, train_err=6.0757
     Eval: 16_h1=0.4069, 16_l2=0.2959, 32_h1=0.4904, 32_l2=0.2928
-    [11] time=0.27, avg_loss=0.3745, train_err=5.3500
+    [11] time=0.28, avg_loss=0.3745, train_err=5.3500
     Eval: 16_h1=0.3820, 16_l2=0.2869, 32_h1=0.4769, 32_l2=0.3026
     [12] time=0.28, avg_loss=0.3405, train_err=4.8636
     Eval: 16_h1=0.3404, 16_l2=0.2598, 32_h1=0.4410, 32_l2=0.2731
@@ -362,18 +362,18 @@ Train the model
     Eval: 16_h1=0.3231, 16_l2=0.2452, 32_h1=0.4245, 32_l2=0.2586
     [14] time=0.28, avg_loss=0.2896, train_err=4.1368
     Eval: 16_h1=0.3130, 16_l2=0.2380, 32_h1=0.4161, 32_l2=0.2522
-    [15] time=0.28, avg_loss=0.2789, train_err=3.9843
+    [15] time=0.29, avg_loss=0.2789, train_err=3.9843
     Eval: 16_h1=0.3072, 16_l2=0.2324, 32_h1=0.4151, 32_l2=0.2455
     [16] time=0.28, avg_loss=0.2690, train_err=3.8434
     Eval: 16_h1=0.3042, 16_l2=0.2305, 32_h1=0.4100, 32_l2=0.2425
     [17] time=0.28, avg_loss=0.2637, train_err=3.7674
     Eval: 16_h1=0.2954, 16_l2=0.2229, 32_h1=0.4023, 32_l2=0.2354
-    [18] time=0.28, avg_loss=0.2557, train_err=3.6533
+    [18] time=0.29, avg_loss=0.2557, train_err=3.6533
     Eval: 16_h1=0.2756, 16_l2=0.2105, 32_h1=0.3780, 32_l2=0.2269
     [19] time=0.28, avg_loss=0.2395, train_err=3.4208
     Eval: 16_h1=0.2735, 16_l2=0.2106, 32_h1=0.3738, 32_l2=0.2303
-    {'train_err': 3.4207682268960133, 'avg_loss': 0.23945377588272096, 'avg_lasso_loss': None, 'epoch_train_time': 0.2815485109999827, '16_h1': tensor(0.2735), '16_l2': tensor(0.2106), '32_h1': tensor(0.3738), '32_l2': tensor(0.2303)}
+    {'train_err': 3.4207682268960133, 'avg_loss': 0.23945377588272096, 'avg_lasso_loss': None, 'epoch_train_time': 0.2821420150000904, '16_h1': tensor(0.2735), '16_l2': tensor(0.2106), '32_h1': tensor(0.3738), '32_l2': tensor(0.2303)}
@@ -447,7 +447,7 @@ In practice we would train a Neural Operator on one or multiple GPUs
 .. rst-class:: sphx-glr-timing

-   **Total running time of the script:** (0 minutes 7.019 seconds)
+   **Total running time of the script:** (0 minutes 7.106 seconds)

.. _sphx_glr_download_auto_examples_plot_incremental_FNO_darcy.py:
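The ``Incre Res Update`` lines in the incremental-FNO hunk above mark where training steps up from the initial 8x8 inputs toward the full 16x16 grid. The library wraps this in a dedicated trainer; a rough sketch follows, with the class location and every flag name being assumptions that vary across neuraloperator versions:

.. code-block:: python

    from neuralop.models import FNO
    from neuralop.training import IncrementalFNOTrainer  # location is version-dependent (assumption)

    model = FNO(n_modes=(16, 16), hidden_channels=32, in_channels=1, out_channels=1)

    # Starts on sub-sampled inputs and raises resolution on a schedule
    # (the "Incre Res Update: change res to 16" lines); flag names assumed.
    trainer = IncrementalFNOTrainer(
        model=model, n_epochs=20,
        incremental_grad=True,        # grow Fourier modes via gradient-explained ratio
        incremental_resolution=True,  # grow input resolution during training
    )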
diff --git a/dev/_sources/auto_examples/sg_execution_times.rst.txt b/dev/_sources/auto_examples/sg_execution_times.rst.txt
index 1c4a565..8b27e7a 100644
--- a/dev/_sources/auto_examples/sg_execution_times.rst.txt
+++ b/dev/_sources/auto_examples/sg_execution_times.rst.txt
@@ -6,7 +6,7 @@ Computation times
 =================
 
-**06:30.613** total execution time for 9 files **from auto_examples**:
+**06:33.431** total execution time for 9 files **from auto_examples**:
 
 .. container::
@@ -33,28 +33,28 @@ Computation times
      - Time
      - Mem (MB)
    * - :ref:`sphx_glr_auto_examples_plot_UNO_darcy.py` (``plot_UNO_darcy.py``)
-     - 03:25.421
+     - 03:26.027
      - 0.0
    * - :ref:`sphx_glr_auto_examples_plot_SFNO_swe.py` (``plot_SFNO_swe.py``)
-     - 01:25.195
+     - 01:27.116
      - 0.0
    * - :ref:`sphx_glr_auto_examples_plot_FNO_darcy.py` (``plot_FNO_darcy.py``)
-     - 00:52.263
+     - 00:52.108
      - 0.0
    * - :ref:`sphx_glr_auto_examples_plot_DISCO_convolutions.py` (``plot_DISCO_convolutions.py``)
-     - 00:36.353
+     - 00:36.596
      - 0.0
    * - :ref:`sphx_glr_auto_examples_plot_incremental_FNO_darcy.py` (``plot_incremental_FNO_darcy.py``)
-     - 00:07.019
+     - 00:07.106
      - 0.0
    * - :ref:`sphx_glr_auto_examples_plot_count_flops.py` (``plot_count_flops.py``)
-     - 00:03.686
+     - 00:03.795
      - 0.0
    * - :ref:`sphx_glr_auto_examples_plot_darcy_flow.py` (``plot_darcy_flow.py``)
-     - 00:00.399
+     - 00:00.406
      - 0.0
    * - :ref:`sphx_glr_auto_examples_plot_darcy_flow_spectrum.py` (``plot_darcy_flow_spectrum.py``)
-     - 00:00.276
+     - 00:00.278
      - 0.0
    * - :ref:`sphx_glr_auto_examples_checkpoint_FNO_darcy.py` (``checkpoint_FNO_darcy.py``)
      - 00:00.000
diff --git a/dev/_sources/sg_execution_times.rst.txt b/dev/_sources/sg_execution_times.rst.txt
index b927afa..0c9cf3b 100644
--- a/dev/_sources/sg_execution_times.rst.txt
+++ b/dev/_sources/sg_execution_times.rst.txt
@@ -6,7 +6,7 @@ Computation times
 =================
 
-**06:30.613** total execution time for 9 files **from all galleries**:
+**06:33.431** total execution time for 9 files **from all galleries**:
 
 .. container::
@@ -33,28 +33,28 @@ Computation times
      - Time
      - Mem (MB)
    * - :ref:`sphx_glr_auto_examples_plot_UNO_darcy.py` (``../../examples/plot_UNO_darcy.py``)
-     - 03:25.421
+     - 03:26.027
      - 0.0
    * - :ref:`sphx_glr_auto_examples_plot_SFNO_swe.py` (``../../examples/plot_SFNO_swe.py``)
-     - 01:25.195
+     - 01:27.116
      - 0.0
    * - :ref:`sphx_glr_auto_examples_plot_FNO_darcy.py` (``../../examples/plot_FNO_darcy.py``)
-     - 00:52.263
+     - 00:52.108
      - 0.0
    * - :ref:`sphx_glr_auto_examples_plot_DISCO_convolutions.py` (``../../examples/plot_DISCO_convolutions.py``)
-     - 00:36.353
+     - 00:36.596
      - 0.0
    * - :ref:`sphx_glr_auto_examples_plot_incremental_FNO_darcy.py` (``../../examples/plot_incremental_FNO_darcy.py``)
-     - 00:07.019
+     - 00:07.106
      - 0.0
    * - :ref:`sphx_glr_auto_examples_plot_count_flops.py` (``../../examples/plot_count_flops.py``)
-     - 00:03.686
+     - 00:03.795
      - 0.0
    * - :ref:`sphx_glr_auto_examples_plot_darcy_flow.py` (``../../examples/plot_darcy_flow.py``)
-     - 00:00.399
+     - 00:00.406
      - 0.0
    * - :ref:`sphx_glr_auto_examples_plot_darcy_flow_spectrum.py` (``../../examples/plot_darcy_flow_spectrum.py``)
-     - 00:00.276
+     - 00:00.278
      - 0.0
    * - :ref:`sphx_glr_auto_examples_checkpoint_FNO_darcy.py` (``../../examples/checkpoint_FNO_darcy.py``)
      - 00:00.000
diff --git a/dev/auto_examples/plot_DISCO_convolutions.html b/dev/auto_examples/plot_DISCO_convolutions.html
index 6ec61d6..1e0bc50 100644
--- a/dev/auto_examples/plot_DISCO_convolutions.html
+++ b/dev/auto_examples/plot_DISCO_convolutions.html
@@ -330,7 +330,7 @@
 # plt.show()
-plot DISCO convolutions
-<matplotlib.colorbar.Colorbar object at 0x7f0e461dc7f0>
+plot DISCO convolutions
+<matplotlib.colorbar.Colorbar object at 0x7f8cc01feb50>
convt = DiscreteContinuousConvTranspose2d(1, 1, grid_in=grid_out, grid_out=grid_in, quadrature_weights=q_out, kernel_shape=[2,4], radius_cutoff=3/nyo, periodic=False).float()
@@ -381,7 +381,7 @@
 plot DISCO convolutions
torch.Size([1, 1, 120, 90])
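For context on the ``convt = DiscreteContinuousConvTranspose2d(...)`` context line above: the transpose convolution maps a signal on the coarse output grid back to the fine input grid, mirroring the forward DISCO convolution by swapping ``grid_in`` and ``grid_out``. A construction-only sketch — the import path, the ``(2, n_points)`` grid layout with matching quadrature weights, and the grid sizes are all assumptions patterned on the example source:

.. code-block:: python

    import torch
    # import path is an assumption; equivalent layers also ship in torch-harmonics
    from neuralop.layers.discrete_continuous_convolution import (
        DiscreteContinuousConv2d,
        DiscreteContinuousConvTranspose2d,
    )

    def equidistant_grid(nx, ny):
        """Flattened equidistant grid on [0, 1]^2 with uniform quadrature weights."""
        xx, yy = torch.meshgrid(
            torch.linspace(0, 1, nx), torch.linspace(0, 1, ny), indexing="ij"
        )
        grid = torch.stack([xx.reshape(-1), yy.reshape(-1)])  # (2, nx*ny)
        q = torch.full((nx * ny,), 1.0 / (nx * ny))           # (nx*ny,)
        return grid, q

    nxi, nyi = 120, 90   # fine grid, cf. torch.Size([1, 1, 120, 90]) above
    nxo, nyo = 60, 45    # hypothetical coarse grid
    grid_in, q_in = equidistant_grid(nxi, nyi)
    grid_out, q_out = equidistant_grid(nxo, nyo)

    # forward DISCO conv: fine grid -> coarse grid
    conv = DiscreteContinuousConv2d(
        1, 1, grid_in=grid_in, grid_out=grid_out, quadrature_weights=q_in,
        kernel_shape=[2, 4], radius_cutoff=3 / nyo, periodic=False,
    ).float()
    # transpose conv: coarse grid -> fine grid, mirroring the call shown above
    convt = DiscreteContinuousConvTranspose2d(
        1, 1, grid_in=grid_out, grid_out=grid_in, quadrature_weights=q_out,
        kernel_shape=[2, 4], radius_cutoff=3 / nyo, periodic=False,
    ).float()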
 
-Total running time of the script: (0 minutes 36.353 seconds)
+Total running time of the script: (0 minutes 36.596 seconds)
diff --git a/dev/auto_examples/plot_SFNO_swe.html b/dev/auto_examples/plot_SFNO_swe.html
--- a/dev/auto_examples/plot_SFNO_swe.html
+++ b/dev/auto_examples/plot_SFNO_swe.html
@@ -329,22 +329,22 @@ Create the trainer
Training on 200 samples
 Testing on [50, 50] samples         on resolutions [(32, 64), (64, 128)].
 Raw outputs of shape torch.Size([4, 3, 32, 64])
-[0] time=3.51, avg_loss=2.5811, train_err=10.3245
-Eval: (32, 64)_l2=1.7888, (64, 128)_l2=2.3886
-[3] time=3.49, avg_loss=0.4420, train_err=1.7682
-Eval: (32, 64)_l2=0.4180, (64, 128)_l2=2.5558
-[6] time=3.46, avg_loss=0.2738, train_err=1.0950
-Eval: (32, 64)_l2=0.3758, (64, 128)_l2=2.4626
-[9] time=3.47, avg_loss=0.2376, train_err=0.9506
-Eval: (32, 64)_l2=0.2817, (64, 128)_l2=2.4057
-[12] time=3.50, avg_loss=0.2004, train_err=0.8015
-Eval: (32, 64)_l2=0.2895, (64, 128)_l2=2.3442
-[15] time=3.46, avg_loss=0.1736, train_err=0.6945
-Eval: (32, 64)_l2=0.2295, (64, 128)_l2=2.3143
-[18] time=3.46, avg_loss=0.1513, train_err=0.6052
-Eval: (32, 64)_l2=0.2300, (64, 128)_l2=2.2962
-
-{'train_err': 0.601245778799057, 'avg_loss': 0.15031144469976426, 'avg_lasso_loss': None, 'epoch_train_time': 3.4630227890000356}
+[0] time=3.62, avg_loss=2.5873, train_err=10.3491
+Eval: (32, 64)_l2=1.6625, (64, 128)_l2=2.3805
+[3] time=3.59, avg_loss=0.4299, train_err=1.7195
+Eval: (32, 64)_l2=0.3705, (64, 128)_l2=2.3807
+[6] time=3.56, avg_loss=0.2634, train_err=1.0535
+Eval: (32, 64)_l2=0.3257, (64, 128)_l2=2.3691
+[9] time=3.56, avg_loss=0.2119, train_err=0.8477
+Eval: (32, 64)_l2=0.2609, (64, 128)_l2=2.3360
+[12] time=3.55, avg_loss=0.1822, train_err=0.7286
+Eval: (32, 64)_l2=0.2307, (64, 128)_l2=2.3128
+[15] time=3.56, avg_loss=0.1581, train_err=0.6322
+Eval: (32, 64)_l2=0.1898, (64, 128)_l2=2.3094
+[18] time=3.53, avg_loss=0.1431, train_err=0.5725
+Eval: (32, 64)_l2=0.1784, (64, 128)_l2=2.3245
+
+{'train_err': 0.5559453934431076, 'avg_loss': 0.1389863483607769, 'avg_lasso_loss': None, 'epoch_train_time': 3.525268703999984}
 

@@ -391,7 +391,7 @@ Plot the prediction, and compare with the ground-truth
 fig.show()
-Inputs, ground-truth output and prediction., Input x (32, 64), Ground-truth y, Model prediction, Input x (64, 128), Ground-truth y, Model prediction
-Total running time of the script: (1 minutes 25.195 seconds)
+Inputs, ground-truth output and prediction., Input x (32, 64), Ground-truth y, Model prediction, Input x (64, 128), Ground-truth y, Model prediction
+Total running time of the script: (1 minutes 27.116 seconds)
diff --git a/dev/auto_examples/plot_UNO_darcy.html b/dev/auto_examples/plot_UNO_darcy.html
--- a/dev/auto_examples/plot_UNO_darcy.html
+++ b/dev/auto_examples/plot_UNO_darcy.html
@@ -454,22 +454,22 @@ Create the trainer
Training on 1000 samples
 Testing on [50, 50] samples         on resolutions [16, 32].
 Raw outputs of shape torch.Size([32, 1, 16, 16])
-[0] time=10.17, avg_loss=0.6441, train_err=20.1282
-Eval: 16_h1=0.3095, 16_l2=0.2449, 32_h1=0.7229, 32_l2=0.5358
-[3] time=10.14, avg_loss=0.2428, train_err=7.5868
-Eval: 16_h1=0.2699, 16_l2=0.2214, 32_h1=0.7097, 32_l2=0.5759
-[6] time=10.13, avg_loss=0.2352, train_err=7.3495
-Eval: 16_h1=0.2115, 16_l2=0.1576, 32_h1=0.7179, 32_l2=0.5758
-[9] time=10.13, avg_loss=0.2030, train_err=6.3450
-Eval: 16_h1=0.2484, 16_l2=0.1970, 32_h1=0.6780, 32_l2=0.5503
-[12] time=10.10, avg_loss=0.1893, train_err=5.9155
-Eval: 16_h1=0.2512, 16_l2=0.1867, 32_h1=0.6899, 32_l2=0.5352
-[15] time=10.07, avg_loss=0.1575, train_err=4.9215
-Eval: 16_h1=0.1997, 16_l2=0.1529, 32_h1=0.6739, 32_l2=0.5376
-[18] time=10.10, avg_loss=0.1334, train_err=4.1696
-Eval: 16_h1=0.1973, 16_l2=0.1490, 32_h1=0.6727, 32_l2=0.5054
-
-{'train_err': 4.134030694141984, 'avg_loss': 0.13228898221254348, 'avg_lasso_loss': None, 'epoch_train_time': 10.076509817999977}
+[0] time=10.16, avg_loss=0.6551, train_err=20.4733
+Eval: 16_h1=0.3185, 16_l2=0.2467, 32_h1=0.8165, 32_l2=0.6623
+[3] time=10.08, avg_loss=0.2450, train_err=7.6547
+Eval: 16_h1=0.2639, 16_l2=0.2082, 32_h1=0.7555, 32_l2=0.6193
+[6] time=10.11, avg_loss=0.2239, train_err=6.9981
+Eval: 16_h1=0.2054, 16_l2=0.1505, 32_h1=0.7867, 32_l2=0.6405
+[9] time=10.19, avg_loss=0.2389, train_err=7.4641
+Eval: 16_h1=0.2084, 16_l2=0.1527, 32_h1=0.7668, 32_l2=0.6469
+[12] time=10.10, avg_loss=0.1708, train_err=5.3378
+Eval: 16_h1=0.2286, 16_l2=0.1798, 32_h1=0.7274, 32_l2=0.5666
+[15] time=10.11, avg_loss=0.1493, train_err=4.6665
+Eval: 16_h1=0.2102, 16_l2=0.1550, 32_h1=0.7123, 32_l2=0.5348
+[18] time=10.16, avg_loss=0.1667, train_err=5.2080
+Eval: 16_h1=0.1794, 16_l2=0.1285, 32_h1=0.6974, 32_l2=0.5106
+
+{'train_err': 4.332731641829014, 'avg_loss': 0.13864741253852844, 'avg_lasso_loss': None, 'epoch_train_time': 10.153018183000086}
 

@@ -519,7 +519,7 @@ Plot the prediction, and compare with the ground-truth
 fig.show()
-Inputs, ground-truth output and prediction., Input x, Ground-truth y, Model prediction
-Total running time of the script: (3 minutes 25.421 seconds)
+Inputs, ground-truth output and prediction., Input x, Ground-truth y, Model prediction
+Total running time of the script: (3 minutes 26.027 seconds)