diff --git a/dev/_downloads/07fcc19ba03226cd3d83d4e40ec44385/auto_examples_python.zip b/dev/_downloads/07fcc19ba03226cd3d83d4e40ec44385/auto_examples_python.zip
index 37eb9bd..6a712e7 100644
Binary files a/dev/_downloads/07fcc19ba03226cd3d83d4e40ec44385/auto_examples_python.zip and b/dev/_downloads/07fcc19ba03226cd3d83d4e40ec44385/auto_examples_python.zip differ
diff --git a/dev/_downloads/0d78e075dd52a34e158d7f5f710dfe89/plot_incremental_FNO_darcy.zip b/dev/_downloads/0d78e075dd52a34e158d7f5f710dfe89/plot_incremental_FNO_darcy.zip
index ec1689d..73eda60 100644
Binary files a/dev/_downloads/0d78e075dd52a34e158d7f5f710dfe89/plot_incremental_FNO_darcy.zip and b/dev/_downloads/0d78e075dd52a34e158d7f5f710dfe89/plot_incremental_FNO_darcy.zip differ
diff --git a/dev/_downloads/20c43dd37baf603889c4dc23e93bdb60/plot_count_flops.zip b/dev/_downloads/20c43dd37baf603889c4dc23e93bdb60/plot_count_flops.zip
index 8046c57..752421b 100644
Binary files a/dev/_downloads/20c43dd37baf603889c4dc23e93bdb60/plot_count_flops.zip and b/dev/_downloads/20c43dd37baf603889c4dc23e93bdb60/plot_count_flops.zip differ
diff --git a/dev/_downloads/3864a2d85c7ce11adeac9580559229ab/plot_darcy_flow.zip b/dev/_downloads/3864a2d85c7ce11adeac9580559229ab/plot_darcy_flow.zip
index 2ebd0fb..67c7c5d 100644
Binary files a/dev/_downloads/3864a2d85c7ce11adeac9580559229ab/plot_darcy_flow.zip and b/dev/_downloads/3864a2d85c7ce11adeac9580559229ab/plot_darcy_flow.zip differ
diff --git a/dev/_downloads/3faf9d2eaee5cc8e9f1c631c002ce544/plot_darcy_flow_spectrum.zip b/dev/_downloads/3faf9d2eaee5cc8e9f1c631c002ce544/plot_darcy_flow_spectrum.zip
index 5829bf8..96d636d 100644
Binary files a/dev/_downloads/3faf9d2eaee5cc8e9f1c631c002ce544/plot_darcy_flow_spectrum.zip and b/dev/_downloads/3faf9d2eaee5cc8e9f1c631c002ce544/plot_darcy_flow_spectrum.zip differ
diff --git a/dev/_downloads/4e1cba46bb51062a073424e1efbaad57/plot_DISCO_convolutions.zip b/dev/_downloads/4e1cba46bb51062a073424e1efbaad57/plot_DISCO_convolutions.zip
index 4adbb42..48a7fd9 100644
Binary files a/dev/_downloads/4e1cba46bb51062a073424e1efbaad57/plot_DISCO_convolutions.zip and b/dev/_downloads/4e1cba46bb51062a073424e1efbaad57/plot_DISCO_convolutions.zip differ
diff --git a/dev/_downloads/5e60095ce99919773daa83384f767e02/plot_SFNO_swe.zip b/dev/_downloads/5e60095ce99919773daa83384f767e02/plot_SFNO_swe.zip
index 72880bb..9ec04d5 100644
Binary files a/dev/_downloads/5e60095ce99919773daa83384f767e02/plot_SFNO_swe.zip and b/dev/_downloads/5e60095ce99919773daa83384f767e02/plot_SFNO_swe.zip differ
diff --git a/dev/_downloads/645da00b8fbbb9bb5cae877fd0f31635/plot_FNO_darcy.zip b/dev/_downloads/645da00b8fbbb9bb5cae877fd0f31635/plot_FNO_darcy.zip
index 61c927c..1b8ccba 100644
Binary files a/dev/_downloads/645da00b8fbbb9bb5cae877fd0f31635/plot_FNO_darcy.zip and b/dev/_downloads/645da00b8fbbb9bb5cae877fd0f31635/plot_FNO_darcy.zip differ
diff --git a/dev/_downloads/6f1e7a639e0699d6164445b55e6c116d/auto_examples_jupyter.zip b/dev/_downloads/6f1e7a639e0699d6164445b55e6c116d/auto_examples_jupyter.zip
index d923135..e62328f 100644
Binary files a/dev/_downloads/6f1e7a639e0699d6164445b55e6c116d/auto_examples_jupyter.zip and b/dev/_downloads/6f1e7a639e0699d6164445b55e6c116d/auto_examples_jupyter.zip differ
diff --git a/dev/_downloads/7296405f6df7c2cfe184e9b258cee33e/checkpoint_FNO_darcy.zip b/dev/_downloads/7296405f6df7c2cfe184e9b258cee33e/checkpoint_FNO_darcy.zip
index 1be1399..bc28deb 100644
Binary files a/dev/_downloads/7296405f6df7c2cfe184e9b258cee33e/checkpoint_FNO_darcy.zip and b/dev/_downloads/7296405f6df7c2cfe184e9b258cee33e/checkpoint_FNO_darcy.zip differ
diff --git a/dev/_downloads/cefc537c5730a6b3e916b83c1fd313d6/plot_UNO_darcy.zip b/dev/_downloads/cefc537c5730a6b3e916b83c1fd313d6/plot_UNO_darcy.zip
index 5abc6a2..6428370 100644
Binary files a/dev/_downloads/cefc537c5730a6b3e916b83c1fd313d6/plot_UNO_darcy.zip and b/dev/_downloads/cefc537c5730a6b3e916b83c1fd313d6/plot_UNO_darcy.zip differ
diff --git a/dev/_images/sphx_glr_plot_SFNO_swe_001.png b/dev/_images/sphx_glr_plot_SFNO_swe_001.png
index c2fc6c9..4d3ad46 100644
Binary files a/dev/_images/sphx_glr_plot_SFNO_swe_001.png and b/dev/_images/sphx_glr_plot_SFNO_swe_001.png differ
diff --git a/dev/_images/sphx_glr_plot_SFNO_swe_thumb.png b/dev/_images/sphx_glr_plot_SFNO_swe_thumb.png
index 647a6f6..af8748d 100644
Binary files a/dev/_images/sphx_glr_plot_SFNO_swe_thumb.png and b/dev/_images/sphx_glr_plot_SFNO_swe_thumb.png differ
diff --git a/dev/_images/sphx_glr_plot_UNO_darcy_001.png b/dev/_images/sphx_glr_plot_UNO_darcy_001.png
index 9a70fdb..9c487f2 100644
Binary files a/dev/_images/sphx_glr_plot_UNO_darcy_001.png and b/dev/_images/sphx_glr_plot_UNO_darcy_001.png differ
diff --git a/dev/_images/sphx_glr_plot_UNO_darcy_thumb.png b/dev/_images/sphx_glr_plot_UNO_darcy_thumb.png
index 645b4f7..21b7488 100644
Binary files a/dev/_images/sphx_glr_plot_UNO_darcy_thumb.png and b/dev/_images/sphx_glr_plot_UNO_darcy_thumb.png differ
diff --git a/dev/_sources/auto_examples/plot_DISCO_convolutions.rst.txt b/dev/_sources/auto_examples/plot_DISCO_convolutions.rst.txt
index 4d79c38..fd7bd92 100644
--- a/dev/_sources/auto_examples/plot_DISCO_convolutions.rst.txt
+++ b/dev/_sources/auto_examples/plot_DISCO_convolutions.rst.txt
@@ -357,7 +357,7 @@ in order to compute the convolved image, we need to first bring it into the righ
 .. code-block:: none

-
+
@@ -448,7 +448,7 @@ in order to compute the convolved image, we need to first bring it into the righ
 .. rst-class:: sphx-glr-timing

-   **Total running time of the script:** (0 minutes 37.420 seconds)
+   **Total running time of the script:** (0 minutes 36.824 seconds)

 .. _sphx_glr_download_auto_examples_plot_DISCO_convolutions.py:
diff --git a/dev/_sources/auto_examples/plot_FNO_darcy.rst.txt b/dev/_sources/auto_examples/plot_FNO_darcy.rst.txt
index 2b82921..5407374 100644
--- a/dev/_sources/auto_examples/plot_FNO_darcy.rst.txt
+++ b/dev/_sources/auto_examples/plot_FNO_darcy.rst.txt
@@ -248,13 +248,13 @@ Training the model
     )

     ### SCHEDULER ###
-
+

     ### LOSSES ###
-    * Train:
+    * Train:

-    * Test: {'h1': , 'l2': }
+    * Test: {'h1': , 'l2': }
@@ -311,22 +311,22 @@ Then train the model on our small Darcy-Flow dataset:
     Training on 1000 samples
     Testing on [50, 50] samples         on resolutions [16, 32].
     Raw outputs of shape torch.Size([32, 1, 16, 16])
-    [0] time=2.60, avg_loss=0.6956, train_err=21.7383
+    [0] time=2.55, avg_loss=0.6956, train_err=21.7383
     Eval: 16_h1=0.4298, 16_l2=0.3487, 32_h1=0.5847, 32_l2=0.3542
-    [3] time=2.57, avg_loss=0.2103, train_err=6.5705
+    [3] time=2.52, avg_loss=0.2103, train_err=6.5705
     Eval: 16_h1=0.2030, 16_l2=0.1384, 32_h1=0.5075, 32_l2=0.1774
-    [6] time=2.57, avg_loss=0.1911, train_err=5.9721
+    [6] time=2.54, avg_loss=0.1911, train_err=5.9721
     Eval: 16_h1=0.2099, 16_l2=0.1374, 32_h1=0.4907, 32_l2=0.1783
-    [9] time=2.57, avg_loss=0.1410, train_err=4.4073
+    [9] time=2.53, avg_loss=0.1410, train_err=4.4073
     Eval: 16_h1=0.2052, 16_l2=0.1201, 32_h1=0.5268, 32_l2=0.1615
-    [12] time=2.58, avg_loss=0.1422, train_err=4.4434
+    [12] time=2.53, avg_loss=0.1422, train_err=4.4434
     Eval: 16_h1=0.2131, 16_l2=0.1285, 32_h1=0.5413, 32_l2=0.1741
-    [15] time=2.58, avg_loss=0.1198, train_err=3.7424
+    [15] time=2.52, avg_loss=0.1198, train_err=3.7424
     Eval: 16_h1=0.1984, 16_l2=0.1137, 32_h1=0.5255, 32_l2=0.1569
-    [18] time=2.56, avg_loss=0.1104, train_err=3.4502
+    [18] time=2.54, avg_loss=0.1104, train_err=3.4502
     Eval: 16_h1=0.2039, 16_l2=0.1195, 32_h1=0.5062, 32_l2=0.1603
-    {'train_err': 2.9605126455426216, 'avg_loss': 0.0947364046573639, 'avg_lasso_loss': None, 'epoch_train_time': 2.5626969950000102}
+    {'train_err': 2.9605126455426216, 'avg_loss': 0.0947364046573639, 'avg_lasso_loss': None, 'epoch_train_time': 2.5328369260000727}
@@ -476,7 +476,7 @@ are other ways to scale the outputs of the FNO to train a true super-resolution
 .. rst-class:: sphx-glr-timing

-   **Total running time of the script:** (0 minutes 52.899 seconds)
+   **Total running time of the script:** (0 minutes 51.949 seconds)

 .. _sphx_glr_download_auto_examples_plot_FNO_darcy.py:
diff --git a/dev/_sources/auto_examples/plot_SFNO_swe.rst.txt b/dev/_sources/auto_examples/plot_SFNO_swe.rst.txt
index 97e5463..3ba5a3a 100644
--- a/dev/_sources/auto_examples/plot_SFNO_swe.rst.txt
+++ b/dev/_sources/auto_examples/plot_SFNO_swe.rst.txt
@@ -234,13 +234,13 @@ Creating the losses
     )

     ### SCHEDULER ###
-
+

     ### LOSSES ###
-    * Train:
+    * Train:

-    * Test: {'l2': }
+    * Test: {'l2': }
@@ -297,22 +297,22 @@ Train the model on the spherical SWE dataset
     Training on 200 samples
     Testing on [50, 50] samples         on resolutions [(32, 64), (64, 128)].
     Raw outputs of shape torch.Size([4, 3, 32, 64])
-    [0] time=3.55, avg_loss=2.5143, train_err=10.0572
-    Eval: (32, 64)_l2=1.9155, (64, 128)_l2=2.4213
-    [3] time=3.55, avg_loss=0.4193, train_err=1.6771
-    Eval: (32, 64)_l2=0.4481, (64, 128)_l2=2.5097
-    [6] time=3.49, avg_loss=0.2898, train_err=1.1590
-    Eval: (32, 64)_l2=0.3605, (64, 128)_l2=2.4361
-    [9] time=3.53, avg_loss=0.2269, train_err=0.9075
-    Eval: (32, 64)_l2=0.3382, (64, 128)_l2=2.4815
-    [12] time=3.52, avg_loss=0.1930, train_err=0.7721
-    Eval: (32, 64)_l2=0.3280, (64, 128)_l2=2.4038
-    [15] time=3.54, avg_loss=0.1609, train_err=0.6438
-    Eval: (32, 64)_l2=0.3029, (64, 128)_l2=2.4056
-    [18] time=3.57, avg_loss=0.1422, train_err=0.5687
-    Eval: (32, 64)_l2=0.2940, (64, 128)_l2=2.3771
+    [0] time=3.52, avg_loss=2.5951, train_err=10.3803
+    Eval: (32, 64)_l2=1.7616, (64, 128)_l2=2.4268
+    [3] time=3.52, avg_loss=0.4486, train_err=1.7945
+    Eval: (32, 64)_l2=0.4180, (64, 128)_l2=2.3742
+    [6] time=3.47, avg_loss=0.2711, train_err=1.0843
+    Eval: (32, 64)_l2=0.3335, (64, 128)_l2=2.3245
+    [9] time=3.48, avg_loss=0.2257, train_err=0.9030
+    Eval: (32, 64)_l2=0.3089, (64, 128)_l2=2.3506
+    [12] time=3.47, avg_loss=0.1931, train_err=0.7726
+    Eval: (32, 64)_l2=0.2473, (64, 128)_l2=2.3245
+    [15] time=3.46, avg_loss=0.1670, train_err=0.6680
+    Eval: (32, 64)_l2=0.2365, (64, 128)_l2=2.3338
+    [18] time=3.46, avg_loss=0.1504, train_err=0.6015
+    Eval: (32, 64)_l2=0.2184, (64, 128)_l2=2.3164
-    {'train_err': 0.5653003603219986, 'avg_loss': 0.14132509008049965, 'avg_lasso_loss': None, 'epoch_train_time': 3.540815240000029}
+    {'train_err': 0.5853043901920318, 'avg_loss': 0.14632609754800796, 'avg_lasso_loss': None, 'epoch_train_time': 3.4730680470000834}
@@ -383,7 +383,7 @@ In practice we would train a Neural Operator on one or multiple GPUs
 .. rst-class:: sphx-glr-timing

-   **Total running time of the script:** (1 minutes 26.316 seconds)
+   **Total running time of the script:** (1 minutes 25.227 seconds)

 .. _sphx_glr_download_auto_examples_plot_SFNO_swe.py:
diff --git a/dev/_sources/auto_examples/plot_UNO_darcy.rst.txt b/dev/_sources/auto_examples/plot_UNO_darcy.rst.txt
index fa558e3..dd859de 100644
--- a/dev/_sources/auto_examples/plot_UNO_darcy.rst.txt
+++ b/dev/_sources/auto_examples/plot_UNO_darcy.rst.txt
@@ -345,13 +345,13 @@ Creating the losses
     )

     ### SCHEDULER ###
-
+

     ### LOSSES ###
-    * Train:
+    * Train:

-    * Test: {'h1': , 'l2': }
+    * Test: {'h1': , 'l2': }
@@ -410,22 +410,22 @@ Actually train the model on our small Darcy-Flow dataset
     Training on 1000 samples
     Testing on [50, 50] samples         on resolutions [16, 32].
     Raw outputs of shape torch.Size([32, 1, 16, 16])
-    [0] time=10.26, avg_loss=0.7033, train_err=21.9780
-    Eval: 16_h1=0.4504, 16_l2=0.2891, 32_h1=0.8744, 32_l2=0.5683
-    [3] time=10.22, avg_loss=0.2421, train_err=7.5659
-    Eval: 16_h1=0.2374, 16_l2=0.1541, 32_h1=0.7924, 32_l2=0.5027
-    [6] time=10.14, avg_loss=0.2417, train_err=7.5527
-    Eval: 16_h1=0.2790, 16_l2=0.1725, 32_h1=0.8010, 32_l2=0.4774
-    [9] time=10.14, avg_loss=0.2402, train_err=7.5063
-    Eval: 16_h1=0.2619, 16_l2=0.1642, 32_h1=0.7790, 32_l2=0.4700
-    [12] time=10.11, avg_loss=0.2047, train_err=6.3980
-    Eval: 16_h1=0.2947, 16_l2=0.1913, 32_h1=0.7847, 32_l2=0.4989
-    [15] time=10.14, avg_loss=0.1683, train_err=5.2603
-    Eval: 16_h1=0.2925, 16_l2=0.1709, 32_h1=0.7781, 32_l2=0.4702
-    [18] time=10.13, avg_loss=0.1472, train_err=4.5987
-    Eval: 16_h1=0.2312, 16_l2=0.1432, 32_h1=0.7338, 32_l2=0.4617
+    [0] time=10.09, avg_loss=0.6742, train_err=21.0681
+    Eval: 16_h1=0.3964, 16_l2=0.2586, 32_h1=0.9592, 32_l2=0.6138
+    [3] time=9.98, avg_loss=0.2421, train_err=7.5670
+    Eval: 16_h1=0.2568, 16_l2=0.1667, 32_h1=0.8420, 32_l2=0.5287
+    [6] time=10.09, avg_loss=0.2110, train_err=6.5937
+    Eval: 16_h1=0.2970, 16_l2=0.1961, 32_h1=0.8152, 32_l2=0.5072
+    [9] time=10.02, avg_loss=0.1948, train_err=6.0878
+    Eval: 16_h1=0.2791, 16_l2=0.1853, 32_h1=0.8134, 32_l2=0.4969
+    [12] time=10.03, avg_loss=0.2015, train_err=6.2963
+    Eval: 16_h1=0.2579, 16_l2=0.1594, 32_h1=0.8114, 32_l2=0.4822
+    [15] time=10.01, avg_loss=0.1410, train_err=4.4049
+    Eval: 16_h1=0.2554, 16_l2=0.1777, 32_h1=0.8068, 32_l2=0.5007
+    [18] time=10.00, avg_loss=0.1490, train_err=4.6550
+    Eval: 16_h1=0.2411, 16_l2=0.1463, 32_h1=0.7796, 32_l2=0.4553
-    {'train_err': 3.8512582033872604, 'avg_loss': 0.12324026250839233, 'avg_lasso_loss': None, 'epoch_train_time': 10.164662551999982}
+    {'train_err': 4.482214251533151, 'avg_loss': 0.14343085604906083, 'avg_lasso_loss': None, 'epoch_train_time': 9.988673681000023}
@@ -499,7 +499,7 @@ In practice we would train a Neural Operator on one or multiple GPUs
 .. rst-class:: sphx-glr-timing

-   **Total running time of the script:** (3 minutes 26.647 seconds)
+   **Total running time of the script:** (3 minutes 23.629 seconds)

 .. _sphx_glr_download_auto_examples_plot_UNO_darcy.py:
diff --git a/dev/_sources/auto_examples/plot_count_flops.rst.txt b/dev/_sources/auto_examples/plot_count_flops.rst.txt
index 69122a1..781e050 100644
--- a/dev/_sources/auto_examples/plot_count_flops.rst.txt
+++ b/dev/_sources/auto_examples/plot_count_flops.rst.txt
@@ -80,7 +80,7 @@ This output is organized as a defaultdict object that counts the FLOPS used in e
 .. code-block:: none

-    defaultdict(. at 0x7f055d093ca0>, {'': defaultdict(, {'convolution.default': 2982150144, 'bmm.default': 138412032}), 'lifting': defaultdict(, {'convolution.default': 562036736}), 'lifting.fcs.0': defaultdict(, {'convolution.default': 25165824}), 'lifting.fcs.1': defaultdict(, {'convolution.default': 536870912}), 'fno_blocks': defaultdict(, {'convolution.default': 2147483648, 'bmm.default': 138412032}), 'fno_blocks.fno_skips.0': defaultdict(, {'convolution.default': 268435456}), 'fno_blocks.fno_skips.0.conv': defaultdict(, {'convolution.default': 268435456}), 'fno_blocks.convs.0': defaultdict(, {'bmm.default': 34603008}), 'fno_blocks.channel_mlp.0': defaultdict(, {'convolution.default': 268435456}), 'fno_blocks.channel_mlp.0.fcs.0': defaultdict(, {'convolution.default': 134217728}), 'fno_blocks.channel_mlp.0.fcs.1': defaultdict(, {'convolution.default': 134217728}), 'fno_blocks.fno_skips.1': defaultdict(, {'convolution.default': 268435456}), 'fno_blocks.fno_skips.1.conv': defaultdict(, {'convolution.default': 268435456}), 'fno_blocks.convs.1': defaultdict(, {'bmm.default': 34603008}), 'fno_blocks.channel_mlp.1': defaultdict(, {'convolution.default': 268435456}), 'fno_blocks.channel_mlp.1.fcs.0': defaultdict(, {'convolution.default': 134217728}), 'fno_blocks.channel_mlp.1.fcs.1': defaultdict(, {'convolution.default': 134217728}), 'fno_blocks.fno_skips.2': defaultdict(, {'convolution.default': 268435456}), 'fno_blocks.fno_skips.2.conv': defaultdict(, {'convolution.default': 268435456}), 'fno_blocks.convs.2': defaultdict(, {'bmm.default': 34603008}), 'fno_blocks.channel_mlp.2': defaultdict(, {'convolution.default': 268435456}), 'fno_blocks.channel_mlp.2.fcs.0': defaultdict(, {'convolution.default': 134217728}), 'fno_blocks.channel_mlp.2.fcs.1': defaultdict(, {'convolution.default': 134217728}), 'fno_blocks.fno_skips.3': defaultdict(, {'convolution.default': 268435456}), 'fno_blocks.fno_skips.3.conv': defaultdict(, {'convolution.default': 268435456}), 'fno_blocks.convs.3': defaultdict(, {'bmm.default': 34603008}), 'fno_blocks.channel_mlp.3': defaultdict(, {'convolution.default': 268435456}), 'fno_blocks.channel_mlp.3.fcs.0': defaultdict(, {'convolution.default': 134217728}), 'fno_blocks.channel_mlp.3.fcs.1': defaultdict(, {'convolution.default': 134217728}), 'projection': defaultdict(, {'convolution.default': 272629760}), 'projection.fcs.0': defaultdict(, {'convolution.default': 268435456}), 'projection.fcs.1': defaultdict(, {'convolution.default': 4194304})})
+    defaultdict(. at 0x7fc35f094ca0>, {'': defaultdict(, {'convolution.default': 2982150144, 'bmm.default': 138412032}), 'lifting': defaultdict(, {'convolution.default': 562036736}), 'lifting.fcs.0': defaultdict(, {'convolution.default': 25165824}), 'lifting.fcs.1': defaultdict(, {'convolution.default': 536870912}), 'fno_blocks': defaultdict(, {'convolution.default': 2147483648, 'bmm.default': 138412032}), 'fno_blocks.fno_skips.0': defaultdict(, {'convolution.default': 268435456}), 'fno_blocks.fno_skips.0.conv': defaultdict(, {'convolution.default': 268435456}), 'fno_blocks.convs.0': defaultdict(, {'bmm.default': 34603008}), 'fno_blocks.channel_mlp.0': defaultdict(, {'convolution.default': 268435456}), 'fno_blocks.channel_mlp.0.fcs.0': defaultdict(, {'convolution.default': 134217728}), 'fno_blocks.channel_mlp.0.fcs.1': defaultdict(, {'convolution.default': 134217728}), 'fno_blocks.fno_skips.1': defaultdict(, {'convolution.default': 268435456}), 'fno_blocks.fno_skips.1.conv': defaultdict(, {'convolution.default': 268435456}), 'fno_blocks.convs.1': defaultdict(, {'bmm.default': 34603008}), 'fno_blocks.channel_mlp.1': defaultdict(, {'convolution.default': 268435456}), 'fno_blocks.channel_mlp.1.fcs.0': defaultdict(, {'convolution.default': 134217728}), 'fno_blocks.channel_mlp.1.fcs.1': defaultdict(, {'convolution.default': 134217728}), 'fno_blocks.fno_skips.2': defaultdict(, {'convolution.default': 268435456}), 'fno_blocks.fno_skips.2.conv': defaultdict(, {'convolution.default': 268435456}), 'fno_blocks.convs.2': defaultdict(, {'bmm.default': 34603008}), 'fno_blocks.channel_mlp.2': defaultdict(, {'convolution.default': 268435456}), 'fno_blocks.channel_mlp.2.fcs.0': defaultdict(, {'convolution.default': 134217728}), 'fno_blocks.channel_mlp.2.fcs.1': defaultdict(, {'convolution.default': 134217728}), 'fno_blocks.fno_skips.3': defaultdict(, {'convolution.default': 268435456}), 'fno_blocks.fno_skips.3.conv': defaultdict(, {'convolution.default': 268435456}), 'fno_blocks.convs.3': defaultdict(, {'bmm.default': 34603008}), 'fno_blocks.channel_mlp.3': defaultdict(, {'convolution.default': 268435456}), 'fno_blocks.channel_mlp.3.fcs.0': defaultdict(, {'convolution.default': 134217728}), 'fno_blocks.channel_mlp.3.fcs.1': defaultdict(, {'convolution.default': 134217728}), 'projection': defaultdict(, {'convolution.default': 272629760}), 'projection.fcs.0': defaultdict(, {'convolution.default': 268435456}), 'projection.fcs.1': defaultdict(, {'convolution.default': 4194304})})
@@ -125,7 +125,7 @@ To check the maximum FLOPS used during the forward pass, let's create a recursiv
 .. rst-class:: sphx-glr-timing

-   **Total running time of the script:** (0 minutes 3.530 seconds)
+   **Total running time of the script:** (0 minutes 3.946 seconds)

 .. _sphx_glr_download_auto_examples_plot_count_flops.py:
diff --git a/dev/_sources/auto_examples/plot_darcy_flow.rst.txt b/dev/_sources/auto_examples/plot_darcy_flow.rst.txt
index e2e167c..1c09a44 100644
--- a/dev/_sources/auto_examples/plot_darcy_flow.rst.txt
+++ b/dev/_sources/auto_examples/plot_darcy_flow.rst.txt
@@ -163,7 +163,7 @@ Visualizing the data
 .. rst-class:: sphx-glr-timing

-   **Total running time of the script:** (0 minutes 0.390 seconds)
+   **Total running time of the script:** (0 minutes 0.408 seconds)

 .. _sphx_glr_download_auto_examples_plot_darcy_flow.py:
diff --git a/dev/_sources/auto_examples/plot_darcy_flow_spectrum.rst.txt b/dev/_sources/auto_examples/plot_darcy_flow_spectrum.rst.txt
index b062f57..49cbd73 100644
--- a/dev/_sources/auto_examples/plot_darcy_flow_spectrum.rst.txt
+++ b/dev/_sources/auto_examples/plot_darcy_flow_spectrum.rst.txt
@@ -219,7 +219,7 @@ Loading the Navier-Stokes dataset in 128x128 resolution
 .. rst-class:: sphx-glr-timing

-   **Total running time of the script:** (0 minutes 0.278 seconds)
+   **Total running time of the script:** (0 minutes 0.276 seconds)

 .. _sphx_glr_download_auto_examples_plot_darcy_flow_spectrum.py:
diff --git a/dev/_sources/auto_examples/plot_incremental_FNO_darcy.rst.txt b/dev/_sources/auto_examples/plot_incremental_FNO_darcy.rst.txt
index cf9de48..4ac1015 100644
--- a/dev/_sources/auto_examples/plot_incremental_FNO_darcy.rst.txt
+++ b/dev/_sources/auto_examples/plot_incremental_FNO_darcy.rst.txt
@@ -240,15 +240,15 @@ Set up the losses
     )

     ### SCHEDULER ###
-
+

     ### LOSSES ###

     ### INCREMENTAL RESOLUTION + GRADIENT EXPLAINED ###
-    * Train:
+    * Train:

-    * Test: {'h1': , 'l2': }
+    * Test: {'h1': , 'l2': }
@@ -329,15 +329,15 @@ Train the model
     Training on 100 samples
     Testing on [50, 50] samples         on resolutions [16, 32].
     Raw outputs of shape torch.Size([16, 1, 8, 8])
-    [0] time=0.25, avg_loss=0.9239, train_err=13.1989
+    [0] time=0.24, avg_loss=0.9239, train_err=13.1989
     Eval: 16_h1=0.8702, 16_l2=0.5536, 32_h1=0.9397, 32_l2=0.5586
-    [1] time=0.23, avg_loss=0.7589, train_err=10.8413
+    [1] time=0.22, avg_loss=0.7589, train_err=10.8413
     Eval: 16_h1=0.7880, 16_l2=0.5167, 32_h1=0.9011, 32_l2=0.5234
-    [2] time=0.23, avg_loss=0.6684, train_err=9.5484
+    [2] time=0.22, avg_loss=0.6684, train_err=9.5484
     Eval: 16_h1=0.7698, 16_l2=0.4235, 32_h1=0.9175, 32_l2=0.4405
-    [3] time=0.22, avg_loss=0.6082, train_err=8.6886
+    [3] time=0.23, avg_loss=0.6082, train_err=8.6886
     Eval: 16_h1=0.7600, 16_l2=0.4570, 32_h1=1.0126, 32_l2=0.5005
-    [4] time=0.23, avg_loss=0.5604, train_err=8.0054
+    [4] time=0.22, avg_loss=0.5604, train_err=8.0054
     Eval: 16_h1=0.8301, 16_l2=0.4577, 32_h1=1.1722, 32_l2=0.4987
     [5] time=0.23, avg_loss=0.5366, train_err=7.6663
     Eval: 16_h1=0.8099, 16_l2=0.4076, 32_h1=1.1414, 32_l2=0.4436
@@ -354,26 +354,26 @@ Train the model
     Incre Res Update: change res to 16
     [10] time=0.29, avg_loss=0.5158, train_err=7.3681
     Eval: 16_h1=0.5133, 16_l2=0.3167, 32_h1=0.6120, 32_l2=0.3022
-    [11] time=0.28, avg_loss=0.4536, train_err=6.4795
+    [11] time=0.27, avg_loss=0.4536, train_err=6.4795
     Eval: 16_h1=0.4680, 16_l2=0.3422, 32_h1=0.6436, 32_l2=0.3659
     [12] time=0.28, avg_loss=0.4155, train_err=5.9358
     Eval: 16_h1=0.4119, 16_l2=0.2692, 32_h1=0.5285, 32_l2=0.2692
     [13] time=0.28, avg_loss=0.3724, train_err=5.3195
     Eval: 16_h1=0.4045, 16_l2=0.2569, 32_h1=0.5620, 32_l2=0.2747
-    [14] time=0.29, avg_loss=0.3555, train_err=5.0783
+    [14] time=0.28, avg_loss=0.3555, train_err=5.0783
     Eval: 16_h1=0.3946, 16_l2=0.2485, 32_h1=0.5278, 32_l2=0.2523
-    [15] time=0.29, avg_loss=0.3430, train_err=4.8998
+    [15] time=0.28, avg_loss=0.3430, train_err=4.8998
     Eval: 16_h1=0.3611, 16_l2=0.2323, 32_h1=0.5079, 32_l2=0.2455
     [16] time=0.28, avg_loss=0.3251, train_err=4.6438
     Eval: 16_h1=0.3433, 16_l2=0.2224, 32_h1=0.4757, 32_l2=0.2351
     [17] time=0.28, avg_loss=0.3072, train_err=4.3888
     Eval: 16_h1=0.3458, 16_l2=0.2226, 32_h1=0.4776, 32_l2=0.2371
-    [18] time=0.29, avg_loss=0.2982, train_err=4.2593
+    [18] time=0.28, avg_loss=0.2982, train_err=4.2593
     Eval: 16_h1=0.3251, 16_l2=0.2116, 32_h1=0.4519, 32_l2=0.2245
-    [19] time=0.29, avg_loss=0.2802, train_err=4.0024
+    [19] time=0.28, avg_loss=0.2802, train_err=4.0024
     Eval: 16_h1=0.3201, 16_l2=0.2110, 32_h1=0.4533, 32_l2=0.2245
-    {'train_err': 4.002395851271493, 'avg_loss': 0.2801677095890045, 'avg_lasso_loss': None, 'epoch_train_time': 0.28665670099996987, '16_h1': tensor(0.3201), '16_l2': tensor(0.2110), '32_h1': tensor(0.4533), '32_l2': tensor(0.2245)}
+    {'train_err': 4.002395851271493, 'avg_loss': 0.2801677095890045, 'avg_lasso_loss': None, 'epoch_train_time': 0.2827713069999618, '16_h1': tensor(0.3201), '16_l2': tensor(0.2110), '32_h1': tensor(0.4533), '32_l2': tensor(0.2245)}
@@ -447,7 +447,7 @@ In practice we would train a Neural Operator on one or multiple GPUs
 .. rst-class:: sphx-glr-timing

-   **Total running time of the script:** (0 minutes 7.180 seconds)
+   **Total running time of the script:** (0 minutes 7.074 seconds)

 .. _sphx_glr_download_auto_examples_plot_incremental_FNO_darcy.py:
diff --git a/dev/_sources/auto_examples/sg_execution_times.rst.txt b/dev/_sources/auto_examples/sg_execution_times.rst.txt
index 52749d0..1bece67 100644
--- a/dev/_sources/auto_examples/sg_execution_times.rst.txt
+++ b/dev/_sources/auto_examples/sg_execution_times.rst.txt
@@ -6,7 +6,7 @@ Computation times
 =================

-**06:34.661** total execution time for 9 files **from auto_examples**:
+**06:29.334** total execution time for 9 files **from auto_examples**:

 .. container::
@@ -33,28 +33,28 @@ Computation times
      - Time
      - Mem (MB)
    * - :ref:`sphx_glr_auto_examples_plot_UNO_darcy.py` (``plot_UNO_darcy.py``)
-     - 03:26.647
+     - 03:23.629
      - 0.0
    * - :ref:`sphx_glr_auto_examples_plot_SFNO_swe.py` (``plot_SFNO_swe.py``)
-     - 01:26.316
+     - 01:25.227
      - 0.0
    * - :ref:`sphx_glr_auto_examples_plot_FNO_darcy.py` (``plot_FNO_darcy.py``)
-     - 00:52.899
+     - 00:51.949
      - 0.0
    * - :ref:`sphx_glr_auto_examples_plot_DISCO_convolutions.py` (``plot_DISCO_convolutions.py``)
-     - 00:37.420
+     - 00:36.824
      - 0.0
    * - :ref:`sphx_glr_auto_examples_plot_incremental_FNO_darcy.py` (``plot_incremental_FNO_darcy.py``)
-     - 00:07.180
+     - 00:07.074
      - 0.0
    * - :ref:`sphx_glr_auto_examples_plot_count_flops.py` (``plot_count_flops.py``)
-     - 00:03.530
+     - 00:03.946
      - 0.0
    * - :ref:`sphx_glr_auto_examples_plot_darcy_flow.py` (``plot_darcy_flow.py``)
-     - 00:00.390
+     - 00:00.408
      - 0.0
    * - :ref:`sphx_glr_auto_examples_plot_darcy_flow_spectrum.py` (``plot_darcy_flow_spectrum.py``)
-     - 00:00.278
+     - 00:00.276
      - 0.0
    * - :ref:`sphx_glr_auto_examples_checkpoint_FNO_darcy.py` (``checkpoint_FNO_darcy.py``)
      - 00:00.000
diff --git a/dev/_sources/sg_execution_times.rst.txt b/dev/_sources/sg_execution_times.rst.txt
index 59767f5..7663d25 100644
--- a/dev/_sources/sg_execution_times.rst.txt
+++ b/dev/_sources/sg_execution_times.rst.txt
@@ -6,7 +6,7 @@ Computation times
 =================

-**06:34.661** total execution time for 9 files **from all galleries**:
+**06:29.334** total execution time for 9 files **from all galleries**:

 .. container::
@@ -33,28 +33,28 @@ Computation times
      - Time
      - Mem (MB)
    * - :ref:`sphx_glr_auto_examples_plot_UNO_darcy.py` (``../../examples/plot_UNO_darcy.py``)
-     - 03:26.647
+     - 03:23.629
      - 0.0
    * - :ref:`sphx_glr_auto_examples_plot_SFNO_swe.py` (``../../examples/plot_SFNO_swe.py``)
-     - 01:26.316
+     - 01:25.227
      - 0.0
    * - :ref:`sphx_glr_auto_examples_plot_FNO_darcy.py` (``../../examples/plot_FNO_darcy.py``)
-     - 00:52.899
+     - 00:51.949
      - 0.0
    * - :ref:`sphx_glr_auto_examples_plot_DISCO_convolutions.py` (``../../examples/plot_DISCO_convolutions.py``)
-     - 00:37.420
+     - 00:36.824
      - 0.0
    * - :ref:`sphx_glr_auto_examples_plot_incremental_FNO_darcy.py` (``../../examples/plot_incremental_FNO_darcy.py``)
-     - 00:07.180
+     - 00:07.074
      - 0.0
    * - :ref:`sphx_glr_auto_examples_plot_count_flops.py` (``../../examples/plot_count_flops.py``)
-     - 00:03.530
+     - 00:03.946
      - 0.0
    * - :ref:`sphx_glr_auto_examples_plot_darcy_flow.py` (``../../examples/plot_darcy_flow.py``)
-     - 00:00.390
+     - 00:00.408
      - 0.0
    * - :ref:`sphx_glr_auto_examples_plot_darcy_flow_spectrum.py` (``../../examples/plot_darcy_flow_spectrum.py``)
-     - 00:00.278
+     - 00:00.276
      - 0.0
    * - :ref:`sphx_glr_auto_examples_checkpoint_FNO_darcy.py` (``../../examples/checkpoint_FNO_darcy.py``)
      - 00:00.000
diff --git a/dev/auto_examples/plot_DISCO_convolutions.html b/dev/auto_examples/plot_DISCO_convolutions.html
index 7081ab7..5c6f64c 100644
--- a/dev/auto_examples/plot_DISCO_convolutions.html
+++ b/dev/auto_examples/plot_DISCO_convolutions.html
@@ -330,7 +330,7 @@
 # plt.show()
-plot DISCO convolutions
<matplotlib.colorbar.Colorbar object at 0x7f052afd4880>
+plot DISCO convolutions
<matplotlib.colorbar.Colorbar object at 0x7fc32d0138b0>
 
convt = DiscreteContinuousConvTranspose2d(1, 1, grid_in=grid_out, grid_out=grid_in, quadrature_weights=q_out, kernel_shape=[2,4], radius_cutoff=3/nyo, periodic=False).float()
@@ -381,7 +381,7 @@
 plot DISCO convolutions
torch.Size([1, 1, 120, 90])
 
-

Total running time of the script: (0 minutes 37.420 seconds)

+

Total running time of the script: (0 minutes 36.824 seconds)

Create the trainer

@@ -323,22 +323,22 @@
Training on 200 samples
 Testing on [50, 50] samples         on resolutions [(32, 64), (64, 128)].
 Raw outputs of shape torch.Size([4, 3, 32, 64])
-[0] time=3.55, avg_loss=2.5143, train_err=10.0572
-Eval: (32, 64)_l2=1.9155, (64, 128)_l2=2.4213
-[3] time=3.55, avg_loss=0.4193, train_err=1.6771
-Eval: (32, 64)_l2=0.4481, (64, 128)_l2=2.5097
-[6] time=3.49, avg_loss=0.2898, train_err=1.1590
-Eval: (32, 64)_l2=0.3605, (64, 128)_l2=2.4361
-[9] time=3.53, avg_loss=0.2269, train_err=0.9075
-Eval: (32, 64)_l2=0.3382, (64, 128)_l2=2.4815
-[12] time=3.52, avg_loss=0.1930, train_err=0.7721
-Eval: (32, 64)_l2=0.3280, (64, 128)_l2=2.4038
-[15] time=3.54, avg_loss=0.1609, train_err=0.6438
-Eval: (32, 64)_l2=0.3029, (64, 128)_l2=2.4056
-[18] time=3.57, avg_loss=0.1422, train_err=0.5687
-Eval: (32, 64)_l2=0.2940, (64, 128)_l2=2.3771
-
-{'train_err': 0.5653003603219986, 'avg_loss': 0.14132509008049965, 'avg_lasso_loss': None, 'epoch_train_time': 3.540815240000029}
+[0] time=3.52, avg_loss=2.5951, train_err=10.3803
+Eval: (32, 64)_l2=1.7616, (64, 128)_l2=2.4268
+[3] time=3.52, avg_loss=0.4486, train_err=1.7945
+Eval: (32, 64)_l2=0.4180, (64, 128)_l2=2.3742
+[6] time=3.47, avg_loss=0.2711, train_err=1.0843
+Eval: (32, 64)_l2=0.3335, (64, 128)_l2=2.3245
+[9] time=3.48, avg_loss=0.2257, train_err=0.9030
+Eval: (32, 64)_l2=0.3089, (64, 128)_l2=2.3506
+[12] time=3.47, avg_loss=0.1931, train_err=0.7726
+Eval: (32, 64)_l2=0.2473, (64, 128)_l2=2.3245
+[15] time=3.46, avg_loss=0.1670, train_err=0.6680
+Eval: (32, 64)_l2=0.2365, (64, 128)_l2=2.3338
+[18] time=3.46, avg_loss=0.1504, train_err=0.6015
+Eval: (32, 64)_l2=0.2184, (64, 128)_l2=2.3164
+
+{'train_err': 0.5853043901920318, 'avg_loss': 0.14632609754800796, 'avg_lasso_loss': None, 'epoch_train_time': 3.4730680470000834}
 

Plot the prediction, and compare with the ground-truth
@@ -385,7 +385,7 @@
fig.show()

-Inputs, ground-truth output and prediction., Input x (32, 64), Ground-truth y, Model prediction, Input x (64, 128), Ground-truth y, Model prediction

Total running time of the script: (1 minutes 26.316 seconds)

+Inputs, ground-truth output and prediction., Input x (32, 64), Ground-truth y, Model prediction, Input x (64, 128), Ground-truth y, Model prediction

Total running time of the script: (1 minutes 25.227 seconds)

Create the trainer

@@ -454,22 +454,22 @@
Training on 1000 samples
 Testing on [50, 50] samples         on resolutions [16, 32].
 Raw outputs of shape torch.Size([32, 1, 16, 16])
-[0] time=10.26, avg_loss=0.7033, train_err=21.9780
-Eval: 16_h1=0.4504, 16_l2=0.2891, 32_h1=0.8744, 32_l2=0.5683
-[3] time=10.22, avg_loss=0.2421, train_err=7.5659
-Eval: 16_h1=0.2374, 16_l2=0.1541, 32_h1=0.7924, 32_l2=0.5027
-[6] time=10.14, avg_loss=0.2417, train_err=7.5527
-Eval: 16_h1=0.2790, 16_l2=0.1725, 32_h1=0.8010, 32_l2=0.4774
-[9] time=10.14, avg_loss=0.2402, train_err=7.5063
-Eval: 16_h1=0.2619, 16_l2=0.1642, 32_h1=0.7790, 32_l2=0.4700
-[12] time=10.11, avg_loss=0.2047, train_err=6.3980
-Eval: 16_h1=0.2947, 16_l2=0.1913, 32_h1=0.7847, 32_l2=0.4989
-[15] time=10.14, avg_loss=0.1683, train_err=5.2603
-Eval: 16_h1=0.2925, 16_l2=0.1709, 32_h1=0.7781, 32_l2=0.4702
-[18] time=10.13, avg_loss=0.1472, train_err=4.5987
-Eval: 16_h1=0.2312, 16_l2=0.1432, 32_h1=0.7338, 32_l2=0.4617
-
-{'train_err': 3.8512582033872604, 'avg_loss': 0.12324026250839233, 'avg_lasso_loss': None, 'epoch_train_time': 10.164662551999982}
+[0] time=10.09, avg_loss=0.6742, train_err=21.0681
+Eval: 16_h1=0.3964, 16_l2=0.2586, 32_h1=0.9592, 32_l2=0.6138
+[3] time=9.98, avg_loss=0.2421, train_err=7.5670
+Eval: 16_h1=0.2568, 16_l2=0.1667, 32_h1=0.8420, 32_l2=0.5287
+[6] time=10.09, avg_loss=0.2110, train_err=6.5937
+Eval: 16_h1=0.2970, 16_l2=0.1961, 32_h1=0.8152, 32_l2=0.5072
+[9] time=10.02, avg_loss=0.1948, train_err=6.0878
+Eval: 16_h1=0.2791, 16_l2=0.1853, 32_h1=0.8134, 32_l2=0.4969
+[12] time=10.03, avg_loss=0.2015, train_err=6.2963
+Eval: 16_h1=0.2579, 16_l2=0.1594, 32_h1=0.8114, 32_l2=0.4822
+[15] time=10.01, avg_loss=0.1410, train_err=4.4049
+Eval: 16_h1=0.2554, 16_l2=0.1777, 32_h1=0.8068, 32_l2=0.5007
+[18] time=10.00, avg_loss=0.1490, train_err=4.6550
+Eval: 16_h1=0.2411, 16_l2=0.1463, 32_h1=0.7796, 32_l2=0.4553
+
+{'train_err': 4.482214251533151, 'avg_loss': 0.14343085604906083, 'avg_lasso_loss': None, 'epoch_train_time': 9.988673681000023}
 

Plot the prediction, and compare with the ground-truth
@@ -519,7 +519,7 @@
fig.show()
-Inputs, ground-truth output and prediction., Input x, Ground-truth y, Model prediction

Total running time of the script: (3 minutes 26.647 seconds)

+Inputs, ground-truth output and prediction., Input x, Ground-truth y, Model prediction

Total running time of the script: (3 minutes 23.629 seconds)