Comparing changes

base repository: skuschel/postpic (base: v0.0.1)
head repository: skuschel/postpic (compare: master)

Commits on Dec 5, 2014

  1. fd4093f
  2. 9eb9f94

Commits on Dec 6, 2014

  1. implement VSIM Dumpreader

    Tested that the methods return the right values (all methods are
    implemented except getderived). Uses _const.axesidentify for x, y, z,
    px, py, pz; weight and ID are not included. All other methods should
    behave as for sdf. Nevertheless, additional testing is required.
    planto authored and skuschel committed Dec 6, 2014
    f4ee6c2

Commits on Dec 7, 2014

  1. ac58217
  2. 39d95ab
  3. add support for fixed particle weight in SingleSpeciesAnalyzer

    hope the conversion fix in __getitem__ doesn't break anything
    skuschel committed Dec 7, 2014
    3884ee2
  4. improvements on vsim dumpreader

    fixed collecting of h5 files in subfolders
    fixed timestep
    improved listSpecies
    fixed typo
    skuschel committed Dec 7, 2014
    4b41fb2
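
    A hypothetical sketch of collecting .h5 files from subfolders, the kind
    of fix described above (the helper name collect_h5 and the glob-based
    approach are assumptions, not the actual reader code):

    ```python
    import glob
    import os

    def collect_h5(basedir):
        # '**' descends into subfolders; requires recursive=True (Python >= 3.5)
        pattern = os.path.join(basedir, '**', '*.h5')
        return sorted(glob.glob(pattern, recursive=True))
    ```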

Commits on Dec 11, 2014

  1. add VSIM Simulationsreader

    closes #4
    
    fixed wrong mass and charge return value
    skuschel committed Dec 11, 2014
    a24a4a6
  2. add example tree

    a simple example using fake data will be started by the run-tests
    script.
    skuschel committed Dec 11, 2014
    c7a8ded
  3. fix example to run without a DISPLAY

    this is necessary to run the example on travis-ci
    skuschel committed Dec 11, 2014
    4517536
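
    The usual way to make matplotlib run headlessly is to select a
    non-interactive backend before pyplot is imported; it is an assumption
    that the commit uses exactly this pattern:

    ```python
    import matplotlib
    matplotlib.use('Agg')  # non-interactive backend: no DISPLAY/X server needed
    import matplotlib.pyplot as plt

    fig, ax = plt.subplots()
    ax.plot([0, 1], [0, 1])
    fig.savefig('example.png')  # writing to a file works without a display
    ```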

Commits on Dec 14, 2014

  1. add 'ext' option to MatplotlibPlotter

    this option sets the extension (and therefore the format) of saved
    pictures. If not set, the default is 'png'.
    skuschel committed Dec 14, 2014
    6c50b62

Commits on Dec 15, 2014

  1. autodetect nosetests2 or nosetests in run-tests

    'nosetests2' will be used if available, 'nosetests' if not.
    On systems like Arch Linux, 'nosetests' defaults to the python3 version,
    causing tests to fail. Thus nosetests2 takes preference.
    
    fixes #9
    skuschel committed Dec 15, 2014
    0a7935b
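
    A minimal sketch of that preference order, assuming a helper along
    these lines (not the verbatim run-tests code):

    ```python
    import shutil

    def find_nosetests():
        # prefer the explicit python2 entry point, fall back to 'nosetests'
        for candidate in ('nosetests2', 'nosetests'):
            if shutil.which(candidate) is not None:
                return candidate
        raise RuntimeError('no nosetests executable found')
    ```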

Commits on Dec 25, 2014

  1. 6a47477
  2. Merge branch 'examples'

    skuschel committed Dec 25, 2014
    901a432
  3. Merge pull request #10 from skuschel/vsimreader

    Vsimreader
    skuschel committed Dec 25, 2014
    bd7c244
  4. add EPOCH reader support for subsets

    This commit also fixes species recognition, so that a species is
    recognized even if Px is not dumped.
    fix #8
    skuschel committed Dec 25, 2014
    0fb9903
  5. plot particle data independently of the simgrid

    The ParticleAnalyzer doesn't depend on the simgrid properties anymore.
    In case they aren't dumped, the simgrid=True option of the
    createHistogram*d functions has no effect.

    fix #7
    skuschel committed Dec 25, 2014
    7b98ed9

Commits on Dec 28, 2014

  1. changed reader policy

    The default behavior of the reader classes is now to raise exceptions
    whenever data that was not dumped is requested. This gives the program
    more control to either handle that intentionally as an exception or to
    show comprehensive error messages.
    skuschel committed Dec 28, 2014
    bab0391

Commits on Dec 29, 2014

  1. unify data handling of particle properties

    Coherent implementation of atomic particle properties in
    SingleSpeciesAnalyzer; coherent mapping from ParticleAnalyzer functions
    to the contained SingleSpeciesAnalyzers.
    skuschel committed Dec 29, 2014
    93d94a3
  2. speed up calculation of particle properties

    Calculations of particle properties may now be done inside the
    SingleSpeciesAnalyzer class. Calculations of particle properties that
    require scalar values on a per-species basis (usually mass and charge)
    will speed up, because these constant values are no longer expanded
    using np.repeat before the calculation.
    Timed on a laptop with 10e6 particles (timeit package, 20 repetitions):
    calculating gamma        0.43 --> 0.245 sec (57%)
    calculating Ekin_MeV_qm  0.90 --> 0.36  sec (40%)
    skuschel committed Dec 29, 2014
    97594a3
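
    The effect can be illustrated with plain numpy; the formula below is
    illustrative only, not postpic's actual property definition:

    ```python
    import numpy as np

    px = np.random.rand(10_000_000)   # one momentum component per particle
    mass = 9.1093837015e-31           # constant within a single species
    c = 2.998e8

    # old approach (sketch): expand the constant to a full per-particle array
    mass_arr = np.repeat(mass, len(px))
    gamma_slow = np.sqrt(1 + (px / (mass_arr * c))**2)

    # new approach (sketch): broadcast the scalar, no large temporary array
    gamma_fast = np.sqrt(1 + (px / (mass * c))**2)
    ```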
  3. Merge pull request #14 from skuschel/scalarspeciesproperties

    handle scalar species properties more efficiently and change the reader policy.
    Trying to access particle properties that aren't dumped will now yield a KeyError.
    skuschel committed Dec 29, 2014
    f5899ef
  4. fixed typo

    skuschel committed Dec 29, 2014
    97467fa

Commits on Jan 6, 2015

  1. 1e6a536

Commits on Jan 20, 2015

  1. update setup.py

    skuschel committed Jan 20, 2015
    e42c546
  2. add options to MatplotlibPlotter for saving

    New keyword arguments accepted by the MatplotlibPlotter, together
    with their default values, are:
    * dpi=160
    * facecolor=(1,1,1,0.01)
    * transparent=False
    * size_inches=(9,7)

    This commit changes the default value of transparent from True to False.
    skuschel committed Jan 20, 2015
    bb6f25a
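
    These options map onto matplotlib's standard figure-saving parameters;
    a hedged illustration with plain matplotlib (not postpic's internal code):

    ```python
    import matplotlib
    matplotlib.use('Agg')
    import matplotlib.pyplot as plt

    fig, ax = plt.subplots()
    fig.set_size_inches(9, 7)               # size_inches
    fig.savefig('plot.png',
                dpi=160,                    # dpi
                facecolor=(1, 1, 1, 0.01),  # facecolor (RGBA)
                transparent=False)          # transparent
    ```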

Commits on Jan 21, 2015

  1. tests become windows compatible

    Instead of `run-tests` there is now a `run-tests.py`, which runs all
    the tests on execution. Tested with the latest pythonxy on Windows 7.
    skuschel committed Jan 21, 2015
    3389180
  2. 8346379

Commits on Jan 23, 2015

  1. fixed bug in plotting lineouts

    A lineout is also added to simpleexample.py.
    skuschel committed Jan 23, 2015
    3b8c3d3

Commits on Jan 29, 2015

  1. seed dummyreader with random seed

    Also adjust the dummyreader data to more realistic values. Some plots
    are added to simpleexample.py as well.
    skuschel committed Jan 29, 2015
    9a1b700
  2. 9b2387f

Commits on Feb 2, 2015

  1. update .gitignore

    skuschel committed Feb 2, 2015
    ff401e3

Commits on Feb 3, 2015

  1. Dumpreader_ifc inherits high level functions from FieldAnalyzer

    The Dumpreader_ifc (and thus any Dumpreader) inherits all the high level
    functions from the FieldAnalyzer. The files are still kept separate to
    make the different abstraction levels more obvious. Anyway, there is
    no need to call the FieldAnalyzer directly anymore.
    skuschel committed Feb 3, 2015
    a5d577f
  2. shortcut frequently used functions and classes

    ParticleAnalyzer, identifyspecies, chooseCode, readDump and readSim are
    now directly accessible from postpic (no need to point to the
    subpackages).
    skuschel committed Feb 3, 2015
    992d286
  3. Merge pull request #23 from skuschel/topic-usability

    usability improvements
    
    Most importantly:
    1) Dumpreader_ifc inherits all the high level functions from the FieldAnalyzer, so there is no need to call the FieldAnalyzer directly anymore.
    2) ParticleAnalyzer and chooseCode are linked directly into postpic; no need to call the subpackages anymore.
    skuschel committed Feb 3, 2015
    7b87c0a

Commits on Feb 6, 2015

  1. tidy up, more tests

    Fixed a bug in datahandling.Field.exporttocsv; tests added.

    commit-on-a-plane :)
    skuschel committed Feb 6, 2015
    d12115f

Commits on Feb 10, 2015

  1. 00b37f3
  2. run-tests.py has to return the correct error codes

    `os.system(cmd)` does not return the correct error codes as expected.
    It is now replaced by `subprocess.call(cmd)`, which does.
    skuschel committed Feb 10, 2015
    467ee55
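
    For reference, a minimal self-contained sketch of the difference (the
    command here is only an example):

    ```python
    import os
    import subprocess

    # os.system returns a platform-dependent encoded wait status
    status = os.system('python --version')

    # subprocess.call returns the child's exit code directly
    code = subprocess.call(['python', '--version'])
    print(status, code)
    ```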

Commits on Feb 17, 2015

  1. bd62e23
  2. satisfy new pep8 W503

    skuschel committed Feb 17, 2015
    9c55f05
  3. cb59032

Commits on Mar 9, 2015

  1. bump to v0.1.1

    skuschel committed Mar 9, 2015
    8d5d06c

Commits on Mar 10, 2015

  1. add histogram function written in cython

    simple tests are added as well
    skuschel committed Mar 10, 2015
    d452351
  2. use 1D Histogram functions written in Cython

    Support for Particle Shape Orders 0 and 1.
    skuschel committed Mar 10, 2015
    9b8a7b7
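
    For orientation, a pure-python sketch of order-0 (nearest grid point)
    deposition, i.e. what the cython functions implement much faster; the
    function name and conventions here are assumptions:

    ```python
    import numpy as np

    def histogram_order0(xs, weights, xmin, xmax, nbins):
        # shape order 0: each particle's full weight goes to its nearest bin
        dx = (xmax - xmin) / nbins
        idx = np.clip(((xs - xmin) / dx).astype(int), 0, nbins - 1)
        out = np.zeros(nbins)
        np.add.at(out, idx, weights)  # unbuffered in-place accumulation
        return out
    ```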

Commits on Mar 11, 2015

  1. fix travis ci

    skuschel committed Mar 11, 2015
    157cf25
  2. add particleshapedemo.py

    this file is also included in the tests.
    skuschel committed Mar 11, 2015
    554ec10

Commits on Mar 15, 2015

  1. f888659
  2. 9b70127
  3. add histogram2d to cythonfunctions

    tests included
    skuschel committed Mar 15, 2015
    dc66cef

Commits on Mar 23, 2015

  1. find subpackages automatically

    also add --dirty to git version string creation
    skuschel committed Mar 23, 2015
    5a6f0cf

Commits on Mar 25, 2015

  1. f425ec0
Showing with 16,421 additions and 3,116 deletions.
  1. +1 −0 .gitattributes
  2. +13 −0 .github/issue_template.md
  3. +65 −0 .github/workflows/run-tests.yml
  4. +78 −5 .gitignore
  5. +16 −0 .gitlab-ci.yml
  6. +4 −0 .mailmap
  7. +0 −27 .travis.yml
  8. +175 −0 CHANGELOG.md
  9. +24 −15 CONTRIBUTING.md
  10. +8 −0 MANIFEST.in
  11. +52 −18 README.md
  12. +54 −0 conda/README.rst
  13. +41 −0 conda/common.sh
  14. +72 −0 conda/create_environment.sh
  15. +59 −0 conda/do_tests_env.sh
  16. +12 −8 postpic/analyzer/__init__.py → conda/make_envs.sh
  17. +52 −0 conda/pre-commit
  18. +56 −0 conda/run_tests.sh
  19. +12 −169 doc/Makefile
  20. +0 −38 doc/apidoc/postpic.analyzer.rst
  21. +0 −30 doc/apidoc/postpic.datareader.rst
  22. +0 −22 doc/apidoc/postpic.plotting.rst
  23. +0 −31 doc/apidoc/postpic.rst
  24. +0 −348 doc/conf.py
  25. +0 −1 doc/contributing.rst
  26. +0 −53 doc/dummyexample.py
  27. +0 −28 doc/index.rst
  28. +14 −220 doc/make.bat
  29. +0 −1 doc/readme.rst
  30. +1 −0 doc/source/CHANGELOG.md
  31. +1 −0 doc/source/CONTRIBUTING.md
  32. +2 −2 doc/{ → source}/apidoc/modules.rst
  33. +50 −0 doc/source/apidoc/postpic.datareader.rst
  34. +50 −0 doc/source/apidoc/postpic.io.rst
  35. +26 −0 doc/source/apidoc/postpic.particles.rst
  36. +18 −0 doc/source/apidoc/postpic.plotting.rst
  37. +53 −0 doc/source/apidoc/postpic.rst
  38. +210 −0 doc/source/conf.py
  39. +1 −1 doc/{ → source}/getstarted.rst
  40. +34 −0 doc/source/index.rst
  41. +21 −0 doc/source/introduction.rst
  42. +13 −0 doc/updateapidoc.sh
  43. +2 −0 examples/.gitignore
  44. +336 −0 examples/kspace-test-2d.py
  45. +151 −0 examples/openPMD.py
  46. +136 −0 examples/particleshapedemo.py
  47. +128 −0 examples/simpleexample.py
  48. +190 −0 examples/time_cythonfunctions.py
  49. +8 −4 pip-requirements.txt
  50. +56 −36 postpic/__init__.py
  51. +20 −0 postpic/_compat/__init__.py
  52. +121 −0 postpic/_compat/functions.py
  53. +198 −0 postpic/_compat/mixins.py
  54. +0 −112 postpic/_const.py
  55. +384 −0 postpic/_field_calc.py
  56. +683 −0 postpic/_version.py
  57. +0 −191 postpic/analyzer/analyzer.py
  58. +0 −181 postpic/analyzer/fields.py
  59. +0 −706 postpic/analyzer/particles.py
  60. +2,133 −239 postpic/datahandling.py
  61. +105 −248 postpic/datareader/__init__.py
  62. +375 −0 postpic/datareader/datareader.py
  63. +82 −50 postpic/datareader/dummy.py
  64. +149 −80 postpic/datareader/epochsdf.py
  65. +375 −0 postpic/datareader/openPMDh5.py
  66. +404 −0 postpic/datareader/smileih5.py
  67. +183 −0 postpic/datareader/vsimhdf5.py
  68. +121 −0 postpic/experimental.py
  69. +1,378 −0 postpic/helper.py
  70. +95 −0 postpic/helper_fft.py
  71. +78 −0 postpic/io/__init__.py
  72. +39 −0 postpic/io/common.py
  73. +61 −0 postpic/io/csv.py
  74. +128 −0 postpic/io/image.py
  75. +116 −0 postpic/io/npy.py
  76. +364 −0 postpic/io/vtk.py
  77. +37 −0 postpic/particles/__init__.py
  78. +465 −0 postpic/particles/_particlestogrid.pyx
  79. +321 −0 postpic/particles/_routines.py
  80. +1,281 −0 postpic/particles/particles.py
  81. +230 −0 postpic/particles/scalarproperties.py
  82. +4 −2 postpic/plotting/__init__.py
  83. +112 −57 postpic/plotting/plotter_matplotlib.py
  84. +34 −16 pre-commit
  85. +0 −25 run-tests
  86. +136 −0 run-tests.py
  87. +11 −0 setup.cfg
  88. +47 −7 setup.py
  89. +0 −56 test/test_analyzer.py
  90. +724 −55 test/test_datahandling.py
  91. +5 −5 test/test_dumpreader.py
  92. +18 −0 test/test_field_calc.py
  93. +0 −20 test/test_fields.py
  94. +242 −0 test/test_helper.py
  95. +74 −0 test/test_io.py
  96. +87 −9 test/test_particles.py
  97. +376 −0 test/test_particlestogrid.py
  98. +53 −0 update-gh-pages-branch.sh
  99. +2,277 −0 versioneer.py
1 change: 1 addition & 0 deletions .gitattributes
@@ -0,0 +1 @@
postpic/_version.py export-subst
13 changes: 13 additions & 0 deletions .github/issue_template.md
@@ -0,0 +1,13 @@

Please tell us which version of postpic you are using by adding the output of
```python
import postpic as pp
print(pp.__version__)
```
Please also add whether you are using python2 or python3. You can find out using
```python
import sys
print(sys.version)
```

Now continue with your question or bug report here. Thank you!
65 changes: 65 additions & 0 deletions .github/workflows/run-tests.yml
@@ -0,0 +1,65 @@
name: run-tests

on: [push, pull_request]

jobs:
  latest:
    runs-on: ubuntu-latest
    strategy:
      fail-fast: false
      matrix:
        python-version: ["3.10", "3.11", "3.12"]

    steps:
      - uses: actions/checkout@v4
      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v4
        with:
          python-version: ${{ matrix.python-version }}
      - name: Install dependencies
        run: |
          python -m venv --system-site-packages env
          source env/bin/activate
          python -m pip install --upgrade pip
          python -m pip install -r pip-requirements.txt
          python --version
          python -c 'import numpy; print(numpy.__version__)'
          python -c 'import cython; print(cython.__version__)'
          python ./setup.py develop
          # python -m pip -vvv install -e .
          # fails with "ModuleNotFoundError: No module named 'Cython'"
          # running setup.py directly is just a workaround.
      - name: run tests
        run: |
          source env/bin/activate
          python run-tests.py --skip-setup

  other-os:
    runs-on: ${{ matrix.os }}
    strategy:
      matrix:
        os: [macos-latest, windows-latest]
        python-version: [3.12]

    # have to copy steps from above, as anchors are currently
    # not supported by github workflow (Jan 2020).
    steps:
      - uses: actions/checkout@v4
      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v4
        with:
          python-version: ${{ matrix.python-version }}
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          python -m pip install -r pip-requirements.txt
          python --version
          python -c 'import numpy; print(numpy.__version__)'
          python -c 'import cython; print(cython.__version__)'
          python ./setup.py develop
          # python -m pip -vvv install -e .
          # fails with "ModuleNotFoundError: No module named 'Cython'"
          # running setup.py directly is just a workaround.
      - name: run tests
        run: |
          python run-tests.py --skip-setup
83 changes: 78 additions & 5 deletions .gitignore
@@ -1,8 +1,81 @@
# auto generated by setup.py
/_examplepictures

# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]

# C extensions
*.so

# Cython
*.c
*.html

# Distribution / packaging
.Python
/env/
/build/
/develop-eggs/
/dist/
/downloads/
/eggs/
/.eggs/
/lib/
/lib64/
/parts/
/sdist/
/var/
*.egg-info/
.installed.cfg
*.egg

# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.coverage
.cache
nosetests.xml
coverage.xml

# Translations
*.mo
*.pot

# Django stuff:
*.log

# Sphinx documentation
/doc/build/

# PyBuilder
target/

# more stuff
*~
*.pyc
*.odt
.project
.pydevproject
*.egg-info
build/
dist/
doc/_build
*.png
*.swp
.idea/

# there might be ipython notebooks
.ipynb_checkpoints

# if using spyder for development
.spyderworkspace
.spyderproject

# ignored for github codespaces
.venv
16 changes: 16 additions & 0 deletions .gitlab-ci.yml
@@ -0,0 +1,16 @@


image: python:latest

before_script:
  - python -V  # Print out python version for debugging

stages:
  - test

job1:
  stage: test
  script:
    - python -m pip install -r pip-requirements.txt
    - ./run-tests.py
4 changes: 4 additions & 0 deletions .mailmap
@@ -0,0 +1,4 @@
Stephan Kuschel <stephan.kuschel@gmail.com> <Stephan.Kuschel@gmail.com>
Mark Yeung <myeung01@qub.ac.uk>
Georg Wittig <georg.wittig@uni-hamburg.de>
Dominik Hollatz <dominik.hollatz@uni-jena.de>
27 changes: 0 additions & 27 deletions .travis.yml

This file was deleted.

175 changes: 175 additions & 0 deletions CHANGELOG.md
@@ -0,0 +1,175 @@
Changelog of postpic
====================

current master
--------------


**Highlights**

* Add support to read the `Smilei` PIC (https://smileipic.github.io/Smilei/) data format in both cartesian and azimuthal geometry. Postpic uses a built-in azimuthal mode expansion very similar to the one used for fbpic.
* To read smilei data, postpic relies only on the hdf5 package and not on smilei's happi module for data access. Particle IDs (ParticleTracking as described by smilei) can be read directly from the hdf5. Happi requires sorting the IDs and writing a new hdf5, which can be twice as big as the original dumps. Using postpic's access, this step is skipped and access is thus much faster (but, by default, with unordered particle IDs as in any other code).


**Incompatible adjustments to previous version**

* `scipy.integrate.simps` has been removed in scipy 1.14. Postpic uses `scipy.integrate.simpson` which has been introduced in scipy 1.6.0 (Dec 31st 2020).
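
A minimal example of the replacement call (illustrative values only):

```python
import numpy as np
from scipy.integrate import simpson  # replaces the removed scipy.integrate.simps

x = np.linspace(0, np.pi, 101)
print(simpson(np.sin(x), x=x))  # ~2.0
```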


**Other improvements and new features**

* Compatibility with numpy 2.0 and the sanitizer checks of numexpr 2.8.6. ([#281](https://github.com/skuschel/postpic/pull/281) -- [a304e73](https://github.com/skuschel/postpic/commit/a304e7370259b3b09873a998ccc8f6a6c0241b2c))


v0.5
----

2022-10-22

This is the last version with python 2 support. Changes in setuptools (see for example python [PEP 517](https://peps.python.org/pep-0517/) and [PEP 660](https://peps.python.org/pep-0660/)) require changes in the setup. Backward compatibility will therefore be dropped. Current tests already do not include python2 anymore.


**Highlights**

* Parallelized implementation of `Field.map_coordinates`
* Added support for reading files written by the fbpic v0.13.0 code ( https://github.com/fbpic ). The fields can be accessed by `Er` and `Etheta`, which have been introduced to the fbpic data reader. Particles are saved in Cartesian coordinates, hence the interface does not change there.
* Reimplementation of `Field.fft`, see below.

**Incompatible adjustments to previous version**

* Reimplementation of `Field.fft`. This changes the phases of Fourier transforms in a way to make it more consistent. However, if your code depends on the phases, `Field.fft()` now has a parameter `old_behaviour` that can be used to switch back to the old behaviour.
* Indexing a field by a number (integer or float) will now remove the according axis altogether, instead of leaving behind a length-1 axis.
A new class `KeepDim` was introduced through which the old behaviour can still be used.
Behaviour of PostPic before this change:
```
field.shape == (x,y,z)
field[:, 0.0, :].shape == (x,1,z)
```
Using the new class `KeepDim`, it is possible to retain that behaviour in the new version:
```
field.shape == (x,y,z)
field[:, 0.0, :].shape == (x, z)
field[:, KeepDim(0.0), :].shape == (x,1,z)
```

**Other improvements and new features**

* New convenience method `Field.copy`


v0.4
----

2019-01-14

**Highlights**

* Improved interoperability with numpy:
* `Field` now understands most of numpy's broadcasting
* `Field` can be used as an argument to numpy's ufuncs.
* Import and export routines for `Field` including vtk, compatible with [paraview](https://www.paraview.org/).
* Coordinate mapping and transform for `Field`.
* Brand new `Multispecies.__call__` interface: This takes an expression, which is evaluated by `numexpr`, strongly increasing the speed of per-particle scalar computations. It's also really user-friendly.

**Incompatible adjustments to previous version**
* `postpic.Field` method `exporttocsv` is removed. Use `export` instead.
* `postpic.Field` method `transform` is renamed to `map_coordinates`, matching the underlying scipy-function.
* `postpic.Field` method `mean` now has an interface matching `ndarray.mean`. This means that, if the `axis` argument is not given, it averages across all axes instead of only the last axis.
* `postpic.Field.map_coordinates` applies now the Jacobian determinant of the transformation, in order to preserve the definite integral.
In your code you will need to turn calls to `Field.transform` into calls to `Field.map_coordinates` and set the keyword argument `preserve_integral=False` to get the old behaviour.
* `postpic.MultiSpecies.createField` now has keyword arguments (`bins`, `shape`), which replace the corresponding entries from the `optargsh` dictionary. The use of the `optargsh` keyword argument has been deprecated.
* The functions `MultiSpecies.compress`, `MultiSpecies.filter`, `MultiSpecies.uncompress` and `ParticleHistory.skip` return a new object now. Before this release, they modified the current object. Assuming `ms` is a `MultiSpecies` object, the corresponding adjustments read:<br>
old: `ms.filter('gamma > 2')`<br>
new: `ms = ms.filter('gamma > 2')`
* `plotter_matplotlib` has a new default symmetric colormap

**Other improvements and new features**
* Overload of the `~` (invert) operator on `postpic.MultiSpecies`. If `ms` is a MultiSpecies object with filtered particles (created by the use of `compress` or `filter`), then `~ms` inverts the selection of particles.
* `postpic.Field` has methods `.loadfrom` and `.saveto`. These can be used to save a Field to a ` .npz` file for later use. Use `.loadfrom` to load a Field object from such a file. All attributes of the Field are restored.
* `postpic.Field` has methods `.export` and `.import`. These are used to export fields to and import fields from foreign file formats such as `.csv`, `.vtk`, `.png`, `.tif`, `.jpg`. It is not guaranteed to get all attributes back after `.export`ing and then `.import`ing a Field. Some formats are not available for both methods.
* `postpic` has a new function `time_profile_at_plane` that 'measures' the temporal profile of a pulse while passing through a plane
* `postpic` has a new function `unstagger_fields` that will take a set of staggered fields and returns the fields after removing the stagger
* `postpic` has a new function `export_vector_vtk` that takes up to three fields and exports them as a vector field in the `.vtk` format
* `postpic` has a new function `export_scalars_vtk` that takes up to four fields and exports them as multiple scalar fields on the same grid in the `.vtk` format
* `postpic.Field` works now with all numpy ufuncs, also with `ufunc.reduce`, `ufunc.outer`, `ufunc.accumulate` and `ufunc.at`
* `postpic.Field` now supports broadcasting like numpy arrays, for binary operators as well as binary ufunc operations
* `postpic.Field` has methods `.swapaxes`, `.transpose` and properties `.T` and `ndim` compatible to numpy.ndarray
* `postpic.Field` has methods `all`, `any`, `max`, `min`, `prod`, `sum`, `ptp`, `std`, `var`, `mean`, `clip` compatible to numpy.ndarray
* `postpic.Field` has a new method `map_axis_grid` for transforming the coordinates only along one axis which is simpler than `map_coordinates`, but also takes care of the Jacobian
* `postpic.Field` has a new method `autocutout` used to slice away close-to-zero regions from the borders
* `postpic.Field` has a new method `fft_autopad` used to pad a small number of grid points to each axis such that the dimensions of the Field are favourable to FFTW
* `postpic.Field` has a new method `adjust_stagger_to` to adjust the grid origin to match the grid origin of another field
* `postpic.Field` has a new method `phase` to get the unwrapped phase of the field
* `postpic.Field` has a new method `derivative` to calculate the derivative of a field
* `postpic.Field` has new methods `flip` and `rot90` similar to `np.flip()` and `np.rot90()`
* `postpic.Field.topolar` has new defaults for extent and shape
* `postpic.Field.integrate` now uses the simpson method by default
* `postpic.Field.integrate` now has a new 'fast' method that uses numexpr, suitable for large datasets
* New module `postpic.experimental` to contain experimental algorithms for your reference. These algorithms are not meant to be useable as-is, but may serve as recipes to write your own algorithms.
* k-space reconstruction from EPOCH dumps has greatly improved accuracy due to a new algorithm correctly incorporating the frequency response of the implicit linear interpolation performed by EPOCH's half-steps
* `plotter_matplotlib.plotField` allows to override `aspect` option to `imshow`

v0.3.1
------

2017-10-03

Only internal changes. Versioning is handled by [versioneer](https://github.com/warner/python-versioneer).


v0.3
----

2017-09-28

Many improvements in terms of speed and features. Unfortunately some changes are not backwards-compatible to v0.2.3, so you may have to adapt your code to the new interface. For details, see the corresponding section below.


**Highlights**
* kspace reconstruction and propagation of EM waves.
* `postpic.Field` properly handles operator overloading and slicing. Slicing can be index based (integers) or referring the actual physical extent on the axis of a Field object (using floats).
* Expression based interface to particle properties (see below)

**Incompatible adjustments to previous version**
* New dependency: Postpic requires the `numexpr` package to be installed now.
* Expression based interface for particles: If `ms` is a `postpic.MultiSpecies` object, then the call `ms.X()` has been deprecated. Use `ms('x')` instead. This new particle interface can handle expressions that the `numexpr` package understands. Also `ms('sqrt(x**2 + gamma - id)')` is valid. This interface is easier to use, has better functionality and is faster due to `numexpr` (see the sketch after this list).
The list of known per particle scalars and their definitions is available at `postpic.particle_scalars`. In addition all constants of `scipy.constants.*` can be used.
In case you find a particle scalar that you use regularly which is not in the list, please open an issue and let us know!
* The `postpic.Field` class now behaves more like a `numpy.ndarray`, which means that almost all functions return a new field object instead of modifying the current one. This change affects the following functions: `half_resolution`, `autoreduce`, `cutout`, `mean`.
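
A self-contained sketch of why `numexpr` speeds up per-particle scalar computations like the `ms('...')` interface above (the arrays and the formula are illustrative only):

```python
import numexpr as ne
import numpy as np

x = np.random.rand(10_000_000)
gamma = 1 + np.random.rand(10_000_000)

r_np = np.sqrt(x**2 + gamma)              # numpy: allocates several temporaries
r_ne = ne.evaluate('sqrt(x**2 + gamma)')  # numexpr: one fused pass over the data
```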


**Other improvements and new features**
* `postpic.helper.kspace` can reconstruct the correct k-space from three EM fields provided to distinguish between forward and backward propagating waves (thanks to @Ablinne)
* `postpic.helper.kspace_propagate` will turn the phases in k-space to propagate the EM-wave.
* List of new functions in `postpic` from `postpic.helper` (thanks to @Ablinne): `kspace_epoch_like`, `kspace`, `kspace_propagate`.
* `Field.fft` function for fft optimized with pyfftw (thanks to @Ablinne).
* `Field.__getitem__` to slice a Field object. If integers are provided, it will interpret them as gridpoints. If floats are provided, they are interpreted as the physical region of the data and slice along the corresponding axis positions (thanks to @Ablinne).
* `Field` class has been massively improved (thanks to @Ablinne): The operator overloading is now properly implemented and, thanks to the `__array__` method, it can be interpreted by numpy as an ndarray whenever necessary.
* List of new functions of the `Field` class (thanks to @Ablinne): `meshgrid`, `conj`, `replace_data`, `pad`, `transform`, `squeeze`, `integrate`, `fft`, `shift_grid_by`, `__getitem__`, `__setitem__`, `evaluate`.
* List of new properties of the `Field` class (thanks to @Ablinne): `matrix`, `real`, `imag`, `angle`.
* Many performance optimizations using the pyfftw library (optional), numexpr (now required by postpic), or by avoiding in-memory data copying.
* Lots of fixes


v0.2.3
------

2017-02-17

This release brings some bugfixes and various new features.

**Bugfixes**
* Particle property Bz.
* plotting of contourlevels.

**Improvements and new features**
* openPMD support (thanks to @ax3l).
* ParticleHistory class to collect particle information over the entire simulation.
* added particle properties v{x,y,z} and beta{x,y,z}.
* Lots of performance improvements: particle data is copied much less in memory now.


v0.2.2 and earlier
------------------

There hasn't been any changelog. Don't use those versions anymore.
39 changes: 24 additions & 15 deletions CONTRIBUTING.md
@@ -1,8 +1,8 @@

Contributing to the PostPic code base
Contributing to the postpic code base
=====================================

Any help contributing to the PostPic project is greatly appreciated! Feel free to contact any of the developers or ask for help using the [Issues](https://github.com/skuschel/postpic/issues) page.
Any help contributing to the postpic project is greatly appreciated! Feel free to contact any of the developers or ask for help using the [Issues](https://github.com/skuschel/postpic/issues) page.

Why me?
-------
@@ -15,38 +15,47 @@ How to contribute?

Reporting bugs or asking questions works with a GitHub account simply on the [Issues](https://github.com/skuschel/postpic/issues) page.

For any coding you need to be familiar with [git](http://git-scm.com/). It's a distributed version control system created by Linus Torvalds (and more importantly: he is also using it for maintaining the linux kernel). There is a nice introduction to git at [try.github.io/](http://try.github.io/), but in general you can follow the bootcamp section at [https://help.github.com/](https://help.github.com/) for your first steps.

One of the most comprehensive guides is probably [this book](http://git-scm.com/doc). Just start reading from the beginning. It is worth it!

## The Workflow
The Workflow
------------

The typical workflow should be:
Adding a feature is often triggered by the personal demand for it. That's why production-ready features should propagate to master as fast as possible. Everything on master is considered to be production ready. We follow the [github-flow](http://scottchacon.com/2011/08/31/github-flow.html), which describes this very nicely.

In short:

0. [Fork](https://help.github.com/articles/fork-a-repo) the PostPic repo to your own GitHub account.
0. Clone from your fork to your local computer.
0. Implement a new feature/bugfix/documentation/whatever commit to your local repository. It is highly recommended that new features will have test cases.
0. Create a branch whose name tells what you do. Something like `codexy-reader` or `fixwhatever`,... is a good choice. Do NOT call it `issue42`. Git history should be clearly readable without external information. If it's somehow unspecific, in the worst case call it `dev` or even commit onto your `master` branch.
0. Implement a new feature/bugfix/documentation/whatever commit to your local repository. It is highly recommended that the new features will have test cases.
0. KEEP YOUR FORK UP TO DATE! Your fork is yours, only. So you have to update it to whatever happens in the main repository. To do so add the main repository as a second remote with

`git remote add upstream git@github.com:skuschel/postpic.git`

and pull from it regularly with

`git pull --rebase upstream master`

0. Make sure all tests are running smoothly (the `run-tests` script also involves pep8 style verification!)
0. push to your fork and create a [pull request](https://help.github.com/articles/using-pull-requests/) to merge your changes into the codebase.

## Coding and general remarks
0. Make sure all tests are running smoothly (the `run-tests.py` script also involves pep8 style verification!) Run `run-tests.py` before EVERY commit!
0. push to your fork and create a [pull request](https://help.github.com/articles/using-pull-requests/) EARLY! Even if your feature or fix is not yet finished, create the pull request and start it with `WIP:` or `[WIP]` (work-in-progress) to show it's not yet ready to merge in. But the pull request will
* trigger travis.ci to run the tests whenever you push
* show other people what you work on
* ensure early feedback on your work


Coding and general remarks
-------------------------

* Make sure, that the `run-tests` script exits without error on EVERY commit. To do so, it is HIGHLY RECOMMENDED to add the `pre-commit` script as the git pre-commit hook. For instructions see [pre-commit](../master/pre-commit).
* The Coding style is according to slightly simplified pep8 rules. This is included in the `run-tests` script. If that script runs without error, you should be good to <del>go</del> commit.
* If your implemented feature works as expected you can send the pull request to the master branch. Additional branches should be used only if there are unfinished or experimental features.
* Make sure, that the `run-tests.py` script exits without error on EVERY commit. To do so, it is HIGHLY RECOMMENDED to add the `pre-commit` script as the git pre-commit hook. For instructions see [pre-commit](../master/pre-commit).
* The Coding style is according to slightly simplified pep8 rules. This is included in the `run-tests.py` script. If that script runs without error, you should be good to <del>go</del> commit.
* Add the GPLv3+ licence notice on top of every new file. If you add a new file you are free to add your name as an author. This will let other people know that you are in charge if there is any trouble with the code. This is only useful if the file you provide adds functionality like a new datareader. That's why the `__init__.py` files typically do not have a name written. In doubt, the git revision history will always show who added which line.


What to contribute?
-------------------

## What to contribute?

Here is a list for your inspiration:

8 changes: 8 additions & 0 deletions MANIFEST.in
@@ -0,0 +1,8 @@
include *.md
include LICENSE
include CONTRIBUTING.md
include postpic/particles/*.pyx
include _version.txt
include postpic/_version.txt
include versioneer.py
include postpic/_version.py
70 changes: 52 additions & 18 deletions README.md
@@ -1,43 +1,77 @@
PostPic
postpic
=======
[![Build Status](https://travis-ci.org/skuschel/postpic.svg?branch=master)](https://travis-ci.org/skuschel/postpic)
[![Documentation Status](https://readthedocs.org/projects/postpic/badge/?version=latest)](https://readthedocs.org/projects/postpic/?badge=latest)

PostPic is an open-source post processor for Particle-in-cell (PIC) simulations written in python. If you are doing PIC simulations (likely for your PhD in physics...) you need tools to provide proper post-processing to create nice graphics from many GB of raw simulation output data -- regardless of what simulation code you are using.
[![run-tests](https://github.com/skuschel/postpic/workflows/run-tests/badge.svg)](https://github.com/skuschel/postpic/actions?query=workflow%3Arun-tests)
[![PyPI version](https://badge.fury.io/py/postpic.png)](http://badge.fury.io/py/postpic)

Idea of PostPic
Postpic is an open-source post-processor for Particle-in-cell (PIC) simulations written in python. If you are doing PIC simulations (likely for your PhD in physics...) you need tools to provide proper post-processing to create nice graphics from many GB of raw simulation output data -- regardless of what simulation code you are using.

**For working examples, please go to https://github.com/skuschel/postpic-examples**

With questions, issues, ideas or general remarks feel free to get in touch via
* the public matrix chatroom: https://matrix.to/#/#postpic:matrix.org
* or use the [Github issue tracker](https://github.com/skuschel/postpic/issues)

The (technical, but complete) documentation is hosted on
https://skuschel.github.io/postpic/


Idea of postpic
---------------

The basic idea of PostPic is to calculate the plots you are interested in just from the basic data the Simulation provides. This data includes electric and magnetic fields and a tuple (weight, x, y, z, px, py, pz) for every macro-particle. Anything else you want to look at (for example a spectrum at your detector) should just be calculated from these values. This is exactly what postpic can do for you, and even more:
The basic idea of postpic is to calculate the plots you are interested in just from the basic data the simulation provides. This data includes electric and magnetic fields and a tuple (`weight`, `x`, `y`, `z`, `px`, `py`, `pz`, `id`, `mass`, `charge`, `time`) for every macro-particle. Anything else you would like to look at (for example a spectrum at your detector) should just be calculated from these values. This is exactly what postpic can do for you, and even more:

postpic has a unified interface for reading the required simulation data. If the simulation code of your choice is not supported by postpic, this is the perfect opportunity to add a new datareader.

Additionally postpic can plot and label your plot automatically. This makes it easy to work with and avoids mistakes. Currently matplotlib is used for that but this is also extensible.


PostPic has a unified interface for reading the required simulation data. If the simulation code of your choice is not supported by postpic, this is the perfect opportunity to add a new datareader.

Additionally PostPic can plot and label your plot automatically. This makes it easy to work with and avoids mistakes. Currently matplotlib is used for that but this is also extensible.
Installation
------------

Usage
-----
Postpic can be used with Python 3. The use of Python 2 is deprecated and will be removed in the future.

Download the latest version and run
1) Always install in a virtualenv. You can create a new virtualenv with:

`python2 setup.py install --user`
`python -m venv --system-site-packages ~/.venv/defaultpyvenv`

to install PostPic in your local home folder. After that you should be able to import it into any python session using `import postpic`.
2) activate the environment using `source ~/.venv/defaultpyvenv/bin/activate`.

If you are changing the postpic codebase it is probably better to link the current folder. This can be done by
3) Install into the venv
```
pip install git+https://github.com/skuschel/postpic.git
```

`python2 setup.py develop --user`
Please note that, depending on your system python setup, `pip` may default to `python2`.
In that case you will need to use `pip3` instead to make sure that postpic is installed for `python3`:

However, PostPic's main functions should work but there is still much work to do and lots of documentation missing. If postpic awakened your interest you are welcome to contribute. Even if your programming skills are limited there are various ways to contribute and adapt PostPic for your particular research. Read [CONTRIBUTING.md](../master/CONTRIBUTING.md).
`pip3 install git+https://github.com/skuschel/postpic.git`

The latest *release* is also available in the python package index [pypi](https://pypi.python.org/pypi/postpic/), thus it can be installed by using the python package manager pip:

PostPic in Science
`pip install --user postpic`

**Developers** should clone the git repository (or their fork of it) and install it using

`pip install -e .`

This command will link the current folder to global python scope, such that changing the code will immediately update the installed package.

**After installing** you should be able to import it into any python session using `import postpic`.

Postpic's main functions should work but there is still much work to do and lots of documentation missing. If postpic awakened your interest you are welcome to contribute. Even if your programming skills are limited there are various ways to contribute and adapt postpic for your particular research. Read [CONTRIBUTING.md](../master/CONTRIBUTING.md).


Postpic in Science
------------------

If you use PostPic for your research and present or publish results, please show your support for the PostPic project and its [contributors](https://github.com/skuschel/postpic/graphs/contributors) by:
If you use postpic for your research and present or publish results, please show your support for the postpic project and its [contributors](https://github.com/skuschel/postpic/graphs/contributors) by:

* Add a note in the acknowledgements section of your publication.
* Drop a line to one of the core developers including a link to your work to make them smile (there might be a public list in the future).


License
-------

54 changes: 54 additions & 0 deletions conda/README.rst
@@ -0,0 +1,54 @@

Scripts for testing
===================

These scripts will help you to test postpic against different python platforms.

Please note that the canonical tests for the postpic project are the ones run via ``.travis.yml`` and ``travis-ci``.
In contrast, the scripts in this directory are meant to provide a way to run local tests against different platforms in parallel and as fast as possible, so they are suitable as a pre-commit hook.

Prerequisites
-------------

All you need to install on your system beforehand is the ``conda`` package manager.
On Arch linux it is available as an AUR package ``python-conda``.
The ``conda`` package manager is used to create environments like ``virtualenv``, where it is able to install different python versions, as e.g. ``pyenv`` does.
It can easily install binary python extensions like ``numpy`` into these environments from pre-compiled packages that match the python interpreter, which is quicker and more robust than using ``pip``.
Alternatives to these shell scripts may be tools like ``tox``/``detox`` which will use ``pyenv``, ``virtualenv`` and ``pip`` for all packages, which means a lot of different tools that interoperate, which may or may not lead to difficulties.
The solution here uses only ``bash``, ``conda`` and ``pip``, which may or may not be more robust.

Beware that, by default, they will create a directory ``conda``, next to your clone of postpic.git.
So if you have cloned postpic to ``/home/username/postpic``, the scripts will use ``/home/username/conda`` as base directory.
With the default configuration you will need a few gigabytes of space there.
You can override this by changing the ``CONDA_BASE`` in ``conda/common.sh``.
See ``conda/common.sh`` to find out how the other path names mentioned here are created and other variables.

Quickstart
----------

Run ``conda/make_envs.sh`` once.
After that, you can use ``conda/run_tests.sh`` as a replacement for ``run-tests.py``.
You can even symlink the ``pre-commit`` from this folder to your ``.git/hooks`` like this, if the current working directory is the directory which contains this file::

ln -s ../../conda/pre-commit ../.git/hooks/pre-commit


Description of the scripts
--------------------------

The scripts have the following tasks:

* create_environment.sh ENV
Build the environment ENV in ``CONDA_ENVS/ENV`` and create a hard-linked copy in ``CONDA_ENVS/ENV_clean``.

* make_envs.sh
Build all the environments listed in ``ENVS``.

* do_tests_env.sh ENV ARGS
run the tests on a single environment ENV. This will assume that the copy of your sources in ``SOURCETMP`` is already up-to-date.
    Additional ARGS are passed to postpic's ``run-tests.py`` script.

* run_tests.sh ARGS
This runs the tests on all environments after updating the copy of your sources in ``SOURCETMP`` with the contents of ``SOURCE``.
    ARGS are passed on to ``run-tests.py`` (via ``do_tests_env.sh``). Because environments might be modified during the tests (e.g. by installation of
third-party packages), a new hard-linked copy of the clean environment is used each time.
41 changes: 41 additions & 0 deletions conda/common.sh
@@ -0,0 +1,41 @@
#
# This file is part of postpic.
#
# postpic is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# postpic is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with postpic. If not, see <http://www.gnu.org/licenses/>.
#
# Copyright Alexander Blinne 2017

# just get the directory that this file resides within
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"

# configure these to your needs, defaults should work fine though

# this is the base directory used to store the conda environments, copies of the source-tree and the log files
CONDA_BASE=$DIR/../../conda
CONDA_ENVS=$CONDA_BASE/envs
LOGDIR=$CONDA_BASE/log
TMPDIR=$CONDA_BASE/tmp
mkdir -p "$CONDA_BASE" "$CONDA_ENVS" "$LOGDIR" "$TMPDIR" # just make sure the directories exist

# this is the path to the source that should be tested.
SOURCE="$(dirname "$DIR")" # assume the parent dir of this dir

# this is just the name of the source dir, usually postpic or postpic.git
SOURCEDIRNAME="$(basename "$SOURCE")"

# directory to hold a copy of the source which will be created and maintained by run_tests.sh and then used by do_tests_env.sh
SOURCETMP=$TMPDIR/$SOURCEDIRNAME

# this is the list of all environments specified in create_environment.sh
ENVS=(ubuntu1404 ubuntu1604py2 ubuntu1604py3 default2 default3)
72 changes: 72 additions & 0 deletions conda/create_environment.sh
@@ -0,0 +1,72 @@
#!/bin/bash
#
# This file is part of postpic.
#
# postpic is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# postpic is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with postpic. If not, see <http://www.gnu.org/licenses/>.
#
# Copyright Alexander Blinne 2017

# just get the directory that this file resides within and load the common variables
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
. "$DIR/common.sh"

ENV="$1"

rm -rf "${CONDA_ENVS}/${ENV}" "${CONDA_ENVS}/${ENV}_clean"

case $ENV in
ubuntu1404)
conda create -y -p "$CONDA_ENVS/$ENV" python=3.4 matplotlib=1.3.1 numpy=1.8.1 scipy=0.13.3 cython=0.20.1 libgfortran=1 pep8 nose sphinx numexpr future
source activate "$CONDA_ENVS/$ENV"
pip install recommonmark
pip install urllib3
pip install pyvtk
source deactivate
;;

ubuntu1604py2)
conda create -y -p "$CONDA_ENVS/$ENV" python=2.7 matplotlib=1.5.1 numpy=1.11.0 scipy=0.17.0 cython=0.23.4 pycodestyle nose sphinx numexpr future urllib3 functools32
source activate "$CONDA_ENVS/$ENV"
pip install recommonmark
pip install pyvtk
source deactivate
;;

ubuntu1604py3)
conda create -y -p "$CONDA_ENVS/$ENV" python=3.5 matplotlib=1.5.1 numpy=1.11.0 scipy=0.17.0 cython=0.23.4 pycodestyle nose sphinx numexpr future urllib3
source activate "$CONDA_ENVS/$ENV"
pip install recommonmark
pip install pyvtk
source deactivate
;;

default2)
conda create -y -p "$CONDA_ENVS/$ENV" python=2 matplotlib numpy scipy cython pycodestyle nose sphinx numexpr future urllib3 functools32
source activate "$CONDA_ENVS/$ENV"
pip install recommonmark
pip install pyvtk
source deactivate
;;

default3)
conda create -y -p "$CONDA_ENVS/$ENV" python=3 matplotlib numpy scipy cython pycodestyle nose sphinx numexpr future urllib3
source activate "$CONDA_ENVS/$ENV"
pip install recommonmark
pip install pyvtk
source deactivate
;;

esac

cp -al "${CONDA_ENVS}/${ENV}" "${CONDA_ENVS}/${ENV}_clean"
59 changes: 59 additions & 0 deletions conda/do_tests_env.sh
@@ -0,0 +1,59 @@
#!/bin/bash
#
# This file is part of postpic.
#
# postpic is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# postpic is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with postpic. If not, see <http://www.gnu.org/licenses/>.
#
# Copyright Alexander Blinne 2017

# just get the directory that this file resides within and load the common variables
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
. "$DIR/common.sh"

ENV="$1"
shift

#make sure environment is clean
echo "Prepping $ENV..."
rm -rf "${CONDA_ENVS}/${ENV}"
cp -al "${CONDA_ENVS}/${ENV}_clean" "${CONDA_ENVS}/${ENV}"


#make copy of source to tmp dir
SRCDIR="$TMPDIR/$ENV"
rm -rf "$SRCDIR/$SOURCEDIRNAME"
mkdir -p "$SRCDIR"
cp -al "$SOURCETMP" "$SRCDIR/$SOURCEDIRNAME"

#go to source dir and activate environment
pushd "$SRCDIR/$SOURCEDIRNAME" > /dev/null
source activate "$CONDA_ENVS/$ENV"

echo "Starting test for $ENV..."

# simple redirection mixes stdout and stderr in a non-ordered way, very unreadable!
# ./setup.py develop >$LOGDIR/$ENV.log 2>&1
# ./run-tests.py "$@" >$LOGDIR/$ENV.log 2>&1

# script leaves in vt escape codes. A little bad, but ok.
script -c "./setup.py develop" "$LOGDIR/$ENV.log" >/dev/null
script -ae -c "./run-tests.py --skip-setup $*" "$LOGDIR/$ENV.log" >/dev/null

RESULT=$?

source deactivate
popd > /dev/null
echo "Test for $ENV finished with exit code $RESULT"

exit $RESULT
20 changes: 12 additions & 8 deletions postpic/analyzer/__init__.py → conda/make_envs.sh
100644 → 100755
@@ -1,3 +1,4 @@
#!/bin/bash
#
# This file is part of postpic.
#
@@ -14,13 +15,16 @@
# You should have received a copy of the GNU General Public License
# along with postpic. If not, see <http://www.gnu.org/licenses/>.
#
"""
Analyzer package provides Classes and functions for analyzing
Particle and Field Data.
"""
# Copyright Alexander Blinne 2017

from analyzer import *
from particles import ParticleAnalyzer
from fields import FieldAnalyzer
# just get the directory that this file resides within and load the common variables
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
. "$DIR/common.sh"

identifyspecies = analyzer.SpeciesIdentifier.identifyspecies
for ENV in ${ENVS[@]}; do
echo Create environment "$CONDA_ENVS/$ENV"
#script -c "$CONDA_BASE/create_environment.sh $ENV" $LOGDIR/create_$ENV >/dev/null &
"$DIR/create_environment.sh" $ENV >"$LOGDIR/create_$ENV" 2>&1 &
done

wait
52 changes: 52 additions & 0 deletions conda/pre-commit
@@ -0,0 +1,52 @@
#!/bin/bash
#
# This file is part of postpic.
#
# postpic is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# postpic is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with postpic. If not, see <http://www.gnu.org/licenses/>.
#
# Copyright Stephan Kuschel, 2014-2015
# Copyright Alexander Blinne 2017

# This script is a wrapper around the run-tests.py script.
# It is highly recommended to link it as the git pre-commit hook via:

# ln -s ../../conda/pre-commit .git/hooks/pre-commit

# The difference to the run-tests.py script is that this script stashes
# unstaged changes before testing (see below). Thus only the changeset to be
# committed will be tested. No worries, "git commit -a" will still work ;)

# Stash unstaged changes if and only if possible:
# It's important to skip the stashing if there are no changes detected!
# Otherwise 'git stash -q --keep-index' will not create a stash.
# Later 'git stash apply --index -q' would then unintentionally apply
# an older stash.

if git diff-index --quiet HEAD --; then
# this happens for example on git commit --amend
echo "No changes detected. Skipping tests..."
sleep 1
exitcode=0
elif [ -e .git/MERGE_HEAD ]; then
echo "Merge in progress... Testing against the working directory leaving unstages changes intact!"
conda/run_tests.sh --fast
exitcode=$?
else
git stash -q --keep-index
conda/run_tests.sh --fast
exitcode=$?
git reset --hard -q && git stash apply --index -q && git stash drop -q
fi

exit $exitcode
56 changes: 56 additions & 0 deletions conda/run_tests.sh
@@ -0,0 +1,56 @@
#!/bin/bash
#
# This file is part of postpic.
#
# postpic is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# postpic is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with postpic. If not, see <http://www.gnu.org/licenses/>.
#
# Copyright Alexander Blinne 2017

# just get the directory that this file resides within and load the common variables
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
. "$DIR/common.sh"

rm -rf $SOURCETMP
cp -al $SOURCE $SOURCETMP
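# cp -al creates hard links instead of copying file contents, so this copy of the source tree is cheap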

pushd $SOURCETMP > /dev/null
git clean -Xdf
popd > /dev/null

# $SOURCETMP is now the source to be copied for all the envs

PIDS=()

# Start all the tests in the background, remember the PIDs
for ENV in ${ENVS[@]}
do
"$DIR/do_tests_env.sh" "$ENV" "$@" &
PIDS+=($!)
done

# Get the return statuses of the PIDs one by one
for pid in ${PIDS[@]}
do
wait $pid
STATUS=$?
if [ $STATUS -ne 0 ]; then
# One of the tests failed. Wait for all the other running tests
# and return the failed status
wait
exit $STATUS
fi
done

# Apparently all tests succeeded
exit 0
181 changes: 12 additions & 169 deletions doc/Makefile
@@ -1,177 +1,20 @@
# Makefile for Sphinx documentation
# Minimal makefile for Sphinx documentation
#

# You can set these variables from the command line.
SPHINXOPTS =
SPHINXBUILD = sphinx-build2
PAPER =
BUILDDIR = _build

# User-friendly check for sphinx-build2
ifeq ($(shell which $(SPHINXBUILD) >/dev/null 2>&1; echo $$?), 1)
$(error The '$(SPHINXBUILD)' command was not found. Make sure you have Sphinx installed, then set the SPHINXBUILD environment variable to point to the full path of the '$(SPHINXBUILD)' executable. Alternatively you can add the directory with the executable to your PATH. If you don't have Sphinx installed, grab it from http://sphinx-doc.org/)
endif

# Internal variables.
PAPEROPT_a4 = -D latex_paper_size=a4
PAPEROPT_letter = -D latex_paper_size=letter
ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
# the i18n builder cannot share the environment and doctrees with the others
I18NSPHINXOPTS = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .

.PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest gettext
SPHINXBUILD = sphinx-build
SPHINXPROJ = postpic
SOURCEDIR = source
BUILDDIR = build

# Put it first so that "make" without argument is like "make help".
help:
@echo "Please use \`make <target>' where <target> is one of"
@echo " html to make standalone HTML files"
@echo " dirhtml to make HTML files named index.html in directories"
@echo " singlehtml to make a single large HTML file"
@echo " pickle to make pickle files"
@echo " json to make JSON files"
@echo " htmlhelp to make HTML files and a HTML help project"
@echo " qthelp to make HTML files and a qthelp project"
@echo " devhelp to make HTML files and a Devhelp project"
@echo " epub to make an epub"
@echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
@echo " latexpdf to make LaTeX files and run them through pdflatex"
@echo " latexpdfja to make LaTeX files and run them through platex/dvipdfmx"
@echo " text to make text files"
@echo " man to make manual pages"
@echo " texinfo to make Texinfo files"
@echo " info to make Texinfo files and run them through makeinfo"
@echo " gettext to make PO message catalogs"
@echo " changes to make an overview of all changed/added/deprecated items"
@echo " xml to make Docutils-native XML files"
@echo " pseudoxml to make pseudoxml-XML files for display purposes"
@echo " linkcheck to check all external links for integrity"
@echo " doctest to run all doctests embedded in the documentation (if enabled)"

clean:
rm -rf $(BUILDDIR)/*

html:
$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
@echo
@echo "Build finished. The HTML pages are in $(BUILDDIR)/html."

dirhtml:
$(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml
@echo
@echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml."

singlehtml:
$(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml
@echo
@echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml."

pickle:
$(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle
@echo
@echo "Build finished; now you can process the pickle files."

json:
$(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json
@echo
@echo "Build finished; now you can process the JSON files."

htmlhelp:
$(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp
@echo
@echo "Build finished; now you can run HTML Help Workshop with the" \
".hhp project file in $(BUILDDIR)/htmlhelp."

qthelp:
$(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp
@echo
@echo "Build finished; now you can run "qcollectiongenerator" with the" \
".qhcp project file in $(BUILDDIR)/qthelp, like this:"
@echo "# qcollectiongenerator $(BUILDDIR)/qthelp/PostPIC.qhcp"
@echo "To view the help file:"
@echo "# assistant -collectionFile $(BUILDDIR)/qthelp/PostPIC.qhc"

devhelp:
$(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp
@echo
@echo "Build finished."
@echo "To view the help file:"
@echo "# mkdir -p $$HOME/.local/share/devhelp/PostPIC"
@echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/PostPIC"
@echo "# devhelp"

epub:
$(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub
@echo
@echo "Build finished. The epub file is in $(BUILDDIR)/epub."

latex:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo
@echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex."
@echo "Run \`make' in that directory to run these through (pdf)latex" \
"(use \`make latexpdf' here to do that automatically)."

latexpdf:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo "Running LaTeX files through pdflatex..."
$(MAKE) -C $(BUILDDIR)/latex all-pdf
@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."

latexpdfja:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo "Running LaTeX files through platex and dvipdfmx..."
$(MAKE) -C $(BUILDDIR)/latex all-pdf-ja
@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."

text:
$(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text
@echo
@echo "Build finished. The text files are in $(BUILDDIR)/text."

man:
$(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man
@echo
@echo "Build finished. The manual pages are in $(BUILDDIR)/man."

texinfo:
$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
@echo
@echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo."
@echo "Run \`make' in that directory to run these through makeinfo" \
"(use \`make info' here to do that automatically)."

info:
$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
@echo "Running Texinfo files through makeinfo..."
make -C $(BUILDDIR)/texinfo info
@echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo."

gettext:
$(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale
@echo
@echo "Build finished. The message catalogs are in $(BUILDDIR)/locale."

changes:
$(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes
@echo
@echo "The overview file is in $(BUILDDIR)/changes."

linkcheck:
$(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck
@echo
@echo "Link check complete; look for any errors in the above output " \
"or in $(BUILDDIR)/linkcheck/output.txt."

doctest:
$(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest
@echo "Testing of doctests in the sources finished, look at the " \
"results in $(BUILDDIR)/doctest/output.txt."
@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)

xml:
$(SPHINXBUILD) -b xml $(ALLSPHINXOPTS) $(BUILDDIR)/xml
@echo
@echo "Build finished. The XML files are in $(BUILDDIR)/xml."
.PHONY: help Makefile

pseudoxml:
$(SPHINXBUILD) -b pseudoxml $(ALLSPHINXOPTS) $(BUILDDIR)/pseudoxml
@echo
@echo "Build finished. The pseudo-XML files are in $(BUILDDIR)/pseudoxml."
# Catch-all target: route all unknown targets to Sphinx using the new
# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
%: Makefile
@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
38 changes: 0 additions & 38 deletions doc/apidoc/postpic.analyzer.rst

This file was deleted.

30 changes: 0 additions & 30 deletions doc/apidoc/postpic.datareader.rst

This file was deleted.

22 changes: 0 additions & 22 deletions doc/apidoc/postpic.plotting.rst

This file was deleted.

31 changes: 0 additions & 31 deletions doc/apidoc/postpic.rst

This file was deleted.

348 changes: 0 additions & 348 deletions doc/conf.py

This file was deleted.

1 change: 0 additions & 1 deletion doc/contributing.rst

This file was deleted.

53 changes: 0 additions & 53 deletions doc/dummyexample.py

This file was deleted.

28 changes: 0 additions & 28 deletions doc/index.rst

This file was deleted.

234 changes: 14 additions & 220 deletions doc/make.bat
@@ -1,242 +1,36 @@
@ECHO OFF

pushd %~dp0

REM Command file for Sphinx documentation

if "%SPHINXBUILD%" == "" (
set SPHINXBUILD=sphinx-build2
)
set BUILDDIR=_build
set ALLSPHINXOPTS=-d %BUILDDIR%/doctrees %SPHINXOPTS% .
set I18NSPHINXOPTS=%SPHINXOPTS% .
if NOT "%PAPER%" == "" (
set ALLSPHINXOPTS=-D latex_paper_size=%PAPER% %ALLSPHINXOPTS%
set I18NSPHINXOPTS=-D latex_paper_size=%PAPER% %I18NSPHINXOPTS%
set SPHINXBUILD=sphinx-build
)
set SOURCEDIR=source
set BUILDDIR=build
set SPHINXPROJ=postpic

if "%1" == "" goto help

if "%1" == "help" (
:help
echo.Please use `make ^<target^>` where ^<target^> is one of
echo. html to make standalone HTML files
echo. dirhtml to make HTML files named index.html in directories
echo. singlehtml to make a single large HTML file
echo. pickle to make pickle files
echo. json to make JSON files
echo. htmlhelp to make HTML files and a HTML help project
echo. qthelp to make HTML files and a qthelp project
echo. devhelp to make HTML files and a Devhelp project
echo. epub to make an epub
echo. latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter
echo. text to make text files
echo. man to make manual pages
echo. texinfo to make Texinfo files
echo. gettext to make PO message catalogs
echo. changes to make an overview over all changed/added/deprecated items
echo. xml to make Docutils-native XML files
echo. pseudoxml to make pseudoxml-XML files for display purposes
echo. linkcheck to check all external links for integrity
echo. doctest to run all doctests embedded in the documentation if enabled
goto end
)

if "%1" == "clean" (
for /d %%i in (%BUILDDIR%\*) do rmdir /q /s %%i
del /q /s %BUILDDIR%\*
goto end
)


%SPHINXBUILD% 2> nul
%SPHINXBUILD% >NUL 2>NUL
if errorlevel 9009 (
echo.
echo.The 'sphinx-build2' command was not found. Make sure you have Sphinx
echo.The 'sphinx-build' command was not found. Make sure you have Sphinx
echo.installed, then set the SPHINXBUILD environment variable to point
echo.to the full path of the 'sphinx-build2' executable. Alternatively you
echo.to the full path of the 'sphinx-build' executable. Alternatively you
echo.may add the Sphinx directory to PATH.
echo.
echo.If you don't have Sphinx installed, grab it from
echo.http://sphinx-doc.org/
exit /b 1
)

if "%1" == "html" (
%SPHINXBUILD% -b html %ALLSPHINXOPTS% %BUILDDIR%/html
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The HTML pages are in %BUILDDIR%/html.
goto end
)

if "%1" == "dirhtml" (
%SPHINXBUILD% -b dirhtml %ALLSPHINXOPTS% %BUILDDIR%/dirhtml
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The HTML pages are in %BUILDDIR%/dirhtml.
goto end
)

if "%1" == "singlehtml" (
%SPHINXBUILD% -b singlehtml %ALLSPHINXOPTS% %BUILDDIR%/singlehtml
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The HTML pages are in %BUILDDIR%/singlehtml.
goto end
)

if "%1" == "pickle" (
%SPHINXBUILD% -b pickle %ALLSPHINXOPTS% %BUILDDIR%/pickle
if errorlevel 1 exit /b 1
echo.
echo.Build finished; now you can process the pickle files.
goto end
)

if "%1" == "json" (
%SPHINXBUILD% -b json %ALLSPHINXOPTS% %BUILDDIR%/json
if errorlevel 1 exit /b 1
echo.
echo.Build finished; now you can process the JSON files.
goto end
)

if "%1" == "htmlhelp" (
%SPHINXBUILD% -b htmlhelp %ALLSPHINXOPTS% %BUILDDIR%/htmlhelp
if errorlevel 1 exit /b 1
echo.
echo.Build finished; now you can run HTML Help Workshop with the ^
.hhp project file in %BUILDDIR%/htmlhelp.
goto end
)

if "%1" == "qthelp" (
%SPHINXBUILD% -b qthelp %ALLSPHINXOPTS% %BUILDDIR%/qthelp
if errorlevel 1 exit /b 1
echo.
echo.Build finished; now you can run "qcollectiongenerator" with the ^
.qhcp project file in %BUILDDIR%/qthelp, like this:
echo.^> qcollectiongenerator %BUILDDIR%\qthelp\PostPIC.qhcp
echo.To view the help file:
echo.^> assistant -collectionFile %BUILDDIR%\qthelp\PostPIC.ghc
goto end
)

if "%1" == "devhelp" (
%SPHINXBUILD% -b devhelp %ALLSPHINXOPTS% %BUILDDIR%/devhelp
if errorlevel 1 exit /b 1
echo.
echo.Build finished.
goto end
)

if "%1" == "epub" (
%SPHINXBUILD% -b epub %ALLSPHINXOPTS% %BUILDDIR%/epub
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The epub file is in %BUILDDIR%/epub.
goto end
)

if "%1" == "latex" (
%SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex
if errorlevel 1 exit /b 1
echo.
echo.Build finished; the LaTeX files are in %BUILDDIR%/latex.
goto end
)

if "%1" == "latexpdf" (
%SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex
cd %BUILDDIR%/latex
make all-pdf
cd %BUILDDIR%/..
echo.
echo.Build finished; the PDF files are in %BUILDDIR%/latex.
goto end
)

if "%1" == "latexpdfja" (
%SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex
cd %BUILDDIR%/latex
make all-pdf-ja
cd %BUILDDIR%/..
echo.
echo.Build finished; the PDF files are in %BUILDDIR%/latex.
goto end
)

if "%1" == "text" (
%SPHINXBUILD% -b text %ALLSPHINXOPTS% %BUILDDIR%/text
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The text files are in %BUILDDIR%/text.
goto end
)

if "%1" == "man" (
%SPHINXBUILD% -b man %ALLSPHINXOPTS% %BUILDDIR%/man
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The manual pages are in %BUILDDIR%/man.
goto end
)

if "%1" == "texinfo" (
%SPHINXBUILD% -b texinfo %ALLSPHINXOPTS% %BUILDDIR%/texinfo
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The Texinfo files are in %BUILDDIR%/texinfo.
goto end
)
%SPHINXBUILD% -M %1 %SOURCEDIR% %BUILDDIR% %SPHINXOPTS%
goto end

if "%1" == "gettext" (
%SPHINXBUILD% -b gettext %I18NSPHINXOPTS% %BUILDDIR%/locale
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The message catalogs are in %BUILDDIR%/locale.
goto end
)

if "%1" == "changes" (
%SPHINXBUILD% -b changes %ALLSPHINXOPTS% %BUILDDIR%/changes
if errorlevel 1 exit /b 1
echo.
echo.The overview file is in %BUILDDIR%/changes.
goto end
)

if "%1" == "linkcheck" (
%SPHINXBUILD% -b linkcheck %ALLSPHINXOPTS% %BUILDDIR%/linkcheck
if errorlevel 1 exit /b 1
echo.
echo.Link check complete; look for any errors in the above output ^
or in %BUILDDIR%/linkcheck/output.txt.
goto end
)

if "%1" == "doctest" (
%SPHINXBUILD% -b doctest %ALLSPHINXOPTS% %BUILDDIR%/doctest
if errorlevel 1 exit /b 1
echo.
echo.Testing of doctests in the sources finished, look at the ^
results in %BUILDDIR%/doctest/output.txt.
goto end
)

if "%1" == "xml" (
%SPHINXBUILD% -b xml %ALLSPHINXOPTS% %BUILDDIR%/xml
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The XML files are in %BUILDDIR%/xml.
goto end
)

if "%1" == "pseudoxml" (
%SPHINXBUILD% -b pseudoxml %ALLSPHINXOPTS% %BUILDDIR%/pseudoxml
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The pseudo-XML files are in %BUILDDIR%/pseudoxml.
goto end
)
:help
%SPHINXBUILD% -M help %SOURCEDIR% %BUILDDIR% %SPHINXOPTS%

:end
popd
1 change: 0 additions & 1 deletion doc/readme.rst

This file was deleted.

1 change: 1 addition & 0 deletions doc/source/CHANGELOG.md
1 change: 1 addition & 0 deletions doc/source/CONTRIBUTING.md
4 changes: 2 additions & 2 deletions doc/apidoc/modules.rst → doc/source/apidoc/modules.rst
@@ -1,5 +1,5 @@
api
===
postpic
=======

.. toctree::
:maxdepth: 4
50 changes: 50 additions & 0 deletions doc/source/apidoc/postpic.datareader.rst
@@ -0,0 +1,50 @@
postpic.datareader package
==========================

.. automodule:: postpic.datareader
:members:
:undoc-members:
:show-inheritance:

Submodules
----------

postpic.datareader.datareader module
------------------------------------

.. automodule:: postpic.datareader.datareader
:members:
:undoc-members:
:show-inheritance:

postpic.datareader.dummy module
-------------------------------

.. automodule:: postpic.datareader.dummy
:members:
:undoc-members:
:show-inheritance:

postpic.datareader.epochsdf module
----------------------------------

.. automodule:: postpic.datareader.epochsdf
:members:
:undoc-members:
:show-inheritance:

postpic.datareader.openPMDh5 module
-----------------------------------

.. automodule:: postpic.datareader.openPMDh5
:members:
:undoc-members:
:show-inheritance:

postpic.datareader.vsimhdf5 module
----------------------------------

.. automodule:: postpic.datareader.vsimhdf5
:members:
:undoc-members:
:show-inheritance:
50 changes: 50 additions & 0 deletions doc/source/apidoc/postpic.io.rst
@@ -0,0 +1,50 @@
postpic.io package
==================

.. automodule:: postpic.io
:members:
:undoc-members:
:show-inheritance:

Submodules
----------

postpic.io.common module
------------------------

.. automodule:: postpic.io.common
:members:
:undoc-members:
:show-inheritance:

postpic.io.csv module
---------------------

.. automodule:: postpic.io.csv
:members:
:undoc-members:
:show-inheritance:

postpic.io.image module
-----------------------

.. automodule:: postpic.io.image
:members:
:undoc-members:
:show-inheritance:

postpic.io.npy module
---------------------

.. automodule:: postpic.io.npy
:members:
:undoc-members:
:show-inheritance:

postpic.io.vtk module
---------------------

.. automodule:: postpic.io.vtk
:members:
:undoc-members:
:show-inheritance:
26 changes: 26 additions & 0 deletions doc/source/apidoc/postpic.particles.rst
@@ -0,0 +1,26 @@
postpic.particles package
=========================

.. automodule:: postpic.particles
:members:
:undoc-members:
:show-inheritance:

Submodules
----------

postpic.particles.particles module
----------------------------------

.. automodule:: postpic.particles.particles
:members:
:undoc-members:
:show-inheritance:

postpic.particles.scalarproperties module
-----------------------------------------

.. automodule:: postpic.particles.scalarproperties
:members:
:undoc-members:
:show-inheritance:
18 changes: 18 additions & 0 deletions doc/source/apidoc/postpic.plotting.rst
@@ -0,0 +1,18 @@
postpic.plotting package
========================

.. automodule:: postpic.plotting
:members:
:undoc-members:
:show-inheritance:

Submodules
----------

postpic.plotting.plotter\_matplotlib module
-------------------------------------------

.. automodule:: postpic.plotting.plotter_matplotlib
:members:
:undoc-members:
:show-inheritance:
53 changes: 53 additions & 0 deletions doc/source/apidoc/postpic.rst
@@ -0,0 +1,53 @@
postpic package
===============

.. automodule:: postpic
:members:
:undoc-members:
:show-inheritance:

Subpackages
-----------

.. toctree::
:maxdepth: 4

postpic.datareader
postpic.io
postpic.particles
postpic.plotting

Submodules
----------

postpic.datahandling module
---------------------------

.. automodule:: postpic.datahandling
:members:
:undoc-members:
:show-inheritance:

postpic.experimental module
---------------------------

.. automodule:: postpic.experimental
:members:
:undoc-members:
:show-inheritance:

postpic.helper module
---------------------

.. automodule:: postpic.helper
:members:
:undoc-members:
:show-inheritance:

postpic.helper\_fft module
--------------------------

.. automodule:: postpic.helper_fft
:members:
:undoc-members:
:show-inheritance:
210 changes: 210 additions & 0 deletions doc/source/conf.py
@@ -0,0 +1,210 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
#
# postpic documentation build configuration file, created by
# sphinx-quickstart on Tue Nov 21 16:26:48 2017.
#
# This file is execfile()d with the current directory set to its
# containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.

# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#
import os
import sys
sys.path.insert(0, os.path.abspath('../'))

# numpy and scipy are installed on readthedocs.org
autodoc_mock_imports = ['skimage',
'h5py', 'argparse']

# -- General configuration ------------------------------------------------

# If your documentation needs a minimal Sphinx version, state it here.
#
needs_sphinx = '1.3' # because of autodoc_mock_imports, see
# http://www.sphinx-doc.org/en/stable/ext/autodoc.html

# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = ['sphinx.ext.autodoc',
'sphinx.ext.todo',
'sphinx.ext.coverage',
'sphinx.ext.mathjax',
'sphinx.ext.viewcode',
'sphinx.ext.napoleon']
# napoleon allows numpy like docstrings
# https://sphinxcontrib-napoleon.readthedocs.io/en/latest/


# private-members, special-members
autodoc_default_flags = ['members', 'inherited_members', 'special-members', 'show-inheritance']

def autodoc_skip_member(app, what, name, obj, skip, options):
exclusions = ('__weakref__', '__doc__', '__module__', '__dict__',
'__abstractmethods__', '__hash__', '__init__',
'__len__', '__str__', '__repr__')
exclude = name in exclusions
return skip or exclude

def setup(app):
app.connect('autodoc-skip-member', autodoc_skip_member)

# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']

# The suffix(es) of source filenames.
# You can specify multiple suffixes as a list of strings:
source_suffix = ['.rst', '.md']

from recommonmark.parser import CommonMarkParser
# requires the recommonmark package
source_parsers = {
'.md': CommonMarkParser,
}



# The master toctree document.
master_doc = 'index'

# General information about the project.
project = 'postpic'
copyright = '2020, the postpic developers'
author = 'the postpic developers'

# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
sys.path.insert(0, os.path.abspath('../../postpic'))
import _version
version = _version.get_versions()['version']
# The full version, including alpha/beta/rc tags.
release = version
#os.chdir('doc')

# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#
# This is also used if you do content translation via gettext catalogs.
# Usually you set "language" from the command line for these cases.
language = None

# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# These patterns also affect html_static_path and html_extra_path
exclude_patterns = []

# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'

# If true, `todo` and `todoList` produce output, else they produce nothing.
todo_include_todos = True


# -- Options for HTML output ----------------------------------------------

# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#
html_theme = 'alabaster'

# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
#
html_theme_options = {
'github_user': 'skuschel',
'github_repo': 'postpic',
'github_banner': True, # fork me on github banner
'github_button': True,
'description': 'the open-source particle-in-cell postprocessor.'
}

# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']

# Custom sidebar templates, must be a dictionary that maps document names
# to template names.
#
# This is required for the alabaster theme
# refs: http://alabaster.readthedocs.io/en/latest/installation.html#sidebars
html_sidebars = {
'**': [
'about.html',
'navigation.html',
'relations.html', # needs 'show_related': True theme option to display
#'localtoc.html',
'sourcelink.html',
'searchbox.html',
]
}


# -- Options for HTMLHelp output ------------------------------------------

# Output file base name for HTML help builder.
htmlhelp_basename = 'postpicdoc'


# -- Options for LaTeX output ---------------------------------------------

latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
#
# 'papersize': 'letterpaper',

# The font size ('10pt', '11pt' or '12pt').
#
# 'pointsize': '10pt',

# Additional stuff for the LaTeX preamble.
#
# 'preamble': '',

# Latex figure (float) alignment
#
# 'figure_align': 'htbp',
}

# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [
(master_doc, 'postpic.tex', 'postpic Documentation',
'the postpic developers', 'manual'),
]


# -- Options for manual page output ---------------------------------------

# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
(master_doc, 'postpic', 'postpic Documentation',
[author], 1)
]


# -- Options for Texinfo output -------------------------------------------

# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
(master_doc, 'postpic', 'postpic Documentation',
author, 'postpic', 'One line description of project.',
'Miscellaneous'),
]
2 changes: 1 addition & 1 deletion doc/getstarted.rst → doc/source/getstarted.rst
@@ -5,5 +5,5 @@ Getting started

The following script shows an example of how to get started with the postpic postprocessor. It uses the dummy reader (and thus auto-generated random data), so no input file is needed to read the simulation data from.
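
A minimal sketch of the same idea (using only calls that also appear in the full example below):

.. code-block:: python

    import postpic as pp
    pp.chooseCode('dummy')  # auto-generated fake data, no input file needed
    dr = pp.readDump(3e5)   # the dummy reader takes a particle count
    print(dr.listSpecies())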

.. literalinclude:: dummyexample.py
.. literalinclude:: ../../examples/simpleexample.py
:language: python
34 changes: 34 additions & 0 deletions doc/source/index.rst
@@ -0,0 +1,34 @@
.. postpic documentation master file, created by
sphinx-quickstart on Tue Nov 21 16:26:48 2017.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
Welcome to postpic's documentation!
===================================

.. toctree::
:maxdepth: 4
:caption: Contents

introduction

getstarted

CHANGELOG

CONTRIBUTING

Postpic API Documentation
=========================

.. toctree::

apidoc/modules


Indices and tables
==================

* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
21 changes: 21 additions & 0 deletions doc/source/introduction.rst
@@ -0,0 +1,21 @@


Introduction
============

.. automodule:: postpic
:no-members:



What is postpic?
----------------

Postpic is an open-source package aiming to ease the postprocessing of particle-in-cell simulation data. Particle-in-cell simulations are often used to simulate the behaviour of plasmas in non-equilibrium states.


.. automodule:: postpic.datareader

.. autofunction:: postpic.readDump

.. autofunction:: postpic.readSim
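
A short usage sketch of these two entry points (the EPOCH reader and the file names are taken from the examples in this repository; adjust the code name and paths to your own data):

.. code-block:: python

    import postpic as pp
    pp.chooseCode('EPOCH')
    dump = pp.readDump('0002.sdf')  # read a single dump
    sim = pp.readSim('*.sdf')       # read a whole series of dumps
    for dump in sim:
        print(dump.time())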
13 changes: 13 additions & 0 deletions doc/updateapidoc.sh
@@ -0,0 +1,13 @@
#!/bin/bash

# update the apidoc folder, which is used
# by sphinx autodoc to convert docstrings
# into the manual pages.

# Rerun this script whenever a new python file
# is created within postpic.

# Stephan Kuschel, 2017

rm source/apidoc/*
sphinx-apidoc -o source/apidoc ../postpic -M
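
# Typical usage (run from within the doc/ directory, since the paths above are relative):
#   ./updateapidoc.sh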
2 changes: 2 additions & 0 deletions examples/.gitignore
@@ -0,0 +1,2 @@
kspace-test-2d/
_openPMDdata/
336 changes: 336 additions & 0 deletions examples/kspace-test-2d.py
@@ -0,0 +1,336 @@
#!/usr/bin/env python
# coding: utf-8

# In[1]:

import sys


def download(url, file):
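# fetches `url` into `file`; returns True if the file already exists or the
# download succeeded, and False if the connection failed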
import urllib3
import shutil
import os
if os.path.isfile(file):
return True
try:
urllib3.disable_warnings()
http = urllib3.PoolManager()
print('downloading {:} ...'.format(file))
with http.request('GET', url, preload_content=False) as r, open(file, 'wb') as out_file:
shutil.copyfileobj(r, out_file)
success = True
except urllib3.exceptions.MaxRetryError:
success = False
return success

def main():
import os
import os.path as osp
datadir = 'examples/kspace-test-2d'
tarball = osp.join(datadir, 'kspace-test-2d-resources.tar.gz')
tarball_url = 'https://blinne.net/files/postpic/kspace-test-2d-resources.tar.gz'
if not os.path.exists(datadir):
os.mkdir(datadir)
s = download(tarball_url, tarball)

if s:
import hashlib
chksum = hashlib.sha256(open(tarball, 'rb').read()).hexdigest()
if chksum != "54bbeae19e1f412ddd475f565c2193e92922ed8a22e7eb3ecb4d73b5cf193b24":
os.remove(tarball)
s = False

if not s:
print('Failed to download example data. Skipping this example.')
return

import tarfile
tar = tarfile.open(tarball)
tar.extractall(datadir)


import matplotlib
matplotlib.use('Agg')

font = {'size' : 12}
matplotlib.rc('font', **font)

import copy
import postpic as pp
import numpy as np
import pickle

try:
import sdf
sdfavail = True
except ImportError:
sdfavail = False

# basic constants
micro = 1e-6
femto = 1e-15
c = pp.PhysicalConstants.c

# known parameters from simulation
lam = 0.5 * micro
k0 = 2*np.pi/lam
f0 = pp.PhysicalConstants.c/lam

pymajorver = str(sys.version_info[0])

if sdfavail:
pp.chooseCode("EPOCH")

dump = pp.readDump(osp.join(datadir, '0002.sdf'))
plotter = pp.plotting.plottercls(dump, autosave=False)
fields = dict()
for fc in ['Ey', 'Bz']:
fields[fc] = getattr(dump, fc)()
fields[fc].saveto(osp.join(datadir, '0002_'+fc+pymajorver), compressed=False)
t = dump.time()
dt = t/dump.timestep()
pickle.dump(dict(t=t, dt=dt), open(osp.join(datadir,'0002_meta'+pymajorver+'.pickle'), 'wb'))
else:
fields = dict()
for fc in ['Ey', 'Bz']:
fields[fc] = pp.Field.loadfrom(osp.join(datadir,'0002_{}.npz').format(fc+pymajorver))
meta = pickle.load(open(osp.join(datadir,'0002_meta'+pymajorver+'.pickle'), 'rb'))
t = meta['t']
dt = meta['dt']
plotter = pp.plotting.plottercls(None, autosave=False)

Ey = fields['Ey']
# grid spacing from field
dx = [ax.grid[1] - ax.grid[0] for ax in Ey.axes]


#w0 = 2*np.pi*f0
#w0 = pp.PhysicalConstants.c * k0

wn = np.pi/dt
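# pi/dt is the Nyquist (largest resolvable) angular frequency of the time sampling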
omega_yee = pp.helper.omega_yee_factory(dx=dx, dt=dt)

wyee = omega_yee([k0,0])
w0 = pp.helper.omega_free([k0,0])

print('dx', dx)
print('t', t)
print('dt', dt)
print('wn', wn)
print('w0', w0)
print('wyee', wyee)
print('wyee/w0', wyee/w0)
print('wyee/wn', wyee/wn)
print('lam/dx[0]', lam/dx[0])

print('cos(1/2 wyee dt)', np.cos(1/2 * wyee * dt))

vg_yee = c*np.cos(k0*dx[0]/2.0)/np.sqrt(1-(c*dt/dx[0]*np.sin(k0*dx[0]/2.0))**2)
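# vg_yee is the numerical group velocity of the Yee scheme at k0, obtained by
# differentiating the 1D Yee dispersion relation sin(w*dt/2)/(c*dt) = sin(k*dx/2)/dx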
print('vg/c', vg_yee/c)

r = np.sqrt(1.0 - (pp.PhysicalConstants.c * dt)**2 * (1/dx[0]*np.sin(1/2.0*k0*dx[0]))**2)
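# by the same dispersion relation, r equals cos(wyee*dt/2), the cosine printed above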
print('r', r)


# In[2]:



omega_yee = pp.helper.omega_yee_factory(dx=Ey.spacing, dt=dt)
lin_int_response_omega = pp.helper._linear_interpolation_frequency_response(dt)
lin_int_response_k = pp.helper._linear_interpolation_frequency_response_on_k(lin_int_response_omega,
Ey.fft().axes, omega_yee)

lin_int_response_k_vac = pp.helper._linear_interpolation_frequency_response_on_k(lin_int_response_omega,
Ey.fft().axes, pp.helper.omega_free)


# In[3]:


_=plotter.plotField(Ey[:,0.0])


# In[4]:


kspace = dict()
component = "Ey"


# In[5]:


#key = 'default epoch map=yee, omega=vac'
key = 'linresponse map=yee, omega=vac'
if sdfavail:
kspace[key] = abs(dump.kspace_Ey(solver='yee'))
else:
kspace[key] = abs(pp.helper.kspace(component, fields=dict(Ey=fields['Ey'],
Bz=fields['Bz'].fft()/lin_int_response_k),
interpolation='fourier'))
# using the helper function `kspace_epoch_like` would yield same result:
# kspace[key] = abs(pp.helper.kspace_epoch_like(component, fields=fields, dt=dt, omega_func=omega_yee))
normalisation = 1.0/np.max(kspace[key].matrix)
kspace[key] *= normalisation
kspace[key].name = r'corrected $\vec{k}$-space, $\omega_0=c|\vec{k}|$'
kspace[key].unit = ''


# In[6]:


key = 'simple fft'
kspace[key] = abs(Ey.fft()) * normalisation
kspace[key].name = r'plain fft'
kspace[key].unit = ''


# In[7]:


key = 'fourier'
kspace[key] = abs(pp.helper.kspace(component,
fields=fields,
interpolation='fourier')
) * normalisation
kspace[key].name = u'naïve $\\vec{k}$-space, $\\omega_0=c|\\vec{k}|$'
kspace[key].unit = ''


# In[8]:


key = 'fourier yee'
kspace[key] = abs(pp.helper.kspace(component,
fields=fields, interpolation='fourier',
omega_func=omega_yee)
) * normalisation
kspace[key].name = u'naïve $\\vec{k}$-space, $\\omega_0=\\omega_\\mathrm{grid}$'
kspace[key].unit = ''


# In[9]:


key = 'linresponse map=yee, omega=yee'
kspace[key] = abs(pp.helper.kspace(component, fields=dict(Ey=fields['Ey'],
Bz=fields['Bz'].fft()/lin_int_response_k),
interpolation='fourier', omega_func=omega_yee)
) * normalisation
kspace[key].name = r'corrected $\vec{k}$-space, $\omega_0=\omega_\mathrm{grid}$'
kspace[key].unit = ''


# In[10]:


slices = [slice(360-120, 360+120), slice(120, 121)]


# In[11]:


keys = ['simple fft',
'fourier yee',
'fourier',
'linresponse map=yee, omega=yee',
'linresponse map=yee, omega=vac'
]
figure2 = plotter.plotFields1d(*[kspace[k][slices] for k in keys],
log10plot=True, ylim=(5e-17, 5))
figure2.set_figwidth(8)
figure2.set_figheight(6)
while figure2.axes[0].texts:
figure2.axes[0].texts[-1].remove()

figure2.axes[0].set_title('')
figure2.axes[0].set_ylabel(r'$|E_y(k_x,0,0)|\, [a. u.]$')
figure2.axes[0].set_xlabel(r'$k_x\,[m^{-1}]$')
figure2.tight_layout()
figure2.savefig(osp.join(datadir, 'gaussian-kspace.pdf'))

print("Integrated ghost peaks")
for k in keys:
I = kspace[k][:0.0,:].integrate().matrix
print(k, I)
if k == 'linresponse map=yee, omega=vac':
if I < 30000000.:
print('linresponse map=yee, omega=vac value is low enough: YES' )
else:
print('linresponse map=yee, omega=vac value is low enough: NO' )
print('Something is WRONG' )
sys.exit(1)



# In[13]:


if sdfavail:
kspace_ey = dump.kspace_Ey(solver='yee')
else:
kspace_ey = pp.helper.kspace_epoch_like(component, fields=fields, dt=dt, omega_func=omega_yee)
complex_ey = kspace_ey.fft()
envelope_ey_2d = abs(complex_ey)[:,:].squeeze()
try:
from skimage.restoration import unwrap_phase
phase_ey = complex_ey.replace_data( unwrap_phase(np.angle(complex_ey)) )
except ImportError:
phase_ey = complex_ey.replace_data( np.angle(complex_ey) )
phase_ey_2d = phase_ey[:,:].squeeze()


# In[14]:


ey = complex_ey.real[-0.1e-5:0.2e-5, :]
#ey = Ey

ey.name = r'$E_y$'
ey.unit = r'$\frac{\mathrm{V}}{\mathrm{m}}$'

figure = plotter.plotField2d(ey)
figure.set_figwidth(6)
figure.axes[0].set_title(r'')#$\Re E_y\ [\frac{\mathrm{V}}{\mathrm{m}}]$')
figure.axes[0].set_xlabel(u'$x\\,[µm]$')
figure.axes[0].set_ylabel(u'$y\\,[µm]$')

import matplotlib.ticker as ticker
ticks_x = ticker.FuncFormatter(lambda x, pos: '{0:g}'.format(x/1e-6))
figure.axes[0].xaxis.set_major_formatter(ticks_x)
figure.axes[0].yaxis.set_major_formatter(ticks_x)

try:
figure.axes[0].images[0].colorbar.remove()
except AttributeError:
pass
figure.colorbar(figure.axes[0].images[0], format='%6.0e', pad=0.15, label=r'$\Re E_y\ [\frac{\mathrm{V}}{\mathrm{m}}]$')

axes2 = figure.axes[0].twinx()
axes2.set_ylabel(r'$\operatorname{Arg}(E_y)\, [\pi],\;|E_y|\, [a. u.]$')

env = abs(complex_ey[-0.1e-5:0.2e-5,0.0])
m = np.max(env)
env = (env/m*40/np.pi).squeeze()
p = phase_ey[-0.1e-5:0.2e-5,0.0].squeeze()/np.pi

_ = axes2.plot(env.grid, env.matrix, label=r'$|E_y|\, [a. u.]$')
_ = axes2.plot(p.grid, p.matrix, label=r'$\operatorname{Arg}(E_y)\, [\pi]$')

handles, labels = axes2.get_legend_handles_labels()
axes2.legend(handles, labels)

#figure.axes[0].set_title(r'$E_y\,[V/m]$')

figure.set_figwidth(6)
figure.set_figheight(6)
figure.tight_layout()
figure.savefig(osp.join(datadir, 'gaussian-env-arg.pdf'))


# In[ ]:


if __name__=='__main__':
main()
151 changes: 151 additions & 0 deletions examples/openPMD.py
@@ -0,0 +1,151 @@
#!/usr/bin/env python
#
# This file is part of postpic.
#
# postpic is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# postpic is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with postpic. If not, see <http://www.gnu.org/licenses/>.
#
# Copyright Stephan Kuschel 2016
#

def download(url, file):
import urllib3
import shutil
import os
if os.path.isfile(file):
return True
try:
urllib3.disable_warnings()
http = urllib3.PoolManager()
print('downloading {:} ...'.format(file))
with http.request('GET', url, preload_content=False) as r, open(file, 'wb') as out_file:
shutil.copyfileobj(r, out_file)
success = True
except urllib3.exceptions.MaxRetryError:
success = False
return success

def main():
import os
if not os.path.exists('examples/_openPMDdata'):
os.mkdir('examples/_openPMDdata')
s = download('https://github.com/openPMD/openPMD-example-datasets/'
+ 'raw/776ae3a96c02b20cfae56efafcbda6ca76d4c78d/example-2d.tar.gz',
'examples/_openPMDdata/example-2d.tar.gz')
if not s:
print('Failed to download example data. Skipping this example.')
return

import tarfile
tar = tarfile.open('examples/_openPMDdata/example-2d.tar.gz')
tar.extractall('examples/_openPMDdata')


# now that files are downloaded and extracted, start data evaluation

import numpy as np
import postpic as pp

# postpic will use matplotlib for plotting. Changing matplotlib's backend
# to "Agg" makes it possible to save plots without a display attached.
# This is necessary to run this example within the "run-tests" script
# on travis-ci.
import matplotlib; matplotlib.use('Agg')
pp.chooseCode('openpmd')
dr = pp.readDump('examples/_openPMDdata/example-2d/hdf5/data00000300.h5')
print('The simulation was run in {} spatial dimensions.'.format(dr.simdimensions()))
# set and create directory for pictures.
savedir = '_examplepictures/openPMD/'
import os
if not os.path.exists(savedir):
os.mkdir(savedir)

# initialize the plotter object.
# project name will be prepended to all output names
plotter = pp.plotting.plottercls(dr, outdir=savedir, autosave=True, project='openPMD')

# we will need a reference to the MultiSpecies quite often
from postpic import MultiSpecies as MS

# create MultiSpecies Object for every particle species that exists.
pas = [MS(dr, s) for s in dr.listSpecies()]

if True:
# Plot field data from the reader dr. This is very simple: every line creates one plot
plotter.plotField(dr.Ex()) # plot 0
plotter.plotField(dr.Ey()) # plot 1
plotter.plotField(dr.Ez()) # plot 2
plotter.plotField(dr.energydensityEM()) # plot 3

# plot additional, derived fields if available in `dr.getderived()`
#plotter.plotField(dr.createfieldfromkey("fields/e_chargeDensity"))

# Using the MultiSpecies requires an additional step:
# 1) The MultiSpecies.createField method will be used to create a Field object
# with chosen particle scalars on every axis
# 2) Plot the Field object
optargsh={'bins': [200,50]}
for pa in pas:
# Remark on 2D or 3D Simulation:
# the data fields for the plots in the following section are built by postpic
# from the particle data alone. The following plots would be just the same
# if we used the data of a 3D simulation instead.

# create a Field object nd holding the number density
nd = pa.createField('z', 'x', simextent=False, **optargsh)
# plot the Field object nd
plotter.plotField(nd, name='NumberDensity') # plot 4

# create a Field object qd holding the charge density
qd = pa.createField('z', 'x', weights='charge',
simextent=False, **optargsh)
# plot the Field object qd
plotter.plotField(qd, name='ChargeDensity') # plot 5

# more advanced: create a field holding the total kinetic energy on grid
ekin = pa.createField('z', 'x', weights='Ekin_MeV', simextent=False, **optargsh)
# The Field objects can be used for calculations. Here we use this to
# calculate the average kinetic energy on the grid and plot it
plotter.plotField(ekin / nd, name='Avg Kin Energy (MeV)') # plot 6

# use optargsh to force lower resolution
# plot number density
plotter.plotField(pa.createField('z', 'x', **optargsh), lineoutx=True, lineouty=True) # plot 7
# plot phase space
plotter.plotField(pa.createField('z', 'p', **optargsh)) # plot 8
plotter.plotField(pa.createField('z', 'gamma', **optargsh)) # plot 9
plotter.plotField(pa.createField('z', 'beta', **optargsh)) # plot 10


if True:
# instead of reading a single dump, read a collection of dumps using the simulationreader
sr = pp.readSim('examples/_openPMDdata/example-2d/hdf5/*.h5')
print('There are {:} dumps in this simulationreader object:'.format(len(sr)))
# now you can iterate over the dumps written easily
for dr in sr:
print('Simulation time of current dump t = {:.2e} s'.format(dr.time()))
# these are the frames of a movie:
plotter.plotField(dr.Ez())

if __name__=='__main__':
# the openPMD reader needs h5py to be installed.
# in case it's not, skip the tests
try:
import h5py
except ImportError as ie:
print('dependency missing: ' + str(ie))
print('SKIPPING examples/openPMD.py')
exit(0)
# being here means that h5py is available.
# any other exception should still cause the execution to fail!
main()
136 changes: 136 additions & 0 deletions examples/particleshapedemo.py
@@ -0,0 +1,136 @@
#!/usr/bin/env python
#
# This file is part of postpic.
#
# postpic is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# postpic is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with postpic. If not, see <http://www.gnu.org/licenses/>.
#
# Copyright Stephan Kuschel 2015-2018
#

'''
This is a demonstration file to show the differences between the various particle
shapes used.
'''

def main():
import numpy as np
import postpic as pp

# postpic will use matplotlib for plotting. Changing matplotlib's backend
# to "Agg" makes it possible to save plots without a display attached.
# This is necessary to run this example within the "run-tests" script
# on travis-ci.
import matplotlib; matplotlib.use('Agg')


# choose the dummy reader. This reader will create fake data for testing.
pp.chooseCode('dummy')

# Create a dummy reader with 300 particles, without a fixed seed, using a
# uniform distribution
dr = pp.readDump(300, seed=None, randfunc=np.random.random)
# set and create directory for pictures.
savedir = '_examplepictures/'
import os
if not os.path.exists(savedir):
os.mkdir(savedir)

# initialize the plotter object.
# project name will be prepended to all output names
plotter = pp.plotting.plottercls(dr, outdir=savedir, autosave=True, project='particleshapedemo')

# we will need a reference to the MultiSpecies quite often
from postpic import MultiSpecies as MS

# create MultiSpecies object for every particle species that exists.
pas = [MS(dr, s) for s in dr.listSpecies()]

# --- 1D visualization of particle contributions ---

def particleshapedemo(shape):
from postpic.particles import histogramdd
import matplotlib.pyplot as plt
ptclpos = np.array([4.5, 9.75, 15.0, 20.25])
y, (edges, ) = histogramdd(ptclpos, bins=25, range=(0,25), shape=shape)
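# the bin centers are the midpoints of adjacent bin edges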
x = np.convolve(edges, [0.5, 0.5], mode='valid')
fig = plt.figure()
fig.suptitle('ParticleShape: {:s}'.format(str(shape)))
ax = fig.add_subplot(111)
ax.plot(x,y)
ax.set_ylim((0,1))
ax.set_xticks(x, minor=True)
ax.grid(which='minor')
for ix in ptclpos:
ax.axvline(x=ix, color='y')
fig.savefig(savedir + 'particleshapedemo{:s}.png'.format(str(shape)), dpi=160)
plt.close(fig)

if True:
particleshapedemo(0)
particleshapedemo(1)
particleshapedemo(2)
particleshapedemo(3)

# --- 1D ---
if True:
pa = pas[0]
plotargs = {'ylim': (0,1600), 'log10plot': False}

# 1 particle per cell
plotter.plotField(pa.createField('x', bins=300, shape=0, title='1ppc_order0', rangex=(0,1)), **plotargs)
plotter.plotField(pa.createField('x', bins=300, shape=1, title='1ppc_order1', rangex=(0,1)), **plotargs)
plotter.plotField(pa.createField('x', bins=300, shape=2, title='1ppc_order2', rangex=(0,1)), **plotargs)
plotter.plotField(pa.createField('x', bins=300, shape=3, title='1ppc_order3', rangex=(0,1)), **plotargs)

# 3 particles per cell
plotter.plotField(pa.createField('x', bins=100, shape=0, title='3ppc_order0', rangex=(0,1)), **plotargs)
plotter.plotField(pa.createField('x', bins=100, shape=1, title='3ppc_order1', rangex=(0,1)), **plotargs)
plotter.plotField(pa.createField('x', bins=100, shape=2, title='3ppc_order2', rangex=(0,1)), **plotargs)
plotter.plotField(pa.createField('x', bins=100, shape=3, title='3ppc_order3', rangex=(0,1)), **plotargs)

# 10 particles per cell
plotter.plotField(pa.createField('x', bins=30, shape=0, title='10ppc_order0', rangex=(0,1)), **plotargs)
plotter.plotField(pa.createField('x', bins=30, shape=1, title='10ppc_order1', rangex=(0,1)), **plotargs)
plotter.plotField(pa.createField('x', bins=30, shape=2, title='10ppc_order2', rangex=(0,1)), **plotargs)
plotter.plotField(pa.createField('x', bins=30, shape=3, title='10ppc_order3', rangex=(0,1)), **plotargs)

# --- 2D ---
if True:
dr = pp.readDump(300*30, seed=None, randfunc=np.random.random)
pa = MS(dr, dr.listSpecies()[0])
plotargs = {'clim': (0,3e4), 'log10plot': False}

# 1 particle per cell
plotter.plotField(pa.createField('x', 'y', bins=(300,30), shape=0, title='1ppc_order0', rangex=(0,1), rangey=(0,1)), **plotargs)
plotter.plotField(pa.createField('x', 'y', bins=(300,30), shape=1, title='1ppc_order1', rangex=(0,1), rangey=(0,1)), **plotargs)
plotter.plotField(pa.createField('x', 'y', bins=(300,30), shape=2, title='1ppc_order2', rangex=(0,1), rangey=(0,1)), **plotargs)
plotter.plotField(pa.createField('x', 'y', bins=(300,30), shape=3, title='1ppc_order3', rangex=(0,1), rangey=(0,1)), **plotargs)

# 3 particles per cell
plotter.plotField(pa.createField('x', 'y', bins=(100,10), shape=0, title='3ppc_order0', rangex=(0,1), rangey=(0,1)), **plotargs)
plotter.plotField(pa.createField('x', 'y', bins=(100,10), shape=1, title='3ppc_order1', rangex=(0,1), rangey=(0,1)), **plotargs)
plotter.plotField(pa.createField('x', 'y', bins=(100,10), shape=2, title='3ppc_order2', rangex=(0,1), rangey=(0,1)), **plotargs)
plotter.plotField(pa.createField('x', 'y', bins=(100,10), shape=3, title='3ppc_order3', rangex=(0,1), rangey=(0,1)), **plotargs)


# --- 3D ---
if True:
dr = pp.readDump(300*30, seed=None, randfunc=np.random.random, dimensions=3)
pa = MS(dr, dr.listSpecies()[0])
# just try to create the fields. No plotting routines for 3D yet.
f = pa.createField('x', 'y', 'z', bins=(30,30,10), shape=2, title='1ppc_order2', rangex=(0,1), rangey=(0,1), rangez=(0,1))
f = pa.createField('x', 'y', 'z', bins=(30,30,10), shape=3, title='1ppc_order3', rangex=(0,1), rangey=(0,1), rangez=(0,1))

if __name__=='__main__':
main()
128 changes: 128 additions & 0 deletions examples/simpleexample.py
@@ -0,0 +1,128 @@
#!/usr/bin/env python
#
# This file is part of postpic.
#
# postpic is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# postpic is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with postpic. If not, see <http://www.gnu.org/licenses/>.
#
# Copyright Stephan Kuschel 2015
#

def main():
import numpy as np
import postpic as pp

# postpic will use matplotlib for plotting. Changing matplotlib's backend
# to "Agg" makes it possible to save plots without a display attached.
# This is necessary to run this example within the "run-tests" script
# on travis-ci.
import matplotlib; matplotlib.use('Agg')


# choose the dummy reader. This reader will create fake data for testing.
pp.chooseCode('dummy')

dr = pp.readDump(3e5) # Dummyreader takes a float as argument, not a string.
# set and create directory for pictures.
savedir = '_examplepictures/'
import os
if not os.path.exists(savedir):
os.mkdir(savedir)

# initialize the plotter object.
# project name will be prepended to all output names
plotter = pp.plotting.plottercls(dr, outdir=savedir, autosave=True, project='simpleexample')

# we will need a reference to the MultiSpecies quite often
from postpic.particles import MultiSpecies

# create MultiSpecies Object for every particle species that exists.
pas = [MultiSpecies(dr, s) for s in dr.listSpecies()]

if True:
# Plot field data from the reader dr. This is very simple: every line creates one plot
plotter.plotField(dr.Ex()) # plot 0
plotter.plotField(dr.Ey()) # plot 1
plotter.plotField(dr.Ez()) # plot 2
plotter.plotField(dr.energydensityEM()) # plot 3

# Using the MultiSpecies requires an additional step:
# 1) The MultiSpecies.createField method will be used to create a Field object
# with chosen particle scalars on every axis
# 2) Plot the Field object
optargsh={'bins': [300,300]}
for pa in pas:
# create a Field object nd holding the number density
nd = pa.createField('x', 'y', simextent=True, **optargsh)
# plot the Field object nd
plotter.plotField(nd, name='NumberDensity') # plot 4
# if you would like to keep working with the just-created number density
# yourself, it will convert to a numpy array whenever needed:
arr = np.asarray(nd)
print('Shape of number density: {}'.format(arr.shape))

# more advanced: create a field holding the total kinetic energy on grid
ekin = pa.createField('x', 'y', weights='Ekin_MeV', simextent=True, **optargsh)
# The Field objects can be used for calculations. Here we use this to
# calculate the average kinetic energy on the grid and plot it
plotter.plotField(ekin / nd, name='Avg Kin Energy (MeV)') # plot 5
# use optargsh to force lower resolution
# plot number density
plotter.plotField(pa.createField('x', 'y', **optargsh), lineoutx=True, lineouty=True) # plot 6
# plot phase space
plotter.plotField(pa.createField('x', 'p', **optargsh)) # plot 7
plotter.plotField(pa.createField('x', 'gamma', **optargsh)) # plot 8
plotter.plotField(pa.createField('x', 'beta', **optargsh)) # plot 9

# same with high resolution
plotter.plotField(pa.createField('x', 'y', bins=[1000,1000])) # plot 10
plotter.plotField(pa.createField('x', 'p', bins=[1000,1000])) # plot 11

# advanced: postpic has already defined a lot of particle scalars, such as Px, Py, Pz, P, X, Y, Z, gamma, beta, Ekin, Ekin_MeV, Ekin_MeV_amu, ... but if needed you can also define your own particle scalar on the fly.
# In case it's regularly used, it should be added to postpic. If you don't know how, just let us know about your own useful particle scalar by email or by adding an issue at
# https://github.com/skuschel/postpic/issues

# define your own particle scalar: p_r = sqrt(px**2 + py**2)/p
plotter.plotField(pa.createField('sqrt(px**2 + py**2)/p', 'sqrt(x**2 + y**2)', bins=[400,400])) # plot 12

# however, since the program does not know what quantities were calculated, the axes of plot 12 will only say "unknown"
# this can be avoided in two ways:
# 1st: define your own ScalarProperty(name, expr, unit):
p_perp = pp.particles.ScalarProperty('sqrt(px**2 + py**2)/p', name='p_perp', unit='kg*m/s')
r_xy = pp.particles.ScalarProperty('sqrt(x**2 + y**2)', name='r_xy', unit='m')
# this will create an identical plot, but correctly labeled
plotter.plotField(pa.createField(p_perp, r_xy, bins=[400,400])) # plot 13
# if those quantities are reused often, teach postpic to recognize them within the string expression:
pp.particles.particle_scalars.add(p_perp)
#pp.particles.scalars.add(r_xy) # we cannot execute this line, because r_xy is already predefined
plotter.plotField(pa.createField('p_perp', 'r_xy', bins=[400,400])) # plot 14



# choose particles by their properties
# this was the old interface, which would still work
# def cf(ms):
# return ms('x') > 0.0 # only use particles with x > 0.0
# cf.name = 'x>0.0'
# pa.compress(cf)
# nicer is the new filter function, which does exactly the same:
pf = pa.filter('x>0')
# plot 15, compare with plot 10
plotter.plotField(pf.createField('x', 'y', bins=[1000,1000]))
# plot 16, compare with plot 12
plotter.plotField(pf.createField('p_perp', 'r_xy', bins=[400,400]))

plotter.plotField(dr.divE()) # plot 17

if __name__=='__main__':
main()
190 changes: 190 additions & 0 deletions examples/time_cythonfunctions.py
@@ -0,0 +1,190 @@
#!/usr/bin/env python
#
# This file is part of postpic.
#
# postpic is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# postpic is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with postpic. If not, see <http://www.gnu.org/licenses/>.
#
# Copyright Stephan Kuschel 2015-2018
#

from __future__ import division

def main():
time_histogram()
time_particlescalars()
time_particleidfilter()
time_particleidsort()

def time_histogram():
import timeit
import numpy as np
from postpic.particles import histogramdd

# --- 1D

def time1D(data, bins, weights, shape, tn):
t = timeit.Timer(lambda: histogramdd(data, range=(0.001,0.999), bins=bins, weights=weights, shape=shape))
tc = t.timeit(number=5)/5.0
ws = ' ' if weights is None else 'weights'
print('1D, {:d} shape, {:s}: {:0.2e} sec -> factor {:5.2f} faster'.format(shape, ws, tc, tn/tc))

bins = 1000
npart = int(1e6)
print('=== Histogram 1D bins: {:6d}, npart: {:.1e} ==='.format(bins, npart))
data = np.random.random(npart)
weights = np.random.random(npart)
# time numpy function
t = timeit.Timer(lambda: np.histogram(data, range=(0.001,0.999), bins=bins, weights=None))
tn = t.timeit(number=2)/2.0
t = timeit.Timer(lambda: np.histogram(data, range=(0.001,0.999), bins=bins, weights=weights))
tnw = t.timeit(number=2)/2.0
print('numpy : {:0.2e} sec'.format(tn))
print('numpy weights: {:0.2e} sec'.format(tnw))
time1D(data, bins, None, 0, tn)
time1D(data, bins, weights, 0, tnw)
time1D(data, bins, None, 1, tn)
time1D(data, bins, weights, 1, tnw)
time1D(data, bins, None, 2, tn)
time1D(data, bins, weights, 2, tnw)
time1D(data, bins, None, 3, tn)
time1D(data, bins, weights, 3, tnw)

# --- 2D

def time2D(datax, datay, bins, weights, shape, tn):
t = timeit.Timer(lambda: histogramdd((datax, datay), range=((0.01,0.99),(0.01,0.99)), bins=bins, weights=weights, shape=shape))
tc = t.timeit(number=3)/3.0
ws = ' ' if weights is None else 'weights'
print('2D, {:d} shape, {:s}: {:0.2e} sec -> factor {:5.2f} faster'.format(shape, ws, tc, tn/tc))

bins = (1000,700)
npart = int(1e6)
print('=== Histogram 2D bins: {:6s}, npart: {:.1e} ==='.format(str(bins), npart))
datax = np.random.rand(npart)
datay = np.random.rand(npart)
weights = np.random.random(npart)
# time numpy function
t = timeit.Timer(lambda: np.histogram2d(datax, datay, range=((0.01,0.99),(0.01,0.99)), bins=bins, weights=None))
tn = t.timeit(number=1)/1.0
t = timeit.Timer(lambda: np.histogram2d(datax, datay, range=((0.01,0.99),(0.01,0.99)), bins=bins, weights=weights))
tnw = t.timeit(number=1)/1.0
print('numpy : {:0.2e} sec'.format(tn))
print('numpy weights: {:0.2e} sec'.format(tnw))
time2D(datax,datay, bins, None, 0, tn)
time2D(datax, datay, bins, weights, 0, tnw)
time2D(datax,datay, bins, None, 1, tn)
time2D(datax, datay, bins, weights, 1, tnw)
time2D(datax,datay, bins, None, 2, tn)
time2D(datax, datay, bins, weights, 2, tnw)
time2D(datax,datay, bins, None, 3, tn)
time2D(datax, datay, bins, weights, 3, tnw)


# --- 3D

def time3D(datax, datay, dataz, bins, weights, shape, tn):
t = timeit.Timer(lambda: histogramdd((datax, datay, dataz), range=((0.01,0.99),(0.01,0.99),(0.01,0.99)), bins=bins, weights=weights, shape=shape))
tc = t.timeit(number=1)/1.0
ws = ' ' if weights is None else 'weights'
print('3D, {:d} shape, {:s}: {:0.2e} sec -> factor {:5.2f} faster'.format(shape, ws, tc, tn/tc))

bins = (200,250,300) # 15e6 Cells
npart = int(1e6)
print('=== Histogram 3D bins: {:6s}, npart: {:.1e} ==='.format(str(bins), npart))
datax = np.random.rand(npart)
datay = np.random.rand(npart)
dataz = np.random.rand(npart)
weights = np.random.random(npart)
# time numpy function
t = timeit.Timer(lambda: np.histogramdd((datax, datay, dataz), range=((0.01,0.99),(0.01,0.99),(0.01,0.99)), bins=bins, weights=None))
tn = t.timeit(number=1)/1.0
t = timeit.Timer(lambda: np.histogramdd((datax, datay, dataz), range=((0.01,0.99),(0.01,0.99),(0.01,0.99)), bins=bins, weights=weights))
tnw = t.timeit(number=1)/1.0
print('numpy : {:0.2e} sec'.format(tn))
print('numpy weights: {:0.2e} sec'.format(tnw))
time3D(datax, datay, dataz, bins, None, 0, tn)
time3D(datax, datay, dataz, bins, weights, 0, tnw)
time3D(datax, datay, dataz, bins, None, 1, tn)
time3D(datax, datay, dataz, bins, weights, 1, tnw)
time3D(datax, datay, dataz, bins, None, 2, tn)
time3D(datax, datay, dataz, bins, weights, 2, tnw)
time3D(datax, datay, dataz, bins, None, 3, tn)
time3D(datax, datay, dataz, bins, weights, 3, tnw)


def time_particlescalars():
import postpic as pp
import timeit
pp.chooseCode('dummy')
dr1 = pp.readDump(0.01e6, dimensions=3)
dr2 = pp.readDump(1.0e6, dimensions=3)
ms1 = pp.MultiSpecies(dr1, 'electron')
ms2 = pp.MultiSpecies(dr2, 'electron')
testexprs = ['x', 'x + y + z', 'gamma', 'beta', 'angle_xy', 'angle_xaxis',
'sqrt(x**2 + y**2 + z**2)', 'r_xyz',
'(gamma > 1.5) & (angle_xaxis < 0.2) & (r_xyz < 2)']
print('')
print('calculation times for n million particles, averaged over 3 calculations each...')
headformat = ' {:2s} | {:6s} | {:6s} | {:s}'
print(headformat.format('n', ' t', ' t/n', 'per particle quantity'))
print(headformat.format('', ' ms', 'ms/mio', ''))
def timeexpr(expr, ms):
t = timeit.Timer(lambda: ms(expr))
tc = t.timeit(number=3)/3.0
npartmio = len(ms)/1e6
print('{:4.2f} | {:6.2f} | {:6.2f} | "{}"'.format(npartmio, tc*1e3, (tc*1e3)/npartmio, expr))
for expr in testexprs:
for ms in (ms1, ms2):
timeexpr(expr, ms)

def time_particleidfilter():
import postpic as pp
import numpy as np
from timeit import default_timer as timer
pp.chooseCode('dummy')
dr = pp.readDump(1e6, dimensions=3)
ms = pp.MultiSpecies(dr, 'electron')
def timefilter(ms, expr):
t0 = timer()
ms2 = ms.filter(expr)
tf = timer()
print('npart = {:.1e}, fraction_taken = {:6.2f}%, time = {:6.2f}ms, expr: "{}"' \
.format(len(ms), len(ms2)/len(ms)*100, (tf-t0)*1e3, expr))
print('')
print('time to filter to a fraction of the particles by expression')
timefilter(ms, 'x > 0')
timefilter(ms, 'gamma > 1.5')
timefilter(ms, '(gamma > 1.5) & (angle_xaxis < 0.2) & (r_xyz < 2)')

def time_particleidsort():
import postpic as pp
import numpy as np
from timeit import default_timer as timer
pp.chooseCode('dummy')
dr = pp.readDump(1e6, dimensions=3)
ms = pp.MultiSpecies(dr, 'electron')
def timeid(ms, ids):
t0 = timer()
ms2 = ms.compress(ids)
tf = timer()
print('npart = {:.1e}, fraction_taken ={:6.2f}%, time ={:6.2f}ms' \
.format(len(ms), len(ms2)/len(ms)*100, (tf-t0)*1e3))
print('')
print('time to take a fraction of the particles by their ids')
timeid(ms, np.arange(100))
timeid(ms, np.arange(len(ms)//20))
timeid(ms, np.arange(len(ms)))

if __name__=='__main__':
main()
12 changes: 8 additions & 4 deletions pip-requirements.txt
@@ -1,6 +1,10 @@
--index-url https://pypi.python.org/simple/

mock
pep8
nose
-e .
pycodestyle
nose2
Cython>=0.18
numpy>=1.8
setuptools

# required for building the docs
recommonmark
92 changes: 56 additions & 36 deletions postpic/__init__.py
@@ -20,43 +20,63 @@
| POSTPIC |
+--------------+
The open source particle-in-cell post processor.
*The open source particle-in-cell post processor.*
Particle-in-cell simulations are a valuable tool for the simulation of non-equilibrium
systems in plasma- or astrophysics.
Such simulations usually produce a large amount of data consisting of electric and magnetic
field data as well as particle positions and momenta. While there are various PIC codes freely
available, the task of post-processing -- essentially condensing the large amounts of data
into small units suitable for plotting routines -- is typically left to each user individually.
As post-processing may be a time-consuming and error-prone process,
this python package has been developed.
*Postpic* can handle two different types of data:
Field data
which is data sampled on a predefined grid, such as electric and magnetic fields, particle- or
charge densities, currents, etc.
Fields are usually the data that can be plotted directly.
See :class:`postpic.Field`.
Particle data
which is data of multiple particles, where for each particle the positions (`x`, `y`, `z`) and
momenta (`px`, `py`, `pz`) are known. Particles usually also have `weight`,
`charge`, `time` and a unique `id`.
Postpic can transform particle data to field data using the same algorithms and particle shapes
that are used in most PIC simulations. The particle-to-grid routines are written in C
for maximum performance. See :class:`postpic.MultiSpecies`.
"""
from __future__ import absolute_import, division, print_function, unicode_literals

from . import helper
from . import datahandling
from . import particles
from . import experimental
from . import plotting
from . import io
from .datahandling import *
from .helper import *
from .particles import *
from . import datareader
from .datareader import chooseCode, readDump, readSim
from ._version import get_versions
from .io import *

import datareader
import analyzer
import plotting

__all__ = ['datareader', 'analyzer', 'plotting']

# read version from installed metadata
from pkg_resources import get_distribution, DistributionNotFound
try:
import os.path
_dist = get_distribution('postpic')
# Normalize case for Windows systems
dist_loc = os.path.normcase(_dist.location)
here = os.path.normcase(__file__)
if not here.startswith(os.path.join(dist_loc, 'postpic')):
# not installed, but there is another version that *is*
raise DistributionNotFound
except DistributionNotFound:
__version__ = 'Please install this project with setup.py'
else:
__version__ = _dist.version

# add Git description for __version__ if present
try:
import subprocess as sub
import os.path
cwd = os.path.dirname(__file__)
p = sub.Popen(['git', 'describe', '--always'], stdout=sub.PIPE,
stderr=sub.PIPE, cwd=cwd)
out, err = p.communicate()
if not p.returncode: # git exited without error
__version__ += '_g' + out
except OSError:
# 'git' command not found
pass
__all__ = ['helper']
__all__ += datahandling.__all__
__all__ += helper.__all__
__all__ += particles.__all__
__all__ += ['datareader', 'plotting']
# high level functions
__all__ += ['chooseCode', 'readDump', 'readSim']
__all__ += io.__all__

__version__ = get_versions()['version']
__git_version__ = get_versions()['full-revisionid']
# workaround for the case that the zip was downloaded from github and the current version does not have a tag.
if __version__ == '0+unknown':
__version__ = __git_version__
del get_versions

20 changes: 20 additions & 0 deletions postpic/_compat/__init__.py
@@ -0,0 +1,20 @@

from .functions import replacements
from packaging.version import parse as parse_version
import numpy

__all__ = []
for repl in replacements:
if parse_version(repl.lib.__version__) < parse_version(repl.minver):
vars()[repl.name] = repl.replacement
else:
vars()[repl.name] = getattr(repl.originalmodule, repl.name)

__all__.append(repl.name)

if parse_version(numpy.__version__) < parse_version('1.13'):
from .mixins import NDArrayOperatorsMixin
else:
from numpy.lib.mixins import NDArrayOperatorsMixin

__all__.append('NDArrayOperatorsMixin')
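A minimal usage sketch for this shim (illustrative only, not taken from the package; `moveaxis` is one of the replacements registered in functions.py below):

import numpy as np
from postpic._compat import moveaxis

a = np.zeros((2, 3, 4))
# resolves to numpy.moveaxis on numpy >= 1.11, otherwise to the
# pure-python fallback defined in functions.py
print(moveaxis(a, 0, -1).shape)  # (3, 4, 2)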
121 changes: 121 additions & 0 deletions postpic/_compat/functions.py
@@ -0,0 +1,121 @@
#
# This file is part of postpic.
#
# postpic is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# postpic is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with postpic. If not, see <http://www.gnu.org/licenses/>.
#
# Stephan Kuschel 2017
# Alexander Blinne, 2017
"""
This module provides compatibility replacements of functions from external
libraries which have changed w.r.t. older versions of these libraries or
were not present in older versions.
"""

import numpy as np
import scipy as sp
import scipy.signal.windows as sps
import collections

try:
from collections.abc import Iterable
except ImportError:
from collections import Iterable


def np_meshgrid(*args, **kwargs):
if len(args) == 0:
return tuple()

if len(args) == 1:
if kwargs.get('copy', False):
return (args[0].copy(),)
return (args[0].view(),)

return np.meshgrid(*args, **kwargs)
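# (numpy.meshgrid only learned to handle zero or one input arrays around
# numpy 1.9, hence the special cases above and the minver entry below)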


def np_broadcast_to(*args, **kwargs):
array, shape = args
a, b = np.broadcast_arrays(array, np.empty(shape), **kwargs)
return a
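# e.g. np_broadcast_to(np.arange(3), (2, 3)) yields a (2, 3) broadcast view
# of the input, emulating np.broadcast_to, which was added in numpy 1.10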


def np_moveaxis(*args, **kwargs):
a, source, destination = args

# a quick implementation of numpy's normalize_axis_tuple, applied twice
if not isinstance(source, Iterable):
source = (source,)
if not isinstance(destination, Iterable):
destination = (destination,)
source = [s % a.ndim for s in source]
destination = [d % a.ndim for d in destination]

# the real work copied from np.moveaxis
order = [n for n in range(a.ndim) if n not in source]
for dest, src in sorted(zip(destination, source)):
order.insert(dest, src)

return np.transpose(a, order)


def sps_tukey(M, alpha=0.5, sym=True):
"""
Copied from scipy commit 870abd2f1fcc1fcf491324cdf5f78b4310c84446
and replaced some functions by their implementation
"""
if int(M) != M or M < 0:
raise ValueError('Window length M must be a non-negative integer')
if M <= 1:
return np.ones(M)

if alpha <= 0:
return np.ones(M, 'd')
elif alpha >= 1.0:
return sps.hann(M, sym=sym)  # hann window from scipy.signal.windows

if not sym:
M, needs_trunc = M + 1, True
else:
M, needs_trunc = M, False

n = np.arange(0, M)
width = int(np.floor(alpha*(M-1)/2.0))
n1 = n[0:width+1]
n2 = n[width+1:M-width-1]
n3 = n[M-width-1:]

w1 = 0.5 * (1 + np.cos(np.pi * (-1 + 2.0*n1/alpha/(M-1))))
w2 = np.ones(n2.shape)
w3 = 0.5 * (1 + np.cos(np.pi * (-2.0/alpha + 1 + 2.0*n3/alpha/(M-1))))

w = np.concatenate((w1, w2, w3))

if needs_trunc:
return w[:-1]
else:
return w
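# example (sketch): sps_tukey(8, alpha=0.5) starts and ends at 0.0 and is
# flat (== 1.0) in the middle, matching scipy.signal.windows.tukey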


ReplacementFunction = collections.namedtuple('ReplacementFunction', ['name', 'originalmodule',
'replacement', 'lib',
'minver'])


replacements = [
ReplacementFunction('meshgrid', np, np_meshgrid, np, '1.9'),
ReplacementFunction('broadcast_to', np, np_broadcast_to, np, '1.10'),
ReplacementFunction('moveaxis', np, np_moveaxis, np, '1.11'),
ReplacementFunction('tukey', sps, sps_tukey, sp, '0.16')
]
198 changes: 198 additions & 0 deletions postpic/_compat/mixins.py
@@ -0,0 +1,198 @@
"""
Mixin classes for custom array types that don't inherit from ndarray.
This file was copied from the Numpy project and is only imported if it is not present in
numpy itself. Some small modifications w. r. t. the upstream version are present in this file.
Copied from:
<https://raw.githubusercontent.com/numpy/numpy/
d51b538ba80d36841cc57911d77ea61cd1d3fb25/numpy/lib/mixins.py>
"""

from __future__ import division, absolute_import, print_function

import sys

from numpy.core import umath as um

# Nothing should be exposed in the top-level module.
__all__ = []


def _disables_array_ufunc(obj):
"""True when __array_ufunc__ is set to None."""
try:
return obj.__array_ufunc__ is None
except AttributeError:
return False


def _binary_method(ufunc, name):
"""Implement a forward binary method with a ufunc, e.g., __add__."""
def func(self, other):
if _disables_array_ufunc(other):
return NotImplemented
return ufunc(self, other)
func.__name__ = '__{}__'.format(name)
return func


def _reflected_binary_method(ufunc, name):
"""Implement a reflected binary method with a ufunc, e.g., __radd__."""
def func(self, other):
if _disables_array_ufunc(other):
return NotImplemented
return ufunc(other, self)
func.__name__ = '__r{}__'.format(name)
return func


def _inplace_binary_method(ufunc, name):
"""Implement an in-place binary method with a ufunc, e.g., __iadd__."""
def func(self, other):
ufunc(self, other, out=self.matrix)
return self
func.__name__ = '__i{}__'.format(name)
return func


def _numeric_methods(ufunc, name):
"""Implement forward, reflected and inplace binary methods with a ufunc."""
return (_binary_method(ufunc, name),
_reflected_binary_method(ufunc, name),
_inplace_binary_method(ufunc, name))


def _unary_method(ufunc, name):
"""Implement a unary special method with a ufunc."""
def func(self):
return ufunc(self)
func.__name__ = '__{}__'.format(name)
return func


class NDArrayOperatorsMixin(object):
"""Mixin defining all operator special methods using __array_ufunc__.
This class implements the special methods for almost all of Python's
builtin operators defined in the `operator` module, including comparisons
(``==``, ``>``, etc.) and arithmetic (``+``, ``*``, ``-``, etc.), by
deferring to the ``__array_ufunc__`` method, which subclasses must
implement.
This class does not yet implement the special operators corresponding
to ``matmul`` (``@``), because ``np.matmul`` is not yet a NumPy ufunc.
It is useful for writing classes that do not inherit from `numpy.ndarray`,
but that should support arithmetic and numpy universal functions like
arrays as described in :ref:`A Mechanism for Overriding Ufuncs
<neps.ufunc-overrides>`.
As a trivial example, consider this implementation of an ``ArrayLike``
class that simply wraps a NumPy array and ensures that the result of any
arithmetic operation is also an ``ArrayLike`` object::
class ArrayLike(np.lib.mixins.NDArrayOperatorsMixin):
def __init__(self, value):
self.value = np.asarray(value)
# One might also consider adding the built-in list type to this
# list, to support operations like np.add(array_like, list)
_HANDLED_TYPES = (np.ndarray, numbers.Number)
def __array_ufunc__(self, ufunc, method, *inputs, **kwargs):
out = kwargs.get('out', ())
for x in inputs + out:
# Only support operations with instances of _HANDLED_TYPES.
# Use ArrayLike instead of type(self) for isinstance to
# allow subclasses that don't override __array_ufunc__ to
# handle ArrayLike objects.
if not isinstance(x, self._HANDLED_TYPES + (ArrayLike,)):
return NotImplemented
# Defer to the implementation of the ufunc on unwrapped values.
inputs = tuple(x.value if isinstance(x, ArrayLike) else x
for x in inputs)
if out:
kwargs['out'] = tuple(
x.value if isinstance(x, ArrayLike) else x
for x in out)
result = getattr(ufunc, method)(*inputs, **kwargs)
if type(result) is tuple:
# multiple return values
return tuple(type(self)(x) for x in result)
elif method == 'at':
# no return value
return None
else:
# one return value
return type(self)(result)
def __repr__(self):
return '%s(%r)' % (type(self).__name__, self.value)
In interactions between ``ArrayLike`` objects and numbers or numpy arrays,
the result is always another ``ArrayLike``:
>>> x = ArrayLike([1, 2, 3])
>>> x - 1
ArrayLike(array([0, 1, 2]))
>>> 1 - x
ArrayLike(array([ 0, -1, -2]))
>>> np.arange(3) - x
ArrayLike(array([-1, -1, -1]))
>>> x - np.arange(3)
ArrayLike(array([1, 1, 1]))
Note that unlike ``numpy.ndarray``, ``ArrayLike`` does not allow operations
with arbitrary, unrecognized types. This ensures that interactions with
ArrayLike preserve a well-defined casting hierarchy.
"""
# Like np.ndarray, this mixin class implements "Option 1" from the ufunc
# overrides NEP.

# comparisons don't have reflected and in-place versions
__lt__ = _binary_method(um.less, 'lt')
__le__ = _binary_method(um.less_equal, 'le')
__eq__ = _binary_method(um.equal, 'eq')
__ne__ = _binary_method(um.not_equal, 'ne')
__gt__ = _binary_method(um.greater, 'gt')
__ge__ = _binary_method(um.greater_equal, 'ge')

# numeric methods
__add__, __radd__, __iadd__ = _numeric_methods(um.add, 'add')
__sub__, __rsub__, __isub__ = _numeric_methods(um.subtract, 'sub')
__mul__, __rmul__, __imul__ = _numeric_methods(um.multiply, 'mul')
if sys.version_info.major < 3:
# Python 3 uses only __truediv__ and __floordiv__
__div__, __rdiv__, __idiv__ = _numeric_methods(um.divide, 'div')
__truediv__, __rtruediv__, __itruediv__ = _numeric_methods(
um.true_divide, 'truediv')
__floordiv__, __rfloordiv__, __ifloordiv__ = _numeric_methods(
um.floor_divide, 'floordiv')
__mod__, __rmod__, __imod__ = _numeric_methods(um.remainder, 'mod')

if hasattr(um, 'divmod'):
__divmod__ = _binary_method(um.divmod, 'divmod')
__rdivmod__ = _reflected_binary_method(um.divmod, 'divmod')

# __idivmod__ does not exist
# TODO: handle the optional third argument for __pow__?
__pow__, __rpow__, __ipow__ = _numeric_methods(um.power, 'pow')
__lshift__, __rlshift__, __ilshift__ = _numeric_methods(
um.left_shift, 'lshift')
__rshift__, __rrshift__, __irshift__ = _numeric_methods(
um.right_shift, 'rshift')
__and__, __rand__, __iand__ = _numeric_methods(um.bitwise_and, 'and')
__xor__, __rxor__, __ixor__ = _numeric_methods(um.bitwise_xor, 'xor')
__or__, __ror__, __ior__ = _numeric_methods(um.bitwise_or, 'or')

# unary methods
__neg__ = _unary_method(um.negative, 'neg')

if hasattr(um, 'positive'):
__pos__ = _unary_method(um.positive, 'pos')

__abs__ = _unary_method(um.absolute, 'abs')
__invert__ = _unary_method(um.invert, 'invert')
112 changes: 0 additions & 112 deletions postpic/_const.py

This file was deleted.

384 changes: 384 additions & 0 deletions postpic/_field_calc.py
@@ -0,0 +1,384 @@
#
# This file is part of postpic.
#
# postpic is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# postpic is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with postpic. If not, see <http://www.gnu.org/licenses/>.
#
# Stephan Kuschel 2014-2019
# Alexander Blinne, 2017
"""
Field related routines.
"""
from __future__ import absolute_import, division, print_function, unicode_literals

import numpy as np
from packaging.version import parse as parse_version
from .helper import PhysicalConstants as pc
from . import helper
from .datahandling import *
import warnings

__all__ = ['FieldAnalyzer']


class FieldAnalyzer(object):
'''
This class transforms any data written to a dump into a Field object
ready to be plotted. It should provide an object that makes it easy
to plot data that has already been dumped.
Since the postpic.datareader.Dumpreader_ifc is derived from this class,
all methods here are automatically available to all dumpreaders.
Simple calculations (like calculating the energy density) are
performed here as well, but might move to another class in the future.
Entirely new calculations (like fourier transforming gridded data)
will be built elsewhere, since they act on a
field, transforming it into another.
'''

def __init__(self):
pass

# General interface for everything
def _createfieldfromdata(self, data, gridkey, **kwargs):
ret = Field(np.float64(data))
self.setgridtofield(ret, gridkey, **kwargs)
return ret

def createfieldfromkey(self, key, gridkey=None, **kwargs):
'''
This method creates a Field object from the data identified by "key".
The Grid is also inferred from that key unless an alternate "gridkey"
is provided.
'''
if gridkey is None:
gridkey = key
ret = self._createfieldfromdata(self.data(key, **kwargs), gridkey, **kwargs)
ret.name = key
return ret

def getaxisobj(self, gridkey, axis, **kwargs):
'''
returns an Axis object for the "axis" and the grid defined by "gridkey".
**kwargs can be used to override the grid for a specific axis,
e.g. {'theta': [0, pi/2, pi, 3/2*pi]}
'''
name = {0: 'x', 1: 'y', 2: 'z', 90: 'r', 91: 'theta'}[helper.axesidentify[axis]]
Ntheta = kwargs.pop('Ntheta', None)
if Ntheta is not None and name == 'theta':
# linear spacing from fft; wrap in an Axis object as promised above
return Axis(name=name, unit='', grid=np.linspace(0, 2*np.pi, Ntheta, endpoint=False))
grid = kwargs.pop(name, None)
if grid is not None:
# override with given grid
ax = Axis(name=name, unit='m', grid=grid)
else:
ax = Axis(name=name, unit='m', grid_node=self.gridnode(gridkey, axis))
return ax

def _defaultaxisorder(self, gridkey):
return ('x', 'y', 'z')

def setgridtofield(self, field, gridkey, **kwargs):
'''
add spatial field information to the given field object.
'''
x, y, z = self._defaultaxisorder(gridkey)
field.setaxisobj(x, self.getaxisobj(gridkey, x, **kwargs))
if field.dimensions > 1:
field.setaxisobj(y, self.getaxisobj(gridkey, y, **kwargs))
if field.dimensions > 2:
field.setaxisobj(z, self.getaxisobj(gridkey, z, **kwargs))

# --- Always return an object of Field type
# just to shortcut

def _Ex(self, **kwargs):
return np.float64(self.dataE('x', **kwargs))

def _Ey(self, **kwargs):
return np.float64(self.dataE('y', **kwargs))

def _Ez(self, **kwargs):
return np.float64(self.dataE('z', **kwargs))

def _Er(self, **kwargs):
return np.float64(self.dataE('r', **kwargs))

def _Etheta(self, **kwargs):
return np.float64(self.dataE('theta', **kwargs))

def _Bx(self, **kwargs):
return np.float64(self.dataB('x', **kwargs))

def _By(self, **kwargs):
return np.float64(self.dataB('y', **kwargs))

def _Bz(self, **kwargs):
return np.float64(self.dataB('z', **kwargs))

def _Br(self, **kwargs):
return np.float64(self.dataB('r', **kwargs))

def _Btheta(self, **kwargs):
return np.float64(self.dataB('theta', **kwargs))

def createfieldsfromkeys(self, *keys, **kwargs):
for key in keys:
yield self.createfieldfromkey(key, **kwargs)

# most common fields listed here nicely
def Ex(self, **kwargs):
ret = self._createfieldfromdata(self._Ex(**kwargs),
self.gridkeyE('x', **kwargs), **kwargs)
ret.unit = 'V/m'
ret.name = 'Ex'
ret.shortname = 'Ex'
return ret

def Ey(self, **kwargs):
ret = self._createfieldfromdata(self._Ey(**kwargs),
self.gridkeyE('y', **kwargs), **kwargs)
ret.unit = 'V/m'
ret.name = 'Ey'
ret.shortname = 'Ey'
return ret

def Ez(self, **kwargs):
ret = self._createfieldfromdata(self._Ez(**kwargs),
self.gridkeyE('z', **kwargs), **kwargs)
ret.unit = 'V/m'
ret.name = 'Ez'
ret.shortname = 'Ez'
return ret

def Er(self, **kwargs):
ret = self._createfieldfromdata(self._Er(**kwargs),
self.gridkeyE('r', **kwargs), **kwargs)
ret.unit = 'V/m'
ret.name = 'Er'
ret.shortname = 'Er'
return ret

def Etheta(self, **kwargs):
ret = self._createfieldfromdata(self._Etheta(**kwargs),
self.gridkeyE('theta', **kwargs), **kwargs)
ret.unit = 'V/m'
ret.name = 'Etheta'
ret.shortname = 'Etheta'
return ret

def Bx(self, **kwargs):
ret = self._createfieldfromdata(self._Bx(**kwargs),
self.gridkeyB('x', **kwargs), **kwargs)
ret.unit = 'T'
ret.name = 'Bx'
ret.shortname = 'Bx'
return ret

def By(self, **kwargs):
ret = self._createfieldfromdata(self._By(**kwargs),
self.gridkeyB('y', **kwargs), **kwargs)
ret.unit = 'T'
ret.name = 'By'
ret.shortname = 'By'
return ret

def Bz(self, **kwargs):
ret = self._createfieldfromdata(self._Bz(**kwargs),
self.gridkeyB('z', **kwargs), **kwargs)
ret.unit = 'T'
ret.name = 'Bz'
ret.shortname = 'Bz'
return ret

def Br(self, **kwargs):
ret = self._createfieldfromdata(self._Br(**kwargs),
self.gridkeyB('r', **kwargs), **kwargs)
ret.unit = 'T'
ret.name = 'Br'
ret.shortname = 'Br'
return ret

def Btheta(self, **kwargs):
ret = self._createfieldfromdata(self._Btheta(**kwargs),
self.gridkeyB('theta', **kwargs), **kwargs)
ret.unit = 'T'
ret.name = 'Btheta'
ret.shortname = 'Btheta'
return ret

# --- special functions

def _kspace(self, component, fields, alignment='auto', solver=None, **kwargs):
if alignment not in ['auto', 'default', 'epoch', 'epoch-final']:
raise ValueError()

if alignment == 'auto':
if self.name.lower().endswith('.sdf'):
alignment = 'epoch'
else:
alignment = 'default'

if alignment == 'default':
return helper.kspace(component, fields, interpolation='fourier', **kwargs)

if alignment.startswith('epoch'):
dt = self.time()/self.timestep()
if 'omega_func' not in kwargs and solver == 'yee':
dx = [self.simgridspacing(axis) for axis in range(self.simdimensions())]
kwargs['omega_func'] = helper.omega_yee_factory(dx, dt)

if alignment == 'epoch':
return helper.kspace_epoch_like(component, fields, dt, align_to='B', **kwargs)

if alignment == 'epoch-final':
return helper.kspace_epoch_like(component, fields, dt, align_to='E', **kwargs)

def kspace_Ex(self, **kwargs):
fields = dict()
fields['Ex'] = self.Ex()
if fields['Ex'].dimensions >= 2:
fields['Bz'] = self.Bz()
if fields['Ex'].dimensions >= 3:
fields['By'] = self.By()

return self._kspace('Ex', fields, **kwargs)

def kspace_Ey(self, **kwargs):
fields = dict()
fields['Ey'] = self.Ey()
fields['Bz'] = self.Bz()
if fields['Ey'].dimensions >= 3:
fields['Bx'] = self.Bx()

return self._kspace('Ey', fields, **kwargs)

def kspace_Ez(self, **kwargs):
fields = dict()
fields['Ez'] = self.Ez()
fields['By'] = self.By()
if fields['Ez'].dimensions >= 2:
fields['Bx'] = self.Bx()

return self._kspace('Ez', fields, **kwargs)

def kspace_Bx(self, **kwargs):
fields = dict()
fields['Bx'] = self.Bx()
if fields['Bx'].dimensions >= 2:
fields['Ez'] = self.Ez()
if fields['Bx'].dimensions >= 3:
fields['Ey'] = self.Ey()

return self._kspace('Bx', fields, **kwargs)

def kspace_By(self, **kwargs):
fields = dict()
fields['By'] = self.By()
fields['Ez'] = self.Ez()
if fields['By'].dimensions >= 3:
fields['Ex'] = self.Ex()

return self._kspace('By', fields, **kwargs)

def kspace_Bz(self, **kwargs):
fields = dict()
fields['Bz'] = self.Bz()
fields['Ey'] = self.Ey()
if fields['Bz'].dimensions >= 2:
fields['Ex'] = self.Ex()

return self._kspace('Bz', fields, **kwargs)

def energydensityE(self, **kwargs):
ret = self._createfieldfromdata(0.5 * pc.epsilon0 *
(self._Ex(**kwargs) ** 2 +
self._Ey(**kwargs) ** 2 +
self._Ez(**kwargs) ** 2),
self.gridkeyE('x', **kwargs))
ret.unit = 'J/m^3'
ret.name = 'Energy Density Electric-Field'
ret.shortname = 'E'
return ret

def energydensityM(self, **kwargs):
ret = self._createfieldfromdata(0.5 / pc.mu0 *
(self._Bx(**kwargs) ** 2 +
self._By(**kwargs) ** 2 +
self._Bz(**kwargs) ** 2),
self.gridkeyB('x', **kwargs))
ret.unit = 'J/m^3'
ret.name = 'Energy Density Magnetic-Field'
ret.shortname = 'M'
return ret

def energydensityEM(self, **kwargs):
ret = self._createfieldfromdata(0.5 * pc.epsilon0 *
(self._Ex(**kwargs) ** 2 +
self._Ey(**kwargs) ** 2 +
self._Ez(**kwargs) ** 2) +
0.5 / pc.mu0 *
(self._Bx(**kwargs) ** 2 +
self._By(**kwargs) ** 2 +
self._Bz(**kwargs) ** 2),
self.gridkeyE('x', **kwargs))
ret.unit = 'J/m^3'
ret.name = 'Energy Density EM-Field'
ret.shortname = 'EM'
return ret

def _divE1d(self, **kwargs):
return np.gradient(self._Ex(**kwargs))

def _divE2d(self, **kwargs):
if parse_version(np.__version__) < parse_version('1.11'):
warnings.warn('''
The support for numpy < "1.11" will be dropped in the future. Upgrade!
''', DeprecationWarning)
return np.gradient(self._Ex(**kwargs))[0] \
+ np.gradient(self._Ey(**kwargs))[1]
return np.gradient(self._Ex(**kwargs), axis=0) \
+ np.gradient(self._Ey(**kwargs), axis=1)

def _divE3d(self, **kwargs):
if parse_version(np.__version__) < parse_version('1.11'):
warnings.warn('''
The support for numpy < "1.11" will be dropped in the future. Upgrade!
''', DeprecationWarning)
return np.gradient(self._Ex(**kwargs))[0] \
+ np.gradient(self._Ey(**kwargs))[1] \
+ np.gradient(self._Ez(**kwargs))[2]
return np.gradient(self._Ex(**kwargs), axis=0) \
+ np.gradient(self._Ey(**kwargs), axis=1) \
+ np.gradient(self._Ez(**kwargs), axis=2)

def divE(self, **kwargs):
'''
returns the divergence of E.
This is calculated in the number of dimensions the simulation was running on.
'''
# this works because the datareader extents this class
simdims = self.simdimensions()
opts = {1: self._divE1d,
2: self._divE2d,
3: self._divE3d}
data = opts[simdims](**kwargs)
ret = self._createfieldfromdata(data, self.gridkeyE('x', **kwargs))
ret.unit = 'V/m^2'
ret.name = 'div E'
ret.shortname = 'divE'
return ret
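A short usage sketch for FieldAnalyzer (assuming the dummy reader shipped with postpic, as in the examples above; every Dumpreader derives from FieldAnalyzer, so these methods are available on any dump):

import postpic as pp

pp.chooseCode('dummy')
dr = pp.readDump(1000)
ex = dr.Ex()                 # Field with name 'Ex' and unit 'V/m'
ed = dr.energydensityEM()    # 0.5*eps0*E**2 + 0.5/mu0*B**2, unit 'J/m^3'
div = dr.divE()              # calculated in the simulation's dimensionality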
683 changes: 683 additions & 0 deletions postpic/_version.py

Large diffs are not rendered by default.

191 changes: 0 additions & 191 deletions postpic/analyzer/analyzer.py

This file was deleted.

181 changes: 0 additions & 181 deletions postpic/analyzer/fields.py

This file was deleted.

706 changes: 0 additions & 706 deletions postpic/analyzer/particles.py

This file was deleted.

2,372 changes: 2,133 additions & 239 deletions postpic/datahandling.py

Large diffs are not rendered by default.

353 changes: 105 additions & 248 deletions postpic/datareader/__init__.py

Large diffs are not rendered by default.

375 changes: 375 additions & 0 deletions postpic/datareader/datareader.py
@@ -0,0 +1,375 @@
#
# This file is part of postpic.
#
# postpic is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# postpic is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with postpic. If not, see <http://www.gnu.org/licenses/>.
#
# Stephan Kuschel 2015
# Alexander Blinne, 2017

from __future__ import absolute_import, division, print_function, unicode_literals
from future.utils import with_metaclass

import abc

try:
from collections.abc import Sequence
except ImportError:
from collections import Sequence

import warnings
import numpy as np
from .. import helper
from .._field_calc import FieldAnalyzer

__all__ = ['Dumpreader_ifc', 'Simulationreader_ifc']


class Dumpreader_ifc(with_metaclass(abc.ABCMeta, FieldAnalyzer)):
'''
Interface class for reading a single dump. A dump contains information
about the simulation at a single timestep (usually E- and B-fields on the
grid + particles).
Any Dumpreader_ifc implementation will always be initialized using a
dumpidentifier. This dumpidentifier can be anything, that points to
the data of a dump. In the easiest case this is just the filename of
the dump holding all data of that timestep
(for example .sdf file for EPOCH, .hdf5 for some other code).
The dumpreader should provide all necessary information in a unified interface,
but at the same time it should not restrict the user to only these properties of their
dump. The recommended implementation is shown here (the EPOCH and VSim readers
work like this):
All (!) data, that is saved in a single dump should be accessible via the
self.__getitem__(key) method. Together with the self.keys() method, this will ensure,
that every dumpreader works as a dictionary and every dump attribute is accessible
via this dictionary.
Hierarchy of methods
------------------
* Level 0:
__getitem__ and keys(self) are level 0 methods, meaning it must be possible to access
everything with those methods.
* Level 1:
provide direct data access by forwarding the requests to the corresponding Level 0
or Level 1 methods.
* Level 2:
provide user access to the data by forwarding the request to Level 1 or Level 2 methods,
but NOT to Level 0 methods.
If some attribute wasn't dumped, a KeyError must be thrown. This allows
classes which are using the reader to just exit if a needed property wasn't dumped,
or to catch the KeyError and proceed by actively ignoring it.
It is highly recommended to also override the functions __str__ and gridpoints.
Args:
dumpidentifier : variable type
whatever identifies the dump. It is recommended to use a String
here pointing to a file.
'''

def __init__(self, dumpidentifier, name=None):
super(Dumpreader_ifc, self).__init__()
self.dumpidentifier = dumpidentifier
self._name = name

# --- Level 0 methods ---

@abc.abstractmethod
def keys(self):
'''
Returns:
a list of keys that can be used in __getitem__ to read
any information from this dump.
'''
pass

@abc.abstractmethod
def __getitem__(self, key):
'''
access to everything. May return hdf5 objects corresponding to the "key".
'''
pass

# --- Level 1 methods ---

@abc.abstractmethod
def data(self, key):
'''
access to all raw data. Needs to return numpy arrays corresponding to the "key".
'''
pass

@abc.abstractmethod
def gridoffset(self, key, axis):
'''
offset of the beginning of the first cell of the grid.
'''
pass

@abc.abstractmethod
def gridspacing(self, key, axis):
'''
size of one grid cell in the direction "axis".
'''
pass

def gridpoints(self, key, axis):
'''
Number of grid points along "axis". It is highly recommended to override this
method for performance reasons.
'''
warnings.warn('Method "gridpoints(self, key, axis)" is not overridden in datareader. '
'This may impair performance.')
return self.data(key).shape[helper.axesidentify[axis]]

def gridnode(self, key, axis):
'''
The grid nodes along "axis". Grid nodes include the beginning and the end of the grid.
Example: If the grid has 20 grid points, it has 21 grid nodes or grid edges.
'''
offset = self.gridoffset(key, axis)
n = self.gridpoints(key, axis)
return np.linspace(offset,
offset + self.gridspacing(key, axis) * n,
n + 1)

def grid(self, key, axis):
return np.convolve(self.gridnode(key, axis), [0.5, 0.5], mode='valid')
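# example: a grid with 20 grid points has 21 grid nodes; grid() returns the
# 20 cell centers, i.e. the midpoints between neighbouring grid nodes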

# --- Level 2 methods ---

# --- General Information ---
@abc.abstractmethod
def timestep(self):
pass

@abc.abstractmethod
def time(self):
pass

@abc.abstractmethod
def simdimensions(self):
'''
the number of spatial dimensions the simulation was using.
Must be 1, 2 or 3.
'''
pass

# --- Data on Grid ---
# _key[E,B] methods are ONLY used inside the datareader class
@abc.abstractmethod
def _keyE(self, component, **kwargs):
'''
The key where the E field component can be found.
kwargs will be forwarded and can be used here to specify that alternate
keys will be used instead. For example you might have dumped a default Ex
field (no kwargs), but also another one with low resolution (lowres=2)
and another one with low resolution and averaged over some laser periods
(lowres=3, average=100).
The naming of those kwargs is reader specific.
'''
pass

@abc.abstractmethod
def _keyB(self, component, **kwargs):
'''
The key where the B field component can be found.
see _keyE for a description of kwargs.
'''
pass

# if you need to customize more, just skip _key[E,B] methods and
# override the following 4 methods to have full control.
def dataE(self, component, **kwargs):
return np.float64(self.data(self._keyE(component, **kwargs)))

def gridkeyE(self, component, **kwargs):
return self._keyE(component, **kwargs)

def dataB(self, component, **kwargs):
return np.float64(self.data(self._keyB(component, **kwargs)))

def gridkeyB(self, component, **kwargs):
return self._keyB(component, **kwargs)

def _simgridkeys(self):
'''
returns a list of keys that can be tried one after another to determine the grid
that the actual simulation was running on.
This is dirty. Rather override self.simgridpoints and self.simextent
with your own (better performing) implementation.
'''
return []

def simgridpoints(self, axis):
for key in self._simgridkeys():
try:
return self.gridpoints(key, axis)
except KeyError:
pass
raise KeyError

def simextent(self, axis):
'''
returns the extent of the actual simulation box.
Override this in your own reader class with a better performing implementation.
'''
for key in self._simgridkeys():
try:
offset = self.gridoffset(key, axis)
n = self.gridpoints(key, axis)
return np.array([offset, offset + self.gridspacing(key, axis) * n])
except KeyError:
pass
raise KeyError('Unable to resolve "simextent" for axis "{:}"'.format(axis))

def simgridspacing(self, axis):
extent = self.simextent(axis)
return (extent[1]-extent[0])/self.simgridpoints(axis)

# --- Particle Data ---
@abc.abstractmethod
def listSpecies(self):
pass

@abc.abstractmethod
def getSpecies(self, species, attrib):
'''
This function gives access to any of the particle properties in
..helper.attribidentify
This method can behave in the following ways:
1) Return a list of scalar properties for each particle of this species
2) Return a single float (i.e. `1.2`, NOT `[1.2]`) to show that
every particle of this species has the same scalar value for this
property assigned. This is quite often used for charge or mass,
which are defined per species.
3) Raise a KeyError if the requested property or species wasn't dumped.
'''
pass

@property
def name(self):
if self._name:
ret = self._name
else:
ret = str(self.dumpidentifier)
return ret

@name.setter
def name(self, val):
self._name = str(val) if bool(val) else None

def __repr__(self):
return '<Dumpreader at "{:}">'.format(self.dumpidentifier)

def __eq__(self, other):
"""
Two dumpreaders are equal if they represent the same dump.
Assuming both dumpidentifiers are paths to the dumpfiles, simple string comparison
may give "False"
although they both point to the same file:
* ./path/to/file
* path/to/file
* /absolute/path/to/file
Therefore this function tries to interpret the dumpidentifiers as paths/to/files.
In case this is successful and both files exist,
the function checks if they point to the same file.
"""
import os.path as osp
s1 = str(self.dumpidentifier)
s2 = str(other.dumpidentifier)
if osp.isfile(s1) and osp.isfile(s2):
# osp.samefile available under Windows since python 3.2
return osp.samefile(s1, s2)
else:
# seems to be something else than a path to a file
return self.dumpidentifier == other.dumpidentifier


class Simulationreader_ifc(Sequence):
'''
Interface for reading the data of a full Simulation.
Any Simulationreader_ifc implementation will always be initialized using a
simidentifier. This simidentifier can be anything that points to
the data of multiple dumps. In the easiest case this can be the .visit
file.
The Simulationreader_ifc is subclass of Sequence and will
thus behave as a Sequence. The objects in the Sequence are supposed to be
subclassed from Dumpreader_ifc.
It is highly recommended to also override the __str__ function.
Args:
simidentifier : variable type
something identifying a series of dumps.
'''

def __init__(self, simidentifier, name=None):
self.simidentifier = simidentifier
self._name = name

def __getitem__(self, key):
if isinstance(key, slice):
return [self[ii] for ii in range(*key.indices(len(self)))]
elif isinstance(key, int):
if key < 0: # Handle negative indices
key += len(self)
if key >= len(self):
raise IndexError("The index (%d) is out of range." % key)
return self._getDumpreader(key)
else:
raise TypeError("Invalid argument type.")

@abc.abstractmethod
def _getDumpreader(self, number):
'''
:returns: the corresponding Dumpreader.
'''
pass

@property
def name(self):
if self._name:
ret = self._name
else:
ret = str(self.simidentifier)
return ret

@name.setter
def name(self, val):
self._name = str(val) if bool(val) else None

@abc.abstractmethod
def __len__(self):
pass

def __repr__(self):
s = '<Simulationreader initialized with "{:}" ({:} dumps)>'
return s.format(self.simidentifier, len(self))

# Higher Level Functions for usability

def times(self):
return np.array([s.time() for s in self])

def timesteps(self):
return np.array([s.timestep() for s in self])
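A sketch of the Sequence behaviour (assuming the dummy reader, whose Simulationreader implementation appears below; readSim is the high-level function exported in postpic/__init__.py):

import postpic as pp

pp.chooseCode('dummy')
sr = pp.readSim(5)            # a Simulationreader_ifc holding 5 dumps
print(len(sr), sr.times())    # Sequence protocol plus the helpers above
for dump in sr[1:3]:          # slicing returns a list of Dumpreaders
    print(dump.timestep())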
132 changes: 82 additions & 50 deletions postpic/datareader/dummy.py
@@ -20,11 +20,13 @@
Stephan Kuschel 2014
'''
from __future__ import absolute_import, division, print_function, unicode_literals

from . import Dumpreader_ifc
from . import Simulationreader_ifc
import numpy as np
from .. import _const
from .. import helper
from ..helper import PhysicalConstants


class Dummyreader(Dumpreader_ifc):
@@ -38,22 +40,39 @@ class Dummyreader(Dumpreader_ifc):
will pretend to have dumpid many particles).
'''

def __init__(self, dumpid, dimensions=2, **kwargs):
def __init__(self, dumpid, dimensions=2, randfunc=np.random.normal, seed=0, **kwargs):
super(self.__class__, self).__init__(dumpid, **kwargs)
self._dimensions = dimensions
self._seed = seed
self._randfunc = randfunc
# initialize fake data
self._xdata = np.random.normal(size=dumpid)
if seed is not None:
np.random.seed(seed)
self._xdata = randfunc(size=int(dumpid))
if dimensions > 1:
self._ydata = np.random.normal(size=dumpid)
self._ydata = randfunc(size=int(dumpid))
if dimensions > 2:
self._zdata = np.random.normal(size=dumpid)
self._zdata = randfunc(size=int(dumpid))
self._pxdata = np.roll(self._xdata, 1) ** 2 * (PhysicalConstants.me * PhysicalConstants.c)
self._pydata = np.roll(self._xdata, 2) * (PhysicalConstants.me * PhysicalConstants.c)
self._pzdata = np.roll(self._xdata, 3) * (PhysicalConstants.me * PhysicalConstants.c)
self._weights = np.repeat(1, len(self._xdata))
self._ids = np.arange(len(self._xdata))
np.random.shuffle(self._ids)

def keys(self):
pass

def __getitem__(self, key):
pass

def __eq__(self, other):
ret = super(self.__class__, self).__eq__(other)
return ret \
and self._randfunc == other._randfunc \
and self._seed == other._seed \
and self._dimensions == other._dimensions

def timestep(self):
return self.dumpidentifier

@@ -63,8 +82,14 @@ def time(self):
def simdimensions(self):
return self._dimensions

def dataE(self, axis):
axid = _const.axesidentify[axis]
def gridoffset(self, key, axis):
raise NotImplementedError

def gridspacing(self, key, axis):
raise NotImplementedError

def data(self, axis):
axid = helper.axesidentify[axis]

def _Ex(x, y, z):
ret = np.sin(np.pi * self.timestep() *
@@ -85,21 +110,51 @@ def _Ez(x, y, z):
1: _Ey,
2: _Ez}
if self.simdimensions() == 1:
ret = fkts[axid](self.grid('x'), 0, 0)
ret = fkts[axid](self.grid(None, 'x'), 0, 0)
elif self.simdimensions() == 2:
xx, yy = np.meshgrid(self.grid('x'), self.grid('y'), indexing='ij')
xx, yy = np.meshgrid(self.grid(None, 'x'), self.grid(None, 'y'), indexing='ij')
ret = fkts[axid](xx, yy, 0)
elif self.simdimensions() == 3:
xx, yy, zz = np.meshgrid(self.grid('x'),
self.grid('y'),
self.grid('z'), indexing='ij')
xx, yy, zz = np.meshgrid(self.grid(None, 'x'),
self.grid(None, 'y'),
self.grid(None, 'z'), indexing='ij')
ret = fkts[axid](xx, yy, zz)
return ret

def dataB(self, axis):
return 10 * self.dataE(axis)
def _keyE(self, component):
return component

def _keyB(self, component):
return component

def simgridpoints(self, axis):
return len(self.grid(None, axis))

def simextent(self, axis):
g = self.grid(None, axis)
return np.asarray([g[0], g[-1]], dtype=np.float64)

def gridnode(self, key, axis):
'''
Args:
axis : string or int
the axis identifier
Returns: list of grid nodes of the axis specified.
Thus only regular grids are supported currently.
'''
axid = helper.axesidentify[axis]
grids = {1: [(-2, 10, 601)],
2: [(-2, 10, 301), (-5, 5, 401)],
3: [(-2, 10, 101), (-5, 5, 81), (-4, 4, 61)]}
if axid >= self.simdimensions():
raise KeyError('axis ' + str(axis) + ' not present.')
args = grids[self.simdimensions()][axid]
ret = np.linspace(*args)
return ret

def grid(self, key, axis):
'''
Args:
axis : string or int
@@ -109,12 +164,12 @@ def grid(self, axis):
Thus only regular grids are supported currently.
'''
axid = _const.axesidentify[axis]
axid = helper.axesidentify[axis]
grids = {1: [(-2, 10, 600)],
2: [(-2, 10, 300), (-5, 5, 400)],
3: [(-2, 10, 100), (-5, 5, 80), (-4, 4, 60)]}
if axid >= self.simdimensions():
raise IndexError('axis ' + str(axis) + ' not present.')
raise KeyError('axis ' + str(axis) + ' not present.')
args = grids[self.simdimensions()][axid]
ret = np.linspace(*args)
return ret
@@ -123,25 +178,26 @@ def listSpecies(self):
return ['electron']

def getSpecies(self, species, attrib):
attribid = _const.attribidentify[attrib]
attribid = helper.attribidentify[attrib]
if attribid == 0: # x
ret = self._xdata
elif attribid == 1 and self.simdimensions() > 1: # y
ret = self._ydata
elif attribid == 2 and self.simdimensions() > 2: # z
ret = self._zdata
elif attribid == 3: # px
ret = self._xdata ** 2
ret = self._pxdata
elif attribid == 4: # py
ret = self._ydata ** 2 if self.simdimensions() > 1 \
else np.repeat(0, len(self._xdata))
ret = self._pydata
elif attribid == 5: # pz
ret = self._ydata * self._xdata if self.simdimensions() > 1 \
else np.repeat(0, len(self._xdata))
ret = self._pzdata
elif attribid == 9: # weights
ret = np.repeat(1, len(self._xdata))
ret = self._weights
elif attribid == 10: # ids
ret = self._ids
else:
ret = None
raise KeyError('Attrib "' + str(attrib) + '" of species "' +
str(species) + '" not present')
return ret

def __str__(self):
@@ -160,7 +216,7 @@ def __init__(self, simidentifier, dimensions=2, **kwargs):
def __len__(self):
return self.simidentifier

def getDumpreader(self, index):
def _getDumpreader(self, index):
if index < len(self):
return Dummyreader(index, dimensions=self._dimensions)
else:
@@ -171,27 +227,3 @@ def __str__(self):
+ str(self.dumpidentifier) + '">'
ret = ret.format(self._dimensions)
return ret
229 changes: 149 additions & 80 deletions postpic/datareader/epochsdf.py
@@ -14,33 +14,55 @@
# You should have received a copy of the GNU General Public License
# along with postpic. If not, see <http://www.gnu.org/licenses/>.
#
# Stephan Kuschel 2014
# Copyright Stephan Kuschel 2014, 2015
# Alexander Blinne, 2017
'''
.. _EPOCH: http://ccpforge.cse.rl.ac.uk/gf/project/epoch/
.. _EPOCH: https://cfsa-pmw.warwick.ac.uk/EPOCH/epoch
.. _SDF: https://github.com/keithbennett/SDF
Reader for SDF File format written by the EPOCH_ Code.
Reader for SDF_ File format written by the EPOCH_ Code.
Dependencies:
- sdf: The actual python reader for the .sdf file format written in C.
It is part of the EPOCH_ code base and needs to be
compiled and installed from there.
Stephan Kuschel 2014
Written by Stephan Kuschel 2014, 2015
'''
from __future__ import absolute_import, division, print_function, unicode_literals

from . import Dumpreader_ifc
from . import Simulationreader_ifc
import numpy as np
import re
from .. import _const
from .. import helper
from warnings import warn
from packaging.version import parse as parse_version

__all__ = ['Sdfreader', 'Visitreader']


# The default staggering of the Grid.
# Newer versions of EPOCH dump it (certainly v4.9.2)
# older versions don't. The defaults will only be used
# if the stagger was not found in the dump.
_default_stagger = {'Electric Field/Ex': 1,
'Electric Field/Ey': 2,
'Electric Field/Ez': 4,
'Magnetic Field/Bx': 6,
'Magnetic Field/By': 5,
'Magnetic Field/Bz': 3,
'Current/Jx': 1,
'Current/Jy': 2,
'Current/Jz': 4}
_default_stagger.update({k+'_averaged': v for k, v in _default_stagger.items()})
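# each value is a bitmask: bit 0 set means staggered along x, bit 1 along y,
# bit 2 along z; e.g. 'Magnetic Field/Bx': 6 (binary 110) is staggered along
# y and z, matching the usual Yee grid layout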


class Sdfreader(Dumpreader_ifc):
'''
The Reader implementation for Data written by the EPOCH_ Code
in .sdf format.
in .sdf format. Written for SDF v2.2.0 or higher.
SDF_ can be obtained without EPOCH_ from SDF_.
Args:
sdffile : String
@@ -51,15 +73,70 @@ def __init__(self, sdffile, **kwargs):
super(self.__class__, self).__init__(sdffile, **kwargs)
import os.path
import sdf
try:
sdfversion = sdf.__version__
except AttributeError:
sdfversion = '0.0.0'
if parse_version(sdfversion) < parse_version('2.2.0'):
raise ImportError('Upgrade sdf package to 2.2.0 or higher.')
if not os.path.isfile(sdffile):
raise IOError('File "' + str(sdffile) + '" doesnt exist.')
self._data = sdf.SDF(sdffile).read()
self._sdffile = sdffile
self._sdfreader = sdf.read(sdffile, dict=True)

# --- Level 0 methods ---

def keys(self):
return self._data.keys()
return list(self._sdfreader.keys())

def __getitem__(self, key):
return self._data[key]
return self._sdfreader[key]

def dumpsize(self):
'''
returns the file size of the sdf file in bytes.
'''
import os
return os.path.getsize(self._sdffile)

# --- Level 1 methods ---

def data(self, key):
return self[key].data

def gridoffset(self, key, axis):
axid = helper.axesidentify[axis]
dx = self.gridspacing(key, axis)
if hasattr(self[key], 'stagger'):
# best case: stagger is saved
stagger = self[key].stagger
elif key in _default_stagger:
stagger = _default_stagger[key]
elif key.startswith('Derived/'):
# c_stagger_cell_centre in EPOCH code
stagger = 0
else:
warn('Stagger of "{:}" could not be found. '
'Assuming no stagger (that is cell center).'.format(key))
stagger = 0

staggered = stagger & (1 << axid)
if staggered:
return self[key].grid_mid.data[axid][0] - dx/2.0
else:
return self[key].grid.data[axid][0] - dx/2.0

def gridspacing(self, key, axis):
axid = helper.axesidentify[axis]
grid = self[key].grid
extent = float(grid.extents[axid + len(grid.dims)] - grid.extents[axid])
return extent / (grid.dims[axid]-1)

def gridpoints(self, key, axis):
axid = helper.axesidentify[axis]
return self[key].dims[axid]

# --- Level 2 methods ---

def timestep(self):
return self['Header']['step']
@@ -68,74 +145,90 @@ def time(self):
return np.float64(self['Header']['time'])

def simdimensions(self):
return float(re.match('Epoch(\d)d',
self['Header']['code_name']).group(1))
return int(re.match(r'Epoch(\d)d', self['Header']['code_name']).group(1))

def _returnkey2(self, key1, key2, average=False):
key = key1 + key2
def _keyE(self, component, average=False):
axsuffix = {0: 'x', 1: 'y', 2: 'z'}[helper.axesidentify[component]]
ret = 'Electric Field/E' + axsuffix
if average:
key = key1 + '_average' + key2
return self[key]

def dataE(self, axis, **kwargs):
axsuffix = {0: 'x', 1: 'y', 2: 'z'}[_const.axesidentify[axis]]
return np.float64(self._returnkey2('Electric Field', '/E' +
axsuffix, **kwargs))
ret += '_averaged'
return ret

def dataB(self, axis, **kwargs):
axsuffix = {0: 'x', 1: 'y', 2: 'z'}[_const.axesidentify[axis]]
return np.float64(self._returnkey2('Magnetic Field', '/B' +
axsuffix, **kwargs))
def _keyB(self, component, average=False):
axsuffix = {0: 'x', 1: 'y', 2: 'z'}[helper.axesidentify[component]]
ret = 'Magnetic Field/B' + axsuffix
if average:
ret += '_averaged'
return ret

def grid(self, axis):
axsuffix = {0: 'X', 1: 'Y', 2: 'Z'}[_const.axesidentify[axis]]
return self['Grid/Grid/' + axsuffix]
def simextent(self, axis):
'''
Returns the extent of the actual simulation box.
'''
m = self['Grid/Grid']
extents = m.extents
dims = len(m.dims)
axid = helper.axesidentify.get(axis)
if axid is None or axid + dims >= len(extents):
s = 'Axis "{}" is not an axis of the simulation box. Extents are: "{}".'
raise KeyError(s.format(axis, extents))
return np.array([extents[axid], extents[axid + dims]])

def simgridpoints(self, axis):
'''
Returns the number of grid points of the actual simulation.
'''
mesh = self['Grid/Grid']
axid = helper.axesidentify[axis]
return mesh.dims[axid] - 1

def listSpecies(self):
ret = []
for key in self.keys():
match = re.match('Particles/Px/(\w+)', key)
ret = set()
for key in list(self.keys()):
match = re.match(r'Particles/\w+/([\w-]+(/[\w-]+)?)', key)
if match:
ret = np.append(ret, match.group(1))
ret.add(match.group(1))
ret = list(ret)
ret.sort()
return ret

def getSpecies(self, species, attrib):
"""
Returns one of the attributes out of (x,y,z,px,py,pz,weight,ID) of
Returns one of the attributes out of (x,y,z,px,py,pz,weight,ID,mass,charge) of
this particle species.
returning None means that this particle property wasn't dumped.
Note that this is different from returning an empty list!
raises KeyError if the requested species or property wasn't dumped.
"""
attribid = _const.attribidentify[attrib]
options = {9: lambda s: 'Particles/Weight/' + s,
0: lambda s: 'Grid/Particles/' + s + '/X',
1: lambda s: 'Grid/Particles/' + s + '/Y',
2: lambda s: 'Grid/Particles/' + s + '/Z',
3: lambda s: 'Particles/Px/' + s,
4: lambda s: 'Particles/Py/' + s,
5: lambda s: 'Particles/Pz/' + s,
10: lambda s: 'Particles/ID/' + s}
attribid = helper.attribidentify[attrib]
options = {9: lambda s: self['Particles/Weight/' + s].data,
0: lambda s: self['Grid/Particles/' + s].data[0],
1: lambda s: self['Grid/Particles/' + s].data[1],
2: lambda s: self['Grid/Particles/' + s].data[2],
3: lambda s: self['Particles/Px/' + s].data,
4: lambda s: self['Particles/Py/' + s].data,
5: lambda s: self['Particles/Pz/' + s].data,
10: lambda s: self['Particles/ID/' + s].data,
11: lambda s: self['Particles/Mass/' + s].data,
12: lambda s: self['Particles/Charge/' + s].data}
try:
ret = np.float64(self[options[attribid](species)])
except(KeyError):
ret = None
ret = options[attribid](species)
except IndexError:
raise KeyError('Attribute "{}" of species "{}" not found.'.format(attrib, species))
return ret

def getderived(self):
'''
Returns all Keys starting with "Derived/".
'''
ret = []
for key in self._data.keys():
r = re.match('Derived/[\w/ ]*', key)
for key in list(self.keys()):
r = re.match(r'Derived/[\w/ ]*', key)
if r:
ret.append(r.group(0))
ret.sort()
return ret

def __str__(self):
return '<Sdfreader at "' + str(self.dumpidentifier) + '">'
def __repr__(self):
return '<Sdfreader at "{:}">'.format(self.dumpidentifier)


class Visitreader(Simulationreader_ifc):
@@ -154,40 +247,16 @@ def __init__(self, visitfile, dumpreadercls=Sdfreader, **kwargs):
raise IOError('File "' + str(visitfile) + '" doesnt exist.')
self._dumpfiles = []
with open(visitfile) as f:
relpath = os.path.dirname(visitfile)
path = os.path.dirname(os.path.abspath(visitfile))
for line in f:
self._dumpfiles.append(os.path.join(relpath,
line.replace('\n', '')))
self._dumpfiles.append(os.path.join(path,
line.replace('\n', '')))

def __len__(self):
return len(self._dumpfiles)

def getDumpreader(self, index):
def _getDumpreader(self, index):
return self.dumpreadercls(self._dumpfiles[index])

def __str__(self):
return '<Visitreader at "' + self.visitfile + '">'
def __repr__(self):
return '<Visitreader at "{:}" ({:} dumps)>'.format(self.visitfile, len(self))
375 changes: 375 additions & 0 deletions postpic/datareader/openPMDh5.py
@@ -0,0 +1,375 @@
#
# This file is part of postpic.
#
# postpic is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# postpic is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with postpic. If not, see <http://www.gnu.org/licenses/>.
#
# Copyright Stephan Kuschel, 2018-2019
'''
.. _openPMD: https://github.com/openPMD/openPMD-standard
Support for hdf5 files following the openPMD_ Standard.
Dependencies:
- h5py: read hdf5 files with python
Written by Stephan Kuschel 2016
'''
from __future__ import absolute_import, division, print_function, unicode_literals

from . import Dumpreader_ifc
from . import Simulationreader_ifc
import numpy as np
import re
from .. import helper
from ..helper_fft import fft

__all__ = ['OpenPMDreader', 'FileSeries',
'FbpicReader', 'FbpicFileSeries']


class OpenPMDreader(Dumpreader_ifc):
'''
The Reader implementation for Data written in the hdf5 file
format following openPMD_ naming conventions.
Args:
h5file : String
A string containing the (relative) path to the .h5 file.
Kwargs:
iteration: Integer
An integer indicating the iteration to be loaded. Default is None, leading
to the first iteration found in the h5file.
'''

def __init__(self, h5file, iteration=None, **kwargs):
super(OpenPMDreader, self).__init__(h5file, **kwargs)
import os.path
import h5py
if not os.path.isfile(h5file):
raise IOError('File "' + str(h5file) + '" doesnt exist.')
self._h5 = h5py.File(h5file, 'r')
self._iteration = iteration
if self._iteration is None:
self._iteration = int(list(self._h5['data'].keys())[0])
self._data = self._h5['/data/{:d}/'.format(self._iteration)]
self.attrs = self._data.attrs

def __del__(self):
del self._data

# --- Level 0 methods ---

def keys(self):
return list(self._data.keys())

def __getitem__(self, key):
return self._data[key]

# --- Level 1 methods ---

def data(self, key):
'''
Works with any key that contains data, i.e. with every h5py.Dataset,
but not with an h5py.Group. Extracts the data, converts it to SI units
and returns it as a numpy array. Constant records are detected and
converted to a numpy array containing a single value only.
'''
record = self[key]
if "value" in record.attrs:
# constant data (a single int or float)
ret = np.float64(record.attrs['value']) * record.attrs['unitSI']
else:
# array data
ret = np.float64(record[()]) * record.attrs['unitSI']
return ret
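# Usage sketch (hypothetical file and record names) covering both cases above:
#   dr = OpenPMDreader('data00000100.h5')
#   ez = dr.data('fields/E/z')               # dataset record -> full array in SI units
#   m = dr.data('particles/electrons/mass')  # constant record -> single-value array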

def gridoffset(self, key, axis):
axid = helper.axesidentify[axis]
if "gridUnitSI" in self[key].attrs:
attrs = self[key].attrs
else:
attrs = self[key].parent.attrs
return attrs['gridGlobalOffset'][axid] * attrs['gridUnitSI']

def gridspacing(self, key, axis):
axid = helper.axesidentify[axis]
if "gridUnitSI" in self[key].attrs:
attrs = self[key].attrs
else:
attrs = self[key].parent.attrs
return attrs['gridSpacing'][axid] * attrs['gridUnitSI']

def gridpoints(self, key, axis):
axid = helper.axesidentify[axis]
return self[key].shape[axid]
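# Minimal sketch (hypothetical key) of how the three grid methods above
# combine to reconstruct a spatial axis:
#   n = dr.gridpoints('fields/E/z', 'z')
#   z = dr.gridoffset('fields/E/z', 'z') + dr.gridspacing('fields/E/z', 'z') * np.arange(n)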

# --- Level 2 methods ---

def timestep(self):
return self._iteration

def time(self):
return np.float64(self.attrs['time'] * self.attrs['timeUnitSI'])

def simdimensions(self):
'''
the number of spatial dimensions the simulation was using.
'''
for k in self._simgridkeys():
try:
gs = self.gridspacing(k, None)
return len(gs)
except KeyError:
pass
raise KeyError('number of simdimensions could not be retrieved for {}'.format(self))

def _keyE(self, component, **kwargs):
axsuffix = {0: 'x', 1: 'y', 2: 'z', 90: 'r', 91: 't'}[helper.axesidentify[component]]
return 'fields/E/{}'.format(axsuffix)

def _keyB(self, component, **kwargs):
axsuffix = {0: 'x', 1: 'y', 2: 'z', 90: 'r', 91: 't'}[helper.axesidentify[component]]
return 'fields/B/{}'.format(axsuffix)

def _simgridkeys(self):
return ['fields/E/x', 'fields/E/y', 'fields/E/z',
'fields/B/x', 'fields/B/y', 'fields/B/z']

def listSpecies(self):
ret = list(self['particles'].keys())
return ret

def getSpecies(self, species, attrib):
"""
Returns one of the attributes out of (x,y,z,px,py,pz,weight,ID,mass,charge) of
this particle species.
"""
attribid = helper.attribidentify[attrib]
options = {9: 'particles/{}/weighting',
0: 'particles/{}/position/x',
1: 'particles/{}/position/y',
2: 'particles/{}/position/z',
3: 'particles/{}/momentum/x',
4: 'particles/{}/momentum/y',
5: 'particles/{}/momentum/z',
10: 'particles/{}/id',
11: 'particles/{}/mass',
12: 'particles/{}/charge'}
optionsoffset = {0: 'particles/{}/positionOffset/x',
1: 'particles/{}/positionOffset/y',
2: 'particles/{}/positionOffset/z'}
key = options[attribid]
offsetkey = optionsoffset.get(attribid)
try:
data = self.data(key.format(species))
if offsetkey is not None:
data += self.data(offsetkey.format(species))
ret = np.asarray(data, dtype=np.float64)
except IndexError:
raise KeyError
return ret
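# Usage sketch (hypothetical species name): positions are assembled from
# position + positionOffset and returned in SI units.
#   x = dr.getSpecies('electrons', 'x')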

def getderived(self):
'''
return all other fields dumped, except E and B.
'''
ret = []
self['fields'].visit(ret.append)
ret = ['fields/{}'.format(r) for r in ret if not (r.startswith('E') or r.startswith('B'))]
# datasets have a 'shape', groups do not (h5py 3 removed Dataset.value)
ret = [r for r in ret if hasattr(self[r], 'shape')]
ret.sort()
return ret

def __str__(self):
return '<OpenPMDh5reader at "' + str(self.dumpidentifier) + '">'


class FbpicReader(OpenPMDreader):
'''
Special OpenPMDreader for FBpic, which expands the fields into azimuthal modes.
This subclass of the OpenPMDreader converts the modes back to
a spatial (r, theta, z) representation.
'''
def __init__(self, simidentifier, **kwargs):
super(FbpicReader, self).__init__(simidentifier, **kwargs)

@staticmethod
def modeexpansion(rawdata, theta=None, Ntheta=None):
'''
rawdata has to be shaped (Nm, Nr, Nz).
Returns an array of shape (Nr, Ntheta, Nz), with
`Ntheta = (Nm+1)//2`. If Ntheta is given, only larger
values are permitted.
The corresponding values for theta are given by
`np.linspace(0, 2*np.pi, Ntheta, endpoint=False)`
'''
rawdata = np.asarray(rawdata)
Nm, Nr, Nz = rawdata.shape
if Ntheta is not None or theta is None:
return FbpicReader._modeexpansion_fft(rawdata, Ntheta=Ntheta)
else:
return FbpicReader._modeexpansion_naiv(rawdata, theta=theta)
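# Shape sketch (hypothetical numbers): 5 stored modes on a (5, 128, 256)
# rawdata array expand to Ntheta = (5+1)//2 = 3 angles:
#   raw = np.zeros((5, 128, 256))
#   FbpicReader.modeexpansion(raw).shape   # -> (128, 3, 256)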

@staticmethod
def _modeexpansion_naiv_single(rawdata, theta=0):
'''
The mode representation will be expanded for a given theta.
rawdata has to have the shape (Nm, Nr, Nz).
the returned array will be of shape (Nr, Nz).
'''
rawdata = np.float64(rawdata)
(Nm, Nr, Nz) = rawdata.shape
mult_above_axis = [1]
for mode in range(1, (Nm+1)//2):
cos = np.cos(mode * theta)
sin = np.sin(mode * theta)
mult_above_axis += [cos, sin]
mult_above_axis = np.float64(mult_above_axis)
F_total = np.tensordot(mult_above_axis,
rawdata, axes=(0, 0))
assert F_total.shape == (Nr, Nz), \
'''
Assertion error. Please open a new issue on github to report this.
shape={}, Nr={}, Nz={}
'''.format(F_total.shape, Nr, Nz)
return F_total

@staticmethod
def _modeexpansion_naiv(rawdata, theta=0):
'''
expands the modes into spatial data using `_modeexpansion_naiv_single`,
possibly for multiple values of theta at once.
'''
if np.asarray(theta).shape == ():
# single theta
theta = [theta]
# multiple theta
data = np.asarray([FbpicReader._modeexpansion_naiv_single(rawdata, theta=t)
for t in theta])
# switch from (theta, r, z) to (r, theta, z)
data = data.swapaxes(0, 1)
return data

@staticmethod
def _modeexpansion_fft(rawdata, Ntheta=None):
'''
calculates the spatial data using an fft. This is by far the fastest
way to do the mode expansion.
'''
Nm, Nr, Nz = rawdata.shape
Nth = (Nm+1)//2
if Ntheta is None or Ntheta < Nth:
Ntheta = Nth
# zeros, not empty: the imaginary part of mode 0 and any padding modes
# beyond Nth are never written below and must be zero before the fft.
fd = np.zeros((Nr, Ntheta, Nz), dtype=np.complex128)

fd[:, 0, :].real = rawdata[0, :, :]
rawdatasw = np.swapaxes(rawdata, 0, 1)
fd[:, 1:Nth, :].real = rawdatasw[:, 1::2, :]
fd[:, 1:Nth, :].imag = rawdatasw[:, 2::2, :]
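# The packing above stores mode 0 in the dc component and the cos/sin
# pair of each higher mode as real/imag parts, so a single complex fft
# along the theta axis evaluates all modes at once.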

fd = fft.fft(fd, axis=1).real
return fd

# override inherited method to map the (r, theta, z) axes onto the stored data
def gridoffset(self, key, axis):
axid = helper.axesidentify[axis]
if axid == 91: # theta
return 0
else:
# r, theta, z
axidremap = {90: 0, 2: 1}[axid]
return super(FbpicReader, self).gridoffset(key, axidremap)

# override inherited method to map the (r, theta, z) axes onto the stored data
def gridspacing(self, key, axis):
axid = helper.axesidentify[axis]
if axid == 91: # theta
return 2 * np.pi / self.gridpoints(key, axis)
else:
# r, theta, z
axidremap = {90: 0, 2: 1}[axid]
return super(FbpicReader, self).gridspacing(key, axidremap)

# override inherited method to count points after mode expansion
def gridpoints(self, key, axis):
axid = helper.axesidentify[axis]
axid = axid % 90 # for r and theta
(Nm, Nr, Nz) = self[key].shape
# Ntheta does technically not exists because of the mode
# representation. To do a proper conversion from the modes to
# the grid, choose Ntheta based on the number of modes.
Ntheta = (Nm + 1) // 2
return (Nr, Ntheta, Nz)[axid]

# override
def _defaultaxisorder(self, gridkey):
return ('r', 'theta', 'z')

# override from OpenPMDreader
def data(self, key, **kwargs):
raw = super(FbpicReader, self).data(key) # SI conversion
if key.startswith('particles'):
return raw
# for fields expand the modes into a spatial grid first:
data = self.modeexpansion(raw, **kwargs) # modeexpansion
return data

def dataE(self, component, theta=None, Ntheta=None, **kwargs):
return self.data(self._keyE(component, **kwargs), theta=theta, Ntheta=Ntheta)

def dataB(self, component, theta=None, Ntheta=None, **kwargs):
return self.data(self._keyB(component, **kwargs), theta=theta, Ntheta=Ntheta)

# override
def __str__(self):
return '<FbpicReader at "' + str(self.dumpidentifier) + '">'


class FileSeries(Simulationreader_ifc):
'''
Reads a time series of dumps from a given directory.
The simidentifier is expanded using glob in order to
find matching files.
'''

def __init__(self, simidentifier, dumpreadercls=OpenPMDreader, **kwargs):
super(FileSeries, self).__init__(simidentifier, **kwargs)
self.dumpreadercls = dumpreadercls
import glob
self._dumpfiles = glob.glob(simidentifier)
self._dumpfiles.sort()

def _getDumpreader(self, n):
'''
Do not use this method. It will be called by __getitem__.
Use __getitem__ instead.
'''
return self.dumpreadercls(self._dumpfiles[n])

def __len__(self):
return len(self._dumpfiles)

def __str__(self):
return '<FileSeries at "' + self.simidentifier + '">'
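# Usage sketch (hypothetical glob pattern; assumes Simulationreader_ifc
# provides __getitem__ via _getDumpreader):
#   fs = FileSeries('diags/hdf5/*.h5')
#   len(fs)        # number of dumps found
#   dump = fs[0]   # an OpenPMDreader for the first dump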


class FbpicFileSeries(FileSeries):
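'''
A FileSeries that opens each dump with the FbpicReader.
'''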

def __init__(self, *args, **kwargs):
super(FbpicFileSeries, self).__init__(*args, **kwargs)
self.dumpreadercls = FbpicReader
404 changes: 404 additions & 0 deletions postpic/datareader/smileih5.py
183 changes: 183 additions & 0 deletions postpic/datareader/vsimhdf5.py
121 changes: 121 additions & 0 deletions postpic/experimental.py
1,378 changes: 1,378 additions & 0 deletions postpic/helper.py
95 changes: 95 additions & 0 deletions postpic/helper_fft.py
78 changes: 78 additions & 0 deletions postpic/io/__init__.py
39 changes: 39 additions & 0 deletions postpic/io/common.py
61 changes: 61 additions & 0 deletions postpic/io/csv.py
128 changes: 128 additions & 0 deletions postpic/io/image.py
116 changes: 116 additions & 0 deletions postpic/io/npy.py
364 changes: 364 additions & 0 deletions postpic/io/vtk.py
37 changes: 37 additions & 0 deletions postpic/particles/__init__.py
465 changes: 465 additions & 0 deletions postpic/particles/_particlestogrid.pyx
321 changes: 321 additions & 0 deletions postpic/particles/_routines.py
1,281 changes: 1,281 additions & 0 deletions postpic/particles/particles.py
230 changes: 230 additions & 0 deletions postpic/particles/scalarproperties.py
6 changes: 4 additions & 2 deletions postpic/plotting/__init__.py
169 changes: 112 additions & 57 deletions postpic/plotting/plotter_matplotlib.py
50 changes: 34 additions & 16 deletions pre-commit
25 changes: 0 additions & 25 deletions run-tests

This file was deleted.

136 changes: 136 additions & 0 deletions run-tests.py
11 changes: 11 additions & 0 deletions setup.cfg
54 changes: 47 additions & 7 deletions setup.py
56 changes: 0 additions & 56 deletions test/test_analyzer.py

This file was deleted.

779 changes: 724 additions & 55 deletions test/test_datahandling.py
100644 → 100755
10 changes: 5 additions & 5 deletions test/test_dumpreader.py
100644 → 100755
18 changes: 18 additions & 0 deletions test/test_field_calc.py
20 changes: 0 additions & 20 deletions test/test_fields.py

This file was deleted.

242 changes: 242 additions & 0 deletions test/test_helper.py
74 changes: 74 additions & 0 deletions test/test_io.py
96 changes: 87 additions & 9 deletions test/test_particles.py
100644 → 100755
376 changes: 376 additions & 0 deletions test/test_particlestogrid.py
53 changes: 53 additions & 0 deletions update-gh-pages-branch.sh
2,277 changes: 2,277 additions & 0 deletions versioneer.py