Memory problem #301
Answered by ThomasLecocq, Jun 5, 2023
First questions:
In recent scipy versions, numpy's FFT is called under the hood, so there should be no added memory issue. There could be a ghost from the past, when the scipy/numpy FFT cached memory blocks and never released them, but I don't think that's it...
If your input data is e.g. 100 Hz, going to 44 Hz forces the use of the Lanczos resampler, which is computationally heavy. You could instead set the CC_sampling_rate to 50 Hz and simply decimate.
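A minimal sketch of why the target rate matters, using scipy as a stand-in (this is not MSNoise's internal code, and `resample_poly` stands in for the heavier Lanczos interpolation): an integer ratio like 100→50 Hz needs only a cheap decimation, while a non-integer ratio like 100→44 Hz requires a true resampler.

```python
import numpy as np
from scipy.signal import decimate, resample_poly

x = np.random.randn(1000)  # 10 s of data at 100 Hz

# 100 Hz -> 50 Hz: integer ratio, so decimation (with its built-in
# anti-alias filter) is enough and cheap.
y50 = decimate(x, q=2)

# 100 Hz -> 44 Hz: non-integer ratio (44/100 = 11/25), so a real
# resampler is needed; polyphase resampling here, Lanczos in MSNoise.
y44 = resample_poly(x, up=11, down=25)

print(len(y50))  # 500 samples -> 50 Hz
print(len(y44))  # 440 samples -> 44 Hz
```

The integer-ratio path does one filter-and-keep-every-qth-sample pass; the non-integer path must interpolate between samples, which is where the extra CPU (and memory) goes.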
I have no idea what "load average" means in your case.
The development version ("master" on GitHub) is fully functional, BUT it is not compatible with your existing database, because it adds new elements (support for location codes, channel names, etc.).
Are you using many filters? Many components (other than ZZ)? Cross-station AND single-station pairs? etc... All of those add elements in memory during computation, and they are only output to disk after all slides are finished. So yes, the memory footprint can grow…
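A hypothetical back-of-envelope estimate of that multiplicative growth (all numbers below are illustrative, not MSNoise defaults): the in-memory stack scales as filters × component pairs × station pairs × lag samples.

```python
# Illustrative configuration (hypothetical values)
n_filters = 4            # number of filter bands
n_components = 9         # ZZ only -> 1; all component pairs -> 9
n_station_pairs = 55     # e.g. 10 stations: 45 cross + 10 single-station
n_lags = 2 * 120 * 20 + 1  # +/-120 s of lags at 20 Hz CC sampling rate
bytes_per_sample = 8     # float64

# Everything held in memory until the slides are written out
total_bytes = (n_filters * n_components * n_station_pairs
               * n_lags * bytes_per_sample)
print(f"{total_bytes / 1024**2:.1f} MiB of CCFs kept in memory")
```

Doubling the filters or enabling all nine component pairs multiplies that figure directly, which is consistent with the growing footprint described above.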