Discussion model benchmarks #2462
brosaplanella started this conversation in General
This issue is to discuss ideas around standard model benchmarks. Here is my suggestion, as a starting point for the discussion. The main idea is to group benchmarks for a specific model, and by a specific model I mean one with all the options (e.g. thermal, degradation, ...) fixed. Then, for each model, we test various settings (for example, parameter sets and mesh sizes).
I think for the moment we can just change one setting at a time (the default could be the first setting of each group); a sketch of this grouping follows below. We can later think about how to sweep two or more variables at once (e.g. parameter sets and mesh sizes).
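For concreteness, the groups of settings could live in a simple mapping, with the default listed first in each group. This is only a sketch: the group names and values are illustrative (the parameter-set names are taken from PyBaMM, the mesh sizes are arbitrary):

```python
# Hypothetical grouping of benchmark settings. When varying one group,
# all other groups stay at their default (the first entry).
SETTINGS = {
    "parameter set": ["Chen2020", "Marquis2019"],
    "mesh points per domain": [20, 40, 80],
}
```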
In terms of implementation, we can define a base class to which we pass the model we want to study, and all benchmarks can inherit from it; see the sketches after this paragraph. If we keep the notation consistent, it should be fairly easy to write a script to process the JSON files.
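Here is a minimal sketch of what the base class could look like, assuming we write the benchmarks in the airspeed velocity (asv) style (asv stores its results as JSON, which fits the processing idea above). All class, attribute, and option names are hypothetical, just to make the idea concrete:

```python
import pybamm


class BenchmarkModel:
    """Hypothetical asv-style base class for model benchmarks.

    Subclasses declare the model and its fixed options; the settings to
    vary (one at a time, default first) go in ``params``.
    """

    # asv parametrisation: setup/time_* are called once per entry
    param_names = ["parameter set"]
    params = [["Chen2020", "Marquis2019"]]

    model_class = None  # set in subclasses
    model_options = {}  # fixed options, e.g. {"thermal": "lumped"}

    def setup(self, parameter_set):
        if self.model_class is None:
            # asv skips a benchmark whose setup raises NotImplementedError,
            # so the base class itself is never timed
            raise NotImplementedError
        model = self.model_class(options=self.model_options)
        parameter_values = pybamm.ParameterValues(parameter_set)
        self.sim = pybamm.Simulation(model, parameter_values=parameter_values)

    def time_solve(self, parameter_set):
        # asv times any method whose name starts with ``time_``
        self.sim.solve([0, 3600])


class BenchmarkSPM(BenchmarkModel):
    model_class = pybamm.lithium_ion.SPM


class BenchmarkDFN(BenchmarkModel):
    model_class = pybamm.lithium_ion.DFN
```

Each concrete benchmark then only declares the model (and, if needed, its fixed options), and the parametrisation takes care of running one setting at a time.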
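And a rough sketch of the processing script. I am assuming asv's results layout here (one JSON file per machine and commit under `results/`); the exact schema varies between asv versions, so treat this as indicative only:

```python
import json
from pathlib import Path

results_dir = Path("results")

# Each machine gets a subdirectory; each benchmarked commit writes one JSON file.
for path in results_dir.glob("*/*.json"):
    if path.name == "machine.json":
        continue  # per-machine metadata, not benchmark results
    data = json.loads(path.read_text())
    for name, result in data.get("results", {}).items():
        print(f"{path.stem}: {name} -> {result}")
```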
Thoughts?
Replies: 2 comments

- What's the plan with this?

- I guess it is on hold (I opened this issue more as a discussion rather than a specific task). That said, I realise we should probably convert it into a discussion and follow it up there.