Precompute summation indices for the model expressions #350
Conversation
Codecov Report (attention: coverage decreased)

@@ Coverage Diff @@
##              main     #350      +/-   ##
===========================================
- Coverage   100.00%   99.77%   -0.23%
===========================================
  Files           10       10
  Lines          405      450      +45
===========================================
+ Hits           405      449      +44
- Misses           0        1       +1
PS, the profiling has to be run locally with the EU data.
Have to admit I don't really understand it. Maybe good to have Diego or Ni review?
src/model.jl (Outdated)
I'll take your word for it, but is it really faster to use nested loops? What were we using before?
Please don't take my word for it! It would be great if you could run the profiling code to check for the speedup, so it isn't unilaterally informed by me. I can help with how to run it if you want to do it, or you can leave it to Diego or Ni.
This is, allegedly, faster, because the `sum(... if ...)` structure inside the JuMP `@expression` is slow. It is faster to create the sets outside the loop and then just use them. Furthermore, the loops inside the `@expression` were a double loop over non-contiguous values; now the loop inside the `@expression` is simpler.
The nested loops above are slow, but apparently not as slow as the older code.
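To make the idea concrete, here is a minimal sketch of the precomputation pattern. This is not the actual src/model.jl code; all names (`flows`, `incoming`, `a`, `t`) are hypothetical, and the JuMP part is only shown in comments so the sketch stays self-contained.

```julia
# Illustrative sketch: precompute the summation indices once, instead of
# filtering with `sum(... if ...)` inside every JuMP `@expression`.
# All names here are hypothetical, not the actual model code.

# Each flow is (from_asset, to_asset, time_step).
flows = [(:wind, :demand, 1), (:wind, :demand, 2), (:solar, :demand, 1)]

# Precompute, for each (asset, time_step) pair, the flow indices that
# match, so the expression-building loop can look them up directly.
incoming = Dict{Tuple{Symbol,Int},Vector{Int}}()
for (i, (from, _, t)) in enumerate(flows)
    push!(get!(incoming, (from, t), Int[]), i)
end

# Inside the model build, instead of something like
#   @expression(model, sum(flow[i] for i in eachindex(flows)
#                          if flows[i][1] == a && flows[i][3] == t))
# one can write
#   @expression(model, sum(flow[i] for i in incoming[(a, t)]))
# which avoids re-scanning all of `flows` for every (a, t) pair.
```

The lookup table turns a repeated O(number of flows) filter into a single pass up front plus O(1) dictionary lookups inside the expression-building loop.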
In the SpineOpt model (MOPO project), they follow this same strategy: a function creates the indices for the constraints as a dictionary, which is then used to create the constraints in the model. That's the basic idea, but they have a different structure for the temporal resolution, so their functions to find the indices are quite different.
So far... ;)
Results copied from my fork (abelsiqueira#6 (comment)): Benchmark Results
Closing in favor of #408
Pull request details
Describe the changes made in this pull request
To avoid using conditionals inside summations in the expressions, we precompute them.
We also precompute the duration.
This speeds up the code, but is it readable enough?
List of related issues or pull requests
Related to #294
Collaboration confirmation
As a contributor I confirm