
Precompute summation indices for the model expressions #350

Conversation

abelsiqueira
Member

Pull request details

Describe the changes made in this pull request

To avoid using conditionals inside the summations in the expressions, we precompute the summation indices.
We also precompute the duration.
This speeds up the code, but is it readable enough?
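The idea can be sketched as follows (a minimal illustration with invented data and a hypothetical set name `flows_into`; the actual sets built in `src/create-model.jl` are different):

```julia
using JuMP

assets = ["a1", "a2"]
flows = [("f1", "a1"), ("f2", "a1"), ("f3", "a2")]
time_steps = 1:3

model = Model()
@variable(model, flow[f in first.(flows), t in time_steps] >= 0)

# Before: the filter runs inside the summation, scanning all flows
# for every asset while the expression is being built.
slow = @expression(
    model,
    [a in assets],
    sum(flow[f, t] for (f, dst) in flows if dst == a for t in time_steps)
)

# After: precompute the summation indices once, outside the expression,
# then iterate only over the relevant flows.
flows_into = Dict(a => [f for (f, dst) in flows if dst == a] for a in assets)
fast = @expression(
    model,
    [a in assets],
    sum(flow[f, t] for f in flows_into[a], t in time_steps)
)
```

Both expressions are mathematically identical; only the work done while JuMP builds them differs.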

List of related issues or pull requests

Related to #294

Collaboration confirmation

As a contributor I confirm

  • I read and followed the instructions in README.dev.md
  • The documentation is up to date with the changes introduced in this Pull Request (or NA)
  • Tests are passing
  • Lint is passing


codecov bot commented Dec 11, 2023

Codecov Report

Attention: 1 line in your changes is missing coverage. Please review.

Comparison: base (463079b) 100.00% vs. head (0db252a) 99.77%.
The report is 1 commit behind head on main.

Files                 Patch %   Lines
src/create-model.jl   97.95%    1 Missing ⚠️
Additional details and impacted files
@@             Coverage Diff             @@
##              main     #350      +/-   ##
===========================================
- Coverage   100.00%   99.77%   -0.23%     
===========================================
  Files           10       10              
  Lines          405      450      +45     
===========================================
+ Hits           405      449      +44     
- Misses           0        1       +1     


@abelsiqueira
Member Author

PS, the profiling has to be run locally with the EU data.

Member

@clizbe clizbe left a comment


Have to admit I don't really understand it. Maybe good to have Diego or Ni review?

src/model.jl Outdated
Member


I'll take your word for it, but is it really faster to use nested loops? What were we using before?

Member Author


Please don't take my word for it! It would be great if you could run the profiling code to check the speed-up yourself, so the conclusion isn't informed solely by me. I can help with how to run it if you want to do it, or you can leave it to Diego or Ni.
This is, allegedly, faster because the sum(... if ...) structure inside the JuMP @expression is slow. It is faster to create the sets outside the loop and then just use them. Furthermore, the loops inside the @expression used to form a double loop over non-contiguous values; now the loop inside the @expression is simpler.
The nested loops above are slow, but apparently not as slow as the older code.
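A minimal sketch of the precomputation step itself (hypothetical field names; the real sets in this PR are built from the energy-problem data). The nested loops run once up front, so building each expression later is a direct lookup instead of a scan over all flows:

```julia
# Hypothetical data: each flow has a source and a destination asset.
flows = [(id = "f1", from = "a1", to = "a2"),
         (id = "f2", from = "a1", to = "a3"),
         (id = "f3", from = "a2", to = "a3")]
assets = ["a1", "a2", "a3"]

# Precompute once: incoming and outgoing flow ids per asset.
incoming = Dict(a => String[] for a in assets)
outgoing = Dict(a => String[] for a in assets)
for f in flows
    push!(outgoing[f.from], f.id)
    push!(incoming[f.to], f.id)
end

# Later, an expression only touches the relevant ids:
#   sum(flow[f, t] for f in incoming[a])
# instead of
#   sum(flow[f, t] for f in flows if f.to == a)
```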

@abelsiqueira abelsiqueira force-pushed the 294-precompute-summation-indexes branch from 1225288 to e1e1ea2 Compare December 11, 2023 13:52
@abelsiqueira abelsiqueira changed the title Precompute summation indexes for the model expressions Precompute summation indices for the model expressions Dec 11, 2023
@datejada
Copy link
Member

In the SpineOpt model (MOPO project), they follow this same strategy, meaning that they have a function that creates the indices for the constraints as a dictionary and then use it to create the constraints in the model. That's the basic idea, but they have a different structure for the temporal resolution, so their functions to find the indices are quite different.
We can learn how to run the profiler during tomorrow's meeting and make a decision. The readability is not looking as bad as I expected 😄
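The SpineOpt-style pattern described above might look like this (a hedged sketch; the function name `capacity_constraint_indices` and its arguments are invented for illustration):

```julia
# One function per constraint family returns the valid index combinations,
# instead of filtering inside the constraint macro.
function capacity_constraint_indices(assets, time_steps, is_active)
    return [(asset = a, time = t) for a in assets, t in time_steps if is_active(a, t)]
end

indices = capacity_constraint_indices(["a1", "a2"], 1:4, (a, t) -> t <= 3)

# The model code then just iterates over the precomputed indices:
for idx in indices
    # @constraint(model, flow[idx.asset, idx.time] <= capacity[idx.asset])
end
```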

@abelsiqueira
Copy link
Member Author

> The readability is not looking as bad as I expected 😄

So far... ;)

@abelsiqueira abelsiqueira force-pushed the 294-precompute-summation-indexes branch 2 times, most recently from c8b4050 to 2be9659 Compare December 19, 2023 09:30
@abelsiqueira abelsiqueira marked this pull request as draft December 22, 2023 21:39
@abelsiqueira abelsiqueira force-pushed the 294-precompute-summation-indexes branch 2 times, most recently from ce8d830 to a8aca39 Compare January 10, 2024 16:17
@abelsiqueira abelsiqueira force-pushed the 294-precompute-summation-indexes branch from a8aca39 to 0db252a Compare January 11, 2024 11:31
@abelsiqueira
Copy link
Member Author

Results copied from my fork (abelsiqueira#6 (comment)):

Benchmark Results

|                                      | 5c5e5b8...      | 0db252a...      | t[5c5e5b8...]/t[0db252a...] |
|--------------------------------------|-----------------|-----------------|-----------------------------|
| energy_problem/create_model          | 22.8 s          | 5.07 ± 0.32 s   | 4.49                        |
| energy_problem/input_and_constructor | 2.6 ± 0.0058 s  | 2.61 ± 0.0092 s | 0.994                       |
| time_to_load                         | 7.28 ± 0.024 s  | 7.29 ± 0.035 s  | 0.998                       |

@abelsiqueira
Copy link
Member Author

Closing in favor of #408
