Refactor the model #701
Comments
@abelsiqueira @datejada Can I try tackling this one? Or should I wait?
@clizbe you can do it, it's mostly independent from the rest.
Discussed in #688
Originally posted by datejada July 1, 2024
Overview of the changes
- Change graph data access and constraint creation to use the indices tables #942
- Create all data and tables necessary for the model before the `create_model` function #885
We can better control and improve both the creation of the data/tables and the `create_model` function if these concerns are separated.
This includes the "expression" data, e.g., the data that informs the incoming and outgoing flows.
Sets
- `sets` #892

Variables
The way that we control the indexing - to prevent sparsity issues - is to precompute the indexes of the variables. Currently we do this by computing tables for these indexes. Assuming that remains the case, we have to define what is necessary for these tables and what their names should be.
These tables will probably also store the final result, so they are used outside of the model as well.
For example, we currently store a range such as `1:3` in the column, but we have to change to two columns (?), `1` and `3`. This involves (see the sketch after this list):
- Rename `construct_dataframes` to `compute_variables_indices`.
- Change the uses of `dataframes` to instead unpack `x = model[:x]`, or instead use `variables`. We should try both strategies and see what makes more sense.
- Make sure that we differentiate them from the input tables in DuckDB/TulipaIO.
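As a rough illustration of this table-based indexing, here is a sketch under assumed names: the `var_flow` table, its columns, and the bound are invented here, not the package's actual schema.

```julia
# Hypothetical sketch: precompute a variable-indices table in DuckDB, then
# create one JuMP variable per row of that table. All names are placeholders.
using DataFrames, DuckDB, JuMP

connection = DBInterface.connect(DuckDB.DB)  # in-memory DuckDB database

# Toy indices table; note the two integer columns `time_block_start` and
# `time_block_end` instead of a single `1:3`-style range column.
DBInterface.execute(
    connection,
    """
    CREATE TABLE var_flow AS
    SELECT * FROM (VALUES
        ('wind', 'demand', 1, 1, 3),
        ('wind', 'demand', 1, 4, 6)
    ) AS t(from_asset, to_asset, rep_period, time_block_start, time_block_end)
    """,
)

var_flow = DataFrame(DBInterface.execute(connection, "SELECT * FROM var_flow"))

model = Model()

# The variable references live in a Julia vector next to the indices table;
# JuMP objects cannot be stored inside the DuckDB table itself.
flow = Vector{VariableRef}(undef, nrow(var_flow))
for (i, row) in enumerate(eachrow(var_flow))
    flow[i] = @variable(
        model,
        base_name = "flow[$(row.from_asset),$(row.to_asset),$(row.rep_period),$(row.time_block_start):$(row.time_block_end)]",
        lower_bound = 0,
    )
end
```

A vector like `flow` could later also receive the solution values, matching the note above about reusing these tables outside of the model.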
Constraints and expressions
The (balance) constraints are defined per asset: the table defines the necessary information for each of these assets. In the current implementation, one of the columns of this table is the JuMP expression with the incoming flow (and one for the outgoing flow, and possibly more for extra information).
Constructing the incoming and outgoing flows is a major issue: it was slow, and we had to figure out an efficient way to do it.
Furthermore, if we want to use an external table format, we can't save the expressions in the table, so the format will need to be changed.
Whatever we do, we must be sure that it doesn't compromise performance.
Do we need something separate for partitions in general, or are the structs above sufficient?
The `add_expression` functions receive an asset-based constraint (e.g., balance) and the flow variable, and compute all incoming and outgoing flows for each row of the constraint.
The current implementation stores the flow and the resulting expressions in the tables. This will not be possible with a DuckDB table, so we either store the resulting expressions in a vector alongside the constraints, or in a separate structure, or don't pre-compute the resulting expressions.
Given that we want to precompute as much as possible separately from the model creation, computing the indexes required for the incoming and outgoing expressions might be desirable, at first.
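A minimal sketch of the "store the expressions in a vector alongside the constraint indices" option (the data, table names, and the helper `flows_matching` are invented for illustration, not TulipaEnergyModel code):

```julia
# Hypothetical sketch: build the incoming/outgoing flow expressions for a toy
# balance table and keep them in plain Julia vectors, outside of DuckDB.
using DataFrames, JuMP

model = Model()

# Precomputed indices (in practice these would be read from DuckDB tables).
var_flow = DataFrame(from_asset = ["wind", "ccgt"], to_asset = ["demand", "demand"])
cons_balance = DataFrame(asset = ["demand"])

flow = [@variable(model, base_name = "flow[$(r.from_asset),$(r.to_asset)]") for r in eachrow(var_flow)]

# Sum the flow variables whose `column` entry matches `asset`; empty sums
# become the constant expression 0.0.
function flows_matching(flow, var_flow, column, asset)
    expr = AffExpr(0.0)
    for (i, value) in enumerate(var_flow[!, column])
        if value == asset
            add_to_expression!(expr, flow[i])
        end
    end
    return expr
end

incoming = [flows_matching(flow, var_flow, :to_asset, row.asset) for row in eachrow(cons_balance)]
outgoing = [flows_matching(flow, var_flow, :from_asset, row.asset) for row in eachrow(cons_balance)]

# Toy balance constraint using the precomputed expressions.
@constraint(model, [j in 1:nrow(cons_balance)], incoming[j] == outgoing[j])
```

Whether building the expressions row by row like this is fast enough is exactly the performance concern raised above.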
Renaming/Style
Do we unpack the variables from the model (e.g., `x = model[:x]`), or use `variables[:x]` and `expressions[:x]`? Work on "Create flow variables using the indices in the variable structure" #898 should help decide what we'll do.
See https://jump.dev/JuMP.jl/stable/tutorials/getting_started/design_patterns_for_larger_models/#Generalize-constraints-and-objectives
This will make the code cleaner in many places because it simplifies the function arguments, and the required variables are explicitly unpacked.
Some of the code will still depend on the indexes of the variables, and these might still need to be passed.
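To make the two styles concrete, here is a toy comparison; the constraint, the function names, and the `Dict`-based container are assumptions, not TulipaEnergyModel code.

```julia
# Hypothetical comparison of the two styles discussed above.
using JuMP

# Style A: pass the model and unpack the registered variable explicitly.
function add_toy_constraint_unpack!(model)
    x = model[:x]  # explicit unpacking makes the dependency visible
    @constraint(model, sum(x) <= 10)
    return nothing
end

# Style B: pass a `variables` container and index into it.
function add_toy_constraint_container!(model, variables)
    @constraint(model, sum(variables[:x]) <= 10)
    return nothing
end

model = Model()
@variable(model, x[1:3] >= 0)
variables = Dict(:x => x)

add_toy_constraint_unpack!(model)
add_toy_constraint_container!(model, variables)
```

In both styles the function signature makes its requirements explicit, which is the point of the design pattern linked above.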
Checklist of how we want things to look
Pipeline draft:
3.1 Create variables
3.2 Create constraint partitions
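For illustration only, the drafted pipeline could be shaped roughly like this; every function below is a stub with an assumed name (`build_model` stands in for the real `create_model`):

```julia
# Rough shape of the drafted pipeline; all functions are placeholder stubs.
using JuMP

compute_variables_indices(connection) = Dict{Symbol,Any}()       # 3.1 Create variables (stub)
compute_constraints_partitions(connection) = Dict{Symbol,Any}()  # 3.2 Create constraint partitions (stub)
build_model(connection, variables, partitions) = Model()         # model creation (stub)

function run_pipeline(connection)
    variables = compute_variables_indices(connection)        # 3.1
    partitions = compute_constraints_partitions(connection)  # 3.2
    return build_model(connection, variables, partitions)
end

model = run_pipeline(nothing)
```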
Input data names
- `create_input_dataframes`
- `create_internal_structures`
- Example: `representative_periods` to use DuckDB #713
- `compute_assets_partitions` / `compute_assets_partitions!` to use DuckDB more efficiently #948
- `compute_constraints_partitions`
- `compute_rp_partitions`
- `solve_model` and `solve_model!`
- `energy_problem.dataframes[:highest_in_out]` #637
- `create_model`: `construct_dataframes` is not necessary.
- `add_expression_terms_intra_rp_contraints` will need a refactor, and it will determine changes we need before and after.
- `profile_aggregation` functions are in the table in the documentation.
- Use `for ... for ...` instead.

AND THEN:
Maybe also: