From 3d9893a8dcac8d0203fbf3184e539562f9fc947d Mon Sep 17 00:00:00 2001
From: "Documenter.jl"
Date: Tue, 29 Oct 2024 08:09:55 +0000
Subject: [PATCH] build based on 4d35a79

 dev/.documenter-siteinfo.json  |  2 +-
 dev/10-how-to-use/index.html   |  2 +-
 dev/20-tutorials/index.html    | 38 +++++++++++++++---------------
 dev/30-concepts/index.html     | 14 ++++++------
 dev/40-formulation/index.html  |  2 +-
 dev/90-contributing/index.html |  2 +-
 dev/91-developer/index.html    |  2 +-
 dev/95-reference/index.html    | 42 +++++++++++++++----------------
 dev/index.html                 |  2 +-
 9 files changed, 53 insertions(+), 53 deletions(-)

diff --git a/dev/10-how-to-use/index.html b/dev/10-how-to-use/index.html

compute_conflict!(energy_problem.model)
iis_model, reference_map = copy_conflict(energy_problem.model)
print(iis_model)
end

Storage specific setups

Seasonal and non-seasonal storage

Section Storage Modeling explains the main concepts for modeling seasonal and non-seasonal storage in TulipaEnergyModel.jl. To define whether an asset is one type or the other, consider the following:

Note: If the input data covers only one representative period for the entire year, for example, with 8760-hour timesteps, and you have a monthly hydropower plant, then you should set the is_seasonal parameter for that asset to false. This is because the length of the representative period is greater than the storage capacity of the storage asset.

The energy storage investment method

Energy storage assets have a unique characteristic: the investment is based not only on the capacity to charge and discharge, but also on the energy capacity. Some storage asset types have a fixed duration for a given capacity, which means that there is a predefined ratio between energy and power. For instance, a battery of 10 MW/unit and 4 h duration implies an energy capacity of 40 MWh. Other storage asset types, such as hydrogen storage in salt caverns, don't have a fixed ratio between the invested charge/discharge capacity and the storage capacity, so the energy capacity can be optimized independently of the capacity investment. To define whether an energy asset is one type or the other, consider the following parameter setting in the file assets-data.csv:

In addition, the parameter capacity_storage_energy in the graph-assets-data.csv defines the energy per unit of storage capacity invested in (e.g., MWh/unit).
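
As a quick arithmetic sketch of the fixed-ratio case (the numbers are the illustrative battery values from above):

# Fixed-duration storage: the energy capacity follows from the power capacity.
power_per_unit = 10.0   # MW/unit
duration       = 4.0    # h
capacity_storage_energy = power_per_unit * duration   # 40.0 MWh/unit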

For more details on the constraints that apply when selecting one method or the other, please visit the mathematical formulation section.

Control simultaneous charging and discharging

Depending on the configuration of the energy storage assets, it may or may not be possible to charge and discharge them simultaneously. For instance, a single battery cannot charge and discharge at the same time, but some pumped hydro storage technologies have separate components for charging (pump) and discharging (turbine) that can function independently, allowing them to charge and discharge simultaneously. To account for these differences, the model provides users with three options for the use_binary_storage_method parameter in the assets-data.csv file:

For more details on the constraints that apply when selecting this method, please visit the mathematical formulation section.

Setting up unit commitment constraints

The unit commitment constraints are only applied to producer and conversion assets. To include the constraints, the unit_commitment parameter must be set to true in the assets-data.csv file. Additionally, the following parameters should be set in that same file:

For more details on the constraints that apply when selecting this method, please visit the mathematical formulation section.

Setting up ramping constraints

The ramping constraints are only applied to producer and conversion assets. To include the constraints, the ramping parameter must be set to true in the assets-data.csv file. Additionally, the following parameters should be set in that same file:

For more details on the constraints that apply when selecting this method, please visit the mathematical formulation section.

Setting up a maximum or minimum outgoing energy limit

For the model to add constraints for a maximum or minimum energy limit for an asset throughout the model's timeframe (e.g., a year), we need to establish a couple of parameters:

Tip: If you want to set a limit on the maximum or minimum outgoing energy for a year with representative days, you can use the partition definition to create a single partition for the entire year to combine the profile.

Example: Setting Energy Limits

Let's assume we have a year divided into 365 days because we are using days as periods in the representative periods from TulipaClustering.jl. Also, we define max_energy_timeframe_partition = 10 MWh, meaning that the maximum energy we want is 10 MWh for each period or period partition. Depending on the optional information, we can have:

• Profile: None; Period partitions: None — The default profile is 1 p.u. for each period, and since there are no period partitions, the constraints are per period (i.e., daily). So the outgoing energy of the asset for each day must be less than or equal to 10 MWh.
• Profile: Defined; Period partitions: None — The profile definition and values are in the assets-timeframe-profiles.csv and profiles-timeframe.csv files. For example, we define a profile whose first four values are 0.6 p.u., 1.0 p.u., 0.8 p.u., and 0.4 p.u. There are no period partitions, so the constraints are per period (i.e., daily). Therefore the outgoing energy of the asset for the first four days must be less than or equal to 6 MWh, 10 MWh, 8 MWh, and 4 MWh.
• Profile: Defined; Period partitions: Defined — Using the same profile as above, we now define a period partition in the assets-timeframe-partitions.csv file as uniform with a value of 2, meaning that we aggregate every two periods (i.e., every two days). So, instead of 365 constraints, we have 183 constraints (182 covering two days each and one last constraint covering one day). The profile is aggregated by summing the values of the periods within each partition. Thus, the outgoing energy of the asset for the first two partitions (i.e., the first four days) must be less than or equal to 16 MWh and 12 MWh, respectively.
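
The aggregation in the last case can be checked with a few lines of Julia (a sketch using only the numbers above):

max_energy = 10.0                  # MWh, max_energy_timeframe_partition
profile    = [0.6, 1.0, 0.8, 0.4]  # first four daily profile values (p.u.)
step       = 2                     # uniform partition of two periods
limits = [sum(profile[i:i+step-1]) * max_energy for i in 1:step:length(profile)]
# limits == [16.0, 12.0], the MWh limits of the first two two-day partitions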

Defining a group of assets

A group of assets refers to a set of assets that share certain constraints. For example, the investments of a group of assets may be capped at a maximum value, representing the potential of a specific area where building licenses limit the maximum allowable MW.

In order to define the groups in the model, the following steps are necessary:

  1. Create a group in the groups-data.csv file by defining the name property and its parameters.

  2. In the file graph-assets-data.csv, assign assets to the group by setting the name in the group parameter/column.

    Note: A missing value in the parameter group in the graph-assets-data.csv means that the asset does not belong to any group.

Groups are useful for representing several common constraints; the following group constraints are available.

Setting up a maximum or minimum investment limit for a group

The mathematical formulation of the maximum and minimum investment limit for group constraints is available here. The parameters to set up these constraints in the model are in the groups-data.csv file.

Example: Group of Assets

Let's explore how the groups are set up in the test case called Norse. First, let's take a look at the groups-data.csv file:

2×5 DataFrame
 Row │ name        year   invest_method  min_investment_limit  max_investment_limit
     │ String15    Int64  Bool           Int64?                Int64?
─────┼──────────────────────────────────────────────────────────────────────────────
   1 │ renewables   2030           true               missing                 40000
   2 │ ccgt         2030           true                 10000               missing

In the given data, there are two groups: renewables and ccgt. Both groups have the invest_method parameter set to true, indicating that investment group constraints apply to both. For the renewables group, the min_investment_limit parameter is missing, signifying that there is no minimum limit imposed on the group. However, the max_investment_limit parameter is set to 40000 MW, indicating that the total investments of assets in the group must be less than or equal to this value. In contrast, the ccgt group has a missing value in the max_investment_limit parameter, indicating no maximum limit, while the min_investment_limit is set to 10000 MW for the total investments in that group.

Let's now explore which assets are in each group. To do so, we can take a look at the graph-assets-data.csv file:

4×3 DataFrame
 Row │ name          type        group
     │ String31      String15    String15?
─────┼─────────────────────────────────────
   1 │ Asgard_Solar  producer    renewables
   2 │ Asgard_CCGT   conversion  ccgt
   3 │ Midgard_Wind  producer    renewables
   4 │ Midgard_CCGT  conversion  ccgt

Here we can see that the assets Asgard_Solar and Midgard_Wind belong to the renewables group, while the assets Asgard_CCGT and Midgard_CCGT belong to the ccgt group.

Note: If the group has a min_investment_limit, then assets in the group have to allow investment (investable = true) for the model to be feasible. If the assets are not investable then they cannot satisfy the minimum constraint.
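
Conceptually, a group investment limit ties the investment variables of the members together. A minimal JuMP-style sketch of the renewables case above (the variable names and capacity values are illustrative, not the model's internal implementation):

using JuMP

model    = Model()
members  = ["Asgard_Solar", "Midgard_Wind"]                       # renewables group
capacity = Dict("Asgard_Solar" => 100.0, "Midgard_Wind" => 50.0)  # MW/unit, illustrative
@variable(model, inv[members] >= 0, Int)                          # units invested per asset
@constraint(model, sum(capacity[a] * inv[a] for a in members) <= 40_000)  # group max limit in MW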

Setting up multi-year investments

It is possible to simultaneously model different years, which is especially relevant for modeling multi-year investments. Multi-year investments refer to making investment decisions at different points in time, such that a pathway of investments can be modeled. This is particularly useful for long-term scenarios where modeling every single year is not practical. It also fits business cases in which investment decisions are made in different years, which affects the cash flow.

In order to set up a model with year information, the following steps are necessary.

diff --git a/dev/20-tutorials/index.html b/dev/20-tutorials/index.html

connection = DBInterface.connect(DuckDB.DB)
read_csv_folder(connection, input_dir; schemas = TulipaEnergyModel.schema_per_table_name)
energy_problem = run_scenario(connection)
EnergyProblem:
  - Time creating internal structures (in seconds): 8.928622023
  - Time computing constraints partitions (in seconds): 4.490356425
  - Time creating dataframes (in seconds): 1.455075528
  - Time creating variables indices (in seconds): 1.983e-6
  - Model created!
    - Time for creating the model (in seconds): 16.137012815
    - Number of variables: 368
    - Number of constraints for variable bounds: 368
    - Number of structural constraints: 432
  - Model solved!
    - Time for solving the model (in seconds): 2.60770982
    - Termination status: OPTIMAL
    - Objective value: 269238.4382375954

The energy_problem variable is of type EnergyProblem. For more details, see the documentation for that type or the section Structures.

That's all it takes to run a scenario! To learn about the data required to run your own scenario, see the Input section of How to Use.

Manually running each step

If we need more control, we can create the energy problem first, then the optimization model inside it, and finally ask for it to be solved.

using DuckDB, TulipaIO, TulipaEnergyModel

connection = DBInterface.connect(DuckDB.DB)
read_csv_folder(connection, input_dir; schemas = TulipaEnergyModel.schema_per_table_name)
energy_problem = EnergyProblem(connection)
EnergyProblem:
  - Time creating internal structures (in seconds): 0.136561249
  - Time computing constraints partitions (in seconds): 0.000190425
  - Time creating dataframes (in seconds): 0.00638123
  - Time creating variables indices (in seconds): 1.794e-6
  - Model not created!
  - Model not solved!

The energy problem does not have a model yet:

energy_problem.model === nothing
true

To create the internal model, we call the function create_model!.

create_model!(energy_problem)
  :max_energy_inter_rp    => 0×4 DataFrame…
  :storage_level_intra_rp => 0×5 DataFrame…
  :highest_in             => 0×5 DataFrame…
  :units_on               => 0×7 DataFrame…
  :highest_in_out         => 72×7 DataFrame…
  :flows                  => 360×9 DataFrame…
  :lowest                 => 360×5 DataFrame…
 variables = compute_variables_indices(dataframes)
Dict{Symbol, TulipaVariable} with 5 entries:
  :is_charging            => TulipaVariable(0×5 DataFrame…
  :flow                   => TulipaVariable(360×9 DataFrame…
  :units_on               => TulipaVariable(0×7 DataFrame…
  :storage_level_inter_rp => TulipaVariable(0×4 DataFrame…
  :storage_level_intra_rp => TulipaVariable(0×5 DataFrame…

Now we can create the model.

model = create_model(graph, sets, variables, representative_periods, dataframes, years, timeframe, groups, model_parameters)
A JuMP Model
 ├ solver: none
using DuckDB, TulipaIO, TulipaEnergyModel
using GLPK

connection = DBInterface.connect(DuckDB.DB)
read_csv_folder(connection, input_dir; schemas = TulipaEnergyModel.schema_per_table_name)
energy_problem = run_scenario(connection, optimizer = GLPK.Optimizer)
EnergyProblem:
  - Time creating internal structures (in seconds): 0.137979912
  - Time computing constraints partitions (in seconds): 0.000174967
  - Time creating dataframes (in seconds): 0.006570406
  - Time creating variables indices (in seconds): 1.904e-6
  - Model created!
    - Time for creating the model (in seconds): 0.006729089
    - Number of variables: 368
    - Number of constraints for variable bounds: 368
    - Number of structural constraints: 432
  - Model solved!
    - Time for solving the model (in seconds): 2.122235957
    - Termination status: OPTIMAL
    - Objective value: 269238.4382417078

or

using GLPK
  54777.52475361511
  52128.801264625705
  46907.046645223265

Here value. (i.e., broadcasting) was used instead of the vector comprehension from the previous examples, just to show that it also works.

The value of the constraint is obtained by looking only at the part with variables. So a constraint like 2x + 3y - 1 <= 4 would return the value of 2x + 3y.
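
Both points can be illustrated with a small self-contained JuMP model (a toy example, not part of the tutorial's data):

using JuMP, GLPK

toy = Model(GLPK.Optimizer)
@variable(toy, 0 <= x <= 1)
@variable(toy, 0 <= y <= 1)
@constraint(toy, con, 2x + 3y - 1 <= 4)
@objective(toy, Max, x + y)
optimize!(toy)

value.([x, y])  # broadcasting value over a vector of variables
value(con)      # evaluates only the variable part, 2x + 3y, here 5.0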

Writing the output to CSV

To save the solution to CSV files, you can use save_solution_to_file:

mkdir("outputs")
save_solution_to_file("outputs", energy_problem)

Plotting

In the previous sections, we have shown how to create vectors such as the one for flows. If you want simple plots, you can plot the vectors directly using any package you like.

If you would like more custom plots, check out TulipaPlots.jl, under development, which provides tailor-made plots for TulipaEnergyModel.jl.

diff --git a/dev/30-concepts/index.html b/dev/30-concepts/index.html

connection = DBInterface.connect(DuckDB.DB)
read_csv_folder(connection, input_dir; schemas = TulipaEnergyModel.schema_per_table_name)
energy_problem = run_scenario(connection)
EnergyProblem:
  - Time creating internal structures (in seconds): 0.178616774
  - Time computing constraints partitions (in seconds): 0.00025679
  - Time creating dataframes (in seconds): 0.006399288
  - Time creating variables indices (in seconds): 1.683e-6
  - Model created!
    - Time for creating the model (in seconds): 3.543088927
    - Number of variables: 727
    - Number of constraints for variable bounds: 727
    - Number of structural constraints: 957
  - Model solved!
    - Time for solving the model (in seconds): 0.029591687
    - Termination status: OPTIMAL
    - Objective value: 2409.3840293440285

Since the battery is not seasonal, it only has results for the intra-storage level of each representative period, as shown in the following figure:

(Figure: Battery intra-storage level)

Since the phs is defined as seasonal, it has results for only the inter-storage level. Since we defined the period partition as 1, we get results for each period (i.e., day). We can see that the inter-temporal constraints in the model keep track of the storage level through the whole timeframe definition (i.e., week).

(Figure: PHS inter-storage level)

In this example, we have demonstrated how to partially recover the chronological information of a storage asset with a longer discharge duration (such as 48 hours) than the representative period length (24 hours). This feature enables us to model both short- and long-term storage in TulipaEnergyModel.jl.

diff --git a/dev/40-formulation/index.html b/dev/40-formulation/index.html

Maximum Investment Limit of a Group

\[\begin{aligned}
\sum_{a \in \mathcal{A}^{\text{i}}_y | p^{\text{group}}_{a} = g} p^{\text{capacity}}_{a} \cdot v^{\text{inv}}_{a,y} & \leq p^{\text{max invest limit}}_{g,y} \\
& \forall y \in \mathcal{Y}, \forall g \in \mathcal{G}^{\text{ai}}_y
\end{aligned}\]

References

Damcı-Kurt, P., Küçükyavuz, S., Rajan, D., Atamtürk, A., 2016. A polyhedral study of production ramping. Math. Program. 158, 175–205. doi: 10.1007/s10107-015-0919-9.

Morales-España, G., Ramos, A., García-González, J., 2014. An MIP Formulation for Joint Market-Clearing of Energy and Reserves Based on Ramp Scheduling. IEEE Transactions on Power Systems 29, 476-488. doi: 10.1109/TPWRS.2013.2259601.

Morales-España, G., Latorre, J. M., Ramos, A., 2013. Tight and Compact MILP Formulation for the Thermal Unit Commitment Problem. IEEE Transactions on Power Systems 28, 4897-4908. doi: 10.1109/TPWRS.2013.2251373.

Tejada-Arango, D.A., Domeshek, M., Wogrin, S., Centeno, E., 2018. Enhanced representative days and system states modeling for energy storage investment analysis. IEEE Transactions on Power Systems 33, 6534–6544. doi: 10.1109/TPWRS.2018.2819578.

Tejada-Arango, D.A., Wogrin, S., Siddiqui, A.S., Centeno, E., 2019. Opportunity cost including short-term energy storage in hydrothermal dispatch models using a linked representative periods approach. Energy 188, 116079. doi: 10.1016/j.energy.2019.116079.

diff --git a/dev/90-contributing/index.html b/dev/90-contributing/index.html

Contributing Guidelines

Great that you want to contribute to the development of Tulipa! Please read these guidelines and our Developer Documentation to get you started.

GitHub Rules of Engagement

  • If you want to discuss something that isn't immediately actionable, post under Discussions. Convert it to an issue once it's actionable.
  • All PRs should have an associated issue (unless it's a very minor fix).
  • All issues should have 1 Type and 1+ Zone labels (unless Type: epic).
  • Assign yourself to issues you want to address. Consider if you will be able to work on them in the near future (this week) — if not, leave them available for someone else.
  • Set the issue Status to "In Progress" when you have started working on it.
  • When finalizing a pull request, set the Status to "Ready for Review." If someone specific needs to review it, assign them as the reviewer (otherwise anyone can review).
  • Issues addressed by merged PRs will automatically move to Done.
  • If you want to discuss an issue at the next group meeting (or just get some attention), mark it with the "question" label.
  • Issues without updates for 60 days (and PRs without updates in 30 days) will be labelled as "stale" and filtered out of view. There is a Stale project board to view and revive these.

Contributing Workflow

Fork → Branch → Code → Push → Pull → Squash & Merge

  1. Fork the repository
  2. Create a new branch (in your fork)
  3. Do fantastic coding
  4. Push to your fork
  5. Create a pull request from your fork to the main repository
  6. (After review) Squash and merge

For a step-by-step guide to these steps, see our Developer Documentation.

We use this workflow in our quest to achieve the Utopic Git History.

diff --git a/dev/91-developer/index.html b/dev/91-developer/index.html

git rebase --continue
git push --force origin <branch_name>

8. Create a Pull Request

When there are no more conflicts and all the tests are passing, create a pull request to merge your remote branch into the organization's main branch. You can do this on GitHub by opening the branch in your fork and clicking "Compare & pull request".

(Screenshot: Compare & pull request button on GitHub)

Fill in the pull request details:

  1. Describe the changes.
  2. List the issue(s) that this pull request closes.
  3. Fill in the collaboration confirmation.
  4. (Optional) Choose a reviewer.
  5. When all of the information is filled in, click "Create pull request".

(Screenshot: the pull request information)

Your pull request will appear in the list of pull requests in the TulipaEnergyModel.jl repository, where you can track the review process.

Sometimes reviewers request changes. After pushing any changes, the pull request will be automatically updated. Do not forget to re-request a review.

Once your reviewer approves the pull request, you need to merge it with the main branch using "Squash and Merge". You can also delete the branch that originated the pull request by clicking the button that appears after the merge. For branches that were pushed to the main repo, it is recommended that you do so.

Building the Documentation Locally

Following the latest suggestions, we recommend using LiveServer to build the documentation.

Note: Ensure you have the package Revise installed in your global environment before running servedocs.

Here is how you do it:

  1. Run julia --project=docs in the package root to open Julia in the environment of the docs.
  2. If this is the first time building the docs
    1. Press ] to enter pkg mode
    2. Run pkg> dev . to use the development version of your package
    3. Press backspace to leave pkg mode
  3. Run julia> using LiveServer
  4. Run julia> servedocs(launch_browser=true)
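
Put together, a typical session looks like this (assuming LiveServer and Revise are already installed):

# in a shell, from the package root: julia --project=docs
# first time only, in pkg mode: dev .
using LiveServer
servedocs(launch_browser = true)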

Performance Considerations

If you updated something that might impact the performance of the package, you can run the Benchmark.yml workflow from your pull request. To do that, add the tag benchmark to the pull request. This will trigger the workflow and post the results as a comment in your pull request.

Warning: This requires that your branch was pushed to the main repo. If you have created a pull request from a fork, the Benchmark.yml workflow does not work. Instead, close your pull request, push your branch to the main repo, and open a new pull request.

If you want to manually run the benchmarks, you can do the following:
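
A minimal sketch of a manual run; the include path and the SUITE name follow the common PkgBenchmark layout and are assumptions:

using BenchmarkTools
include("benchmark/benchmarks.jl")   # assumed to define SUITE, a BenchmarkGroup
results = run(SUITE, verbose=true)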

Profiling

To profile the code in a more manual way, here are some tips:

See the file benchmark/profiling.jl for an example of profiling code.
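
As a generic starting point with the standard-library Profile module (a sketch, not the package-specific tips; it assumes an energy_problem built as in the tutorials):

using Profile

@profile create_model!(energy_problem)  # profile the model-creation step
Profile.print(format = :flat, sortedby = :count)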

Procedure for Releasing a New Version (Julia Registry)

When publishing a new version of the model to the Julia Registry, follow this procedure:

Note: To be able to register, you need to be a member of the organisation TulipaEnergy and have your visibility set to public. (Screenshot: public members of TulipaEnergy on GitHub)

  1. Click on the Project.toml file on GitHub.

  2. Edit the file and change the version number according to semantic versioning: Major.Minor.Patch. (Screenshot: editing Project.toml on GitHub)

  3. Commit the changes in a new branch and open a pull request. Change the commit message according to the version number. (Screenshot: PR with commit message "Release 0.6.1")

  4. Create the pull request and squash & merge it after the review and testing process. Delete the branch after the squash and merge. (Screenshot: full PR template on GitHub)

  5. Go to the main page of the repo and click on the commit. (Screenshot: how to access a commit on GitHub)

  6. Add the following comment to the commit: @JuliaRegistrator register (Screenshot: calling JuliaRegistrator in commit comments)

  7. The bot should start the registration process. (Screenshot: JuliaRegistrator bot message)

  8. After approval, the bot will take care of the PR at the Julia Registry and automatically create the release for the new version. (Screenshot: new version on registry)

    Thank you for helping make frequent releases!

Adding a Package to the TulipaEnergy Organisation

To get started creating a new (Julia) package that will live in the TulipaEnergy organisation and interact with TulipaEnergyModel, please start by using BestieTemplate.jl, and follow the steps in their Full guide for a new package.

This will set up the majority of automation and workflows we use and make your repo consistent with the others!

Note: TulipaEnergyModel.jl is the core repo of the organisation. The Discussions are focused there and in some cases the documentation of other packages should forward to the TulipaEnergyModel docs to avoid duplicate or scattered information.

diff --git a/dev/95-reference/index.html b/dev/95-reference/index.html

Reference

TulipaEnergyModel.EnergyProblemType

Structure to hold all parts of an energy problem. It is a wrapper around various other relevant structures. It hides the complexity behind the energy problem, making the usage more friendly, although more verbose.

Fields

  • graph: The Graph object that defines the geometry of the energy problem.
  • representative_periods: A vector of Representative Periods.
  • constraints_partitions: Dictionaries that connect pairs of asset and representative periods to time partitions (vectors of time blocks).
  • timeframe: The number of periods of the representative_periods.
  • dataframes: The data frames used to linearize the variables and constraints. These are used internally in the model only.
  • groups: The input data of the groups to create constraints that are common to a set of assets in the model.
  • model_parameters: The model parameters.
  • model: A JuMP.Model object representing the optimization model.
  • solved: A boolean indicating whether the model has been solved or not.
  • objective_value: The objective value of the solved problem.
  • termination_status: The termination status of the optimization model.
  • timings: Dictionary of elapsed time for various parts of the code (in seconds).

Constructor

  • EnergyProblem(connection): Constructs a new EnergyProblem object with the given connection. The constraints_partitions field is computed from the representative_periods, and the other fields are initialized with default values.

See the basic example tutorial to see how these can be used.

source
TulipaEnergyModel.ModelParametersType
ModelParameters(;key = value, ...)
 ModelParameters(path; ...)
 ModelParameters(connection; ...)
ModelParameters(connection, path; ...)

Structure to hold the model parameters. Some values have defaults and some require explicit definition.

If path is passed, it is expected to be a string pointing to a TOML file with a key = value list of parameters. Explicit keyword arguments take precedence.

If connection is passed, the default discount_year is set to the minimum of all milestone years. In other words, we check for the table year_data for the column year where the column is_milestone is true. Explicit keyword arguments take precedence.

If both are passed, then path has preference. Explicit keyword arguments take precedence.
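
A minimal sketch of the precedence rules (the TOML file name is illustrative; discount_rate and discount_year are the two documented parameters):

# parameters.toml (illustrative content):
#   discount_rate = 0.05
#   discount_year = 2030

using TulipaEnergyModel
params = TulipaEnergyModel.ModelParameters("parameters.toml"; discount_rate = 0.07)
# params.discount_rate == 0.07  (explicit keyword wins over the TOML value)
# params.discount_year == 2030  (taken from the TOML file)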

Parameters

  • discount_rate::Float64 = 0.0: The model discount rate.
  • discount_year::Int: The model discount year.
source
TulipaEnergyModel._check_initial_storage_level!Method
_check_initial_storage_level!(df)

Determine the starting value for the initial storage level for interpolating the storage level. If there is no initial storage level given, we will use the final storage level. Otherwise, we use the given initial storage level.

source
TulipaEnergyModel._construct_inter_rp_dataframesMethod
df = _construct_inter_rp_dataframes(assets, graph, years, asset_filter)

Constructs dataframes for inter representative period constraints.

Arguments

  • assets: An array of assets.
  • graph: The energy problem graph with the assets data.
  • asset_filter: A function that filters assets based on certain criteria.

Returns

A dataframe for the inter representative period constraints.

source
TulipaEnergyModel._interpolate_storage_level!Method
_interpolate_storage_level!(df, time_column::Symbol)

Transform the storage level dataframe from grouped timesteps or periods to incremental ones by interpolation. The starting value is the value of the previous grouped timesteps or periods or the initial value. The ending value is the value for the grouped timesteps or periods.

source
TulipaEnergyModel._parse_rp_partitionFunction
_parse_rp_partition(Val(specification), timestep_string, rp_timesteps)

Parses the timestep_string according to the specification. The representative period timesteps (rp_timesteps) might not be used in the computation, but it will be used for validation.

The specification defines what is expected from the timestep_string:

  • :uniform: The timestep_string should be a single number indicating the duration of each block. Examples: "3", "4", "1".
  • :explicit: The timestep_string should be a semicolon-separated list of integers. Each integer is a duration of a block. Examples: "3;3;3;3", "4;4;4", "1;1;1;1;1;1;1;1;1;1;1;1", and "3;3;4;2".
  • :math: The timestep_string should be an expression of the form NxD+NxD…, where D is the duration of the block and N is the number of blocks. Examples: "4x3", "3x4", "12x1", and "2x3+1x4+1x2".

The generated blocks will be ranges (a:b). The first block starts at 1, and the last block ends at length(rp_timesteps).

The following table summarizes the formats for a rp_timesteps = 1:12:

Output                    :uniform   :explicit                  :math
1:3, 4:6, 7:9, 10:12      3          3;3;3;3                    4x3
1:4, 5:8, 9:12            4          4;4;4                      3x4
1:1, 2:2, …, 12:12        1          1;1;1;1;1;1;1;1;1;1;1;1    12x1
1:3, 4:6, 7:10, 11:12     NA         3;3;4;2                    2x3+1x4+1x2

Examples

using TulipaEnergyModel
 TulipaEnergyModel._parse_rp_partition(Val(:uniform), "3", 1:12)
 
 # output
 1:3
 4:6
 7:9
 10:12
source
TulipaEnergyModel.add_expression_is_charging_terms_intra_rp_constraints!Method
add_expression_is_charging_terms_intra_rp_constraints!(df_cons,
                                                    is_charging_indices,
                                                    is_charging_variables,
                                                    workspace
                                                    )

Computes the is_charging expressions per row of df_cons for the constraints that are within (intra) the representative periods.

This function is only used internally in the model.

This strategy is based on the replies in this discourse thread:

  • https://discourse.julialang.org/t/help-improving-the-speed-of-a-dataframes-operation/107615/23
source
TulipaEnergyModel.add_expression_terms_inter_rp_constraints!Method
add_expression_terms_inter_rp_constraints!(df_inter,
                                            df_flows,
                                            df_map,
                                            graph,
                                            representative_periods,
                                            )

Computes the incoming and outgoing expressions per row of df_inter for the constraints that are between (inter) the representative periods.

This function is only used internally in the model.

source
TulipaEnergyModel.add_expression_terms_intra_rp_constraints!Method
add_expression_terms_intra_rp_constraints!(df_cons,
                                            df_flows,
                                            workspace,
                                            representative_periods,
                                            graph;
                                            use_highest_resolution = true,
                                            multiply_by_duration = true,
                                            )

Computes the incoming and outgoing expressions per row of df_cons for the constraints that are within (intra) the representative periods.

This function is only used internally in the model.

This strategy is based on the replies in this discourse thread:

  • https://discourse.julialang.org/t/help-improving-the-speed-of-a-dataframes-operation/107615/23
source
TulipaEnergyModel.add_expression_units_on_terms_intra_rp_constraints!Method
add_expression_units_on_terms_intra_rp_constraints!(
     df_cons,
     df_units_on,
     workspace,
)

Computes the units_on expressions per row of df_cons for the constraints that are within (intra) the representative periods.

This function is only used internally in the model.

This strategy is based on the replies in this discourse thread:

  • https://discourse.julialang.org/t/help-improving-the-speed-of-a-dataframes-operation/107615/23
source
TulipaEnergyModel.add_flow_variables!Method
add_flow_variables!(model, variables)

Adds flow variables to the optimization model based on data from the variables. The flow variables are created using the @variable macro for each row in the :flows dataframe.

source
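
The row-per-variable pattern described above can be sketched generically (an illustration of the approach, not the package's actual code):

using JuMP, DataFrames

model = Model()
df = DataFrame(from = ["wind", "ccgt"], to = ["demand", "demand"], rp = [1, 1])
# one anonymous JuMP variable per dataframe row
df.flow = [
    @variable(model, base_name = "flow[$(row.from),$(row.to),$(row.rp)]") for
    row in eachrow(df)
]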
TulipaEnergyModel.add_investment_variables!Method
add_investment_variables!(model, graph, sets)

Adds investment, decommission, and energy-related variables to the optimization model, and sets integer constraints on selected variables based on the graph data.

source
TulipaEnergyModel.add_storage_variables!Method
add_storage_variables!(model, ...)

Adds storage-related variables to the optimization model, including storage levels for both intra-representative periods and inter-representative periods, as well as charging state variables. The function also optionally sets binary constraints for certain charging variables based on storage methods.

source
TulipaEnergyModel.add_unit_commitment_variables!Method
add_unit_commitment_variables!(model, ...)

Adds unit commitment variables to the optimization model based on the :units_on indices. Additionally, variables are constrained to be integers based on the sets structure.

source
TulipaEnergyModel.calculate_annualized_costMethod
calculate_annualized_cost(discount_rate, economic_lifetime, investment_cost, years, investable_assets)

Calculates the annualized cost for each asset, both energy assets and transport assets, in each year using provided discount rates, economic lifetimes, and investment costs.

Arguments

  • discount_rate::Dict: A dictionary where the key is an asset or a pair of assets (asset1, asset2) for transport assets, and the value is the discount rate.
  • economic_lifetime::Dict: A dictionary where the key is an asset or a pair of assets (asset1, asset2) for transport assets, and the value is the economic lifetime.
  • investment_cost::Dict: A dictionary where the key is a tuple (year, asset) or (year, (asset1, asset2)) for transport assets, and the value is the investment cost.
  • years::Array: An array of years to be considered.
  • investable_assets::Dict: A dictionary where the key is a year, and the value is an array of assets that are relevant for that year.

Returns

  • A Dict where the keys are tuples (year, asset) representing the year and the asset, and the values are the calculated annualized cost for each asset in each year.

Formula

The annualized cost for each asset in year is calculated using the formula:

annualized_cost = discount_rate[asset] / (
+)

Computes the units_on expressions per row of df_cons for the constraints that are within (intra) the representative periods.

This function is only used internally in the model.

This strategy is based on the replies in this discourse thread:

  • https://discourse.julialang.org/t/help-improving-the-speed-of-a-dataframes-operation/107615/23
source
TulipaEnergyModel.add_flow_variables!Method
add_flow_variables!(model, variables)

Adds flow variables to the optimization model based on data from the variables. The flow variables are created using the @variable macro for each row in the :flows dataframe.

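To illustrate the row-wise pattern described above, here is a minimal sketch, assuming a hypothetical dataframe and bounds (this is not the package's internal code):

using JuMP, DataFrames

model = Model()
df_flows = DataFrame(index = 1:2, from = ["wind", "ccgt"], to = ["demand", "demand"])

# One anonymous variable per row, named after the flow it represents.
flow = [
    @variable(model, lower_bound = 0.0, base_name = "flow[$(row.from),$(row.to)]")
    for row in eachrow(df_flows)
]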
source
TulipaEnergyModel.add_investment_variables!Method
add_investment_variables!(model, graph, sets)

Adds investment, decommission, and energy-related variables to the optimization model, and sets integer constraints on selected variables based on the graph data.

source
TulipaEnergyModel.add_storage_variables!Method
add_storage_variables!(model, ...)

Adds storage-related variables to the optimization model, including storage levels for both intra-representative periods and inter-representative periods, as well as charging state variables. The function also optionally sets binary constraints for certain charging variables based on storage methods.

source
TulipaEnergyModel.add_unit_commitment_variables!Method
add_unit_commitment_variables!(model, ...)

Adds unit commitment variables to the optimization model based on the :units_on indices. Additionally, variables are constrained to be integers based on the sets structure.

source
TulipaEnergyModel.calculate_annualized_costMethod
calculate_annualized_cost(discount_rate, economic_lifetime, investment_cost, years, investable_assets)

Calculates the annualized cost for each asset, both energy assets and transport assets, in each year using provided discount rates, economic lifetimes, and investment costs.

Arguments

  • discount_rate::Dict: A dictionary where the key is an asset or a pair of assets (asset1, asset2) for transport assets, and the value is the discount rate.
  • economic_lifetime::Dict: A dictionary where the key is an asset or a pair of assets (asset1, asset2) for transport assets, and the value is the economic lifetime.
  • investment_cost::Dict: A dictionary where the key is a tuple (year, asset) or (year, (asset1, asset2)) for transport assets, and the value is the investment cost.
  • years::Array: An array of years to be considered.
  • investable_assets::Dict: A dictionary where the key is a year, and the value is an array of assets that are relevant for that year.

Returns

  • A Dict where the keys are tuples (year, asset) representing the year and the asset, and the values are the calculated annualized cost for each asset in each year.

Formula

The annualized cost for each asset in each year is calculated using the formula:

annualized_cost = discount_rate[asset] / (
     (1 + discount_rate[asset]) *
     (1 - 1 / (1 + discount_rate[asset])^economic_lifetime[asset])
 ) * investment_cost[(year, asset)]

Example for energy assets

discount_rate = Dict("asset1" => 0.05, "asset2" => 0.07)
 Dict{Tuple{Int64, Tuple{String, String}}, Float64} with 3 entries:
   (2022, ("asset1", "asset2")) => 135.671
   (2021, ("asset3", "asset4")) => 153.918
  (2021, ("asset1", "asset2")) => 123.338
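The inputs of the example above are partly elided; as a sanity check of the formula, hypothetical inputs of a 0.05 discount rate, a 10-year economic lifetime, and an investment cost of 1000 reproduce the 123.338 entry shown for (2021, ("asset1", "asset2")):

discount_rate     = 0.05
economic_lifetime = 10
investment_cost   = 1000.0

annualized_cost =
    discount_rate /
    ((1 + discount_rate) * (1 - 1 / (1 + discount_rate)^economic_lifetime)) *
    investment_cost

# annualized_cost ≈ 123.338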
source
TulipaEnergyModel.calculate_salvage_valueMethod
calculate_salvage_value(discount_rate,
                         economic_lifetime,
                         annualized_cost,
                         years,
                         ...)

Calculates the salvage value for each asset, both energy assets and transport assets.

 Dict{Tuple{Int64, Tuple{String, String}}, Float64} with 3 entries:
   (2022, ("asset1", "asset2")) => 964.325
   (2021, ("asset3", "asset4")) => 1202.24
  (2021, ("asset1", "asset2")) => 759.2
source
TulipaEnergyModel.calculate_weight_for_investment_discountsMethod
calculate_weight_for_investment_discounts(social_rate,
                                           discount_year,
                                           salvage_value,
                                           investment_cost,
                                           ...)

Calculates the weight for investment discounts for each asset, both energy assets and transport assets.

 Dict{Tuple{Int64, Tuple{String, String}}, Float64} with 3 entries:
   (2022, ("asset1", "asset2")) => 0.0797817
   (2021, ("asset3", "asset4")) => 0.13097
  (2021, ("asset1", "asset2")) => 0.158874
source
TulipaEnergyModel.calculate_weight_for_investment_discountsMethod
calculate_weight_for_investment_discounts(graph::MetaGraph,
                                           years,
                                           investable_assets,
                                           assets,
                                           model_parameters,
                                         )

Calculates the weight for investment discounts for each asset, both energy assets and transport assets. Internally calls calculate_annualized_cost, calculate_salvage_value, calculate_weight_for_investment_discounts.

Arguments

  • graph::MetaGraph: A graph
  • years::Array: An array of years to be considered.
  • investable_assets::Dict: A dictionary where the key is a year, and the value is an array of assets that are relevant for that year.
  • assets::Array: An array of assets.
  • model_parameters::ModelParameters: A model parameters structure.

Returns

  • A Dict where the keys are tuples (year, asset) representing the year and the asset, and the values are the weights for investment discounts.
source
TulipaEnergyModel.compute_assets_partitions!Method
compute_assets_partitions!(partitions, df, a, representative_periods)

Parses the time blocks in the DataFrame df for the asset a and every representative period in the timesteps_per_rp dictionary, modifying the input partitions.

partitions must be a dictionary indexed by the representative periods, possibly empty.

timesteps_per_rp must be a dictionary indexed by rep_period and its values are the timesteps of that rep_period.

To obtain the partitions, the columns specification and partition from df are passed to the function _parse_rp_partition.

source
TulipaEnergyModel.compute_constraints_partitionsMethod
cons_partitions = compute_constraints_partitions(graph, representative_periods)

Computes the constraints partitions using the assets and flows partitions stored in the graph, and the representative periods.

The function computes the constraints partitions by iterating over the partition dictionary, which specifies the partition strategy for each resolution (i.e., lowest or highest). For each asset and representative period, it calls the compute_rp_partition function to compute the partition based on the strategy.

source
TulipaEnergyModel.compute_dual_variablesMethod
compute_dual_variables(model)

Compute the dual variables for the given model.

If the model does not have dual variables, this function fixes the discrete variables, optimizes the model, and then computes the dual variables.

Arguments

  • model: The model for which to compute the dual variables.

Returns

A named tuple containing the dual variables of selected constraints.

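The fix-and-re-optimize procedure described above can be sketched with plain JuMP. This is not the package's internal code; fix_discrete_variables is the standard JuMP helper for the same idea:

using JuMP, HiGHS

model = Model(HiGHS.Optimizer)
@variable(model, x, Bin)
@variable(model, y >= 0)
@constraint(model, con, x + y >= 1.5)
@objective(model, Min, x + 2y)
optimize!(model)

# Fix the discrete variables at their optimal values; the model becomes an LP.
undo = fix_discrete_variables(model)
optimize!(model)
dual(con)  # the dual of con is now well defined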
source
TulipaEnergyModel.compute_flows_partitions!Method
compute_flows_partitions!(partitions, df, u, v, representative_periods)

Parses the time blocks in the DataFrame df for the flow (u, v) and every representative period in the timesteps_per_rp dictionary, modifying the input partitions.

partitions must be a dictionary indexed by the representative periods, possibly empty.

timesteps_per_rp must be a dictionary indexed by rep_period and its values are the timesteps of that rep_period.

To obtain the partitions, the columns specification and partition from df are passed to the function _parse_rp_partition.

source
TulipaEnergyModel.compute_rp_partitionMethod
rp_partition = compute_rp_partition(partitions, :lowest)

Given the timesteps of various flows/assets in the partitions input, compute the representative period partitions.

Each element of partitions is a partition with the following assumptions:

  • An element is of the form V = [r₁, r₂, …, rₘ], where each rᵢ is a range a:b.
  • r₁ starts at 1.
  • rᵢ₊₁ starts at the end of rᵢ plus 1.
  • rₘ ends at some value N, which is the same for all elements of partitions.

Notice that this implies that they form a disjoint partition of 1:N.

The output will also be a partition with the conditions above.

Strategies

:lowest

If strategy = :lowest (default), then the output is constructed greedily, i.e., it selects the next largest breakpoint following the algorithm below:

  1. Input: Vᴵ₁, …, Vᴵₚ, a list of time blocks. Each element of Vᴵⱼ is a range r = r.start:r.end. Output: V.
  2. Compute the end of the representative period N (all Vᴵⱼ should have the same end)
  3. Start with an empty V = []
  4. Define the beginning of the range s = 1
  5. Define the array of next breakpoints B, where Bⱼ is the first r.end with r.end ≥ s over the ranges r ∈ Vᴵⱼ.
  6. The end of the range is e = max Bⱼ.
  7. Define r = s:e and add r to the end of V.
  8. If e = N, then END
  9. Otherwise, define s = e + 1 and go to step 5.

Examples

partition1 = [1:4, 5:8, 9:12]
 partition2 = [1:3, 4:6, 7:9, 10:12]
 compute_rp_partition([partition1, partition2], :lowest)
 
# output

3-element Vector{UnitRange{Int64}}:
 1:4
 5:8
 9:12
source
TulipaEnergyModel.construct_dataframesMethod
dataframes = construct_dataframes(
     connection,
     graph,
     representative_periods,
     constraints_partitions,
     years,
)

Computes the data frames used to linearize the variables and constraints. These are used internally in the model only.

source
TulipaEnergyModel.create_internal_structuresMethod
graph, representative_periods, timeframe = create_internal_structures(connection)

Return the graph, representative_periods, and timeframe structures given the input dataframes structure.

The details of these structures are:

source
TulipaEnergyModel.create_modelMethod
model = create_model(graph, representative_periods, dataframes, timeframe, groups; write_lp_file = false)

Create the energy model given the graph, representative_periods, dictionary of dataframes (created by construct_dataframes), timeframe, and groups.

source
TulipaEnergyModel.default_parametersMethod
default_parameters(Val(optimizer_name_symbol))
 default_parameters(optimizer)
 default_parameters(optimizer_name_symbol)
 default_parameters(optimizer_name_string)

Returns the default parameters for a given JuMP optimizer. Falls back to Dict() for undefined solvers.

Arguments

There are four ways to use this function:

  • Val(optimizer_name_symbol): This uses type dispatch with the special Val type. Pass the solver name as a Symbol (e.g., Val(:HiGHS)).
  • optimizer: The JuMP optimizer type (e.g., HiGHS.Optimizer).
  • optimizer_name_symbol or optimizer_name_string: Pass the name in Symbol or String format and it will be converted to Val.

Using Val is necessary for the dispatch. All other cases will convert the argument and call the Val version, which might lead to type instability.

Examples

using HiGHS
 
 # output
 
true
source
TulipaEnergyModel.durationMethod
Δ = duration(block, rp, representative_periods)

Computes the duration of the block and multiplies it by the resolution of the representative period rp.

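A minimal sketch of the computation, assuming (hypothetically) a time block of 3 timesteps and a representative period with resolution 0.5:

block = 1:3                        # a time block of 3 timesteps
resolution = 0.5                   # hypothetical resolution of rp
Δ = length(block) * resolution     # 3 × 0.5 = 1.5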
source
TulipaEnergyModel.filter_graphMethod
filter_graph(graph, elements, value, key)
 filter_graph(graph, elements, value, key, year)

Helper function to filter elements (assets or flows) in the graph given a key (and possibly year) and value (or values). In the safest case, this is equivalent to the filters

filter_assets_whose_key_equal_to_value = a -> graph[a].key == value
 filter_assets_whose_key_year_equal_to_value = a -> graph[a].key[year] in value
 filter_flows_whose_key_equal_to_value = f -> graph[f...].key == value
filter_flows_whose_key_year_equal_to_value = f -> graph[f...].key[year] in value
source
TulipaEnergyModel.get_graph_value_or_missingMethod
get_graph_value_or_missing(graph, graph_key, field_key)
get_graph_value_or_missing(graph, graph_key, field_key, year)

Get graph[graph_key].field_key (or graph[graph_key].field_key[year]) or return missing if any of the values do not exist. We also check if graph[graph_key].active[year] is true if the year is passed and return missing otherwise.

source
TulipaEnergyModel.profile_aggregationMethod
profile_aggregation(agg, profiles, key, block, default_value)

Aggregates the profiles[key] over the block using the agg function. If the profile does not exist, uses default_value instead of each profile value.

profiles should be a dictionary of profiles, for instance graph[a].profiles or graph[u, v].profiles. If profiles[key] exists, then this function computes the aggregation of profiles[key] over the range block using the aggregator agg, i.e., agg(profiles[key][block]). If profiles[key] does not exist, then this substitutes it with a vector of default_values.

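For example, a minimal sketch with a hypothetical profiles dictionary (the key and the values are made up for illustration):

using TulipaEnergyModel

profiles = Dict("availability" => [0.9, 0.8, 0.7, 0.6])

profile_aggregation(sum, profiles, "availability", 1:2, 1.0)  # sum([0.9, 0.8]) = 1.7
profile_aggregation(sum, profiles, "demand", 1:2, 1.0)        # missing key: sum([1.0, 1.0]) = 2.0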
source
TulipaEnergyModel.read_parameters_from_fileMethod
read_parameters_from_file(filepath)

Parse the parameters from a file into a dictionary. The keys and values are NOT checked to be valid parameters for any specific solvers.

The file should contain a list of lines of the following type:

key = value

The file is parsed as TOML, which is intuitive. See the example below.

Example

# Creating file
 filepath, io = mktemp()
 println(io,
   """
   "small_number"   => 1.0e-8
   "true_or_false"  => true
   "real_number1"   => 3.14
  "big_number"     => 6.66e6
source
TulipaEnergyModel.run_scenarioMethod
energy_problem = run_scenario(connection; optimizer, parameters, write_lp_file, log_file, show_log)

Run the scenario in the given connection and return the energy problem.

The optimizer and parameters keyword arguments can be used to change the optimizer (the default is HiGHS) and its parameters. These arguments are passed to the solve_model function.

Set write_lp_file = true to export the problem that is sent to the solver to a file for viewing. Set show_log = false to silence printing the log while running. Specify a log_file name to export the log to a file.

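A minimal usage sketch, assuming the scenario's input tables have already been loaded into a DuckDB connection (the loading step, for example via TulipaIO, is omitted):

using TulipaEnergyModel, HiGHS
using DuckDB: DBInterface, DB

connection = DBInterface.connect(DB)
# ... load the scenario's input tables into connection here ...

energy_problem = run_scenario(connection; optimizer = HiGHS.Optimizer, show_log = false)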
source
TulipaEnergyModel.safe_comparisonMethod
safe_comparison(graph, a, value, key)
safe_comparison(graph, a, value, key, year)

Check if graph[a].value (or graph[a].value[year]) is equal to value. This function assumes that if graph[a].value is a dictionary and value is not, then you made a mistake. This makes it safer, because it will not silently return false. It also checks for missing.

source
TulipaEnergyModel.safe_inclusionMethod
safe_inclusion(graph, a, value, key)
safe_inclusion(graph, a, value, key, year)

Check if graph[a].value (or graph[a].value[year]) is in values. This correctly handles missing, so that checking missing in [missing] returns false.

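The pitfall this guards against is plain-Julia behavior, where in propagates missing instead of returning a Bool:

missing in [missing]                   # missing, not true or false
coalesce(missing in [missing], false)  # false, the behavior safe_inclusion provides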
source
TulipaEnergyModel.save_solution_to_fileMethod
save_solution_to_file(output_file, graph, solution)

Saves the solution in CSV files inside the given output folder.

The following files are created:

  • assets-investment.csv: The format of each row is a,v,p*v, where a is the asset name, v is the corresponding asset investment value, and p is the corresponding capacity value. Only investable assets are included.
  • assets-investments-energy.csv: The format of each row is a,v,p*v, where a is the asset name, v is the corresponding asset investment value on energy, and p is the corresponding energy capacity value. Only investable assets with a storage_method_energy set to true are included.
  • flows-investment.csv: Similar to assets-investment.csv, but for flows.
  • flows.csv: The value of each flow, per (from, to) flow, rp representative period and timestep. Since the flow is in power, the value at a timestep is equal to the value at the corresponding time block, i.e., if flow[1:3] = 30, then flow[1] = flow[2] = flow[3] = 30.
  • storage-level.csv: The value of each storage level, per asset, rp representative period, and timestep. Since the storage level is in energy, the value at a timestep is a proportional fraction of the value at the corresponding time block, i.e., if level[1:3] = 30, then level[1] = level[2] = level[3] = 10.
source
TulipaEnergyModel.solve_modelFunction
solution = solve_model(model[, optimizer; parameters])

Solve the JuMP model and return the solution. The optimizer argument should be an MILP solver from the JuMP list of supported solvers. By default we use HiGHS.

The keyword argument parameters should be passed as a list of key => value pairs. These can be created manually, obtained using default_parameters, or read from a file using read_parameters_from_file.

The solution object is a mutable struct with the following fields:

  • assets_investment[a]: The investment for each asset, indexed on the investable asset a. To create a traditional array in the order given by the investable assets, one can run

    [solution.assets_investment[a] for a in labels(graph) if graph[a].investable]
  • assets_investment_energy[a]: The investment in the energy component for each asset, indexed on the investable asset a with storage_method_energy set to true.

    To create a traditional array in the order given by the investable assets, one can run

    [solution.assets_investment_energy[a] for a in labels(graph) if graph[a].investable && graph[a].storage_method_energy]
  • flows_investment[u, v]: The investment for each flow, indexed on the investable flow (u, v). To create a traditional array in the order given by the investable flows, one can run

    [solution.flows_investment[(u, v)] for (u, v) in edge_labels(graph) if graph[u, v].investable]
  • storage_level_intra_rp[a, rp, timesteps_block]: The storage level for the storage asset a for a representative period rp and a time block timesteps_block. The list of time blocks is defined by constraints_partitions, which was used to create the model. To create a vector with all values of storage_level_intra_rp for a given a and rp, one can run

    [solution.storage_level_intra_rp[a, rp, timesteps_block] for timesteps_block in constraints_partitions[:lowest_resolution][(a, rp)]]
  • storage_level_inter_rp[a, pb]: The storage level for the storage asset a for a periods block pb. To create a vector with all values of storage_level_inter_rp for a given a, one can run

    [solution.storage_level_inter_rp[a, pb] for pb in graph[a].timeframe_partitions]
  • flow[(u, v), rp, timesteps_block]: The flow value for a given flow (u, v) at a given representative period rp, and time block timesteps_block. The list of time blocks is defined by graph[(u, v)].partitions[rp]. To create a vector with all values of flow for a given (u, v) and rp, one can run

    [solution.flow[(u, v), rp, timesteps_block] for timesteps_block in graph[u, v].partitions[rp]]
  • objective_value: A Float64 with the objective value at the solution.

  • duals: A NamedTuple containing the dual variables of selected constraints.

Examples

parameters = Dict{String,Any}("presolve" => "on", "time_limit" => 60.0, "output_flag" => true)
solution = solve_model(model, HiGHS.Optimizer; parameters = parameters)
source
TulipaEnergyModel.solve_model!Method
solution = solve_model!(dataframes, model, ...)

Solves the JuMP model, returns the solution, and modifies dataframes to include the solution. The modifications made to dataframes are:

  • df_flows.solution = solution.flow
  • df_storage_level_intra_rp.solution = solution.storage_level_intra_rp
  • df_storage_level_inter_rp.solution = solution.storage_level_inter_rp
source
TulipaEnergyModel.tmp_create_partition_tablesMethod
tmp_create_partition_tables(connection)

Create the unrolled partition tables using only database tables.

The table explicit_assets_rep_periods_partitions is the explicit version of assets_rep_periods_partitions, i.e., it adds the rows not defined in that table by setting the specification to 'uniform' and the partition to '1'.

The table asset_time_resolution is the unrolled version of the table above, i.e., it takes the specification and partition and expands into a series of time blocks. The columns time_block_start and time_block_end replace the specification and partition columns.

Similarly, flow tables are created as well.

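Once created, the unrolled tables can be inspected with plain SQL over the same connection; a small sketch, with the table name taken from the description above:

using DuckDB: DBInterface

tmp_create_partition_tables(connection)
DBInterface.execute(connection, "SELECT * FROM asset_time_resolution LIMIT 5")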
source