diff --git a/docs/src/30-concepts.md b/docs/src/30-concepts.md
index 001c3e03..16371f9b 100644
--- a/docs/src/30-concepts.md
+++ b/docs/src/30-concepts.md
@@ -638,12 +638,7 @@ unstacked_map[!,["k=1", "k=2", "k=3"]] = convert.(Float64, unstacked_map[!,["k=1
 unstacked_map # hide
 ```
 
-The file `assets-timeframe-partitions` has the information on how often we want to evaluate the inter-temporal constraints that combine the information of the representative periods. In this example, we define a uniform distribution of one period, meaning that we will check the inter-storage level every day of the week timeframe.
-
-```@example seasonal-storage
-phs_partitions_file = "../../test/inputs/Storage/assets-timeframe-partitions.csv" # hide
-phs_partitions = CSV.read(phs_partitions_file, DataFrame, header = 1) # hide
-```
+The file `assets-timeframe-partitions` holds the information on how often we want to evaluate the inter-temporal constraints that combine the information of the representative periods. In this example, the file is missing from the folder, so the model uses the default `uniform` distribution of one period; see the [model parameters](@ref schemas) section. This assumption implies that the model checks the inter-storage level every day of the week timeframe.
 
 !!! info
     For the sake of simplicity, we show how using three representative days can recover part of the chronological information of one week. The same method can be applied to more representative periods to analyze the seasonality across a year or longer timeframe.
diff --git a/test/inputs/Storage/assets-timeframe-partitions.csv b/test/inputs/Storage/assets-timeframe-partitions.csv
deleted file mode 100644
index 5a0bccaa..00000000
--- a/test/inputs/Storage/assets-timeframe-partitions.csv
+++ /dev/null
@@ -1,2 +0,0 @@
-asset,year,specification,partition
-phs,2030,uniform,1