The goal of this complexity management tool is to guide energy system modelers toward optimal model formulations. These formulations should reflect (1) the focus of the research and (2) the preferences regarding complexity and accuracy of both the modelers and the interpreters of the model's results.

In addition to this interactive complexity manager, we have published a comprehensive literature review on complexity in energy system modeling (see link). That paper also presents best practices for model formulations that keep complexity manageable.
In the following, you will complete three steps to find the optimal model formulation:

Selecting the type of model based on the field of research and the underlying research question

Defining your preferences regarding complexity (measured by runtimes and memory usage) and accuracy (measured by deviations in the target function value from a very complex benchmark model)

Evaluating the optimal model parameters

The model parameters to be defined comprise settings regarding the (1) temporal and spatial aggregation, (2) technological aggregation, (3) conversion modeling (e.g. linearization of unit commitment and part load constraints), (4) transmission modeling and (5) storage modeling.
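As an illustration of the five setting groups above, one candidate parameter set could be represented as a simple mapping. This is a hypothetical sketch: the key names and values are chosen for readability and are not the tool's actual identifiers.

```python
# Hypothetical representation of one candidate parameter set; the keys mirror
# the five setting groups above, but names and values are illustrative only.
parameter_set = {
    "temporal_aggregation": "4h",               # (1) temporal resolution
    "spatial_aggregation": 12,                  # (1) number of model regions
    "technological_aggregation": "clustered",   # (2) aggregate similar units
    "conversion_modeling": "linearized",        # (3) linearized unit commitment
    "transmission_modeling": "transport",       # (4) simplified flow formulation
    "storage_modeling": "inter-temporal",       # (5) storage formulation
}
print(sorted(parameter_set))
```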

Data basis

To derive guidelines for optimal model formulations, we conducted a large-scale supercomputing project on the high-performance computing cluster JURECA-DC at the Jülich Supercomputing Centre.
Our project was granted approximately 3 million core hours with up to 1 TB of random access memory per computing node. We conducted more than 100,000 model runs using four different models with individual parameter settings and varying sets of input data.

The data basis was generated using four different energy system models as illustrated in the following. For the evaluation of accuracy, we have focused on deviations in target functions (i.e., costs).
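The accuracy metric implied here can be sketched as follows: take the absolute deviation of each run's objective ("target function") value from the benchmark, and normalize by the largest deviation observed. The numbers below are purely illustrative, not results from our model runs.

```python
# Sketch of the accuracy metric: absolute deviation of each run's objective
# value from the benchmark, normalized by the maximum observed deviation.
benchmark_cost = 100.0                 # illustrative benchmark objective value
run_costs = [100.0, 104.0, 92.0, 110.0]  # illustrative run results

deviations = [abs(cost - benchmark_cost) for cost in run_costs]
max_dev = max(deviations)
normalized = [d / max_dev for d in deviations]   # |dTF| in [0, 1]
accuracy = [1 - n for n in normalized]           # A = 1 - |dTF|
print(accuracy)  # the benchmark run itself has accuracy 1.0
```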

Meaning of results

The main outcome of the complexity manager is a recommendation of optimal parameter settings that will maximize your utility based on preferences for complexity (measured by runtime) and accuracy (measured by deviations from the model’s target function, i.e., costs).
Because accuracy and complexity reduction act as complementary goods in our analysis, we use a Cobb-Douglas utility function to determine the optimal, i.e., utility-maximizing, parameter settings:

\(U(A, C) = A^a \cdot C^c\)

where:

U \(:=\)

Utility (to be maximized)

A \(:=\)

Accuracy (measured as \(1 - |\Delta TF|\), with \(|\Delta TF|\) being the deviation in the target function value, normalized to \([0, 1]\) based on the maximum deviation)

a \(:=\)

Weight for accuracy (to be defined by the user, \(a \in [0, 1]\))

C \(:=\)

Complexity reduction (measured as \(1 - |\Delta RT|\), with \(|\Delta RT|\) being the deviation in runtime, normalized to \([0, 1]\) based on the maximum runtime)

c \(:=\)

Weight for complexity (to be defined by the user, \(c \in [0, 1]\) with \(a + c = 1\))

The utility is calculated for each parameter set, and the optimal results are displayed at the end of the tool.
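The selection step above can be sketched in a few lines. This is a minimal illustration of the Cobb-Douglas ranking, not the tool's implementation; the candidate names and the normalized deviation values are hypothetical.

```python
# Minimal sketch of utility-maximizing parameter selection. Candidate names
# and deviation values are hypothetical; only the formula follows the text.

def utility(accuracy: float, complexity_reduction: float,
            a: float, c: float) -> float:
    """Cobb-Douglas utility U(A, C) = A^a * C^c with a + c = 1."""
    assert abs(a + c - 1.0) < 1e-9, "weights must sum to 1"
    return accuracy ** a * complexity_reduction ** c

def best_setting(candidates, a=0.5, c=0.5):
    """candidates: list of (name, |dTF| normalized, |dRT| normalized)."""
    scored = [(name, utility(1 - dtf, 1 - drt, a, c))
              for name, dtf, drt in candidates]
    return max(scored, key=lambda item: item[1])

# Illustrative candidates with normalized deviations in [0, 1]:
candidates = [
    ("hourly, full network", 0.00, 1.00),  # benchmark: accurate but slowest
    ("4h aggregation",       0.05, 0.30),
    ("daily aggregation",    0.20, 0.05),
]
print(best_setting(candidates, a=0.5, c=0.5))
```

With equal weights, the ranking rewards parameter sets that trade a small loss in accuracy for a large reduction in runtime; shifting weight toward `a` moves the optimum back toward the benchmark formulation.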

Funding and team

This work is part of the METIS project and has received funding from the BMWi under grant ID 03ET4064B. Please visit our project website for the project team and the contributors to this work: metis-platform.net