
Open Energy Benchmark

This repository contains code for benchmarking optimization solvers on problems from the energy planning domain, and an interactive website for analyzing the results. The live website can be viewed at:

https://openenergybenchmark.org/

Table of Contents

  • Benchmark Problems
  • Solvers
  • Project Structure
  • Running Benchmarks
  • Running the Website
  • Development

Benchmark Problems

All our benchmark problems are open and available as LP/MPS files that can be downloaded in one click from our website's Benchmark Set page. Some of the problems were generated by us using open-source energy modelling frameworks, and for these we provide configuration files and instructions for reproducing them.

For more details on how to contribute benchmark problems, see the Benchmarks README.

Solvers

The benchmark runner can run the solvers listed in the Solvers README. For each calendar year, we use the last version of each solver released in that year; the 2025 solvers will be updated at the end of the year.

Project Structure

An overview of the project layout, to help you navigate and contribute:

solver-benchmark/
├── runner/                     # Benchmark execution scripts
│   ├── benchmark_all.sh        # Main entry point for running benchmarks
│   ├── run_benchmarks.py       # Python script that orchestrates benchmark runs
│   ├── run_solver.py           # Individual solver runner
│   ├── envs/                   # Conda environment definitions for each solver year
│   └── benchmarks/             # Downloaded benchmark problem files
├── benchmarks/                 # Benchmark problem definitions and metadata
│   ├── pypsa/                  # PyPSA-generated energy models
│   ├── jump_highs_platform/    # JuMP/HiGHS benchmark metadata
│   └── *_metadata.yaml         # Problem definitions and details
├── website-nextjs/             # Next.js website for viewing results
├── infrastructure/             # GCP VM deployment scripts (for running benchmarks at scale)
└── results/                    # Output directory for benchmark results
    ├── benchmark_results.csv   # Main results file
    └── metadata.yaml           # Merged metadata of all problems on the website

Running Benchmarks

Local Runs

Prerequisites

System Requirements

The benchmark runner currently requires Linux as it uses systemd-run to enforce memory limits on solvers, which is not available on macOS or Windows.
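
For context, here is a minimal illustration of how systemd-run can cap a process's memory. This is a sketch of the mechanism only, not necessarily the exact invocation the runner uses, and the solver command shown is hypothetical:

# Launch a command in a transient scope with an 8 GiB memory cap;
# the kernel kills the process if it exceeds the limit.
systemd-run --user --scope -p MemoryMax=8G ./my_solver model.lp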

Supported Linux distributions:

  • Ubuntu 20.04 LTS or later
  • Debian 11 or later
  • Other systemd-based Linux distributions

Required Software

Ensure you have the following installed:

  • Python 3.12+
  • Conda (install Miniconda)
  • systemd (usually pre-installed on modern Linux distributions)
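
You can check that these are available with:

python3 --version      # should report 3.12 or later
conda --version
systemd-run --version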

Running Supported Solvers on Benchmarks

The benchmark runner script (runner/benchmark_all.sh) is the main entry point for running benchmarks. It takes a list of solvers and a list of years as arguments, and runs the benchmarks for each solver and year. It creates conda environments containing the solvers and other necessary prerequisites, so you do not need to set up a virtual environment just to run it. See the documentation for more details.

Quickstart:

  1. Run the benchmarks:
./runner/benchmark_all.sh -s "highs scip" -y "2025" infrastructure/benchmarks/sample_run/standard-00.yaml
  2. View logs and results:
tail results/benchmark_results.csv # note: benchmark runs overwrite the committed results file
tail runner/logs/*
  3. View and analyze the results by running the website locally

The script saves the measured runtime and memory consumption into a CSV file in results/, which the website then reads and displays. Running the website locally allows you to view and analyze the results in a user-friendly way; it uses the results from results/benchmark_results.csv.

runner/benchmark_all.sh uses runner/run_benchmarks.py to run the benchmarks by year. If you wish to run benchmarks directly, you can set up the requisite conda env manually; see the documentation.
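
For example, a manual setup might look like the following sketch; the environment file name here is an assumption, so check runner/envs/ for the actual files:

# Hypothetical example: create and activate a solver environment for 2025
conda env create -f runner/envs/benchmark-2025.yaml   # actual file name may differ
conda activate benchmark-2025
python runner/run_benchmarks.py --help                # list the available options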

Cloud Runs

We have cloud orchestration set up for running benchmarks on Google Cloud Platform; see the documentation.

Quickstart:

For cloud infrastructure setup, install the gcloud CLI and OpenTofu, then run:

gcloud auth application-default login
cd infrastructure
tofu init
tofu apply -var-file benchmarks/sample_run/run.tfvars
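
When you are finished, you can tear down the provisioned cloud resources with the matching destroy command (assuming the same variable file):

tofu destroy -var-file benchmarks/sample_run/run.tfvars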

To set up a comprehensive benchmark campaign, like the one shown on the website:

  1. Use notebooks/allocate-benchmarks-to-vms.ipynb to create the benchmark campaign.
  2. Run notebooks/run-and-observe-benchmarks.ipynb to run the campaign and observe its progress.
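
For example, assuming Jupyter is installed, you can open the first notebook with:

jupyter lab notebooks/allocate-benchmarks-to-vms.ipynb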

Running your own benchmarks

To run your own benchmark problems, either locally or on the cloud, follow the steps in the appropriate section above, but use a benchmarks.yaml file of your own that gives the details (metadata) and the URL or local path of your benchmark problems. Here is a small example:

benchmarks:
  genx-3_three_zones_w_co2_capture-no_uc:
    Sizes:
    - Name: 3-1h
      # Size classification
      Size: M
      # URL of the problem (needed for cloud runs)
      URL: https://storage.googleapis.com/solver-benchmarks/genx-3_three_zones_w_co2_capture-no_uc.lp
      # ALTERNATIVELY, for local runs, you can also give a local path
      Path: tests/sample_benchmarks/sample_lp.lp

You can quickly try running your own problem locally on our supported set of solvers by following these instructions.
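
For example, assuming your file is saved as my_benchmarks.yaml (a hypothetical name), a local run might look like:

./runner/benchmark_all.sh -s "highs" -y "2025" my_benchmarks.yaml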

Running other solvers

To run either our benchmarks or your own (see the previous section) on a solver that we do not yet support, you need to install it into the active conda environment and modify run_solver.py appropriately. Please reach out to us (or open an issue) if you would like more details or any help with this.
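
As an illustration, installing an additional solver into the active environment might look like this; the package and environment names are hypothetical:

conda activate benchmark-2025        # the env created by the benchmark runner
pip install some-solver-package      # or: conda install -c some-channel some-solver
# ...then add a corresponding branch for the new solver in runner/run_solver.py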

Running the Website

The website code is under website-nextjs/. To run the website locally, you need a recent version of node and npm installed. Then, run the following commands:

cd website-nextjs/
npm install
npm run build && npm run dev

Open http://localhost:3000 with your browser to see the website.

To see the results from your runs, navigate to the results page.

Development

We use the ruff code linter and formatter, and GitHub Actions runs various pre-commit checks to ensure code and files are clean.

You can install a git pre-commit hook that ensures your changes are formatted and free of lint issues before creating new commits:

pip install pre-commit
pre-commit install
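
You can also run all checks manually across the whole repository:

pre-commit run --all-files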

If you want to skip these pre-commit steps for a particular commit, you can run:

git commit --no-verify
