This repository mainly contains the `run_benchmarks.py` script, as well as a collection of MINLP problems/models taken from MINLPlib and converted to Pyomo models with the `translate.py` script.
Run the benchmarks with:

```shell
python run_benchmarks.py -h  # show some help
python run_benchmarks.py --solver mindtpy --model-dir models
# or, when re-running
python run_benchmarks.py --solver mindtpy --model-dir models --redo-existing --no-skip-failed
```

After running, the `results/<solver>` directory will contain
- a `.txt` file for each model's output
- `trace_file.trc`, which can be loaded into Paver to generate automatic benchmarking plots (not yet tested)
- `solving_times.csv`, which contains the model name as well as the solving time or the termination condition/error
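Since `solving_times.csv` mixes numeric solving times with textual termination conditions/errors in the same column, a small post-processing step can split the two. This is a sketch using only the standard library; the column names (`model`, `result`) are an assumption and may differ from the actual file:

```python
import csv
import io

# Hypothetical sample data; the real solving_times.csv headers may differ.
sample = io.StringIO(
    "model,result\n"
    "ex1223a,1.42\n"
    "gbd,infeasible\n"
)

solved, failed = [], []
for row in csv.DictReader(sample):
    try:
        # A numeric entry is interpreted as a solving time in seconds...
        solved.append((row["model"], float(row["result"])))
    except ValueError:
        # ...otherwise it is a termination condition or error message.
        failed.append((row["model"], row["result"]))

print(solved)  # [('ex1223a', 1.42)]
print(failed)  # [('gbd', 'infeasible')]
```

Replacing the `StringIO` sample with `open("results/<solver>/solving_times.csv")` would apply the same split to real results.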