
Releases: mutable-org/mutable

v0.0.31

08 May 10:37
[Benchmark] Use correct timeout value

v0.0.30: [Util] Fix in `unittest-parallel.py` script.

27 Apr 16:17
9b39d39
Exit with exit code 1 when a test fails.
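A minimal sketch of this behavior, assuming the script aggregates per-test pass/fail flags; the function names and flow here are hypothetical, not the actual `unittest-parallel.py` code:

```python
import sys

def exit_code_for(results):
    """Return 1 if any test failed, else 0."""
    return 0 if all(results) else 1

def main(results):
    # Propagate failure to the caller (e.g. CI) via the process exit code.
    sys.exit(exit_code_for(results))
```

With this, a CI runner sees a non-zero status as soon as any parallel test fails, instead of always reporting success.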

v0.0.29: [Coverage] Don't produce a JUnit report.

27 Apr 07:19
0f41d3b
The `unittest` binary crashes with
```
unittest: /var/lib/gitlab-runner/builds/yS6csq8A/0/bigdata/mutable/mutable/third-party/catch2/include/catch2/catch.hpp:5918: virtual void Catch::CumulativeReporterBase<Catch::JunitReporter>::testCaseEnded(const Catch::TestCaseStats &) [DerivedT = Catch::JunitReporter]: Assertion `m_sectionStack.size() == 0' failed.
```
This error seems to be related to the Catch2 JUnit reporter.  See
https://github.com/catchorg/Catch2/issues/1801 and
https://github.com/catchorg/Catch2/issues/1967.

To remedy this problem, we simply don't produce a report anymore.
It was never used, anyway.

v0.0.28: [Benchmark] Refactor benchmark system.

21 Apr 14:01
Instead of using a separate script for each experiment and each DBMS to
benchmark, implement `connectors` to these DBMSs. Each connector
provides a method that executes an experiment with the given parameters
and returns the measured times.
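The connector idea can be sketched as follows; the class and method names are assumptions for illustration, not the actual interface of the benchmark system:

```python
from abc import ABC, abstractmethod

class Connector(ABC):
    """Hypothetical connector interface: one subclass per DBMS to benchmark."""

    @abstractmethod
    def execute(self, experiment: dict) -> dict:
        """Run the experiment and return measured times, keyed by configuration."""

class MutableConnector(Connector):
    """Dummy connector: a real one would invoke the DBMS and time the queries."""

    def execute(self, experiment: dict) -> dict:
        # Return placeholder measurements (seconds) for a single configuration.
        return {"default": [1.0, 1.1, 0.9]}
```

The benchmark driver can then iterate over all available connectors and call `execute` uniformly, instead of maintaining one ad-hoc script per experiment-DBMS pair.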

In addition, the format of the YAML files of the experiments has been
refactored to contain all the information and parameters to execute them
on each connector.

`Benchmark.py` is refactored as well: it reads the experiment files and
executes them on each specified connector that is available, possibly
with multiple configurations.

Some more minor changes:
- The benchmark script now has the option to execute one or more
  specific experiments.
- The `run_id` of each experiment run is tracked and inserted into the
  database.
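Tracking a `run_id` per experiment run could look roughly like this; the counter and record layout are hypothetical, not the actual database schema:

```python
import itertools

# Hypothetical monotonically increasing run id, shared across all experiments.
run_counter = itertools.count()

def record_measurement(db, experiment_name, times):
    """Append one run's measurements, tagged with a fresh run_id, to `db`."""
    run_id = next(run_counter)
    db.append({"run_id": run_id, "experiment": experiment_name, "times": times})
    return run_id
```

Tagging each run lets later analysis distinguish repeated runs of the same experiment within one benchmark session.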

v0.0.27

18 Apr 10:42
1d6cd0b
[CI] Fix release asset upload path & variable substitution