
Releases: understandable-machine-intelligence-lab/Quantus

v0.6.0

21 Jul 15:28
85bcf2c


This release introduces a major update to Quantus focused on performance, scalability, and compatibility. The key addition is support for batched metric evaluation, which significantly improves runtime efficiency. The release also includes initial support for HuggingFace models, internal clean-ups, and compatibility upgrades.


What’s Changed

Batch support for explanation metrics

  • Added batch-based evaluation to the following metrics:
    • PixelFlipping, Monotonicity, MonotonicityCorrelation, FaithfulnessCorrelation, FaithfulnessEstimate.
  • Utility functions adapted for batch computation:
    • correlation_pearson, correlation_spearman, correlation_kendall_tau (similarity functions), calculate_auc, get_baseline_dict.
  • Legacy per-instance versions removed for consistency and runtime gains.

Batch correctness was validated via side-by-side comparison with previous outputs across 30 random runs per sample. Deterministic metrics were verified using np.allclose, while stochastic ones were compared using statistical tests (e.g., t-tests showed strong output agreement in >90% of cases).
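
The batched metrics keep the existing call interface. Below is a minimal sketch of a batched run, assuming a prepared model with matching x_batch, y_batch and a_batch arrays (all names are illustrative, and the constructor arguments shown are ordinary PixelFlipping options, not release-specific):

    # Minimal sketch of batched metric evaluation (assumes a prepared model and
    # matching NumPy arrays x_batch, y_batch, a_batch; names are illustrative).
    import numpy as np
    import quantus

    metric = quantus.PixelFlipping(
        features_in_step=28,        # number of features perturbed per step
        perturb_baseline="black",   # value used to replace flipped features
    )

    # As of v0.6.0 the metric processes the whole batch at once instead of
    # looping over instances, which is where the runtime gains come from.
    scores = metric(
        model=model,
        x_batch=x_batch,            # e.g. shape (batch_size, channels, height, width)
        y_batch=y_batch,            # e.g. shape (batch_size,)
        a_batch=a_batch,            # precomputed attributions matching x_batch
        device="cpu",
    )
    print(np.asarray(scores).shape)  # one result (or curve) per sample in the batch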

Thanks to @davor10105 for leading the batch refactor and validation.


HuggingFace Transformers support

  • Initial integration of transformers via the PyTorchModel wrapper for SequenceClassification tasks.
  • predict now handles HuggingFace models out of the box (see the sketch below).
  • Added test coverage and linting via tox and flake8.
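
As a rough sketch of the new path: the wrapper import location and the exact input format accepted by predict (padded token ids here) are assumptions based on this note, not documented API.

    # Sketch: wrapping a HuggingFace SequenceClassification model for Quantus.
    # The PyTorchModel import path and the token-id input format are assumptions.
    import numpy as np
    from transformers import AutoModelForSequenceClassification, AutoTokenizer
    from quantus.helpers.model.pytorch_model import PyTorchModel

    name = "distilbert-base-uncased-finetuned-sst-2-english"
    model = AutoModelForSequenceClassification.from_pretrained(name)
    tokenizer = AutoTokenizer.from_pretrained(name)

    # Tokenise a small batch of sentences into a padded array of token ids.
    texts = ["a thoroughly enjoyable film", "a dull and lifeless script"]
    x_batch = tokenizer(texts, padding=True, return_tensors="np")["input_ids"]

    # The wrapper detects the transformers model and routes predict through it,
    # returning class scores as a NumPy array of shape (batch_size, num_labels).
    wrapped = PyTorchModel(model=model, softmax=True)
    scores = wrapped.predict(x_batch)
    print(scores.shape)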

Thanks to @abarbosa94 for the transformer implementation.


Other improvements

  • Dropped Python 3.7 support.
  • Cleaned up pyproject.toml; improved linting and module verification (mypy, flake8, isort).
  • Fixed normalise_func_kwargs bug in base.py.
  • Resolved shape mismatch in Region Perturbation visualisations for batched inputs (#353).
  • Updated documentation and README links (e.g., QUANDA references).

v0.5.3

05 Dec 11:42


What's Changed

  • Bugfix: Added explain_func_kwargs to SmoothMPRT and zennit tests by @annahedstroem in #318
  • Improvement: Add warning of deprecated argument (handle elegantly) by @annahedstroem in #319

Full Changelog: v0.5.2...v0.5.3

v0.5.2

01 Dec 14:26
ac36e91


What's Changed

Full Changelog: v0.5.1...v0.5.2

v0.5.1

27 Nov 11:16
c33f403


What's Changed

  • Bug fixes for the EfficientMPRT metric and expand quantus.evaluate func by @annahedstroem in #314
  • Update pandas version control (to work smoothly with Colab environment) and add back print warn message for Model Parameter Randomisation Test by @annahedstroem in #315

Full Changelog: v0.5.0...v0.5.1

v0.5.0

24 Nov 14:46
ea2890e


What's Changed

In this release, we introduce two new metrics, SmoothMPRT and EfficientMPRT, which are variants of the Model Parameter Randomisation Test (MPRT). The implementations follow the paper "Sanity Checks Revisited: An Exploration to Repair the Model Parameter Randomisation Test" by Hedström et al. (2023), published at the NeurIPS XAIA workshop.
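
A minimal sketch of calling the new variants, assuming a prepared model and data batches and using the built-in quantus.explain wrapper; constructor defaults are used and all argument values are illustrative:

    # Sketch: running the new MPRT variants on a prepared model and batch
    # (model, x_batch and y_batch are assumed to be set up elsewhere).
    import quantus

    for name, metric in [
        ("SmoothMPRT", quantus.SmoothMPRT()),
        ("EfficientMPRT", quantus.EfficientMPRT()),
    ]:
        scores = metric(
            model=model,
            x_batch=x_batch,
            y_batch=y_batch,
            a_batch=None,                          # metric re-explains after randomisation
            explain_func=quantus.explain,          # built-in explanation wrapper
            explain_func_kwargs={"method": "Saliency"},
            device="cpu",
        )
        print(name, scores)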

Full Changelog: v0.4.5...v0.5.0

v0.4.5

17 Nov 11:09


What's Changed

Full Changelog: v0.4.4...v0.4.5

v0.4.4

26 Oct 16:25
1de69e6


What's Changed

Full Changelog: v0.4.3...v0.4.4

v0.4.3

10 Aug 10:58


What's Changed

Full Changelog: v0.4.2...v0.4.3

v0.4.2

09 Aug 15:04


What's Changed

  • More exposure of Quantus functionality, including model and utils
  • Create transparency with respect to the metric’s data and model applicability by @annahedstroem in #279
  • Update evaluation.py by @annahedstroem in #282
  • Fixed issue in max_sensitivity.py by @annahedstroem in #289

Full Changelog: v0.4.1...v0.4.2

v0.4.1

27 Jun 11:09


What's Changed

Full Changelog: v0.4.0...v0.4.1