# Anomaly eXplanation for Industrial control Systems (AXIS)

This repository implements Stage III of the AXIS framework: a retrieval-augmented explanation pipeline that generates anomaly explanations using auto-retrieval augmented generation with LlamaCloud.
## Setup

```shell
git clone https://github.com/siddydutta/ics-anomaly-explanation.git
cd ics-anomaly-explanation
conda env create -f environment.yml
conda activate ics-anomaly-explanation
cp .env.development .env
```

## Process Attributions

Uses the attribution results from the ICS Anomaly Attributions repository to process the top-K attributions for each attack.
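After copying `.env.development` to `.env`, the pipeline reads its configuration from that file. A minimal stdlib-only sketch of how such `KEY=VALUE` pairs can be loaded into the environment (the actual variable names in `.env.development` are not shown here and are an assumption):

```python
import os

def load_env(path=".env"):
    """Load KEY=VALUE pairs from a dotenv-style file into os.environ.

    Skips blank lines and comments; existing environment variables win.
    Illustrative only -- the repository may use a dedicated dotenv library.
    """
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")
                os.environ.setdefault(key.strip(), value.strip())
```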
```shell
python process_attributions.py --threshold 60
```

This produces the top k = 5 attributions for each attack in the attributions output.
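The selection above can be sketched as a simple threshold-then-rank step. This is an illustrative reconstruction, not the script's actual code; the feature names and scores below are made up:

```python
def top_k_attributions(scores, k=5, threshold=60):
    """Keep features whose attribution score meets the threshold,
    then return the k highest-scoring ones in descending order.

    scores: dict mapping feature name -> attribution score.
    """
    kept = {f: s for f, s in scores.items() if s >= threshold}
    return sorted(kept.items(), key=lambda kv: kv[1], reverse=True)[:k]

# Hypothetical SWaT sensor/actuator scores for one attack:
example = {"FIT101": 92.1, "LIT301": 88.4, "P102": 61.0, "MV201": 40.2}
print(top_k_attributions(example, k=5, threshold=60))
```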
## Process Anomalies

Uses the detection results from the ICS Anomaly Attributions repository to compute temporal statistics for each attack.

```shell
python process_anomalies.py
```

## Explanation Variants

The system supports three approaches for generating explanations of ICS anomalies, each leveraging a different level of domain knowledge and metadata filtering:
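As a rough illustration of the kind of temporal statistics such a step might derive from binary detection labels, the sketch below extracts contiguous anomaly windows with their start index, end index, and duration (the actual statistics computed by `process_anomalies.py` may differ):

```python
def anomaly_windows(labels):
    """Return contiguous runs of 1s in a list of 0/1 detection flags,
    each as a dict with start index, end index, and duration."""
    windows, start = [], None
    for i, flag in enumerate(labels):
        if flag and start is None:
            start = i                       # window opens
        elif not flag and start is not None:
            windows.append({"start": start, "end": i - 1, "duration": i - start})
            start = None                    # window closes
    if start is not None:                   # window still open at end of trace
        windows.append({"start": start, "end": len(labels) - 1,
                        "duration": len(labels) - start})
    return windows

print(anomaly_windows([0, 1, 1, 1, 0, 0, 1, 1]))
# [{'start': 1, 'end': 3, 'duration': 3}, {'start': 6, 'end': 7, 'duration': 2}]
```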
- **BASELINE**: A naive Retrieval-Augmented Generation (RAG) approach. Explanations are generated solely from the top feature attribution for each attack, without any metadata filtering. This serves as a simple baseline for comparison.
- **NO_MITRE**: Explanations are generated from the top feature attribution, with additional filtering based on SWaT technical metadata. This provides more context than the baseline by leveraging domain-specific metadata from the SWaT system.
- **FULL**: The most advanced variant combines SWaT technical metadata filtering with MITRE ATT&CK metadata inference. Explanations are generated from the top feature attribution, enriched by filtering from both sources.
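The difference between the variants comes down to which metadata filters reach the retriever. A hypothetical sketch of that dispatch (the field names `component` and `tactic` are illustrative assumptions, not the repository's actual filter schema):

```python
def build_filters(variant, top_feature, mitre_tactic=None):
    """Return the metadata filters a given variant would apply to retrieval."""
    if variant == "BASELINE":
        return {}                               # naive RAG: no filtering
    filters = {"component": top_feature}        # SWaT technical metadata
    if variant == "FULL" and mitre_tactic is not None:
        filters["tactic"] = mitre_tactic        # inferred MITRE ATT&CK metadata
    return filters

print(build_filters("FULL", "FIT101", mitre_tactic="Impair Process Control"))
```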
## Usage

```shell
python main.py --attack 0 --variant BASELINE --output-dir output/
python main.py --attack 0 --variant NO_MITRE --output-dir output/
python main.py --attack 0 --variant FULL --output-dir output/
```

Or run the helper script with an attack ID:

```shell
chmod +x run_explanations.sh
./run_explanations.sh 0
```
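For scripting from Python instead of the shell, a minimal sketch of a runner that invokes `main.py` for every variant of one attack (assuming `run_explanations.sh` does something equivalent):

```python
import subprocess
import sys

VARIANTS = ["BASELINE", "NO_MITRE", "FULL"]

def run_all(attack: int, output_dir: str = "output/") -> None:
    """Invoke main.py once per variant for the given attack ID."""
    for variant in VARIANTS:
        subprocess.run(
            [sys.executable, "main.py",
             "--attack", str(attack),
             "--variant", variant,
             "--output-dir", output_dir],
            check=True,  # stop on the first failing variant
        )

# Example: run_all(0)
```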
