GroundSTory: Interactive Storytelling for Grounded Spatiotemporal Anomaly Explanations with Graphs and Large Language Models
This repository contains the official implementation of the paper *Interactive Exploration and Explanation of Spatiotemporal Anomalies with Graph-LLM Integration*, prepared for submission to the ACM TIST special issue on New Frontiers in Interactive Storytelling and Computational Models of Narrative.
Explaining anomalies in spatiotemporal data requires more than identifying statistical deviations; it often demands coherent narratives that relate events to their spatial, temporal, and social context. In urban settings, where anomalies may reflect public events, behavioral shifts, or environmental factors, the lack of interpretable, contextual explanations limits the effectiveness of automated analysis.
We present GroundSTory, an interactive framework that reframes spatiotemporal anomaly explanation as a form of computational storytelling. GroundSTory integrates graph-based representations of urban space, coordinated visual analytics, and large language models (LLMs) to support the construction of grounded, human-readable narrative explanations. Anomalies act as narrative anchors, while localized spatial and temporal relations and contextual signals derived from the graph guide and constrain narrative generation. The LLM functions as a narrative assistant, proposing plausible, evidence-backed hypotheses that analysts can inspect, compare, and validate through visual interaction.
We demonstrate the approach using real-world urban datasets from São Paulo and New York City, illustrating how interactive narratives surface meaningful links between detected anomalies and contextual factors such as holidays, mass events, and routine activity patterns. A prompt ablation study further examines how structured graph-derived context affects the stability and perceived quality of generated hypotheses. Overall, our results indicate that combining knowledge-driven representations, interactive visualization, and constrained prompting enables more coherent and interpretable narratives for spatiotemporal anomaly analysis, contributing to ongoing research in interactive storytelling and narrative-centered AI systems.
The easiest way to run the project locally is using Docker Compose.
- Configure environment variables: create a `.env` file in the `backend/` directory:

  ```
  # Required configuration
  OPENAI_API_KEY=your_openai_key
  GOOGLE_API_KEY=your_google_key

  # Dataset path (absolute path to the unzipped datasets)
  DATASET_BASE_PATH=/absolute/path/to/extracted/datasets
  DATASETS_JSON=./datasets.json

  # Optional Flask settings
  FLASK_ENV=development
  FLASK_DEBUG=1
  ```
- Run with Docker Compose:

  ```bash
  docker compose up --build
  ```
- Access the application:
  - Backend API: http://localhost:8000
  - Frontend UI: http://localhost:5173
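Before bringing the stack up, it can help to confirm that `backend/.env` defines the required keys. The sketch below is a hypothetical helper, not part of the repository; it assumes the key names shown in the `.env` template above:

```shell
#!/bin/sh
# check_env FILE: print any required keys that are absent from FILE.
# Key names are taken from the .env template above; extend as needed.
check_env() {
  for key in OPENAI_API_KEY GOOGLE_API_KEY DATASET_BASE_PATH; do
    grep -q "^${key}=" "$1" 2>/dev/null || echo "missing: ${key}"
  done
}

check_env backend/.env
```

Silent output means every required key is present; each `missing:` line names a key to add before `docker compose up`.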
If you prefer to run the components locally without Docker, follow these steps:
The backend is built with Python and relies on Conda/Micromamba for dependency management.
```bash
cd backend
micromamba create -n server -f environment.yml
micromamba activate server

# Ensure your backend/.env file is configured before running
python -m app.main
```

The frontend is a Svelte application.
```bash
cd frontend
npm install
npm run dev
```

To run the application, you need the pre-processed graph and signal datasets.
- Download the zipped datasets from Google Drive.
- Extract the contents to a local directory on your machine.
- Update the `DATASET_BASE_PATH` in your `backend/.env` file to point to this extracted directory.
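After updating the file, a quick sanity check can confirm that the configured path actually exists on disk. This is a hypothetical snippet, not part of the repository; it assumes the `DATASET_BASE_PATH=` line format shown above:

```shell
#!/bin/sh
# check_dataset_path FILE: read DATASET_BASE_PATH from FILE and verify
# that it names an existing directory.
check_dataset_path() {
  path=$(sed -n 's/^DATASET_BASE_PATH=//p' "$1" 2>/dev/null)
  if [ -n "$path" ] && [ -d "$path" ]; then
    echo "dataset path ok: $path"
  else
    echo "dataset path missing or invalid: '$path'"
  fi
}

check_dataset_path backend/.env
```

Run this after extracting the datasets; an "ok" line means the backend should be able to resolve the dataset directory at startup.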
```
st-anomaly-xplain/
├── backend/              # Python Flask server & analytical functions
│   ├── app/              # Main application routes and config
│   ├── functions/        # Core logic, graph processing, and LLM integrations
│   ├── tests/            # Unit and integration tests
│   ├── datasets.json     # Dataset metadata and pointers
│   ├── environment.yml   # Micromamba/Conda dependencies
│   └── Dockerfile        # Backend Docker configuration
├── frontend/             # Svelte-based interactive visualization
│   ├── src/              # Svelte components and application logic
│   ├── static/           # Static assets (images, icons)
│   ├── package.json      # Node dependencies
│   ├── svelte.config.js  # Svelte application configuration
│   └── Dockerfile        # Frontend Docker configuration
├── docker-compose.yml    # Orchestration for full-stack deployment
└── README.md             # Project documentation
```
If you use this code or our framework in your research, please cite our paper:
```bibtex
@article{2025-AnomalyExplain,
  title   = {Interactive Exploration and Explanation of Spatiotemporal Anomalies with Graph-LLM Integration},
  author  = {Juanpablo Heredia and Leighton Estrada-Rayme and Jeremy Matos-Cangalaya and Jorge Poco},
  journal = {Graphics, Patterns and Images (SIBGRAPI)},
  year    = {2025},
  url     = {},
  date    = {2025-09-20}
}
```