This project focuses on generating natural language explanations for AI decision-making processes. It demonstrates how formal, technical explanations can be transformed into conversational, user-friendly language that is more accessible to non-technical users.
- Abhinav Atmuri
- Bar Melinarskiy
- Konstantinos Zavantias
- Nikita Aksjonov
- Python 3.8+
- Docker and Docker Compose
- Required Python packages (install via `pip install -r requirements.txt`):
  - anytree
  - numpy
  - pandas
  - requests
- Part1/: Contains all code for the first part of the assignment regarding general goal tree explanations using anytree
- Part2/: Natural language explanations for decisions made by the agent in the goal tree
  - nl_generation.ipynb: Jupyter notebook for generating natural language explanations
  - ollama_docker/ollama.yaml: Docker configuration for the language model
  - generate_explanations_for_test_cases.py: Generates baseline technical explanations
- requirements.txt: List of required Python packages
Run the following command to generate the initial technical explanations:

```
python generate_explanations_for_test_cases.py
```

Start the Docker container with the language model (Mistral):

```
docker-compose -f ./Part2/ollama_docker/ollama.yaml up --build -d
```

Open and run the Jupyter notebook:

```
jupyter notebook Part2/nl_generation.ipynb
```

The generated explanations will be saved in:

- explanations/baseline/: Technical explanations
- explanations/llama3/ or explanations/mistral/: Natural language explanations
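As a rough sketch, the notebook's calls to the local language model could look like the following. The endpoint and request fields follow Ollama's standard `/api/generate` HTTP API; the prompt wording and the `build_payload` / `generate_natural_language` helpers are illustrative assumptions, not the notebook's actual code:

```python
import json

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint

def build_payload(technical_explanation, model="mistral"):
    """Build the request body for Ollama's /api/generate endpoint."""
    prompt = (
        "Rewrite the following technical explanation in plain, "
        "conversational language for a non-technical user:\n\n"
        f"{technical_explanation}"
    )
    # stream=False asks Ollama for a single JSON response instead of chunks.
    return {"model": model, "prompt": prompt, "stream": False}

def generate_natural_language(technical_explanation, model="mistral"):
    """Send one explanation to the model. Requires the ollama.yaml container to be running."""
    import requests
    resp = requests.post(
        OLLAMA_URL,
        json=build_payload(technical_explanation, model),
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]

# Inspect the request body without needing the server up.
payload = build_payload("Goal 'grasp_coat' succeeded because all subgoals succeeded.")
print(json.dumps(payload, indent=2))
```

Swapping `model="mistral"` for `model="llama3"` is how the same pipeline would produce the explanations/llama3/ outputs.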
To stop the Docker container when finished:

```
docker-compose -f ./Part2/ollama_docker/ollama.yaml down
```

This project demonstrates how AI systems can provide more human-friendly explanations of their decision-making processes. By transforming formal reasoning codes and technical explanations into natural language, we make AI decisions more transparent and accessible to non-technical users.