SHARPIE is a Python-based modular framework for Reinforcement Learning and Human-AI interaction experiments.
Reinforcement learning offers a general approach for modeling and training AI agents, including in human-AI interaction scenarios. SHARPIE addresses the need for a generic framework to support experiments with RL agents and humans. Its modular design consists of a versatile wrapper around RL environments and algorithm libraries, a participant-facing web interface, logging utilities, and deployment on popular cloud and participant-recruitment platforms.
It empowers researchers to study a wide variety of research questions about the interaction between humans and RL agents, including interactive reward specification and learning, learning from human feedback, action delegation, preference elicitation, user modeling, and human-AI teaming. The platform is built around a generic interface for human-RL interaction that aims to standardize the study of RL in human contexts.
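To give a sense of the kind of interaction loop SHARPIE wraps, the sketch below shows a plain Gymnasium episode loop with a placeholder comment for human feedback. It is illustrative only and does not use SHARPIE's own API; the environment name and the feedback step are assumptions, so please refer to the documentation for the actual wrapper interface.

```python
# Illustrative sketch only: a standard Gymnasium loop of the kind SHARPIE wraps.
# This is NOT SHARPIE's API; the environment choice is a placeholder.
import gymnasium as gym

env = gym.make("CartPole-v1")
observation, info = env.reset(seed=42)

for _ in range(100):
    action = env.action_space.sample()  # stand-in for an RL agent's policy
    observation, reward, terminated, truncated, info = env.step(action)
    # In a human-in-the-loop experiment, this reward could be replaced or
    # augmented by feedback collected via a participant-facing web interface.
    if terminated or truncated:
        observation, info = env.reset()

env.close()
```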
You can find the documentation here.
This research was funded by the Hybrid Intelligence Center, a 10-year programme funded by the Dutch Ministry of Education, Culture and Science through the Netherlands Organisation for Scientific Research, Grant No: 024.004.022.
When using this project in a scientific publication, please cite:
@inproceedings{sharpiecaihu25,
  title     = {{SHARPIE}: A Modular Framework for Reinforcement Learning and Human-AI Interaction Experiments},
  author    = {Ayd{\i}n, H{\"{u}}seyin and Godin-Dubois, Kevin and Goncalves Braz, Libio and den Hengst, Floris and Baraka, Kim and {\c{C}}elikok, Mustafa Mert and Sauter, Andreas and Wang, Shihan and Oliehoek, Frans A},
  booktitle = {AAAI Bridge Program Workshop on Collaborative AI and Modeling of Humans},
  address   = {Philadelphia, Pennsylvania, USA},
  month     = feb,
  year      = {2025},
  doi       = {10.48550/arXiv.2501.19245}
}