A simple Retrieval-Augmented Generation (RAG) question-answering system built in Python with a Streamlit web interface. It retrieves relevant context from a document store using FAISS and generates answers grounded in that context with a Hugging Face language model.
## Features

- Load your documents.
- Retrieve relevant context for each query.
- Generate answers using a connected LLM.
- Fast similarity search with a FAISS vector store.
- Easy to customize for different datasets.
- User-friendly Streamlit web interface.
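The retrieval step above can be sketched without any frameworks. This is a toy bag-of-words similarity search standing in for FAISS dense-vector search; every name and document here is illustrative, not part of the actual codebase:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; the real app stores dense vectors in FAISS.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Rank documents by similarity to the query and keep the top k.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "FAISS is a library for efficient similarity search over dense vectors.",
    "Streamlit turns Python scripts into shareable web apps.",
    "The capital of France is Paris.",
]
context = retrieve("How does similarity search work?", docs, k=1)
```

In the real pipeline the retrieved context is then stuffed into the LLM prompt; FAISS replaces the brute-force loop with an index (e.g. a flat L2 index) that scales to large document sets.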
## Tech Stack

- Python
- Streamlit for the web app
- FAISS for vector search
- LangChain for orchestration
- Hugging Face API for LLM integration
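LangChain's role in this stack is orchestration: chaining retrieval into generation. The control flow reduces to a retrieve-then-generate function, sketched here with stand-in callables for the FAISS retriever and the Hugging Face LLM (all names hypothetical, not the project's actual API):

```python
from typing import Callable

def answer(question: str,
           retriever: Callable[[str], list[str]],
           llm: Callable[[str], str]) -> str:
    # RAG orchestration: retrieve context, build a prompt, generate an answer.
    context = "\n".join(retriever(question))
    prompt = (
        "Use the context to answer the question.\n"
        f"Context:\n{context}\n"
        f"Question: {question}\nAnswer:"
    )
    return llm(prompt)

# Stand-ins for the FAISS-backed retriever and the Hugging Face model:
fake_retriever = lambda q: ["FAISS finds the nearest stored vectors to the query vector."]
fake_llm = lambda prompt: "It searches for the nearest stored vectors."
```

Swapping the stand-ins for a LangChain FAISS retriever and a Hugging Face endpoint gives the production pipeline, with Streamlit supplying the question and displaying the answer.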
## License

MIT License