This repository contains a full-stack RAG chatbot powered by LangChain and IONOS, with separate frontend and backend folders:
- `frontend`: A Next.js (React) application that lets users enter a page URL, select a model, and chat with an AI assistant grounded in the scraped website content.
- `backend`: A FastAPI service that:
  - Scrapes and indexes webpage text using TF-IDF for RAG (see the retrieval sketch after this list).
  - Routes chat requests to IONOS AI models, managing conversation history.
  - Exposes endpoints for initializing the RAG index, fetching chat history, and sending user messages.
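For a sense of how the TF-IDF step can work, here is a minimal retrieval sketch, assuming scikit-learn. The chunking strategy and function names are illustrative, not the repository's actual code:

```python
# Minimal TF-IDF retrieval sketch (illustrative; not the repo's actual code).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def build_index(text: str, chunk_size: int = 500, max_chunks: int = 256):
    """Split scraped page text into fixed-size chunks and fit a TF-IDF index."""
    chunks = [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
    chunks = chunks[:max_chunks]  # cap total chunks (MAX_CHUNK_COUNT)
    vectorizer = TfidfVectorizer()
    matrix = vectorizer.fit_transform(chunks)
    return chunks, vectorizer, matrix


def retrieve(query: str, chunks, vectorizer, matrix, k: int = 3):
    """Return the top-k chunks most similar to the query (RAG_K)."""
    query_vec = vectorizer.transform([query])
    scores = cosine_similarity(query_vec, matrix)[0]
    top = scores.argsort()[::-1][:k]
    return [chunks[i] for i in top]
```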
Before you begin, ensure you have:
- Node.js (v18 or above) and npm or yarn
- Python (v3.10 or above)
- pip or poetry for Python dependencies
- An IONOS API Key for language model access
Create a `.env` file in both the `frontend` and `backend` folders using the following template (you can instead create a single `.env` file at the project root if you prefer):
```env
# Frontend (Next.js)
NEXT_PUBLIC_APP_BASE_URL=http://localhost:8000  # URL of the backend API

# Shared / Backend
IONOS_API_KEY=your_ionos_api_key_here           # IONOS AI Model Hub key
RAG_K=3                                         # top-k RAG chunks to retrieve (default: 3)
CHUNK_SIZE=500                                  # chars per chunk (default: 500)
MAX_CHUNK_COUNT=256                             # maximum number of chunks (default: 256)
```
- `NEXT_PUBLIC_APP_BASE_URL`: URL where your backend is running, used by the frontend.
- `IONOS_API_KEY`: Your secret key for accessing the IONOS AI Model Hub (required by the backend).
- `RAG_K`: Number of top chunks to retrieve for context.
- `CHUNK_SIZE`: Maximum characters per chunk when splitting scraped text.
- `MAX_CHUNK_COUNT`: Cap on the total number of chunks to index.
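As a reference for how these values plug in, here is a minimal sketch of loading them on the backend, assuming python-dotenv (the repository's actual settings code may differ):

```python
# Sketch of reading the backend settings, assuming python-dotenv is installed.
import os

from dotenv import load_dotenv

load_dotenv()  # reads the .env file in the current working directory

IONOS_API_KEY = os.environ["IONOS_API_KEY"]       # required, no default
RAG_K = int(os.getenv("RAG_K", "3"))              # top-k chunks to retrieve
CHUNK_SIZE = int(os.getenv("CHUNK_SIZE", "500"))  # chars per chunk
MAX_CHUNK_COUNT = int(os.getenv("MAX_CHUNK_COUNT", "256"))  # chunk cap
```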
- Navigate to the `backend` folder: `cd backend`
- Install dependencies: `pip install -r requirements.txt`
- Run the FastAPI server locally: `python main.py`

The backend will be available at http://localhost:8000.
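The endpoints mentioned earlier (index initialization, chat history, user messages) typically take a shape like the skeleton below. This is illustrative only; the route paths and payload models are assumptions, not this repository's exact API:

```python
# Illustrative FastAPI skeleton; route names and payloads are assumptions.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()


class InitRequest(BaseModel):
    url: str  # page to scrape and index


class ChatRequest(BaseModel):
    message: str
    model: str  # IONOS model identifier selected in the frontend


@app.post("/init")
def init_rag(req: InitRequest):
    # scrape req.url, chunk the text, and build the TF-IDF index here
    return {"status": "indexed"}


@app.get("/history")
def get_history():
    # return the stored conversation history
    return {"messages": []}


@app.post("/chat")
def chat(req: ChatRequest):
    # retrieve top-k chunks, prepend them as context, call the IONOS model
    return {"answer": "..."}
```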
- Navigate to the `frontend` folder: `cd frontend`
- Install dependencies: `npm install` (or `yarn install`)
- Start the development server: `npm run dev` (or `yarn dev`)

The frontend will be available at http://localhost:3000.
- Open your browser at http://localhost:3000.
- Enter a page URL to scrape and wait for RAG initialization.
- Select an AI model from the dropdown.
- Start chatting: messages are sent to the backend, enriched with the top-k retrieved chunks, and answered by your chosen model (see the prompt sketch after this list).
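Behind the scenes, the context enrichment amounts to assembling a prompt from the retrieved chunks before calling the model. A minimal sketch, with an assumed prompt template and helper name:

```python
# Sketch of enriching a user message with retrieved context.
# The prompt template and function name are illustrative assumptions.
def build_prompt(question: str, context_chunks: list[str]) -> str:
    context = "\n\n".join(context_chunks)  # top-k chunks from the TF-IDF index
    return (
        "Answer the question using only the website excerpts below.\n\n"
        f"Excerpts:\n{context}\n\n"
        f"Question: {question}"
    )
```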
```
├── frontend            # Next.js React app
│   ├── .env            # Frontend environment config
│   ├── app/
│   ├── components/
│   ├── public/
│   ├── lib/
│   └── package.json
└── backend             # FastAPI service
    ├── .env            # Backend environment config
    ├── main.py         # FastAPI entrypoint
    ├── requirements.txt
    └── other modules…
```
This project is released under the MIT License. Feel free to use and modify it in your own applications.