# Minimal AI Chat - made with Resilient LLM

A simple chat interface demonstrating [`ResilientLLM`](https://github.com/gitcommitshow/resilient-llm) usage.

## Features

This project can act as a boilerplate or template to kickstart your AI chatbot project.

- Real-time chat with AI (feel free to change the LLM providers and models in `server/app.js`)
- Markdown rendering for assistant responses
- Minimal, clean UI design
- Lightweight and easy to extend thanks to minimal dependencies: a vanilla HTML/JS frontend served by a simple Express API server

## Project Structure

```
server/              # Backend files
└── app.js           # Express server with ResilientLLM
client/              # Frontend files
├── index.html       # Main HTML file (shows key integration functions)
├── styles.css       # Styling
├── api.js           # API integration with the Express backend
├── messages.js      # Message display and management
└── ui.js            # UI components and interactions
```

## Quick Start

### 1. Clone and Setup

```bash
git clone https://github.com/gitcommitshow/resilient-llm
cd resilient-llm/examples/chat-basic
```

### 2. Install Dependencies

```bash
npm install
```

### 3. Set Environment Variables

Set your API key and choose the default LLM service and model:

```bash
# OpenAI
export OPENAI_API_KEY=your_key_here
export AI_SERVICE=openai
export AI_MODEL=gpt-4o-mini

# Or Anthropic
export ANTHROPIC_API_KEY=your_key_here
export AI_SERVICE=anthropic
export AI_MODEL=claude-3-5-sonnet-20240620

# Or Gemini
export GEMINI_API_KEY=your_key_here
export AI_SERVICE=gemini
export AI_MODEL=gemini-2.0-flash
```

### 4. Start the Server

```bash
npm run dev
```

The server will start on `http://localhost:3000` and automatically serve the client files.

### 5. Open in Browser

Navigate to **`http://localhost:3000`** in your browser.

<details>
<summary><strong>Want to preview in the VSCode/Cursor editor directly?</strong></summary>

- Install the [Live Preview extension](https://marketplace.cursorapi.com/items/?itemName=ms-vscode.live-server)
- Right-click on `client/index.html` → **"Show Preview"**

**Note:** The server must be running for the preview to work, as it serves the client files and handles API requests.

</details>

## How It Works

The example uses `ResilientLLM` in `server/app.js` as follows:

**1. Initialize ResilientLLM:**
```javascript
const llm = new ResilientLLM({
  aiService: process.env.AI_SERVICE || 'openai',
  model: process.env.AI_MODEL || 'gpt-4o-mini',
  maxTokens: 2048,
  temperature: 0.7,
  rateLimitConfig: {
    requestsPerMinute: 60,
    llmTokensPerMinute: 90000
  },
  retries: 3,
  backoffFactor: 2
});
```

**2. Use it in your API endpoint:**
```javascript
app.post('/api/chat', async (req, res) => {
  const { conversationHistory } = req.body;
  const response = await llm.chat(conversationHistory);
  res.json({ response, success: true });
});
```

**3. Send chat history from the frontend to get the AI response:**
```http
POST /api/chat
Content-Type: application/json

{
  "conversationHistory": [
    { "role": "system", "content": "You are a helpful assistant." },
    { "role": "user", "content": "Hello!" },
    { "role": "assistant", "content": "Hi there!" },
    { "role": "user", "content": "What is JavaScript?" }
  ]
}
```
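
For reference, here is a minimal sketch of how the frontend could make this request with `fetch`. The `sendChat` name is illustrative; the actual integration lives in `client/api.js`.

```javascript
// Illustrative sketch only - see client/api.js for the real integration code.
async function sendChat(conversationHistory) {
  const res = await fetch('/api/chat', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ conversationHistory })
  });
  if (!res.ok) throw new Error(`Chat request failed with status ${res.status}`);
  const data = await res.json();
  return data.response; // the assistant's reply returned by the server
}
```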

That's it! ResilientLLM returns the LLM response while automatically handling rate limiting, retries, circuit breaking, and the different types of errors thrown by LLM providers.
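
If you want the endpoint to report failures to the client once those retries are exhausted, here is a sketch of wrapping the call in a `try/catch` (an illustrative extension, not the example's exact code):

```javascript
// Sketch: same endpoint as above, with basic error reporting added.
app.post('/api/chat', async (req, res) => {
  try {
    const { conversationHistory } = req.body;
    const response = await llm.chat(conversationHistory);
    res.json({ response, success: true });
  } catch (error) {
    // Reached when ResilientLLM gives up (e.g. after its retries are exhausted)
    res.status(500).json({ success: false, error: error.message });
  }
});
```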

**Explore further:** Check `server/app.js` to see all configuration options and customize behavior via environment variables or direct code changes.

## Troubleshooting

- Make sure the server is running: `npm run dev` (you can also test the API directly with `curl`; see the check below)
- Ensure your API key is set in the environment. When in doubt, pair the variable with the start command, e.g. `OPENAI_API_KEY=your_api_key npm run dev`.
- Verify you're using the correct service name (`openai`, `anthropic`, or `gemini`)
- Ensure you're using the correct model name
- Check that the API key is valid and has the correct permissions
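
To rule out a frontend issue, you can hit the API directly (assuming the server is running locally on port 3000, as in the Quick Start):

```bash
curl -X POST http://localhost:3000/api/chat \
  -H "Content-Type: application/json" \
  -d '{"conversationHistory":[{"role":"user","content":"Hello!"}]}'
```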

🐞 Discovered a bug? [Create an issue](https://github.com/gitcommitshow/resilient-llm/issues/new)

## License

MIT License