
Commit 8baf68a

feat: add a simple chat demo using resilient llm package and express (#21)
1 parent d6d6c77 commit 8baf68a

15 files changed: +2322 −0 lines changed

.vscode/settings.json

Lines changed: 3 additions & 0 deletions
@@ -0,0 +1,3 @@
{
  "livePreview.defaultPreviewPath": "/examples/chat-basic/index.html"
}
Lines changed: 17 additions & 0 deletions
@@ -0,0 +1,17 @@
{
  "version": "0.2.0",
  "configurations": [
    {
      "type": "node",
      "request": "launch",
      "name": "Launch Server",
      "runtimeExecutable": "npm",
      "runtimeArgs": ["run", "dev"],
      "cwd": "${workspaceFolder}/examples/chat-basic",
      "console": "integratedTerminal",
      "internalConsoleOptions": "neverOpen",
      "skipFiles": ["<node_internals>/**"]
    }
  ]
}
Lines changed: 8 additions & 0 deletions
@@ -0,0 +1,8 @@
{
  "files.associations": {
    "*.html": "html"
  },
  "liveServer.settings.port": 8080,
  "liveServer.settings.CustomBrowser": "default"
}
Lines changed: 31 additions & 0 deletions
@@ -0,0 +1,31 @@
{
  "version": "2.0.0",
  "tasks": [
    {
      "label": "Start Dev Server",
      "type": "shell",
      "command": "npm run dev",
      "problemMatcher": [],
      "isBackground": true,
      "presentation": {
        "reveal": "always",
        "panel": "new"
      },
      "runOptions": {
        "runOn": "default"
      }
    },
    {
      "label": "Start Production Server",
      "type": "shell",
      "command": "npm start",
      "problemMatcher": [],
      "isBackground": true,
      "presentation": {
        "reveal": "always",
        "panel": "new"
      }
    }
  ]
}

examples/chat-basic/README.md

Lines changed: 147 additions & 0 deletions
@@ -0,0 +1,147 @@
# Minimal AI Chat - made with Resilient LLM

A simple chat interface demonstrating [`ResilientLLM`](https://github.com/gitcommitshow/resilient-llm) usage

![Demo Screenshot](./demo.jpg)

## Features

This project can act as a boilerplate or template to kickstart your AI chatbot project.

- Real-time chat with AI (feel free to change the LLM providers and models in `server/app.js`)
- Markdown rendering for assistant responses
- Minimal, clean UI design
- Lightweight and easy to extend thanks to the minimal dependencies: a vanilla HTML/JS frontend serving AI responses via a simple Express API server

## Project Structure

```
server/           --Backend files--
└── app.js        # Express server with ResilientLLM
client/           --Frontend files--
├── index.html    # Main HTML file (shows key integration functions)
├── styles.css    # Styling
├── api.js        # API integration with the Express API backend
├── messages.js   # Message display and management
└── ui.js         # UI components and interactions
```

## Quick Start

### 1. Clone and Setup

```bash
git clone https://github.com/gitcommitshow/resilient-llm
cd resilient-llm/examples/chat-basic
```

### 2. Install Dependencies

```bash
npm install
```

### 3. Set Environment Variables

Set your API key and choose the default LLM service and model:

```bash
# OpenAI
export OPENAI_API_KEY=your_key_here
export AI_SERVICE=openai
export AI_MODEL=gpt-4o-mini

# Or Anthropic
export ANTHROPIC_API_KEY=your_key_here
export AI_SERVICE=anthropic
export AI_MODEL=claude-3-5-sonnet-20240620

# Or Gemini
export GEMINI_API_KEY=your_key_here
export AI_SERVICE=gemini
export AI_MODEL=gemini-2.0-flash
```

### 4. Start the Server

```bash
npm run dev
```

The server will start on `http://localhost:3000` and automatically serve the client files.
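
The full `server/app.js` is not reproduced here; as a rough sketch of the serving side, assuming Express's static middleware is used for the client files (the paths, port fallback, and module style are illustrative assumptions; see `server/app.js` for the real code):

```javascript
// Hypothetical sketch: one way the server could serve the client files.
// The actual implementation lives in server/app.js; the paths, port
// fallback, and ESM style here are assumptions for illustration only.
import express from 'express';
import path from 'path';
import { fileURLToPath } from 'url';

const __dirname = path.dirname(fileURLToPath(import.meta.url));
const app = express();

app.use(express.json());                                    // parse JSON chat requests
app.use(express.static(path.join(__dirname, '../client'))); // serve index.html, api.js, styles.css, ...

// ... ResilientLLM setup and the /api/chat route go here (see "How It Works" below) ...

app.listen(process.env.PORT || 3000, () => {
  console.log('Chat demo running on http://localhost:3000');
});
```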

### 5. Open in Browser

Navigate to **`http://localhost:3000`** in your browser.

<details>
<summary><strong>Want to preview in the VSCode/Cursor editor directly?</strong></summary>

- Install the [Live Preview extension](https://marketplace.cursorapi.com/items/?itemName=ms-vscode.live-server)
- Right-click on `client/index.html` and select **"Show Preview"**

**Note:** The server must be running for the preview to work, as it serves the client files and handles API requests.

</details>

## How It Works

Uses `ResilientLLM` in `server/app.js` as follows:

**1. Initialize ResilientLLM:**
```javascript
const llm = new ResilientLLM({
  aiService: process.env.AI_SERVICE || 'openai',
  model: process.env.AI_MODEL || 'gpt-4o-mini',
  maxTokens: 2048,
  temperature: 0.7,
  rateLimitConfig: {
    requestsPerMinute: 60,
    llmTokensPerMinute: 90000
  },
  retries: 3,
  backoffFactor: 2
});
```

**2. Use it in your API endpoint:**
```javascript
app.post('/api/chat', async (req, res) => {
  const { conversationHistory } = req.body;
  const response = await llm.chat(conversationHistory);
  res.json({ response, success: true });
});
```

**3. Send chat history from the frontend to get the AI response:**
```REST
POST /api/chat
Content-Type: application/json

{
  "conversationHistory": [
    { "role": "system", "content": "You are a helpful assistant." },
    { "role": "user", "content": "Hello!" },
    { "role": "assistant", "content": "Hi there!" },
    { "role": "user", "content": "What is JavaScript?" }
  ]
}
```
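
A successful reply from this endpoint carries the shape produced by the route in step 2 above (the response text here is illustrative):

```json
{
  "response": "JavaScript is a programming language used to...",
  "success": true
}
```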

That's it! ResilientLLM returns the LLM response while automatically handling rate limiting, retries, circuit breaking, and the different types of errors thrown by LLM providers.
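
If you want the endpoint to report failures in the `{ success: false, error }` shape that the example client (`client/api.js`) checks for, a minimal, hypothetical variation of the route from step 2 is:

```javascript
// Hypothetical variation of the /api/chat route with explicit error reporting.
// The route shown above lets errors propagate; this sketch returns the
// { success: false, error } JSON shape that client/api.js looks for.
app.post('/api/chat', async (req, res) => {
  try {
    const { conversationHistory } = req.body;
    const response = await llm.chat(conversationHistory);
    res.json({ response, success: true });
  } catch (error) {
    // Reached only when ResilientLLM could not recover (e.g. after its retries)
    res.status(500).json({ success: false, error: error.message });
  }
});
```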

**Explore further:** Check `server/app.js` to see all configuration options and customize behavior via environment variables or direct code changes.

## Troubleshooting

- Make sure the server is running: `npm run dev`
- Ensure your API key is set in environment variables. When in doubt, pair the environment variable with the server start command, e.g. `OPENAI_API_KEY=your_api_key npm run dev`.
- Verify you're using the correct service name (openai, anthropic, or gemini)
- Ensure that you're using the correct model name
- Check that the API key is valid and has the correct permissions

🐞 Discovered a bug? [Create an issue](https://github.com/gitcommitshow/resilient-llm/issues/new)

## License

MIT License

examples/chat-basic/client/api.js

Lines changed: 70 additions & 0 deletions
@@ -0,0 +1,70 @@
// API Integration with ResilientLLM
// This file handles communication with the ResilientLLM backend

const API_URL = 'http://localhost:3000/api/chat';

/**
 * Build conversation history from messages array
 * Formats messages for ResilientLLM API
 */
function buildConversationHistory(messages) {
  const history = [];
  // Add system message if this is the first user message
  if (messages.length === 0 || (messages.length === 1 && messages[0].role === 'user')) {
    history.push({
      role: 'system',
      content: 'You are a helpful AI assistant powered by ResilientLLM.'
    });
  }
  // Add all messages to history
  messages.forEach(msg => {
    history.push({
      role: msg.role,
      content: msg.text
    });
  });
  return history;
}

/**
 * Call the backend API to get LLM response
 *
 * ResilientLLM handles all the complexity automatically:
 * - Rate limiting (requests per minute, tokens per minute)
 * - Automatic retries with exponential backoff
 * - Circuit breaker for service resilience
 * - Token estimation
 * - Error handling and recovery
 *
 * @param {Array} conversationHistory - Array of messages with role and content
 * @returns {Promise<string>} - The AI response text
 */
async function getAIResponse(conversationHistory) {
  try {
    const response = await fetch(API_URL, {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json'
      },
      body: JSON.stringify({
        conversationHistory: conversationHistory
      })
    });

    if (!response.ok) {
      const errorData = await response.json().catch(() => ({}));
      throw new Error(errorData.error || `HTTP error! status: ${response.status}`);
    }

    const data = await response.json();
    if (data.success && data.response) {
      return data.response;
    } else {
      throw new Error(data.error || 'No response from server');
    }
  } catch (error) {
    console.error('Error calling API:', error);
    throw error;
  }
}
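
For orientation, here is a small, hypothetical sketch of how the rest of the client might combine these two helpers; the real wiring lives in `client/messages.js` and `client/ui.js` (not shown in this excerpt), and the `{ role, text }` message shape follows what `buildConversationHistory()` expects:

```javascript
// Hypothetical usage sketch; the actual client logic is in messages.js / ui.js.
// `messages` holds prior turns as { role, text } objects, matching
// what buildConversationHistory() above expects.
async function handleSend(messages, userText) {
  messages.push({ role: 'user', text: userText });
  const history = buildConversationHistory(messages);
  try {
    const reply = await getAIResponse(history); // POST /api/chat via fetch
    messages.push({ role: 'assistant', text: reply });
  } catch (err) {
    // Surface the failure in the UI however you prefer
    console.error('Chat request failed:', err.message);
  }
  return messages;
}
```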
