An Express proxy that hosts the LifeMD multi-agent workflow. It exposes HTTP endpoints that forward user text or voice prompts to an OpenAI-powered agent, which can route health-related questions to Model Context Protocol (MCP) tools for women's health, weight management, and mental health advice.
## Features

- LifeMD agent built on `@openai/agents` and wired up with three MCP tools (`womens_health_tool`, `weight_management_tool`, `mental_health_tool`).
- REST endpoint (`POST /api/ai`) for direct text prompts.
- Voice endpoint (`POST /api/voice`) that accepts an audio file, transcribes it with OpenAI (`gpt-4o-mini-transcribe`), and funnels the transcript to the agent.
- TypeScript source with a build step that emits ESM JavaScript to `dist/`.
## Prerequisites

- Node.js 20+
- npm 10+
- An OpenAI API key with access to the GPT-4o Mini chat and transcription models
## Environment variables

Create a `.env` file or export the variables before launching the server.

| Variable | Description |
|---|---|
| `OPENAI_API_KEY` | Required. Used by both the HTTP proxy and the MCP tools/agent. |
| `PORT` | Optional. Defaults to `8080`. |
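A minimal `.env` for local development could look like the following; the key value is a placeholder, and `PORT` may be omitted to accept the default:

```dotenv
# Required: used by the HTTP proxy and the MCP tools/agent
OPENAI_API_KEY=sk-...
# Optional: defaults to 8080 when unset
PORT=8080
```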
## Scripts

```bash
npm install     # install dependencies
npm run dev     # start the TS server with ts-node (hot reload)
npm run build   # compile TypeScript to dist/
npm start       # run the compiled server (build runs automatically first)
```

## API

### `POST /api/ai`

- Body: `{ "message": "string" }`
- Response: `{ "answer": "string" }`
- Errors: `400` when `message` is missing or empty, `500` for unexpected agent issues.
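The `400` behavior above comes down to a guard on the request body. The sketch below is illustrative only — `validateMessage` is a hypothetical name, not necessarily the helper that lives in `src/utils/validation.ts`:

```typescript
// Hedged sketch of the /api/ai body check. A failed result would map to an
// HTTP 400 response; a passing one hands the trimmed message to the agent.
export function validateMessage(
  body: unknown,
): { ok: true; message: string } | { ok: false; error: string } {
  const message = (body as { message?: unknown } | null)?.message;
  if (typeof message !== 'string' || message.trim().length === 0) {
    return { ok: false, error: 'message is required' }; // -> 400
  }
  return { ok: true, message: message.trim() };
}
```

In an Express handler, the failing branch would translate to something like `res.status(400).json({ error })` before the agent is ever invoked.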
### `POST /api/voice`

- Content-Type: `multipart/form-data`
- Field: `audio` (single file, e.g. `.webm`, `.wav`)
- Response: `{ "transcript": "text", "answer": "string", "audio": { "mimeType": "audio/mpeg", "base64": "..." } }`
- `audio.base64` can be converted into a data URL on the frontend (`data:${mimeType};base64,...`) for immediate playback.
- Errors: `400` when the file is missing, `500` if transcription or the agent call fails.
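The data-URL conversion mentioned above is a one-liner on the frontend. A minimal sketch, with `VoiceAudio` mirroring the documented response shape and `toDataUrl` being a hypothetical helper name:

```typescript
// Shape of the `audio` object in the /api/voice response.
interface VoiceAudio {
  mimeType: string; // e.g. "audio/mpeg"
  base64: string;   // base64-encoded audio bytes
}

// Builds a data URL suitable for immediate playback, e.g. as an <audio> src.
export function toDataUrl(audio: VoiceAudio): string {
  return `data:${audio.mimeType};base64,${audio.base64}`;
}
```

On the page, the result can be assigned directly: `new Audio(toDataUrl(response.audio)).play()`.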
## Implementation notes

- Source entry point: `src/index.ts`; build target: `dist/index.js`.
- MCP server entry: `src/mcp/index.ts` (compiled to `dist/mcp/index.js`). The agent dynamically picks the compiled version when present and otherwise falls back to ts-node.
- The `/api/voice` endpoint depends on `multer` for in-memory uploads and on the OpenAI SDK's `audio.transcriptions.create` together with its `toFile` helper.
## Project structure

- `src/index.ts` – bootstraps Express, mounts shared middleware, and wires the modular routers.
- `src/routes/` – houses one router per HTTP surface (`ai.ts` for text prompts, `voice.ts` for uploads). Each router handles its own middleware stack (e.g., Multer for voice) and uses shared helpers.
- `src/utils/` – reusable HTTP helpers (`http.ts`) and validation logic (`validation.ts`) that enforce consistent error messaging and request limits.
- `src/config.ts` / `src/errors.ts` – central place for environment-driven settings and HTTP-friendly error types.
- `src/mcp/` – contains the MCP server exposed to the agent. `openai.ts` wires the shared SDK client, `tools/` holds one file per MCP tool plus a `createAdviceTool` factory, and `index.ts` simply registers everything.
- `src/agent/` – mirrors the same idea for the agent runtime. `openai.ts` validates the API key and sets the default key for `@openai/agents`, `mcpServer.ts` handles the child-process lifecycle and shutdown hooks, `lifeMdAgent.ts` defines the agent persona, and `index.ts` exports `runLifeMdAgent` for the HTTP layer.
- When adding a new tool or agent behavior, drop a new file in the respective folder and export it through the local `index.ts` barrel to keep each concern isolated.
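To illustrate the factory pattern behind the three tools, here is a hedged sketch of what a `createAdviceTool`-style helper could look like. The `ToolDefinition` shape and the exact signature are assumptions for illustration; the real factory in `src/mcp/tools/` registers against the MCP SDK and may differ:

```typescript
// Assumed shape of a tool produced by the factory (not the real MCP type).
interface ToolDefinition {
  name: string;
  description: string;
  handler: (question: string) => Promise<string>;
}

// Hypothetical factory: each tool file supplies a name, a topic for the
// description, and an advice function, then re-exports the result through
// the tools/ barrel so index.ts can register everything in one pass.
export function createAdviceTool(
  name: string,
  topic: string,
  advise: (question: string) => Promise<string>,
): ToolDefinition {
  return {
    name,
    description: `Answers ${topic} questions routed by the LifeMD agent.`,
    handler: advise,
  };
}
```

A tool file such as the women's health one would then reduce to a single `createAdviceTool('womens_health_tool', ...)` call plus its advice function.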
## Quick smoke test

- Ensure `OPENAI_API_KEY` is set (`export OPENAI_API_KEY=...`).
- Run `npm run dev` and open another terminal.
- Test the text flow:

  ```bash
  curl -X POST http://localhost:8080/api/ai \
    -H 'Content-Type: application/json' \
    -d '{"message":"I have a headache"}'
  ```

- Test the voice flow (replace `clip.webm` with your file):

  ```bash
  curl -X POST http://localhost:8080/api/voice \
    -F 'audio=@clip.webm'
  ```