AI-powered language practice app with multi-provider LLM backend (OpenRouter, Ollama, Runware, Fal.ai).
- Create `.env` in the project root:

  ```env
  PORT=3000
  PROVIDER=openrouter # or ollama

  # OpenRouter
  OPENROUTER_API_KEY=your_key
  OPENROUTER_MODEL=anthropic/claude-3.5-sonnet
  APP_URL=http://localhost:5173

  # Ollama (local)
  OLLAMA_HOST=http://127.0.0.1:11434
  OLLAMA_MODEL=qwen2.5:14b
  ```
- Install and run:

  ```sh
  npm i
  npm run dev
  ```
- Frontend: http://localhost:5173
- Backend: http://localhost:3000
- Build and run in production:

  ```sh
  npm run build
  npm start
  ```

  This serves the built frontend from the backend on port 3000 (see the Express sketch below).
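How that serving step typically looks under the hood, as a minimal Express sketch (the `dist` path and file layout here are assumptions, not this repo's confirmed code):

```js
// Illustrative sketch of the static-serving pattern; paths are assumptions.
import express from 'express';
import path from 'node:path';

const app = express();
const dist = path.resolve('dist'); // assumed location of the built frontend

app.use(express.static(dist)); // serve hashed JS/CSS/asset files
// Fall back to index.html so client-side routes survive a hard refresh.
app.use((req, res) => res.sendFile(path.join(dist, 'index.html')));

app.listen(process.env.PORT || 3000);
```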
Build and run with Docker:

```sh
docker build -t language-ai-app .
docker run --rm -p 3000:3000 \
  -e NODE_ENV=production \
  -e PROVIDER=openrouter \
  -e OPENROUTER_API_KEY=your_key \
  language-ai-app
```
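For orientation, a Dockerfile for this kind of Node app commonly follows a two-stage pattern roughly like the sketch below (illustrative; the repo's own Dockerfile is authoritative and may differ):

```dockerfile
# Sketch only -- the repo's real Dockerfile may differ.
FROM node:20-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build           # build the frontend into the image

FROM node:20-alpine
WORKDIR /app
ENV NODE_ENV=production
COPY --from=build /app ./
EXPOSE 3000
CMD ["npm", "start"]        # serves frontend + API on port 3000
```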
Or with docker-compose (reads env from your shell):

```sh
docker compose up --build
```
For Ollama, ensure the Ollama server is running on your host (default 127.0.0.1:11434). With compose on macOS, the container reaches it via host.docker.internal.
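A sketch of the compose wiring this implies (illustrative; the service name and the exact variable list are assumptions, so check the repo's `docker-compose.yml`):

```yaml
# Illustrative only -- the repo's actual compose file may differ.
services:
  app:
    build: .
    ports:
      - "3000:3000"
    environment:
      - NODE_ENV=production
      - PROVIDER=${PROVIDER}                # passed through from your shell
      - OPENROUTER_API_KEY=${OPENROUTER_API_KEY}
      - OLLAMA_HOST=http://host.docker.internal:11434
    extra_hosts:
      - "host.docker.internal:host-gateway" # also resolves on Linux hosts
    volumes:
      - ${CACHE_HOST_DIR:-./data}:/data     # persistent cache, see below
```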
- The app persists cache and analytics when run in Docker.
- A volume is mounted at `/data` inside the container (configurable with `CACHE_DIR`, default `/data`).
- Configure the host path via `CACHE_HOST_DIR` in your `.env`. The deploy script ensures the directory exists on the remote host.
- See `docs/persistent-caching-plan.md` for the full design (schema versions in `shared/schemaVersions.js`, LRU, image persistence for Cloze, etc.).
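In code, the `CACHE_DIR` fallback described above amounts to something like this minimal sketch (illustrative; the app's real startup logic and directory layout may differ):

```js
// Illustrative sketch -- see docs/persistent-caching-plan.md for the
// actual design of what lives under the cache directory.
import fs from 'node:fs';

// CACHE_DIR defaults to /data, where the Docker volume is mounted.
const cacheDir = process.env.CACHE_DIR || '/data';
fs.mkdirSync(cacheDir, { recursive: true });
```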
Set `PROVIDER` to one of:

- `openrouter`
- `ollama`

Optionally override the default model via the corresponding `*_MODEL` env vars.
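To make the selection concrete, here is a minimal sketch of how a backend can dispatch on `PROVIDER` and apply the `*_MODEL` overrides. It is illustrative only, not this repo's actual code: the function names and default models are assumptions (defaults mirror the sample `.env` above), though the two URLs are the services' documented chat endpoints.

```js
// Illustrative sketch (Node 18+, global fetch); not this repo's actual code.

async function chatOpenRouter(messages) {
  // OpenRouter exposes an OpenAI-compatible chat completions API.
  const res = await fetch('https://openrouter.ai/api/v1/chat/completions', {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${process.env.OPENROUTER_API_KEY}`,
      'HTTP-Referer': process.env.APP_URL, // optional attribution header
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      model: process.env.OPENROUTER_MODEL || 'anthropic/claude-3.5-sonnet',
      messages,
    }),
  });
  return (await res.json()).choices[0].message.content;
}

async function chatOllama(messages) {
  // Ollama's native chat endpoint.
  const host = process.env.OLLAMA_HOST || 'http://127.0.0.1:11434';
  const res = await fetch(`${host}/api/chat`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      model: process.env.OLLAMA_MODEL || 'qwen2.5:14b',
      messages,
      stream: false, // return one JSON object instead of a stream
    }),
  });
  return (await res.json()).message.content;
}

const chat = process.env.PROVIDER === 'ollama' ? chatOllama : chatOpenRouter;
```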
