This repository contains a backend service that takes code or a script as input, generates an AI narration (TTS), converts the script into a tutorial-style scrolling video using FFmpeg and Canvas, and exposes the resulting media for a frontend to display.
This README explains the project structure, how the parts connect, how to run the server, and suggested improvements for production use.
## Repository layout

- `src/` - Main server code and helpers
  - `server.ts` - Express server. Serves `src/output` statically and mounts the API routes under `/api`.
  - `cli.ts` - Command-line interface for local usage (analyze a file and optionally generate voice/video).
  - `codeAnalyzer.ts` - Uses OpenAI to convert code into segmented narration or tutorial text.
  - `textToSpeech.ts` - Uses OpenAI TTS to generate `.mp3` voiceover files.
  - `videoGenerator.ts` - Creates slides or a scrolling image of the script and uses `ffmpeg` to produce a final `.mp4` synchronized to the audio.
  - `output/` - Output directory for generated `.mp3` and `.mp4` files (served at `/output` by the server).
  - `slides/` - Temporary slide assets used during video generation.
  - `types.ts`, `config.ts`, etc. - Small helpers and types.
- `weaveit-generator/` - Canonical API route implementations
  - `generateRoute.ts` - POST `/api/generate`: accepts `{ script, walletAddress, prompt? }`, auto-generates a title, enhances the script with `codeAnalyzer`, produces audio and video, and returns `{ jobId, title, status }`.
  - `videosStatusRoute.ts` - GET `/api/videos/status/:id`: checks whether `/src/output/:id.mp4` or `/src/output/:id.mp3` exists and returns a JSON status.

Note: small re-export stubs exist in `src/` so existing imports remain compatible while the canonical implementations live under `weaveit-generator/`.
## How the pieces connect

- The frontend POSTs to `/api/generate` with a JSON body containing the script text, wallet address, and an optional prompt.
- `weaveit-generator/generateRoute.ts` receives the request and (see the pipeline sketch after this list):
  - Auto-generates a descriptive title based on the script content using AI.
  - Uses the optional `prompt` field to customize the AI explanation (or falls back to the default scrolling tutorial format if not provided).
  - Calls `enhanceScript` from `src/codeAnalyzer.ts` to produce a narrated explanation.
  - Calls `generateSpeech` (from `src/textToSpeech.ts`) to produce an `.mp3` voiceover.
  - Calls `generateScrollingScriptVideo` (from `src/videoGenerator.ts`) to create a scrolling video with `ffmpeg`.
  - Returns `{ jobId, title, status: "generating" }` immediately, with real-time progress via WebSockets.
- The server provides WebSocket connections at `ws://localhost:3001` for live progress updates (2%, 5%, 10%, etc.).
- Generated media is stored in the database and served via `/api/videos/job/:jobId`.
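The pipeline inside `generateRoute.ts` boils down to chaining the three helpers above. Here is a minimal sketch, assuming these signatures; the real parameter lists and import paths in `src/codeAnalyzer.ts`, `src/textToSpeech.ts`, and `src/videoGenerator.ts` may differ:

```ts
// Sketch only: import paths and signatures are assumptions, not the repo's exact API.
import { enhanceScript } from "../src/codeAnalyzer";
import { generateSpeech } from "../src/textToSpeech";
import { generateScrollingScriptVideo } from "../src/videoGenerator";

async function runGenerationJob(jobId: string, script: string, prompt?: string) {
  // 1. Turn the raw code into a narrated explanation (optionally guided by the prompt).
  const narration = await enhanceScript(script, prompt);

  // 2. Produce the .mp3 voiceover for the narration.
  const audioPath = await generateSpeech(narration, jobId);

  // 3. Render the scrolling video and sync it to the audio via ffmpeg.
  const videoPath = await generateScrollingScriptVideo(script, audioPath, jobId);

  return { audioPath, videoPath };
}
```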
## API endpoints

- POST `/api/generate`
  - Request body: `{"script": string, "walletAddress": string, "prompt"?: string}`
  - Title is automatically generated from the script content.
  - Optional `prompt` field allows custom questions about the code.
  - Asynchronous behavior: returns immediately with `jobId` and `status: "generating"`; processing happens in the background.
  - Example using `curl`:

    ```bash
    curl -X POST 'http://localhost:3001/api/generate' \
      -H 'Content-Type: application/json' \
      -d '{"script":"console.log(\"hello\")","walletAddress":"abc123","prompt":"Explain what this code does"}'
    ```

- POST `/api/generate/audio`
  - Request body: `{"script": string, "walletAddress": string, "prompt"?: string}`
  - Title is automatically generated from the script content.
  - Optional `prompt` field allows custom questions about the code.
  - Asynchronous audio-only generation.

- POST `/api/generate/narrative`
  - Request body: `{"script": string, "walletAddress": string}`
  - Title is automatically generated from the script content.
  - Asynchronous narrative video generation.

- GET `/api/videos/status/:id`
  - Returns JSON with `status`, `ready`, `error`, etc. for polling.
  - Example:

    ```bash
    curl 'http://localhost:3001/api/videos/status/<jobId>'
    ```

- GET `/api/jobs/events?jobIds=<jobId1>,<jobId2>`
  - Server-Sent Events endpoint for real-time job status updates (see the `EventSource` sketch after this list).

- POST `/api/webhooks/job-update`
  - Internal webhook endpoint for job status and progress updates (secured with an HMAC signature).

- Static media: `GET /api/videos/job/:jobId` serves the generated video file directly.
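A browser can consume the Server-Sent Events endpoint with a plain `EventSource`. This is a minimal sketch; the payload fields (`jobId`, `status`, `progress`) are assumptions about what the server emits, so check the route implementation for the actual shape:

```ts
// Hypothetical SSE consumer for /api/jobs/events; field names are assumed.
const jobId = "your-job-id-here";
const source = new EventSource(`http://localhost:3001/api/jobs/events?jobIds=${jobId}`);

source.onmessage = (event) => {
  const update = JSON.parse(event.data);
  console.log(`Job ${update.jobId}: ${update.status}`, update.progress);
  if (update.status === "completed" || update.status === "error") {
    source.close(); // stop listening once the job reaches a terminal state
  }
};
```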
## WebSocket Real-Time Updates

The server now supports WebSocket connections for real-time progress updates during video/audio generation. Connect to `ws://localhost:3001` (or your server URL) to receive live updates.

### Frontend WebSocket Integration
```js
// Connect to WebSocket
const ws = new WebSocket('ws://localhost:3001');

// Handle connection
ws.onopen = () => {
  console.log('Connected to WebSocket');
  // Subscribe to a specific job
  ws.send(JSON.stringify({
    action: 'subscribe',
    jobId: 'your-job-id-here'
  }));
};

// Handle incoming messages
ws.onmessage = (event) => {
  const data = JSON.parse(event.data);
  switch (data.type) {
    case 'connected':
      console.log('WebSocket connected:', data.message);
      break;
    case 'subscribed':
      console.log('Subscribed to job:', data.jobId);
      break;
    case 'progress':
      console.log(`Job ${data.jobId}: ${data.progress}% - ${data.message}`);
      // Update UI progress bar
      updateProgress(data.progress, data.status, data.message);
      break;
    case 'completed':
      console.log(`Job ${data.jobId} completed!`);
      console.log('Video ID:', data.videoId, 'Duration:', data.duration);
      // Show completion UI
      showCompletion(data.videoId, data.duration);
      break;
    case 'error':
      console.error(`Job ${data.jobId} failed:`, data.error);
      // Show error UI
      showError(data.error);
      break;
  }
};

// Handle errors
ws.onerror = (error) => {
  console.error('WebSocket error:', error);
};

// Handle disconnection
ws.onclose = () => {
  console.log('WebSocket disconnected');
};

// Helper functions
function updateProgress(progress, status, message) {
  // Update your progress bar and status text
  document.getElementById('progress-bar').style.width = progress + '%';
  document.getElementById('status-text').textContent = message;
}

function showCompletion(videoId, duration) {
  // Show success message and video player
  document.getElementById('result').innerHTML = `
    <p>Video generated successfully! Duration: ${duration}s</p>
    <video controls src="/api/videos/job/${videoId}"></video>
  `;
}

function showError(error) {
  // Show error message
  document.getElementById('result').innerHTML = `<p>Error: ${error}</p>`;
}
```

### WebSocket Message Types
- `connected`: Connection established
- `subscribed`: Successfully subscribed to a job
- `progress`: Generation progress update with detailed percentage increments (2%, 5%, 10%, etc.), status, and a descriptive message
- `completed`: Job finished successfully, with `videoId` and `duration`
- `error`: Job failed, with an error message

### WebSocket Actions

Send JSON messages to the server:

- `{"action": "subscribe", "jobId": "job-id"}` - Subscribe to progress updates for a job
- `{"action": "unsubscribe", "jobId": "job-id"}` - Unsubscribe from a job
## Local development / prerequisites

- Node.js (compatible with the `package.json` dev dependencies). Recommended: Node 18+.
- A package manager: `pnpm`, `npm`, or `yarn`.
- `ffmpeg` installed and available on `PATH` (the video generator uses `fluent-ffmpeg`).
- An OpenAI API key present as `OPENAI_API_KEY` in a `.env` file at the project root.
Install and run locally:
```bash
# install
pnpm install

# start dev server (uses ts-node / ESM)
pnpm run dev
# or
npx ts-node-esm src/server.ts
```

## CLI

There is a small CLI for local testing:

```bash
# Analyze a file and optionally create voice/video
npx ts-node src/cli.ts analyze -f path/to/script.ts --voice --video
```

## Behavioral notes & production recommendations
- Asynchronous job processing: the API now uses webhooks for job completion. When a generation request is made:
  - A job is created in the database with status "generating".
  - A title is auto-generated from the script content.
  - An optional custom prompt can steer the AI explanation.
  - The request returns immediately with `jobId`, `title`, and `status: "generating"`.
  - Background processing generates the content, emitting detailed progress updates.
  - The frontend can use WebSockets for real-time progress or poll `/api/videos/status/:id`.
- Webhook security: webhooks are signed with HMAC-SHA256. Set `WEBHOOK_SECRET` in your environment variables (a verification sketch follows this list).
- Real-time updates: use WebSockets for instant UI updates with detailed progress (2%, 5%, 10%, etc.):

  ```js
  const ws = new WebSocket('ws://localhost:3001');
  ws.onopen = () => ws.send(JSON.stringify({ action: 'subscribe', jobId }));
  ws.onmessage = (event) => {
    const data = JSON.parse(event.data);
    if (data.type === 'progress') {
      console.log(`Progress: ${data.progress}% - ${data.message}`);
    }
  };
  ```
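To illustrate the HMAC signing, a webhook receiver could verify signatures roughly as below. This is a sketch built on assumptions: that the signature arrives as a hex digest (e.g. in an `x-webhook-signature` header) computed over the raw request body. Check the actual `/api/webhooks/job-update` handler for the real header name and payload format.

```ts
import crypto from "node:crypto";

// Hypothetical verifier; header name, encoding, and payload framing are assumptions.
function verifyWebhookSignature(rawBody: string, signatureHex: string): boolean {
  const expectedHex = crypto
    .createHmac("sha256", process.env.WEBHOOK_SECRET ?? "")
    .update(rawBody)
    .digest("hex");

  const expected = Buffer.from(expectedHex, "hex");
  const received = Buffer.from(signatureHex, "hex");
  // timingSafeEqual throws on length mismatch, so check lengths first.
  return expected.length === received.length && crypto.timingSafeEqual(expected, received);
}
```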
Recommended production improvements:
- Use a proper job queue (Bull, Redis) for better scalability (see the sketch after this list)
- Add authentication and rate-limiting
- ✅ WebSocket connections implemented for bidirectional communication
- Add job retry logic and dead letter queues
- Monitor webhook delivery and implement retry mechanisms
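If you adopt a Redis-backed queue, a rough sketch with BullMQ (not currently part of this repo; names and options below are illustrative) might look like:

```ts
import { Queue, Worker } from "bullmq";

const connection = { host: "localhost", port: 6379 };

// Producer: enqueue a generation job instead of processing it in the request handler.
export const generationQueue = new Queue("video-generation", { connection });

export async function enqueueGeneration(jobId: string, script: string, walletAddress: string, prompt?: string) {
  await generationQueue.add("generate", { jobId, script, walletAddress, prompt });
}

// Worker: pulls jobs off the queue and runs the existing generation pipeline.
new Worker(
  "video-generation",
  async (job) => {
    // call enhanceScript / generateSpeech / generateScrollingScriptVideo here
    console.log(`Processing ${job.data.jobId}`);
  },
  { connection, concurrency: 2 } // cap concurrent ffmpeg runs
);
```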
Security & Cost
- TTS uses the OpenAI API: protect your API key and avoid exposing it to clients.
- Generating videos and calling the TTS API has costs — add usage limits, quotas or billing controls.
## Troubleshooting

- If ffmpeg fails or you see errors while generating videos, confirm `ffmpeg` is installed and on your `PATH` (a quick check appears after this list).
- If TTS fails, ensure `OPENAI_API_KEY` is valid and environment variables are loaded (the server reads `.env` by default).
- Check the server logs (console output): `videoGenerator.ts` and `textToSpeech.ts` log progress and errors.
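A quick way to confirm `ffmpeg` is visible to the server process:

```bash
ffmpeg -version
```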
## Where to go next
- ✅ WebSocket-based progress events implemented with detailed granularity (2%, 5%, 10%, etc.)
- ✅ Automatic title generation from script content
- ✅ Custom prompt support for flexible AI explanations
- ✅ Conditional logging with the `VERBOSE_LOGGING` environment variable
- I can convert the synchronous generator into a job queue and make `/api/generate` return immediately with `202` plus a polling-friendly status endpoint.

If you'd like either of those, tell me which approach you prefer and I will implement it.

Generated by repository automation. Keep this README updated as code moves between `src/` and `weaveit-generator/`.
## Quick Frontend Example
This minimal example demonstrates how a frontend can POST the script, use WebSockets for real-time updates, and display the video.
```js
// POST the script
const resp = await fetch("http://localhost:3001/api/generate", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    script: "console.log('hello')",
    walletAddress: "your-wallet-address",
    prompt: "Explain what this code does" // optional
  }),
});
const { jobId, title } = await resp.json();
console.log("Generated title:", title);

// Connect to WebSocket for real-time updates
const ws = new WebSocket('ws://localhost:3001');
ws.onopen = () => {
  // Subscribe to the job
  ws.send(JSON.stringify({ action: 'subscribe', jobId }));
};

ws.onmessage = (event) => {
  const data = JSON.parse(event.data);
  if (data.type === 'progress') {
    document.getElementById('status').textContent = `${data.progress}% - ${data.message}`;
  } else if (data.type === 'completed') {
    // Show video
    const videoUrl = `http://localhost:3001/api/videos/job/${data.videoId}`;
    document.querySelector("#player").src = videoUrl;
    document.getElementById('status').textContent = 'Completed!';
  } else if (data.type === 'error') {
    document.getElementById('status').textContent = `Error: ${data.error}`;
  }
};

// Alternative: Poll status (fallback if WebSocket not available)
let ready = false;
const pollStatus = async () => {
  if (ready) return;
  const s = await fetch(`http://localhost:3001/api/videos/status/${jobId}`);
  const data = await s.json();
  document.getElementById('status').textContent = data.status;
  if (data.ready) {
    ready = true;
    const videoUrl = `http://localhost:3001/api/videos/job/${jobId}`;
    document.querySelector("#player").src = videoUrl;
  } else {
    setTimeout(pollStatus, 2000);
  }
};
// Start polling as fallback
pollStatus();
```

## Environment variables
`.env` at the project root should include:

- `OPENAI_API_KEY` - required for TTS and code analysis.
- `VERBOSE_LOGGING` - set to `true` to enable detailed console logging (default: `true`).
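For example (values are placeholders; `WEBHOOK_SECRET` is the variable referenced in the webhook-security note above and only needed if you use the webhook endpoint):

```env
OPENAI_API_KEY=sk-your-key-here
VERBOSE_LOGGING=true
WEBHOOK_SECRET=a-long-random-string
```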
## Important runtime details

- Default server port: `3001` (see `src/server.ts`).
- Generated media is written to `src/output/` and served statically at `/output`.
- Video generation is CPU- and IO-heavy; ensure adequate disk and CPU resources on the host.
## npm / Scripts

- `pnpm run dev` - run `ts-node-esm src/server.ts` (development)
- `pnpm run build` - compile TypeScript (`tsc`)
- `pnpm run start` - run the compiled server (`node dist/server.js`)
## Extra notes

- File retention: generated media remains in `src/output` until you remove it. Add a retention policy or cleanup job for production (a sketch follows below).
- Concurrency: if many users submit jobs simultaneously, convert generation to background jobs to avoid exhausting resources.
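As a starting point for that retention policy, a hypothetical cleanup pass over `src/output` could look like this (the 7-day cutoff is an arbitrary example; wire it to a cron job or interval timer):

```ts
import { readdir, stat, unlink } from "node:fs/promises";
import path from "node:path";

const OUTPUT_DIR = "src/output";
const MAX_AGE_MS = 7 * 24 * 60 * 60 * 1000; // keep generated files for 7 days

// Delete generated media older than the cutoff.
export async function cleanupOldMedia(): Promise<void> {
  const now = Date.now();
  for (const name of await readdir(OUTPUT_DIR)) {
    const filePath = path.join(OUTPUT_DIR, name);
    const info = await stat(filePath);
    if (info.isFile() && now - info.mtimeMs > MAX_AGE_MS) {
      await unlink(filePath);
    }
  }
}
```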