feat: Allow composing follow-up messages while a run is in flight ("queued message")#518
Moe1211 wants to merge 2 commits into nexu-io:main
Conversation
Hi @Moe1211! 🎉 Thanks for making open-design better!
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: cbc6aba0b3
```js
if (!conversationId) return;
const conversationRuns = Array.from(runs.values())
```
Start enqueued runs without conversation IDs
enqueue() now relies on maybeStartNext(), but maybeStartNext() immediately returns when conversationId is falsy. That leaves runs permanently stuck in queued for endpoints that don't attach a conversation id (notably /api/chat, which calls create() without metadata in server.ts), so clients can receive a run id/stream connection that never starts or terminates.
```js
disabled={!draft.trim() && commentAttachments.length === 0}
>
  <Icon name="send" size={13} />
  <span>{streaming ? 'Queue' : t('chat.send')}</span>
```
Make the Queue button actually dispatch while streaming
The composer now shows a Queue button during streaming, but submit() still short-circuits while streaming (the early `if (streaming) return` and the `|| streaming` guard), so clicking the new button is a no-op and users cannot queue a follow-up at all. This is a functional regression in the exact flow this UI change introduces.
lefarcen
left a comment
Hey @Moe1211, great work on implementing queued messages! The architecture is sound — separating UI queueing from daemon sequencing is the right approach.
However, the review found three critical issues that will prevent this from working in production:
1. `enqueue` not exported — `server.ts` calls `design.runs.enqueue()` but the function is never returned from `createChatRunService`. Every chat run will throw `TypeError: enqueue is not a function`.
2. Legacy `/api/chat` endpoint broken — this endpoint creates runs without a `conversationId`, so the queuing logic never fires. Streaming clients using this endpoint will hang.
3. Race condition — if a second run is enqueued while the first is in async preflight, both can start simultaneously because the selected run remains `status === 'queued'` until `startChatRun` reaches the spawn point.
See inline comments below for concrete fixes. Once these are addressed, the implementation will be solid. 🚀
```js
if (run.status !== 'queued') return run; // Only enqueue fresh runs
starters.set(run.id, starter);
maybeStartNext(run.conversationId);
return run;
```
P1 — enqueue is defined but not exported from createChatRunService. Add enqueue to the returned service object (around line 172 where other methods are exported). Without this, server.ts:3867 will throw TypeError: design.runs.enqueue is not a function on every chat run.
```diff
  const run = design.runs.create();
  design.runs.stream(run, req, res);
- design.runs.start(run, () => startChatRun(req.body || {}, run));
+ design.runs.enqueue(run, () => startChatRun(req.body || {}, run));
```
P1 — The legacy /api/chat endpoint doesn't assign conversationId until inside startChatRun, but enqueue calls maybeStartNext(run.conversationId) immediately — before the ID is set. The queued run never starts because conversationId is null.
Concrete fix: Either (a) pass request body into create(req.body || {}) to set conversationId early, or (b) make enqueue start immediately when no conversation ID exists (treating it as a one-off non-queued run).
```js
const starter = starters.get(nextRun.id);
if (starter) {
  // Remove from starters map once we're about to run it
  starters.delete(nextRun.id);
```
P1 — Race condition: if a second run is enqueued while the selected run is in async preflight (before startChatRun marks it non-queued), maybeStartNext sees no active non-queued run and starts the second one too, violating one-run-per-conversation.
Concrete fix: Mark the selected run as internally active before invoking start(nextRun, starter), or add an isStarting flag and treat it as active.
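The fix suggested here can be sketched as follows — a minimal model, not the actual runs.ts code; the `Run` shape and map names are assumptions. The `isStarting` flag is set before the starter is invoked, and the active-run check ignores it on terminal runs so a finished run cannot block the queue:

```typescript
type Run = {
  id: string;
  conversationId: string | null;
  status: 'queued' | 'running' | 'completed';
  isStarting?: boolean;
};

const runs = new Map<string, Run>();
const starters = new Map<string, () => void>();

function maybeStartNext(conversationId: string | null): void {
  if (!conversationId) return;
  const conversationRuns = Array.from(runs.values()).filter(
    (r) => r.conversationId === conversationId,
  );
  // A run in its async preflight window counts as active too,
  // but a completed run must not keep blocking the queue.
  const active = conversationRuns.some(
    (r) =>
      r.status === 'running' ||
      (r.isStarting === true && r.status !== 'completed'),
  );
  if (active) return;
  const next = conversationRuns.find((r) => r.status === 'queued');
  if (!next) return;
  const starter = starters.get(next.id);
  if (!starter) return;
  starters.delete(next.id);
  // Close the race: mark before invoking, so a concurrent enqueue
  // for the same conversation cannot select a second run.
  next.isStarting = true;
  starter();
}
```

With this ordering, a second `maybeStartNext` call during the first run's async preflight sees the conversation as active and selects nothing.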
```js
);

// Dispatch queued messages after the current run finishes.
useEffect(() => {
```
P2 — The queued message auto-dispatch works for UI-level state, but if the UI sends a second turn while streaming, the daemon-queued run will execute with a stale transcript snapshot from enqueue time, not the completed assistant response.
Concrete fix: Either (a) queue only locally and call streamViaDaemon after the current run finishes (simpler), or (b) change the daemon API to accept the new user prompt plus conversation IDs, and resolve the final persisted history at actual start time.
```js
<button
  type="button"
  className="composer-send"
  data-testid="chat-send"
```
P2 — The Queue button now renders while streaming, but clicking it is a no-op because submit() returns early when streaming is true, and setQueuedMessage is never called.
Concrete fix: Wire onSend during streaming to set the queued message in ProjectView, clear/reset the composer only after enqueue succeeds, and render queued state so users can see/cancel it.
```js
disabled={!draft.trim() && commentAttachments.length === 0}
>
  <Icon name="send" size={13} />
  <span>{streaming ? 'Queue' : t('chat.send')}</span>
```
P3 — 'Queue' is hardcoded and bypasses the i18n system. Add chat.queue to Dict and all locale files, then render t('chat.queue').
mrcfps
left a comment
@Moe1211, thanks for pushing this queued-message path forward. I found a few blocking correctness issues that keep the current head from reliably starting runs or queueing follow-up messages, so I’m requesting changes for this pass.
Generated by Looper 0.5.5 · runner=reviewer · agent=opencode

```diff
  const body = { runId: run.id };
  res.status(202).json(body);
- design.runs.start(run, () => startChatRun(req.body || {}, run));
+ design.runs.enqueue(run, () => startChatRun(req.body || {}, run));
```
server.ts now calls design.runs.enqueue(...), but the changed service implementation only defines enqueue locally and the returned object from createChatRunService still does not expose it. That means the first /api/runs request reaches this line and throws TypeError: design.runs.enqueue is not a function instead of starting or queueing the run, breaking the main chat path. Please add enqueue to the service return in apps/daemon/src/runs.ts (and cover the /api/runs create/start path with a test) before switching callers to it.
```js
const maybeStartNext = (conversationId) => {
  if (!conversationId) return;
```
maybeStartNext immediately returns when conversationId is missing, but server.ts also changed the legacy /api/chat endpoint to call enqueue on a run created with design.runs.create() and no metadata. Once enqueue is exported, that endpoint will leave its run in queued forever: the SSE client is attached, but no starter ever runs and no terminal event is emitted. Please either create /api/chat runs with the request metadata before enqueueing, or have enqueue fall back to start(run, starter) for runs without a conversation id.
```js
if (starter) {
  // Remove from starters map once we're about to run it
  starters.delete(nextRun.id);
  start(nextRun, starter);
```
The selected queued run is not marked active before start(nextRun, starter) invokes the async startChatRun preflight. During that window the run is still status === 'queued', so another enqueue for the same conversation can call maybeStartNext, see no active non-queued run, and start a second queued run concurrently. That violates the one-run-at-a-time sequencing this change is adding. Please mark the run as starting/running before invoking the starter (or add an isStarting state that the active-run check treats as active), then add a fixture that enqueues two runs while the first starter is still awaiting preflight.
mrcfps
left a comment
@Moe1211, thanks for the follow-up fixes here. I re-reviewed the current head and found a few blocking correctness issues that still keep the main queued-message flow from reliably starting runs and queueing the follow-up locally.
Generated by Looper 0.5.5 · runner=reviewer · agent=opencode

```diff
  const body = { runId: run.id };
  res.status(202).json(body);
- design.runs.start(run, () => startChatRun(req.body || {}, run));
+ design.runs.enqueue(run, () => startChatRun(req.body || {}, run));
```
This still calls design.runs.enqueue(...), but createChatRunService() only defines enqueue locally and the returned service object in apps/daemon/src/runs.ts does not include it. That means the primary /api/runs path returns 202 and then immediately throws TypeError: design.runs.enqueue is not a function, so no run is started or queued. Please add enqueue to the returned service API (and cover this route with a regression test) before switching callers over. 🚧
```diff
  app.post('/api/chat', (req, res) => {
-   const run = design.runs.create();
+   const run = design.runs.create(req.body || {});
```
Passing req.body || {} into create() fixes callers that include a conversationId, but direct/legacy /api/chat requests can still omit it. In that case enqueue() stores the starter, maybeStartNext(run.conversationId) returns early because the ID is null, and the attached SSE stream never starts or ends. Please either make enqueue() fall back to start(run, starter) when there is no conversation id, or reject /api/chat requests without a conversation id with a clear 400 if the endpoint now requires one.
```diff
  }
- if ((!prompt && commentAttachments.length === 0) || streaming) return;
+ if (!prompt && commentAttachments.length === 0) return;
  onSend(prompt, staged, commentAttachments);
```
Removing the streaming guard lets the new Queue button call onSend(...), but ProjectView.handleSend does not store queuedMessage before creating the user/assistant messages, controllers, and stream. The only new queuedMessage usages initialize it, clear it on stop, and dispatch it after streaming ends, so clicking Queue while a run is active starts another frontend run immediately instead of queueing the follow-up locally. That can overwrite the single abortRef/cancelRef and make Stop/streaming state track the wrong run. Please add an early if (streaming) { setQueuedMessage({ prompt, attachments, commentAttachments }); return; } path in handleSend before creating the new run, then let the existing post-stream effect dispatch it.
lefarcen
left a comment
Thanks for the quick update, @Moe1211! You've made good progress — the race condition mitigation and i18n fixes look solid. However, the critical blocker is still present: enqueue is defined but not exported from createChatRunService, so the first chat request will still throw TypeError: enqueue is not a function.
See inline for the remaining items. Once enqueue is in the service return object, this should be ready to ship. 🚀
```js
if (run.status !== 'queued') return run; // Only enqueue fresh runs
starters.set(run.id, starter);
maybeStartNext(run.conversationId);
return run;
```
P1 (CRITICAL — still not fixed) — enqueue is defined but not exported. You need to add it to the service return object (around line 172 where create, start, etc. are returned). Without this, server.ts:3867 throws TypeError: design.runs.enqueue is not a function on the first /api/runs or /api/chat request.
```diff
  app.post('/api/chat', (req, res) => {
-   const run = design.runs.create();
+   const run = design.runs.create(req.body || {});
```
P1 — Passing req.body to create() only fixes /api/chat when callers supply conversationId. Existing clients can omit it, and maybeStartNext() returns early for null, so those runs queue forever.
Concrete fix: In enqueue(), fall back to start(run, starter) immediately when run.conversationId is null (no serialization key = no queue needed).
```js
// Remove from starters map once we're about to run it
starters.delete(nextRun.id);
// Mark as starting to prevent race condition while async preflight runs
nextRun.status = 'starting';
```
P2 — status = 'starting' closes the race, but 'starting' is not in the ChatRunStatus contract, so web helpers like isActiveRunStatus won't recognize it.
Concrete fix: Either add 'starting' to packages/contracts/src/api/chat.ts and all consumers, or keep it internal (e.g., run._isStarting = true) while leaving public status as 'queued' until the real start() call.
```js
// Dispatch queued messages after the current run finishes.
useEffect(() => {
  if (!streaming && queuedMessage) {
```
P2 — The Queue button now calls handleSend, but handleSend still snapshots nextHistory = [...messages, userMsg] immediately, while the current assistant response may still be partial. The queuedMessage state/effect you added isn't actually preventing this.
Concrete fix: In handleSend, if streaming is true, call setQueuedMessage({ prompt, attachments, commentAttachments }) and return before creating messages or calling streamViaDaemon. The effect will dispatch it after the stream finishes.
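A framework-free sketch of the suggested guard — the names `handleSend` and `setQueuedMessage` come from the review, and the pure helper below is hypothetical, modelling only the branch decision at the top of `handleSend`:

```typescript
type QueuedMessage = {
  prompt: string;
  attachments: string[];
  commentAttachments: string[];
};

type SendDecision =
  | { kind: 'queue'; queued: QueuedMessage } // store for the post-stream effect
  | { kind: 'dispatch' };                    // start a run now

// While a run is streaming, the message is queued instead of dispatched;
// the useEffect dispatches it once streaming flips false.
function decideSend(
  streaming: boolean,
  prompt: string,
  attachments: string[],
  commentAttachments: string[],
): SendDecision {
  if (streaming) {
    return {
      kind: 'queue',
      queued: { prompt, attachments, commentAttachments },
    };
  }
  return { kind: 'dispatch' };
}
```

Branching before any message or run objects are created is what keeps the transcript snapshot out of the queue path.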
Force-pushed 38538bc to 63580c1.
lefarcen
left a comment
Thanks for the updates, @Moe1211! Good progress — the 'starting' status and Queue button defer are now fixed ✅. However, the two critical P1 blockers remain: enqueue is still not exported, and /api/chat runs without conversationId will queue forever. These will cause runtime failures on the main chat path.
See inline for the remaining items.
```js
if (run.status !== 'queued') return run; // Only enqueue fresh runs
starters.set(run.id, starter);
maybeStartNext(run.conversationId);
return run;
```
P1 (CRITICAL — still not fixed) — enqueue is defined but not returned from createChatRunService. You need to add enqueue to the service return object (around line 174 where create, start, get, etc. are returned). Without this, server.ts:3867 and server.ts:3902 will throw TypeError: design.runs.enqueue is not a function on the first /api/runs or /api/chat request.
```js
  return run;
};

const enqueue = (run, starter) => {
```
P1 — enqueue() calls maybeStartNext(run.conversationId), but maybeStartNext returns immediately when the ID is null. /api/chat can create runs with design.runs.create() without metadata, so those runs will be enqueued but never started.
Concrete fix: Add an early fallback in enqueue(): if (!run.conversationId) { start(run, starter); return run; }
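A minimal sketch of that fallback — the types and wiring here are assumptions, not the actual runs.ts code. A run without a `conversationId` has no serialization key, so it starts immediately instead of sitting in a queue nothing will ever drain:

```typescript
type Run = { id: string; conversationId: string | null; status: string };
type Starter = () => void;

const starters = new Map<string, Starter>();

// Stand-in for the service's start(): runs the starter right away.
function start(run: Run, starter: Starter): void {
  run.status = 'running';
  starter();
}

// Stand-in for the conversation-keyed drain; a null key has no queue.
function maybeStartNext(conversationId: string | null): void {
  if (!conversationId) return;
}

function enqueue(run: Run, starter: Starter): Run {
  if (!run.conversationId) {
    start(run, starter); // one-off run: bypass the queue entirely
    return run;
  }
  starters.set(run.id, starter);
  maybeStartNext(run.conversationId);
  return run;
}
```

The fallback keeps the legacy `/api/chat` path working without requiring callers to supply a conversation id.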
mrcfps
left a comment
@Moe1211, thanks for the quick follow-up. I re-reviewed the current head and found a few additional blocking issues that keep the daemon/web checks and queued-run sequencing from being safe to merge.
Generated by Looper 0.5.5 · runner=reviewer · agent=opencode

```js
// Remove from starters map once we're about to run it
starters.delete(nextRun.id);
// Mark as starting to prevent race condition while async preflight runs
nextRun._isStarting = true;
```
nextRun._isStarting = true is never cleared, and the active-run predicate on line 82 treats _isStarting as active even after the run reaches a terminal status. When the first queued run finishes, finish() calls maybeStartNext() while that completed run still has _isStarting, so the next queued run for the same conversation is blocked until cleanup removes the old run (up to the 30 minute TTL). Please clear _isStarting when the starter transitions into start()/finish(), or change the active check to ignore _isStarting on terminal runs, and add a regression that two queued runs in one conversation drain sequentially.
```js
'chat.linkedFolderAlready': string;
'chat.linkedFolderPickError': string;
'chat.send': string;
'chat.queue': string;
```
Adding chat.queue to the required Dict interface means every export const <locale>: Dict must define it, but this PR only updates seven locale files. The remaining locales (for example apps/web/src/i18n/locales/de.ts, plus ar/es-ES/fr/hu/ja/ko/pl/tr) still have chat.send without chat.queue, so a normal installed web typecheck will fail once it checks those Dict assignments. Please add chat.queue to every locale object or make the key optional with an explicit fallback before using t('chat.queue').
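The optional-key alternative could look like this — the `Dict` shape here is a stand-in for illustration, not the real apps/web interface. `chat.queue` stays optional and falls back to `chat.send`, so locales that have not added the key yet still typecheck and render something sensible:

```typescript
interface Dict {
  'chat.send': string;
  'chat.queue'?: string; // optional until every locale is translated
}

function t(dict: Dict, key: keyof Dict): string {
  const value = dict[key];
  if (value !== undefined) return value;
  // Explicit fallback for the one optional key.
  if (key === 'chat.queue') return dict['chat.send'];
  return key;
}
```

The trade-off: a required key forces every locale to be updated at once (compile-time safety), while the optional key ships incrementally at the cost of an English-adjacent fallback.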
Force-pushed 63580c1 to a8bc6d0.
mrcfps
left a comment
@Moe1211, thanks for the follow-up fixes here. I re-reviewed the current head and found blocking issues that still break the queued-run path and can send a queued follow-up into the wrong conversation.
Generated by Looper 0.6.1 · runner=reviewer · agent=opencode

```diff
  const body = { runId: run.id };
  res.status(202).json(body);
- design.runs.start(run, () => startChatRun(req.body || {}, run));
+ design.runs.enqueue(run, () => startChatRun(req.body || {}, run));
```
This route now calls design.runs.enqueue(...) (and /api/chat does the same at line 3902), but the changed createChatRunService() in apps/daemon/src/runs.ts still returns create, start, get, etc. without returning enqueue. As soon as the API responds 202, this line throws TypeError: design.runs.enqueue is not a function, so the primary daemon chat path neither starts nor queues any run. Please add enqueue to the returned service object and cover the /api/runs create/start path with a regression test.
lefarcen
left a comment
Great progress, @Moe1211! You've fixed 4 out of 5 critical blockers:
✅ Fixed:
- `_isStarting` now cleared in `finish()` and `start()` finally block
- All locale files now include `chat.queue`
- `enqueue()` correctly falls back to `start()` for null conversationId
- Syntax error removed (file now ends cleanly)
❌ Still blocking:
enqueue is defined at line 129 but not exported from the service return object at line 178. This means server.ts:3867 will still throw TypeError: design.runs.enqueue is not a function on the first /api/runs request.
Fix: Add , enqueue to the return object after start (line 180):
```js
return {
  create,
  start,
  enqueue, // ← add this line
  get,
  list,
  stream,
  cancel,
  wait,
  emit,
  finish,
  fail,
  statusBody,
  isTerminal(status) {
    return TERMINAL_RUN_STATUSES.has(status);
  },
};
```

Once this is added, the implementation should be ready to ship. 🚀
Force-pushed a8bc6d0 to 9af2dfc.
lefarcen
left a comment
Excellent work, @Moe1211! 🎉
You've successfully fixed all 5 critical blockers from the previous reviews:
✅ enqueue now exported from service return object (line 181)
✅ _isStarting properly cleared in finish() and start() finally block
✅ All 16 locale files include chat.queue
✅ enqueue() correctly falls back to start() for null conversationId
✅ Syntax error removed
The queued-message implementation is now ready from a technical correctness standpoint. Deferring final merge decision to a maintainer.
— open-design team
lefarcen
left a comment
Perfect, @Moe1211! 🎉 All blockers are now resolved on head 9af2dfc:
✅ Fixed:
- `enqueue` is now exported from the service return object (apps/daemon/src/runs.ts:180)
- Queued message snapshot now stores `projectId` and `conversationId` to prevent wrong-conversation dispatch (apps/web/src/components/ProjectView.tsx:194-195)
- `fix_runs.sh` scratch script removed
The implementation is ready to ship. 🚀
Force-pushed 9af2dfc to b2ac995.
lefarcen
left a comment
Re-confirmed on rebased head b2ac995 🎉
All prior blockers remain fixed after the rebase:
- `enqueue` exported from service return (apps/daemon/src/runs.ts:181)
- Queued message stores `projectId` + `conversationId` (apps/web/src/components/ProjectView.tsx:195-196)
The rebase onto current main looks clean. Ready to ship! 🚀
mrcfps
left a comment
@Moe1211, thank you for continuing to tighten up the queued-message flow. I found one remaining blocker in the web path: the new queueing block is still unreachable because the previous streaming guard returns first, so clicking Queue currently drops the message instead of saving it for dispatch.
Generated by Looper 0.6.1 · runner=reviewer · agent=opencode

Force-pushed b2ac995 to a1ee7e1.
lefarcen
left a comment
Hey @Moe1211! Thanks for the quick follow-up.
You removed the streaming guard from onSubmitForm — great instinct! But the blocker is actually one level deeper: handleSend itself still has an early if (streaming) return; at line 857 that exits before reaching the queueing block at line 860.
The flow now is:
1. User clicks Queue while streaming
2. `onSubmitForm` calls `handleSend` ✅ (guard removed here)
3. `handleSend` line 857 returns early ❌
4. Queueing block (line 860) never runs
One more tweak — see inline — and this should unblock the Queue path. 🙏
```diff
@@ -849,6 +856,16 @@ export function ProjectView({
   if (!activeConversationId) return;
   if (streaming) return;
```
P1 — This early return still makes the queueing block (line 860) unreachable. When a user clicks Queue while a run streams, onSubmitForm now calls handleSend (✅ your fix), but execution exits here before reaching the if (streaming) { setQueuedMessage(...); return; } block below.
Please remove this line 857 guard so execution flows to the queueing block. The queueing block at line 860 already handles the streaming case correctly by storing the message and returning.
Force-pushed a1ee7e1 to 5bc0b57.
@Moe1211 I'm holding off on generating review comments for #518 because this pull request has merge conflicts right now. Please resolve the conflicts with main and push the updated branch. Once that's done, request or wait for the review to run again and I'll take another look.

Generated by Looper 0.6.1 · runner=reviewer · agent=opencode
Force-pushed 5bc0b57 to 5f4598f.
lefarcen
left a comment
Hey @Moe1211, excellent work addressing the queueing blocker! 🎉
The web path now snapshots { prompt, attachments, commentAttachments, projectId, conversationId } before the streaming early return, so clicking Queue during an active run stores the follow-up locally instead of dropping it. This resolves the final blocking issue from mrcfps's last review.
All prior fixes remain intact:
- Daemon: `enqueue` is exported at runs.ts:192 ✅
- Web: queuedMessage stores both `projectId` and `conversationId` (lines 987–988) ✅
- Cleanup: `fix_runs.sh` removed ✅
This PR is ready for a maintainer to merge. Thank you for the persistent iteration and clean resolution! 🙏
lefarcen
left a comment
Hey @Moe1211, all prior blockers are resolved in this head! 🎉
What changed since last review:
- ✅ Line 857 blocker fixed: `ProjectView.handleSend` now checks `if (streaming)` BEFORE creating any assistant run (lines 986-993). The queueing block saves `{prompt, attachments, commentAttachments, projectId, conversationId}` to state and returns early.
- ✅ ChatComposer unlocked: removed `|| streaming` from the empty-check guard (line 494), so `onSend` is allowed during streaming.
- ✅ Conversation-switch safety: the dispatch `useEffect` verifies `queuedMessage.projectId === project.id` and `queuedMessage.conversationId === activeConversationId` before dispatching; clears the queue if the user switched contexts.
- ✅ Stop-button safety: `handleStop` calls `setQueuedMessage(null)` to clear the pending queue when the user stops the stream.
- ✅ All locale files: `chat.queue` key added to every locale (ar, de, en, es, fa, fr, hu, id, it, ja, ko, pl, pt-BR, ru, tr, uk, zh-CN, zh-TW).
- ✅ `enqueue` exported: apps/daemon/src/runs.ts line 194 returns `enqueue` from the service API.
- ✅ `_isStarting` cleared: both `finish()` and `start().finally()` set `run._isStarting = false` to prevent queue stalls.
Remaining gaps (non-blocking P2):
- No tests added for the queueing path (daemon sequential-start fixture + web queue-then-dispatch fixture would be nice, but not required for merge).
- `_isStarting` is a runtime property without TypeScript typing (acceptable as private implementation detail).
The core queued-message flow is now correct and safe. Looks good to me; deferring final approval to a maintainer. 🚀
mrcfps
left a comment
@Moe1211, thank you for the steady iteration on this queued-message flow. I found one remaining blocker around the promised reload-safe queueing behavior.
Blocking — apps/web/src/components/ProjectView.tsx, handleSend streaming branch (lines 985-993): this branch stores the queued follow-up only in component state and returns before creating any daemon run. That means the daemon-side enqueue() path added in apps/daemon/src/runs.ts never receives this message while the current run is active; if the user refreshes or closes the page before streaming flips false, queuedMessage is lost and there is no queued daemon run to recover. This breaks the PR's stated cross-reload behavior for the main Queue button flow.
Please persist or submit the queued item before returning there—for example, create a queued daemon run (or a small daemon-side queued-message record/API that can build the final history when it starts) and keep only UI state as a mirror. Add a regression that queues a follow-up during an active daemon run, remounts/reloads the web view before the first run finishes, and verifies the follow-up still drains from the daemon queue.
Generated by Looper 0.6.1 · runner=reviewer · agent=opencode

Force-pushed 5f4598f to 3a1c468.
mrcfps
left a comment
@Moe1211, thank you for the continued iteration on the queued-message flow. I found one remaining blocking correctness issue: daemon-queued follow-ups are now persisted, but the prompt they persist can still be based on the in-flight assistant response before it has finished.
Generated by Looper 0.6.1 · runner=reviewer · agent=opencode

```js
persistMessage(userMsg);
persistMessage(assistantMsg);

const transcript = nextHistory
```
This builds the queued daemon run payload immediately from nextHistory, which is messages plus the new user prompt while the current run is still streaming. At that moment the current assistant message in messages can still be partial (or buffered just before the final chunks flush), and the daemon stores this message string in the starter closure until maybeStartNext() runs later. When the queued run finally starts, startChatRun() uses that stale transcript instead of the completed assistant response, so the follow-up can be answered without the final context from the turn it is supposed to follow.
Because this is the main daemon Queue path and the PR promises sequential follow-up behavior, please defer constructing the daemon prompt until the previous run has reached a terminal state, or persist a compact queued request and have the daemon/web rebuild the transcript from the finalized conversation before starting it. A regression should queue a follow-up while the first assistant response is still streaming, let the first response append additional/final content, and verify the second run receives the completed first response in its transcript.
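The deferred-transcript option could be sketched like this — names are illustrative, not the actual code. Only a compact queued request is persisted, and the transcript is rebuilt from the finalized conversation when the queued run actually starts, so the follow-up sees the completed assistant response rather than a partial snapshot:

```typescript
type Message = { role: 'user' | 'assistant'; content: string };
type QueuedRequest = { conversationId: string; prompt: string };

// Resolve the history lazily, at start time, after the previous run has
// reached a terminal state — never at enqueue time.
function buildTranscriptAtStart(
  getFinalizedHistory: (conversationId: string) => Message[],
  queued: QueuedRequest,
): Message[] {
  const history = getFinalizedHistory(queued.conversationId);
  return [...history, { role: 'user', content: queued.prompt }];
}
```

Because the starter closure holds only `{ conversationId, prompt }`, nothing captured at enqueue time can go stale.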
Resolves #446.
This PR adds the ability for users to compose and queue a follow-up message while a Claude run is still streaming. It implements the queue locally in the UI for immediate responsiveness and backs it with daemon-level sequencing so the queue survives page reloads.
Changes
Frontend (`apps/web`)
- The `Send` button now reads `Queue` when streaming, while the `Stop` button remains fully accessible alongside it.
- Added `queuedMessage` state to `ProjectView` that intercepts sent messages during an active stream.
- Added a `useEffect` to automatically dispatch the queued message when the stream transitions to a finished state.

Daemon (`apps/daemon`)
- Added a queue to the run service (`runs.ts`) to handle sequential execution.
- Replaced direct `start()` calls with a new `enqueue()` method in the API layer (`server.ts`).
- `enqueue()` checks if there is an active run for the given `conversationId`. If so, it leaves the new run in a `queued` state.
- Hooked `maybeStartNext()` into the `finish` and `fail` lifecycle methods so the daemon automatically dequeues and starts the next run in the conversation, ensuring cross-reload queue persistence.