
feat: Allow composing follow-up messages while a run is in flight ("queued message")#518

Open
Moe1211 wants to merge 2 commits into nexu-io:main from Moe1211:feat/queued-messages

Conversation


@Moe1211 Moe1211 commented May 5, 2026

Resolves #446.

This PR adds the ability for users to compose and queue a follow-up message while a Claude run is still streaming. It implements the queue locally in the UI for immediate responsiveness and backs it with daemon-level sequencing so the queue survives page reloads.

Changes

Frontend (apps/web)

  • Unlocked the composer input during active runs.
  • The Send button now reads Queue when streaming, while the Stop button remains fully accessible alongside it.
  • Added a queuedMessage state to ProjectView that intercepts sent messages during an active stream.
  • Added a useEffect to automatically dispatch the queued message when the stream transitions to a finished state.

Daemon (apps/daemon)

  • Updated the run manager (runs.ts) to handle sequential execution.
  • Replaced direct start() calls with a new enqueue() method in the API layer (server.ts).
  • enqueue() checks if there is an active run for the given conversationId. If so, it leaves the new run in a queued state.
  • Hooked maybeStartNext() into the finish and fail lifecycle methods so the daemon automatically dequeues and starts the next run in the conversation, ensuring cross-reload queue persistence.
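The daemon-side sequencing described above can be sketched roughly as follows. The names (enqueue, maybeStartNext, finish, the runs and starters maps) mirror the PR description, but the bodies are invented for illustration and are not the actual runs.ts code:

```typescript
// Minimal sketch of per-conversation run sequencing. Hypothetical
// names and shapes; the real service differs.
type Run = { id: string; conversationId: string | null; status: 'queued' | 'running' | 'done' };

const runs = new Map<string, Run>();
const starters = new Map<string, () => void>();
const log: string[] = [];

function maybeStartNext(conversationId: string | null): void {
  if (!conversationId) return;
  const inConversation = Array.from(runs.values()).filter(
    (r) => r.conversationId === conversationId,
  );
  // One run at a time per conversation: bail if something is already running.
  if (inConversation.some((r) => r.status === 'running')) return;
  const next = inConversation.find((r) => r.status === 'queued' && starters.has(r.id));
  if (!next) return;
  const starter = starters.get(next.id)!;
  starters.delete(next.id);
  next.status = 'running';
  starter();
}

function enqueue(run: Run, starter: () => void): Run {
  runs.set(run.id, run);
  starters.set(run.id, starter);
  maybeStartNext(run.conversationId);
  return run;
}

function finish(run: Run): void {
  run.status = 'done';
  // Dequeue the next run in the same conversation. Because the queue
  // lives in the daemon, it survives page reloads.
  maybeStartNext(run.conversationId);
}

const a: Run = { id: 'a', conversationId: 'c1', status: 'queued' };
const b: Run = { id: 'b', conversationId: 'c1', status: 'queued' };
enqueue(a, () => log.push('start a'));
enqueue(b, () => log.push('start b')); // stays queued while `a` runs
finish(a); // now `b` starts
```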

@lefarcen lefarcen self-requested a review May 5, 2026 10:52
@lefarcen lefarcen added the feature New feature or enhancement label May 5, 2026
Contributor

lefarcen commented May 5, 2026

Hi @Moe1211! 🎉
Thanks for the contribution — queued messages during active runs is a highly requested feature.
I will run a deep review and get back to you within 24h.

Thanks for making open-design better!
— open-design team


@chatgpt-codex-connector chatgpt-codex-connector Bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: cbc6aba0b3


Comment thread apps/daemon/src/runs.ts
Comment on lines +76 to +77
if (!conversationId) return;
const conversationRuns = Array.from(runs.values())


P1 — Start enqueued runs without conversation IDs

enqueue() now relies on maybeStartNext(), but maybeStartNext() immediately returns when conversationId is falsy. That leaves runs permanently stuck in queued for endpoints that don't attach a conversation id (notably /api/chat, which calls create() without metadata in server.ts), so clients can receive a run id/stream connection that never starts or terminates.


disabled={!draft.trim() && commentAttachments.length === 0}
>
<Icon name="send" size={13} />
<span>{streaming ? 'Queue' : t('chat.send')}</span>

P1 — Make the Queue button actually dispatch while streaming

The composer now shows a Queue button during streaming, but submit() still short-circuits while a run is streaming (both the early if (streaming) return and the (... || streaming) return guard), so clicking this new button is a no-op and users cannot queue a follow-up at all. This is a functional regression in the exact flow this UI change introduces.


Contributor

@lefarcen lefarcen left a comment


Hey @Moe1211, great work on implementing queued messages! The architecture is sound — separating UI queueing from daemon sequencing is the right approach.

However, the review found three critical issues that will prevent this from working in production:

  1. enqueue not exported — server.ts calls design.runs.enqueue() but the function is never returned from createChatRunService. Every chat run will throw TypeError: enqueue is not a function.

  2. Legacy /api/chat endpoint broken — this endpoint creates runs without a conversationId, so the queuing logic never fires. Streaming clients using this endpoint will hang.

  3. Race condition — if a second run is enqueued while the first is in async preflight, both can start simultaneously because the selected run remains status === 'queued' until startChatRun reaches the spawn point.

See inline comments below for concrete fixes. Once these are addressed, the implementation will be solid. 🚀

Comment thread apps/daemon/src/runs.ts
if (run.status !== 'queued') return run; // Only enqueue fresh runs
starters.set(run.id, starter);
maybeStartNext(run.conversationId);
return run;
Contributor


P1 — enqueue is defined but not exported from createChatRunService. Add enqueue to the returned service object (around line 172 where other methods are exported). Without this, server.ts:3867 will throw TypeError: design.runs.enqueue is not a function on every chat run.

Comment thread apps/daemon/src/server.ts
const run = design.runs.create();
design.runs.stream(run, req, res);
- design.runs.start(run, () => startChatRun(req.body || {}, run));
+ design.runs.enqueue(run, () => startChatRun(req.body || {}, run));
Contributor


P1 — The legacy /api/chat endpoint doesn't assign conversationId until inside startChatRun, but enqueue calls maybeStartNext(run.conversationId) immediately — before the ID is set. The queued run never starts because conversationId is null.

Concrete fix: Either (a) pass request body into create(req.body || {}) to set conversationId early, or (b) make enqueue start immediately when no conversation ID exists (treating it as a one-off non-queued run).
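Option (b) could look like the following sketch. The start/queue helpers here are stand-ins, not the service's real signatures:

```typescript
// Hypothetical sketch of fix (b): a run without a conversationId has no
// serialization key, so enqueue() starts it immediately instead of queueing.
type Run = { id: string; conversationId: string | null; status: string };

const startedNow: string[] = [];
const queuedIds: string[] = [];

function start(run: Run, starter: () => void): void {
  run.status = 'running';
  starter();
}

function enqueue(run: Run, starter: () => void): Run {
  if (!run.conversationId) {
    start(run, starter); // one-off run: bypass the queue entirely
    return run;
  }
  queuedIds.push(run.id); // normal path: wait for maybeStartNext()
  return run;
}

enqueue({ id: 'legacy', conversationId: null, status: 'queued' }, () => startedNow.push('legacy'));
enqueue({ id: 'chat', conversationId: 'c1', status: 'queued' }, () => startedNow.push('chat'));
```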

Comment thread apps/daemon/src/runs.ts
const starter = starters.get(nextRun.id);
if (starter) {
// Remove from starters map once we're about to run it
starters.delete(nextRun.id);
Contributor


P1 — Race condition: if a second run is enqueued while the selected run is in async preflight (before startChatRun marks it non-queued), maybeStartNext sees no active non-queued run and starts the second one too, violating one-run-per-conversation.

Concrete fix: Mark the selected run as internally active before invoking start(nextRun, starter), or add an isStarting flag and treat it as active.
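The isStarting approach can be sketched like this; names are illustrative, not the PR's actual code:

```typescript
// Sketch of the proposed race fix: flag the selected run as starting
// before invoking its async starter, so a concurrent enqueue sees it
// as active. Hypothetical names and shapes.
type Run = { id: string; status: 'queued' | 'running' | 'done'; isStarting?: boolean };

const active = (r: Run) => r.status === 'running' || r.isStarting === true;

function startNext(queue: Run[], invoke: (r: Run) => Promise<void>): void {
  if (queue.some(active)) return; // someone is running or mid-preflight
  const next = queue.find((r) => r.status === 'queued');
  if (!next) return;
  next.isStarting = true; // closes the preflight window
  invoke(next).then(() => {
    next.isStarting = false;
    next.status = 'running';
  });
}

const queue: Run[] = [
  { id: '1', status: 'queued' },
  { id: '2', status: 'queued' },
];
const startedIds: string[] = [];
const slowPreflight = (r: Run) => {
  startedIds.push(r.id);
  return Promise.resolve();
};
startNext(queue, slowPreflight); // selects run 1 and flags it
startNext(queue, slowPreflight); // re-entry during preflight: no-op
```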

);

// Dispatch queued messages after the current run finishes.
useEffect(() => {
Contributor


P2 — The queued message auto-dispatch works for UI-level state, but if the UI sends a second turn while streaming, the daemon-queued run will execute with a stale transcript snapshot from enqueue time, not the completed assistant response.

Concrete fix: Either (a) queue only locally and call streamViaDaemon after the current run finishes (simpler), or (b) change the daemon API to accept the new user prompt plus conversation IDs, and resolve the final persisted history at actual start time.
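Option (a) can be modeled without React: the post-stream effect body is written here as a plain function that fires when streaming flips to false. streamViaDaemon follows the name used in this review; everything else is invented:

```typescript
// Framework-free model of option (a): queue the follow-up locally and
// dispatch it once the current stream finishes. All names illustrative.
type QueuedMessage = { prompt: string };

let streaming = true;
let queuedMessage: QueuedMessage | null = null;
const dispatched: string[] = [];

function streamViaDaemon(prompt: string): void {
  dispatched.push(prompt);
}

function send(prompt: string): void {
  if (streaming) {
    queuedMessage = { prompt }; // hold it locally; do not start a run
    return;
  }
  streamViaDaemon(prompt);
}

// Equivalent of the effect that runs when the stream transitions to finished.
function onStreamFinished(): void {
  streaming = false;
  if (queuedMessage) {
    const next = queuedMessage;
    queuedMessage = null;
    streamViaDaemon(next.prompt);
  }
}

send('follow-up'); // queued, not dispatched: a run is still streaming
onStreamFinished(); // dispatched now, with the completed history available
```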

<button
type="button"
className="composer-send"
data-testid="chat-send"
Contributor


P2 — The Queue button now renders while streaming, but clicking it is a no-op because submit() returns early when streaming is true, and setQueuedMessage is never called.

Concrete fix: Wire onSend during streaming to set the queued message in ProjectView, clear/reset the composer only after enqueue succeeds, and render queued state so users can see/cancel it.

disabled={!draft.trim() && commentAttachments.length === 0}
>
<Icon name="send" size={13} />
<span>{streaming ? 'Queue' : t('chat.send')}</span>
Contributor


P3 — 'Queue' is hardcoded and bypasses the i18n system. Add chat.queue to Dict and all locale files, then render t('chat.queue').

Contributor

@mrcfps mrcfps left a comment


@Moe1211, thanks for pushing this queued-message path forward. I found a few blocking correctness issues that keep the current head from reliably starting runs or queueing follow-up messages, so I’m requesting changes for this pass.

Generated by Looper 0.5.5 · runner=reviewer · agent=opencode

Comment thread apps/daemon/src/server.ts
const body = { runId: run.id };
res.status(202).json(body);
- design.runs.start(run, () => startChatRun(req.body || {}, run));
+ design.runs.enqueue(run, () => startChatRun(req.body || {}, run));
Contributor


server.ts now calls design.runs.enqueue(...), but the changed service implementation only defines enqueue locally and the returned object from createChatRunService still does not expose it. That means the first /api/runs request reaches this line and throws TypeError: design.runs.enqueue is not a function instead of starting or queueing the run, breaking the main chat path. Please add enqueue to the service return in apps/daemon/src/runs.ts (and cover the /api/runs create/start path with a test) before switching callers to it.

Comment thread apps/daemon/src/runs.ts
Comment on lines +75 to +76
const maybeStartNext = (conversationId) => {
if (!conversationId) return;
Contributor


maybeStartNext immediately returns when conversationId is missing, but server.ts also changed the legacy /api/chat endpoint to call enqueue on a run created with design.runs.create() and no metadata. Once enqueue is exported, that endpoint will leave its run in queued forever: the SSE client is attached, but no starter ever runs and no terminal event is emitted. Please either create /api/chat runs with the request metadata before enqueueing, or have enqueue fall back to start(run, starter) for runs without a conversation id.

Comment thread apps/daemon/src/runs.ts
if (starter) {
// Remove from starters map once we're about to run it
starters.delete(nextRun.id);
start(nextRun, starter);
Contributor


The selected queued run is not marked active before start(nextRun, starter) invokes the async startChatRun preflight. During that window the run is still status === 'queued', so another enqueue for the same conversation can call maybeStartNext, see no active non-queued run, and start a second queued run concurrently. That violates the one-run-at-a-time sequencing this change is adding. Please mark the run as starting/running before invoking the starter (or add an isStarting state that the active-run check treats as active), then add a fixture that enqueues two runs while the first starter is still awaiting preflight.

Contributor

@mrcfps mrcfps left a comment


@Moe1211, thanks for the follow-up fixes here. I re-reviewed the current head and found a few blocking correctness issues that still keep the main queued-message flow from reliably starting runs and queueing the follow-up locally.

Generated by Looper 0.5.5 · runner=reviewer · agent=opencode

Comment thread apps/daemon/src/server.ts
const body = { runId: run.id };
res.status(202).json(body);
- design.runs.start(run, () => startChatRun(req.body || {}, run));
+ design.runs.enqueue(run, () => startChatRun(req.body || {}, run));
Contributor


This still calls design.runs.enqueue(...), but createChatRunService() only defines enqueue locally and the returned service object in apps/daemon/src/runs.ts does not include it. That means the primary /api/runs path returns 202 and then immediately throws TypeError: design.runs.enqueue is not a function, so no run is started or queued. Please add enqueue to the returned service API (and cover this route with a regression test) before switching callers over. 🚧

Comment thread apps/daemon/src/server.ts

app.post('/api/chat', (req, res) => {
- const run = design.runs.create();
+ const run = design.runs.create(req.body || {});
Contributor


Passing req.body || {} into create() fixes callers that include a conversationId, but direct/legacy /api/chat requests can still omit it. In that case enqueue() stores the starter, maybeStartNext(run.conversationId) returns early because the ID is null, and the attached SSE stream never starts or ends. Please either make enqueue() fall back to start(run, starter) when there is no conversation id, or reject /api/chat requests without a conversation id with a clear 400 if the endpoint now requires one.

}
- if ((!prompt && commentAttachments.length === 0) || streaming) return;
+ if (!prompt && commentAttachments.length === 0) return;
onSend(prompt, staged, commentAttachments);
Contributor


Removing the streaming guard lets the new Queue button call onSend(...), but ProjectView.handleSend does not store queuedMessage before creating the user/assistant messages, controllers, and stream. The only new queuedMessage usages initialize it, clear it on stop, and dispatch it after streaming ends, so clicking Queue while a run is active starts another frontend run immediately instead of queueing the follow-up locally. That can overwrite the single abortRef/cancelRef and make Stop/streaming state track the wrong run. Please add an early if (streaming) { setQueuedMessage({ prompt, attachments, commentAttachments }); return; } path in handleSend before creating the new run, then let the existing post-stream effect dispatch it.

Contributor

@lefarcen lefarcen left a comment


Thanks for the quick update, @Moe1211! You've made good progress — the race condition mitigation and i18n fixes look solid. However, the critical blocker is still present: enqueue is defined but not exported from createChatRunService, so the first chat request will still throw TypeError: enqueue is not a function.

See inline for the remaining items. Once enqueue is in the service return object, this should be ready to ship. 🚀

Comment thread apps/daemon/src/runs.ts
if (run.status !== 'queued') return run; // Only enqueue fresh runs
starters.set(run.id, starter);
maybeStartNext(run.conversationId);
return run;
Contributor


P1 (CRITICAL — still not fixed) — enqueue is defined but not exported. You need to add it to the service return object (around line 172 where create, start, etc. are returned). Without this, server.ts:3867 throws TypeError: design.runs.enqueue is not a function on the first /api/runs or /api/chat request.

Comment thread apps/daemon/src/server.ts

app.post('/api/chat', (req, res) => {
- const run = design.runs.create();
+ const run = design.runs.create(req.body || {});
Contributor


P1 — Passing req.body to create() only fixes /api/chat when callers supply conversationId. Existing clients can omit it, and maybeStartNext() returns early for null, so those runs queue forever.

Concrete fix: In enqueue(), fall back to start(run, starter) immediately when run.conversationId is null (no serialization key = no queue needed).

Comment thread apps/daemon/src/runs.ts Outdated
// Remove from starters map once we're about to run it
starters.delete(nextRun.id);
// Mark as starting to prevent race condition while async preflight runs
nextRun.status = 'starting';
Contributor


P2 — status = 'starting' closes the race, but 'starting' is not in the ChatRunStatus contract, so web helpers like isActiveRunStatus won't recognize it.

Concrete fix: Either add 'starting' to packages/contracts/src/api/chat.ts and all consumers, or keep it internal (e.g., run._isStarting = true) while leaving public status as 'queued' until the real start() call.


// Dispatch queued messages after the current run finishes.
useEffect(() => {
if (!streaming && queuedMessage) {
Contributor


P2 — The Queue button now calls handleSend, but handleSend still snapshots nextHistory = [...messages, userMsg] immediately, while the current assistant response may still be partial. The queuedMessage state/effect you added isn't actually preventing this.

Concrete fix: In handleSend, if streaming is true, call setQueuedMessage({ prompt, attachments, commentAttachments }) and return before creating messages or calling streamViaDaemon. The effect will dispatch it after the stream finishes.
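The staleness point can be made concrete with a tiny model: a history snapshot taken at click time misses the assistant reply that is still streaming, while a snapshot taken at dispatch time includes it. All names here are invented for illustration:

```typescript
// Model of the snapshot-staleness bug described above.
const messages: string[] = ['user: hi'];

// Buggy: history captured when Queue is clicked, mid-stream.
const staleSnapshot = [...messages, 'user: follow-up'];

// The stream finishes and the assistant reply lands in the transcript.
messages.push('assistant: done');

// Correct: history captured only when the queued message is dispatched.
const freshSnapshot = [...messages, 'user: follow-up'];
```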

@Moe1211 Moe1211 force-pushed the feat/queued-messages branch from 38538bc to 63580c1 Compare May 5, 2026 11:39
Contributor

@lefarcen lefarcen left a comment


Thanks for the updates, @Moe1211! Good progress — the 'starting' status and Queue button defer are now fixed ✅. However, the two critical P1 blockers remain: enqueue is still not exported, and /api/chat runs without conversationId will queue forever. These will cause runtime failures on the main chat path.

See inline for the remaining items.

Comment thread apps/daemon/src/runs.ts
if (run.status !== 'queued') return run; // Only enqueue fresh runs
starters.set(run.id, starter);
maybeStartNext(run.conversationId);
return run;
Contributor


P1 (CRITICAL — still not fixed) — enqueue is defined but not returned from createChatRunService. You need to add enqueue to the service return object (around line 174 where create, start, get, etc. are returned). Without this, server.ts:3867 and server.ts:3902 will throw TypeError: design.runs.enqueue is not a function on the first /api/runs or /api/chat request.

Comment thread apps/daemon/src/runs.ts
return run;
};

const enqueue = (run, starter) => {
Contributor


P1 — enqueue() calls maybeStartNext(run.conversationId), but maybeStartNext returns immediately when the ID is null. /api/chat can create runs with design.runs.create() without metadata, so those runs will be enqueued but never started.

Concrete fix: Add an early fallback in enqueue(): if (!run.conversationId) { start(run, starter); return run; }

Contributor

@mrcfps mrcfps left a comment


@Moe1211, thanks for the quick follow-up. I re-reviewed the current head and found a few additional blocking issues that keep the daemon/web checks and queued-run sequencing from being safe to merge.

Generated by Looper 0.5.5 · runner=reviewer · agent=opencode

Comment thread apps/daemon/src/runs.ts
// Remove from starters map once we're about to run it
starters.delete(nextRun.id);
// Mark as starting to prevent race condition while async preflight runs
nextRun._isStarting = true;
Contributor


nextRun._isStarting = true is never cleared, and the active-run predicate on line 82 treats _isStarting as active even after the run reaches a terminal status. When the first queued run finishes, finish() calls maybeStartNext() while that completed run still has _isStarting, so the next queued run for the same conversation is blocked until cleanup removes the old run (up to the 30 minute TTL). Please clear _isStarting when the starter transitions into start()/finish(), or change the active check to ignore _isStarting on terminal runs, and add a regression that two queued runs in one conversation drain sequentially.
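Clearing the flag in a finally block covers both the success and failure transitions. A sketch (shown synchronously for brevity; names are illustrative, not the actual runs.ts code):

```typescript
// Sketch of the fix: clear _isStarting when the starter completes or
// throws, so a terminal run can never keep blocking the queue.
type Run = { id: string; status: string; _isStarting?: boolean };

function startWithFlag(run: Run, starter: () => void): void {
  run._isStarting = true; // treated as "active" by the queue predicate
  try {
    starter();
    run.status = 'done';
  } catch {
    run.status = 'error';
  } finally {
    run._isStarting = false; // cleared on success and on failure
  }
}

const ok: Run = { id: 'ok', status: 'queued' };
startWithFlag(ok, () => {});

const bad: Run = { id: 'bad', status: 'queued' };
startWithFlag(bad, () => { throw new Error('preflight failed'); });
```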

'chat.linkedFolderAlready': string;
'chat.linkedFolderPickError': string;
'chat.send': string;
'chat.queue': string;
Contributor


Adding chat.queue to the required Dict interface means every export const <locale>: Dict must define it, but this PR only updates seven locale files. The remaining locales (for example apps/web/src/i18n/locales/de.ts, plus ar/es-ES/fr/hu/ja/ko/pl/tr) still have chat.send without chat.queue, so a normal installed web typecheck will fail once it checks those Dict assignments. Please add chat.queue to every locale object or make the key optional with an explicit fallback before using t('chat.queue').
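The typecheck failure follows from Dict being a closed interface: every locale object typed as Dict must supply the new key. A minimal illustration of both options (the real Dict has many more keys; DictLoose and queueLabel are invented names):

```typescript
// Required key: every locale must define it or fail to compile.
interface Dict {
  'chat.send': string;
  'chat.queue': string;
}

const en: Dict = { 'chat.send': 'Send', 'chat.queue': 'Queue' };
// A locale missing 'chat.queue' would not compile:
// const de: Dict = { 'chat.send': 'Senden' }; // error TS2741

// Alternative: make the key optional with an explicit fallback.
interface DictLoose {
  'chat.send': string;
  'chat.queue'?: string;
}
const de: DictLoose = { 'chat.send': 'Senden' };
const queueLabel = (d: DictLoose) => d['chat.queue'] ?? d['chat.send'];
```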

@Moe1211 Moe1211 force-pushed the feat/queued-messages branch from 63580c1 to a8bc6d0 Compare May 5, 2026 12:00
Contributor

@mrcfps mrcfps left a comment


@Moe1211, thanks for the follow-up fixes here. I re-reviewed the current head and found blocking issues that still break the queued-run path and can send a queued follow-up into the wrong conversation.

Generated by Looper 0.5.5 · runner=reviewer · agent=opencode

Comment thread apps/daemon/src/server.ts
const body = { runId: run.id };
res.status(202).json(body);
- design.runs.start(run, () => startChatRun(req.body || {}, run));
+ design.runs.enqueue(run, () => startChatRun(req.body || {}, run));
Contributor


This route now calls design.runs.enqueue(...) (and /api/chat does the same at line 3902), but the changed createChatRunService() in apps/daemon/src/runs.ts still returns create, start, get, etc. without returning enqueue. As soon as the API responds 202, this line throws TypeError: design.runs.enqueue is not a function, so the primary daemon chat path neither starts nor queues any run. Please add enqueue to the returned service object and cover the /api/runs create/start path with a regression test.

Contributor

@lefarcen lefarcen left a comment


Great progress, @Moe1211! You've fixed 4 out of 5 critical blockers:

✅ Fixed:

  • _isStarting now cleared in finish() and start() finally block
  • All locale files now include chat.queue
  • enqueue() correctly falls back to start() for null conversationId
  • Syntax error removed (file now ends cleanly)

Still blocking:

enqueue is defined at line 129 but not exported from the service return object at line 178. This means server.ts:3867 will still throw TypeError: design.runs.enqueue is not a function on the first /api/runs request.

Fix: Add , enqueue to the return object after start (line 180):

return {
  create,
  start,
  enqueue,  // ← add this line
  get,
  list,
  stream,
  cancel,
  wait,
  emit,
  finish,
  fail,
  statusBody,
  isTerminal(status) {
    return TERMINAL_RUN_STATUSES.has(status);
  },
};

Once this is added, the implementation should be ready to ship. 🚀

@Moe1211 Moe1211 force-pushed the feat/queued-messages branch from a8bc6d0 to 9af2dfc Compare May 5, 2026 12:36
Contributor

@lefarcen lefarcen left a comment


Excellent work, @Moe1211! 🎉

You've successfully fixed all 5 critical blockers from the previous reviews:

  • ✅ enqueue now exported from service return object (line 181)
  • ✅ _isStarting properly cleared in finish() and start() finally block
  • ✅ All 16 locale files include chat.queue
  • ✅ enqueue() correctly falls back to start() for null conversationId
  • ✅ Syntax error removed

The queued-message implementation is now ready from a technical correctness standpoint. Deferring final merge decision to a maintainer.

— open-design team

Contributor

@lefarcen lefarcen left a comment


Perfect, @Moe1211! 🎉 All blockers are now resolved on head 9af2dfc:

Fixed:

  • enqueue is now exported from the service return object (apps/daemon/src/runs.ts:180)
  • Queued message snapshot now stores projectId and conversationId to prevent wrong-conversation dispatch (apps/web/src/components/ProjectView.tsx:194-195)
  • fix_runs.sh scratch script removed

The implementation is ready to ship. 🚀

@Moe1211 Moe1211 force-pushed the feat/queued-messages branch from 9af2dfc to b2ac995 Compare May 5, 2026 17:24
@Moe1211 Moe1211 requested a review from mrcfps May 5, 2026 17:26
Contributor

@lefarcen lefarcen left a comment


Re-confirmed on rebased head b2ac995 🎉

All prior blockers remain fixed after the rebase:

  • enqueue exported from service return (apps/daemon/src/runs.ts:181)
  • Queued message stores projectId + conversationId (apps/web/src/components/ProjectView.tsx:195-196)

The rebase onto current main looks clean. Ready to ship! 🚀

Contributor

@mrcfps mrcfps left a comment


@Moe1211, thank you for continuing to tighten up the queued-message flow. I found one remaining blocker in the web path: the new queueing block is still unreachable because the previous streaming guard returns first, so clicking Queue currently drops the message instead of saving it for dispatch.

Generated by Looper 0.6.1 · runner=reviewer · agent=opencode

@Moe1211 Moe1211 force-pushed the feat/queued-messages branch from b2ac995 to a1ee7e1 Compare May 6, 2026 13:19
Contributor

@lefarcen lefarcen left a comment


Hey @Moe1211! Thanks for the quick follow-up.

You removed the streaming guard from onSubmitForm — great instinct! But the blocker is actually one level deeper: handleSend itself still has an early if (streaming) return; at line 857 that exits before reaching the queueing block at line 860.

The flow now is:

  1. User clicks Queue while streaming
  2. onSubmitForm calls handleSend ✅ (guard removed here)
  3. handleSend line 857 returns early ❌
  4. Queueing block (line 860) never runs

One more tweak — see inline — and this should unblock the Queue path. 🙏

Comment thread apps/web/src/components/ProjectView.tsx Outdated
@@ -849,6 +856,16 @@ export function ProjectView({
if (!activeConversationId) return;
if (streaming) return;
Contributor


P1 — This early return still makes the queueing block (line 860) unreachable. When a user clicks Queue while a run streams, onSubmitForm now calls handleSend (✅ your fix), but execution exits here before reaching the if (streaming) { setQueuedMessage(...); return; } block below.

Please remove this line 857 guard so execution flows to the queueing block. The queueing block at line 860 already handles the streaming case correctly by storing the message and returning.

@Moe1211 Moe1211 force-pushed the feat/queued-messages branch from a1ee7e1 to 5bc0b57 Compare May 6, 2026 13:43
Contributor

mrcfps commented May 6, 2026

@Moe1211 I'm holding off on generating review comments for #518 because this pull request has merge conflicts right now.

Please resolve the conflicts with main and push the updated branch. Once that's done, request or wait for the review to run again and I'll take another look.

Generated by Looper 0.6.1 · runner=reviewer · agent=opencode

@Moe1211 Moe1211 force-pushed the feat/queued-messages branch from 5bc0b57 to 5f4598f Compare May 6, 2026 14:51
Contributor

@lefarcen lefarcen left a comment


Hey @Moe1211, excellent work addressing the queueing blocker! 🎉

The web path now snapshots { prompt, attachments, commentAttachments, projectId, conversationId } before the streaming early return, so clicking Queue during an active run stores the follow-up locally instead of dropping it. This resolves the final blocking issue from mrcfps's last review.

All prior fixes remain intact:

  • Daemon: enqueue is exported at runs.ts:192
  • Web: queuedMessage stores both projectId and conversationId (lines 987–988) ✅
  • Cleanup: fix_runs.sh removed ✅

This PR is ready for a maintainer to merge. Thank you for the persistent iteration and clean resolution! 🙏

@Moe1211 Moe1211 requested a review from mrcfps May 6, 2026 15:02

@lefarcen lefarcen left a comment


Hey @Moe1211, all prior blockers are resolved in this head! 🎉

What changed since last review:

  • Line 857 blocker fixed: ProjectView.handleSend now checks if (streaming) BEFORE creating any assistant run (lines 986-993). The queueing block saves {prompt, attachments, commentAttachments, projectId, conversationId} to state and returns early.
  • ChatComposer unlocked: removed || streaming from the empty-check guard (line 494), so onSend is allowed during streaming.
  • Conversation-switch safety: the dispatch useEffect verifies queuedMessage.projectId === project.id and queuedMessage.conversationId === activeConversationId before dispatching; clears the queue if user switched contexts.
  • Stop-button safety: handleStop calls setQueuedMessage(null) to clear pending queue when user stops the stream.
  • All locale files: chat.queue key added to every locale (ar, de, en, es, fa, fr, hu, id, it, ja, ko, pl, pt-BR, ru, tr, uk, zh-CN, zh-TW).
  • enqueue exported: apps/daemon/src/runs.ts line 194 returns enqueue from the service API.
  • _isStarting cleared: both finish() and start().finally() set run._isStarting = false to prevent queue stalls.

Remaining gaps (non-blocking P2):

  • No tests added for the queueing path (daemon sequential-start fixture + web queue-then-dispatch fixture would be nice, but not required for merge).
  • _isStarting is a runtime property without TypeScript typing (acceptable as private implementation detail).

The core queued-message flow is now correct and safe. Looks good to me; deferring final approval to a maintainer. 🚀
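The conversation-switch safety described above can be sketched as a pure decision function (a simplification of the dispatch useEffect; `resolveQueued` and its field names are illustrative, not the PR's actual code): dispatch only when the stream has finished and the queued message still matches the active project and conversation, otherwise clear it.

```typescript
// Minimal sketch of the dispatch guard: wait while streaming, clear the
// queue if the user switched projects/conversations, dispatch otherwise.
type Queued = { prompt: string; projectId: string; conversationId: string };

function resolveQueued(
  queued: Queued | null,
  streaming: boolean,
  activeProjectId: string,
  activeConversationId: string,
): { action: "dispatch" | "clear" | "wait"; prompt?: string } {
  if (!queued) return { action: "wait" };          // nothing queued
  if (streaming) return { action: "wait" };        // run still in flight
  if (
    queued.projectId !== activeProjectId ||
    queued.conversationId !== activeConversationId
  ) {
    return { action: "clear" };                    // user switched contexts
  }
  return { action: "dispatch", prompt: queued.prompt };
}
```

In the real component the "clear" and "dispatch" branches would call setQueuedMessage(null) and the send handler respectively.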


@mrcfps mrcfps left a comment


@Moe1211, thank you for the steady iteration on this queued-message flow. I found one remaining blocker around the promised reload-safe queueing behavior.

Blocking — apps/web/src/components/ProjectView.tsx, handleSend streaming branch (lines 985-993): this branch stores the queued follow-up only in component state and returns before creating any daemon run. That means the daemon-side enqueue() path added in apps/daemon/src/runs.ts never receives this message while the current run is active; if the user refreshes or closes the page before streaming flips to false, queuedMessage is lost and there is no queued daemon run to recover. This breaks the PR's stated cross-reload behavior for the main Queue button flow.

Please persist or submit the queued item before returning there. For example, create a queued daemon run (or a small daemon-side queued-message record/API that can build the final history when it starts) and keep only UI state as a mirror. Add a regression test that queues a follow-up during an active daemon run, remounts/reloads the web view before the first run finishes, and verifies the follow-up still drains from the daemon queue.
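The daemon-side sequencing this review asks for can be sketched as follows. This is a simplified, in-memory model (class and method names beyond enqueue()/maybeStartNext() are illustrative, and the real daemon would persist runs and actually launch the agent), showing how a second run for a busy conversation lands in a queued state and is drained when the active run reaches a terminal state.

```typescript
// Sketch of sequential run handling: enqueue() queues behind an active run
// in the same conversation; finish()/fail hooks drain the next queued run.
type RunState = "queued" | "running" | "done" | "failed";
interface Run {
  id: string;
  conversationId: string;
  state: RunState;
  prompt: string;
}

class RunManager {
  private runs: Run[] = [];

  enqueue(run: Omit<Run, "state">): RunState {
    const active = this.runs.some(
      (r) => r.conversationId === run.conversationId && r.state === "running",
    );
    const state: RunState = active ? "queued" : "running";
    this.runs.push({ ...run, state });
    return state;
  }

  finish(id: string, state: "done" | "failed"): void {
    const run = this.runs.find((r) => r.id === id);
    if (!run) return;
    run.state = state;
    this.maybeStartNext(run.conversationId);
  }

  private maybeStartNext(conversationId: string): void {
    const next = this.runs.find(
      (r) => r.conversationId === conversationId && r.state === "queued",
    );
    if (next) next.state = "running"; // the real daemon would start the agent here
  }

  stateOf(id: string): RunState | undefined {
    return this.runs.find((r) => r.id === id)?.state;
  }
}
```

Because the queue lives in the daemon rather than in component state, a page reload cannot lose it.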

Generated by Looper 0.6.1 · runner=reviewer · agent=opencode

@Moe1211 Moe1211 force-pushed the feat/queued-messages branch from 5f4598f to 3a1c468 Compare May 6, 2026 18:06
@Moe1211 Moe1211 requested a review from mrcfps May 6, 2026 18:06

@mrcfps mrcfps left a comment


@Moe1211, thank you for the continued iteration on the queued-message flow. I found one remaining blocking correctness issue: daemon-queued follow-ups are now persisted, but the prompt they persist can still be based on the in-flight assistant response before it has finished.

Generated by Looper 0.6.1 · runner=reviewer · agent=opencode

persistMessage(userMsg);
persistMessage(assistantMsg);

const transcript = nextHistory

This builds the queued daemon run payload immediately from nextHistory, which is messages plus the new user prompt while the current run is still streaming. At that moment the current assistant message in messages can still be partial (or buffered just before the final chunks flush), and the daemon stores this message string in the starter closure until maybeStartNext() runs later. When the queued run finally starts, startChatRun() uses that stale transcript instead of the completed assistant response, so the follow-up can be answered without the final context from the turn it is supposed to follow.

Because this is the main daemon Queue path and the PR promises sequential follow-up behavior, please defer constructing the daemon prompt until the previous run has reached a terminal state, or persist a compact queued request and have the daemon/web rebuild the transcript from the finalized conversation before starting it. A regression test should queue a follow-up while the first assistant response is still streaming, let the first response append additional/final content, and verify the second run receives the completed first response in its transcript.
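The suggested fix can be sketched like this (an illustrative model, not the PR's code; `buildTranscriptAtStart` and the shapes are assumptions): persist only a compact queued request, and build the transcript from the finalized history at the moment the queued run starts, so the completed assistant response is always included.

```typescript
// Sketch of deferred transcript construction: the queued request stores only
// the prompt; the transcript is assembled from the FINALIZED history later,
// when maybeStartNext() actually launches the queued run.
interface Message {
  role: "user" | "assistant";
  content: string;
}
interface QueuedRequest {
  conversationId: string;
  prompt: string;
}

function buildTranscriptAtStart(
  finalizedHistory: Message[], // read AFTER the previous run reached a terminal state
  request: QueuedRequest,
): Message[] {
  // Appending the queued prompt to the completed history guarantees the
  // follow-up sees the final (not partial) assistant response.
  return [...finalizedHistory, { role: "user", content: request.prompt }];
}
```

Snapshotting the history at enqueue time, by contrast, freezes whatever partial assistant text existed at that moment.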

