
Conversation

@bitemyapp commented Oct 9, 2025

Fixes: #4751 #3675 #4550

This implements toggleable stdout/stderr streaming for live commands in codex-tui.

Contribution guidelines

Before opening this Pull Request, please read the dedicated "Contributing" markdown file or your PR may be closed:
https://github.com/openai/codex/blob/main/docs/contributing.md

At the moment, we only plan to prioritize reviewing external contributions for bugs or security fixes.

If you want to add a new feature or change the behavior of an existing one, please open an issue proposing the feature and get approval from an OpenAI team member before spending time building it.

New contributions that don't go through this process may be closed if they aren't aligned with our current roadmap or conflict with other priorities/upcoming features.

I know, I just don't care. I'm posting this in case anyone else wants it. I'm happy to sign a CLA so you can reuse what I did to get this feature fully integrated faster.

I did run `cargo test && cargo clippy --tests && cargo fmt -- --config imports_granularity=Item`, and I amended one test so that it would pass. You may want to be able to test both a preview and a non-preview mode of this functionality, but I chose to keep it simple for now.

This is marked Draft: because it is not yet in a mergeable state. I'm putting this up in case it saves OpenAI time, or in case anyone else wants the feature as I did. I will be using this fork of the TUI client until you integrate this functionality.

Screenshots:

[Five screenshots attached]

Made with codex, transcript attached

I used codex to write 100% of this. I'm attaching a transcript of my interactions with codex-tui below so that you can evaluate it: codex-stdout-streaming-transcript.txt

Request changes if you'd like any!

I'm happy to make any changes you'd like in order to get this merged. For the sake of transparency, I'm including all of the context from my work with Codex on this PR, including the markdown documents of the GitHub issues and the markdown plan I asked gpt5-codex-high to formulate for this effort. I also haven't squashed the history yet, again for transparency; I can do that at your request.

Caveats discovered so far

The main behavioral caveat I've noticed with the current implementation is that even when the command's output is folded, updates to the folded one-line preview seemed to scroll the bottom view of the TUI up to the top third of my terminal. I tested with ghostty on Ubuntu 25.04; I still need to test with gnome-terminal and with Terminal.app on macOS to see whether it reproduces there. It wasn't fatally annoying, but I wanted to be careful and note the possibly aberrant behavior. One straightforward fix would be to eliminate the one-line preview in the folded view, but it makes me happy, so I'd rather find a way to make it work.

I haven't tested whether the one-line preview or the unfolded view does anything intelligent with whitespace-only output. I wasn't sure what behavior would make the most sense here, so I left it alone; you'd know better than I what's best. It's possible you'd want to debounce the one-line preview so it only updates on the next non-whitespace line, or show a whitespace count after the last non-whitespace line.
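To illustrate the debounce idea, here is a minimal sketch; none of these names come from this PR, it's just one way the whitespace handling could work:

```rust
/// Sketch: debounce the one-line preview so whitespace-only lines never
/// replace the last meaningful line; they accumulate into a count instead.
fn debounced_preview(last: &mut String, blank_run: &mut usize, line: &str) -> String {
    if line.trim().is_empty() {
        *blank_run += 1;
        // Keep showing the last real line, annotated with the blank count.
        format!("{last} [+{blank_run} blank]")
    } else {
        *blank_run = 0;
        *last = line.trim_end().to_string();
        last.clone()
    }
}

fn main() {
    let (mut last, mut run) = (String::new(), 0usize);
    for line in ["compiling foo v0.1.0", "", "", "finished in 2.3s"] {
        println!("{}", debounced_preview(&mut last, &mut run, line));
    }
}
```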

Skipped spooling

I think the complexity and the SSD writes aren't worth it, so I didn't look into spooling the stream to disk.
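If memory growth ever becomes a concern, a bounded in-memory buffer is the obvious middle ground; a minimal sketch of what I mean (illustrative only, not the data structure this PR actually uses):

```rust
use std::collections::VecDeque;

/// Sketch: retain only the most recent `cap` lines of live output,
/// evicting the oldest, instead of spooling the stream to disk.
struct LiveBuffer {
    lines: VecDeque<String>,
    cap: usize,
}

impl LiveBuffer {
    fn new(cap: usize) -> Self {
        Self { lines: VecDeque::with_capacity(cap), cap }
    }

    fn push(&mut self, line: String) {
        if self.lines.len() == self.cap {
            self.lines.pop_front(); // drop the oldest line
        }
        self.lines.push_back(line);
    }
}
```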

github-actions bot commented Oct 9, 2025

All contributors have signed the CLA ✍️ ✅
Posted by the CLA Assistant Lite bot.

@chatgpt-codex-connector bot (Contributor) left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.


Comment on lines 537 to +577

```diff
         self.defer_or_handle(|q| q.push_exec_begin(ev), |s| s.handle_exec_begin_now(ev2));
     }

-    fn on_exec_command_output_delta(
-        &mut self,
-        _ev: codex_core::protocol::ExecCommandOutputDeltaEvent,
-    ) {
-        // TODO: Handle streaming exec output if/when implemented
+    fn on_exec_command_output_delta(&mut self, ev: ExecCommandOutputDeltaEvent) {
+        let ExecCommandOutputDeltaEvent {
+            call_id,
+            stream,
+            chunk,
+        } = ev;
+
+        if let Some(cell) = self
+            .active_cell
+            .as_mut()
+            .and_then(|c| c.as_any_mut().downcast_mut::<ExecCell>())
+        {
+            let preview = cell.append_live_chunk(&call_id, stream, &chunk);
+            let has_live = !cell.live_lines_for_display().is_empty();
+            if let Some(text) = preview.or_else(|| cell.latest_live_preview()) {
+                self.bottom_pane.set_status_live_line(Some(text));
+            }
+            if has_live {
+                self.bottom_pane.set_live_output_toggle_enabled(true);
+            }
+            self.bottom_pane
+                .set_live_output_visible(cell.show_live_output());
+            self.request_redraw();
+            return;
+        }
+
+        if !chunk.is_empty() {
+            let fallback = String::from_utf8_lossy(&chunk).to_string();
+            if !fallback.is_empty() {
+                self.bottom_pane
+                    .set_status_live_line(Some(fallback.trim_end_matches('\n').to_string()));
```

P1: Queue exec output deltas to preserve ordering

Exec output streaming is handled immediately in `on_exec_command_output_delta`, while begin and end events still use `defer_or_handle` and the interrupt queue. When the UI is already processing another stream (e.g., reasoning text) the begin event is queued, but output deltas are processed right away and the active cell has not been created yet, so the fallback branch runs and the live chunk is discarded. Users will miss the first part of command output and the toggle hint never appears. Output delta events should be queued alongside `ExecCommandBeginEvent`/`ExecCommandEndEvent` to ensure they are processed after the corresponding begin event.
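A hedged sketch of what this suggestion amounts to, mirroring the `defer_or_handle` pattern already used for the begin event in the diff above; `push_exec_output_delta` and `handle_exec_output_delta_now` are hypothetical names, not methods defined in this PR:

```rust
// Sketch only: route output deltas through the same interrupt queue as the
// begin/end events so a delta can never be processed before its
// ExecCommandBeginEvent has created the active ExecCell.
fn on_exec_command_output_delta(&mut self, ev: ExecCommandOutputDeltaEvent) {
    let ev2 = ev.clone();
    self.defer_or_handle(
        |q| q.push_exec_output_delta(ev),        // queued path (hypothetical method)
        |s| s.handle_exec_output_delta_now(ev2), // immediate path (hypothetical method)
    );
}
```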


@bitemyapp (Author) commented Oct 9, 2025

I have read the CLA Document and I hereby sign the CLA.

@bitemyapp force-pushed the bitemyapp/stdout-streaming branch from 529b271 to 87bd85b, October 9, 2025 16:55
@bitemyapp (Author) commented
Fired off a rebase. Following up on the bot's review note in a bit.

@bitemyapp marked this pull request as draft, October 9, 2025 17:07
@bitemyapp (Author) commented
Converted to a draft in GitHub, forgot GitHub doesn't respect Draft: prefixes on PR/MR titles like GitLab.

@bitemyapp (Author) commented
I have read the CLA Document and I hereby sign the CLA

github-actions bot added a commit that referenced this pull request Oct 9, 2025
@bitemyapp force-pushed the bitemyapp/stdout-streaming branch from 87bd85b to a58165a, October 9, 2025 19:18
Successfully merging this pull request may close these issues: Feature Request: Stream real-time output from long-running shell commands