
Alternative fix for #2519: keep compression with Live streaming (#2530)

Open
justin808 wants to merge 7 commits into master from codex/2519-alt-no-transform-streaming

Conversation


@justin808 justin808 commented Mar 4, 2026

Summary

  • replace the no-transform approach with a stream-level gzip writer for RSC payload responses
  • keep ActionController::Live chunked streaming behavior (no buffering)
  • set Content-Encoding: gzip and Vary: Accept-Encoding when gzip is used
  • wire RSC payload rendering to request compressed streaming (compress: true)
  • harden gzip negotiation and disconnect handling without disabling compression or streaming
  • add regression tests for gzip streaming headers and RSC payload compression wiring

Why

Issue #2519 is caused by compression middleware predicates (such as Rack::Deflater's should_deflate?) calling body.each on ActionController::Live responses, which can deadlock before the stream writes anything. Instead of disabling transforms globally, this PR preserves both streaming and compression by compressing the live stream directly for clients that accept gzip.

This PR is an alternative solution for issue #2519.

ELI5 Explanation

Imagine we have a hose sending water little by little (streaming). Another machine tried to squeeze the water while also waiting for all the water first (compression middleware), so both sides waited forever and nothing came out.

The fix is: we keep sending little by little, and we put a tiny squeezer directly on the hose itself. That way:

  • streaming still streams right away
  • compression still happens for browsers that want gzip
  • no waiting loop/deadlock
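
The "tiny squeezer on the hose" can be pictured as a thin wrapper around the response stream. The sketch below is illustrative only: class and method names are hypothetical, not the PR's actual implementation. It gzip-compresses each chunk as it is written and flushes with Zlib::SYNC_FLUSH so compressed bytes leave immediately instead of accumulating until close:

```ruby
require "zlib"
require "stringio"

# Hypothetical sketch of stream-level gzip: compress each chunk as it is
# written, flushing so the client receives bytes incrementally.
class StreamingGzipWriter
  def initialize(sink)
    @sink = sink
    @gzip = Zlib::GzipWriter.new(sink)
  end

  def write(chunk)
    @gzip.write(chunk)
    @gzip.flush(Zlib::SYNC_FLUSH) # emit compressed bytes now, not at close
  end

  def close
    @gzip.close # writes the gzip footer/checksum
  end
end

sink = StringIO.new
writer = StreamingGzipWriter.new(sink)
writer.write("chunk one\n")
writer.write("chunk two\n")
writer.close

decoded = Zlib::GzipReader.new(StringIO.new(sink.string)).read
```

Note that close must run at the end so Zlib::GzipWriter emits the gzip footer and checksum; skipping it would deliver a truncated stream to the client.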

Closes #2519

Test Plan

  • cd react_on_rails_pro && bundle exec rspec spec/react_on_rails_pro/stream_spec.rb spec/react_on_rails_pro/rsc_payload_renderer_spec.rb
  • cd react_on_rails_pro && bundle exec rubocop lib/react_on_rails_pro/concerns/stream.rb lib/react_on_rails_pro/concerns/rsc_payload_renderer.rb spec/react_on_rails_pro/stream_spec.rb spec/react_on_rails_pro/rsc_payload_renderer_spec.rb
  • manual curl validation against dummy app with Accept-Encoding: gzip and Accept-Encoding: br to confirm no hang, compression behavior, and continued streaming

Summary by CodeRabbit

  • Bug Fixes

    • Streaming endpoints (including RSC payloads) now negotiate gzip per client Accept-Encoding, gzip-compress output when accepted, set Content-Encoding and Vary headers, and preserve chunked streaming behavior to avoid hanging payloads.
  • Tests

    • Added coverage for gzip and non-gzip streaming, header propagation (Vary/Content-Encoding), client-disconnect scenarios, footer/closure edge cases, and regression checks for streaming compression behavior.


coderabbitai bot commented Mar 4, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.


No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: cb7ee580-788d-4b7c-b5b3-ff6c3ce539cc

📥 Commits

Reviewing files that changed from the base of the PR and between 805d133 and e2e436f.

📒 Files selected for processing (2)
  • react_on_rails_pro/lib/react_on_rails_pro/concerns/stream.rb
  • react_on_rails_pro/spec/react_on_rails_pro/stream_spec.rb

Walkthrough

Adds gzip-capable streaming to the RSC streaming path: controller negotiates gzip via Accept-Encoding, builds either a gzip-wrapping output stream or plain stream, sets Content-Encoding and Vary when gzip is used, routes streamed chunks through the output abstraction, and enables compression for the RSC payload endpoint. Tests added.

Changes

  • Changelog (CHANGELOG.md): Added entry describing prevention of RSC payload streaming deadlocks with compression middleware and documenting gzip-compressed streaming behavior and regression coverage.
  • Streaming core (react_on_rails_pro/lib/react_on_rails_pro/concerns/stream.rb): Introduced compress: option on the public streaming API; added gzip negotiation helpers, GzipOutputStream (with WriterAdapter), output_stream setup, header adjustments (Content-Encoding, Vary), rewired stream writes to an output stream, and enhanced client-disconnect/error handling and concurrent drain logic.
  • RSC payload wiring (react_on_rails_pro/lib/react_on_rails_pro/concerns/rsc_payload_renderer.rb): Enabled compression for RSC payload rendering by passing compress: true to stream_view_containing_react_components.
  • Streaming tests (react_on_rails_pro/spec/react_on_rails_pro/stream_spec.rb): Expanded test scaffolding to mock request/response and Accept-Encoding, exposed request in the test controller, and added helpers and tests covering gzip vs non-gzip flows, header propagation, client-disconnect behavior, gzip footer/close semantics, and GzipOutputStream state.
  • RSC payload tests (react_on_rails_pro/spec/react_on_rails_pro/rsc_payload_renderer_spec.rb): Added test asserting rsc_payload delegates to streaming with template: "react_on_rails_pro/rsc_payload", compress: true, layout: false.

Sequence Diagram(s)

sequenceDiagram
  autonumber
  participant Client
  participant Middleware as Rack::Deflater/Brotli
  participant Controller as RscPayloadController
  participant Stream as ResponseStream
  participant GzipWriter as GzipOutputStream

  rect rgba(200,200,255,0.5)
  Client->>Controller: GET /rsc_payload/:component
  Controller->>Controller: inspect Accept-Encoding -> compress?
  Controller->>Stream: setup output_stream (gzip? → GzipOutputStream : plain)
  alt compress enabled
    Controller->>Controller: set header Content-Encoding: gzip
    Controller->>Controller: add Vary: Accept-Encoding
  end
  Controller->>GzipWriter: write JSON chunks via output_stream
  GzipWriter->>Middleware: yield compressed chunks
  Middleware->>Client: forward response chunks
  Client->>Client: decompress if needed
  end

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Poem

🐰 I wrapped my stream in a cozy gzip coat,
No blocking threads, no middleware bloat.
Vary and Encoding say hello,
Chunks hop out, the data flows.
Hooray — the payloads skip and float!

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage ⚠️ Warning: docstring coverage is 9.09%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.

✅ Passed checks (4 passed)

  • Description Check ✅: check skipped because CodeRabbit's high-level summary is enabled.
  • Title check ✅: the title 'Alternative fix for #2519: keep compression with Live streaming' clearly describes the main change: an alternative approach to fixing issue #2519 while preserving both compression and live streaming.
  • Linked Issues check ✅: all coding requirements from #2519 are met: gzip compression is handled at the stream level to prevent middleware deadlock, RSC payload endpoints set proper Content-Encoding and Vary headers, streaming behavior is preserved, and comprehensive tests validate gzip compression wiring and header propagation.
  • Out of Scope Changes check ✅: all code changes are directly scoped to the requirements: gzip streaming implementation in stream.rb, RSC payload compression wiring, comprehensive test coverage, and changelog documentation. No extraneous modifications detected.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.




claude bot commented Mar 4, 2026

PR Review: Alternative fix for #2519 — Cache-Control: no-transform guard

Overall: The implementation is clean and the intent is sound — adding no-transform before the stream begins is a standard mitigation for compression middleware deadlocks. The code quality is good and the tests are well-structured. A few concerns worth discussing:


Potential Race Condition with ActionController::Live

The core concern: add_no_transform_cache_control_directive runs inside the controller action thread (spawned by ActionController::Live), but Rack middleware reads response headers immediately after app.call(env) returns — which happens before that thread has necessarily executed even the first line of the action.

Typical Rack::Deflater flow:

def call(env)
  status, headers, body = @app.call(env)   # Rails spawns controller thread, returns immediately
  return [status, headers, body] unless should_deflate?(env, status, headers, body)
  # ...compress body via body.each...
end

If should_deflate? reads headers["Cache-Control"] before the controller thread has run add_no_transform_cache_control_directive, no-transform won't be visible and the middleware will still call body.each, causing the deadlock. The window is small but real.

Suggested mitigation: Set the header in a before_action callback or directly on the response object at the controller class level (runs before ActionController::Live hands off to a thread), rather than inside the action method body. For example:

before_action :add_no_transform_cache_control_directive, only: [:stream_view_containing_react_components]

Or set it unconditionally in the included block as a before_action on the entire controller mixing in Stream. That ensures the header is set synchronously during the main request thread before ActionController::Live spawns any background thread.


Test Uses Plain Hash Instead of Real Headers Object

setup_stream_test passes a plain {} as the headers: parameter, and the implementation uses response.headers["Cache-Control"]. In production, response.headers is an ActionDispatch::Response::Headers (or ActionController::Live::Response headers object), which is case-insensitive for HTTP headers. A plain Ruby Hash is case-sensitive, meaning this test wouldn't catch bugs like writing "cache-control" (lowercase) vs. reading "Cache-Control".

It still tests the happy path correctly, but consider using ActionDispatch::Response.new.headers in the test double setup for higher-fidelity coverage.


Missing Test: Case-Insensitive Deduplication

The implementation correctly uses casecmp to avoid duplicating no-transform regardless of casing. However, there's no test exercising this path:

it "does not duplicate when no-transform is in a different case" do
  headers = { "Cache-Control" => "private, NO-TRANSFORM" }
  # ...
  expect(headers["Cache-Control"]).to eq("private, NO-TRANSFORM")
end

Minor omission, but worth covering given the explicit casecmp logic.


Comma-Splitting of Cache-Control

headers["Cache-Control"].to_s.split(",") is fine for all real-world Cache-Control directives (they don't embed quoted commas). No issue here.


CHANGELOG

The entry is clear and accurate. No concerns.


claude bot commented Mar 4, 2026

Review of PR #2530: Skip Compression Transforms for Live Streaming

Overall: Looks good. The fix is correct and well-scoped. A few minor observations below.

What works well

  • add_no_transform_cache_control_directive is correctly called before render_to_string and the first response.stream.write, so the header is set before Rails commits the response — this is the right timing.
  • The RSC payload endpoint is also covered: RSCPayloadRenderer#rsc_payload delegates to stream_view_containing_react_components, so the fix applies to /rsc_payload/:component_name as described.
  • Case-insensitive duplicate detection via casecmp avoids silently doubling the directive.
  • Test coverage is solid: empty header, existing directives preserved, duplicate suppressed.

Minor observations

  1. casecmp? is more idiomatic (see inline). casecmp(x).zero? works, but casecmp?(x) communicates intent more directly and is the modern Ruby idiom.

  2. split(',') does not handle quoted Cache-Control values. RFC 7234 allows quoted-string parameters (e.g. no-cache="Set-Cookie"). If a quoted value ever contained a comma the split would produce a spurious fragment. In practice this virtually never occurs for Cache-Control, and the only directive we write (no-transform) has no value, so this is a theoretical edge case — worth a brief comment rather than a code change.

  3. no-transform is set even on pre-render error paths. If render_to_string raises a PrerenderError the header is already set before the rescue propagates. This is harmless (error responses ignore it), but worth being aware of.


greptile-apps bot commented Mar 4, 2026

Greptile Summary

This PR adds a Cache-Control: no-transform directive to all ActionController::Live streaming responses to prevent Rack compression middleware (e.g. Rack::Deflater, Rack::Brotli) from deadlocking live streams. The fix is applied via a new private helper add_no_transform_cache_control_directive called at the very start of stream_view_containing_react_components, before any async work or response writes are initiated.

Key changes:

  • stream.rb: New add_no_transform_cache_control_directive method that reads the existing Cache-Control header, case-insensitively checks for an existing no-transform directive to avoid duplication, then appends no-transform if absent.
  • stream_spec.rb: setup_stream_test now accepts and returns a headers hash so specs can assert on header state after streaming; three new test cases cover the empty-header, preserve-existing-directives, and no-duplication scenarios.

The approach is clean and minimal. The header is mutated in-place on the shared hash object, which correctly reflects in test assertions. The method is placed before the require "async" calls, meaning a hypothetical header error would fail fast before allocating async resources.
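
Based on this description, the helper can be reconstructed roughly as follows. This is a hedged sketch inferred from the review discussion, not verified against the source; the real method operates on response.headers rather than a plain hash argument:

```ruby
# Sketch of the directive-appending logic the reviews describe:
# case-insensitively skip an existing no-transform, otherwise append it.
def add_no_transform_cache_control_directive(headers)
  directives = headers["Cache-Control"].to_s.split(",").map(&:strip).reject(&:empty?)
  return if directives.any? { |d| d.casecmp?("no-transform") }

  headers["Cache-Control"] = (directives + ["no-transform"]).join(", ")
end
```

As one review notes, a plain Hash stand-in is case-sensitive while ActionDispatch's headers object is not, so tests built on a bare hash cover slightly less than production behavior.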

Confidence Score: 4/5

  • This PR is safe to merge; the change is small, well-targeted, and backed by regression tests.
  • The implementation is logically correct, handles all relevant edge cases (empty header, existing directives, case-insensitive deduplication), and is placed at the right point in the call stack (before any response writes). The only minor concern is using casecmp(...).zero? instead of the more idiomatic and nil-safe casecmp?. No correctness or security issues were identified.
  • No files require special attention.

Important Files Changed

  • react_on_rails_pro/lib/react_on_rails_pro/concerns/stream.rb: Adds add_no_transform_cache_control_directive, called at the top of stream_view_containing_react_components, to prevent Rack compression middleware from deadlocking live streams; logic is correct and handles case-insensitive duplicate detection.
  • react_on_rails_pro/spec/react_on_rails_pro/stream_spec.rb: Extends setup_stream_test to accept and expose a headers hash so tests can assert on Cache-Control mutations; adds three targeted test cases for the new header behavior.

Sequence Diagram

sequenceDiagram
    participant Controller
    participant Stream as ReactOnRailsPro::Stream
    participant Headers as response.headers
    participant Middleware as Rack::Deflater / Rack::Brotli
    participant Buffer as ActionController::Live::Buffer

    Controller->>Stream: stream_view_containing_react_components(template:)
    Stream->>Headers: read Cache-Control
    Headers-->>Stream: existing directives (or nil)
    Stream->>Headers: write Cache-Control += no-transform
    Note over Headers: Header committed before first write

    Stream->>Stream: render_to_string(template:)
    Stream->>Buffer: stream.write(template_string)
    Note over Middleware: Sees Cache-Control: no-transform
    Note over Middleware: Short-circuits — skips body.each predicate
    Note over Middleware: No deadlock risk

    Stream->>Buffer: drain_streams_concurrently (async chunks)
    Buffer-->>Controller: chunks flushed to client
    Stream->>Buffer: stream.close

Last reviewed commit: cbf0f14

@justin808 justin808 changed the title Alternative fix for #2519: skip compression transforms for Live streaming Alternative fix for #2519: keep compression with Live streaming Mar 4, 2026

claude bot commented Mar 4, 2026


PR Review: Alternative fix for #2519

Good approach overall — replacing the no-transform workaround with in-stream gzip compression is more correct and avoids the middleware deadlock without disabling compression entirely.

Issues Found

High: GzipOutputStream#close can leak the underlying stream

If @gzip_writer.close raises (broken pipe, internal Zlib error), @stream.close and @closed = true are never reached. The underlying ActionController::Live response stream stays open, causing the request to hang.

Fix — set @closed = true first to prevent re-entry, then protect @stream.close with ensure:

def close
  return if @closed
  @closed = true

  begin
    @gzip_writer.close
  ensure
    @stream.close
  end
end

Medium: Latent data-corruption when compress: true + close_stream_at_end: false

output_stream is a local variable inside stream_view_containing_react_components. When close_stream_at_end: false, output_stream.close is never called, so the gzip footer/checksum is never written, delivering a corrupt stream to the client. The caller only has access to response.stream; calling response.stream.close directly bypasses GzipOutputStream#close entirely.

The current rsc_payload usage relies on the default close_stream_at_end: true, so this is latent rather than active, but worth guarding. At minimum add a @param note:

# @param compress [Boolean] Requires close_stream_at_end: true (the default) to
#   ensure the gzip footer is written correctly.

Low: gzip_streaming_enabled? double-negation and edge case

return false if content_encoding.present? && !/\bidentity\b/i.match?(content_encoding)

The double-negation (! + match?) makes the intent harder to read. There is also an edge case: a value of "gzip, identity" would pass the guard (the regex matches identity), leading to double-encoding. A more explicit alternative:

return false if content_encoding.present? && content_encoding.downcase != "identity"

Low: GzipOutputStream#write has no return value

WriterAdapter#write correctly returns data.bytesize to satisfy the IO-like interface expected by Zlib::GzipWriter. GzipOutputStream#write implicitly returns nil. Callers in this PR do not use the return value, but returning a byte count would make the class more idiomatic.
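
To illustrate the point, a minimal adapter can return the uncompressed byte count from write. Names mirror the review's description; the real classes in the PR may differ:

```ruby
require "stringio"

# IO-like adapter around a response stream. Returning the byte count from
# write matches the IO#write contract that sinks are expected to follow.
class WriterAdapter
  def initialize(stream)
    @stream = stream
  end

  def write(data)
    @stream.write(data)
    data.bytesize # explicit return value instead of an implicit nil
  end
end
```

For example, WriterAdapter.new(StringIO.new).write("abcd") returns 4.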


Low: Timing-based tests

The new tests synchronize with sleep 0.1, which can be flaky on slow CI. The existing tests use explicit queue-close signals for deterministic teardown. Adding a queue + close signal to the new gzip tests would eliminate the timing dependency.
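
The queue-based synchronization recommended here can be sketched as follows. This is illustrative only: the actual specs use async primitives rather than raw threads, and the sentinel-based shape below is an assumption:

```ruby
require "stringio"

# Deterministic teardown: the producer pushes an explicit :done sentinel,
# so the consumer blocks on the queue instead of guessing with sleep.
chunks = Queue.new
sink = StringIO.new

producer = Thread.new do
  chunks << "first"
  chunks << "second"
  chunks << :done # explicit close signal, no timing dependency
end

while (chunk = chunks.pop) != :done
  sink.write(chunk)
end
producer.join
```

Because the consumer only exits on the sentinel, the test never races the producer regardless of scheduler timing.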


Nit

casecmp("Accept-Encoding").zero? is correct but casecmp?("Accept-Encoding") (available since Ruby 2.4) reads more naturally.


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: ec29c39a70



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
react_on_rails_pro/lib/react_on_rails_pro/concerns/stream.rb (1)

89-117: ⚠️ Potential issue | 🔴 Critical

Stop producer tasks immediately on client disconnect to avoid hang.

If the writer exits on IOError/Errno::EPIPE, @async_barrier.wait can block indefinitely while producers are backpressured on @main_output_queue.enqueue. Cancel producers when disconnect is detected, not only after waiting.

Suggested patch shape
      while (chunk = @main_output_queue.dequeue)
        output_stream.write(chunk)
      end
    rescue IOError, Errno::EPIPE => e
      # Client disconnected - stop writing gracefully
      client_disconnected = true
      log_client_disconnect("writer", e)
+     @async_barrier.stop
    end
@@
    begin
      @async_barrier.wait
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@react_on_rails_pro/lib/react_on_rails_pro/concerns/stream.rb` around lines 89
- 117, The writer rescue currently sets client_disconnected but doesn't cancel
producers immediately; update the rescue in the writing_task (the block that
dequeues from `@main_output_queue`) to call `@async_barrier.stop` (and optionally
`@main_output_queue.close`) as soon as an IOError/Errno::EPIPE is caught so
producers are unblocked; ensure you still call log_client_disconnect("writer",
e) and set client_disconnected = true, then let the ensure block continue
closing the queue and waiting on writing_task as before.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@react_on_rails_pro/lib/react_on_rails_pro/concerns/stream.rb`:
- Around line 35-41: The streaming path in
stream_view_containing_react_components currently enables compression but
doesn't set the required middleware short-circuit guard; update
stream_view_containing_react_components to set the HTTP header "Cache-Control"
to include "no-transform" before building/returning the output stream (e.g., via
the response or headers object used in this context) so downstream
proxies/middleware won't modify the payload; ensure the header is applied early
in the method (before calling build_output_stream and before any compression
logic) and do not remove or overwrite existing Cache-Control directives—merge or
append "no-transform" if present.


ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: bbe3d765-e4c1-486d-976d-0ca00a75e0bc

📥 Commits

Reviewing files that changed from the base of the PR and between cbf0f14 and ec29c39.

📒 Files selected for processing (5)
  • CHANGELOG.md
  • react_on_rails_pro/lib/react_on_rails_pro/concerns/rsc_payload_renderer.rb
  • react_on_rails_pro/lib/react_on_rails_pro/concerns/stream.rb
  • react_on_rails_pro/spec/react_on_rails_pro/rsc_payload_renderer_spec.rb
  • react_on_rails_pro/spec/react_on_rails_pro/stream_spec.rb
🚧 Files skipped from review as they are similar to previous changes (1)
  • CHANGELOG.md

@justin808 (Member Author) commented:

Addressed follow-up review feedback in 80672b7:

  • guaranteed stream closure in gzip wrapper (ensure path) and suppressed disconnect close errors
  • moved gzip header/output stream setup to run only after successful render_to_string
  • made Content-Encoding gate explicit (identity only)
  • raised on compress: true + close_stream_at_end: false to avoid incomplete gzip payloads
  • removed timing-based sleeps from the new gzip tests
  • stopped producer barrier immediately on writer disconnect to avoid wait deadlocks

Manual verification (test env) was also run against the dummy app over HTTP:

  1. Built test bundles and started node renderer + Rails server
  2. curl --max-time 20 -H 'Accept-Encoding: gzip' http://127.0.0.1:3041/rsc_payload/SimpleComponent
  3. curl --max-time 20 -H 'Accept-Encoding: br' http://127.0.0.1:3041/rsc_payload/SimpleComponent
  4. Verified gzip response had Content-Encoding: gzip, plain response omitted it, both returned promptly (no hang), and gzip payload decompressed successfully.


claude bot commented Mar 4, 2026

Review Summary

The approach here is sound — compressing the live stream directly rather than disabling compression middleware is the right architectural fix for the deadlock described in #2519. The GzipOutputStream wrapper is clean, the header management is correct, and the test coverage addresses the important paths (gzip enabled/disabled, header preservation, pre-commit failure, and disconnect suppression).

A few issues worth addressing before merge:

Medium concerns

  1. Rack::Utils.q_values / Rack::Utils.select_best_encoding are private Rack internals (see inline comment on line 157). These are used by Rack::Deflater internally and have been renamed/removed in Rack minor releases before. Relying on them introduces a fragile dependency. Consider inlining a small parser or at minimum adding a compatibility guard + gemspec version pin.

  2. Error swallowing after client disconnect may hide real producer failures (see inline comment on line 110). When @async_barrier.stop is called from the writing task and the barrier then raises, the rescued StandardError could be a genuine application error that raced with the disconnect. At minimum, log it at warn level rather than silently discarding it.

Minor concerns

  1. Content-Encoding multi-value check is fragile (inline on line 147) — the check works for the common case but would pass an already-doubly-encoded body through a second gzip layer if Content-Encoding contained comma-separated values like "deflate, gzip".

  2. GzipOutputStream is missing closed? (inline on line 175) — ActionController::Live::Buffer responds to closed?; not having it on the wrapper is a latent NoMethodError if any duck-typed stream check is added later.

  3. GzipOutputStream#write return value (inline on line 198) — returning data.bytesize (uncompressed) while IO#write semantics imply bytes written to the wire. Not a live bug since callers ignore the return value, but worth a comment.

Correctness observation

The build_output_stream call happens after render_to_string but before any write to the wire — so gzip headers are set before the response is committed. This ordering is correct and I confirmed it matches the code path. Good.

Overall this is a well-structured fix. The medium items (Rack private API and error swallowing) are the ones I'd prioritize before landing.


claude bot commented Mar 5, 2026

Review: Alternative fix for #2519

The approach is sound: compressing at the stream level rather than relying on middleware completely avoids the body.each deadlock. The GzipOutputStream wrapper, SYNC_FLUSH after each chunk for true incremental delivery, and the header guards (Content-Length removal, Vary accumulation) are all correct. Test coverage is thorough and the disconnect-error suppression in GzipOutputStream#close is a nice safety net.

A few issues worth addressing before merge:

Design / correctness

compress: true, close_stream_at_end: false guard is too eager (see inline comment on lines 37–39)
The ArgumentError is raised before Accept-Encoding negotiation, so it fires even when gzip would not be activated (client sends no Accept-Encoding: gzip). In that path no GzipOutputStream is created, there is no footer to finalize, and the guard is wrong to block the combination.

build_output_stream mutates response headers as a side effect (lines 137–142)
The method name suggests pure object construction; the header mutation is hidden. This is a readability and maintainability issue — see inline comment.

parse_accept_encoding: last-occurrence-wins for duplicate tokens (lines 169–183)
Minor RFC compliance gap. Using ||= makes first-occurrence-wins explicit and is a one-character fix — see inline suggestion.

Testing

Plain Hash used in place of ActionDispatch::Response::Headers (stream_spec.rb lines 389–396)
Production headers support case-insensitive lookup; a plain Hash does not. Code that reads back a header with different casing would pass in tests but fail in production.

Neutral observations

  • WriterAdapter#close being a no-op is intentional and correct — GzipOutputStream#close manages the lifecycle explicitly.
  • The gzip;q=0 and identity;q=0 edge cases are handled correctly by the positive? check and the quality comparison.
  • The @async_barrier.stop added inside the writing-task rescue (previously only in ensure) is an improvement; see the inline note about the fiber-scheduling ordering that makes it safe.
  • The render_to_string → build_output_stream → first write ordering correctly ensures headers are only set when rendering succeeds.
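
The first-occurrence-wins parsing suggested in this review can be sketched like this. The helper name and shape are illustrative, assumed from the discussion rather than taken from the PR:

```ruby
# Parse an Accept-Encoding header into { token => quality }.
# ||= keeps the first occurrence of a duplicate token, per the review's
# suggestion, instead of letting later duplicates overwrite earlier ones.
def parse_accept_encoding(header)
  qualities = {}
  header.to_s.split(",").each do |entry|
    token, q_param = entry.split(";").map(&:strip)
    next if token.nil? || token.empty?

    quality = q_param&.start_with?("q=") ? q_param.delete_prefix("q=").to_f : 1.0
    qualities[token.downcase] ||= quality # first occurrence wins
  end
  qualities
end
```

For example, "gzip;q=0.5, gzip;q=0.9" resolves gzip to 0.5, because the second occurrence is ignored.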


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@react_on_rails_pro/lib/react_on_rails_pro/concerns/stream.rb`:
- Around line 165-166: The rescue in Stream parsing currently only catches
ArgumentError (in the method handling q-param parsing) so a malformed header
like "Accept-Encoding: gzip;q" raising TypeError (from Float(nil)) escapes and
causes a 500; update the rescue in the method (the block that currently has
"rescue ArgumentError" in react_on_rails_pro/concerns/stream.rb) to also rescue
TypeError (e.g., "rescue ArgumentError, TypeError") so malformed q params fall
back to plain streaming, and add/extend the existing spec (related to the
q-param parsing tests) to include a case for "q" without a value to assert no
exception and correct fallback behavior.
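
The failure mode described here, and the suggested rescue of TypeError alongside ArgumentError, can be illustrated with a standalone q-value parser. This is a hypothetical helper, not the PR's exact code:

```ruby
# Float(nil) raises TypeError and Float("abc") raises ArgumentError;
# rescuing both makes malformed q params fall back to plain streaming
# instead of propagating a 500.
def gzip_quality(accept_encoding)
  accept_encoding.to_s.split(",").each do |entry|
    token, *params = entry.split(";").map(&:strip)
    next unless token&.casecmp?("gzip")

    q_param = params.find { |p| p == "q" || p.start_with?("q=") }
    return 1.0 unless q_param

    begin
      return Float(q_param.split("=", 2)[1])
    rescue ArgumentError, TypeError
      return 0.0 # malformed q value: treat as "do not compress"
    end
  end
  0.0
end
```

With this shape, "gzip;q" (q with no value) hits the TypeError branch and safely disables compression rather than raising.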

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: e56eea69-a9be-4959-9a31-10ab3ae8c838

📥 Commits

Reviewing files that changed from the base of the PR and between 80672b7 and 805d133.

📒 Files selected for processing (2)
  • react_on_rails_pro/lib/react_on_rails_pro/concerns/stream.rb
  • react_on_rails_pro/spec/react_on_rails_pro/stream_spec.rb

rescue StandardError => e
  @async_barrier.stop
- raise e
+ raise e unless client_disconnected

Silently suppressing barrier errors on client disconnect can mask real component rendering errors. A barrier task that raises RuntimeError (genuine bug in a component) at the same moment the client disconnects will be swallowed here. Consider only suppressing errors whose class is IOError/Errno::EPIPE, or at least re-raising if the error is not a connection-level exception:

Suggested change
- raise e unless client_disconnected
+ raise e unless client_disconnected || e.is_a?(IOError) || e.is_a?(Errno::EPIPE)

@@ -113,11 +127,132 @@ def drain_streams_concurrently(parent_task)
@async_barrier.stop if client_disconnected

On the client-disconnect path, @async_barrier.stop is called three times total:

  1. In the writing_task rescue block (line 104)
  2. In the rescue StandardError block above (line 111)
  3. Here in ensure

Steps 2 and 3 are redundant since the barrier is already stopped by step 1. This is likely idempotent, but clarifying the intent (or removing the redundant calls) would reduce confusion when reading the flow.

buffer_size = ReactOnRailsPro.configuration.concurrent_component_streaming_buffer_size
@main_output_queue = Async::LimitedQueue.new(buffer_size)
gzip_streaming_enabled = gzip_streaming_enabled?(compress)
if gzip_streaming_enabled && !close_stream_at_end

This argument validation fires inside the Sync block, which initializes the async event loop before a simple precondition check. Moving the guard before the Sync do call avoids unnecessary async context setup for invalid arguments:

Suggested change
-   if gzip_streaming_enabled && !close_stream_at_end
+ def stream_view_containing_react_components(template:, close_stream_at_end: true, compress: false, **render_options)
+   if compress && !close_stream_at_end
+     raise ArgumentError, "compress: true requires close_stream_at_end: true to finalize gzip footer"
+   end
+   require "async"

end

def write(data)
@stream.write(data)

data.bytesize returns the uncompressed byte count, but the actual bytes written to the underlying stream are the compressed bytes (fewer bytes). The return value isn't used by any callers today, but this is semantically misleading if the interface is ever consumed expecting the number of bytes transmitted. A comment noting this intentional mismatch would help future readers.
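A hypothetical reconstruction of the adapter shape under discussion (the body is assumed, not copied from the PR) makes the mismatch concrete:

```ruby
require "stringio"

# Illustrative sketch only: this adapter sits between Zlib::GzipWriter and
# the live response stream. The quirk flagged above is that #write returns
# data.bytesize, the size of the chunk passed in, not the number of
# (compressed) bytes that actually reach the underlying stream.
class WriterAdapter
  def initialize(stream)
    @stream = stream
  end

  def write(data)
    @stream.write(data)
    data.bytesize # uncompressed chunk size, not a transport-level byte count
  end

  # Intentional no-op so Zlib::GzipWriter#close cannot close the response stream.
  def close; end
end

adapter = WriterAdapter.new(StringIO.new)
puts adapter.write("hello") # 5
```

Since the return value is unused today, a clarifying comment like the one above is enough; no behavior change is needed.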

identity_quality = parsed_accept_encoding.fetch("identity", wildcard_quality || 1.0)

gzip_quality.positive? && gzip_quality >= identity_quality
rescue ArgumentError, TypeError

Bug: TypeError from Float(nil) is not rescued

When the Accept-Encoding header contains gzip;q (a q parameter without = or a value), the flow is:

"gzip;q".split("=", 2) => ["q"]
key, value = ["q"]       => key="q", value=nil
Float(nil)               => TypeError (not ArgumentError)

Float(nil) raises TypeError, but this method only rescues ArgumentError. The TypeError escapes request_accepts_gzip?, propagates through gzip_streaming_enabled? and into the Sync block, where it is caught and re-raised — causing the stream to fail instead of silently falling back to plain streaming.

The spec "keeps plain streaming when Accept-Encoding has q without a value" would actually fail with an error rather than passing.

Fix:

Suggested change
- rescue ArgumentError
+ rescue ArgumentError, TypeError

Alternatively, guard before calling Float:

return Float(value) unless value.nil? || value.empty?
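A self-contained sketch of the negotiation logic as described in this thread, with the broadened rescue applied. Method names mirror the review discussion; the PR's exact parsing may differ:

```ruby
# Illustrative reconstruction, not the PR's code. Parses Accept-Encoding
# q-values and decides whether gzip may be used; malformed q params fall
# back to plain streaming instead of raising.
def parse_accept_encoding_quality(header)
  header.to_s.split(",").each_with_object({}) do |entry, qualities|
    coding, q_param = entry.split(";", 2).map(&:strip)
    next if coding.nil? || coding.empty?

    quality = 1.0
    if q_param
      _key, value = q_param.split("=", 2)
      quality = Float(value) # raises TypeError when value is nil (e.g. "gzip;q")
    end
    qualities[coding.downcase] = quality
  end
end

def request_accepts_gzip?(header)
  parsed = parse_accept_encoding_quality(header)
  wildcard_quality = parsed["*"]
  gzip_quality = parsed.fetch("gzip", wildcard_quality || 0.0)
  identity_quality = parsed.fetch("identity", wildcard_quality || 1.0)
  gzip_quality.positive? && gzip_quality >= identity_quality
rescue ArgumentError, TypeError
  false # malformed q params fall back to plain (uncompressed) streaming
end

puts request_accepts_gzip?("gzip")     # true
puts request_accepts_gzip?("gzip;q=0") # false: RFC 7231 explicit exclusion
puts request_accepts_gzip?("gzip;q")   # false: malformed, no exception
```

With only `rescue ArgumentError`, the last call would raise TypeError instead of returning false, which is exactly the failure mode described above.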


content_encoding = response.headers["Content-Encoding"].to_s
content_encoding_values = content_encoding.split(",").map { |value| value.downcase.strip }.reject(&:blank?)
return false if content_encoding_values.present? && content_encoding_values != ["identity"]

Fragile array-equality comparison for identity values

content_encoding_values != ["identity"] only matches when there is exactly one value and it is "identity". If a response somehow has Content-Encoding: identity, identity (duplicated), the resulting array ["identity", "identity"] is != ["identity"], causing this method to return false and skip gzip — even though the effective encoding is still "no encoding" (identity).

More robustly:

Suggested change
- return false if content_encoding_values.present? && content_encoding_values != ["identity"]
+ return false if content_encoding_values.any? { |v| v != "identity" }

This allows any number of identity entries while still blocking compression when any non-identity encoding is present.
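The difference is easy to check in plain Ruby (`strip`/`empty?` stand in for ActiveSupport's `blank?`/`present?` so the snippet runs without Rails):

```ruby
# Duplicated-identity header, as in the hypothetical "identity, identity" case.
values = "identity, identity".split(",").map { |v| v.downcase.strip }.reject(&:empty?)

strict_skips_gzip  = !values.empty? && values != ["identity"] # current comparison
lenient_skips_gzip = values.any? { |v| v != "identity" }      # suggested comparison

puts strict_skips_gzip  # true: compression wrongly skipped
puts lenient_skips_gzip # false: compression still allowed
```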

##### Fixed

- **Fix streaming SSR hangs and silent error absorption in RSC payload injection**: Fixed two related issues: (1) streaming SSR renders hanging forever when errors occur because Node.js `stream.pipe()` doesn't propagate errors or closure from source to destination, and (2) errors in the RSC payload injection pipeline being silently absorbed, preventing them from reaching error reporters like Sentry. Introduced a shared `safePipe` utility and used `'close'` events as reliable termination signals across the streaming pipeline (Node renderer, RSC payload injection, transform streams, and Ruby async task). Also added a Ruby safety net to prevent Rails request hangs when async rendering tasks raise before the first chunk. [PR 2407](https://github.com/shakacode/react_on_rails/pull/2407) by [AbanoubGhadban](https://github.com/AbanoubGhadban).
- **Prevented RSC payload streaming deadlocks with compression middleware while keeping streaming and compression enabled**. The RSC payload endpoint now streams gzip-compressed output directly when the client accepts gzip, sets `Content-Encoding: gzip` and `Vary: Accept-Encoding`, and keeps chunked `ActionController::Live` behavior. This avoids hangs on `/rsc_payload/:component_name` caused by middleware `if` predicates that enumerate `body.each`, without disabling streaming or response compression. Added regression coverage for gzip streaming header behavior and RSC payload compression wiring. [PR 2530](https://github.com/shakacode/react_on_rails/pull/2530) by [justin808](https://github.com/justin808).

Code Review Summary

Overall this is a well-designed fix for the deadlock in issue #2519. Compressing directly at the stream level is the right approach — it avoids the middleware body.each enumeration problem without disabling compression globally. The implementation is layered cleanly and the test suite is thorough.

Issues worth addressing (see inline comments on stream.rb):

  1. Line 112 of stream.rb — Barrier errors silently swallowed on client disconnect can mask genuine component rendering errors that happen to coincide with a disconnect. Only connection-level exceptions (IOError/Errno::EPIPE) should be suppressed.

  2. Line 47 of stream.rb — The compress: true && !close_stream_at_end guard fires inside the Sync block, initializing the async runtime before a simple precondition check. Moving it before Sync do avoids unnecessary event-loop startup for invalid arguments.

  3. Lines 104/111/127 of stream.rb: @async_barrier.stop is called three times on the client disconnect path. Idempotent in practice but worth simplifying.

Minor:

  • WriterAdapter#write returns data.bytesize (uncompressed bytes), not the actual compressed bytes written to the stream. Harmless since the return value is unused by all callers, but semantically misleading.
  • Float(value) raising ArgumentError as the control-flow signal for malformed q-values works correctly but is an unusual pattern.

Strengths:

  • SYNC_FLUSH after every write is correct — ensures chunks are delivered immediately rather than held in the gzip buffer.
  • WriterAdapter#close being a no-op correctly prevents Zlib::GzipWriter#close from closing the underlying stream prematurely.
  • Headers are only set after render_to_string succeeds, so pre-commit render failures do not poison Content-Encoding. The regression test for this behavior is a good catch.
  • Accept-Encoding negotiation correctly handles wildcards, quality ordering, identity;q=0, and malformed inputs.
  • Tests cover the non-trivial edge cases well.

log_client_disconnect("writer", e)
# `Async` does not yield between setting the flag and stopping the barrier.
# This guarantees the rescue path can observe `client_disconnected == true`.
@async_barrier.stop

Potential unhandled exception type from @async_barrier.wait after barrier stop

When writing_task rescues an IOError/Errno::EPIPE and calls @async_barrier.stop, the component producer tasks inside the barrier are interrupted. Depending on the version of the async gem, stopping tasks may cause Async::Stop (which inherits from Exception, not StandardError) to propagate when @async_barrier.wait is called on line 109.

If @async_barrier.wait raises Async::Stop (or another non-StandardError), the rescue StandardError => e on line 110 will not catch it — the client_disconnected check and the graceful swallow on lines 112–115 are bypassed, and the exception leaks up to the caller.

This is worth verifying against the exact Async gem version in use. If Async::Barrier#wait does surface Stop as a StandardError wrapper, the current code is fine. If not, consider broadening the rescue:

rescue Exception => e # rubocop:disable Lint/RescueException
  raise unless client_disconnected || e.is_a?(Async::Stop)
  ...
end

or checking the Async gem's documented contract for barrier wait after stop.

@claude

claude bot commented Mar 5, 2026

Review: Alternative fix for #2519 — gzip streaming for RSC payloads

The approach here is sound: instead of disabling compression middleware globally via no-transform, the PR wraps the live response stream in a GzipOutputStream that compresses inline, avoids the middleware's body.each deadlock, and keeps chunked streaming intact. The Accept-Encoding negotiation logic and header management are well-structured, and the test coverage is extensive.

I found one confirmed bug and two concerns worth addressing before merge.


Bug — TypeError not rescued in request_accepts_gzip? (line 167)

parse_accept_encoding_quality calls Float(value) where value can be nil (when the header has gzip;q — a q param with no =). Float(nil) raises TypeError, but request_accepts_gzip? only rescues ArgumentError. The TypeError escapes, propagates into the Sync block, is re-raised, and crashes the stream instead of silently falling back to plain streaming.

The spec "keeps plain streaming when Accept-Encoding has q without a value" would fail at CI for this reason (it expects no error and no Content-Encoding header, but gets a TypeError).

Fix: rescue ArgumentError, TypeError (or guard value.nil? before calling Float). See inline comment.


Concern — @async_barrier.stop inside writing_task rescue (line 104)

Calling @async_barrier.stop from inside the writing task's rescue block and then @async_barrier.wait from the main flow may surface Async::Stop < Exception (not StandardError) from wait, bypassing the rescue StandardError guard and leaking the exception. This depends on the Async gem version's contract for Barrier#wait after stop. Worth a quick verification or a targeted test that triggers this exact path. See inline comment.


Minor — content_encoding_values != ["identity"] comparison (line 151)

An exact array-equality check fails for unusual (but theoretically valid) inputs like "identity, identity". Using content_encoding_values.any? { |v| v != "identity" } is more robust. Low real-world risk but easy to fix. See inline comment.


Missing test — explicit gzip rejection (gzip;q=0)

There is no test for Accept-Encoding: gzip;q=0. RFC 7231 treats q=0 as an explicit exclusion. The gzip_quality.positive? check on line 166 handles it correctly, but a regression test would make the intent explicit.


Everything else looks good

  • GzipOutputStream is cleanly designed: WriterAdapter#close is intentionally a no-op (only the outer close closes the stream), SYNC_FLUSH after every write ensures chunks are not buffered inside zlib, and the double-close guard is correct.
  • The prepare_gzip_streaming_headers method correctly handles Vary: *, case-insensitive Accept-Encoding deduplication, and Content-Length removal.
  • Placing setup_output_stream after render_to_string is intentional and correct — gzip headers are only committed if the template renders successfully.
  • The close_stream_at_end: false + gzip guard with a clear ArgumentError message is a nice API safety net.
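The core technique this review signs off on can be sketched in isolation. This is an illustrative reconstruction, not the PR's API; `GzipChunkStream` is an assumed name and `StringIO` stands in for the live response stream:

```ruby
require "stringio"
require "zlib"

# Stream-level gzip: wrap the outgoing stream in Zlib::GzipWriter and
# SYNC_FLUSH after every chunk so nothing sits in zlib's buffer between
# writes, preserving chunked delivery.
class GzipChunkStream
  def initialize(io)
    @gzip = Zlib::GzipWriter.new(io)
  end

  def write(chunk)
    @gzip.write(chunk)
    @gzip.flush(Zlib::SYNC_FLUSH) # deliver this chunk now, keep streaming
  end

  def close
    # #finish writes the gzip footer without closing the underlying IO;
    # the PR achieves the same effect with a no-op adapter #close. Needing
    # this footer is why compress: true requires close_stream_at_end: true.
    @gzip.finish
  end
end

io = StringIO.new(+"")
stream = GzipChunkStream.new(io)
stream.write("first chunk\n")
stream.write("second chunk\n")
stream.close

# Round-trip check: the flushed chunks decompress intact.
puts Zlib::GzipReader.new(StringIO.new(io.string)).read
```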


Development

Successfully merging this pull request may close these issues.

RSC payload streaming deadlocks with Rack::Deflater and Rack::Brotli compression middleware
