Alternative fix for #2519: keep compression with Live streaming (#2530)
Conversation
Walkthrough

Adds gzip-capable streaming to the RSC streaming path: the controller negotiates gzip via Accept-Encoding, builds either a gzip-wrapping output stream or a plain stream, sets Content-Encoding and Vary when gzip is used, routes streamed chunks through the output abstraction, and enables compression for the RSC payload endpoint. Tests added.
Sequence Diagram(s)

sequenceDiagram
autonumber
participant Client
participant Middleware as Rack::Deflater/Brotli
participant Controller as RscPayloadController
participant Stream as ResponseStream
participant GzipWriter as GzipOutputStream
rect rgba(200,200,255,0.5)
Client->>Controller: GET /rsc_payload/:component
Controller->>Controller: inspect Accept-Encoding -> compress?
Controller->>Stream: setup output_stream (gzip? → GzipOutputStream : plain)
alt compress enabled
Controller->>Controller: set header Content-Encoding: gzip
Controller->>Controller: add Vary: Accept-Encoding
end
Controller->>GzipWriter: write JSON chunks via output_stream
GzipWriter->>Middleware: yield compressed chunks
Middleware->>Client: forward response chunks
Client->>Client: decompress if needed
end
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed (1 warning)
PR Review: Alternative fix for #2519 — Cache-Control: no-transform guard

Overall: The implementation is clean and the intent is sound.

Potential Race Condition with ActionController::Live

The core concern: a typical deflater middleware looks like this:

```ruby
def call(env)
  status, headers, body = @app.call(env) # Rails spawns controller thread, returns immediately
  return [status, headers, body] unless should_deflate?(env, status, headers, body)
  # ...compress body via body.each...
end
```

If the middleware inspects the headers before the streaming thread has committed them, the guard can be missed. Suggested mitigation: set the header in a before_action:

```ruby
before_action :add_no_transform_cache_control_directive,
              only: [:stream_view_containing_react_components]
```

Or set it unconditionally in the streaming entry point before the first write.

Test Uses Plain Hash Instead of Real Headers Object
It still tests the happy path correctly, but consider using a real Rails headers object rather than a plain hash.

Missing Test: Case-Insensitive Deduplication

The implementation correctly matches case-insensitively, but no spec covers it:

```ruby
it "does not duplicate when no-transform is in a different case" do
  headers = { "Cache-Control" => "private, NO-TRANSFORM" }
  # ...
  expect(headers["Cache-Control"]).to eq("private, NO-TRANSFORM")
end
```

Minor omission, but worth covering given the explicit comma-splitting of Cache-Control.

CHANGELOG

The entry is clear and accurate. No concerns.
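The dedup-and-append behavior under discussion can be sketched as a small standalone helper (the name `add_no_transform` is hypothetical; the real concern's method and header plumbing may differ):

```ruby
# Hedged sketch, not the actual React on Rails Pro implementation:
# append no-transform to a Cache-Control value without duplicating it,
# matching existing directives case-insensitively after comma-splitting.
def add_no_transform(cache_control)
  directives = cache_control.to_s.split(",").map(&:strip).reject(&:empty?)
  return cache_control if directives.any? { |d| d.casecmp?("no-transform") }

  (directives + ["no-transform"]).join(", ")
end
```

With this shape, `add_no_transform("private, NO-TRANSFORM")` leaves the value untouched, while a missing directive gets appended.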
Review of PR #2530: Skip Compression Transforms for Live Streaming

Overall: Looks good. The fix is correct and well-scoped. A few minor observations below.

What works well
Minor observations
Greptile Summary

This PR adds a `no-transform` Cache-Control directive for live-streamed responses.

Key changes:

The approach is clean and minimal. The header is mutated in-place on the shared hash object, which correctly reflects in test assertions.

Confidence Score: 4/5

Important Files Changed
Sequence Diagram

sequenceDiagram
participant Controller
participant Stream as ReactOnRailsPro::Stream
participant Headers as response.headers
participant Middleware as Rack::Deflater / Rack::Brotli
participant Buffer as ActionController::Live::Buffer
Controller->>Stream: stream_view_containing_react_components(template:)
Stream->>Headers: read Cache-Control
Headers-->>Stream: existing directives (or nil)
Stream->>Headers: write Cache-Control += no-transform
Note over Headers: Header committed before first write
Stream->>Stream: render_to_string(template:)
Stream->>Buffer: stream.write(template_string)
Note over Middleware: Sees Cache-Control: no-transform
Note over Middleware: Short-circuits — skips body.each predicate
Note over Middleware: No deadlock risk
Stream->>Buffer: drain_streams_concurrently (async chunks)
Buffer-->>Controller: chunks flushed to client
Stream->>Buffer: stream.close
Last reviewed commit: cbf0f14
PR Review: Alternative fix for #2519

Good approach overall — replacing the `no-transform` guard with a stream-level gzip writer.

Issues Found

High: If `close` is invoked more than once, the gzip writer can be finalized twice. Fix — set a `@closed` guard and close the gzip writer before the underlying stream:

```ruby
def close
  return if @closed

  @closed = true
  begin
    @gzip_writer.close
  ensure
    @stream.close
  end
end
```
endMedium: Latent data-corruption when
The current Low: return false if content_encoding.present? && !/\bidentity\b/i.match?(content_encoding)The double-negation ( return false if content_encoding.present? && content_encoding.downcase != "identity"Low:
Low: Timing-based tests The new tests synchronize with Nit
|
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: ec29c39a70
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
react_on_rails_pro/lib/react_on_rails_pro/concerns/stream.rb (1)
89-117: ⚠️ Potential issue | 🔴 Critical

Stop producer tasks immediately on client disconnect to avoid hang.

If the writer exits on IOError/Errno::EPIPE, `@async_barrier.wait` can block indefinitely while producers are backpressured on `@main_output_queue.enqueue`. Cancel producers when disconnect is detected, not only after waiting.

Suggested patch shape:

```diff
   while (chunk = @main_output_queue.dequeue)
     output_stream.write(chunk)
   end
 rescue IOError, Errno::EPIPE => e
   # Client disconnected - stop writing gracefully
   client_disconnected = true
   log_client_disconnect("writer", e)
+  @async_barrier.stop
 end
@@
 begin
   @async_barrier.wait
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@react_on_rails_pro/lib/react_on_rails_pro/concerns/stream.rb` around lines 89 - 117, The writer rescue currently sets client_disconnected but doesn't cancel producers immediately; update the rescue in the writing_task (the block that dequeues from `@main_output_queue`) to call `@async_barrier.stop` (and optionally `@main_output_queue.close`) as soon as an IOError/Errno::EPIPE is caught so producers are unblocked; ensure you still call log_client_disconnect("writer", e) and set client_disconnected = true, then let the ensure block continue closing the queue and waiting on writing_task as before.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@react_on_rails_pro/lib/react_on_rails_pro/concerns/stream.rb`:
- Around line 35-41: The streaming path in
stream_view_containing_react_components currently enables compression but
doesn't set the required middleware short-circuit guard; update
stream_view_containing_react_components to set the HTTP header "Cache-Control"
to include "no-transform" before building/returning the output stream (e.g., via
the response or headers object used in this context) so downstream
proxies/middleware won't modify the payload; ensure the header is applied early
in the method (before calling build_output_stream and before any compression
logic) and do not remove or overwrite existing Cache-Control directives—merge or
append "no-transform" if present.
---
Outside diff comments:
In `@react_on_rails_pro/lib/react_on_rails_pro/concerns/stream.rb`:
- Around line 89-117: The writer rescue currently sets client_disconnected but
doesn't cancel producers immediately; update the rescue in the writing_task (the
block that dequeues from `@main_output_queue`) to call `@async_barrier.stop` (and
optionally `@main_output_queue.close`) as soon as an IOError/Errno::EPIPE is
caught so producers are unblocked; ensure you still call
log_client_disconnect("writer", e) and set client_disconnected = true, then let
the ensure block continue closing the queue and waiting on writing_task as
before.
📒 Files selected for processing (5)
- CHANGELOG.md
- react_on_rails_pro/lib/react_on_rails_pro/concerns/rsc_payload_renderer.rb
- react_on_rails_pro/lib/react_on_rails_pro/concerns/stream.rb
- react_on_rails_pro/spec/react_on_rails_pro/rsc_payload_renderer_spec.rb
- react_on_rails_pro/spec/react_on_rails_pro/stream_spec.rb
🚧 Files skipped from review as they are similar to previous changes (1)
- CHANGELOG.md
Addressed follow-up review feedback in
Manual verification (test env) was also run against the dummy app over HTTP:
Review Summary

The approach here is sound — compressing the live stream directly rather than disabling compression middleware is the right architectural fix for the deadlock described in #2519.

A few issues worth addressing before merge:

Medium concerns

Minor concerns

Correctness observation

Overall this is a well-structured fix. The medium items (Rack private API and error swallowing) are the ones I'd prioritize before landing.
Review: Alternative fix for #2519

The approach is sound: compressing at the stream level rather than relying on middleware completely avoids the `body.each` deadlock.

A few issues worth addressing before merge:

Design / correctness

Testing

Neutral observations
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@react_on_rails_pro/lib/react_on_rails_pro/concerns/stream.rb`:
- Around line 165-166: The rescue in Stream parsing currently only catches
ArgumentError (in the method handling q-param parsing) so a malformed header
like "Accept-Encoding: gzip;q" raising TypeError (from Float(nil)) escapes and
causes a 500; update the rescue in the method (the block that currently has
"rescue ArgumentError" in react_on_rails_pro/concerns/stream.rb) to also rescue
TypeError (e.g., "rescue ArgumentError, TypeError") so malformed q params fall
back to plain streaming, and add/extend the existing spec (related to the
q-param parsing tests) to include a case for "q" without a value to assert no
exception and correct fallback behavior.
📒 Files selected for processing (2)
- react_on_rails_pro/lib/react_on_rails_pro/concerns/stream.rb
- react_on_rails_pro/spec/react_on_rails_pro/stream_spec.rb
```diff
 rescue StandardError => e
   @async_barrier.stop
-  raise e
+  raise e unless client_disconnected
```
Silently suppressing barrier errors on client disconnect can mask real component rendering errors. A barrier task that raises RuntimeError (genuine bug in a component) at the same moment the client disconnects will be swallowed here. Consider only suppressing errors whose class is IOError/Errno::EPIPE, or at least re-raising if the error is not a connection-level exception:
```diff
-raise e unless client_disconnected
+raise e unless client_disconnected || e.is_a?(IOError) || e.is_a?(Errno::EPIPE)
```
```diff
@@ -113,11 +127,132 @@ def drain_streams_concurrently(parent_task)
   @async_barrier.stop if client_disconnected
```
On the client-disconnect path, `@async_barrier.stop` is called three times total:

- In the `writing_task` rescue block (line 104)
- In the `rescue StandardError` block above (line 111)
- Here in `ensure`
Steps 2 and 3 are redundant since the barrier is already stopped by step 1. This is likely idempotent, but clarifying the intent (or removing the redundant calls) would reduce confusion when reading the flow.
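The "likely idempotent" assumption can be made explicit with a guard flag. This is an illustrative sketch only — `Async::Barrier`'s real internals differ, and the class name here is invented:

```ruby
# Illustrative only: an explicit idempotency guard makes repeated stop
# calls visibly harmless, so redundant call sites stop being a question.
class IdempotentStopper
  attr_reader :stop_calls

  def initialize
    @stopped = false
    @stop_calls = 0
  end

  def stop
    @stop_calls += 1
    return if @stopped

    @stopped = true
    # ...cancel outstanding tasks exactly once here...
  end

  def stopped?
    @stopped
  end
end

barrier = IdempotentStopper.new
3.times { barrier.stop } # extra calls are no-ops
```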
```ruby
buffer_size = ReactOnRailsPro.configuration.concurrent_component_streaming_buffer_size
@main_output_queue = Async::LimitedQueue.new(buffer_size)
gzip_streaming_enabled = gzip_streaming_enabled?(compress)
if gzip_streaming_enabled && !close_stream_at_end
```
This argument validation fires inside the Sync block, which initializes the async event loop before a simple precondition check. Moving the guard before the Sync do call avoids unnecessary async context setup for invalid arguments:
```diff
-  if gzip_streaming_enabled && !close_stream_at_end
+def stream_view_containing_react_components(template:, close_stream_at_end: true, compress: false, **render_options)
+  if compress && !close_stream_at_end
+    raise ArgumentError, "compress: true requires close_stream_at_end: true to finalize gzip footer"
+  end
+  require "async"
```
```ruby
end

def write(data)
  @stream.write(data)
```
data.bytesize returns the uncompressed byte count, but the actual bytes written to the underlying stream are the compressed bytes (fewer bytes). The return value isn't used by any callers today, but this is semantically misleading if the interface is ever consumed expecting the number of bytes transmitted. A comment noting this intentional mismatch would help future readers.
```ruby
identity_quality = parsed_accept_encoding.fetch("identity", wildcard_quality || 1.0)

gzip_quality.positive? && gzip_quality >= identity_quality
rescue ArgumentError, TypeError
```
Bug: TypeError from Float(nil) is not rescued
When the Accept-Encoding header contains gzip;q (a q parameter without = or a value), the flow is:
```ruby
"gzip;q".split(";")   # => ["gzip", "q"]
"q".split("=", 2)     # => ["q"]
key, value = ["q"]    # key = "q", value = nil
Float(nil)            # => TypeError (not ArgumentError)
```
Float(nil) raises TypeError, but this method only rescues ArgumentError. The TypeError escapes request_accepts_gzip?, propagates through gzip_streaming_enabled? and into the Sync block, where it is caught and re-raised — causing the stream to fail instead of silently falling back to plain streaming.
The spec "keeps plain streaming when Accept-Encoding has q without a value" would actually fail with an error rather than passing.
Fix:
```diff
-rescue ArgumentError
+rescue ArgumentError, TypeError
```
Alternatively, guard before calling Float:
```ruby
return Float(value) unless value.nil? || value.empty?
```
```ruby
content_encoding = response.headers["Content-Encoding"].to_s
content_encoding_values = content_encoding.split(",").map { |value| value.downcase.strip }.reject(&:blank?)
return false if content_encoding_values.present? && content_encoding_values != ["identity"]
```
Fragile array-equality comparison for identity values
content_encoding_values != ["identity"] only matches when there is exactly one value and it is "identity". If a response somehow has Content-Encoding: identity, identity (duplicated), the resulting array ["identity", "identity"] is != ["identity"], causing this method to return false and skip gzip — even though the effective encoding is still "no encoding" (identity).
More robustly:
```diff
-return false if content_encoding_values.present? && content_encoding_values != ["identity"]
+return false if content_encoding_values.any? { |v| v != "identity" }
```
This allows any number of identity entries while still blocking compression when any non-identity encoding is present.
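The suggested predicate can be exercised in isolation (assumed helper name; the real method lives in the concern and reads `response.headers`, and this sketch uses plain-Ruby `empty?` where the original relies on Rails' `blank?`/`present?`):

```ruby
# Hedged sketch: compression is safe only when the effective encoding is
# identity (or there is no Content-Encoding at all), no matter how many
# times "identity" is repeated or how it is cased.
def compressible_content_encoding?(content_encoding)
  values = content_encoding.to_s.split(",").map { |v| v.downcase.strip }.reject(&:empty?)
  values.all? { |v| v == "identity" } # vacuously true for an empty list
end
```

This version accepts `Content-Encoding: identity, identity` while still rejecting any mix that includes a real encoding.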
##### Fixed

- **Fix streaming SSR hangs and silent error absorption in RSC payload injection**: Fixed two related issues: (1) streaming SSR renders hanging forever when errors occur because Node.js `stream.pipe()` doesn't propagate errors or closure from source to destination, and (2) errors in the RSC payload injection pipeline being silently absorbed, preventing them from reaching error reporters like Sentry. Introduced a shared `safePipe` utility and used `'close'` events as reliable termination signals across the streaming pipeline (Node renderer, RSC payload injection, transform streams, and Ruby async task). Also added a Ruby safety net to prevent Rails request hangs when async rendering tasks raise before the first chunk. [PR 2407](https://github.com/shakacode/react_on_rails/pull/2407) by [AbanoubGhadban](https://github.com/AbanoubGhadban).
- **Prevented RSC payload streaming deadlocks with compression middleware while keeping streaming and compression enabled**. The RSC payload endpoint now streams gzip-compressed output directly when the client accepts gzip, sets `Content-Encoding: gzip` and `Vary: Accept-Encoding`, and keeps chunked `ActionController::Live` behavior. This avoids hangs on `/rsc_payload/:component_name` caused by middleware `if` predicates that enumerate `body.each`, without disabling streaming or response compression. Added regression coverage for gzip streaming header behavior and RSC payload compression wiring. [PR 2530](https://github.com/shakacode/react_on_rails/pull/2530) by [justin808](https://github.com/justin808).
Code Review Summary
Overall this is a well-designed fix for the deadlock in issue #2519. Compressing directly at the stream level is the right approach — it avoids the middleware body.each enumeration problem without disabling compression globally. The implementation is layered cleanly and the test suite is thorough.
Issues worth addressing (see inline comments on stream.rb):
- Line 112 of stream.rb — Barrier errors silently swallowed on client disconnect can mask genuine component rendering errors that happen to coincide with a disconnect. Only connection-level exceptions (`IOError`/`Errno::EPIPE`) should be suppressed.
- Line 47 of stream.rb — The `compress: true && !close_stream_at_end` guard fires inside the `Sync` block, initializing the async runtime before a simple precondition check. Moving it before `Sync do` avoids unnecessary event-loop startup for invalid arguments.
- Lines 104/111/127 of stream.rb — `@async_barrier.stop` is called three times on the client disconnect path. Idempotent in practice but worth simplifying.
Minor:

- `WriterAdapter#write` returns `data.bytesize` (uncompressed bytes), not the actual compressed bytes written to the stream. Harmless since the return value is unused by all callers, but semantically misleading.
- `Float(value)` raising `ArgumentError` as the control-flow signal for malformed q-values works correctly but is an unusual pattern.
Strengths:

- `SYNC_FLUSH` after every write is correct — it ensures chunks are delivered immediately rather than held in the gzip buffer.
- `WriterAdapter#close` being a no-op correctly prevents `Zlib::GzipWriter#close` from closing the underlying stream prematurely.
- Headers are only set after `render_to_string` succeeds, so pre-commit render failures do not poison `Content-Encoding`. The regression test for this behavior is a good catch.
- Accept-Encoding negotiation correctly handles wildcards, quality ordering, `identity;q=0`, and malformed inputs.
- Tests cover the non-trivial edge cases well.
```ruby
log_client_disconnect("writer", e)
# `Async` does not yield between setting the flag and stopping the barrier.
# This guarantees the rescue path can observe `client_disconnected == true`.
@async_barrier.stop
```
Potential unhandled exception type from @async_barrier.wait after barrier stop
When writing_task rescues an IOError/Errno::EPIPE and calls @async_barrier.stop, the component producer tasks inside the barrier are interrupted. Depending on the version of the async gem, stopping tasks may cause Async::Stop (which inherits from Exception, not StandardError) to propagate when @async_barrier.wait is called on line 109.
If @async_barrier.wait raises Async::Stop (or another non-StandardError), the rescue StandardError => e on line 110 will not catch it — the client_disconnected check and the graceful swallow on lines 112–115 are bypassed, and the exception leaks up to the caller.
This is worth verifying against the exact Async gem version in use. If Async::Barrier#wait does surface Stop as a StandardError wrapper, the current code is fine. If not, consider broadening the rescue:
```ruby
rescue Exception => e # rubocop:disable Lint/RescueException
  raise unless client_disconnected || e.is_a?(Async::Stop)
  # ...
end
```

or checking the Async gem's documented contract for barrier wait after stop.
Review: Alternative fix for #2519 — gzip streaming for RSC payloads

The approach here is sound: instead of disabling compression middleware globally, the stream compresses its own output. I found one confirmed bug and two concerns worth addressing before merge.

Bug
Summary

- Replaces the `no-transform` approach with a stream-level gzip writer for RSC payload responses
- Keeps `ActionController::Live` chunked streaming behavior (no buffering)
- Sets `Content-Encoding: gzip` and `Vary: Accept-Encoding` when gzip is used
- Gated behind an explicit opt-in (`compress: true`)

Why
Issue #2519 is caused by compression middleware `if` predicates calling `body.each` on `ActionController::Live` responses, which can deadlock before the stream writes. Instead of disabling transforms globally, this PR preserves both streaming and compression by compressing the live stream directly for clients that accept gzip.

This PR is an alternative solution for issue #2519.
ELI5 Explanation
Imagine we have a hose sending water little by little (streaming). Another machine tried to squeeze the water while also waiting for all the water first (compression middleware), so both sides waited forever and nothing came out.
The fix is: we keep sending little by little, and we put a tiny squeezer directly on the hose itself. That way:
Closes #2519
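The "tiny squeezer on the hose" can be demonstrated with Ruby's stdlib alone. This is a minimal sketch — the class name and structure are illustrative, not the actual ReactOnRailsPro adapter: each chunk is gzip-compressed and flushed immediately, and the concatenated output is still one valid gzip stream.

```ruby
require "zlib"
require "stringio"

# Illustrative chunk-wise gzip writer. Zlib::GzipWriter#flush defaults to
# Zlib::SYNC_FLUSH, which pushes the compressed bytes for this chunk out
# immediately instead of waiting for zlib's internal buffer to fill.
class GzipChunkWriter
  def initialize(io)
    @gzip = Zlib::GzipWriter.new(io)
  end

  def write(chunk)
    @gzip.write(chunk)
    @gzip.flush # SYNC_FLUSH: the chunk is deliverable to the client right now
  end

  def close
    @gzip.close # finalizes the gzip footer (and closes the underlying io)
  end
end

buffer = StringIO.new
writer = GzipChunkWriter.new(buffer)
writer.write("first chunk, ")
writer.write("second chunk")
writer.close

decoded = Zlib::GzipReader.new(StringIO.new(buffer.string)).read
```

Because every flush emits a self-contained compressed segment, a client can decompress incrementally as chunks arrive — which is why the footer (`close`) must still run exactly once at the end.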
Test Plan
cd react_on_rails_pro && bundle exec rspec spec/react_on_rails_pro/stream_spec.rb spec/react_on_rails_pro/rsc_payload_renderer_spec.rbcd react_on_rails_pro && bundle exec rubocop lib/react_on_rails_pro/concerns/stream.rb lib/react_on_rails_pro/concerns/rsc_payload_renderer.rb spec/react_on_rails_pro/stream_spec.rb spec/react_on_rails_pro/rsc_payload_renderer_spec.rbAccept-Encoding: gzipandAccept-Encoding: brto confirm no hang, compression behavior, and continued streamingSummary by CodeRabbit
Bug Fixes
Tests