
Conversation

Contributor

@keanji-x keanji-x commented Dec 29, 2025

Introduced a mechanism to forcibly batch transactions for up to a specified duration using `--txpool.batch-timeout`.

The previous implementation relied on greedy consumption (poll_recv_many), which processes whatever is available immediately. Under low-to-medium load, this often results in small batches (or single transactions), failing to effectively amortize the cost of acquiring the transaction pool lock.

By introducing a timeout, we can buffer up to `max_batch_size` transactions or until `batch_timeout` elapses, reducing lock contention and improving throughput by producing larger persisted batches.

Implementation Details

Used an `Option<tokio::time::Interval>` in `BatchTxProcessor` to keep the default immediate mode (timeout = 0) zero-cost.
Refactored the poll loop to support both immediate consumption and time-buffered batching.
Added config propagation and integration tests.
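The flush rule the two modes describe can be sketched as a pure predicate. This is a minimal, std-only illustration of the behavior claimed above, not the actual reth code; `should_flush` and `waiting_since` are made-up names:

```rust
use std::time::{Duration, Instant};

/// Decides whether the buffered batch should be flushed (hypothetical sketch).
fn should_flush(
    buf_len: usize,
    max_batch_size: usize,
    waiting_since: Instant,
    batch_timeout: Duration,
) -> bool {
    if buf_len == 0 {
        // Nothing buffered: never flush an empty batch.
        return false;
    }
    if batch_timeout.is_zero() {
        // Immediate mode: flush whatever is available right away.
        return true;
    }
    // Timed mode: flush once the batch is full or the timeout has elapsed.
    buf_len >= max_batch_size || waiting_since.elapsed() >= batch_timeout
}
```

With `batch_timeout = 0` this degenerates to the old greedy behavior; with a non-zero timeout a partial batch keeps accumulating until the deadline.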

@github-project-automation github-project-automation bot moved this to Backlog in Reth Tracker Dec 29, 2025
@keanji-x keanji-x changed the title feat: add configurable batch timeout to transaction pool insertions feat(txpool): add configurable batch timeout to transaction pool insertions Dec 29, 2025
feat: add `batch_timeout` configuration for transaction pool insertions.

refactor: batcher poll method to continuously process available batches and clarify flush conditions.

codspeed-hq bot commented Dec 29, 2025

CodSpeed Performance Report

Merging #20661 will not alter performance

Comparing keanji-x:fix_timeout (7408999) with main (3d4efdb)

Summary

✅ 118 untouched
⏩ 7 skipped¹

Footnotes

  1. 7 benchmarks were skipped, so the baseline results were used instead. If they were deleted from the codebase, click here and archive them to remove them from the performance reports.

@keanji-x keanji-x requested a review from gakonst as a code owner December 29, 2025 04:29
Collaborator

@mattsse mattsse left a comment


this makes sense and I've thought about this as well, because most of the time this ends up batching just a single request, but even waiting a bit should be very beneficial here.

left some suggestions and a q re the poll logic

Comment on lines 44 to 45
max_batch_size: usize,
batch_timeout: Duration,
Collaborator


nit: could we introduce a dedicated `BatchConfig` type?

Comment on lines 101 to 102
let interval = if batch_timeout.is_zero() {
None
Collaborator


can we make this an option then instead?

Comment on lines 173 to 185
match this.interval.as_mut().as_pin_mut() {
    // Immediate mode (timeout = 0): zero-cost path, original behavior
    None => loop {
        match this.request_rx.as_mut().poll_recv_many(cx, this.buf, *this.max_batch_size) {
            Poll::Ready(0) => return Poll::Ready(()), // Channel closed
            Poll::Ready(_) => {
                Self::spawn_batch(this.pool, this.buf);
                this.buf.reserve(*this.max_batch_size);
                // continue to check for more requests
            }
            Poll::Pending => return Poll::Pending,
        }
    },
Collaborator


all of this looks a bit too complex

how should this behave exactly?

can we not use just the interval to yield pending if it's not ready yet?

Contributor Author


Fixed, thanks for review

@github-project-automation github-project-automation bot moved this from Backlog to In Progress in Reth Tracker Dec 29, 2025
@mattsse mattsse added M-changelog This change should be included in the changelog A-rpc Related to the RPC implementation A-cli Related to the reth CLI labels Dec 29, 2025
@keanji-x keanji-x force-pushed the fix_timeout branch 3 times, most recently from a892bef to 5579b0b Compare December 29, 2025 12:11
@keanji-x keanji-x requested a review from mattsse December 29, 2025 12:37
Collaborator

@mattsse mattsse left a comment


I'd like to restructure the poll logic a bit so that this behaves like:

  1. poll_recv many
  2. if batch empty -> return early
  3. if batch full -> spawn + continue
  4. if batch not full and interval is set: poll interval

I'd also like to see some manual poll tests for this
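The ordering above can be captured as a pure step function, which also makes the "manual poll tests" straightforward. This is a hypothetical sketch of the proposed control flow, not the PR's actual code; `Recv`, `Action`, and `step` are made-up names:

```rust
/// Outcome of one `poll_recv_many` call, reduced to what the loop needs.
enum Recv {
    Closed,
    Got,
    Pending,
}

/// Next step for the batcher.
#[derive(Debug, PartialEq)]
enum Action {
    Terminate,
    Spawn,
    PollInterval,
    ReturnPending,
}

/// One iteration of the ordering sketched above: poll the channel first,
/// return early when nothing is buffered, spawn a full batch immediately,
/// and only consult the interval for a partial batch.
fn step(recv: Recv, buf_len: usize, max_batch_size: usize, has_interval: bool) -> Action {
    match recv {
        // channel closed -> terminate the future
        Recv::Closed => Action::Terminate,
        // nothing buffered -> return early
        Recv::Pending if buf_len == 0 => Action::ReturnPending,
        _ => {
            if buf_len >= max_batch_size || !has_interval {
                // full batch (or immediate mode) -> spawn, skip the timer
                Action::Spawn
            } else {
                // partial batch with an interval set -> poll the interval
                Action::PollInterval
            }
        }
    }
}
```

Separating the decision from the polling like this keeps the `Future::poll` body small and lets each branch be asserted without a runtime.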

continue;
// 1. Collect available requests (non-blocking)
while this.buf.len() < *this.max_batch_size {
    match this.request_rx.as_mut().poll_recv(cx) {
Collaborator


I think this should keep using poll_recv_many


if batch_full || timeout_ready {
    Self::spawn_batch(this.pool, this.buf);
    this.buf.reserve(*this.max_batch_size);
Collaborator


this is wasteful, because this will most likely over-allocate
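One way to see the concern, under the assumption that the batch is handed off by value with `mem::take` (a std-only sketch; `spawn_batch`'s real buffer handling is in the PR diff, and `capacities_after_handoff` is a made-up name):

```rust
/// Shows why an unconditional `reserve` after handing off the buffer can be
/// wasteful: `mem::take` leaves the Vec with capacity 0, so the following
/// `reserve` re-allocates on every cycle, whereas draining in place would keep
/// the existing capacity and make the `reserve` a no-op.
fn capacities_after_handoff(max_batch_size: usize) -> (usize, usize) {
    let mut buf: Vec<u32> = Vec::with_capacity(max_batch_size);
    buf.extend(0..max_batch_size as u32);

    let _batch = std::mem::take(&mut buf); // hand the full batch off
    let after_take = buf.capacity(); // 0: the allocation moved with the batch

    buf.reserve(max_batch_size); // forced to allocate again
    let after_reserve = buf.capacity();

    (after_take, after_reserve)
}
```

Note also that `Vec::reserve` may speculatively reserve more than requested to amortize future growth, which is the over-allocation being pointed out.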

Comment on lines 187 to 188
// 2. Check flush conditions
let batch_full = this.buf.len() >= *this.max_batch_size;
Collaborator


if the batch is full we can spawn it immediately and then only need to reset the interval and can skip polling it

!this.buf.is_empty()
};

if batch_full || timeout_ready {
Collaborator


I find timeout_ready a bit confusing here because if no timeout is set, then this represents something else

Contributor Author


fixed

…icient request collection and enhance batching logic with new tests.
@keanji-x keanji-x requested a review from mattsse January 5, 2026 03:24