Refactor producer send paths to shrink future sizes #348
Conversation
…e serialization Signed-off-by: chris wren <[email protected]>
Signed-off-by: chris wren <[email protected]>
I kept the current approach to eliminate the allocation and keep futures lightweight, but I'm open to reverting to the boxed-future solution if reviewers prefer the safer path.
@freeznet tagging you for review. Thanks!
IMO, heap allocation is acceptable. BTW, could you use another PR to improve the batch lock contention? It seems to be a separate issue from the large future size.
src/producer.rs
Outdated
```rust
let f = async move {
    let res = fut.await;
    res.map_err(|e| {
        error!("wait send receipt got error: {:?}", e);
        Error::Producer(ProducerError::Connection(e))
    })
};
```
Is this necessary to reduce the future size? After putting `ensure_connected().await` before this method, the connection send future won't be held across an `.await`.

BTW, could you share more details about the output of a build with the `-Zprint-type-sizes` flag? i.e. which future's type size is large, as well as the size after the change? It would be helpful to verify whether any regression could be introduced in the future.
By doing `ensure_connected().await` before creating the send future, then creating the send future in a sync function `start_send_once` and returning a thin adapter (e.g., `Ok(async move { fut.await.map_err(..) })`), the outer `async fn` no longer holds the big future and now only returns a small wrapper.

I will rerun `-Zprint-type-sizes` on the current branch and on main to regenerate a before/after comparison to illustrate the changes.
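A minimal, self-contained sketch of that pattern (all names here are illustrative stand-ins, not the crate's actual API): the only `.await` the outer `async fn` performs happens before the large send future exists, so the caller's state machine never embeds it.

```rust
use std::future::Future;

#[derive(Debug)]
struct SendError;

// Stand-in for the real connection check; awaited *before* the
// send future is created.
async fn ensure_connected() -> Result<(), SendError> {
    Ok(())
}

// Synchronous helper: builds the (potentially large) send future but
// never holds it across an .await in this stack frame.
fn start_send_once() -> impl Future<Output = Result<u64, SendError>> {
    // Stand-in for connection.sender().send(..).
    let fut = std::future::ready(Ok(42u64));
    // Thin adapter: callers only ever see this small wrapper.
    async move { fut.await.map_err(|e| e) } // identity here; stands in for error conversion
}

// The outer async fn awaits the connection check, then hands back the
// wrapper without ever storing the big future across an await point.
async fn send() -> Result<impl Future<Output = Result<u64, SendError>>, SendError> {
    ensure_connected().await?;
    Ok(start_send_once())
}

fn main() {
    println!(
        "wrapper size: {} bytes",
        std::mem::size_of_val(&start_send_once())
    );
}
```

The key design point is that `start_send_once` is not `async`: the big future is constructed and immediately handed off, never becoming part of any generator's captured state.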
force-pushed from f413611 to ebb57e6
Signed-off-by: chris wren <[email protected]>
force-pushed from ebb57e6 to 14f8ac2
Signed-off-by: chris wren <[email protected]>
… master Signed-off-by: chris wren <[email protected]>
force-pushed from c149c34 to 3268d17
After investigating, this seems to be due to the way Clippy estimates future sizes, which can yield false positives (see the discussion in risingwavelabs/risingwave#22971). After stripping back my earlier refactor, I found that only a small change was needed to eliminate the downstream Clippy warning.

Before:

After:

Compression Only:
I opened a PR to support batch timeout: #354; that PR abstracts a synchronous `compress_message` function as well.

It involves a lot of code refactoring, so it will have conflicts with this PR; I'd like to hold this PR for a while. Feel free to review my PR as well.
Issue: #347
This PR refactors how producers send messages to reduce the size of generated async futures and to fix unnecessary lock contention during batching.
The public API and behavior are unchanged — these are internal improvements.
Motivation

- Oversized async futures (~16 KB): `Producer::send_non_blocking` and related methods held on to the network send future returned by `connection.sender().send(..)` across an `.await`. This inflated `SendFuture` and `Result<SendFuture, Error>` and propagated large futures into downstream code.
- Unnecessary batch lock contention:
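The size blow-up described above can be reproduced in isolation (a toy illustration, not the actual producer code): any future that is still live across an `.await` point gets embedded in the caller's state machine, so a ~16 KB inner future makes every caller's future at least that large.

```rust
// Toy future whose state captures a 16 KB buffer.
async fn big_send() -> Result<(), ()> {
    let buf = [0u8; 16 * 1024]; // large state
    std::future::ready(()).await; // buf is live across this await...
    let _ = buf[0]; // ...because it is used afterwards
    Ok(())
}

// Holding big_send's future across an .await embeds it in this
// function's state machine, so callers inherit the ~16 KB size.
async fn held_across_await() -> Result<(), ()> {
    let fut = big_send();
    std::future::ready(()).await; // fut is held across this point
    fut.await
}

fn main() {
    println!("inner: {} bytes", std::mem::size_of_val(&big_send()));
    println!("outer: {} bytes", std::mem::size_of_val(&held_across_await()));
}
```

Both printed sizes come out at 16 KB or more, which is the shape of the problem the refactor removes: awaiting (or constructing) the big future without ever storing it across an await point keeps the outer future small.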
Changes

- Introduced `send_inner_retry` / `start_send_once` helpers:
- Refactored batching (`send_batch` and the batching branch of `send_raw`):

Benefits
Testing

Verified with `-Zprint-type-sizes` that the generated futures are now small (hundreds of bytes instead of ~16 KB).
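As a footnote on the batching refactor mentioned in Changes: the general shape of reducing batch lock contention (a sketch under assumed names, not the crate's actual `send_batch` internals) is to hold the batch mutex only long enough to swap out the accumulated messages, doing serialization, compression, and sending after the guard is dropped.

```rust
use std::sync::Mutex;

struct Batch {
    messages: Mutex<Vec<Vec<u8>>>,
}

impl Batch {
    fn new() -> Self {
        Batch { messages: Mutex::new(Vec::new()) }
    }

    fn push(&self, msg: Vec<u8>) {
        self.messages.lock().unwrap().push(msg);
    }

    // Drain under the lock, then do expensive work without it, so
    // concurrent push() callers are not blocked during compression.
    fn flush(&self) -> Vec<Vec<u8>> {
        let drained = {
            let mut guard = self.messages.lock().unwrap();
            std::mem::take(&mut *guard)
        }; // guard dropped: lock released before any heavy work
        // compression / network send would happen here, lock-free
        drained
    }
}

fn main() {
    let b = Batch::new();
    b.push(vec![1]);
    b.push(vec![2, 3]);
    println!("flushed {} messages", b.flush().len());
}
```

`std::mem::take` swaps the accumulated `Vec` for an empty one in O(1), which keeps the critical section to a pointer swap regardless of batch size.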