Conversation
⚠️ Warning: Rate limit exceeded

⌛ How to resolve this issue? After the wait time has elapsed, a review can be triggered again. We recommend spacing out your commits to avoid hitting the rate limit.

🚦 How do rate limits work? CodeRabbit enforces hourly rate limits for each developer per organization. Our paid plans have higher rate limits than the trial, open-source, and free plans. In all cases, we re-allow further reviews after a brief timeout. Please see our FAQ for further information.

ℹ️ Review info — ⚙️ Run configuration: Organization UI | Review profile: ASSERTIVE | Plan: Pro

📒 Files selected for processing (2)
📝 Walkthrough

Adds a new mill-rpc workspace with core protocol, codec, context, and error types; procedural macros; client and server implementations; examples, benchmarks, and integration tests; and updates the workspace manifest and README to include and document the new RPC crates.
Sequence Diagram

```mermaid
sequenceDiagram
    participant Client as RPC Client
    participant Transport as TcpTransport
    participant Server as RPC Server
    participant Dispatcher as ServiceDispatcher
    participant Handler as Service Impl
    Client->>Client: serialize args (Codec)
    Client->>Transport: send Request Frame (request_id, svc_id, method_id, payload)
    Transport->>Server: deliver bytes
    Server->>Server: parse_frames -> Frame
    Server->>Server: parse_request_payload -> (request_id, svc_id, method_id, args)
    Server->>Dispatcher: dispatch(ctx, method_id, args, codec)
    Dispatcher->>Handler: invoke method impl
    Handler-->>Dispatcher: Result<Vec<u8>> / RpcError
    alt success
        Dispatcher-->>Server: Ok(response_bytes)
        Server->>Transport: send Response Frame
    else error
        Dispatcher-->>Server: Err(RpcError)
        Server->>Transport: send Error Frame
    end
    Transport->>Client: deliver Response/Error Frame
    Client->>Client: parse_frames, match request_id, notify waiter
    Client->>Client: deserialize response (Codec) -> return
```
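The diagram's final client-side step ("match request_id, notify waiter") can be sketched as a map of pending calls keyed by request ID. This is an illustrative sketch, not the crate's actual implementation; the `PendingCalls` type and its methods are assumptions.

```rust
use std::collections::HashMap;
use std::sync::mpsc::{self, Receiver, Sender};

/// Hypothetical sketch of the client's waiter table: each in-flight call
/// registers a channel under its request_id, and the event loop routes
/// incoming response payloads to the matching waiter.
struct PendingCalls {
    waiters: HashMap<u64, Sender<Vec<u8>>>,
}

impl PendingCalls {
    fn new() -> Self {
        Self { waiters: HashMap::new() }
    }

    /// Called by the RPC stub before sending the request frame.
    fn register(&mut self, request_id: u64) -> Receiver<Vec<u8>> {
        let (tx, rx) = mpsc::channel();
        self.waiters.insert(request_id, tx);
        rx
    }

    /// Called by the event loop when a Response/Error frame arrives.
    /// Returns false for unknown (or already timed-out) request IDs.
    fn notify(&mut self, request_id: u64, payload: Vec<u8>) -> bool {
        match self.waiters.remove(&request_id) {
            Some(tx) => tx.send(payload).is_ok(),
            None => false,
        }
    }
}
```

The stub then blocks on the returned receiver (with a timeout) and deserializes the payload via the codec.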
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~75 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 20
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@mill-rpc/examples/calculator_client.rs`:
- Around line 31-32: Replace the fragile
thread::sleep(Duration::from_millis(50)) startup wait with a proper
synchronization primitive: e.g., create a std::sync::mpsc::channel (or a
std::sync::Barrier) and have the event-loop thread send a ready signal (or call
barrier.wait()) when it has started; then in the main thread replace the sleep
with waiting on the receiver (recv()/recv_timeout()) or barrier.wait() so the
code deterministically waits for the event loop to be ready instead of sleeping.
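The channel-based readiness handshake suggested above could be sketched as follows. The helper name `spawn_with_ready_signal` is hypothetical, and the closure stands in for the example's `el.run()`, since the real `EventLoop` type is not shown here; for a stricter guarantee, the loop itself would signal after its first poll rather than just before entering.

```rust
use std::sync::mpsc;
use std::thread;

// Hypothetical helper: spawns the event-loop thread and returns both the
// join handle and a receiver that fires once the thread is about to run
// the loop, replacing the fragile thread::sleep startup wait.
fn spawn_with_ready_signal<F>(run_event_loop: F) -> (thread::JoinHandle<()>, mpsc::Receiver<()>)
where
    F: FnOnce() + Send + 'static,
{
    let (ready_tx, ready_rx) = mpsc::channel();
    let handle = thread::spawn(move || {
        // Signal the main thread just before entering the loop.
        let _ = ready_tx.send(());
        run_event_loop();
    });
    (handle, ready_rx)
}
```

The main thread then calls `ready_rx.recv_timeout(...)` instead of sleeping.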
In `@mill-rpc/examples/echo_client.rs`:
- Around line 12-18: The Echo trait is duplicated across echo_client.rs and
echo_server.rs; extract the trait into a single shared module (e.g., a new
module common::echo) and have both examples import and use that module so the
trait definition is authoritative and cannot drift. Move the trait declaration
for Echo (with methods echo, echo_uppercase, echo_repeat, request_count) into
the shared module, update echo_client and echo_server to refer to that module's
Echo trait, and ensure the #[mill_rpc::service] attribute is applied once on the
shared trait so both client and server derive from the same definition.
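The shared-module extraction described above might look like the sketch below. The module name `echo_proto` and the method signatures are assumptions based on the method names listed in the comment; the `#[mill_rpc::service]` attribute is omitted so the sketch compiles standalone.

```rust
// Hypothetical shared module that both echo_client.rs and echo_server.rs
// would import, so the trait definition cannot drift between examples.
pub mod echo_proto {
    pub trait Echo {
        fn echo(&self, msg: String) -> String;
        fn echo_uppercase(&self, msg: String) -> String;
        fn echo_repeat(&self, msg: String, times: u32) -> String;
        fn request_count(&self) -> u64;
    }
}

// The server implements the shared trait; the client refers to the same
// `echo_proto::Echo` instead of redeclaring it locally.
pub struct EchoService;

impl echo_proto::Echo for EchoService {
    fn echo(&self, msg: String) -> String {
        msg
    }
    fn echo_uppercase(&self, msg: String) -> String {
        msg.to_uppercase()
    }
    fn echo_repeat(&self, msg: String, times: u32) -> String {
        msg.repeat(times as usize)
    }
    fn request_count(&self) -> u64 {
        0
    }
}
```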
In `@mill-rpc/examples/kv_client.rs`:
- Around line 26-30: The sleep-based synchronization after spawning the event
loop (the event_loop.clone() -> el -> thread::spawn(move || { el.run().unwrap();
})) is fragile; replace it with a deterministic ready signal (e.g., an
std::sync::mpsc::channel or a std::sync::Barrier) so the spawned thread signals
when el.run() is actively processing before the main thread proceeds; extract
this into a shared client initialization helper (e.g., init_client_event_loop or
start_event_loop) and have the helper return the handle and a ready-wait
receiver so callers of the KV client (and calculator_client.rs) can wait for the
ready signal instead of using thread::sleep.
In `@mill-rpc/mill-rpc-core/Cargo.toml`:
- Around line 7-10: Remove the unused thiserror dependency from the
[dependencies] section in Cargo.toml: delete the line `thiserror = "2"` (or the
whole dependency entry) so the crate list contains only serde and bincode,
ensure no references to thiserror exist in the codebase, then run cargo build
(or cargo check) to verify the project compiles and cargo tree to confirm the
dependency is gone.
In `@mill-rpc/mill-rpc-core/src/codec.rs`:
- Around line 28-43: The serialize and deserialize methods have long lines that
violate formatting; break the long map_err closures into multi-line expressions
so they conform to rustfmt—e.g., in fn serialize (method serialize) split the
bincode::serialize(value).map_err(|e| RpcError::codec_error(format!("serialize:
{}", e))) into a shorter, wrapped form and do the same for
bincode::deserialize(...) in fn deserialize (method deserialize), ensuring each
chained call and closure is on its own line to satisfy cargo fmt.
In `@mill-rpc/mill-rpc-core/src/protocol.rs`:
- Line 148: The function signature for request in protocol.rs is not formatted
to project style; run the Rust formatter on this file (or run cargo fmt) to fix
spacing/line-wrapping for pub fn request(request_id: u64, service_id: u16,
method_id: u16, args: Vec<u8>, one_way: bool) -> Self and commit the formatted
changes so cargo fmt --check passes.
- Around line 141-144: Make Frame fields private and add safe accessors,
introduce a helper (e.g., validate_payload_len(len: usize) -> Result<u32,
Error>) that checks for overflow and enforces an explicit max allowed frame
size, call that helper from every Frame constructor (the functions that
currently cast payload.len() to u32) to avoid silent truncation, and ensure
encode() recomputes/validates header.payload_len from the actual payload length
(rather than trusting the header) to prevent header/payload drift; update
constructors to set header.payload_len from the helper result and expose
payload/header only via getters/setters that enforce validation.
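A `validate_payload_len` helper of the kind suggested above could be sketched like this. The maximum size and error type are illustrative assumptions, not the crate's actual values.

```rust
/// Illustrative cap; the crate's real maximum frame size is an assumption here.
const MAX_PAYLOAD_LEN: usize = 16 * 1024 * 1024; // 16 MiB

#[derive(Debug, PartialEq)]
enum FrameError {
    PayloadTooLarge(usize),
}

/// Checked length conversion: rejects payloads that exceed the cap or would
/// overflow `u32`, instead of silently truncating with `payload.len() as u32`.
fn validate_payload_len(len: usize) -> Result<u32, FrameError> {
    if len > MAX_PAYLOAD_LEN {
        return Err(FrameError::PayloadTooLarge(len));
    }
    u32::try_from(len).map_err(|_| FrameError::PayloadTooLarge(len))
}
```

Every `Frame` constructor would call this helper and propagate the error, and `encode()` would recompute `header.payload_len` from `payload.len()` through the same path.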
In `@mill-rpc/README.md`:
- Around line 183-191: The README's RPC status table is incomplete compared to
the RpcStatus enum in error.rs; update the table to include the missing variants
Cancelled (1), AlreadyExists (4), PermissionDenied (5), Unauthenticated (6), and
ResourceExhausted (7) with short descriptions, or alternatively add a clear note
that the table is partial and references the authoritative enum in error.rs;
locate the RpcStatus variants by name (Cancelled, AlreadyExists,
PermissionDenied, Unauthenticated, ResourceExhausted) when making the change so
the README and error.rs stay consistent.
- Line 177: Replace the non-descriptive link text "here" in the README sentence
"you will find all examples [here](./examples/)." with descriptive text such as
"the examples directory" or "example projects" so the link reads like "you will
find all examples in the examples directory" (or similar), ensuring the markdown
link uses that descriptive phrase instead of "here".
In `@mill-rpc/src/client.rs`:
- Around line 42-73: The connect method on RpcClient currently takes a codec
parameter that appears unused; update the API to remove the unused parameter or
deliberately preserve it. Specifically, modify RpcClient::connect to either
remove the Codec argument (and any related construction sites) if the struct no
longer has a codec field, or keep the Codec field in the RpcClient struct and
use the codec parameter to initialize that field (refer to RpcClient::connect,
RpcClient struct and the codec variable) so there is no unused parameter warning
and the signature matches the struct fields and intended API surface.
- Line 5: The import list in mill-rpc's client module includes an unused symbol
RpcStatus which causes a build error; update the use statement in client.rs (the
line importing Codec, RpcError, RpcStatus, RpcTransport) to remove RpcStatus so
only used items (Codec, RpcError, RpcTransport) are imported, then run cargo
build to confirm the warning is resolved.
- Around line 9-10: The import list in the use statement includes HEADER_SIZE
from mill_rpc_core::protocol but it is not used; remove HEADER_SIZE from the
import so the line reads something like `use mill_rpc_core::protocol::{self,
Frame, MessageType};` (leave Frame and MessageType and other imports untouched)
to resolve the Clippy/build failure caused by the unused HEADER_SIZE symbol.
- Around line 75-78: The set_timeout method currently requires &mut self which
is unusable after RpcClient is wrapped in an Arc (returned by connect()); change
the implementation so timeout can be mutated through shared ownership: replace
the timeout field with a thread-safe container (e.g. AtomicU64 storing
milliseconds or a Mutex<Duration>), change set_timeout to take &self (pub fn
set_timeout(&self, timeout: Duration)) and update all reads/writes of timeout in
methods like connect()/any RPC call to load/lock and convert between Duration
and the stored representation; ensure atomic conversions use a consistent unit
(e.g. millis) and handle overflow/rounding appropriately.
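The `AtomicU64`-millisecond variant suggested above can be sketched in isolation. The `TimeoutCell` type is a stand-in for the field inside `RpcClient`; the real struct layout is not shown here.

```rust
use std::sync::atomic::{AtomicU64, Ordering};
use std::time::Duration;

/// Hypothetical interior-mutable timeout, usable through a shared Arc
/// because `set` takes `&self` rather than `&mut self`.
struct TimeoutCell {
    millis: AtomicU64,
}

impl TimeoutCell {
    fn new(timeout: Duration) -> Self {
        Self {
            millis: AtomicU64::new(Self::to_millis(timeout)),
        }
    }

    /// Callable on Arc<RpcClient> without any mutable borrow.
    fn set(&self, timeout: Duration) {
        self.millis.store(Self::to_millis(timeout), Ordering::Relaxed);
    }

    fn get(&self) -> Duration {
        Duration::from_millis(self.millis.load(Ordering::Relaxed))
    }

    /// Saturate rather than wrap for durations above u64::MAX milliseconds.
    fn to_millis(d: Duration) -> u64 {
        u64::try_from(d.as_millis()).unwrap_or(u64::MAX)
    }
}
```

RPC call paths would then read the deadline via `get()` instead of touching a plain `Duration` field.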
- Around line 34-40: The RpcClient struct contains an unused field `codec` which
causes a build failure; remove the `codec: Codec` field from the `RpcClient`
definition and delete any constructor/initializer parameters and assignments
that set it (e.g., in `RpcClient::new` or other `RpcClient` builders), and then
tidy up any remaining references to `codec` (remove imports or variables that
only existed to create the codec) so the type and its constructors compile
cleanly without the unused field.
In `@mill-rpc/src/server.rs`:
- Around line 81-85: The build logic currently uses HashMap::insert which
silently overwrites existing entries, so update build() (and the code paths that
collect registrations from service(), service_with_id(), and any helpers around
next_service_id and services Vec) to detect duplicate service IDs and reject
them instead of replacing: when iterating registrations (from fields like
services and explicit service_with_id entries) check whether the target id
already exists in the map (e.g. via entry().or_insert or contains_key) and
return an error or panic with a clear message identifying the duplicate id and
the conflicting registration; likewise make service() and service_with_id()
behavior consistent by preventing generation or acceptance of an ID that would
duplicate an existing registration (validate before pushing to services Vec or
incrementing next_service_id).
- Around line 210-215: When parse_request_payload() fails we currently just log
and return, leaving non-one-way clients hanging; update the Err(e) branch in the
request handling code that calls frame.parse_request_payload() to detect whether
the request is one-way (use frame.is_one_way() or the header metadata) and, if
not one-way, construct and send an error response frame containing the protocol
error details (use the existing send_error_response/write_error_frame helper or
the same framing path used for normal responses), including any
request_id/service_id/method_id available from the frame/header (or a safe
default) and the error string e; apply the same fix to the other parsing-failure
block later in the file that mirrors this logic so malformed two-way requests
always return an error frame instead of timing out.
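The one-way/two-way decision in the parse-failure branch can be sketched as pure logic. This `Frame` struct and the `Reply` enum are stand-ins for the crate's real frame and response-framing types, not its actual API.

```rust
/// Minimal stand-in for the real frame header.
struct Frame {
    request_id: u64,
    one_way: bool,
}

#[derive(Debug, PartialEq)]
enum Reply {
    /// One-way request: nothing to answer, only log.
    None,
    /// Two-way request: send an error frame so the client is not left
    /// hanging until its timeout fires.
    Error { request_id: u64, message: String },
}

fn on_parse_failure(frame: &Frame, err: &str) -> Reply {
    if frame.one_way {
        Reply::None
    } else {
        Reply::Error {
            request_id: frame.request_id,
            message: err.to_string(),
        }
    }
}
```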
- Line 12: Remove the unused imports FrameHeader and HEADER_SIZE from the
protocol import list to satisfy clippy: edit the use statement that currently
imports Frame, FrameHeader, MessageType, HEADER_SIZE (e.g., use
mill_rpc_core::protocol::{self, Frame, FrameHeader, MessageType, HEADER_SIZE};)
and delete FrameHeader and HEADER_SIZE so it only imports the symbols actually
used (e.g., Frame and MessageType), then run cargo clippy to confirm the lint is
fixed.
- Line 149: The call to buffers.entry(conn_id.as_u64()).or_insert_with(Vec::new)
should be changed to use or_default() to satisfy clippy's or_fun_call lint;
locate the entry call on the buffers map (the line creating buf from
buffers.entry(conn_id.as_u64())) and replace or_insert_with(Vec::new) with
or_default() so the default Vec is inserted without the explicit Vec::new call.
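The `or_default()` form is behaviorally identical to `or_insert_with(Vec::new)`; a minimal illustration:

```rust
use std::collections::HashMap;

// Equivalent to `buffers.entry(id).or_insert_with(Vec::new).push(byte)`,
// but uses Vec's Default impl, which is the form clippy suggests.
fn append_byte(buffers: &mut HashMap<u64, Vec<u8>>, id: u64, byte: u8) {
    buffers.entry(id).or_default().push(byte);
}
```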
In `@mill-rpc/tests/integration_test.rs`:
- Around line 77-103: The test test_rpc_server_client_integration currently only
starts the server and never exercises client-server RPCs and also treats server
build failures as non-fatal; update the test to (1) unwrap or assert on
RpcServer::builder().build(&event_loop) so build failures fail the test, (2)
after the server is started read the actual bound address from the created
RpcServer instance (from RpcServer::builder or the returned _server) and create
a client that connects to that address, (3) perform at least one RPC call
against the CalculatorDispatcher/MyCalculator (e.g., an add or multiply call)
and assert the expected response, and (4) shut down the EventLoop
(event_loop.stop()) only after the client call and assertions complete, ensuring
the test covers the full client-server interaction path.
In `@README.md`:
- Around line 40-45: Replace the development-only path dependency declaration
mill-rpc = { path = "mill-rpc" } in the README with a version-based crate
dependency once mill-rpc is published to crates.io; update the install snippet
to use the published version string (e.g., mill-rpc = "x.y.z") and mention that
users should pin the appropriate version, so external users can add the crate
from crates.io instead of relying on a local path.
ℹ️ Review info
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (26)
- Cargo.toml
- README.md
- mill-rpc/Cargo.toml
- mill-rpc/README.md
- mill-rpc/examples/calculator_client.rs
- mill-rpc/examples/calculator_server.rs
- mill-rpc/examples/concurrent_clients.rs
- mill-rpc/examples/echo_client.rs
- mill-rpc/examples/echo_server.rs
- mill-rpc/examples/kv_client.rs
- mill-rpc/examples/kv_server.rs
- mill-rpc/examples/multi_service_client.rs
- mill-rpc/examples/multi_service_server.rs
- mill-rpc/mill-rpc-core/Cargo.toml
- mill-rpc/mill-rpc-core/src/codec.rs
- mill-rpc/mill-rpc-core/src/context.rs
- mill-rpc/mill-rpc-core/src/error.rs
- mill-rpc/mill-rpc-core/src/lib.rs
- mill-rpc/mill-rpc-core/src/protocol.rs
- mill-rpc/mill-rpc-macros/Cargo.toml
- mill-rpc/mill-rpc-macros/src/lib.rs
- mill-rpc/src/client.rs
- mill-rpc/src/lib.rs
- mill-rpc/src/prelude.rs
- mill-rpc/src/server.rs
- mill-rpc/tests/integration_test.rs
```rust
pub fn service<S: ServiceDispatch>(mut self, svc: S) -> Self {
    let id = self.next_service_id;
    self.next_service_id += 1;
    self.services.push((id, Box::new(svc)));
    self
}
```
Reject duplicate service IDs instead of silently overwriting.
HashMap::insert in build() replaces prior registrations for the same ID. This can happen with repeated service_with_id() calls, or mixed service_with_id() + service() usage, and leads to incorrect dispatch at runtime.
Also applies to: 89-90, 100-103
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@mill-rpc/src/server.rs` around lines 81 - 85, The build logic currently uses
HashMap::insert which silently overwrites existing entries, so update build()
(and the code paths that collect registrations from service(),
service_with_id(), and any helpers around next_service_id and services Vec) to
detect duplicate service IDs and reject them instead of replacing: when
iterating registrations (from fields like services and explicit service_with_id
entries) check whether the target id already exists in the map (e.g. via
entry().or_insert or contains_key) and return an error or panic with a clear
message identifying the duplicate id and the conflicting registration; likewise
make service() and service_with_id() behavior consistent by preventing
generation or acceptance of an ID that would duplicate an existing registration
(validate before pushing to services Vec or incrementing next_service_id).
```rust
let (request_id, service_id, method_id, args) = match frame.parse_request_payload() {
    Ok(parsed) => parsed,
    Err(e) => {
        log::error!("Invalid request payload from {:?}: {}", conn_id, e);
        return;
    }
};
```
Malformed two-way requests currently get no response.
When request payload parsing fails, the handler logs and returns immediately. For non-one-way calls this can leave clients waiting until timeout instead of receiving an error frame.
Also applies to: 233-239
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@mill-rpc/src/server.rs` around lines 210 - 215, When parse_request_payload()
fails we currently just log and return, leaving non-one-way clients hanging;
update the Err(e) branch in the request handling code that calls
frame.parse_request_payload() to detect whether the request is one-way (use
frame.is_one_way() or the header metadata) and, if not one-way, construct and
send an error response frame containing the protocol error details (use the
existing send_error_response/write_error_frame helper or the same framing path
used for normal responses), including any request_id/service_id/method_id
available from the frame/header (or a safe default) and the error string e;
apply the same fix to the other parsing-failure block later in the file that
mirrors this logic so malformed two-way requests always return an error frame
instead of timing out.
```rust
#[test]
fn test_rpc_server_client_integration() {
    let _ = env_logger::builder().is_test(true).try_init();

    let event_loop = Arc::new(EventLoop::new(4, 1024, 100).unwrap());

    let server = RpcServer::builder()
        .bind("127.0.0.1:0".parse().unwrap())
        .service(CalculatorDispatcher(MyCalculator))
        .build(&event_loop);

    match server {
        Ok(_server) => {
            let el = event_loop.clone();
            let handle = thread::spawn(move || {
                let _ = el.run();
            });

            thread::sleep(Duration::from_millis(100));

            event_loop.stop();
            let _ = handle.join();
        }
        Err(e) => {
            eprintln!("Server build failed (non-fatal in test): {}", e);
        }
    }
}
```
Integration test doesn't actually test client-server interaction.
The test test_rpc_server_client_integration only verifies server startup but doesn't create a client or make any RPC calls. This leaves the actual client-server communication path untested.
Additionally, treating server build failure as "non-fatal" (lines 100-102) could mask real issues in CI.
💡 Suggestion: Add actual client interaction

```diff
     match server {
-        Ok(_server) => {
+        Ok(server) => {
+            let server_addr = server.local_addr().expect("Server should have address");
             let el = event_loop.clone();
             let handle = thread::spawn(move || {
                 let _ = el.run();
             });
             thread::sleep(Duration::from_millis(100));
+            // TODO: Create client and make RPC calls
+            // let transport = RpcClient::connect(server_addr, &event_loop, Codec::bincode()).unwrap();
+            // let client = CalculatorClient::new(transport, Codec::bincode(), 0);
+            // assert_eq!(client.add(2, 3).unwrap(), 5);
             event_loop.stop();
             let _ = handle.join();
         }
         Err(e) => {
-            eprintln!("Server build failed (non-fatal in test): {}", e);
+            panic!("Server build failed: {}", e);
         }
     }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@mill-rpc/tests/integration_test.rs` around lines 77 - 103, The test
test_rpc_server_client_integration currently only starts the server and never
exercises client-server RPCs and also treats server build failures as non-fatal;
update the test to (1) unwrap or assert on
RpcServer::builder().build(&event_loop) so build failures fail the test, (2)
after the server is started read the actual bound address from the created
RpcServer instance (from RpcServer::builder or the returned _server) and create
a client that connects to that address, (3) perform at least one RPC call
against the CalculatorDispatcher/MyCalculator (e.g., an add or multiply call)
and assert the expected response, and (4) shut down the EventLoop
(event_loop.stop()) only after the client call and assertions complete, ensuring
the test covers the full client-server interaction path.
For the RPC framework (includes mill-io and mill-net):

```toml
[dependencies]
mill-rpc = { path = "mill-rpc" }
```
🧹 Nitpick | 🔵 Trivial
Consider updating the install snippet once published.
The path-based dependency mill-rpc = { path = "mill-rpc" } is correct for development but won't work for external users. Update this to a version-based dependency once the crate is published to crates.io.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@README.md` around lines 40 - 45, Replace the development-only path dependency
declaration mill-rpc = { path = "mill-rpc" } in the README with a version-based
crate dependency once mill-rpc is published to crates.io; update the install
snippet to use the published version string (e.g., mill-rpc = "x.y.z") and
mention that users should pin the appropriate version, so external users can add
the crate from crates.io instead of relying on a local path.
Actionable comments posted: 13
♻️ Duplicate comments (12)
mill-rpc/tests/integration_test.rs (1)

83-103: ⚠️ Potential issue | 🟠 Major — Integration test still doesn't validate client↔server RPC behavior.

The test remains startup/shutdown-only and still allows server build failure without failing the test, so real integration regressions can pass CI undetected.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@mill-rpc/tests/integration_test.rs` around lines 83 - 103: the test currently only starts/stops the server and ignores server build failures; change it to assert that the server build succeeds (do not swallow Err from RpcServer::builder), obtain the bound socket address from the built server, create a client that connects to that address, perform at least one RPC call against the Calculator service (using CalculatorDispatcher/MyCalculator) and assert the returned result matches the expected value, then cleanly stop the event_loop and join the thread; ensure any failure in building the server, connecting the client, or the RPC call causes the test to panic/fail rather than be printed to stderr.

mill-rpc/examples/echo_client.rs (1)
12-18: 🧹 Nitpick | 🔵 Trivial — Keep a single source of truth for the `Echo` service trait.

`Echo` is still duplicated between client and server examples, which is drift-prone for method order/signatures and generated IDs.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@mill-rpc/examples/echo_client.rs` around lines 12 - 18: the Echo trait is duplicated across client and server examples (trait Echo with methods echo, echo_uppercase, echo_repeat, request_count), so extract the single source of truth by moving the trait definition into a shared module/crate (e.g., a common lib or mod named echo_proto) and have both client and server import that module; update the client file (examples/echo_client.rs) to remove the local trait and reference the shared Echo trait, ensuring the #[mill_rpc::service] macro is applied only once in the shared definition or consistently used via the shared trait so generated IDs and method signatures stay in sync.

mill-rpc/examples/calculator_client.rs (1)
31-32: ⚠️ Potential issue | 🟡 Minor — Replace sleep-based startup wait with a deterministic readiness signal.

The fixed 50ms delay is still nondeterministic and can cause intermittent startup races.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@mill-rpc/examples/calculator_client.rs` around lines 31 - 32: replace the nondeterministic thread::sleep(Duration::from_millis(50)) startup wait with a deterministic readiness signal: have the server side send a readiness notification (e.g., via a std::sync::mpsc channel, std::sync::Barrier, or tokio::sync::oneshot) and have the client (where thread::sleep is used) block on receiving that signal before proceeding; locate the sleep call in calculator_client.rs and change it to wait on the chosen readiness primitive, and ensure the server sends the signal once it has finished initialization and is ready to accept connections.

mill-rpc/mill-rpc-core/src/codec.rs (1)
31-32: ⚠️ Potential issue | 🟡 Minor — `cargo fmt --check` failure is still present in codec error-mapping lines.

This file still has the long chained expressions that violate rustfmt in CI.

🔧 Minimal formatting fix

```diff
-        bincode::serialize(value).map_err(|e| RpcError::codec_error(format!("serialize: {}", e)))
+        bincode::serialize(value)
+            .map_err(|e| RpcError::codec_error(format!("serialize: {}", e)))
@@
-        bincode::deserialize(data).map_err(|e| RpcError::codec_error(format!("deserialize: {}", e)))
+        bincode::deserialize(data)
+            .map_err(|e| RpcError::codec_error(format!("deserialize: {}", e)))
```

Also applies to: 40-41

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@mill-rpc/mill-rpc-core/src/codec.rs` around lines 31 - 32: the rustfmt failure comes from overly long chained expressions mapping bincode errors into RpcError; update the mapping in the bincode::serialize(...) call (and the similar mapping around the other serialize/deserialize lines) to a more compact, rustfmt-friendly layout by breaking the map_err closure onto its own line and/or using a helper like let err = format!(...) then RpcError::codec_error(err) (i.e., split the closure into multiple lines or assign the formatted message to a variable before returning RpcError::codec_error) so the expressions involving bincode::serialize(value).map_err(|e| RpcError::codec_error(format!("serialize: {}", e))) and the corresponding deserialize mapping are wrapped and formatted to satisfy cargo fmt.

mill-rpc/examples/kv_client.rs (1)
26-30: ⚠️ Potential issue | 🟡 Minor — Use deterministic event-loop readiness instead of a fixed sleep.

This startup synchronization is still timing-dependent and may fail intermittently.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@mill-rpc/examples/kv_client.rs` around lines 26 - 30: replace the brittle thread::sleep-based startup sync with a deterministic handshake: create a std::sync::mpsc::channel (ready_tx, ready_rx) before spawning the thread, move ready_tx into the thread that currently calls el.run(), call ready_tx.send(()) just before el.run() to signal the loop is ready, and in the parent replace thread::sleep(...) with ready_rx.recv() or recv_timeout to wait for the readiness signal; update references to el, handle, and the thread::spawn block accordingly.

mill-rpc/src/client.rs (3)
34-40: ⚠️ Potential issue | 🔴 Critical — Unused `codec` field currently blocks compilation.

The stored `codec` is never read, which fails clippy with `-D warnings`.

Minimal non-breaking fix (keep API, mark intentionally unused):

```diff
 pub struct RpcClient {
@@
-    codec: Codec,
+    _codec: Codec,
@@
         Ok(Arc::new(Self {
@@
-            codec,
+            _codec: codec,
             timeout: Duration::from_secs(30),
         }))
```

Also applies to: 66-71

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@mill-rpc/src/client.rs` around lines 34 - 40: the RpcClient struct has an unused field `codec` which causes compilation warnings; to fix minimally without changing the public API, mark the field as intentionally unused by renaming it to `_codec` or annotate it with #[allow(dead_code)] so clippy stops complaining; apply the same change to the other occurrence referenced (lines 66-71) and ensure the identifier references remain consistent with the struct name `RpcClient` and the field name `codec` so no other code needs updating.
5-10: ⚠️ Potential issue | 🔴 Critical — Remove unused imports to unblock clippy.

`RpcStatus` and `HEADER_SIZE` are unused and currently fail `-D warnings`.

Proposed fix

```diff
-use crate::{Codec, RpcError, RpcStatus, RpcTransport};
+use crate::{Codec, RpcError, RpcTransport};
@@
-use mill_rpc_core::protocol::{self, Frame, MessageType, HEADER_SIZE};
+use mill_rpc_core::protocol::{self, Frame, MessageType};
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@mill-rpc/src/client.rs` around lines 5 - 10: the imports RpcStatus and HEADER_SIZE are unused and trigger clippy warnings; remove RpcStatus from the "use crate::{Codec, RpcError, RpcStatus, RpcTransport};" statement and remove HEADER_SIZE from the "use mill_rpc_core::protocol::{self, Frame, MessageType, HEADER_SIZE};" statement (or consolidate those use lines) so only used symbols (Codec, RpcError, RpcTransport, protocol, Frame, MessageType) are imported; ensure no other code references RpcStatus or HEADER_SIZE before removing.
75-78: ⚠️ Potential issue | 🟠 Major — `set_timeout` is unusable after `connect` returns `Arc<Self>`.

`&mut self` cannot be obtained from a shared `Arc<RpcClient>` in normal call paths.

Proposed thread-safe timeout API

```diff
 pub struct RpcClient {
@@
-    timeout: Duration,
+    timeout: Mutex<Duration>,
 }
@@
         Ok(Arc::new(Self {
@@
-            timeout: Duration::from_secs(30),
+            timeout: Mutex::new(Duration::from_secs(30)),
         }))
     }
@@
-    pub fn set_timeout(&mut self, timeout: Duration) {
-        self.timeout = timeout;
+    pub fn set_timeout(&self, timeout: Duration) {
+        *self.timeout.lock().unwrap() = timeout;
     }
@@
-        let deadline = std::time::Instant::now() + self.timeout;
+        let timeout = *self.timeout.lock().unwrap();
+        let deadline = std::time::Instant::now() + timeout;
@@
-            request_id, self.timeout
+            request_id, timeout
         )));
@@
-            request_id, self.timeout
+            request_id, timeout
         )));
```

```bash
#!/bin/bash
# Verify Arc-returning constructor vs mutable timeout setter usage.
rg -nP 'pub fn connect\s*\('
rg -nP '->\s*Result<Arc<Self>,\s*RpcError>'
rg -nP 'pub fn set_timeout\s*\(&mut self'
rg -nP '\.set_timeout\s*\('
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@mill-rpc/src/client.rs` around lines 75 - 78: the set_timeout method currently takes &mut self which cannot be called on an Arc<RpcClient> returned by connect; change the RpcClient timeout storage to an interior-mutable type (e.g., AtomicU64 storing timeout.as_millis()/secs or a Mutex<Duration>) and make pub fn set_timeout(&self, timeout: Duration) update that interior cell; update the RpcClient struct's timeout field (replace current timeout: Duration) and update any reads of timeout (e.g., in call/send methods) to load/lock from the new atomic/Mutex-based timeout accessor.

mill-rpc/src/server.rs (3)
12-12: ⚠️ Potential issue | 🟠 Major — Remove unused imports to fix clippy failure.

`FrameHeader` and `HEADER_SIZE` are imported but never used in this file. This is blocking CI.

Proposed fix

```diff
-use mill_rpc_core::protocol::{self, Frame, FrameHeader, MessageType, HEADER_SIZE};
+use mill_rpc_core::protocol::{self, Frame, MessageType};
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@mill-rpc/src/server.rs` at line 12: remove the unused imports FrameHeader and HEADER_SIZE from the use declaration that currently imports mill_rpc_core::protocol::{self, Frame, FrameHeader, MessageType, HEADER_SIZE}; update it to only import the symbols actually used (e.g., protocol, Frame, MessageType) so the unused FrameHeader and HEADER_SIZE identifiers are dropped to satisfy clippy.
81-103: ⚠️ Potential issue | 🟠 Major

Reject duplicate service IDs and coordinate ID assignment.
Two issues with service registration:
1. `HashMap::insert` in `build()` silently overwrites prior registrations for the same ID.
2. `service_with_id()` doesn't update `next_service_id`, so subsequent `service()` calls may generate a conflicting auto-assigned ID.

Proposed fix
```diff
     pub fn service<S: ServiceDispatch>(mut self, svc: S) -> Self {
         let id = self.next_service_id;
         self.next_service_id += 1;
         self.services.push((id, Box::new(svc)));
         self
     }

     /// Register a service with an explicit service ID.
     pub fn service_with_id<S: ServiceDispatch>(mut self, id: u16, svc: S) -> Self {
+        // Ensure auto-assignment won't collide with this explicit ID
+        if id >= self.next_service_id {
+            self.next_service_id = id + 1;
+        }
         self.services.push((id, Box::new(svc)));
         self
     }

     /// Build and start the RPC server on the given event loop.
     pub fn build(self, event_loop: &Arc<EventLoop>) -> Result<RpcServer> {
         let addr = self
             .address
             .unwrap_or_else(|| "127.0.0.1:9000".parse().unwrap());

         let mut service_map = HashMap::new();
         for (id, dispatcher) in self.services {
-            service_map.insert(id, RegisteredService { dispatcher });
+            if service_map.contains_key(&id) {
+                return Err(NetworkError::other(format!(
+                    "Duplicate service ID: {}",
+                    id
+                )));
+            }
+            service_map.insert(id, RegisteredService { dispatcher });
         }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@mill-rpc/src/server.rs` around lines 81 - 103, The build() method currently uses service_map.insert(...) which silently overwrites duplicate IDs and service_with_id() doesn't advance next_service_id, causing conflicts; update service_with_id<S: ServiceDispatch>(mut self, id: u16, svc: S) to advance self.next_service_id = id.saturating_add(1).max(self.next_service_id) (or set to id+1 when id >= next_service_id) after pushing the service to ensure future auto-assigned IDs don't collide, and change build() to check for duplicate IDs when inserting into service_map (e.g., check if service_map.contains_key(&id) and return an Err describing the duplicate ID instead of overwriting) so RegisteredService entries are never silently replaced.
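The registration logic above can be exercised with a reduced model. The sketch below uses a hypothetical `Builder` holding `(u16, &str)` pairs instead of boxed dispatchers; the duplicate check and the `next_service_id` advance are the two behaviors the review asks for, everything else is simplified.

```rust
use std::collections::HashMap;

// Simplified service-registration builder (illustrative names, not the
// crate's real API).
struct Builder {
    next_service_id: u16,
    services: Vec<(u16, &'static str)>,
}

impl Builder {
    fn new() -> Self {
        Builder { next_service_id: 0, services: Vec::new() }
    }

    fn service_with_id(mut self, id: u16, name: &'static str) -> Self {
        // Keep auto-assignment ahead of any explicitly chosen ID.
        if id >= self.next_service_id {
            self.next_service_id = id + 1;
        }
        self.services.push((id, name));
        self
    }

    fn build(self) -> Result<HashMap<u16, &'static str>, String> {
        let mut map = HashMap::new();
        for (id, name) in self.services {
            // insert() returns Some(old) on collision instead of erroring,
            // so detect the duplicate explicitly.
            if map.insert(id, name).is_some() {
                return Err(format!("Duplicate service ID: {}", id));
            }
        }
        Ok(map)
    }
}

fn main() {
    let dup = Builder::new()
        .service_with_id(1, "calc")
        .service_with_id(1, "echo")
        .build();
    println!("duplicate rejected: {}", dup.is_err());
}
```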
209-216: ⚠️ Potential issue | 🟠 Major

Send error frame for malformed two-way requests.
When `parse_request_payload()` fails, the handler returns without sending a response. For two-way calls, this leaves clients waiting until timeout. Since we can't extract `request_id` from a malformed payload, consider using a sentinel value or closing the connection.

Proposed fix - use request_id 0 for unparseable requests
```diff
     fn handle_request(&self, ctx: &ServerContext, conn_id: ConnectionId, frame: Frame) {
         let (request_id, service_id, method_id, args) = match frame.parse_request_payload() {
             Ok(parsed) => parsed,
             Err(e) => {
                 log::error!("Invalid request payload from {:?}: {}", conn_id, e);
+                // For two-way calls, send an error frame with request_id 0
+                if !frame.header.flags.is_one_way() {
+                    let err = RpcError::invalid_argument(format!("Malformed request: {}", e));
+                    if let Ok(err_bytes) = self.codec.serialize(&err) {
+                        let err_frame = Frame::error(0, err_bytes);
+                        let _ = ctx.send_to(conn_id, &err_frame.encode());
+                    }
+                }
                 return;
             }
         };
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@mill-rpc/src/server.rs` around lines 209 - 216, When parse_request_payload() fails inside handle_request, send an error response for two-way requests instead of silently returning: call whatever path you use to write frames on ServerContext/connection (using conn_id) and emit an error frame indicating a malformed request; since request_id can't be recovered, use a sentinel request_id (e.g. 0) and sentinel service/method ids if needed, include the parse error text in the frame payload, or alternatively close the connection for one-way detection; update the Err(e) branch in handle_request (where parse_request_payload() is matched) to log the error and then send the sentinel error frame for two-way frames (check Frame methods or flags to detect two-way) before returning.

mill-rpc/mill-rpc-core/src/protocol.rs (1)
234-241: ⚠️ Potential issue | 🟡 Minor

`encode()` trusts `header.payload_len` without revalidation.

Since `header` and `payload` are public fields, they can drift. `encode()` doesn't recompute `payload_len`, potentially producing malformed frames where the header length doesn't match the actual payload.

Proposed fix - recompute payload_len in encode
```diff
     pub fn encode(&self) -> Vec<u8> {
-        let header_bytes = self.header.encode();
+        let mut header = self.header.clone();
+        header.payload_len = self.payload.len() as u32;
+        let header_bytes = header.encode();
         let mut buf = Vec::with_capacity(HEADER_SIZE + self.payload.len());
         buf.extend_from_slice(&header_bytes);
         buf.extend_from_slice(&self.payload);
         buf
     }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@mill-rpc/mill-rpc-core/src/protocol.rs` around lines 234 - 241, The encode() method currently trusts header.payload_len and can emit a header that doesn't match self.payload; fix by recomputing payload_len from self.payload.len() before encoding: make a local mutable copy of self.header (e.g., let mut hdr = self.header.clone()), set hdr.payload_len = self.payload.len() as _ (matching the header field type), then call hdr.encode() and append self.payload as before (use the existing self.header.encode() call only on the updated copy); this ensures the header bytes produced by encode() always reflect the actual payload length.
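The recompute-on-encode idea generalizes to any length-prefixed framing. The sketch below uses a cut-down `Frame` with a 4-byte big-endian length header; this layout is illustrative only and is not the crate's real wire format.

```rust
// Minimal length-prefixed frame whose encode() recomputes the length
// field from the actual payload, so a stale header can't produce a
// malformed frame.
struct Frame {
    payload_len: u32, // public in the real type, so it can drift
    payload: Vec<u8>,
}

impl Frame {
    fn encode(&self) -> Vec<u8> {
        // Recompute instead of trusting self.payload_len.
        let len = self.payload.len() as u32;
        let mut buf = Vec::with_capacity(4 + self.payload.len());
        buf.extend_from_slice(&len.to_be_bytes());
        buf.extend_from_slice(&self.payload);
        buf
    }
}

fn main() {
    // The length field is deliberately stale (999) -- encode() ignores it.
    let frame = Frame { payload_len: 999, payload: vec![1, 2, 3] };
    let bytes = frame.encode();
    let encoded_len = u32::from_be_bytes([bytes[0], bytes[1], bytes[2], bytes[3]]);
    println!("stale field = {}, encoded len = {}", frame.payload_len, encoded_len);
}
```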
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@mill-rpc/examples/calculator_server.rs`:
- Around line 22-35: The example uses unchecked arithmetic in add, subtract,
multiply (and negate elsewhere) which can overflow silently in release builds;
change each implementation (add, subtract, multiply, negate) to use
checked_add/checked_sub/checked_mul/checked_neg and handle the Option result,
converting an overflow into a returned error instead of performing wrapping
arithmetic—update the functions' return type to a Result (e.g. Result<i32, _> or
Result<i64, _> as appropriate) and return a descriptive error on None (overflow)
so RPC callers get a deterministic error instead of non-deterministic wrapping
behavior.
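The checked-arithmetic pattern the prompt above asks for looks like this. The function names mirror the example service but the signatures and error type here are illustrative, not the crate's actual trait.

```rust
// Checked-arithmetic sketch: checked_* returns None on overflow, which is
// surfaced as an error instead of wrapping silently in release builds.
fn add(a: i64, b: i64) -> Result<i64, String> {
    a.checked_add(b)
        .ok_or_else(|| "integer overflow in add".to_string())
}

fn multiply(a: i64, b: i64) -> Result<i64, String> {
    a.checked_mul(b)
        .ok_or_else(|| "integer overflow in multiply".to_string())
}

fn main() {
    println!("{:?}", add(2, 3));
    println!("{:?}", multiply(i64::MAX, 2));
}
```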
In `@mill-rpc/examples/concurrent_clients.rs`:
- Around line 100-105: The logging call uses local_results.first().unwrap() and
.last().unwrap(), which will panic if requests_per_client is 0; update the code
around the println using client_id and local_results to handle an empty results
vector safely (e.g., check local_results.is_empty() or use .first().map(|v|
v.to_string()).unwrap_or("none".to_string()) and similarly for .last()) and
print a clear "none" or similar placeholder instead of unwrapping so the client
doesn't panic when requests_per_client == 0.
In `@mill-rpc/examples/echo_server.rs`:
- Around line 45-49: The echo_repeat implementation unconditionally calls
message.repeat(times) which allows untrusted RPC input to allocate arbitrarily
large buffers; modify echo_repeat (and its signature/return type if needed) to
enforce bounds: define constants like MAX_REPETITIONS and MAX_OUTPUT_BYTES,
validate that times <= MAX_REPETITIONS and that (message.len() as
u64).saturating_mul(times as u64) <= MAX_OUTPUT_BYTES before repeating, and
handle violations by returning a controlled error or truncated/limited response
instead of performing the repeat; reference the echo_repeat method and its use
of self.counter and RpcContext when making this change.
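A bounded version of the repeat operation can be sketched as a free function. The constants `MAX_REPETITIONS` and `MAX_OUTPUT_BYTES` are placeholder values, and the `RpcContext`/counter plumbing of the real example is omitted.

```rust
// Bounded repeat: reject inputs whose output would exceed the limits
// *before* allocating, using saturating_mul to avoid overflow in the
// size computation itself.
const MAX_REPETITIONS: u32 = 1_000;
const MAX_OUTPUT_BYTES: u64 = 1 << 20; // 1 MiB, illustrative

fn echo_repeat(message: &str, times: u32) -> Result<String, String> {
    if times > MAX_REPETITIONS {
        return Err("too many repetitions".into());
    }
    if (message.len() as u64).saturating_mul(times as u64) > MAX_OUTPUT_BYTES {
        return Err("output too large".into());
    }
    Ok(message.repeat(times as usize))
}

fn main() {
    println!("{:?}", echo_repeat("ab", 3));
    println!("{:?}", echo_repeat("x", 2_000));
}
```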
In `@mill-rpc/examples/multi_service_client.rs`:
- Around line 34-43: The startup currently uses a fixed thread::sleep before
calling mill_rpc::RpcClient::connect (using el/event_loop/handle), which is
racy; replace the sleep with a deterministic readiness gate by polling/retrying
RpcClient::connect (or attempting a small TCP connect to the server address) in
a short loop with a timeout/backoff, returning the connected client when
successful or failing after the timeout; ensure you remove thread::sleep and use
the retry loop around RpcClient::connect so the client only proceeds once the
server is actually ready.
In `@mill-rpc/examples/multi_service_server.rs`:
- Around line 26-47: Add explicit overflow guards to the factorial and fibonacci
methods: change the return types of factorial(&self, _ctx: &RpcContext, n: u64)
and fibonacci(&self, _ctx: &RpcContext, n: u32) to a Result<u64, _> (using your
crate's RpcError or a simple error type), check bounds up front (factorial: n >
20 is overflow; fibonacci: n > 93 is overflow), and return an Err with a clear
message when out-of-range instead of producing a wrapped value; on success
compute as before and return Ok(result), and update the RPC trait/signature and
callers to propagate/handle the Result.
- Around line 59-63: The loop uses the multiplication bound "i * i <= n" which
can overflow for large u64 values; change the condition to use division "i <= n
/ i" (or equivalent) to avoid overflow while keeping the same loop
semantics—update the loop in the primality check where variable i is incremented
(the block containing "while i * i <= n" and "i += 2") to use "while i <= n / i"
instead and ensure integer division on n is used.
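Both fixes for this file can be demonstrated together. The bounds below follow from the type widths (21! exceeds `u64::MAX`, and `i * i` overflows `u64` near its square root), while the function shapes are simplified relative to the example's RPC trait.

```rust
// Overflow guards for the math-service example: an explicit bound on
// factorial, and a division-based loop condition for primality so that
// `i * i` never overflows.
fn factorial(n: u64) -> Result<u64, String> {
    if n > 20 {
        // 21! = 51_090_942_171_709_440_000 > u64::MAX
        return Err(format!("factorial({}) overflows u64", n));
    }
    Ok((1..=n).product())
}

fn is_prime(n: u64) -> bool {
    if n < 2 {
        return false;
    }
    if n % 2 == 0 {
        return n == 2;
    }
    let mut i = 3;
    // Equivalent to `i * i <= n`, but cannot overflow.
    while i <= n / i {
        if n % i == 0 {
            return false;
        }
        i += 2;
    }
    true
}

fn main() {
    println!("{:?}", factorial(20));
    println!("{}", is_prime(97));
}
```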
In `@mill-rpc/mill-rpc-core/src/protocol.rs`:
- Around line 295-391: Add unit tests that assert FrameHeader::decode returns
errors for (1) payloads exceeding MAX_PAYLOAD_SIZE by constructing a header
bytes array with payload_len > MAX_PAYLOAD_SIZE and calling
FrameHeader::decode(...).is_err(), (2) version mismatches by setting the version
byte != VERSION and asserting decode returns an error, and (3) unknown
MessageType by setting the message_type byte to a value not in MessageType and
asserting FrameHeader::decode(...) yields an error; locate test helpers around
tests::test_invalid_magic and reuse Frame::encode()/FrameHeader::encode() or
direct byte buffers to craft the failing headers and assert the specific error
variants if available.
- Around line 155-172: Add payload-size validation to Frame constructors by
implementing a helper (e.g., Frame::validate_payload_len(len: usize) ->
Result<u32, RpcError>) that checks len against MAX_PAYLOAD_SIZE and returns
Err(RpcError::invalid_argument(...)) if too large; call this helper when
building payload for Frame::request (and any other constructors) to obtain
payload_len rather than casting payload.len() to u32, and change those
constructors' signatures to return Result<Self, RpcError> so oversized payloads
are rejected at construction time; update uses of Frame::request accordingly and
ensure FrameHeader.payload_len is set from the validated u32.
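The validation helper the prompt above describes is small enough to show in full. The limit value and the `String` error are placeholders for the crate's real `MAX_PAYLOAD_SIZE` and `RpcError`.

```rust
// Payload-bound sketch: validate the length before constructing a frame
// so oversized payloads are rejected up front, and return the checked
// value as the u32 the header actually stores.
const MAX_PAYLOAD_SIZE: usize = 16 * 1024 * 1024; // illustrative limit

fn validate_payload_len(len: usize) -> Result<u32, String> {
    if len > MAX_PAYLOAD_SIZE {
        return Err(format!("payload too large: {} bytes", len));
    }
    Ok(len as u32)
}

fn main() {
    println!("{:?}", validate_payload_len(128));
    println!("{:?}", validate_payload_len(MAX_PAYLOAD_SIZE + 1));
}
```

Returning the validated `u32` (rather than letting callers cast) keeps the unchecked `as u32` conversion in exactly one place.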
In `@mill-rpc/mill-rpc-macros/src/lib.rs`:
- Around line 194-198: The generated struct construction in lib.rs uses explicit
field initializers (#( `#arg_names`: `#arg_names`, )*) even when the field names
equal variable names; change the else branch that builds req_construct to use
Rust struct field shorthand by emitting #( `#arg_names`, )* instead of repetitive
name: name while keeping the same branches and identifiers (req_construct,
m.args, req_name, arg_names, quote!).
- Around line 88-94: The code casts idx (usize) to u16 when constructing
MethodInfo.method_id, which limits services to 65,535 methods; add a guard and
document the limit: before pushing into methods (in the loop that builds
MethodInfo) assert or check that idx <= u16::MAX as usize (e.g. panic! or emit
compile_error! from the macro if exceeded) and update the MethodInfo or
surrounding macro docs to mention the 16-bit method_id limit so callers know the
constraint.
In `@mill-rpc/src/client.rs`:
- Around line 89-108: The pending request inserted into self.shared.pending
before sending is not removed if the subsequent client.send(...) fails, leaking
entries; update the send logic around Frame::request /
self.tcp_client.lock().unwrap().send(...) so that on any Err case you acquire
the same self.shared.pending mutex and remove the request_id
(pending.remove(&request_id)) before returning the RpcError; keep the insertion
only for the successful-send path or ensure removal in the error branch so
PendingRequest entries are cleaned up on send failure.
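The cleanup-on-failure shape requested above can be reduced to a few lines. The map of strings and the `send_ok` flag below are simplified stand-ins for the client's `PendingRequest` machinery and transport send.

```rust
use std::collections::HashMap;
use std::sync::Mutex;

// Register the pending request first, then remove it again if the send
// fails, so the pending map never leaks an entry on the error path.
fn call(
    pending: &Mutex<HashMap<u64, String>>,
    request_id: u64,
    send_ok: bool,
) -> Result<(), String> {
    pending
        .lock()
        .unwrap()
        .insert(request_id, String::from("waiting"));
    if !send_ok {
        // Send failed: clean up before returning the error.
        pending.lock().unwrap().remove(&request_id);
        return Err("send failed".into());
    }
    // In the real client, the response handler removes the entry later.
    Ok(())
}

fn main() {
    let pending = Mutex::new(HashMap::new());
    let _ = call(&pending, 1, false);
    println!("pending after failed send: {}", pending.lock().unwrap().len());
}
```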
In `@mill-rpc/src/server.rs`:
- Around line 244-254: The handler currently aborts when
self.codec.serialize(&rpc_err) fails, causing no response to be sent; update the
Err(e) arm to produce a fallback error payload and still send Frame::error:
capture the serialization error `e`, log it, then build fallback bytes (e.g., a
UTF-8 payload from `format!("serialization error: {}; original error: {}", e,
rpc_err)` or a simple JSON string) and pass those bytes into
`Frame::error(request_id, fallback_bytes)` so the client receives an error frame
even when `self.codec.serialize` fails.
- Around line 139-142: Replace the panic-prone .lock().unwrap() calls on the
conn_buffers mutex with poison-aware handling: when acquiring the lock on
self.conn_buffers (and the other similar mutex acquisitions at the other
occurrences), recover from a poisoned lock by calling into_inner() on the
PoisonError (e.g., use .lock().unwrap_or_else(|poisoned| poisoned.into_inner())
or match the Result and call poisoned.into_inner()), then proceed to insert/read
from the Vec as before; apply the same pattern to every place that currently
uses .lock().unwrap() (the occurrences around conn_buffers insert, read, and any
other mutex accesses) so a single thread panic won’t crash the server.
---
Duplicate comments:
In `@mill-rpc/examples/calculator_client.rs`:
- Around line 31-32: Replace the nondeterministic
thread::sleep(Duration::from_millis(50)) startup wait with a deterministic
readiness signal: have the server side send a readiness notification (e.g., via
a std::sync::mpsc channel, std::sync::Barrier, or tokio::sync::oneshot) and have
the client (where thread::sleep is used) block on receiving that signal before
proceeding; locate the sleep call in calculator_client.rs and change it to wait
on the chosen readiness primitive, and ensure the server sends the signal once
it has finished initialization and is ready to accept connections.
In `@mill-rpc/examples/echo_client.rs`:
- Around line 12-18: The Echo trait is duplicated across client and server
examples (trait Echo with methods echo, echo_uppercase, echo_repeat,
request_count), so extract the single source of truth by moving the trait
definition into a shared module/crate (e.g., a common lib or mod named
echo_proto) and have both client and server import that module; update the
client file (examples/echo_client.rs) to remove the local trait and reference
the shared Echo trait, ensuring the #[mill_rpc::service] macro is applied only
once in the shared definition or consistently used via the shared trait so
generated IDs and method signatures stay in sync.
In `@mill-rpc/examples/kv_client.rs`:
- Around line 26-30: Replace the brittle thread::sleep-based startup sync with a
deterministic handshake: create a std::sync::mpsc::channel (ready_tx, ready_rx)
before spawning the thread, move ready_tx into the thread that currently calls
el.run(), call ready_tx.send(()) just before el.run() to signal the loop is
ready, and in the parent replace thread::sleep(...) with ready_rx.recv() or
recv_timeout to wait for the readiness signal; update references to el, handle,
and the thread::spawn block accordingly.
In `@mill-rpc/mill-rpc-core/src/codec.rs`:
- Around line 31-32: The rustfmt failure comes from overly long chained
expressions mapping bincode errors into RpcError; update the mapping in the
bincode::serialize(...) call (and the similar mapping around the other
serialize/deserialize lines) to a more compact, rustfmt-friendly layout by
breaking the map_err closure onto its own line and/or using a helper like let
err = format!(...) then RpcError::codec_error(err) (i.e., split the closure into
multiple lines or assign the formatted message to a variable before returning
RpcError::codec_error) so the expressions involving
bincode::serialize(value).map_err(|e| RpcError::codec_error(format!("serialize:
{}", e))) and the corresponding deserialize mapping are wrapped and formatted to
satisfy cargo fmt.
In `@mill-rpc/mill-rpc-core/src/protocol.rs`:
- Around line 234-241: The encode() method currently trusts header.payload_len
and can emit a header that doesn't match self.payload; fix by recomputing
payload_len from self.payload.len() before encoding: make a local mutable copy
of self.header (e.g., let mut hdr = self.header.clone()), set hdr.payload_len =
self.payload.len() as _ (matching the header field type), then call hdr.encode()
and append self.payload as before (use the existing self.header.encode() call
only on the updated copy); this ensures the header bytes produced by encode()
always reflect the actual payload length.
In `@mill-rpc/src/client.rs`:
- Around line 34-40: The RpcClient struct has an unused field `codec` which
causes compilation warnings; to fix minimally without changing the public API,
mark the field as intentionally unused by renaming it to `_codec` or annotate it
with #[allow(dead_code)] so Clippy stops complaining; apply the same change to
the other occurrence referenced (the second struct/field at lines 66-71) and
ensure the identifier references remain consistent with the struct name
`RpcClient` and the field name `codec` so no other code needs updating.
- Around line 5-10: The imports RpcStatus and HEADER_SIZE are unused in
mill-rpc/src/client.rs and trigger clippy warnings; remove RpcStatus from the
"use crate::{Codec, RpcError, RpcStatus, RpcTransport};" statement and remove
HEADER_SIZE from the "use mill_rpc_core::protocol::{self, Frame, MessageType,
HEADER_SIZE};" statement (or consolidate those use lines) so only used symbols
(Codec, RpcError, RpcTransport, protocol, Frame, MessageType) are imported;
ensure no other code references RpcStatus or HEADER_SIZE before removing.
- Around line 75-78: The set_timeout method currently takes &mut self which
cannot be called on an Arc<RpcClient> returned by connect; change the RpcClient
timeout storage to an interior-mutable type (e.g., AtomicU64 storing
timeout.as_millis()/secs or a Mutex<Duration>) and make pub fn
set_timeout(&self, timeout: Duration) update that interior cell; update the
RpcClient struct's timeout field (replace current timeout: Duration) and update
any reads of timeout (e.g., in call/send methods) to load/lock from the new
atomic/Mutex-based timeout accessor.
In `@mill-rpc/src/server.rs`:
- Line 12: Remove the unused imports FrameHeader and HEADER_SIZE from the use
declaration that currently imports mill_rpc_core::protocol::{self, Frame,
FrameHeader, MessageType, HEADER_SIZE}; — update it to only import the symbols
actually used (e.g., protocol, Frame, MessageType) so the unused FrameHeader and
HEADER_SIZE identifiers are dropped to satisfy clippy.
- Around line 81-103: The build() method currently uses service_map.insert(...)
which silently overwrites duplicate IDs and service_with_id() doesn't advance
next_service_id, causing conflicts; update service_with_id<S:
ServiceDispatch>(mut self, id: u16, svc: S) to advance self.next_service_id =
id.saturating_add(1).max(self.next_service_id) (or set to id+1 when id >=
next_service_id) after pushing the service to ensure future auto-assigned IDs
don't collide, and change build() to check for duplicate IDs when inserting into
service_map (e.g., check if service_map.contains_key(&id) and return an Err
describing the duplicate ID instead of overwriting) so RegisteredService entries
are never silently replaced.
- Around line 209-216: When parse_request_payload() fails inside handle_request,
send an error response for two-way requests instead of silently returning: call
whatever path you use to write frames on ServerContext/connection (using
conn_id) and emit an error frame indicating a malformed request; since
request_id can't be recovered, use a sentinel request_id (e.g. 0) and sentinel
service/method ids if needed, include the parse error text in the frame payload,
or alternatively close the connection for one-way detection; update the Err(e)
branch in handle_request (where parse_request_payload() is matched) to log the
error and then send the sentinel error frame for two-way frames (check Frame
methods or flags to detect two-way) before returning.
In `@mill-rpc/tests/integration_test.rs`:
- Around line 83-103: The test currently only starts/stops the server and
ignores server build failures; change it to assert the Server build succeeds (do
not swallow Err from RpcServer::builder), obtain the bound socket address from
the built server, create a client that connects to that address, perform at
least one RPC call against the Calculator service (using
CalculatorDispatcher/MyCalculator) and assert the returned result matches
expected value, then cleanly stop the event_loop and join the thread; ensure any
failure in building the server, connecting the client, or the RPC call causes
the test to panic/fail rather than be printed to stderr.
ℹ️ Review info
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (26)
- Cargo.toml
- README.md
- mill-rpc/Cargo.toml
- mill-rpc/README.md
- mill-rpc/examples/calculator_client.rs
- mill-rpc/examples/calculator_server.rs
- mill-rpc/examples/concurrent_clients.rs
- mill-rpc/examples/echo_client.rs
- mill-rpc/examples/echo_server.rs
- mill-rpc/examples/kv_client.rs
- mill-rpc/examples/kv_server.rs
- mill-rpc/examples/multi_service_client.rs
- mill-rpc/examples/multi_service_server.rs
- mill-rpc/mill-rpc-core/Cargo.toml
- mill-rpc/mill-rpc-core/src/codec.rs
- mill-rpc/mill-rpc-core/src/context.rs
- mill-rpc/mill-rpc-core/src/error.rs
- mill-rpc/mill-rpc-core/src/lib.rs
- mill-rpc/mill-rpc-core/src/protocol.rs
- mill-rpc/mill-rpc-macros/Cargo.toml
- mill-rpc/mill-rpc-macros/src/lib.rs
- mill-rpc/src/client.rs
- mill-rpc/src/lib.rs
- mill-rpc/src/prelude.rs
- mill-rpc/src/server.rs
- mill-rpc/tests/integration_test.rs
mill-rpc/mill-rpc-macros/src/lib.rs
Outdated
```rust
        methods.push(MethodInfo {
            name,
            method_id: idx as u16,
            args,
            return_type,
        });
    }
```
🧹 Nitpick | 🔵 Trivial
Consider documenting the method count limit.
The method ID is cast from usize to u16, limiting services to 65,535 methods. While practically sufficient, consider adding a compile-time check or documenting this constraint.
📝 Optional: Add overflow check
```diff
+    if idx > u16::MAX as usize {
+        return Err(syn::Error::new_spanned(
+            &method.sig.ident,
+            "Service exceeds maximum method count (65535)",
+        ));
+    }
+
     methods.push(MethodInfo {
         name,
         method_id: idx as u16,
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```rust
        if idx > u16::MAX as usize {
            return Err(syn::Error::new_spanned(
                &method.sig.ident,
                "Service exceeds maximum method count (65535)",
            ));
        }

        methods.push(MethodInfo {
            name,
            method_id: idx as u16,
            args,
            return_type,
        });
    }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@mill-rpc/mill-rpc-macros/src/lib.rs` around lines 88 - 94, The code casts idx
(usize) to u16 when constructing MethodInfo.method_id, which limits services to
65,535 methods; add a guard and document the limit: before pushing into methods
(in the loop that builds MethodInfo) assert or check that idx <= u16::MAX as
usize (e.g. panic! or emit compile_error! from the macro if exceeded) and update
the MethodInfo or surrounding macro docs to mention the 16-bit method_id limit
so callers know the constraint.
mill-rpc/mill-rpc-macros/src/lib.rs
Outdated
```rust
        let req_construct = if m.args.is_empty() {
            quote! { #req_name }
        } else {
            quote! { #req_name { #( #arg_names: #arg_names, )* } }
        };
```
🧹 Nitpick | 🔵 Trivial
Use struct field shorthand.
When the field name matches the variable name, Rust allows shorthand initialization.
♻️ Simplify field initialization
```diff
     let req_construct = if m.args.is_empty() {
         quote! { #req_name }
     } else {
-        quote! { #req_name { #( #arg_names: #arg_names, )* } }
+        quote! { #req_name { #( #arg_names, )* } }
     };
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```rust
        let req_construct = if m.args.is_empty() {
            quote! { #req_name }
        } else {
            quote! { #req_name { #( #arg_names, )* } }
        };
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@mill-rpc/mill-rpc-macros/src/lib.rs` around lines 194 - 198, The generated
struct construction in lib.rs uses explicit field initializers (#( `#arg_names`:
`#arg_names`, )*) even when the field names equal variable names; change the else
branch that builds req_construct to use Rust struct field shorthand by emitting
#( `#arg_names`, )* instead of repetitive name: name while keeping the same
branches and identifiers (req_construct, m.args, req_name, arg_names, quote!).
```rust
        self.conn_buffers
            .lock()
            .unwrap()
            .insert(conn_id.as_u64(), Vec::new());
```
🧹 Nitpick | 🔵 Trivial
Consider graceful handling of poisoned locks.
Multiple .unwrap() calls on lock acquisition (lines 141, 148, 181, 224) will panic if a prior holder panicked. In a server context, this can cascade a single request's panic into a full server crash.
Example: recover from poisoned lock
```diff
-        self.conn_buffers
-            .lock()
-            .unwrap()
-            .insert(conn_id.as_u64(), Vec::new());
+        self.conn_buffers
+            .lock()
+            .unwrap_or_else(|poisoned| {
+                log::warn!("Recovered from poisoned conn_buffers lock");
+                poisoned.into_inner()
+            })
+            .insert(conn_id.as_u64(), Vec::new());
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```rust
        self.conn_buffers
            .lock()
            .unwrap_or_else(|poisoned| {
                log::warn!("Recovered from poisoned conn_buffers lock");
                poisoned.into_inner()
            })
            .insert(conn_id.as_u64(), Vec::new());
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@mill-rpc/src/server.rs` around lines 139 - 142, Replace the panic-prone
.lock().unwrap() calls on the conn_buffers mutex with poison-aware handling:
when acquiring the lock on self.conn_buffers (and the other similar mutex
acquisitions at the other occurrences), recover from a poisoned lock by calling
into_inner() on the PoisonError (e.g., use .lock().unwrap_or_else(|poisoned|
poisoned.into_inner()) or match the Result and call poisoned.into_inner()), then
proceed to insert/read from the Vec as before; apply the same pattern to every
place that currently uses .lock().unwrap() (the occurrences around conn_buffers
insert, read, and any other mutex accesses) so a single thread panic won’t crash
the server.
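The poison-recovery pattern can be verified end to end in a standalone program. The `lock_recover` helper below is a hypothetical name; only the `unwrap_or_else(|p| p.into_inner())` idiom is taken from the review.

```rust
use std::sync::{Arc, Mutex, MutexGuard};
use std::thread;

// Poison-aware lock helper: if another thread panicked while holding the
// lock, recover the guard from the PoisonError instead of panicking too.
fn lock_recover<T>(m: &Mutex<T>) -> MutexGuard<'_, T> {
    m.lock().unwrap_or_else(|poisoned| poisoned.into_inner())
}

fn main() {
    let data = Arc::new(Mutex::new(vec![1u64]));
    let d2 = Arc::clone(&data);

    // Poison the mutex by panicking while holding it.
    let _ = thread::spawn(move || {
        let _guard = d2.lock().unwrap();
        panic!("poisoning the lock");
    })
    .join();

    // A plain .lock().unwrap() here would panic; the helper recovers.
    let mut v = lock_recover(&data);
    v.push(2);
    println!("{:?}", *v);
}
```

Note that poisoning only signals that an invariant *may* be broken; recovering is appropriate here because the buffers are per-connection and a partial buffer just produces a parse error later.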
```rust
            Err(rpc_err) => {
                let err_bytes = match self.codec.serialize(&rpc_err) {
                    Ok(b) => b,
                    Err(e) => {
                        log::error!("Failed to serialize error: {}", e);
                        return;
                    }
                };
                Frame::error(request_id, err_bytes)
            }
        };
```
Serialization failure drops the response silently.
If self.codec.serialize(&rpc_err) fails (line 245), the handler logs and returns without sending any frame. The client will timeout. Consider a fallback error encoding.
Proposed fix - fallback to simple error message
```diff
             Err(rpc_err) => {
                 let err_bytes = match self.codec.serialize(&rpc_err) {
                     Ok(b) => b,
                     Err(e) => {
                         log::error!("Failed to serialize error: {}", e);
-                        return;
+                        // Fallback: send the error message as raw UTF-8
+                        rpc_err.to_string().into_bytes()
                     }
                 };
                 Frame::error(request_id, err_bytes)
             }
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```rust
            Err(rpc_err) => {
                let err_bytes = match self.codec.serialize(&rpc_err) {
                    Ok(b) => b,
                    Err(e) => {
                        log::error!("Failed to serialize error: {}", e);
                        // Fallback: send the error message as raw UTF-8
                        rpc_err.to_string().into_bytes()
                    }
                };
                Frame::error(request_id, err_bytes)
            }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@mill-rpc/src/server.rs` around lines 244 - 254, The handler currently aborts
when self.codec.serialize(&rpc_err) fails, causing no response to be sent;
update the Err(e) arm to produce a fallback error payload and still send
Frame::error: capture the serialization error `e`, log it, then build fallback
bytes (e.g., a UTF-8 payload from `format!("serialization error: {}; original
error: {}", e, rpc_err)` or a simple JSON string) and pass those bytes into
`Frame::error(request_id, fallback_bytes)` so the client receives an error frame
even when `self.codec.serialize` fails.
Actionable comments posted: 9
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
Cargo.toml (1)
18-22: 🧹 Nitpick | 🔵 Trivial

Consider adding shared dependencies to `[workspace.dependencies]`.

The mill-rpc crates share common dependencies (`serde`, `bincode`) that could be centralized in `[workspace.dependencies]` for consistent versioning across the workspace.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Cargo.toml` around lines 18 - 22, Add the shared dependencies serde and bincode to the [workspace.dependencies] table so all mill-rpc crates use a centralized version; update the workspace section (the existing [workspace.dependencies] block that currently lists mio, lock_freedom, and mill-io) to include entries for serde and bincode with explicit versions (and any required features like serde = { version = "...", features = ["derive"] }) so downstream mill-rpc crates can remove individual version pins and rely on the workspace-resolved versions.
♻️ Duplicate comments (8)
mill-rpc/mill-rpc-macros/src/lib.rs (1)
161-173: ⚠️ Potential issue | 🟡 Minor

Guard `u16` method ID/count bounds before casting.

Line 161 and Line 172 cast `usize` to `u16` without bounds checks. Services above 65,535 methods will silently truncate IDs and metadata.

🔧 Add explicit macro-time validation

```diff
 fn generate_service_module(def: ServiceDef) -> syn::Result<proc_macro2::TokenStream> {
+    if def.methods.len() > u16::MAX as usize {
+        return Err(syn::Error::new_spanned(
+            &def.name,
+            "Service exceeds maximum method count (65535)",
+        ));
+    }
+
     let mod_name = format_ident!("{}", to_snake_case(&def.name.to_string()));
     let service_name_str = def.name.to_string();
     let method_count = def.methods.len() as u16;
```

Also applies to: 373-373
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@mill-rpc/mill-rpc-macros/src/lib.rs` around lines 161 - 173, The code casts method counts/indices from usize to u16 (e.g., computing method_count and ids in the method_consts map) without bounds checks, which will truncate values for services with >65,535 methods; add explicit validation at macro-expansion time: check def.methods.len() fits in u16 and return a compile_error! or panic with a clear message if not, and likewise validate each index before casting in the mapping that produces method_consts (referencing method_count, method_consts, def.methods, and GenerateMode to find the surrounding logic); ensure both the overall count and per-method id conversions are guarded so no silent truncation occurs.

mill-rpc/examples/concurrent_clients.rs (1)
82-87: ⚠️ Potential issue | 🟡 Minor

Avoid panic in completion logging for empty result sets.

Line 85 and Line 86 unwrap optional values and will panic if the request count is ever 0.

🔧 Safe logging
```diff
-    println!(
-        "  Client {} done: first={}, last={}",
-        client_id,
-        results.first().unwrap(),
-        results.last().unwrap()
-    );
+    if let (Some(first), Some(last)) = (results.first(), results.last()) {
+        println!("  Client {} done: first={}, last={}", client_id, first, last);
+    } else {
+        println!("  Client {} done: no requests sent", client_id);
+    }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@mill-rpc/examples/concurrent_clients.rs` around lines 82 - 87, The current println! uses results.first().unwrap() and results.last().unwrap() which will panic if results is empty; change the logging to handle empty result sets safely by replacing the unwraps with non-panicking checks (e.g., map/unwrap_or or pattern matching) so you print a safe placeholder like "none" or "0" when results has length 0; update the call site using the same identifiers (println!, client_id, results.first(), results.last()) to avoid panics.

mill-rpc/mill-rpc-core/src/protocol.rs (1)
141-143: ⚠️ Potential issue | 🔴 Critical

Frame invariants are unenforced; malformed wire output is possible.

Line 168/185/202 performs unchecked narrowing casts, and Line 235 encodes whatever is in `header.payload_len` even if it disagrees with `payload.len()`. With public fields (Line 142-143), callers can create desynchronizing frames.

🔒 Proposed hardening
```diff
 impl Frame {
+    fn validate_payload_len(len: usize) -> Result<u32, RpcError> {
+        if len > MAX_PAYLOAD_SIZE as usize {
+            return Err(RpcError::invalid_argument(format!(
+                "Payload too large: {} > {}",
+                len, MAX_PAYLOAD_SIZE
+            )));
+        }
+        Ok(len as u32)
+    }
+
     pub fn request(
         request_id: u64,
         service_id: u16,
         method_id: u16,
         args: Vec<u8>,
         one_way: bool,
-    ) -> Self {
+    ) -> Result<Self, RpcError> {
 @@
-        Frame {
+        Ok(Frame {
             header: FrameHeader {
                 version: VERSION,
                 flags,
                 message_type: MessageType::Request,
-                payload_len: payload.len() as u32,
+                payload_len: Self::validate_payload_len(payload.len())?,
             },
             payload,
-        }
+        })
     }
 @@
-    pub fn encode(&self) -> Vec<u8> {
+    pub fn encode(&self) -> Result<Vec<u8>, RpcError> {
+        let payload_len = Self::validate_payload_len(self.payload.len())?;
+        if self.header.payload_len != payload_len {
+            return Err(RpcError::invalid_argument("Header/payload length mismatch"));
+        }
         let header_bytes = self.header.encode();
         let mut buf = Vec::with_capacity(HEADER_SIZE + self.payload.len());
         buf.extend_from_slice(&header_bytes);
         buf.extend_from_slice(&self.payload);
-        buf
+        Ok(buf)
     }
 }
```

Also applies to: 168-169, 185-186, 202-203, 235-240
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@mill-rpc/mill-rpc-core/src/protocol.rs` around lines 141 - 143, Make Frame/FrameHeader fields private and enforce invariants: ensure any constructor/decoder for Frame validates payload.len() matches header.payload_len and replace unchecked narrowing casts with checked conversions (e.g. TryFrom/try_into or explicit bounds checks) in the functions that currently cast sizes (the places around the existing casts you saw). Modify the Frame::encode (or equivalent encoder) to always encode the actual payload length (set header.payload_len = payload.len() after validation) or return an Err if lengths disagree, and update any public constructors/decoders to return Result so malformed inputs are rejected rather than producing desynchronized wire data.

mill-rpc/src/client.rs (2)
95-101: ⚠️ Potential issue | 🔴 Critical

Clean up pending entry when send fails.

If `send` errors, the inserted pending request is leaked and never resolved.

🔧 Proposed fix
```diff
     let frame = Frame::request(request_id, service_id, method_id, payload, false);
-    {
-        let client = self.tcp_client.lock().unwrap();
-        client
-            .send(&frame.encode())
-            .map_err(|e| RpcError::unavailable(format!("Send failed: {}", e)))?;
-    }
+    let send_result = {
+        let client = self.tcp_client.lock().unwrap();
+        client.send(&frame.encode())
+    };
+    if let Err(e) = send_result {
+        self.shared.pending.lock().unwrap().remove(&request_id);
+        return Err(RpcError::unavailable(format!("Send failed: {}", e)));
+    }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@mill-rpc/src/client.rs` around lines 95 - 101, The send failure path leaks the pending request entry; when Frame::request is created and you attempt client.send(...) inside the tcp_client lock, ensure that on Err you remove the corresponding pending entry (e.g., from the pending requests map keyed by request_id) before returning the RpcError. Concretely, after the send .map_err branch fails, acquire the same pending collection (e.g., self.pending.lock().unwrap()) and call .remove(&request_id) (or equivalent) to clean up, then propagate the error.
69-72: ⚠️ Potential issue | 🟠 Major

`set_timeout` is effectively unusable after `connect()` returns `Arc<Self>`.

`connect()` hands out shared ownership, but `set_timeout` requires exclusive mutable access. Make the timeout internally mutable and expose `set_timeout(&self, ...)`.

🔧 Proposed fix
```diff
 pub struct RpcClient {
     tcp_client: Mutex<TcpClient<RpcClientHandler>>,
     shared: Arc<ClientShared>,
     next_request_id: AtomicU64,
-    timeout: Duration,
+    timeout: Mutex<Duration>,
 }
 @@
     Ok(Arc::new(Self {
         tcp_client: Mutex::new(tcp_client),
         shared,
         next_request_id: AtomicU64::new(1),
-        timeout: Duration::from_secs(30),
+        timeout: Mutex::new(Duration::from_secs(30)),
     }))
 @@
-    pub fn set_timeout(&mut self, timeout: Duration) {
-        self.timeout = timeout;
+    pub fn set_timeout(&self, timeout: Duration) {
+        *self.timeout.lock().unwrap() = timeout;
     }
 @@
-    let deadline = std::time::Instant::now() + self.timeout;
+    let timeout = *self.timeout.lock().unwrap();
+    let deadline = std::time::Instant::now() + timeout;
 @@
-        request_id, self.timeout
+        request_id, timeout
 @@
-        request_id, self.timeout
+        request_id, timeout
```

```bash
#!/bin/bash
# Verify Arc-returning constructor + mutable setter mismatch.
rg -nP --type rust 'pub fn connect\([^)]*\)\s*->\s*Result<Arc<Self>,\s*RpcError>' mill-rpc/src/client.rs
rg -nP --type rust 'pub fn set_timeout\(&mut self,\s*timeout:\s*Duration\)' mill-rpc/src/client.rs
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@mill-rpc/src/client.rs` around lines 69 - 72, The setter set_timeout currently takes &mut self but connect(...) returns Arc<Self>, making it unusable; change the client struct's timeout field to an interior-mutable type (e.g., std::sync::Mutex<Duration> or RwLock<Duration> / AtomicU64 with Duration converted to millis) and update set_timeout signature to pub fn set_timeout(&self, timeout: Duration) to lock and update the inner value; update any direct reads of self.timeout to acquire the lock (or load the atomic) and preserve semantics, keeping the method name set_timeout and referencing the existing connect and timeout symbols so callers using Arc<Self> can change the timeout after connect returns.

mill-rpc/src/server.rs (3)
245-253: ⚠️ Potential issue | 🟠 Major

Use a stable wire encoding for `RpcError` and always send an error frame.

The client decodes error frames with bincode (`mill-rpc/src/client.rs`, Line 229), but the server uses `self.codec` here. Non-bincode codecs will break error decoding. Also, the current serialize-failure path drops the response.

🔧 Proposed fix
```diff
     let response_frame = match result {
         Ok(resp_bytes) => Frame::response(request_id, resp_bytes),
         Err(rpc_err) => {
-            let err_bytes = match self.codec.serialize(&rpc_err) {
-                Ok(b) => b,
-                Err(e) => {
-                    log::error!("Failed to serialize error: {}", e);
-                    return;
-                }
-            };
+            let err_bytes = bincode::serialize(&rpc_err).unwrap_or_else(|e| {
+                log::error!("Failed to serialize RpcError: {}", e);
+                format!("serialization error: {}; original error: {}", e, rpc_err).into_bytes()
+            });
             Frame::error(request_id, err_bytes)
         }
     };
```

```bash
#!/bin/bash
# Verify error payload codec mismatch across client/server.
rg -n --type rust 'bincode::deserialize\(err_data\)' mill-rpc/src/client.rs
rg -n --type rust 'self\.codec\.serialize\(&rpc_err\)' mill-rpc/src/server.rs
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@mill-rpc/src/server.rs` around lines 245 - 253, The server currently uses self.codec.serialize(&rpc_err) inside the error response path (see RpcError, Frame::error) which can produce a codec mismatch with the client that decodes error frames with bincode; update the error-path to always send an error frame by first attempting self.codec.serialize(&rpc_err) and, on Err(e), log the serialization failure and fall back to serializing rpc_err with bincode (or otherwise a stable wire encoding) so Frame::error(request_id, err_bytes) always returns valid bincode-decoded bytes to the client; ensure the fallback also includes the original serialization error details in the logged message.
210-215: ⚠️ Potential issue | 🟠 Major

Malformed two-way requests should get an error frame, not a timeout.

On parse failure the handler exits early. For non-one-way calls, respond with `Frame::error` so callers fail fast.

🔧 Proposed fix
```diff
     let (request_id, service_id, method_id, args) = match frame.parse_request_payload() {
         Ok(parsed) => parsed,
         Err(e) => {
             log::error!("Invalid request payload from {:?}: {}", conn_id, e);
+            if !frame.header.flags.is_one_way() {
+                let request_id = frame
+                    .payload
+                    .get(..8)
+                    .and_then(|b| b.try_into().ok())
+                    .map(u64::from_le_bytes)
+                    .unwrap_or(0);
+                let rpc_err = RpcError::internal(format!("Invalid request payload: {}", e));
+                let err_bytes = bincode::serialize(&rpc_err)
+                    .unwrap_or_else(|_| rpc_err.to_string().into_bytes());
+                let _ = ctx.send_to(conn_id, &Frame::error(request_id, err_bytes).encode());
+            }
             return;
         }
     };
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@mill-rpc/src/server.rs` around lines 210 - 215, The handler currently bails on parse_request_payload errors, causing callers of two-way requests to hang; instead, inside the Err(e) branch of let (request_id, service_id, method_id, args) = match frame.parse_request_payload() in the request handling path, check whether the incoming frame is one-way or not and for non-one-way calls write back a Frame::error containing the error details (and any available request identifier/header info) to the connection before returning; for one-way frames retain the no-response behavior. Target symbols: parse_request_payload, Frame::error, and the request handling block where conn_id is logged.
100-103: ⚠️ Potential issue | 🟠 Major

Reject duplicate service IDs instead of overwriting registrations.

Current `insert` semantics silently replace an existing dispatcher, causing wrong runtime dispatch.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@mill-rpc/src/server.rs` around lines 100 - 103, The loop that builds service_map currently uses service_map.insert(id, RegisteredService { dispatcher }) which silently overwrites duplicate IDs; change it to detect duplicates (e.g., use service_map.entry(id) or service_map.contains_key(&id)) and reject them instead of inserting—return an error (or propagate a Result) when a duplicate id is encountered so registrations are not overwritten; update the surrounding function signature/return path as needed to surface the duplicate-id error and reference service_map, RegisteredService, and self.services when making the change.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@mill-rpc/Cargo.toml`:
- Around line 1-15: In the mill-rpc Cargo.toml package section for crate
"mill-rpc", remove the explicit package fields version, edition, authors, and
license so they are inherited from [workspace.package]; leave only name and
crate-specific fields like description and local path dependencies, and ensure
you don't reintroduce those fields elsewhere in this crate's Cargo.toml.
In `@mill-rpc/examples/concurrent_clients.rs`:
- Around line 55-75: The fixed thread::sleep calls before starting the event
loop and before calling RpcClient::connect can race and cause unwrap() panics;
replace them with bounded readiness/retry logic: after spawning the EventLoop
(EventLoop::new and el_thread) poll or attempt RpcClient::connect in a loop with
a short backoff and total timeout, only proceeding when RpcClient::connect
returns Ok (or break with a clear error after timeout); ensure the event loop
thread (el_thread) is running by checking the client_el readiness if available
or retrying the connect call rather than sleeping, and apply the same bounded
retry pattern for any other unwrap()s around connect/transport creation to avoid
flaky startup.
In `@mill-rpc/examples/kv_client.rs`:
- Around line 12-22: The generated Client struct for services that define fn
len() -> u64 triggers Clippy's len_without_is_empty; update the macro in
mill-rpc-macros/src/lib.rs that emits the Client struct (the code path that
expands the service into the Client type) to add
#[allow(clippy::len_without_is_empty)] above the generated Client struct
declaration so the lint is suppressed, or alternatively make the macro
auto-generate an is_empty(&self) -> bool method (returning self.len() == 0)
whenever it sees a len() -> u64 service method; modify the macro branch that
handles method list generation to detect len and either emit the allow attribute
for struct Client or emit the is_empty method alongside len to satisfy Clippy.
In `@mill-rpc/examples/kv_server.rs`:
- Around line 35-78: The methods on KvStore currently call
self.data.read().unwrap() / write().unwrap() which will panic on a poisoned
RwLock; update every occurrence in get, set, delete, keys, len, and clear to
handle poisoning by using unwrap_or_else(|poisoned| poisoned.into_inner()) when
acquiring the read/write guard (so you recover the inner data instead of
panicking) and keep the rest of each method unchanged; ensure you change both
read() and write() calls across all listed methods so the server handles
poisoned locks gracefully.
In `@mill-rpc/mill-rpc-core/Cargo.toml`:
- Around line 1-9: The crate's Cargo.toml duplicates package metadata that
should be inherited from the workspace; update mill-rpc-core's [package] block
to rely on workspace.package by removing the duplicated fields (remove version
and edition lines and do not add license/authors here) so only crate-specific
fields like name and description remain and let the root workspace provide
version/edition/license/authors; keep the existing dependencies and ensure the
package block still exists with name "mill-rpc-core".
In `@mill-rpc/mill-rpc-core/src/lib.rs`:
- Around line 15-16: Update the documentation comment that currently references
the attribute-style macro `#[mill_rpc::service]` to the correct invocation form
`mill_rpc::service! { ... }` (or `mill_rpc::service!()` if preferred) so it
accurately reflects how to invoke the macro; edit the doc string that mentions
`#[mill_rpc::service]` in lib.rs to replace that token with `mill_rpc::service!
{ ... }` and ensure surrounding text still reads correctly with the new macro
form (referencing `mill_rpc::service!` and the generated `{Service}Server`
trait).
In `@mill-rpc/mill-rpc-macros/Cargo.toml`:
- Around line 1-13: The package manifest for mill-rpc-macros currently hardcodes
package metadata; update the [package] block for the package named
"mill-rpc-macros" to inherit common fields from the workspace by replacing
explicit fields with workspace inheritance (e.g., set version, edition, authors,
and license to use workspace = true or remove duplicates so Cargo pulls them
from [workspace.package]); keep the package-specific name and description and
ensure proc-macro = true remains under [lib].
In `@mill-rpc/README.md`:
- Around line 43-54: Add a brief inline comment in the README example next to
EventLoop::new(4, 1024, 100) explaining what each numeric argument represents
(e.g., thread count, max connections/queue size, timeout or max events) so
readers understand the configuration; update the line in the main example where
EventLoop::new(...) is called (the EventLoop::new(4, 1024, 100) invocation
inside fn main) with a short comment describing each parameter.
In `@mill-rpc/tests/integration_test.rs`:
- Around line 38-47: The test builds the bincode payload manually using
to_le_bytes(), which assumes bincode internals; instead use the test's Codec
instance to produce the payload: call codec.serialize(&AddRequest { a: 2, b: 3
}) (or serialize a tuple/tuple struct matching the RPC parameter shape) to
produce payload, then pass that payload into dispatcher.dispatch(&ctx,
calculator::methods::ADD, &payload, &codec) and keep using codec.deserialize for
the result; update the local payload variable and remove the manual
to_le_bytes() calls so the test relies on codec.serialize rather than hardcoded
bytes.
ℹ️ Review info
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (26)
- Cargo.toml
- README.md
- mill-rpc/Cargo.toml
- mill-rpc/README.md
- mill-rpc/examples/calculator_client.rs
- mill-rpc/examples/calculator_server.rs
- mill-rpc/examples/concurrent_clients.rs
- mill-rpc/examples/echo_client.rs
- mill-rpc/examples/echo_server.rs
- mill-rpc/examples/kv_client.rs
- mill-rpc/examples/kv_server.rs
- mill-rpc/examples/multi_service_client.rs
- mill-rpc/examples/multi_service_server.rs
- mill-rpc/mill-rpc-core/Cargo.toml
- mill-rpc/mill-rpc-core/src/codec.rs
- mill-rpc/mill-rpc-core/src/context.rs
- mill-rpc/mill-rpc-core/src/error.rs
- mill-rpc/mill-rpc-core/src/lib.rs
- mill-rpc/mill-rpc-core/src/protocol.rs
- mill-rpc/mill-rpc-macros/Cargo.toml
- mill-rpc/mill-rpc-macros/src/lib.rs
- mill-rpc/src/client.rs
- mill-rpc/src/lib.rs
- mill-rpc/src/prelude.rs
- mill-rpc/src/server.rs
- mill-rpc/tests/integration_test.rs
In `mill-rpc/mill-rpc-core/Cargo.toml`:

```toml
[package]
name = "mill-rpc-core"
version = "0.1.0"
edition = "2021"
description = "Core types, wire protocol, and codec traits for Mill-RPC"

[dependencies]
serde = { version = "1", features = ["derive"] }
bincode = "1"
```
🧹 Nitpick | 🔵 Trivial
Consider inheriting workspace package fields for consistency.
The crate defines its own version, edition, and lacks license/authors fields, while the root Cargo.toml defines [workspace.package] with these values. Using inheritance ensures consistency across workspace crates.
♻️ Suggested change

```diff
 [package]
 name = "mill-rpc-core"
-version = "0.1.0"
-edition = "2021"
+version.workspace = true
+edition.workspace = true
+authors.workspace = true
+license.workspace = true
 description = "Core types, wire protocol, and codec traits for Mill-RPC"
```
In `mill-rpc/mill-rpc-core/src/lib.rs`:

```rust
/// This is auto-implemented by the `#[mill_rpc::service]` macro for any type
/// that implements the generated `{Service}Server` trait.
```
Fix macro syntax in documentation.
Line 15 references #[mill_rpc::service], but this macro is invoked as mill_rpc::service! { ... }.
📝 Suggested doc fix

```diff
-/// This is auto-implemented by the `#[mill_rpc::service]` macro for any type
+/// This is auto-generated by the `mill_rpc::service!` macro for any type
 /// that implements the generated `{Service}Server` trait.
```
In `mill-rpc/mill-rpc-macros/Cargo.toml`:

```toml
[package]
name = "mill-rpc-macros"
version = "0.1.0"
edition = "2021"
description = "Proc macros for Mill-RPC service definitions"

[lib]
proc-macro = true

[dependencies]
syn = { version = "2", features = ["full"] }
quote = "1"
proc-macro2 = "1"
```
🧹 Nitpick | 🔵 Trivial
Consider inheriting workspace package fields.
For consistency with the workspace, inherit version, edition, authors, and license from [workspace.package].
♻️ Suggested change

```diff
 [package]
 name = "mill-rpc-macros"
-version = "0.1.0"
-edition = "2021"
+version.workspace = true
+edition.workspace = true
+authors.workspace = true
+license.workspace = true
 description = "Proc macros for Mill-RPC service definitions"
```
In `mill-rpc/README.md`:

```rust
fn main() {
    let event_loop = Arc::new(EventLoop::new(4, 1024, 100).unwrap());

    let _server = RpcServer::builder()
        .bind("127.0.0.1:9001".parse().unwrap())
        .service(calculator::server(MyCalc))
        .build(&event_loop)
        .unwrap();

    event_loop.run().unwrap();
}
```
🧹 Nitpick | 🔵 Trivial
Consider documenting the EventLoop parameters.
The example uses EventLoop::new(4, 1024, 100) without explaining what these parameters represent. Adding a brief inline comment would help readers understand the configuration.
📝 Suggested improvement

```diff
 fn main() {
-    let event_loop = Arc::new(EventLoop::new(4, 1024, 100).unwrap());
+    // 4 worker threads, 1024 max events per poll, 100ms timeout
+    let event_loop = Arc::new(EventLoop::new(4, 1024, 100).unwrap());
```
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```rust
fn main() {
    // 4 worker threads, 1024 max events per poll, 100ms timeout
    let event_loop = Arc::new(EventLoop::new(4, 1024, 100).unwrap());

    let _server = RpcServer::builder()
        .bind("127.0.0.1:9001".parse().unwrap())
        .service(calculator::server(MyCalc))
        .build(&event_loop)
        .unwrap();

    event_loop.run().unwrap();
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@mill-rpc/README.md` around lines 43 - 54, Add a brief inline comment in the
README example next to EventLoop::new(4, 1024, 100) explaining what each numeric
argument represents (e.g., thread count, max connections/queue size, timeout or
max events) so readers understand the configuration; update the line in the main
example where EventLoop::new(...) is called (the EventLoop::new(4, 1024, 100)
invocation inside fn main) with a short comment describing each parameter.
```rust
// bincode serialization of AddRequest { a: 2, b: 3 }: two i32 LE
let mut payload = Vec::new();
payload.extend_from_slice(&2i32.to_le_bytes());
payload.extend_from_slice(&3i32.to_le_bytes());

let result_bytes = dispatcher
    .dispatch(&ctx, calculator::methods::ADD, &payload, &codec)
    .unwrap();
let result: i32 = codec.deserialize(&result_bytes).unwrap();
assert_eq!(result, 5);
```
🧹 Nitpick | 🔵 Trivial
🧩 Analysis chain

🏁 Script executed:

```shell
# First, let's check the structure and find the codec implementation
find . -name "codec.rs" -o -name "codec.ts" | head -20
```

Repository: citadel-tech/mill-io
Length of output: 101

🏁 Script executed:

```shell
# List the structure of mill-rpc directory
ls -la mill-rpc/ 2>/dev/null || echo "mill-rpc not found at root"
find . -path "*mill-rpc*" -name "*.rs" | grep -E "(codec|integration_test)" | head -20
```

Length of output: 662

🏁 Script executed:

```shell
# Check the integration test file
cat -n mill-rpc/tests/integration_test.rs | head -110
```

Length of output: 3628

🏁 Script executed:

```shell
# Now let's examine the codec.rs file to understand the Codec struct and its methods
cat -n mill-rpc/mill-rpc-core/src/codec.rs
```

Length of output: 1623

🏁 Script executed:

```shell
# Let's also check the prelude to understand what's exported
cat -n mill-rpc/src/lib.rs | head -50
```

Length of output: 1404

🏁 Script executed:

```shell
# Let's verify how the service macro generates the dispatcher and what format it expects
# Look for the generated server code or dispatcher logic
rg -n "fn dispatch" mill-rpc --type rust -A 10 | head -50
```

Length of output: 1634

🏁 Script executed:

```shell
# Also check if there are other tests that use codec.serialize
rg -n "codec.serialize" mill-rpc --type rust
```

Length of output: 376

🏁 Script executed:

```shell
# Let's look at the macro code to see how it deserializes the args parameter
sed -n '260,290p' mill-rpc/mill-rpc-macros/src/lib.rs
```

Length of output: 1310

🏁 Script executed:

```shell
# Check what the dispatch_arms look like in the generated code
rg -n "dispatch_arms" mill-rpc/mill-rpc-macros/src/lib.rs -B 5 -A 20 | head -80
```

Length of output: 2365

🏁 Script executed:

```shell
# Let's see what the macro generates for request/response types
rg -n "struct.*Request" mill-rpc/mill-rpc-macros/src/lib.rs -B 2 -A 2
```

Length of output: 46

🏁 Script executed:

```shell
# Let's check what types are generated and exported
rg -n "pub struct\|pub enum" mill-rpc/mill-rpc-macros/src/lib.rs | head -20
```

Length of output: 46

🏁 Script executed:

```shell
# Look for where the types module is created
sed -n '140,200p' mill-rpc/mill-rpc-macros/src/lib.rs
```

Length of output: 2098

🏁 Script executed:

```shell
# Let's verify the exact request struct that's generated by checking if there are any examples
# or tests that use codec.serialize with tuples
rg -n "codec\.serialize" mill-rpc --type rust -B 2 -A 2
```

Length of output: 1433

🏁 Script executed:

```shell
# Let's also double-check the bincode serialization behavior for structs vs tuples
# by looking for any documentation or tests about this
rg -n "Serialize\|serialize" mill-rpc/mill-rpc-core/src/codec.rs -B 2 -A 2
```

Length of output: 46

🏁 Script executed:

```shell
# Check if there are any other integration tests or examples that show how to construct payloads
find mill-rpc -name "*.rs" -type f | xargs grep -l "payload" | head -10
```

Length of output: 289
Use the codec to serialize test payloads instead of manual byte construction.
The test manually constructs bincode payloads using to_le_bytes() (lines 40-41), which hardcodes assumptions about bincode's serialization format. This approach is fragile: it breaks if the codec or serialization format changes.
The Codec type provides a serialize() method that handles this correctly. Use it instead:
♻️ Suggested approach

```diff
-    // bincode serialization of AddRequest { a: 2, b: 3 }: two i32 LE
-    let mut payload = Vec::new();
-    payload.extend_from_slice(&2i32.to_le_bytes());
-    payload.extend_from_slice(&3i32.to_le_bytes());
+    // Use codec to serialize the request tuple
+    let payload = codec.serialize(&(2i32, 3i32)).unwrap();
```

📝 Committable suggestion
```rust
// Use codec to serialize the request tuple
let payload = codec.serialize(&(2i32, 3i32)).unwrap();

let result_bytes = dispatcher
    .dispatch(&ctx, calculator::methods::ADD, &payload, &codec)
    .unwrap();
let result: i32 = codec.deserialize(&result_bytes).unwrap();
assert_eq!(result, 5);
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@mill-rpc/tests/integration_test.rs` around lines 38 - 47, The test builds the
bincode payload manually using to_le_bytes(), which assumes bincode internals;
instead use the test's Codec instance to produce the payload: call
codec.serialize(&AddRequest { a: 2, b: 3 }) (or serialize a tuple/tuple struct
matching the RPC parameter shape) to produce payload, then pass that payload
into dispatcher.dispatch(&ctx, calculator::methods::ADD, &payload, &codec) and
keep using codec.deserialize for the result; update the local payload variable
and remove the manual to_le_bytes() calls so the test relies on codec.serialize
rather than hardcoded bytes.
Actionable comments posted: 3
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@mill-rpc/benches/rpc_comparison.rs`:
- Around line 277-317: The benchmark hardcodes mill_addr ("127.0.0.1:19876")
which can cause port conflicts; update the setup that uses
mill_rpc::RpcServer::builder().bind(...) and mill::make_client(...) to allocate
a dynamic port (bind to port 0) and use the actual bound SocketAddr when
creating the client: either change RpcServer::builder()/bind to return the bound
address from the built server (or accept an already-bound TcpListener and
extract its local_addr), then pass that actual SocketAddr into mill::make_client
so the client connects to the real assigned port; ensure thread startup and
sleeps use the retrieved address rather than the hardcoded mill_addr.
In `@mill-rpc/examples/multi_service_server.rs`:
- Around line 32-50: The factorial and fibonacci implementations (functions
factorial and fibonacci) can overflow u64 silently; update them to either (a)
perform overflow-safe arithmetic using checked_mul/checked_add and return
Option<u64> or Result<u64, OverflowError> on overflow, or (b) use
saturating_mul/saturating_add if you prefer capped behavior, and document the
chosen behavior in the function comments; specifically change factorial to
iterate using checked_mul (or saturating_mul) and return early on None (or Err),
and change fibonacci to compute using checked_add (or saturating_add) with the
same return strategy so callers can detect overflow instead of relying on silent
wrapping.
- Around line 52-70: The is_prime implementation uses the stable-on-1.87.0
method is_multiple_of which breaks MSRV 1.70.0; update the function is_prime (in
the impl where RpcContext is a parameter) to use the modulo operator (%) instead
of is_multiple_of for both even-check and loop divisibility checks (e.g.,
replace n.is_multiple_of(2) with n % 2 == 0 and n.is_multiple_of(i) with n % i
== 0) and keep the rest of the algorithm unchanged.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: 26150a56-dbd7-437a-bda4-55142f7ac67e
📒 Files selected for processing (9)
- mill-rpc/Cargo.toml
- mill-rpc/README.md
- mill-rpc/benches/rpc_comparison.rs
- mill-rpc/examples/kv_client.rs
- mill-rpc/examples/kv_server.rs
- mill-rpc/examples/multi_service_server.rs
- mill-rpc/mill-rpc-core/Cargo.toml
- mill-rpc/mill-rpc-macros/Cargo.toml
- mill-rpc/mill-rpc-macros/src/lib.rs
```rust
let mill_addr: SocketAddr = "127.0.0.1:19876".parse().unwrap();
let _mill_server = mill_rpc::RpcServer::builder()
    .bind(mill_addr)
    .service(mill::bench_service::server(svc))
    .build(&mill_el);

match _mill_server {
    Ok(_server) => {
        let el = mill_el.clone();
        thread::spawn(move || {
            let _ = el.run();
        });
        thread::sleep(Duration::from_millis(100));

        // Create a persistent client
        let client_el = Arc::new(mill_io::EventLoop::new(1, 256, 50).unwrap());
        let cel = client_el.clone();
        thread::spawn(move || {
            let _ = cel.run();
        });
        thread::sleep(Duration::from_millis(50));

        let client = mill::make_client(mill_addr, &client_el);

        group.bench_function("mill_rpc", |b| {
            b.iter(|| {
                let resp = client.ping();
                black_box(resp).unwrap();
            });
        });

        client_el.stop();
        mill_el.stop();
    }
    Err(e) => {
        eprintln!("Mill-RPC server failed to start (skipping): {}", e);
    }
}

group.finish();
}
```
🧹 Nitpick | 🔵 Trivial
Consider using dynamic port allocation to avoid conflicts.
The benchmarks use hardcoded ports (19876, 19877, etc.). If multiple benchmark runs execute concurrently or a previous run leaves a port in TIME_WAIT, the benchmark will fail. Consider binding to port 0 and retrieving the assigned address, similar to the legacy server pattern.
♻️ Example using dynamic port

```diff
-    let mill_addr: SocketAddr = "127.0.0.1:19876".parse().unwrap();
+    let mill_addr: SocketAddr = "127.0.0.1:0".parse().unwrap();
     let _mill_server = mill_rpc::RpcServer::builder()
         .bind(mill_addr)
         .service(mill::bench_service::server(svc))
         .build(&mill_el);
+    // Note: Would need RpcServer to expose actual bound address
```

This would require RpcServer to expose the actual bound address after building, which may be a broader API change.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@mill-rpc/benches/rpc_comparison.rs` around lines 277 - 317, The benchmark
hardcodes mill_addr ("127.0.0.1:19876") which can cause port conflicts; update
the setup that uses mill_rpc::RpcServer::builder().bind(...) and
mill::make_client(...) to allocate a dynamic port (bind to port 0) and use the
actual bound SocketAddr when creating the client: either change
RpcServer::builder()/bind to return the bound address from the built server (or
accept an already-bound TcpListener and extract its local_addr), then pass that
actual SocketAddr into mill::make_client so the client connects to the real
assigned port; ensure thread startup and sleeps use the retrieved address rather
than the hardcoded mill_addr.
```rust
fn factorial(&self, _ctx: &RpcContext, n: u64) -> u64 {
    (1..=n).product()
}

fn fibonacci(&self, _ctx: &RpcContext, n: u32) -> u64 {
    match n {
        0 => 0,
        1 => 1,
        _ => {
            let (mut a, mut b) = (0u64, 1u64);
            for _ in 2..=n {
                let tmp = a + b;
                a = b;
                b = tmp;
            }
            b
        }
    }
}
```
🧹 Nitpick | 🔵 Trivial
Consider overflow protection for factorial and fibonacci.
factorial overflows u64 for n > 20; fibonacci overflows for n > 93. Both silently wrap on overflow in release builds. For an example, this may be acceptable, but consider documenting this limitation or using checked_mul/saturating_mul.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@mill-rpc/examples/multi_service_server.rs` around lines 32 - 50, The
factorial and fibonacci implementations (functions factorial and fibonacci) can
overflow u64 silently; update them to either (a) perform overflow-safe
arithmetic using checked_mul/checked_add and return Option<u64> or Result<u64,
OverflowError> on overflow, or (b) use saturating_mul/saturating_add if you
prefer capped behavior, and document the chosen behavior in the function
comments; specifically change factorial to iterate using checked_mul (or
saturating_mul) and return early on None (or Err), and change fibonacci to
compute using checked_add (or saturating_add) with the same return strategy so
callers can detect overflow instead of relying on silent wrapping.
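Option (a) from the prompt can be sketched with checked arithmetic so callers detect overflow instead of getting wrapped values. The names checked_factorial and checked_fibonacci are illustrative, not part of the example crate:

```rust
// Overflow-aware variants: return None instead of wrapping on u64 overflow.
fn checked_factorial(n: u64) -> Option<u64> {
    // try_fold short-circuits to None on the first failed checked_mul
    (1..=n).try_fold(1u64, |acc, x| acc.checked_mul(x))
}

fn checked_fibonacci(n: u32) -> Option<u64> {
    match n {
        0 => Some(0),
        1 => Some(1),
        _ => {
            let (mut a, mut b) = (0u64, 1u64);
            for _ in 2..=n {
                let next = a.checked_add(b)?;
                a = b;
                b = next;
            }
            Some(b)
        }
    }
}

fn main() {
    assert_eq!(checked_factorial(20), Some(2_432_902_008_176_640_000)); // 20! fits in u64
    assert_eq!(checked_factorial(21), None); // 21! overflows
    assert_eq!(checked_fibonacci(93), Some(12_200_160_415_121_876_738)); // fib(93) fits
    assert_eq!(checked_fibonacci(94), None); // fib(94) overflows
}
```

The RPC methods could map the None case to an RpcError so the overflow is reported to the client rather than hidden.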
```rust
fn is_prime(&self, _ctx: &RpcContext, n: u64) -> bool {
    if n < 2 {
        return false;
    }
    if n < 4 {
        return true;
    }
    if n.is_multiple_of(2) {
        return false;
    }
    let mut i = 3;
    while i * i <= n {
        if n.is_multiple_of(i) {
            return false;
        }
        i += 2;
    }
    true
}
```
Replace is_multiple_of with modulo operator to fix MSRV incompatibility.
The is_multiple_of method is stable only since Rust 1.87.0, but the workspace MSRV is 1.70.0. This causes CI to fail.
🔧 Proposed fix

```diff
 fn is_prime(&self, _ctx: &RpcContext, n: u64) -> bool {
     if n < 2 {
         return false;
     }
     if n < 4 {
         return true;
     }
-    if n.is_multiple_of(2) {
+    if n % 2 == 0 {
         return false;
     }
     let mut i = 3;
     while i * i <= n {
-        if n.is_multiple_of(i) {
+        if n % i == 0 {
             return false;
         }
         i += 2;
     }
     true
 }
```

📝 Committable suggestion
```rust
fn is_prime(&self, _ctx: &RpcContext, n: u64) -> bool {
    if n < 2 {
        return false;
    }
    if n < 4 {
        return true;
    }
    if n % 2 == 0 {
        return false;
    }
    let mut i = 3;
    while i * i <= n {
        if n % i == 0 {
            return false;
        }
        i += 2;
    }
    true
}
```
🧰 Tools
🪛 GitHub Actions: Lint with Clippy
[error] 59-64: current MSRV (Minimum Supported Rust Version) is 1.70.0 but this item is stable since 1.87.0 (clippy: incompatible_msrv).
[error] 59-64: Could not compile mill-rpc (example "multi_service_server") due to the above MSRV incompatibility errors.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@mill-rpc/examples/multi_service_server.rs` around lines 52 - 70, The is_prime
implementation uses the stable-on-1.87.0 method is_multiple_of which breaks MSRV
1.70.0; update the function is_prime (in the impl where RpcContext is a
parameter) to use the modulo operator (%) instead of is_multiple_of for both
even-check and loop divisibility checks (e.g., replace n.is_multiple_of(2) with
n % 2 == 0 and n.is_multiple_of(i) with n % i == 0) and keep the rest of the
algorithm unchanged.
Implement an event-loop-based RPC framework built on top of mill-net.
Example
The framework has a `service` macro that generates the server and client.
Server
Client
Summary by CodeRabbit