[Feat] Prover loading assets (circuits) dynamically #1717
Conversation
Walkthrough

Move zkvm workspace deps from branch/alias entries to commit-pinned names, replace the Euclid-specific prover/verifier with a universal implementation, switch RootProof→StarkProof and util→utils imports, add dynamic per-VK asset download/caching and handler creation, remove the Euclid handlers and CLI dump, add hex-or-base64 VK decoding, and wire proof timing/gas metrics.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    actor Client
    participant Prover as LocalProver
    participant Assets as AssetsLocationData
    participant Net as RemoteStore
    participant Handler as UniversalHandler (per-VK)
    participant ZK as Prover
    Client->>Prover: prove(req)
    Prover->>Prover: compute vk from req
    Prover->>Prover: lookup handler by vk
    alt handler missing
        Prover->>Assets: gen_asset_url(vk, proof_type)
        Assets->>Net: HEAD/GET app.vmexe & openvm.toml
        Net-->>Assets: stream bytes
        Assets-->>Prover: asset paths
        Prover->>Handler: create with workspace+assets
    end
    Prover->>Handler: get_proof_data(&ProvingTask, need_snark)
    Handler->>ZK: gen_proof_universal(task, need_snark)
    ZK-->>Handler: proof JSON
    Handler-->>Prover: proof JSON
    Prover-->>Client: ProveResponse
```

```mermaid
sequenceDiagram
    autonumber
    participant App as Caller
    participant V as Verifier
    participant UV as UniversalVerifier
    App->>V: init(fork, config)
    V->>UV: setup(verifier_binary_path)
    UV-->>V: ready
    App->>V: verify(proof)
    V->>UV: verify_stark_proof / verify_evm_proof
    UV-->>V: result
    V-->>App: Ok(true) or error
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
Actionable comments posted: 8
🔭 Outside diff range comments (3)
crates/libzkp/src/lib.rs (2)

132-136: Avoid unwrap() on the verifier mutex to prevent panics.

Propagate poisoning as an error instead.

```diff
-    let ret = verifier.lock().unwrap().verify(task_type, &proof)?;
+    let ret = verifier
+        .lock()
+        .map_err(|e| eyre::eyre!("verifier mutex poisoned: {e}"))?
+        .verify(task_type, &proof)?;
```

140-145: Same here: replace unwrap() with error propagation.

Consistent error handling.

```diff
-    verifier.lock().unwrap().dump_vk(Path::new(file));
+    verifier
+        .lock()
+        .map_err(|e| eyre::eyre!("verifier mutex poisoned: {e}"))?
+        .dump_vk(Path::new(file));
```

crates/libzkp/src/verifier/universal.rs (1)
33-64: Avoid unwraps and panic_catch; prefer structured error propagation.

Multiple unwrap() calls wrapped by panic_catch obscure error causes and stacks. Use proper deserialization and verifier error propagation with ?, returning Result<bool> directly.

Example direction:

```diff
-    fn verify(&self, task_type: super::TaskType, proof: &[u8]) -> Result<bool> {
-        panic_catch(|| match task_type {
-            TaskType::Chunk => {
-                let proof = serde_json::from_slice::<ChunkProof>(proof).unwrap();
-                if !proof.pi_hash_check(self.fork) { return false; }
-                self.verifier.verify_proof(proof.as_root_proof(), &proof.vk).unwrap()
-            }
-            ...
-        }).map_err(|err_str: String| eyre::eyre!("{err_str}"))
-    }
+    fn verify(&self, task_type: super::TaskType, proof: &[u8]) -> Result<bool> {
+        match task_type {
+            TaskType::Chunk => {
+                let proof: ChunkProof = serde_json::from_slice(proof)
+                    .map_err(|e| eyre::eyre!("chunk proof deserialization failed: {e}"))?;
+                if !proof.pi_hash_check(self.fork) {
+                    return Ok(false);
+                }
+                Ok(self.verifier.verify_proof(proof.as_root_proof(), &proof.vk)?)
+            }
+            TaskType::Batch => {
+                let proof: BatchProof = serde_json::from_slice(proof)
+                    .map_err(|e| eyre::eyre!("batch proof deserialization failed: {e}"))?;
+                if !proof.pi_hash_check(self.fork) {
+                    return Ok(false);
+                }
+                Ok(self.verifier.verify_proof(proof.as_root_proof(), &proof.vk)?)
+            }
+            TaskType::Bundle => {
+                let proof: BundleProof = serde_json::from_slice(proof)
+                    .map_err(|e| eyre::eyre!("bundle proof deserialization failed: {e}"))?;
+                if !proof.pi_hash_check(self.fork) {
+                    return Ok(false);
+                }
+                let vk = proof.vk.clone();
+                let evm_proof = proof.into_evm_proof();
+                Ok(self.verifier.verify_proof_evm(&evm_proof, &vk)?)
+            }
+        }
+    }
```
🧹 Nitpick comments (14)
crates/libzkp/src/lib.rs (2)

52-58: Typos and clearer error messages (nit).

Polish comments and bail messages; this improves diagnosability.

```diff
-    // normailze fork name field in task
+    // normalize fork name field in task
 ...
-    eyre::bail!("fork name in chunk task not match the calling arg, expected {fork_name_str}, get {}", task.fork_name);
+    eyre::bail!(
+        "fork name in chunk task does not match the argument; expected '{fork_name_str}', got {}",
+        task.fork_name
+    );
 ...
-    eyre::bail!("fork name in batch task not match the calling arg, expected {fork_name_str}, get {}", task.fork_name);
+    eyre::bail!(
+        "fork name in batch task does not match the argument; expected '{fork_name_str}', got {}",
+        task.fork_name
+    );
 ...
-    eyre::bail!("fork name in bundle task not match the calling arg, expected {fork_name_str}, get {}", task.fork_name);
+    eyre::bail!(
+        "fork name in bundle task does not match the argument; expected '{fork_name_str}', got {}",
+        task.fork_name
+    );
```

Also applies to: 67-71, 78-82

71-75: Fix copy-paste in panic mapping messages (batch/bundle).

They currently say "chunk task" in non-chunk paths.

```diff
-    .map_err(|e| eyre::eyre!("caught panic in chunk task{e}"))??;
+    .map_err(|e| eyre::eyre!("caught panic in batch task: {e}"))??;
 ...
-    .map_err(|e| eyre::eyre!("caught panic in chunk task{e}"))??;
+    .map_err(|e| eyre::eyre!("caught panic in bundle task: {e}"))??;
```

Also applies to: 82-86
crates/prover-bin/Cargo.toml (1)

21-21: Pin futures-util to an exact version for consistency.

Keeps it aligned with futures = 0.3.30 and avoids duplicate minor versions.

```diff
-futures-util = "0.3"
+futures-util = "0.3.30"
```

crates/libzkp/src/verifier.rs (3)
55-56: Unify trait-object bounds in the type alias and return type for clarity.

VerifierType includes + Send, while get_verifier returns Arc<Mutex<dyn ProofVerifier>> without + Send. Although this likely coerces, returning the alias improves readability and avoids confusion.

Apply:

```diff
-pub fn get_verifier(fork_name: &str) -> Result<Arc<Mutex<dyn ProofVerifier>>> {
+pub fn get_verifier(fork_name: &str) -> Result<VerifierType> {
```

Also applies to: 78-79

78-88: Case-insensitive lookup to match lowercased insert keys.

You lowercase keys on insert but not on lookup. Normalize fork_name on retrieval to avoid surprising misses.

```diff
-    if let Some(verifier) = verifiers.get(fork_name) {
+    if let Some(verifier) = verifiers.get(&fork_name.to_lowercase()) {
```

74-76: Prefer expect with a message over a bare assert for the OnceLock set.

Provide a clear error message if init is called twice.

```diff
-    let ret = VERIFIERS.set(verifiers).is_ok();
-    assert!(ret);
+    VERIFIERS
+        .set(verifiers)
+        .expect("VERIFIERS already initialized; init() should only be called once");
```

crates/libzkp/src/verifier/universal.rs (2)
19-26: Tweak the setup error message; consider propagating errors instead of panicking.

The expect message mentions "chunk verifier", which is misleading here. Also, panicking in a constructor complicates recovery and testing.

```diff
-    verifier: UniversalVerifier::setup(&config, &exe, &verifier_bin)
-        .expect("Setting up chunk verifier"),
+    verifier: UniversalVerifier::setup(&config, &exe, &verifier_bin)
+        .expect("Setting up universal verifier"),
```

If feasible, change new to return eyre::Result<Self> and propagate the setup error instead of panicking.

66-68: Don't panic in the deprecated method; no-op with a warning.

Panicking in dump_vk can take down the process if someone calls it by mistake. Prefer a no-op and a logged warning.

```diff
-    fn dump_vk(&self, _file: &Path) {
-        panic!("dump vk has been deprecated");
-    }
+    fn dump_vk(&self, _file: &Path) {
+        tracing::warn!("dump_vk is deprecated on universal verifier; no-op");
+    }
```

crates/prover-bin/src/zk_circuits_handler.rs (1)
22-26: Naming nit: Phase::EuclidV2 under a universal handler can mislead.

The phase name is legacy, but the context is now universal/asset-driven. Consider renaming, or documenting why the EuclidV2 phase is kept for asset layout only.
crates/prover-bin/src/zk_circuits_handler/assets.rs (2)

20-32: Prover setup per proof type is clear; minor suggestion on error context.

Setup for chunk/batch/bundle looks correct, with EVM enabled only for bundle. Consider adding more specific error messages to ease troubleshooting.

```diff
-    let chunk_prover = Prover::setup(p.phase_spec_chunk(workspace_path), false, None)
-        .expect("Failed to setup chunk prover");
+    let chunk_prover = Prover::setup(p.phase_spec_chunk(workspace_path), false, None)
+        .expect("Failed to setup chunk prover (phase_spec_chunk)");
 ...
-    let batch_prover = Prover::setup(p.phase_spec_batch(workspace_path), false, None)
-        .expect("Failed to setup batch prover");
+    let batch_prover = Prover::setup(p.phase_spec_batch(workspace_path), false, None)
+        .expect("Failed to setup batch prover (phase_spec_batch)");
 ...
-    let bundle_prover = Prover::setup(p.phase_spec_bundle(workspace_path), true, None)
-        .expect("Failed to setup bundle prover");
+    let bundle_prover = Prover::setup(p.phase_spec_bundle(workspace_path), true, None)
+        .expect("Failed to setup bundle prover (phase_spec_bundle)");
```

33-41: VK cache init is sound; ensure ProofType keys are exhaustive.

A lazy OnceLock<String> per proof type is good. If ProofType gains variants later, it would be safer to derive the keys from cfg.vks or a canonical list to avoid missing cache entries.

Consider building cached_vks from an iterator over the supported types, or assert that all required keys are present at init.

Also applies to: 46-50
crates/prover-bin/src/zk_circuits_handler/universal.rs (2)

45-47: Validate the task payload before deserialising.

serde_json::from_str blindly trusts the input. Consider cheap, explicit checks (e.g. size limits, required fields) before parsing to avoid DoS via gigabyte-sized JSON.
69-72: The error message loses context.

The bail message when evm_prover is missing only prints the VK. Add the requested proof type, and maybe the fork name, to help operators diagnose misconfigurations.

crates/prover-bin/src/prover.rs (1)
73-90: The HEAD size-check may fail silently.

Many servers disable HEAD or omit Content-Length. In that case you fall through to continue, skipping a potentially stale file. At minimum, fall back to re-downloading when the HEAD request is not 200 OK or lacks a length.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)

- Cargo.lock is excluded by !**/*.lock

📒 Files selected for processing (14)

- Cargo.toml (1 hunks)
- crates/libzkp/Cargo.toml (1 hunks)
- crates/libzkp/src/lib.rs (1 hunks)
- crates/libzkp/src/proofs.rs (1 hunks)
- crates/libzkp/src/tasks/batch/utils.rs (1 hunks)
- crates/libzkp/src/verifier.rs (2 hunks)
- crates/libzkp/src/verifier/universal.rs (2 hunks)
- crates/prover-bin/Cargo.toml (3 hunks)
- crates/prover-bin/src/prover.rs (5 hunks)
- crates/prover-bin/src/zk_circuits_handler.rs (1 hunks)
- crates/prover-bin/src/zk_circuits_handler/assets.rs (1 hunks)
- crates/prover-bin/src/zk_circuits_handler/euclid.rs (0 hunks)
- crates/prover-bin/src/zk_circuits_handler/euclidV2.rs (0 hunks)
- crates/prover-bin/src/zk_circuits_handler/universal.rs (1 hunks)
💤 Files with no reviewable changes (2)
- crates/prover-bin/src/zk_circuits_handler/euclid.rs
- crates/prover-bin/src/zk_circuits_handler/euclidV2.rs
🔇 Additional comments (13)
crates/libzkp/src/proofs.rs (1)

13-14: Import path fix verified: no outdated references remain.

- Ran rg -n --hidden --glob '!target' 'scroll_zkvm_types::util::vec_as_base64'; no matches found.
- The new utils::vec_as_base64 import is correct and aligns with the types crate rename.

Approve code changes.
crates/libzkp/src/tasks/batch/utils.rs (1)

21-21: No stale imports; the import path change is correct.

Ran a repo-wide search for the old import path (scroll_zkvm_types::util::sha256_rv32) and found no occurrences. The update to scroll_zkvm_types::utils::sha256_rv32 is consistent with the types crate refactor, and get_versioned_hash remains compliant with EIP-4844.
9-9
: No stale Euclid-specific crate references found
- Ran
rg -n --hidden --glob '!target' 'scroll-zkvm-(verifier|prover)-euclid'
and confirmed there are no remaining imports of separate Euclid-specific verifier/prover crates.- All remaining
EuclidV2
symbols (e.g.Phase::EuclidV2
,finalizeBundlePostEuclidV2
) belong to universal verifier logic and are expected.crates/libzkp/src/lib.rs (1)
8-8: Import path fix to utils::vec_as_base64 is correct.

Matches updates elsewhere in the crate.

crates/prover-bin/Cargo.toml (3)

10-10: The switch to scroll-zkvm-prover (workspace) aligns with the universal flow.

A good step toward dynamic asset loading.

34-34: url with the serde feature is a sensible addition.

Enables (de)serialization of URLs in config or tasks.

22-22: No reqwest version skew detected.

We scanned the workspace and found only:

- crates/prover-bin/Cargo.toml
  - reqwest = { version = "0.12.4", features = ["gzip", "stream"] }
  - reqwest-middleware = "0.3"

The "0.3" spec will automatically pick the latest 0.3.x patch, which is compatible with reqwest 0.12.x. No reqwest-retry dependency was found.

No further action required.
No further action required.crates/libzkp/src/verifier.rs (1)
3-4
: Universal verifier switch looks correctModule import and usage of
universal::Verifier
align with the PR direction. No issues spotted here.crates/prover-bin/src/zk_circuits_handler.rs (3)
4-6
: Module re-org (universal + assets) is consistent with PR goalsSeparation of universal proving and asset utilities is clear and aligns with dynamic asset loading keyed by VK.
10-12
: Imports updated to universal types/configs look goodUsing
ProvingTask
andProverConfig
from universal crates matches the architecture shift.
16-20
: AllCircuitsHandler
implementations and call sites updated
Verified that the onlyimpl CircuitsHandler
in
crates/prover-bin/src/zk_circuits_handler/universal.rs
matches the newasync fn get_vk(&self) -> String; async fn get_proof_data(&self, u_task: &ProvingTask, need_snark: bool) -> Result<String>;and that the lone usage in
crates/prover-bin/src/prover.rs
(handler.get_proof_data(&prover_task, is_evm)
) has been adapted accordingly. No further changes required.Cargo.toml (1)
20-22: Commit 2ec1303 not found in scroll-tech/zkvm-prover; please verify crate sources.

The checkout script failed with "couldn't find remote ref 2ec1303". Before proceeding, confirm that:

- The revision 2ec1303 actually exists in https://github.com/scroll-tech/zkvm-prover.git.
- All three crates (scroll-zkvm-prover, scroll-zkvm-verifier, scroll-zkvm-types) are present at that commit, or update the repo URLs/revs if they live in separate repositories.

Affected location:

- Cargo.toml, lines 20-22
crates/prover-bin/src/prover.rs (1)
57-68: The asset directory can collide across proof types.

Assets for chunk/batch/bundle with identical VKs share the same storage_path (base_path/vk). If the files differ per proof type, the cache becomes corrupt. Consider including proof_type in the directory name:

```rust
let storage_path = base_path
    .as_ref()
    .join(format!("{vk}_{:?}", proof_type).to_lowercase());
```
Has passed the prover e2e test with dynamically downloaded assets.

reviewed. Don't force push from now on.

regression: e2e test passed
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)

coordinator/internal/logic/verifier/verifier.go (2)

158-164: File handle leak: close the opened VK file.

The file is never closed after os.Open, leaking FDs in a long-lived process.

Apply this diff:

```diff
 f, err := os.Open(filepath.Clean(vkFile))
 if err != nil {
 	return err
 }
+defer f.Close()
 byt, err := io.ReadAll(f)
```

127-131: Avoid a substring check for blocked VKs; use an exact-match set.

strings.Contains on a comma-separated blob risks false positives (substring matches). Switch to an exact-match set for correctness.

You can replace the constant and use a set:

```go
// replace the string blob with a set
var blockedVKs = map[string]struct{}{
	"rSJNNBpsxBdKlstbIIU/aYc7bHau98Qb2yjZMc5PmDhmGOolp5kYRbvF/VcWcO5HN5ujGs6S00W8pZcCoNQRLQ==": {},
	"2Lo7Cebm6SFtcsYXipkcMxIBmVY7UpoMXik/Msm7t2nyvi9EaNGsSnDnaCurscYEF+IcdjPUtVtY9EcD7IKwWg==": {},
	"D6YFHwTLZF/U2zpYJPQ3LwJZRm85yA5Vq2iFBqd3Mk4iwOUpS8sbOp3vg2+NDxhhKphgYpuUlykpdsoRhEt+cw==": {},
}
```

Then update the checks:

```go
if _, ok := blockedVKs[dump.Chunk]; ok {
	return fmt.Errorf("loaded blocked chunk vk")
}
if _, ok := blockedVKs[dump.Batch]; ok {
	return fmt.Errorf("loaded blocked batch vk")
}
if _, ok := blockedVKs[dump.Bundle]; ok {
	return fmt.Errorf("loaded blocked bundle vk")
}
```

Also consider renaming to blockedVKs (Go naming).
♻️ Duplicate comments (2)

crates/prover-bin/src/prover.rs (2)

248-261: Don't unwrap on config validation; propagate, or return a Result from the constructor.

A bad config should not abort the entire process with unwrap.

Two options:

- Preferred: change the constructor to return Result and use ?:

```diff
 impl LocalProver {
-    pub fn new(mut config: LocalProverConfig) -> Self {
+    pub fn new(mut config: LocalProverConfig) -> eyre::Result<Self> {
         for (fork_name, circuit_config) in config.circuits.iter_mut() {
             // validate each base url
-            circuit_config.location_data.validate().unwrap();
+            circuit_config.location_data.validate()?;
             // ...
         }
-        Self {
+        Ok(Self {
             config,
             next_task_id: 0,
             current_task: None,
             handlers: HashMap::new(),
-        }
+        })
     }
 }
```

- If changing the signature is too invasive right now, at least use expect with a clear message:

```diff
-            circuit_config.location_data.validate().unwrap();
+            circuit_config.location_data
+                .validate()
+                .expect("invalid circuits.[fork].base_url: must end with a trailing slash");
```

312-314: Avoid spawn_blocking + Handle::block_on; run the async call directly on the runtime.

Running block_on inside spawn_blocking risks nested-runtime issues. You can just spawn an async task; the handler API is async already.

Apply this diff:

```diff
-    let is_evm = req.proof_type == ProofType::Bundle;
-    let task_handle = tokio::task::spawn_blocking(move || {
-        handle.block_on(handler.get_proof_data(&prover_task, is_evm))
-    });
+    let is_evm = req.proof_type == ProofType::Bundle;
+    let handler_cloned = handler.clone();
+    let task_handle = tokio::task::spawn(async move {
+        handler_cloned.get_proof_data(&prover_task, is_evm).await
+    });
```

If UniversalHandler internally performs heavy CPU-bound work synchronously, keep spawn_blocking, but then don't call back into the runtime from that thread; instead move the blocking work into that closure.
🧹 Nitpick comments (18)

crates/prover-bin/src/zk_circuits_handler.rs (2)

4-4: Scope down the blanket lint suppression for non_snake_case.

Applying #[allow(non_snake_case)] at the module level suppresses the lint for everything inside universal, potentially masking future issues. Prefer moving the allow to the specific offending items inside universal.rs, or better yet, rename those items to conform to Rust style.

Suggested change in this file:

```diff
-#[allow(non_snake_case)]
-pub mod universal;
+pub mod universal;
```

If certain public items inside universal must retain their current names (e.g., external ABI compatibility), apply #[allow(non_snake_case)] directly to those items in universal.rs.

8-8: Avoid leaking external types in your public API; re-export ProvingTask for ergonomics.

By referencing scroll_zkvm_types::ProvingTask in the trait, downstream crates must now depend on that crate to implement or call CircuitsHandler. If that's intended, consider re-exporting ProvingTask here to reduce churn in dependents and make the API easier to consume.

Apply this small change to this module:

```diff
 use eyre::Result;
-use scroll_zkvm_types::ProvingTask;
+pub use scroll_zkvm_types::ProvingTask;
```

Alternatively, introduce a local type alias and use it in the trait (this gives you flexibility to swap underlying types later without breaking dependents).

zkvm-prover/Makefile (1)

57-59: Optional: add basic env for observability and faster failures.

Consider setting standard env vars for better CI/dev ergonomics when running e2e locally:

- RUST_BACKTRACE=1 for crash diagnostics
- RUST_LOG=info (or tracing) to surface download progress/errors
- Optionally a shorter network timeout for e2e, to fail fast if assets are unreachable

Apply this diff if you want the defaults inline:

```diff
 test_e2e_run: ${E2E_HANDLE_SET}
-	GO_TAG=${GO_TAG} GIT_REV=${GIT_REV} ZK_VERSION=${ZK_VERSION} cargo run --release -p prover -- --config ./config.json handle ${E2E_HANDLE_SET}
+	RUST_BACKTRACE=1 RUST_LOG=info GO_TAG=${GO_TAG} GIT_REV=${GIT_REV} ZK_VERSION=${ZK_VERSION} cargo run --release -p prover -- --config ./config.json handle ${E2E_HANDLE_SET}
```

crates/prover-bin/src/prover.rs (6)
30-31: Doc typos: clarify and fix wording.

A minor nit to improve clarity: "a altered url for specififed vk" → "an alternate URL for the specified VK".

```diff
-    /// a altered url for specififed vk
+    /// an alternate URL for the specified VK
```

65-67: Path construction with vk: currently safe, but assert hex for defense-in-depth.

vk here is derived via hex::encode, so it only contains [0-9a-f] and is safe as a path component. If you want to harden against future changes, add a quick hex-only check.

```diff
-    let storage_path = base_path.as_ref().join(vk);
+    // Defense-in-depth: ensure vk is hex-only before using as a path component
+    if !vk.chars().all(|c| c.is_ascii_hexdigit()) {
+        eyre::bail!("invalid vk: must be hex");
+    }
+    let storage_path = base_path.as_ref().join(vk);
```

82-86: Header parsing: the current code is non-panicking; consider typed helpers.

Your current chain won't panic (unwrap_or + parse in if let). If you want it cleaner, and to avoid defaulting to "0", you can make the intent explicit.

```diff
-    if let Some(content_length) = head_resp.headers().get("content-length") {
-        if let Ok(remote_size) =
-            content_length.to_str().unwrap_or("0").parse::<u64>()
-        {
+    if let Some(content_length) = head_resp.headers().get("content-length") {
+        if let Some(remote_size) = content_length
+            .to_str()
+            .ok()
+            .and_then(|s| s.parse::<u64>().ok())
+        {
```

89-99: Prefer structured logging over println! for operational visibility.

Use tracing (or log) instead of println!, so ops can manage levels and sinks.

```diff
-    println!("File {} already exists with matching size, skipping download", filename);
+    tracing::info!("asset cache hit: {} (size matched)", filename);
```

```diff
-    println!("Downloading {} from {}", filename, download_url);
+    tracing::info!("downloading asset: {} from {}", filename, download_url);
```

Add at the top if not already present in the crate root:

```rust
use tracing;
```

229-245: Hardcoded URLs in GLOBAL_ASSET_URLS_FEYNMAN: OK for bootstrap; consider config-driven overrides.

The baked-in map provides sensible defaults. Since you already merge user detours over this map, this is fine. If these drift often, you could move them into the template config to avoid rebuilds, but that's not required.
271-309: Per-VK handler caching: good reuse; consider guarding concurrent creation.

Looks good for single-task operation. If you later allow multiple concurrent prove requests, you may want a once-cell per VK to avoid racing downloads/initializations.

Would you like a follow-up patch that wraps the map with dashmap + OnceCell to eliminate races under concurrency?
coordinator/internal/logic/verifier/verifier.go (2)

133-149: Harden VK decoding: handle the 0x prefix, base64url variants, and redact error previews.

The current hex-first approach is fine, but:

- Hex inputs with a 0x/0X prefix will fail.
- Some producers may emit base64url (- and _ substitutions, no padding).
- The error reveals the entire VK, which can be large and sensitive.

The proposed improvement keeps the behavior but adds normalization, base64url fallbacks, and safer errors.

Apply this diff:

```diff
-func decodeVkString(s string) ([]byte, error) {
-	// Try hex decoding first
-	if b, err := hex.DecodeString(s); err == nil {
-		return b, nil
-	}
-	// Fallback to base64 decoding
-	b, err := base64.StdEncoding.DecodeString(s)
-	if err != nil {
-		return nil, err
-	}
-	if len(b) == 0 {
-		return nil, fmt.Errorf("decode vk string %s fail (empty bytes)", s)
-	}
-	return b, nil
-}
+func decodeVkString(s string) ([]byte, error) {
+	s = strings.TrimSpace(s)
+
+	// Support optional 0x prefix for hex
+	if len(s) >= 2 && (s[0:2] == "0x" || s[0:2] == "0X") {
+		s = s[2:]
+	}
+
+	// Heuristic: only attempt hex if it looks like pure hex and has even length
+	isHex := func(str string) bool {
+		if len(str)%2 != 0 {
+			return false
+		}
+		for i := 0; i < len(str); i++ {
+			c := str[i]
+			if !((c >= '0' && c <= '9') || (c >= 'a' && c <= 'f') || (c >= 'A' && c <= 'F')) {
+				return false
+			}
+		}
+		return true
+	}
+	if isHex(s) {
+		if b, err := hex.DecodeString(s); err == nil {
+			if len(b) == 0 {
+				return nil, fmt.Errorf("decoded vk is empty (hex)")
+			}
+			return b, nil
+		}
+		// fall through to base64 attempts
+	}
+
+	// Try standard base64, then raw and URL-safe variants
+	decoders := []*base64.Encoding{
+		base64.StdEncoding,
+		base64.RawStdEncoding,
+		base64.URLEncoding,
+		base64.RawURLEncoding,
+	}
+	var lastErr error
+	for _, enc := range decoders {
+		if b, err := enc.DecodeString(s); err == nil {
+			if len(b) == 0 {
+				return nil, fmt.Errorf("decoded vk is empty (base64)")
+			}
+			return b, nil
+		} else {
+			lastErr = err
+		}
+	}
+
+	// Redact long strings in the error
+	preview := s
+	if len(preview) > 40 {
+		preview = preview[:20] + "..." + preview[len(preview)-8:]
+	}
+	return nil, fmt.Errorf("failed to decode vk (len=%d, preview=%q): %v", len(s), preview, lastErr)
+}
```

124-126: Typo in comment.

"imcompatilbe" → "incompatible"; also a grammar tweak.

```diff
-/*
-add vk of imcompatilbe circuit app here to avoid we had used them unexpectedly
-25/07/15: 0.5.0rc0 is no longer compatible since a breaking change
-*/
+/*
+Add VKs of incompatible circuit apps here to avoid using them unexpectedly.
+25/07/15: 0.5.0-rc0 is no longer compatible since a breaking change.
+*/
```

crates/libzkp/src/tasks/batch.rs (1)
92-96: Update the panic message to match the Stark proof.

The message still says "root proof" while the code uses into_stark_proof().

```diff
-    .map(|w_proof| w_proof.proof.into_stark_proof().expect("expect root proof"))
+    .map(|w_proof| w_proof.proof.into_stark_proof().expect("expect stark proof"))
```

crates/libzkp/src/tasks/bundle.rs (1)

82-86: Align the panic message with the Stark proof.

The same wording mismatch as in batch.rs.

```diff
-    .map(|w_proof| w_proof.proof.into_stark_proof().expect("expect root proof"))
+    .map(|w_proof| w_proof.proof.into_stark_proof().expect("expect stark proof"))
```

crates/libzkp/src/proofs.rs (3)

64-69: Adjust the panic text to reflect the Stark proof.

The message still references "root proof".

```diff
-    .as_stark_proof()
-    .expect("batch proof use root proof")
+    .as_stark_proof()
+    .expect("chunk proof should be a stark proof")
```

72-77: Adjust the panic text to reflect the Stark proof.

The same wording mismatch here.

```diff
-    .as_stark_proof()
-    .expect("batch proof use root proof")
+    .as_stark_proof()
+    .expect("batch proof should be a stark proof")
```

22-35: Doc comments still refer to RootProof; consider updating for clarity.

Multiple mentions of RootProof remain in the struct docs; they should say StarkProof now.

I can prepare a focused doc-only diff if you want to keep code churn minimal in this PR.

crates/libzkp/src/verifier/universal.rs (2)

18-27: Tweak the setup error message for accuracy.

The expect message still says "chunk verifier" though we're setting up the universal verifier.

```diff
-    verifier: UniversalVerifier::setup(&verifier_bin).expect("Setting up chunk verifier"),
+    verifier: UniversalVerifier::setup(&verifier_bin).expect("setting up universal verifier"),
```

34-45: Add assert messages for the PI hash check to aid debugging.

Bare asserts yield a generic "assertion failed" without context. Include fork/task info.

```diff
-    assert!(proof.pi_hash_check(self.fork));
+    assert!(proof.pi_hash_check(self.fork), "pi_hash_check failed for Chunk (fork={:?})", self.fork);
 ...
-    assert!(proof.pi_hash_check(self.fork));
+    assert!(proof.pi_hash_check(self.fork), "pi_hash_check failed for Batch (fork={:?})", self.fork);
 ...
-    assert!(proof.pi_hash_check(self.fork));
+    assert!(proof.pi_hash_check(self.fork), "pi_hash_check failed for Bundle (fork={:?})", self.fork);
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
⛔ Files ignored due to path filters (2)

- Cargo.lock is excluded by !**/*.lock
- crates/gpu_override/Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (14)

- Cargo.toml (1 hunks)
- coordinator/internal/logic/verifier/verifier.go (3 hunks)
- crates/libzkp/src/proofs.rs (3 hunks)
- crates/libzkp/src/tasks/batch.rs (1 hunks)
- crates/libzkp/src/tasks/bundle.rs (1 hunks)
- crates/libzkp/src/verifier.rs (2 hunks)
- crates/libzkp/src/verifier/universal.rs (1 hunks)
- crates/prover-bin/Cargo.toml (3 hunks)
- crates/prover-bin/src/main.rs (0 hunks)
- crates/prover-bin/src/prover.rs (5 hunks)
- crates/prover-bin/src/zk_circuits_handler.rs (1 hunks)
- crates/prover-bin/src/zk_circuits_handler/universal.rs (1 hunks)
- zkvm-prover/Makefile (1 hunks)
- zkvm-prover/config.json.template (1 hunks)
💤 Files with no reviewable changes (1)
- crates/prover-bin/src/main.rs
🚧 Files skipped from review as they are similar to previous changes (4)
- crates/prover-bin/src/zk_circuits_handler/universal.rs
- Cargo.toml
- crates/prover-bin/Cargo.toml
- crates/libzkp/src/verifier.rs
🧰 Additional context used
🧬 Code Graph Analysis (3)

crates/libzkp/src/verifier/universal.rs (3)

- crates/libzkp/src/proofs.rs (1): pi_hash_check (182-200)
- crates/libzkp/src/verifier.rs (1): verify (38-38)
- crates/libzkp/src/utils.rs (1): panic_catch (43-53)

crates/prover-bin/src/zk_circuits_handler.rs (1)

- crates/prover-bin/src/zk_circuits_handler/universal.rs (1): get_proof_data (47-72)

crates/prover-bin/src/prover.rs (2)

- common/types/message/message.go (1): ProofType (14-14)
- crates/prover-bin/src/zk_circuits_handler/universal.rs (2): new (18-32), get_task_from_input (40-42)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: tests
- GitHub Check: tests
- GitHub Check: check
🔇 Additional comments (9)
zkvm-prover/Makefile (1)
54-56
: LGTM: test targets no longer prefetch circuits, aligning with dynamic assets.
Dropping the pre-download prerequisite is consistent with on-demand asset loading. The targets run the prover directly with the config, which is what we want.
crates/prover-bin/src/prover.rs (1)
170-175
: Behavior change acknowledged: `get_vks` deprecated.
Returning an empty list is consistent with dynamic asset loading. Ensure callers don't rely on VK prefetching.
Can you confirm the coordinator ignores get_vks responses now (or handles empty gracefully)?
coordinator/internal/logic/verifier/verifier.go (2)
7-7
: Import of `hex` is appropriate.
Needed for the new hex-first VK decoding path. No issues.
186-201
: VK decoding integration looks good.
The `decodeVkString` usage for Chunk/Batch/Bundle is correct and keeps the code DRY. No functional concerns here.
crates/libzkp/src/proofs.rs (2)
10-14
: Imports and utils path swap look correct.
Switch to `StarkProof` and `utils::vec_as_base64` is consistent with the refactor.
43-44
: Trait signature update is consistent with `StarkProof`.
Returning `&StarkProof` from `AsRootProof` aligns with the new proof representation.
crates/libzkp/src/verifier/universal.rs (3)
10-10
: `UniversalVerifier` import is appropriate.
Matches the switch to the universal verifier path.
13-16
: Struct rename and fields look good.
Holding `ForkName` and the `UniversalVerifier` instance is sufficient for current flows.
50-51
: Return mapping is fine if `verify_*` panics on failure.
`map(|_| true)` implies success is indicated by no panic/Err. If `verify_*` starts returning booleans in the future, this will mask `false`, but that's not the case today.
If there’s any chance verify_stark_proof/verify_evm_proof may return Result<bool, _> later, consider propagating the boolean instead of discarding it.
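To make the intent concrete, here is a minimal, self-contained sketch of that pattern; `verify_inner` and `verify` are hypothetical stand-ins, not the crate's API, and `verify_inner` plays the role of `verify_stark_proof`/`verify_evm_proof`, which signal failure via `Err` rather than returning `false`:

```rust
// Stand-in for the inner verifier: failure is an Err, never a `false`.
fn verify_inner(ok: bool) -> Result<(), String> {
    if ok {
        Ok(())
    } else {
        Err("proof rejected".into())
    }
}

// Today's pattern: success == "no error", so `.map(|_| true)` is sound.
// If the inner call ever returns `Result<bool, _>`, propagate that bool
// instead of discarding it.
fn verify(ok: bool) -> Result<bool, String> {
    verify_inner(ok).map(|_| true)
}

fn main() {
    assert_eq!(verify(true), Ok(true));
    assert!(verify(false).is_err());
}
```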
Codecov Report
❌ Patch coverage is …
Additional details and impacted files:

```
@@            Coverage Diff             @@
##           develop    #1717      +/-   ##
===========================================
- Coverage    37.54%   37.45%    -0.10%
===========================================
  Files          243      243
  Lines        20500    20534       +34
===========================================
- Hits          7697     7691        -6
- Misses       11988    12029       +41
+ Partials       815      814        -1
```

Flags with carried forward coverage won't be shown. ☔ View full report in Codecov by Sentry.
seems good. left several comments
Actionable comments posted: 1
♻️ Duplicate comments (4)
crates/prover-bin/src/prover.rs (4)
309-313
: Avoid `spawn_blocking` + `block_on` on the same runtime.
Running `handle.block_on` inside `spawn_blocking` risks deadlocks if the future does blocking work or tries to spawn more blocking tasks. Prefer spawning the async future directly. Apply this diff:

```diff
-        let handle = Handle::current();
-        let is_evm = req.proof_type == ProofType::Bundle;
-        let task_handle = tokio::task::spawn_blocking(move || {
-            handle.block_on(handler.get_proof_data(&prover_task, is_evm))
-        });
+        let is_evm = req.proof_type == ProofType::Bundle;
+        let handler_cloned = handler.clone();
+        let task_handle = tokio::spawn(async move {
+            handler_cloned.get_proof_data(&prover_task, is_evm).await
+        });
```

If `get_proof_data` is CPU-bound, push that blocking work down to the implementation (e.g., use `spawn_blocking` inside the handler), not here.
65-68
: Sanitize vk before using it as a path component (directory traversal risk).
A malicious or malformed `vk` could contain path separators or `..` and escape `workspace_path`. Validate `vk` before `join`. Apply this diff:

```diff
-        let storage_path = base_path.as_ref().join(vk);
+        // Reject path traversal in `vk`
+        if vk.contains('/') || vk.contains('\\') || vk.contains("..") {
+            eyre::bail!("invalid vk value: {}", vk);
+        }
+        let storage_path = base_path.as_ref().join(vk);
```
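For illustration, a stdlib-only sketch of that guard; the `safe_join` helper name is ours, not from the codebase:

```rust
use std::path::{Path, PathBuf};

// Reject separators and `..` before joining an untrusted component
// onto a trusted base directory.
fn safe_join(base: &Path, vk: &str) -> Result<PathBuf, String> {
    if vk.is_empty() || vk.contains('/') || vk.contains('\\') || vk.contains("..") {
        return Err(format!("invalid vk value: {vk}"));
    }
    Ok(base.join(vk))
}

fn main() {
    let base = Path::new("/workspace");
    assert!(safe_join(base, "abcd1234").is_ok());
    assert!(safe_join(base, "../etc/passwd").is_err());
    assert!(safe_join(base, "a/b").is_err());
    assert!(safe_join(base, "").is_err());
}
```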
70-71
: Set HTTP timeouts to avoid hanging downloads.
`reqwest::Client::new()` has no overall timeout; a stalled server can hang the prover. Apply this diff:

```diff
-        let client = reqwest::Client::new();
+        let client = reqwest::Client::builder()
+            .connect_timeout(std::time::Duration::from_secs(30))
+            .timeout(std::time::Duration::from_secs(600))
+            .build()?;
```

Optional: promote this client to a global `LazyLock` to reuse connections.
236-260
: Don't `unwrap`/`assert!` on user-supplied config; make `new` fallible.
`validate().unwrap()` and the `assert!` on detour URLs will abort the service on bad config. Propagate errors instead. Apply this diff:

```diff
-    pub fn new(mut config: LocalProverConfig) -> Self {
+    pub fn new(mut config: LocalProverConfig) -> eyre::Result<Self> {
         for (fork_name, circuit_config) in config.circuits.iter_mut() {
             // validate each base url
-            circuit_config.location_data.validate().unwrap();
+            circuit_config.location_data.validate()?;
@@
-            // apply default settings in template
-            for (key, url) in circuit_config.location_data.asset_detours.drain() {
-                template_url_mapping.insert(key, url);
-            }
+            // normalize override keys and apply defaults from template
+            let mut overrides = std::mem::take(&mut circuit_config.location_data.asset_detours);
+            let mut normalized = HashMap::with_capacity(overrides.len());
+            for (k, v) in overrides.drain() {
+                // Normalize to lowercase hex without 0x to match `hex::encode`
+                let nk = k.trim_start_matches("0x").to_ascii_lowercase();
+                normalized.insert(nk, v);
+            }
+            template_url_mapping.extend(normalized);
@@
-            // validate each detours url
-            for url in circuit_config.location_data.asset_detours.values() {
-                assert!(
-                    url.path().ends_with('/'),
-                    "url {} must be end with /",
-                    url.as_str()
-                );
-            }
+            // validate each detour URL
+            for url in circuit_config.location_data.asset_detours.values() {
+                if !url.path().ends_with('/') {
+                    eyre::bail!("url {} must end with /", url.as_str());
+                }
+            }
         }
-        Self {
+        Ok(Self {
             config,
             next_task_id: 0,
             current_task: None,
             handlers: HashMap::new(),
-        }
+        })
     }
```

Follow-up: we'll need to adjust call sites to handle `Result<Self>`.
🧹 Nitpick comments (6)
crates/prover-bin/src/prover.rs (6)
27-31
: Fix typos in docs for clarity
- “a altered” → “an alternate” or “an alternative”
- “specififed” → “specified”
Purely cosmetic, but it helps future readers.
```diff
-    /// a altered url for specififed vk
+    /// an alternative URL for a specified vk
```
98-99
: Prefer structured logging over `println!`.
Swap `println!` for `tracing::{info,debug}` to integrate with existing logging and avoid stdout coupling.

```diff
- println!("Downloading {} from {}", filename, download_url);
+ tracing::info!("downloading {filename} from {download_url}");
```
270-301
: Normalize vk used for lookups to avoid mismatches.
`vk` is derived via `hex::encode` (lowercase, no 0x). Ensure detour keys in config/templates match this format. The proposed normalization in `new` addresses this; if not applied, add normalization here before lookups.

```diff
- let vk = hex::encode(&prover_task.vk);
+ let vk = hex::encode(&prover_task.vk); // lowercase, no 0x
```

If you prefer normalizing here instead of in `new`, we can add:

```rust
let vk = vk.trim_start_matches("0x").to_ascii_lowercase();
```
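A tiny standalone version of that normalization, for reference (the `normalize_vk` helper name is illustrative):

```rust
// Strip an optional `0x` prefix and lowercase, so lookup keys match
// the output of `hex::encode` (lowercase hex, no prefix).
fn normalize_vk(vk: &str) -> String {
    vk.trim_start_matches("0x").to_ascii_lowercase()
}

fn main() {
    assert_eq!(normalize_vk("0xABCD12"), "abcd12");
    assert_eq!(normalize_vk("ABCD12"), "abcd12");
    assert_eq!(normalize_vk("abcd12"), "abcd12");
}
```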
272-274
: Minor: avoid `unwrap` on system time and use `as_secs_f64`.
Very unlikely to fail, but `unwrap` is avoidable and `as_secs_f64` is cleaner.

```diff
- let duration = SystemTime::now().duration_since(UNIX_EPOCH).unwrap();
- let created_at = duration.as_secs() as f64 + duration.subsec_nanos() as f64 * 1e-9;
+ let created_at = SystemTime::now()
+     .duration_since(UNIX_EPOCH)
+     .map(|d| d.as_secs_f64())
+     .unwrap_or(0.0);
```
98-116
: Optional: verify remote size only when HEAD is successful.
If HEAD returns non-success, skip the size comparison silently. Also consider using the `CONTENT_LENGTH` constant.

```diff
 // Make a HEAD request to get remote file size
 if let Ok(head_resp) = client.head(download_url.clone()).send().await {
-    if let Some(content_length) = head_resp.headers().get("content-length") {
-        if let Ok(remote_size) =
-            content_length.to_str().unwrap_or("0").parse::<u64>()
-        {
-            // If sizes match, skip download
-            if metadata.len() == remote_size {
-                println!("File {} already exists with matching size, skipping download", filename);
-                continue;
+    if head_resp.status().is_success() {
+        if let Some(content_length) = head_resp.headers().get(reqwest::header::CONTENT_LENGTH) {
+            if let Some(remote_size) =
+                content_length.to_str().ok().and_then(|v| v.parse::<u64>().ok())
+            {
+                // If sizes match, skip download
+                if metadata.len() == remote_size {
+                    println!("File {} already exists with matching size, skipping download", filename);
+                    continue;
+                }
             }
         }
     }
 }
```
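The unwrap-free parse can be exercised in isolation; `parse_content_length` is an illustrative stdlib-only helper, while the real code operates on `reqwest` header values:

```rust
// Collapse a missing or malformed header value to None instead of
// defaulting to "0" and unwrapping.
fn parse_content_length(raw: Option<&str>) -> Option<u64> {
    raw.and_then(|v| v.parse::<u64>().ok())
}

fn main() {
    assert_eq!(parse_content_length(Some("1024")), Some(1024));
    assert_eq!(parse_content_length(Some("not-a-number")), None);
    assert_eq!(parse_content_length(None), None);
}
```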
63-64
: Future-proofing: support additional asset files.
If more assets are introduced per VK, consider making `download_files` configurable per proof type or fetched from a manifest instead of hardcoding the two filenames. I can sketch a small "manifest.json" format and loader if helpful.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (3)
- `crates/prover-bin/assets_url_preset.json` (1 hunks)
- `crates/prover-bin/src/prover.rs` (5 hunks)
- `crates/prover-bin/src/zk_circuits_handler/universal.rs` (1 hunks)
✅ Files skipped from review due to trivial changes (1)
- crates/prover-bin/assets_url_preset.json
🚧 Files skipped from review as they are similar to previous changes (1)
- crates/prover-bin/src/zk_circuits_handler/universal.rs
🧰 Additional context used
🧬 Code graph analysis (1)
crates/prover-bin/src/prover.rs (2)
- crates/prover-bin/src/zk_circuits_handler/universal.rs (2): `new` (18-32), `get_task_from_input` (40-42)
- crates/libzkp/src/lib.rs (4): `serde_json` (20-20), (51-51), (66-66), (77-77)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: tests
- GitHub Check: check
- GitHub Check: tests
🔇 Additional comments (6)
crates/prover-bin/src/prover.rs (6)
34-45
: Good: graceful handling of unknown `ProofType`.
Returning a typed error via `eyre::bail!` (instead of `unreachable!`) is the right call here. This avoids panics on externally supplied values.
47-55
: Good: explicit trailing-slash validation on `base_url`.
This precondition prevents subtle URL-join bugs later. Nice.
147-155
: Config shape looks good; `serde(flatten)` is appropriate.
Embedding `AssetsLocationData` via `#[serde(flatten)]` keeps the JSON tidy. No issues found.
170-176
: API change: `get_vks()` returns empty; confirm upstream compatibility.
Returning an empty list deprecates VK discovery via API. Ensure all coordinators/clients targeting this prover no longer rely on `get_vks` and gracefully handle an empty response.
Would you like me to scan the repo for any `get_vks` consumers and surface potential breakages?
229-234
: Good: externalizing URL presets to JSON.
Using `include_str!` + serde keeps defaults versioned and readable. The `expect` is acceptable here since the file is bundled with the binary.
162-163
: Trait object coherence confirmed.
The `CircuitsHandler` trait is explicitly implemented for `Mutex<UniversalHandler>` in `crates/prover-bin/src/zk_circuits_handler/universal.rs` (lines 46-48). Thanks to Rust's unsizing coercion, `Arc<Mutex<UniversalHandler>>` automatically coerces into `Arc<dyn CircuitsHandler>`. No additional blanket `impl` is needed.
Actionable comments posted: 0
♻️ Duplicate comments (2)
crates/prover-bin/src/zk_circuits_handler/universal.rs (2)
47-59
: Long-held Tokio mutex serializes proofs; shorten the lock scope around the heavy call.
`gen_proof_universal` likely runs for seconds/minutes on large circuits; executing it under a held `Mutex` blocks other tasks sharing the handler/VK. Extract what you need while locked, then release the lock before the heavy computation.
Minimal improvement (avoid starving the async scheduler) by running the heavy call in a blocking region; this still serializes proofs but won't hog a runtime worker:

```diff
-        let proof = handler_self
-            .get_prover()
-            .gen_proof_universal(u_task, need_snark)?;
+        let proof = tokio::task::block_in_place(|| {
+            handler_self
+                .get_prover()
+                .gen_proof_universal(u_task, need_snark)
+        })?;
```

Stronger improvement (parallelize proofs across tasks): store a shareable handle and call outside the lock. This requires a shareable `Prover` (e.g., `Arc` with internal synchronization) and `Prover` being `Sync` to cross threads safely. If that's true, refactor like:

```diff
-    prover: Prover,
+    prover: std::sync::Arc<Prover>,
@@
-        let prover = Prover::setup(config, use_evm, None)?;
-        Ok(Self { prover })
+        let prover = Prover::setup(config, use_evm, None)?;
+        Ok(Self { prover: std::sync::Arc::new(prover) })
@@
-    pub fn get_prover(&self) -> &Prover {
-        &self.prover
+    pub fn get_prover(&self) -> std::sync::Arc<Prover> {
+        self.prover.clone()
     }
@@
-        let proof = handler_self
-            .get_prover()
-            .gen_proof_universal(u_task, need_snark)?;
+        let prover = handler_self.get_prover(); // drop the mutex guard here
+        let proof = tokio::task::block_in_place(|| {
+            prover.gen_proof_universal(u_task, need_snark)
+        })?;
```

If `Prover` is not `Sync`, do not adopt the `Arc` approach. In that case, keep serialization but use `block_in_place` (above) to avoid blocking the reactor.
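A generic, runnable illustration of the "clone the handle under the lock, compute outside it" pattern, using std threads and `Mutex` in place of tokio and the real `Prover` (all names here are stand-ins):

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Each worker locks only long enough to clone the inner Arc handle,
// then does its "heavy" work with the mutex already released.
fn compute_parallel() -> Vec<u64> {
    let shared = Arc::new(Mutex::new(Arc::new(42u64))); // stands in for the shared prover
    let handles: Vec<_> = (0..4)
        .map(|_| {
            let shared = Arc::clone(&shared);
            thread::spawn(move || {
                // MutexGuard is a temporary: dropped at the end of this statement
                let prover = Arc::clone(&shared.lock().unwrap());
                *prover * 2 // heavy computation runs outside the lock
            })
        })
        .collect();
    handles.into_iter().map(|h| h.join().unwrap()).collect()
}

fn main() {
    assert_eq!(compute_parallel(), vec![84, 84, 84, 84]);
}
```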
15-15
: Unsafe `Send` on `UniversalHandler` is unjustified and likely unsound; remove it or document the safety contract.
You're force-marking `UniversalHandler` as `Send` while it contains `scroll_zkvm_prover::Prover`. Without a proof that `Prover` is thread-safe, this invites undefined behavior. Either:
- Remove the unsafe impl and restructure so you don't need it, or
- Add a rigorous SAFETY: justification proving Prover has no thread-affine or !Send interior and is safe to move across threads.
Run this script to inspect Prover’s definition and (non-)Send/Sync status:
```bash
#!/usr/bin/env bash
set -euo pipefail
# Locate the Prover type and check for Send/Sync impls or thread-affine fields.
rg -nP -C4 --type=rust '\b(struct|pub\s+struct)\s+Prover\b'
rg -nP --type=rust 'unsafe\s+impl\s+(Send|Sync)\s+for\s+Prover'
rg -nP --type=rust -C2 '\bProver\b.*\{'
# Look for Rc/RefCell/cell::Cell/raw pointers that may prevent Send/Sync:
rg -nP --type=rust -C2 'Rc<|RefCell<|Cell<|\*mut |\*const ' $(fd -a '' | rg 'scroll_zkvm_prover')
```
🧹 Nitpick comments (4)
crates/prover-bin/src/zk_circuits_handler/universal.rs (4)
57-61
: CPU-bound work inside an async function should use `block_in_place`/`spawn_blocking`.
The proof generation is CPU-heavy and will block a Tokio worker thread. Use `tokio::task::block_in_place` (shown above) or `tokio::task::spawn_blocking` if `Prover` and the captured values are `Send`. The minimal diff in the previous comment applies here.
If you can guarantee Prover (or a handle) is Send + 'static, prefer spawn_blocking:
```rust
// sketch
let prover = handler_self.get_prover().clone(); // Arc<Prover>
let proof = tokio::task::spawn_blocking(move || {
    prover.gen_proof_universal(u_task, need_snark)
}).await??;
```
50-54
: Don't reach into `Prover` internals; add an accessor and improve the error.
Directly touching `.evm_prover` ties you to `Prover`'s internals. Prefer a small accessor on `UniversalHandler` and make the error actionable.
Apply these diffs:

```diff
-        if need_snark && handler_self.prover.evm_prover.is_none() {
-            eyre::bail!(
-                "do not init prover for evm (vk: {})",
-                BASE64_STANDARD.encode(handler_self.get_prover().get_app_vk())
-            )
+        if need_snark && !handler_self.is_evm_enabled() {
+            eyre::bail!(
+                "need_snark=true but EVM prover is not initialized; set ProofType::Bundle or enable EVM (vk_b64={})",
+                BASE64_STANDARD.encode(handler_self.get_prover().get_app_vk())
+            )
         }
```

and add this helper near `get_prover()`:

```diff
     pub fn get_prover(&self) -> &Prover {
         &self.prover
     }
+
+    #[inline]
+    pub fn is_evm_enabled(&self) -> bool {
+        // keep encapsulation here even if Prover exposes fields today
+        self.prover.evm_prover.is_some()
+    }
```
21-26
: Magic number in `segment_len`: name it and document the rationale.
`(1 << 22) - 100` silently encodes ~4 MiB minus 100 bytes. Explain the "minus 100" margin and centralize it under a named constant.
Apply:
```diff
+const DEFAULT_SEGMENT_LEN: usize = (1 << 22) - 100; // ~4 MiB minus safety margin; TODO: document why 100
@@
-        let segment_len = Some((1 << 22) - 100);
+        let segment_len = Some(DEFAULT_SEGMENT_LEN);
```
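For reference, the named constant evaluates as expected; the margin value is carried over from the original expression, not justified here:

```rust
// 1 << 22 == 4_194_304, so the constant is 4_194_204.
const DEFAULT_SEGMENT_LEN: usize = (1 << 22) - 100;

fn main() {
    assert_eq!(DEFAULT_SEGMENT_LEN, 4_194_204);
    assert_eq!(DEFAULT_SEGMENT_LEN, (1usize << 22) - 100);
}
```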
34-36
: Doc comment is outdated/unclear.
The note about "later we would replace chunk/batch/bundle_prover with universal" is ambiguous in the current codebase. Update it to state the current contract ("exposes the inner universal prover handle").
```diff
-    /// get_prover get the inner prover, later we would replace chunk/batch/bundle_prover with
-    /// universal prover, before that, use bundle_prover as the represent one
+    /// Returns a reference to the inner universal prover handle.
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (1)
crates/prover-bin/src/zk_circuits_handler/universal.rs
(1 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
crates/prover-bin/src/zk_circuits_handler/universal.rs (1)
crates/prover-bin/src/zk_circuits_handler.rs (1)
get_proof_data
(12-12)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: tests
- GitHub Check: check
- GitHub Check: tests
🔇 Additional comments (1)
crates/prover-bin/src/zk_circuits_handler/universal.rs (1)
28-31
: Verify EVM usage mapping for all `ProofType` variants.
The audit script did not locate the `ProofType` enum in the codebase, so please manually confirm which variants of `ProofType` should enable `use_evm`. If additional variants require EVM support beyond `Bundle`, update the condition accordingly.

• Review the definition of `ProofType` (likely in `crates/prover-bin/src/zk_circuits_handler/…`) and identify every variant.
• Adjust the `use_evm` assignment from:

```rust
let use_evm = proof_type == ProofType::Bundle;
```

to something like:

```rust
let use_evm = matches!(
    proof_type,
    ProofType::Bundle /* | ProofType::OtherEvmBacked */
);
```

so that all EVM-backed proof types are covered.
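A toy, self-contained version of that gate; the enum below is a stand-in with assumed variant names, while the real `ProofType` lives elsewhere:

```rust
// Illustrative only: variants mirror the Chunk/Batch/Bundle proof kinds
// discussed in this PR, not the actual definition.
enum ProofType {
    Chunk,
    Batch,
    Bundle,
}

fn use_evm(proof_type: &ProofType) -> bool {
    // extensible gate: add future EVM-backed variants to the pattern
    matches!(proof_type, ProofType::Bundle)
}

fn main() {
    assert!(use_evm(&ProofType::Bundle));
    assert!(!use_evm(&ProofType::Chunk));
    assert!(!use_evm(&ProofType::Batch));
}
```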
Actionable comments posted: 0
🧹 Nitpick comments (3)
crates/gpu_override/.cargo/config.toml (3)
3-13
: Nit: keep formatting consistent for readability.
`openvm-instructions` has a missing space before the brace, unlike the other lines.
```diff
-openvm-instructions ={ git = "ssh://git@github.com/scroll-tech/openvm-gpu.git", branch = "patch-v1.3.0-pipe", default-features = false }
+openvm-instructions = { git = "ssh://git@github.com/scroll-tech/openvm-gpu.git", branch = "patch-v1.3.0-pipe", default-features = false }
```
3-13
: Enable system Git for Cargo fetches to improve SSH reliability.
No workspace-level `.cargo/config.toml` was found, so it's safe to add the `[net]` section here to force Cargo to use the system Git client when fetching those SSH URLs.

• File needing update: `crates/gpu_override/.cargo/config.toml`
• Add at the top (or bottom) of the file:

```diff
--- a/crates/gpu_override/.cargo/config.toml
+++ b/crates/gpu_override/.cargo/config.toml
+[net]
+git-fetch-with-cli = true
```

Alternatively, if you'd rather switch entirely to HTTPS (and manage authentication via tokens or CI secrets), update each git URL:

```diff
 [patch.crates-io]
-openvm-build = { git = "ssh://git@github.com/scroll-tech/openvm-gpu.git", ... }
+openvm-build = { git = "https://github.com/scroll-tech/openvm-gpu.git", ... }
```
3-13
: Align patch override scope with your intent.
I confirmed that the only `.cargo/config.toml` defining your OpenVM patch is at `crates/gpu_override/.cargo/config.toml`, so the override is indeed scoped to that crate and its dependencies. Other workspace crates will continue pulling the published OpenVM versions, which can lead to duplicate crate versions and trait-coherence conflicts at link time.

• If you intended a workspace-wide override, move the entire patch section into the workspace root `.cargo/config.toml`:

```toml
[patch."https://github.com/openvm-org/openvm.git"]
openvm-build = { git = "…", branch = "patch-v1.3.0-pipe", default-features = false }
…
openvm-transpiler = { git = "…", branch = "patch-v1.3.0-pipe", default-features = false }
```
• If you only need this override for the GPU-override crate, no changes are required.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
⛔ Files ignored due to path filters (1)
`crates/gpu_override/Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (2)
- `crates/gpu_override/.cargo/config.toml` (2 hunks)
- `zkvm-prover/config.json.template` (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- zkvm-prover/config.json.template
🔇 Additional comments (3)
crates/gpu_override/.cargo/config.toml (3)
12-12
: Double-check the feature set on `openvm-sdk`.
Features `["parallel", "bench-metrics", "evm-prove"]` look intentional. Confirm "bench-metrics" is safe in production profiles and doesn't add non-trivial overhead or logging in release builds. If it's only for benchmarking, gate it behind a `cfg(feature = "bench")` at the workspace level.
45-45
: Override Verified: `p3-bn254-fr` at v0.2.1.
Lock-file inspection confirms that in `crates/gpu_override/Cargo.lock` the package `p3-bn254-fr` (v0.1.0) is sourced from `git+ssh://git@github.com/scroll-tech/plonky3-gpu.git?tag=v0.2.1#…`, matching the override added in `.cargo/config.toml`. All Plonky3 crates remain uniformly pinned to v0.2.1. LGTM.
3-13
: Pin Git dependencies by exact commit SHA, not branch.
The overrides in `crates/gpu_override/.cargo/config.toml` currently use `branch = "patch-v1.3.0-pipe"`, which risks silent drift. Lock each dependency to the specific commit that passed CI:

```diff
-openvm-build = { git = "ssh://git@github.com/scroll-tech/openvm-gpu.git", branch = "patch-v1.3.0-pipe", default-features = false }
+openvm-build = { git = "ssh://git@github.com/scroll-tech/openvm-gpu.git", rev = "<commit-for-patch-v1.3.0-pipe>", default-features = false }
-openvm-circuit = { git = "ssh://git@github.com/scroll-tech/openvm-gpu.git", branch = "patch-v1.3.0-pipe", default-features = false }
+openvm-circuit = { git = "ssh://git@github.com/scroll-tech/openvm-gpu.git", rev = "<commit-for-patch-v1.3.0-pipe>", default-features = false }
```

…and so on for each `openvm-*` entry.

Audit of the full workspace reveals additional branch-based pins that would benefit from the same treatment:

- workspace `Cargo.toml`:
  • `sbv-primitives`, `sbv-utils` at lines 24–25 (`branch = "chore/openvm-1.3"`)
  • the `revm` family in `[patch.crates-io]` at lines 49–60 (`branch = "feat/reth-v78"`)
  • `ruint` at line 62 (`branch = "v1.15.0"`)
  • `alloy-primitives` at line 63 (`branch = "v1.2.0"`)

Converting these to `rev = "<SHA>"` everywhere will align with the workspace's pinned-deps policy and eliminate supply-chain flakiness.
Actionable comments posted: 0
🧹 Nitpick comments (1)
Cargo.toml (1)
20-22
: Prefer full 40-char commit SHAs in Cargo git dependencies.
Using a short SHA (e.g., `89a2dc1`) can become ambiguous as history grows. Cargo will lock to the resolved full SHA in `Cargo.lock`, but encoding the full 40-char SHA in `rev` avoids ambiguity at source and simplifies offline verification.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
⛔ Files ignored due to path filters (2)
`Cargo.lock` is excluded by `!**/*.lock`
`crates/gpu_override/Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (1)
Cargo.toml
(1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: tests
- GitHub Check: check
- GitHub Check: tests
🔇 Additional comments (2)
Cargo.toml (2)
20-22
: Good move: pin all zkvm crates to the same commit for reproducibility.
Pinning `scroll-zkvm-prover`, `scroll-zkvm-verifier`, and `scroll-zkvm-types` to a single commit in the same repo aligns versions across the universal prover/verifier shift and prevents branch drift. This should make CI and coordinated rollouts more deterministic.
20-22
: Fix invalid Git dependencies for Scroll ZKVM crates.
It looks like Cargo won't be able to clone or discover the crates at the pinned commit. Please address the following critical issues:

• The revision `89a2dc1` cannot be found in the `scroll-tech/zkvm-prover` repository; Cargo will error out on clone/checkout.
  – Ensure you've pushed that commit to GitHub, or update `rev` to the correct full SHA.
• Verify where the `scroll-zkvm-verifier` and `scroll-zkvm-types` crates actually live. If they're in separate repos (e.g. `https://github.com/scroll-tech/zkvm-verifier`, `https://github.com/scroll-tech/zkvm-types`), update their `git =` URLs accordingly.
• Once the URLs and revisions are fixed, confirm that each repo at the pinned commit declares the matching `name = "scroll-zkvm-…"` in its `Cargo.toml`.
• Finally, if you're committing `Cargo.lock`, make sure it locks all three dependencies to the same full commit hash (no mixed SHAs).

Suggested diff outline in `Cargo.toml`:

```diff
-[dependencies]
-scroll-zkvm-prover = { git = "https://github.com/scroll-tech/zkvm-prover", rev = "89a2dc1" }
-scroll-zkvm-verifier = { git = "https://github.com/scroll-tech/zkvm-prover", rev = "89a2dc1" }
-scroll-zkvm-types = { git = "https://github.com/scroll-tech/zkvm-prover", rev = "89a2dc1" }
+[dependencies]
+scroll-zkvm-prover = { git = "https://github.com/scroll-tech/zkvm-prover", rev = "<PROVER_SHA>" }
+scroll-zkvm-verifier = { git = "https://github.com/scroll-tech/zkvm-verifier", rev = "<VERIFIER_SHA>" }
+scroll-zkvm-types = { git = "https://github.com/scroll-tech/zkvm-types", rev = "<TYPES_SHA>" }
```

Likely an incorrect or invalid review comment.
Actionable comments posted: 7
🧹 Nitpick comments (5)
coordinator/Makefile (2)
37-41
: Mark `localsetup` as phony and prefer `CURDIR` over `PWD` for portability.
Minor Makefile hygiene: declare the target as `.PHONY` and use `$(CURDIR)` (set by make) instead of relying on `$(PWD)` from the shell.

Additions outside the shown hunk:

```make
# near line 1
.PHONY: lint docker clean coordinator coordinator_skip_libzkp mock_coordinator libzkp localsetup
```

Within the target:

```diff
-	cp -r $(PWD)/conf $(PWD)/build/bin/
+	cp -r $(CURDIR)/conf $(CURDIR)/build/bin/
-	cd $(PWD)/build && SCROLL_ZKVM_VERSION=$${SCROLL_ZKVM_VERSION:?SCROLL_ZKVM_VERSION is required (e.g. release tag/commit)} bash setup_releases.sh
+	cd $(CURDIR)/build && SCROLL_ZKVM_VERSION=$${SCROLL_ZKVM_VERSION:?SCROLL_ZKVM_VERSION is required (e.g. release tag/commit)} bash setup_releases.sh
```
37-41
: Preflight check for required tools (jq, wget) to improve DX.
The make target can fail late with unclear errors if jq/wget are missing. Add a lightweight prerequisite target to check tool availability once.
Add outside the hunk:
```make
check_tools:
	@command -v jq >/dev/null 2>&1 || { echo "jq not found"; exit 1; }
	@command -v wget >/dev/null 2>&1 || { echo "wget not found"; exit 1; }

# and make localsetup depend on it
localsetup: check_tools coordinator_api
```

coordinator/build/setup_releases.sh (3)
4-7
: Validate `SCROLL_ZKVM_VERSION` format to avoid surprising URLs.
A basic allowlist guards against accidental spaces or shell expansion, and yields clearer errors.
Apply:
```diff
 if [ -z "${SCROLL_ZKVM_VERSION}" ]; then
   echo "SCROLL_ZKVM_VERSION not set"
   exit 1
 fi
+
+# allow only tag/commit-like values
+if ! [[ "${SCROLL_ZKVM_VERSION}" =~ ^[A-Za-z0-9._-]+$ ]]; then
+  echo "Invalid SCROLL_ZKVM_VERSION: ${SCROLL_ZKVM_VERSION}"
+  exit 1
+fi
```
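The allowlist can be factored into a reusable function and smoke-tested on its own (the `is_valid_version` function name is illustrative, not from the script):

```shell
#!/usr/bin/env bash
# Accept only tag/commit-like values: letters, digits, '.', '_', '-'.
is_valid_version() {
  [[ "$1" =~ ^[A-Za-z0-9._-]+$ ]]
}

is_valid_version "v0.5.2" && echo "accepted: v0.5.2"
is_valid_version 'bad value; rm -rf /' || echo "rejected: shell-unsafe input"
```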
55-56
: Avoid needless re-downloads with timestamping.
If assets are already present and unchanged, skip the network. This speeds up local iterations.
Alternative to the previous diff:
```diff
-wget https://.../verifier.bin -O ${ASSET_DIR}/verifier.bin
-wget https://.../openVmVk.json -O ${ASSET_DIR}/openVmVk.json
+wget -N https://.../verifier.bin -P "${ASSET_DIR}"
+wget -N https://.../openVmVk.json -P "${ASSET_DIR}"
```

Note: `-N` relies on Last-Modified headers being set by S3.
58-62
: Report partial failures accurately.
Currently, the script always prints "All verifier assets downloaded successfully" even if one download fails earlier (and without strict modes it wouldn't exit). With strict modes enabled this becomes less likely, but an explicit status still helps.
Apply:
```diff
-echo "All verifier assets downloaded successfully"
+echo "All verifier assets downloaded successfully for $VERIFIER_COUNT verifier(s)."
```

Or track a failure flag and summarize successes/failures.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (3)
- `coordinator/Makefile` (1 hunks)
- `coordinator/build/setup_releases.sh` (1 hunks)
- `coordinator/conf/config.json` (0 hunks)
💤 Files with no reviewable changes (1)
- coordinator/conf/config.json
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: tests
- GitHub Check: check
- GitHub Check: tests
🔇 Additional comments (1)
coordinator/build/setup_releases.sh (1)
27-35
: Confirm intent: allow absolute paths from config to write anywhere?
Accepting an absolute `assets_path` means a malformed config can write outside the repo/build tree. If this is only for local dev that's fine; otherwise, restrict it to `SCRIPT_DIR/bin` or require a specific allowlist.
If you want to restrict, add:
```diff
+if [[ "$ASSETS_PATH" = /* ]]; then
+  echo "Absolute assets_path not allowed for safety: $ASSETS_PATH"
+  exit 1
+fi
```
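The same guard, runnable standalone; the `check_assets_path` function name is ours, not from the script:

```shell
#!/usr/bin/env bash
# Refuse absolute assets_path values; relative paths stay under the build tree.
check_assets_path() {
  if [[ "$1" = /* ]]; then
    echo "Absolute assets_path not allowed for safety: $1" >&2
    return 1
  fi
  return 0
}

check_assets_path "bin/assets" && echo "relative path accepted"
check_assets_path "/etc/assets" || echo "absolute path rejected"
```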
Actionable comments posted: 2
🧹 Nitpick comments (3)
common/types/message/message.go (1)
162-162
: Consider documenting the unit of measurement for `TotalGasUsed`.
While the field name suggests gas usage, it would be helpful to add a comment clarifying the exact unit (e.g., gas units, wei) for better maintainability.
coordinator/internal/logic/submitproof/proof_receiver.go (1)
85-85: Remove unnecessary blank line. There's an extra blank line added that doesn't improve readability. Remove the blank line at line 85.
crates/libzkp/src/proofs.rs (1)
125-126: Consider adding a unit specifier to the field name or documentation. The field `chunk_total_gas` would benefit from clearer documentation about its unit of measurement. Consider either renaming it to `chunk_total_gas_used` (to match the Go struct) or adding more detailed documentation. Apply this diff to improve clarity:

```diff
-    /// Additional data for stat
-    pub chunk_total_gas: u64,
+    /// Total gas consumed by all transactions in the chunk (in gas units)
+    pub chunk_total_gas: u64,
```
📜 Review details
📒 Files selected for processing (5)
- common/types/message/message.go (2 hunks)
- coordinator/internal/logic/submitproof/proof_receiver.go (4 hunks)
- coordinator/test/api_test.go (1 hunks)
- crates/libzkp/src/proofs.rs (4 hunks)
- crates/libzkp/src/tasks.rs (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (2)
coordinator/test/api_test.go (1)
- common/types/message/message.go (1): `ChunkInfo` (113-127)
coordinator/internal/logic/submitproof/proof_receiver.go (1)
- common/types/message/message.go (2): `ProofTypeBatch` (35-35), `OpenVMBatchProof` (192-201)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (8)
- GitHub Check: tests
- GitHub Check: tests
- GitHub Check: check
- GitHub Check: tests
- GitHub Check: tests
- GitHub Check: check
- GitHub Check: check
- GitHub Check: tests
🔇 Additional comments (7)
common/types/message/message.go (2)
138-143: LGTM! Well-structured telemetry data type. The new `OpenVMProofStat` struct provides essential performance metrics for proof generation, which will be valuable for monitoring and optimization. The field names are clear and the JSON tags follow consistent naming conventions.
147-149: LGTM! Proper use of pointer for optional field. Using `*OpenVMProofStat` with the `omitempty` tag ensures backward compatibility when stats are not available, which is important for gradual rollout of this feature.
coordinator/test/api_test.go (1)
586-589: LGTM! Test correctly updated for new metadata field. The test properly includes the new `TotalGasUsed` field in the anonymous struct that mirrors `OpenVMChunkProof.MetaData`, ensuring serialization compatibility.
coordinator/internal/logic/submitproof/proof_receiver.go (1)
223-233: Add defensive checks for division by zero. The calculation `float64(stat.TotalCycle) / float64(chunkProof.MetaData.TotalGasUsed)` could cause a panic if `TotalGasUsed` is zero. Also, the metrics update could fail silently if the gauge is nil. Apply this diff to add defensive checks:
```diff
 if stat := chunkProof.VmProof.Stat; stat != nil {
 	if g, _ := m.proverSpeed.GetMetricWithLabelValues("chunk", "exec"); g != nil {
-		g.Set(float64(stat.TotalCycle) / float64(stat.ExecutionTimeMills*1000))
+		if stat.ExecutionTimeMills > 0 {
+			g.Set(float64(stat.TotalCycle) / float64(stat.ExecutionTimeMills*1000))
+		}
 	}
 	if g, _ := m.proverSpeed.GetMetricWithLabelValues("chunk", "proving"); g != nil {
-		g.Set(float64(stat.TotalCycle) / float64(stat.ProvingTimeMills*1000))
+		if stat.ProvingTimeMills > 0 {
+			g.Set(float64(stat.TotalCycle) / float64(stat.ProvingTimeMills*1000))
+		}
 	}
-	cycle_per_gas := float64(stat.TotalCycle) / float64(chunkProof.MetaData.TotalGasUsed)
-	m.evm_cycle_per_gas.Set(cycle_per_gas)
-	m.provingTime.Set(float64(stat.ProvingTimeMills) / 1000)
+	if chunkProof.MetaData.TotalGasUsed > 0 {
+		cyclePerGas := float64(stat.TotalCycle) / float64(chunkProof.MetaData.TotalGasUsed)
+		m.evmCyclePerGas.Set(cyclePerGas)
+	}
+	if stat.ProvingTimeMills > 0 {
+		m.provingTime.Set(float64(stat.ProvingTimeMills) / 1000)
+	}
 }
```

Likely an incorrect or invalid review comment.
crates/libzkp/src/proofs.rs (2)
10-10: LGTM! Consistent migration from RootProof to StarkProof. The type migration from `RootProof` to `StarkProof` and the import path update for `vec_as_base64` are properly implemented throughout the file. Also applies to: 13-13
63-68: LGTM! Proper implementation of trait methods. The trait implementations correctly return `&StarkProof` and use the appropriate `as_stark_proof()` method. The error messages in the `expect()` calls are accurate. Also applies to: 71-76
crates/libzkp/src/tasks.rs (1)
47-56: LGTM! Proper extraction and usage of gas statistics. The implementation correctly extracts `total_gas_used` from task statistics and includes it in the metadata. The field is properly passed to the `ChunkProofMetadata` constructor.
Actionable comments posted: 1
🧹 Nitpick comments (5)
coordinator/internal/config/config.go (1)
60-64: Per-fork MinProverVersion: define precedence and validate inputs
- Clarify in code/docs whether VerifierConfig.MinProverVersion is a global floor and AssetConfig.MinProverVersion further tightens per fork (current usage in auth.Check suggests yes). Consider a short comment on the field to prevent misconfiguration.
- Add lightweight validation after loading config to ensure each MinProverVersion is either a valid semver or an accepted "sdk-..." format.
Would you like a small Validate() helper invoked from NewConfig to check these constraints?
coordinator/internal/logic/auth/login.go (4)
32-36: Avoid shadowing cfg in range clause. The inner loop variable shadows the function parameter cfg, which hurts readability and risks mistakes during refactors.
Apply:

```diff
-	for _, cfg := range cfg.ProverManager.Verifier.Verifiers {
-		proverVersionHardForkMap[cfg.ForkName] = cfg.MinProverVersion
-	}
+	for _, verifierCfg := range cfg.ProverManager.Verifier.Verifiers {
+		proverVersionHardForkMap[verifierCfg.ForkName] = verifierCfg.MinProverVersion
+	}
```
27-28: Map name doesn’t reflect new semantics. Now that the map is forkName -> minVersion, consider a clearer name like forkMinVersionMap.
Apply:

```diff
-	proverVersionHardForkMap map[string]string
+	forkMinVersionMap map[string]string
```

Then update usages accordingly in this file.
108-113: Deterministic output for joined hard fork names. Map iteration order is random; joining without sorting yields nondeterministic results.
Apply:

```diff
-	if len(hardForkNames) == 0 {
+	if len(hardForkNames) == 0 {
 		return "", fmt.Errorf("invalid prover prover_version:%s", login.Message.ProverVersion)
-	}
-	return strings.Join(hardForkNames, ","), nil
+	}
+	sort.Strings(hardForkNames)
+	return strings.Join(hardForkNames, ","), nil
```

And add the import:

```go
// add to imports
import "sort"
```
102-113: Unit tests for version gating logic. Please add tests covering:
- sdk versions (e.g., "sdk-2025.08.01") mapping to expected forks,
- semver prereleases (e.g., "4.2.0-beta.1"),
- empty per-fork MinProverVersion (matches all),
- no matches → error.
I can draft a table‑driven test for ProverHardForkName using a stubbed LoginParameter and a seeded map of forks/min versions.
📜 Review details
📒 Files selected for processing (2)
- coordinator/internal/config/config.go (1 hunks)
- coordinator/internal/logic/auth/login.go (2 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
coordinator/internal/logic/auth/login.go (3)
- coordinator/internal/config/config.go (2): `Config` (51-56), `ProverManager` (13-29)
- common/version/prover_version.go (1): `CheckScrollRepoVersion` (37-55)
- coordinator/internal/types/auth.go (2): `Message` (42-49), `ProverVersion` (20-20)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (8)
- GitHub Check: tests
- GitHub Check: check
- GitHub Check: tests
- GitHub Check: check
- GitHub Check: tests
- GitHub Check: tests
- GitHub Check: tests
- GitHub Check: check
The prover can now load assets when it receives a task: it loads the corresponding circuit assets according to the vk specified in the universal task.
The updated notes for deployment can be checked in: https://www.notion.so/scrollzkp/Deployment-of-coordinator-prover-for-feynman-upgrade-2237792d22af807583c6cd3920bda3d2
Minor updates:
Introduce new metrics: [Feat] Induce new metric for proving #1726
Fix the fork-name version check: until now there was an issue in the fork-name matching while handling prover login. As a result, only provers whose repo version was identical to the one specified by `prover_manager.verifier.min_prover_version` were allowed to log in. This PR fixes that: any prover whose repo version is not lower than `min_prover_version` can log in. In addition, a `min_prover_version` field can be added to any element in `prover_manager.verifier.verifiers`, so any prover whose repo version is lower than that optional `min_prover_version` is excluded from the tasks for the corresponding fork.
Summary by CodeRabbit
New Features
Refactor
Dependencies