Fix fsmeta chaos and nightly correctness checks #225
Conversation
⚠️ Performance Alert ⚠️
A possible performance regression was detected for this benchmark run.
The benchmark result of this commit is worse than the previous benchmark result by more than the 1.15 threshold.
| Benchmark suite | Current: 55c9e3a | Previous: 3b73389 | Ratio |
|---|---|---|---|
| BenchmarkARTGet (github.com/feichai0017/NoKV/engine/index) | 344.7 ns/op 0 B/op 0 allocs/op | 292.1 ns/op 0 B/op 0 allocs/op | 1.18 |
| BenchmarkARTGet (github.com/feichai0017/NoKV/engine/index) - ns/op | 344.7 ns/op | 292.1 ns/op | 1.18 |
| BenchmarkARTIteratorNext (github.com/feichai0017/NoKV/engine/index) | 94.91 ns/op 0 B/op 0 allocs/op | 56.29 ns/op 0 B/op 0 allocs/op | 1.69 |
| BenchmarkARTIteratorNext (github.com/feichai0017/NoKV/engine/index) - ns/op | 94.91 ns/op | 56.29 ns/op | 1.69 |
| BenchmarkSkiplistInsert (github.com/feichai0017/NoKV/engine/index) | 1900 ns/op 33.69 MB/s 161 B/op 1 allocs/op | 1047 ns/op 61.12 MB/s 159 B/op 1 allocs/op | 1.81 |
| BenchmarkSkiplistInsert (github.com/feichai0017/NoKV/engine/index) - ns/op | 1900 ns/op | 1047 ns/op | 1.81 |
| BenchmarkL0SelectTablesForKeyLinear (github.com/feichai0017/NoKV/engine/lsm) | 702139895 ns/op | 1067 ns/op | 658050.51 |
| BenchmarkDirPageMaterializeAsync/entries=10000 (github.com/feichai0017/NoKV/engine/slab/dirpage) | 3928460 ns/op 6766991 B/op 2074 allocs/op | 3396147 ns/op 6766466 B/op 2072 allocs/op | 1.16 |
| BenchmarkDirPageMaterializeAsync/entries=10000 (github.com/feichai0017/NoKV/engine/slab/dirpage) - ns/op | 3928460 ns/op | 3396147 ns/op | 1.16 |
| BenchmarkDirPageInvalidate (github.com/feichai0017/NoKV/engine/slab/dirpage) | 55.59 ns/op 0 B/op 0 allocs/op | 46.94 ns/op 0 B/op 0 allocs/op | 1.18 |
| BenchmarkDirPageInvalidate (github.com/feichai0017/NoKV/engine/slab/dirpage) - ns/op | 55.59 ns/op | 46.94 ns/op | 1.18 |
| BenchmarkWALReplay (github.com/feichai0017/NoKV/engine/wal) | 42084047 ns/op 5991621 B/op 83377 allocs/op | 35406397 ns/op 5991546 B/op 83376 allocs/op | 1.19 |
| BenchmarkWALReplay (github.com/feichai0017/NoKV/engine/wal) - ns/op | 42084047 ns/op | 35406397 ns/op | 1.19 |
| BenchmarkDBBatchSet/NoSync (github.com/feichai0017/NoKV/local) - MB/s | 156.6 MB/s | 133.25 MB/s | 1.18 |
| BenchmarkDBBatchSet/SyncInline (github.com/feichai0017/NoKV/local) - MB/s | 36.61 MB/s | 29.96 MB/s | 1.22 |
| BenchmarkDBBatchSet/SyncPipeline (github.com/feichai0017/NoKV/local) - MB/s | 36.8 MB/s | 31.3 MB/s | 1.18 |
| BenchmarkDBCommitInlineValueSizes/Inline_64B (github.com/feichai0017/NoKV/local) - MB/s | 45.31 MB/s | 36.76 MB/s | 1.23 |
| BenchmarkDBCommitInlineValueSizes/Inline_4KB (github.com/feichai0017/NoKV/local) - MB/s | 866.84 MB/s | 732.9 MB/s | 1.18 |
| BenchmarkDBIteratorScan (github.com/feichai0017/NoKV/local/internal/iterator) - B/op | 93 B/op | 33 B/op | 2.82 |
| BenchmarkMPSCQueuePushPop/producers=4 (github.com/feichai0017/NoKV/utils) | 189.7 ns/op | 155.3 ns/op | 1.22 |
| BenchmarkMPSCQueuePushPop/producers=8 (github.com/feichai0017/NoKV/utils) | 221.5 ns/op | 173.7 ns/op | 1.28 |
| BenchmarkMPSCQueuePushPop/producers=16 (github.com/feichai0017/NoKV/utils) | 239.1 ns/op | 194.2 ns/op | 1.23 |
| BenchmarkMPSCQueueConsumerSessionPushPop/producers=8 (github.com/feichai0017/NoKV/utils) | 160.1 ns/op | 130.2 ns/op | 1.23 |
| BenchmarkMPSCQueuePushOnlyContention/producers=8 (github.com/feichai0017/NoKV/utils) | 218.3 ns/op | 183.6 ns/op | 1.19 |
| BenchmarkMPSCQueuePushOnlyContention/producers=16 (github.com/feichai0017/NoKV/utils) | 232.1 ns/op | 195.4 ns/op | 1.19 |
| BenchmarkRingPushPop/producers=4 (github.com/feichai0017/NoKV/utils) | 89.86 ns/op | 76.15 ns/op | 1.18 |
| BenchmarkRingPushPop/producers=8 (github.com/feichai0017/NoKV/utils) | 88.16 ns/op | 74.04 ns/op | 1.19 |
| BenchmarkRingPushPop/producers=16 (github.com/feichai0017/NoKV/utils) | 87.85 ns/op | 75.35 ns/op | 1.17 |
This comment was automatically generated by workflow using github-action-benchmark.
Pull request overview
This PR improves fsmeta chaos/nightly correctness validation by making the concurrent history checker more robust to ambiguous outcomes, fixing dirpage cache identity for paginated ReadDirPlus, and ensuring storage-backed coordinator root-event validation uses the same durable snapshot authority used for lifecycle assessment.
Changes:
- Extend the fsmeta concurrent history contract runner to retain a bounded set of alternative valid serializations and optionally treat certain availability failures as indeterminate commit outcomes.
- Fix dirpage cache keying so ReadDirPlus pages are cached per (mount, parent, StartAfter, Limit) while invalidation remains directory-scoped (see the sketch after this list).
- Validate coordinator root events against the durable storage snapshot when running in storage-backed mode (to avoid stale in-memory cache validation).
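To make the cache-keying change concrete, here is a minimal, self-contained Go sketch of pagination-aware keying with directory-scoped invalidation. The names (`PageKey`, `DirectoryKey`, `byDir`) and the secondary-index layout are illustrative assumptions, not the actual `engine/slab/dirpage` API, which also tracks epochs.

```go
package dirpagesketch

import "sync"

// DirectoryKey identifies a directory listing as a whole; invalidation is
// scoped to this key, so any mutation in the directory drops every page.
type DirectoryKey struct {
	Mount  string
	Parent uint64
}

// PageKey identifies one cached ReadDirPlus page. Requests with different
// pagination windows must never share a cache slot.
type PageKey struct {
	Dir        DirectoryKey
	StartAfter string
	Limit      uint32
}

type cache struct {
	mu    sync.Mutex
	pages map[PageKey][]byte
	byDir map[DirectoryKey]map[PageKey]struct{} // secondary index for invalidation
}

func newCache() *cache {
	return &cache{
		pages: make(map[PageKey][]byte),
		byDir: make(map[DirectoryKey]map[PageKey]struct{}),
	}
}

func (c *cache) put(k PageKey, blob []byte) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.pages[k] = blob
	set := c.byDir[k.Dir]
	if set == nil {
		set = make(map[PageKey]struct{})
		c.byDir[k.Dir] = set
	}
	set[k] = struct{}{}
}

// invalidate drops only the pages belonging to one directory, without
// scanning the whole page map.
func (c *cache) invalidate(d DirectoryKey) {
	c.mu.Lock()
	defer c.mu.Unlock()
	for k := range c.byDir[d] {
		delete(c.pages, k)
	}
	delete(c.byDir, d)
}
```

With this shape, two ReadDirPlus calls that differ only in StartAfter or Limit occupy separate slots, while a single invalidate on the directory still clears both.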
Reviewed changes
Copilot reviewed 21 out of 21 changed files in this pull request and generated 2 comments.
Summary per file:
| File | Description |
|---|---|
| scripts/chaos/docker_fsmeta_history.sh | Adds a toggle to allow indeterminate error handling for Docker chaos history runs. |
| fsmeta/server/service_test.go | Adds a gRPC error-mapping test case for retry-exhausted errors. |
| fsmeta/server/errors.go | Maps KindRetryExhausted to codes.Unavailable. |
| fsmeta/integration/history_contract_test.go | Updates concurrent history runner invocation to pass the new options struct. |
| fsmeta/exec/runner.go | Fixes dirpage cache identity to include pagination; invalidation now uses a directory-scoped key. |
| fsmeta/exec/runner_test.go | Strengthens cache-hit assertions and adds pagination-keying coverage for ReadDirPlus caching. |
| fsmeta/contract/model.go | Updates operation string formatting to include StartAfter/Limit for read operations. |
| fsmeta/contract/history.go | Adds HistoryOptions, bounded multi-candidate linearization, and optional indeterminate-error handling. |
| fsmeta/contract/history_test.go | Updates concurrent history runner invocation to pass HistoryOptions. |
| engine/slab/dirpage/codec.go | Extends the on-disk page format and key identity to include StartAfter/Limit; adds DirectoryKey abstraction. |
| engine/slab/dirpage/cache.go | Changes epochs to be directory-scoped and ensures lookups validate pagination identity. |
| engine/slab/dirpage/cache_test.go | Adds tests for pagination keying and directory-wide invalidation behavior. |
| engine/slab/dirpage/cache_bench_test.go | Updates invalidate benchmark to match directory-scoped invalidation. |
| coordinator/server/transition_service.go | Adjusts AssessRootEvent to match new lifecycle assessment return signature. |
| coordinator/server/service_test.go | Ensures fake storage snapshot cloning includes additional rooted state (subtrees/quotas/snapshot epochs). |
| coordinator/server/service_gateway.go | Validates storage-backed root events against the storage snapshot used for lifecycle assessment. |
| coordinator/catalog/cluster.go | Exposes a public ValidateRootEventAgainstSnapshot helper for storage-backed validation. |
| cmd/nokv-fsmeta-soak/main.go | Updates concurrent history runner invocation to pass HistoryOptions. |
| cmd/nokv-fsmeta-history/main.go | Adds allow-indeterminate flag, adds scope creation retry barrier, and isolates per-seed inode IDs. |
| cmd/nokv-fsmeta-history/main_test.go | Updates tests to match per-seed inode scoping and new scope creation operation helper. |
| .dockerignore | Ignores benchmark/ycsb/data in Docker builds. |
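For the coordinator changes in the table above, the intended flow is roughly the hedged sketch below: take one durable storage snapshot and feed it to both root-event validation and lifecycle assessment, so validation can never pass against a stale in-memory cache. Apart from the `ValidateRootEventAgainstSnapshot` name, every identifier here (`Store`, `RootEvent`, `assessLifecycle`) is a placeholder, not the real coordinator API.

```go
package coordsketch

import "context"

// Placeholder types standing in for the real coordinator abstractions.
type Snapshot struct{}
type RootEvent struct{}

type Store interface {
	Snapshot(ctx context.Context) (*Snapshot, error)
}

// Stand-ins for catalog.ValidateRootEventAgainstSnapshot and the lifecycle
// assessment; their real signatures may differ.
func validateRootEventAgainstSnapshot(s *Snapshot, ev RootEvent) error { return nil }
func assessLifecycle(s *Snapshot, ev RootEvent) error                  { return nil }

func handleRootEvent(ctx context.Context, store Store, ev RootEvent) error {
	// One authoritative snapshot for the whole request.
	snap, err := store.Snapshot(ctx)
	if err != nil {
		return err
	}
	// Validation reads the same state the assessment will see, so a
	// concurrent mutation cannot make the two steps disagree.
	if err := validateRootEventAgainstSnapshot(snap, ev); err != nil {
		return err
	}
	return assessLifecycle(snap, ev)
}
```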
📝 Walkthrough

The PR introduces pagination-aware dirpage caching, multi-candidate filesystem history execution for improved test coverage, storage-backed root event validation, and per-seed scope isolation for history testing, alongside error mapping and Docker build optimization.

Changes
- Pagination-aware Dirpage Cache
- Storage-backed Root Event Validation
- Multi-candidate Filesystem History Execution
- Error Mapping for Retry Exhaustion (sketched below)
- Docker Build Optimization
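A rough sketch of the retry-exhaustion mapping listed above: only the `KindRetryExhausted` to `codes.Unavailable` mapping comes from the PR; the `Kind` enum and the surrounding switch are assumptions about what `fsmeta/server/errors.go` looks like.

```go
package errsketch

import (
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// Kind classifies fsmeta errors; only KindRetryExhausted is taken from the
// PR, the rest of this enum is illustrative.
type Kind int

const (
	KindUnknown Kind = iota
	KindRetryExhausted
)

func toStatusError(kind Kind, err error) error {
	switch kind {
	case KindRetryExhausted:
		// Retries were exhausted upstream: surface as Unavailable so
		// clients treat it as a transient availability failure.
		return status.Error(codes.Unavailable, err.Error())
	default:
		return status.Error(codes.Internal, err.Error())
	}
}
```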
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Executor
    participant CandidateSet as Candidate Models
    participant Linearizer
    participant Barrier as History Barrier
    Executor->>Executor: Initialize candidates = [base_model]
    loop For each batch of operations
        Executor->>Executor: Execute ops once (real execution)
        Executor->>Linearizer: linearizeCandidateBatch(ops, candidates, constraints)
        loop For each candidate model
            Linearizer->>Linearizer: Try op against candidate
            alt Success
                Linearizer->>CandidateSet: Advance candidate to next model
            else Indeterminate Error (if allowed)
                Linearizer->>CandidateSet: Branch candidate (produce alternatives)
            else Hard Error
                Linearizer->>CandidateSet: Discard candidate
            end
        end
        Linearizer->>CandidateSet: Deduplicate via modelFingerprint
        Linearizer->>CandidateSet: Truncate to MaxCandidates
        Linearizer->>Executor: Return next candidate set
        Executor->>CandidateSet: Replace working models
        Executor->>Executor: Log batch with first_candidate_order
    end
    alt Observed History Barrier
        Barrier->>Executor: Report observed result
        Executor->>Barrier: runSequentialObservedCandidates(candidates, barrier)
        loop Advance all candidates
            Barrier->>CandidateSet: Try barrier against each candidate
        end
        alt Any candidate matches
            Barrier->>Executor: Return matching candidates
        else No match
            Barrier->>Executor: Error: no candidate accepts barrier
        end
    end
    Executor->>Executor: Return final history (first candidate linearization)
```
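The dedupe-and-truncate step in the diagram can be sketched as below. `model`, `tryOp`, and the string fingerprint are toy stand-ins for the real `fsmeta/contract` types; the point is that only unique successor states count against the candidate budget, so duplicates cannot crowd out genuinely different serializations.

```go
package histsketch

// model is a toy stand-in for one candidate filesystem state; its
// fingerprint must be identical for equivalent states.
type model struct {
	fingerprint string
}

// tryOp returns zero or more successor states for one operation: one on
// success, several when an indeterminate error lets the op be either
// applied or skipped, none when the candidate is contradicted.
type tryOp func(m model) []model

// advanceCandidates applies one operation to every surviving candidate,
// dedupes successors by fingerprint, and truncates to maxCandidates.
func advanceCandidates(cands []model, apply tryOp, maxCandidates int) []model {
	seen := make(map[string]bool)
	next := make([]model, 0, maxCandidates)
	for _, c := range cands {
		for _, succ := range apply(c) {
			if seen[succ.fingerprint] {
				continue // duplicate state: don't spend budget on it
			}
			seen[succ.fingerprint] = true
			next = append(next, succ)
			if len(next) >= maxCandidates {
				return next
			}
		}
	}
	return next
}
```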
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
fsmeta/exec/runner.go (1)
565-578: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win
Don't publish a partial cache page when one entry fails to encode.
Skipping a single pair here turns a valid `ReadDirPlus` result into a truncated cached listing, and later cache hits will keep serving the shorter page as if it were complete. If any entry cannot be encoded, abort materialization for the whole page instead of dropping that entry.
Possible fix:
```diff
-func encodeDirPageEntries(pairs []fsmeta.DentryAttrPair) []dirpage.Entry {
+func encodeDirPageEntries(pairs []fsmeta.DentryAttrPair) ([]dirpage.Entry, error) {
 	out := make([]dirpage.Entry, 0, len(pairs))
 	for _, p := range pairs {
 		blob, err := fsmeta.EncodeInodeValue(p.Inode)
 		if err != nil {
-			continue
+			return nil, err
 		}
 		out = append(out, dirpage.Entry{
 			Name:     []byte(p.Dentry.Name),
 			Inode:    uint64(p.Dentry.Inode),
 			AttrBlob: blob,
 		})
 	}
-	return out
+	return out, nil
 }
```

And in `ReadDirPlus`, only call `MaterializeAsync` when that encoding step succeeds.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@fsmeta/exec/runner.go` around lines 565 - 578, The encoder currently drops entries on EncodeInodeValue errors which produces partial cached pages; change encodeDirPageEntries to fail fast and propagate the error (e.g., change signature to return ([]dirpage.Entry, error)), stopping and returning an error if any fsmeta.EncodeInodeValue fails instead of continuing, and update callers (notably ReadDirPlus and any MaterializeAsync invocation) to check that error and only call MaterializeAsync when encodeDirPageEntries succeeds.

fsmeta/contract/history.go (1)
64-66: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win
Honor `AllowIndeterminateErrors` when `batchSize == 1`.
This fast path skips the new candidate-based logic entirely. A run like `--allow-indeterminate-errors --batch 1` still falls back to the strict `Run(...)` path, so retryable `Unavailable`/`Aborted` results will be reported as mismatches even though the option says to treat them as indeterminate.
Proposed fix:
```diff
-	if batchSize <= 1 {
+	if batchSize <= 1 && !opts.AllowIndeterminateErrors {
 		return Run(ctx, exec, model, ops)
 	}
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@fsmeta/contract/history.go` around lines 64 - 66, The fast-path "if batchSize <= 1 { return Run(ctx, exec, model, ops) }" bypasses the candidate-based logic and ignores the AllowIndeterminateErrors option; change the condition so the shortcut only happens when AllowIndeterminateErrors is false. In other words, only call Run(...) immediately when batchSize <= 1 AND AllowIndeterminateErrors is false; otherwise fall through into the candidate-based handling so AllowIndeterminateErrors is honored. Ensure you reference the AllowIndeterminateErrors flag (options struct) alongside batchSize and keep Run(ctx, exec, model, ops) as the fallback.

cmd/nokv-fsmeta-history/main.go (1)
132-145: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win
Remap parent inode references for nested operations too.
`op.Inode` is shifted into the per-seed namespace, but `Parent`, `FromParent`, and `ToParent` are only rewritten when they equal `RootInode`. After the script creates a nested directory, later ops that target that directory still use the old generated inode id, so external runs can address the wrong parent and break seed isolation.
Proposed fix:
```diff
 	op.Mount = mount
 	// The generated inodes are unique only within one in-memory script.
 	// Docker chaos runs multiple seeds against the same mounted system,
 	// so external histories must shift inode ids into the per-seed scope
 	// to avoid cross-seed namespace pollution.
 	op.Inode = scopeGeneratedInode(inodeBase, op.Inode)
 	if op.Parent == fsmeta.RootInode {
 		op.Parent = scopeInode
+	} else {
+		op.Parent = scopeGeneratedInode(inodeBase, op.Parent)
 	}
 	if op.FromParent == fsmeta.RootInode {
 		op.FromParent = scopeInode
+	} else {
+		op.FromParent = scopeGeneratedInode(inodeBase, op.FromParent)
 	}
 	if op.ToParent == fsmeta.RootInode {
 		op.ToParent = scopeInode
+	} else {
+		op.ToParent = scopeGeneratedInode(inodeBase, op.ToParent)
 	}
 	out = append(out, op)
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@cmd/nokv-fsmeta-history/main.go` around lines 132 - 145, The code shifts op.Inode into the per-seed namespace but only rewrites Parent/FromParent/ToParent when they equal fsmeta.RootInode which leaves nested targets unscoped; update the block that sets op.Inode (and uses scopeGeneratedInode and scopeInode) to also remap op.Parent, op.FromParent and op.ToParent by calling scopeGeneratedInode(inodeBase, <field>) whenever the field is not fsmeta.RootInode (or simply always) and keep the existing RootInode->scopeInode substitution for those fields; use the same helper scopeGeneratedInode and fsmeta.RootInode/scopeInode symbols so nested directory targets use the per-seed scoped inode IDs.
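One plausible shape for the scoping helper referenced above; this is a guess at its implementation, since the diff only shows call sites, and `fsmeta.Inode` arithmetic here is an assumption.

```go
// scopeGeneratedInode shifts a script-local inode id into a disjoint
// per-seed range so two seeds replayed against the same mount can never
// collide. The real helper in cmd/nokv-fsmeta-history may differ.
func scopeGeneratedInode(inodeBase, ino fsmeta.Inode) fsmeta.Inode {
	return inodeBase + ino
}
```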
🧹 Nitpick comments (1)
engine/slab/dirpage/cache_bench_test.go (1)
121-132: ⚡ Quick win
This benchmark no longer measures the expensive part of invalidation.
The timer starts before any pages are materialized, so the loop only benchmarks epoch bookkeeping on an empty index. That misses the new page-deletion sweep described in the updated comment. Preload at least one cached page per directory before `ResetTimer()` if you want numbers for the current implementation.
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@engine/slab/dirpage/cache_bench_test.go` around lines 121 - 132, BenchmarkDirPageInvalidate currently starts the timer before any pages are materialized so it only measures lightweight epoch bookkeeping; before calling b.ResetTimer() preload at least one cached page per directory by using newBenchCache to create and populate entries for the PageKey values (e.g. materialize/cache a DirPage for each keys[i].Directory()) so that c.Invalidate(key.Directory()) triggers the full page-deletion sweep; ensure the preloading loop runs before b.ResetTimer()/b.ReportAllocs() and uses the same PageKey slice used in the benchmark.
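A hedged sketch of the suggested fix: the helper names (`newBenchCache`, `materialize`, `benchEntries`) are assumptions about what `cache_bench_test.go` provides, and the stop/start-timer repopulation keeps every iteration sweeping a real page at the cost of some timer overhead.

```go
func BenchmarkDirPageInvalidate(b *testing.B) {
	c, keys := newBenchCache(b)
	// Preload one cached page per directory so the timed loop pays for the
	// page-deletion sweep, not just epoch bookkeeping on an empty index.
	for _, k := range keys {
		c.materialize(k, benchEntries)
	}
	b.ReportAllocs()
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		key := keys[i%len(keys)]
		c.Invalidate(key.Directory())
		// Restore the page off the clock so every iteration sweeps one.
		b.StopTimer()
		c.materialize(key, benchEntries)
		b.StartTimer()
	}
}
```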
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@engine/slab/dirpage/cache.go`:
- Around line 269-277: The Invalidate implementation on Cache currently scans
the whole c.pages map under c.pageMu to remove pages for a single DirectoryKey,
causing O(total cached pages) cost and global lock contention; add a secondary
index (e.g., map[DirectoryKey]map[PageKey]struct{} or
map[DirectoryKey][]PageKey) maintained alongside c.pages so that
Cache.Invalidate(key DirectoryKey) can look up only that directory’s PageKeys
and remove them without scanning all entries, update mutations (e.g., in methods
that insert/remove pages/MaterializeAsync/Lookup) to keep both the primary
c.pages and the new directoryIndex consistent under appropriate locking (or use
a fine-grained per-directory lock) and ensure the epoch/next logic using
epochFor/ep.Add(1) remains applied to the targeted PageKeys only.
In `@engine/slab/dirpage/codec.go`:
- Around line 38-41: The on-disk dirpage wire format changed but dirPageVersion
remains 1, so update the code to enforce an upgrade boundary: either increment
the constant dirPageVersion to a new value (e.g. 2) to reflect the incompatible
layout, or add explicit invalidation logic in reloadSegment that reads the
segment header/version and drops/renames the segment when it detects an older
version before attempting page decoding; modify the code paths that rely on
dirPageVersion and the segment header check in reloadSegment so segments with
the previous format are never scanned/partially decoded.
In `@fsmeta/contract/history.go`:
- Around line 190-203: linearizeCandidateBatch currently stops collecting once
next.len >= maxCandidates before deduping, which lets duplicate fingerprints
consume the budget; instead track unique candidate fingerprints while collecting
by maintaining a seen map (keyed on the historyCandidate fingerprint/ID) and
only append a historyCandidate when its fingerprint is not in seen, incrementing
the unique count until reaching maxCandidates; use the same seen-based
collection fix for the other collector block mentioned (the analogous code
around lines 305-326) and you can optionally reuse a shared seen map to avoid
repeated fingerprinting across calls to linearizeBatchCandidates and the second
collector.
In `@fsmeta/exec/runner.go`:
- Around line 503-505: The cache-hit path currently returns whatever
decodeDirPageEntries(pageKey, entries) produces (including errors), which lets
corrupt cached dirpage decode failures bubble up; change the logic in the
dirPages.Lookup handling so you call decodeDirPageEntries(pageKey, entries) and
only return the cached result when decode succeeds (no error), otherwise treat
it as a cache miss and continue down the authoritative scan path (i.e., do not
return the error from decodeDirPageEntries; fall through to the
ReadDirPlus/runner scan). Ensure you reference the existing
dirPages.Lookup(pageKey, frontier) branch and the decodeDirPageEntries call when
implementing this conditional return behavior.
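Two of the inline items above lend themselves to short sketches. For the codec.go version gate, one hedged shape is below; only `dirPageVersion` and `reloadSegment` come from the comment, while the header fields and `discard` method are assumptions.

```go
const dirPageVersion = 2 // bumped: StartAfter/Limit are now part of page identity

// reloadSegment sketch: refuse to decode segments written under the old
// layout instead of partially scanning them.
func (s *segment) reloadSegment() error {
	hdr, err := s.readHeader()
	if err != nil {
		return err
	}
	if hdr.Version < dirPageVersion {
		return s.discard() // drop or rename the stale segment
	}
	return s.scanPages()
}
```

And for the runner.go cache-hit path, the decode-then-fallback the review asks for might look like this; the receiver, `frontier` parameter, and `scanDir` fallback are assumptions, while `Lookup` and `decodeDirPageEntries` follow the comment's names.

```go
func (r *runner) readDirPage(pageKey PageKey, frontier uint64) ([]DentryAttrPair, error) {
	if entries, ok := r.dirPages.Lookup(pageKey, frontier); ok {
		if pairs, err := decodeDirPageEntries(pageKey, entries); err == nil {
			return pairs, nil // healthy cached page: serve it
		}
		// Corrupt cached bytes: treat as a miss and fall through to the
		// authoritative scan rather than surfacing the decode error.
	}
	return r.scanDir(pageKey, frontier)
}
```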
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 3eded6c8-51e1-4c02-9d76-9453f92a9d3f
📒 Files selected for processing (21)
- .dockerignore
- cmd/nokv-fsmeta-history/main.go
- cmd/nokv-fsmeta-history/main_test.go
- cmd/nokv-fsmeta-soak/main.go
- coordinator/catalog/cluster.go
- coordinator/server/service_gateway.go
- coordinator/server/service_test.go
- coordinator/server/transition_service.go
- engine/slab/dirpage/cache.go
- engine/slab/dirpage/cache_bench_test.go
- engine/slab/dirpage/cache_test.go
- engine/slab/dirpage/codec.go
- fsmeta/contract/history.go
- fsmeta/contract/history_test.go
- fsmeta/contract/model.go
- fsmeta/exec/runner.go
- fsmeta/exec/runner_test.go
- fsmeta/integration/history_contract_test.go
- fsmeta/server/errors.go
- fsmeta/server/service_test.go
- scripts/chaos/docker_fsmeta_history.sh
Summary
Tests
- `go test ./coordinator/server ./coordinator/catalog ./fsmeta/contract ./fsmeta/client ./cmd/nokv-fsmeta-history ./cmd/nokv-fsmeta-soak -count=1`
- `NOKV_CONTRACT_HISTORY_SEEDS=64 NOKV_CONTRACT_HISTORY_STEPS=240 NOKV_CONTRACT_HISTORY_BATCH=3 go test ./fsmeta/contract -run TestFSMetaExecutorConcurrentHistoryContract -count=1`
- `go test ./engine/slab/dirpage ./fsmeta/exec -count=1`
- `go test ./fsmeta/... ./engine/slab/dirpage ./cmd/nokv-fsmeta-history -count=1`
- `NOKV_DOCKER_CHAOS_SEEDS=2 NOKV_DOCKER_CHAOS_STEPS=32 NOKV_DOCKER_CHAOS_BATCH=3 NOKV_DOCKER_CHAOS_TIMEOUT=120s NOKV_DOCKER_CHAOS_INTERVAL=2 NOKV_DOCKER_CHAOS_DOWN=1 ./scripts/chaos/docker_fsmeta_history.sh`

Workflow Verification
957ea6c5

Summary by CodeRabbit
New Features
Bug Fixes
Tests
Chores