chore: initial statistics subsystem #5

Open
EclesioMeloJunior wants to merge 209 commits into master from statistics-collector

Conversation

@EclesioMeloJunior
Member

Description

  • Creates the Statistics Collector subsystem
  • Defines the subsystem messages
/// Messages sent to the Statistics Collector subsystem.
#[derive(Debug)]
pub enum StatisticsCollectorMessage {
	/// An approval vote was received for a candidate.
	ApprovalVoting(Hash, CandidateHash, (ValidatorIndex, DelayTranche)),

	/// A candidate received enough approvals and is now approved.
	CandidateApproved(CandidateHash, Hash),

	/// Validators that did not share their votes in time.
	ObservedNoShows(SessionIndex, Vec<ValidatorIndex>),

	/// All of a relay block's candidates are approved, so the relay block is approved.
	RelayBlockApproved(Hash),
}
  • Updates approval-voting to send data to the Statistics Collector subsystem (see the sketch below)
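
For illustration, a minimal self-contained sketch of how approval-voting hands a vote to the collector; the type aliases are simplified stand-ins for the real polkadot-primitives types, and the overseer send is only indicated in a comment:

```rust
// Stand-in type aliases; the real subsystem uses polkadot-primitives types.
type Hash = [u8; 32];
type CandidateHash = [u8; 32];
type ValidatorIndex = u32;
type DelayTranche = u32;
type SessionIndex = u32;

#[derive(Debug)]
pub enum StatisticsCollectorMessage {
    ApprovalVoting(Hash, CandidateHash, (ValidatorIndex, DelayTranche)),
    CandidateApproved(CandidateHash, Hash),
    ObservedNoShows(SessionIndex, Vec<ValidatorIndex>),
    RelayBlockApproved(Hash),
}

fn main() {
    // approval-voting would hand this to the overseer, e.g. via
    // `ctx.send_message(msg)`; here we only construct it.
    let msg = StatisticsCollectorMessage::ApprovalVoting(
        [0u8; 32],
        [1u8; 32],
        (7, 2), // validator index 7 voted in tranche 2
    );
    println!("{msg:?}");
}
```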

Next Steps

  • Collect approval distribution metrics (uploads & downloads)
    • Define subsystem messages
    • Update approval distribution to send messages
  • Publish Prometheus metrics
  • Calculate approval tallies

@CLAassistant

CLA assistant check
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you all sign our Contributor License Agreement before we can accept your contribution.
1 out of 2 committers have signed the CLA.

✅ EclesioMeloJunior
❌ EgorPopelyaev

- Inject Authority Discovery into Availability Distribution
- When responding to a chunk request, get the peer id and retrieve its authority ids
- In the Collector, store the session info, which enables looking up the validator index from authority ids (see the sketch below)
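
A minimal sketch of the lookup this enables, with stand-in types in place of the real session/authority-discovery primitives:

```rust
// Stand-in for the real authority discovery key (an sr25519 public key).
type AuthorityDiscoveryId = String;
type ValidatorIndex = u32;

struct SessionInfo {
    // Discovery key of each validator, ordered by validator index.
    discovery_keys: Vec<AuthorityDiscoveryId>,
}

impl SessionInfo {
    // Resolve a responding peer's authority ids to a validator index.
    fn validator_index(&self, peer_ids: &[AuthorityDiscoveryId]) -> Option<ValidatorIndex> {
        self.discovery_keys
            .iter()
            .position(|key| peer_ids.contains(key))
            .map(|pos| pos as ValidatorIndex)
    }
}

fn main() {
    let session = SessionInfo {
        discovery_keys: vec!["alice".into(), "bob".into()],
    };
    assert_eq!(session.validator_index(&["bob".into()]), Some(1));
}
```
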
EclesioMeloJunior and others added 30 commits February 9, 2026 09:31
…aritytech#10920)

Fixes paritytech/contract-issues#213, where storage deposit refunds failed in nested/reentrant calls.

Problem
Storage refunds were calculated incorrectly when a contract allocated
storage, then performed a nested call that cleared it. Pending storage
changes lived only in the parent FrameMeter, so child frames could not
see them and refunds were skipped.

Solution
Apply pending storage deposit changes to a cloned ContractInfo before
creating nested frames. This makes the parent’s storage state visible to
child frames during refund calculation.

Implementation
- Added apply_pending_changes_to_contract() to apply pending diffs to
ContractInfo
- Added apply_pending_storage_changes() wrapper on FrameMeter
- Applied pending storage changes before nested frame creation in
exec.rs (3 locations)
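
A minimal self-contained sketch of the idea, using simplified stand-ins rather than the pallet's real `ContractInfo`/`FrameMeter`:

```rust
// Simplified stand-in: the real ContractInfo tracks much more state.
#[derive(Clone)]
struct ContractInfo {
    storage_bytes: u64,
}

struct FrameMeter {
    // Net storage change accumulated by this frame but not yet applied.
    pending_bytes_delta: i64,
}

impl FrameMeter {
    // Apply the parent's pending storage changes to a cloned
    // ContractInfo before creating a nested frame, so the child
    // computes refunds against the parent's up-to-date view.
    fn apply_pending_storage_changes(&self, info: &mut ContractInfo) {
        info.storage_bytes =
            (info.storage_bytes as i64 + self.pending_bytes_delta) as u64;
    }
}

fn main() {
    let stored_info = ContractInfo { storage_bytes: 256 };
    let parent = FrameMeter { pending_bytes_delta: 128 };
    let mut child_view = stored_info.clone();
    parent.apply_pending_storage_changes(&mut child_view);
    assert_eq!(child_view.storage_bytes, 384); // child now sees the allocation
}
```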

---------

Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: pgherveou <pgherveou@gmail.com>
…ck (paritytech#11031)

After `AuraDigestProvider` was introduced, emulated integration tests
for parachains with `slot_duration != relay_slot_duration` (e.g. 12s
Polkadot/Kusama chains) panic because `FixedVelocityConsensusHook`
derives a parachain slot that doesn't match `CurrentSlot`.
Fix by advancing the relay block number by `slot_duration /
RELAY_CHAIN_SLOT_DURATION_MILLIS` per parachain block (instead of always
+1), and computing the aura digest slot inline using both durations.
This removes the `DigestProvider` associated type from the `Parachain`
trait and the `AuraDigestProvider` struct — the emulator now handles the
digest automatically.
Downstream users must remove `DigestProvider: AuraDigestProvider,` from
their `decl_test_parachains!` invocations.
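
A small sketch of the relay-advance arithmetic, assuming the usual 6 s relay slot:

```rust
// Illustrative value; the real constant lives in cumulus primitives.
const RELAY_CHAIN_SLOT_DURATION_MILLIS: u64 = 6_000;

// How far the emulator advances the relay block number per parachain block.
fn relay_blocks_per_para_block(slot_duration_millis: u64) -> u64 {
    slot_duration_millis / RELAY_CHAIN_SLOT_DURATION_MILLIS
}

fn main() {
    // A 12 s parachain now advances the relay block number by 2 ...
    assert_eq!(relay_blocks_per_para_block(12_000), 2);
    // ... while a 6 s parachain keeps the previous +1 behaviour.
    assert_eq!(relay_blocks_per_para_block(6_000), 1);
}
```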

---------

Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
# Description

Add `DecodeWithMemTracking` derive to `CompactProof` in
`substrate/primitives/trie/src/storage_proof.rs`.

`StorageProof` already derived `DecodeWithMemTracking` but
`CompactProof` in the same file was missed.

## Integration

No integration changes required for downstream projects. `CompactProof`
now implements `DecodeWithMemTracking`, which is a strictly additive
trait implementation. Existing code using `CompactProof` will continue
to work as before.

## Review Notes

Single-line change adding `DecodeWithMemTracking` to the derive macro
list on `CompactProof`:

```diff
-#[derive(Debug, PartialEq, Eq, Clone, Encode, Decode, TypeInfo)]
+#[derive(Debug, PartialEq, Eq, Clone, Encode, Decode, DecodeWithMemTracking, TypeInfo)]
 pub struct CompactProof {
     pub encoded_nodes: Vec<Vec<u8>>,
 }
```

`CompactProof` only contains `Vec<Vec<u8>>`, which already implements
`DecodeWithMemTracking`, so the derive works without any manual
implementation.

---------

Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Let's give the link-checker job some new life.

Sadly our blog posts are not available anymore (or at least I could not find them), so I removed all references to them. I considered linking to the Web Archive, but it's silly to first remove our blog and then link to an archive.
### Context

When verifying Ethereum-to-Polkadot transfer messages, the key field in `receipt_proof` is not used. Remove it as a cleanup and update the tests accordingly.

---------

Co-authored-by: Branislav Kontur <bkontur@gmail.com>
…10794)

Changes:
- Ensure all benchmarks run for at least 10 seconds. Configurable with
`--min-duration <s>`
- Turn off runtime logging in bench bot to reduce spam log output
- Reduce DB repetition to 1 since PoV metering must be deterministic

Example: the System pallet's `set_heap_pages` benchmark, which took less than 10 ms before:
```pre
2026-01-13T21:36:10.687286Z [ 22 % ] Starting benchmark: frame_system::set_heap_pages    
2026-01-13T21:36:10.688437Z [ 33 % ] Starting benchmark: frame_system::set_code    
```

Now takes 10 seconds:
```pre
2026-01-13T21:37:31.392981Z [ 22 % ] Starting benchmark: frame_system::set_heap_pages    
2026-01-13T21:37:32.271275Z [ 22 % ] Running  benchmark: frame_system::set_heap_pages (overtime)    
2026-01-13T21:37:37.272099Z [ 22 % ] Running  benchmark: frame_system::set_heap_pages (overtime)    
2026-01-13T21:37:41.393107Z [ 33 % ] Starting benchmark: frame_system::set_code    
```

---------

Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Follow-up of paritytech#11038

Even though the job passed in my last PR, I missed this broken link. So
here we go again.
…aritytech#11053)

The latest version of tracing-subscriber currently doesn't handle ANSI colour codes correctly: tokio-rs/tracing#3378

So, the workaround for now is to pin it to `0.3.19`.


Closes: paritytech#11030

---------

Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
…ttHub fails on relay chain (paritytech#11055)

Emit `SessionKeysUpdateFailed` with the operation type and dispatch error, so that `set_keys`/`purge_keys` failures from AssetHub are observable on-chain.
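
A hedged sketch of what such an event could look like; the variant and field names here are assumptions based on this description, not the exact pallet code:

```rust
#[derive(Debug)]
enum SessionKeysOperation {
    SetKeys,
    PurgeKeys,
}

#[derive(Debug)]
enum Event {
    // Carries which operation failed and why, for on-chain observability.
    SessionKeysUpdateFailed {
        operation: SessionKeysOperation,
        // Stand-in for sp_runtime::DispatchError.
        error: String,
    },
}

fn main() {
    let event = Event::SessionKeysUpdateFailed {
        operation: SessionKeysOperation::SetKeys,
        error: "BadOrigin".into(),
    };
    println!("{event:?}");
}
```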

---------

Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
…d proofs via the relay state proof (paritytech#10678)

## Purpose

This pull request introduces a new runtime API and implements the full
feature pipeline for requesting additional relay-chain storage proofs in
lookahead collators. The API allows parachain runtimes to specify extra
top-level storage keys or child-trie data that must be included in the
relay-chain state proof. The collator collects these additional proofs
and merges them into the relay-chain state proof provided to the runtime
during block execution, enabling the runtime to later process custom
relay-chain data.

## Rationale 

Immediate application: the pubsub mechanism proposed in paritytech#9994.

This is a narrowed-down scope, extracted from PR paritytech#10679 for easier review.

Because the implementation exits early when the API returns the default (empty) request, it adds no significant overhead to current flows.

## What this PR adds
### Runtime API

- Introduces `KeyToIncludeInRelayProofApi`. (_Suggestions for better
naming are very welcome._)

- Adds supporting types `RelayProofRequest` and `RelayStorageKey`.

- Allows runtimes to declare which relay-chain storage entries must be
included in the relay state proof.

### Collator integration

- The lookahead collator calls the runtime API before block production.

- Requested relay-chain proofs are collected, batched, and merged in a
single operation.

- The additional proofs are merged into the existing relay-chain state
proof and passed to the runtime via parachain inherent data.

### Proof extraction

- `parachain-system` exposes an extraction method for processing these additional proofs.

- Uses a handler pattern:

  - `parachain-system` manages proof lifecycle and initial validation.

  - Application pallets consume proofs (data extraction or additional validation) by implementing `ProcessRelayProofKeys`.

- Keeps extra-proof processing logic out of `parachain-system`.

### About RelayStorageKey

`RelayStorageKey` is an enum with two variants:

- `Top`: a `Vec<u8>` representing a top-level relay-chain storage key.

- `Child`, which contains:

  - `storage_key`: an unprefixed identifier of the child trie root (the default `:child_storage:default:` prefix is applied automatically),

  - `key`: the specific key within that child trie.

On the client side, child trie access is performed via `ChildInfo::new_default(&storage_key)`.
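
A self-contained sketch of these two variants (the actual definitions in the PR may carry different derives and documentation):

```rust
pub enum RelayStorageKey {
    /// A top-level relay-chain storage key.
    Top(Vec<u8>),
    /// A key inside a default-prefix child trie; `storage_key` is the
    /// unprefixed child-trie identifier.
    Child {
        storage_key: Vec<u8>,
        key: Vec<u8>,
    },
}

fn main() {
    // Request one top-level key and one default-prefix child-trie key.
    let _request: Vec<RelayStorageKey> = vec![
        RelayStorageKey::Top(b"relay_key".to_vec()),
        RelayStorageKey::Child {
            storage_key: b"my_child_trie".to_vec(),
            key: b"entry".to_vec(),
        },
    ];
}
```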

Why `storage_key` instead of `ChildInfo`:

- `ChildInfo` from `sp-storage` does not implement `TypeInfo`, which
runtime APIs require.

- Adding `TypeInfo` to `sp-storage` (or introducing a wrapper to avoid
bloating a critical core component like `sp-storage`) would
significantly expand the scope of this PR.

As a result, the current design:

- Uses raw `storage_key` bytes.

- Is limited to child tries using the default prefix.

## Future improvements

- Full `ChildInfo` support if `TypeInfo` is added to `sp-storage`
(directly or via a wrapper), enabling arbitrary child-trie prefixes.

- Possible unification with `additional_relay_state_keys` for top-level
proofs, subject to careful analysis of semantics and backward
compatibility.

- Integration with additional collator implementations beyond lookahead
collators.

---------

Co-authored-by: Bastian Köcher <git@kchr.de>
This PR fixes block import during Warp sync, which was silently failing due to "Unknown parent" errors (a typical case during Warp sync); the `full_node_warp_sync` test was not detecting this failure.

Changes
- Relaxed verification for Warp-synced blocks:
Full verification is skipped for these blocks, on the assumption that they are part of the finalized chain and have already been verified using the provided warp sync proof.
- New `BlockOrigin` variants:
For improved clarity, two additional `BlockOrigin` items have been
introduced:
  - `WarpSync`
  - `GapSync`
- Gap sync improvements:
Warp synced blocks are now skipped during the gap sync block import
phase, which required improvements to gap handling when committing the
block import operation in the database.
- Enhanced testing:
The Warp sync zombienet test has been modified to more thoroughly assert
both warp and gap sync phases.
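
A simplified sketch of how the new origins can gate verification; the real `BlockOrigin` has more variants and the import pipeline is richer:

```rust
enum BlockOrigin {
    NetworkInitialSync,
    WarpSync, // new: blocks covered by the warp sync proof
    GapSync,  // new: blocks imported while filling the history gap
}

fn needs_full_verification(origin: &BlockOrigin) -> bool {
    // Warp-synced blocks are part of the finalized chain and already
    // verified by the warp sync proof, so full verification is skipped.
    !matches!(origin, BlockOrigin::WarpSync)
}

fn main() {
    assert!(!needs_full_verification(&BlockOrigin::WarpSync));
    assert!(needs_full_verification(&BlockOrigin::GapSync));
    assert!(needs_full_verification(&BlockOrigin::NetworkInitialSync));
}
```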

This PR builds on changes by @sistemd in paritytech#9678

---------

Co-authored-by: sistemd <enntheprogrammer@gmail.com>
Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
…a benchmark (paritytech#11037)

## Summary

Consolidates the three identical `get_name`, `get_symbol`, and
`get_decimals` benchmarks into a single `get_metadata` benchmark. This
addresses the follow-up from paritytech#10971 where it was noted that these
benchmarks perform the same operation (`Pallet::get_metadata()`).

## Changes

### Benchmarks
- **`substrate/frame/assets/src/benchmarking.rs`**
- Replaced `get_name`, `get_symbol`, `get_decimals` with single
`get_metadata` benchmark
- Updated verification to check all three metadata fields (name, symbol,
decimals)

### Weight Functions
- **`substrate/frame/assets/src/weights.rs`**
- Replaced `get_name()`, `get_symbol()`, `get_decimals()` with single
`get_metadata()` in `WeightInfo` trait
  - Updated implementations for `SubstrateWeight<T>` and `()`

### Precompile
- **`substrate/frame/assets/precompiles/src/lib.rs`**
- Updated `name()`, `symbol()`, and `decimals()` methods to all charge
`get_metadata()` weight
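
A self-contained sketch of the consolidation, with stand-in types and an illustrative weight value:

```rust
type Weight = u64;

trait WeightInfo {
    // Replaces get_name(), get_symbol(), and get_decimals().
    fn get_metadata() -> Weight;
}

struct SubstrateWeight;

impl WeightInfo for SubstrateWeight {
    fn get_metadata() -> Weight {
        1_000 // real values are produced by the benchmark run
    }
}

fn main() {
    // name(), symbol(), and decimals() each charge this same weight.
    let charged = SubstrateWeight::get_metadata();
    assert_eq!(charged, 1_000);
}
```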

### Cumulus Runtimes
Updated weight implementations in:
- `asset-hub-rococo`: `pallet_assets_foreign.rs`,
`pallet_assets_local.rs`, `pallet_assets_pool.rs`
- `asset-hub-westend`: `pallet_assets_foreign.rs`,
`pallet_assets_local.rs`, `pallet_assets_pool.rs`

## Rationale

All three original benchmarks were measuring the exact same operation -
a single metadata storage read. Consolidating them:
1. Reduces code duplication
2. Simplifies the `WeightInfo` trait
3. Accurately reflects that `name()`, `symbol()`, and `decimals()` have
identical costs

Closes follow-up from
paritytech#10971 (comment)

---------

Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
…ytech#7751) (paritytech#10917)

Implements persistent storage for the experimental collator protocol's
reputation database.

Changes:

- Adds `PersistentDb` wrapper that persists the in-memory reputation DB
to disk
  - Periodic persistence every 10 minutes (30s in test mode)
  - Immediate persistence on slashes and parachain deregistration
  - Loads existing state on startup with lookback for missed blocks
  
Implementation:

`PersistentDb` wraps the existing `Db` and adds persistence on top:

- All reputation logic (scoring, decay, LRU) stays in `Db`
- Persistence layer handles disk I/O and serialization
- Per-para data stored in parachains_db

Tests:

- `basic_persistence.rs`: Validates persistence across restarts and
startup lookback
- `pruning.rs`: Validates automatic cleanup on parachain deregistration
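
A self-contained sketch of this layering; the real `PersistentDb` writes to parachains_db, here the disk write is only modelled:

```rust
use std::time::{Duration, Instant};

struct Db; // stand-in: scoring, decay, and LRU all live here

impl Db {
    fn serialize(&self) -> Vec<u8> {
        Vec::new() // stand-in for the real encoding
    }
}

struct PersistentDb {
    inner: Db,
    last_flush: Instant,
    interval: Duration, // 10 minutes in production, 30 s in test mode
}

impl PersistentDb {
    // Periodic persistence: flush only when the interval has elapsed.
    fn maybe_flush(&mut self) {
        if self.last_flush.elapsed() >= self.interval {
            let _bytes = self.inner.serialize(); // disk write goes here
            self.last_flush = Instant::now();
        }
    }

    // Slashes (and parachain deregistration) persist immediately.
    fn on_slash(&mut self) {
        let _bytes = self.inner.serialize();
        self.last_flush = Instant::now();
    }
}

fn main() {
    let mut db = PersistentDb {
        inner: Db,
        last_flush: Instant::now(),
        interval: Duration::from_secs(600),
    };
    db.on_slash();
    db.maybe_flush(); // no-op: the interval has not elapsed yet
}
```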

---------

Signed-off-by: Alexandru Cihodaru <alexandru.cihodaru@parity.io>
Co-authored-by: alindima <alin@parity.io>
Co-authored-by: Tsvetomir Dimitrov <tsvetomir@parity.io>
Co-authored-by: Serban Iorga <serban@parity.io>
Co-authored-by: Serban Iorga <serban300@gmail.com>
Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
### Summary
This PR optimizes gap sync bandwidth usage by skipping body requests for
non-archive nodes. Bodies are unnecessary during gap sync when the node
doesn't maintain full block history, while archive nodes continue to
request bodies to preserve complete history.
This significantly reduces bandwidth consumption and database size for typical validator/full nodes.
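
A simplified sketch of the decision, with a stand-in pruning enum in place of the node's real configuration type:

```rust
enum BlocksPruning {
    KeepAll,         // archive node: keeps full block history
    KeepRecent(u32), // prunes bodies older than the given window
}

fn request_bodies_during_gap_sync(pruning: &BlocksPruning) -> bool {
    // Non-archive nodes prune old bodies anyway, so downloading them
    // during gap sync only wastes bandwidth and database space.
    matches!(pruning, BlocksPruning::KeepAll)
}

fn main() {
    assert!(request_bodies_during_gap_sync(&BlocksPruning::KeepAll));
    assert!(!request_bodies_during_gap_sync(&BlocksPruning::KeepRecent(256)));
}
```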

Additionally added some gap sync statistics for observability:
- Introduced `GapSyncStats` to track bandwidth usage: header bytes, body
bytes, justification bytes
- Logged on gap sync completion to provide visibility into bandwidth
savings

---------

Co-authored-by: sistemd <enntheprogrammer@gmail.com>
Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
## Summary
- Fix address tracking in delegatecall operations for callTracer

## Changes
- Update callTracer to correctly track addresses during delegatecall
operations

## Test plan
- Existing tests should pass
- Verify callTracer correctly reports addresses for delegatecall
operations

---------

Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Robert van Eerdewijk <robertvaneerdewijk@gmail.com>
## Summary

Preparatory cleanup PR extracted from the EIP-7702 branch to simplify
review.

- **Counter.sol uint64**: Change `uint256` to `uint64` in
Counter/NestedCounter fixtures, to avoid U256 conversion in tests.
- **Debug log**: Add debug log for `eth_transact` substrate tx hash
- **RLP fix**: Fix `Transaction7702Signed` decoder field order (removed
incorrect `gas_price` field at index 4, aligned with encoder)

---------

Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Fix a huge benchmark regression for storage-heavy extrinsics by enabling the jemalloc allocator via polkadot-jemalloc-shim for omni-bencher; it had been made optional in the scope of PR paritytech#10590.

This closes paritytech/trie#230.

Thanks @alexggh and @cheme for the help 🙇 

Tested against `runtime / main` and
[2.1.0](polkadot-fellows/runtimes#1065) as
described
[here](paritytech/trie#230 (comment)).
For the `usual` extrinsic `force_apply_min_commission`, which does massive storage allocation/deallocation in benchmark setup and then just 2 reads and 1 write in the benchmark extrinsic itself, the measured time goes down from milliseconds to microseconds.

The regression was introduced by paritytech#10590 `sc-client-db: Make jemalloc
optional`

```bash
runtimes git:(sigurpol-release-2_0_6) /home/paolo/github/polkadot-sdk/target/release/frame-omni-bencher v1 benchmark pallet --runtime ./target/release/wbuild/asset-hub-polkadot-runtime/asset_hub_polkadot_runtime.compact.compressed.wasm --pallet pallet_staking_async --extrinsic "force_apply_min_commission" --steps 2 --repeat 1
2026-02-13T15:06:30.145367Z  INFO frame::benchmark::pallet: Initialized runtime log filter to 'INFO'
2026-02-13T15:06:31.784936Z  INFO pallet_collator_selection::pallet: assembling new collators for new session 0 at #0
2026-02-13T15:06:31.784966Z  INFO pallet_collator_selection::pallet: assembling new collators for new session 1 at #0
2026-02-13T15:08:29.701636Z  INFO frame::benchmark::pallet: [  0 % ] Starting benchmark: pallet_staking_async::force_apply_min_commission
2026-02-13T15:08:35.130403Z  INFO frame::benchmark::pallet: [  0 % ] Running  benchmark: pallet_staking_async::force_apply_min_commission (overtime)
Pallet: "pallet_staking_async", Extrinsic: "force_apply_min_commission", Lowest values: [], Highest values: [], Steps: 2, Repeat: 1
Raw Storage Info
========
Storage: `Staking::MinCommission` (r:1 w:0)
Proof: `Staking::MinCommission` (`max_values`: Some(1), `max_size`: Some(4), added: 499, mode: `MaxEncodedLen`)
Storage: `Staking::Validators` (r:1 w:1)
Proof: `Staking::Validators` (`max_values`: None, `max_size`: Some(45), added: 2520, mode: `MaxEncodedLen`)

Median Slopes Analysis
========
-- Extrinsic Time --

Model:
Time ~=    50.31
              µs

Reads = 2
Writes = 1
Recorded proof Size = 564

Min Squares Analysis
========
-- Extrinsic Time --

Model:
Time ~=    50.31
              µs

Reads = 2
Writes = 1
Recorded proof Size = 564
```

---------

Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Bastian Köcher <git@kchr.de>
…perations (paritytech#10384)

Introduce "ImbalanceAccounting" traits for dynamic dispatch management
of imbalances. These are helper traits to be used for generic Imbalance,
helpful for tracking multiple concrete types of `Imbalance` using
dynamic dispatch of these traits.
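
A self-contained sketch of the dynamic-dispatch idea; trait and type names are stand-ins, not the exact API introduced here:

```rust
trait ImbalanceAccounting {
    fn amount(&self) -> u128;
    fn saturating_add(&mut self, amount: u128);
}

// One concrete imbalance kind; others (per asset/pallet) implement
// the same trait and can share a register.
struct NativeCredit(u128);

impl ImbalanceAccounting for NativeCredit {
    fn amount(&self) -> u128 {
        self.0
    }
    fn saturating_add(&mut self, amount: u128) {
        self.0 = self.0.saturating_add(amount);
    }
}

fn main() {
    // The holding register can track many concrete imbalance types
    // uniformly behind the object-safe trait.
    let mut holding: Vec<Box<dyn ImbalanceAccounting>> =
        vec![Box::new(NativeCredit(10))];
    holding[0].saturating_add(5);
    assert_eq!(holding[0].amount(), 15);
}
```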

`xcm-executor` now tracks imbalances in holding.

Change the xcm executor implementation and inner types and adapters so
that it keeps track of imbalances across the stack.

Previously, XCM operations on fungible assets would break the respective fungibles' total issuance invariants by burning and minting them in different stages of the XCM processing pipeline.

This commit fixes that by keeping track of the "withdrawn" or
"deposited" fungible assets in holding and other XCM registers as
imbalances. The imbalances are tied to the underlying pallet managing
the asset so that they keep the assets' total issuance correctness
throughout the execution of the XCM program.

Imbalances in XCM registers are resolved by the underlying pallets
managing them whenever they move from XCM registers to other parts of
the stack (e.g. deposited to accounts, burned, etc).

XCM emulated tests now also verify total issuance before/after
transfers, swaps, traps, claims, etc to guarantee implementation
correctness.

---------

Signed-off-by: Adrian Catangiu <adrian@parity.io>
Signed-off-by: Alexandru Gheorghe <alexandru.gheorghe@parity.io>
Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Daniel Shiposha <dev@shiposha.com>
Co-authored-by: Francisco Aguirre <franciscoaguirreperez@gmail.com>
Co-authored-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
Co-authored-by: 0xRVE <robertvaneerdewijk@gmail.com>
Co-authored-by: Bastian Köcher <git@kchr.de>
Co-authored-by: Sebastian Kunert <skunert49@gmail.com>
Co-authored-by: Paolo La Camera <paolo@parity.io>
Co-authored-by: Ankan <10196091+Ank4n@users.noreply.github.com>
Co-authored-by: Alexander Samusev <41779041+alvicsam@users.noreply.github.com>
Co-authored-by: Manuel Mauro <manuel.mauro@protonmail.com>
Co-authored-by: Alexandre R. Baldé <alexandre.balde@parity.io>
Co-authored-by: Omar <OmarAbdulla7@hotmail.com>
Co-authored-by: BDevParity <bruno.devic@parity.io>
Co-authored-by: Egor_P <egor@parity.io>
Co-authored-by: Andrei Eres <eresav@me.com>
Co-authored-by: Klapeyron <11329616+Klapeyron@users.noreply.github.com>
Co-authored-by: Alexander Theißen <alex.theissen@me.com>
Co-authored-by: Alexandru Gheorghe <49718502+alexggh@users.noreply.github.com>
Co-authored-by: Xavier Lau <x@acg.box>
Co-authored-by: Dónal Murray <donal.murray@parity.io>
bump zombienet to latest `v0.4.5` (and subxt to `0.44.`)

---------

Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
…iate (paritytech#10919)

## Summary
- Add integration tests for revive runtime API
- Test Fibonacci contract deployment and execution via substrate APIs

## Changes
- Add test for Fibonacci contract call via runtime API
- Add test to verify large Fibonacci values run out of gas as expected
- Update dev-node runtime configuration for testing

## Test plan
- Run new integration tests
- Verify runtime API correctly handles contract deployment
- Verify gas limits are enforced correctly

---------

Co-authored-by: Mónica Jin <monica@parity.io>