"}
+{"page_id": "develop-toolkit-parachains-spawn-chains", "page_title": "Spawn Networks for Testing", "index": 2, "depth": 2, "title": "Additional Resources", "anchor": "additional-resources", "start_char": 777, "end_char": 1188, "estimated_token_count": 105, "token_estimator": "heuristic-v1", "text": "## Additional Resources\n\n"}
{"page_id": "develop-toolkit-parachains", "page_title": "Parachains", "index": 0, "depth": 2, "title": "Quick Links", "anchor": "quick-links", "start_char": 600, "end_char": 1005, "estimated_token_count": 110, "token_estimator": "heuristic-v1", "text": "## Quick Links\n\n- [Use Pop CLI to start your parachain project](/develop/toolkit/parachains/quickstart/pop-cli/)\n- [Use Zombienet to spawn a chain](/develop/toolkit/parachains/spawn-chains/zombienet/get-started/)\n- [Use Chopsticks to fork a chain](/develop/toolkit/parachains/fork-chains/chopsticks/get-started/)\n- [Use Moonwall to execute E2E testing](/develop/toolkit/parachains/e2e-testing/moonwall/)"}
{"page_id": "develop-toolkit-parachains", "page_title": "Parachains", "index": 1, "depth": 2, "title": "In This Section", "anchor": "in-this-section", "start_char": 1005, "end_char": 1054, "estimated_token_count": 12, "token_estimator": "heuristic-v1", "text": "## In This Section\n\n:::INSERT_IN_THIS_SECTION:::"}
{"page_id": "develop-toolkit", "page_title": "Toolkit", "index": 0, "depth": 2, "title": "In This Section", "anchor": "in-this-section", "start_char": 853, "end_char": 902, "estimated_token_count": 12, "token_estimator": "heuristic-v1", "text": "## In This Section\n\n:::INSERT_IN_THIS_SECTION:::"}
@@ -794,7 +794,7 @@
{"page_id": "infrastructure-running-a-validator-onboarding-and-offboarding-stop-validating", "page_title": "Stop Validating", "index": 3, "depth": 2, "title": "Purge Validator Session Keys", "anchor": "purge-validator-session-keys", "start_char": 1499, "end_char": 2530, "estimated_token_count": 194, "token_estimator": "heuristic-v1", "text": "## Purge Validator Session Keys\n\nPurging validator session keys is a critical step in removing the association between your validator account and its session keys, which ensures that your account is fully disassociated from validator activities. The `session.purgeKeys` extrinsic removes the reference to your session keys from the stash or staking proxy account that originally set them.\n\nHere are a couple of important things to know about purging keys:\n\n- **Account used to purge keys**: Always use the same account to purge keys you originally used to set them, usually your stash or staking proxy account. Using a different account may leave an unremovable reference to the session keys on the original account, preventing its reaping.\n- **Account reaping issue**: Failing to purge keys will prevent you from reaping (fully deleting) your stash account. If you attempt to transfer tokens without purging, you'll need to rebond, purge the session keys, unbond again, and wait through the unbonding period before any transfer."}
{"page_id": "infrastructure-running-a-validator-onboarding-and-offboarding-stop-validating", "page_title": "Stop Validating", "index": 4, "depth": 2, "title": "Unbond Your Tokens", "anchor": "unbond-your-tokens", "start_char": 2530, "end_char": 3228, "estimated_token_count": 142, "token_estimator": "heuristic-v1", "text": "## Unbond Your Tokens\n\nAfter chilling your node and purging session keys, the final step is to unbond your staked tokens. This action removes them from staking and begins the unbonding period (usually 28 days for Polkadot and seven days for Kusama), after which the tokens will be transferable.\n\nTo unbond tokens, go to **Network > Staking > Account Actions** on Polkadot.js Apps. Select your stash account, click on the dropdown menu, and choose **Unbond Funds**. Alternatively, you can use the `staking.unbond` extrinsic if you handle this via a staking proxy account.\n\nOnce the unbonding period is complete, your tokens will be available for use in transactions or transfers outside of staking."}
{"page_id": "infrastructure-running-a-validator-onboarding-and-offboarding", "page_title": "Onboarding and Offboarding", "index": 0, "depth": 2, "title": "In This Section", "anchor": "in-this-section", "start_char": 381, "end_char": 431, "estimated_token_count": 12, "token_estimator": "heuristic-v1", "text": "## In This Section\n\n:::INSERT_IN_THIS_SECTION:::"}
-{"page_id": "infrastructure-running-a-validator-onboarding-and-offboarding", "page_title": "Onboarding and Offboarding", "index": 1, "depth": 2, "title": "Additional Resources", "anchor": "additional-resources", "start_char": 431, "end_char": 1975, "estimated_token_count": 404, "token_estimator": "heuristic-v1", "text": "## Additional Resources\n\n"}
+{"page_id": "infrastructure-running-a-validator-onboarding-and-offboarding", "page_title": "Onboarding and Offboarding", "index": 1, "depth": 2, "title": "Additional Resources", "anchor": "additional-resources", "start_char": 431, "end_char": 1931, "estimated_token_count": 392, "token_estimator": "heuristic-v1", "text": "## Additional Resources\n\n"}
{"page_id": "infrastructure-running-a-validator-operational-tasks-general-management", "page_title": "General Management", "index": 0, "depth": 2, "title": "Introduction", "anchor": "introduction", "start_char": 22, "end_char": 759, "estimated_token_count": 119, "token_estimator": "heuristic-v1", "text": "## Introduction\n\nValidator performance is pivotal in maintaining the security and stability of the Polkadot network. As a validator, optimizing your setup ensures efficient transaction processing, minimizes latency, and maintains system reliability during high-demand periods. Proper configuration and proactive monitoring also help mitigate risks like slashing and service interruptions.\n\nThis guide covers essential practices for managing a validator, including performance tuning techniques, security hardening, and tools for real-time monitoring. Whether you're fine-tuning CPU settings, configuring NUMA balancing, or setting up a robust alert system, these steps will help you build a resilient and efficient validator operation."}
{"page_id": "infrastructure-running-a-validator-operational-tasks-general-management", "page_title": "General Management", "index": 1, "depth": 2, "title": "Configuration Optimization", "anchor": "configuration-optimization", "start_char": 759, "end_char": 987, "estimated_token_count": 35, "token_estimator": "heuristic-v1", "text": "## Configuration Optimization\n\nFor those seeking to optimize their validator's performance, the following configurations can improve responsiveness, reduce latency, and ensure consistent performance during high-demand periods."}
{"page_id": "infrastructure-running-a-validator-operational-tasks-general-management", "page_title": "General Management", "index": 2, "depth": 3, "title": "Deactivate Simultaneous Multithreading", "anchor": "deactivate-simultaneous-multithreading", "start_char": 987, "end_char": 2478, "estimated_token_count": 333, "token_estimator": "heuristic-v1", "text": "### Deactivate Simultaneous Multithreading\n\nPolkadot validators operate primarily in single-threaded mode for critical tasks, so optimizing single-core CPU performance can reduce latency and improve stability. Deactivating simultaneous multithreading (SMT) can prevent virtual cores from affecting performance. SMT is called Hyper-Threading on Intel and 2-way SMT on AMD Zen.\n\nTake the following steps to deactivate every other virtual (vCPU) core:\n\n1. Loop through all the CPU cores and deactivate the virtual cores associated with them:\n\n ```bash\n for cpunum in $(cat /sys/devices/system/cpu/cpu*/topology/thread_siblings_list | \\\n cut -s -d, -f2- | tr ',' '\\n' | sort -un)\n do\n echo 0 > /sys/devices/system/cpu/cpu$cpunum/online\n done\n ```\n\n2. To permanently save the changes, add `nosmt=force` to the `GRUB_CMDLINE_LINUX_DEFAULT` variable in `/etc/default/grub`:\n\n ```bash\n sudo nano /etc/default/grub\n # Add to GRUB_CMDLINE_LINUX_DEFAULT\n ```\n\n ```config title=\"/etc/default/grub\"\n GRUB_DEFAULT=0\n GRUB_HIDDEN_TIMEOUT=0\n GRUB_HIDDEN_TIMEOUT_QUIET=true\n GRUB_TIMEOUT=10\n GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`\n GRUB_CMDLINE_LINUX_DEFAULT='nosmt=force'\n GRUB_CMDLINE_LINUX=''\n ```\n\n3. Update GRUB to apply changes:\n\n ```bash\n sudo update-grub\n ```\n\n4. After the reboot, you should see that half of the cores are offline. To confirm, run:\n\n ```bash\n lscpu --extended\n ```"}
@@ -827,14 +827,14 @@
{"page_id": "infrastructure-running-a-validator-operational-tasks-upgrade-your-node", "page_title": "Upgrade a Validator Node", "index": 5, "depth": 3, "title": "Session `N`", "anchor": "session-n", "start_char": 3111, "end_char": 4063, "estimated_token_count": 216, "token_estimator": "heuristic-v1", "text": "### Session `N`\n\n1. **Start Validator B**: Launch a secondary node and wait until it is fully synced with the network. Once synced, start it with the `--validator` flag. This node will now act as Validator B.\n2. **Generate session keys**: Create new session keys specifically for Validator B.\n3. **Submit the `set_key` extrinsic**: Use your staking proxy account to submit a `set_key` extrinsic, linking the session keys for Validator B to your staking setup.\n4. **Record the session**: Make a note of the session in which you executed this extrinsic.\n5. **Wait for session changes**: Allow the current session to end and then wait for two additional full sessions for the new keys to take effect.\n\n!!! warning \"Keep Validator A running\"\n\n It is crucial to keep Validator A operational during this entire waiting period. Since `set_key` does not take effect immediately, turning off Validator A too early may result in chilling or even slashing."}
{"page_id": "infrastructure-running-a-validator-operational-tasks-upgrade-your-node", "page_title": "Upgrade a Validator Node", "index": 6, "depth": 3, "title": "Session `N+3`", "anchor": "session-n3", "start_char": 4063, "end_char": 5624, "estimated_token_count": 378, "token_estimator": "heuristic-v1", "text": "### Session `N+3`\n\nAt this stage, Validator B becomes your active validator. You can now safely perform any maintenance tasks on Validator A.\n\nComplete the following steps when you are ready to bring Validator A back online:\n\n1. **Start Validator A**: Launch Validator A, sync the blockchain database, and ensure it is running with the `--validator` flag.\n2. **Generate new session keys for Validator A**: Create fresh session keys for Validator A.\n3. **Submit the `set_key` extrinsic**: Using your staking proxy account, submit a `set_key` extrinsic with the new Validator A session keys.\n4. **Record the session**: Again, make a note of the session in which you executed this extrinsic.\n\nKeep Validator B active until the session during which you executed the `set_key` extrinsic completes and two additional full sessions have passed. Once Validator A has successfully taken over, you can safely stop Validator B. This process helps ensure a smooth handoff between nodes and minimizes the risk of downtime or penalties. Verify the transition by checking for finalized blocks in the new session. The logs should indicate the successful change, similar to the example below:\n\n INSERT_COMMAND\n 2019-10-28 21:44:13 Applying authority set change scheduled at block #450092\n 2019-10-28 21:44:13 Applying GRANDPA set change to new set with 20 authorities\n"}
{"page_id": "infrastructure-running-a-validator-operational-tasks", "page_title": "Operational Tasks", "index": 0, "depth": 2, "title": "In This Section", "anchor": "in-this-section", "start_char": 593, "end_char": 643, "estimated_token_count": 12, "token_estimator": "heuristic-v1", "text": "## In This Section\n\n:::INSERT_IN_THIS_SECTION:::"}
-{"page_id": "infrastructure-running-a-validator-operational-tasks", "page_title": "Operational Tasks", "index": 1, "depth": 2, "title": "Additional Resources", "anchor": "additional-resources", "start_char": 643, "end_char": 1520, "estimated_token_count": 224, "token_estimator": "heuristic-v1", "text": "## Additional Resources\n\n"}
+{"page_id": "infrastructure-running-a-validator-operational-tasks", "page_title": "Operational Tasks", "index": 1, "depth": 2, "title": "Additional Resources", "anchor": "additional-resources", "start_char": 643, "end_char": 1498, "estimated_token_count": 218, "token_estimator": "heuristic-v1", "text": "## Additional Resources\n\n"}
{"page_id": "infrastructure-running-a-validator-requirements", "page_title": "Validator Requirements", "index": 0, "depth": 2, "title": "Introduction", "anchor": "introduction", "start_char": 26, "end_char": 981, "estimated_token_count": 159, "token_estimator": "heuristic-v1", "text": "## Introduction\n\nRunning a validator in the Polkadot ecosystem is essential for maintaining network security and decentralization. Validators are responsible for validating transactions and adding new blocks to the chain, ensuring the system operates smoothly. In return for their services, validators earn rewards. However, the role comes with inherent risks, such as slashing penalties for misbehavior or technical failures. If you’re new to validation, starting on Kusama provides a lower-stakes environment to gain valuable experience before progressing to the Polkadot network.\n\nThis guide covers everything you need to know about becoming a validator, including system requirements, staking prerequisites, and infrastructure setup. Whether you’re deploying on a VPS or running your node on custom hardware, you’ll learn how to optimize your validator for performance and security, ensuring compliance with network standards while minimizing risks."}
{"page_id": "infrastructure-running-a-validator-requirements", "page_title": "Validator Requirements", "index": 1, "depth": 2, "title": "Prerequisites", "anchor": "prerequisites", "start_char": 981, "end_char": 2390, "estimated_token_count": 296, "token_estimator": "heuristic-v1", "text": "## Prerequisites\n\nRunning a validator requires solid system administration skills and a secure, well-maintained infrastructure. Below are the primary requirements you need to be aware of before getting started:\n\n- **System administration expertise**: Handling technical anomalies and maintaining node infrastructure is critical. Validators must be able to troubleshoot and optimize their setup.\n- **Security**: Ensure your setup follows best practices for securing your node. Refer to the [Secure Your Validator](/infrastructure/running-a-validator/operational-tasks/general-management/#secure-your-validator){target=\\_blank} section to learn about important security measures.\n- **Network choice**: Start with [Kusama](/infrastructure/running-a-validator/onboarding-and-offboarding/set-up-validator/#run-a-kusama-validator){target=\\_blank} to gain experience. Look for \"Adjustments for Kusama\" throughout these guides for tips on adapting the provided instructions for the Kusama network.\n- **Staking requirements**: A minimum amount of native token (KSM or DOT) is required to be elected into the validator set. The required stake can come from your own holdings or from nominators.\n- **Risk of slashing**: Any DOT you stake is at risk if your setup fails or your validator misbehaves. If you’re unsure of your ability to maintain a reliable validator, consider nominating your DOT to a trusted validator."}
{"page_id": "infrastructure-running-a-validator-requirements", "page_title": "Validator Requirements", "index": 2, "depth": 2, "title": "Minimum Hardware Requirements", "anchor": "minimum-hardware-requirements", "start_char": 2390, "end_char": 3554, "estimated_token_count": 251, "token_estimator": "heuristic-v1", "text": "## Minimum Hardware Requirements\n\nPolkadot validators rely on high-performance hardware to process blocks efficiently. The recommended minimum hardware requirements to ensure a fully functional and performant validator are as follows:\n\n- CPU:\n\n - x86-64 compatible.\n - Eight physical cores @ 3.4 GHz.\n - Processor:\n - **Intel**: Ice Lake or newer (Xeon or Core series)\n - **AMD**: Zen3 or newer (EPYC or Ryzen)\n - Simultaneous multithreading disabled:\n - **Intel**: Hyper-Threading\n - **AMD**: SMT\n - [Single-threaded performance](https://www.cpubenchmark.net/singleThread.html){target=\\_blank} is prioritized over a higher core count.\n\n- Storage:\n\n - **NVMe SSD**: At least 2 TB for blockchain data recommended (prioritize latency rather than throughput).\n - Storage requirements will increase as the chain grows. For current estimates, see the [current chain snapshot](https://stakeworld.io/docs/dbsize){target=\\_blank}.\n\n- Memory:\n\n - 32 GB DDR4 ECC\n\n- Network:\n\n - Symmetric networking speed of 500 Mbit/s is required to handle large numbers of parachains and ensure congestion control during peak times."}
{"page_id": "infrastructure-running-a-validator-requirements", "page_title": "Validator Requirements", "index": 3, "depth": 2, "title": "VPS Provider List", "anchor": "vps-provider-list", "start_char": 3554, "end_char": 6073, "estimated_token_count": 575, "token_estimator": "heuristic-v1", "text": "## VPS Provider List\n\nWhen selecting a VPS provider for your validator node, prioritize reliability, consistent performance, and adherence to the specific hardware requirements set for Polkadot validators. The following server types have been tested and showed acceptable performance in benchmark tests. However, this is not an endorsement and actual performance may vary depending on your workload and VPS provider.\n\nBe aware that some providers may overprovision the underlying host and use shared storage such as NVMe over TCP, which appears as local storage. These setups might result in poor or inconsistent performance. Benchmark your infrastructure before deploying.\n\n- **[Google Cloud Platform (GCP)](https://cloud.google.com/){target=\\_blank}**: `c2` and `c2d` machine families offer high-performance configurations suitable for validators.\n- **[Amazon Web Services (AWS)](https://aws.amazon.com/){target=\\_blank}**: `c6id` machine family provides strong performance, particularly for I/O-intensive workloads.\n- **[OVH](https://www.ovhcloud.com/en-au/){target=\\_blank}**: Can be a budget-friendly solution if it meets your minimum hardware specifications.\n- **[Digital Ocean](https://www.digitalocean.com/){target=\\_blank}**: Popular among developers, Digital Ocean's premium droplets offer configurations suitable for medium to high-intensity workloads.\n- **[Vultr](https://www.vultr.com/){target=\\_blank}**: Offers flexibility with plans that may meet validator requirements, especially for high-bandwidth needs.\n- **[Linode](https://www.linode.com/){target=\\_blank}**: Provides detailed documentation, which can be helpful for setup.\n- **[Scaleway](https://www.scaleway.com/en/){target=\\_blank}**: Offers high-performance cloud instances that can be suitable for validator nodes.\n- **[OnFinality](https://onfinality.io/en){target=\\_blank}**: Specialized in blockchain infrastructure, OnFinality provides validator-specific support and configurations.\n\n!!! warning \"Acceptable use policies\"\n Different VPS providers have varying acceptable use policies, and not all allow cryptocurrency-related activities. \n\n For example, Digital Ocean, requires explicit permission to use servers for cryptocurrency mining and defines unauthorized mining as [network abuse](https://www.digitalocean.com/legal/acceptable-use-policy#network-abuse){target=\\_blank} in their acceptable use policy. \n \n Review the terms for your VPS provider to avoid account suspension or server shutdown due to policy violations."}
{"page_id": "infrastructure-running-a-validator-requirements", "page_title": "Validator Requirements", "index": 4, "depth": 2, "title": "Minimum Bond Requirement", "anchor": "minimum-bond-requirement", "start_char": 6073, "end_char": 6838, "estimated_token_count": 196, "token_estimator": "heuristic-v1", "text": "## Minimum Bond Requirement\n\nBefore bonding DOT, ensure you meet the minimum bond requirement to start a validator instance. The minimum bond is the least DOT you need to stake to enter the validator set. To become eligible for rewards, your validator node must be nominated by enough staked tokens.\n\nFor example, on November 19, 2024, the minimum stake backing a validator in Polkadot's era 1632 was 1,159,434.248 DOT. You can check the current minimum stake required using these tools:\n\n- [**Chain State Values**](https://wiki.polkadot.com/general/chain-state-values/){target=\\_blank}\n- [**Subscan**](https://polkadot.subscan.io/validator_list?status=validator){target=\\_blank}\n- [**Staking Dashboard**](https://staking.polkadot.cloud/#/overview){target=\\_blank}"}
{"page_id": "infrastructure-running-a-validator", "page_title": "Running a Validator", "index": 0, "depth": 2, "title": "In This Section", "anchor": "in-this-section", "start_char": 412, "end_char": 462, "estimated_token_count": 12, "token_estimator": "heuristic-v1", "text": "## In This Section\n\n:::INSERT_IN_THIS_SECTION:::"}
-{"page_id": "infrastructure-running-a-validator", "page_title": "Running a Validator", "index": 1, "depth": 2, "title": "Additional Resources", "anchor": "additional-resources", "start_char": 462, "end_char": 1603, "estimated_token_count": 307, "token_estimator": "heuristic-v1", "text": "## Additional Resources\n\n"}
+{"page_id": "infrastructure-running-a-validator", "page_title": "Running a Validator", "index": 1, "depth": 2, "title": "Additional Resources", "anchor": "additional-resources", "start_char": 462, "end_char": 1570, "estimated_token_count": 298, "token_estimator": "heuristic-v1", "text": "## Additional Resources\n\n"}
{"page_id": "infrastructure-staking-mechanics-offenses-and-slashes", "page_title": "Offenses and Slashes", "index": 0, "depth": 2, "title": "Introduction", "anchor": "introduction", "start_char": 24, "end_char": 674, "estimated_token_count": 104, "token_estimator": "heuristic-v1", "text": "## Introduction\n\nIn Polkadot's Nominated Proof of Stake (NPoS) system, validator misconduct is deterred through a combination of slashing, disabling, and reputation penalties. Validators and nominators who stake tokens face consequences for validator misbehavior, which range from token slashes to restrictions on network participation.\n\nThis page outlines the types of offenses recognized by Polkadot, including block equivocations and invalid votes, as well as the corresponding penalties. While some parachains may implement additional custom slashing mechanisms, this guide focuses on the offenses tied to staking within the Polkadot ecosystem."}
{"page_id": "infrastructure-staking-mechanics-offenses-and-slashes", "page_title": "Offenses and Slashes", "index": 1, "depth": 2, "title": "Offenses", "anchor": "offenses", "start_char": 674, "end_char": 1106, "estimated_token_count": 86, "token_estimator": "heuristic-v1", "text": "## Offenses\n\nPolkadot is a public permissionless network. As such, it has a mechanism to disincentivize offenses and incentivize good behavior. You can review the [parachain protocol](https://wiki.polkadot.com/learn/learn-parachains-protocol/#parachain-protocol){target=\\_blank} to better understand the terminology used to describe offenses. Polkadot validator offenses fall into two categories: invalid votes and equivocations."}
{"page_id": "infrastructure-staking-mechanics-offenses-and-slashes", "page_title": "Offenses and Slashes", "index": 2, "depth": 3, "title": "Invalid Votes", "anchor": "invalid-votes", "start_char": 1106, "end_char": 1733, "estimated_token_count": 128, "token_estimator": "heuristic-v1", "text": "### Invalid Votes\n\nA validator will be penalized for inappropriate voting activity during the block inclusion and approval processes. The invalid voting related offenses are as follows:\n\n- **Backing an invalid block**: A para-validator backs an invalid block for inclusion in a fork of the relay chain.\n- **`ForInvalid` vote**: When acting as a secondary checker, the validator votes in favor of an invalid block.\n- **`AgainstValid` vote**: When acting as a secondary checker, the validator votes against a valid block. This type of vote wastes network resources required to resolve the disparate votes and resulting dispute."}
@@ -851,7 +851,7 @@
{"page_id": "infrastructure-staking-mechanics-rewards-payout", "page_title": "Rewards Payout", "index": 4, "depth": 2, "title": "Running Multiple Validators", "anchor": "running-multiple-validators", "start_char": 5622, "end_char": 7233, "estimated_token_count": 423, "token_estimator": "heuristic-v1", "text": "## Running Multiple Validators\n\nRunning multiple validators can offer a more favorable risk/reward ratio compared to running a single one. If you have sufficient DOT or nominators staking on your validators, maintaining multiple validators within the active set can yield higher rewards.\n\nIn the preceding section, with 18 DOT staked and no nominators, Alice earned 2 DOT in one era. This example uses DOT, but the same principles apply for KSM on the Kusama network. By managing stake across multiple validators, you can potentially increase overall returns. Recall the set of validators from the preceding section:\n\n``` mermaid\nflowchart TD\n A[\"Alice (18 DOT)\"]\n B[\"Bob (9 DOT)\"]\n C[\"Carol (8 DOT)\"]\n D[\"Dave (7 DOT)\"]\n E[\"Payout (8 DOT total)\"]\n E --\"2 DOT\"--> A\n E --\"2 DOT\"--> B\n E --\"2 DOT\"--> C\n E --\"2 DOT\"--> D \n```\n\nNow, assume Alice decides to split their stake and run two validators, each with a nine DOT stake. This validator set only has four spots and priority is given to validators with a larger stake. In this example, Dave has the smallest stake and loses his spot in the validator set. Now, Alice will earn two shares of the total payout each era as illustrated below:\n\n``` mermaid\nflowchart TD\n A[\"Alice (9 DOT)\"]\n F[\"Alice (9 DOT)\"]\n B[\"Bob (9 DOT)\"]\n C[\"Carol (8 DOT)\"]\n E[\"Payout (8 DOT total)\"]\n E --\"2 DOT\"--> A\n E --\"2 DOT\"--> B\n E --\"2 DOT\"--> C\n E --\"2 DOT\"--> F \n```\n\nWith enough stake, you could run more than two validators. However, each validator must have enough stake behind it to maintain a spot in the validator set."}
{"page_id": "infrastructure-staking-mechanics-rewards-payout", "page_title": "Rewards Payout", "index": 5, "depth": 2, "title": "Nominators and Validator Payments", "anchor": "nominators-and-validator-payments", "start_char": 7233, "end_char": 11070, "estimated_token_count": 990, "token_estimator": "heuristic-v1", "text": "## Nominators and Validator Payments\n\nA nominator's stake allows them to vote for validators and earn a share of the rewards without managing a validator node. Although staking rewards depend on validator activity during an era, validators themselves never control or own nominator rewards. To trigger payouts, anyone can call the `staking.payoutStakers` or `staking.payoutStakerByPage` methods, which mint and distribute rewards directly to the recipients. This trustless process ensures nominators receive their earned rewards.\n\nValidators set a commission rate as a percentage of the block reward, affecting how rewards are shared with nominators. A 0% commission means the validator keeps only rewards from their self-stake, while a 100% commission means they retain all rewards, leaving none for nominators.\n\nThe following examples model splitting validator payments between nominator and validator using various commission percentages. For simplicity, these examples assume a Polkadot-SDK based relay chain that uses DOT as a native token and a single nominator per validator. Calculations of KSM reward payouts for Kusama follow the same formula. \n\nStart with the original validator set from the previous section: \n\n``` mermaid\nflowchart TD\n A[\"Alice (18 DOT)\"]\n B[\"Bob (9 DOT)\"]\n C[\"Carol (8 DOT)\"]\n D[\"Dave (7 DOT)\"]\n E[\"Payout (8 DOT total)\"]\n E --\"2 DOT\"--> A\n E --\"2 DOT\"--> B\n E --\"2 DOT\"--> C\n E --\"2 DOT\"--> D \n```\n\nThe preceding diagram shows each validator receiving a 2 DOT payout, but doesn't account for sharing rewards with nominators. The following diagram shows what nominator payout might look like for validator Alice. Alice has a 20% commission rate and holds 50% of the stake for their validator:\n\n``` mermaid\n\nflowchart TD\n A[\"Gross Rewards = 2 DOT\"]\n E[\"Commission = 20%\"]\n F[\"Alice Validator Payment = 0.4 DOT\"]\n G[\"Total Stake Rewards = 1.6 DOT\"]\n B[\"Alice Validator Stake = 18 DOT\"]\n C[\"9 DOT Alice (50%)\"]\n H[\"Alice Stake Reward = 0.8 DOT\"]\n I[\"Total Alice Validator Reward = 1.2 DOT\"]\n D[\"9 DOT Nominator (50%)\"]\n J[\"Total Nominator Reward = 0.8 DOT\"]\n \n A --> E\n E --(2 x 0.20)--> F\n F --(2 - 0.4)--> G\n B --> C\n B --> D\n C --(1.6 x 0.50)--> H\n H --(0.4 + 0.8)--> I\n D --(1.60 x 0.50)--> J\n```\n\nNotice the validator commission rate is applied against the gross amount of rewards for the era. The validator commission is subtracted from the total rewards. After the commission is paid to the validator, the remaining amount is split among stake owners according to their percentage of the total stake. A validator's total rewards for an era include their commission plus their piece of the stake rewards. 
\n\nNow, consider a different scenario for validator Bob where the commission rate is 40%, and Bob holds 33% of the stake for their validator:\n\n``` mermaid\n\nflowchart TD\n A[\"Gross Rewards = 2 DOT\"]\n E[\"Commission = 40%\"]\n F[\"Bob Validator Payment = 0.8 DOT\"]\n G[\"Total Stake Rewards = 1.2 DOT\"]\n B[\"Bob Validator Stake = 9 DOT\"]\n C[\"3 DOT Bob (33%)\"]\n H[\"Bob Stake Reward = 0.4 DOT\"]\n I[\"Total Bob Validator Reward = 1.2 DOT\"]\n D[\"6 DOT Nominator (67%)\"]\n J[\"Total Nominator Reward = 0.8 DOT\"]\n \n A --> E\n E --(2 x 0.4)--> F\n F --(2 - 0.8)--> G\n B --> C\n B --> D\n C --(1.2 x 0.33)--> H\n H --(0.8 + 0.4)--> I\n D --(1.2 x 0.67)--> J\n```\n\nBob holds a smaller percentage of their node's total stake, making their stake reward smaller than Alice's. In this scenario, Bob makes up the difference by charging a 40% commission rate and ultimately ends up with the same total payment as Alice. Each validator will need to find their ideal balance between the amount of stake and commission rate to attract nominators while still making running a validator worthwhile."}
{"page_id": "infrastructure-staking-mechanics", "page_title": "Staking Mechanics", "index": 0, "depth": 2, "title": "In This Section", "anchor": "in-this-section", "start_char": 487, "end_char": 537, "estimated_token_count": 12, "token_estimator": "heuristic-v1", "text": "## In This Section\n\n:::INSERT_IN_THIS_SECTION:::"}
-{"page_id": "infrastructure-staking-mechanics", "page_title": "Staking Mechanics", "index": 1, "depth": 2, "title": "Additional Resources", "anchor": "additional-resources", "start_char": 537, "end_char": 1824, "estimated_token_count": 340, "token_estimator": "heuristic-v1", "text": "## Additional Resources\n\n"}
+{"page_id": "infrastructure-staking-mechanics", "page_title": "Staking Mechanics", "index": 1, "depth": 2, "title": "Additional Resources", "anchor": "additional-resources", "start_char": 537, "end_char": 1791, "estimated_token_count": 331, "token_estimator": "heuristic-v1", "text": "## Additional Resources\n\n"}
{"page_id": "infrastructure", "page_title": "Infrastructure", "index": 0, "depth": 2, "title": "Choosing the Right Role", "anchor": "choosing-the-right-role", "start_char": 486, "end_char": 2813, "estimated_token_count": 439, "token_estimator": "heuristic-v1", "text": "## Choosing the Right Role\n\nSelecting your role within the Polkadot ecosystem depends on your goals, resources, and expertise. Below are detailed considerations for each role:\n\n- **Running a node**:\n - **Purpose**: A node provides access to network data and supports API queries. It is commonly used for:\n - **Development and testing**: Offers a local instance to simulate network conditions and test applications.\n - **Production use**: Acts as a data source for dApps, clients, and other applications needing reliable access to the blockchain.\n - **Requirements**: Moderate hardware resources to handle blockchain data efficiently.\n - **Responsibilities**: A node’s responsibilities vary based on its purpose.\n - **Development and testing**: Enables developers to test features, debug code, and simulate network interactions in a controlled environment.\n - **Production use**: Provides consistent and reliable data access for dApps and other applications, ensuring minimal downtime.\n\n- **Running a validator**:\n - **Purpose**: Validators play a critical role in securing the Polkadot relay chain. They validate parachain block submissions, participate in consensus, and help maintain the network's overall integrity.\n - **Requirements**: Becoming a validator requires:\n - **Staking**: A variable amount of DOT tokens to secure the network and demonstrate commitment.\n - **Hardware**: High-performing hardware resources capable of supporting intensive blockchain operations.\n - **Technical expertise**: Proficiency in setting up and maintaining nodes, managing updates, and understanding Polkadot's consensus mechanisms.\n - **Community involvement**: Building trust and rapport within the community to attract nominators willing to stake with your validator.\n - **Responsibilities**: Validators have critical responsibilities to ensure network health.\n - **Uptime**: Maintain near-constant availability to avoid slashing penalties for downtime or unresponsiveness.\n - **Network security**: Participate in consensus and verify parachain transactions to uphold the network's security and integrity.\n - **Availability**: Monitor the network for events and respond to issues promptly, such as misbehavior reports or protocol updates."}
{"page_id": "infrastructure", "page_title": "Infrastructure", "index": 1, "depth": 2, "title": "In This Section", "anchor": "in-this-section", "start_char": 2813, "end_char": 2862, "estimated_token_count": 12, "token_estimator": "heuristic-v1", "text": "## In This Section\n\n:::INSERT_IN_THIS_SECTION:::"}
{"page_id": "polkadot-protocol-architecture-parachains-consensus", "page_title": "Parachain Consensus", "index": 0, "depth": 2, "title": "Introduction", "anchor": "introduction", "start_char": 23, "end_char": 936, "estimated_token_count": 146, "token_estimator": "heuristic-v1", "text": "## Introduction\n\nParachains are independent blockchains built with the Polkadot SDK, designed to leverage Polkadot’s relay chain for shared security and transaction finality. These specialized chains operate as part of Polkadot’s execution sharding model, where each parachain manages its own state and transactions while relying on the relay chain for validation and consensus.\n\nAt the core of parachain functionality are collators, specialized nodes that sequence transactions into blocks and maintain the parachain’s state. Collators optimize Polkadot’s architecture by offloading state management from the relay chain, allowing relay chain validators to focus solely on validating parachain blocks.\n\nThis guide explores how parachain consensus works, including the roles of collators and validators, and the steps involved in securing parachain blocks within Polkadot’s scalable and decentralized framework."}
@@ -1239,7 +1239,7 @@
{"page_id": "tutorials-dapps-remark-tutorial", "page_title": "PAPI Account Watcher Tutorial", "index": 7, "depth": 2, "title": "Test the CLI", "anchor": "test-the-cli", "start_char": 6108, "end_char": 7721, "estimated_token_count": 521, "token_estimator": "heuristic-v1", "text": "## Test the CLI\n\nTo test the application, navigate to the [**Extrinsics** page of the PAPI Dev Console](https://dev.papi.how/extrinsics#networkId=westend&endpoint=light-client){target=\\_blank}. Select the **System** pallet and the **remark_with_event** call. Ensure the input field follows the convention `address+email`. For example, if monitoring `5GrwvaEF5zXb26Fz9rcQpDWS57CtERHpNehXCPcNoHGKutQY`, the input should be:\n\n\n\nSubmit the extrinsic and sign it using the Polkadot.js browser wallet. The CLI will display the following output and play the \"You've Got Mail!\" sound:\n\n
\n npm start -- --account 5GrwvaEF5zXb26Fz9rcQpDWS57CtERHpNehXCPcNoHGKutQY\n __ __ _ _____ __ __ _ _ __ __ _ _\n \\ \\ / /__| |__|___ / | \\/ | __ _(_) | \\ \\ / /_ _| |_ ___| |__ ___ _ __\n \\ \\ /\\ / / _ \\ '_ \\ |_ \\ | |\\/| |/ _` | | | \\ \\ /\\ / / _` | __/ __| '_ \\ / _ \\ '__|\n \\ V V / __/ |_) |__) | | | | | (_| | | | \\ V V / (_| | || (__| | | | __/ |\n \\_/\\_/ \\___|_.__/____/ |_| |_|\\__,_|_|_| \\_/\\_/ \\__,_|\\__\\___|_| |_|\\___|_|\n \n 📬 Watching account: 5Cm8yiG45rqrpyV2zPLrbtr8efksrRuCXcqcB4xj8AejfcTB\n 📥 You've got mail!\n 👤 From: 5Cm8yiG45rqrpyV2zPLrbtr8efksrRuCXcqcB4xj8AejfcTB\n 🔖 Hash: 0xb6999c9082f5b1dede08b387404c9eb4eb2deee4781415dfa7edf08b87472050\n
"}
{"page_id": "tutorials-dapps-remark-tutorial", "page_title": "PAPI Account Watcher Tutorial", "index": 8, "depth": 2, "title": "Next Steps", "anchor": "next-steps", "start_char": 7721, "end_char": 8055, "estimated_token_count": 69, "token_estimator": "heuristic-v1", "text": "## Next Steps\n\nThis application demonstrates how the Polkadot API can be used to build decentralized applications. While this is not a production-grade application, it introduces several key features for developing with the Polkadot API.\n\nTo explore more, refer to the [official PAPI documentation](https://papi.how){target=\\_blank}."}
{"page_id": "tutorials-dapps", "page_title": "Decentralized Application Tutorials", "index": 0, "depth": 2, "title": "In This Section", "anchor": "in-this-section", "start_char": 491, "end_char": 541, "estimated_token_count": 12, "token_estimator": "heuristic-v1", "text": "## In This Section\n\n:::INSERT_IN_THIS_SECTION:::"}
-{"page_id": "tutorials-dapps", "page_title": "Decentralized Application Tutorials", "index": 1, "depth": 2, "title": "Additional Resources", "anchor": "additional-resources", "start_char": 541, "end_char": 1220, "estimated_token_count": 190, "token_estimator": "heuristic-v1", "text": "## Additional Resources\n\n"}
+{"page_id": "tutorials-dapps", "page_title": "Decentralized Application Tutorials", "index": 1, "depth": 2, "title": "Additional Resources", "anchor": "additional-resources", "start_char": 541, "end_char": 1198, "estimated_token_count": 184, "token_estimator": "heuristic-v1", "text": "## Additional Resources\n\n"}
{"page_id": "tutorials-interoperability-replay-and-dry-run-xcms", "page_title": "Replay and Dry Run XCMs", "index": 0, "depth": 2, "title": "Introduction", "anchor": "introduction", "start_char": 44, "end_char": 735, "estimated_token_count": 150, "token_estimator": "heuristic-v1", "text": "## Introduction\n\nIn this tutorial, you'll learn how to replay and dry-run XCMs using [Chopsticks](/develop/toolkit/parachains/fork-chains/chopsticks/get-started/){target=\\_blank}, a powerful tool for forking live Polkadot SDK-based chains in your local environment. These techniques are essential for:\n\n- Debugging cross-chain message failures.\n- Tracing execution across relay chains and parachains.\n- Analyzing weight usage, error types, and message flow.\n- Safely simulating XCMs without committing state changes.\n\nBy the end of this guide, you'll be able to set up a local fork, capture and replay real XCMs, and use dry-run features to diagnose and resolve complex cross-chain issues."}
{"page_id": "tutorials-interoperability-replay-and-dry-run-xcms", "page_title": "Replay and Dry Run XCMs", "index": 1, "depth": 2, "title": "Prerequisites", "anchor": "prerequisites", "start_char": 735, "end_char": 1478, "estimated_token_count": 199, "token_estimator": "heuristic-v1", "text": "## Prerequisites\n\nBefore you begin, make sure you have:\n\n- [Chopsticks](/develop/toolkit/parachains/fork-chains/chopsticks/get-started/){target=\\_blank} installed (`npm i -g @acala-network/chopsticks`).\n- Access to the endpoint or genesis file of the parachain you want to fork.\n- The block number or hash where the XCM was sent.\n- (Optional) A Chopsticks config file for repeated setups.\n\nIf you haven't forked a chain before, see the [Fork a Chain with Chopsticks guide](/tutorials/polkadot-sdk/testing/fork-live-chains/){target=\\_blank} or [Fork a Network Locally using Chopsticks](https://wiki.polkadot.com/learn/learn-guides-test-opengov-proposals/#fork-a-network-locally-using-chopsticks){target=\\_blank} for step-by-step instructions."}
{"page_id": "tutorials-interoperability-replay-and-dry-run-xcms", "page_title": "Replay and Dry Run XCMs", "index": 2, "depth": 2, "title": "Set Up Your Project", "anchor": "set-up-your-project", "start_char": 1478, "end_char": 2310, "estimated_token_count": 194, "token_estimator": "heuristic-v1", "text": "## Set Up Your Project\n\nLet's start by creating a dedicated workspace for your XCM replay and dry-run experiments.\n\n1. Create a new directory and navigate into it:\n\n ```bash\n mkdir -p replay-xcm-tests\n cd replay-xcm-tests\n ```\n\n2. Initialize a new Node project:\n\n ```bash\n npm init -y\n ```\n\n3. Install Chopsticks globally (recommended to avoid conflicts with local installs):\n\n ```bash\n npm install -g @acala-network/chopsticks@latest\n ```\n\n4. Install TypeScript and related tooling for local development:\n\n ```bash\n npm install --save-dev typescript @types/node tsx\n ```\n\n5. Install the required Polkadot packages:\n\n ```bash\n npm install polkadot-api @polkadot-labs/hdkd @polkadot-labs/hdkd-helpers\n ```\n\n6. Initialize the TypeScript config:\n\n ```bash\n npx tsc --init\n ```"}
@@ -1276,7 +1276,7 @@
{"page_id": "tutorials-interoperability-xcm-channels-para-to-system", "page_title": "Opening HRMP Channels with System Parachains", "index": 5, "depth": 3, "title": "Craft and Submit the XCM Message", "anchor": "craft-and-submit-the-xcm-message", "start_char": 3780, "end_char": 7208, "estimated_token_count": 685, "token_estimator": "heuristic-v1", "text": "### Craft and Submit the XCM Message\n\nConnect to parachain 2500 using Polkadot.js Apps to send the XCM message to the relay chain. Input the necessary parameters as illustrated in the image below. Make sure to:\n\n1. Insert your previously encoded `establish_channel_with_system` call data into the **`call`** field.\n2. Provide beneficiary details.\n3. Dispatch the XCM message to the relay chain by clicking the **Submit Transaction** button.\n\n\n\n!!! note\n The exact process and parameters for submitting this XCM message may vary depending on your specific parachain and relay chain configurations. Always refer to the most current documentation for your particular network setup.\n\nAfter successfully submitting the XCM message to the relay chain, two [`HrmpSystemChannelOpened`](https://paritytech.github.io/polkadot-sdk/master/polkadot_runtime_parachains/hrmp/pallet/enum.Event.html#variant.HrmpSystemChannelOpened){target=\\_blank} events are emitted, indicating that the channels are now present in storage under [`HrmpOpenChannelRequests`](https://paritytech.github.io/polkadot-sdk/master/polkadot_runtime_parachains/hrmp/pallet/storage_types/struct.HrmpOpenChannelRequests.html){target=\\_blank}. However, the channels are not actually set up until the start of the next session, at which point bidirectional communication between parachain 2500 and system chain 1000 is established.\n\nTo verify this, wait for the next session and then follow these steps:\n\n1. Using Polkadot.js Apps, connect to the relay chain and navigate to the **Developer** dropdown, then select **Chain state**.\n\n \n\n2. Query the HRMP channels:\n\n 1. Select **`hrmp`** from the options.\n 2. Choose the **`hrmpChannels`** call.\n 3. Click the **+** button to execute the query.\n\n \n \n3. Examine the query results. You should see output similar to the following:\n\n ```json\n [\n [\n [\n {\n \"sender\": 1000,\n \"recipient\": 2500\n }\n ],\n {\n \"maxCapacity\": 8,\n \"maxTotalSize\": 8192,\n \"maxMessageSize\": 1048576,\n \"msgCount\": 0,\n \"totalSize\": 0,\n \"mqcHead\": null,\n \"senderDeposit\": 0,\n \"recipientDeposit\": 0\n }\n ],\n [\n [\n {\n \"sender\": 2500,\n \"recipient\": 1000\n }\n ],\n {\n \"maxCapacity\": 8,\n \"maxTotalSize\": 8192,\n \"maxMessageSize\": 1048576,\n \"msgCount\": 0,\n \"totalSize\": 0,\n \"mqcHead\": null,\n \"senderDeposit\": 0,\n \"recipientDeposit\": 0\n }\n ]\n ]\n\n ```\n\nThe output confirms the successful establishment of two HRMP channels:\n\n- From chain 1000 (system chain) to chain 2500 (parachain).\n- From chain 2500 (parachain) to chain 1000 (system chain).\n\nThis bidirectional channel enables direct communication between the system chain and the parachain, allowing for cross-chain message passing."}
{"page_id": "tutorials-interoperability-xcm-channels", "page_title": "Tutorials for Managing XCM Channels", "index": 0, "depth": 2, "title": "Understand the Process of Opening Channels", "anchor": "understand-the-process-of-opening-channels", "start_char": 787, "end_char": 1357, "estimated_token_count": 95, "token_estimator": "heuristic-v1", "text": "## Understand the Process of Opening Channels\n\nEach parachain starts with two default unidirectional XCM channels: an upward channel for sending messages to the relay chain, and a downward channel for receiving messages. These channels are implicitly available.\n\nTo enable communication between parachains, explicit HRMP channels must be established by registering them on the relay chain. This process requires a deposit to cover the costs associated with storing message queues on the relay chain. The deposit amount depends on the specific relay chain’s parameters."}
{"page_id": "tutorials-interoperability-xcm-channels", "page_title": "Tutorials for Managing XCM Channels", "index": 1, "depth": 2, "title": "In This Section", "anchor": "in-this-section", "start_char": 1357, "end_char": 1407, "estimated_token_count": 12, "token_estimator": "heuristic-v1", "text": "## In This Section\n\n:::INSERT_IN_THIS_SECTION:::"}
-{"page_id": "tutorials-interoperability-xcm-channels", "page_title": "Tutorials for Managing XCM Channels", "index": 2, "depth": 2, "title": "Additional Resources", "anchor": "additional-resources", "start_char": 1407, "end_char": 1808, "estimated_token_count": 101, "token_estimator": "heuristic-v1", "text": "## Additional Resources\n\n"}
+{"page_id": "tutorials-interoperability-xcm-channels", "page_title": "Tutorials for Managing XCM Channels", "index": 2, "depth": 2, "title": "Additional Resources", "anchor": "additional-resources", "start_char": 1407, "end_char": 1797, "estimated_token_count": 98, "token_estimator": "heuristic-v1", "text": "## Additional Resources\n\n"}
{"page_id": "tutorials-interoperability-xcm-fee-estimation", "page_title": "XCM Fee Estimation", "index": 0, "depth": 2, "title": "Introduction", "anchor": "introduction", "start_char": 22, "end_char": 450, "estimated_token_count": 76, "token_estimator": "heuristic-v1", "text": "## Introduction\n\nWhen sending cross-chain messages, ensure that the transaction will be successful not only in the local chain but also in the destination chain and any intermediate chains.\n\nSending cross-chain messages requires estimating the fees for the operation. \n\nThis tutorial will demonstrate how to dry-run and estimate the fees for teleporting assets from the Paseo Asset Hub parachain to the Paseo Bridge Hub chain."}
{"page_id": "tutorials-interoperability-xcm-fee-estimation", "page_title": "XCM Fee Estimation", "index": 1, "depth": 2, "title": "Fee Mechanism", "anchor": "fee-mechanism", "start_char": 450, "end_char": 1437, "estimated_token_count": 222, "token_estimator": "heuristic-v1", "text": "## Fee Mechanism\n\nThere are three types of fees that can be charged when sending a cross-chain message:\n\n- **Local execution fees**: Fees charged in the local chain for executing the message.\n- **Delivery fees**: Fees charged for delivering the message to the destination chain.\n- **Remote execution fees**: Fees charged in the destination chain for executing the message.\n\nIf there are multiple intermediate chains, delivery fees and remote execution fees will be charged for each one.\n\nIn this example, you will estimate the fees for teleporting assets from the Paseo Asset Hub parachain to the Paseo Bridge Hub chain. The fee structure will be as follows:\n\n```mermaid\nflowchart LR\n AssetHub[Paseo Asset Hub] -->|Delivery Fees| BridgeHub[Paseo Bridge Hub]\n AssetHub -->|Local Execution Fees| AssetHub\n BridgeHub -->|Remote Execution Fees| BridgeHub\n```\n\nThe overall fees are `local_execution_fees` + `delivery_fees` + `remote_execution_fees`."}
{"page_id": "tutorials-interoperability-xcm-fee-estimation", "page_title": "XCM Fee Estimation", "index": 2, "depth": 2, "title": "Environment Setup", "anchor": "environment-setup", "start_char": 1437, "end_char": 3989, "estimated_token_count": 588, "token_estimator": "heuristic-v1", "text": "## Environment Setup\n\nFirst, you need to set up your environment:\n\n1. Create a new directory and initialize the project:\n\n ```bash\n mkdir xcm-fee-estimation && \\\n cd xcm-fee-estimation\n ```\n\n2. Initialize the project:\n\n ```bash\n npm init -y\n ```\n\n3. Install dev dependencies:\n\n ```bash\n npm install --save-dev @types/node@^22.12.0 ts-node@^10.9.2 typescript@^5.7.3\n ```\n\n4. Install dependencies:\n\n ```bash\n npm install --save @polkadot-labs/hdkd@^0.0.13 @polkadot-labs/hdkd-helpers@^0.0.13 polkadot-api@1.9.5\n ```\n\n5. Create TypeScript configuration:\n\n ```bash\n npx tsc --init\n ```\n\n6. Generate the types for the Polkadot API for Paseo Bridge Hub and Paseo Asset Hub:\n\n ```bash\n npx papi add paseoAssetHub -n paseo_asset_hub && \\\n npx papi add paseoBridgeHub -w wss://bridge-hub-paseo.dotters.network\n ```\n\n7. Create a new file called `teleport-ah-to-bridge-hub.ts`:\n\n ```bash\n touch teleport-ah-to-bridge-hub.ts\n ```\n\n8. Import the necessary modules. Add the following code to the `teleport-ah-to-bridge-hub.ts` file:\n\n ```typescript title=\"teleport-ah-to-bridge-hub.ts\"\n import { paseoAssetHub, paseoBridgeHub } from '@polkadot-api/descriptors';\n import { createClient, FixedSizeBinary, Enum } from 'polkadot-api';\n import { getWsProvider } from 'polkadot-api/ws-provider/node';\n import { withPolkadotSdkCompat } from 'polkadot-api/polkadot-sdk-compat';\n import {\n XcmVersionedLocation,\n XcmVersionedAssetId,\n XcmV3Junctions,\n XcmV3MultiassetFungibility,\n XcmVersionedXcm,\n XcmV5Instruction,\n XcmV5Junctions,\n XcmV5Junction,\n XcmV5AssetFilter,\n XcmV5WildAsset,\n } from '@polkadot-api/descriptors';\n ```\n\n9. Define constants and a `main` function where you will implement all the logic:\n\n ```typescript title=\"teleport-ah-to-bridge-hub.ts\"\n // 1 PAS = 10^10 units\n const PAS_UNITS = 10_000_000_000n; // 1 PAS\n const PAS_CENTS = 100_000_000n; // 0.01 PAS\n\n // Paseo Asset Hub constants\n const PASEO_ASSET_HUB_RPC_ENDPOINT = 'ws://localhost:8001';\n const ASSET_HUB_ACCOUNT = '15oF4uVJwmo4TdGW7VfQxNLavjCXviqxT9S1MgbjMNHr6Sp5'; // Alice (Paseo Asset Hub)\n\n // Bridge Hub destination\n const BRIDGE_HUB_RPC_ENDPOINT = 'ws://localhost:8000';\n const BRIDGE_HUB_PARA_ID = 1002;\n const BRIDGE_HUB_BENEFICIARY =\n async function main() {\n // Code will go here\n }\n ```\n\nAll the following code explained in the subsequent sections must be added inside the `main` function."}
@@ -1299,7 +1299,7 @@
{"page_id": "tutorials-interoperability", "page_title": "Interoperability Tutorials", "index": 0, "depth": 2, "title": "XCM (Cross-Consensus Messaging)", "anchor": "xcm-cross-consensus-messaging", "start_char": 645, "end_char": 894, "estimated_token_count": 43, "token_estimator": "heuristic-v1", "text": "## XCM (Cross-Consensus Messaging)\n\nXCM provides a secure and trustless framework that facilitates communication between parachains, relay chains, and external blockchains, enabling asset transfers, data sharing, and complex cross-chain workflows."}
{"page_id": "tutorials-interoperability", "page_title": "Interoperability Tutorials", "index": 1, "depth": 3, "title": "For Parachain Integrators", "anchor": "for-parachain-integrators", "start_char": 894, "end_char": 1363, "estimated_token_count": 100, "token_estimator": "heuristic-v1", "text": "### For Parachain Integrators\n\nLearn to establish and use cross-chain communication channels:\n\n- **[Opening HRMP Channels Between Parachains](/tutorials/interoperability/xcm-channels/para-to-para/)**: Set up uni- and bidirectional messaging channels between parachains.\n- **[Opening HRMP Channels with System Parachains](/tutorials/interoperability/xcm-channels/para-to-system/)**: Establish communication channels with system parachains using optimized XCM messages."}
{"page_id": "tutorials-interoperability", "page_title": "Interoperability Tutorials", "index": 2, "depth": 2, "title": "In This Section", "anchor": "in-this-section", "start_char": 1363, "end_char": 1413, "estimated_token_count": 12, "token_estimator": "heuristic-v1", "text": "## In This Section\n\n:::INSERT_IN_THIS_SECTION:::"}
-{"page_id": "tutorials-interoperability", "page_title": "Interoperability Tutorials", "index": 3, "depth": 2, "title": "Additional Resources", "anchor": "additional-resources", "start_char": 1413, "end_char": 2197, "estimated_token_count": 194, "token_estimator": "heuristic-v1", "text": "## Additional Resources\n\n"}
+{"page_id": "tutorials-interoperability", "page_title": "Interoperability Tutorials", "index": 3, "depth": 2, "title": "Additional Resources", "anchor": "additional-resources", "start_char": 1413, "end_char": 2175, "estimated_token_count": 188, "token_estimator": "heuristic-v1", "text": "## Additional Resources\n\n"}
{"page_id": "tutorials-onchain-governance-fast-track-gov-proposal", "page_title": "Fast Track a Governance Proposal", "index": 0, "depth": 2, "title": "Introduction", "anchor": "introduction", "start_char": 36, "end_char": 1714, "estimated_token_count": 314, "token_estimator": "heuristic-v1", "text": "## Introduction\n\nPolkadot's [OpenGov](/polkadot-protocol/onchain-governance/overview){target=\\_blank} is a sophisticated governance mechanism designed to allow the network to evolve gracefully over time, guided by its stakeholders. This system features multiple [tracks](https://wiki.polkadot.com/learn/learn-polkadot-opengov-origins/#origins-and-tracks-info){target=\\_blank} for different types of proposals, each with parameters for approval, support, and confirmation period. While this flexibility is powerful, it also introduces complexity that can lead to failed proposals or unexpected outcomes.\n\nTesting governance proposals before submission is crucial for the ecosystem. This process enhances efficiency by reducing the need for repeated submissions, improves security by identifying potential risks, and allows proposal optimization based on simulated outcomes. It also serves as an educational tool, providing stakeholders with a safe environment to understand the impacts of different voting scenarios. \n\nBy leveraging simulation tools like [Chopsticks](/develop/toolkit/parachains/fork-chains/chopsticks){target=\\_blank}, developers can:\n\n- Simulate the entire lifecycle of a proposal.\n- Test the voting outcomes by varying the support and approval levels.\n- Analyze the effects of a successfully executed proposal on the network's state.\n- Identify and troubleshoot potential issues or unexpected consequences before submitting the proposals.\n\nThis tutorial will guide you through using Chopsticks to test OpenGov proposals thoroughly. This ensures that when you submit a proposal to the live network, you can do so with confidence in its effects and viability."}
{"page_id": "tutorials-onchain-governance-fast-track-gov-proposal", "page_title": "Fast Track a Governance Proposal", "index": 1, "depth": 2, "title": "Prerequisites", "anchor": "prerequisites", "start_char": 1714, "end_char": 2238, "estimated_token_count": 130, "token_estimator": "heuristic-v1", "text": "## Prerequisites\n\nBefore proceeding, ensure the following prerequisites are met:\n\n- **Chopsticks installation**: If you have not installed Chopsticks yet, refer to the [Install Chopsticks](/develop/toolkit/parachains/fork-chains/chopsticks/get-started/#install-chopsticks){target=\\_blank} guide for detailed instructions.\n- **Familiarity with key concepts**:\n - [Polkadot.js](/develop/toolkit/api-libraries/polkadot-js-api){target=\\_blank}\n - [OpenGov](/polkadot-protocol/onchain-governance/overview){target=\\_blank}"}
{"page_id": "tutorials-onchain-governance-fast-track-gov-proposal", "page_title": "Fast Track a Governance Proposal", "index": 2, "depth": 2, "title": "Set Up the Project", "anchor": "set-up-the-project", "start_char": 2238, "end_char": 3770, "estimated_token_count": 327, "token_estimator": "heuristic-v1", "text": "## Set Up the Project\n\nBefore testing OpenGov proposals, you need to set up your development environment. \nYou'll set up a TypeScript project and install the required dependencies to simulate and evaluate proposals. You'll use Chopsticks to fork the Polkadot network and simulate the proposal lifecycle, while Polkadot.js will be your interface for interacting with the forked network and submitting proposals.\n\nFollow these steps to set up your project:\n\n1. Create a new project directory and navigate into it:\n ```bash\n mkdir opengov-chopsticks && cd opengov-chopsticks\n ```\n\n2. Initialize a new TypeScript project:\n ```bash\n npm init -y \\\n && npm install typescript ts-node @types/node --save-dev \\\n && npx tsc --init\n ```\n\n3. Install the required dependencies:\n ```bash\n npm install @polkadot/api @acala-network/chopsticks\n ```\n\n4. Create a new TypeScript file for your script:\n ```bash\n touch test-proposal.ts\n ```\n\n !!!note\n You'll write your code to simulate and test OpenGov proposals in the `test-proposal.ts` file.\n\n5. Open the `tsconfig.json` file and ensure it includes these compiler options:\n ```json\n {\n \"compilerOptions\": {\n \"module\": \"CommonJS\",\n \"esModuleInterop\": true,\n \"target\": \"esnext\",\n \"moduleResolution\": \"node\",\n \"declaration\": true,\n \"sourceMap\": true,\n \"skipLibCheck\": true,\n \"outDir\": \"dist\",\n \"composite\": true\n }\n }\n\n ```"}
@@ -1313,7 +1313,7 @@
{"page_id": "tutorials-onchain-governance-fast-track-gov-proposal", "page_title": "Fast Track a Governance Proposal", "index": 10, "depth": 2, "title": "Summary", "anchor": "summary", "start_char": 92075, "end_char": 92735, "estimated_token_count": 125, "token_estimator": "heuristic-v1", "text": "## Summary\n\nIn this tutorial, you've learned how to use Chopsticks to test OpenGov proposals on a local fork of the Polkadot network. You've set up a TypeScript project, connected to a local fork, submitted a proposal, and forced its execution for testing purposes. This process allows you to:\n\n- Safely experiment with different types of proposals.\n- Test the effects of proposals without affecting the live network.\n- Rapidly iterate and debug your governance ideas.\n\nUsing these techniques, you can develop and refine your proposals before submitting them to the Polkadot network, ensuring they're well-tested and likely to achieve their intended effects."}
{"page_id": "tutorials-onchain-governance-fast-track-gov-proposal", "page_title": "Fast Track a Governance Proposal", "index": 11, "depth": 2, "title": "Full Code", "anchor": "full-code", "start_char": 92735, "end_char": 169907, "estimated_token_count": 15583, "token_estimator": "heuristic-v1", "text": "## Full Code\n\nHere's the complete code for the `test-proposal.ts` file, incorporating all the steps we've covered:\n\n??? code \"`test-proposal.ts`\"\n ```typescript\n // --8<-- [start:imports]\n import '@polkadot/api-augment/polkadot';\n import { FrameSupportPreimagesBounded } from '@polkadot/types/lookup';\n import { blake2AsHex } from '@polkadot/util-crypto';\n import { ApiPromise, Keyring, WsProvider } from '@polkadot/api';\n import { type SubmittableExtrinsic } from '@polkadot/api/types';\n import { ISubmittableResult } from '@polkadot/types/types';\n // --8<-- [end:imports]\n\n // --8<-- [start:connectToFork]\n /**\n * Establishes a connection to the local forked chain.\n *\n * @returns A promise that resolves to an `ApiPromise` instance connected to the local chain.\n */\n async function connectToFork(): Promise
{\n const wsProvider = new WsProvider('ws://localhost:8000');\n const api = await ApiPromise.create({ provider: wsProvider });\n await api.isReady;\n console.log(`Connected to chain: ${await api.rpc.system.chain()}`);\n return api;\n }\n // --8<-- [end:connectToFork]\n\n // --8<-- [start:generateProposal]\n /**\n * Generates a proposal by submitting a preimage, creating the proposal, and placing a deposit.\n *\n * @param api - An instance of the Polkadot.js API promise used to interact with the blockchain.\n * @param call - The extrinsic to be executed, encapsulating the specific action to be proposed.\n * @param origin - The origin of the proposal, specifying the source authority (e.g., `{ System: 'Root' }`).\n * @returns A promise that resolves to the proposal ID of the generated proposal.\n *\n */\n async function generateProposal(\n api: ApiPromise,\n call: SubmittableExtrinsic<'promise', ISubmittableResult>,\n origin: any\n ): Promise {\n // Initialize the keyring\n const keyring = new Keyring({ type: 'sr25519' });\n\n // Set up Alice development account\n const alice = keyring.addFromUri('//Alice');\n\n // Get the next available proposal index\n const proposalIndex = (\n await api.query.referenda.referendumCount()\n ).toNumber();\n\n // Execute the batch transaction\n await new Promise(async (resolve) => {\n const unsub = await api.tx.utility\n .batch([\n // Register the preimage for your proposal\n api.tx.preimage.notePreimage(call.method.toHex()),\n // Submit your proposal to the referenda system\n api.tx.referenda.submit(\n origin as any,\n {\n Lookup: {\n Hash: call.method.hash.toHex(),\n len: call.method.encodedLength,\n },\n },\n { At: 0 }\n ),\n // Place the required decision deposit\n api.tx.referenda.placeDecisionDeposit(proposalIndex),\n ])\n .signAndSend(alice, (status: any) => {\n if (status.blockNumber) {\n unsub();\n resolve();\n }\n });\n });\n return proposalIndex;\n }\n // --8<-- [end:generateProposal]\n\n // --8<-- [start:moveScheduledCallTo]\n /**\n * Moves a scheduled call to a specified future block if it matches the given verifier criteria.\n *\n * @param api - An instance of the Polkadot.js API promise used to interact with the blockchain.\n * @param blockCounts - The number of blocks to move the scheduled call forward.\n * @param verifier - A function to verify if a scheduled call matches the desired criteria.\n * @throws An error if no matching scheduled call is found.\n */\n async function moveScheduledCallTo(\n api: ApiPromise,\n blockCounts: number,\n verifier: (call: FrameSupportPreimagesBounded) => boolean\n ) {\n // Get the current block number\n const blockNumber = (await api.rpc.chain.getHeader()).number.toNumber();\n \n // Retrieve the scheduler's agenda entries\n const agenda = await api.query.scheduler.agenda.entries();\n \n // Initialize a flag to track if a matching scheduled call is found\n let found = false;\n \n // Iterate through the scheduler's agenda entries\n for (const agendaEntry of agenda) {\n // Iterate through the scheduled entries in the current agenda entry\n for (const scheduledEntry of agendaEntry[1]) {\n // Check if the scheduled entry is valid and matches the verifier criteria\n if (scheduledEntry.isSome && verifier(scheduledEntry.unwrap().call)) {\n found = true;\n \n // Overwrite the agendaEntry item in storage\n const result = await api.rpc('dev_setStorage', [\n [agendaEntry[0]], // require to ensure unique id\n [\n await api.query.scheduler.agenda.key(blockNumber + blockCounts),\n agendaEntry[1].toHex(),\n ],\n ]);\n \n // 
Check if the scheduled call has an associated lookup\n if (scheduledEntry.unwrap().maybeId.isSome) {\n // Get the lookup ID\n const id = scheduledEntry.unwrap().maybeId.unwrap().toHex();\n const lookup = await api.query.scheduler.lookup(id);\n\n // Check if the lookup exists\n if (lookup.isSome) {\n // Get the lookup key\n const lookupKey = await api.query.scheduler.lookup.key(id);\n \n // Create a new lookup object with the updated block number\n const fastLookup = api.registry.createType('Option<(u32,u32)>', [\n blockNumber + blockCounts,\n 0,\n ]);\n \n // Overwrite the lookup entry in storage\n const result = await api.rpc('dev_setStorage', [\n [lookupKey, fastLookup.toHex()],\n ]);\n }\n }\n }\n }\n }\n \n // Throw an error if no matching scheduled call is found\n if (!found) {\n throw new Error('No scheduled call found');\n }\n }\n // --8<-- [end:moveScheduledCallTo]\n\n // --8<-- [start:forceProposalExecution]\n /**\n * Forces the execution of a specific proposal by updating its referendum state and ensuring the execution process is triggered.\n *\n * @param api - An instance of the Polkadot.js API promise used to interact with the blockchain.\n * @param proposalIndex - The index of the proposal to be executed.\n * @throws An error if the referendum is not found or not in an ongoing state.\n */\n async function forceProposalExecution(api: ApiPromise, proposalIndex: number) {\n // Retrieve the referendum data for the given proposal index\n const referendumData = await api.query.referenda.referendumInfoFor(\n proposalIndex\n );\n // Get the storage key for the referendum data\n const referendumKey =\n api.query.referenda.referendumInfoFor.key(proposalIndex);\n\n // Check if the referendum data exists\n if (!referendumData.isSome) {\n throw new Error(`Referendum ${proposalIndex} not found`);\n }\n\n const referendumInfo = referendumData.unwrap();\n\n // Check if the referendum is in an ongoing state\n if (!referendumInfo.isOngoing) {\n throw new Error(`Referendum ${proposalIndex} is not ongoing`);\n }\n\n // Get the ongoing referendum data\n const ongoingData = referendumInfo.asOngoing;\n // Convert the ongoing data to JSON\n const ongoingJson = ongoingData.toJSON();\n\n // Support Lookup, Inline or Legacy proposals\n const callHash = ongoingData.proposal.isLookup\n ? ongoingData.proposal.asLookup.toHex()\n : ongoingData.proposal.isInline\n ? 
blake2AsHex(ongoingData.proposal.asInline.toHex())\n : ongoingData.proposal.asLegacy.toHex();\n\n // Get the total issuance of the native token\n const totalIssuance = (await api.query.balances.totalIssuance()).toBigInt();\n\n // Get the current block number\n const proposalBlockTarget = (\n await api.rpc.chain.getHeader()\n ).number.toNumber();\n\n // Create a new proposal data object with the updated fields\n const fastProposalData = {\n ongoing: {\n ...ongoingJson,\n enactment: { after: 0 },\n deciding: {\n since: proposalBlockTarget - 1,\n confirming: proposalBlockTarget - 1,\n },\n tally: {\n ayes: totalIssuance - 1n,\n nays: 0,\n support: totalIssuance - 1n,\n },\n alarm: [proposalBlockTarget + 1, [proposalBlockTarget + 1, 0]],\n },\n };\n\n // Create a new proposal object from the proposal data\n let fastProposal;\n try {\n fastProposal = api.registry.createType(\n `Option`,\n fastProposalData\n );\n } catch {\n fastProposal = api.registry.createType(\n `Option`,\n fastProposalData\n );\n }\n\n // Update the storage with the new proposal object\n const result = await api.rpc('dev_setStorage', [\n [referendumKey, fastProposal.toHex()],\n ]);\n\n // Fast forward the nudge referendum to the next block to get the refendum to be scheduled\n await moveScheduledCallTo(api, 1, (call) => {\n if (!call.isInline) {\n return false;\n }\n\n const callData = api.createType('Call', call.asInline.toHex());\n\n return (\n callData.method == 'nudgeReferendum' &&\n (callData.args[0] as any).toNumber() == proposalIndex\n );\n });\n\n // Create a new block\n await api.rpc('dev_newBlock', { count: 1 });\n\n // Move the scheduled call to the next block\n await moveScheduledCallTo(api, 1, (call) =>\n call.isLookup\n ? call.asLookup.toHex() == callHash\n : call.isInline\n ? 
blake2AsHex(call.asInline.toHex()) == callHash\n : call.asLegacy.toHex() == callHash\n );\n\n // Create another new block\n await api.rpc('dev_newBlock', { count: 1 });\n }\n // --8<-- [end:forceProposalExecution]\n\n // --8<-- [start:main]\n const main = async () => {\n // Connect to the forked chain\n const api = await connectToFork();\n\n // Select the call to perform\n const call = api.tx.system.setCodeWithoutChecks('0x1234');\n\n // Select the origin\n const origin = {\n System: 'Root',\n };\n\n // Submit preimage, submit proposal, and place decision deposit\n const proposalIndex = await generateProposal(api, call, origin);\n\n // Force the proposal to be executed\n await forceProposalExecution(api, proposalIndex);\n\n process.exit(0);\n };\n // --8<-- [end:main]\n\n // --8<-- [start:try-catch-block]\n try {\n main();\n } catch (e) {\n console.log(e);\n process.exit(1);\n }\n // --8<-- [end:try-catch-block]\n    ```"}
state\n if (!referendumInfo.isOngoing) {\n throw new Error(`Referendum ${proposalIndex} is not ongoing`);\n }\n\n // Get the ongoing referendum data\n const ongoingData = referendumInfo.asOngoing;\n // Convert the ongoing data to JSON\n const ongoingJson = ongoingData.toJSON();\n\n // Support Lookup, Inline or Legacy proposals\n const callHash = ongoingData.proposal.isLookup\n ? ongoingData.proposal.asLookup.toHex()\n : ongoingData.proposal.isInline\n ? blake2AsHex(ongoingData.proposal.asInline.toHex())\n : ongoingData.proposal.asLegacy.toHex();\n\n // Get the total issuance of the native token\n const totalIssuance = (await api.query.balances.totalIssuance()).toBigInt();\n\n // Get the current block number\n const proposalBlockTarget = (\n await api.rpc.chain.getHeader()\n ).number.toNumber();\n\n // Create a new proposal data object with the updated fields\n const fastProposalData = {\n ongoing: {\n ...ongoingJson,\n enactment: { after: 0 },\n deciding: {\n since: proposalBlockTarget - 1,\n confirming: proposalBlockTarget - 1,\n },\n tally: {\n ayes: totalIssuance - 1n,\n nays: 0,\n support: totalIssuance - 1n,\n },\n alarm: [proposalBlockTarget + 1, [proposalBlockTarget + 1, 0]],\n },\n };\n\n // Create a new proposal object from the proposal data\n let fastProposal;\n try {\n fastProposal = api.registry.createType(\n `Option`,\n fastProposalData\n );\n } catch {\n fastProposal = api.registry.createType(\n `Option`,\n fastProposalData\n );\n }\n\n // Update the storage with the new proposal object\n const result = await api.rpc('dev_setStorage', [\n [referendumKey, fastProposal.toHex()],\n ]);\n\n // Fast forward the nudge referendum to the next block to get the refendum to be scheduled\n await moveScheduledCallTo(api, 1, (call) => {\n if (!call.isInline) {\n return false;\n }\n\n const callData = api.createType('Call', call.asInline.toHex());\n\n return (\n callData.method == 'nudgeReferendum' &&\n (callData.args[0] as any).toNumber() == proposalIndex\n );\n });\n\n // Create a new block\n await api.rpc('dev_newBlock', { count: 1 });\n\n // Move the scheduled call to the next block\n await moveScheduledCallTo(api, 1, (call) =>\n call.isLookup\n ? call.asLookup.toHex() == callHash\n : call.isInline\n ? blake2AsHex(call.asInline.toHex()) == callHash\n : call.asLegacy.toHex() == callHash\n );\n\n // Create another new block\n await api.rpc('dev_newBlock', { count: 1 });\n }\n // --8<-- [end:forceProposalExecution]\n\n // --8<-- [start:main]\n const main = async () => {\n // Connect to the forked chain\n const api = await connectToFork();\n\n // Select the call to perform\n const call = api.tx.system.setCodeWithoutChecks('0x1234');\n\n // Select the origin\n const origin = {\n System: 'Root',\n };\n\n // Submit preimage, submit proposal, and place decision deposit\n const proposalIndex = await generateProposal(api, call, origin);\n\n // Force the proposal to be executed\n await forceProposalExecution(api, proposalIndex);\n\n process.exit(0);\n };\n // --8<-- [end:main]\n\n // --8<-- [start:try-catch-block]\n try {\n main();\n } catch (e) {\n console.log(e);\n process.exit(1);\n }\n // --8<-- [end:try-catch-block]\n\n ```"}
{"page_id": "tutorials-onchain-governance", "page_title": "On-Chain Governance Tutorials", "index": 0, "depth": 2, "title": "In This Section", "anchor": "in-this-section", "start_char": 405, "end_char": 455, "estimated_token_count": 12, "token_estimator": "heuristic-v1", "text": "## In This Section\n\n:::INSERT_IN_THIS_SECTION:::"}
-{"page_id": "tutorials-onchain-governance", "page_title": "On-Chain Governance Tutorials", "index": 1, "depth": 2, "title": "Additional Resources", "anchor": "additional-resources", "start_char": 455, "end_char": 883, "estimated_token_count": 117, "token_estimator": "heuristic-v1", "text": "## Additional Resources\n\n"}
+{"page_id": "tutorials-onchain-governance", "page_title": "On-Chain Governance Tutorials", "index": 1, "depth": 2, "title": "Additional Resources", "anchor": "additional-resources", "start_char": 455, "end_char": 872, "estimated_token_count": 114, "token_estimator": "heuristic-v1", "text": "## Additional Resources\n\n"}
{"page_id": "tutorials-polkadot-sdk-parachains-zero-to-hero-add-pallets-to-runtime", "page_title": "Add Pallets to the Runtime", "index": 0, "depth": 2, "title": "Introduction", "anchor": "introduction", "start_char": 30, "end_char": 866, "estimated_token_count": 192, "token_estimator": "heuristic-v1", "text": "## Introduction\n\nIn previous tutorials, you learned how to [create a custom pallet](/tutorials/polkadot-sdk/parachains/zero-to-hero/build-custom-pallet/){target=\\_blank} and [test it](/tutorials/polkadot-sdk/parachains/zero-to-hero/pallet-unit-testing/){target=\\_blank}. The next step is to include this pallet in your runtime, integrating it into the core logic of your blockchain.\n\nThis tutorial will guide you through adding two pallets to your runtime: the custom pallet you previously developed and the [utility pallet](https://paritytech.github.io/polkadot-sdk/master/pallet_utility/index.html){target=\\_blank}. This standard Polkadot SDK pallet provides powerful dispatch functionality. The utility pallet offers, for example, batch dispatch, a stateless operation that enables executing multiple calls in a single transaction."}
{"page_id": "tutorials-polkadot-sdk-parachains-zero-to-hero-add-pallets-to-runtime", "page_title": "Add Pallets to the Runtime", "index": 1, "depth": 2, "title": "Add the Pallets as Dependencies", "anchor": "add-the-pallets-as-dependencies", "start_char": 866, "end_char": 8510, "estimated_token_count": 1856, "token_estimator": "heuristic-v1", "text": "## Add the Pallets as Dependencies\n\nFirst, you'll update the runtime's `Cargo.toml` file to include the Utility pallet and your custom pallets as dependencies for the runtime. Follow these steps:\n\n1. Open the `runtime/Cargo.toml` file and locate the `[dependencies]` section. Add pallet-utility as one of the features for the `polkadot-sdk` dependency with the following line:\n\n ```toml hl_lines=\"4\" title=\"runtime/Cargo.toml\"\n [dependencies]\n ...\n polkadot-sdk = { workspace = true, features = [\n \"pallet-utility\",\n ...\n ], default-features = false }\n ```\n\n2. In the same `[dependencies]` section, add the custom pallet that you built from scratch with the following line:\n\n ```toml hl_lines=\"3\" title=\"Cargo.toml\"\n [dependencies]\n ...\n custom-pallet = { path = \"../pallets/custom-pallet\", default-features = false }\n ```\n\n3. In the `[features]` section, add the custom pallet to the `std` feature list:\n\n ```toml hl_lines=\"5\" title=\"Cargo.toml\"\n [features]\n default = [\"std\"]\n std = [\n ...\n \"custom-pallet/std\",\n ...\n ]\n ```\n\n3. Save the changes and close the `Cargo.toml` file.\n\n Once you have saved your file, it should look like the following:\n\n ???- code \"runtime/Cargo.toml\"\n \n ```rust title=\"runtime/Cargo.toml\"\n [package]\n name = \"parachain-template-runtime\"\n description = \"A parachain runtime template built with Substrate and Cumulus, part of Polkadot Sdk.\"\n version = \"0.1.0\"\n license = \"Unlicense\"\n authors.workspace = true\n homepage.workspace = true\n repository.workspace = true\n edition.workspace = true\n publish = false\n\n [package.metadata.docs.rs]\n targets = [\"x86_64-unknown-linux-gnu\"]\n\n [build-dependencies]\n docify = { workspace = true }\n substrate-wasm-builder = { optional = true, workspace = true, default-features = true }\n\n [dependencies]\n codec = { features = [\"derive\"], workspace = true }\n cumulus-pallet-parachain-system.workspace = true\n docify = { workspace = true }\n hex-literal = { optional = true, workspace = true, default-features = true }\n log = { workspace = true }\n pallet-parachain-template = { path = \"../pallets/template\", default-features = false }\n polkadot-sdk = { workspace = true, features = [\n \"pallet-utility\",\n \"cumulus-pallet-aura-ext\",\n \"cumulus-pallet-session-benchmarking\",\n \"cumulus-pallet-weight-reclaim\",\n \"cumulus-pallet-xcm\",\n \"cumulus-pallet-xcmp-queue\",\n \"cumulus-primitives-aura\",\n \"cumulus-primitives-core\",\n \"cumulus-primitives-utility\",\n \"pallet-aura\",\n \"pallet-authorship\",\n \"pallet-balances\",\n \"pallet-collator-selection\",\n \"pallet-message-queue\",\n \"pallet-session\",\n \"pallet-sudo\",\n \"pallet-timestamp\",\n \"pallet-transaction-payment\",\n \"pallet-transaction-payment-rpc-runtime-api\",\n \"pallet-xcm\",\n \"parachains-common\",\n \"polkadot-parachain-primitives\",\n \"polkadot-runtime-common\",\n \"runtime\",\n \"staging-parachain-info\",\n \"staging-xcm\",\n \"staging-xcm-builder\",\n \"staging-xcm-executor\",\n ], default-features = false }\n scale-info = { features = [\"derive\"], workspace = true }\n serde_json = { workspace = true, default-features = false, 
features = [\n \"alloc\",\n ] }\n smallvec = { workspace = true, default-features = true }\n\n custom-pallet = { path = \"../pallets/custom-pallet\", default-features = false }\n\n [features]\n default = [\"std\"]\n std = [\n \"codec/std\",\n \"cumulus-pallet-parachain-system/std\",\n \"log/std\",\n \"pallet-parachain-template/std\",\n \"polkadot-sdk/std\",\n \"scale-info/std\",\n \"serde_json/std\",\n \"substrate-wasm-builder\",\n \"custom-pallet/std\",\n ]\n\n runtime-benchmarks = [\n \"cumulus-pallet-parachain-system/runtime-benchmarks\",\n \"hex-literal\",\n \"pallet-parachain-template/runtime-benchmarks\",\n \"polkadot-sdk/runtime-benchmarks\",\n ]\n\n try-runtime = [\n \"cumulus-pallet-parachain-system/try-runtime\",\n \"pallet-parachain-template/try-runtime\",\n \"polkadot-sdk/try-runtime\",\n ]\n\n # Enable the metadata hash generation.\n #\n # This is hidden behind a feature because it increases the compile time.\n # The wasm binary needs to be compiled twice, once to fetch the metadata,\n # generate the metadata hash and then a second time with the\n # `RUNTIME_METADATA_HASH` environment variable set for the `CheckMetadataHash`\n # extension.\n metadata-hash = [\"substrate-wasm-builder/metadata-hash\"]\n\n # A convenience feature for enabling things when doing a build\n # for an on-chain release.\n on-chain-release-build = [\"metadata-hash\"]\n\n ```\n\nUpdate your root parachain template's `Cargo.toml` file to include your custom pallet as a dependency. Follow these steps:\n\n1. Open the `./Cargo.toml` file and locate the `[workspace]` section. \n \n Make sure the `custom-pallet` is a member of the workspace:\n\n ```toml hl_lines=\"4\" title=\"Cargo.toml\"\n [workspace]\n default-members = [\"pallets/template\", \"runtime\"]\n members = [\n \"node\", \"pallets/custom-pallet\",\n \"pallets/template\",\n \"runtime\",\n ]\n ```\n\n???- code \"./Cargo.toml\"\n\n ```rust title=\"./Cargo.toml\"\n [workspace.package]\n license = \"MIT-0\"\n authors = [\"Parity Technologies \"]\n homepage = \"https://paritytech.github.io/polkadot-sdk/\"\n repository = \"https://github.com/paritytech/polkadot-sdk-parachain-template.git\"\n edition = \"2021\"\n\n [workspace]\n default-members = [\"pallets/template\", \"runtime\"]\n members = [\n \"node\", \"pallets/custom-pallet\",\n \"pallets/template\",\n \"runtime\",\n ]\n resolver = \"2\"\n\n [workspace.dependencies]\n parachain-template-runtime = { path = \"./runtime\", default-features = false }\n pallet-parachain-template = { path = \"./pallets/template\", default-features = false }\n clap = { version = \"4.5.13\" }\n color-print = { version = \"0.3.4\" }\n docify = { version = \"0.2.9\" }\n futures = { version = \"0.3.31\" }\n jsonrpsee = { version = \"0.24.3\" }\n log = { version = \"0.4.22\", default-features = false }\n polkadot-sdk = { version = \"2503.0.1\", default-features = false }\n prometheus-endpoint = { version = \"0.17.2\", default-features = false, package = \"substrate-prometheus-endpoint\" }\n serde = { version = \"1.0.214\", default-features = false }\n codec = { version = \"3.7.4\", default-features = false, package = \"parity-scale-codec\" }\n cumulus-pallet-parachain-system = { version = \"0.20.0\", default-features = false }\n hex-literal = { version = \"0.4.1\", default-features = false }\n scale-info = { version = \"2.11.6\", default-features = false }\n serde_json = { version = \"1.0.132\", default-features = false }\n smallvec = { version = \"1.11.0\", default-features = false }\n substrate-wasm-builder = { version = 
\"26.0.1\", default-features = false }\n frame = { version = \"0.9.1\", default-features = false, package = \"polkadot-sdk-frame\" }\n\n [profile.release]\n opt-level = 3\n panic = \"unwind\"\n\n [profile.production]\n codegen-units = 1\n inherits = \"release\"\n lto = true\n ```"}
{"page_id": "tutorials-polkadot-sdk-parachains-zero-to-hero-add-pallets-to-runtime", "page_title": "Add Pallets to the Runtime", "index": 2, "depth": 3, "title": "Update the Runtime Configuration", "anchor": "update-the-runtime-configuration", "start_char": 8510, "end_char": 10415, "estimated_token_count": 406, "token_estimator": "heuristic-v1", "text": "### Update the Runtime Configuration\n\nConfigure the pallets by implementing their `Config` trait and update the runtime macro to include the new pallets:\n\n1. Add the `OriginCaller` import:\n\n ```rust title=\"mod.rs\" hl_lines=\"8\"\n // Local module imports\n use super::OriginCaller;\n ...\n ```\n\n2. Implement the [`Config`](https://paritytech.github.io/polkadot-sdk/master/pallet_utility/pallet/trait.Config.html){target=\\_blank} trait for both pallets at the end of the `runtime/src/config/mod.rs` file:\n\n ```rust title=\"mod.rs\" hl_lines=\"8-25\"\n ...\n /// Configure the pallet template in pallets/template.\n impl pallet_parachain_template::Config for Runtime {\n type RuntimeEvent = RuntimeEvent;\n type WeightInfo = pallet_parachain_template::weights::SubstrateWeight;\n }\n\n // Configure utility pallet.\n impl pallet_utility::Config for Runtime {\n type RuntimeEvent = RuntimeEvent;\n type RuntimeCall = RuntimeCall;\n type PalletsOrigin = OriginCaller;\n type WeightInfo = pallet_utility::weights::SubstrateWeight;\n }\n // Define counter max value runtime constant.\n parameter_types! {\n pub const CounterMaxValue: u32 = 500;\n }\n\n // Configure custom pallet.\n impl custom_pallet::Config for Runtime {\n type RuntimeEvent = RuntimeEvent;\n type CounterMaxValue = CounterMaxValue;\n }\n ```\n\n3. Locate the `#[frame_support::runtime]` macro in the `runtime/src/lib.rs` file and add the pallets:\n\n ```rust hl_lines=\"9-14\" title=\"lib.rs\"\n #[frame_support::runtime]\n mod runtime {\n #[runtime::runtime]\n #[runtime::derive(\n ...\n )]\n pub struct Runtime;\n #[runtime::pallet_index(51)]\n pub type Utility = pallet_utility;\n\n #[runtime::pallet_index(52)]\n pub type CustomPallet = custom_pallet;\n }\n ```"}
@@ -1433,7 +1433,7 @@
{"page_id": "tutorials-polkadot-sdk-system-chains-asset-hub", "page_title": "Asset Hub Tutorials", "index": 0, "depth": 2, "title": "Benefits of Asset Hub", "anchor": "benefits-of-asset-hub", "start_char": 23, "end_char": 1017, "estimated_token_count": 224, "token_estimator": "heuristic-v1", "text": "## Benefits of Asset Hub\n\nPolkadot SDK-based relay chains focus on security and consensus, leaving asset management to an external component, such as a [system chain](/polkadot-protocol/architecture/system-chains/){target=\\_blank}. The [Asset Hub](/polkadot-protocol/architecture/system-chains/asset-hub/){target=\\_blank} is one example of a system chain and is vital to managing tokens which aren't native to the Polkadot ecosystem. Developers opting to integrate with Asset Hub can expect the following benefits:\n\n- **Support for non-native on-chain assets**: Create and manage your own tokens or NFTs with Polkadot ecosystem compatibility available out of the box.\n- **Lower transaction fees**: Approximately 1/10th of the cost of using the relay chain.\n- **Reduced deposit requirements**: Approximately 1/100th of the deposit required for the relay chain.\n- **Payment of fees with non-native assets**: No need to buy native tokens for gas, increasing flexibility for developers and users."}
{"page_id": "tutorials-polkadot-sdk-system-chains-asset-hub", "page_title": "Asset Hub Tutorials", "index": 1, "depth": 2, "title": "Get Started", "anchor": "get-started", "start_char": 1017, "end_char": 1303, "estimated_token_count": 48, "token_estimator": "heuristic-v1", "text": "## Get Started\n\nThrough these tutorials, you'll learn how to manage cross-chain assets, including:\n\n- Asset registration and configuration\n- Cross-chain asset representation\n- Liquidity pool creation and management \n- Asset swapping and conversion\n- Transaction parameter optimization"}
{"page_id": "tutorials-polkadot-sdk-system-chains-asset-hub", "page_title": "Asset Hub Tutorials", "index": 2, "depth": 2, "title": "In This Section", "anchor": "in-this-section", "start_char": 1303, "end_char": 1353, "estimated_token_count": 12, "token_estimator": "heuristic-v1", "text": "## In This Section\n\n:::INSERT_IN_THIS_SECTION:::"}
-{"page_id": "tutorials-polkadot-sdk-system-chains-asset-hub", "page_title": "Asset Hub Tutorials", "index": 3, "depth": 2, "title": "Additional Resources", "anchor": "additional-resources", "start_char": 1353, "end_char": 1778, "estimated_token_count": 116, "token_estimator": "heuristic-v1", "text": "## Additional Resources\n\n"}
+{"page_id": "tutorials-polkadot-sdk-system-chains-asset-hub", "page_title": "Asset Hub Tutorials", "index": 3, "depth": 2, "title": "Additional Resources", "anchor": "additional-resources", "start_char": 1353, "end_char": 1767, "estimated_token_count": 113, "token_estimator": "heuristic-v1", "text": "## Additional Resources\n\n"}
{"page_id": "tutorials-polkadot-sdk-system-chains", "page_title": "System Chains Tutorials", "index": 0, "depth": 2, "title": "For Parachain Integrators", "anchor": "for-parachain-integrators", "start_char": 619, "end_char": 990, "estimated_token_count": 83, "token_estimator": "heuristic-v1", "text": "## For Parachain Integrators\n\nEnhance cross-chain interoperability and expand your parachain’s functionality:\n\n- **[Register your parachain's asset on Asset Hub](/tutorials/polkadot-sdk/system-chains/asset-hub/register-foreign-asset/)**: Connect your parachain’s assets to Asset Hub as a foreign asset using XCM, enabling seamless cross-chain transfers and integration."}
{"page_id": "tutorials-polkadot-sdk-system-chains", "page_title": "System Chains Tutorials", "index": 1, "depth": 2, "title": "For Developers Leveraging System Chains", "anchor": "for-developers-leveraging-system-chains", "start_char": 990, "end_char": 1551, "estimated_token_count": 134, "token_estimator": "heuristic-v1", "text": "## For Developers Leveraging System Chains\n\nUnlock new possibilities by tapping into Polkadot’s system chains:\n\n- **[Register a new asset on Asset Hub](/tutorials/polkadot-sdk/system-chains/asset-hub/register-local-asset/)**: Create and customize assets directly on Asset Hub (local assets) with parameters like metadata, minimum balances, and more.\n\n- **[Convert Assets](/tutorials/polkadot-sdk/system-chains/asset-hub/asset-conversion/)**: Use Asset Hub's AMM functionality to swap between different assets, provide liquidity to pools, and manage LP tokens."}
{"page_id": "tutorials-polkadot-sdk-system-chains", "page_title": "System Chains Tutorials", "index": 2, "depth": 2, "title": "In This Section", "anchor": "in-this-section", "start_char": 1551, "end_char": 1600, "estimated_token_count": 12, "token_estimator": "heuristic-v1", "text": "## In This Section\n\n:::INSERT_IN_THIS_SECTION:::"}
@@ -1459,7 +1459,7 @@
{"page_id": "tutorials-polkadot-sdk-testing", "page_title": "Blockchain Testing Tutorials", "index": 1, "depth": 2, "title": "In This Section", "anchor": "in-this-section", "start_char": 794, "end_char": 843, "estimated_token_count": 12, "token_estimator": "heuristic-v1", "text": "## In This Section\n\n:::INSERT_IN_THIS_SECTION:::"}
{"page_id": "tutorials-polkadot-sdk", "page_title": "Polkadot SDK Tutorials", "index": 0, "depth": 2, "title": "Build and Deploy a Parachain", "anchor": "build-and-deploy-a-parachain", "start_char": 450, "end_char": 1038, "estimated_token_count": 133, "token_estimator": "heuristic-v1", "text": "## Build and Deploy a Parachain\n\nFollow these key milestones to guide you through parachain development. Each step links to detailed tutorials for a deeper dive into each stage:\n\n- **[Install the Polkadot SDK](/develop/parachains/install-polkadot-sdk/)**: Set up the necessary tools to begin building on Polkadot. This step will get your environment ready for parachain development.\n\n- **[Parachains Zero to Hero](/tutorials/polkadot-sdk/parachains/zero-to-hero/)**: A series of step-by-step guides to building, testing, and deploying custom pallets and runtimes using the Polkadot SDK."}
{"page_id": "tutorials-polkadot-sdk", "page_title": "Polkadot SDK Tutorials", "index": 1, "depth": 2, "title": "In This Section", "anchor": "in-this-section", "start_char": 1038, "end_char": 1088, "estimated_token_count": 12, "token_estimator": "heuristic-v1", "text": "## In This Section\n\n:::INSERT_IN_THIS_SECTION:::"}
-{"page_id": "tutorials-polkadot-sdk", "page_title": "Polkadot SDK Tutorials", "index": 2, "depth": 2, "title": "Additional Resources", "anchor": "additional-resources", "start_char": 1088, "end_char": 1489, "estimated_token_count": 113, "token_estimator": "heuristic-v1", "text": "## Additional Resources\n\n"}
+{"page_id": "tutorials-polkadot-sdk", "page_title": "Polkadot SDK Tutorials", "index": 2, "depth": 2, "title": "Additional Resources", "anchor": "additional-resources", "start_char": 1088, "end_char": 1478, "estimated_token_count": 110, "token_estimator": "heuristic-v1", "text": "## Additional Resources\n\n"}
{"page_id": "tutorials-smart-contracts-demo-aplications-deploying-uniswap-v2", "page_title": "Deploying Uniswap V2 on Polkadot", "index": 0, "depth": 2, "title": "Introduction", "anchor": "introduction", "start_char": 191, "end_char": 857, "estimated_token_count": 131, "token_estimator": "heuristic-v1", "text": "## Introduction\n\nDecentralized exchanges (DEXs) are a cornerstone of the DeFi ecosystem, allowing for permissionless token swaps without intermediaries. [Uniswap V2](https://docs.uniswap.org/contracts/v2/overview){target=\\_blank}, with its Automated Market Maker (AMM) model, revolutionized DEXs by enabling liquidity provision for any ERC-20 token pair.\n\nThis tutorial will guide you through how Uniswap V2 works so you can take advantage of it in your projects deployed to Polkadot Hub. By understanding these contracts, you'll gain hands-on experience with one of the most influential DeFi protocols and understand how it functions across blockchain ecosystems."}
{"page_id": "tutorials-smart-contracts-demo-aplications-deploying-uniswap-v2", "page_title": "Deploying Uniswap V2 on Polkadot", "index": 1, "depth": 2, "title": "Prerequisites", "anchor": "prerequisites", "start_char": 857, "end_char": 1352, "estimated_token_count": 121, "token_estimator": "heuristic-v1", "text": "## Prerequisites\n\nBefore starting, make sure you have:\n\n- Node.js (v16.0.0 or later) and npm installed.\n- Basic understanding of Solidity and JavaScript.\n- Familiarity with [`hardhat-polkadot`](/develop/smart-contracts/dev-environments/hardhat){target=\\_blank} development environment.\n- Some PAS test tokens to cover transaction fees (obtained from the [Polkadot faucet](https://faucet.polkadot.io/?parachain=1111){target=\\_blank}).\n- Basic understanding of how AMMs and liquidity pools work."}
{"page_id": "tutorials-smart-contracts-demo-aplications-deploying-uniswap-v2", "page_title": "Deploying Uniswap V2 on Polkadot", "index": 2, "depth": 2, "title": "Set Up the Project", "anchor": "set-up-the-project", "start_char": 1352, "end_char": 3690, "estimated_token_count": 572, "token_estimator": "heuristic-v1", "text": "## Set Up the Project\n\nLet's start by cloning the Uniswap V2 project:\n\n1. Clone the Uniswap V2 repository:\n\n ```\n git clone https://github.com/polkadot-developers/polkavm-hardhat-examples.git -b v0.0.6\n cd polkavm-hardhat-examples/uniswap-v2-polkadot/\n ```\n\n2. Install the required dependencies:\n\n ```bash\n npm install\n ```\n\n3. Update the `hardhat.config.js` file so the paths for the Substrate node and the ETH-RPC adapter match with the paths on your machine. For more info, check the [Testing your Contract](/develop/smart-contracts/dev-environments/hardhat/#testing-your-contract){target=\\_blank} section in the Hardhat guide.\n\n ```js title=\"hardhat.config.js\"\n hardhat: {\n polkavm: true,\n nodeConfig: {\n nodeBinaryPath: '../bin/substrate-node',\n rpcPort: 8000,\n dev: true,\n },\n adapterConfig: {\n adapterBinaryPath: '../bin/eth-rpc',\n dev: true,\n },\n },\n ```\n\n4. Create a `.env` file in your project root to store your private keys (you can use as an example the `env.example` file):\n\n ```text title=\".env\"\n LOCAL_PRIV_KEY=\"INSERT_LOCAL_PRIVATE_KEY\"\n AH_PRIV_KEY=\"INSERT_AH_PRIVATE_KEY\"\n ```\n\n Ensure to replace `\"INSERT_LOCAL_PRIVATE_KEY\"` with a private key available in the local environment (you can get them from this [file](https://github.com/paritytech/hardhat-polkadot/blob/main/packages/hardhat-polkadot-node/src/constants.ts#L22){target=\\_blank}). And `\"INSERT_AH_PRIVATE_KEY\"` with the account's private key you want to use to deploy the contracts. You can get this by exporting the private key from your wallet (e.g., MetaMask).\n\n !!!warning\n Keep your private key safe, and never share it with anyone. If it is compromised, your funds can be stolen.\n\n5. Compile the contracts:\n\n ```bash\n npx hardhat compile\n ```\n\nIf the compilation is successful, you should see the following output:\n\n\n npx hardhat compile\n Compiling 12 Solidity files\n Successfully compiled 12 Solidity files\n
\n\nAfter running the above command, you should see the compiled contracts in the `artifacts-pvm` directory. This directory contains the ABI and bytecode of your contracts."}
@@ -1522,6 +1522,6 @@
{"page_id": "tutorials-smart-contracts", "page_title": "Smart Contracts", "index": 1, "depth": 2, "title": "Start Building", "anchor": "start-building", "start_char": 739, "end_char": 1008, "estimated_token_count": 51, "token_estimator": "heuristic-v1", "text": "## Start Building\n\nJump into the tutorials and learn how to:\n\n- Write and compile smart contracts.\n- Deploy contracts to the Polkadot network.\n- Interact with deployed contracts using libraries like Ethers.js and viem.\n\nChoose a tutorial below and start coding today!"}
{"page_id": "tutorials-smart-contracts", "page_title": "Smart Contracts", "index": 2, "depth": 2, "title": "In This Section", "anchor": "in-this-section", "start_char": 1008, "end_char": 1057, "estimated_token_count": 12, "token_estimator": "heuristic-v1", "text": "## In This Section\n\n:::INSERT_IN_THIS_SECTION:::"}
{"page_id": "tutorials", "page_title": "Tutorials", "index": 0, "depth": 2, "title": "Polkadot Zero to Hero", "anchor": "polkadot-zero-to-hero", "start_char": 326, "end_char": 452, "estimated_token_count": 25, "token_estimator": "heuristic-v1", "text": "## Polkadot Zero to Hero\n\nThe Zero to Hero series offers step-by-step guidance to development across the Polkadot ecosystem."}
-{"page_id": "tutorials", "page_title": "Tutorials", "index": 1, "depth": 3, "title": "Parachain Developers", "anchor": "parachain-developers", "start_char": 452, "end_char": 948, "estimated_token_count": 135, "token_estimator": "heuristic-v1", "text": "### Parachain Developers\n\n"}
-{"page_id": "tutorials", "page_title": "Tutorials", "index": 2, "depth": 2, "title": "Featured Tutorials", "anchor": "featured-tutorials", "start_char": 948, "end_char": 2449, "estimated_token_count": 422, "token_estimator": "heuristic-v1", "text": "## Featured Tutorials\n\n"}
-{"page_id": "tutorials", "page_title": "Tutorials", "index": 3, "depth": 2, "title": "In This Section", "anchor": "in-this-section", "start_char": 2449, "end_char": 2498, "estimated_token_count": 12, "token_estimator": "heuristic-v1", "text": "## In This Section\n\n:::INSERT_IN_THIS_SECTION:::"}
+{"page_id": "tutorials", "page_title": "Tutorials", "index": 1, "depth": 3, "title": "Parachain Developers", "anchor": "parachain-developers", "start_char": 452, "end_char": 937, "estimated_token_count": 132, "token_estimator": "heuristic-v1", "text": "### Parachain Developers\n\n"}
+{"page_id": "tutorials", "page_title": "Tutorials", "index": 2, "depth": 2, "title": "Featured Tutorials", "anchor": "featured-tutorials", "start_char": 937, "end_char": 2394, "estimated_token_count": 410, "token_estimator": "heuristic-v1", "text": "## Featured Tutorials\n\n"}
+{"page_id": "tutorials", "page_title": "Tutorials", "index": 3, "depth": 2, "title": "In This Section", "anchor": "in-this-section", "start_char": 2394, "end_char": 2443, "estimated_token_count": 12, "token_estimator": "heuristic-v1", "text": "## In This Section\n\n:::INSERT_IN_THIS_SECTION:::"}
diff --git a/tutorials/dapps/index.md b/tutorials/dapps/index.md
index a67b8a74e..0e425bdc5 100644
--- a/tutorials/dapps/index.md
+++ b/tutorials/dapps/index.md
@@ -20,14 +20,12 @@ You'll explore a range of topics—from client-side apps and CLI tools to on-cha
diff --git a/tutorials/index.md b/tutorials/index.md
index 86de48570..96e58d0ea 100644
--- a/tutorials/index.md
+++ b/tutorials/index.md
@@ -20,7 +20,6 @@ The Zero to Hero series offers step-by-step guidance to development across the P
@@ -32,28 +31,24 @@ The Zero to Hero series offers step-by-step guidance to development across the P
diff --git a/tutorials/interoperability/index.md b/tutorials/interoperability/index.md
index 83c3e4550..4d27d61e1 100644
--- a/tutorials/interoperability/index.md
+++ b/tutorials/interoperability/index.md
@@ -31,14 +31,12 @@ Learn to establish and use cross-chain communication channels:
diff --git a/tutorials/interoperability/xcm-channels/index.md b/tutorials/interoperability/xcm-channels/index.md
index a927df0b6..47253b6e9 100644
--- a/tutorials/interoperability/xcm-channels/index.md
+++ b/tutorials/interoperability/xcm-channels/index.md
@@ -26,7 +26,6 @@ To enable communication between parachains, explicit HRMP channels must be estab
diff --git a/tutorials/onchain-governance/index.md b/tutorials/onchain-governance/index.md
index fb270890d..2d68e137f 100644
--- a/tutorials/onchain-governance/index.md
+++ b/tutorials/onchain-governance/index.md
@@ -20,7 +20,6 @@ This section provides step-by-step tutorials to help you navigate the technical
diff --git a/tutorials/polkadot-sdk/index.md b/tutorials/polkadot-sdk/index.md
index 633105c3c..d9f4ee471 100644
--- a/tutorials/polkadot-sdk/index.md
+++ b/tutorials/polkadot-sdk/index.md
@@ -28,7 +28,6 @@ Follow these key milestones to guide you through parachain development. Each ste
diff --git a/tutorials/polkadot-sdk/system-chains/asset-hub/index.md b/tutorials/polkadot-sdk/system-chains/asset-hub/index.md
index 692ed8e8a..fc6a91236 100644
--- a/tutorials/polkadot-sdk/system-chains/asset-hub/index.md
+++ b/tutorials/polkadot-sdk/system-chains/asset-hub/index.md
@@ -35,7 +35,6 @@ Through these tutorials, you'll learn how to manage cross-chain assets, includin