mirror of
https://github.com/pezkuwichain/pwap.git
synced 2026-04-25 00:37:55 +00:00
feat(web): add network subpages and subdomains listing page
- Add /subdomains page listing all 20 PezkuwiChain subdomains - Add Back to Home button to Subdomains page - Create NetworkPage reusable component for network details - Add 7 network subpages: /mainnet, /staging, /testnet, /beta, /alfa, /development, /local - Update ChainSpecs network cards to navigate to network subpages - Add i18n translations for chainSpecs section in en.ts - Add SDK docs with rebranding support (rebrand-rustdoc.cjs) - Add generate-docs-structure.cjs for automatic docs generation - Update shared libs: endpoints, polkadot, wallet, xcm-bridge - Add new token logos: TYR, ZGR, pezkuwi_icon - Add new pages: Explorer, Docs, Wallet, Api, Faucet, Developers, Grants, Wiki, Forum, Telemetry
//! # Upgrade Teyrchain for Asynchronous Backing Compatibility
//!
//! This guide is relevant for cumulus based teyrchain projects started in 2023 or before, whose
//! backing process is synchronous, meaning parablocks can only be built on the latest Relay Chain
//! block. Async Backing allows collators to build parablocks on older Relay Chain blocks and create
//! pipelines of multiple pending parablocks. This parallel block generation increases efficiency
//! and throughput. For more information on Async Backing and its terminology, refer to the document
//! on [the Pezkuwi SDK docs.](https://docs.pezkuwichain.io/sdk/master/polkadot_sdk_docs/guides/async_backing_guide/index.html)
//!
//! > If starting a new teyrchain project, please use an async backing compatible template such as
//! > the
//! > [teyrchain template](https://github.com/pezkuwichain/pezkuwi-sdk/tree/master/templates/teyrchain).
//!
//! The rollout process for Async Backing has three phases. Phases 1 and 2 below put new
//! infrastructure in place. Then we can simply turn on async backing in phase 3.
//!
//! ## Prerequisite
//!
//! The relay chain needs to have async backing enabled, so double-check that the relay-chain
//! configuration contains the following three parameters (especially when testing locally, e.g.
//! with zombienet):
//!
//! ```json
//! "async_backing_params": {
//!     "max_candidate_depth": 3,
//!     "allowed_ancestry_len": 2
//! },
//! "scheduling_lookahead": 2
//! ```
//!
//! <div class="warning"><code>scheduling_lookahead</code> must be set to 2, otherwise teyrchain
//! block times will degrade to worse than with sync backing!</div>
//!
//! ## Phase 1 - Update Teyrchain Runtime
//!
//! This phase involves configuring your teyrchain’s runtime `/runtime/src/lib.rs` to make use of
//! the async backing system.
//!
//! 1. Establish and ensure the constants for `capacity` and `velocity` are both set to 1 in the
//!    runtime.
//! 2. Establish the constant for the relay chain slot duration, measured in milliseconds, and
//!    ensure it is set to `6000` in the runtime.
//! ```rust
//! // Maximum number of blocks simultaneously accepted by the Runtime, not yet included into the
//! // relay chain.
//! pub const UNINCLUDED_SEGMENT_CAPACITY: u32 = 1;
//! // How many teyrchain blocks are processed by the relay chain per parent. Limits the number of
//! // blocks authored per slot.
//! pub const BLOCK_PROCESSING_VELOCITY: u32 = 1;
//! // Relay chain slot duration, in milliseconds.
//! pub const RELAY_CHAIN_SLOT_DURATION_MILLIS: u32 = 6000;
//! ```
//!
//! 3. Establish constants `MILLISECS_PER_BLOCK` and `SLOT_DURATION` if not already present in the
//!    runtime.
//! ```ignore
//! // `SLOT_DURATION` is picked up by `pallet_timestamp` which is in turn picked
//! // up by `pallet_aura` to implement `fn slot_duration()`.
//! //
//! // Change this to adjust the block time.
//! pub const MILLISECS_PER_BLOCK: u64 = 12000;
//! pub const SLOT_DURATION: u64 = MILLISECS_PER_BLOCK;
//! ```
//!
//! 4. Configure `cumulus_pallet_teyrchain_system` in the runtime.
//!
//! - Define a `FixedVelocityConsensusHook` using our capacity, velocity, and relay slot duration
//!   constants. Use this to set the teyrchain system `ConsensusHook` property.
#![doc = docify::embed!("../../templates/teyrchain/runtime/src/lib.rs", ConsensusHook)]
//! ```ignore
//! impl cumulus_pallet_teyrchain_system::Config for Runtime {
//!     ..
//!     type ConsensusHook = ConsensusHook;
//!     ..
//! }
//! ```
//! - Set the teyrchain system property `CheckAssociatedRelayNumber` to
//!   `RelayNumberMonotonicallyIncreases`
//! ```ignore
//! impl cumulus_pallet_teyrchain_system::Config for Runtime {
//!     ..
//!     type CheckAssociatedRelayNumber = RelayNumberMonotonicallyIncreases;
//!     ..
//! }
//! ```
//!
//! 5. Configure `pallet_aura` in the runtime.
//!
//! - Set `AllowMultipleBlocksPerSlot` to `false` (don't worry, we will set it to `true` when we
//!   activate async backing in phase 3).
//!
//! - Define `pallet_aura::SlotDuration` using our constant `SLOT_DURATION`
//! ```ignore
//! impl pallet_aura::Config for Runtime {
//!     ..
//!     type AllowMultipleBlocksPerSlot = ConstBool<false>;
//!     #[cfg(feature = "experimental")]
//!     type SlotDuration = ConstU64<SLOT_DURATION>;
//!     ..
//! }
//! ```
//!
//! 6. Update `sp_consensus_aura::AuraApi::slot_duration` in `sp_api::impl_runtime_apis` to match
//!    the constant `SLOT_DURATION`
#![doc = docify::embed!("../../templates/teyrchain/runtime/src/apis.rs", impl_slot_duration)]
//!
//! 7. Implement the `AuraUnincludedSegmentApi`, which allows the collator client to query its
//!    runtime to determine whether it should author a block.
//!
//! - Add the dependency `cumulus-primitives-aura` to the `runtime/Cargo.toml` file for your
//!   runtime
//! ```ignore
//! ..
//! cumulus-primitives-aura = { path = "../../../../primitives/aura", default-features = false }
//! ..
//! ```
//!
//! - In the same file, add `"cumulus-primitives-aura/std",` to the `std` feature.
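//!
//! For illustration only (the surrounding entries are placeholders in the style of the other
//! snippets in this guide), the `std` feature list would then look like:
//! ```ignore
//! [features]
//! std = [
//!     ..
//!     "cumulus-primitives-aura/std",
//!     ..
//! ]
//! ```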
//!
//! - Inside the `impl_runtime_apis!` block for your runtime, implement the
//!   `cumulus_primitives_aura::AuraUnincludedSegmentApi` as shown below.
#![doc = docify::embed!("../../templates/teyrchain/runtime/src/apis.rs", impl_can_build_upon)]
//!
//! **Note:** With a capacity of 1 we have an effective velocity of ½ even when velocity is
//! configured to some larger value. This is because capacity will be filled after a single block is
//! produced and will only be freed up after that block is included on the relay chain, which takes
//! 2 relay blocks to accomplish. Thus with capacity 1 and velocity 1 we get the customary 12 second
//! teyrchain block time.
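//!
//! As a quick sanity check of that arithmetic (an informal sketch, not part of the original
//! guide): with capacity 1, a produced block occupies the unincluded segment until it is included
//! on the relay chain, which takes 2 relay blocks.
//! ```ignore
//! // effective block time = relay blocks until inclusion * relay slot duration
//! //                      = 2 * 6000 ms = 12000 ms = 12 s
//! ```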
//!
//! 8. If your `runtime/src/lib.rs` provides a `CheckInherents` type to `register_validate_block`,
//!    remove it. `FixedVelocityConsensusHook` makes it unnecessary. The following example shows how
//!    `register_validate_block` should look after removing `CheckInherents`.
#![doc = docify::embed!("../../templates/teyrchain/runtime/src/lib.rs", register_validate_block)]
//!
//! ## Phase 2 - Update Teyrchain Nodes
//!
//! This phase consists of plugging in the new lookahead collator node.
//!
//! 1. Import `cumulus_primitives_core::ValidationCode` in `node/src/service.rs`.
#![doc = docify::embed!("../../templates/teyrchain/node/src/service.rs", cumulus_primitives)]
//!
//! 2. In `node/src/service.rs`, modify `sc_service::spawn_tasks` to use a clone of `Backend`
//!    rather than the original
//! ```ignore
//! sc_service::spawn_tasks(sc_service::SpawnTasksParams {
//!     ..
//!     backend: backend.clone(),
//!     ..
//! })?;
//! ```
//!
//! 3. Add `backend` as a parameter to `start_consensus()` in `node/src/service.rs`
//! ```text
//! fn start_consensus(
//!     ..
//!     backend: Arc<TeyrchainBackend>,
//!     ..
//! ```
//! ```ignore
//! if validator {
//!     start_consensus(
//!         ..
//!         backend.clone(),
//!         ..
//!     )?;
//! }
//! ```
//!
//! 4. In `node/src/service.rs` import the lookahead collator rather than the basic collator
#![doc = docify::embed!("../../templates/teyrchain/node/src/service.rs", lookahead_collator)]
//!
//! 5. In `start_consensus()` replace the `BasicAuraParams` struct with `AuraParams`
//! - Change the struct type from `BasicAuraParams` to `AuraParams`
//! - In the `para_client` field, pass in a cloned para client rather than the original
//! - Add a `para_backend` parameter after `para_client`, passing in our para backend
//! - Provide a `code_hash_provider` closure like that shown below
//! - Increase `authoring_duration` from 500 milliseconds to 2000
//! ```ignore
//! let params = AuraParams {
//!     ..
//!     para_client: client.clone(),
//!     para_backend: backend.clone(),
//!     ..
//!     code_hash_provider: move |block_hash| {
//!         client.code_at(block_hash).ok().map(|c| ValidationCode::from(c).hash())
//!     },
//!     ..
//!     authoring_duration: Duration::from_millis(2000),
//!     ..
//! };
//! ```
//!
//! **Note:** Set `authoring_duration` to whatever you want, taking your own hardware into account.
//! But if the backer, who is likely slower than you because it reads from disk, times out at two
//! seconds, your candidates will be rejected.
//!
//! 6. In `start_consensus()` replace `basic_aura::run` with `aura::run`
//! ```ignore
//! let fut =
//!     aura::run::<Block, sp_consensus_aura::sr25519::AuthorityPair, _, _, _, _, _, _, _, _, _>(
//!         params,
//!     );
//! task_manager.spawn_essential_handle().spawn("aura", None, fut);
//! ```
//!
//! ## Phase 3 - Activate Async Backing
//!
//! This phase consists of changes to your teyrchain’s runtime that activate the async backing
//! feature.
//!
//! 1. Configure `pallet_aura`, setting `AllowMultipleBlocksPerSlot` to `true` in
//!    `runtime/src/lib.rs`.
#![doc = docify::embed!("../../templates/teyrchain/runtime/src/configs/mod.rs", aura_config)]
//!
//! 2. Increase the maximum `UNINCLUDED_SEGMENT_CAPACITY` in `runtime/src/lib.rs`.
#![doc = docify::embed!("../../templates/teyrchain/runtime/src/lib.rs", async_backing_params)]
//!
//! 3. Decrease `MILLISECS_PER_BLOCK` to 6000.
//!
//! - Note: For a teyrchain which measures time in terms of its own block number rather than by
//!   relay block number it may be preferable to increase velocity. Changing block time may cause
//!   complications, requiring additional changes. See the section “Timing by Block Number”.
#![doc = docify::embed!("../../templates/teyrchain/runtime/src/lib.rs", block_times)]
//!
//! 4. Update `MAXIMUM_BLOCK_WEIGHT` to reflect the increased time available for block production.
#![doc = docify::embed!("../../templates/teyrchain/runtime/src/lib.rs", max_block_weight)]
//!
//! 5. Add a feature flagged alternative for `MinimumPeriod` in `pallet_timestamp`. The type should
//!    be `ConstU64<0>` with the feature flag `experimental`, and `ConstU64<{SLOT_DURATION / 2}>`
//!    without.
//! ```ignore
//! impl pallet_timestamp::Config for Runtime {
//!     ..
//!     #[cfg(feature = "experimental")]
//!     type MinimumPeriod = ConstU64<0>;
//!     #[cfg(not(feature = "experimental"))]
//!     type MinimumPeriod = ConstU64<{ SLOT_DURATION / 2 }>;
//!     ..
//! }
//! ```
//!
//! ## Timing by Block Number
//!
//! With asynchronous backing it will be possible for teyrchains to opt for a block time of 6
//! seconds rather than 12 seconds. But modifying block duration isn’t so simple for a teyrchain
//! which was measuring time in terms of its own block number. It could result in expected and
//! actual time not matching up, stalling the teyrchain.
//!
//! One strategy to deal with this issue is to instead rely on relay chain block numbers for
//! timing. The relay block number is tracked by each teyrchain in `pallet-teyrchain-system` with
//! the storage value `LastRelayChainBlockNumber`. This value can be obtained and used wherever
//! timing based on block number is needed.
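//!
//! As a hedged sketch (the exact accessor depends on your version of the teyrchain system pallet;
//! this snippet is not from the original guide), reading that storage value could look like:
//! ```ignore
//! // Illustrative only: read the relay chain block number tracked by the
//! // teyrchain system pallet and use it for block-number-based timing.
//! let relay_number = cumulus_pallet_teyrchain_system::Pallet::<Runtime>::last_relay_block_number();
//! ```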

#![deny(rustdoc::broken_intra_doc_links)]
#![deny(rustdoc::private_intra_doc_links)]
//! # Changing Consensus

//! # Cumulus Enabled Teyrchain
//! # Enable elastic scaling for a teyrchain
//!
//! <div class="warning">This guide assumes full familiarity with Asynchronous Backing and its
//! terminology, as defined in <a href="https://docs.pezkuwichain.io/sdk/master/polkadot_sdk_docs/guides/async_backing_guide/index.html">the Pezkuwi SDK Docs</a>.
//! </div>
//!
//! ## Quick introduction to Elastic Scaling
//!
//! [Elastic scaling](https://www.parity.io/blog/polkadot-web3-cloud) is a feature that enables teyrchains (rollups) to use multiple cores.
//! Teyrchains can adjust their usage of core resources on the fly to increase TPS and decrease
//! latency.
//!
//! ### When do you need Elastic Scaling?
//!
//! Depending on their use case, applications might have an increased need for the following:
//! - compute (CPU weight)
//! - bandwidth (proof size)
//! - lower latency (block time)
//!
//! ### High throughput (TPS) and lower latency
//!
//! If the main bottleneck is the CPU, then your teyrchain needs to maximize the compute usage of
//! each core while also achieving a lower latency.
//! 3 cores provide the best balance between CPU, bandwidth and latency: up to 6s of execution,
//! 5MB/s of DA bandwidth and a fast block time of just 2 seconds.
//!
//! ### High bandwidth
//!
//! Useful for applications that are bottlenecked by bandwidth.
//! By using 6 cores, applications can make use of up to 6s of compute and 10MB/s of bandwidth
//! while also achieving 1 second block times.
//!
//! ### Ultra low latency
//!
//! When latency is the primary requirement, Elastic scaling is currently the only solution. The
//! caveat is that the efficiency of core time usage decreases as more cores are used.
//!
//! For example, using 12 cores enables fast transaction confirmations with 500ms blocks and up to
//! 20 MB/s of DA bandwidth.
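//!
//! These figures follow a simple pattern (an informal summary, consistent with the
//! `RELAY_CHAIN_SLOT_DURATION_MILLIS/BLOCK_PROCESSING_VELOCITY` formula used by the block
//! production configuration later in this guide): the minimum block time is roughly the 6-second
//! relay slot divided by the number of cores.
//! ```ignore
//! // minimum block time ≈ 6000 ms / cores used
//! //  3 cores -> ~2000 ms blocks
//! //  6 cores -> ~1000 ms blocks
//! // 12 cores ->  ~500 ms blocks
//! ```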
//!
//! ## Dependencies
//!
//! Prerequisites: Pezkuwi-SDK `2509` or newer.
//!
//! To ensure the security and reliability of your chain when using this feature you need the
//! following:
//! - An omni-node based collator. This has already become the default choice for collators.
//! - UMP signal support.
//!   [RFC103](https://github.com/polkadot-fellows/RFCs/blob/main/text/0103-introduce-core-index-commitment.md).
//!   This is mandatory protection against PoV replay attacks.
//! - Enabling the relay parent offset feature. This is required to ensure the teyrchain block
//!   times and transaction in-block confidence are not negatively affected by relay chain forks.
//!   Read [`crate::guides::handling_teyrchain_forks`] for more information.
//! - Block production configuration adjustments.
//!
//! ### Upgrade to Pezkuwi Omni node
//!
//! Your collators need to run `pezkuwi-teyrchain` or `pezkuwi-omni-node` with the `--authoring
//! slot-based` CLI argument.
//! To avoid potential issues and get the best performance it is recommended to always run the
//! latest release on all of the collators.
//!
//! Further information about omni-node and how to upgrade is available:
//! - [high level docs](https://docs.pezkuwichain.io/develop/toolkit/parachains/polkadot-omni-node/)
//! - [`crate::reference_docs::omni_node`]
//!
//! ### UMP signals
//!
//! UMP signals are now enabled by default in the `teyrchain-system` pallet and are used for
//! elastic scaling. You can find more technical details about UMP signals and their usage for
//! elastic scaling
//! [here](https://github.com/polkadot-fellows/RFCs/blob/main/text/0103-introduce-core-index-commitment.md).
//!
//! ### Enable the relay parent offset feature
//!
//! It is recommended to use an offset of `1`, which is sufficient to eliminate any issues
//! with relay chain forks.
//!
//! Configure the relay parent offset like this:
//! ```ignore
//! /// Build with an offset of 1 behind the relay chain best block.
//! const RELAY_PARENT_OFFSET: u32 = 1;
//!
//! impl cumulus_pallet_teyrchain_system::Config for Runtime {
//!     // ...
//!     type RelayParentOffset = ConstU32<RELAY_PARENT_OFFSET>;
//! }
//! ```
//!
//! Implement the runtime API to retrieve the offset on the client side.
//! ```ignore
//! impl cumulus_primitives_core::RelayParentOffsetApi<Block> for Runtime {
//!     fn relay_parent_offset() -> u32 {
//!         RELAY_PARENT_OFFSET
//!     }
//! }
//! ```
//!
//! ### Block production configuration
//!
//! This configuration directly controls the minimum block time and the maximum number of cores
//! the teyrchain can use.
//!
//! Example configuration for a 3 core teyrchain:
//! ```ignore
//! /// The upper limit of how many teyrchain blocks are processed by the relay chain per
//! /// parent. Limits the number of blocks authored per slot. This determines the minimum
//! /// block time of the teyrchain:
//! /// `RELAY_CHAIN_SLOT_DURATION_MILLIS/BLOCK_PROCESSING_VELOCITY`
//! const BLOCK_PROCESSING_VELOCITY: u32 = 3;
//!
//! /// Maximum number of blocks simultaneously accepted by the Runtime, not yet included
//! /// into the relay chain.
//! const UNINCLUDED_SEGMENT_CAPACITY: u32 =
//!     (2 + RELAY_PARENT_OFFSET) * BLOCK_PROCESSING_VELOCITY + 1;
//!
//! /// Relay chain slot duration, in milliseconds.
//! const RELAY_CHAIN_SLOT_DURATION_MILLIS: u32 = 6000;
//!
//! type ConsensusHook = cumulus_pallet_aura_ext::FixedVelocityConsensusHook<
//!     Runtime,
//!     RELAY_CHAIN_SLOT_DURATION_MILLIS,
//!     BLOCK_PROCESSING_VELOCITY,
//!     UNINCLUDED_SEGMENT_CAPACITY,
//! >;
//! ```
//!
//! ### Teyrchain Slot Duration
//!
//! A common source of confusion is the correct configuration of the `SlotDuration` that is passed
//! to `pallet-aura`.
//! ```ignore
//! impl pallet_aura::Config for Runtime {
//!     // ...
//!     type SlotDuration = ConstU64<SLOT_DURATION>;
//! }
//! ```
//!
//! The slot duration determines the length of each author's turn and is decoupled from the block
//! production interval. During their slot, authors are allowed to produce multiple blocks. **The
//! slot duration is required to be at least 6s (the same as on the relay chain).**
//!
//! **Configuration recommendations:**
//! - For new teyrchains starting from genesis: use a slot duration of 24 seconds
//! - For existing live teyrchains: leave the slot duration unchanged
//!
//! ## Current limitations
//!
//! ### Maximum execution time per relay chain block
//!
//! Since teyrchain block authoring is sequential, the next block can only be built after
//! the previous one has been imported.
//! At present, a core allows up to 2 seconds of execution per relay chain block.
//!
//! If we assume a 6s teyrchain slot, and each block takes the full 2 seconds to execute,
//! the teyrchain will not be able to fully utilize the compute resources of all 3 cores.
//!
//! If the collator hardware is faster, it can author and import full blocks more quickly,
//! making it possible to utilize even more than 3 cores efficiently.
//!
//! #### Why?
//!
//! Within a 6-second teyrchain slot, collators can author multiple teyrchain blocks.
//! Before building the first block in a slot, the new block author must import the last
//! block produced by the previous author.
//! If the import of the last block is not completed before the next relay chain slot starts,
//! the new author will build on its parent (assuming it was imported). This will create a fork,
//! which degrades the teyrchain block confidence and block times.
//!
//! This means that, on reference hardware, a teyrchain with a slot time of 6s can
//! effectively utilize up to 4 seconds of execution per relay chain block, because it needs to
//! ensure the next block author has enough time to import the last block.
//! Hardware with higher single-core performance can enable a teyrchain to fully utilize more
//! cores.
//!
//! ### Fixed factor scaling
//!
//! For true elasticity, a teyrchain needs to acquire more cores when needed in an automated
//! manner. This functionality is not yet available in the SDK, thus acquiring additional
//! on-demand or bulk cores has to be managed externally.
//! # Enable metadata hash verification
//!
//! This guide will teach you how to enable metadata hash verification in your runtime.
//!
//! ## What is metadata hash verification?
//!
//! Each FRAME based runtime exposes metadata about itself. This metadata is used by consumers of
//! the runtime to interpret the state, to construct transactions, etc. Part of this metadata is
//! the type information, which can be used e.g. to decode storage entries or transactions. So, the
//! metadata is quite useful for wallets to interact with a FRAME based chain. Online wallets can
//! fetch the metadata directly from any node of the chain they are connected to, but offline
//! wallets can not do this. So, for an offline wallet to have access to the metadata it needs to
//! be transferred to and stored on the device. The problem is that the metadata has a size of
//! several hundred kilobytes, which takes quite a while to transfer to these offline wallets, and
//! the internal storage of these devices is also not big enough to store the metadata for one or
//! more networks. The next problem is that the offline wallet/user can not trust the metadata to
//! be correct. It is very important for the metadata to be correct, or otherwise an attacker could
//! change it in a way that the offline wallet decodes a transaction differently than it will be
//! decoded on chain. The user may then sign an incorrect transaction, leading to unexpected
//! behavior.
//!
//! Metadata hash verification circumvents the issues of the huge metadata and the need to trust
//! some metadata blob to be correct. To generate a hash for the metadata, the metadata is chunked,
//! the chunks are put into a merkle tree, and the root of this merkle tree is the "metadata
//! hash". For a more technical explanation of how it works, see
//! [RFC78](https://polkadot-fellows.github.io/RFCs/approved/0078-merkleized-metadata.html). At compile
//! time the metadata hash is generated and "baked" into the runtime. This makes it extremely cheap
//! for the runtime to verify on chain that the metadata hash is correct. By having the runtime
//! verify the hash on chain, the user also doesn't need to trust the offchain metadata. If the
//! metadata hash doesn't match the on chain metadata hash, the transaction will be rejected. The
//! metadata hash itself is added to the data of the transaction that is signed, which means the
//! actual hash does not appear in the transaction. On chain, the same procedure is repeated with
//! the metadata hash that is known by the runtime, and if the hashes don't match, signature
//! verification will fail. As the metadata hash is actually the root of a merkle tree, the offline
//! wallet can get proofs of individual types to decode a transaction. This means that the offline
//! wallet does not require the entire metadata to be present on the device.
//!
//! ## Integrating metadata hash verification into your runtime
//!
//! The integration of metadata hash verification is split into two parts: first the actual
//! integration into the runtime, and second enabling the metadata hash generation at
//! compile time.
//!
//! ### Runtime integration
//!
//! From the runtime side, only the
//! [`CheckMetadataHash`](frame_metadata_hash_extension::CheckMetadataHash) extension needs to be
//! added to the list of signed extensions:
#![doc = docify::embed!("../../templates/teyrchain/runtime/src/lib.rs", template_signed_extra)]
//!
//! > **Note:**
//! >
//! > Adding the signed extension changes the encoding of the transaction and adds one extra byte
//! > per transaction!
//!
//! This signed extension will make sure to decode the requested `mode` and will add the metadata
//! hash to the signed data depending on the requested `mode`. The `mode` gives the user/wallet
//! control over deciding whether the metadata hash should be verified or not. The metadata hash
//! itself is drawn from the `RUNTIME_METADATA_HASH` environment variable. If the environment
//! variable is not set, any transaction that requires the metadata hash is rejected with the error
//! `CannotLookup`. This is a security measure to prevent including invalid transactions.
//!
//! <div class="warning">
//!
//! The extension does not work with the native runtime, because the
//! `RUNTIME_METADATA_HASH` environment variable is not set when building the
//! `frame-metadata-hash-extension` crate.
//!
//! </div>
//!
//! ### Enable metadata hash generation
//!
//! The metadata hash generation needs to be enabled when building the wasm binary. The
//! `substrate-wasm-builder` supports this out of the box:
#![doc = docify::embed!("../../templates/teyrchain/runtime/build.rs", template_enable_metadata_hash)]
//!
//! > **Note:**
//! >
//! > The `metadata-hash` feature needs to be enabled for the `substrate-wasm-builder` to enable
//! > the code for generating the metadata hash. It is also recommended to put the metadata hash
//! > generation behind a feature in the runtime, as shown above, because it adds a lot of code and
//! > the generation itself also increases the compile time. Thus, it is recommended to enable the
//! > feature only when the metadata hash is required (e.g. for an on-chain build).
//!
//! The two parameters to `enable_metadata_hash` are the token symbol and the number of decimals of
//! the primary token of the chain. This information is included for wallets to show token
//! related operations in a more user friendly way.
@@ -0,0 +1,88 @@
|
||||
//! # Enable storage weight reclaiming
|
||||
//!
|
||||
//! This guide will teach you how to enable storage weight reclaiming for a teyrchain. The
|
||||
//! explanations in this guide assume a project structure similar to the one detailed in
|
||||
//! the [substrate documentation](crate::pezkuwi_sdk::substrate#anatomy-of-a-binary-crate). Full
|
||||
//! technical details are available in the original [pull request](https://github.com/paritytech/polkadot-sdk/pull/3002).
|
||||
//!
|
||||
//! # What is PoV reclaim?
|
||||
//! When a teyrchain submits a block to a relay chain like Pezkuwi or Kusama, it sends the block
|
||||
//! itself and a storage proof. Together they form the Proof-of-Validity (PoV). The PoV allows the
|
||||
//! relay chain to validate the teyrchain block by re-executing it. Relay chain
|
||||
//! validators distribute this PoV among themselves over the network. This distribution is costly
|
||||
//! and limits the size of the storage proof. The storage weight dimension of FRAME weights reflects
|
||||
//! this cost and limits the size of the storage proof. However, the storage weight determined
|
||||
//! during [benchmarking](crate::reference_docs::frame_benchmarking_weight) represents the worst
|
||||
//! case. In reality, runtime operations often consume less space in the storage proof. PoV reclaim
|
||||
//! offers a mechanism to reclaim the difference between the benchmarked worst-case and the real
|
||||
//! proof-size consumption.
|
||||
//!
|
||||
//!
|
||||
//! # How to enable PoV reclaim
|
||||
//! ## 1. Add the host function to your node
|
||||
//!
|
||||
//! To reclaim excess storage weight, a teyrchain runtime needs the
|
||||
//! ability to fetch the size of the storage proof from the node. The reclaim
|
||||
//! mechanism uses the
|
||||
//! [`storage_proof_size`](cumulus_primitives_proof_size_hostfunction::storage_proof_size)
|
||||
//! host function for this purpose. For convenience, cumulus provides
|
||||
//! [`TeyrchainHostFunctions`](cumulus_client_service::TeyrchainHostFunctions), a set of
|
||||
//! host functions typically used by cumulus-based teyrchains. In the binary crate of your
|
||||
//! teyrchain, find the instantiation of the [`WasmExecutor`](sc_executor::WasmExecutor) and set the
|
||||
//! correct generic type.
|
||||
//!
|
||||
//! This example from the teyrchain-template shows a type definition that includes the correct
|
||||
//! host functions.
|
||||
#![doc = docify::embed!("../../templates/teyrchain/node/src/service.rs", wasm_executor)]
|
||||
//!
//! > **Note:**
//! >
//! > If you see error `runtime requires function imports which are not present on the host:
//! > 'env:ext_storage_proof_size_storage_proof_size_version_1'`, it is likely
//! > that this step in the guide was not set up correctly.
//!
//! ## 2. Enable storage proof recording during import
//!
//! The reclaim mechanism reads the size of the currently recorded storage proof multiple times
//! during block authoring and block import. Proof recording during authoring is already enabled on
//! teyrchains. You must also ensure that storage proof recording is enabled during block import.
//! Find where your node builds the fundamental substrate components by calling
//! [`new_full_parts`](sc_service::new_full_parts). Replace this
//! with [`new_full_parts_record_import`](sc_service::new_full_parts_record_import) and
//! pass `true` as the last parameter to enable import recording.
#![doc = docify::embed!("../../templates/teyrchain/node/src/service.rs", component_instantiation)]
//!
//! > **Note:**
//! >
//! > If you see error `Storage root must match that calculated.` during block import, it is likely
//! > that this step in the guide was not set up correctly.
//!
//! ## 3. Add the TransactionExtension to your runtime
//!
//! In your runtime, you will find a list of TransactionExtensions. To enable reclaiming,
//! set [`StorageWeightReclaim`](cumulus_pallet_weight_reclaim::StorageWeightReclaim)
//! as a wrapper around that list.
//! This extension must wrap all the other transaction extensions in order to capture
//! the whole PoV size of the transactions.
//! The extension will check the size of the storage proof before and after an extrinsic execution.
//! It reclaims the difference between the calculated size and the benchmarked size.
#![doc = docify::embed!("../../templates/teyrchain/runtime/src/lib.rs", template_signed_extra)]
//!
//! ## Optional: Verify that reclaim works
//!
//! Start your node with the log target `runtime::storage_reclaim` set to `trace` to enable full
//! logging for `StorageWeightReclaim`. The following log is an example from a local testnet. To
//! trigger the log, execute any extrinsic on the network.
//!
//! ```ignore
//! ...
//! 2024-04-22 17:31:48.014 TRACE runtime::storage_reclaim: [ferdie] Reclaiming storage weight. benchmarked: 3593, consumed: 265 unspent: 0
//! ...
//! ```
//!
//! In the above example we see a benchmarked size of 3593 bytes, while the extrinsic only consumed
//! 265 bytes of proof size. This results in 3328 bytes of reclaim.
#![deny(rustdoc::broken_intra_doc_links)]
#![deny(rustdoc::private_intra_doc_links)]
@@ -0,0 +1,90 @@
//! # Teyrchain forks
//!
//! In this guide, we will examine how AURA-based teyrchains handle forks. AURA (Authority Round) is
//! a consensus mechanism where block authors rotate at fixed time intervals. Each author gets a
//! predetermined time slice during which they are allowed to author a block. On its own, this
//! mechanism is fork-free.
//!
//! However, since the relay chain provides security and serves as the source of truth for
//! teyrchains, the teyrchain is dependent on it. This relationship can introduce complexities that
//! lead to forking scenarios.
//!
//! ## Background
//! Each teyrchain block has a relay parent, which is a relay chain block that provides context to
//! our teyrchain block. The constraints the relay chain imposes on our teyrchain can cause forks
//! under certain conditions. With asynchronous-backing enabled chains, the node side is building
//! blocks on all relay chain forks. This means that no matter which fork of the relay chain
//! ultimately progressed, the teyrchain would have a block ready for that fork. The situation
//! changes when teyrchains want to produce blocks at a faster cadence. In a scenario where a
//! teyrchain might author on 3 cores with elastic scaling, it is not possible to author on all
//! relay chain forks. The time constraints do not allow it. Building on two forks would result in 6
//! blocks. The authoring of these blocks would consume more time than we have available before the
//! next relay chain block arrives. This limitation requires a more fork-resistant approach to
//! block-building.
//!
//! ## Impact of Forks
//! When a relay chain fork occurs and the teyrchain builds on a fork that will not be extended in
//! the future, the blocks built on that fork are lost and need to be rebuilt. This increases
//! latency and reduces throughput, affecting the overall performance of the teyrchain.
//!
//! ## Building on Older Relay Parents
//! Cumulus offers a way to mitigate the occurrence of forks. Instead of picking a block at the tip
//! of the relay chain to build blocks, the node side can pick a relay chain block that is older. By
//! building on 12s-old relay chain blocks, forks will already have settled and the teyrchain can
//! build fork-free.
//!
//! ```text
//! Without offset:
//! Relay Chain: A --- B --- C --- D --- E
//!                           \
//!                            --- D' --- E'
//! Teyrchain:   X --- Y --- ? (builds on both D and D', wasting resources)
//!
//! With offset (2 blocks):
//! Relay Chain: A --- B --- C --- D --- E
//!                           \
//!                            --- D' --- E'
//! Teyrchain:   X (A) - Y (B) - Z (on C, fork already resolved)
//! ```
//! **Note:** It is possible that relay chain forks extend over more than 1-2 blocks. However, it is
//! unlikely.
//! ## Tradeoffs
//! Fork-free teyrchains come with a few tradeoffs:
//! - The latency of incoming XCM messages will be delayed by `N * 6s`, where `N` is the number of
//!   relay chain blocks we want to offset by. For example, by building 2 relay chain blocks behind
//!   the tip, the XCM latency will be increased by 12 seconds.
//! - The available PoV space will be slightly reduced. Assuming a 10 MB PoV, teyrchains need to be
//!   ready to sacrifice around 0.5% of PoV space.
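//!
//! The first tradeoff is simple arithmetic. As a minimal sketch (the 6-second relay chain block
//! time comes from the `N * 6s` formula above; the helper name is ours):
//!
//! ```
//! /// Extra XCM latency (in seconds) incurred by building `n` relay chain blocks behind the tip.
//! fn extra_xcm_latency_secs(n: u32) -> u32 {
//!     // One relay chain block is produced roughly every 6 seconds.
//!     n * 6
//! }
//!
//! assert_eq!(extra_xcm_latency_secs(2), 12);
//! ```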
//!
//! ## Enabling Guide
//! The decision whether the teyrchain should build on older relay parents is embedded into the
//! runtime. After the changes are implemented, the runtime will enforce that no author can build
//! with an offset smaller than the desired offset. If you wish to keep your current teyrchain
//! behaviour and do not want the aforementioned tradeoffs, set the offset to 0.
//!
//! **Note:** The APIs mentioned here are available in `pezkuwi-sdk` versions after `stable-2506`.
//!
//! 1. Define the relay parent offset your teyrchain should respect in the runtime.
//! ```ignore
//! const RELAY_PARENT_OFFSET: u32 = 2;
//! ```
//! 2. Pass this constant to the `teyrchain-system` pallet.
//!
//! ```ignore
//! impl cumulus_pallet_teyrchain_system::Config for Runtime {
//!     // Other config items here
//!     ...
//!     type RelayParentOffset = ConstU32<RELAY_PARENT_OFFSET>;
//! }
//! ```
//! 3. Implement the `RelayParentOffsetApi` runtime API for your runtime.
//!
//! ```ignore
//! impl cumulus_primitives_core::RelayParentOffsetApi<Block> for Runtime {
//!     fn relay_parent_offset() -> u32 {
//!         RELAY_PARENT_OFFSET
//!     }
//! }
//! ```
//! 4. Increase the `UNINCLUDED_SEGMENT_CAPACITY` for your runtime. It needs to be increased by
//!    `RELAY_PARENT_OFFSET * BLOCK_PROCESSING_VELOCITY`.
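//!
//! For example, assuming the illustrative values below (the base capacity of 3 is an assumption,
//! not a prescribed value), the adjusted constant would be computed as:
//!
//! ```
//! const BLOCK_PROCESSING_VELOCITY: u32 = 1;
//! const RELAY_PARENT_OFFSET: u32 = 2;
//! // Capacity the runtime used before enabling the offset (illustrative assumption).
//! const BASE_UNINCLUDED_SEGMENT_CAPACITY: u32 = 3;
//!
//! const UNINCLUDED_SEGMENT_CAPACITY: u32 =
//!     BASE_UNINCLUDED_SEGMENT_CAPACITY + RELAY_PARENT_OFFSET * BLOCK_PROCESSING_VELOCITY;
//!
//! assert_eq!(UNINCLUDED_SEGMENT_CAPACITY, 5);
//! ```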
@@ -0,0 +1,50 @@
//! # Pezkuwi SDK Docs Guides
//!
//! This crate contains a collection of guides that are foundational to the developers of
//! Pezkuwi SDK. They are common user-journeys that are traversed in the Pezkuwi ecosystem.
//!
//! The main user-journey covered by these guides is:
//!
//! * [`your_first_pallet`], where you learn what a FRAME pallet is, and write your first
//!   application logic.
//! * [`your_first_runtime`], where you learn how to compile your pallets into a WASM runtime.
//! * [`your_first_node`], where you learn how to run the said runtime in a node.
//!
//! > By this step, you have already launched a full Pezkuwi-SDK-based blockchain!
//!
//! Once done, feel free to step up into one of our templates: [`crate::pezkuwi_sdk::templates`].
//!
//! [`your_first_pallet`]: crate::guides::your_first_pallet
//! [`your_first_runtime`]: crate::guides::your_first_runtime
//! [`your_first_node`]: crate::guides::your_first_node
//!
//! Other guides are related to other miscellaneous topics and are listed as modules below.

/// Write your first simple pallet, learning the most basic features of FRAME along the way.
pub mod your_first_pallet;

/// Write your first real [runtime](`crate::reference_docs::wasm_meta_protocol`),
/// compiling it to [WASM](crate::pezkuwi_sdk::substrate#wasm-build).
pub mod your_first_runtime;

/// Running the given runtime with a node. No specific consensus mechanism is used at this stage.
pub mod your_first_node;

/// How to enhance a given runtime and node to be cumulus-enabled, run it as a teyrchain
/// and connect it to a relay-chain.
// pub mod your_first_teyrchain;

/// How to enable storage weight reclaiming in a teyrchain node and runtime.
pub mod enable_pov_reclaim;

/// How to enable Async Backing on teyrchain projects that started in 2023 or before.
pub mod async_backing_guide;

/// How to enable metadata hash verification in the runtime.
pub mod enable_metadata_hash;

/// How to enable elastic scaling on a teyrchain.
pub mod enable_elastic_scaling;

/// How to parameterize teyrchain forking in relation to relay chain forking.
pub mod handling_teyrchain_forks;
@@ -0,0 +1 @@
//! # XCM Enabled Teyrchain
@@ -0,0 +1,342 @@
//! # Your first Node
//!
//! In this guide, you will learn how to run a runtime, such as the one created in
//! [`your_first_runtime`], in a node. Within the context of this guide, we will focus on running
//! the runtime with an [`omni-node`]. Please first read this page to learn about the OmniNode, and
//! other options when it comes to running a node.
//!
//! [`your_first_runtime`] is a runtime with no consensus related code, and therefore can only be
//! executed with a node that also expects no consensus ([`sc_consensus_manual_seal`]).
//! `pezkuwi-omni-node`'s [`--dev-block-time`] precisely does this.
//!
//! > All of the following steps are coded as unit tests of this module. Please see the `Source` of
//! > the page for more information.
//!
//! ## Running The Omni Node
//!
//! ### Installs
//!
//! The `pezkuwi-omni-node` can either be downloaded from the latest [Release](https://github.com/pezkuwichain/pezkuwi-sdk/releases/) of `pezkuwi-sdk`,
//! or installed using `cargo`:
//!
//! ```text
//! cargo install pezkuwi-omni-node
//! ```
//!
//! Next, we need to install the `chain-spec-builder`. This is the tool that allows us to build
//! chain-specifications, through interacting with the genesis related APIs of the runtime, as
//! described in [`crate::guides::your_first_runtime#genesis-configuration`].
//!
//! ```text
//! cargo install staging-chain-spec-builder
//! ```
//!
//! > The name of the crate is prefixed with `staging` as the crate name `chain-spec-builder` on
//! > crates.io is already taken and is not controlled by `pezkuwi-sdk` developers.
//!
//! ### Building Runtime
//!
//! Next, we need to build the corresponding runtime that we wish to interact with.
//!
//! ```text
//! cargo build --release -p <name-of-your-runtime-crate>
//! ```
//! Equivalent code in tests:
#![doc = docify::embed!("./src/guides/your_first_runtime.rs", build_runtime)]
//!
//! This creates the wasm file under the `./target/release/wbuild` directory.
//!
//! ### Building Chain Spec
//!
//! Next, we can generate the corresponding chain-spec file. For this example, we will use the
//! `development` (`sp_genesis_config::DEVELOPMENT`) preset.
//!
//! Note that we intend to run this chain-spec with `pezkuwi-omni-node`, which is tailored for
//! running teyrchains. This requires the chain-spec to always contain the `para_id` and
//! `relay_chain` fields, which are provided below as CLI arguments.
//!
//! ```text
//! chain-spec-builder \
//!     -c <path-to-output> \
//!     create \
//!     --relay-chain dontcare \
//!     --runtime pezkuwi_sdk_docs_first_runtime.wasm \
//!     named-preset development
//! ```
//!
//! Equivalent code in tests:
#![doc = docify::embed!("./src/guides/your_first_node.rs", csb)]
//!
//! ### Running `pezkuwi-omni-node`
//!
//! Finally, we can run the node with the generated chain-spec file. We can also specify the block
//! time using the `--dev-block-time` flag.
//!
//! ```text
//! pezkuwi-omni-node \
//!     --tmp \
//!     --dev-block-time 1000 \
//!     --chain <chain_spec_file>.json
//! ```
//!
//! > Note that we always prefer to use `--tmp` for testing, as it will save the chain state to a
//! > temporary folder, allowing the chain to be easily restarted without `purge-chain`. See
//! > [`sc_cli::commands::PurgeChainCmd`] and [`sc_cli::commands::RunCmd::tmp`] for more info.
//!
//! This will start the node and import the blocks. Note that while using `--dev-block-time`, the
//! node will use the testing-specific manual-seal consensus. This is an efficient way to test the
//! application logic of your runtime, without needing to yet care about consensus, block
//! production, relay-chain and so on.
//!
//! ### Next Steps
//!
//! * See the rest of the steps in [`crate::reference_docs::omni_node#user-journey`].
//!
//! [`runtime`]: crate::reference_docs::glossary#runtime
//! [`node`]: crate::reference_docs::glossary#node
//! [`build_config`]: first_runtime::Runtime#method.build_config
//! [`omni-node`]: crate::reference_docs::omni_node
//! [`--dev-block-time`]: pezkuwi_omni_node_lib::cli::Cli::dev_block_time

#[cfg(test)]
mod tests {
	use assert_cmd::assert::OutputAssertExt;
	use cmd_lib::*;
	use rand::Rng;
	use sc_chain_spec::{DEV_RUNTIME_PRESET, LOCAL_TESTNET_RUNTIME_PRESET};
	use sp_genesis_builder::PresetId;
	use std::{
		io::{BufRead, BufReader},
		path::PathBuf,
		process::{ChildStderr, Command, Stdio},
		time::Duration,
	};

	const PARA_RUNTIME: &'static str = "teyrchain-template-runtime";
	const CHAIN_SPEC_BUILDER: &'static str = "chain-spec-builder";
	const OMNI_NODE: &'static str = "pezkuwi-omni-node";

	fn cargo() -> Command {
		Command::new(std::env::var("CARGO").unwrap_or_else(|_| "cargo".to_string()))
	}

	fn get_target_directory() -> Option<PathBuf> {
		let output = cargo().arg("metadata").arg("--format-version=1").output().ok()?;

		if !output.status.success() {
			return None;
		}

		let metadata: serde_json::Value = serde_json::from_slice(&output.stdout).ok()?;
		let target_directory = metadata["target_directory"].as_str()?;

		Some(PathBuf::from(target_directory))
	}

	fn find_release_binary(name: &str) -> Option<PathBuf> {
		let target_dir = get_target_directory()?;
		let release_path = target_dir.join("release").join(name);

		if release_path.exists() {
			Some(release_path)
		} else {
			None
		}
	}

	fn find_wasm(runtime_name: &str) -> Option<PathBuf> {
		let target_dir = get_target_directory()?;
		let wasm_path = target_dir
			.join("release")
			.join("wbuild")
			.join(runtime_name)
			.join(format!("{}.wasm", runtime_name.replace('-', "_")));

		if wasm_path.exists() {
			Some(wasm_path)
		} else {
			None
		}
	}

	fn maybe_build_runtimes() {
		if find_wasm(PARA_RUNTIME).is_none() {
			println!("Building teyrchain-template-runtime...");
			Command::new("cargo")
				.arg("build")
				.arg("--release")
				.arg("-p")
				.arg(PARA_RUNTIME)
				.assert()
				.success();
		}

		assert!(find_wasm(PARA_RUNTIME).is_some());
	}

	fn maybe_build_chain_spec_builder() {
		if find_release_binary(CHAIN_SPEC_BUILDER).is_none() {
			println!("Building chain-spec-builder...");
			Command::new("cargo")
				.arg("build")
				.arg("--release")
				.arg("-p")
				.arg("staging-chain-spec-builder")
				.assert()
				.success();
		}
		assert!(find_release_binary(CHAIN_SPEC_BUILDER).is_some());
	}

	fn maybe_build_omni_node() {
		if find_release_binary(OMNI_NODE).is_none() {
			println!("Building pezkuwi-omni-node...");
			Command::new("cargo")
				.arg("build")
				.arg("--release")
				.arg("-p")
				.arg("pezkuwi-omni-node")
				.assert()
				.success();
		}
	}

	async fn imported_block_found(stderr: ChildStderr, block: u64, timeout: u64) -> bool {
		tokio::time::timeout(Duration::from_secs(timeout), async {
			let want = format!("Imported #{}", block);
			let reader = BufReader::new(stderr);
			let mut found_block = false;
			for line in reader.lines() {
				if line.unwrap().contains(&want) {
					found_block = true;
					break;
				}
			}
			found_block
		})
		.await
		.unwrap()
	}

	async fn test_runtime_preset(
		runtime: &'static str,
		block_time: u64,
		maybe_preset: Option<PresetId>,
	) {
		sp_tracing::try_init_simple();
		maybe_build_runtimes();
		maybe_build_chain_spec_builder();
		maybe_build_omni_node();

		let chain_spec_builder =
			find_release_binary(CHAIN_SPEC_BUILDER).expect("we built it above; qed");
		let omni_node = find_release_binary(OMNI_NODE).expect("we built it above; qed");
		let runtime_path = find_wasm(runtime).expect("we built it above; qed");

		let random_seed: u32 = rand::thread_rng().gen();
		let chain_spec_file = std::env::current_dir()
			.unwrap()
			.join(format!("{}_{}_{}.json", runtime, block_time, random_seed));

		Command::new(chain_spec_builder)
			.args(["-c", chain_spec_file.to_str().unwrap()])
			.arg("create")
			.args(["--relay-chain", "dontcare"])
			.args(["-r", runtime_path.to_str().unwrap()])
			.args(match maybe_preset {
				Some(preset) => vec!["named-preset".to_string(), preset.to_string()],
				None => vec!["default".to_string()],
			})
			.assert()
			.success();

		let mut child = Command::new(omni_node)
			.arg("--tmp")
			.args(["--chain", chain_spec_file.to_str().unwrap()])
			.args(["--dev-block-time", block_time.to_string().as_str()])
			.stderr(Stdio::piped())
			.spawn()
			.unwrap();

		// Take stderr and parse it with timeout.
		let stderr = child.stderr.take().unwrap();
		let expected_blocks = (10_000 / block_time).saturating_div(2);
		assert!(expected_blocks > 0, "test configuration is bad, should give it more time");
		assert_eq!(imported_block_found(stderr, expected_blocks, 100).await, true);
		std::fs::remove_file(chain_spec_file).unwrap();
		child.kill().unwrap();
	}

	// Sets up omni-node to run a test exercise based on a chain spec.
	async fn omni_node_test_setup(chain_spec_path: PathBuf) {
		maybe_build_omni_node();
		let omni_node = find_release_binary(OMNI_NODE).unwrap();

		let mut child = Command::new(omni_node)
			.arg("--dev")
			.args(["--chain", chain_spec_path.to_str().unwrap()])
			.stderr(Stdio::piped())
			.spawn()
			.unwrap();

		let stderr = child.stderr.take().unwrap();
		assert_eq!(imported_block_found(stderr, 7, 100).await, true);
		child.kill().unwrap();
	}

	#[tokio::test]
	async fn works_with_different_block_times() {
		test_runtime_preset(PARA_RUNTIME, 100, Some(DEV_RUNTIME_PRESET.into())).await;
		test_runtime_preset(PARA_RUNTIME, 3000, Some(DEV_RUNTIME_PRESET.into())).await;

		// we need this snippet just for docs
		#[docify::export_content(csb)]
		fn build_teyrchain_spec_works() {
			let chain_spec_builder = find_release_binary(CHAIN_SPEC_BUILDER).unwrap();
			let runtime_path = find_wasm(PARA_RUNTIME).unwrap();
			let output = "/tmp/demo-chain-spec.json";
			let runtime_str = runtime_path.to_str().unwrap();
			run_cmd!(
				$chain_spec_builder -c $output create --relay-chain dontcare -r $runtime_str named-preset development
			).expect("Failed to run command");
			std::fs::remove_file(output).unwrap();
		}
		build_teyrchain_spec_works();
	}

	#[tokio::test]
	async fn teyrchain_runtime_works() {
		// TODO: None doesn't work. But maybe it should? it would be misleading as many users might
		// use it.
		for preset in [Some(DEV_RUNTIME_PRESET.into()), Some(LOCAL_TESTNET_RUNTIME_PRESET.into())] {
			test_runtime_preset(PARA_RUNTIME, 1000, preset).await;
		}
	}

	#[tokio::test]
	async fn omni_node_dev_mode_works() {
		// Omni Node in dev mode works with the teyrchain template's `dev_chain_spec`.
		let dev_chain_spec = std::env::current_dir()
			.unwrap()
			.parent()
			.unwrap()
			.parent()
			.unwrap()
			.join("templates")
			.join("teyrchain")
			.join("dev_chain_spec.json");
		omni_node_test_setup(dev_chain_spec).await;
	}

	#[tokio::test]
	// This is a regression test so that we still remain compatible with runtimes that use
	// `para-id` in chain specs, instead of implementing the
	// `cumulus_primitives_core::GetTeyrchainInfo`.
	async fn omni_node_dev_mode_works_without_getteyrchaininfo() {
		let dev_chain_spec = std::env::current_dir()
			.unwrap()
			.join("src/guides/teyrchain_without_getteyrchaininfo.json");
		omni_node_test_setup(dev_chain_spec).await;
	}
}
@@ -0,0 +1,789 @@
//! # Currency Pallet
//!
//! By the end of this guide, you will have written a small FRAME pallet (see
//! [`crate::pezkuwi_sdk::frame_runtime`]) that is capable of handling a simple crypto-currency.
//! This pallet will:
//!
//! 1. Allow anyone to mint new tokens into accounts (which is obviously not a great idea for a real
//!    system).
//! 2. Allow any user that owns tokens to transfer them to others.
//! 3. Track the total issuance of all tokens at all times.
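//!
//! Before diving into FRAME, the bookkeeping above can be previewed in plain Rust. This is a
//! hedged, standalone sketch (the names `mint` and `transfer` mirror the dispatchables we will
//! write later, but nothing here is FRAME code):
//!
//! ```
//! use std::collections::HashMap;
//!
//! #[derive(Default)]
//! struct Ledger {
//!     balances: HashMap<u64, u128>,
//!     total_issuance: u128,
//! }
//!
//! impl Ledger {
//!     /// Create new tokens in `who`'s account, tracking total issuance.
//!     fn mint(&mut self, who: u64, amount: u128) {
//!         *self.balances.entry(who).or_default() += amount;
//!         self.total_issuance += amount;
//!     }
//!
//!     /// Move tokens between accounts; issuance is unchanged.
//!     fn transfer(&mut self, from: u64, to: u64, amount: u128) -> Result<(), &'static str> {
//!         let from_balance = self.balances.get(&from).ok_or("NonExistentAccount")?;
//!         let new_from = from_balance.checked_sub(amount).ok_or("InsufficientBalance")?;
//!         self.balances.insert(from, new_from);
//!         *self.balances.entry(to).or_default() += amount;
//!         Ok(())
//!     }
//! }
//!
//! let mut ledger = Ledger::default();
//! ledger.mint(1, 100);
//! assert!(ledger.transfer(1, 2, 40).is_ok());
//! assert_eq!(ledger.total_issuance, 100);
//! ```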
//!
//! > This guide will build a currency pallet from scratch using only the lowest primitives of
//! > FRAME, and is mainly intended for education, not *applicability*. For example, almost all
//! > FRAME-based runtimes use various techniques to re-use a currency pallet instead of writing
//! > one. Further advanced FRAME related topics are discussed in [`crate::reference_docs`].
//!
//! ## Writing Your First Pallet
//!
//! To get started, clone one of the templates mentioned in [`crate::pezkuwi_sdk::templates`]. We
//! recommend using the `pezkuwi-sdk-minimal-template`. You might need to change small parts of
//! this guide, namely the crate/package names, based on which template you use.
//!
//! > Be aware that you can read the entire source code backing this tutorial by clicking on the
//! > `source` button at the top right of the page.
//!
//! You should have studied the following modules as a prelude to this guide:
//!
//! - [`crate::reference_docs::blockchain_state_machines`]
//! - [`crate::reference_docs::trait_based_programming`]
//! - [`crate::pezkuwi_sdk::frame_runtime`]
//!
//! ## Topics Covered
//!
//! The following FRAME topics are covered in this guide:
//!
//! - [`pallet::storage`]
//! - [`pallet::call`]
//! - [`pallet::event`]
//! - [`pallet::error`]
//! - Basics of testing a pallet
//! - [Constructing a runtime](frame::runtime::prelude::construct_runtime)
//!
//! ### Shell Pallet
//!
//! Consider the following as a "shell pallet". We continue building the rest of this pallet based
//! on this template.
//!
//! [`pallet::config`] and [`pallet::pallet`] are both mandatory parts of any
//! pallet. Refer to the documentation of each to get an overview of what they do.
#![doc = docify::embed!("./packages/guides/first-pallet/src/lib.rs", shell_pallet)]
//!
//! All of the code that follows in this guide should live inside of the `mod pallet`.
//!
//! ### Storage
//!
//! First, we will need to create two onchain storage declarations.
//!
//! One should be a mapping from account-ids to a balance type, and one value that is the total
//! issuance.
//!
//! > For the rest of this guide, we will opt for a balance type of `u128`. For the sake of
//! > simplicity, we are hardcoding this type. In a real pallet, it is best practice to define it as
//! > a generic bounded type in the `Config` trait, and then specify it in the implementation.
#![doc = docify::embed!("./packages/guides/first-pallet/src/lib.rs", Balance)]
//!
//! The definition of these two storage items, based on [`pallet::storage`] details, is as follows:
#![doc = docify::embed!("./packages/guides/first-pallet/src/lib.rs", TotalIssuance)]
#![doc = docify::embed!("./packages/guides/first-pallet/src/lib.rs", Balances)]
//!
//! ### Dispatchables
//!
//! Next, we will define the dispatchable functions. As per [`pallet::call`], these will be defined
//! as normal `fn`s attached to `struct Pallet`.
#![doc = docify::embed!("./packages/guides/first-pallet/src/lib.rs", impl_pallet)]
//!
//! The logic of these functions is self-explanatory. Instead, we will focus on the FRAME-related
//! details:
//!
//! - Where do `T::AccountId` and `T::RuntimeOrigin` come from? These are both defined in
//!   [`frame::prelude::frame_system::Config`], therefore we can access them in `T`.
//! - What is `ensure_signed`, and what does it do with the aforementioned `T::RuntimeOrigin`? This
//!   is outside the scope of this guide, and you can learn more about it in the origin reference
//!   document ([`crate::reference_docs::frame_origin`]). For now, you should only know the
//!   signature of the function: it takes a generic `T::RuntimeOrigin` and returns a
//!   `Result<T::AccountId, _>`. So by the end of this function call, we know that this dispatchable
//!   was signed by `sender`.
#![doc = docify::embed!("../../substrate/frame/system/src/lib.rs", ensure_signed)]
//!
//! - Where do `mutate`, `get`, `insert` and other storage APIs come from? All of them are
//!   explained in the corresponding `type`; for example, for `Balances::<T>::insert`, you can look
//!   into [`frame::prelude::StorageMap::insert`].
//!
//! - The return type of all dispatchable functions is [`frame::prelude::DispatchResult`]:
#![doc = docify::embed!("../../substrate/frame/support/src/dispatch.rs", DispatchResult)]
//!
//! Which is more or less a normal Rust `Result`, with a custom [`frame::prelude::DispatchError`] as
//! the `Err` variant. We won't cover this error in detail here, but importantly you should know
//! that there is an `impl From<&'static str> for DispatchError` provided (see
//! [here](`frame::prelude::DispatchError#impl-From<%26str>-for-DispatchError`)). Therefore,
//! we can use basic string literals as our error type and `.into()` them into `DispatchError`.
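//!
//! The `From`-based conversion above can be demonstrated in isolation. This is a hedged sketch
//! with a stand-in `DispatchError` type, not the real FRAME definition:
//!
//! ```
//! #[derive(Debug, PartialEq)]
//! enum DispatchError {
//!     Other(&'static str),
//! }
//!
//! impl From<&'static str> for DispatchError {
//!     fn from(s: &'static str) -> Self {
//!         DispatchError::Other(s)
//!     }
//! }
//!
//! // `?` relies on exactly this conversion: a string literal becomes a
//! // `DispatchError` via `.into()`.
//! fn check(balance: u128, amount: u128) -> Result<(), DispatchError> {
//!     balance.checked_sub(amount).ok_or("InsufficientBalance")?;
//!     Ok(())
//! }
//!
//! assert_eq!(check(10, 20), Err(DispatchError::Other("InsufficientBalance")));
//! assert_eq!(check(20, 10), Ok(()));
//! ```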
//!
//! - Why are all `get` and `mutate` functions returning an `Option`? This is the default behavior
//!   of FRAME storage APIs. You can learn more about how to override this by looking into
//!   [`pallet::storage`], and [`frame::prelude::ValueQuery`]/[`frame::prelude::OptionQuery`]
//!
//! ### Improving Errors
//!
//! How we handle errors in the above snippets is fairly rudimentary. Let's look at how this can be
//! improved. First, we can use [`frame::prelude::ensure`] to express the error slightly better.
//! This macro will call `.into()` under the hood.
#![doc = docify::embed!("./packages/guides/first-pallet/src/lib.rs", transfer_better)]
//!
//! Moreover, you will learn in the [Defensive Programming
//! section](crate::reference_docs::defensive_programming) that it is always recommended to use
//! safe arithmetic operations in your runtime. By using [`frame::traits::CheckedSub`], we can not
//! only take a step in that direction, but also improve the error handling and make it slightly
//! more ergonomic.
#![doc = docify::embed!("./packages/guides/first-pallet/src/lib.rs", transfer_better_checked)]
//!
//! This is more or less all the logic that there is in this basic currency pallet!
//!
//! ### Your First (Test) Runtime
//!
//! The typical testing code of a pallet lives in a module that imports some preludes useful for
//! testing, similar to:
//!
//! ```
//! pub mod pallet {
//!     // snip -- actual pallet code.
//! }
//!
//! #[cfg(test)]
//! mod tests {
//!     // bring in the testing prelude of frame
//!     use frame::testing_prelude::*;
//!     // bring in all pallet items
//!     use super::pallet::*;
//!
//!     // snip -- rest of the testing code.
//! }
//! ```
//!
//! Next, we create a "test runtime" in order to test our pallet. Recall from
//! [`crate::pezkuwi_sdk::frame_runtime`] that a runtime is a collection of pallets, expressed
//! through [`frame::runtime::prelude::construct_runtime`]. All runtimes also have to include
//! [`frame::prelude::frame_system`]. So we expect to see a runtime with two pallets, `frame_system`
//! and the one we just wrote.
#![doc = docify::embed!("./packages/guides/first-pallet/src/lib.rs", runtime)]
//!
|
||||
//! > [`frame::pallet_macros::derive_impl`] is a FRAME feature that enables developers to have
|
||||
//! > defaults for associated types.
|
||||
//!
|
||||
//! Recall that within our pallet, (almost) all blocks of code are generic over `<T: Config>`. And,
|
||||
//! because `trait Config: frame_system::Config`, we can get access to all items in `Config` (or
|
||||
//! `frame_system::Config`) using `T::NameOfItem`. This is all within the boundaries of how
|
||||
//! Rust traits and generics work. If unfamiliar with this pattern, read
|
||||
//! [`crate::reference_docs::trait_based_programming`] before going further.
|
||||
//!
|
||||
//! Crucially, a typical FRAME runtime contains a `struct Runtime`. The main role of this `struct`
|
||||
//! is to implement the `trait Config` of all pallets. That is, anywhere within your pallet code
|
||||
//! where you see `<T: Config>` (read: *"some type `T` that implements `Config`"*), in the runtime,
|
||||
//! it can be replaced with `<Runtime>`, because `Runtime` implements `Config` of all pallets, as we
|
||||
//! see above.
|
||||
//!
|
||||
//! Another way to think about this is that within a pallet, a lot of types are "unknown" and, we
|
||||
//! only know that they will be provided at some later point. For example, when you write
|
||||
//! `T::AccountId` (which is short for `<T as frame_system::Config>::AccountId`) in your pallet,
|
||||
//! you are in fact saying "*Some type `AccountId` that will be known later*". That "later" is in
|
||||
//! fact when you specify these types when you implement all `Config` traits for `Runtime`.
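//!
//! This "fill in the types later" pattern can be demonstrated with plain Rust, outside of FRAME.
//! The names below (`Config`, `Pallet`, `Runtime`) are only illustrative:
//!
//! ```
//! // A trait declaring an "unknown" type, to be provided by whoever implements it.
//! trait Config {
//!     type AccountId;
//! }
//!
//! // Generic code that can already use the type, without knowing what it is.
//! struct Pallet<T: Config>(core::marker::PhantomData<T>);
//! impl<T: Config> Pallet<T> {
//!     fn echo(who: T::AccountId) -> T::AccountId {
//!         who
//!     }
//! }
//!
//! // "Later": the runtime pins the type down, here to `u64`.
//! struct Runtime;
//! impl Config for Runtime {
//!     type AccountId = u64;
//! }
//!
//! // `<T: Config>` has been replaced by `<Runtime>`, exactly as in FRAME.
//! let _ = Pallet::<Runtime>::echo(42u64);
//! ```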
//!
//! As you see above, `frame_system::Config` is setting the `AccountId` to `u64`. Of course, a real
//! runtime will not use this type, and will instead use a proper type like a 32-byte standard
//! public key. This is a HUGE benefit that FRAME developers can tap into: because the framework is
//! so generic, types can always be swapped for simpler ones when needed.
//!
//! > Imagine how hard it would have been if all tests had to use a real 32-byte account id, as
//! > opposed to just a `u64` number 🙈.
//!
//! ### Your First Test
//!
//! The above is all you need to execute the dispatchables of your pallet. The last thing you need
//! to learn is that all of your pallet testing code should be wrapped in
//! [`frame::testing_prelude::TestState`]. This is a type that provides access to an in-memory state
//! to be used in our tests.
#![doc = docify::embed!("./packages/guides/first-pallet/src/lib.rs", first_test)]
//!
//! In the first test, we simply assert that there is no total issuance, and no balance associated
//! with Alice's account. Then, we mint some balance into Alice's account, and re-check.
//!
//! As noted above, `T::AccountId` is now `u64`, and `Runtime` is replacing `<T: Config>`. This is
//! why, for example, you see `Balances::<Runtime>::get(..)`. Finally, notice that the
//! dispatchables are simply functions that can be called on top of the `Pallet` struct.
//!
//! Congratulations! You have written your first pallet and tested it! Next, we learn a few optional
//! steps to improve our pallet.
//!
//! ## Improving the Currency Pallet
//!
//! ### Better Test Setup
//!
//! Idiomatic FRAME pallets often use the builder pattern to define their initial state.
//!
//! > The Pezkuwi Blockchain Academy's Rust entrance exam has a
//! > [section](https://github.com/pezkuwichain/kurdistan_blockchain-akademy/blob/main/src/m_builder.rs)
//! > on this that you can use to learn the builder pattern.
//!
//! Let's see how we can implement a better test setup using this pattern. First, we define a
//! `struct StateBuilder`.
#![doc = docify::embed!("./packages/guides/first-pallet/src/lib.rs", StateBuilder)]
//!
//! This struct is meant to contain the same list of accounts and balances that we want to have at
//! the beginning of each block. So far, we had hardcoded this to
//! `let accounts = vec![(ALICE, 100), (BOB, 100)];`. Then, if desired, we attach a default value
//! for this struct.
#![doc = docify::embed!("./packages/guides/first-pallet/src/lib.rs", default_state_builder)]
//!
//! Like any other builder pattern, we attach functions to the type to mutate its internal
//! properties.
#![doc = docify::embed!("./packages/guides/first-pallet/src/lib.rs", impl_state_builder_add)]
//!
//! Finally, the useful part: we write our own custom `build_and_execute` function on this type.
//! This function does multiple things:
//!
//! 1. It consumes `self` to produce our `TestState` based on the properties that we attached to
//!    `self`.
//! 2. It executes any test function that we pass in as a closure.
//! 3. A nifty trick: this allows our test setup to have some code that is executed both before and
//!    after each test. For example, in this test, we do some additional checking about the
//!    correctness of `TotalIssuance`. We leave it up to you as an exercise to learn why the
//!    assertion should always hold, and how it is checked.
#![doc = docify::embed!("./packages/guides/first-pallet/src/lib.rs", impl_state_builder_build)]
//!
//! We can write tests that specifically check the initial state, making sure our `StateBuilder` is
//! working exactly as intended.
#![doc = docify::embed!("./packages/guides/first-pallet/src/lib.rs", state_builder_works)]
#![doc = docify::embed!("./packages/guides/first-pallet/src/lib.rs", state_builder_add_balance)]
//!
//! ### More Tests
//!
//! Now that we have a more ergonomic test setup, let's see what well-written tests for `transfer`
//! and `mint` look like.
#![doc = docify::embed!("./packages/guides/first-pallet/src/lib.rs", transfer_works)]
#![doc = docify::embed!("./packages/guides/first-pallet/src/lib.rs", mint_works)]
//!
//! It is always a good idea to build a mental model where you write *at least* one test for each
//! "success path" of a dispatchable, and one test for each "failure path", such as:
#![doc = docify::embed!("./packages/guides/first-pallet/src/lib.rs", transfer_from_non_existent_fails)]
//!
//! We leave it up to you to write a test that triggers the `InsufficientBalance` error.
//!
//! ### Event and Error
//!
//! Our pallet is mainly missing two parts that are common in most FRAME pallets: events and
//! errors. First, let's understand what each is.
//!
//! - **Error**: The static string-based error scheme we used so far is good for readability, but it
//!   has a few drawbacks. The biggest problem with strings is that they are not type safe, e.g. a
//!   match statement cannot be exhaustive. These string literals also bloat the final wasm blob,
//!   and are relatively heavy to transmit and encode/decode. Moreover, it is easy to mistype them
//!   by one character. FRAME errors are exactly a solution to maintain readability whilst fixing
//!   the drawbacks mentioned. In short, we use an enum to represent the different variants of our
//!   error. These variants are then mapped in an efficient way (using only `u8` indices) to
//!   [`sp_runtime::DispatchError::Module`]. Read more about this in [`pallet::error`].
//!
//! - **Event**: Events are akin to the return type of dispatchables. They are mostly data blobs
//!   emitted by the runtime to let the outside world know what is happening inside the pallet,
//!   since the outside world otherwise has no easy access to the state changes. They should
//!   represent what happened at the end of a dispatch operation. Therefore, the convention is to
//!   use the passive tense for event names (eg. `SomethingHappened`). This allows other
//!   sub-systems or external parties (eg. a light-node, a DApp) to listen to particular events
//!   happening, without needing to re-execute the whole state transition function.
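//!
//! To make the error point concrete: a FRAME error variant is carried as a couple of small indices
//! rather than a string. A rough, illustrative sketch of the idea (the `Module` fields shown are
//! simplified):
//!
//! ```ignore
//! // "InsufficientBalance" as a string literal costs ~19 bytes and is not type safe.
//! // As a FRAME error, it converts into a pallet index plus a variant index:
//! let err: DispatchError = Error::<T>::InsufficientBalance.into();
//! // roughly: DispatchError::Module { index: <pallet index>, error: <variant index>, .. }
//! ```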
//!
//! With the explanation out of the way, let's see how these components can be added. Both follow a
//! fairly familiar syntax: normal Rust enums, with the extra [`pallet::event`] and
//! [`pallet::error`] attributes attached.
#![doc = docify::embed!("./packages/guides/first-pallet/src/lib.rs", Event)]
#![doc = docify::embed!("./packages/guides/first-pallet/src/lib.rs", Error)]
//!
//! One slightly custom part of this is the [`pallet::generate_deposit`] part. Without going into
//! too much detail, in order for a pallet to emit events to the rest of the system, it needs to do
//! two things:
//!
//! 1. Declare a type in its `Config` that refers to the overarching event type of the runtime. In
//!    short, by doing this, the pallet is expressing an important bound: `type RuntimeEvent:
//!    From<Event<Self>>`. Read: a `RuntimeEvent` exists, and it can be created from the local
//!    `enum Event` of this pallet. This enables the pallet to convert its `Event` into
//!    `RuntimeEvent`, and store it where needed.
//!
//! 2. Doing this conversion and storing is too much to expect each pallet to implement, so FRAME
//!    provides a default way of storing events, and this is what [`pallet::generate_deposit`] is
//!    doing.
#![doc = docify::embed!("./packages/guides/first-pallet/src/lib.rs", config_v2)]
//!
//! > These `Runtime*` types are better explained in
//! > [`crate::reference_docs::frame_runtime_types`].
//!
//! Then, we can rewrite the `transfer` dispatchable as follows:
#![doc = docify::embed!("./packages/guides/first-pallet/src/lib.rs", transfer_v2)]
//!
//! Notice how we now need to provide this `type RuntimeEvent` in our test runtime setup.
#![doc = docify::embed!("./packages/guides/first-pallet/src/lib.rs", runtime_v2)]
//!
//! In this snippet, the actual `RuntimeEvent` type (the right hand side of `type RuntimeEvent =
//! RuntimeEvent`) is generated by
//! [`construct_runtime`](frame::runtime::prelude::construct_runtime). An interesting way to
//! inspect this type is to see its definition in the rust-docs:
//! [`crate::guides::your_first_pallet::pallet_v2::tests::runtime_v2::RuntimeEvent`].
//!
//! ## What Next?
//!
//! The following topics were used in this guide, but not covered in depth. It is suggested to
//! study them subsequently:
//!
//! - [`crate::reference_docs::defensive_programming`].
//! - [`crate::reference_docs::frame_origin`].
//! - [`crate::reference_docs::frame_runtime_types`].
//! - The pallet we wrote in this guide was using `dev_mode`; learn more in [`pallet::config`].
//! - Learn more about the individual pallet items/macros, such as events, errors and calls, in
//!   [`frame::pallet_macros`].
//!
//! [`pallet::storage`]: frame_support::pallet_macros::storage
//! [`pallet::call`]: frame_support::pallet_macros::call
//! [`pallet::event`]: frame_support::pallet_macros::event
//! [`pallet::error`]: frame_support::pallet_macros::error
//! [`pallet::pallet`]: frame_support::pallet
//! [`pallet::config`]: frame_support::pallet_macros::config
//! [`pallet::generate_deposit`]: frame_support::pallet_macros::generate_deposit

#[docify::export]
#[frame::pallet(dev_mode)]
pub mod shell_pallet {
    use frame::prelude::*;

    #[pallet::config]
    pub trait Config: frame_system::Config {}

    #[pallet::pallet]
    pub struct Pallet<T>(_);
}

#[frame::pallet(dev_mode)]
pub mod pallet {
    use frame::prelude::*;

    #[docify::export]
    pub type Balance = u128;

    #[pallet::config]
    pub trait Config: frame_system::Config {}

    #[pallet::pallet]
    pub struct Pallet<T>(_);

    #[docify::export]
    /// Single storage item, of type `Balance`.
    #[pallet::storage]
    pub type TotalIssuance<T: Config> = StorageValue<_, Balance>;

    #[docify::export]
    /// A mapping from `T::AccountId` to `Balance`.
    #[pallet::storage]
    pub type Balances<T: Config> = StorageMap<_, _, T::AccountId, Balance>;

    #[docify::export(impl_pallet)]
    #[pallet::call]
    impl<T: Config> Pallet<T> {
        /// An unsafe mint that can be called by anyone. Not a great idea.
        pub fn mint_unsafe(
            origin: T::RuntimeOrigin,
            dest: T::AccountId,
            amount: Balance,
        ) -> DispatchResult {
            // ensure that this is a signed account, but we don't really check `_anyone`.
            let _anyone = ensure_signed(origin)?;

            // update the balances map. Notice how all `<T: Config>` remains as `<T>`.
            Balances::<T>::mutate(dest, |b| *b = Some(b.unwrap_or(0) + amount));
            // update total issuance.
            TotalIssuance::<T>::mutate(|t| *t = Some(t.unwrap_or(0) + amount));

            Ok(())
        }

        /// Transfer `amount` from `origin` to `dest`.
        pub fn transfer(
            origin: T::RuntimeOrigin,
            dest: T::AccountId,
            amount: Balance,
        ) -> DispatchResult {
            let sender = ensure_signed(origin)?;

            // ensure sender has enough balance, and if so, calculate what is left after `amount`.
            let sender_balance = Balances::<T>::get(&sender).ok_or("NonExistentAccount")?;
            if sender_balance < amount {
                return Err("InsufficientBalance".into());
            }
            let remainder = sender_balance - amount;

            // update sender and dest balances.
            Balances::<T>::mutate(dest, |b| *b = Some(b.unwrap_or(0) + amount));
            Balances::<T>::insert(&sender, remainder);

            Ok(())
        }
    }

    #[allow(unused)]
    impl<T: Config> Pallet<T> {
        #[docify::export]
        pub fn transfer_better(
            origin: T::RuntimeOrigin,
            dest: T::AccountId,
            amount: Balance,
        ) -> DispatchResult {
            let sender = ensure_signed(origin)?;

            let sender_balance = Balances::<T>::get(&sender).ok_or("NonExistentAccount")?;
            ensure!(sender_balance >= amount, "InsufficientBalance");
            let remainder = sender_balance - amount;

            // .. snip
            Ok(())
        }

        #[docify::export]
        /// Transfer `amount` from `origin` to `dest`.
        pub fn transfer_better_checked(
            origin: T::RuntimeOrigin,
            dest: T::AccountId,
            amount: Balance,
        ) -> DispatchResult {
            let sender = ensure_signed(origin)?;

            let sender_balance = Balances::<T>::get(&sender).ok_or("NonExistentAccount")?;
            let remainder = sender_balance.checked_sub(amount).ok_or("InsufficientBalance")?;

            // .. snip
            Ok(())
        }
    }

    #[cfg(any(test, doc))]
    pub(crate) mod tests {
        use crate::guides::your_first_pallet::pallet::*;

        #[docify::export(testing_prelude)]
        use frame::testing_prelude::*;

        pub(crate) const ALICE: u64 = 1;
        pub(crate) const BOB: u64 = 2;
        pub(crate) const CHARLIE: u64 = 3;

        #[docify::export]
        // This runtime is only used for testing, so it should be somewhere like `#[cfg(test)] mod
        // tests { .. }`
        mod runtime {
            use super::*;
            // we need to reference our `mod pallet` as an identifier to pass to
            // `construct_runtime`.
            // YOU HAVE TO CHANGE THIS LINE BASED ON YOUR TEMPLATE
            use crate::guides::your_first_pallet::pallet as pallet_currency;

            construct_runtime!(
                pub enum Runtime {
                    // ---^^^^^^ This is where `enum Runtime` is defined.
                    System: frame_system,
                    Currency: pallet_currency,
                }
            );

            #[derive_impl(frame_system::config_preludes::TestDefaultConfig)]
            impl frame_system::Config for Runtime {
                type Block = MockBlock<Runtime>;
                // within the pallet we just said `<T as frame_system::Config>::AccountId`; now we
                // finally specify it.
                type AccountId = u64;
            }

            // our simple pallet has nothing to be configured.
            impl pallet_currency::Config for Runtime {}
        }

        pub(crate) use runtime::*;

        #[allow(unused)]
        #[docify::export]
        fn new_test_state_basic() -> TestState {
            let mut state = TestState::new_empty();
            let accounts = vec![(ALICE, 100), (BOB, 100)];
            state.execute_with(|| {
                for (who, amount) in &accounts {
                    Balances::<Runtime>::insert(who, amount);
                    TotalIssuance::<Runtime>::mutate(|b| *b = Some(b.unwrap_or(0) + amount));
                }
            });

            state
        }

        #[docify::export]
        pub(crate) struct StateBuilder {
            balances: Vec<(<Runtime as frame_system::Config>::AccountId, Balance)>,
        }

        #[docify::export(default_state_builder)]
        impl Default for StateBuilder {
            fn default() -> Self {
                Self { balances: vec![(ALICE, 100), (BOB, 100)] }
            }
        }

        #[docify::export(impl_state_builder_add)]
        impl StateBuilder {
            fn add_balance(
                mut self,
                who: <Runtime as frame_system::Config>::AccountId,
                amount: Balance,
            ) -> Self {
                self.balances.push((who, amount));
                self
            }
        }

        #[docify::export(impl_state_builder_build)]
        impl StateBuilder {
            pub(crate) fn build_and_execute(self, test: impl FnOnce() -> ()) {
                let mut ext = TestState::new_empty();
                ext.execute_with(|| {
                    for (who, amount) in &self.balances {
                        Balances::<Runtime>::insert(who, amount);
                        TotalIssuance::<Runtime>::mutate(|b| *b = Some(b.unwrap_or(0) + amount));
                    }
                });

                ext.execute_with(test);

                // assertions that must always hold
                ext.execute_with(|| {
                    assert_eq!(
                        Balances::<Runtime>::iter().map(|(_, x)| x).sum::<u128>(),
                        TotalIssuance::<Runtime>::get().unwrap_or_default()
                    );
                })
            }
        }

        #[docify::export]
        #[test]
        fn first_test() {
            TestState::new_empty().execute_with(|| {
                // We expect Alice's account to have no funds.
                assert_eq!(Balances::<Runtime>::get(&ALICE), None);
                assert_eq!(TotalIssuance::<Runtime>::get(), None);

                // mint some funds into Alice's account.
                assert_ok!(Pallet::<Runtime>::mint_unsafe(
                    RuntimeOrigin::signed(ALICE),
                    ALICE,
                    100
                ));

                // re-check the above
                assert_eq!(Balances::<Runtime>::get(&ALICE), Some(100));
                assert_eq!(TotalIssuance::<Runtime>::get(), Some(100));
            })
        }

        #[docify::export]
        #[test]
        fn state_builder_works() {
            StateBuilder::default().build_and_execute(|| {
                assert_eq!(Balances::<Runtime>::get(&ALICE), Some(100));
                assert_eq!(Balances::<Runtime>::get(&BOB), Some(100));
                assert_eq!(Balances::<Runtime>::get(&CHARLIE), None);
                assert_eq!(TotalIssuance::<Runtime>::get(), Some(200));
            });
        }

        #[docify::export]
        #[test]
        fn state_builder_add_balance() {
            StateBuilder::default().add_balance(CHARLIE, 42).build_and_execute(|| {
                assert_eq!(Balances::<Runtime>::get(&CHARLIE), Some(42));
                assert_eq!(TotalIssuance::<Runtime>::get(), Some(242));
            })
        }

        #[test]
        #[should_panic]
        fn state_builder_duplicate_genesis_fails() {
            StateBuilder::default()
                .add_balance(CHARLIE, 42)
                .add_balance(CHARLIE, 43)
                .build_and_execute(|| {
                    assert_eq!(Balances::<Runtime>::get(&CHARLIE), None);
                    assert_eq!(TotalIssuance::<Runtime>::get(), Some(242));
                })
        }

        #[docify::export]
        #[test]
        fn mint_works() {
            StateBuilder::default().build_and_execute(|| {
                // given the initial state, when:
                assert_ok!(Pallet::<Runtime>::mint_unsafe(RuntimeOrigin::signed(ALICE), BOB, 100));

                // then:
                assert_eq!(Balances::<Runtime>::get(&BOB), Some(200));
                assert_eq!(TotalIssuance::<Runtime>::get(), Some(300));

                // given:
                assert_ok!(Pallet::<Runtime>::mint_unsafe(
                    RuntimeOrigin::signed(ALICE),
                    CHARLIE,
                    100
                ));

                // then:
                assert_eq!(Balances::<Runtime>::get(&CHARLIE), Some(100));
                assert_eq!(TotalIssuance::<Runtime>::get(), Some(400));
            });
        }

        #[docify::export]
        #[test]
        fn transfer_works() {
            StateBuilder::default().build_and_execute(|| {
                // given the initial state, when:
                assert_ok!(Pallet::<Runtime>::transfer(RuntimeOrigin::signed(ALICE), BOB, 50));

                // then:
                assert_eq!(Balances::<Runtime>::get(&ALICE), Some(50));
                assert_eq!(Balances::<Runtime>::get(&BOB), Some(150));
                assert_eq!(TotalIssuance::<Runtime>::get(), Some(200));

                // when:
                assert_ok!(Pallet::<Runtime>::transfer(RuntimeOrigin::signed(BOB), ALICE, 50));

                // then:
                assert_eq!(Balances::<Runtime>::get(&ALICE), Some(100));
                assert_eq!(Balances::<Runtime>::get(&BOB), Some(100));
                assert_eq!(TotalIssuance::<Runtime>::get(), Some(200));
            });
        }

        #[docify::export]
        #[test]
        fn transfer_from_non_existent_fails() {
            StateBuilder::default().build_and_execute(|| {
                // given the initial state, when:
                assert_err!(
                    Pallet::<Runtime>::transfer(RuntimeOrigin::signed(CHARLIE), ALICE, 10),
                    "NonExistentAccount"
                );

                // then nothing has changed.
                assert_eq!(Balances::<Runtime>::get(&ALICE), Some(100));
                assert_eq!(Balances::<Runtime>::get(&BOB), Some(100));
                assert_eq!(Balances::<Runtime>::get(&CHARLIE), None);
                assert_eq!(TotalIssuance::<Runtime>::get(), Some(200));
            });
        }
    }
}

#[frame::pallet(dev_mode)]
pub mod pallet_v2 {
    use super::pallet::Balance;
    use frame::prelude::*;

    #[docify::export(config_v2)]
    #[pallet::config]
    pub trait Config: frame_system::Config {
        /// The overarching event type of the runtime.
        #[allow(deprecated)]
        type RuntimeEvent: From<Event<Self>>
            + IsType<<Self as frame_system::Config>::RuntimeEvent>
            + TryInto<Event<Self>>;
    }

    #[pallet::pallet]
    pub struct Pallet<T>(_);

    #[pallet::storage]
    pub type Balances<T: Config> = StorageMap<_, _, T::AccountId, Balance>;

    #[pallet::storage]
    pub type TotalIssuance<T: Config> = StorageValue<_, Balance>;

    #[docify::export]
    #[pallet::error]
    pub enum Error<T> {
        /// Account does not exist.
        NonExistentAccount,
        /// Account does not have enough balance.
        InsufficientBalance,
    }

    #[docify::export]
    #[pallet::event]
    #[pallet::generate_deposit(pub(super) fn deposit_event)]
    pub enum Event<T: Config> {
        /// A transfer succeeded.
        Transferred { from: T::AccountId, to: T::AccountId, amount: Balance },
    }

    #[pallet::call]
    impl<T: Config> Pallet<T> {
        #[docify::export(transfer_v2)]
        pub fn transfer(
            origin: T::RuntimeOrigin,
            dest: T::AccountId,
            amount: Balance,
        ) -> DispatchResult {
            let sender = ensure_signed(origin)?;

            // ensure sender has enough balance, and if so, calculate what is left after `amount`.
            let sender_balance =
                Balances::<T>::get(&sender).ok_or(Error::<T>::NonExistentAccount)?;
            let remainder =
                sender_balance.checked_sub(amount).ok_or(Error::<T>::InsufficientBalance)?;

            Balances::<T>::mutate(&dest, |b| *b = Some(b.unwrap_or(0) + amount));
            Balances::<T>::insert(&sender, remainder);

            Self::deposit_event(Event::<T>::Transferred { from: sender, to: dest, amount });

            Ok(())
        }
    }

    #[cfg(any(test, doc))]
    pub mod tests {
        use super::{super::pallet::tests::StateBuilder, *};
        use frame::testing_prelude::*;
        const ALICE: u64 = 1;
        const BOB: u64 = 2;

        #[docify::export]
        pub mod runtime_v2 {
            use super::*;
            use crate::guides::your_first_pallet::pallet_v2 as pallet_currency;

            construct_runtime!(
                pub enum Runtime {
                    System: frame_system,
                    Currency: pallet_currency,
                }
            );

            #[derive_impl(frame_system::config_preludes::TestDefaultConfig)]
            impl frame_system::Config for Runtime {
                type Block = MockBlock<Runtime>;
                type AccountId = u64;
            }

            impl pallet_currency::Config for Runtime {
                type RuntimeEvent = RuntimeEvent;
            }
        }

        pub(crate) use runtime_v2::*;

        #[docify::export(transfer_works_v2)]
        #[test]
        fn transfer_works() {
            StateBuilder::default().build_and_execute(|| {
                // skip the genesis block, as events are not deposited there and we need them for
                // the final assertion.
                System::set_block_number(1);

                // given the initial state, when:
                assert_ok!(Pallet::<Runtime>::transfer(RuntimeOrigin::signed(ALICE), BOB, 50));

                // then:
                assert_eq!(Balances::<Runtime>::get(&ALICE), Some(50));
                assert_eq!(Balances::<Runtime>::get(&BOB), Some(150));
                assert_eq!(TotalIssuance::<Runtime>::get(), Some(200));

                // now we can also check that an event has been deposited:
                assert_eq!(
                    System::read_events_for_pallet::<Event<Runtime>>(),
                    vec![Event::Transferred { from: ALICE, to: BOB, amount: 50 }]
                );
            });
        }
    }
}

//! # Your first Runtime
//!
//! This guide will walk you through the steps to add your pallet to a runtime.
//!
//! The good news is that in [`crate::guides::your_first_pallet`] we already created a _test_
//! runtime, and a real runtime is not that much different!
//!
//! ## Setup
//!
//! A runtime shares a few setup requirements with a pallet:
//!
//! * importing the [`frame`], [`codec`], and [`scale_info`] crates.
//! * following the [`std` feature-gating](crate::pezkuwi_sdk::substrate#wasm-build) pattern.
//!
//! But, more specifically, it also contains:
//!
//! * a `build.rs` that uses [`substrate_wasm_builder`]. This entails declaring
//!   `[build-dependencies]` in the Cargo manifest file:
//!
//! ```ignore
//! [build-dependencies]
//! substrate-wasm-builder = { ... }
//! ```
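//!
//! The `build.rs` itself is typically tiny. A minimal sketch, assuming the `WasmBuilder` API of
//! recent `substrate-wasm-builder` releases:
//!
//! ```ignore
//! fn main() {
//!     // Compile this crate to wasm alongside the native build and embed the blob.
//!     substrate_wasm_builder::WasmBuilder::build_using_defaults();
//! }
//! ```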
|
||||
//!
|
||||
//! >Note that a runtime must always be one-runtime-per-crate. You cannot define multiple runtimes
|
||||
//! per rust crate.
|
||||
//!
|
||||
//! You can find the full code of this guide in [`first_runtime`].
|
||||
//!
|
||||
//! ## Your First Runtime
|
||||
//!
|
||||
//! The first new property of a real runtime that it must define its
|
||||
//! [`frame::runtime::prelude::RuntimeVersion`]:
|
||||
#![doc = docify::embed!("./packages/guides/first-runtime/src/lib.rs", VERSION)]
|
||||
//!
|
||||
//! The version contains a number of very important fields, such as `spec_version` and `spec_name`
|
||||
//! that play an important role in identifying your runtime and its version, more importantly in
|
||||
//! runtime upgrades. More about runtime upgrades in
|
||||
//! [`crate::reference_docs::frame_runtime_upgrades_and_migrations`].
|
||||
//!
|
||||
//! Then, a real runtime also contains the `impl` of all individual pallets' `trait Config` for
|
||||
//! `struct Runtime`, and a [`frame::runtime::prelude::construct_runtime`] macro that amalgamates
|
||||
//! them all.
|
||||
//!
|
||||
//! In the case of our example:
|
||||
#![doc = docify::embed!("./packages/guides/first-runtime/src/lib.rs", our_config_impl)]
|
||||
//!
|
||||
//! In this example, we bring in a number of other pallets from [`frame`] into the runtime, each of
|
||||
//! their `Config` need to be implemented for `struct Runtime`:
|
||||
#![doc = docify::embed!("./packages/guides/first-runtime/src/lib.rs", config_impls)]
|
||||
//!
|
||||
//! Notice how we use [`frame::pallet_macros::derive_impl`] to provide "default" configuration items
|
||||
//! for each pallet. Feel free to dive into the definition of each default prelude (eg.
|
||||
//! [`frame::prelude::frame_system::pallet::config_preludes`]) to learn more which types are exactly
|
||||
//! used.
|
||||
//!
|
||||
//! Recall that in test runtime in [`crate::guides::your_first_pallet`], we provided `type AccountId
|
||||
//! = u64` to `frame_system`, while in this case we rely on whatever is provided by
|
||||
//! [`SolochainDefaultConfig`], which is indeed a "real" 32 byte account id.
|
||||
//!
|
||||
//! Then, a familiar instance of `construct_runtime` amalgamates all of the pallets:
|
||||
#![doc = docify::embed!("./packages/guides/first-runtime/src/lib.rs", cr)]
|
||||
//!
|
||||
//! Recall from [`crate::reference_docs::wasm_meta_protocol`] that every (real) runtime needs to
|
||||
//! implement a set of runtime APIs that will then let the node to communicate with it. The final
|
||||
//! steps of crafting a runtime are related to achieving exactly this.
|
||||
//!
|
||||
//! First, we define a number of types that eventually lead to the creation of an instance of
|
||||
//! [`frame::runtime::prelude::Executive`]. The executive is a handy FRAME utility that, through
|
||||
//! amalgamating all pallets and further types, implements some of the very very core pieces of the
|
||||
//! runtime logic, such as how blocks are executed and other runtime-api implementations.
|
||||
#![doc = docify::embed!("./packages/guides/first-runtime/src/lib.rs", runtime_types)]
|
||||
//!
|
||||
//! Finally, we use [`frame::runtime::prelude::impl_runtime_apis`] to implement all of the runtime
|
||||
//! APIs that the runtime wishes to expose. As you will see in the code, most of these runtime API
|
||||
//! implementations are merely forwarding calls to `RuntimeExecutive` which handles the actual
|
||||
//! logic. Given that the implementation block is somewhat large, we won't repeat it here. You can
|
||||
//! look for `impl_runtime_apis!` in [`first_runtime`].
|
||||
//!
|
||||
//! ```ignore
|
||||
//! impl_runtime_apis! {
|
||||
//! impl apis::Core<Block> for Runtime {
|
||||
//! fn version() -> RuntimeVersion {
|
||||
//! VERSION
|
||||
//! }
|
||||
//!
|
||||
//! fn execute_block(block: Block) {
|
||||
//! RuntimeExecutive::execute_block(block)
|
||||
//! }
|
||||
//!
|
||||
//! fn initialize_block(header: &Header) -> ExtrinsicInclusionMode {
|
||||
//! RuntimeExecutive::initialize_block(header)
|
||||
//! }
|
||||
//! }
|
||||
//!
|
||||
//! // many more trait impls...
|
||||
//! }
|
||||
//! ```
//!
//! And that more or less covers the details of how you would write a real runtime!
//!
//! Once you compile a crate that contains a runtime as above, simply running `cargo build` will
//! generate the wasm blobs and place them under `./target/release/wbuild`, as explained
//! [here](crate::pezkuwi_sdk::substrate#wasm-build).
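//!
//! As an illustration (the exact file names depend on your runtime crate's name; the layout shown
//! here is an assumption based on the conventional wasm-builder output), the build artifacts
//! typically look like:
//!
//! ```ignore
//! # after `cargo build --release`:
//! target/release/wbuild/<runtime-crate>/
//! ├── <runtime_crate>.wasm
//! ├── <runtime_crate>.compact.wasm
//! └── <runtime_crate>.compact.compressed.wasm
//! ```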
//!
//! ## Genesis Configuration
//!
//! Every runtime specifies a number of runtime APIs that help the outer world (most notably, a
//! `node`) know what the genesis state of this runtime is. These APIs are then used to generate
//! what is known as a **Chain Specification, or chain spec for short**. A chain spec is the
//! primary way to run a new chain.
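//!
//! Concretely, a chain spec is a JSON file. A heavily trimmed sketch (field names follow the
//! common Substrate chain spec layout; treat all values as placeholders) might look like:
//!
//! ```ignore
//! {
//!     "name": "Development",
//!     "id": "dev",
//!     "chainType": "Development",
//!     "genesis": { ... }
//! }
//! ```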
//!
//! These APIs are defined in [`sp_genesis_builder`], and are re-exposed as a part of
//! [`frame::runtime::apis`]. Therefore, the implementation blocks can be found inside of
//! `impl_runtime_apis!`, similar to:
//!
//! ```ignore
//! impl_runtime_apis! {
//!     impl apis::GenesisBuilder<Block> for Runtime {
//!         fn build_state(config: Vec<u8>) -> GenesisBuilderResult {
//!             build_state::<RuntimeGenesisConfig>(config)
//!         }
//!
//!         fn get_preset(id: &Option<PresetId>) -> Option<Vec<u8>> {
//!             get_preset::<RuntimeGenesisConfig>(id, self::genesis_config_presets::get_preset)
//!         }
//!
//!         fn preset_names() -> Vec<PresetId> {
//!             crate::genesis_config_presets::preset_names()
//!         }
//!     }
//! }
//! ```
//!
//! The implementation of these functions can naturally vary from one runtime to the other, but the
//! overall pattern is common. For the case of this runtime, we do the following:
//!
//! 1. Expose one non-default preset, namely [`sp_genesis_builder::DEV_RUNTIME_PRESET`]. This means
//! our runtime has two "presets" of genesis state in total: `DEV_RUNTIME_PRESET` and `None`.
#![doc = docify::embed!("./packages/guides/first-runtime/src/lib.rs", preset_names)]
//!
//! 2. For `build_state` and `get_preset`, we use the helper functions provided by FRAME:
//!
//! * [`frame::runtime::prelude::build_state`] and [`frame::runtime::prelude::get_preset`].
//!
//! Indeed, our runtime needs to specify what its `DEV_RUNTIME_PRESET` genesis state should be like:
#![doc = docify::embed!("./packages/guides/first-runtime/src/lib.rs", development_config_genesis)]
//!
//! For more in-depth information about `GenesisConfig`, `ChainSpec`, the `GenesisBuilder` API and
//! `chain-spec-builder`, see [`crate::reference_docs::chain_spec_genesis`].
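//!
//! As a quick taste (the exact flags are an assumption based on typical `chain-spec-builder`
//! usage and may differ between versions), a chain spec can be produced from the compiled runtime
//! and one of its named presets roughly like so:
//!
//! ```ignore
//! chain-spec-builder create -r <path-to-runtime>.wasm named-preset development
//! ```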
//!
//! ## Next Step
//!
//! See [`crate::guides::your_first_node`].
//!
//! ## Further Reading
//!
//! 1. To learn more about signed extensions, see [`crate::reference_docs::signed_extensions`].
//! 2. `AllPalletsWithSystem` is also generated by `construct_runtime`, as explained in
//! [`crate::reference_docs::frame_runtime_types`].
//! 3. `Executive` supports more generics, most notably allowing the runtime to configure more
//! runtime migrations, as explained in
//! [`crate::reference_docs::frame_runtime_upgrades_and_migrations`].
//! 4. Learn more about adding and implementing runtime apis in
//! [`crate::reference_docs::custom_runtime_api_rpc`].
//! 5. To see a complete example of a runtime+pallet that is similar to this guide, please see
//! [`crate::pezkuwi_sdk::templates`].
//!
//! [`SolochainDefaultConfig`]: struct@frame_system::pallet::config_preludes::SolochainDefaultConfig

#[cfg(test)]
mod tests {
	use cmd_lib::run_cmd;

	const FIRST_RUNTIME: &'static str = "pezkuwi-sdk-docs-first-runtime";

	#[docify::export_content]
	#[test]
	fn build_runtime() {
		run_cmd!(
			cargo build --release -p $FIRST_RUNTIME
		)
		.expect("Failed to run command");
	}
}