into main

This commit is contained in:
4meta5
2023-12-15 16:51:43 -05:00
13 changed files with 933 additions and 21 deletions
+2 -4
View File
@@ -1,9 +1,7 @@
name: 🐞 Bug report
description: Create a report to help us improve
title: '🐞 [Bug]: '
title: "🐞 [Bug]: "
labels: ["bug"]
assignees:
- ozgunozerk
body:
- type: markdown
attributes:
@@ -20,7 +18,7 @@ body:
id: platform
attributes:
label: platform
description: On which operating system did this bug emerged?
description: On which operating system did this bug emerge?
options:
- label: linux
required: false
@@ -2,8 +2,6 @@ name: 🎁 Feature Request
description: Suggest an idea for this project ⚡️
title: "🎁 [Feature Request]: "
labels: ["enhancement"]
assignees:
- ozgunozerk
body:
- type: markdown
attributes:
@@ -24,5 +22,3 @@ body:
options:
- label: I agree to follow this project's Contribution Guidelines
required: true
@@ -1,4 +1,4 @@
name: "ci tests"
name: "cargo tests"
on:
push:
@@ -7,9 +7,11 @@ on:
- plain-cumulus-template
paths-ignore:
- "**.md"
- "**.adoc"
pull_request:
paths-ignore:
- "**.md"
- "**.adoc"
workflow_dispatch:
inputs:
test-macos-and-windows:
@@ -75,4 +77,4 @@ jobs:
- name: Upload coverage reports to Codecov
uses: codecov/codecov-action@v3
env:
CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}
CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}
+1 -1
View File
@@ -5,4 +5,4 @@ nav:
- modules/ROOT/nav.adoc
asciidoc:
attributes:
page-sidebar-collapse-default: true
page-sidebar-collapse-default: false
+6 -1
View File
@@ -1,4 +1,9 @@
* General Guides
* Runtime Descriptions
* Pallet Specifications
* [Pallet Transaction Payment](./pages/pallet_transaction_payment.adoc)
** xref:pallets/pallet_transaction_payment.adoc[pallet_transaction_payment]
** xref:pallets/proxy.adoc[pallet_proxy]
** xref:pallets/message-queue.adoc[pallet_message_queue]
** xref:pallets/aura_ext.adoc[cumulus_aura_ext]
** xref:pallets/collator-selection.adoc[collator_selection]
** xref:pallets/parachain-system.adoc[parachain_system]
@@ -0,0 +1,47 @@
:source-highlighter: highlight.js
:highlightjs-languages: rust
:github-icon: pass:[<svg class="icon"><use href="#github-icon"/></svg>]
= cumulus_aura_ext
Branch/Release: `release-polkadot-v1.3.0`
== Purpose
This pallet integrates the parachain's own block production mechanism (for example, AuRa) into the Cumulus parachain system. It allows:
- managing the unincluded blocks from the current slot
- validating the produced block against the relay chain
== Configuration and Integration link:https://github.com/paritytech/polkadot-sdk/tree/release-polkadot-v1.3.0/cumulus/pallets/aura-ext[{github-icon},role=heading-link]
This pallet has no special config and no dispatchables, but you need to integrate it with the `parachain-system` crate:
=== Integrate `BlockExecutor`
When you invoke the `register_validate_block` macro, provide `cumulus_pallet_aura_ext::BlockExecutor` to it so that `aura-ext` can validate the blocks produced by `aura`:
[source, rust]
----
cumulus_pallet_parachain_system::register_validate_block! {
Runtime = Runtime,
BlockExecutor = cumulus_pallet_aura_ext::BlockExecutor::<Runtime, Executive>,
}
----
=== Integrate `ConsensusHook`
You might also want to manage consensus externally and control the not-yet-included segment (its capacity, velocity, etc.). For this, `aura-ext` provides `FixedVelocityConsensusHook`, which checks whether we are still within the limits for the slot.
[source, rust]
----
impl cumulus_pallet_parachain_system::Config for Runtime {
...
type ConsensusHook = cumulus_pallet_aura_ext::FixedVelocityConsensusHook<
Runtime,
RELAY_CHAIN_SLOT_DURATION_MILLIS,
BLOCK_PROCESSING_VELOCITY,
UNINCLUDED_SEGMENT_CAPACITY,
>;
}
----
@@ -0,0 +1,197 @@
:source-highlighter: highlight.js
:highlightjs-languages: rust
:github-icon: pass:[<svg class="icon"><use href="#github-icon"/></svg>]
= collator_selection
Branch/Release: `release-polkadot-v1.3.0`
== Purpose
This pallet manages the set of collators for each session and provisions the next session with an up-to-date list of collators.
== Glossary
- _Collator_ — a node that gathers collation information and communicates it to the relay chain to produce a block.
- _Pot_ — the stake used to reward block authors; a block author receives half of the current stake.
- _Candidate_ — a self-nominated collator who deposited a candidacy bond to participate in the collation process.
- _Candidacy Bond_ — a fixed amount that must be deposited to become a candidate.
- _Invulnerable_ — a permissioned collator that will always be part of the collation process.
== Config link:https://github.com/paritytech/polkadot-sdk/blob/release-polkadot-v1.3.0/cumulus/pallets/collator-selection/src/lib.rs#L118[{github-icon},role=heading-link]
* Pallet-specific configs
** `UpdateOrigin` — defines the list of origins that are able to modify the settings of collators (e.g. set and remove list of invulnerables, desired candidates, candidacy bond). This type should implement the trait `EnsureOrigin`.
** `PotId` — id of account that will hold a Pot.
** `MaxCandidates` — maximum number of candidates
** `MinEligibleCollators` — minimum number of collators to collect for the session
** `MaxInvulnerables` — maximum number of invulnerables
** `KickThreshold` — the number of blocks since a candidate collator's last produced block after which it is removed from the candidate list and excluded from collation for the next session.
** `ValidatorId` — the validator id type
** `ValidatorIdOf` — a type that converts an `AccountId` to a `ValidatorId`
** `ValidatorRegistration` — a type that checks whether an `AccountId` has registered validator keys.
* Common configs:
** `RuntimeEvent`
** `Currency`
** `WeightInfo`
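The `KickThreshold` rule above can be sketched as a minimal, hypothetical model. The function name and the exact boundary condition are illustrative assumptions; the real pallet tracks each candidate's last authored block in storage and compares it against the current block number.

[source,rust]
----
// Hypothetical sketch of the `KickThreshold` rule: a candidate that has not
// authored a block within the last `kick_threshold` blocks is removed.
// The `>` boundary is an assumption for illustration, not the pallet's code.
fn should_kick(current_block: u32, last_authored: u32, kick_threshold: u32) -> bool {
    current_block.saturating_sub(last_authored) > kick_threshold
}

fn main() {
    // Candidate last authored at block 100; the threshold is 10 blocks.
    assert!(!should_kick(110, 100, 10)); // within the threshold: kept
    assert!(should_kick(111, 100, 10));  // past the threshold: removed
    println!("ok");
}
----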
== Dispatchables link:https://github.com/paritytech/polkadot-sdk/blob/release-polkadot-v1.3.0/cumulus/pallets/collator-selection/src/lib.rs#L301[{github-icon},role=heading-link]
[.contract-item]
[[set_invulnerables]]
==== `[.contract-item-name]#++set_invulnerables++#`
[source,rust]
----
pub fn set_invulnerables(new: Vec<T::AccountId>)
----
Sets a new list of invulnerable collators. The call must be signed, and its origin must fulfill the `EnsureOrigin` check.
IMPORTANT: This call does not maintain the mutual exclusiveness of candidates and invulnerables lists.
**Params:**
* `new: Vec<T::AccountId>` — a list of AccountIds of new invulnerables
**Errors:**
- `BadOrigin` — the caller's origin does not fulfill the `Config::EnsureOrigin` check.
- `TooFewEligibleCollators` — an empty invulnerables list was submitted and the number of candidates is smaller than `Config::MinEligibleCollators`
- `TooManyInvulnerables` — the submitted list is longer than `Config::MaxInvulnerables`
**Events:**
- `InvalidInvulnerableSkipped(account_id)` — a submitted invulnerable has no validator key, or it is not registered
- `NewInvulnerables(invulnerables)` — a new invulnerables list was set
[.contract-item]
[[set_desired_candidates]]
==== `[.contract-item-name]#++set_desired_candidates++#`
[source,rust]
----
pub fn set_desired_candidates(max: u32)
----
Sets a new maximum number of candidates. If it is higher than `Config::MaxCandidates`, consider rerunning the benchmarks. The caller's origin must fulfill the `Config::EnsureOrigin` check.
**Params:**
- `max: u32` — new desired candidates number
**Errors:**
- `BadOrigin` — the caller's origin does not fulfill the `Config::EnsureOrigin` check.
**Events:**
- `NewDesiredCandidates(desired_candidates)`
[.contract-item]
[[set_candidacy_bond]]
==== `[.contract-item-name]#++set_candidacy_bond++#`
[source,rust]
----
pub fn set_candidacy_bond(bond: u32)
----
Sets the deposit amount required to become a collator candidate.
**Params:**
- `bond: u32` — the new candidacy bond amount
**Errors:**
- `BadOrigin` — the caller's origin does not fulfill the `Config::EnsureOrigin` check.
**Events:**
- `NewCandidacyBond(bond_amount)`
[.contract-item]
[[register_as_candidate]]
==== `[.contract-item-name]#++register_as_candidate++#`
[source,rust]
----
pub fn register_as_candidate()
----
Registers the caller as a collator candidate. The call must be signed, and the caller must have registered session keys and enough funds for the candidacy bond deposit. If successful, the candidate will participate in the collation process starting from the next session.
**Errors:**
- `BadOrigin` — call is not signed
- `TooManyCandidates` — the number of candidates is already at its maximum (see the `desired_candidates` getter)
- `AlreadyInvulnerable` — the caller is already in the invulnerables list and does not need to be a candidate to collate
- `NoAssociatedValidatorId` — the caller does not have a session key.
- `ValidatorNotRegistered` — the caller's session key is not registered
- `AlreadyCandidate` — the caller is already in the candidate list
- `InsufficientBalance` — the caller does not have enough funds for the candidacy bond deposit
- `LiquidityRestrictions` — account restrictions (like frozen funds or vesting) prevent the deposit from being reserved
- `Overflow` — reserved funds overflow the currency type; should not happen in usual scenarios.
**Events:**
- `CandidateAdded(account_id, deposit)`
[.contract-item]
[[leave_intent]]
==== `[.contract-item-name]#++leave_intent++#`
[source,rust]
----
pub fn leave_intent()
----
Unregisters the caller as a collator candidate. If successful, the deposit is returned, and from the next session change the collator no longer participates in the collation process. This call must be signed.
**Errors:**
- `BadOrigin` — call is not signed
- `TooFewEligibleCollators` — unregistering would leave fewer than `Config::MinEligibleCollators` collators for the next session, so the call is rejected.
- `NotCandidate` — the caller is not in the candidate list; there is nothing to unregister
**Events:**
- `CandidateRemoved(account_id)`
[.contract-item]
[[add_invulnerable]]
==== `[.contract-item-name]#++add_invulnerable++#`
[source,rust]
----
pub fn add_invulnerable(who: T::AccountId)
----
Adds a new invulnerable. The call must be signed, and the caller must pass the `Config::EnsureOrigin` check. If the new invulnerable was previously a candidate, it is removed from the candidate list.
*Params:*
- `who: T::AccountId` — an account to add to invulnerables list
**Errors:**
- `BadOrigin` — the caller's origin does not fulfill the `Config::EnsureOrigin` check.
- `NoAssociatedValidatorId` — the new invulnerable does not have a session key.
- `ValidatorNotRegistered` — the new invulnerable's session key is not registered
- `AlreadyInvulnerable` — `who` is already in the invulnerables list
**Events:**
- `InvulnerableAdded(account_id)`
[.contract-item]
[[remove_invulnerable]]
==== `[.contract-item-name]#++remove_invulnerable++#`
[source,rust]
----
pub fn remove_invulnerable(who: T::AccountId)
----
Removes an invulnerable from the list. The call must be signed, and the caller must pass the `Config::EnsureOrigin` check.
*Params:*
- `who: T::AccountId` — the account to remove from the invulnerables list
**Errors:**
- `BadOrigin` — the caller's origin does not fulfill the `Config::EnsureOrigin` check.
- `TooFewEligibleCollators` — the number of invulnerables would drop below `Config::MinEligibleCollators` after the removal.
- `NotInvulnerable` — `who` is not an invulnerable
**Events:**
- `InvulnerableRemoved(account_id)`
@@ -0,0 +1,155 @@
:source-highlighter: highlight.js
:highlightjs-languages: rust
:github-icon: pass:[<svg class="icon"><use href="#github-icon"/></svg>]
= pallet_message_queue
Branch/Release: `release-polkadot-v1.3.0`
== Purpose
Flexible FRAME pallet for implementing message queues. This pallet can also initiate message processing using the `MessageProcessor` (see `Config`).
== Config link:https://github.com/paritytech/polkadot-sdk/blob/release-polkadot-v1.3.0/substrate/frame/message-queue/src/lib.rs#L445[{github-icon},role=heading-link]
* Pallet-specific configs:
** `MessageProcessor` -- Processor for messages
** `Size` -- Page/heap size type.
** `QueueChangeHandler` -- Code to be called when a message queue changes - either with items introduced or removed.
** `QueuePausedQuery` -- Queried by the pallet to check whether a queue can be serviced.
** `HeapSize` -- The size of the page; this also serves as the maximum message size which can be sent.
** `MaxStale` -- The maximum number of stale pages (i.e. of overweight messages) allowed before culling can happen. Once there are more stale pages than this, then historical pages may be dropped, even if they contain unprocessed overweight messages.
** `ServiceWeight` -- The amount of weight (if any) which should be provided to the message queue for servicing enqueued items. This may be legitimately `None` in the case that you will call `ServiceQueues::service_queues` manually.
* Common configs:
** `RuntimeEvent` -- The overarching event type.
** `WeightInfo` -- Weight information for extrinsics in this pallet.
== Dispatchables link:https://github.com/paritytech/polkadot-sdk/blob/release-polkadot-v1.3.0/substrate/frame/message-queue/src/lib.rs#L594[{github-icon},role=heading-link]
[.contract-item]
[[execute_overweight]]
==== `[.contract-item-name]#++execute_overweight++#`
[source,rust]
----
pub fn execute_overweight(
origin: OriginFor<T>,
message_origin: MessageOriginOf<T>,
page: PageIndex,
index: T::Size,
weight_limit: Weight,
) -> DispatchResultWithPostInfo
----
Execute an overweight message.
NOTE: Temporary processing errors will be propagated, whereas permanent errors are treated as a success condition.
IMPORTANT: The `weight_limit` passed to this function does not affect the `weight_limit` set in other parts of the pallet.
**Params:**
* `origin: OriginFor<T>` -- Must be `Signed`.
* `message_origin: MessageOriginOf<T>` -- indicates where the message to be executed arrived from (used for finding the respective queue that this message belongs to).
* `page: PageIndex` -- The page in the queue in which the message to be executed is sitting.
* `index: T::Size` -- The index into the queue of the message to be executed.
* `weight_limit: Weight` -- The maximum amount of weight allowed to be consumed in the execution
of the message. This weight limit does not affect other parts of the pallet, and it is only used for this call of `execute_overweight`.
**Errors:**
* `QueuePaused` -- if the queue to which the overweight message belongs is paused.
* `NoPage` -- if the page to which the overweight message belongs does not exist.
* `NoMessage` -- if the overweight message could not be found.
* `Queued` -- if the overweight message is already scheduled for future execution.
(A message is labeled overweight after the pallet has attempted its execution and failed due to
insufficient weight; once marked as overweight, the message is excluded from normal queue processing.)
* `AlreadyProcessed` -- if the overweight message was already processed.
* `InsufficientWeight` -- if the `weight_limit` is not enough to execute the overweight message.
* `TemporarilyUnprocessable` -- if the message processor `Yield`s execution of this message. This means processing should be reattempted later.
**Events:**
* `ProcessingFailed(id, origin, error)`
* `Processed(id, origin, weight_used, success)`
[.contract-item]
[[reap_page]]
==== `[.contract-item-name]#++reap_page++#`
[source,rust]
----
pub fn reap_page(
origin: OriginFor<T>,
message_origin: MessageOriginOf<T>,
page_index: PageIndex,
) -> DispatchResult
----
Remove a page which has no more messages remaining to be processed or is stale.
**Params:**
* `origin: OriginFor<T>` -- Must be `Signed`.
* `message_origin: MessageOriginOf<T>` -- indicates where the messages arrived from (used for finding the respective queue that this page belongs to).
* `page_index: PageIndex` -- The page to be reaped
**Errors:**
* `NotReapable` -- if the page is not stale yet.
* `NoPage` -- if the page does not exist.
**Events:**
* `PageReaped(origin, index)` -- the queue (origin), and the index of the page
== Important Mentions and FAQs
IMPORTANT: The pallet uses `sp_weights::WeightMeter` to manually track weight consumption and always stay within
the required limit. This implies that the message processor hook can calculate the weight of a message without executing it.
==== How does this pallet work under the hood?
- This pallet utilizes queues to store, enqueue, dequeue, and process messages.
- Queues are stored in `BookStateFor` storage, with their origin serving as the key (so, we can identify queues by their origins).
- Each message has an origin (`message_origin`) that determines which queue the message is stored in.
- Messages are stored by being appended to the last `Page` of the Queue's Book. A Queue is a book along with the MessageOrigin for that book.
- Each book keeps track of its pages, and the state (begin, end, count, etc.)
- Each page also keeps track of its messages, and the state (remaining, first, last, etc.)
- `ReadyRing` contains all ready queues as a doubly-linked list. A queue is ready if it contains at least one message that can be processed.
- `ServiceHead` is a pointer to the `ReadyRing`, pointing at the next `Queue` to be serviced. Service means: attempting to process the messages.
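The storage layout above can be sketched as a toy model. The types below are illustrative assumptions, not the pallet's real `BookStateFor`/`Page` definitions (which carry extra bookkeeping such as begin/end indices, remaining counts, and heap-encoded messages):

[source,rust]
----
use std::collections::HashMap;

// Toy page: just a list of message payloads.
#[derive(Default)]
struct Page {
    messages: Vec<Vec<u8>>,
}

// Toy book: an ordered list of pages.
#[derive(Default)]
struct Book {
    pages: Vec<Page>,
}

impl Book {
    // New messages are always appended to the last page, as described above;
    // a new page is opened when the last one is full (capacity is hypothetical).
    fn enqueue(&mut self, msg: &[u8], page_capacity: usize) {
        if self.pages.last().map_or(true, |p| p.messages.len() >= page_capacity) {
            self.pages.push(Page::default());
        }
        self.pages.last_mut().unwrap().messages.push(msg.to_vec());
    }
}

fn main() {
    // `BookStateFor`-like map: message origin (a plain u32 here) -> book.
    let mut books: HashMap<u32, Book> = HashMap::new();
    for msg in [b"a".as_slice(), b"b".as_slice(), b"c".as_slice()] {
        books.entry(7).or_default().enqueue(msg, 2); // capacity 2 per page
    }
    // The third message did not fit on the first page, so a second was opened.
    assert_eq!(books[&7].pages.len(), 2);
    println!("ok");
}
----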
*Execution:*
* `service_queues` → returns the weight consumed by this function
** a queue is processed until either:
*** there are no messages left
**** if there are no messages left in the queue, we won't stop; the service head proceeds with the next queue
*** or the weight is insufficient
**** if the weight is insufficient for the next message in the queue, the service head tries to switch to the next queue and process messages from there. This goes on until it has visited every queue and no message can be processed. Only then does it stop.
** on each call to `service_queues`, we bump the service head and start processing the next queue instead of the previous one, to prevent starvation
*** Example:
**** the service head is on queue 2
**** we call `service_queues`, which bumps the service head to queue 3
**** we process messages from queue 3,
***** but the weight is insufficient for the next message in queue 3,
***** so we switch to queue 4 (we don't bump the service head for that),
***** the weight is insufficient for queue 4 and the other queues as well, so we make a round trip across the queues until we reach queue 3 again, and stop.
**** the `service_queues` call finishes
**** the service head is on queue 3
**** we call `service_queues` again, which bumps the service head to queue 4 (although there are still messages left in queue 3)
**** we continue processing from queue 4.
*** however, to preserve priority, if we switched to a new queue due to weight, we don't bump the service head. So the next call starts on the queue where we left off.
*** Example:
**** the service head is on queue 2
**** we call `service_queues`, which bumps the service head to queue 3
**** we process messages from queue 3,
***** but the weight is insufficient for the next message in queue 3,
***** so we switch to queue 4 (we don't bump the service head for that),
***** we process a message from queue 4,
***** the weight is insufficient for queue 4 and the other queues as well, so we make a round trip across the queues until we reach queue 3 again, and stop.
**** the `service_queues` call finishes
**** the service head is on queue 3 (there are still messages in queue 3)
**** we call `service_queues` again, which bumps the service head to queue 4
**** we continue processing from queue 4, although we were already processing queue 4 in the last call
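The service-head behaviour described above can be simulated with a toy model. The assumptions are loud: every message costs exactly one unit of weight, queues are plain vectors, and the struct and method names are invented for illustration; the real pallet's bookkeeping is far richer.

[source,rust]
----
// Toy simulation of the anti-starvation round-robin described above.
struct Sim {
    queues: Vec<Vec<&'static str>>,
    head: usize,
}

impl Sim {
    fn service_queues(&mut self, mut weight: u32) -> Vec<&'static str> {
        let n = self.queues.len();
        self.head = (self.head + 1) % n; // bump once per call (anti-starvation)
        let mut processed = Vec::new();
        let mut idx = self.head;
        for _ in 0..n {
            // Drain the current queue while the weight budget allows.
            while weight > 0 && !self.queues[idx].is_empty() {
                processed.push(self.queues[idx].remove(0));
                weight -= 1;
            }
            if weight == 0 {
                break;
            }
            idx = (idx + 1) % n; // switch queue; the head is NOT bumped here
        }
        processed
    }
}

fn main() {
    let mut sim = Sim { queues: vec![vec!["a"], vec!["b1", "b2"], vec!["c"]], head: 0 };
    // First call: the head bumps 0 -> 1; a budget of 2 drains queue 1.
    assert_eq!(sim.service_queues(2), vec!["b1", "b2"]);
    assert_eq!(sim.head, 1);
    // Second call: the head bumps 1 -> 2; queue 2 yields "c", then we switch
    // to queue 0 without bumping the head and take "a".
    assert_eq!(sim.service_queues(2), vec!["c", "a"]);
    assert_eq!(sim.head, 2);
    println!("ok");
}
----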
@@ -37,12 +37,17 @@ chance to be included by the transaction queue.
- _length fee_: A fee proportional to the encoded length of the transaction.
- _tip_: An optional tip. Tip increases the priority of the transaction, giving it a higher chance to be included by the transaction queue.
== config
== Config link:https://github.com/paritytech/polkadot-sdk/blob/release-polkadot-v1.3.0/substrate/frame/pallet-transaction-payment/src/lib.rs#L445[{github-icon},role=heading-link]
- [`Config::WeightToFee`]: mapping between the smallest unit of weight and smallest unit of fee
- [`Config::FeeMultiplierUpdate`]: A means of updating the fee for the next block, via defining a multiplier, based on the
* Pallet-specific configs:
** `WeightToFee` -- mapping between the smallest unit of weight and smallest unit of fee
** `FeeMultiplierUpdate` -- A means of updating the fee for the next block, via defining a multiplier, based on the
final state of the chain at the end of the previous block. Possible values include `ConstantFee`, `SlowAdjustingFee`, `FastAdjustingFee`, etc.
- [`Config::OnChargeTransaction`]: A means of defining the storage and state changes associated with paying transaction fees.
** `OnChargeTransaction` -- A means of defining the storage and state changes associated with paying transaction fees.
* Common configs:
** `RuntimeEvent`
** `Currency`
** `WeightInfo`
== Dispatchables
@@ -0,0 +1,136 @@
:source-highlighter: highlight.js
:highlightjs-languages: rust
:github-icon: pass:[<svg class="icon"><use href="#github-icon"/></svg>]
= parachain_system
Branch/Release: `release-polkadot-v1.3.0`
== Purpose
This pallet is a core element of each parachain. It will:
- Aggregate information about built blocks
- Process binary code upgrades
- Process incoming messages from both the relay chain and other parachains (if a channel is established between them)
- Send outgoing messages to the relay chain and other parachains
- Build collation info when requested by collator
== Glossary
- _Validation Code_ — the runtime binary that runs in the parachain
- _Validation Data_ — information passed from the relay chain to validate the next block
- _(Aggregated) Unincluded Segment_ — sequence of blocks that were not yet included into the relay chain state transition
== Config link:https://github.com/paritytech/polkadot-sdk/blob/release-polkadot-v1.3.0/cumulus/pallets/parachain-system/src/lib.rs#L207[{github-icon},role=heading-link]
* Pallet-specific configs:
** `OnSystemEvent` — a handler that is called when new validation data is set (once each block). The new validation data is also passed to it. See `trait OnSystemEvent` for more details.
** `SelfParaId` — getter for this chain's parachain id
** `OutboundXcmpMessageSource` — the source of outgoing XCMP messages. It is queried in `finalize_block` and later included in the collation information
// it was added after 1.3.0. I am leaving it commented for the future updates
// ** `DmpQueue` — a handler for the incoming *downward* messages from relay chain
** `ReservedDmpWeight` — weight reserved for DMP message processing. This config appears to be unused, as the function that processes these messages (`enqueue_inbound_downward_messages`) returns the weight it used.
** `XcmpMessageHandler` — a handler for the incoming _horizontal_ messages from other parachains
** `ReservedXcmpWeight` — the default weight limit for XCMP message processing. May be overridden by the `ReservedXcmpWeightOverride` storage item. If the incoming messages in a block would exceed the weight limit, they won't be processed.
** `CheckAssociatedRelayNumber` — a type that implements `trait CheckAssociatedRelayNumber`. Currently there are three implementations: no check (`AnyRelayNumber`), strict increase (`RelayNumberStrictlyIncreases`), and monotonic increase (`RelayNumberMonotonicallyIncreases`). It is needed to maintain an ordering between relay chain and parachain blocks.
** `ConsensusHook` — a feature-gated config for the management of the unincluded segment. Requires an implementation of `trait ConsensusHook`. There are several implementations of it, in the `parachain-system` crate (`FixedCapacityUnincludedSegment`) and in the `aura-ext` crate (`FixedVelocityConsensusHook`). It is needed to maintain the logic of segment-length handling.
* Common parameters for all pallets:
** `RuntimeEvent`
** `WeightInfo`
== Dispatchables link:https://github.com/paritytech/polkadot-sdk/blob/release-polkadot-v1.3.0/cumulus/pallets/parachain-system/src/lib.rs#L506[{github-icon},role=heading-link]
[.contract-item]
[[set_validation_data]]
==== `[.contract-item-name]#++set_validation_data++#`
[source,rust]
----
pub fn set_validation_data(
data: ParachainInherentData,
)
----
This call is an inherent; you can't call it from another dispatchable or from the client side. It sets up validation data for collation, processes code upgrades, and updates the unincluded segments.
[.contract-item]
[[sudo_send_upward_message]]
==== `[.contract-item-name]#++sudo_send_upward_message++#`
[source,rust]
----
pub fn sudo_send_upward_message(
message: UpwardMessage,
)
----
Send a message to the relay chain as sudo.
**Params:**
- `message` — a vector of bytes representing the message sent to the relay chain
**Errors:**
- `BadOrigin` — the call was not made by the sudo origin
[.contract-item]
[[authorize_upgrade]]
==== `[.contract-item-name]#++authorize_upgrade++#`
[source,rust]
----
pub fn authorize_upgrade(
code_hash: T::Hash,
check_version: bool,
)
----
Authorize an upgrade. This call puts the hash and flag into the `AuthorizedUpgrade` storage item. This call must be made as sudo.
**Params:**
- `code_hash` — hash of the authorized runtime binary
- `check_version` — a flag indicating that the code should be checked for upgradability; the check happens during the upgrade process itself.
**Errors:**
- `BadOrigin` — the call was not made by the sudo origin
**Events:**
- `UpgradeAuthorized(code_hash)`
[.contract-item]
[[enact_authorized_upgrade]]
==== `[.contract-item-name]#++enact_authorized_upgrade++#`
[source,rust]
----
pub fn enact_authorized_upgrade(
code: Vec<u8>,
)
----
Validate and perform the authorized upgrade.
**Params:**
- `code` — runtime binary for the upgrade
**Errors:**
- `NothingAuthorized` — there is no authorized upgrade; call `authorize_upgrade` in advance
- `Unauthorized` — a different upgrade is authorized
== Important Mentions and FAQs
=== Pallet's workflow
* Block Initialization
** Remove already processed validation code
** Update `UnincludedSegment` with latest parent hash
** Clean up `ValidationData` and related storage items
** Calculate weights for everything done in the `on_finalize` hook
* Inherents — `set_validation_data` call
** Clean the included segments from `UnincludedSegment` and update the `AggregatedUnincludedSegment`
** Update `ValidationData`, `RelayStateProof`, and other data from the relay chain.
** Process the `ValidationCode` upgrade
* Block Finalization
** Enqueue all received messages from relay chain and other parachains
** Update `UnincludedSegment` and `AggregatedUnincludedSegment` with the latest block data
+362
View File
@@ -0,0 +1,362 @@
:source-highlighter: highlight.js
:highlightjs-languages: rust
:github-icon: pass:[<svg class="icon"><use href="#github-icon"/></svg>]
= pallet_proxy
Branch/Release: `release-polkadot-v1.3.0`
== Purpose
This pallet enables delegation of rights to execute certain call types from one origin to another.
== Glossary
* _Announcement_ — a statement of a call hash that the proxy will execute in some future block. Required for proxies that specify a delay
* _Delay_ — the number of blocks that must pass between the announcement and the call execution
* _Delegatee_ — account that was granted call execution rights with proxy creation
* _Delegator_ — account that granted call execution rights with proxy creation
* _Proxy_ — statement of call execution rights transfer from delegator to delegatee. Specified by proxy type and delay.
* _Proxy type_ — type of calls that can be executed using this proxy.
* _Pure account_ — account that was spawned only to be a delegatee for some proxy.
== Config link:https://github.com/paritytech/polkadot-sdk/blob/release-polkadot-v1.3.0/substrate/frame/proxy/src/lib.rs#L107[{github-icon},role=heading-link]
* Pallet-specific configs:
** `ProxyType` -- a type that describes different variants of proxy. It must implement `Default` trait and `InstanceFilter<RuntimeCall>` trait.
** `ProxyDepositBase` -- a base amount of currency that defines a deposit for proxy creation.
** `ProxyDepositFactor` -- an amount of currency that will be frozen along with the `ProxyDepositBase` for each additional proxy.
** `MaxProxies` -- maximum number of proxies that single account can create.
** `MaxPending` -- maximum number of announcements that can be made per account.
** `CallHasher` -- a type implementing a `Hash` trait. Will be used to hash the executed call.
** `AnnouncementDepositBase` -- a base amount of currency that defines a deposit for announcement creation.
** `AnnouncementDepositFactor` -- an amount of currency that will be frozen along with the `AnnouncementDepositBase` for each additional announcement.
* Common configs:
** `RuntimeEvent`
** `RuntimeCall`
** `Currency`
== Dispatchables link:https://github.com/paritytech/polkadot-sdk/blob/release-polkadot-v1.3.0/substrate/frame/proxy/src/lib.rs#L179[{github-icon},role=heading-link]
[.contract-item]
[[add_proxy]]
==== `[.contract-item-name]#++add_proxy++#`
[source,rust]
----
pub fn add_proxy<T: Config>(
delegate: <<T as Config>::Lookup as StaticLookup>::Source,
proxy_type: T::ProxyType,
delay: BlockNumberFor<T>
)
----
Create a new `Proxy` that allows `delegate` to execute calls that fulfill the `proxy_type` check on the origin's behalf.
The origin must be signed for this call.
This call will take (or modify) a deposit based on the number of proxies created by the delegator, calculated by this formula: `ProxyDepositBase + ProxyDepositFactor * <number of proxies>`
There may not be more proxies than `MaxProxies`.
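The deposit formula quoted above can be checked with a small worked example. The constants below are invented for illustration; real runtimes configure `ProxyDepositBase` and `ProxyDepositFactor` themselves, and the function name is hypothetical:

[source,rust]
----
// Worked example of: deposit = ProxyDepositBase + ProxyDepositFactor * <number of proxies>.
fn proxy_deposit(base: u128, factor: u128, num_proxies: u128) -> u128 {
    base + factor * num_proxies
}

fn main() {
    let (base, factor) = (10u128, 3u128); // made-up ProxyDepositBase / ProxyDepositFactor
    assert_eq!(proxy_deposit(base, factor, 1), 13); // deposit held with one proxy
    assert_eq!(proxy_deposit(base, factor, 2), 16); // adding a second proxy raises it
    println!("ok");
}
----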
**Params:**
- `delegate: <<T as Config>::Lookup as StaticLookup>::Source` — account that will become a proxy for the origin
- `proxy_type: T::ProxyType` — type of calls that will be allowed for the delegate
- `delay: BlockNumberFor<T>` — number of blocks that needs to happen between announcement and call for this proxy
**Errors:**
- `BadOrigin` — request not signed
- `LookupError` — delegate not found
- `NoSelfProxy` — delegate and call origin are the same account
- `Duplicate` — the proxy already exists
- `TooMany` — too many proxies have already been created by this account
- `InsufficientBalance` — the delegator does not have enough funds for the proxy creation deposit
- `LiquidityRestrictions` — account restrictions (like frozen funds or vesting) prevent the deposit from being reserved
- `Overflow` — reserved funds overflow the currency type; should not happen in usual scenarios.
**Events:**
- `ProxyAdded(delegator, delegatee, proxy_type, delay)`
[.contract-item]
[[announce]]
==== `[.contract-item-name]#++announce++#`
[source,rust]
----
pub fn announce(
real: AccountIdLookupOf<T>,
call_hash: CallHashOf<T>,
)
----
Announce a call that will later be executed using a proxy; this creates an announcement. You must announce if the proxy you are using specifies a delay greater than zero. In that case you will be able to execute the call only after the number of blocks specified by the delay has passed.
The origin must be signed for this call.
This call will take (or modify) a deposit calculated by this formula: `AnnouncementDepositBase + AnnouncementDepositFactor * <number of announcements present>`
There may not be more announcements than `MaxPending`.
**Params:**
- `real: AccountIdLookupOf<T>` — the account on which behalf this call will be made
- `call_hash: CallHashOf<T>` — hash of the call that is going to be made
**Errors:**
- `BadOrigin` — request not signed
- `LookupError` — `real` account not found
- `NotProxy` — there is no proxy between the caller and real
- `TooMany` — there are more announcements for this sender than `MaxPending` allows
- `InsufficientBalance` — the caller does not have enough funds for the announcement creation deposit
- `LiquidityRestrictions` — account restrictions (like frozen funds or vesting) prevent the deposit from being reserved
- `Overflow` — reserved funds overflow the currency type; should not happen in usual scenarios.
**Events:**
- `Announced(real, proxy, call_hash)`
[.contract-item]
[[proxy]]
==== `[.contract-item-name]#++proxy++#`
[source,rust]
----
pub fn proxy(
real: AccountIdLookupOf<T>,
force_proxy_type: Option<T::ProxyType>,
call: Box<<T as Config>::RuntimeCall>,
)
----
Dispatch a `call` on behalf of the `real` account using a proxy that was created in advance. A proxy authorising the sender must have been registered for the call to succeed.
The origin must be signed for this call.
If the proxy requires an announcement before the call, this dispatchable will fail.
**Params:**
- `real: AccountIdLookupOf<T>` — the account on which behalf this call will be made
- `force_proxy_type: Option<T::ProxyType>` — specific proxy type to use. If not specified, the first matching proxy found in storage will be used.
- `call: Box<<T as Config>::RuntimeCall>` — a call to execute
**Errors:**
- `BadOrigin` — request not signed
- `LookupError` — `real` account not found
- `NotProxy` — there is no proxy between the caller and real
- `Unannounced` — the proxy has a non-zero delay, so the call must be announced first
**Events:**
- `ProxyExecuted(result)`
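The proxy-selection rule described above (honour `force_proxy_type` if given, otherwise take the first match) can be sketched like this. The types are simplified stand-ins, not the pallet's real `ProxyDefinition` storage type:

[source,rust]
----
// Simplified stand-ins for the pallet's storage types, for illustration only.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
enum ProxyType { Any, NonTransfer, Governance }

#[derive(Clone, Copy, Debug)]
struct ProxyDef { delegate: u64, proxy_type: ProxyType, delay: u32 }

/// Pick the proxy definition for `delegate`, honouring an optional
/// forced proxy type; `None` means "first matching entry wins".
fn find_proxy(proxies: &[ProxyDef], delegate: u64, force: Option<ProxyType>) -> Option<ProxyDef> {
    proxies.iter().copied().find(|p| {
        p.delegate == delegate && force.map_or(true, |t| p.proxy_type == t)
    })
}
----

In other words, omitting `force_proxy_type` makes the result depend on storage order, so callers holding several proxy types for the same delegator should usually pass it explicitly.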
[.contract-item]
[[proxy_announced]]
==== `[.contract-item-name]#++proxy_announced++#`
[source,rust]
----
pub fn proxy_announced<T: Config>(
delegate: <<T as Config>::Lookup as StaticLookup>::Source,
real: <<T as Config>::Lookup as StaticLookup>::Source,
force_proxy_type: Option<T::ProxyType>,
call: Box<<T as Config>::RuntimeCall>
)
----
Execute a previously announced call using a proxy and remove the announcement. A proxy between `delegate` and `real` must have been registered in advance.
The origin must be signed for this call.
This call will fail if the delay since the announcement has not passed, or if the call was not announced.
**Params:**
- `delegate: <<T as Config>::Lookup as StaticLookup>::Source` — the account proxy was given to and who announced the call
- `real: <<T as Config>::Lookup as StaticLookup>::Source` — delegator of the proxy, on whose behalf call will be executed
- `force_proxy_type: Option<T::ProxyType>` — specific proxy type to use. If not specified, the first matching proxy found in storage will be used.
- `call: Box<<T as Config>::RuntimeCall>` — a call to execute
**Errors:**
- `BadOrigin` — request not signed
- `LookupError` — `real` or `delegate` account not found
- `NotProxy` — there is no proxy between the `delegate` and `real`
- `Unannounced` — the announcement's delay has not passed, or the call was not announced
**Events:**
- `ProxyExecuted(result)`
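The maturity rule above — an announced call may only be dispatched once `delay` blocks have passed since the announcement was recorded — can be sketched as follows (field names are illustrative, not the pallet's internals):

[source,rust]
----
// Illustrative shape of a pending announcement: the block at which it
// was made, plus the delay configured on the proxy that made it.
struct Announcement { height: u32, delay: u32 }

/// True once the announcement's delay has elapsed at block `now`.
fn announcement_mature(a: &Announcement, now: u32) -> bool {
    a.height.saturating_add(a.delay) <= now
}
----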
[.contract-item]
[[reject_announcement]]
==== `[.contract-item-name]#++reject_announcement++#`
[source,rust]
----
pub fn reject_announcement<T: Config>(
delegate: <<T as Config>::Lookup as StaticLookup>::Source,
call_hash: <<T as Config>::CallHasher as Hash>::Output
)
----
Remove the given announcement. The deposit is returned on success.
May be called by the delegator of the proxy to reject an announcement made by the delegatee.
The origin must be signed for this call.
**Params:**
- `delegate: <<T as Config>::Lookup as StaticLookup>::Source` — account that created an announcement
- `call_hash: <<T as Config>::CallHasher as Hash>::Output` — hash that was created for the announcement
**Errors:**
- `BadOrigin` — request not signed
- `LookupError` — `delegate` account not found
- `NotFound` — proxy not found for this delegator and delegatee
[.contract-item]
[[remove_announcement]]
==== `[.contract-item-name]#++remove_announcement++#`
[source,rust]
----
pub fn remove_announcement<T: Config>(
real: <<T as Config>::Lookup as StaticLookup>::Source,
call_hash: <<T as Config>::CallHasher as Hash>::Output
)
----
Remove the given announcement. The deposit is returned on success.
May be called by the delegatee of the proxy to remove an announcement they made themselves.
The origin must be signed for this call.
**Params:**
- `real: <<T as Config>::Lookup as StaticLookup>::Source` — delegator of the proxy for the announcement to remove
- `call_hash: <<T as Config>::CallHasher as Hash>::Output` — hash of announced call
**Errors:**
- `BadOrigin` — request not signed
- `LookupError` — `real` account not found
- `NotFound` — proxy not found for this delegator and delegatee
[.contract-item]
[[remove_proxies]]
==== `[.contract-item-name]#++remove_proxies++#`
[source,rust]
----
pub fn remove_proxies()
----
Removes all proxies _issued by_ the caller and returns the deposits. The origin must be signed for this call.
**Errors:**
- `BadOrigin` — request not signed
[.contract-item]
[[remove_proxy]]
==== `[.contract-item-name]#++remove_proxy++#`
[source,rust]
----
pub fn remove_proxy<T: Config>(
delegate: <<T as Config>::Lookup as StaticLookup>::Source,
proxy_type: T::ProxyType,
delay: BlockNumberFor<T>
)
----
Remove a proxy issued by the caller. The deposit is returned to the delegator.
The origin must be signed for this call.
**Params:**
- `delegate: <<T as Config>::Lookup as StaticLookup>::Source` — account to whom this proxy was issued
- `proxy_type: T::ProxyType` — type of the issued proxy
- `delay: BlockNumberFor<T>` — delay of the issued proxy
**Errors:**
- `BadOrigin` — request not signed
- `LookupError` — `delegate` account not found
- `NotFound` — no such proxy exists
**Events:**
- `ProxyRemoved(delegator, delegatee, proxy_type, delay)`
[.contract-item]
[[create_pure]]
==== `[.contract-item-name]#++create_pure++#`
[source,rust]
----
pub fn create_pure<T: Config>(
proxy_type: T::ProxyType,
delay: BlockNumberFor<T>,
index: u16
)
----
This call creates a new account with a proxy issued to it from the call's origin.
The origin must be signed for this call.
**Params:**
- `proxy_type: T::ProxyType` — type of calls that will be allowed for the proxy
- `delay: BlockNumberFor<T>` — number of blocks that must pass between an announcement and the call for this proxy
- `index: u16` — a disambiguation index, in case this is called multiple times in the same transaction (e.g. with `utility::batch`). Unless you're using `batch` you probably just want to use `0`.
**Errors:**
- `BadOrigin` — request not signed
- `Duplicate` — `create_pure` was called more than once with the same parameters in the same transaction
- `TooMany` — the caller already has as many proxies as `MaxProxies` allows
- `InsufficientBalance` — spawner does not have enough funds for the proxy-creation deposit
- `LiquidityRestrictions` — account restrictions (such as frozen funds or vesting) prevent the deposit from being reserved
- `Overflow` — reserved funds overflow the currency type. Should not happen in usual scenarios.
**Events:**
- `PureCreated(pure, who, proxy_type, disambiguation_index)`
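The role of `index` can be sketched as follows: the pure account's id is derived deterministically from the spawner plus the creation context, so two `create_pure` calls in the same transaction need different indices to yield different accounts. The real pallet hashes a SCALE-encoded tuple with blake2_256; `DefaultHasher` here is a stand-in for the sketch only, and the parameter set is illustrative.

[source,rust]
----
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Illustrative derivation of a pure account id from its creation
/// context. Same inputs always give the same account; changing any
/// input (notably `index`) gives a different one.
fn pure_account(spawner: u64, proxy_type: u8, index: u16, height: u32, ext_index: u32) -> u64 {
    let mut h = DefaultHasher::new();
    (spawner, proxy_type, index, height, ext_index).hash(&mut h);
    h.finish()
}
----

This determinism is also why `kill_pure` below asks for the original creation parameters: they are needed to re-derive the account being removed.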
[.contract-item]
[[kill_pure]]
==== `[.contract-item-name]#++kill_pure++#`
[source,rust]
----
pub fn kill_pure<T: Config>(
spawner: <<T as Config>::Lookup as StaticLookup>::Source,
proxy_type: T::ProxyType,
index: u16,
height: BlockNumberFor<T>,
ext_index: u32
)
----
Remove a previously created pure account.
Requires a `Signed` origin, and the sender account must have been created by a call to
`create_pure` with corresponding parameters.
WARNING: All access to this account will be lost.
**Params:**
- `spawner` — account who created a proxy and pure account
- `proxy_type` — type of proxy used for it
- `index` — the disambiguation index used for pure account creation
- `height` — the height of the chain when the call to `create_pure` was processed
- `ext_index` — the extrinsic index in which the call to `create_pure` was processed
**Errors:**
- `BadOrigin` — request not signed
- `LookupError` — `spawner` account not found
- `NoPermission` — the caller is not the pure account derived from the given parameters
+10 -5
View File
@@ -2,7 +2,9 @@
:highlightjs-languages: rust
:github-icon: pass:[<svg class="icon"><use href="#github-icon"/></svg>]
= Pallet Name link:https://google.com[{github-icon},role=heading-link]
= Pallet Name
Branch/Release: `release-polkadot-v{x.x.x}`
== Purpose
@@ -12,11 +14,14 @@ This is a freeform description of the tasks that this pallet fulfills
* _Term_ -- definition of the term
== Config
== Config link:https://google.com[{github-icon},role=heading-link]
* `ConfigType` -- desription of config. Include the possible values if there is a set of them.
* Pallet-specific configs
** `ConfigType` -- description of config. Include the possible values if there is a set of them.
* Common configs
** `ConfigType` -- description of config, if needed
== Dispatchables
== Dispatchables link:https://google.com[{github-icon},role=heading-link]
[.contract-item]
[[dispatchable_name]]
@@ -30,7 +35,7 @@ pub fn dispatchable_name(
----
Freeform description of the dispatchable. It is good to include the important things that should be included there.
// four following blocks show how to make a higlight of some information. It will become a styled block
// four following blocks show how to make a highlight of some information. It will become a styled block
NOTE: This is how you state important information that should be acknowledged
+4
View File
@@ -0,0 +1,4 @@
[build]
base = "docs/"
command = "npm run docs"
publish = "build/site"