mirror of
https://github.com/pezkuwichain/pezkuwi-subxt.git
synced 2026-04-30 01:27:56 +00:00
Disputes High-level rewrite & Disputes runtime (#2424)
* REVERT: comment out graphviz
* rewrite most of protocol-disputes
* write about conclusion and chain selection
* tie back in overview
* basic disputes module
* guide: InclusionInherent -> ParaInherent
* language
* add ParaInherentData type
* plug parainherentdata into provisioner
* provide_multi_dispute
* tweak
* inclusion pipeline logic for disputes
* be clearer about signature checking
* reject backing of disputed blocks
* some type rejigging
* known-disputes runtime API
* wire up inclusion
* Revert "REVERT: comment out graphviz". This reverts commit 66203e362f7872cb413d258f74634a0aad70302b.
* timeouts
* include in initialization order
* address grumbles
Committed by GitHub. Parent: 8734cf62b2. Commit: a8d3aca13d.
@@ -5,102 +5,97 @@ After a backed candidate is made available, it is included and proceeds into an

However, this isn't the end of the story. We are working in a forkful blockchain environment, which carries three important considerations:

1. For security, validators that misbehave shouldn't only be slashed on one fork, but on all possible forks. Validators that misbehave shouldn't be able to create a new fork of the chain when caught and get away with their misbehavior.
1. It is possible (and likely) that the parablock being contested has not appeared on all forks.
1. If a block author believes that there is a disputed parablock on a specific fork that will resolve to a reversion of the fork, that block author is better incentivized to build on a different fork which does not include that parablock.

This means that in all likelihood, disputes will be started on one fork of the relay chain, and as soon as the dispute resolution process starts to indicate that the parablock is indeed invalid, that fork of the relay chain will be abandoned and the dispute will never be fully resolved on that chain.

Even if this doesn't happen, it is possible that two disputes are underway and one resolves, leading to a reversion of the chain before the other has concluded. In this case we want to transplant both the concluded dispute and the unconcluded dispute onto other forks of the chain.

We account for these requirements by having the disputes module handle two kinds of disputes:

1. Local disputes: those contesting the validity of the current fork by disputing a parablock included within it.
1. Remote disputes: a dispute that has partially or fully resolved on another fork which is transplanted to the local fork for completion and eventual slashing.
## Approval

When a local dispute concludes negatively, the chain needs to be abandoned and reverted back to a block where the state does not contain the bad parablock. We expect that due to the [Approval Checking Protocol](../protocol-approval.md), the current executing block should not be finalized. So we do two things when a local dispute concludes negatively:

1. Freeze the state of parachains so nothing further is backed or included.
1. Issue a digest in the header of the block that signals to nodes that this branch of the chain is to be abandoned.

We begin approval checks on any candidate immediately once it becomes available.

If, as is expected, the chain is unfinalized, the freeze will have no effect, as no honest validator will attempt to build on the frozen chain. However, if the approval checking protocol has failed and the bad parablock is finalized, the freeze serves to put the chain into a governance-only mode.

Assigning approval checks involves VRF secret keys held by every validator, making it primarily an off-chain process. All assignment criteria require specific data called "stories" about the relay chain block in which the candidate assigned by that criterion became available. Among these criteria, the BABE VRF output provides the story for two, and the other's story consists of the candidate's block hash plus external knowledge that a relay chain equivocation exists with a conflicting candidate.
The storage of this module is designed around tracking [`DisputeState`s](../types/disputes.md#disputestate), updating them with votes, and tracking blocks included by this branch of the relay chain. It also contains a `Frozen` parameter designed to freeze the state of all parachains.

We liberate availability cores when their candidate becomes available, of course, but one approval assignment criterion continues associating each candidate with the core number it occupied when it became available.

Assignment proceeds in loosely timed rounds called `DelayTranche`s, roughly 12 times faster than block production, in which validators send assignment notices until all candidates have enough checkers assigned. Assignment also tracks when approval votes arrive, and assigns more checkers if some checkers run late.

Approval checks provide more security than backing checks, so Polkadot becomes more efficient when validators perform more approval checks per backing check. If validators run four approval checks for every backing check, and run almost one backing check per relay chain block, then validators actually check almost six blocks per relay chain block.

We should therefore reward approval checkers correctly, because approval checks should actually represent our single largest workload. It follows that both assignment notices and approval votes should be tracked on-chain.

## Storage

Storage Layout:

```rust
LastPrunedSession: Option<SessionIndex>,
// All ongoing or concluded disputes for the last several sessions.
Disputes: double_map (SessionIndex, CandidateHash) -> Option<DisputeState>,
// All included blocks on the chain, as well as the block number in this chain that
// should be reverted back to if the candidate is disputed and determined to be invalid.
Included: double_map (SessionIndex, CandidateHash) -> Option<BlockNumber>,
// Whether the chain is frozen or not. Starts as `false`. When this is `true`,
// the chain will not accept any new parachain blocks for backing or inclusion.
// It can only be set back to `false` by governance intervention.
Frozen: bool,
```
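To make the storage layout above concrete, here is a minimal in-memory model: the runtime's `double_map`s become maps keyed by `(SessionIndex, CandidateHash)`. The `DisputesStorage` struct, the simplified `DisputeState`, and the `u64` stand-in for a candidate hash are illustrative assumptions, not the runtime's actual types:

```rust
use std::collections::BTreeMap;

type SessionIndex = u32;
type CandidateHash = u64; // stand-in for a real hash type
type BlockNumber = u32;

// Simplified stand-in for the on-chain `DisputeState`: bitfields of validators
// voting for and against the candidate's validity, plus the conclusion block.
#[derive(Default, Clone)]
struct DisputeState {
    validators_for: Vec<bool>,
    validators_against: Vec<bool>,
    concluded_at: Option<BlockNumber>,
}

// The storage layout above, modeled with in-memory maps.
#[derive(Default)]
struct DisputesStorage {
    last_pruned_session: Option<SessionIndex>,
    disputes: BTreeMap<(SessionIndex, CandidateHash), DisputeState>,
    included: BTreeMap<(SessionIndex, CandidateHash), BlockNumber>,
    frozen: bool,
}

impl DisputesStorage {
    // Record that a candidate was included, storing the block number to
    // revert to (the block before inclusion), as `note_included` does below.
    fn note_included(
        &mut self,
        session: SessionIndex,
        candidate: CandidateHash,
        included_in: BlockNumber,
    ) {
        self.included.insert((session, candidate), included_in - 1);
    }
}
```

The `Included` entry stores `included_in - 1` so that a negative conclusion can revert to the last block before the bad parablock entered the state.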
We might track the assignments and approvals together as pairs in a simple rewards system. There are, however, two reasons to witness approvals on-chain by tracking both assignments and approvals there: rewards and finality integration.

First, an approval that arrives too slowly prompts assigning extra "no show" replacement checkers. Yet we consider a block valid if the earlier checker completes their work, even if the extra checkers never quite finish, which complicates rewarding these extra checkers. We could support more nuanced rewards for extra checkers if assignments are placed on-chain earlier. Assignment delay tranches progress roughly 12 times faster than the relay chain, but no-shows could still be witnessed by the relay chain because the no-show delay takes longer than a relay chain slot.

Configuration:

```rust
/// How many sessions before the current that disputes should be accepted for.
DisputePeriod: SessionIndex;
/// How long after conclusion to accept statements.
PostConclusionAcceptancePeriod: BlockNumber;
/// How long it takes for a dispute to conclude by time-out, if no supermajority is reached.
ConclusionByTimeOutPeriod: BlockNumber;
```
Second, we know off-chain when the approval process completes, based upon all gossiped assignment notices, not just the approving ones. We need not-yet-approved assignment notices to appear on-chain if the chain should know about the validity of recently approved blocks. Relay chain blocks become eligible for finality in GRANDPA only once all their included candidates pass approval checks, meaning all assigned checkers either voted approve or else were declared "no show" and replaced by more assigned checkers. A purely off-chain approvals scheme complicates GRANDPA with additional objections logic.

Integration with GRANDPA appears simplest if we witness approvals in-chain: aside from inherents for assignment notices and approval votes, we provide an "Approved" inherent by which a relay chain block declares a past relay chain block approved. In other words, it triggers the on-chain approval counting logic in a relay chain block `R1` to rerun the assignment and approval tracker logic for some ancestor `R0`, which then declares `R0` approved. In this case, we could integrate with GRANDPA by gossiping messages that list the descendent `R1`, but then map this into the approved ancestor `R0` for GRANDPA itself.

## Session Change

1. If the current session is not greater than `dispute_period + 1`, there is nothing to do here.
1. Set `pruning_target = current_session - dispute_period - 1`. We add the extra `1` because we want to keep things for `dispute_period` _full_ sessions. The stuff at the end of the most recent session has been around for ~0 sessions, not ~1.
1. If `LastPrunedSession` is `None`, then set `LastPrunedSession` to `Some(pruning_target)` and return.
1. Otherwise, clear out all disputes and included candidates in the range `last_pruned..=pruning_target` and set `LastPrunedSession` to `Some(pruning_target)`.
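The session-change pruning steps can be sketched as a pure function; the function name and tuple return shape are illustrative assumptions, with `dispute_period` taken from the configuration:

```rust
type SessionIndex = u32;

// Sketch of the session-change pruning steps: returns the new
// `LastPrunedSession` value and the (inclusive) range of sessions to clear,
// if any.
fn prune_on_session_change(
    current_session: SessionIndex,
    dispute_period: SessionIndex,
    last_pruned: Option<SessionIndex>,
) -> (Option<SessionIndex>, Option<(SessionIndex, SessionIndex)>) {
    // Nothing to do until the current session exceeds `dispute_period + 1`.
    if current_session <= dispute_period + 1 {
        return (last_pruned, None);
    }
    // The extra `1` keeps `dispute_period` _full_ sessions of data.
    let pruning_target = current_session - dispute_period - 1;
    match last_pruned {
        // First time: just record the target; nothing to clear yet.
        None => (Some(pruning_target), None),
        // Otherwise clear `last_pruned..=pruning_target`.
        Some(last) => (Some(pruning_target), Some((last, pruning_target))),
    }
}
```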
Approval votes could be recorded on-chain quickly because they represent major commitments.

Assignment notices should be recorded on-chain only when relevant. Any sent too early are retained, but ignored until relevant, by our off-chain assignment system. Assignments are ignored completely by the dispute system, because any dispute immediately escalates into all validators checking, but disputes do count existing approval votes, of course.

## Block Initialization

1. Iterate through all disputes. If any have not concluded and started more than `ConclusionByTimeOutPeriod` blocks ago, set them to `Concluded` and mildly punish all validators associated with them, as they have failed to distribute available data.
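The time-out sweep in the initialization step above can be sketched as follows; the `Dispute` struct and the returned count are illustrative assumptions rather than the runtime's actual representation:

```rust
type BlockNumber = u32;

struct Dispute {
    started_at: BlockNumber,
    concluded: bool,
}

// Mark as concluded every dispute that has been open for longer than
// `ConclusionByTimeOutPeriod`, returning how many timed out (the associated
// validators would then be mildly punished).
fn timeout_disputes(
    disputes: &mut [Dispute],
    now: BlockNumber,
    timeout_period: BlockNumber,
) -> usize {
    let mut timed_out = 0;
    for d in disputes.iter_mut() {
        if !d.concluded && now - d.started_at > timeout_period {
            d.concluded = true;
            timed_out += 1;
        }
    }
    timed_out
}
```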
## Routines

## Local Disputes

* `provide_multi_dispute_data(MultiDisputeStatementSet) -> Vec<(SessionIndex, Hash)>`:
  1. Fail if any disputes in the set are duplicate or concluded before the `PostConclusionAcceptancePeriod` window relative to now.
  1. Pass on each dispute statement set to `provide_dispute_data`, propagating failure.
  1. Return a list of all candidates which just had disputes initiated.
There is little overlap between the approval system and the disputes system, since disputes care only that two validators disagree. We do, however, require that disputes count validity votes from elsewhere: both the backing votes and the approval votes.

* `provide_dispute_data(DisputeStatementSet) -> bool`: Provide data to an ongoing dispute or initiate a dispute.
  1. All statements must be issued under the correct session for the correct candidate.
  1. `SessionInfo` is used to check statement signatures, and this function should fail if any signatures are invalid.
  1. If there is no dispute under `Disputes`, create a new `DisputeState` with blank bitfields.
  1. If `concluded_at` is `Some` and `concluded_at + PostConclusionAcceptancePeriod < now`, return false.
  1. Import all statements into the dispute. This should fail if any statements are duplicate, i.e. if the corresponding bit for the corresponding validator is already set in the dispute.
  1. If `concluded_at` is `None`, reward all statements.
  1. If `concluded_at` is `Some`, reward all statements slightly less.
  1. If either side now has supermajority, slash the other side. This may be both sides, and we support this possibility in code, but note that this requires validators to participate on both sides, which has negative expected value. Set `concluded_at` to `Some(now)`.
  1. If the dispute just concluded against the candidate and the `Included` map contains `(session, candidate)`: invoke `revert_and_freeze` with the stored block number.
  1. Return true if the dispute was just initiated, false otherwise.

We could approve, and even finalize, a relay chain block which is only disputed later, due to claims that some parachain block it included is invalid.
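The statement-import and supermajority steps of `provide_dispute_data` can be sketched with plain `Vec<bool>` bitfields; the function names here are illustrative, but the `>2/3` threshold formula matches the `2f + 1` supermajority described in this guide:

```rust
// Importing a statement fails if the validator's bit is already set.
fn import_statement(bitfield: &mut [bool], validator_index: usize) -> Result<(), ()> {
    if bitfield[validator_index] {
        return Err(()); // duplicate statement
    }
    bitfield[validator_index] = true;
    Ok(())
}

// `>2/3` supermajority: with `n` validators and `f = (n - 1) / 3` tolerated
// faults, this is `2f + 1`, i.e. `n - (n - 1) / 3`.
fn supermajority_threshold(n_validators: usize) -> usize {
    n_validators - (n_validators - 1) / 3
}

// A side wins once its vote count reaches the supermajority threshold.
fn has_supermajority(bitfield: &[bool]) -> bool {
    bitfield.iter().filter(|b| **b).count() >= supermajority_threshold(bitfield.len())
}
```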
* `disputes() -> Vec<(SessionIndex, CandidateHash, DisputeState)>`: Get a list of all disputes and info about dispute state.
  1. Iterate over all disputes in `Disputes`. Set the flag according to `concluded`.

> TODO: store all included candidate and attestations on them here. accept additional backing after the fact. accept reports based on VRF. candidate included in session S should only be reported on by validator keys from session S. trigger slashing. probably only slash for session S even if the report was submitted in session S+k because it is hard to unify identity

* `note_included(SessionIndex, CandidateHash, included_in: BlockNumber)`:
  1. Add `(SessionIndex, CandidateHash)` to the `Included` map with `included_in - 1` as the value.
  1. If there is a dispute under `(SessionIndex, CandidateHash)` that has concluded against the candidate, invoke `revert_and_freeze` with the stored block number.
A first question to ask is why separate logic for local disputes is necessary. Local disputes are necessary in order to create the first escalation that leads to block producers abandoning the chain, which in turn makes remote disputes possible.

* `could_be_invalid(SessionIndex, CandidateHash) -> bool`: Returns whether a candidate has a live dispute ongoing, or a dispute which has already concluded in the negative.

Local disputes are only allowed on parablocks that have been included on the local chain and are in the acceptance period.

* `is_frozen()`: Load the value of `Frozen` from storage.

For each such parablock, it is guaranteed by the inclusion pipeline that the parablock is available and the relevant validation code is available.

Disputes may occur against blocks that have happened in the session prior to the current one, from the perspective of the chain. In this case, the prior validator set is responsible for handling the dispute, and for doing so with their keys from that session. This means that validator duty actually extends one session beyond leaving the validator set.

...

After concluding with enough validators voting, the dispute will remain open for some time in order to collect further evidence of misbehaving validators. It will then issue a signal in the header-chain that this fork should be abandoned, along with the hash of the last ancestor before inclusion (which the chain should be reverted to) and information about the invalid block that should be used to blacklist it from being included.
## Remote Disputes

When a dispute has occurred on another fork, we need to transplant that dispute to every other fork. This poses some major challenges.

There are two types of remote disputes. The first is a remote roll-up of a concluded dispute. These are simply all attestations for the block, those against it, and the result of all (secondary) approval checks. A concluded remote dispute can be resolved in a single transaction, as it is an open-and-shut case of a quorum of validators disagreeing with another.

The second type of remote dispute is the unconcluded dispute. An unconcluded remote dispute is started by any validator, using these things:

- A candidate
- The session that the candidate has appeared in.
- Backing for that candidate
- The validation code necessary for validation of the candidate.
  > TODO: optimize by excluding in case where code appears in `Paras::CurrentCode` of this fork of relay-chain
- Secondary checks already done on that candidate, containing one or more disputes by validators. None of the disputes are required to have appeared on other chains.
  > TODO: validator-dispute could be instead replaced by a fisherman w/ bond

When beginning a remote dispute, at least one escalation by a validator is required, but this validator may be malicious and desire to be slashed. There is no guarantee that the para is registered on this fork of the relay chain, or that the para was considered available on any fork of the relay chain.
So the first step is to have the remote dispute proceed through an availability process similar to the one in the [Inclusion Module](inclusion.md), but without worrying about core assignments or compactness in bitfields.

We assume that remote disputes are with respect to the same validator set as on the current fork, as BABE and GRANDPA ensure that forks are never long enough to diverge in validator set.

> TODO: this is at least directionally correct. handling disputes on other validator sets seems useless anyway as they wouldn't be bonded.

As with local disputes, the validators of the session in which the candidate was included on another chain are responsible for resolving the dispute and determining availability of the candidate.

If the candidate was not made available on another fork of the relay chain, the availability process will time out and the disputing validator will be slashed on this fork. The escalation used by the validator(s) can be replayed onto other forks to lead the wrongly-escalating validator(s) to be slashed on all other forks as well. We assume that the adversary cannot censor validators from seeing any particular forks indefinitely.

> TODO: set the availability timeout for this accordingly - unlike in the inclusion pipeline we are slashing for unavailability here!

If the availability process passes, the remote dispute is ready to be included on this chain. As with the local dispute, validators self-select based on a VRF. Given that a remote dispute is likely to be replayed across multiple forks, it is important to choose the VRF in a way that all forks processing the remote dispute will have the same one. Choosing the VRF is also important because it should not allow an adversary to have control over who will be selected as a secondary approval checker.

After enough validators self-select, under the same escalation rules as for local disputes, the remote dispute will conclude, slashing all those on the wrong side of the dispute. After concluding, the remote dispute remains open for a set number of blocks to accept any further proof of additional validators being on the wrong side.
## Slashing and Incentivization

The goal of the dispute is to garner a `>2/3` (`2f + 1`) supermajority either in favor of or against the candidate.

For remote disputes, it is possible that the disputed parablock never actually passed any availability process on any chain. In this case, validators will not be able to obtain the PoV of the parablock and there will be relatively few votes. We want to disincentivize voters who claim validity of the block while preventing it from becoming available, so we charge them a small distraction fee for wasting the others' time if the dispute does not garner a 2/3+ supermajority on either side. This fee can take the form of a small slash or a reduction in rewards.

When a supermajority is achieved for the dispute in either the valid or invalid direction, we will penalize non-voters, either by issuing a small slash or by reducing their rewards. We prevent censorship of the remaining validators by leaving the dispute open for some blocks after resolution in order to accept late votes.

* `revert_and_freeze(BlockNumber)`:
  1. If `is_frozen()`, return.
  1. Issue a digest in the block header which indicates the chain is to be abandoned back to the stored block number.
  1. Set `Frozen` to true.
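The `revert_and_freeze` routine can be sketched as follows; the `Chain` struct and modeling the digest as a stored field (rather than a real header digest) are illustrative assumptions:

```rust
type BlockNumber = u32;

#[derive(Default)]
struct Chain {
    frozen: bool,
    // Stand-in for the digest issued in the block header.
    revert_digest: Option<BlockNumber>,
}

impl Chain {
    fn is_frozen(&self) -> bool {
        self.frozen
    }

    // Abandon the chain back to `revert_to`, unless already frozen.
    fn revert_and_freeze(&mut self, revert_to: BlockNumber) {
        if self.is_frozen() {
            return;
        }
        // Issue a digest signaling abandonment back to the stored block number.
        self.revert_digest = Some(revert_to);
        self.frozen = true;
    }
}
```

Note that a second call is a no-op: once frozen, only governance intervention can unfreeze the chain.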
@@ -59,7 +59,7 @@ All failed checks should lead to an unrecoverable error making the block invalid

1. Apply each bit of the bitfield to the corresponding pending candidate, looking up parathread cores using the `core_lookup`. Disregard bitfields that have a `1` bit for any free cores.
1. For each applied bit of each availability-bitfield, set the bit for the validator in the `CandidatePendingAvailability`'s `availability_votes` bitfield. Track all candidates that now have >2/3 of bits set in their `availability_votes`. These candidates are now available and can be enacted.
1. For all now-available candidates, invoke the `enact_candidate` routine with the candidate and relay-parent number.
1. Return a list of `(CoreIndex, CandidateHash)` from freed cores consisting of the cores where candidates have become available.

* `process_candidates(parent_storage_root, BackedCandidates, scheduled: Vec<CoreAssignment>, group_validators: Fn(GroupIndex) -> Option<Vec<ValidatorIndex>>)`:
  1. Check that each candidate corresponds to a scheduled core and that they are ordered in the same order the cores appear in assignments in `scheduled`.
  1. Check that `scheduled` is sorted ascending by `CoreIndex`, without duplicates.
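The availability-vote step above can be sketched with a plain `Vec<bool>` for `availability_votes`; the struct is a simplified stand-in for the runtime's `CandidatePendingAvailability`:

```rust
struct CandidatePendingAvailability {
    availability_votes: Vec<bool>,
}

impl CandidatePendingAvailability {
    // Applying a validator's bitfield bit sets their availability vote.
    fn note_vote(&mut self, validator_index: usize) {
        self.availability_votes[validator_index] = true;
    }

    // ">2/3 of bits set": the candidate is available and can be enacted.
    fn is_available(&self) -> bool {
        let votes = self.availability_votes.iter().filter(|v| **v).count();
        votes * 3 > self.availability_votes.len() * 2
    }
}
```

With six validators, four votes (exactly 2/3) are not enough; five votes cross the strict >2/3 threshold.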
@@ -89,7 +89,7 @@ All failed checks should lead to an unrecoverable error making the block invalid

* `collect_pending`:

```rust
fn collect_pending(f: impl Fn(CoreIndex, BlockNumber) -> bool) -> Vec<CoreIndex> {
  // Sweep through all paras pending availability. If the predicate returns true, when given the core index and
  // the block number the candidate has been pending availability since, then clean up the corresponding
  // storage for that candidate and the commitments.
  // Return a vector of cleaned-up core IDs.
}
```

@@ -98,3 +98,4 @@ All failed checks should lead to an unrecoverable error making the block invalid

* `force_enact(ParaId)`: Forcibly enact the candidate with the given ID as though it had been deemed available by bitfields. Is a no-op if there is no candidate pending availability for this para-id. This should generally not be used, but it is useful during execution of Runtime APIs, where the changes to the state are expected to be discarded directly after.
* `candidate_pending_availability(ParaId) -> Option<CommittedCandidateReceipt>`: Returns the `CommittedCandidateReceipt` pending availability for the para provided, if any.
* `pending_availability(ParaId) -> Option<CandidatePendingAvailability>`: Returns the metadata around the candidate pending availability for the para, if any.
* `collect_disputed(disputed: Vec<CandidateHash>) -> Vec<CoreIndex>`: Sweeps through all paras pending availability. If the candidate hash is one of the disputed candidates, then clean up the corresponding storage for that candidate and the commitments. Return a vector of cleaned-up core IDs.
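A runnable model of `collect_pending` might look as follows; the `Pending` record and the explicit `&mut Vec<Pending>` parameter (standing in for runtime storage) are illustrative assumptions:

```rust
type CoreIndex = u32;
type BlockNumber = u32;

// A pending-availability record: the core it occupies and the block it has
// been pending availability since.
struct Pending {
    core: CoreIndex,
    since: BlockNumber,
}

// Sweep all paras pending availability; clean up those for which the
// predicate returns true, and return the freed core IDs.
fn collect_pending(
    pending: &mut Vec<Pending>,
    f: impl Fn(CoreIndex, BlockNumber) -> bool,
) -> Vec<CoreIndex> {
    let mut freed = Vec::new();
    pending.retain(|p| {
        if f(p.core, p.since) {
            freed.push(p.core); // clean up storage for this candidate
            false
        } else {
            true
        }
    });
    freed
}
```

The predicate would typically be `Scheduler::availability_timeout_predicate`, deciding whether a candidate has been pending for too long.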
@@ -1,29 +0,0 @@
|
||||
# InclusionInherent
|
||||
|
||||
This module is responsible for all the logic carried by the `Inclusion` entry-point. This entry-point is mandatory, in that it must be invoked exactly once within every block, and it is also "inherent", in that it is provided with no origin by the block author. The data within it carries its own authentication. If any of the steps within fails, the entry-point is considered as having failed and the block will be invalid.
|
||||
|
||||
This module does not have the same initialization/finalization concerns as the others, as it only requires that entry points be triggered after all modules have initialized and that finalization happens after entry points are triggered. Both of these are assumptions we have already made about the runtime's order of operations, so this module doesn't need to be initialized or finalized by the `Initializer`.
|
||||
|
||||
## Storage
|
||||
|
||||
```rust
|
||||
Included: Option<()>,
|
||||
```
|
||||
|
||||
## Finalization
|
||||
|
||||
1. Take (get and clear) the value of `Included`. If it is not `Some`, throw an unrecoverable error.
|
||||
|
||||
## Entry Points
|
||||
|
||||
* `inclusion`: This entry-point accepts three parameters: The relay-chain parent block header, [`Bitfields`](../types/availability.md#signed-availability-bitfield) and [`BackedCandidates`](../types/backing.md#backed-candidate).
|
||||
1. Hash the parent header and make sure that it corresponds to the block hash of the parent (tracked by the `frame_system` FRAME module),
|
||||
1. The `Bitfields` are first forwarded to the `Inclusion::process_bitfields` routine, returning a set of freed cores. Provide a `Scheduler::core_para` as a core-lookup to the `process_bitfields` routine. Annotate each of these freed cores with `FreedReason::Concluded`.
|
||||
1. If `Scheduler::availability_timeout_predicate` is `Some`, invoke `Inclusion::collect_pending` using it, and add timed-out cores to the free cores, annotated with `FreedReason::TimedOut`.
|
||||
1. Invoke `Scheduler::clear`
|
||||
1. Invoke `Scheduler::schedule(freed, System::current_block())`
|
||||
1. Extract `parent_storage_root` from the parent header,
|
||||
1. Invoke the `Inclusion::process_candidates` routine with the parameters `(parent_storage_root, backed_candidates, Scheduler::scheduled(), Scheduler::group_validators)`.
|
||||
1. Call `Scheduler::occupied` using the return value of the `Inclusion::process_candidates` call above, first sorting the list of assigned core indices.
|
||||
1. Call the `Ump::process_pending_upward_messages` routine to execute all messages in upward dispatch queues.
|
||||
1. If all of the above succeeds, set `Included` to `Some(())`.
|
||||
@@ -24,6 +24,7 @@ The other parachains modules are initialized in this order:

1. Scheduler
1. Inclusion
1. SessionInfo
1. Disputes
1. DMP
1. UMP
1. HRMP
@@ -0,0 +1,41 @@

# ParaInherent

This module is responsible for providing all data given to the runtime by the block author to the various parachains modules. The entry-point is mandatory, in that it must be invoked exactly once within every block, and it is also "inherent", in that it is provided with no origin by the block author. The data within it carries its own authentication; i.e. the data takes the form of signed statements by validators. If any of the steps within fails, the entry-point is considered as having failed and the block will be invalid.

This module does not have the same initialization/finalization concerns as the others, as it only requires that entry points be triggered after all modules have initialized and that finalization happens after entry points are triggered. Both of these are assumptions we have already made about the runtime's order of operations, so this module doesn't need to be initialized or finalized by the `Initializer`.

There are a couple of important notes to the operations in this inherent as they relate to disputes.

1. We don't accept bitfields or backed candidates if in "governance-only" mode, i.e. after a local dispute has concluded on this fork.
1. When disputes are initiated, we remove the block from pending availability. This allows us to roll back chains to the block before blocks are included, as opposed to backed. It's important to do this before processing bitfields.
1. `Inclusion::collect_disputed` is kind of expensive, so it's important to gate this on whether there are actually any new disputes, which should almost never happen.
1. We don't accept parablocks that have open disputes or disputes that have concluded against the candidate. It's important to import dispute statements before backing, but this is already the case, as disputes are imported before processing bitfields.

## Storage

```rust
Included: Option<()>,
```

## Finalization

1. Take (get and clear) the value of `Included`. If it is not `Some`, throw an unrecoverable error.

## Entry Points
* `enter`: This entry-point accepts three parameters: The relay-chain parent block header, [`Bitfields`](../types/availability.md#signed-availability-bitfield) and [`BackedCandidates`](../types/backing.md#backed-candidate).
  1. Hash the parent header and make sure that it corresponds to the block hash of the parent (tracked by the `frame_system` FRAME module).
  1. Invoke `Disputes::provide_multi_dispute_data`.
  1. If `Disputes::is_frozen`, return and set `Included` to `Some(())`.
  1. If there are any created disputes from the current session, invoke `Inclusion::collect_disputed` with the disputed candidates. Annotate each returned core with `FreedReason::Concluded`.
  1. The `Bitfields` are first forwarded to the `Inclusion::process_bitfields` routine, returning a set of freed cores. Provide `Scheduler::core_para` as a core-lookup to the `process_bitfields` routine. Annotate each of these freed cores with `FreedReason::Concluded`.
  1. For each freed candidate from the `Inclusion::process_bitfields` call, invoke `Disputes::note_included(current_session, candidate)`.
  1. If `Scheduler::availability_timeout_predicate` is `Some`, invoke `Inclusion::collect_pending` using it, and annotate each of those freed cores with `FreedReason::TimedOut`.
  1. Combine and sort the dispute-freed cores, the bitfield-freed cores, and the timed-out cores.
  1. Invoke `Scheduler::clear`.
  1. Invoke `Scheduler::schedule(freed_cores, System::current_block())`.
  1. Extract `parent_storage_root` from the parent header.
  1. If `Disputes::could_be_invalid(current_session, candidate)` is true for any of the `backed_candidates`, fail.
  1. Invoke the `Inclusion::process_candidates` routine with the parameters `(parent_storage_root, backed_candidates, Scheduler::scheduled(), Scheduler::group_validators)`.
  1. Call `Scheduler::occupied` using the return value of the `Inclusion::process_candidates` call above, first sorting the list of assigned core indices.
  1. Call the `Ump::process_pending_upward_messages` routine to execute all messages in upward dispatch queues.
  1. If all of the above succeeds, set `Included` to `Some(())`.
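The `could_be_invalid` gate in the steps above can be sketched as follows; the free function shape and the closure standing in for the disputes module's routine are illustrative assumptions:

```rust
type SessionIndex = u32;
type CandidateHash = u64; // stand-in for a real hash type

// Reject the whole inherent if any backed candidate has a live dispute or a
// dispute that concluded against it, returning the offending candidate.
fn check_backed_candidates(
    current_session: SessionIndex,
    backed: &[CandidateHash],
    could_be_invalid: impl Fn(SessionIndex, CandidateHash) -> bool,
) -> Result<(), CandidateHash> {
    for &candidate in backed {
        if could_be_invalid(current_session, candidate) {
            return Err(candidate); // fail the entry-point
        }
    }
    Ok(())
}
```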