diff --git a/polkadot/.gitlab-ci.yml b/polkadot/.gitlab-ci.yml index 50ec69b15e..f2de0fa0a8 100644 --- a/polkadot/.gitlab-ci.yml +++ b/polkadot/.gitlab-ci.yml @@ -188,6 +188,15 @@ build-linux-release: &build - cp -r scripts/docker/* ./artifacts - sccache -s + +generate-impl-guide: + stage: build + image: + name: michaelfbryan/mdbook-docker-image:latest + entrypoint: [""] + script: + - mdbook build roadmap/implementors-guide + .publish-build: &publish-build stage: publish dependencies: diff --git a/polkadot/roadmap/implementors-guide/README.md b/polkadot/roadmap/implementors-guide/README.md index 0b0644b9fd..33bc0a0912 100644 --- a/polkadot/roadmap/implementors-guide/README.md +++ b/polkadot/roadmap/implementors-guide/README.md @@ -3,7 +3,7 @@ The implementers' guide is compiled from several source files with [mdBook](https://github.com/rust-lang/mdBook). To view it live, locally, from the repo root: ```sh -cargo install mdbook +cargo install mdbook mdbook-linkcheck mdbook serve roadmap/implementors-guide open http://localhost:3000 ``` diff --git a/polkadot/roadmap/implementors-guide/book.toml b/polkadot/roadmap/implementors-guide/book.toml index 4ef35df863..dd2ec97902 100644 --- a/polkadot/roadmap/implementors-guide/book.toml +++ b/polkadot/roadmap/implementors-guide/book.toml @@ -4,3 +4,6 @@ language = "en" multilingual = false src = "src" title = "The Polkadot Parachain Host Implementers' Guide" + +[output.html] +[output.linkcheck] diff --git a/polkadot/roadmap/implementors-guide/src/architecture.md b/polkadot/roadmap/implementors-guide/src/architecture.md index 8090c8c20a..d81e6826b0 100644 --- a/polkadot/roadmap/implementors-guide/src/architecture.md +++ b/polkadot/roadmap/implementors-guide/src/architecture.md @@ -76,7 +76,7 @@ It is also helpful to divide Node-side behavior into two further categories: Net ``` -Node-side behavior is split up into various subsystems. Subsystems are long-lived workers that perform a particular category of work. 
Subsystems can communicate with each other, and do so via an [Overseer](node/overseer.html) that prevents race conditions. +Node-side behavior is split up into various subsystems. Subsystems are long-lived workers that perform a particular category of work. Subsystems can communicate with each other, and do so via an [Overseer](node/overseer.md) that prevents race conditions. Runtime logic is divided up into Modules and APIs. Modules encapsulate particular behavior of the system. Modules consist of storage, routines, and entry-points. Routines are invoked by entry points, by other modules, upon block initialization or closing. Routines can read and alter the storage of the module. Entry-points are the means by which new information is introduced to a module and can limit the origins (user, root, parachain) that they accept being called by. Each block in the blockchain contains a set of Extrinsics. Each extrinsic targets a specific entry point to trigger and specifies which data should be passed to it. Runtime APIs provide a means for Node-side behavior to extract meaningful information from the state of a single fork. diff --git a/polkadot/roadmap/implementors-guide/src/glossary.md b/polkadot/roadmap/implementors-guide/src/glossary.md index 44934c1f21..21cf5fc291 100644 --- a/polkadot/roadmap/implementors-guide/src/glossary.md +++ b/polkadot/roadmap/implementors-guide/src/glossary.md @@ -29,6 +29,6 @@ Here you can find definitions of a bunch of jargon, usually specific to the Polk - Validator: Specially-selected node in the network who is responsible for validating parachain blocks and issuing attestations about their validity. - Validation Function: A piece of Wasm code that describes the state-transition function of a parachain. -Also of use is the [Substrate Glossary](https://substrate.dev/docs/en/overview/glossary). +Also of use is the [Substrate Glossary](https://substrate.dev/docs/en/knowledgebase/getting-started/glossary). 
[0]: https://wiki.polkadot.network/docs/en/learn-consensus diff --git a/polkadot/roadmap/implementors-guide/src/node/availability/availability-distribution.md b/polkadot/roadmap/implementors-guide/src/node/availability/availability-distribution.md index a919aed655..a916b3766b 100644 --- a/polkadot/roadmap/implementors-guide/src/node/availability/availability-distribution.md +++ b/polkadot/roadmap/implementors-guide/src/node/availability/availability-distribution.md @@ -2,7 +2,7 @@ Distribute availability erasure-coded chunks to validators. -After a candidate is backed, the availability of the PoV block must be confirmed by 2/3+ of all validators. Validating a candidate successfully and contributing it to being backable leads to the PoV and erasure-coding being stored in the [Availability Store](../utility/availability-store.html). +After a candidate is backed, the availability of the PoV block must be confirmed by 2/3+ of all validators. Validating a candidate successfully and contributing it to being backable leads to the PoV and erasure-coding being stored in the [Availability Store](../utility/availability-store.md). ## Protocol @@ -34,7 +34,7 @@ We re-attempt to send anything live to a peer upon any view update from that pee On our view change, for all live candidates, we will check if we have the PoV by issuing a `QueryPoV` message and waiting for the response. If the query returns `Some`, we will perform the erasure-coding and distribute all messages to peers that will accept them. -If we are operating as a validator, we note our index `i` in the validator set and keep the `i`th availability chunk for any live candidate, as we receive it. We keep the chunk and its merkle proof in the [Availability Store](../utility/availability-store.html) by sending a `StoreChunk` command. This includes chunks and proofs generated as the result of a successful `QueryPoV`. 
+If we are operating as a validator, we note our index `i` in the validator set and keep the `i`th availability chunk for any live candidate, as we receive it. We keep the chunk and its merkle proof in the [Availability Store](../utility/availability-store.md) by sending a `StoreChunk` command. This includes chunks and proofs generated as the result of a successful `QueryPoV`. > TODO: back-and-forth is kind of ugly but drastically simplifies the pruning in the availability store, as it creates an invariant that chunks are only stored if the candidate was actually backed > diff --git a/polkadot/roadmap/implementors-guide/src/node/availability/bitfield-distribution.md b/polkadot/roadmap/implementors-guide/src/node/availability/bitfield-distribution.md index 7e23d34a56..97a5c14be3 100644 --- a/polkadot/roadmap/implementors-guide/src/node/availability/bitfield-distribution.md +++ b/polkadot/roadmap/implementors-guide/src/node/availability/bitfield-distribution.md @@ -6,7 +6,7 @@ Validators vote on the availability of a backed candidate by issuing signed bitf `ProtocolId`: `b"bitd"` -Input: [`BitfieldDistributionMessage`](../../overseer-protocol.md#bitfield-distribution-message) +Input: [`BitfieldDistributionMessage`](../../types/overseer-protocol.md#bitfield-distribution-message) Output: - `NetworkBridge::RegisterEventProducer(ProtocolId)` @@ -16,6 +16,6 @@ Output: ## Functionality -This is implemented as a gossip system. Register a [network bridge](../utility/network-bridge.html) event producer on startup and track peer connection, view change, and disconnection events. Only accept bitfields relevant to our current view and only distribute bitfields to other peers when relevant to their most recent view. Check bitfield signatures in this subsystem and accept and distribute only one bitfield per validator. +This is implemented as a gossip system. 
Register a [network bridge](../utility/network-bridge.md) event producer on startup and track peer connection, view change, and disconnection events. Only accept bitfields relevant to our current view and only distribute bitfields to other peers when relevant to their most recent view. Check bitfield signatures in this subsystem and accept and distribute only one bitfield per validator. When receiving a bitfield either from the network or from a `DistributeBitfield` message, forward it along to the block authorship (provisioning) subsystem for potential inclusion in a block. diff --git a/polkadot/roadmap/implementors-guide/src/node/availability/bitfield-signing.md b/polkadot/roadmap/implementors-guide/src/node/availability/bitfield-signing.md index 20db290f99..613736901d 100644 --- a/polkadot/roadmap/implementors-guide/src/node/availability/bitfield-signing.md +++ b/polkadot/roadmap/implementors-guide/src/node/availability/bitfield-signing.md @@ -20,6 +20,6 @@ If not running as a validator, do nothing. - Determine our validator index `i`, the set of backed candidates pending availability in `r`, and which bit of the bitfield each corresponds to. - > TODO: wait T time for availability distribution? -- Start with an empty bitfield. For each bit in the bitfield, if there is a candidate pending availability, query the [Availability Store](../utility/availability-store.html) for whether we have the availability chunk for our validator index. +- Start with an empty bitfield. For each bit in the bitfield, if there is a candidate pending availability, query the [Availability Store](../utility/availability-store.md) for whether we have the availability chunk for our validator index. - For all chunks we have, set the corresponding bit in the bitfield. - Sign the bitfield and dispatch a `BitfieldDistribution::DistributeBitfield` message. 
diff --git a/polkadot/roadmap/implementors-guide/src/node/backing/candidate-backing.md b/polkadot/roadmap/implementors-guide/src/node/backing/candidate-backing.md index baceaff466..c57680e635 100644 --- a/polkadot/roadmap/implementors-guide/src/node/backing/candidate-backing.md +++ b/polkadot/roadmap/implementors-guide/src/node/backing/candidate-backing.md @@ -2,17 +2,17 @@ The Candidate Backing subsystem ensures every parablock considered for relay block inclusion has been seconded by at least one validator, and approved by a quorum. Parablocks for which no validator will assert correctness are discarded. If the block later proves invalid, the initial backers are slashable; this gives polkadot a rational threat model during subsequent stages. -Its role is to produce backable candidates for inclusion in new relay-chain blocks. It does so by issuing signed [`Statement`s](../../types/backing.html#statement-type) and tracking received statements signed by other validators. Once enough statements are received, they can be combined into backing for specific candidates. +Its role is to produce backable candidates for inclusion in new relay-chain blocks. It does so by issuing signed [`Statement`s](../../types/backing.md#statement-type) and tracking received statements signed by other validators. Once enough statements are received, they can be combined into backing for specific candidates. Note that though the candidate backing subsystem attempts to produce as many backable candidates as possible, it does _not_ attempt to choose a single authoritative one. The choice of which actually gets included is ultimately up to the block author, by whatever metrics it may use; those are opaque to this subsystem. -Once a sufficient quorum has agreed that a candidate is valid, this subsystem notifies the [Provisioner](../utility/provisioner.html), which in turn engages block production mechanisms to include the parablock. 
+Once a sufficient quorum has agreed that a candidate is valid, this subsystem notifies the [Provisioner](../utility/provisioner.md), which in turn engages block production mechanisms to include the parablock. ## Protocol -The [Candidate Selection subsystem](candidate-selection.html) is the primary source of non-overseer messages into this subsystem. That subsystem generates appropriate [`CandidateBackingMessage`s](../../types/overseer-protocol.html#candidate-backing-message), and passes them to this subsystem. +The [Candidate Selection subsystem](candidate-selection.md) is the primary source of non-overseer messages into this subsystem. That subsystem generates appropriate [`CandidateBackingMessage`s](../../types/overseer-protocol.md#candidate-backing-message), and passes them to this subsystem. -This subsystem validates the candidates and generates an appropriate [`Statement`](../../types/backing.html#statement-type). All `Statement`s are then passed on to the [Statement Distribution subsystem](statement-distribution.html) to be gossiped to peers. When this subsystem decides that a candidate is invalid, and it was recommended to us to second by our own Candidate Selection subsystem, a message is sent to the Candidate Selection subsystem with the candidate's hash so that the collator which recommended it can be penalized. +This subsystem validates the candidates and generates an appropriate [`Statement`](../../types/backing.md#statement-type). All `Statement`s are then passed on to the [Statement Distribution subsystem](statement-distribution.md) to be gossiped to peers. When this subsystem decides that a candidate is invalid, and it was recommended to us to second by our own Candidate Selection subsystem, a message is sent to the Candidate Selection subsystem with the candidate's hash so that the collator which recommended it can be penalized. 
## Functionality @@ -20,8 +20,8 @@ The subsystem should maintain a set of handles to Candidate Backing Jobs that ar ### On Overseer Signal -* If the signal is an [`OverseerSignal`](../../types/overseer-protocol.html#overseer-signal)`::StartWork(relay_parent)`, spawn a Candidate Backing Job with the given relay parent, storing a bidirectional channel with the Candidate Backing Job in the set of handles. -* If the signal is an [`OverseerSignal`](../../types/overseer-protocol.html#overseer-signal)`::StopWork(relay_parent)`, cease the Candidate Backing Job under that relay parent, if any. +* If the signal is an [`OverseerSignal`](../../types/overseer-protocol.md#overseer-signal)`::StartWork(relay_parent)`, spawn a Candidate Backing Job with the given relay parent, storing a bidirectional channel with the Candidate Backing Job in the set of handles. +* If the signal is an [`OverseerSignal`](../../types/overseer-protocol.md#overseer-signal)`::StopWork(relay_parent)`, cease the Candidate Backing Job under that relay parent, if any. ### On `CandidateBackingMessage` @@ -39,7 +39,7 @@ The subsystem should maintain a set of handles to Candidate Backing Jobs that ar The Candidate Backing Job represents the work a node does for backing candidates with respect to a particular relay-parent. -The goal of a Candidate Backing Job is to produce as many backable candidates as possible. This is done via signed [`Statement`s](../../types/backing.html#statement-type) by validators. If a candidate receives a majority of supporting Statements from the Parachain Validators currently assigned, then that candidate is considered backable. +The goal of a Candidate Backing Job is to produce as many backable candidates as possible. This is done via signed [`Statement`s](../../types/backing.md#statement-type) by validators. If a candidate receives a majority of supporting Statements from the Parachain Validators currently assigned, then that candidate is considered backable. 
### On Startup diff --git a/polkadot/roadmap/implementors-guide/src/node/backing/candidate-selection.md b/polkadot/roadmap/implementors-guide/src/node/backing/candidate-selection.md index 2382716d90..103146d161 100644 --- a/polkadot/roadmap/implementors-guide/src/node/backing/candidate-selection.md +++ b/polkadot/roadmap/implementors-guide/src/node/backing/candidate-selection.md @@ -6,18 +6,18 @@ This subsystem includes networking code for communicating with collators, and tr This subsystem is only ever interested in parablocks assigned to the particular parachain which this validator is currently handling. -New parablock candidates may arrive from a potentially unbounded set of collators. This subsystem chooses either 0 or 1 of them per relay parent to second. If it chooses to second a candidate, it sends an appropriate message to the [Candidate Backing subsystem](candidate-backing.html) to generate an appropriate [`Statement`](../../types/backing.html#statement-type). +New parablock candidates may arrive from a potentially unbounded set of collators. This subsystem chooses either 0 or 1 of them per relay parent to second. If it chooses to second a candidate, it sends an appropriate message to the [Candidate Backing subsystem](candidate-backing.md) to generate an appropriate [`Statement`](../../types/backing.md#statement-type). -In the event that a parablock candidate proves invalid, this subsystem will receive a message back from the Candidate Backing subsystem indicating so. If that parablock candidate originated from a collator, this subsystem will blacklist that collator. If that parablock candidate originated from a peer, this subsystem generates a report for the [Misbehavior Arbitration subsystem](../utility/misbehavior-arbitration.html). +In the event that a parablock candidate proves invalid, this subsystem will receive a message back from the Candidate Backing subsystem indicating so. 
If that parablock candidate originated from a collator, this subsystem will blacklist that collator. If that parablock candidate originated from a peer, this subsystem generates a report for the [Misbehavior Arbitration subsystem](../utility/misbehavior-arbitration.md). ## Protocol -Input: [`CandidateSelectionMessage`](../../types/overseer-protocol#candidate-selection-message) +Input: [`CandidateSelectionMessage`](../../types/overseer-protocol.md#candidate-selection-message) Output: - Validation requests to Validation subsystem -- [`CandidateBackingMessage`](../../types/overseer-protocol.html#candidate-backing-message)`::Second` +- [`CandidateBackingMessage`](../../types/overseer-protocol.md#candidate-backing-message)`::Second` - Peer set manager: report peers (collators who have misbehaved) ## Functionality diff --git a/polkadot/roadmap/implementors-guide/src/node/backing/pov-distribution.md b/polkadot/roadmap/implementors-guide/src/node/backing/pov-distribution.md index 60493d4812..d7290cbfbf 100644 --- a/polkadot/roadmap/implementors-guide/src/node/backing/pov-distribution.md +++ b/polkadot/roadmap/implementors-guide/src/node/backing/pov-distribution.md @@ -1,6 +1,6 @@ # PoV Distribution -This subsystem is responsible for distributing PoV blocks. For now, unified with [Statement Distribution subsystem](statement-distribution.html). +This subsystem is responsible for distributing PoV blocks. For now, unified with [Statement Distribution subsystem](statement-distribution.md). ## Protocol diff --git a/polkadot/roadmap/implementors-guide/src/node/backing/statement-distribution.md b/polkadot/roadmap/implementors-guide/src/node/backing/statement-distribution.md index 1683361c85..557e4db201 100644 --- a/polkadot/roadmap/implementors-guide/src/node/backing/statement-distribution.md +++ b/polkadot/roadmap/implementors-guide/src/node/backing/statement-distribution.md @@ -22,7 +22,7 @@ Implemented as a gossip protocol. Register a network event producer on startup. 
Statement Distribution is the only backing subsystem which has any notion of peer nodes, who are any full nodes on the network. Validators will also act as peer nodes. -It is responsible for signing statements that we have generated and forwarding them, and for detecting a variety of Validator misbehaviors for reporting to [Misbehavior Arbitration](../utility/misbehavior-arbitration.html). During the Backing stage of the inclusion pipeline, it's the main point of contact with peer nodes, who distribute statements by validators. On receiving a signed statement from a peer, assuming the peer receipt state machine is in an appropriate state, it sends the Candidate Receipt to the [Candidate Backing subsystem](candidate-backing.html) to handle the validator's statement. +It is responsible for signing statements that we have generated and forwarding them, and for detecting a variety of Validator misbehaviors for reporting to [Misbehavior Arbitration](../utility/misbehavior-arbitration.md). During the Backing stage of the inclusion pipeline, it's the main point of contact with peer nodes, who distribute statements by validators. On receiving a signed statement from a peer, assuming the peer receipt state machine is in an appropriate state, it sends the Candidate Receipt to the [Candidate Backing subsystem](candidate-backing.md) to handle the validator's statement. Track equivocating validators and stop accepting information from them. Forward double-vote proofs to the double-vote reporting system. Establish a data-dependency order: @@ -35,7 +35,7 @@ The Statement Distribution subsystem sends statements to peer nodes and detects ## Peer Receipt State Machine -There is a very simple state machine which governs which messages we are willing to receive from peers. Not depicted in the state machine: on initial receipt of any [`SignedStatement`](../../types/backing.html#signed-statement-type), validate that the provided signature does in fact sign the included data. 
Note that each individual parablock candidate gets its own instance of this state machine; it is perfectly legal to receive a `Valid(X)` before a `Seconded(Y)`, as long as a `Seconded(X)` has been received. +There is a very simple state machine which governs which messages we are willing to receive from peers. Not depicted in the state machine: on initial receipt of any [`SignedStatement`](../../types/backing.md#signed-statement-type), validate that the provided signature does in fact sign the included data. Note that each individual parablock candidate gets its own instance of this state machine; it is perfectly legal to receive a `Valid(X)` before a `Seconded(Y)`, as long as a `Seconded(X)` has been received. A: Initial State. Receive `SignedStatement(Statement::Second)`: extract `Statement`, forward to Candidate Backing, proceed to B. Receive any other `SignedStatement` variant: drop it. diff --git a/polkadot/roadmap/implementors-guide/src/node/overseer.md b/polkadot/roadmap/implementors-guide/src/node/overseer.md index 94896c4d47..27c7c7ebb4 100644 --- a/polkadot/roadmap/implementors-guide/src/node/overseer.md +++ b/polkadot/roadmap/implementors-guide/src/node/overseer.md @@ -24,7 +24,7 @@ The hierarchy of subsystems: ``` -The overseer determines work to do based on block import events and block finalization events. It does this by keeping track of the set of relay-parents for which work is currently being done. This is known as the "active leaves" set. It determines an initial set of active leaves on startup based on the data on-disk, and uses events about blockchain import to update the active leaves. Updates lead to [`OverseerSignal`](../types.overseer-protocol.html#overseer-signal)`::StartWork` and [`OverseerSignal`](../types/overseer-protocol.html#overseer-signal)`::StopWork` being sent according to new relay-parents, as well as relay-parents to stop considering. 
Block import events inform the overseer of leaves that no longer need to be built on, now that they have children, and inform us to begin building on those children. Block finalization events inform us when we can stop focusing on blocks that appear to have been orphaned. +The overseer determines work to do based on block import events and block finalization events. It does this by keeping track of the set of relay-parents for which work is currently being done. This is known as the "active leaves" set. It determines an initial set of active leaves on startup based on the data on-disk, and uses events about blockchain import to update the active leaves. Updates lead to [`OverseerSignal`](../types/overseer-protocol.md#overseer-signal)`::StartWork` and [`OverseerSignal`](../types/overseer-protocol.md#overseer-signal)`::StopWork` being sent according to new relay-parents, as well as relay-parents to stop considering. Block import events inform the overseer of leaves that no longer need to be built on, now that they have children, and inform us to begin building on those children. Block finalization events inform us when we can stop focusing on blocks that appear to have been orphaned. The overseer's logic can be described with these functions: diff --git a/polkadot/roadmap/implementors-guide/src/node/subsystems-and-jobs.md b/polkadot/roadmap/implementors-guide/src/node/subsystems-and-jobs.md index 9cbced3f41..a9a65b3c43 100644 --- a/polkadot/roadmap/implementors-guide/src/node/subsystems-and-jobs.md +++ b/polkadot/roadmap/implementors-guide/src/node/subsystems-and-jobs.md @@ -2,7 +2,7 @@ In this section we define the notions of Subsystems and Jobs. These are guidelines for how we will employ an architecture of hierarchical state machines. We'll have a top-level state machine which oversees the next level of state machines which oversee another layer of state machines and so on. 
The next sections will lay out these guidelines for what we've called subsystems and jobs, since this model applies to many of the tasks that the Node-side behavior needs to encompass, but these are only guidelines and some Subsystems may have deeper hierarchies internally. -Subsystems are long-lived worker tasks that are in charge of performing some particular kind of work. All subsystems can communicate with each other via a well-defined protocol. Subsystems can't generally communicate directly, but must coordinate communication through an [Overseer](overseer.html), which is responsible for relaying messages, handling subsystem failures, and dispatching work signals. +Subsystems are long-lived worker tasks that are in charge of performing some particular kind of work. All subsystems can communicate with each other via a well-defined protocol. Subsystems can't generally communicate directly, but must coordinate communication through an [Overseer](overseer.md), which is responsible for relaying messages, handling subsystem failures, and dispatching work signals. Most work that happens on the Node-side is related to building on top of a specific relay-chain block, which is contextually known as the "relay parent". We call it the relay parent to explicitly denote that it is a block in the relay chain and not on a parachain. We refer to the parent because when we are in the process of building a new block, we don't know what that new block is going to be. The parent block is our only stable point of reference, even though it is usually only useful when it is not yet a parent but in fact a leaf of the block-DAG expected to soon become a parent (because validators are authoring on top of it). Furthermore, we are assuming a forkful blockchain-extension protocol, which means that there may be multiple possible children of the relay-parent. 
Even if the relay parent has multiple children blocks, the parent of those children is the same, and the context in which those children are authored should be the same. The parent block is the best and most stable reference to use for defining the scope of work items and messages, and is typically referred to by its cryptographic hash. diff --git a/polkadot/roadmap/implementors-guide/src/node/utility/availability-store.md b/polkadot/roadmap/implementors-guide/src/node/utility/availability-store.md index 6004a730f2..51810a06a0 100644 --- a/polkadot/roadmap/implementors-guide/src/node/utility/availability-store.md +++ b/polkadot/roadmap/implementors-guide/src/node/utility/availability-store.md @@ -25,7 +25,7 @@ There may be multiple competing blocks all ending the availability phase for a p ## Protocol -Input: [`AvailabilityStoreMessage`](../../types/overseer-protocol.html#availability-store-message) +Input: [`AvailabilityStoreMessage`](../../types/overseer-protocol.md#availability-store-message) ## Functionality diff --git a/polkadot/roadmap/implementors-guide/src/node/utility/candidate-validation.md b/polkadot/roadmap/implementors-guide/src/node/utility/candidate-validation.md index edee48014c..ffeaa7a37e 100644 --- a/polkadot/roadmap/implementors-guide/src/node/utility/candidate-validation.md +++ b/polkadot/roadmap/implementors-guide/src/node/utility/candidate-validation.md @@ -6,7 +6,7 @@ A variety of subsystems want to know if a parachain block candidate is valid. No ## Protocol -Input: [`CandidateValidationMessage`](../../types/overseer-protocol.html#validation-request-type) +Input: [`CandidateValidationMessage`](../../types/overseer-protocol.md#validation-request-type) Output: Validation result via the provided response side-channel. 
diff --git a/polkadot/roadmap/implementors-guide/src/node/utility/network-bridge.md b/polkadot/roadmap/implementors-guide/src/node/utility/network-bridge.md index 283974f273..0e83bc6277 100644 --- a/polkadot/roadmap/implementors-guide/src/node/utility/network-bridge.md +++ b/polkadot/roadmap/implementors-guide/src/node/utility/network-bridge.md @@ -10,7 +10,7 @@ So in short, this Subsystem acts as a bridge between an actual network component ## Protocol -Input: [`NetworkBridgeMessage`](../../types/overseer-protocol.html#network-bridge-message) +Input: [`NetworkBridgeMessage`](../../types/overseer-protocol.md#network-bridge-message) Output: Varying, based on registered event producers. ## Functionality diff --git a/polkadot/roadmap/implementors-guide/src/node/utility/provisioner.md b/polkadot/roadmap/implementors-guide/src/node/utility/provisioner.md index 72f205a4bf..33fb394f1b 100644 --- a/polkadot/roadmap/implementors-guide/src/node/utility/provisioner.md +++ b/polkadot/roadmap/implementors-guide/src/node/utility/provisioner.md @@ -10,11 +10,11 @@ There are several distinct types of provisionable data, but they share this prop ### Backed Candidates -The block author can choose 0 or 1 backed parachain candidates per parachain; the only constraint is that each backed candidate has the appropriate relay parent. However, the choice of a backed candidate must be the block author's; the provisioner must ensure that block authors are aware of all available [`BackedCandidate`s](../../types/backing.html#backed-candidate). +The block author can choose 0 or 1 backed parachain candidates per parachain; the only constraint is that each backed candidate has the appropriate relay parent. However, the choice of a backed candidate must be the block author's; the provisioner must ensure that block authors are aware of all available [`BackedCandidate`s](../../types/backing.md#backed-candidate). 
### Signed Bitfields -[Signed bitfields](../../types/availability.html#signed-availability-bitfield) are attestations from a particular validator about which candidates it believes are available. +[Signed bitfields](../../types/availability.md#signed-availability-bitfield) are attestations from a particular validator about which candidates it believes are available. ### Misbehavior Reports @@ -26,13 +26,13 @@ Note that there is no mechanism in place which forces a block author to include The dispute inherent is similar to a misbehavior report in that it is an attestation of misbehavior on the part of a validator or group of validators. Unlike a misbehavior report, it is not self-contained: resolution requires coordinated action by several validators. The canonical example of a dispute inherent involves an approval checker discovering that a set of validators has improperly approved an invalid parachain block: resolving this requires the entire validator set to re-validate the block, so that the minority can be slashed. -Dispute resolution is complex and is explained in substantially more detail [here](../../runtime/validity.html). +Dispute resolution is complex and is explained in substantially more detail [here](../../runtime/validity.md). > TODO: The provisioner is responsible for selecting remote disputes to replay. Let's figure out the details. ## Protocol -Input: [`ProvisionerMessage`](../../types/overseer-protocol.html#provisioner-message). Backed candidates come from the [Candidate Backing subsystem](../backing/candidate-backing.html), signed bitfields come from the [Bitfield Distribution subsystem](../availability/bitfield-distribution.html), and misbehavior reports and disputes come from the [Misbehavior Arbitration subsystem](misbehavior-arbitration.html). +Input: [`ProvisionerMessage`](../../types/overseer-protocol.md#provisioner-message). 
+Backed candidates come from the [Candidate Backing subsystem](../backing/candidate-backing.md), signed bitfields come from the [Bitfield Distribution subsystem](../availability/bitfield-distribution.md), and misbehavior reports and disputes come from the [Misbehavior Arbitration subsystem](misbehavior-arbitration.md).

 At initialization, this subsystem has no outputs. Block authors can send a `ProvisionerMessage::RequestBlockAuthorshipData`, which includes a channel over which provisionable data can be sent. All appropriate provisionable data will then be sent over this channel, as it is received.
diff --git a/polkadot/roadmap/implementors-guide/src/node/utility/runtime-api.md b/polkadot/roadmap/implementors-guide/src/node/utility/runtime-api.md
index 1c6c73d321..05ceb85f4d 100644
--- a/polkadot/roadmap/implementors-guide/src/node/utility/runtime-api.md
+++ b/polkadot/roadmap/implementors-guide/src/node/utility/runtime-api.md
@@ -4,7 +4,7 @@ The Runtime API subsystem is responsible for providing a single point of access

 ## Protocol

-Input: [`RuntimeApiMessage`](../../types/overseer-protocol.html#runtime-api-message)
+Input: [`RuntimeApiMessage`](../../types/overseer-protocol.md#runtime-api-message)

 Output: None
diff --git a/polkadot/roadmap/implementors-guide/src/node/validity/README.md b/polkadot/roadmap/implementors-guide/src/node/validity/README.md
index ab39cf6351..866044f074 100644
--- a/polkadot/roadmap/implementors-guide/src/node/validity/README.md
+++ b/polkadot/roadmap/implementors-guide/src/node/validity/README.md
@@ -1,3 +1,3 @@
 # Validity

-The node validity subsystems exist to support the runtime [Validity module](../../runtime/validity.html). Their behavior and specifications are as-yet undefined.
+The node validity subsystems exist to support the runtime [Validity module](../../runtime/validity.md). Their behavior and specifications are as-yet undefined.
diff --git a/polkadot/roadmap/implementors-guide/src/parachains-overview.md b/polkadot/roadmap/implementors-guide/src/parachains-overview.md
index 561811ee7f..43ab974962 100644
--- a/polkadot/roadmap/implementors-guide/src/parachains-overview.md
+++ b/polkadot/roadmap/implementors-guide/src/parachains-overview.md
@@ -18,11 +18,11 @@ Here is a description of the Inclusion Pipeline: the path a parachain block (or
 1. Validators are selected and assigned to parachains by the Validator Assignment routine.
 1. A collator produces the parachain block, which is known as a parachain candidate or candidate, along with a PoV for the candidate.
-1. The collator forwards the candidate and PoV to validators assigned to the same parachain via the [Collation Distribution subsystem](node/collators/collation-distribution.html).
-1. The validators assigned to a parachain at a given point in time participate in the [Candidate Backing subsystem](node/backing/candidate-backing.html) to validate candidates that were put forward for validation. Candidates which gather enough signed validity statements from validators are considered "backable". Their backing is the set of signed validity statements.
+1. The collator forwards the candidate and PoV to validators assigned to the same parachain via the [Collation Distribution subsystem](node/collators/collation-distribution.md).
+1. The validators assigned to a parachain at a given point in time participate in the [Candidate Backing subsystem](node/backing/candidate-backing.md) to validate candidates that were put forward for validation. Candidates which gather enough signed validity statements from validators are considered "backable". Their backing is the set of signed validity statements.
 1. A relay-chain block author, selected by BABE, can note up to one (1) backable candidate for each parachain to include in the relay-chain block alongside its backing. A backable candidate once included in the relay-chain is considered backed in that fork of the relay-chain.
 1. Once backed in the relay-chain, the parachain candidate is considered to be "pending availability". It is not considered to be included as part of the parachain until it is proven available.
-1. In the following relay-chain blocks, validators will participate in the [Availability Distribution subsystem](node/availability/availability-distribution.html) to ensure availability of the candidate. Information regarding the availability of the candidate will be noted in the subsequent relay-chain blocks.
+1. In the following relay-chain blocks, validators will participate in the [Availability Distribution subsystem](node/availability/availability-distribution.md) to ensure availability of the candidate. Information regarding the availability of the candidate will be noted in the subsequent relay-chain blocks.
 1. Once the relay-chain state machine has enough information to consider the candidate's PoV as being available, the candidate is considered to be part of the parachain and is graduated to being a full parachain block, or parablock for short.

 Note that the candidate can fail to be included in any of the following ways:
diff --git a/polkadot/roadmap/implementors-guide/src/runtime/README.md b/polkadot/roadmap/implementors-guide/src/runtime/README.md
index 8ff2f7eca4..2a806d4c2d 100644
--- a/polkadot/roadmap/implementors-guide/src/runtime/README.md
+++ b/polkadot/roadmap/implementors-guide/src/runtime/README.md
@@ -21,7 +21,7 @@ We will split the logic of the runtime up into these modules:
 * Inclusion: handles the inclusion and availability of scheduled parachains and parathreads.
 * Validity: handles secondary checks and dispute resolution for included, available parablocks.
-The [Initializer module](initializer.html) is special - it's responsible for handling the initialization logic of the other modules to ensure that the correct initialization order and related invariants are maintained. The other modules won't specify a on-initialize logic, but will instead expose a special semi-private routine that the initialization module will call. The other modules are relatively straightforward and perform the roles described above.
+The [Initializer module](initializer.md) is special - it's responsible for handling the initialization logic of the other modules to ensure that the correct initialization order and related invariants are maintained. The other modules won't specify an on-initialize logic, but will instead expose a special semi-private routine that the initialization module will call. The other modules are relatively straightforward and perform the roles described above.

 The Parachain Host operates under a changing set of validators. Time is split up into periodic sessions, where each session brings a potentially new set of validators. Sessions are buffered by one, meaning that the validators of the upcoming session are fixed and always known. Parachain Host runtime modules need to react to changes in the validator set, as it will affect the runtime logic for processing candidate backing, availability bitfields, and misbehavior reports.

 The Parachain Host modules can't determine ahead-of-time exactly when session change notifications are going to happen within the block (note: this depends on module initialization order again - better to put session before parachains modules). Ideally, session changes are always handled before initialization. It is clearly a problem if we compute validator assignments to parachains during initialization and then the set of validators changes. In the best case, we can recognize that re-initialization needs to be done. In the worst case, bugs would occur.
@@ -33,7 +33,7 @@ There are 3 main ways that we can handle this issue:

 Although option 3 is the most comprehensive, it runs counter to our goal of simplicity. Option 1 means requiring the runtime to do redundant work at all sessions and will also mean, like option 3, that designing things in such a way that initialization can be rolled back and reapplied under the new environment. That leaves option 2, although it is a "nuclear" option in a way and requires us to constrain the parachain host to only run in full runtimes with a certain order of operations.

-So the other role of the initializer module is to forward session change notifications to modules in the initialization order, throwing an unrecoverable error if the notification is received after initialization. Session change is the point at which the [Configuration Module](configuration.html) updates the configuration. Most of the other modules will handle changes in the configuration during their session change operation, so the initializer should provide both the old and new configuration to all the other
+So the other role of the initializer module is to forward session change notifications to modules in the initialization order, throwing an unrecoverable error if the notification is received after initialization. Session change is the point at which the [Configuration Module](configuration.md) updates the configuration. Most of the other modules will handle changes in the configuration during their session change operation, so the initializer should provide both the old and new configuration to all the other
 modules alongside the session change notification.
 This means that a session change notification should consist of the following data:

 ```rust
diff --git a/polkadot/roadmap/implementors-guide/src/runtime/configuration.md b/polkadot/roadmap/implementors-guide/src/runtime/configuration.md
index 46c041beb5..37e5202429 100644
--- a/polkadot/roadmap/implementors-guide/src/runtime/configuration.md
+++ b/polkadot/roadmap/implementors-guide/src/runtime/configuration.md
@@ -1,8 +1,8 @@
 # Configuration Module

-This module is responsible for managing all configuration of the parachain host in-flight. It provides a central point for configuration updates to prevent races between configuration changes and parachain-processing logic. Configuration can only change during the session change routine, and as this module handles the session change notification first it provides an invariant that the configuration does not change throughout the entire session. Both the [scheduler](scheduler.html) and [inclusion](inclusion.html) modules rely on this invariant to ensure proper behavior of the scheduler.
+This module is responsible for managing all configuration of the parachain host in-flight. It provides a central point for configuration updates to prevent races between configuration changes and parachain-processing logic. Configuration can only change during the session change routine, and as this module handles the session change notification first it provides an invariant that the configuration does not change throughout the entire session. Both the [scheduler](scheduler.md) and [inclusion](inclusion.md) modules rely on this invariant to ensure proper behavior of the scheduler.

-The configuration that we will be tracking is the [`HostConfiguration`](../types/runtime.html#host-configuration) struct.
+The configuration that we will be tracking is the [`HostConfiguration`](../types/runtime.md#host-configuration) struct.
 ## Storage
diff --git a/polkadot/roadmap/implementors-guide/src/runtime/inclusioninherent.md b/polkadot/roadmap/implementors-guide/src/runtime/inclusioninherent.md
index c65754c0e2..c9f1b2105f 100644
--- a/polkadot/roadmap/implementors-guide/src/runtime/inclusioninherent.md
+++ b/polkadot/roadmap/implementors-guide/src/runtime/inclusioninherent.md
@@ -16,7 +16,7 @@ Included: Option<()>,

 ## Entry Points

-* `inclusion`: This entry-point accepts two parameters: [`Bitfields`](../types/availability.html#signed-availability-bitfield) and [`BackedCandidates`](../type-definitions.html#backed-candidate).
+* `inclusion`: This entry-point accepts two parameters: [`Bitfields`](../types/availability.md#signed-availability-bitfield) and [`BackedCandidates`](../types/backing.md#backed-candidate).
   1. The `Bitfields` are first forwarded to the `Inclusion::process_bitfields` routine, returning a set of freed cores. Provide a `Scheduler::core_para` as a core-lookup to the `process_bitfields` routine. Annotate each of these freed cores with `FreedReason::Concluded`.
   1. If `Scheduler::availability_timeout_predicate` is `Some`, invoke `Inclusion::collect_pending` using it, and add timed-out cores to the free cores, annotated with `FreedReason::TimedOut`.
   1. Invoke `Scheduler::schedule(freed)`
diff --git a/polkadot/roadmap/implementors-guide/src/runtime/initializer.md b/polkadot/roadmap/implementors-guide/src/runtime/initializer.md
index e184f0d829..aba4d5f352 100644
--- a/polkadot/roadmap/implementors-guide/src/runtime/initializer.md
+++ b/polkadot/roadmap/implementors-guide/src/runtime/initializer.md
@@ -19,7 +19,7 @@ The other modules are initialized in this order:
 1. Validity.
 1. Router.

-The [Configuration Module](configuration.html) is first, since all other modules need to operate under the same configuration as each other. It would lead to inconsistency if, for example, the scheduler ran first and then the configuration was updated before the Inclusion module.
+The [Configuration Module](configuration.md) is first, since all other modules need to operate under the same configuration as each other. It would lead to inconsistency if, for example, the scheduler ran first and then the configuration was updated before the Inclusion module.

 Set `HasInitialized` to true.
diff --git a/polkadot/roadmap/implementors-guide/src/runtime/scheduler.md b/polkadot/roadmap/implementors-guide/src/runtime/scheduler.md
index c408ab3538..0b6a60a383 100644
--- a/polkadot/roadmap/implementors-guide/src/runtime/scheduler.md
+++ b/polkadot/roadmap/implementors-guide/src/runtime/scheduler.md
@@ -60,11 +60,11 @@ Availability Core Transitions within Block
 |                        Availability Timeout
 ```

-Validator group assignments do not need to change very quickly. The security benefits of fast rotation is redundant with the challenge mechanism in the [Validity module](validity.html). Because of this, we only divide validators into groups at the beginning of the session and do not shuffle membership during the session. However, we do take steps to ensure that no particular validator group has dominance over a single parachain or parathread-multiplexer for an entire session to provide better guarantees of liveness.
+Validator group assignments do not need to change very quickly. The security benefits of fast rotation are redundant with the challenge mechanism in the [Validity module](validity.md). Because of this, we only divide validators into groups at the beginning of the session and do not shuffle membership during the session. However, we do take steps to ensure that no particular validator group has dominance over a single parachain or parathread-multiplexer for an entire session to provide better guarantees of liveness.

 Validator groups rotate across availability cores in a round-robin fashion, with rotation occurring at fixed intervals. The i'th group will be assigned to the `(i+k)%n`'th core at any point in time, where `k` is the number of rotations that have occurred in the session, and `n` is the number of cores. This makes upcoming rotations within the same session predictable.

-When a rotation occurs, validator groups are still responsible for distributing availability chunks for any previous cores that are still occupied and pending availability. In practice, rotation and availability-timeout frequencies should be set so this will only be the core they have just been rotated from. It is possible that a validator group is rotated onto a core which is currently occupied. In this case, the validator group will have nothing to do until the previously-assigned group finishes their availability work and frees the core or the availability process times out. Depending on if the core is for a parachain or parathread, a different timeout `t` from the [`HostConfiguration`](../types/runtime.html#host-configuration) will apply. Availability timeouts should only be triggered in the first `t-1` blocks after the beginning of a rotation.
+When a rotation occurs, validator groups are still responsible for distributing availability chunks for any previous cores that are still occupied and pending availability. In practice, rotation and availability-timeout frequencies should be set so this will only be the core they have just been rotated from. It is possible that a validator group is rotated onto a core which is currently occupied. In this case, the validator group will have nothing to do until the previously-assigned group finishes their availability work and frees the core or the availability process times out. Depending on whether the core is for a parachain or parathread, a different timeout `t` from the [`HostConfiguration`](../types/runtime.md#host-configuration) will apply. Availability timeouts should only be triggered in the first `t-1` blocks after the beginning of a rotation.
 Parathreads operate on a system of claims. Collators participate in auctions to stake a claim on authoring the next block of a parathread, although the auction mechanism is beyond the scope of the scheduler. The scheduler guarantees that they'll be given at least a certain number of attempts to author a candidate that is backed. Attempts that fail during the availability phase are not counted, since ensuring availability at that stage is the responsibility of the backing validators, not of the collator. When a claim is accepted, it is placed into a queue of claims, and each claim is assigned to a particular parathread-multiplexing core in advance. Given that the current assignments of validator groups to cores are known, and the upcoming assignments are predictable, it is possible for parathread collators to know who they should be talking to now and how they should begin establishing connections with as a fallback.
@@ -147,13 +147,13 @@ Scheduled: Vec, // sorted ascending by CoreIndex.

 ## Session Change

-Session changes are the only time that configuration can change, and the [Configuration module](configuration.html)'s session-change logic is handled before this module's. We also lean on the behavior of the [Inclusion module](inclusion.html) which clears all its occupied cores on session change. Thus we don't have to worry about cores being occupied across session boundaries and it is safe to re-size the `AvailabilityCores` bitfield.
+Session changes are the only time that configuration can change, and the [Configuration module](configuration.md)'s session-change logic is handled before this module's. We also lean on the behavior of the [Inclusion module](inclusion.md) which clears all its occupied cores on session change. Thus we don't have to worry about cores being occupied across session boundaries and it is safe to re-size the `AvailabilityCores` bitfield.

 Actions:

 1. Set `SessionStartBlock` to current block number.
 1. Clear all `Some` members of `AvailabilityCores`. Return all parathread claims to queue with retries un-incremented.
-1. Set `configuration = Configuration::configuration()` (see [`HostConfiguration`](../types/runtime.html#host-configuration))
+1. Set `configuration = Configuration::configuration()` (see [`HostConfiguration`](../types/runtime.md#host-configuration))
 1. Resize `AvailabilityCores` to have length `Paras::parachains().len() + configuration.parathread_cores with all`None` entries.
 1. Compute new validator groups by shuffling using a secure randomness beacon
    - We need a total of `N = Paras::parathreads().len() + configuration.parathread_cores` validator groups.
diff --git a/polkadot/roadmap/implementors-guide/src/runtime/validity.md b/polkadot/roadmap/implementors-guide/src/runtime/validity.md
index cd0c216332..11907cea77 100644
--- a/polkadot/roadmap/implementors-guide/src/runtime/validity.md
+++ b/polkadot/roadmap/implementors-guide/src/runtime/validity.md
@@ -53,7 +53,7 @@ The second type of remote dispute is the unconcluded dispute. An unconcluded rem

 When beginning a remote dispute, at least one escalation by a validator is required, but this validator may be malicious and desires to be slashed. There is no guarantee that the para is registered on this fork of the relay chain or that the para was considered available on any fork of the relay chain.

-So the first step is to have the remote dispute proceed through an availability process similar to the one in the [Inclusion Module](inclusion.html), but without worrying about core assignments or compactness in bitfields.
+So the first step is to have the remote dispute proceed through an availability process similar to the one in the [Inclusion Module](inclusion.md), but without worrying about core assignments or compactness in bitfields.
 We assume that remote disputes are with respect to the same validator set as on the current fork, as BABE and GRANDPA assure that forks are never long enough to diverge in validator set.

 > TODO: this is at least directionally correct. handling disputes on other validator sets seems useless anyway as they wouldn't be bonded.
diff --git a/polkadot/roadmap/implementors-guide/src/types/availability.md b/polkadot/roadmap/implementors-guide/src/types/availability.md
index 7556019eab..c01f1e44a3 100644
--- a/polkadot/roadmap/implementors-guide/src/types/availability.md
+++ b/polkadot/roadmap/implementors-guide/src/types/availability.md
@@ -18,7 +18,7 @@ struct SignedAvailabilityBitfield {
 struct Bitfields(Vec<(SignedAvailabilityBitfield)>), // bitfields sorted by validator index, ascending
 ```

-The signed payload is the SCALE encoding of the tuple `(bitfield, signing_context)` where `signing_context` is a [`SigningContext`](../types/candidate.html#signing-context).
+The signed payload is the SCALE encoding of the tuple `(bitfield, signing_context)` where `signing_context` is a [`SigningContext`](../types/candidate.md#signing-context).

 ## Proof-of-Validity
diff --git a/polkadot/roadmap/implementors-guide/src/types/backing.md b/polkadot/roadmap/implementors-guide/src/types/backing.md
index 46f1d79e87..843744c77f 100644
--- a/polkadot/roadmap/implementors-guide/src/types/backing.md
+++ b/polkadot/roadmap/implementors-guide/src/types/backing.md
@@ -1,6 +1,6 @@
 # Backing Types

-[Candidates](candidate.html) go through many phases before being considered included in a fork of the relay chain and eventually accepted.
+[Candidates](candidate.md) go through many phases before being considered included in a fork of the relay chain and eventually accepted.

 These types describe the data used in the backing phase. Some are sent over the wire within subsystems, and some are simply included in the relay-chain block.
@@ -21,7 +21,7 @@ enum ValidityAttestation {

 ## Statement Type

-The [Candidate Backing subsystem](../node/backing/candidate-backing.html) issues and signs these after candidate validation.
+The [Candidate Backing subsystem](../node/backing/candidate-backing.md) issues and signs these after candidate validation.

 ```rust
 /// A statement about the validity of a parachain candidate.
@@ -57,13 +57,13 @@ struct SignedStatement {
 ```

 The actual signed payload will be the SCALE encoding of `(compact_statement, signing_context)` where
-`compact_statement` is a tweak of the [`Statement`](#statement) enum where all variants, including `Seconded`, contain only the hash of the candidate, and the `signing_context` is a [`SigningContext`](../types/candidate.html#signing-context).
+`compact_statement` is a tweak of the [`Statement`](#statement) enum where all variants, including `Seconded`, contain only the hash of the candidate, and the `signing_context` is a [`SigningContext`](../types/candidate.md#signing-context).

 This prevents against replay attacks and allows the candidate receipt itself to be omitted when checking a signature on a `Seconded` statement in situations where the hash is known.

 ## Backed Candidate

-An [`AbridgedCandidateReceipt`](candidate.html#abridgedcandidatereceipt) along with all data necessary to prove its backing. This is submitted to the relay-chain to process and move along the candidate to the pending-availability stage.
+An [`AbridgedCandidateReceipt`](candidate.md#abridgedcandidatereceipt) along with all data necessary to prove its backing. This is submitted to the relay-chain to process and move along the candidate to the pending-availability stage.
 ```rust
 struct BackedCandidate {
diff --git a/polkadot/roadmap/implementors-guide/src/types/candidate.md b/polkadot/roadmap/implementors-guide/src/types/candidate.md
index 4ffb12d973..ff09365c93 100644
--- a/polkadot/roadmap/implementors-guide/src/types/candidate.md
+++ b/polkadot/roadmap/implementors-guide/src/types/candidate.md
@@ -81,7 +81,7 @@ Unlike the [`GlobalValidationData`](#globalvalidationdata), which only depends o

 This choice can also be expressed as a choice of which parent head of the para will be built on - either optimistically on the candidate pending availability or pessimistically on the one that is surely included.

-Para validation happens optimistically before the block is authored, so it is not possible to predict with 100% accuracy what will happen in the earlier phase of the [`InclusionInherent`](/runtime/inclusioninherent.html) module where new availability bitfields and availability timeouts are processed. This is what will eventually define whether a candidate can be backed within a specific relay-chain block.
+Para validation happens optimistically before the block is authored, so it is not possible to predict with 100% accuracy what will happen in the earlier phase of the [`InclusionInherent`](../runtime/inclusioninherent.md) module where new availability bitfields and availability timeouts are processed. This is what will eventually define whether a candidate can be backed within a specific relay-chain block.

 > TODO: determine if balance/fees are even needed here.
diff --git a/polkadot/roadmap/implementors-guide/src/types/overseer-protocol.md b/polkadot/roadmap/implementors-guide/src/types/overseer-protocol.md
index f00583e3d9..f9363fb0e1 100644
--- a/polkadot/roadmap/implementors-guide/src/types/overseer-protocol.md
+++ b/polkadot/roadmap/implementors-guide/src/types/overseer-protocol.md
@@ -24,7 +24,7 @@ Either way, there will be some top-level type encapsulating messages from the ov

 ## All Messages

-> TODO [now]
+> TODO (now)

 ## Availability Distribution Message
@@ -100,7 +100,7 @@ enum CandidateBackingMessage {

 ## Candidate Selection Message

-These messages are sent to the [Candidate Selection subsystem](../node/backing/candidate-selection.html) as a means of providing feedback on its outputs.
+These messages are sent to the [Candidate Selection subsystem](../node/backing/candidate-selection.md) as a means of providing feedback on its outputs.

 ```rust
 enum CandidateSelectionMessage {
@@ -128,7 +128,7 @@ enum NetworkBridgeMessage {

 ## Network Bridge Update

-These updates are posted from the [Network Bridge Subsystem](../node/utility/network-bridge.html) to other subsystems based on registered listeners.
+These updates are posted from the [Network Bridge Subsystem](../node/utility/network-bridge.md) to other subsystems based on registered listeners.

 ```rust
 struct View(Vec); // Up to `N` (5?) chain heads.
@@ -245,7 +245,7 @@ enum StatementDistributionMessage {

 ## Validation Request Type

-Various modules request that the [Candidate Validation subsystem](../node/utility/candidate-validation.html) validate a block with this message
+Various modules request that the [Candidate Validation subsystem](../node/utility/candidate-validation.md) validate a block with this message

 ```rust
 enum CandidateValidationMessage {