Asynchronous Backing MegaPR (#5022)

* inclusion emulator logic for asynchronous backing (#4790)

* initial stab at candidate_context

* fmt

* docs & more TODOs

* some cleanups

* reframe as inclusion_emulator

* documentations yes

* update types

* add constraint modifications

* watermark

* produce modifications

* v2 primitives: re-export all v1 for consistency

* vstaging primitives

* emulator constraints: handle code upgrades

* produce outbound HRMP modifications

* stack.

* method for applying modifications

* method just for sanity-checking modifications

* fragments produce modifications, not prospectives

* make linear

* add some TODOs

* remove stacking; handle code upgrades

* take `fragment` private

* reintroduce stacking.

* fragment constructor

* add TODO

* allow validating fragments against future constraints

* docs

* relay-parent number and min code size checks

* check code upgrade restriction

* check max hrmp per candidate

* fmt

* remove GoAhead logic because it wasn't helpful

* docs on code upgrade failure

* test stacking

* test modifications against constraints

* fmt

* test fragments

* descending or duplicate test

* fmt

* remove unused imports in vstaging

* wrong primitives

* spellcheck
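
Taken together, the commits above center on two pieces: the `Constraints` valid at a relay parent, and the `ConstraintModifications` a fragment produces, which are sanity-checked and then stacked to validate child fragments. A minimal sketch of that shape, assuming hypothetical simplified types (the real structures also carry HRMP, DMP, and code-upgrade state):

```rust
// Hypothetical, simplified stand-ins for the inclusion-emulator types.
#[derive(Clone)]
struct Constraints {
    /// How many upward messages the fragment chain may still send.
    ump_remaining: u32,
    /// Minimum relay-parent number candidates may anchor to.
    min_relay_parent_number: u32,
}

#[derive(Default)]
struct ConstraintModifications {
    ump_sent: u32,
}

impl Constraints {
    /// Check a fragment's modifications and apply them, producing the
    /// constraints its children are validated against ("stacking").
    fn apply(&self, m: &ConstraintModifications) -> Result<Constraints, &'static str> {
        let ump_remaining =
            self.ump_remaining.checked_sub(m.ump_sent).ok_or("UMP limit exceeded")?;
        Ok(Constraints { ump_remaining, ..self.clone() })
    }
}

fn main() {
    let base = Constraints { ump_remaining: 10, min_relay_parent_number: 5 };
    let fragment = ConstraintModifications { ump_sent: 3 };
    let stacked = base.apply(&fragment).expect("within limits");
    assert_eq!(stacked.ump_remaining, 7);
}
```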

* Runtime changes for Asynchronous Backing (#4786)

* inclusion: utility for allowed relay-parents

* inclusion: use prev number instead of prev hash

* track most recent context of paras

* inclusion: accept previous relay-parents

* update dmp advancement rule for async backing

* fmt

* add a comment about validation outputs

* clean up a couple of TODOs

* weights

* fix weights

* fmt

* Resolve dmp todo

* Restore inclusion tests

* Restore paras_inherent tests

* MostRecentContext test

* Benchmark for new paras dispatchable

* Prepare check_validation_outputs for upgrade

* cargo run --quiet --profile=production  --features=runtime-benchmarks -- benchmark --chain=kusama-dev --steps=50 --repeat=20 --pallet=runtime_parachains::paras --extrinsic=* --execution=wasm --wasm-execution=compiled --heap-pages=4096 --header=./file_header.txt --output=./runtime/kusama/src/weights/runtime_parachains_paras.rs

* cargo run --quiet --profile=production  --features=runtime-benchmarks -- benchmark --chain=westend-dev --steps=50 --repeat=20 --pallet=runtime_parachains::paras --extrinsic=* --execution=wasm --wasm-execution=compiled --heap-pages=4096 --header=./file_header.txt --output=./runtime/westend/src/weights/runtime_parachains_paras.rs

* cargo run --quiet --profile=production  --features=runtime-benchmarks -- benchmark --chain=polkadot-dev --steps=50 --repeat=20 --pallet=runtime_parachains::paras --extrinsic=* --execution=wasm --wasm-execution=compiled --heap-pages=4096 --header=./file_header.txt --output=./runtime/polkadot/src/weights/runtime_parachains_paras.rs

* cargo run --quiet --profile=production  --features=runtime-benchmarks -- benchmark --chain=rococo-dev --steps=50 --repeat=20 --pallet=runtime_parachains::paras --extrinsic=* --execution=wasm --wasm-execution=compiled --heap-pages=4096 --header=./file_header.txt --output=./runtime/rococo/src/weights/runtime_parachains_paras.rs

* Implementers guide changes

* More tests for allowed relay parents

* Add a github issue link

* Compute group index based on relay parent

* Storage migration

* Move allowed parents tracker to shared

* Compile error

* Get group assigned to core at the next block

* Test group assignment

* fmt

* Error instead of panic

* Update guide

* Extend doc-comment

* Update runtime/parachains/src/shared.rs

Co-authored-by: Robert Habermeier <rphmeier@gmail.com>

Co-authored-by: Chris Sosnin <chris125_@live.com>
Co-authored-by: Parity Bot <admin@parity.io>
Co-authored-by: Chris Sosnin <48099298+slumber@users.noreply.github.com>
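
The heart of these runtime changes is a bounded record of recent relay parents that new candidates are allowed to anchor to. A hedged sketch with illustrative names and fields (the real tracker lives in `runtime/parachains/src/shared.rs`):

```rust
use std::collections::VecDeque;

// Illustrative stand-in for the allowed-relay-parents tracker.
struct AllowedRelayParents {
    /// Recent relay parents as (hash, block number), oldest first,
    /// bounded by the configured ancestry length.
    buffer: VecDeque<([u8; 32], u32)>,
    max_len: usize,
}

impl AllowedRelayParents {
    fn note_new_block(&mut self, hash: [u8; 32], number: u32) {
        if self.buffer.len() == self.max_len {
            self.buffer.pop_front();
        }
        self.buffer.push_back((hash, number));
    }

    /// A candidate may be backed only if its relay parent is still in
    /// the window; returns its block number if so.
    fn acquire_info(&self, relay_parent: [u8; 32]) -> Option<u32> {
        self.buffer.iter().find(|(h, _)| *h == relay_parent).map(|(_, n)| *n)
    }
}

fn main() {
    let mut tracker = AllowedRelayParents { buffer: VecDeque::new(), max_len: 3 };
    for n in 0..5u32 {
        tracker.note_new_block([n as u8; 32], n);
    }
    assert!(tracker.acquire_info([1u8; 32]).is_none()); // fell out of the window
    assert_eq!(tracker.acquire_info([3u8; 32]), Some(3));
}
```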

* Prospective Parachains Subsystem (#4913)

* docs and skeleton

* subsystem skeleton

* main loop

* fragment tree basics & fmt

* begin fragment trees & view

* flesh out more of view update logic

* further flesh out update logic

* some refcount functions for fragment trees

* add fatal/non-fatal errors

* use non-fatal results

* clear up some TODOs

* ideal format for scheduling info

* add a bunch of TODOs

* some more fluff

* extract fragment graph to submodule

* begin fragment graph API

* trees, not graphs

* improve docs

* scope and constructor for trees

* add some test TODOs

* limit max ancestors and store constraints

* constructor

* constraints: fix bug in HRMP watermarks

* fragment tree population logic

* set::retain

* extract population logic

* implement add_and_populate

* fmt

* add some TODOs in tests

* implement child-selection

* strip out old stuff based on wrong assumptions

* use fatality

* implement pruning

* remove unused ancestor constraints

* fragment tree instantiation

* remove outdated comment

* add message/request types and skeleton for handling

* fmt

* implement handle_candidate_seconded

* candidate storage: handle backed

* implement handle_candidate_backed

* implement answer_get_backable_candidate

* remove async where not needed

* implement fetch_ancestry

* add logic for run_iteration

* add some docs

* remove global allow(unused), fix warnings

* make spellcheck happy (despite English)

* fmt

* bump Cargo.lock

* replace tracing with gum

* introduce PopulateFrom trait

* implement GetHypotheticalDepths

* revise docs slightly

* first fragment tree scope test

* more scope tests

* test add_candidate

* fmt

* test retain

* refactor test code

* test populate is recursive

* test contiguity of depth 0 is maintained

* add_and_populate tests

* cycle tests

* remove PopulateFrom trait

* fmt

* test hypothetical depths (non-recursive)

* have CandidateSeconded return membership

* tree membership requests

* Add a ProspectiveParachainsSubsystem struct

* add a staging API for base constraints

* add a `From` impl

* add runtime API for staging_validity_constraints

* implement fetch_base_constraints

* implement `fetch_upcoming_paras`

* remove reconstruction of candidate receipt; no obvious use case

* fmt

* export message to broader module

* remove last TODO

* correctly export

* fix compilation and add GetMinimumRelayParent request

* make provisioner into a real subsystem with proper message bounds

* fmt

* fix ChannelsOut in overseer test

* fix overseer tests

* fix again

* fmt
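
The fragment-tree commits reduce to one idea: under each active leaf, candidates chain by head-data, and queries such as `GetHypotheticalDepths` ask at which depths a new candidate could attach. An illustrative-only sketch (not the subsystem's real data structures):

```rust
use std::collections::HashMap;

#[derive(Clone)]
struct Candidate {
    parent_head: u64, // hash of the head-data this candidate builds on
    output_head: u64, // hash of the head-data it produces
}

struct FragmentTree {
    /// parent head-data hash -> candidates building on it.
    by_parent: HashMap<u64, Vec<Candidate>>,
    /// Bounds traversal depth; also guards against head-data cycles.
    max_depth: usize,
}

impl FragmentTree {
    /// Depths (0 = directly on the relay-chain base) at which `c` could
    /// be introduced.
    fn hypothetical_depths(&self, base_head: u64, c: &Candidate) -> Vec<usize> {
        let mut depths = Vec::new();
        let mut frontier = vec![(base_head, 0usize)];
        while let Some((head, depth)) = frontier.pop() {
            if c.parent_head == head {
                depths.push(depth);
            }
            if depth < self.max_depth {
                for child in self.by_parent.get(&head).into_iter().flatten() {
                    frontier.push((child.output_head, depth + 1));
                }
            }
        }
        depths
    }
}

fn main() {
    let a = Candidate { parent_head: 1, output_head: 2 };
    let tree = FragmentTree { by_parent: HashMap::from([(1, vec![a])]), max_depth: 4 };
    let b = Candidate { parent_head: 2, output_head: 3 };
    assert_eq!(tree.hypothetical_depths(1, &b), vec![1]);
}
```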

* Integrate prospective parachains subsystem into backing: Part 1 (#5557)

* BEGIN ASYNC candidate-backing CHANGES

* rename & document modes

* answer prospective validation data requests

* GetMinimumRelayParents request is now plural

* implement an implicit view utility for backing subsystems

* implicit-view: get allowed relay parents

* refactorings and improvements to implicit view

* add some TODOs for tests

* split implicit view updates into 2 functions

* backing: define State to prepare for functional refactor

* add some docs

* backing: implement bones of new leaf activation logic

* backing: create per-relay-parent-states

* use new handle_active_leaves_update

* begin extracting logic from CandidateBackingJob

* mostly extract statement import from job logic

* handle statement imports outside of job logic

* do some TODO planning for prospective parachains integration

* finish rewriting backing subsystem in functional style

* add prospective parachains mode to relay parent entries

* fmt

* add a RejectedByProspectiveParachains error

* notify prospective parachains of seconded and backed candidates

* always validate candidates exhaustively in backing.

* return persisted_validation_data from validation

* handle rejections by prospective parachains

* implement seconding sanity check

* invoke validate_and_second

* Alter statement table to allow multiple seconded messages per validator

* refactor backing to have statements carry PVD

* clean up all warnings

* Add tests for implicit view

* Improve doc comments

* Prospective parachains mode based on Runtime API version

* Add a TODO

* Rework seconding_sanity_check

* Iterate over responses

* Update backing tests

* collator-protocol: load PVD from runtime

* Fix validator side tests

* Update statement-distribution to fetch PVD

* Fix statement-distribution tests

* Backing tests with prospective paras #1

* fix per_relay_parent pruning in backing

* Test multiple leaves

* Test seconding sanity check

* Import statement order

Before creating an entry in the `PerCandidateState` map,
wait for approval from the prospective parachains subsystem.

* Add a test for correct state updates

* Second multiple candidates per relay parent test

* Add backing tests with prospective paras

* Second more than one test without prospective paras

* Add a test for prospective para blocks

* Update malus

* typos

Co-authored-by: Chris Sosnin <chris125_@live.com>
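
The seconding sanity check above boils down to: only second a candidate that would land in the fragment tree of at least one active leaf. A minimal sketch, assuming a hypothetical membership oracle in place of the real prospective-parachains request:

```rust
// `is_member_at_leaf` stands in for a subsystem query per active leaf.
fn seconding_sanity_check(
    active_leaves: &[u64],
    is_member_at_leaf: impl Fn(u64) -> bool,
) -> bool {
    // A candidate outside every fragment tree can never be backed.
    active_leaves.iter().copied().any(is_member_at_leaf)
}

fn main() {
    let leaves = [1u64, 2, 3];
    assert!(seconding_sanity_check(&leaves, |leaf| leaf == 2));
    assert!(!seconding_sanity_check(&leaves, |_| false));
}
```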

* Track occupied depth in backing per parachain (#5778)

* provisioner: async backing changes (#5711)

* Provisioner changes for async backing

* Select candidates based on prospective paras mode

* Revert naming

* Update tests

* Update TODO comment

* review

* provisioner: async backing changes (#5711)

* Provisioner changes for async backing

* Select candidates based on prospective paras mode

* Revert naming

* Update tests

* Update TODO comment

* review

* fmt

* Network bridge changes for asynchronous backing + update subsystems to handle versioned packets (#5991)

* BEGIN STATEMENT DISTRIBUTION WORK

create a vstaging network protocol which is the same as v1

* mostly make network bridge amenable to vstaging

* network-bridge: fully adapt to vstaging

* add some TODOs for tests

* fix fallout in bitfield-distribution

* bitfield distribution tests + TODOs

* fix fallout in gossip-support

* collator-protocol: fix message fallout

* collator-protocol: load PVD from runtime

* add TODO for vstaging tests

* make things compile

* set used network protocol version using a feature

* fmt

* get approval-distribution building

* fix approval-distribution tests

* spellcheck

* nits

* approval distribution net protocol test

* bitfield distribution net protocol test

* Revert "collator-protocol: fix message fallout"

This reverts commit 07cc887303e16c6b3843ecb25cdc7cc2080e2ed1.

* Network bridge tests

Co-authored-by: Chris Sosnin <chris125_@live.com>
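
The versioning work follows one shape throughout: wire messages become an enum over protocol versions, with `vstaging` starting as a clone of v1 so subsystems can migrate incrementally. A hedged sketch (enum and handlers are illustrative, not the real `polkadot-node-network-protocol` types):

```rust
enum VersionedMessage {
    V1(Vec<u8>),
    VStaging(Vec<u8>),
}

fn dispatch(msg: VersionedMessage) {
    match msg {
        VersionedMessage::V1(bytes) => handle_v1(bytes),
        VersionedMessage::VStaging(bytes) => handle_vstaging(bytes),
    }
}

fn handle_v1(_bytes: Vec<u8>) { /* legacy path */ }
fn handle_vstaging(_bytes: Vec<u8>) { /* asynchronous-backing path */ }

fn main() {
    dispatch(VersionedMessage::V1(vec![]));
    dispatch(VersionedMessage::VStaging(vec![]));
}
```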

* remove max_pov_size requirement from prospective pvd request (#6014)

* remove max_pov_size requirement from prospective pvd request

* fmt

* Extract legacy statement distribution to its own module (#6026)

* add compatibility type to v2 statement distribution message

* warning cleanup

* handle compatibility layer for v2

* clean up an unimplemented!() block

* circulate statements based on version

* extract legacy v1 code into separate module

* remove unimplemented

* clean up naming of from_requester/responder

* remove TODOs

* have backing share seconded statements with PVD

* fmt

* fix warning

* Quick fix unused warning for not yet implemented/used staging messages.

* Fix network bridge test

* Fix wrong merge.

We now have 23 subsystems (network bridge split + prospective
parachains)

Co-authored-by: Robert Klotzner <robert.klotzner@gmx.at>

* Version 3 is already live.

* Fix tests (#6055)

* Fix backing tests

* Fix warnings.

* fmt

* collator-protocol: asynchronous backing changes (#5740)

* Draft collator side changes

* Start working on collations management

* Handle peer's view change

* Versioning on advertising

* Versioned collation fetching request

* Handle versioned messages

* Improve docs for collation requests

* Add spans

* Add request receiver to overseer

* Fix collator side tests

* Extract relay parent mode to lib

* Validator side draft

* Add more checks for advertisement

* Request pvd based on async backing mode

* review

* Validator side improvements

* Make old tests green

* More fixes

* Collator side tests draft

* Send collation test

* fmt

* Collator side network protocol versioning

* cleanup

* merge artifacts

* Validator side net protocol versioning

* Remove fragment tree membership request

* Resolve todo

* Collator side core state test

* Improve net protocol compatibility

* Validator side tests

* more improvements

* style fixes

* downgrade log

* Track implicit assignments

* Limit the number of seconded candidates per para

* Add a sanity check

* Handle fetched candidate

* fix tests

* Retry fetch

* Guard against dequeueing while already fetching

* Reintegrate connection management

* Timeout on advertisements

* fmt

* spellcheck

* update tests after merge
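
A key validator-side change above is capping the number of candidates seconded per para at a relay parent. A hedged sketch with illustrative types (in the real code the limit derives from the prospective-parachains depth):

```rust
use std::collections::HashMap;

struct SecondedCounts {
    per_para: HashMap<u32, usize>, // para id -> candidates seconded
    limit: usize,
}

impl SecondedCounts {
    /// True if another collation for `para` may still be fetched/seconded.
    fn can_second(&self, para: u32) -> bool {
        self.per_para.get(&para).copied().unwrap_or(0) < self.limit
    }

    fn note_seconded(&mut self, para: u32) {
        *self.per_para.entry(para).or_insert(0) += 1;
    }
}

fn main() {
    let mut counts = SecondedCounts { per_para: HashMap::new(), limit: 2 };
    assert!(counts.can_second(100));
    counts.note_seconded(100);
    counts.note_seconded(100);
    assert!(!counts.can_second(100));
}
```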

* validator assignment fixes for backing and collator protocol (#6158)

* Rename depth->ancestry len in tests

* Refactor group assignments

* Remove implicit assignments

* backing: consider occupied core assignments

* Track a single para on validator side

* Refactor prospective parachains mode request (#6179)

* Extract prospective parachains mode into util

* Skip activations depending on the mode

* backing: don't send backed candidate to provisioner (#6185)

* backing: introduce `CanSecond` request for advertisements filtering (#6225)

* Drop BoundToRelayParent

* draft changes

* fix backing tests

* Fix genesis ancestry

* Fix validator side tests

* more tests

* cargo generate-lockfile

* Implement `StagingValidityConstraints` Runtime API method (#6258)

* Implement StagingValidityConstraints

* spellcheck

* fix ump params

* Update hrmp comment

* Introduce ump per candidate limit

* hypothetical earliest block

* refactor primitives usage

* hypothetical earliest block number test

* fix build

* Prepare the Runtime for asynchronous backing upgrade (#6287)

* Introduce async backing params to runtime config

* fix cumulus config

* use config

* finish runtimes

* Introduce new staging API

* Update collator protocol

* Update provisioner

* Update prospective parachains

* Update backing

* Move async backing params lower in the config

* make naming consistent

* misc
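
The new configuration is essentially two parameters gating the whole feature. A sketch of their shape, following the upstream naming as best understood (verify against the actual `AsyncBackingParams` primitive):

```rust
#[derive(Clone, Copy, Debug)]
pub struct AsyncBackingParams {
    /// How many candidates may be built on top of a backed, not-yet-included
    /// candidate (0 effectively restores synchronous backing).
    pub max_candidate_depth: u32,
    /// How far behind the head a candidate's relay parent may be.
    pub allowed_ancestry_len: u32,
}

fn main() {
    // Synchronous-backing-compatible values: no depth, no extra ancestry.
    let legacy = AsyncBackingParams { max_candidate_depth: 0, allowed_ancestry_len: 0 };
    println!("{legacy:?}");
}
```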

* Use real prospective parachains subsystem (#6407)

* Backport `HypotheticalFrontier` into the feature branch (#6605)

* implement more general HypotheticalFrontier

* fmt

* drop unneeded request

Co-authored-by: Robert Habermeier <rphmeier@gmail.com>

* Resolve todo about legacy leaf activation (#6447)

* fix bug/warning in handling membership answers

* Remove `HypotheticalDepthRequest` in favor of `HypotheticalFrontierRequest` (#6521)

* Remove `HypotheticalDepthRequest` for `HypotheticalFrontierRequest`

* Update tests

* Fix (removed wrong docstring)

* Fix can_second request

* Patch some dead_code errors

---------

Co-authored-by: Chris Sosnin <chris125_@live.com>

* Async Backing: Send Statement Distribution "Backed" messages (#6634)

* Backing: Send Statement Distribution "Backed" messages

Closes #6590.

**TODO:**

- [ ] Adjust tests

* Fix compile errors

* (Mostly) fix tests

* Fix comment

* Fix test and compile error

* Test that `StatementDistributionMessage::Backed` is sent

* Fix compile error

* Fix some clippy errors

* Add prospective parachains subsystem tests (#6454)

* Add prospective parachains subsystem test

* Add `should_do_no_work_if_async_backing_disabled_for_leaf` test

* Implement `activate_leaf` helper, up to getting ancestry

* Finish implementing `activate_leaf`

* Small refactor in `activate_leaf`

* Get `CandidateSeconded` working

* Finish `send_candidate_and_check_if_found` test

* Refactor; send more leaves & candidates

* Refactor test

* Implement `check_candidate_parent_leaving_view` test

* Start work on `check_candidate_on_multiple_forks` test

* Don’t associate specific parachains with leaf

* Finish `correctly_updates_leaves` test

* Fix cycle due to reused head data

* Fix `check_backable_query` test

* Fix `check_candidate_on_multiple_forks` test

* Add `check_depth_and_pvd_queries` test

* Address review comments

* Remove TODO

* add a new index for output head data to candidate storage

* Resolve test TODOs

* Fix compile errors

* test candidate storage pruning, make sure new index is cleaned up

---------

Co-authored-by: Robert Habermeier <rphmeier@gmail.com>

* Node-side metrics for asynchronous backing (#6549)

* Add metrics for `prune_view_candidate_storage`

* Add metrics for `request_unblocked_collations`

* Fix docstring

* Couple fixes from review comments

* Fix `check_depth_query` test

* inclusion-emulator: mirror advancement rule check (#6361)

* inclusion-emulator: mirror advancement rule check

* fix build

* prospective-parachains: introduce `backed_in_path_only` flag for advertisements (#6649)

* Introduce `backed_in_path_only` flag for depth request

* fmt

* update doc comment

* fmt

* Add async-backing zombienet tests (#6314)

* Async backing: impl guide for statement distribution (#6738)

Co-authored-by: Bradley Olson <34992650+BradleyOlson64@users.noreply.github.com>
Co-authored-by: alexgparity <115470171+alexgparity@users.noreply.github.com>

* Asynchronous backing statement distribution: Take III (#5999)

* add notification types for v2 statement-distribution

* improve protocol docs

* add empty vstaging module

* fmt

* add backed candidate packet request types

* start putting down structure of new logic

* handle activated leaf

* some sanity-checking on outbound statements

* fmt

* update vstaging share to use statements with PVD

* tiny refactor, candidate_hash location

* import local statements

* refactor statement import

* first stab at broadcast logic

* fmt

* fill out some TODOs

* start on handling incoming

* split off session info into separate map

* start in on a knowledge tracker

* address some grumbles

* format

* missed comment

* some docs for direct

* add note on slashing

* amend

* simplify 'direct' code

* finish up the 'direct' logic

* add a bunch of tests for the direct-in-group logic

* rename 'direct' to 'cluster', begin a candidate_entry module

* distill candidate_entry

* start in on a statement-store module

* some utilities for the statement store

* rewrite 'send_statement_direct' using new tools

* filter sending logic on peers which have the relay-parent in their view.

* some more logic for handling incoming statements

* req/res: BackedCandidatePacket -> AttestedCandidate + tweaks

* add a `validated_in_group` bitfield to BackedCandidateInventory

* BackedCandidateInventory -> Manifest

* start in on requester module

* add outgoing request for attested candidate

* add a priority mechanism for requester

* some request dispatch logic

* add seconded mask to tagged-request

* amend manifest to hold group index

* handle errors and set up scaffold for response validation

* validate attested candidate responses

* requester -> requests

* add some utilities for manipulating requests

* begin integrating requester

* start grid module

* tiny

* refactor grid topology to expose more info to subsystems

* fix grid_topology test

* fix overseer test

* implement topology group-based view construction logic

* fmt

* flesh out grid slightly more

* add indexed groups utility

* integrate Groups into per-session info

* refactor statement store to borrow Groups

* implement manifest knowledge utility

* add a test for topology setup

* don't send to group members

* test for conflicting manifests

* manifest knowledge tests

* fmt

* rename field

* garbage collection for grid tracker

* routines for finding correct/incorrect advertisers

* add manifest import logic

* tweak naming

* more tests for manifest import

* add comment

* rework candidates into a view-wide tracker

* fmt

* start writing boilerplate for grid sending

* fmt

* some more group boilerplate

* refactor handling of topology and authority IDs

* fmt

* send statements directly to grid peers where possible

* send to cluster only if statement belongs to cluster

* improve handling of cluster statements

* handle incoming statements along the grid

* API for introduction of candidates into the tree

* backing: use new prospective parachains API

* fmt prospective parachains changes

* fmt statement-dist

* fix condition

* get ready for tracking importable candidates

* prospective parachains: add Cow logic

* incomplete and complete hypothetical candidates

* remove keep_if_unneeded

* fmt

* implement more general HypotheticalFrontier

* fmt, cleanup

* add a by_parent_hash index to candidate tracker

* more framework for future code

* utilities for getting all hypothetical candidates for frontier

* track origin in statement store

* fmt

* requests should return peer

* apply post-confirmation reckoning

* flesh out import/announce/circulate logic on new statements

* adjust

* adjust TODO comment

* fix backing tests

* update statement-distribution to use new indexedvec

* fmt

* query hypothetical candidates

* implement `note_importable_under`

* extract common utility of fragment tree updates

* add a helper function for getting statements unknown by backing

* import fresh statements to backing

* send announcements and acknowledgements over grid

* provide freshly importable statements

also avoid tracking backed candidates in statement distribution

* do not issue requests on newly importable candidates

* add TODO for later when confirming candidate

* write a routine for handling backed candidate notifications

* simplify grid substantially

* add some test TODOs

* handle confirmed candidates & grid announcements

* finish implementing manifest handling, including follow up statements

* send follow-up statements when acknowledging freshly backed

* fmt

* handle incoming acknowledgements

* a little DRYing

* wire up network messages to handlers

* fmt

* some skeleton code for peer view update handling

* more peer view skeleton stuff

* Fix async backing statement distribution tests (#6621)

* Fix compile errors in tests

* Cargo fmt

* Resolve some todos in async backing statement-distribution branch (#6482)

* Implement `remove_by_relay_parent`

* Extract `minimum_votes` to shared primitives.

* Add `can_send_statements_received_with_prejudice` test

* Fix test

* Update docstrings

* Cargo fmt

* Fix compile error

* Fix compile errors in tests

* Cargo fmt

* Add module docs; write `test_priority_ordering` (first draft)

* Fix `test_priority_ordering`

* Move `insert_or_update_priority`: `Drop` -> `set_cluster_priority`

* Address review comments

* Remove `Entry::get_mut`

* fix test compilation

* add a TODO for a test

* clean up a couple of TODOs

* implement sending pending cluster statements

* refactor utility function for sending acknowledgement and statements

* mostly implement catching peers up via grid

* Fix clippy error

* alter grid to track all pending statements

* fix more TODOs and format

* tweak a TODO in requests

* some logic for dispatching requests

* fmt

* skeleton for response receiving

* Async backing statement distribution: cluster tests (#6678)

* Add `pending_statements_set_when_receiving_fresh_statements`

* Add `pending_statements_updated_when_sending_statements` test

* fix up

* fmt

* update TODO

* rework seconded mask in requests

* change doc

* change unhandledresponse not to borrow request manager

* only accept responses sufficient to back

* finish implementing response handling

* extract statement filter to protocol crate

* rework requests: use statement filter in network protocol

* dispatch cluster requests correctly

* rework cluster statement sending

* implement request answering

* fmt

* only send confirmed candidate statement messages on unified relay-parent

* Fix Tests In Statement Distribution Branch

* Async Backing: Integrate `vstaging` of statement distribution into `lib.rs` (#6715)

* Integrate `handle_active_leaves_update`

* Integrate `share_local_statement`/`handle_backed_candidate_message`

* Start hooking up request/response flow

* Finish hooking up request/response flow

* Limit number of parallel requests in responder

* Fix test compilation errors

* Fix missing check for prospective parachains mode

* Fix some more compile errors

* clean up some review comments

* clean up warnings

* Async backing statement distribution: grid tests (#6673)

* Add `manifest_import_returns_ok_true` test

* cargo fmt

* Add pending_communication_receiving_manifest_on_confirmed_candidate

* Add `senders_can_provide_manifests_in_acknowledgement` test

* Add a couple of tests for pending statements

* Add `pending_statements_cleared_when_sending` test

* Add `pending_statements_respect_remote_knowledge` test

* Refactor group creation in tests

* Clarify docs

* Address some review comments

* Make some clarifications

* Fix post-merge errors

* Clarify test `senders_can_provide_manifests_in_acknowledgement`

* Try writing `pending_statements_are_updated_after_manifest_exchange`

* Document "seconding limit" and `reject_overflowing_manifests` test

* Test that seconding counts are not updated for validators on error

* Fix tests

* Fix manifest exchange test

* Add more tests in `requests.rs` (#6707)

This resolves remaining TODOs in this file.

* remove outdated inventory terminology

* Async backing statement distribution: `Candidates` tests (#6658)

* Async Backing: Fix clippy errors in statement distribution branch (#6720)

* Integrate `handle_active_leaves_update`

* Integrate `share_local_statement`/`handle_backed_candidate_message`

* Start hooking up request/response flow

* Finish hooking up request/response flow

* Limit number of parallel requests in responder

* Fix test compilation errors

* Fix missing check for prospective parachains mode

* Fix some more compile errors

* Async Backing: Fix clippy errors in statement distribution branch

* Fix some more clippy lints

* add tests module

* fix warnings in existing tests

* create basic test harness

* create a test state struct

* fmt

* create empty cluster & grid modules for tests

* some TODOs for cluster test suite

* describe test-suite for grid logic

* describe request test suite

* fix seconding-limit bug

* Remove extraneous `pub`

This somehow made it into my clippy PR.

* Fix some test compile warnings

* Remove some unneeded `allow`s

* adapt some new test helpers from Marcin

* add helper for activating a gossip topology

* add utility for signing statements

* helpers for connecting/disconnecting peers

* round out network utilities

* fmt

* fix bug in initializing validator-meta

* fix compilation

* implement first cluster test

* TODOs for incoming request tests

* Remove unneeded `make_committed_candidate` helper

* fmt

* some more tests for cluster

* add a TODO about grid senders

* integrate inbound req/res into test harness

* polish off initial cluster test suite

* keep introduce candidate request

* fix tests after introduce candidate request

* fmt

* Add grid protocol to module docs

* Fix comments

* Test `backed_in_path_only: true`

* Update node/network/protocol/src/lib.rs

Co-authored-by: Chris Sosnin <48099298+slumber@users.noreply.github.com>

* Update node/network/protocol/src/request_response/mod.rs

Co-authored-by: Chris Sosnin <48099298+slumber@users.noreply.github.com>

* Mark receiver with `vstaging`

* validate grid senders based on manifest kind

* fix mask_seconded/valid

* fix unwanted-mask check

* fix build

* resolve todo on leaf mode

* Unify protocol naming to vstaging

* fmt, fix grid test after topology change

* typo

Co-authored-by: Chris Sosnin <48099298+slumber@users.noreply.github.com>

* address review

* adjust comment, make easier to understand

* Fix typo

---------

Co-authored-by: Marcin S <marcin@bytedude.com>
Co-authored-by: Marcin S <marcin@realemail.net>
Co-authored-by: Chris Sosnin <48099298+slumber@users.noreply.github.com>
Co-authored-by: Chris Sosnin <chris125_@live.com>
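
A recurring piece above is the statement filter carried by attested-candidate requests: bitfields over the backing group recording which `Seconded`/`Valid` statements the requester already has, so responders only send what is missing. A hedged sketch using the `bitvec` crate (field names follow the commits but are not authoritative):

```rust
use bitvec::prelude::*;

struct StatementFilter {
    /// Group validators we already hold a `Seconded` statement from.
    seconded_in_group: BitVec<u8, Lsb0>,
    /// Group validators we already hold a `Valid` statement from.
    validated_in_group: BitVec<u8, Lsb0>,
}

impl StatementFilter {
    fn blank(group_size: usize) -> Self {
        StatementFilter {
            seconded_in_group: bitvec![u8, Lsb0; 0; group_size],
            validated_in_group: bitvec![u8, Lsb0; 0; group_size],
        }
    }
}

fn main() {
    // Sent as a mask with the request: "don't resend what I already have".
    let mut have = StatementFilter::blank(5);
    have.seconded_in_group.set(2, true);
    assert!(have.seconded_in_group[2]);
    assert!(!have.validated_in_group.any());
}
```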

* miscellaneous fixes to make asynchronous backing work (#6791)

* propagate network-protocol-staging feature

* add feature to adder-collator as well

* allow collation-generation of occupied cores

* prospective parachains: special treatment for pending availability candidates

* runtime: fetch candidates pending availability

* lazily construct PVD for pending candidates

* fix fallout in prospective parachains hypothetical/select_child

* runtime: enact candidates when creating paras-inherent

* make tests compile

* test pending availability in the scope

* add prospective parachains test

* fix validity constraints leftovers

* drop prints

* Fix typos

---------

Co-authored-by: Chris Sosnin <chris125_@live.com>
Co-authored-by: Marcin S <marcin@realemail.net>

* Remove restart from test (#6840)

* Async Backing: Statement Distribution Tests (#6755)

* start on handling incoming

* split off session info into separate map

* start in on a knowledge tracker

* address some grumbles

* format

* missed comment

* some docs for direct

* add note on slashing

* amend

* simplify 'direct' code

* finish up the 'direct' logic

* add a bunch of tests for the direct-in-group logic

* rename 'direct' to 'cluster', begin a candidate_entry module

* distill candidate_entry

* start in on a statement-store module

* some utilities for the statement store

* rewrite 'send_statement_direct' using new tools

* filter sending logic on peers which have the relay-parent in their view.

* some more logic for handling incoming statements

* req/res: BackedCandidatePacket -> AttestedCandidate + tweaks

* add a `validated_in_group` bitfield to BackedCandidateInventory

* BackedCandidateInventory -> Manifest

* start in on requester module

* add outgoing request for attested candidate

* add a priority mechanism for requester

* some request dispatch logic

* add seconded mask to tagged-request

* amend manifest to hold group index

* handle errors and set up scaffold for response validation

* validate attested candidate responses

* requester -> requests

* add some utilities for manipulating requests

* begin integrating requester

* start grid module

* tiny

* refactor grid topology to expose more info to subsystems

* fix grid_topology test

* fix overseer test

* implement topology group-based view construction logic

* fmt

* flesh out grid slightly more

* add indexed groups utility

* integrate Groups into per-session info

* refactor statement store to borrow Groups

* implement manifest knowledge utility

* add a test for topology setup

* don't send to group members

* test for conflicting manifests

* manifest knowledge tests

* fmt

* rename field

* garbage collection for grid tracker

* routines for finding correct/incorrect advertisers

* add manifest import logic

* tweak naming

* more tests for manifest import

* add comment

* rework candidates into a view-wide tracker

* fmt

* start writing boilerplate for grid sending

* fmt

* some more group boilerplate

* refactor handling of topology and authority IDs

* fmt

* send statements directly to grid peers where possible

* send to cluster only if statement belongs to cluster

* improve handling of cluster statements

* handle incoming statements along the grid

* API for introduction of candidates into the tree

* backing: use new prospective parachains API

* fmt prospective parachains changes

* fmt statement-dist

* fix condition

* get ready for tracking importable candidates

* prospective parachains: add Cow logic

* incomplete and complete hypothetical candidates

* remove keep_if_unneeded

* fmt

* implement more general HypotheticalFrontier

* fmt, cleanup

* add a by_parent_hash index to candidate tracker

* more framework for future code

* utilities for getting all hypothetical candidates for frontier

* track origin in statement store

* fmt

* requests should return peer

* apply post-confirmation reckoning

* flesh out import/announce/circulate logic on new statements

* adjust

* adjust TODO comment

* fix backing tests

* update statement-distribution to use new indexedvec

* fmt

* query hypothetical candidates

* implement `note_importable_under`

* extract common utility of fragment tree updates

* add a helper function for getting statements unknown by backing

* import fresh statements to backing

* send announcements and acknowledgements over grid

* provide freshly importable statements

also avoid tracking backed candidates in statement distribution

* do not issue requests on newly importable candidates

* add TODO for later when confirming candidate

* write a routine for handling backed candidate notifications

* simplify grid substantially

* add some test TODOs

* handle confirmed candidates & grid announcements

* finish implementing manifest handling, including follow up statements

* send follow-up statements when acknowledging freshly backed

* fmt

* handle incoming acknowledgements

* a little DRYing

* wire up network messages to handlers

* fmt

* some skeleton code for peer view update handling

* more peer view skeleton stuff

* Fix async backing statement distribution tests (#6621)

* Fix compile errors in tests

* Cargo fmt

* Resolve some todos in async backing statement-distribution branch (#6482)

* Implement `remove_by_relay_parent`

* Extract `minimum_votes` to shared primitives.

* Add `can_send_statements_received_with_prejudice` test

* Fix test

* Update docstrings

* Cargo fmt

* Fix compile error

* Fix compile errors in tests

* Cargo fmt

* Add module docs; write `test_priority_ordering` (first draft)

* Fix `test_priority_ordering`

* Move `insert_or_update_priority`: `Drop` -> `set_cluster_priority`

* Address review comments

* Remove `Entry::get_mut`

* fix test compilation

* add a TODO for a test

* clean up a couple of TODOs

* implement sending pending cluster statements

* refactor utility function for sending acknowledgement and statements

* mostly implement catching peers up via grid

* Fix clippy error

* alter grid to track all pending statements

* fix more TODOs and format

* tweak a TODO in requests

* some logic for dispatching requests

* fmt

* skeleton for response receiving

* Async backing statement distribution: cluster tests (#6678)

* Add `pending_statements_set_when_receiving_fresh_statements`

* Add `pending_statements_updated_when_sending_statements` test

* fix up

* fmt

* update TODO

* rework seconded mask in requests

* change doc

* change unhandledresponse not to borrow request manager

* only accept responses sufficient to back

* finish implementing response handling

* extract statement filter to protocol crate

* rework requests: use statement filter in network protocol

* dispatch cluster requests correctly

* rework cluster statement sending

* implement request answering

* fmt

* only send confirmed candidate statement messages on unified relay-parent

* Fix Tests In Statement Distribution Branch

* Async Backing: Integrate `vstaging` of statement distribution into `lib.rs` (#6715)

* Integrate `handle_active_leaves_update`

* Integrate `share_local_statement`/`handle_backed_candidate_message`

* Start hooking up request/response flow

* Finish hooking up request/response flow

* Limit number of parallel requests in responder

* Fix test compilation errors

* Fix missing check for prospective parachains mode

* Fix some more compile errors

* clean up some review comments

* clean up warnings

* Async backing statement distribution: grid tests (#6673)

* Add `manifest_import_returns_ok_true` test

* cargo fmt

* Add pending_communication_receiving_manifest_on_confirmed_candidate

* Add `senders_can_provide_manifests_in_acknowledgement` test

* Add a couple of tests for pending statements

* Add `pending_statements_cleared_when_sending` test

* Add `pending_statements_respect_remote_knowledge` test

* Refactor group creation in tests

* Clarify docs

* Address some review comments

* Make some clarifications

* Fix post-merge errors

* Clarify test `senders_can_provide_manifests_in_acknowledgement`

* Try writing `pending_statements_are_updated_after_manifest_exchange`

* Document "seconding limit" and `reject_overflowing_manifests` test

* Test that seconding counts are not updated for validators on error

* Fix tests

* Fix manifest exchange test

* Add more tests in `requests.rs` (#6707)

This resolves remaining TODOs in this file.

* remove outdated inventory terminology

* Async backing statement distribution: `Candidates` tests (#6658)

* Async Backing: Fix clippy errors in statement distribution branch (#6720)

* Integrate `handle_active_leaves_update`

* Integrate `share_local_statement`/`handle_backed_candidate_message`

* Start hooking up request/response flow

* Finish hooking up request/response flow

* Limit number of parallel requests in responder

* Fix test compilation errors

* Fix missing check for prospective parachains mode

* Fix some more compile errors

* Async Backing: Fix clippy errors in statement distribution branch

* Fix some more clippy lints

* add tests module

* fix warnings in existing tests

* create basic test harness

* create a test state struct

* fmt

* create empty cluster & grid modules for tests

* some TODOs for cluster test suite

* describe test-suite for grid logic

* describe request test suite

* fix seconding-limit bug

* Remove extraneous `pub`

This somehow made it into my clippy PR.

* Fix some test compile warnings

* Remove some unneeded `allow`s

* adapt some new test helpers from Marcin

* add helper for activating a gossip topology

* add utility for signing statements

* helpers for connecting/disconnecting peers

* round out network utilities

* fmt

* fix bug in initializing validator-meta

* fix compilation

* implement first cluster test

* TODOs for incoming request tests

* Remove unneeded `make_committed_candidate` helper

* fmt

* Hook up request sender

* Add `valid_statement_without_prior_seconded_is_ignored` test

* Fix `valid_statement_without_prior_seconded_is_ignored` test

* some more tests for cluster

* add a TODO about grid senders

* integrate inbound req/res into test harness

* polish off initial cluster test suite

* keep introduce candidate request

* fix tests after introduce candidate request

* fmt

* Add grid protocol to module docs

* Remove obsolete test

* Fix comments

* Test `backed_in_path_only: true`

* Update node/network/protocol/src/lib.rs

Co-authored-by: Chris Sosnin <48099298+slumber@users.noreply.github.com>

* Update node/network/protocol/src/request_response/mod.rs

Co-authored-by: Chris Sosnin <48099298+slumber@users.noreply.github.com>

* Mark receiver with `vstaging`

* First draft of `ensure_seconding_limit_is_respected` test

* validate grid senders based on manifest kind

* fix mask_seconded/valid

* fix unwanted-mask check

* fix build

* resolve todo on leaf mode

* Unify protocol naming to vstaging

* Fix `ensure_seconding_limit_is_respected` test

* Start `backed_candidate_leads_to_advertisement` test

* fmt, fix grid test after topology change

* Send Backed notification

* Finish `backed_candidate_leads_to_advertisement` test

* Finish `peer_reported_for_duplicate_statements` test

* Finish `received_advertisement_before_confirmation_leads_to_request`

* Add `advertisements_rejected_from_incorrect_peers` test

* Add `manifest_rejected_*` tests

* Add `manifest_rejected_when_group_does_not_match_para` test

* Add `local_node_sanity_checks_incoming_requests` test

* Add `local_node_respects_statement_mask` test

* Add tests where peer is reported for providing invalid signatures

* Add `cluster_peer_allowed_to_send_incomplete_statements` test

* Add `received_advertisement_after_backing_leads_to_acknowledgement`

* Add `received_advertisement_after_confirmation_before_backing` test

* peer_reported_for_advertisement_conflicting_with_confirmed_candidate

* Add `peer_reported_for_not_enough_statements` test

* Add `peer_reported_for_providing_statements_meant_to_be_masked_out`

* Add `additional_statements_are_shared_after_manifest_exchange`

* Add `grid_statements_imported_to_backing` test

* Add `relay_parent_entering_peer_view_leads_to_advertisement` test

* Add `advertisement_not_re_sent_when_peer_re_enters_view` test

* Update node/network/statement-distribution/src/vstaging/tests/grid.rs

Co-authored-by: asynchronous rob <rphmeier@gmail.com>

* Resolve TODOs, update test

* Address unused code

* Add check after every test for unhandled requests

* Refactor (`make_dummy_leaf` and `handle_sent_request`)

* Refactor (`make_dummy_topology`)

* Minor refactor

---------

Co-authored-by: Robert Habermeier <rphmeier@gmail.com>
Co-authored-by: Chris Sosnin <48099298+slumber@users.noreply.github.com>
Co-authored-by: Chris Sosnin <chris125_@live.com>

* Fix some clippy lints in tests

* Async backing: minor fixes (#6920)

* bitfield-distribution test

* implicit view tests

* Refactor parameters -> params

* scheduler: update storage migration (#6963)

* update scheduler migration

* Adjust weight to account for storage read

* Statement Distribution Guide Edits (#7025)

* Statement distribution guide edits

* Addressed Marcin's comments

* Add attested candidate request retry timeouts (#6833)

Fix async backing statement distribution tests (#6621)
Resolve some todos in async backing statement-distribution branch (#6482)
Fix clippy errors in statement distribution branch (#6720)

Co-authored-by: Chris Sosnin <48099298+slumber@users.noreply.github.com>
Co-authored-by: asynchronous rob <rphmeier@gmail.com>
Co-authored-by: Robert Habermeier <rphmeier@gmail.com>
Co-authored-by: Chris Sosnin <chris125_@live.com>
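
The retry change amounts to delaying re-dispatch of a failed attested-candidate request rather than retrying immediately. A minimal sketch with an illustrative constant (the real delay value may differ):

```rust
use std::time::{Duration, Instant};

const RETRY_DELAY: Duration = Duration::from_millis(500); // illustrative value

struct RequestEntry {
    next_retry: Option<Instant>,
}

impl RequestEntry {
    fn ready_to_dispatch(&self, now: Instant) -> bool {
        self.next_retry.map_or(true, |t| now >= t)
    }

    fn note_failed(&mut self, now: Instant) {
        self.next_retry = Some(now + RETRY_DELAY);
    }
}

fn main() {
    let now = Instant::now();
    let mut req = RequestEntry { next_retry: None };
    assert!(req.ready_to_dispatch(now));
    req.note_failed(now);
    assert!(!req.ready_to_dispatch(now));
    assert!(req.ready_to_dispatch(now + RETRY_DELAY));
}
```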

* Async backing: add Prospective Parachains impl guide (#6933)

Co-authored-by: Bradley Olson <34992650+BradleyOlson64@users.noreply.github.com>

* Updates to Provisioner Guide for Async Backing (#7106)

* Initial corrections and clarifications

* Partial first draft

* Finished first draft

* Adding back wrongly removed test bit

* fmt

* Update roadmap/implementers-guide/src/node/utility/provisioner.md

Co-authored-by: Marcin S. <marcin@realemail.net>

* Addressing comments

* Reorganization

* fmt

---------

Co-authored-by: Marcin S. <marcin@realemail.net>

* fmt

* Renaming Parathread Mentions (#7287)

* Renaming parathreads

* Renaming module to pallet

* More updates

* PVF: Refactor workers into separate crates, remove host dependency (#7253)

* PVF: Refactor workers into separate crates, remove host dependency

* Fix compile error

* Remove some leftover code

* Fix compile errors

* Update Cargo.lock

* Remove worker main.rs files

I accidentally copied these from the other PR. This PR isn't intended to
introduce standalone workers yet.

* Address review comments

* cargo fmt

* Update a couple of comments

* Update log targets

* Update quote to 1.0.27 (#7280)

Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
Co-authored-by: parity-processbot <>

* pallets: implement `Default` for `GenesisConfig` in `no_std` (#7271)

* pallets: implement Default for GenesisConfig in no_std

This change is a follow-up of: https://github.com/paritytech/substrate/pull/14108

It is a step towards: https://github.com/paritytech/substrate/issues/13334

* Cargo.lock updated

* update lockfile for {"substrate"}

---------

Co-authored-by: parity-processbot <>

* cli: enable BEEFY by default on test networks (#7293)

We consider BEEFY mature enough to run by default on all nodes
for test networks (Rococo/Wococo/Versi).

Right now, most nodes are not running it, since it's opt-in via the
--beefy flag. Switch to an opt-out model for test networks:
replace the --beefy CLI flag with --no-beefy and have the BEEFY
client start by default on test networks.

Signed-off-by: acatangiu <adrian@parity.io>

* runtime: past session slashing runtime API (#6667)

* runtime/vstaging: unapplied_slashes runtime API

* runtime/vstaging: key_ownership_proof runtime API

* runtime/ParachainHost: submit_report_dispute_lost

* fix key_ownership_proof API

* runtime: submit_report_dispute_lost runtime API

* nits

* Update node/subsystem-types/src/messages.rs

Co-authored-by: Marcin S. <marcin@bytedude.com>

* revert unrelated fmt changes

* post merge fixes

* fix compilation

---------

Co-authored-by: Marcin S. <marcin@bytedude.com>

* Correcting git mishap

* Document usage of `gum` crate (#7294)

* Document usage of gum crate

* Small fix

* Add some more basic info

* Update node/gum/src/lib.rs

Co-authored-by: Andrei Sandu <54316454+sandreim@users.noreply.github.com>

* Update target docs

---------

Co-authored-by: Andrei Sandu <54316454+sandreim@users.noreply.github.com>

* XCM: Fix issue with RequestUnlock (#7278)

* XCM: Fix issue with RequestUnlock

* Leave API changes for v4

* Fix clippy errors

* Fix tests

---------

Co-authored-by: parity-processbot <>

* Companion for Substrate#14228 (#7295)

* Companion for Substrate#14228

https://github.com/paritytech/substrate/pull/14228

* update lockfile for {"substrate"}

---------

Co-authored-by: parity-processbot <>

* Companion for #14237: Use latest sp-crates (#7300)

* To revert: Update substrate branch to "lexnv/bump_sp_crates"

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* Revert "To revert: Update substrate branch to "lexnv/bump_sp_crates""

This reverts commit 5f1db84eac4a226c37b7f6ce6ee19b49dc7e2008.

* Update cargo lock

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* Update cargo.lock

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* Update cargo.lock

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

---------

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* bounded-collections bump to 0.1.7 (#7305)

* bounded-collections bump to 0.1.7

Companion for: paritytech/substrate#14225

* update lockfile for {"substrate"}

---------

Co-authored-by: parity-processbot <>

* bump to quote 1.0.28 (#7306)

* `RollingSessionWindow` cleanup (#7204)

* Replace `RollingSessionWindow` with `RuntimeInfo` - initial commit

* Fix tests in import

* Fix the rest of the tests

* Remove dead code

* Fix todos

* Simplify session caching

* Comments for `SessionInfoProvider`

* Separate `SessionInfoProvider` from `State`

* `cache_session_info_for_head` becomes freestanding function

* Remove unneeded `mut` usage

* fn session_info -> fn get_session_info() to avoid name clashes. The function also tries to initialize `SessionInfoProvider`

* Fix SessionInfo retrieval

* Code cleanup

* Don't wrap `SessionInfoProvider` in an `Option`

* Remove `earliest_session()`

* Remove pre-caching -> wip

* Fix some tests and code cleanup

* Fix all tests

* Fixes in tests

* Fix comments, variable names and small style changes

* Fix a warning

* impl From<SessionWindowSize> for NonZeroUsize

* Fix logging for `get_session_info` - remove redundant logs and decrease log level to DEBUG

* Code review feedback

* Storage migration removing `COL_SESSION_WINDOW_DATA` from parachains db

* Remove `col_session_data` usages

* Storage migration clearing columns w/o removing them

* Remove session data column usages from `approval-voting` and `dispute-coordinator` tests

* Add some test cases from `RollingSessionWindow` to `dispute-coordinator` tests

* Fix formatting in initialized.rs

* Fix a corner case in `SessionInfo` caching for `dispute-coordinator`

* Remove `RollingSessionWindow` ;(

* Revert "Fix formatting in initialized.rs"

This reverts commit 0f94664ec9f3a7e3737a30291195990e1e7065fc.

* v2 to v3 migration drops `COL_DISPUTE_COORDINATOR_DATA` instead of clearing it

* Fix `NUM_COLUMNS` in `approval-voting`

* Use `columns::v3::NUM_COLUMNS` when opening db

* Update node/service/src/parachains_db/upgrade.rs

Co-authored-by: Andrei Sandu <54316454+sandreim@users.noreply.github.com>

* Don't write in `COL_DISPUTE_COORDINATOR_DATA` for `test_rocksdb_migrate_2_to_3`

* Fix `NUM_COLUMNS` in approval_voting

* Fix formatting

* Fix columns usage

* Clarification comments about the different db versions

---------

Co-authored-by: Andrei Sandu <54316454+sandreim@users.noreply.github.com>

* pallet-para-config: Remove remnant WeightInfo functions (#7308)

* pallet-para-config: Remove remnant WeightInfo functions

Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>

* set_config_with_weight begone

Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>

* ".git/.scripts/commands/bench/bench.sh" runtime kusama-dev runtime_parachains::configuration

---------

Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
Co-authored-by: command-bot <>

* XCM: PayOverXcm config (#6900)

* Move XCM query functionality to trait

* Fix tests

* Add PayOverXcm implementation

* fix the PayOverXcm trait to compile

* moved doc comment out of trait implmeentation and to the trait

* PayOverXCM documentation

* Change documentation a bit

* Added empty benchmark methods implementation and changed docs

* update PayOverXCM to convert AccountIds to MultiLocations

* Implement benchmarking method

* Change v3 to latest

* Descend origin to an asset sender (#6970)

* descend origin to an asset sender

* sender as tuple of dest and sender

* Add more variants to the QueryResponseStatus enum

* Change Beneficiary to Into<[u8; 32]>

* update PayOverXcm to return concrete errors and use AccountId as sender

* use polkadot-primitives for AccountId

* fix dependency to use polkadot-core-primitives

* force Unpaid instruction to the top of the instructions list

* modify report_outcome to accept interior argument

* use new_query directly for building final xcm query, instead of report_outcome

* fix usage of new_query to use the XcmQueryHandler

* fix usage of new_query to use the XcmQueryHandler

* tiny method calling fix

* xcm query handler (#7198)

* drop redundant query status

* rename ReportQueryStatus to OuterQueryStatus

* revert rename of QueryResponseStatus

* update mapping

* Update xcm/xcm-builder/src/pay.rs

Co-authored-by: Gavin Wood <gavin@parity.io>

* Updates

* Docs

* Fix benchmarking stuff

* Destination can be determined based on asset_kind

* Tweaking API to minimise clones

* Some repotting and docs

---------

Co-authored-by: Anthony Alaribe <anthonyalaribe@gmail.com>
Co-authored-by: Muharem Ismailov <ismailov.m.h@gmail.com>
Co-authored-by: Anthony Alaribe <anthony.alaribe@parity.io>
Co-authored-by: Gavin Wood <gavin@parity.io>

* Companion for #14265 (#7307)

* Update Cargo.lock

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* Update Cargo.lock

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

---------

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>
Co-authored-by: parity-processbot <>

* bump serde to 1.0.163 (#7315)

* bump serde to 1.0.163

* bump ci

* update lockfile for {"substrate"}

---------

Co-authored-by: parity-processbot <>

* fmt

* Updated fmt

* Removing changes accidentally pulled from master

* fix another master pull issue

* Another master pull fix

* fmt

* Fixing implementers guide build

* Revert "Merge branch 'rh-async-backing-feature-while-frozen' of https://github.com/paritytech/polkadot into brad-rename-parathread"

This reverts commit bebc24af52ab61155e3fe02cb3ce66a592bce49c, reversing
changes made to 1b2de662dfb11173679d6da5bd0da9d149c85547.

---------

Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
Signed-off-by: acatangiu <adrian@parity.io>
Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>
Co-authored-by: Marcin S <marcin@realemail.net>
Co-authored-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
Co-authored-by: Michal Kucharczyk <1728078+michalkucharczyk@users.noreply.github.com>
Co-authored-by: Adrian Catangiu <adrian@parity.io>
Co-authored-by: ordian <write@reusable.software>
Co-authored-by: Marcin S. <marcin@bytedude.com>
Co-authored-by: Andrei Sandu <54316454+sandreim@users.noreply.github.com>
Co-authored-by: Francisco Aguirre <franciscoaguirreperez@gmail.com>
Co-authored-by: Bastian Köcher <git@kchr.de>
Co-authored-by: Alexandru Vasile <60601340+lexnv@users.noreply.github.com>
Co-authored-by: Sam Johnson <sam@durosoft.com>
Co-authored-by: Tsvetomir Dimitrov <tsvetomir@parity.io>
Co-authored-by: Anthony Alaribe <anthonyalaribe@gmail.com>
Co-authored-by: Muharem Ismailov <ismailov.m.h@gmail.com>
Co-authored-by: Anthony Alaribe <anthony.alaribe@parity.io>
Co-authored-by: Gavin Wood <gavin@parity.io>

* fix bitfield distribution test

* approval distribution tests

* fix bridge tests

* update Cargo.lock

* [async-backing-branch] Optimize collator-protocol validator-side request fetching (#7457)

* Optimize collator-protocol validator-side request fetching

* address feedback: replace tuples with structs

* feedback: add doc comments

* move collation types to subfolder

---------

Signed-off-by: alindima <alin@parity.io>

* Update collation generation for asynchronous backing (#7405)

* break candidate receipt construction and distribution into own function

* update implementers' guide to include SubmitCollation

* implement SubmitCollation for collation-generation

* fmt

* fix test compilation & remove unnecessary submodule

* add some TODOs for a test suite.

* Update roadmap/implementers-guide/src/types/overseer-protocol.md

Co-authored-by: Andrei Sandu <54316454+sandreim@users.noreply.github.com>

* add new test harness and first test

* refactor to avoid requiring background sender

* ensure collation gets packaged and distributed

* tests for the fallback case with no hint

* add parent rp-number hint tests

* fmt

* update uses of CollationGenerationConfig

* fix remaining test

* address review comments

* use subsystemsender for background tasks

* fmt

* remove ValidationCodeHashHint and related tests

---------

Co-authored-by: Andrei Sandu <54316454+sandreim@users.noreply.github.com>
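
`SubmitCollation` inverts control: the collator pushes a finished collation to the node instead of being polled per relay parent. A hedged sketch with stand-in types (not the real overseer `SubmitCollationParams`):

```rust
struct SubmitCollationParams {
    relay_parent: [u8; 32],
    pov: Vec<u8>,                   // proof-of-validity blob
    parent_head: Vec<u8>,           // head-data this collation builds on
    validation_code_hash: [u8; 32],
}

enum CollationGenerationMessage {
    SubmitCollation(SubmitCollationParams),
}

fn handle(msg: CollationGenerationMessage) {
    match msg {
        CollationGenerationMessage::SubmitCollation(p) => {
            // Build the candidate receipt and hand it to the collator
            // protocol for advertisement/distribution.
            let _ = (p.relay_parent, p.pov, p.parent_head, p.validation_code_hash);
        },
    }
}

fn main() {
    handle(CollationGenerationMessage::SubmitCollation(SubmitCollationParams {
        relay_parent: [0; 32],
        pov: vec![],
        parent_head: vec![],
        validation_code_hash: [0; 32],
    }));
}
```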

* fix some more fallout from merge

* fmt

* remove staging APIs from Rococo & Westend (#7513)

* send network messages on main protocol name (#7515)

* misc async backing improvements for allowed ancestry blocks (#7532)

* shared: fix acquire_info

* backwards-compat test for prospective parachains

* same relay parent is allowed

* provisioner: request candidate receipt by relay parent (#7527)

* return candidates hash from prospective parachains

* update provisioner

* update tests

* guide changes

* send a single message to backing

* fix test

* revert to old `handle_new_activations` logic in some cases (#7514)

* revert to old `handle_new_activations` logic

* gate sending messages on scheduled cores to max_depth >= 2

* fmt

* 2->1

* Omnibus asynchronous backing bugfix PR (#7529)

* fix a bug in backing

* add some more logs

* prospective parachains: take ancestry only up to session bounds

* add test
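
Clamping ancestry to session bounds means walking the ancestry newest-first and cutting at the first ancestor from an earlier session. A minimal sketch over illustrative `(block number, session index)` pairs:

```rust
fn clamp_to_session(ancestry: &[(u32, u64)], leaf_session: u64) -> &[(u32, u64)] {
    // Stop at the first ancestor whose session differs from the leaf's.
    let end = ancestry
        .iter()
        .position(|&(_, session)| session != leaf_session)
        .unwrap_or(ancestry.len());
    &ancestry[..end]
}

fn main() {
    let ancestry = [(10u32, 7u64), (9, 7), (8, 6), (7, 6)];
    assert_eq!(clamp_to_session(&ancestry, 7), &ancestry[..2]);
}
```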

* fix zombienet tests (#7614)

Signed-off-by: Andrei Sandu <andrei-mihail@parity.io>

* fix runtime compilation

* make bitfield distribution tests compile

* attempt to fix zombienet disputes (#7618)

* update metric name

* update some metric names

* avoid cycles when creating fake candidates

* make undying collator more friendly to malformed parents

* fix a bug in malus

* fmt

* clippy

* add RUN_IN_CONTAINER to new ZombieNet tests (#7631)

* remove duplicated migration

This happened because of a master merge.

---------

Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
Signed-off-by: acatangiu <adrian@parity.io>
Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>
Signed-off-by: alindima <alin@parity.io>
Signed-off-by: Andrei Sandu <andrei-mihail@parity.io>
Co-authored-by: Chris Sosnin <chris125_@live.com>
Co-authored-by: Parity Bot <admin@parity.io>
Co-authored-by: Chris Sosnin <48099298+slumber@users.noreply.github.com>
Co-authored-by: Robert Klotzner <robert.klotzner@gmx.at>
Co-authored-by: Robert Klotzner <eskimor@users.noreply.github.com>
Co-authored-by: Marcin S <marcin@bytedude.com>
Co-authored-by: Marcin S <marcin@realemail.net>
Co-authored-by: Mattia L.V. Bradascio <28816406+bredamatt@users.noreply.github.com>
Co-authored-by: Bradley Olson <34992650+BradleyOlson64@users.noreply.github.com>
Co-authored-by: alexgparity <115470171+alexgparity@users.noreply.github.com>
Co-authored-by: BradleyOlson64 <lotrftw9@gmail.com>
Co-authored-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
Co-authored-by: Michal Kucharczyk <1728078+michalkucharczyk@users.noreply.github.com>
Co-authored-by: Adrian Catangiu <adrian@parity.io>
Co-authored-by: ordian <write@reusable.software>
Co-authored-by: Andrei Sandu <54316454+sandreim@users.noreply.github.com>
Co-authored-by: Francisco Aguirre <franciscoaguirreperez@gmail.com>
Co-authored-by: Bastian Köcher <git@kchr.de>
Co-authored-by: Alexandru Vasile <60601340+lexnv@users.noreply.github.com>
Co-authored-by: Sam Johnson <sam@durosoft.com>
Co-authored-by: Tsvetomir Dimitrov <tsvetomir@parity.io>
Co-authored-by: Anthony Alaribe <anthonyalaribe@gmail.com>
Co-authored-by: Muharem Ismailov <ismailov.m.h@gmail.com>
Co-authored-by: Anthony Alaribe <anthony.alaribe@parity.io>
Co-authored-by: Gavin Wood <gavin@parity.io>
Co-authored-by: Alin Dima <alin@parity.io>
Commit 5174b9d2d7 (parent ad539f0e41), authored by asynchronous rob on 2023-08-18 11:11:56 -05:00, committed by GitHub.
175 changed files with 40882 additions and 6462 deletions.
@@ -6,7 +6,6 @@ edition.workspace = true
license.workspace = true
[dependencies]
always-assert = "0.1.2"
bitvec = { version = "1.0.1", default-features = false, features = ["alloc"] }
futures = "0.3.21"
futures-timer = "3"
@@ -23,6 +22,7 @@ polkadot-node-subsystem-util = { path = "../../subsystem-util" }
polkadot-node-subsystem = {path = "../../subsystem" }
fatality = "0.0.6"
thiserror = "1.0.31"
tokio-util = "0.7.1"
[dev-dependencies]
log = "0.4.17"
@@ -31,6 +31,7 @@ assert_matches = "1.4.0"
sp-core = { git = "https://github.com/paritytech/substrate", branch = "master", features = ["std"] }
sp-keyring = { git = "https://github.com/paritytech/substrate", branch = "master" }
sc-keystore = { git = "https://github.com/paritytech/substrate", branch = "master" }
sc-network = { git = "https://github.com/paritytech/substrate", branch = "master" }
parity-scale-codec = { version = "3.6.1", features = ["std"] }
@@ -0,0 +1,162 @@
// Copyright 2022 Parity Technologies (UK) Ltd.
// This file is part of Polkadot.
// Polkadot is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// Polkadot is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with Polkadot. If not, see <http://www.gnu.org/licenses/>.
//! Primitives for tracking collation-related data.
use std::collections::{HashSet, VecDeque};
use futures::{future::BoxFuture, stream::FuturesUnordered};
use polkadot_node_network_protocol::{
request_response::{
incoming::OutgoingResponse, v1 as protocol_v1, vstaging as protocol_vstaging,
IncomingRequest,
},
PeerId,
};
use polkadot_node_primitives::PoV;
use polkadot_primitives::{CandidateHash, CandidateReceipt, Hash, Id as ParaId};
/// The status of a collation as seen from the collator.
pub enum CollationStatus {
/// The collation was created, but we did not advertise it to any validator.
Created,
/// The collation was advertised to at least one validator.
Advertised,
/// The collation was requested by at least one validator.
Requested,
}
impl CollationStatus {
/// Advance to the [`Self::Advertised`] status.
///
/// This ensures that `self` isn't already [`Self::Requested`].
pub fn advance_to_advertised(&mut self) {
if !matches!(self, Self::Requested) {
*self = Self::Advertised;
}
}
/// Advance to the [`Self::Requested`] status.
pub fn advance_to_requested(&mut self) {
*self = Self::Requested;
}
}
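The status is strictly monotonic: `advance_to_advertised` refuses to move backwards once a fetch request has been observed. A small illustrative sketch (not part of the diff):

```rust
// Statuses only advance, never regress.
let mut status = CollationStatus::Created;
status.advance_to_advertised();
assert!(matches!(status, CollationStatus::Advertised));

status.advance_to_requested();
// A late advertisement event no longer downgrades the status.
status.advance_to_advertised();
assert!(matches!(status, CollationStatus::Requested));
```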
/// A collation built by the collator.
pub struct Collation {
/// Candidate receipt.
pub receipt: CandidateReceipt,
/// Parent head-data hash.
pub parent_head_data_hash: Hash,
/// Proof to verify the state transition of the parachain.
pub pov: PoV,
/// Collation status.
pub status: CollationStatus,
}
/// Stores the state for waiting collation fetches per relay parent.
#[derive(Default)]
pub struct WaitingCollationFetches {
/// A flag indicating that we have an ongoing request.
/// This limits the number of collations being sent at any given time
/// to one per relay parent.
///
/// If set to `true`, any new request will be queued.
pub collation_fetch_active: bool,
/// The collation fetches waiting to be fulfilled.
pub req_queue: VecDeque<VersionedCollationRequest>,
/// All peers that are waiting or actively uploading.
///
/// We will not accept multiple requests from the same peer, otherwise our DoS protection of
/// moving on to the next peer after `MAX_UNSHARED_UPLOAD_TIME` would be pointless.
pub waiting_peers: HashSet<(PeerId, CandidateHash)>,
}
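Together, the flag and the queue serialize uploads per relay parent. A minimal sketch of the intended interaction, with `enqueue_or_start` as a hypothetical helper (not part of the diff):

```rust
/// Returns `Some(req)` if the request may be served immediately,
/// otherwise parks it behind the in-flight upload.
fn enqueue_or_start(
    waiting: &mut WaitingCollationFetches,
    req: VersionedCollationRequest,
) -> Option<VersionedCollationRequest> {
    if waiting.collation_fetch_active {
        // An upload for this relay parent is already running; queue up.
        waiting.req_queue.push_back(req);
        None
    } else {
        waiting.collation_fetch_active = true;
        Some(req)
    }
}
```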
/// Backwards-compatible wrapper for incoming collation requests.
pub enum VersionedCollationRequest {
V1(IncomingRequest<protocol_v1::CollationFetchingRequest>),
VStaging(IncomingRequest<protocol_vstaging::CollationFetchingRequest>),
}
impl From<IncomingRequest<protocol_v1::CollationFetchingRequest>> for VersionedCollationRequest {
fn from(req: IncomingRequest<protocol_v1::CollationFetchingRequest>) -> Self {
Self::V1(req)
}
}
impl From<IncomingRequest<protocol_vstaging::CollationFetchingRequest>>
for VersionedCollationRequest
{
fn from(req: IncomingRequest<protocol_vstaging::CollationFetchingRequest>) -> Self {
Self::VStaging(req)
}
}
impl VersionedCollationRequest {
/// Returns parachain id from the request payload.
pub fn para_id(&self) -> ParaId {
match self {
VersionedCollationRequest::V1(req) => req.payload.para_id,
VersionedCollationRequest::VStaging(req) => req.payload.para_id,
}
}
/// Returns relay parent from the request payload.
pub fn relay_parent(&self) -> Hash {
match self {
VersionedCollationRequest::V1(req) => req.payload.relay_parent,
VersionedCollationRequest::VStaging(req) => req.payload.relay_parent,
}
}
/// Returns id of the peer the request was received from.
pub fn peer_id(&self) -> PeerId {
match self {
VersionedCollationRequest::V1(req) => req.peer,
VersionedCollationRequest::VStaging(req) => req.peer,
}
}
/// Sends the response back to the requester.
///
/// A v1 response type is used for both versions, since the response
/// format is the same for vstaging.
pub fn send_outgoing_response(
self,
response: OutgoingResponse<protocol_v1::CollationFetchingResponse>,
) -> Result<(), ()> {
match self {
VersionedCollationRequest::V1(req) => req.send_outgoing_response(response),
VersionedCollationRequest::VStaging(req) => req.send_outgoing_response(response),
}
}
}
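These accessors keep downstream code version-agnostic; only response sending pins the v1 type, since the response format is shared. A hypothetical helper to illustrate:

```rust
// Callers never need to match on the protocol version themselves.
fn request_summary(req: &VersionedCollationRequest) -> (ParaId, Hash, PeerId) {
    (req.para_id(), req.relay_parent(), req.peer_id())
}
```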
/// Result of the finished background send-collation task.
///
/// Note that if the timeout is hit, the request is not aborted; it only
/// indicates that we should start processing the next one from the queue.
pub struct CollationSendResult {
/// Candidate's relay parent.
pub relay_parent: Hash,
/// Candidate hash.
pub candidate_hash: CandidateHash,
/// Peer id.
pub peer_id: PeerId,
/// Whether the max unshared timeout was hit.
pub timed_out: bool,
}
pub type ActiveCollationFetches = FuturesUnordered<BoxFuture<'static, CollationSendResult>>;
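A sketch of how a background send task could feed `ActiveCollationFetches`; the `MAX_UNSHARED_UPLOAD_TIME` value and the `send` future are assumptions here, and unlike the real subsystem this simplified version drops the upload when the timer wins:

```rust
use std::time::Duration;

use futures::{
    future::{select, BoxFuture, Either},
    FutureExt,
};

// Assumed bound on how long one peer may hog the upload slot.
const MAX_UNSHARED_UPLOAD_TIME: Duration = Duration::from_millis(150);

fn spawn_send(
    active: &mut ActiveCollationFetches,
    relay_parent: Hash,
    candidate_hash: CandidateHash,
    peer_id: PeerId,
    send: BoxFuture<'static, ()>,
) {
    active.push(
        async move {
            let timeout = futures_timer::Delay::new(MAX_UNSHARED_UPLOAD_TIME);
            // NB: `select` drops the losing future; in this sketch the
            // timeout is only a signal to start serving the next request.
            let timed_out = matches!(select(send, timeout).await, Either::Right(_));
            CollationSendResult { relay_parent, candidate_hash, peer_id, timed_out }
        }
        .boxed(),
    );
}
```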
@@ -20,7 +20,7 @@ use polkadot_node_subsystem_util::metrics::{self, prometheus};
pub struct Metrics(Option<MetricsInner>);
impl Metrics {
pub fn on_advertisment_made(&self) {
pub fn on_advertisement_made(&self) {
if let Some(metrics) = &self.0 {
metrics.advertisements_made.inc();
}
File diff suppressed because it is too large.
@@ -37,6 +37,7 @@ use polkadot_node_network_protocol::{
};
use polkadot_node_primitives::BlockData;
use polkadot_node_subsystem::{
errors::RuntimeApiError,
jaeger,
messages::{AllMessages, ReportPeerMessage, RuntimeApiMessage, RuntimeApiRequest},
ActivatedLeaf, ActiveLeavesUpdate, LeafStatus,
@@ -49,8 +50,13 @@ use polkadot_primitives::{
};
use polkadot_primitives_test_helpers::TestCandidateBuilder;
mod prospective_parachains;
const REPUTATION_CHANGE_TEST_INTERVAL: Duration = Duration::from_millis(10);
const ASYNC_BACKING_DISABLED_ERROR: RuntimeApiError =
RuntimeApiError::NotSupported { runtime_api_name: "test-runtime" };
#[derive(Clone)]
struct TestState {
para_id: ParaId,
@@ -186,6 +192,17 @@ impl TestState {
)),
)
.await;
assert_matches!(
overseer_recv(virtual_overseer).await,
AllMessages::RuntimeApi(RuntimeApiMessage::Request(
relay_parent,
RuntimeApiRequest::StagingAsyncBackingParams(tx)
)) => {
assert_eq!(relay_parent, self.relay_parent);
tx.send(Err(ASYNC_BACKING_DISABLED_ERROR)).unwrap();
}
);
}
}
@@ -193,7 +210,8 @@ type VirtualOverseer = test_helpers::TestSubsystemContextHandle<CollatorProtocol
struct TestHarness {
virtual_overseer: VirtualOverseer,
req_cfg: sc_network::config::RequestResponseConfig,
req_v1_cfg: sc_network::config::RequestResponseConfig,
req_vstaging_cfg: sc_network::config::RequestResponseConfig,
}
fn test_harness<T: Future<Output = TestHarness>>(
@@ -215,7 +233,9 @@ fn test_harness<T: Future<Output = TestHarness>>(
let genesis_hash = Hash::repeat_byte(0xff);
let req_protocol_names = ReqProtocolNames::new(&genesis_hash, None);
let (collation_req_receiver, req_cfg) =
let (collation_req_receiver, req_v1_cfg) =
IncomingRequest::get_config_receiver(&req_protocol_names);
let (collation_req_vstaging_receiver, req_vstaging_cfg) =
IncomingRequest::get_config_receiver(&req_protocol_names);
let subsystem = async {
run_inner(
@@ -223,6 +243,7 @@ fn test_harness<T: Future<Output = TestHarness>>(
local_peer_id,
collator_pair,
collation_req_receiver,
collation_req_vstaging_receiver,
Default::default(),
reputation,
REPUTATION_CHANGE_TEST_INTERVAL,
@@ -231,7 +252,7 @@ fn test_harness<T: Future<Output = TestHarness>>(
.unwrap();
};
let test_fut = test(TestHarness { virtual_overseer, req_cfg });
let test_fut = test(TestHarness { virtual_overseer, req_v1_cfg, req_vstaging_cfg });
futures::pin_mut!(test_fut);
futures::pin_mut!(subsystem);
@@ -305,6 +326,17 @@ async fn setup_system(virtual_overseer: &mut VirtualOverseer, test_state: &TestS
])),
)
.await;
assert_matches!(
overseer_recv(virtual_overseer).await,
AllMessages::RuntimeApi(RuntimeApiMessage::Request(
relay_parent,
RuntimeApiRequest::StagingAsyncBackingParams(tx)
)) => {
assert_eq!(relay_parent, test_state.relay_parent);
tx.send(Err(ASYNC_BACKING_DISABLED_ERROR)).unwrap();
}
);
}
/// Result of [`distribute_collation`]
@@ -313,29 +345,23 @@ struct DistributeCollation {
pov_block: PoV,
}
/// Create some PoV and distribute it.
async fn distribute_collation(
async fn distribute_collation_with_receipt(
virtual_overseer: &mut VirtualOverseer,
test_state: &TestState,
// Whether we expect a connection request.
relay_parent: Hash,
should_connect: bool,
candidate: CandidateReceipt,
pov: PoV,
parent_head_data_hash: Hash,
) -> DistributeCollation {
// Now we want to distribute a `PoVBlock`
let pov_block = PoV { block_data: BlockData(vec![42, 43, 44]) };
let pov_hash = pov_block.hash();
let candidate = TestCandidateBuilder {
para_id: test_state.para_id,
relay_parent: test_state.relay_parent,
pov_hash,
..Default::default()
}
.build();
overseer_send(
virtual_overseer,
CollatorProtocolMessage::DistributeCollation(candidate.clone(), pov_block.clone(), None),
CollatorProtocolMessage::DistributeCollation(
candidate.clone(),
parent_head_data_hash,
pov.clone(),
None,
),
)
.await;
@@ -343,10 +369,10 @@ async fn distribute_collation(
assert_matches!(
overseer_recv(virtual_overseer).await,
AllMessages::RuntimeApi(RuntimeApiMessage::Request(
relay_parent,
_relay_parent,
RuntimeApiRequest::AvailabilityCores(tx)
)) => {
assert_eq!(relay_parent, test_state.relay_parent);
assert_eq!(relay_parent, _relay_parent);
tx.send(Ok(test_state.availability_cores.clone())).unwrap();
}
);
@@ -358,7 +384,7 @@ async fn distribute_collation(
_relay_parent,
RuntimeApiRequest::SessionIndexForChild(tx),
)) => {
assert_eq!(relay_parent, test_state.relay_parent);
assert_eq!(relay_parent, _relay_parent);
tx.send(Ok(test_state.current_session_index())).unwrap();
},
@@ -366,17 +392,17 @@ async fn distribute_collation(
_relay_parent,
RuntimeApiRequest::SessionInfo(index, tx),
)) => {
assert_eq!(relay_parent, test_state.relay_parent);
assert_eq!(relay_parent, _relay_parent);
assert_eq!(index, test_state.current_session_index());
tx.send(Ok(Some(test_state.session_info.clone()))).unwrap();
},
AllMessages::RuntimeApi(RuntimeApiMessage::Request(
relay_parent,
_relay_parent,
RuntimeApiRequest::ValidatorGroups(tx),
)) => {
assert_eq!(relay_parent, test_state.relay_parent);
assert_eq!(_relay_parent, relay_parent);
tx.send(Ok((
test_state.session_info.validator_groups.to_vec(),
test_state.group_rotation_info.clone(),
@@ -400,13 +426,48 @@ async fn distribute_collation(
);
}
DistributeCollation { candidate, pov_block }
DistributeCollation { candidate, pov_block: pov }
}
/// Create some PoV and distribute it.
async fn distribute_collation(
virtual_overseer: &mut VirtualOverseer,
test_state: &TestState,
relay_parent: Hash,
// Whether we expect a connection request.
should_connect: bool,
) -> DistributeCollation {
// Now we want to distribute a `PoVBlock`
let pov_block = PoV { block_data: BlockData(vec![42, 43, 44]) };
let pov_hash = pov_block.hash();
let parent_head_data_hash = Hash::zero();
let candidate = TestCandidateBuilder {
para_id: test_state.para_id,
relay_parent,
pov_hash,
..Default::default()
}
.build();
distribute_collation_with_receipt(
virtual_overseer,
test_state,
relay_parent,
should_connect,
candidate,
pov_block,
parent_head_data_hash,
)
.await
}
/// Connect a peer
async fn connect_peer(
virtual_overseer: &mut VirtualOverseer,
peer: PeerId,
version: CollationVersion,
authority_id: Option<AuthorityDiscoveryId>,
) {
overseer_send(
@@ -414,7 +475,7 @@ async fn connect_peer(
CollatorProtocolMessage::NetworkBridgeUpdate(NetworkBridgeEvent::PeerConnected(
peer,
polkadot_node_network_protocol::ObservedRole::Authority,
CollationVersion::V1.into(),
version.into(),
authority_id.map(|v| HashSet::from([v])),
)),
)
@@ -474,30 +535,65 @@ async fn expect_declare_msg(
}
/// Check that the next received message is a collation advertisement message.
///
/// Expects a vstaging message if `expected_candidate_hashes` is `Some`, a v1 message otherwise.
async fn expect_advertise_collation_msg(
virtual_overseer: &mut VirtualOverseer,
peer: &PeerId,
expected_relay_parent: Hash,
expected_candidate_hashes: Option<Vec<CandidateHash>>,
) {
assert_matches!(
overseer_recv(virtual_overseer).await,
AllMessages::NetworkBridgeTx(
NetworkBridgeTxMessage::SendCollationMessage(
to,
Versioned::V1(protocol_v1::CollationProtocol::CollatorProtocol(wire_message)),
)
) => {
assert_eq!(to[0], *peer);
assert_matches!(
wire_message,
protocol_v1::CollatorProtocolMessage::AdvertiseCollation(
relay_parent,
) => {
assert_eq!(relay_parent, expected_relay_parent);
let mut candidate_hashes: Option<HashSet<_>> =
expected_candidate_hashes.map(|hashes| hashes.into_iter().collect());
let iter_num = candidate_hashes.as_ref().map(|hashes| hashes.len()).unwrap_or(1);
for _ in 0..iter_num {
assert_matches!(
overseer_recv(virtual_overseer).await,
AllMessages::NetworkBridgeTx(
NetworkBridgeTxMessage::SendCollationMessage(
to,
wire_message,
)
) => {
assert_eq!(to[0], *peer);
match (candidate_hashes.as_mut(), wire_message) {
(None, Versioned::V1(protocol_v1::CollationProtocol::CollatorProtocol(wire_message))) => {
assert_matches!(
wire_message,
protocol_v1::CollatorProtocolMessage::AdvertiseCollation(
relay_parent,
) => {
assert_eq!(relay_parent, expected_relay_parent);
}
);
},
(
Some(candidate_hashes),
Versioned::VStaging(protocol_vstaging::CollationProtocol::CollatorProtocol(
wire_message,
)),
) => {
assert_matches!(
wire_message,
protocol_vstaging::CollatorProtocolMessage::AdvertiseCollation {
relay_parent,
candidate_hash,
..
} => {
assert_eq!(relay_parent, expected_relay_parent);
assert!(candidate_hashes.contains(&candidate_hash));
// Drop the hash we've already seen.
candidate_hashes.remove(&candidate_hash);
}
);
},
_ => panic!("Invalid advertisement"),
}
);
}
);
}
);
}
}
/// Send a message that the given peer's view changed.
@@ -528,19 +624,26 @@ fn advertise_and_send_collation() {
ReputationAggregator::new(|_| true),
|test_harness| async move {
let mut virtual_overseer = test_harness.virtual_overseer;
let mut req_cfg = test_harness.req_cfg;
let mut req_v1_cfg = test_harness.req_v1_cfg;
let req_vstaging_cfg = test_harness.req_vstaging_cfg;
setup_system(&mut virtual_overseer, &test_state).await;
let DistributeCollation { candidate, pov_block } =
distribute_collation(&mut virtual_overseer, &test_state, true).await;
let DistributeCollation { candidate, pov_block } = distribute_collation(
&mut virtual_overseer,
&test_state,
test_state.relay_parent,
true,
)
.await;
for (val, peer) in test_state
.current_group_validator_authority_ids()
.into_iter()
.zip(test_state.current_group_validator_peer_ids())
{
connect_peer(&mut virtual_overseer, peer, Some(val.clone())).await;
connect_peer(&mut virtual_overseer, peer, CollationVersion::V1, Some(val.clone()))
.await;
}
// We declare to the connected validators that we are a collator.
@@ -558,18 +661,23 @@ fn advertise_and_send_collation() {
// The peer is interested in a leaf that we have a collation for;
// advertise it.
expect_advertise_collation_msg(&mut virtual_overseer, &peer, test_state.relay_parent)
.await;
expect_advertise_collation_msg(
&mut virtual_overseer,
&peer,
test_state.relay_parent,
None,
)
.await;
// Request a collation.
let (pending_response, rx) = oneshot::channel();
req_cfg
req_v1_cfg
.inbound_queue
.as_mut()
.unwrap()
.send(RawIncomingRequest {
peer,
payload: CollationFetchingRequest {
payload: request_v1::CollationFetchingRequest {
relay_parent: test_state.relay_parent,
para_id: test_state.para_id,
}
@@ -582,13 +690,13 @@ fn advertise_and_send_collation() {
{
let (pending_response, rx) = oneshot::channel();
req_cfg
req_v1_cfg
.inbound_queue
.as_mut()
.unwrap()
.send(RawIncomingRequest {
peer,
payload: CollationFetchingRequest {
payload: request_v1::CollationFetchingRequest {
relay_parent: test_state.relay_parent,
para_id: test_state.para_id,
}
@@ -613,8 +721,8 @@ fn advertise_and_send_collation() {
assert_matches!(
rx.await,
Ok(full_response) => {
let CollationFetchingResponse::Collation(receipt, pov): CollationFetchingResponse
= CollationFetchingResponse::decode(
let request_v1::CollationFetchingResponse::Collation(receipt, pov): request_v1::CollationFetchingResponse
= request_v1::CollationFetchingResponse::decode(
&mut full_response.result
.expect("We should have a proper answer").as_ref()
)
@@ -632,13 +740,13 @@ fn advertise_and_send_collation() {
// Re-request a collation.
let (pending_response, rx) = oneshot::channel();
req_cfg
req_v1_cfg
.inbound_queue
.as_mut()
.unwrap()
.send(RawIncomingRequest {
peer,
payload: CollationFetchingRequest {
payload: request_v1::CollationFetchingRequest {
relay_parent: old_relay_parent,
para_id: test_state.para_id,
}
@@ -652,7 +760,8 @@ fn advertise_and_send_collation() {
assert!(overseer_recv_with_timeout(&mut virtual_overseer, TIMEOUT).await.is_none());
distribute_collation(&mut virtual_overseer, &test_state, true).await;
distribute_collation(&mut virtual_overseer, &test_state, test_state.relay_parent, true)
.await;
// Send info about peer's view.
overseer_send(
@@ -664,9 +773,14 @@ fn advertise_and_send_collation() {
)
.await;
expect_advertise_collation_msg(&mut virtual_overseer, &peer, test_state.relay_parent)
.await;
TestHarness { virtual_overseer, req_cfg }
expect_advertise_collation_msg(
&mut virtual_overseer,
&peer,
test_state.relay_parent,
None,
)
.await;
TestHarness { virtual_overseer, req_v1_cfg, req_vstaging_cfg }
},
);
}
@@ -683,18 +797,26 @@ fn delay_reputation_change() {
ReputationAggregator::new(|_| false),
|test_harness| async move {
let mut virtual_overseer = test_harness.virtual_overseer;
let mut req_cfg = test_harness.req_cfg;
let mut req_v1_cfg = test_harness.req_v1_cfg;
let req_vstaging_cfg = test_harness.req_vstaging_cfg;
setup_system(&mut virtual_overseer, &test_state).await;
let _ = distribute_collation(&mut virtual_overseer, &test_state, true).await;
let _ = distribute_collation(
&mut virtual_overseer,
&test_state,
test_state.relay_parent,
true,
)
.await;
for (val, peer) in test_state
.current_group_validator_authority_ids()
.into_iter()
.zip(test_state.current_group_validator_peer_ids())
{
connect_peer(&mut virtual_overseer, peer, Some(val.clone())).await;
connect_peer(&mut virtual_overseer, peer, CollationVersion::V1, Some(val.clone()))
.await;
}
// We declare to the connected validators that we are a collator.
@@ -712,18 +834,23 @@ fn delay_reputation_change() {
// The peer is interested in a leaf that we have a collation for;
// advertise it.
expect_advertise_collation_msg(&mut virtual_overseer, &peer, test_state.relay_parent)
.await;
expect_advertise_collation_msg(
&mut virtual_overseer,
&peer,
test_state.relay_parent,
None,
)
.await;
// Request a collation.
let (pending_response, _rx) = oneshot::channel();
req_cfg
req_v1_cfg
.inbound_queue
.as_mut()
.unwrap()
.send(RawIncomingRequest {
peer,
payload: CollationFetchingRequest {
payload: request_v1::CollationFetchingRequest {
relay_parent: test_state.relay_parent,
para_id: test_state.para_id,
}
@@ -736,13 +863,13 @@ fn delay_reputation_change() {
{
let (pending_response, _rx) = oneshot::channel();
req_cfg
req_v1_cfg
.inbound_queue
.as_mut()
.unwrap()
.send(RawIncomingRequest {
peer,
payload: CollationFetchingRequest {
payload: request_v1::CollationFetchingRequest {
relay_parent: test_state.relay_parent,
para_id: test_state.para_id,
}
@@ -767,7 +894,90 @@ fn delay_reputation_change() {
);
}
TestHarness { virtual_overseer, req_cfg }
TestHarness { virtual_overseer, req_v1_cfg, req_vstaging_cfg }
},
);
}
/// Tests that the collator side works with the vstaging network protocol
/// before async backing is enabled.
#[test]
fn advertise_collation_vstaging_protocol() {
let test_state = TestState::default();
let local_peer_id = test_state.local_peer_id;
let collator_pair = test_state.collator_pair.clone();
test_harness(
local_peer_id,
collator_pair,
ReputationAggregator::new(|_| true),
|mut test_harness| async move {
let virtual_overseer = &mut test_harness.virtual_overseer;
setup_system(virtual_overseer, &test_state).await;
let DistributeCollation { candidate, .. } =
distribute_collation(virtual_overseer, &test_state, test_state.relay_parent, true)
.await;
let validators = test_state.current_group_validator_authority_ids();
assert!(validators.len() >= 2);
let peer_ids = test_state.current_group_validator_peer_ids();
// Connect first peer with v1.
connect_peer(
virtual_overseer,
peer_ids[0],
CollationVersion::V1,
Some(validators[0].clone()),
)
.await;
// The rest with vstaging.
for (val, peer) in validators.iter().zip(peer_ids.iter()).skip(1) {
connect_peer(
virtual_overseer,
*peer,
CollationVersion::VStaging,
Some(val.clone()),
)
.await;
}
// Declare messages.
expect_declare_msg(virtual_overseer, &test_state, &peer_ids[0]).await;
for peer_id in peer_ids.iter().skip(1) {
prospective_parachains::expect_declare_msg_vstaging(
virtual_overseer,
&test_state,
&peer_id,
)
.await;
}
// Send info about the peers' views.
for peer in peer_ids.iter() {
send_peer_view_change(virtual_overseer, peer, vec![test_state.relay_parent]).await;
}
// Versioned advertisements work.
expect_advertise_collation_msg(
virtual_overseer,
&peer_ids[0],
test_state.relay_parent,
None,
)
.await;
for peer_id in peer_ids.iter().skip(1) {
expect_advertise_collation_msg(
virtual_overseer,
peer_id,
test_state.relay_parent,
Some(vec![candidate.hash()]), // This is `Some`, advertisement is vstaging.
)
.await;
}
test_harness
},
);
}
@@ -814,7 +1024,13 @@ fn collators_declare_to_connected_peers() {
setup_system(&mut test_harness.virtual_overseer, &test_state).await;
// A validator connected to us
connect_peer(&mut test_harness.virtual_overseer, peer, Some(validator_id)).await;
connect_peer(
&mut test_harness.virtual_overseer,
peer,
CollationVersion::V1,
Some(validator_id),
)
.await;
expect_declare_msg(&mut test_harness.virtual_overseer, &test_state, &peer).await;
test_harness
},
@@ -843,10 +1059,10 @@ fn collations_are_only_advertised_to_validators_with_correct_view() {
setup_system(virtual_overseer, &test_state).await;
// A validator connected to us
connect_peer(virtual_overseer, peer, Some(validator_id)).await;
connect_peer(virtual_overseer, peer, CollationVersion::V1, Some(validator_id)).await;
// Connect the second validator
connect_peer(virtual_overseer, peer2, Some(validator_id2)).await;
connect_peer(virtual_overseer, peer2, CollationVersion::V1, Some(validator_id2)).await;
expect_declare_msg(virtual_overseer, &test_state, &peer).await;
expect_declare_msg(virtual_overseer, &test_state, &peer2).await;
@@ -854,15 +1070,18 @@ fn collations_are_only_advertised_to_validators_with_correct_view() {
// And let it tell us that it has the same view.
send_peer_view_change(virtual_overseer, &peer2, vec![test_state.relay_parent]).await;
distribute_collation(virtual_overseer, &test_state, true).await;
distribute_collation(virtual_overseer, &test_state, test_state.relay_parent, true)
.await;
expect_advertise_collation_msg(virtual_overseer, &peer2, test_state.relay_parent).await;
expect_advertise_collation_msg(virtual_overseer, &peer2, test_state.relay_parent, None)
.await;
// The other validator announces that it changed its view.
send_peer_view_change(virtual_overseer, &peer, vec![test_state.relay_parent]).await;
// After changing the view we should receive the advertisement
expect_advertise_collation_msg(virtual_overseer, &peer, test_state.relay_parent).await;
expect_advertise_collation_msg(virtual_overseer, &peer, test_state.relay_parent, None)
.await;
test_harness
},
)
@@ -890,15 +1109,16 @@ fn collate_on_two_different_relay_chain_blocks() {
setup_system(virtual_overseer, &test_state).await;
// A validator connected to us
connect_peer(virtual_overseer, peer, Some(validator_id)).await;
connect_peer(virtual_overseer, peer, CollationVersion::V1, Some(validator_id)).await;
// Connect the second validator
connect_peer(virtual_overseer, peer2, Some(validator_id2)).await;
connect_peer(virtual_overseer, peer2, CollationVersion::V1, Some(validator_id2)).await;
expect_declare_msg(virtual_overseer, &test_state, &peer).await;
expect_declare_msg(virtual_overseer, &test_state, &peer2).await;
distribute_collation(virtual_overseer, &test_state, true).await;
distribute_collation(virtual_overseer, &test_state, test_state.relay_parent, true)
.await;
let old_relay_parent = test_state.relay_parent;
@@ -906,14 +1126,16 @@ fn collate_on_two_different_relay_chain_blocks() {
// parent are active.
test_state.advance_to_new_round(virtual_overseer, true).await;
distribute_collation(virtual_overseer, &test_state, true).await;
distribute_collation(virtual_overseer, &test_state, test_state.relay_parent, true)
.await;
send_peer_view_change(virtual_overseer, &peer, vec![old_relay_parent]).await;
expect_advertise_collation_msg(virtual_overseer, &peer, old_relay_parent).await;
expect_advertise_collation_msg(virtual_overseer, &peer, old_relay_parent, None).await;
send_peer_view_change(virtual_overseer, &peer2, vec![test_state.relay_parent]).await;
expect_advertise_collation_msg(virtual_overseer, &peer2, test_state.relay_parent).await;
expect_advertise_collation_msg(virtual_overseer, &peer2, test_state.relay_parent, None)
.await;
test_harness
},
)
@@ -938,17 +1160,20 @@ fn validator_reconnect_does_not_advertise_a_second_time() {
setup_system(virtual_overseer, &test_state).await;
// A validator connected to us
connect_peer(virtual_overseer, peer, Some(validator_id.clone())).await;
connect_peer(virtual_overseer, peer, CollationVersion::V1, Some(validator_id.clone()))
.await;
expect_declare_msg(virtual_overseer, &test_state, &peer).await;
distribute_collation(virtual_overseer, &test_state, true).await;
distribute_collation(virtual_overseer, &test_state, test_state.relay_parent, true)
.await;
send_peer_view_change(virtual_overseer, &peer, vec![test_state.relay_parent]).await;
expect_advertise_collation_msg(virtual_overseer, &peer, test_state.relay_parent).await;
expect_advertise_collation_msg(virtual_overseer, &peer, test_state.relay_parent, None)
.await;
// Disconnect and reconnect directly
disconnect_peer(virtual_overseer, peer).await;
connect_peer(virtual_overseer, peer, Some(validator_id)).await;
connect_peer(virtual_overseer, peer, CollationVersion::V1, Some(validator_id)).await;
expect_declare_msg(virtual_overseer, &test_state, &peer).await;
send_peer_view_change(virtual_overseer, &peer, vec![test_state.relay_parent]).await;
@@ -979,7 +1204,7 @@ fn collators_reject_declare_messages() {
setup_system(virtual_overseer, &test_state).await;
// A validator connected to us
connect_peer(virtual_overseer, peer, Some(validator_id)).await;
connect_peer(virtual_overseer, peer, CollationVersion::V1, Some(validator_id)).await;
expect_declare_msg(virtual_overseer, &test_state, &peer).await;
overseer_send(
@@ -1031,19 +1256,20 @@ where
ReputationAggregator::new(|_| true),
|mut test_harness| async move {
let virtual_overseer = &mut test_harness.virtual_overseer;
let req_cfg = &mut test_harness.req_cfg;
let req_cfg = &mut test_harness.req_v1_cfg;
setup_system(virtual_overseer, &test_state).await;
let DistributeCollation { candidate, pov_block } =
distribute_collation(virtual_overseer, &test_state, true).await;
distribute_collation(virtual_overseer, &test_state, test_state.relay_parent, true)
.await;
for (val, peer) in test_state
.current_group_validator_authority_ids()
.into_iter()
.zip(test_state.current_group_validator_peer_ids())
{
connect_peer(virtual_overseer, peer, Some(val.clone())).await;
connect_peer(virtual_overseer, peer, CollationVersion::V1, Some(val.clone())).await;
}
// We declare to the connected validators that we are a collator.
@@ -1064,10 +1290,20 @@ where
// The peer is interested in a leaf that we have a collation for;
// advertise it.
expect_advertise_collation_msg(virtual_overseer, &validator_0, test_state.relay_parent)
.await;
expect_advertise_collation_msg(virtual_overseer, &validator_1, test_state.relay_parent)
.await;
expect_advertise_collation_msg(
virtual_overseer,
&validator_0,
test_state.relay_parent,
None,
)
.await;
expect_advertise_collation_msg(
virtual_overseer,
&validator_1,
test_state.relay_parent,
None,
)
.await;
// Request a collation.
let (pending_response, rx) = oneshot::channel();
@@ -1077,7 +1313,7 @@ where
.unwrap()
.send(RawIncomingRequest {
peer: validator_0,
payload: CollationFetchingRequest {
payload: request_v1::CollationFetchingRequest {
relay_parent: test_state.relay_parent,
para_id: test_state.para_id,
}
@@ -1092,8 +1328,8 @@ where
let feedback_tx = assert_matches!(
rx.await,
Ok(full_response) => {
let CollationFetchingResponse::Collation(receipt, pov): CollationFetchingResponse
= CollationFetchingResponse::decode(
let request_v1::CollationFetchingResponse::Collation(receipt, pov): request_v1::CollationFetchingResponse
= request_v1::CollationFetchingResponse::decode(
&mut full_response.result
.expect("We should have a proper answer").as_ref()
)
@@ -1113,7 +1349,7 @@ where
.unwrap()
.send(RawIncomingRequest {
peer: validator_1,
payload: CollationFetchingRequest {
payload: request_v1::CollationFetchingRequest {
relay_parent: test_state.relay_parent,
para_id: test_state.para_id,
}
@@ -1129,8 +1365,8 @@ where
assert_matches!(
rx.await,
Ok(full_response) => {
let CollationFetchingResponse::Collation(receipt, pov): CollationFetchingResponse
= CollationFetchingResponse::decode(
let request_v1::CollationFetchingResponse::Collation(receipt, pov): request_v1::CollationFetchingResponse
= request_v1::CollationFetchingResponse::decode(
&mut full_response.result
.expect("We should have a proper answer").as_ref()
)
@@ -1159,7 +1395,8 @@ fn connect_to_buffered_groups() {
ReputationAggregator::new(|_| true),
|test_harness| async move {
let mut virtual_overseer = test_harness.virtual_overseer;
let mut req_cfg = test_harness.req_cfg;
let mut req_cfg = test_harness.req_v1_cfg;
let req_vstaging_cfg = test_harness.req_vstaging_cfg;
setup_system(&mut virtual_overseer, &test_state).await;
@@ -1167,7 +1404,13 @@ fn connect_to_buffered_groups() {
let peers_a = test_state.current_group_validator_peer_ids();
assert!(group_a.len() > 1);
distribute_collation(&mut virtual_overseer, &test_state, false).await;
distribute_collation(
&mut virtual_overseer,
&test_state,
test_state.relay_parent,
false,
)
.await;
assert_matches!(
overseer_recv(&mut virtual_overseer).await,
@@ -1181,7 +1424,8 @@ fn connect_to_buffered_groups() {
let head_a = test_state.relay_parent;
for (val, peer) in group_a.iter().zip(&peers_a) {
connect_peer(&mut virtual_overseer, *peer, Some(val.clone())).await;
connect_peer(&mut virtual_overseer, *peer, CollationVersion::V1, Some(val.clone()))
.await;
}
for peer_id in &peers_a {
@@ -1191,7 +1435,7 @@ fn connect_to_buffered_groups() {
// Update views.
for peer_id in &peers_a {
send_peer_view_change(&mut virtual_overseer, peer_id, vec![head_a]).await;
expect_advertise_collation_msg(&mut virtual_overseer, peer_id, head_a).await;
expect_advertise_collation_msg(&mut virtual_overseer, peer_id, head_a, None).await;
}
let peer = peers_a[0];
@@ -1203,7 +1447,7 @@ fn connect_to_buffered_groups() {
.unwrap()
.send(RawIncomingRequest {
peer,
payload: CollationFetchingRequest {
payload: request_v1::CollationFetchingRequest {
relay_parent: head_a,
para_id: test_state.para_id,
}
@@ -1215,14 +1459,17 @@ fn connect_to_buffered_groups() {
assert_matches!(
rx.await,
Ok(full_response) => {
let CollationFetchingResponse::Collation(..): CollationFetchingResponse =
CollationFetchingResponse::decode(
let request_v1::CollationFetchingResponse::Collation(..) =
request_v1::CollationFetchingResponse::decode(
&mut full_response.result.expect("We should have a proper answer").as_ref(),
)
.expect("Decoding should work");
}
);
// Let the subsystem process the collation event.
test_helpers::Yield::new().await;
test_state.advance_to_new_round(&mut virtual_overseer, true).await;
test_state.group_rotation_info = test_state.group_rotation_info.bump_rotation();
@@ -1231,7 +1478,13 @@ fn connect_to_buffered_groups() {
assert_ne!(head_a, head_b);
assert_ne!(group_a, group_b);
distribute_collation(&mut virtual_overseer, &test_state, false).await;
distribute_collation(
&mut virtual_overseer,
&test_state,
test_state.relay_parent,
false,
)
.await;
// Should be connected to both groups except for the validator that fetched advertised
// collation.
@@ -1248,7 +1501,7 @@ fn connect_to_buffered_groups() {
}
);
TestHarness { virtual_overseer, req_cfg }
TestHarness { virtual_overseer, req_v1_cfg: req_cfg, req_vstaging_cfg }
},
);
}
@@ -0,0 +1,575 @@
// Copyright 2022 Parity Technologies (UK) Ltd.
// This file is part of Polkadot.
// Polkadot is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// Polkadot is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with Polkadot. If not, see <http://www.gnu.org/licenses/>.
//! Tests for the collator side with enabled prospective parachains.
use super::*;
use polkadot_node_subsystem::messages::{ChainApiMessage, ProspectiveParachainsMessage};
use polkadot_primitives::{vstaging as vstaging_primitives, Header, OccupiedCore};
const ASYNC_BACKING_PARAMETERS: vstaging_primitives::AsyncBackingParams =
vstaging_primitives::AsyncBackingParams { max_candidate_depth: 4, allowed_ancestry_len: 3 };
/// Deterministic test "chain": the parent of a hash is the hash with its
/// low 64 bits incremented by one.
fn get_parent_hash(hash: Hash) -> Hash {
Hash::from_low_u64_be(hash.to_low_u64_be() + 1)
}
/// Handle a view update.
async fn update_view(
virtual_overseer: &mut VirtualOverseer,
test_state: &TestState,
new_view: Vec<(Hash, u32)>, // Hash and block number.
activated: u8, // How many new heads does this update contain?
) {
let new_view: HashMap<Hash, u32> = HashMap::from_iter(new_view);
let our_view =
OurView::new(new_view.keys().map(|hash| (*hash, Arc::new(jaeger::Span::Disabled))), 0);
overseer_send(
virtual_overseer,
CollatorProtocolMessage::NetworkBridgeUpdate(NetworkBridgeEvent::OurViewChange(our_view)),
)
.await;
let mut next_overseer_message = None;
for _ in 0..activated {
let (leaf_hash, leaf_number) = assert_matches!(
overseer_recv(virtual_overseer).await,
AllMessages::RuntimeApi(RuntimeApiMessage::Request(
parent,
RuntimeApiRequest::StagingAsyncBackingParams(tx),
)) => {
tx.send(Ok(ASYNC_BACKING_PARAMETERS)).unwrap();
(parent, new_view.get(&parent).copied().expect("Unknown parent requested"))
}
);
let min_number = leaf_number.saturating_sub(ASYNC_BACKING_PARAMETERS.allowed_ancestry_len);
assert_matches!(
overseer_recv(virtual_overseer).await,
AllMessages::ProspectiveParachains(
ProspectiveParachainsMessage::GetMinimumRelayParents(parent, tx),
) if parent == leaf_hash => {
tx.send(vec![(test_state.para_id, min_number)]).unwrap();
}
);
let ancestry_len = leaf_number + 1 - min_number;
let ancestry_hashes = std::iter::successors(Some(leaf_hash), |h| Some(get_parent_hash(*h)))
.take(ancestry_len as usize);
let ancestry_numbers = (min_number..=leaf_number).rev();
let mut ancestry_iter = ancestry_hashes.clone().zip(ancestry_numbers).peekable();
while let Some((hash, number)) = ancestry_iter.next() {
// May be `None` for the last element.
let parent_hash =
ancestry_iter.peek().map(|(h, _)| *h).unwrap_or_else(|| get_parent_hash(hash));
let msg = match next_overseer_message.take() {
Some(msg) => Some(msg),
None =>
overseer_recv_with_timeout(virtual_overseer, Duration::from_millis(50)).await,
};
let msg = match msg {
Some(msg) => msg,
None => {
// We're done.
return
},
};
if !matches!(
&msg,
AllMessages::ChainApi(ChainApiMessage::BlockHeader(_hash, ..))
if *_hash == hash
) {
// Ancestry has already been cached for this leaf.
next_overseer_message.replace(msg);
break
}
assert_matches!(
msg,
AllMessages::ChainApi(ChainApiMessage::BlockHeader(.., tx)) => {
let header = Header {
parent_hash,
number,
state_root: Hash::zero(),
extrinsics_root: Hash::zero(),
digest: Default::default(),
};
tx.send(Ok(Some(header))).unwrap();
}
);
}
}
}
/// Check that the next received message is a `Declare` message.
pub(super) async fn expect_declare_msg_vstaging(
virtual_overseer: &mut VirtualOverseer,
test_state: &TestState,
peer: &PeerId,
) {
assert_matches!(
overseer_recv(virtual_overseer).await,
AllMessages::NetworkBridgeTx(NetworkBridgeTxMessage::SendCollationMessage(
to,
Versioned::VStaging(protocol_vstaging::CollationProtocol::CollatorProtocol(
wire_message,
)),
)) => {
assert_eq!(to[0], *peer);
assert_matches!(
wire_message,
protocol_vstaging::CollatorProtocolMessage::Declare(
collator_id,
para_id,
signature,
) => {
assert!(signature.verify(
&*protocol_vstaging::declare_signature_payload(&test_state.local_peer_id),
&collator_id),
);
assert_eq!(collator_id, test_state.collator_pair.public());
assert_eq!(para_id, test_state.para_id);
}
);
}
);
}
/// Test that a collator distributes a collation from the allowed ancestry
/// to the correct validator group.
#[test]
fn distribute_collation_from_implicit_view() {
let head_a = Hash::from_low_u64_be(126);
let head_a_num: u32 = 66;
// Grandparent of head `a`.
let head_b = Hash::from_low_u64_be(128);
let head_b_num: u32 = 64;
// Grandparent of head `b`.
let head_c = Hash::from_low_u64_be(130);
let head_c_num = 62;
let group_rotation_info = GroupRotationInfo {
session_start_block: head_c_num - 2,
group_rotation_frequency: 3,
now: head_c_num,
};
let mut test_state = TestState::default();
test_state.group_rotation_info = group_rotation_info;
let local_peer_id = test_state.local_peer_id;
let collator_pair = test_state.collator_pair.clone();
test_harness(
local_peer_id,
collator_pair,
ReputationAggregator::new(|_| true),
|mut test_harness| async move {
let virtual_overseer = &mut test_harness.virtual_overseer;
// Set collating para id.
overseer_send(virtual_overseer, CollatorProtocolMessage::CollateOn(test_state.para_id))
.await;
// Activated leaf is `b`, but the collation will be based on `c`.
update_view(virtual_overseer, &test_state, vec![(head_b, head_b_num)], 1).await;
let validator_peer_ids = test_state.current_group_validator_peer_ids();
for (val, peer) in test_state
.current_group_validator_authority_ids()
.into_iter()
.zip(validator_peer_ids.clone())
{
connect_peer(virtual_overseer, peer, CollationVersion::VStaging, Some(val.clone()))
.await;
}
// Collator declared itself to each peer.
for peer_id in &validator_peer_ids {
expect_declare_msg_vstaging(virtual_overseer, &test_state, peer_id).await;
}
let pov = PoV { block_data: BlockData(vec![1, 2, 3]) };
let parent_head_data_hash = Hash::repeat_byte(0xAA);
let candidate = TestCandidateBuilder {
para_id: test_state.para_id,
relay_parent: head_c,
pov_hash: pov.hash(),
..Default::default()
}
.build();
let DistributeCollation { candidate, pov_block: _ } =
distribute_collation_with_receipt(
virtual_overseer,
&test_state,
head_c,
false, // Check the group manually.
candidate,
pov,
parent_head_data_hash,
)
.await;
assert_matches!(
overseer_recv(virtual_overseer).await,
AllMessages::NetworkBridgeTx(
NetworkBridgeTxMessage::ConnectToValidators { validator_ids, .. }
) => {
let expected_validators = test_state.current_group_validator_authority_ids();
assert_eq!(expected_validators, validator_ids);
}
);
let candidate_hash = candidate.hash();
// Update peer views.
for peer_id in &validator_peer_ids {
send_peer_view_change(virtual_overseer, peer_id, vec![head_b]).await;
expect_advertise_collation_msg(
virtual_overseer,
peer_id,
head_c,
Some(vec![candidate_hash]),
)
.await;
}
// Head `c` goes out of view.
// Build a different candidate for this relay parent and attempt to distribute it.
update_view(virtual_overseer, &test_state, vec![(head_a, head_a_num)], 1).await;
let pov = PoV { block_data: BlockData(vec![4, 5, 6]) };
let parent_head_data_hash = Hash::repeat_byte(0xBB);
let candidate = TestCandidateBuilder {
para_id: test_state.para_id,
relay_parent: head_c,
pov_hash: pov.hash(),
..Default::default()
}
.build();
overseer_send(
virtual_overseer,
CollatorProtocolMessage::DistributeCollation(
candidate.clone(),
parent_head_data_hash,
pov.clone(),
None,
),
)
.await;
// Parent out of view, nothing happens.
assert!(overseer_recv_with_timeout(virtual_overseer, Duration::from_millis(100))
.await
.is_none());
test_harness
},
)
}
/// Tests that a collator can distribute up to `max_candidate_depth + 1`
/// candidates per relay parent.
#[test]
fn distribute_collation_up_to_limit() {
let test_state = TestState::default();
let local_peer_id = test_state.local_peer_id;
let collator_pair = test_state.collator_pair.clone();
test_harness(
local_peer_id,
collator_pair,
ReputationAggregator::new(|_| true),
|mut test_harness| async move {
let virtual_overseer = &mut test_harness.virtual_overseer;
let head_a = Hash::from_low_u64_be(128);
let head_a_num: u32 = 64;
// Grandparent of head `a`.
let head_b = Hash::from_low_u64_be(130);
// Set collating para id.
overseer_send(virtual_overseer, CollatorProtocolMessage::CollateOn(test_state.para_id))
.await;
// Activated leaf is `a`, but the collation will be based on `b`.
update_view(virtual_overseer, &test_state, vec![(head_a, head_a_num)], 1).await;
for i in 0..(ASYNC_BACKING_PARAMETERS.max_candidate_depth + 1) {
let pov = PoV { block_data: BlockData(vec![i as u8]) };
let parent_head_data_hash = Hash::repeat_byte(0xAA);
let candidate = TestCandidateBuilder {
para_id: test_state.para_id,
relay_parent: head_b,
pov_hash: pov.hash(),
..Default::default()
}
.build();
distribute_collation_with_receipt(
virtual_overseer,
&test_state,
head_b,
true,
candidate,
pov,
parent_head_data_hash,
)
.await;
}
let pov = PoV { block_data: BlockData(vec![10, 12, 6]) };
let parent_head_data_hash = Hash::repeat_byte(0xBB);
let candidate = TestCandidateBuilder {
para_id: test_state.para_id,
relay_parent: head_b,
pov_hash: pov.hash(),
..Default::default()
}
.build();
overseer_send(
virtual_overseer,
CollatorProtocolMessage::DistributeCollation(
candidate.clone(),
parent_head_data_hash,
pov.clone(),
None,
),
)
.await;
// Limit has been reached.
assert!(overseer_recv_with_timeout(virtual_overseer, Duration::from_millis(100))
.await
.is_none());
test_harness
},
)
}
/// Tests that the collator correctly handles vstaging peer requests.
#[test]
fn advertise_and_send_collation_by_hash() {
let test_state = TestState::default();
let local_peer_id = test_state.local_peer_id;
let collator_pair = test_state.collator_pair.clone();
test_harness(
local_peer_id,
collator_pair,
ReputationAggregator::new(|_| true),
|test_harness| async move {
let mut virtual_overseer = test_harness.virtual_overseer;
let req_v1_cfg = test_harness.req_v1_cfg;
let mut req_vstaging_cfg = test_harness.req_vstaging_cfg;
let head_a = Hash::from_low_u64_be(128);
let head_a_num: u32 = 64;
// Parent of head `a`.
let head_b = Hash::from_low_u64_be(129);
let head_b_num: u32 = 63;
// Set collating para id.
overseer_send(
&mut virtual_overseer,
CollatorProtocolMessage::CollateOn(test_state.para_id),
)
.await;
update_view(&mut virtual_overseer, &test_state, vec![(head_b, head_b_num)], 1).await;
update_view(&mut virtual_overseer, &test_state, vec![(head_a, head_a_num)], 1).await;
let candidates: Vec<_> = (0..2)
.map(|i| {
let pov = PoV { block_data: BlockData(vec![i as u8]) };
let candidate = TestCandidateBuilder {
para_id: test_state.para_id,
relay_parent: head_b,
pov_hash: pov.hash(),
..Default::default()
}
.build();
(candidate, pov)
})
.collect();
for (candidate, pov) in &candidates {
distribute_collation_with_receipt(
&mut virtual_overseer,
&test_state,
head_b,
true,
candidate.clone(),
pov.clone(),
Hash::zero(),
)
.await;
}
let peer = test_state.validator_peer_id[0];
let validator_id = test_state.current_group_validator_authority_ids()[0].clone();
connect_peer(
&mut virtual_overseer,
peer,
CollationVersion::VStaging,
Some(validator_id.clone()),
)
.await;
expect_declare_msg_vstaging(&mut virtual_overseer, &test_state, &peer).await;
// Head `b` is not a leaf, but both advertisements are still relevant.
send_peer_view_change(&mut virtual_overseer, &peer, vec![head_b]).await;
let hashes: Vec<_> = candidates.iter().map(|(candidate, _)| candidate.hash()).collect();
expect_advertise_collation_msg(&mut virtual_overseer, &peer, head_b, Some(hashes))
.await;
for (candidate, pov_block) in candidates {
let (pending_response, rx) = oneshot::channel();
req_vstaging_cfg
.inbound_queue
.as_mut()
.unwrap()
.send(RawIncomingRequest {
peer,
payload: request_vstaging::CollationFetchingRequest {
relay_parent: head_b,
para_id: test_state.para_id,
candidate_hash: candidate.hash(),
}
.encode(),
pending_response,
})
.await
.unwrap();
assert_matches!(
rx.await,
Ok(full_response) => {
// Response is the same for vstaging.
let request_v1::CollationFetchingResponse::Collation(receipt, pov): request_v1::CollationFetchingResponse
= request_v1::CollationFetchingResponse::decode(
&mut full_response.result
.expect("We should have a proper answer").as_ref()
)
.expect("Decoding should work");
assert_eq!(receipt, candidate);
assert_eq!(pov, pov_block);
}
);
}
TestHarness { virtual_overseer, req_v1_cfg, req_vstaging_cfg }
},
)
}
/// Tests that the collator distributes a collation built on top of an occupied core.
#[test]
fn advertise_core_occupied() {
let mut test_state = TestState::default();
let candidate =
TestCandidateBuilder { para_id: test_state.para_id, ..Default::default() }.build();
test_state.availability_cores[0] = CoreState::Occupied(OccupiedCore {
next_up_on_available: None,
occupied_since: 0,
time_out_at: 0,
next_up_on_time_out: None,
availability: BitVec::default(),
group_responsible: GroupIndex(0),
candidate_hash: candidate.hash(),
candidate_descriptor: candidate.descriptor,
});
let local_peer_id = test_state.local_peer_id;
let collator_pair = test_state.collator_pair.clone();
test_harness(
local_peer_id,
collator_pair,
ReputationAggregator::new(|_| true),
|mut test_harness| async move {
let virtual_overseer = &mut test_harness.virtual_overseer;
let head_a = Hash::from_low_u64_be(128);
let head_a_num: u32 = 64;
// Grandparent of head `a`.
let head_b = Hash::from_low_u64_be(130);
// Set collating para id.
overseer_send(virtual_overseer, CollatorProtocolMessage::CollateOn(test_state.para_id))
.await;
// Activated leaf is `a`, but the collation will be based on `b`.
update_view(virtual_overseer, &test_state, vec![(head_a, head_a_num)], 1).await;
let pov = PoV { block_data: BlockData(vec![1, 2, 3]) };
let candidate = TestCandidateBuilder {
para_id: test_state.para_id,
relay_parent: head_b,
pov_hash: pov.hash(),
..Default::default()
}
.build();
let candidate_hash = candidate.hash();
distribute_collation_with_receipt(
virtual_overseer,
&test_state,
head_b,
true,
candidate,
pov,
Hash::zero(),
)
.await;
let validators = test_state.current_group_validator_authority_ids();
let peer_ids = test_state.current_group_validator_peer_ids();
connect_peer(
virtual_overseer,
peer_ids[0],
CollationVersion::VStaging,
Some(validators[0].clone()),
)
.await;
expect_declare_msg_vstaging(virtual_overseer, &test_state, &peer_ids[0]).await;
// Peer is aware of the leaf.
send_peer_view_change(virtual_overseer, &peer_ids[0], vec![head_a]).await;
// Collation is advertised.
expect_advertise_collation_msg(
virtual_overseer,
&peer_ids[0],
head_b,
Some(vec![candidate_hash]),
)
.await;
test_harness
},
)
}
@@ -31,13 +31,19 @@
use std::{
collections::{HashMap, VecDeque},
future::Future,
num::NonZeroUsize,
ops::Range,
pin::Pin,
task::{Context, Poll},
time::Duration,
};
use bitvec::{bitvec, vec::BitVec};
use futures::FutureExt;
use polkadot_primitives::{AuthorityDiscoveryId, GroupIndex, Hash, SessionIndex};
use polkadot_node_network_protocol::PeerId;
use polkadot_primitives::{AuthorityDiscoveryId, CandidateHash, GroupIndex, SessionIndex};
/// The ring buffer stores at most this many unique validator groups.
///
@@ -66,9 +72,9 @@ pub struct ValidatorGroupsBuffer {
group_infos: VecDeque<ValidatorsGroupInfo>,
/// Continuous buffer of validators discovery keys.
validators: VecDeque<AuthorityDiscoveryId>,
/// Mapping from relay-parent to bit-vectors with bits for all `validators`.
/// Mapping from candidate hashes to bit-vectors with bits for all `validators`.
/// Invariants kept: All bit-vectors are guaranteed to have the same size.
should_be_connected: HashMap<Hash, BitVec>,
should_be_connected: HashMap<CandidateHash, BitVec>,
/// Buffer capacity, limits the number of **groups** tracked.
cap: NonZeroUsize,
}
@@ -107,7 +113,7 @@ impl ValidatorGroupsBuffer {
/// of the buffer.
pub fn note_collation_advertised(
&mut self,
relay_parent: Hash,
candidate_hash: CandidateHash,
session_index: SessionIndex,
group_index: GroupIndex,
validators: &[AuthorityDiscoveryId],
@@ -121,19 +127,19 @@ impl ValidatorGroupsBuffer {
}) {
Some((idx, group)) => {
let group_start_idx = self.group_lengths_iter().take(idx).sum();
self.set_bits(relay_parent, group_start_idx..(group_start_idx + group.len));
self.set_bits(candidate_hash, group_start_idx..(group_start_idx + group.len));
},
None => self.push(relay_parent, session_index, group_index, validators),
None => self.push(candidate_hash, session_index, group_index, validators),
}
}
/// Note that a validator is no longer interested in a given relay parent.
pub fn reset_validator_interest(
&mut self,
relay_parent: Hash,
candidate_hash: CandidateHash,
authority_id: &AuthorityDiscoveryId,
) {
let bits = match self.should_be_connected.get_mut(&relay_parent) {
let bits = match self.should_be_connected.get_mut(&candidate_hash) {
Some(bits) => bits,
None => return,
};
@@ -145,17 +151,12 @@ impl ValidatorGroupsBuffer {
}
}
/// Remove relay parent from the buffer.
/// Remove advertised candidate from the buffer.
///
/// The buffer will no longer track which validators are interested in a corresponding
/// advertisement.
pub fn remove_relay_parent(&mut self, relay_parent: &Hash) {
self.should_be_connected.remove(relay_parent);
}
/// Removes all advertisements from the buffer.
pub fn clear_advertisements(&mut self) {
self.should_be_connected.clear();
pub fn remove_candidate(&mut self, candidate_hash: &CandidateHash) {
self.should_be_connected.remove(candidate_hash);
}
/// Pushes a new group to the buffer along with advertisement, setting all validators
@@ -164,7 +165,7 @@ impl ValidatorGroupsBuffer {
/// If the buffer is full, drops group from the tail.
fn push(
&mut self,
relay_parent: Hash,
candidate_hash: CandidateHash,
session_index: SessionIndex,
group_index: GroupIndex,
validators: &[AuthorityDiscoveryId],
@@ -193,17 +194,17 @@ impl ValidatorGroupsBuffer {
self.should_be_connected
.values_mut()
.for_each(|bits| bits.resize(new_len, false));
self.set_bits(relay_parent, group_start_idx..(group_start_idx + validators.len()));
self.set_bits(candidate_hash, group_start_idx..(group_start_idx + validators.len()));
}
/// Sets advertisement bits to 1 in a given range (usually corresponding to some group).
/// If the relay parent is unknown, inserts 0-initialized bitvec first.
///
/// The range must be ensured to be within bounds.
fn set_bits(&mut self, relay_parent: Hash, range: Range<usize>) {
fn set_bits(&mut self, candidate_hash: CandidateHash, range: Range<usize>) {
let bits = self
.should_be_connected
.entry(relay_parent)
.entry(candidate_hash)
.or_insert_with(|| bitvec![0; self.validators.len()]);
bits[range].fill(true);
@@ -217,9 +218,40 @@ impl ValidatorGroupsBuffer {
}
}
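With candidate hashes as keys, the lifecycle of a single advertisement in the buffer looks roughly like this (illustrative only; `session_index`, `group_index` and `validators` are assumed bindings):

```rust
let mut buf = ValidatorGroupsBuffer::with_capacity(NonZeroUsize::new(3).unwrap());
let candidate = CandidateHash(Hash::repeat_byte(0x1));

// Advertise: mark every validator of the assigned group as interested.
buf.note_collation_advertised(candidate, session_index, group_index, &validators);
// One validator fetched the PoV (or timed out): clear its interest.
buf.reset_validator_interest(candidate, &validators[0]);
// The candidate was backed or expired: forget the advertisement.
buf.remove_candidate(&candidate);
```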
/// A timeout for resetting validators' interests in collations.
pub const RESET_INTEREST_TIMEOUT: Duration = Duration::from_secs(6);
/// A future that resolves to a candidate hash and a peer id once the
/// timeout is hit.
///
/// If a validator doesn't manage to fetch a collation within this timeout,
/// we should reset its interest in this advertisement in the buffer, for
/// example when the PoV was already requested from another peer.
pub struct ResetInterestTimeout {
fut: futures_timer::Delay,
candidate_hash: CandidateHash,
peer_id: PeerId,
}
impl ResetInterestTimeout {
/// Returns new `ResetInterestTimeout` that resolves after given timeout.
pub fn new(candidate_hash: CandidateHash, peer_id: PeerId, delay: Duration) -> Self {
Self { fut: futures_timer::Delay::new(delay), candidate_hash, peer_id }
}
}
impl Future for ResetInterestTimeout {
type Output = (CandidateHash, PeerId);
fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
self.fut.poll_unpin(cx).map(|_| (self.candidate_hash, self.peer_id))
}
}
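A sketch of how these timeouts are meant to be driven, assuming they sit in a `FuturesUnordered` alongside the other background tasks and that the peer id has already been resolved to an `authority_id`:

```rust
use futures::{stream::FuturesUnordered, StreamExt};

async fn drive_timeouts(
    buffer: &mut ValidatorGroupsBuffer,
    authority_id: &AuthorityDiscoveryId,
    mut timeouts: FuturesUnordered<ResetInterestTimeout>,
) {
    // Whichever timeout fires first yields its candidate/peer pair.
    if let Some((candidate_hash, _peer_id)) = timeouts.next().await {
        // The peer failed to fetch in time; clear its interest so the
        // connection is not kept alive for a stale advertisement.
        buffer.reset_validator_interest(candidate_hash, authority_id);
    }
}
```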
#[cfg(test)]
mod tests {
use super::*;
use polkadot_primitives::Hash;
use sp_keyring::Sr25519Keyring;
#[test]
@@ -227,8 +259,8 @@ mod tests {
let cap = NonZeroUsize::new(1).unwrap();
let mut buf = ValidatorGroupsBuffer::with_capacity(cap);
let hash_a = Hash::repeat_byte(0x1);
let hash_b = Hash::repeat_byte(0x2);
let hash_a = CandidateHash(Hash::repeat_byte(0x1));
let hash_b = CandidateHash(Hash::repeat_byte(0x2));
let validators: Vec<_> = [
Sr25519Keyring::Alice,
@@ -263,7 +295,7 @@ mod tests {
let cap = NonZeroUsize::new(3).unwrap();
let mut buf = ValidatorGroupsBuffer::with_capacity(cap);
let hashes: Vec<_> = (0..5).map(Hash::repeat_byte).collect();
let hashes: Vec<_> = (0..5).map(|i| CandidateHash(Hash::repeat_byte(i))).collect();
let validators: Vec<_> = [
Sr25519Keyring::Alice,
@@ -17,10 +17,12 @@
//! Error handling related code and Error/Result definitions.
use futures::channel::oneshot;
use polkadot_node_network_protocol::request_response::incoming;
use polkadot_node_primitives::UncheckedSignedFullStatement;
use polkadot_node_subsystem::errors::SubsystemError;
use polkadot_node_subsystem_util::runtime;
use polkadot_node_subsystem::{errors::SubsystemError, RuntimeApiError};
use polkadot_node_subsystem_util::{backing_implicit_view, runtime};
use crate::LOG_TARGET;
@@ -44,10 +46,78 @@ pub enum Error {
#[error("Error while accessing runtime information")]
Runtime(#[from] runtime::Error),
#[error("Error while accessing Runtime API")]
RuntimeApi(#[from] RuntimeApiError),
#[error(transparent)]
ImplicitViewFetchError(backing_implicit_view::FetchError),
#[error("Response receiver for active validators request cancelled")]
CancelledActiveValidators(oneshot::Canceled),
#[error("Response receiver for validator groups request cancelled")]
CancelledValidatorGroups(oneshot::Canceled),
#[error("Response receiver for availability cores request cancelled")]
CancelledAvailabilityCores(oneshot::Canceled),
#[error("CollationSeconded contained statement with invalid signature")]
InvalidStatementSignature(UncheckedSignedFullStatement),
}
/// An error happened on the validator side of the protocol when attempting
/// to start seconding a candidate.
#[derive(Debug, thiserror::Error)]
pub enum SecondingError {
#[error("Error while accessing Runtime API")]
RuntimeApi(#[from] RuntimeApiError),
#[error("Response receiver for persisted validation data request cancelled")]
CancelledRuntimePersistedValidationData(oneshot::Canceled),
#[error("Response receiver for prospective validation data request cancelled")]
CancelledProspectiveValidationData(oneshot::Canceled),
#[error("Persisted validation data is not available")]
PersistedValidationDataNotFound,
#[error("Persisted validation data hash doesn't match one in the candidate receipt.")]
PersistedValidationDataMismatch,
#[error("Candidate hash doesn't match the advertisement")]
CandidateHashMismatch,
#[error("Received duplicate collation from the peer")]
Duplicate,
}
impl SecondingError {
/// Returns true if an error indicates that a peer is malicious.
pub fn is_malicious(&self) -> bool {
use SecondingError::*;
matches!(self, PersistedValidationDataMismatch | CandidateHashMismatch | Duplicate)
}
}
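// Hedged sanity sketch (test-only, not exhaustive): errors proving the peer
// sent bad data are classified as malicious, while missing local data is not.
#[cfg(test)]
mod seconding_error_sketch {
	use super::SecondingError;

	#[test]
	fn malicious_classification() {
		assert!(SecondingError::Duplicate.is_malicious());
		assert!(SecondingError::CandidateHashMismatch.is_malicious());
		assert!(!SecondingError::PersistedValidationDataNotFound.is_malicious());
	}
}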
/// A validator failed to request a collation due to an error.
#[derive(Debug, thiserror::Error)]
pub enum FetchError {
#[error("Collation was not previously advertised")]
NotAdvertised,
#[error("Peer is unknown")]
UnknownPeer,
#[error("Collation was already requested")]
AlreadyRequested,
#[error("Relay parent went out of view")]
RelayParentOutOfView,
#[error("Peer's protocol doesn't match the advertisement")]
ProtocolMismatch,
}
/// Utility for eating top-level errors and logging them.
///
/// We basically always want to try and continue on error. This utility function is meant to
@@ -32,7 +32,7 @@ use polkadot_node_subsystem_util::reputation::ReputationAggregator;
use sp_keystore::KeystorePtr;
use polkadot_node_network_protocol::{
request_response::{v1 as request_v1, vstaging as protocol_vstaging, IncomingRequestReceiver},
PeerId, UnifiedReputationChange as Rep,
};
use polkadot_primitives::CollatorPair;
@@ -76,12 +76,19 @@ pub enum ProtocolSide {
metrics: validator_side::Metrics,
},
/// Collators operate on a parachain.
Collator {
/// Local peer id.
peer_id: PeerId,
/// Parachain collator pair.
collator_pair: CollatorPair,
/// Receiver for v1 collation fetching requests.
request_receiver_v1: IncomingRequestReceiver<request_v1::CollationFetchingRequest>,
/// Receiver for vstaging collation fetching requests.
request_receiver_vstaging:
IncomingRequestReceiver<protocol_vstaging::CollationFetchingRequest>,
/// Metrics.
metrics: collator_side::Metrics,
},
/// No protocol side, just disable it.
None,
}
@@ -110,10 +117,22 @@ impl<Context> CollatorProtocolSubsystem {
validator_side::run(ctx, keystore, eviction_policy, metrics)
.map_err(|e| SubsystemError::with_origin("collator-protocol", e))
.boxed(),
ProtocolSide::Collator {
peer_id,
collator_pair,
request_receiver_v1,
request_receiver_vstaging,
metrics,
} => collator_side::run(
ctx,
peer_id,
collator_pair,
request_receiver_v1,
request_receiver_vstaging,
metrics,
)
.map_err(|e| SubsystemError::with_origin("collator-protocol", e))
.boxed(),
ProtocolSide::None => return DummySubsystem.start(ctx),
};
@@ -0,0 +1,366 @@
// Copyright 2017-2022 Parity Technologies (UK) Ltd.
// This file is part of Polkadot.
// Polkadot is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// Polkadot is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with Polkadot. If not, see <http://www.gnu.org/licenses/>.
//! Primitives for tracking collations-related data.
//!
//! Usually the path of a collation is as follows:
//! 1. First, the collation must be advertised by a collator.
//! 2. If the advertisement was accepted, it's queued for fetch (per relay parent).
//! 3. Once it's requested, the collation is said to be Pending.
//! 4. A Pending collation becomes Fetched once received; we then send it to backing for validation.
//! 5. If it turns out to be invalid, or async backing allows seconding another candidate, we carry
//!    on with the next advertisement; otherwise we're done with this relay parent.
//!
//! ┌──────────────────────────────────────────┐
//! └─▶Advertised ─▶ Pending ─▶ Fetched ─▶ Validated
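// Illustrative only (not part of the subsystem): a minimal sketch of the state
// path drawn above, with one hypothetical `Stage` variant per node of the
// diagram.
#[allow(dead_code)]
mod collation_path_sketch {
	/// The lifecycle stages a single collation moves through.
	pub enum Stage {
		Advertised,
		Pending,
		Fetched,
		Validated,
	}

	impl Stage {
		/// Advance one step along the path; `None` once validated.
		pub fn advance(self) -> Option<Stage> {
			match self {
				Stage::Advertised => Some(Stage::Pending),
				Stage::Pending => Some(Stage::Fetched),
				Stage::Fetched => Some(Stage::Validated),
				Stage::Validated => None,
			}
		}
	}
}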
use std::{collections::VecDeque, future::Future, pin::Pin, task::Poll};
use futures::{future::BoxFuture, FutureExt};
use polkadot_node_network_protocol::{
request_response::{outgoing::RequestError, v1 as request_v1, OutgoingResult},
PeerId,
};
use polkadot_node_primitives::PoV;
use polkadot_node_subsystem::jaeger;
use polkadot_node_subsystem_util::{
metrics::prometheus::prometheus::HistogramTimer, runtime::ProspectiveParachainsMode,
};
use polkadot_primitives::{
CandidateHash, CandidateReceipt, CollatorId, Hash, Id as ParaId, PersistedValidationData,
};
use tokio_util::sync::CancellationToken;
use crate::{error::SecondingError, LOG_TARGET};
/// Candidate supplied with a para head it's built on top of.
#[derive(Debug, Copy, Clone, Hash, Eq, PartialEq)]
pub struct ProspectiveCandidate {
/// Candidate hash.
pub candidate_hash: CandidateHash,
/// Parent head-data hash as supplied in advertisement.
pub parent_head_data_hash: Hash,
}
impl ProspectiveCandidate {
pub fn candidate_hash(&self) -> CandidateHash {
self.candidate_hash
}
}
/// Identifier of a fetched collation.
#[derive(Debug, Clone, Hash, Eq, PartialEq)]
pub struct FetchedCollation {
/// Candidate's relay parent.
pub relay_parent: Hash,
/// Parachain id.
pub para_id: ParaId,
/// Candidate hash.
pub candidate_hash: CandidateHash,
/// Id of the collator the collation was fetched from.
pub collator_id: CollatorId,
}
impl From<&CandidateReceipt<Hash>> for FetchedCollation {
fn from(receipt: &CandidateReceipt<Hash>) -> Self {
let descriptor = receipt.descriptor();
Self {
relay_parent: descriptor.relay_parent,
para_id: descriptor.para_id,
candidate_hash: receipt.hash(),
collator_id: descriptor.collator.clone(),
}
}
}
/// Identifier of a collation being requested.
#[derive(Debug, Copy, Clone, Hash, Eq, PartialEq)]
pub struct PendingCollation {
/// Candidate's relay parent.
pub relay_parent: Hash,
/// Parachain id.
pub para_id: ParaId,
/// Peer that advertised this collation.
pub peer_id: PeerId,
/// Optional candidate hash and parent head-data hash if they were
/// supplied in the advertisement.
pub prospective_candidate: Option<ProspectiveCandidate>,
/// Hash of the candidate's commitments.
pub commitments_hash: Option<Hash>,
}
impl PendingCollation {
pub fn new(
relay_parent: Hash,
para_id: ParaId,
peer_id: &PeerId,
prospective_candidate: Option<ProspectiveCandidate>,
) -> Self {
Self {
relay_parent,
para_id,
peer_id: *peer_id,
prospective_candidate,
commitments_hash: None,
}
}
}
/// A vstaging advertisement that was rejected by the backing
/// subsystem. The validator may fetch it later if its fragment
/// membership is recognized before the relay parent goes out of view.
#[derive(Debug, Clone)]
pub struct BlockedAdvertisement {
/// Peer that advertised the collation.
pub peer_id: PeerId,
/// Collator id.
pub collator_id: CollatorId,
/// The relay-parent of the candidate.
pub candidate_relay_parent: Hash,
/// Hash of the candidate.
pub candidate_hash: CandidateHash,
}
/// Performs a sanity check between advertised and fetched collations.
///
/// Since the persisted validation data is constructed using the advertised
/// parent head data hash, the latter doesn't require an additional check.
pub fn fetched_collation_sanity_check(
advertised: &PendingCollation,
fetched: &CandidateReceipt,
persisted_validation_data: &PersistedValidationData,
) -> Result<(), SecondingError> {
if persisted_validation_data.hash() != fetched.descriptor().persisted_validation_data_hash {
Err(SecondingError::PersistedValidationDataMismatch)
} else if advertised
.prospective_candidate
.map_or(false, |pc| pc.candidate_hash() != fetched.hash())
{
Err(SecondingError::CandidateHashMismatch)
} else {
Ok(())
}
}
/// Identifier for a requested collation and the respective collator that advertised it.
#[derive(Debug, Clone)]
pub struct CollationEvent {
/// Collator id.
pub collator_id: CollatorId,
/// The requested collation data.
pub pending_collation: PendingCollation,
}
/// Fetched collation data.
#[derive(Debug, Clone)]
pub struct PendingCollationFetch {
/// Collation identifier.
pub collation_event: CollationEvent,
/// Candidate receipt.
pub candidate_receipt: CandidateReceipt,
/// Proof of validity.
pub pov: PoV,
}
/// The status of the collations in [`Collations`].
#[derive(Debug, Clone, Copy)]
pub enum CollationStatus {
/// We are waiting for a collation to be advertised to us.
Waiting,
/// We are currently fetching a collation.
Fetching,
/// We are waiting for a collation to be validated.
WaitingOnValidation,
/// We have seconded a collation.
Seconded,
}
impl Default for CollationStatus {
fn default() -> Self {
Self::Waiting
}
}
impl CollationStatus {
/// Downgrades to `Waiting`, but only if `self != Seconded`.
fn back_to_waiting(&mut self, relay_parent_mode: ProspectiveParachainsMode) {
match self {
Self::Seconded =>
if relay_parent_mode.is_enabled() {
// With async backing enabled it's allowed to
// second more candidates.
*self = Self::Waiting
},
_ => *self = Self::Waiting,
}
}
}
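// Hedged sketch (test-only, not part of the subsystem) of the rule above: a
// `Seconded` status only downgrades back to `Waiting` when prospective
// parachains (async backing) are enabled for the relay parent.
#[cfg(test)]
mod back_to_waiting_sketch {
	use super::*;

	#[test]
	fn seconded_only_resets_with_async_backing() {
		let mut status = CollationStatus::Seconded;
		status.back_to_waiting(ProspectiveParachainsMode::Disabled);
		assert!(matches!(status, CollationStatus::Seconded));

		let mut status = CollationStatus::Seconded;
		status.back_to_waiting(ProspectiveParachainsMode::Enabled {
			max_candidate_depth: 4,
			allowed_ancestry_len: 3,
		});
		assert!(matches!(status, CollationStatus::Waiting));
	}
}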
/// Information about collations per relay parent.
#[derive(Default)]
pub struct Collations {
/// What is the current status in regard to a collation for this relay parent?
pub status: CollationStatus,
/// Collator we're fetching from, optionally which candidate was requested.
///
/// This is the currently last started fetch, which did not exceed `MAX_UNSHARED_DOWNLOAD_TIME`
/// yet.
pub fetching_from: Option<(CollatorId, Option<CandidateHash>)>,
/// Collations that were advertised to us but that we have not fetched yet.
pub waiting_queue: VecDeque<(PendingCollation, CollatorId)>,
/// How many collations have been seconded.
pub seconded_count: usize,
}
impl Collations {
/// Note a seconded collation for a given para.
pub(super) fn note_seconded(&mut self) {
self.seconded_count += 1
}
/// Returns the next collation to fetch from the `waiting_queue`.
///
/// This will reset the status back to `Waiting` using [`CollationStatus::back_to_waiting`].
///
/// Returns `Some(_)` if there is a collation to fetch, the `status` is not `Seconded`, and
/// the passed-in `finished_one` matches the fetch we are currently waiting on.
pub(super) fn get_next_collation_to_fetch(
&mut self,
finished_one: &(CollatorId, Option<CandidateHash>),
relay_parent_mode: ProspectiveParachainsMode,
) -> Option<(PendingCollation, CollatorId)> {
// If `finished_one` does not match the collator we are currently fetching from, then we
// already dequeued another fetch to replace it.
if let Some((collator_id, maybe_candidate_hash)) = self.fetching_from.as_ref() {
// If a candidate hash was saved previously, `finished_one` must include this too.
if collator_id != &finished_one.0 &&
maybe_candidate_hash.map_or(true, |hash| Some(&hash) != finished_one.1.as_ref())
{
gum::trace!(
target: LOG_TARGET,
waiting_collation = ?self.fetching_from,
?finished_one,
"Not proceeding to the next collation - has already been done."
);
return None
}
}
self.status.back_to_waiting(relay_parent_mode);
match self.status {
// We don't need to fetch any other collation when we already have seconded one.
CollationStatus::Seconded => None,
CollationStatus::Waiting =>
if !self.is_seconded_limit_reached(relay_parent_mode) {
None
} else {
self.waiting_queue.pop_front()
},
CollationStatus::WaitingOnValidation | CollationStatus::Fetching =>
unreachable!("We have reset the status above!"),
}
}
/// Checks the limit of seconded candidates for a given para. Note the inverted sense of
/// the return value: `true` while the limit has not been reached yet, i.e. while another
/// candidate may still be seconded.
pub(super) fn is_seconded_limit_reached(
&self,
relay_parent_mode: ProspectiveParachainsMode,
) -> bool {
let seconded_limit =
if let ProspectiveParachainsMode::Enabled { max_candidate_depth, .. } =
relay_parent_mode
{
max_candidate_depth + 1
} else {
1
};
self.seconded_count < seconded_limit
}
}
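// A hedged worked example of the arithmetic above (test-only sketch): with
// prospective parachains enabled and `max_candidate_depth = 4`, up to
// `4 + 1 = 5` candidates may be seconded per relay parent; with them disabled
// the limit is 1.
#[cfg(test)]
mod seconded_limit_sketch {
	use super::*;

	#[test]
	fn limits_follow_relay_parent_mode() {
		let enabled = ProspectiveParachainsMode::Enabled {
			max_candidate_depth: 4,
			allowed_ancestry_len: 3,
		};
		let mut collations = Collations::default();

		// Nothing seconded yet: both modes still allow seconding.
		assert!(collations.is_seconded_limit_reached(enabled));
		assert!(collations.is_seconded_limit_reached(ProspectiveParachainsMode::Disabled));

		// After one seconding the sync-backing limit of 1 is exhausted, while
		// async backing still allows up to `max_candidate_depth + 1` in total.
		collations.note_seconded();
		assert!(collations.is_seconded_limit_reached(enabled));
		assert!(!collations.is_seconded_limit_reached(ProspectiveParachainsMode::Disabled));
	}
}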
/// Any error that can occur when awaiting a collation fetch response.
#[derive(Debug, thiserror::Error)]
pub(super) enum CollationFetchError {
#[error("Future was cancelled.")]
Cancelled,
#[error("{0}")]
Request(#[from] RequestError),
}
/// Future that concludes when the collator has responded to our collation fetch request
/// or the request was cancelled by the validator.
pub(super) struct CollationFetchRequest {
/// Info about the requested collation.
pub pending_collation: PendingCollation,
/// Collator id.
pub collator_id: CollatorId,
/// Responses from collator.
pub from_collator: BoxFuture<'static, OutgoingResult<request_v1::CollationFetchingResponse>>,
/// Handle used for checking if this request was cancelled.
pub cancellation_token: CancellationToken,
/// A jaeger span corresponding to the lifetime of the request.
pub span: Option<jaeger::Span>,
/// A metric histogram for the lifetime of the request.
pub _lifetime_timer: Option<HistogramTimer>,
}
impl Future for CollationFetchRequest {
type Output = (
CollationEvent,
std::result::Result<request_v1::CollationFetchingResponse, CollationFetchError>,
);
fn poll(mut self: Pin<&mut Self>, cx: &mut std::task::Context<'_>) -> Poll<Self::Output> {
// First check if this fetch request was cancelled.
let cancelled = match std::pin::pin!(self.cancellation_token.cancelled()).poll(cx) {
Poll::Ready(()) => true,
Poll::Pending => false,
};
if cancelled {
self.span.as_mut().map(|s| s.add_string_tag("success", "false"));
return Poll::Ready((
CollationEvent {
collator_id: self.collator_id.clone(),
pending_collation: self.pending_collation,
},
Err(CollationFetchError::Cancelled),
))
}
let res = self.from_collator.poll_unpin(cx).map(|res| {
(
CollationEvent {
collator_id: self.collator_id.clone(),
pending_collation: self.pending_collation,
},
res.map_err(CollationFetchError::Request),
)
});
match &res {
Poll::Ready((_, Ok(request_v1::CollationFetchingResponse::Collation(..)))) => {
self.span.as_mut().map(|s| s.add_string_tag("success", "true"));
},
Poll::Ready((_, Err(_))) => {
self.span.as_mut().map(|s| s.add_string_tag("success", "false"));
},
_ => {},
};
res
}
}
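// A minimal standalone sketch (test-only, not subsystem code) of the
// cancellation pattern used by `CollationFetchRequest::poll` above: the
// cancellation token is checked first, so a cancelled request resolves without
// ever waiting on the collator's response.
#[cfg(test)]
mod cancellation_sketch {
	use futures::{executor, future, FutureExt};
	use tokio_util::sync::CancellationToken;

	#[test]
	fn cancelled_request_wins_the_race() {
		let token = CancellationToken::new();
		token.cancel();
		// Race cancellation against a response that never arrives.
		let cancelled = std::pin::pin!(token.cancelled());
		let race = future::select(cancelled, future::pending::<()>().boxed());
		assert!(matches!(executor::block_on(race), future::Either::Left(_)));
	}
}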
@@ -0,0 +1,142 @@
// Copyright 2017-2023 Parity Technologies (UK) Ltd.
// This file is part of Polkadot.
// Polkadot is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// Polkadot is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with Polkadot. If not, see <http://www.gnu.org/licenses/>.
use polkadot_node_subsystem_util::metrics::{self, prometheus};
#[derive(Clone, Default)]
pub struct Metrics(Option<MetricsInner>);
impl Metrics {
pub fn on_request(&self, succeeded: std::result::Result<(), ()>) {
if let Some(metrics) = &self.0 {
match succeeded {
Ok(()) => metrics.collation_requests.with_label_values(&["succeeded"]).inc(),
Err(()) => metrics.collation_requests.with_label_values(&["failed"]).inc(),
}
}
}
/// Provide a timer for `process_msg` which observes on drop.
pub fn time_process_msg(&self) -> Option<metrics::prometheus::prometheus::HistogramTimer> {
self.0.as_ref().map(|metrics| metrics.process_msg.start_timer())
}
/// Provide a timer for `handle_collation_request_result` which observes on drop.
pub fn time_handle_collation_request_result(
&self,
) -> Option<metrics::prometheus::prometheus::HistogramTimer> {
self.0
.as_ref()
.map(|metrics| metrics.handle_collation_request_result.start_timer())
}
/// Note the current number of collator peers.
pub fn note_collator_peer_count(&self, collator_peers: usize) {
self.0
.as_ref()
.map(|metrics| metrics.collator_peer_count.set(collator_peers as u64));
}
/// Provide a timer for `CollationFetchRequest` structure which observes on drop.
pub fn time_collation_request_duration(
&self,
) -> Option<metrics::prometheus::prometheus::HistogramTimer> {
self.0.as_ref().map(|metrics| metrics.collation_request_duration.start_timer())
}
/// Provide a timer for `request_unblocked_collations` which observes on drop.
pub fn time_request_unblocked_collations(
&self,
) -> Option<metrics::prometheus::prometheus::HistogramTimer> {
self.0
.as_ref()
.map(|metrics| metrics.request_unblocked_collations.start_timer())
}
}
#[derive(Clone)]
struct MetricsInner {
collation_requests: prometheus::CounterVec<prometheus::U64>,
process_msg: prometheus::Histogram,
handle_collation_request_result: prometheus::Histogram,
collator_peer_count: prometheus::Gauge<prometheus::U64>,
collation_request_duration: prometheus::Histogram,
request_unblocked_collations: prometheus::Histogram,
}
impl metrics::Metrics for Metrics {
fn try_register(
registry: &prometheus::Registry,
) -> std::result::Result<Self, prometheus::PrometheusError> {
let metrics = MetricsInner {
collation_requests: prometheus::register(
prometheus::CounterVec::new(
prometheus::Opts::new(
"polkadot_parachain_collation_requests_total",
"Number of collations requested from Collators.",
),
&["success"],
)?,
registry,
)?,
process_msg: prometheus::register(
prometheus::Histogram::with_opts(
prometheus::HistogramOpts::new(
"polkadot_parachain_collator_protocol_validator_process_msg",
"Time spent within `collator_protocol_validator::process_msg`",
)
)?,
registry,
)?,
handle_collation_request_result: prometheus::register(
prometheus::Histogram::with_opts(
prometheus::HistogramOpts::new(
"polkadot_parachain_collator_protocol_validator_handle_collation_request_result",
"Time spent within `collator_protocol_validator::handle_collation_request_result`",
)
)?,
registry,
)?,
collator_peer_count: prometheus::register(
prometheus::Gauge::new(
"polkadot_parachain_collator_peer_count",
"Amount of collator peers connected",
)?,
registry,
)?,
collation_request_duration: prometheus::register(
prometheus::Histogram::with_opts(
prometheus::HistogramOpts::new(
"polkadot_parachain_collator_protocol_validator_collation_request_duration",
"Lifetime of the `CollationFetchRequest` structure",
).buckets(vec![0.05, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.75, 0.9, 1.0, 1.2, 1.5, 1.75]),
)?,
registry,
)?,
request_unblocked_collations: prometheus::register(
prometheus::Histogram::with_opts(
prometheus::HistogramOpts::new(
"polkadot_parachain_collator_protocol_validator_request_unblocked_collations",
"Time spent within `collator_protocol_validator::request_unblocked_collations`",
)
)?,
registry,
)?,
};
Ok(Metrics(Some(metrics)))
}
}
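// Hedged usage sketch: each `time_*` helper returns an optional
// `HistogramTimer` guard that observes the elapsed time when dropped, so a
// caller measures a section simply by keeping the guard alive. The function
// below is illustrative and not part of the subsystem.
#[allow(dead_code)]
fn example_time_process_msg(metrics: &Metrics) {
	let _timer = metrics.time_process_msg(); // `None` when metrics are not registered.
	// ... message handling would happen here ...
} // `_timer` is dropped here, recording the duration in the histogram.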
(File diff suppressed because it is too large.)
@@ -19,8 +19,8 @@ use assert_matches::assert_matches;
use futures::{executor, future, Future};
use sp_core::{crypto::Pair, Encode};
use sp_keyring::Sr25519Keyring;
use sp_keystore::Keystore;
use std::{iter, sync::Arc, time::Duration};
use polkadot_node_network_protocol::{
our_view,
@@ -28,24 +28,39 @@ use polkadot_node_network_protocol::{
request_response::{Requests, ResponseSender},
ObservedRole,
};
use polkadot_node_primitives::{BlockData, PoV};
use polkadot_node_subsystem::{
errors::RuntimeApiError,
messages::{AllMessages, ReportPeerMessage, RuntimeApiMessage, RuntimeApiRequest},
};
use polkadot_node_subsystem_test_helpers as test_helpers;
use polkadot_node_subsystem_util::{reputation::add_reputation, TimeoutExt};
use polkadot_primitives::{
CandidateReceipt, CollatorPair, CoreState, GroupIndex, GroupRotationInfo, HeadData,
OccupiedCore, PersistedValidationData, ScheduledCore, ValidatorId, ValidatorIndex,
};
use polkadot_primitives_test_helpers::{
dummy_candidate_descriptor, dummy_candidate_receipt_bad_sig, dummy_hash,
};
mod prospective_parachains;
const ACTIVITY_TIMEOUT: Duration = Duration::from_millis(500);
const DECLARE_TIMEOUT: Duration = Duration::from_millis(25);
const REPUTATION_CHANGE_TEST_INTERVAL: Duration = Duration::from_millis(10);
const ASYNC_BACKING_DISABLED_ERROR: RuntimeApiError =
RuntimeApiError::NotSupported { runtime_api_name: "test-runtime" };
fn dummy_pvd() -> PersistedValidationData {
PersistedValidationData {
parent_head: HeadData(vec![7, 8, 9]),
relay_parent_number: 5,
max_pov_size: 1024,
relay_parent_storage_root: Default::default(),
}
}
#[derive(Clone)]
struct TestState {
chain_ids: Vec<ParaId>,
@@ -120,6 +135,7 @@ type VirtualOverseer = test_helpers::TestSubsystemContextHandle<CollatorProtocol
struct TestHarness {
virtual_overseer: VirtualOverseer,
keystore: KeystorePtr,
}
fn test_harness<T: Future<Output = VirtualOverseer>>(
@@ -136,17 +152,17 @@ fn test_harness<T: Future<Output = VirtualOverseer>>(
let (context, virtual_overseer) = test_helpers::make_subsystem_context(pool.clone());
let keystore = Arc::new(sc_keystore::LocalKeystore::in_memory());
Keystore::sr25519_generate_new(
&*keystore,
polkadot_primitives::PARACHAIN_KEY_TYPE_ID,
Some(&Sr25519Keyring::Alice.to_seed()),
)
.expect("Insert key into keystore");
let subsystem = run_inner(
context,
keystore.clone(),
crate::CollatorEvictionPolicy {
inactive_collator: ACTIVITY_TIMEOUT,
undeclared: DECLARE_TIMEOUT,
@@ -156,7 +172,7 @@ fn test_harness<T: Future<Output = VirtualOverseer>>(
REPUTATION_CHANGE_TEST_INTERVAL,
);
let test_fut = test(TestHarness { virtual_overseer, keystore });
futures::pin_mut!(test_fut);
futures::pin_mut!(subsystem);
@@ -253,16 +269,53 @@ async fn assert_candidate_backing_second(
expected_relay_parent: Hash,
expected_para_id: ParaId,
expected_pov: &PoV,
mode: ProspectiveParachainsMode,
) -> CandidateReceipt {
let pvd = dummy_pvd();
// Depending on the relay parent mode, the PVD will be requested either
// from the runtime API or from the prospective parachains subsystem.
let msg = overseer_recv(virtual_overseer).await;
match mode {
ProspectiveParachainsMode::Disabled => assert_matches!(
msg,
AllMessages::RuntimeApi(RuntimeApiMessage::Request(
hash,
RuntimeApiRequest::PersistedValidationData(para_id, assumption, tx),
)) => {
assert_eq!(expected_relay_parent, hash);
assert_eq!(expected_para_id, para_id);
assert_eq!(OccupiedCoreAssumption::Free, assumption);
tx.send(Ok(Some(pvd.clone()))).unwrap();
}
),
ProspectiveParachainsMode::Enabled { .. } => assert_matches!(
msg,
AllMessages::ProspectiveParachains(
ProspectiveParachainsMessage::GetProspectiveValidationData(request, tx),
) => {
assert_eq!(expected_relay_parent, request.candidate_relay_parent);
assert_eq!(expected_para_id, request.para_id);
tx.send(Some(pvd.clone())).unwrap();
}
),
}
assert_matches!(
overseer_recv(virtual_overseer).await,
AllMessages::CandidateBacking(CandidateBackingMessage::Second(
relay_parent,
candidate_receipt,
received_pvd,
incoming_pov,
)) => {
assert_eq!(expected_relay_parent, relay_parent);
assert_eq!(expected_para_id, candidate_receipt.descriptor.para_id);
assert_eq!(*expected_pov, incoming_pov);
assert_eq!(pvd, received_pvd);
candidate_receipt
}
)
}
/// Assert that a collator got disconnected.
@@ -284,6 +337,7 @@ async fn assert_fetch_collation_request(
virtual_overseer: &mut VirtualOverseer,
relay_parent: Hash,
para_id: ParaId,
candidate_hash: Option<CandidateHash>,
) -> ResponseSender {
assert_matches!(
overseer_recv(virtual_overseer).await,
@@ -291,14 +345,26 @@ async fn assert_fetch_collation_request(
) => {
let req = reqs.into_iter().next()
.expect("There should be exactly one request");
match candidate_hash {
None => assert_matches!(
req,
Requests::CollationFetchingV1(req) => {
let payload = req.payload;
assert_eq!(payload.relay_parent, relay_parent);
assert_eq!(payload.para_id, para_id);
req.pending_response
}
),
Some(candidate_hash) => assert_matches!(
req,
Requests::CollationFetchingVStaging(req) => {
let payload = req.payload;
assert_eq!(payload.relay_parent, relay_parent);
assert_eq!(payload.para_id, para_id);
assert_eq!(payload.candidate_hash, candidate_hash);
req.pending_response
}
),
}
})
}
@@ -309,27 +375,38 @@ async fn connect_and_declare_collator(
peer: PeerId,
collator: CollatorPair,
para_id: ParaId,
version: CollationVersion,
) {
overseer_send(
virtual_overseer,
CollatorProtocolMessage::NetworkBridgeUpdate(NetworkBridgeEvent::PeerConnected(
peer,
ObservedRole::Full,
version.into(),
None,
)),
)
.await;
let wire_message = match version {
CollationVersion::V1 => Versioned::V1(protocol_v1::CollatorProtocolMessage::Declare(
collator.public(),
para_id,
collator.sign(&protocol_v1::declare_signature_payload(&peer)),
)),
CollationVersion::VStaging =>
Versioned::VStaging(protocol_vstaging::CollatorProtocolMessage::Declare(
collator.public(),
para_id,
collator.sign(&protocol_v1::declare_signature_payload(&peer)),
)),
};
overseer_send(
virtual_overseer,
CollatorProtocolMessage::NetworkBridgeUpdate(NetworkBridgeEvent::PeerMessage(
peer,
wire_message,
)),
)
.await;
@@ -340,24 +417,48 @@ async fn advertise_collation(
virtual_overseer: &mut VirtualOverseer,
peer: PeerId,
relay_parent: Hash,
candidate: Option<(CandidateHash, Hash)>, // Candidate hash + parent head data hash.
) {
let wire_message = match candidate {
Some((candidate_hash, parent_head_data_hash)) =>
Versioned::VStaging(protocol_vstaging::CollatorProtocolMessage::AdvertiseCollation {
relay_parent,
candidate_hash,
parent_head_data_hash,
}),
None =>
Versioned::V1(protocol_v1::CollatorProtocolMessage::AdvertiseCollation(relay_parent)),
};
overseer_send(
virtual_overseer,
CollatorProtocolMessage::NetworkBridgeUpdate(NetworkBridgeEvent::PeerMessage(
peer,
wire_message,
)),
)
.await;
}
async fn assert_async_backing_params_request(virtual_overseer: &mut VirtualOverseer, hash: Hash) {
assert_matches!(
overseer_recv(virtual_overseer).await,
AllMessages::RuntimeApi(RuntimeApiMessage::Request(
relay_parent,
RuntimeApiRequest::StagingAsyncBackingParams(tx)
)) => {
assert_eq!(relay_parent, hash);
tx.send(Err(ASYNC_BACKING_DISABLED_ERROR)).unwrap();
}
);
}
// As we receive a relevant advertisement act on it and issue a collation request.
#[test]
fn act_on_advertisement() {
let test_state = TestState::default();
test_harness(ReputationAggregator::new(|_| true), |test_harness| async move {
let TestHarness { mut virtual_overseer, .. } = test_harness;
let pair = CollatorPair::generate().0;
gum::trace!("activating");
@@ -370,6 +471,7 @@ fn act_on_advertisement() {
)
.await;
assert_async_backing_params_request(&mut virtual_overseer, test_state.relay_parent).await;
respond_to_core_info_queries(&mut virtual_overseer, &test_state).await;
let peer_b = PeerId::random();
@@ -379,15 +481,74 @@ fn act_on_advertisement() {
peer_b,
pair.clone(),
test_state.chain_ids[0],
CollationVersion::V1,
)
.await;
advertise_collation(&mut virtual_overseer, peer_b, test_state.relay_parent, None).await;
assert_fetch_collation_request(
&mut virtual_overseer,
test_state.relay_parent,
test_state.chain_ids[0],
None,
)
.await;
virtual_overseer
});
}
/// Tests that the validator side works with the vstaging network protocol
/// before async backing is enabled.
#[test]
fn act_on_advertisement_vstaging() {
let test_state = TestState::default();
test_harness(ReputationAggregator::new(|_| true), |test_harness| async move {
let TestHarness { mut virtual_overseer, .. } = test_harness;
let pair = CollatorPair::generate().0;
gum::trace!("activating");
overseer_send(
&mut virtual_overseer,
CollatorProtocolMessage::NetworkBridgeUpdate(NetworkBridgeEvent::OurViewChange(
our_view![test_state.relay_parent],
)),
)
.await;
assert_async_backing_params_request(&mut virtual_overseer, test_state.relay_parent).await;
respond_to_core_info_queries(&mut virtual_overseer, &test_state).await;
let peer_b = PeerId::random();
connect_and_declare_collator(
&mut virtual_overseer,
peer_b,
pair.clone(),
test_state.chain_ids[0],
CollationVersion::VStaging,
)
.await;
let candidate_hash = CandidateHash::default();
let parent_head_data_hash = Hash::zero();
// vstaging advertisement.
advertise_collation(
&mut virtual_overseer,
peer_b,
test_state.relay_parent,
Some((candidate_hash, parent_head_data_hash)),
)
.await;
assert_fetch_collation_request(
&mut virtual_overseer,
test_state.relay_parent,
test_state.chain_ids[0],
Some(candidate_hash),
)
.await;
@@ -401,7 +562,7 @@ fn collator_reporting_works() {
let test_state = TestState::default();
test_harness(ReputationAggregator::new(|_| true), |test_harness| async move {
let TestHarness { mut virtual_overseer, .. } = test_harness;
overseer_send(
&mut virtual_overseer,
@@ -411,6 +572,8 @@ fn collator_reporting_works() {
)
.await;
assert_async_backing_params_request(&mut virtual_overseer, test_state.relay_parent).await;
respond_to_core_info_queries(&mut virtual_overseer, &test_state).await;
let peer_b = PeerId::random();
@@ -421,6 +584,7 @@ fn collator_reporting_works() {
peer_b,
test_state.collators[0].clone(),
test_state.chain_ids[0],
CollationVersion::V1,
)
.await;
@@ -429,6 +593,7 @@ fn collator_reporting_works() {
peer_c,
test_state.collators[1].clone(),
test_state.chain_ids[0],
CollationVersion::V1,
)
.await;
@@ -458,7 +623,7 @@ fn collator_authentication_verification_works() {
let test_state = TestState::default();
test_harness(ReputationAggregator::new(|_| true), |test_harness| async move {
let TestHarness { mut virtual_overseer, .. } = test_harness;
let peer_b = PeerId::random();
@@ -509,20 +674,24 @@ fn fetch_one_collation_at_a_time() {
let test_state = TestState::default();
test_harness(ReputationAggregator::new(|_| true), |test_harness| async move {
let TestHarness { mut virtual_overseer, .. } = test_harness;
let second = Hash::random();
let our_view = our_view![test_state.relay_parent, second];
overseer_send(
&mut virtual_overseer,
CollatorProtocolMessage::NetworkBridgeUpdate(NetworkBridgeEvent::OurViewChange(
our_view.clone(),
)),
)
.await;
// Iterate over the view, since the order may change due to the sorted invariant.
for hash in our_view.iter() {
assert_async_backing_params_request(&mut virtual_overseer, *hash).await;
respond_to_core_info_queries(&mut virtual_overseer, &test_state).await;
}
let peer_b = PeerId::random();
let peer_c = PeerId::random();
@@ -532,6 +701,7 @@ fn fetch_one_collation_at_a_time() {
peer_b,
test_state.collators[0].clone(),
test_state.chain_ids[0],
CollationVersion::V1,
)
.await;
@@ -540,16 +710,18 @@ fn fetch_one_collation_at_a_time() {
peer_c,
test_state.collators[1].clone(),
test_state.chain_ids[0],
CollationVersion::V1,
)
.await;
advertise_collation(&mut virtual_overseer, peer_b, test_state.relay_parent, None).await;
advertise_collation(&mut virtual_overseer, peer_c, test_state.relay_parent, None).await;
let response_channel = assert_fetch_collation_request(
&mut virtual_overseer,
test_state.relay_parent,
test_state.chain_ids[0],
None,
)
.await;
@@ -563,10 +735,13 @@ fn fetch_one_collation_at_a_time() {
dummy_candidate_receipt_bad_sig(dummy_hash(), Some(Default::default()));
candidate_a.descriptor.para_id = test_state.chain_ids[0];
candidate_a.descriptor.relay_parent = test_state.relay_parent;
candidate_a.descriptor.persisted_validation_data_hash = dummy_pvd().hash();
response_channel
.send(Ok(request_v1::CollationFetchingResponse::Collation(
candidate_a.clone(),
pov.clone(),
)
.encode()))
.expect("Sending response should succeed");
assert_candidate_backing_second(
@@ -574,6 +749,7 @@ fn fetch_one_collation_at_a_time() {
test_state.relay_parent,
test_state.chain_ids[0],
&pov,
ProspectiveParachainsMode::Disabled,
)
.await;
@@ -581,7 +757,7 @@ fn fetch_one_collation_at_a_time() {
test_helpers::Yield::new().await;
// Second collation is not requested since there's already seconded one.
assert_matches!(virtual_overseer.recv().now_or_never(), None);
virtual_overseer
})
@@ -594,20 +770,24 @@ fn fetches_next_collation() {
let test_state = TestState::default();
test_harness(ReputationAggregator::new(|_| true), |test_harness| async move {
let TestHarness { mut virtual_overseer, .. } = test_harness;
let second = Hash::random();
let our_view = our_view![test_state.relay_parent, second];
overseer_send(
&mut virtual_overseer,
CollatorProtocolMessage::NetworkBridgeUpdate(NetworkBridgeEvent::OurViewChange(
our_view.clone(),
)),
)
.await;
for hash in our_view.iter() {
assert_async_backing_params_request(&mut virtual_overseer, *hash).await;
respond_to_core_info_queries(&mut virtual_overseer, &test_state).await;
}
let peer_b = PeerId::random();
let peer_c = PeerId::random();
@@ -618,6 +798,7 @@ fn fetches_next_collation() {
peer_b,
test_state.collators[2].clone(),
test_state.chain_ids[0],
CollationVersion::V1,
)
.await;
@@ -626,6 +807,7 @@ fn fetches_next_collation() {
peer_c,
test_state.collators[3].clone(),
test_state.chain_ids[0],
CollationVersion::V1,
)
.await;
@@ -634,45 +816,64 @@ fn fetches_next_collation() {
peer_d,
test_state.collators[4].clone(),
test_state.chain_ids[0],
CollationVersion::V1,
)
.await;
advertise_collation(&mut virtual_overseer, peer_b, second, None).await;
advertise_collation(&mut virtual_overseer, peer_c, second, None).await;
advertise_collation(&mut virtual_overseer, peer_d, second, None).await;
// Dropping the response channel should lead to fetching the second collation.
assert_fetch_collation_request(
&mut virtual_overseer,
second,
test_state.chain_ids[0],
None,
)
.await;
let response_channel_non_exclusive = assert_fetch_collation_request(
&mut virtual_overseer,
second,
test_state.chain_ids[0],
None,
)
.await;
// Third collator should receive response after that timeout:
Delay::new(MAX_UNSHARED_DOWNLOAD_TIME + Duration::from_millis(50)).await;
let response_channel = assert_fetch_collation_request(
&mut virtual_overseer,
second,
test_state.chain_ids[0],
None,
)
.await;
let pov = PoV { block_data: BlockData(vec![1]) };
let mut candidate_a =
dummy_candidate_receipt_bad_sig(dummy_hash(), Some(Default::default()));
candidate_a.descriptor.para_id = test_state.chain_ids[0];
candidate_a.descriptor.relay_parent = second;
candidate_a.descriptor.persisted_validation_data_hash = dummy_pvd().hash();
// First request finishes now:
response_channel_non_exclusive
.send(Ok(request_v1::CollationFetchingResponse::Collation(
candidate_a.clone(),
pov.clone(),
)
.encode()))
.expect("Sending response should succeed");
response_channel
.send(Ok(request_v1::CollationFetchingResponse::Collation(
candidate_a.clone(),
pov.clone(),
)
.encode()))
.expect("Sending response should succeed");
assert_candidate_backing_second(
@@ -680,6 +881,7 @@ fn fetches_next_collation() {
second,
test_state.chain_ids[0],
&pov,
ProspectiveParachainsMode::Disabled,
)
.await;
@@ -692,7 +894,7 @@ fn reject_connection_to_next_group() {
let test_state = TestState::default();
test_harness(ReputationAggregator::new(|_| true), |test_harness| async move {
let TestHarness { mut virtual_overseer, .. } = test_harness;
overseer_send(
&mut virtual_overseer,
@@ -702,6 +904,7 @@ fn reject_connection_to_next_group() {
)
.await;
assert_async_backing_params_request(&mut virtual_overseer, test_state.relay_parent).await;
respond_to_core_info_queries(&mut virtual_overseer, &test_state).await;
let peer_b = PeerId::random();
@@ -711,6 +914,7 @@ fn reject_connection_to_next_group() {
peer_b,
test_state.collators[0].clone(),
test_state.chain_ids[1], // next, not current `para_id`
CollationVersion::V1,
)
.await;
@@ -737,20 +941,24 @@ fn fetch_next_collation_on_invalid_collation() {
let test_state = TestState::default();
test_harness(ReputationAggregator::new(|_| true), |test_harness| async move {
let TestHarness { mut virtual_overseer, .. } = test_harness;
let second = Hash::random();
let our_view = our_view![test_state.relay_parent, second];
overseer_send(
&mut virtual_overseer,
CollatorProtocolMessage::NetworkBridgeUpdate(NetworkBridgeEvent::OurViewChange(
our_view.clone(),
)),
)
.await;
for hash in our_view.iter() {
assert_async_backing_params_request(&mut virtual_overseer, *hash).await;
respond_to_core_info_queries(&mut virtual_overseer, &test_state).await;
}
let peer_b = PeerId::random();
let peer_c = PeerId::random();
@@ -760,6 +968,7 @@ fn fetch_next_collation_on_invalid_collation() {
peer_b,
test_state.collators[0].clone(),
test_state.chain_ids[0],
CollationVersion::V1,
)
.await;
@@ -768,16 +977,18 @@ fn fetch_next_collation_on_invalid_collation() {
peer_c,
test_state.collators[1].clone(),
test_state.chain_ids[0],
CollationVersion::V1,
)
.await;
advertise_collation(&mut virtual_overseer, peer_b, test_state.relay_parent, None).await;
advertise_collation(&mut virtual_overseer, peer_c, test_state.relay_parent, None).await;
let response_channel = assert_fetch_collation_request(
&mut virtual_overseer,
test_state.relay_parent,
test_state.chain_ids[0],
None,
)
.await;
@@ -786,10 +997,13 @@ fn fetch_next_collation_on_invalid_collation() {
dummy_candidate_receipt_bad_sig(dummy_hash(), Some(Default::default()));
candidate_a.descriptor.para_id = test_state.chain_ids[0];
candidate_a.descriptor.relay_parent = test_state.relay_parent;
candidate_a.descriptor.persisted_validation_data_hash = dummy_pvd().hash();
response_channel
.send(Ok(request_v1::CollationFetchingResponse::Collation(
candidate_a.clone(),
pov.clone(),
)
.encode()))
.expect("Sending response should succeed");
let receipt = assert_candidate_backing_second(
@@ -797,6 +1011,7 @@ fn fetch_next_collation_on_invalid_collation() {
test_state.relay_parent,
test_state.chain_ids[0],
&pov,
ProspectiveParachainsMode::Disabled,
)
.await;
@@ -822,6 +1037,7 @@ fn fetch_next_collation_on_invalid_collation() {
&mut virtual_overseer,
test_state.relay_parent,
test_state.chain_ids[0],
None,
)
.await;
@@ -834,7 +1050,7 @@ fn inactive_disconnected() {
let test_state = TestState::default();
test_harness(ReputationAggregator::new(|_| true), |test_harness| async move {
let TestHarness { mut virtual_overseer, .. } = test_harness;
let pair = CollatorPair::generate().0;
@@ -848,6 +1064,7 @@ fn inactive_disconnected() {
)
.await;
assert_async_backing_params_request(&mut virtual_overseer, hash_a).await;
respond_to_core_info_queries(&mut virtual_overseer, &test_state).await;
let peer_b = PeerId::random();
@@ -857,14 +1074,16 @@ fn inactive_disconnected() {
peer_b,
pair.clone(),
test_state.chain_ids[0],
CollationVersion::V1,
)
.await;
advertise_collation(&mut virtual_overseer, peer_b, test_state.relay_parent, None).await;
assert_fetch_collation_request(
&mut virtual_overseer,
test_state.relay_parent,
test_state.chain_ids[0],
None,
)
.await;
@@ -880,7 +1099,7 @@ fn activity_extends_life() {
let test_state = TestState::default();
test_harness(ReputationAggregator::new(|_| true), |test_harness| async move {
let TestHarness { mut virtual_overseer, .. } = test_harness;
let pair = CollatorPair::generate().0;
@@ -888,18 +1107,20 @@ fn activity_extends_life() {
let hash_b = Hash::repeat_byte(1);
let hash_c = Hash::repeat_byte(2);
let our_view = our_view![hash_a, hash_b, hash_c];
overseer_send(
&mut virtual_overseer,
CollatorProtocolMessage::NetworkBridgeUpdate(NetworkBridgeEvent::OurViewChange(
our_view.clone(),
)),
)
.await;
for hash in our_view.iter() {
assert_async_backing_params_request(&mut virtual_overseer, *hash).await;
respond_to_core_info_queries(&mut virtual_overseer, &test_state).await;
}
let peer_b = PeerId::random();
@@ -908,29 +1129,45 @@ fn activity_extends_life() {
peer_b,
pair.clone(),
test_state.chain_ids[0],
CollationVersion::V1,
)
.await;
Delay::new(ACTIVITY_TIMEOUT * 2 / 3).await;
advertise_collation(&mut virtual_overseer, peer_b, hash_a, None).await;
assert_fetch_collation_request(
&mut virtual_overseer,
hash_a,
test_state.chain_ids[0],
None,
)
.await;
Delay::new(ACTIVITY_TIMEOUT * 2 / 3).await;
advertise_collation(&mut virtual_overseer, peer_b, hash_b, None).await;
assert_fetch_collation_request(
&mut virtual_overseer,
hash_b,
test_state.chain_ids[0],
None,
)
.await;
Delay::new(ACTIVITY_TIMEOUT * 2 / 3).await;
advertise_collation(&mut virtual_overseer, peer_b, hash_c, None).await;
assert_fetch_collation_request(
&mut virtual_overseer,
hash_c,
test_state.chain_ids[0],
None,
)
.await;
Delay::new(ACTIVITY_TIMEOUT * 3 / 2).await;
@@ -945,7 +1182,7 @@ fn disconnect_if_no_declare() {
let test_state = TestState::default();
test_harness(ReputationAggregator::new(|_| true), |test_harness| async move {
let TestHarness { mut virtual_overseer, .. } = test_harness;
overseer_send(
&mut virtual_overseer,
@@ -955,6 +1192,7 @@ fn disconnect_if_no_declare() {
)
.await;
assert_async_backing_params_request(&mut virtual_overseer, test_state.relay_parent).await;
respond_to_core_info_queries(&mut virtual_overseer, &test_state).await;
let peer_b = PeerId::random();
@@ -981,7 +1219,7 @@ fn disconnect_if_wrong_declare() {
let test_state = TestState::default();
test_harness(ReputationAggregator::new(|_| true), |test_harness| async move {
let TestHarness { mut virtual_overseer, .. } = test_harness;
let pair = CollatorPair::generate().0;
@@ -993,6 +1231,7 @@ fn disconnect_if_wrong_declare() {
)
.await;
assert_async_backing_params_request(&mut virtual_overseer, test_state.relay_parent).await;
respond_to_core_info_queries(&mut virtual_overseer, &test_state).await;
let peer_b = PeerId::random();
@@ -1042,7 +1281,7 @@ fn delay_reputation_change() {
let test_state = TestState::default();
test_harness(ReputationAggregator::new(|_| false), |test_harness| async move {
let TestHarness { mut virtual_overseer, .. } = test_harness;
let pair = CollatorPair::generate().0;
@@ -1054,6 +1293,7 @@ fn delay_reputation_change() {
)
.await;
assert_async_backing_params_request(&mut virtual_overseer, test_state.relay_parent).await;
respond_to_core_info_queries(&mut virtual_overseer, &test_state).await;
let peer_b = PeerId::random();
@@ -1127,7 +1367,7 @@ fn view_change_clears_old_collators() {
let mut test_state = TestState::default();
test_harness(ReputationAggregator::new(|_| true), |test_harness| async move {
let TestHarness { mut virtual_overseer, .. } = test_harness;
let pair = CollatorPair::generate().0;
@@ -1139,6 +1379,7 @@ fn view_change_clears_old_collators() {
)
.await;
assert_async_backing_params_request(&mut virtual_overseer, test_state.relay_parent).await;
respond_to_core_info_queries(&mut virtual_overseer, &test_state).await;
let peer_b = PeerId::random();
@@ -1148,6 +1389,7 @@ fn view_change_clears_old_collators() {
peer_b,
pair.clone(),
test_state.chain_ids[0],
CollationVersion::V1,
)
.await;
@@ -1162,6 +1404,7 @@ fn view_change_clears_old_collators() {
.await;
test_state.group_rotation_info = test_state.group_rotation_info.bump_rotation();
assert_async_backing_params_request(&mut virtual_overseer, hash_b).await;
respond_to_core_info_queries(&mut virtual_overseer, &test_state).await;
assert_collator_disconnect(&mut virtual_overseer, peer_b).await;
@@ -0,0 +1,988 @@
// Copyright 2022 Parity Technologies (UK) Ltd.
// This file is part of Polkadot.
// Polkadot is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// Polkadot is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with Polkadot. If not, see <http://www.gnu.org/licenses/>.
//! Tests for the validator side with prospective parachains enabled.
use super::*;
use polkadot_node_subsystem::messages::ChainApiMessage;
use polkadot_primitives::{
vstaging as vstaging_primitives, BlockNumber, CandidateCommitments, CommittedCandidateReceipt,
Header, SigningContext, ValidatorId,
};
const ASYNC_BACKING_PARAMETERS: vstaging_primitives::AsyncBackingParams =
vstaging_primitives::AsyncBackingParams { max_candidate_depth: 4, allowed_ancestry_len: 3 };
fn get_parent_hash(hash: Hash) -> Hash {
Hash::from_low_u64_be(hash.to_low_u64_be() + 1)
}
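// Hedged illustration (test-only) of how `get_parent_hash` induces a
// deterministic ancestry: each step increments the low u64, so a lazy iterator
// of successors yields leaf, parent, grandparent, and so on, matching the
// chains constructed in `update_view` below.
#[test]
fn get_parent_hash_yields_linear_ancestry() {
	let leaf = Hash::from_low_u64_be(10);
	let chain: Vec<Hash> = std::iter::successors(Some(leaf), |h| Some(get_parent_hash(*h)))
		.take(3)
		.collect();
	assert_eq!(
		chain,
		vec![
			Hash::from_low_u64_be(10),
			Hash::from_low_u64_be(11),
			Hash::from_low_u64_be(12),
		]
	);
}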
async fn assert_assign_incoming(
virtual_overseer: &mut VirtualOverseer,
test_state: &TestState,
hash: Hash,
number: BlockNumber,
next_msg: &mut Option<AllMessages>,
) {
let msg = match next_msg.take() {
Some(msg) => msg,
None => overseer_recv(virtual_overseer).await,
};
assert_matches!(
msg,
AllMessages::RuntimeApi(
RuntimeApiMessage::Request(parent, RuntimeApiRequest::Validators(tx))
) if parent == hash => {
tx.send(Ok(test_state.validator_public.clone())).unwrap();
}
);
assert_matches!(
overseer_recv(virtual_overseer).await,
AllMessages::RuntimeApi(
RuntimeApiMessage::Request(parent, RuntimeApiRequest::ValidatorGroups(tx))
) if parent == hash => {
let validator_groups = test_state.validator_groups.clone();
let mut group_rotation_info = test_state.group_rotation_info.clone();
group_rotation_info.now = number;
tx.send(Ok((validator_groups, group_rotation_info))).unwrap();
}
);
assert_matches!(
overseer_recv(virtual_overseer).await,
AllMessages::RuntimeApi(
RuntimeApiMessage::Request(parent, RuntimeApiRequest::AvailabilityCores(tx))
) if parent == hash => {
tx.send(Ok(test_state.cores.clone())).unwrap();
}
);
}
/// Handle a view update.
async fn update_view(
virtual_overseer: &mut VirtualOverseer,
test_state: &TestState,
new_view: Vec<(Hash, u32)>, // Hash and block number.
activated: u8, // How many new heads does this update contain?
) -> Option<AllMessages> {
let new_view: HashMap<Hash, u32> = HashMap::from_iter(new_view);
let our_view =
OurView::new(new_view.keys().map(|hash| (*hash, Arc::new(jaeger::Span::Disabled))), 0);
overseer_send(
virtual_overseer,
CollatorProtocolMessage::NetworkBridgeUpdate(NetworkBridgeEvent::OurViewChange(our_view)),
)
.await;
let mut next_overseer_message = None;
for _ in 0..activated {
let (leaf_hash, leaf_number) = assert_matches!(
overseer_recv(virtual_overseer).await,
AllMessages::RuntimeApi(RuntimeApiMessage::Request(
parent,
RuntimeApiRequest::StagingAsyncBackingParams(tx),
)) => {
tx.send(Ok(ASYNC_BACKING_PARAMETERS)).unwrap();
(parent, new_view.get(&parent).copied().expect("Unknown parent requested"))
}
);
assert_assign_incoming(
virtual_overseer,
test_state,
leaf_hash,
leaf_number,
&mut next_overseer_message,
)
.await;
let min_number = leaf_number.saturating_sub(ASYNC_BACKING_PARAMETERS.allowed_ancestry_len);
assert_matches!(
overseer_recv(virtual_overseer).await,
AllMessages::ProspectiveParachains(
ProspectiveParachainsMessage::GetMinimumRelayParents(parent, tx),
) if parent == leaf_hash => {
tx.send(test_state.chain_ids.iter().map(|para_id| (*para_id, min_number)).collect()).unwrap();
}
);
let ancestry_len = leaf_number + 1 - min_number;
let ancestry_hashes = std::iter::successors(Some(leaf_hash), |h| Some(get_parent_hash(*h)))
.take(ancestry_len as usize);
let ancestry_numbers = (min_number..=leaf_number).rev();
let ancestry_iter = ancestry_hashes.clone().zip(ancestry_numbers).peekable();
// How many blocks were actually requested.
let mut requested_len: usize = 0;
{
let mut ancestry_iter = ancestry_iter.clone();
while let Some((hash, number)) = ancestry_iter.next() {
// May be `None` for the last element.
let parent_hash =
ancestry_iter.peek().map(|(h, _)| *h).unwrap_or_else(|| get_parent_hash(hash));
let msg = match next_overseer_message.take() {
Some(msg) => msg,
None => overseer_recv(virtual_overseer).await,
};
if !matches!(&msg, AllMessages::ChainApi(ChainApiMessage::BlockHeader(..))) {
// Ancestry has already been cached for this leaf.
next_overseer_message.replace(msg);
break
}
assert_matches!(
msg,
AllMessages::ChainApi(ChainApiMessage::BlockHeader(.., tx)) => {
let header = Header {
parent_hash,
number,
state_root: Hash::zero(),
extrinsics_root: Hash::zero(),
digest: Default::default(),
};
tx.send(Ok(Some(header))).unwrap();
}
);
requested_len += 1;
}
}
// Skip the leaf.
for (hash, number) in ancestry_iter.skip(1).take(requested_len.saturating_sub(1)) {
assert_assign_incoming(
virtual_overseer,
test_state,
hash,
number,
&mut next_overseer_message,
)
.await;
}
}
next_overseer_message
}
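// A hedged worked example (test-only) of the ancestry arithmetic used in
// `update_view` above: with `allowed_ancestry_len = 3` and a leaf at block
// number 2, the minimum accepted relay parent number saturates at 0, giving an
// ancestry of `2 + 1 - 0 = 3` blocks (numbers 2, 1 and 0).
#[test]
fn update_view_ancestry_arithmetic() {
	let leaf_number: u32 = 2;
	let min_number = leaf_number.saturating_sub(ASYNC_BACKING_PARAMETERS.allowed_ancestry_len);
	assert_eq!(min_number, 0);
	assert_eq!(leaf_number + 1 - min_number, 3);
}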
async fn send_seconded_statement(
virtual_overseer: &mut VirtualOverseer,
keystore: KeystorePtr,
candidate: &CommittedCandidateReceipt,
) {
let signing_context = SigningContext { session_index: 0, parent_hash: Hash::zero() };
let stmt = SignedFullStatement::sign(
&keystore,
Statement::Seconded(candidate.clone()),
&signing_context,
ValidatorIndex(0),
&ValidatorId::from(Sr25519Keyring::Alice.public()),
)
.ok()
.flatten()
.expect("should be signed");
overseer_send(
virtual_overseer,
CollatorProtocolMessage::Seconded(candidate.descriptor.relay_parent, stmt),
)
.await;
}
async fn assert_collation_seconded(
virtual_overseer: &mut VirtualOverseer,
relay_parent: Hash,
peer_id: PeerId,
) {
assert_matches!(
overseer_recv(virtual_overseer).await,
AllMessages::NetworkBridgeTx(NetworkBridgeTxMessage::ReportPeer(
ReportPeerMessage::Single(peer, rep)
)) => {
assert_eq!(peer_id, peer);
assert_eq!(rep.value, BENEFIT_NOTIFY_GOOD.cost_or_benefit());
}
);
assert_matches!(
overseer_recv(virtual_overseer).await,
AllMessages::NetworkBridgeTx(NetworkBridgeTxMessage::SendCollationMessage(
peers,
Versioned::VStaging(protocol_vstaging::CollationProtocol::CollatorProtocol(
protocol_vstaging::CollatorProtocolMessage::CollationSeconded(
_relay_parent,
..,
),
)),
)) => {
assert_eq!(peers, vec![peer_id]);
assert_eq!(relay_parent, _relay_parent);
}
);
}
#[test]
fn v1_advertisement_rejected() {
let test_state = TestState::default();
test_harness(ReputationAggregator::new(|_| true), |test_harness| async move {
let TestHarness { mut virtual_overseer, .. } = test_harness;
let pair_a = CollatorPair::generate().0;
let head_b = Hash::from_low_u64_be(128);
let head_b_num: u32 = 0;
update_view(&mut virtual_overseer, &test_state, vec![(head_b, head_b_num)], 1).await;
let peer_a = PeerId::random();
// Connect a collator that only speaks the V1 protocol.
connect_and_declare_collator(
&mut virtual_overseer,
peer_a,
pair_a.clone(),
test_state.chain_ids[0],
CollationVersion::V1,
)
.await;
advertise_collation(&mut virtual_overseer, peer_a, head_b, None).await;
// Not reported.
test_helpers::Yield::new().await;
assert_matches!(virtual_overseer.recv().now_or_never(), None);
virtual_overseer
});
}
#[test]
fn accept_advertisements_from_implicit_view() {
let test_state = TestState::default();
test_harness(ReputationAggregator::new(|_| true), |test_harness| async move {
let TestHarness { mut virtual_overseer, .. } = test_harness;
let pair_a = CollatorPair::generate().0;
let pair_b = CollatorPair::generate().0;
let head_b = Hash::from_low_u64_be(128);
let head_b_num: u32 = 2;
let head_c = get_parent_hash(head_b);
// Grandparent of head `b`.
// Group rotation frequency is 1 by default, at `d` we're assigned
// to the first para.
let head_d = get_parent_hash(head_c);
// Activated leaf is `b`, but the collation will be based on `c`.
update_view(&mut virtual_overseer, &test_state, vec![(head_b, head_b_num)], 1).await;
let peer_a = PeerId::random();
let peer_b = PeerId::random();
// Accept both collators from the implicit view.
connect_and_declare_collator(
&mut virtual_overseer,
peer_a,
pair_a.clone(),
test_state.chain_ids[0],
CollationVersion::VStaging,
)
.await;
connect_and_declare_collator(
&mut virtual_overseer,
peer_b,
pair_b.clone(),
test_state.chain_ids[1],
CollationVersion::VStaging,
)
.await;
let candidate_hash = CandidateHash::default();
let parent_head_data_hash = Hash::zero();
advertise_collation(
&mut virtual_overseer,
peer_b,
head_c,
Some((candidate_hash, parent_head_data_hash)),
)
.await;
assert_matches!(
overseer_recv(&mut virtual_overseer).await,
AllMessages::CandidateBacking(
CandidateBackingMessage::CanSecond(request, tx),
) => {
assert_eq!(request.candidate_hash, candidate_hash);
assert_eq!(request.candidate_para_id, test_state.chain_ids[1]);
assert_eq!(request.parent_head_data_hash, parent_head_data_hash);
tx.send(true).expect("receiving side should be alive");
}
);
assert_fetch_collation_request(
&mut virtual_overseer,
head_c,
test_state.chain_ids[1],
Some(candidate_hash),
)
.await;
// Advertise with different para.
advertise_collation(
&mut virtual_overseer,
peer_a,
head_d, // Note different relay parent.
Some((candidate_hash, parent_head_data_hash)),
)
.await;
assert_matches!(
overseer_recv(&mut virtual_overseer).await,
AllMessages::CandidateBacking(
CandidateBackingMessage::CanSecond(request, tx),
) => {
assert_eq!(request.candidate_hash, candidate_hash);
assert_eq!(request.candidate_para_id, test_state.chain_ids[0]);
assert_eq!(request.parent_head_data_hash, parent_head_data_hash);
tx.send(true).expect("receiving side should be alive");
}
);
assert_fetch_collation_request(
&mut virtual_overseer,
head_d,
test_state.chain_ids[0],
Some(candidate_hash),
)
.await;
virtual_overseer
});
}
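// With async backing, up to `max_candidate_depth + 1` candidates can be
// seconded per relay parent; advertisements beyond that limit are punished.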
#[test]
fn second_multiple_candidates_per_relay_parent() {
let test_state = TestState::default();
test_harness(ReputationAggregator::new(|_| true), |test_harness| async move {
let TestHarness { mut virtual_overseer, keystore } = test_harness;
let pair = CollatorPair::generate().0;
let head_b = Hash::from_low_u64_be(128);
let head_b_num: u32 = 2;
// Grandparent of head `b`.
// Group rotation frequency is 1 by default; at `c` we're assigned
// to the first para.
let head_c = Hash::from_low_u64_be(130);
// Activated leaf is `b`, but the collation will be based on `c`.
update_view(&mut virtual_overseer, &test_state, vec![(head_b, head_b_num)], 1).await;
let peer_a = PeerId::random();
connect_and_declare_collator(
&mut virtual_overseer,
peer_a,
pair.clone(),
test_state.chain_ids[0],
CollationVersion::VStaging,
)
.await;
for i in 0..(ASYNC_BACKING_PARAMETERS.max_candidate_depth + 1) {
let mut candidate = dummy_candidate_receipt_bad_sig(head_c, Some(Default::default()));
candidate.descriptor.para_id = test_state.chain_ids[0];
candidate.descriptor.persisted_validation_data_hash = dummy_pvd().hash();
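// Distinct head data gives each candidate in the loop a unique hash.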
let commitments = CandidateCommitments {
head_data: HeadData(vec![i as u8]),
horizontal_messages: Default::default(),
upward_messages: Default::default(),
new_validation_code: None,
processed_downward_messages: 0,
hrmp_watermark: 0,
};
candidate.commitments_hash = commitments.hash();
let candidate_hash = candidate.hash();
let parent_head_data_hash = Hash::zero();
advertise_collation(
&mut virtual_overseer,
peer_a,
head_c,
Some((candidate_hash, parent_head_data_hash)),
)
.await;
assert_matches!(
overseer_recv(&mut virtual_overseer).await,
AllMessages::CandidateBacking(
CandidateBackingMessage::CanSecond(request, tx),
) => {
assert_eq!(request.candidate_hash, candidate_hash);
assert_eq!(request.candidate_para_id, test_state.chain_ids[0]);
assert_eq!(request.parent_head_data_hash, parent_head_data_hash);
tx.send(true).expect("receiving side should be alive");
}
);
let response_channel = assert_fetch_collation_request(
&mut virtual_overseer,
head_c,
test_state.chain_ids[0],
Some(candidate_hash),
)
.await;
let pov = PoV { block_data: BlockData(vec![1]) };
response_channel
.send(Ok(request_vstaging::CollationFetchingResponse::Collation(
candidate.clone(),
pov.clone(),
)
.encode()))
.expect("Sending response should succeed");
assert_candidate_backing_second(
&mut virtual_overseer,
head_c,
test_state.chain_ids[0],
&pov,
ProspectiveParachainsMode::Enabled {
max_candidate_depth: ASYNC_BACKING_PARAMETERS.max_candidate_depth as _,
allowed_ancestry_len: ASYNC_BACKING_PARAMETERS.allowed_ancestry_len as _,
},
)
.await;
let candidate =
CommittedCandidateReceipt { descriptor: candidate.descriptor, commitments };
send_seconded_statement(&mut virtual_overseer, keystore.clone(), &candidate).await;
assert_collation_seconded(&mut virtual_overseer, head_c, peer_a).await;
}
// No more advertisements can be made for this relay parent.
let candidate_hash = CandidateHash(Hash::repeat_byte(0xAA));
advertise_collation(
&mut virtual_overseer,
peer_a,
head_c,
Some((candidate_hash, Hash::zero())),
)
.await;
// Reported: the limit of advertisements for this relay parent has been reached.
assert_matches!(
overseer_recv(&mut virtual_overseer).await,
AllMessages::NetworkBridgeTx(
NetworkBridgeTxMessage::ReportPeer(ReportPeerMessage::Single(peer_id, rep)),
) => {
assert_eq!(peer_a, peer_id);
assert_eq!(rep.value, COST_UNEXPECTED_MESSAGE.cost_or_benefit());
}
);
// An advertisement from a different peer for the same exhausted relay parent
// is also ignored, but that peer is not reported.
let pair_b = CollatorPair::generate().0;
let peer_b = PeerId::random();
connect_and_declare_collator(
&mut virtual_overseer,
peer_b,
pair_b.clone(),
test_state.chain_ids[0],
CollationVersion::VStaging,
)
.await;
let candidate_hash = CandidateHash(Hash::repeat_byte(0xFF));
advertise_collation(
&mut virtual_overseer,
peer_b,
head_c,
Some((candidate_hash, Hash::zero())),
)
.await;
test_helpers::Yield::new().await;
assert_matches!(virtual_overseer.recv().now_or_never(), None);
virtual_overseer
});
}
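// The fetched collation must correspond to the advertisement; a candidate
// hash mismatch is punished with a reputation cost.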
#[test]
fn fetched_collation_sanity_check() {
let test_state = TestState::default();
test_harness(ReputationAggregator::new(|_| true), |test_harness| async move {
let TestHarness { mut virtual_overseer, .. } = test_harness;
let pair = CollatorPair::generate().0;
let head_b = Hash::from_low_u64_be(128);
let head_b_num: u32 = 2;
// Grandparent of head `b`.
// Group rotation frequency is 1 by default; at `c` we're assigned
// to the first para.
let head_c = Hash::from_low_u64_be(130);
// Activated leaf is `b`, but the collation will be based on `c`.
update_view(&mut virtual_overseer, &test_state, vec![(head_b, head_b_num)], 1).await;
let peer_a = PeerId::random();
connect_and_declare_collator(
&mut virtual_overseer,
peer_a,
pair.clone(),
test_state.chain_ids[0],
CollationVersion::VStaging,
)
.await;
let mut candidate = dummy_candidate_receipt_bad_sig(head_c, Some(Default::default()));
candidate.descriptor.para_id = test_state.chain_ids[0];
let commitments = CandidateCommitments {
head_data: HeadData(vec![1, 2, 3]),
horizontal_messages: Default::default(),
upward_messages: Default::default(),
new_validation_code: None,
processed_downward_messages: 0,
hrmp_watermark: 0,
};
candidate.commitments_hash = commitments.hash();
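// Deliberately advertise a hash that won't match the fetched candidate,
// so the post-fetch sanity check fails.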
let candidate_hash = CandidateHash(Hash::zero());
let parent_head_data_hash = Hash::zero();
advertise_collation(
&mut virtual_overseer,
peer_a,
head_c,
Some((candidate_hash, parent_head_data_hash)),
)
.await;
assert_matches!(
overseer_recv(&mut virtual_overseer).await,
AllMessages::CandidateBacking(
CandidateBackingMessage::CanSecond(request, tx),
) => {
assert_eq!(request.candidate_hash, candidate_hash);
assert_eq!(request.candidate_para_id, test_state.chain_ids[0]);
assert_eq!(request.parent_head_data_hash, parent_head_data_hash);
tx.send(true).expect("receiving side should be alive");
}
);
let response_channel = assert_fetch_collation_request(
&mut virtual_overseer,
head_c,
test_state.chain_ids[0],
Some(candidate_hash),
)
.await;
let pov = PoV { block_data: BlockData(vec![1]) };
response_channel
.send(Ok(request_vstaging::CollationFetchingResponse::Collation(
candidate.clone(),
pov.clone(),
)
.encode()))
.expect("Sending response should succeed");
// PVD request.
assert_matches!(
overseer_recv(&mut virtual_overseer).await,
AllMessages::ProspectiveParachains(
ProspectiveParachainsMessage::GetProspectiveValidationData(request, tx),
) => {
assert_eq!(head_c, request.candidate_relay_parent);
assert_eq!(test_state.chain_ids[0], request.para_id);
tx.send(Some(dummy_pvd())).unwrap();
}
);
// Reported as malicious: the fetched candidate's hash doesn't match the advertised one.
assert_matches!(
overseer_recv(&mut virtual_overseer).await,
AllMessages::NetworkBridgeTx(
NetworkBridgeTxMessage::ReportPeer(ReportPeerMessage::Single(peer_id, rep)),
) => {
assert_eq!(peer_a, peer_id);
assert_eq!(rep.value, COST_REPORT_BAD.cost_or_benefit());
}
);
virtual_overseer
});
}
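// Re-advertising a candidate that backing has already declined counts as
// spam and incurs a reputation cost.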
#[test]
fn advertisement_spam_protection() {
let test_state = TestState::default();
test_harness(ReputationAggregator::new(|_| true), |test_harness| async move {
let TestHarness { mut virtual_overseer, .. } = test_harness;
let pair_a = CollatorPair::generate().0;
let head_b = Hash::from_low_u64_be(128);
let head_b_num: u32 = 2;
let head_c = get_parent_hash(head_b);
// Activated leaf is `b`, but the collation will be based on `c`.
update_view(&mut virtual_overseer, &test_state, vec![(head_b, head_b_num)], 1).await;
let peer_a = PeerId::random();
connect_and_declare_collator(
&mut virtual_overseer,
peer_a,
pair_a.clone(),
test_state.chain_ids[1],
CollationVersion::VStaging,
)
.await;
let candidate_hash = CandidateHash::default();
let parent_head_data_hash = Hash::zero();
advertise_collation(
&mut virtual_overseer,
peer_a,
head_c,
Some((candidate_hash, parent_head_data_hash)),
)
.await;
assert_matches!(
overseer_recv(&mut virtual_overseer).await,
AllMessages::CandidateBacking(
CandidateBackingMessage::CanSecond(request, tx),
) => {
assert_eq!(request.candidate_hash, candidate_hash);
assert_eq!(request.candidate_para_id, test_state.chain_ids[1]);
assert_eq!(request.parent_head_data_hash, parent_head_data_hash);
// Reject it.
tx.send(false).expect("receiving side should be alive");
}
);
// Send the same advertisement again.
advertise_collation(
&mut virtual_overseer,
peer_a,
head_c,
Some((candidate_hash, parent_head_data_hash)),
)
.await;
// Reported for spamming a duplicate advertisement.
assert_matches!(
overseer_recv(&mut virtual_overseer).await,
AllMessages::NetworkBridgeTx(
NetworkBridgeTxMessage::ReportPeer(ReportPeerMessage::Single(peer_id, rep)),
) => {
assert_eq!(peer_a, peer_id);
assert_eq!(rep.value, COST_UNEXPECTED_MESSAGE.cost_or_benefit());
}
);
virtual_overseer
});
}
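// Advertisements declined by backing are kept as blocked; a `Backed`
// notification for the relevant para head causes them to be retried.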
#[test]
fn backed_candidate_unblocks_advertisements() {
let test_state = TestState::default();
test_harness(ReputationAggregator::new(|_| true), |test_harness| async move {
let TestHarness { mut virtual_overseer, .. } = test_harness;
let pair_a = CollatorPair::generate().0;
let pair_b = CollatorPair::generate().0;
let head_b = Hash::from_low_u64_be(128);
let head_b_num: u32 = 2;
let head_c = get_parent_hash(head_b);
// Grandparent of head `b`.
// Group rotation frequency is 1 by default; at `d` we're assigned
// to the first para.
let head_d = get_parent_hash(head_c);
// Activated leaf is `b`, but the collation will be based on `c`.
update_view(&mut virtual_overseer, &test_state, vec![(head_b, head_b_num)], 1).await;
let peer_a = PeerId::random();
let peer_b = PeerId::random();
// Accept both collators from the implicit view.
connect_and_declare_collator(
&mut virtual_overseer,
peer_a,
pair_a.clone(),
test_state.chain_ids[0],
CollationVersion::VStaging,
)
.await;
connect_and_declare_collator(
&mut virtual_overseer,
peer_b,
pair_b.clone(),
test_state.chain_ids[1],
CollationVersion::VStaging,
)
.await;
let candidate_hash = CandidateHash::default();
let parent_head_data_hash = Hash::zero();
advertise_collation(
&mut virtual_overseer,
peer_b,
head_c,
Some((candidate_hash, parent_head_data_hash)),
)
.await;
assert_matches!(
overseer_recv(&mut virtual_overseer).await,
AllMessages::CandidateBacking(
CandidateBackingMessage::CanSecond(request, tx),
) => {
assert_eq!(request.candidate_hash, candidate_hash);
assert_eq!(request.candidate_para_id, test_state.chain_ids[1]);
assert_eq!(request.parent_head_data_hash, parent_head_data_hash);
// Reject it.
tx.send(false).expect("receiving side should be alive");
}
);
// Advertise a collation for a different para.
advertise_collation(
&mut virtual_overseer,
peer_a,
head_d, // Note different relay parent.
Some((candidate_hash, parent_head_data_hash)),
)
.await;
assert_matches!(
overseer_recv(&mut virtual_overseer).await,
AllMessages::CandidateBacking(
CandidateBackingMessage::CanSecond(request, tx),
) => {
assert_eq!(request.candidate_hash, candidate_hash);
assert_eq!(request.candidate_para_id, test_state.chain_ids[0]);
assert_eq!(request.parent_head_data_hash, parent_head_data_hash);
tx.send(false).expect("receiving side should be alive");
}
);
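// Notify the subsystem that a candidate with this para head was backed;
// blocked advertisements building on it should now be retried.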
overseer_send(
&mut virtual_overseer,
CollatorProtocolMessage::Backed {
para_id: test_state.chain_ids[0],
para_head: parent_head_data_hash,
},
)
.await;
assert_matches!(
overseer_recv(&mut virtual_overseer).await,
AllMessages::CandidateBacking(
CandidateBackingMessage::CanSecond(request, tx),
) => {
assert_eq!(request.candidate_hash, candidate_hash);
assert_eq!(request.candidate_para_id, test_state.chain_ids[0]);
assert_eq!(request.parent_head_data_hash, parent_head_data_hash);
tx.send(true).expect("receiving side should be alive");
}
);
assert_fetch_collation_request(
&mut virtual_overseer,
head_d,
test_state.chain_ids[0],
Some(candidate_hash),
)
.await;
virtual_overseer
});
}
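// Blocked advertisements are also retried whenever a new active leaf
// is processed.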
#[test]
fn active_leave_unblocks_advertisements() {
let mut test_state = TestState::default();
test_state.group_rotation_info.group_rotation_frequency = 100;
test_harness(ReputationAggregator::new(|_| true), |test_harness| async move {
let TestHarness { mut virtual_overseer, .. } = test_harness;
let head_b = Hash::from_low_u64_be(128);
let head_b_num: u32 = 0;
update_view(&mut virtual_overseer, &test_state, vec![(head_b, head_b_num)], 1).await;
let collators: Vec<CollatorPair> = (0..3).map(|_| CollatorPair::generate().0).collect();
let peer_ids: Vec<PeerId> = (0..3).map(|_| PeerId::random()).collect();
let candidates: Vec<CandidateHash> =
(0u8..3).map(|i| CandidateHash(Hash::repeat_byte(i))).collect();
for (collator, peer_id) in collators.iter().zip(&peer_ids) {
connect_and_declare_collator(
&mut virtual_overseer,
*peer_id,
collator.clone(),
test_state.chain_ids[0],
CollationVersion::VStaging,
)
.await;
}
let parent_head_data_hash = Hash::zero();
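// Advertise the first two candidates; backing declines both, so they are
// queued as blocked instead of being fetched.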
for (peer, candidate) in peer_ids.iter().zip(&candidates).take(2) {
advertise_collation(
&mut virtual_overseer,
*peer,
head_b,
Some((*candidate, parent_head_data_hash)),
)
.await;
assert_matches!(
overseer_recv(&mut virtual_overseer).await,
AllMessages::CandidateBacking(
CandidateBackingMessage::CanSecond(request, tx),
) => {
assert_eq!(request.candidate_hash, *candidate);
assert_eq!(request.candidate_para_id, test_state.chain_ids[0]);
assert_eq!(request.parent_head_data_hash, parent_head_data_hash);
// Decline for now; the advertisement becomes blocked.
tx.send(false).expect("receiving side should be alive");
}
);
}
let head_c = Hash::from_low_u64_be(127);
let head_c_num: u32 = 1;
let next_overseer_message =
update_view(&mut virtual_overseer, &test_state, vec![(head_c, head_c_num)], 1)
.await
.expect("should've sent request to backing");
// Unblock first request.
assert_matches!(
next_overseer_message,
AllMessages::CandidateBacking(
CandidateBackingMessage::CanSecond(request, tx),
) => {
assert_eq!(request.candidate_hash, candidates[0]);
assert_eq!(request.candidate_para_id, test_state.chain_ids[0]);
assert_eq!(request.parent_head_data_hash, parent_head_data_hash);
tx.send(true).expect("receiving side should be alive");
}
);
assert_fetch_collation_request(
&mut virtual_overseer,
head_b,
test_state.chain_ids[0],
Some(candidates[0]),
)
.await;
assert_matches!(
overseer_recv(&mut virtual_overseer).await,
AllMessages::CandidateBacking(
CandidateBackingMessage::CanSecond(request, tx),
) => {
assert_eq!(request.candidate_hash, candidates[1]);
assert_eq!(request.candidate_para_id, test_state.chain_ids[0]);
assert_eq!(request.parent_head_data_hash, parent_head_data_hash);
tx.send(false).expect("receiving side should be alive");
}
);
// Collation request was discarded.
test_helpers::Yield::new().await;
assert_matches!(virtual_overseer.recv().now_or_never(), None);
advertise_collation(
&mut virtual_overseer,
peer_ids[2],
head_c,
Some((candidates[2], parent_head_data_hash)),
)
.await;
assert_matches!(
overseer_recv(&mut virtual_overseer).await,
AllMessages::CandidateBacking(
CandidateBackingMessage::CanSecond(request, tx),
) => {
assert_eq!(request.candidate_hash, candidates[2]);
tx.send(false).expect("receiving side should be alive");
}
);
let head_d = Hash::from_low_u64_be(126);
let head_d_num: u32 = 2;
let next_overseer_message =
update_view(&mut virtual_overseer, &test_state, vec![(head_d, head_d_num)], 1)
.await
.expect("should've sent request to backing");
// Reject the second candidate again, accept the third.
assert_matches!(
next_overseer_message,
AllMessages::CandidateBacking(
CandidateBackingMessage::CanSecond(request, tx),
) => {
assert_eq!(request.candidate_hash, candidates[1]);
tx.send(false).expect("receiving side should be alive");
}
);
assert_matches!(
overseer_recv(&mut virtual_overseer).await,
AllMessages::CandidateBacking(
CandidateBackingMessage::CanSecond(request, tx),
) => {
assert_eq!(request.candidate_hash, candidates[2]);
tx.send(true).expect("receiving side should be alive");
}
);
assert_fetch_collation_request(
&mut virtual_overseer,
head_c,
test_state.chain_ids[0],
Some(candidates[2]),
)
.await;
virtual_overseer
});
}