mirror of https://github.com/pezkuwichain/pezkuwi-subxt.git (synced 2026-04-26 16:57:58 +00:00)
Asynchronous Backing MegaPR (#5022)
* inclusion emulator logic for asynchronous backing (#4790) * initial stab at candidate_context * fmt * docs & more TODOs * some cleanups * reframe as inclusion_emulator * documentations yes * update types * add constraint modifications * watermark * produce modifications * v2 primitives: re-export all v1 for consistency * vstaging primitives * emulator constraints: handle code upgrades * produce outbound HRMP modifications * stack. * method for applying modifications * method just for sanity-checking modifications * fragments produce modifications, not prospectives * make linear * add some TODOs * remove stacking; handle code upgrades * take `fragment` private * reintroduce stacking. * fragment constructor * add TODO * allow validating fragments against future constraints * docs * relay-parent number and min code size checks * check code upgrade restriction * check max hrmp per candidate * fmt * remove GoAhead logic because it wasn't helpful * docs on code upgrade failure * test stacking * test modifications against constraints * fmt * test fragments * descending or duplicate test * fmt * remove unused imports in vstaging * wrong primitives * spellcheck * Runtime changes for Asynchronous Backing (#4786) * inclusion: utility for allowed relay-parents * inclusion: use prev number instead of prev hash * track most recent context of paras * inclusion: accept previous relay-parents * update dmp advancement rule for async backing * fmt * add a comment about validation outputs * clean up a couple of TODOs * weights * fix weights * fmt * Resolve dmp todo * Restore inclusion tests * Restore paras_inherent tests * MostRecentContext test * Benchmark for new paras dispatchable * Prepare check_validation_outputs for upgrade * cargo run --quiet --profile=production --features=runtime-benchmarks -- benchmark --chain=kusama-dev --steps=50 --repeat=20 --pallet=runtime_parachains::paras --extrinsic=* --execution=wasm --wasm-execution=compiled --heap-pages=4096 
--header=./file_header.txt --output=./runtime/kusama/src/weights/runtime_parachains_paras.rs * cargo run --quiet --profile=production --features=runtime-benchmarks -- benchmark --chain=westend-dev --steps=50 --repeat=20 --pallet=runtime_parachains::paras --extrinsic=* --execution=wasm --wasm-execution=compiled --heap-pages=4096 --header=./file_header.txt --output=./runtime/westend/src/weights/runtime_parachains_paras.rs * cargo run --quiet --profile=production --features=runtime-benchmarks -- benchmark --chain=polkadot-dev --steps=50 --repeat=20 --pallet=runtime_parachains::paras --extrinsic=* --execution=wasm --wasm-execution=compiled --heap-pages=4096 --header=./file_header.txt --output=./runtime/polkadot/src/weights/runtime_parachains_paras.rs * cargo run --quiet --profile=production --features=runtime-benchmarks -- benchmark --chain=rococo-dev --steps=50 --repeat=20 --pallet=runtime_parachains::paras --extrinsic=* --execution=wasm --wasm-execution=compiled --heap-pages=4096 --header=./file_header.txt --output=./runtime/rococo/src/weights/runtime_parachains_paras.rs * Implementers guide changes * More tests for allowed relay parents * Add a github issue link * Compute group index based on relay parent * Storage migration * Move allowed parents tracker to shared * Compile error * Get group assigned to core at the next block * Test group assignment * fmt * Error instead of panic * Update guide * Extend doc-comment * Update runtime/parachains/src/shared.rs Co-authored-by: Robert Habermeier <rphmeier@gmail.com> Co-authored-by: Chris Sosnin <chris125_@live.com> Co-authored-by: Parity Bot <admin@parity.io> Co-authored-by: Chris Sosnin <48099298+slumber@users.noreply.github.com> * Prospective Parachains Subsystem (#4913) * docs and skeleton * subsystem skeleton * main loop * fragment tree basics & fmt * begin fragment trees & view * flesh out more of view update logic * further flesh out update logic * some refcount functions for fragment trees * add fatal/non-fatal 
errors * use non-fatal results * clear up some TODOs * ideal format for scheduling info * add a bunch of TODOs * some more fluff * extract fragment graph to submodule * begin fragment graph API * trees, not graphs * improve docs * scope and constructor for trees * add some test TODOs * limit max ancestors and store constraints * constructor * constraints: fix bug in HRMP watermarks * fragment tree population logic * set::retain * extract population logic * implement add_and_populate * fmt * add some TODOs in tests * implement child-selection * strip out old stuff based on wrong assumptions * use fatality * implement pruning * remove unused ancestor constraints * fragment tree instantiation * remove outdated comment * add message/request types and skeleton for handling * fmt * implement handle_candidate_seconded * candidate storage: handle backed * implement handle_candidate_backed * implement answer_get_backable_candidate * remove async where not needed * implement fetch_ancestry * add logic for run_iteration * add some docs * remove global allow(unused), fix warnings * make spellcheck happy (despite English) * fmt * bump Cargo.lock * replace tracing with gum * introduce PopulateFrom trait * implement GetHypotheticalDepths * revise docs slightly * first fragment tree scope test * more scope tests * test add_candidate * fmt * test retain * refactor test code * test populate is recursive * test contiguity of depth 0 is maintained * add_and_populate tests * cycle tests * remove PopulateFrom trait * fmt * test hypothetical depths (non-recursive) * have CandidateSeconded return membership * tree membership requests * Add a ProspectiveParachainsSubsystem struct * add a staging API for base constraints * add a `From` impl * add runtime API for staging_validity_constraints * implement fetch_base_constraints * implement `fetch_upcoming_paras` * remove reconstruction of candidate receipt; no obvious usecase * fmt * export message to broader module * remove last TODO * 
correctly export * fix compilation and add GetMinimumRelayParent request * make provisioner into a real subsystem with proper message bounds * fmt * fix ChannelsOut in overseer test * fix overseer tests * fix again * fmt * Integrate prospective parachains subsystem into backing: Part 1 (#5557) * BEGIN ASYNC candidate-backing CHANGES * rename & document modes * answer prospective validation data requests * GetMinimumRelayParents request is now plural * implement an implicit view utility for backing subsystems * implicit-view: get allowed relay parents * refactorings and improvements to implicit view * add some TODOs for tests * split implicit view updates into 2 functions * backing: define State to prepare for functional refactor * add some docs * backing: implement bones of new leaf activation logic * backing: create per-relay-parent-states * use new handle_active_leaves_update * begin extracting logic from CandidateBackingJob * mostly extract statement import from job logic * handle statement imports outside of job logic * do some TODO planning for prospective parachains integration * finish rewriting backing subsystem in functional style * add prospective parachains mode to relay parent entries * fmt * add a RejectedByProspectiveParachains error * notify prospective parachains of seconded and backed candidates * always validate candidates exhaustively in backing. 
* return persisted_validation_data from validation * handle rejections by prospective parachains * implement seconding sanity check * invoke validate_and_second * Alter statement table to allow multiple seconded messages per validator * refactor backing to have statements carry PVD * clean up all warnings * Add tests for implicit view * Improve doc comments * Prospective parachains mode based on Runtime API version * Add a TODO * Rework seconding_sanity_check * Iterate over responses * Update backing tests * collator-protocol: load PVD from runtime * Fix validator side tests * Update statement-distribution to fetch PVD * Fix statement-distribution tests * Backing tests with prospective paras #1 * fix per_relay_parent pruning in backing * Test multiple leaves * Test seconding sanity check * Import statement order Before creating an entry in `PerCandidateState` map wait for the approval from the prospective parachains * Add a test for correct state updates * Second multiple candidates per relay parent test * Add backing tests with prospective paras * Second more than one test without prospective paras * Add a test for prospective para blocks * Update malus * typos Co-authored-by: Chris Sosnin <chris125_@live.com> * Track occupied depth in backing per parachain (#5778) * provisioner: async backing changes (#5711) * Provisioner changes for async backing * Select candidates based on prospective paras mode * Revert naming * Update tests * Update TODO comment * review * provisioner: async backing changes (#5711) * Provisioner changes for async backing * Select candidates based on prospective paras mode * Revert naming * Update tests * Update TODO comment * review * fmt * Network bridge changes for asynchronous backing + update subsystems to handle versioned packets (#5991) * BEGIN STATEMENT DISTRIBUTION WORK create a vstaging network protocol which is the same as v1 * mostly make network bridge amenable to vstaging * network-bridge: fully adapt to vstaging * add some 
TODOs for tests * fix fallout in bitfield-distribution * bitfield distribution tests + TODOs * fix fallout in gossip-support * collator-protocol: fix message fallout * collator-protocol: load PVD from runtime * add TODO for vstaging tests * make things compile * set used network protocol version using a feature * fmt * get approval-distribution building * fix approval-distribution tests * spellcheck * nits * approval distribution net protocol test * bitfield distribution net protocol test * Revert "collator-protocol: fix message fallout" This reverts commit 07cc887303e16c6b3843ecb25cdc7cc2080e2ed1. * Network bridge tests Co-authored-by: Chris Sosnin <chris125_@live.com> * remove max_pov_size requirement from prospective pvd request (#6014) * remove max_pov_size requirement from prospective pvd request * fmt * Extract legacy statement distribution to its own module (#6026) * add compatibility type to v2 statement distribution message * warning cleanup * handle compatibility layer for v2 * clean up an unimplemented!() block * circulate statements based on version * extract legacy v1 code into separate module * remove unimplemented * clean up naming of from_requester/responder * remove TODOs * have backing share seconded statements with PVD * fmt * fix warning * Quick fix unused warning for not yet implemented/used staging messages. * Fix network bridge test * Fix wrong merge. We now have 23 subsystems (network bridge split + prospective parachains) Co-authored-by: Robert Klotzner <robert.klotzner@gmx.at> * Version 3 is already live. * Fix tests (#6055) * Fix backing tests * Fix warnings. 
* fmt * collator-protocol: asynchronous backing changes (#5740) * Draft collator side changes * Start working on collations management * Handle peer's view change * Versioning on advertising * Versioned collation fetching request * Handle versioned messages * Improve docs for collation requests * Add spans * Add request receiver to overseer * Fix collator side tests * Extract relay parent mode to lib * Validator side draft * Add more checks for advertisement * Request pvd based on async backing mode * review * Validator side improvements * Make old tests green * More fixes * Collator side tests draft * Send collation test * fmt * Collator side network protocol versioning * cleanup * merge artifacts * Validator side net protocol versioning * Remove fragment tree membership request * Resolve todo * Collator side core state test * Improve net protocol compatibility * Validator side tests * more improvements * style fixes * downgrade log * Track implicit assignments * Limit the number of seconded candidates per para * Add a sanity check * Handle fetched candidate * fix tests * Retry fetch * Guard against dequeueing while already fetching * Reintegrate connection management * Timeout on advertisements * fmt * spellcheck * update tests after merge * validator assignment fixes for backing and collator protocol (#6158) * Rename depth->ancestry len in tests * Refactor group assignments * Remove implicit assignments * backing: consider occupied core assignments * Track a single para on validator side * Refactor prospective parachains mode request (#6179) * Extract prospective parachains mode into util * Skip activations depending on the mode * backing: don't send backed candidate to provisioner (#6185) * backing: introduce `CanSecond` request for advertisements filtering (#6225) * Drop BoundToRelayParent * draft changes * fix backing tests * Fix genesis ancestry * Fix validator side tests * more tests * cargo generate-lockfile * Implement `StagingValidityConstraints` Runtime 
API method (#6258) * Implement StagingValidityConstraints * spellcheck * fix ump params * Update hrmp comment * Introduce ump per candidate limit * hypothetical earliest block * refactor primitives usage * hypothetical earliest block number test * fix build * Prepare the Runtime for asynchronous backing upgrade (#6287) * Introduce async backing params to runtime config * fix cumulus config * use config * finish runtimes * Introduce new staging API * Update collator protocol * Update provisioner * Update prospective parachains * Update backing * Move async backing params lower in the config * make naming consistent * misc * Use real prospective parachains subsystem (#6407) * Backport `HypotheticalFrontier` into the feature branch (#6605) * implement more general HypotheticalFrontier * fmt * drop unneeded request Co-authored-by: Robert Habermeier <rphmeier@gmail.com> * Resolve todo about legacy leaf activation (#6447) * fix bug/warning in handling membership answers * Remove `HypotheticalDepthRequest` in favor of `HypotheticalFrontierRequest` (#6521) * Remove `HypotheticalDepthRequest` for `HypotheticalFrontierRequest` * Update tests * Fix (removed wrong docstring) * Fix can_second request * Patch some dead_code errors --------- Co-authored-by: Chris Sosnin <chris125_@live.com> * Async Backing: Send Statement Distribution "Backed" messages (#6634) * Backing: Send Statement Distribution "Backed" messages Closes #6590. 
**TODO:** - [ ] Adjust tests * Fix compile errors * (Mostly) fix tests * Fix comment * Fix test and compile error * Test that `StatementDistributionMessage::Backed` is sent * Fix compile error * Fix some clippy errors * Add prospective parachains subsystem tests (#6454) * Add prospective parachains subsystem test * Add `should_do_no_work_if_async_backing_disabled_for_leaf` test * Implement `activate_leaf` helper, up to getting ancestry * Finish implementing `activate_leaf` * Small refactor in `activate_leaf` * Get `CandidateSeconded` working * Finish `send_candidate_and_check_if_found` test * Refactor; send more leaves & candidates * Refactor test * Implement `check_candidate_parent_leaving_view` test * Start work on `check_candidate_on_multiple_forks` test * Don’t associate specific parachains with leaf * Finish `correctly_updates_leaves` test * Fix cycle due to reused head data * Fix `check_backable_query` test * Fix `check_candidate_on_multiple_forks` test * Add `check_depth_and_pvd_queries` test * Address review comments * Remove TODO * add a new index for output head data to candidate storage * Resolve test TODOs * Fix compile errors * test candidate storage pruning, make sure new index is cleaned up --------- Co-authored-by: Robert Habermeier <rphmeier@gmail.com> * Node-side metrics for asynchronous backing (#6549) * Add metrics for `prune_view_candidate_storage` * Add metrics for `request_unblocked_collations` * Fix docstring * Couple fixes from review comments * Fix `check_depth_query` test * inclusion-emulator: mirror advancement rule check (#6361) * inclusion-emulator: mirror advancement rule check * fix build * prospective-parachains: introduce `backed_in_path_only` flag for advertisements (#6649) * Introduce `backed_in_path_only` flag for depth request * fmt * update doc comment * fmt * Add async-backing zombienet tests (#6314) * Async backing: impl guide for statement distribution (#6738) Co-authored-by: Bradley Olson 
<34992650+BradleyOlson64@users.noreply.github.com> Co-authored-by: alexgparity <115470171+alexgparity@users.noreply.github.com> * Asynchronous backing statement distribution: Take III (#5999) * add notification types for v2 statement-distribution * improve protocol docs * add empty vstaging module * fmt * add backed candidate packet request types * start putting down structure of new logic * handle activated leaf * some sanity-checking on outbound statements * fmt * update vstaging share to use statements with PVD * tiny refactor, candidate_hash location * import local statements * refactor statement import * first stab at broadcast logic * fmt * fill out some TODOs * start on handling incoming * split off session info into separate map * start in on a knowledge tracker * address some grumbles * format * missed comment * some docs for direct * add note on slashing * amend * simplify 'direct' code * finish up the 'direct' logic * add a bunch of tests for the direct-in-group logic * rename 'direct' to 'cluster', begin a candidate_entry module * distill candidate_entry * start in on a statement-store module * some utilities for the statement store * rewrite 'send_statement_direct' using new tools * filter sending logic on peers which have the relay-parent in their view. 
* some more logic for handling incoming statements * req/res: BackedCandidatePacket -> AttestedCandidate + tweaks * add a `validated_in_group` bitfield to BackedCandidateInventory * BackedCandidateInventory -> Manifest * start in on requester module * add outgoing request for attested candidate * add a priority mechanism for requester * some request dispatch logic * add seconded mask to tagged-request * amend manifest to hold group index * handle errors and set up scaffold for response validation * validate attested candidate responses * requester -> requests * add some utilities for manipulating requests * begin integrating requester * start grid module * tiny * refactor grid topology to expose more info to subsystems * fix grid_topology test * fix overseer test * implement topology group-based view construction logic * fmt * flesh out grid slightly more * add indexed groups utility * integrate Groups into per-session info * refactor statement store to borrow Groups * implement manifest knowledge utility * add a test for topology setup * don't send to group members * test for conflicting manifests * manifest knowledge tests * fmt * rename field * garbage collection for grid tracker * routines for finding correct/incorrect advertisers * add manifest import logic * tweak naming * more tests for manifest import * add comment * rework candidates into a view-wide tracker * fmt * start writing boilerplate for grid sending * fmt * some more group boilerplate * refactor handling of topology and authority IDs * fmt * send statements directly to grid peers where possible * send to cluster only if statement belongs to cluster * improve handling of cluster statements * handle incoming statements along the grid * API for introduction of candidates into the tree * backing: use new prospective parachains API * fmt prospective parachains changes * fmt statement-dist * fix condition * get ready for tracking importable candidates * prospective parachains: add Cow logic * incomplete 
and complete hypothetical candidates * remove keep_if_unneeded * fmt * implement more general HypotheticalFrontier * fmt, cleanup * add a by_parent_hash index to candidate tracker * more framework for future code * utilities for getting all hypothetical candidates for frontier * track origin in statement store * fmt * requests should return peer * apply post-confirmation reckoning * flesh out import/announce/circulate logic on new statements * adjust * adjust TODO comment * fix backing tests * update statement-distribution to use new indexedvec * fmt * query hypothetical candidates * implement `note_importable_under` * extract common utility of fragment tree updates * add a helper function for getting statements unknown by backing * import fresh statements to backing * send announcements and acknowledgements over grid * provide freshly importable statements also avoid tracking backed candidates in statement distribution * do not issue requests on newly importable candidates * add TODO for later when confirming candidate * write a routine for handling backed candidate notifications * simplify grid substantially * add some test TODOs * handle confirmed candidates & grid announcements * finish implementing manifest handling, including follow up statements * send follow-up statements when acknowledging freshly backed * fmt * handle incoming acknowledgements * a little DRYing * wire up network messages to handlers * fmt * some skeleton code for peer view update handling * more peer view skeleton stuff * Fix async backing statement distribution tests (#6621) * Fix compile errors in tests * Cargo fmt * Resolve some todos in async backing statement-distribution branch (#6482) * Implement `remove_by_relay_parent` * Extract `minimum_votes` to shared primitives. 
* Add `can_send_statements_received_with_prejudice` test * Fix test * Update docstrings * Cargo fmt * Fix compile error * Fix compile errors in tests * Cargo fmt * Add module docs; write `test_priority_ordering` (first draft) * Fix `test_priority_ordering` * Move `insert_or_update_priority`: `Drop` -> `set_cluster_priority` * Address review comments * Remove `Entry::get_mut` * fix test compilation * add a TODO for a test * clean up a couple of TODOs * implement sending pending cluster statements * refactor utility function for sending acknowledgement and statements * mostly implement catching peers up via grid * Fix clippy error * alter grid to track all pending statements * fix more TODOs and format * tweak a TODO in requests * some logic for dispatching requests * fmt * skeleton for response receiving * Async backing statement distribution: cluster tests (#6678) * Add `pending_statements_set_when_receiving_fresh_statements` * Add `pending_statements_updated_when_sending_statements` test * fix up * fmt * update TODO * rework seconded mask in requests * change doc * change unhandledresponse not to borrow request manager * only accept responses sufficient to back * finish implementing response handling * extract statement filter to protocol crate * rework requests: use statement filter in network protocol * dispatch cluster requests correctly * rework cluster statement sending * implement request answering * fmt * only send confirmed candidate statement messages on unified relay-parent * Fix Tests In Statement Distribution Branch * Async Backing: Integrate `vstaging` of statement distribution into `lib.rs` (#6715) * Integrate `handle_active_leaves_update` * Integrate `share_local_statement`/`handle_backed_candidate_message` * Start hooking up request/response flow * Finish hooking up request/response flow * Limit number of parallel requests in responder * Fix test compilation errors * Fix missing check for prospective parachains mode * Fix some more compile errors * 
clean up some review comments * clean up warnings * Async backing statement distribution: grid tests (#6673) * Add `manifest_import_returns_ok_true` test * cargo fmt * Add pending_communication_receiving_manifest_on_confirmed_candidate * Add `senders_can_provide_manifests_in_acknowledgement` test * Add a couple of tests for pending statements * Add `pending_statements_cleared_when_sending` test * Add `pending_statements_respect_remote_knowledge` test * Refactor group creation in tests * Clarify docs * Address some review comments * Make some clarifications * Fix post-merge errors * Clarify test `senders_can_provide_manifests_in_acknowledgement` * Try writing `pending_statements_are_updated_after_manifest_exchange` * Document "seconding limit" and `reject_overflowing_manifests` test * Test that seconding counts are not updated for validators on error * Fix tests * Fix manifest exchange test * Add more tests in `requests.rs` (#6707) This resolves remaining TODOs in this file. * remove outdated inventory terminology * Async backing statement distribution: `Candidates` tests (#6658) * Async Backing: Fix clippy errors in statement distribution branch (#6720) * Integrate `handle_active_leaves_update` * Integrate `share_local_statement`/`handle_backed_candidate_message` * Start hooking up request/response flow * Finish hooking up request/response flow * Limit number of parallel requests in responder * Fix test compilation errors * Fix missing check for prospective parachains mode * Fix some more compile errors * Async Backing: Fix clippy errors in statement distribution branch * Fix some more clippy lints * add tests module * fix warnings in existing tests * create basic test harness * create a test state struct * fmt * create empty cluster & grid modules for tests * some TODOs for cluster test suite * describe test-suite for grid logic * describe request test suite * fix seconding-limit bug * Remove extraneous `pub` This somehow made it into my clippy PR. 
* Fix some test compile warnings * Remove some unneeded `allow`s * adapt some new test helpers from Marcin * add helper for activating a gossip topology * add utility for signing statements * helpers for connecting/disconnecting peers * round out network utilities * fmt * fix bug in initializing validator-meta * fix compilation * implement first cluster test * TODOs for incoming request tests * Remove unneeded `make_committed_candidate` helper * fmt * some more tests for cluster * add a TODO about grid senders * integrate inbound req/res into test harness * polish off initial cluster test suite * keep introduce candidate request * fix tests after introduce candidate request * fmt * Add grid protocol to module docs * Fix comments * Test `backed_in_path_only: true` * Update node/network/protocol/src/lib.rs Co-authored-by: Chris Sosnin <48099298+slumber@users.noreply.github.com> * Update node/network/protocol/src/request_response/mod.rs Co-authored-by: Chris Sosnin <48099298+slumber@users.noreply.github.com> * Mark receiver with `vstaging` * validate grid senders based on manifest kind * fix mask_seconded/valid * fix unwanted-mask check * fix build * resolve todo on leaf mode * Unify protocol naming to vstaging * fmt, fix grid test after topology change * typo Co-authored-by: Chris Sosnin <48099298+slumber@users.noreply.github.com> * address review * adjust comment, make easier to understand * Fix typo --------- Co-authored-by: Marcin S <marcin@bytedude.com> Co-authored-by: Marcin S <marcin@realemail.net> Co-authored-by: Chris Sosnin <48099298+slumber@users.noreply.github.com> Co-authored-by: Chris Sosnin <chris125_@live.com> * miscellaneous fixes to make asynchronous backing work (#6791) * propagate network-protocol-staging feature * add feature to adder-collator as well * allow collation-generation of occupied cores * prospective parachains: special treatment for pending availability candidates * runtime: fetch candidates pending availability * lazily construct PVD 
for pending candidates * fix fallout in prospective parachains hypothetical/select_child * runtime: enact candidates when creating paras-inherent * make tests compile * test pending availability in the scope * add prospective parachains test * fix validity constraints leftovers * drop prints * Fix typos --------- Co-authored-by: Chris Sosnin <chris125_@live.com> Co-authored-by: Marcin S <marcin@realemail.net> * Remove restart from test (#6840) * Async Backing: Statement Distribution Tests (#6755)
clean up some review comments * clean up warnings * Async backing statement distribution: grid tests (#6673) * Add `manifest_import_returns_ok_true` test * cargo fmt * Add pending_communication_receiving_manifest_on_confirmed_candidate * Add `senders_can_provide_manifests_in_acknowledgement` test * Add a couple of tests for pending statements * Add `pending_statements_cleared_when_sending` test * Add `pending_statements_respect_remote_knowledge` test * Refactor group creation in tests * Clarify docs * Address some review comments * Make some clarifications * Fix post-merge errors * Clarify test `senders_can_provide_manifests_in_acknowledgement` * Try writing `pending_statements_are_updated_after_manifest_exchange` * Document "seconding limit" and `reject_overflowing_manifests` test * Test that seconding counts are not updated for validators on error * Fix tests * Fix manifest exchange test * Add more tests in `requests.rs` (#6707) This resolves remaining TODOs in this file. * remove outdated inventory terminology * Async backing statement distribution: `Candidates` tests (#6658) * Async Backing: Fix clippy errors in statement distribution branch (#6720) * Integrate `handle_active_leaves_update` * Integrate `share_local_statement`/`handle_backed_candidate_message` * Start hooking up request/response flow * Finish hooking up request/response flow * Limit number of parallel requests in responder * Fix test compilation errors * Fix missing check for prospective parachains mode * Fix some more compile errors * Async Backing: Fix clippy errors in statement distribution branch * Fix some more clippy lints * add tests module * fix warnings in existing tests * create basic test harness * create a test state struct * fmt * create empty cluster & grid modules for tests * some TODOs for cluster test suite * describe test-suite for grid logic * describe request test suite * fix seconding-limit bug * Remove extraneous `pub` This somehow made it into my clippy PR. 
* Fix some test compile warnings * Remove some unneeded `allow`s * adapt some new test helpers from Marcin * add helper for activating a gossip topology * add utility for signing statements * helpers for connecting/disconnecting peers * round out network utilities * fmt * fix bug in initializing validator-meta * fix compilation * implement first cluster test * TODOs for incoming request tests * Remove unneeded `make_committed_candidate` helper * fmt * Hook up request sender * Add `valid_statement_without_prior_seconded_is_ignored` test * Fix `valid_statement_without_prior_seconded_is_ignored` test * some more tests for cluster * add a TODO about grid senders * integrate inbound req/res into test harness * polish off initial cluster test suite * keep introduce candidate request * fix tests after introduce candidate request * fmt * Add grid protocol to module docs * Remove obsolete test * Fix comments * Test `backed_in_path_only: true` * Update node/network/protocol/src/lib.rs Co-authored-by: Chris Sosnin <48099298+slumber@users.noreply.github.com> * Update node/network/protocol/src/request_response/mod.rs Co-authored-by: Chris Sosnin <48099298+slumber@users.noreply.github.com> * Mark receiver with `vstaging` * First draft of `ensure_seconding_limit_is_respected` test * validate grid senders based on manifest kind * fix mask_seconded/valid * fix unwanted-mask check * fix build * resolve todo on leaf mode * Unify protocol naming to vstaging * Fix `ensure_seconding_limit_is_respected` test * Start `backed_candidate_leads_to_advertisement` test * fmt, fix grid test after topology change * Send Backed notification * Finish `backed_candidate_leads_to_advertisement` test * Finish `peer_reported_for_duplicate_statements` test * Finish `received_advertisement_before_confirmation_leads_to_request` * Add `advertisements_rejected_from_incorrect_peers` test * Add `manifest_rejected_*` tests * Add `manifest_rejected_when_group_does_not_match_para` test * Add 
`local_node_sanity_checks_incoming_requests` test * Add `local_node_respects_statement_mask` test * Add tests where peer is reported for providing invalid signatures * Add `cluster_peer_allowed_to_send_incomplete_statements` test * Add `received_advertisement_after_backing_leads_to_acknowledgement` * Add `received_advertisement_after_confirmation_before_backing` test * peer_reported_for_advertisement_conflicting_with_confirmed_candidate * Add `peer_reported_for_not_enough_statements` test * Add `peer_reported_for_providing_statements_meant_to_be_masked_out` * Add `additional_statements_are_shared_after_manifest_exchange` * Add `grid_statements_imported_to_backing` test * Add `relay_parent_entering_peer_view_leads_to_advertisement` test * Add `advertisement_not_re_sent_when_peer_re_enters_view` test * Update node/network/statement-distribution/src/vstaging/tests/grid.rs Co-authored-by: asynchronous rob <rphmeier@gmail.com> * Resolve TODOs, update test * Address unused code * Add check after every test for unhandled requests * Refactor (`make_dummy_leaf` and `handle_sent_request`) * Refactor (`make_dummy_topology`) * Minor refactor --------- Co-authored-by: Robert Habermeier <rphmeier@gmail.com> Co-authored-by: Chris Sosnin <48099298+slumber@users.noreply.github.com> Co-authored-by: Chris Sosnin <chris125_@live.com> * Fix some clippy lints in tests * Async backing: minor fixes (#6920) * bitfield-distribution test * implicit view tests * Refactor parameters -> params * scheduler: update storage migration (#6963) * update scheduler migration * Adjust weight to account for storage read * Statement Distribution Guide Edits (#7025) * Statement distribution guide edits * Addressed Marcin's comments * Add attested candidate request retry timeouts (#6833) Co-authored-by: Chris Sosnin <48099298+slumber@users.noreply.github.com> Co-authored-by: asynchronous rob <rphmeier@gmail.com> Co-authored-by: Robert Habermeier <rphmeier@gmail.com> Co-authored-by: Chris Sosnin 
<chris125_@live.com> Fix async backing statement distribution tests (#6621) Resolve some todos in async backing statement-distribution branch (#6482) Fix clippy errors in statement distribution branch (#6720) * Async backing: add Prospective Parachains impl guide (#6933) Co-authored-by: Bradley Olson <34992650+BradleyOlson64@users.noreply.github.com> * Updates to Provisioner Guide for Async Backing (#7106) * Initial corrections and clarifications * Partial first draft * Finished first draft * Adding back wrongly removed test bit * fmt * Update roadmap/implementers-guide/src/node/utility/provisioner.md Co-authored-by: Marcin S. <marcin@realemail.net> * Addressing comments * Reorganization * fmt --------- Co-authored-by: Marcin S. <marcin@realemail.net> * fmt * Renaming Parathread Mentions (#7287) * Renaming parathreads * Renaming module to pallet * More updates * PVF: Refactor workers into separate crates, remove host dependency (#7253) * PVF: Refactor workers into separate crates, remove host dependency * Fix compile error * Remove some leftover code * Fix compile errors * Update Cargo.lock * Remove worker main.rs files I accidentally copied these from the other PR. This PR isn't intended to introduce standalone workers yet. 
* Address review comments * cargo fmt * Update a couple of comments * Update log targets * Update quote to 1.0.27 (#7280) Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io> Co-authored-by: parity-processbot <> * pallets: implement `Default` for `GenesisConfig` in `no_std` (#7271) * pallets: implement Default for GenesisConfig in no_std This change is follow-up of: https://github.com/paritytech/substrate/pull/14108 It is a step towards: https://github.com/paritytech/substrate/issues/13334 * Cargo.lock updated * update lockfile for {"substrate"} --------- Co-authored-by: parity-processbot <> * cli: enable BEEFY by default on test networks (#7293) We consider BEEFY mature enough to run by default on all nodes for test networks (Rococo/Wococo/Versi). Right now, most nodes are not running it since it's opt-in using --beefy flag. Switch to an opt-out model for test networks. Replace --beefy flag from CLI with --no-beefy and have BEEFY client start by default on test networks. Signed-off-by: acatangiu <adrian@parity.io> * runtime: past session slashing runtime API (#6667) * runtime/vstaging: unapplied_slashes runtime API * runtime/vstaging: key_ownership_proof runtime API * runtime/ParachainHost: submit_report_dispute_lost * fix key_ownership_proof API * runtime: submit_report_dispute_lost runtime API * nits * Update node/subsystem-types/src/messages.rs Co-authored-by: Marcin S. <marcin@bytedude.com> * revert unrelated fmt changes * post merge fixes * fix compilation --------- Co-authored-by: Marcin S. 
<marcin@bytedude.com> * Correcting git mishap * Document usage of `gum` crate (#7294) * Document usage of gum crate * Small fix * Add some more basic info * Update node/gum/src/lib.rs Co-authored-by: Andrei Sandu <54316454+sandreim@users.noreply.github.com> * Update target docs --------- Co-authored-by: Andrei Sandu <54316454+sandreim@users.noreply.github.com> * XCM: Fix issue with RequestUnlock (#7278) * XCM: Fix issue with RequestUnlock * Leave API changes for v4 * Fix clippy errors * Fix tests --------- Co-authored-by: parity-processbot <> * Companion for Substrate#14228 (#7295) * Companion for Substrate#14228 https://github.com/paritytech/substrate/pull/14228 * update lockfile for {"substrate"} --------- Co-authored-by: parity-processbot <> * Companion for #14237: Use latest sp-crates (#7300) * To revert: Update substrate branch to "lexnv/bump_sp_crates" Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io> * Revert "To revert: Update substrate branch to "lexnv/bump_sp_crates"" This reverts commit 5f1db84eac4a226c37b7f6ce6ee19b49dc7e2008. 
* Update cargo lock Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io> * Update cargo.lock Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io> * Update cargo.lock Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io> --------- Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io> * bounded-collections bump to 0.1.7 (#7305) * bounded-collections bump to 0.1.7 Companion for: paritytech/substrate#14225 * update lockfile for {"substrate"} --------- Co-authored-by: parity-processbot <> * bump to quote 1.0.28 (#7306) * `RollingSessionWindow` cleanup (#7204) * Replace `RollingSessionWindow` with `RuntimeInfo` - initial commit * Fix tests in import * Fix the rest of the tests * Remove dead code * Fix todos * Simplify session caching * Comments for `SessionInfoProvider` * Separate `SessionInfoProvider` from `State` * `cache_session_info_for_head` becomes freestanding function * Remove unneeded `mut` usage * fn session_info -> fn get_session_info() to avoid name clashes. 
The function also tries to initialize `SessionInfoProvider` * Fix SessionInfo retrieval * Code cleanup * Don't wrap `SessionInfoProvider` in an `Option` * Remove `earliest_session()` * Remove pre-caching -> wip * Fix some tests and code cleanup * Fix all tests * Fixes in tests * Fix comments, variable names and small style changes * Fix a warning * impl From<SessionWindowSize> for NonZeroUsize * Fix logging for `get_session_info` - remove redundant logs and decrease log level to DEBUG * Code review feedback * Storage migration removing `COL_SESSION_WINDOW_DATA` from parachains db * Remove `col_session_data` usages * Storage migration clearing columns w/o removing them * Remove session data column usages from `approval-voting` and `dispute-coordinator` tests * Add some test cases from `RollingSessionWindow` to `dispute-coordinator` tests * Fix formatting in initialized.rs * Fix a corner case in `SessionInfo` caching for `dispute-coordinator` * Remove `RollingSessionWindow` ;( * Revert "Fix formatting in initialized.rs" This reverts commit 0f94664ec9f3a7e3737a30291195990e1e7065fc. 
* v2 to v3 migration drops `COL_DISPUTE_COORDINATOR_DATA` instead of clearing it * Fix `NUM_COLUMNS` in `approval-voting` * Use `columns::v3::NUM_COLUMNS` when opening db * Update node/service/src/parachains_db/upgrade.rs Co-authored-by: Andrei Sandu <54316454+sandreim@users.noreply.github.com> * Don't write in `COL_DISPUTE_COORDINATOR_DATA` for `test_rocksdb_migrate_2_to_3` * Fix `NUM+COLUMNS` in approval_voting * Fix formatting * Fix columns usage * Clarification comments about the different db versions --------- Co-authored-by: Andrei Sandu <54316454+sandreim@users.noreply.github.com> * pallet-para-config: Remove remnant WeightInfo functions (#7308) * pallet-para-config: Remove remnant WeightInfo functions Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io> * set_config_with_weight begone Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io> * ".git/.scripts/commands/bench/bench.sh" runtime kusama-dev runtime_parachains::configuration --------- Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io> Co-authored-by: command-bot <> * XCM: PayOverXcm config (#6900) * Move XCM query functionality to trait * Fix tests * Add PayOverXcm implementation * fix the PayOverXcm trait to compile * moved doc comment out of trait implmeentation and to the trait * PayOverXCM documentation * Change documentation a bit * Added empty benchmark methods implementation and changed docs * update PayOverXCM to convert AccountIds to MultiLocations * Implement benchmarking method * Change v3 to latest * Descend origin to an asset sender (#6970) * descend origin to an asset sender * sender as tuple of dest and sender * Add more variants to the QueryResponseStatus enum * Change Beneficiary to Into<[u8; 32]> * update PayOverXcm to return concrete errors and use AccountId as sender * use polkadot-primitives for AccountId * fix dependency to use polkadot-core-primitives * force Unpaid instruction to the top of the instructions list * modify report_outcome to accept 
interior argument * use new_query directly for building final xcm query, instead of report_outcome * fix usage of new_query to use the XcmQueryHandler * fix usage of new_query to use the XcmQueryHandler * tiny method calling fix * xcm query handler (#7198) * drop redundant query status * rename ReportQueryStatus to OuterQueryStatus * revert rename of QueryResponseStatus * update mapping * Update xcm/xcm-builder/src/pay.rs Co-authored-by: Gavin Wood <gavin@parity.io> * Updates * Docs * Fix benchmarking stuff * Destination can be determined based on asset_kind * Tweaking API to minimise clones * Some repotting and docs --------- Co-authored-by: Anthony Alaribe <anthonyalaribe@gmail.com> Co-authored-by: Muharem Ismailov <ismailov.m.h@gmail.com> Co-authored-by: Anthony Alaribe <anthony.alaribe@parity.io> Co-authored-by: Gavin Wood <gavin@parity.io> * Companion for #14265 (#7307) * Update Cargo.lock Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io> * Update Cargo.lock Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io> --------- Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io> Co-authored-by: parity-processbot <> * bump serde to 1.0.163 (#7315) * bump serde to 1.0.163 * bump ci * update lockfile for {"substrate"} --------- Co-authored-by: parity-processbot <> * fmt * Updated fmt * Removing changes accidentally pulled from master * fix another master pull issue * Another master pull fix * fmt * Fixing implementers guide build * Revert "Merge branch 'rh-async-backing-feature-while-frozen' of https://github.com/paritytech/polkadot into brad-rename-parathread" This reverts commit bebc24af52ab61155e3fe02cb3ce66a592bce49c, reversing changes made to 1b2de662dfb11173679d6da5bd0da9d149c85547. 
--------- Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io> Signed-off-by: acatangiu <adrian@parity.io> Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io> Co-authored-by: Marcin S <marcin@realemail.net> Co-authored-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io> Co-authored-by: Michal Kucharczyk <1728078+michalkucharczyk@users.noreply.github.com> Co-authored-by: Adrian Catangiu <adrian@parity.io> Co-authored-by: ordian <write@reusable.software> Co-authored-by: Marcin S. <marcin@bytedude.com> Co-authored-by: Andrei Sandu <54316454+sandreim@users.noreply.github.com> Co-authored-by: Francisco Aguirre <franciscoaguirreperez@gmail.com> Co-authored-by: Bastian Köcher <git@kchr.de> Co-authored-by: Alexandru Vasile <60601340+lexnv@users.noreply.github.com> Co-authored-by: Sam Johnson <sam@durosoft.com> Co-authored-by: Tsvetomir Dimitrov <tsvetomir@parity.io> Co-authored-by: Anthony Alaribe <anthonyalaribe@gmail.com> Co-authored-by: Muharem Ismailov <ismailov.m.h@gmail.com> Co-authored-by: Anthony Alaribe <anthony.alaribe@parity.io> Co-authored-by: Gavin Wood <gavin@parity.io> * fix bitfield distribution test * approval distribution tests * fix bridge tests * update Cargo.lock * [async-backing-branch] Optimize collator-protocol validator-side request fetching (#7457) * Optimize collator-protocol validator-side request fetching * address feedback: replace tuples with structs * feedback: add doc comments * move collation types to subfolder --------- Signed-off-by: alindima <alin@parity.io> * Update collation generation for asynchronous backing (#7405) * break candidate receipt construction and distribution into own function * update implementers' guide to include SubmitCollation * implement SubmitCollation for collation-generation * fmt * fix test compilation & remove unnecessary submodule * add some TODOs for a test suite. 
* Update roadmap/implementers-guide/src/types/overseer-protocol.md Co-authored-by: Andrei Sandu <54316454+sandreim@users.noreply.github.com> * add new test harness and first test * refactor to avoid requiring background sender * ensure collation gets packaged and distributed * tests for the fallback case with no hint * add parent rp-number hint tests * fmt * update uses of CollationGenerationConfig * fix remaining test * address review comments * use subsystemsender for background tasks * fmt * remove ValidationCodeHashHint and related tests --------- Co-authored-by: Andrei Sandu <54316454+sandreim@users.noreply.github.com> * fix some more fallout from merge * fmt * remove staging APIs from Rococo & Westend (#7513) * send network messages on main protocol name (#7515) * misc async backing improvements for allowed ancestry blocks (#7532) * shared: fix acquire_info * backwards-compat test for prospective parachains * same relay parent is allowed * provisioner: request candidate receipt by relay parent (#7527) * return candidates hash from prospective parachains * update provisioner * update tests * guide changes * send a single message to backing * fix test * revert to old `handle_new_activations` logic in some cases (#7514) * revert to old `handle_new_activations` logic * gate sending messages on scheduled cores to max_depth >= 2 * fmt * 2->1 * Omnibus asynchronous backing bugfix PR (#7529) * fix a bug in backing * add some more logs * prospective parachains: take ancestry only up to session bounds * add test * fix zombienet tests (#7614) Signed-off-by: Andrei Sandu <andrei-mihail@parity.io> * fix runtime compilation * make bitfield distribution tests compile * attempt to fix zombienet disputes (#7618) * update metric name * update some metric names * avoid cycles when creating fake candidates * make undying collator more friendly to malformed parents * fix a bug in malus * fmt * clippy * add RUN_IN_CONTAINER to new ZombieNet tests (#7631) * remove duplicated 
migration happened because of master-merge --------- Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io> Signed-off-by: acatangiu <adrian@parity.io> Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io> Signed-off-by: alindima <alin@parity.io> Signed-off-by: Andrei Sandu <andrei-mihail@parity.io> Co-authored-by: Chris Sosnin <chris125_@live.com> Co-authored-by: Parity Bot <admin@parity.io> Co-authored-by: Chris Sosnin <48099298+slumber@users.noreply.github.com> Co-authored-by: Robert Klotzner <robert.klotzner@gmx.at> Co-authored-by: Robert Klotzner <eskimor@users.noreply.github.com> Co-authored-by: Marcin S <marcin@bytedude.com> Co-authored-by: Marcin S <marcin@realemail.net> Co-authored-by: Mattia L.V. Bradascio <28816406+bredamatt@users.noreply.github.com> Co-authored-by: Bradley Olson <34992650+BradleyOlson64@users.noreply.github.com> Co-authored-by: alexgparity <115470171+alexgparity@users.noreply.github.com> Co-authored-by: BradleyOlson64 <lotrftw9@gmail.com> Co-authored-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io> Co-authored-by: Michal Kucharczyk <1728078+michalkucharczyk@users.noreply.github.com> Co-authored-by: Adrian Catangiu <adrian@parity.io> Co-authored-by: ordian <write@reusable.software> Co-authored-by: Andrei Sandu <54316454+sandreim@users.noreply.github.com> Co-authored-by: Francisco Aguirre <franciscoaguirreperez@gmail.com> Co-authored-by: Bastian Köcher <git@kchr.de> Co-authored-by: Alexandru Vasile <60601340+lexnv@users.noreply.github.com> Co-authored-by: Sam Johnson <sam@durosoft.com> Co-authored-by: Tsvetomir Dimitrov <tsvetomir@parity.io> Co-authored-by: Anthony Alaribe <anthonyalaribe@gmail.com> Co-authored-by: Muharem Ismailov <ismailov.m.h@gmail.com> Co-authored-by: Anthony Alaribe <anthony.alaribe@parity.io> Co-authored-by: Gavin Wood <gavin@parity.io> Co-authored-by: Alin Dima <alin@parity.io>
@@ -18,9 +18,13 @@
 //! Error handling related code and Error/Result definitions.

 use polkadot_node_network_protocol::PeerId;
-use polkadot_node_subsystem::SubsystemError;
-use polkadot_node_subsystem_util::runtime;
-use polkadot_primitives::{CandidateHash, Hash};
+use polkadot_node_subsystem::{RuntimeApiError, SubsystemError};
+use polkadot_node_subsystem_util::{
+	backing_implicit_view::FetchError as ImplicitViewFetchError, runtime,
+};
+use polkadot_primitives::{CandidateHash, Hash, Id as ParaId};

 use futures::channel::oneshot;

 use crate::LOG_TARGET;

@@ -56,6 +60,27 @@ pub enum Error {
 	#[error("Error while accessing runtime information")]
 	Runtime(#[from] runtime::Error),

+	#[error("RuntimeAPISubsystem channel closed before receipt")]
+	RuntimeApiUnavailable(#[source] oneshot::Canceled),
+
+	#[error("Fetching persisted validation data for para {0:?}, {1:?}")]
+	FetchPersistedValidationData(ParaId, RuntimeApiError),
+
+	#[error("Fetching session index failed {0:?}")]
+	FetchSessionIndex(RuntimeApiError),
+
+	#[error("Fetching session info failed {0:?}")]
+	FetchSessionInfo(RuntimeApiError),
+
+	#[error("Fetching availability cores failed {0:?}")]
+	FetchAvailabilityCores(RuntimeApiError),
+
+	#[error("Fetching validator groups failed {0:?}")]
+	FetchValidatorGroups(RuntimeApiError),
+
 	#[error("Attempted to share statement when not a validator or not assigned")]
 	InvalidShare,

 	#[error("Relay parent could not be found in active heads")]
 	NoSuchHead(Hash),

@@ -76,6 +101,10 @@ pub enum Error {
 	// Responder no longer waits for our data. (Should not happen right now.)
 	#[error("Oneshot `GetData` channel closed")]
 	ResponderGetDataCanceled,
+
+	// Failed to activate leaf due to a fetch error.
+	#[error("Implicit view failure while activating leaf")]
+	ActivateLeafFailure(ImplicitViewFetchError),
 }

 /// Utility for eating top level errors and log them.
File diff suppressed because it is too large

+4 -1
@@ -32,7 +32,10 @@ use polkadot_node_subsystem::{Span, Stage};
 use polkadot_node_subsystem_util::TimeoutExt;
 use polkadot_primitives::{CandidateHash, CommittedCandidateReceipt, Hash};

-use crate::{metrics::Metrics, COST_WRONG_HASH, LOG_TARGET};
+use crate::{
+	legacy_v1::{COST_WRONG_HASH, LOG_TARGET},
+	metrics::Metrics,
+};

 // In case we failed fetching from our known peers, how long we should wait before attempting a
 // retry, even though we have not yet discovered any new peers. Or in other words how long to
+2 -2
@@ -48,8 +48,8 @@ pub enum ResponderMessage {

 /// A fetching task, taking care of fetching large statements via request/response.
 ///
-/// A fetch task does not know about a particular `Statement` instead it just tries fetching a
-/// `CommittedCandidateReceipt` from peers, whether this can be used to re-assemble one ore
+/// A fetch task does not know about a particular `Statement`, instead it just tries fetching a
+/// `CommittedCandidateReceipt` from peers, whether this can be used to re-assemble one or
 /// many `SignedFullStatement`s needs to be verified by the caller.
 pub async fn respond(
 	mut receiver: IncomingRequestReceiver<StatementFetchingRequest>,
+213 -28
@@ -14,7 +14,11 @@
 // You should have received a copy of the GNU General Public License
 // along with Polkadot. If not, see <http://www.gnu.org/licenses/>.

-use super::{metrics::Metrics, *};
+#![allow(clippy::clone_on_copy)]
+
+use super::*;
+use crate::{metrics::Metrics, *};

 use assert_matches::assert_matches;
 use futures::executor;
 use futures_timer::Delay;
@@ -26,19 +30,21 @@ use polkadot_node_network_protocol::{
 		v1::{StatementFetchingRequest, StatementFetchingResponse},
 		IncomingRequest, Recipient, ReqProtocolNames, Requests,
 	},
-	view, ObservedRole,
+	view, ObservedRole, VersionedValidationProtocol,
 };
-use polkadot_node_primitives::{Statement, UncheckedSignedFullStatement};
+use polkadot_node_primitives::{
+	SignedFullStatementWithPVD, Statement, UncheckedSignedFullStatement,
+};
 use polkadot_node_subsystem::{
 	jaeger,
 	messages::{
 		network_bridge_event, AllMessages, ReportPeerMessage, RuntimeApiMessage, RuntimeApiRequest,
 	},
-	ActivatedLeaf, LeafStatus,
+	ActivatedLeaf, LeafStatus, RuntimeApiError,
 };
 use polkadot_node_subsystem_test_helpers::mock::make_ferdie_keystore;
 use polkadot_primitives::{
-	GroupIndex, Hash, Id as ParaId, IndexedVec, SessionInfo, ValidationCode, ValidatorId,
+	GroupIndex, Hash, HeadData, Id as ParaId, IndexedVec, SessionInfo, ValidationCode,
 };
 use polkadot_primitives_test_helpers::{
 	dummy_committed_candidate_receipt, dummy_hash, AlwaysZeroRng,
@@ -54,6 +60,30 @@ use util::reputation::add_reputation;
 // Some deterministic genesis hash for protocol names
 const GENESIS_HASH: Hash = Hash::repeat_byte(0xff);

+const ASYNC_BACKING_DISABLED_ERROR: RuntimeApiError =
+	RuntimeApiError::NotSupported { runtime_api_name: "test-runtime" };
+
+fn dummy_pvd() -> PersistedValidationData {
+	PersistedValidationData {
+		parent_head: HeadData(vec![7, 8, 9]),
+		relay_parent_number: 5,
+		max_pov_size: 1024,
+		relay_parent_storage_root: Default::default(),
+	}
+}
+
+fn extend_statement_with_pvd(
+	statement: SignedFullStatement,
+	pvd: PersistedValidationData,
+) -> SignedFullStatementWithPVD {
+	statement
+		.convert_to_superpayload_with(|statement| match statement {
+			Statement::Seconded(receipt) => StatementWithPVD::Seconded(receipt, pvd),
+			Statement::Valid(candidate_hash) => StatementWithPVD::Valid(candidate_hash),
+		})
+		.unwrap()
+}
+
 #[test]
 fn active_head_accepts_only_2_seconded_per_validator() {
 	let validators = vec![
@@ -496,6 +526,7 @@ fn peer_view_update_sends_messages() {

 	let mut peer_data = PeerData {
 		view: old_view,
+		protocol_version: ValidationVersion::V1,
 		view_knowledge: {
 			let mut k = HashMap::new();
@@ -554,8 +585,9 @@ fn peer_view_update_sends_messages() {
 	for statement in active_head.statements_about(candidate_hash) {
 		let message = handle.recv().await;
 		let expected_to = vec![peer];
-		let expected_payload =
-			statement_message(hash_c, statement.statement.clone(), &Metrics::default());
+		let expected_payload = VersionedValidationProtocol::from(Versioned::V1(
+			v1_statement_message(hash_c, statement.statement.clone(), &Metrics::default()),
+		));

 		assert_matches!(
 			message,
@@ -596,6 +628,7 @@ fn circulated_statement_goes_to_all_peers_with_view() {

 	let peer_data_from_view = |view: View| PeerData {
 		view: view.clone(),
+		protocol_version: ValidationVersion::V1,
 		view_knowledge: view.iter().map(|v| (*v, Default::default())).collect(),
 		maybe_authority: None,
 	};
@@ -697,7 +730,7 @@ fn circulated_statement_goes_to_all_peers_with_view() {

 			assert_eq!(
 				payload,
-				statement_message(hash_b, statement.statement.clone(), &Metrics::default()),
+				VersionedValidationProtocol::from(Versioned::V1(v1_statement_message(hash_b, statement.statement.clone(), &Metrics::default()))),
 			);
 		}
 	)
@@ -706,12 +739,14 @@

 #[test]
 fn receiving_from_one_sends_to_another_and_to_candidate_backing() {
+	const PARA_ID: ParaId = ParaId::new(1);
 	let hash_a = Hash::repeat_byte(1);
+	let pvd = dummy_pvd();

 	let candidate = {
 		let mut c = dummy_committed_candidate_receipt(dummy_hash());
 		c.descriptor.relay_parent = hash_a;
-		c.descriptor.para_id = 1.into();
+		c.descriptor.para_id = PARA_ID;
 		c
 	};
@@ -733,11 +768,13 @@ fn receiving_from_one_sends_to_another_and_to_candidate_backing() {

 	let req_protocol_names = ReqProtocolNames::new(&GENESIS_HASH, None);
 	let (statement_req_receiver, _) = IncomingRequest::get_config_receiver(&req_protocol_names);
+	let (candidate_req_receiver, _) = IncomingRequest::get_config_receiver(&req_protocol_names);

 	let bg = async move {
 		let s = StatementDistributionSubsystem {
 			keystore: Arc::new(LocalKeystore::in_memory()),
-			req_receiver: Some(statement_req_receiver),
+			v1_req_receiver: Some(statement_req_receiver),
+			req_receiver: Some(candidate_req_receiver),
 			metrics: Default::default(),
 			rng: AlwaysZeroRng,
 			reputation: ReputationAggregator::new(|_| true),
@@ -758,6 +795,17 @@ fn receiving_from_one_sends_to_another_and_to_candidate_backing() {
|
||||
)))
|
||||
.await;
|
||||
|
||||
assert_matches!(
|
||||
handle.recv().await,
|
||||
AllMessages::RuntimeApi(
|
||||
RuntimeApiMessage::Request(r, RuntimeApiRequest::StagingAsyncBackingParams(tx))
|
||||
)
|
||||
if r == hash_a
|
||||
=> {
|
||||
let _ = tx.send(Err(ASYNC_BACKING_DISABLED_ERROR));
|
||||
}
|
||||
);
|
||||
|
||||
assert_matches!(
|
||||
handle.recv().await,
|
||||
AllMessages::RuntimeApi(
|
||||
@@ -862,18 +910,32 @@ fn receiving_from_one_sends_to_another_and_to_candidate_backing() {
|
||||
})
|
||||
.await;
|
||||
|
||||
let statement_with_pvd = extend_statement_with_pvd(statement.clone(), pvd.clone());
|
||||
|
||||
assert_matches!(
|
||||
handle.recv().await,
|
||||
AllMessages::RuntimeApi(RuntimeApiMessage::Request(
|
||||
hash,
|
||||
RuntimeApiRequest::PersistedValidationData(para_id, assumption, tx),
|
||||
)) if para_id == PARA_ID &&
|
||||
assumption == OccupiedCoreAssumption::Free &&
|
||||
hash == hash_a =>
|
||||
{
|
||||
tx.send(Ok(Some(pvd))).unwrap();
|
||||
}
|
||||
);
|
||||
|
||||
assert_matches!(
|
||||
handle.recv().await,
|
||||
AllMessages::NetworkBridgeTx(
|
||||
NetworkBridgeTxMessage::ReportPeer(ReportPeerMessage::Single(p, r))
|
||||
) if p == peer_a && r == BENEFIT_VALID_STATEMENT_FIRST.into() => {}
|
||||
);
|
||||
|
||||
assert_matches!(
|
||||
handle.recv().await,
|
||||
AllMessages::CandidateBacking(
|
||||
CandidateBackingMessage::Statement(r, s)
|
||||
) if r == hash_a && s == statement => {}
|
||||
) if r == hash_a && s == statement_with_pvd => {}
|
||||
);
|
||||
|
||||
assert_matches!(
|
||||
@@ -902,6 +964,9 @@ fn receiving_from_one_sends_to_another_and_to_candidate_backing() {
|
||||
|
||||
#[test]
|
||||
fn receiving_large_statement_from_one_sends_to_another_and_to_candidate_backing() {
|
||||
const PARA_ID: ParaId = ParaId::new(1);
|
||||
let pvd = dummy_pvd();
|
||||
|
||||
sp_tracing::try_init_simple();
|
||||
let hash_a = Hash::repeat_byte(1);
|
||||
let hash_b = Hash::repeat_byte(2);
|
||||
@@ -909,7 +974,7 @@ fn receiving_large_statement_from_one_sends_to_another_and_to_candidate_backing(
|
||||
let candidate = {
|
||||
let mut c = dummy_committed_candidate_receipt(dummy_hash());
|
||||
c.descriptor.relay_parent = hash_a;
|
||||
c.descriptor.para_id = 1.into();
|
||||
c.descriptor.para_id = PARA_ID;
|
||||
c.commitments.new_validation_code = Some(ValidationCode(vec![1, 2, 3]));
|
||||
c
|
||||
};
|
||||
@@ -937,11 +1002,13 @@ fn receiving_large_statement_from_one_sends_to_another_and_to_candidate_backing(
|
||||
let req_protocol_names = ReqProtocolNames::new(&GENESIS_HASH, None);
|
||||
let (statement_req_receiver, mut req_cfg) =
|
||||
IncomingRequest::get_config_receiver(&req_protocol_names);
|
||||
let (candidate_req_receiver, _) = IncomingRequest::get_config_receiver(&req_protocol_names);
|
||||
|
||||
let bg = async move {
|
||||
let s = StatementDistributionSubsystem {
|
||||
keystore: make_ferdie_keystore(),
|
||||
req_receiver: Some(statement_req_receiver),
|
||||
v1_req_receiver: Some(statement_req_receiver),
|
||||
req_receiver: Some(candidate_req_receiver),
|
||||
metrics: Default::default(),
|
||||
rng: AlwaysZeroRng,
|
||||
reputation: ReputationAggregator::new(|_| true),
|
||||
@@ -962,6 +1029,17 @@ fn receiving_large_statement_from_one_sends_to_another_and_to_candidate_backing(
|
||||
)))
|
||||
.await;
|
||||
|
||||
assert_matches!(
|
||||
handle.recv().await,
|
||||
AllMessages::RuntimeApi(
|
||||
RuntimeApiMessage::Request(r, RuntimeApiRequest::StagingAsyncBackingParams(tx))
|
||||
)
|
||||
if r == hash_a
|
||||
=> {
|
||||
let _ = tx.send(Err(ASYNC_BACKING_DISABLED_ERROR));
|
||||
}
|
||||
);
|
||||
|
||||
assert_matches!(
|
||||
handle.recv().await,
|
||||
AllMessages::RuntimeApi(
|
||||
@@ -1292,6 +1370,20 @@ fn receiving_large_statement_from_one_sends_to_another_and_to_candidate_backing(
|
||||
) if p == peer_c && r == BENEFIT_VALID_RESPONSE.into() => {}
|
||||
);
|
||||
|
||||
let statement_with_pvd = extend_statement_with_pvd(statement.clone(), pvd.clone());
|
||||
|
||||
assert_matches!(
|
||||
handle.recv().await,
|
||||
AllMessages::RuntimeApi(RuntimeApiMessage::Request(
|
||||
hash,
|
||||
RuntimeApiRequest::PersistedValidationData(para_id, assumption, tx),
|
||||
)) if para_id == PARA_ID &&
|
||||
assumption == OccupiedCoreAssumption::Free &&
|
||||
hash == hash_a =>
|
||||
{
|
||||
tx.send(Ok(Some(pvd))).unwrap();
|
||||
}
|
||||
);
|
||||
assert_matches!(
|
||||
handle.recv().await,
|
||||
AllMessages::NetworkBridgeTx(
|
||||
@@ -1303,7 +1395,7 @@ fn receiving_large_statement_from_one_sends_to_another_and_to_candidate_backing(
|
||||
handle.recv().await,
|
||||
AllMessages::CandidateBacking(
|
||||
CandidateBackingMessage::Statement(r, s)
|
||||
) if r == hash_a && s == statement => {}
|
||||
) if r == hash_a && s == statement_with_pvd => {}
|
||||
);
|
||||
|
||||
// Now messages should go out:
|
||||
@@ -1400,6 +1492,7 @@ fn receiving_large_statement_from_one_sends_to_another_and_to_candidate_backing(
|
||||
fn delay_reputation_changes() {
|
||||
sp_tracing::try_init_simple();
|
||||
let hash_a = Hash::repeat_byte(1);
|
||||
let pvd = dummy_pvd();
|
||||
|
||||
let candidate = {
|
||||
let mut c = dummy_committed_candidate_receipt(dummy_hash());
|
||||
@@ -1431,13 +1524,15 @@ fn delay_reputation_changes() {
|
||||
|
||||
let req_protocol_names = ReqProtocolNames::new(&GENESIS_HASH, None);
|
||||
let (statement_req_receiver, _) = IncomingRequest::get_config_receiver(&req_protocol_names);
|
||||
let (candidate_req_receiver, _) = IncomingRequest::get_config_receiver(&req_protocol_names);
|
||||
|
||||
let reputation_interval = Duration::from_millis(100);
|
||||
|
||||
let bg = async move {
|
||||
let s = StatementDistributionSubsystem {
|
||||
keystore: make_ferdie_keystore(),
|
||||
req_receiver: Some(statement_req_receiver),
|
||||
v1_req_receiver: Some(statement_req_receiver),
|
||||
req_receiver: Some(candidate_req_receiver),
|
||||
metrics: Default::default(),
|
||||
rng: AlwaysZeroRng,
|
||||
reputation: ReputationAggregator::new(|_| false),
|
||||
@@ -1458,6 +1553,17 @@ fn delay_reputation_changes() {
|
||||
)))
|
||||
.await;
|
||||
|
||||
assert_matches!(
|
||||
handle.recv().await,
|
||||
AllMessages::RuntimeApi(
|
||||
RuntimeApiMessage::Request(r, RuntimeApiRequest::StagingAsyncBackingParams(tx))
|
||||
)
|
||||
if r == hash_a
|
||||
=> {
|
||||
let _ = tx.send(Err(ASYNC_BACKING_DISABLED_ERROR));
|
||||
}
|
||||
);
|
||||
|
||||
assert_matches!(
|
||||
handle.recv().await,
|
||||
AllMessages::RuntimeApi(
|
||||
@@ -1768,9 +1874,18 @@ fn delay_reputation_changes() {
|
||||
|
||||
assert_matches!(
|
||||
handle.recv().await,
|
||||
AllMessages::CandidateBacking(
|
||||
CandidateBackingMessage::Statement(r, s)
|
||||
) if r == hash_a && s == statement => {}
|
||||
AllMessages::RuntimeApi(RuntimeApiMessage::Request(
|
||||
hash,
|
||||
RuntimeApiRequest::PersistedValidationData(_, assumption, tx),
|
||||
)) if assumption == OccupiedCoreAssumption::Free && hash == hash_a =>
|
||||
{
|
||||
tx.send(Ok(Some(pvd))).unwrap();
|
||||
}
|
||||
);
|
||||
|
||||
assert_matches!(
|
||||
handle.recv().await,
|
||||
AllMessages::CandidateBacking(CandidateBackingMessage::Statement(..))
|
||||
);
|
||||
|
||||
// Now messages should go out:
|
||||
@@ -1885,11 +2000,13 @@ fn share_prioritizes_backing_group() {
|
||||
let req_protocol_names = ReqProtocolNames::new(&GENESIS_HASH, None);
|
||||
let (statement_req_receiver, mut req_cfg) =
|
||||
IncomingRequest::get_config_receiver(&req_protocol_names);
|
||||
let (candidate_req_receiver, _) = IncomingRequest::get_config_receiver(&req_protocol_names);
|
||||
|
||||
let bg = async move {
|
||||
let s = StatementDistributionSubsystem {
|
||||
keystore: make_ferdie_keystore(),
|
||||
req_receiver: Some(statement_req_receiver),
|
||||
v1_req_receiver: Some(statement_req_receiver),
|
||||
req_receiver: Some(candidate_req_receiver),
|
||||
metrics: Default::default(),
|
||||
rng: AlwaysZeroRng,
|
||||
reputation: ReputationAggregator::new(|_| true),
|
||||
@@ -1910,6 +2027,17 @@ fn share_prioritizes_backing_group() {
|
||||
)))
|
||||
.await;
|
||||
|
||||
assert_matches!(
|
||||
handle.recv().await,
|
||||
AllMessages::RuntimeApi(
|
||||
RuntimeApiMessage::Request(r, RuntimeApiRequest::StagingAsyncBackingParams(tx))
|
||||
)
|
||||
if r == hash_a
|
||||
=> {
|
||||
let _ = tx.send(Err(ASYNC_BACKING_DISABLED_ERROR));
|
||||
}
|
||||
);
|
||||
|
||||
assert_matches!(
|
||||
handle.recv().await,
|
||||
AllMessages::RuntimeApi(
|
||||
@@ -2069,9 +2197,17 @@ fn share_prioritizes_backing_group() {
|
||||
)
|
||||
.unwrap();
|
||||
|
||||
SignedFullStatement::sign(
|
||||
// note: this is ignored by legacy-v1 code.
|
||||
let pvd = PersistedValidationData {
|
||||
parent_head: HeadData::from(vec![1, 2, 3]),
|
||||
relay_parent_number: 0,
|
||||
relay_parent_storage_root: Hash::repeat_byte(42),
|
||||
max_pov_size: 100,
|
||||
};
|
||||
|
||||
SignedFullStatementWithPVD::sign(
|
||||
&keystore,
|
||||
Statement::Seconded(candidate.clone()),
|
||||
Statement::Seconded(candidate.clone()).supply_pvd(pvd),
|
||||
&signing_context,
|
||||
ValidatorIndex(4),
|
||||
&ferdie_public.into(),
|
||||
@@ -2081,14 +2217,15 @@ fn share_prioritizes_backing_group() {
|
||||
.expect("should be signed")
|
||||
};
|
||||
|
||||
let metadata = derive_metadata_assuming_seconded(hash_a, statement.clone().into());
|
||||
|
||||
handle
|
||||
.send(FromOrchestra::Communication {
|
||||
msg: StatementDistributionMessage::Share(hash_a, statement.clone()),
|
||||
})
|
||||
.await;
|
||||
|
||||
let statement = StatementWithPVD::drop_pvd_from_signed(statement);
|
||||
let metadata = derive_metadata_assuming_seconded(hash_a, statement.clone().into());
|
||||
|
||||
// Messages should go out:
|
||||
assert_matches!(
|
||||
handle.recv().await,
|
||||
@@ -2180,10 +2317,12 @@ fn peer_cant_flood_with_large_statements() {
|
||||
|
||||
let req_protocol_names = ReqProtocolNames::new(&GENESIS_HASH, None);
|
||||
let (statement_req_receiver, _) = IncomingRequest::get_config_receiver(&req_protocol_names);
|
||||
let (candidate_req_receiver, _) = IncomingRequest::get_config_receiver(&req_protocol_names);
|
||||
let bg = async move {
|
||||
let s = StatementDistributionSubsystem {
|
||||
keystore: make_ferdie_keystore(),
|
||||
req_receiver: Some(statement_req_receiver),
|
||||
v1_req_receiver: Some(statement_req_receiver),
|
||||
req_receiver: Some(candidate_req_receiver),
|
||||
metrics: Default::default(),
|
||||
rng: AlwaysZeroRng,
|
||||
reputation: ReputationAggregator::new(|_| true),
|
||||
@@ -2204,6 +2343,17 @@ fn peer_cant_flood_with_large_statements() {
|
||||
)))
|
||||
.await;
|
||||
|
||||
assert_matches!(
|
||||
handle.recv().await,
|
||||
AllMessages::RuntimeApi(
|
||||
RuntimeApiMessage::Request(r, RuntimeApiRequest::StagingAsyncBackingParams(tx))
|
||||
)
|
||||
if r == hash_a
|
||||
=> {
|
||||
let _ = tx.send(Err(ASYNC_BACKING_DISABLED_ERROR));
|
||||
}
|
||||
);
|
||||
|
||||
assert_matches!(
|
||||
handle.recv().await,
|
||||
AllMessages::RuntimeApi(
|
||||
@@ -2341,6 +2491,7 @@ fn peer_cant_flood_with_large_statements() {
|
||||
#[test]
|
||||
fn handle_multiple_seconded_statements() {
|
||||
let relay_parent_hash = Hash::repeat_byte(1);
|
||||
let pvd = dummy_pvd();
|
||||
|
||||
let candidate = dummy_committed_candidate_receipt(relay_parent_hash);
|
||||
let candidate_hash = candidate.hash();
|
||||
@@ -2384,11 +2535,13 @@ fn handle_multiple_seconded_statements() {
|
||||
|
||||
let req_protocol_names = ReqProtocolNames::new(&GENESIS_HASH, None);
|
||||
let (statement_req_receiver, _) = IncomingRequest::get_config_receiver(&req_protocol_names);
|
||||
let (candidate_req_receiver, _) = IncomingRequest::get_config_receiver(&req_protocol_names);
|
||||
|
||||
let virtual_overseer_fut = async move {
|
||||
let s = StatementDistributionSubsystem {
|
||||
keystore: Arc::new(LocalKeystore::in_memory()),
|
||||
req_receiver: Some(statement_req_receiver),
|
||||
v1_req_receiver: Some(statement_req_receiver),
|
||||
req_receiver: Some(candidate_req_receiver),
|
||||
metrics: Default::default(),
|
||||
rng: AlwaysZeroRng,
|
||||
reputation: ReputationAggregator::new(|_| true),
|
||||
@@ -2409,6 +2562,17 @@ fn handle_multiple_seconded_statements() {
|
||||
)))
|
||||
.await;
|
||||
|
||||
assert_matches!(
|
||||
handle.recv().await,
|
||||
AllMessages::RuntimeApi(
|
||||
RuntimeApiMessage::Request(r, RuntimeApiRequest::StagingAsyncBackingParams(tx))
|
||||
)
|
||||
if r == relay_parent_hash
|
||||
=> {
|
||||
let _ = tx.send(Err(ASYNC_BACKING_DISABLED_ERROR));
|
||||
}
|
||||
);
|
||||
|
||||
assert_matches!(
|
||||
handle.recv().await,
|
||||
AllMessages::RuntimeApi(
|
||||
@@ -2575,6 +2739,18 @@ fn handle_multiple_seconded_statements() {
|
||||
})
|
||||
.await;
|
||||
|
||||
let statement_with_pvd = extend_statement_with_pvd(statement.clone(), pvd.clone());
|
||||
|
||||
assert_matches!(
|
||||
handle.recv().await,
|
||||
AllMessages::RuntimeApi(RuntimeApiMessage::Request(
|
||||
_,
|
||||
RuntimeApiRequest::PersistedValidationData(_, assumption, tx),
|
||||
)) if assumption == OccupiedCoreAssumption::Free => {
|
||||
tx.send(Ok(Some(pvd.clone()))).unwrap();
|
||||
}
|
||||
);
|
||||
|
||||
assert_matches!(
|
||||
handle.recv().await,
|
||||
AllMessages::NetworkBridgeTx(
|
||||
@@ -2592,7 +2768,7 @@ fn handle_multiple_seconded_statements() {
|
||||
CandidateBackingMessage::Statement(r, s)
|
||||
) => {
|
||||
assert_eq!(r, relay_parent_hash);
|
||||
assert_eq!(s, statement);
|
||||
assert_eq!(s, statement_with_pvd);
|
||||
}
|
||||
);
|
||||
|
||||
@@ -2676,6 +2852,10 @@ fn handle_multiple_seconded_statements() {
|
||||
})
|
||||
.await;
|
||||
|
||||
let statement_with_pvd = extend_statement_with_pvd(statement.clone(), pvd.clone());
|
||||
|
||||
// Persisted validation data is cached.
|
||||
|
||||
assert_matches!(
|
||||
handle.recv().await,
|
||||
AllMessages::NetworkBridgeTx(
|
||||
@@ -2692,7 +2872,7 @@ fn handle_multiple_seconded_statements() {
|
||||
CandidateBackingMessage::Statement(r, s)
|
||||
) => {
|
||||
assert_eq!(r, relay_parent_hash);
|
||||
assert_eq!(s, statement);
|
||||
assert_eq!(s, statement_with_pvd);
|
||||
}
|
||||
);
|
||||
|
||||
@@ -2784,3 +2964,8 @@ fn derive_metadata_assuming_seconded(
|
||||
signature: statement.unchecked_signature().clone(),
|
||||
}
|
||||
}
|
||||
|
||||
// TODO [now]: adapt most tests to v2 messages.
|
||||
// TODO [now]: test that v2 peers send v1 messages to v1 peers
|
||||
// TODO [now]: test that v2 peers handle v1 messages from v1 peers.
|
||||
// TODO [now]: test that v2 peers send v2 messages to v2 peers.
|
||||
File diff suppressed because it is too large
@@ -29,7 +29,7 @@ struct MetricsInner {
received_responses: prometheus::CounterVec<prometheus::U64>,
active_leaves_update: prometheus::Histogram,
share: prometheus::Histogram,
network_bridge_update_v1: prometheus::HistogramVec,
network_bridge_update: prometheus::HistogramVec,
statements_unexpected: prometheus::CounterVec<prometheus::U64>,
created_message_size: prometheus::Gauge<prometheus::U64>,
}
@@ -77,16 +77,13 @@ impl Metrics {
self.0.as_ref().map(|metrics| metrics.share.start_timer())
}

/// Provide a timer for `network_bridge_update_v1` which observes on drop.
pub fn time_network_bridge_update_v1(
/// Provide a timer for `network_bridge_update` which observes on drop.
pub fn time_network_bridge_update(
&self,
message_type: &'static str,
) -> Option<metrics::prometheus::prometheus::HistogramTimer> {
self.0.as_ref().map(|metrics| {
metrics
.network_bridge_update_v1
.with_label_values(&[message_type])
.start_timer()
metrics.network_bridge_update.with_label_values(&[message_type]).start_timer()
})
}

@@ -168,11 +165,11 @@ impl metrics::Metrics for Metrics {
)?,
registry,
)?,
network_bridge_update_v1: prometheus::register(
network_bridge_update: prometheus::register(
prometheus::HistogramVec::new(
prometheus::HistogramOpts::new(
"polkadot_parachain_statement_distribution_network_bridge_update_v1",
"Time spent within `statement_distribution::network_bridge_update_v1`",
"polkadot_parachain_statement_distribution_network_bridge_update",
"Time spent within `statement_distribution::network_bridge_update`",
)
.buckets(HISTOGRAM_LATENCY_BUCKETS.into()),
&["message_type"],
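The `time_network_bridge_update` helper above returns a timer that records its histogram observation when it is dropped. A minimal standalone sketch of that RAII "observe on drop" pattern, with an `Instant` and a shared sample vector standing in for the prometheus `HistogramTimer` (all names here are illustrative, not from the PR):

```rust
use std::cell::RefCell;
use std::rc::Rc;
use std::time::Instant;

// Records one observation (elapsed seconds) into `samples` when dropped,
// mirroring how a prometheus `HistogramTimer` observes on drop.
struct DropTimer {
    start: Instant,
    samples: Rc<RefCell<Vec<f64>>>,
}

impl Drop for DropTimer {
    fn drop(&mut self) {
        self.samples.borrow_mut().push(self.start.elapsed().as_secs_f64());
    }
}

// Analogous to `time_network_bridge_update`: hand out a timer tied to storage.
fn time_section(samples: &Rc<RefCell<Vec<f64>>>) -> DropTimer {
    DropTimer { start: Instant::now(), samples: Rc::clone(samples) }
}

fn main() {
    let samples = Rc::new(RefCell::new(Vec::new()));
    {
        let _timer = time_section(&samples);
        // ... timed work happens here ...
    } // timer dropped here: observation recorded
    assert_eq!(samples.borrow().len(), 1);
}
```

Because the observation happens in `Drop`, the caller cannot forget to stop the timer; every exit path from the timed scope records a sample.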
File diff suppressed because it is too large
File diff suppressed because it is too large
File diff suppressed because it is too large
@@ -0,0 +1,70 @@
// Copyright 2022 Parity Technologies (UK) Ltd.
// This file is part of Polkadot.

// Polkadot is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.

// Polkadot is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.

// You should have received a copy of the GNU General Public License
// along with Polkadot. If not, see <http://www.gnu.org/licenses/>.

//! A utility for tracking groups and their members within a session.

use polkadot_node_primitives::minimum_votes;
use polkadot_primitives::vstaging::{GroupIndex, IndexedVec, ValidatorIndex};

use std::collections::HashMap;

/// Validator groups within a session, plus some helpful indexing for
/// looking up groups by validator indices or authority discovery ID.
#[derive(Debug, Clone)]
pub struct Groups {
	groups: IndexedVec<GroupIndex, Vec<ValidatorIndex>>,
	by_validator_index: HashMap<ValidatorIndex, GroupIndex>,
}

impl Groups {
	/// Create a new [`Groups`] tracker with the groups and discovery keys
	/// from the session.
	pub fn new(groups: IndexedVec<GroupIndex, Vec<ValidatorIndex>>) -> Self {
		let mut by_validator_index = HashMap::new();

		for (i, group) in groups.iter().enumerate() {
			let index = GroupIndex(i as _);
			for v in group {
				by_validator_index.insert(*v, index);
			}
		}

		Groups { groups, by_validator_index }
	}

	/// Access all the underlying groups.
	pub fn all(&self) -> &IndexedVec<GroupIndex, Vec<ValidatorIndex>> {
		&self.groups
	}

	/// Get the underlying group validators by group index.
	pub fn get(&self, group_index: GroupIndex) -> Option<&[ValidatorIndex]> {
		self.groups.get(group_index).map(|x| &x[..])
	}

	/// Get the backing group size and backing threshold.
	pub fn get_size_and_backing_threshold(
		&self,
		group_index: GroupIndex,
	) -> Option<(usize, usize)> {
		self.get(group_index).map(|g| (g.len(), minimum_votes(g.len())))
	}

	/// Get the group index for a validator by index.
	pub fn by_validator_index(&self, validator_index: ValidatorIndex) -> Option<GroupIndex> {
		self.by_validator_index.get(&validator_index).map(|x| *x)
	}
}
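The reverse index built in `Groups::new` is the key design point: group membership is stored once, and a `HashMap` from validator index to group index makes `by_validator_index` an O(1) lookup. A standalone sketch of that indexing logic, with plain `u32`/`usize` stand-ins for `ValidatorIndex`/`GroupIndex` and a hypothetical `min_votes` standing in for `minimum_votes` (all simplified, not the PR's types):

```rust
use std::collections::HashMap;

// Hypothetical stand-in for `polkadot_node_primitives::minimum_votes`.
fn min_votes(group_len: usize) -> usize {
    group_len.min(2)
}

// Simplified mirror of `Groups`: the groups themselves, plus a reverse
// index from validator index to the group containing that validator.
struct Groups {
    groups: Vec<Vec<u32>>,
    by_validator_index: HashMap<u32, usize>,
}

impl Groups {
    fn new(groups: Vec<Vec<u32>>) -> Self {
        let mut by_validator_index = HashMap::new();
        for (i, group) in groups.iter().enumerate() {
            for v in group {
                by_validator_index.insert(*v, i);
            }
        }
        Groups { groups, by_validator_index }
    }

    // O(1) reverse lookup: which group does this validator belong to?
    fn by_validator_index(&self, v: u32) -> Option<usize> {
        self.by_validator_index.get(&v).copied()
    }

    fn get_size_and_backing_threshold(&self, g: usize) -> Option<(usize, usize)> {
        self.groups.get(g).map(|grp| (grp.len(), min_votes(grp.len())))
    }
}

fn main() {
    // Two groups of three validators each.
    let groups = Groups::new(vec![vec![0, 1, 2], vec![3, 4, 5]]);
    assert_eq!(groups.by_validator_index(4), Some(1));
    assert_eq!(groups.by_validator_index(9), None);
    assert_eq!(groups.get_size_and_backing_threshold(0), Some((3, 2)));
}
```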
File diff suppressed because it is too large
File diff suppressed because it is too large
@@ -0,0 +1,283 @@
// Copyright 2022 Parity Technologies (UK) Ltd.
// This file is part of Polkadot.

// Polkadot is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.

// Polkadot is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.

// You should have received a copy of the GNU General Public License
// along with Polkadot. If not, see <http://www.gnu.org/licenses/>.

//! A store of all statements under a given relay-parent.
//!
//! This structure doesn't attempt to do any spam protection, which must
//! be provided at a higher level.
//!
//! This keeps track of statements submitted with a number of different
//! views into this data: views based on the candidate, views based on the validator
//! groups, and views based on the validators themselves.

use bitvec::{order::Lsb0 as BitOrderLsb0, vec::BitVec};
use polkadot_node_network_protocol::vstaging::StatementFilter;
use polkadot_primitives::vstaging::{
	CandidateHash, CompactStatement, GroupIndex, SignedStatement, ValidatorIndex,
};
use std::collections::hash_map::{Entry as HEntry, HashMap};

use super::groups::Groups;

/// Possible origins of a statement.
pub enum StatementOrigin {
	/// The statement originated locally.
	Local,
	/// The statement originated from a remote peer.
	Remote,
}

impl StatementOrigin {
	fn is_local(&self) -> bool {
		match *self {
			StatementOrigin::Local => true,
			StatementOrigin::Remote => false,
		}
	}
}

struct StoredStatement {
	statement: SignedStatement,
	known_by_backing: bool,
}

/// Storage for statements. Intended to be used for statements signed under
/// the same relay-parent. See module docs for more details.
pub struct StatementStore {
	validator_meta: HashMap<ValidatorIndex, ValidatorMeta>,

	// we keep statements per-group because even though only one group _should_ be
	// producing statements about a candidate, until we have the candidate receipt
	// itself, we can't tell which group that is.
	group_statements: HashMap<(GroupIndex, CandidateHash), GroupStatements>,
	known_statements: HashMap<Fingerprint, StoredStatement>,
}

impl StatementStore {
	/// Create a new [`StatementStore`]
	pub fn new(groups: &Groups) -> Self {
		let mut validator_meta = HashMap::new();
		for (g, group) in groups.all().iter().enumerate() {
			for (i, v) in group.iter().enumerate() {
				validator_meta.insert(
					*v,
					ValidatorMeta {
						seconded_count: 0,
						within_group_index: i,
						group: GroupIndex(g as _),
					},
				);
			}
		}

		StatementStore {
			validator_meta,
			group_statements: HashMap::new(),
			known_statements: HashMap::new(),
		}
	}

	/// Insert a statement. Returns `true` if it was not known already, `false` if it was.
	/// Ignores statements by unknown validators and returns an error.
	pub fn insert(
		&mut self,
		groups: &Groups,
		statement: SignedStatement,
		origin: StatementOrigin,
	) -> Result<bool, ValidatorUnknown> {
		let validator_index = statement.validator_index();
		let validator_meta = match self.validator_meta.get_mut(&validator_index) {
			None => return Err(ValidatorUnknown),
			Some(m) => m,
		};

		let compact = statement.payload().clone();
		let fingerprint = (validator_index, compact.clone());
		match self.known_statements.entry(fingerprint) {
			HEntry::Occupied(mut e) => {
				if let StatementOrigin::Local = origin {
					e.get_mut().known_by_backing = true;
				}

				return Ok(false)
			},
			HEntry::Vacant(e) => {
				e.insert(StoredStatement { statement, known_by_backing: origin.is_local() });
			},
		}

		let candidate_hash = *compact.candidate_hash();
		let seconded = if let CompactStatement::Seconded(_) = compact { true } else { false };

		// cross-reference updates.
		{
			let group_index = validator_meta.group;
			let group = match groups.get(group_index) {
				Some(g) => g,
				None => {
					gum::error!(
						target: crate::LOG_TARGET,
						?group_index,
						"groups passed into `insert` differ from those used at store creation"
					);

					return Err(ValidatorUnknown)
				},
			};

			let group_statements = self
				.group_statements
				.entry((group_index, candidate_hash))
				.or_insert_with(|| GroupStatements::with_group_size(group.len()));

			if seconded {
				validator_meta.seconded_count += 1;
				group_statements.note_seconded(validator_meta.within_group_index);
			} else {
				group_statements.note_validated(validator_meta.within_group_index);
			}
		}

		Ok(true)
	}

	/// Fill a `StatementFilter` to be used in the grid topology with all statements
	/// we are already aware of.
	pub fn fill_statement_filter(
		&self,
		group_index: GroupIndex,
		candidate_hash: CandidateHash,
		statement_filter: &mut StatementFilter,
	) {
		if let Some(statements) = self.group_statements.get(&(group_index, candidate_hash)) {
			statement_filter.seconded_in_group |= statements.seconded.as_bitslice();
			statement_filter.validated_in_group |= statements.valid.as_bitslice();
		}
	}

	/// Get an iterator over stored signed statements by the group conforming to the
	/// given filter.
	///
	/// Seconded statements are provided first.
	pub fn group_statements<'a>(
		&'a self,
		groups: &'a Groups,
		group_index: GroupIndex,
		candidate_hash: CandidateHash,
		filter: &'a StatementFilter,
	) -> impl Iterator<Item = &'a SignedStatement> + 'a {
		let group_validators = groups.get(group_index);

		let seconded_statements = filter
			.seconded_in_group
			.iter_ones()
			.filter_map(move |i| group_validators.as_ref().and_then(|g| g.get(i)))
			.filter_map(move |v| {
				self.known_statements.get(&(*v, CompactStatement::Seconded(candidate_hash)))
			})
			.map(|s| &s.statement);

		let valid_statements = filter
			.validated_in_group
			.iter_ones()
			.filter_map(move |i| group_validators.as_ref().and_then(|g| g.get(i)))
			.filter_map(move |v| {
				self.known_statements.get(&(*v, CompactStatement::Valid(candidate_hash)))
			})
			.map(|s| &s.statement);

		seconded_statements.chain(valid_statements)
	}

	/// Get the full statement of this kind issued by this validator, if it is known.
	pub fn validator_statement(
		&self,
		validator_index: ValidatorIndex,
		statement: CompactStatement,
	) -> Option<&SignedStatement> {
		self.known_statements.get(&(validator_index, statement)).map(|s| &s.statement)
	}

	/// Get an iterator over all statements marked as being unknown by the backing subsystem.
	pub fn fresh_statements_for_backing<'a>(
		&'a self,
		validators: &'a [ValidatorIndex],
		candidate_hash: CandidateHash,
	) -> impl Iterator<Item = &SignedStatement> + 'a {
		let s_st = CompactStatement::Seconded(candidate_hash);
		let v_st = CompactStatement::Valid(candidate_hash);

		validators
			.iter()
			.flat_map(move |v| {
				let a = self.known_statements.get(&(*v, s_st.clone()));
				let b = self.known_statements.get(&(*v, v_st.clone()));

				a.into_iter().chain(b)
			})
			.filter(|stored| !stored.known_by_backing)
			.map(|stored| &stored.statement)
	}

	/// Get the number of known `Seconded` statements by the given validator index.
	pub fn seconded_count(&self, validator_index: &ValidatorIndex) -> usize {
		self.validator_meta.get(validator_index).map_or(0, |m| m.seconded_count)
	}

	/// Note that a statement is known by the backing subsystem.
	pub fn note_known_by_backing(
		&mut self,
		validator_index: ValidatorIndex,
		statement: CompactStatement,
	) {
		if let Some(stored) = self.known_statements.get_mut(&(validator_index, statement)) {
			stored.known_by_backing = true;
		}
	}
}

/// Error indicating that the validator was unknown.
pub struct ValidatorUnknown;

type Fingerprint = (ValidatorIndex, CompactStatement);

struct ValidatorMeta {
	group: GroupIndex,
	within_group_index: usize,
	seconded_count: usize,
}

struct GroupStatements {
	seconded: BitVec<u8, BitOrderLsb0>,
	valid: BitVec<u8, BitOrderLsb0>,
}

impl GroupStatements {
	fn with_group_size(group_size: usize) -> Self {
		GroupStatements {
			seconded: BitVec::repeat(false, group_size),
			valid: BitVec::repeat(false, group_size),
		}
	}

	fn note_seconded(&mut self, within_group_index: usize) {
		self.seconded.set(within_group_index, true);
	}

	fn note_validated(&mut self, within_group_index: usize) {
		self.valid.set(within_group_index, true);
	}
}
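The core of `StatementStore::insert` is deduplication by fingerprint (validator index plus compact statement) combined with per-group bitfields tracking who has seconded or validated a candidate. A much-simplified standalone sketch of those two mechanisms, using a `u64` as a stand-in candidate hash, `Vec<bool>` in place of `BitVec`, and hypothetical names throughout:

```rust
use std::collections::HashMap;

// Simplified stand-in for `CompactStatement`: Seconded or Valid for a candidate,
// with a u64 standing in for `CandidateHash`.
#[derive(Clone, PartialEq, Eq, Hash)]
enum Compact {
    Seconded(u64),
    Valid(u64),
}

// Mirrors the dedup + bitfield logic of `StatementStore` for a single group
// and candidate. Real code keys bitfields by (group, candidate).
struct Store {
    known: HashMap<(u32, Compact), ()>, // fingerprint -> stored statement
    seconded_in_group: Vec<bool>,       // one bit per group member
}

impl Store {
    fn new(group_size: usize) -> Self {
        Store { known: HashMap::new(), seconded_in_group: vec![false; group_size] }
    }

    // Returns true only on the first insertion of a fingerprint, like
    // `StatementStore::insert` returning Ok(true) for fresh statements.
    fn insert(&mut self, validator: u32, within_group_index: usize, stmt: Compact) -> bool {
        let key = (validator, stmt.clone());
        if self.known.contains_key(&key) {
            return false; // duplicate fingerprint: not fresh
        }
        self.known.insert(key, ());
        if let Compact::Seconded(_) = stmt {
            self.seconded_in_group[within_group_index] = true;
        }
        true
    }
}

fn main() {
    let mut store = Store::new(3);
    assert!(store.insert(7, 0, Compact::Seconded(0xAA)));
    // The same statement from the same validator is not fresh.
    assert!(!store.insert(7, 0, Compact::Seconded(0xAA)));
    // `Valid` from another group member is fresh but sets no seconded bit.
    assert!(store.insert(8, 1, Compact::Valid(0xAA)));
    assert_eq!(store.seconded_in_group, vec![true, false, false]);
}
```

The bitfields are what make `fill_statement_filter` cheap: advertising known statements to grid peers is an OR over two fixed-size bit vectors rather than a scan of the statement map.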
File diff suppressed because it is too large
File diff suppressed because it is too large
@@ -0,0 +1,606 @@
|
||||
// Copyright 2023 Parity Technologies (UK) Ltd.
// This file is part of Polkadot.

// Polkadot is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.

// Polkadot is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.

// You should have received a copy of the GNU General Public License
// along with Polkadot. If not, see <http://www.gnu.org/licenses/>.

#![allow(clippy::clone_on_copy)]

use super::*;
use crate::*;
use polkadot_node_network_protocol::{
	grid_topology::TopologyPeerInfo,
	request_response::{outgoing::Recipient, ReqProtocolNames},
	view, ObservedRole,
};
use polkadot_node_primitives::Statement;
use polkadot_node_subsystem::messages::{
	network_bridge_event::NewGossipTopology, AllMessages, ChainApiMessage, FragmentTreeMembership,
	HypotheticalCandidate, NetworkBridgeEvent, ProspectiveParachainsMessage, ReportPeerMessage,
	RuntimeApiMessage, RuntimeApiRequest,
};
use polkadot_node_subsystem_test_helpers as test_helpers;
use polkadot_node_subsystem_types::{jaeger, ActivatedLeaf, LeafStatus};
use polkadot_node_subsystem_util::TimeoutExt;
use polkadot_primitives::vstaging::{
	AssignmentPair, AsyncBackingParams, BlockNumber, CommittedCandidateReceipt, CoreState,
	GroupRotationInfo, HeadData, Header, IndexedVec, PersistedValidationData, ScheduledCore,
	SessionIndex, SessionInfo, ValidatorPair,
};
use sc_keystore::LocalKeystore;
use sp_application_crypto::Pair as PairT;
use sp_authority_discovery::AuthorityPair as AuthorityDiscoveryPair;
use sp_keyring::Sr25519Keyring;

use assert_matches::assert_matches;
use futures::Future;
use parity_scale_codec::Encode;
use rand::{Rng, SeedableRng};

use std::sync::Arc;

mod cluster;
mod grid;
mod requests;

type VirtualOverseer = test_helpers::TestSubsystemContextHandle<StatementDistributionMessage>;

const DEFAULT_ASYNC_BACKING_PARAMETERS: AsyncBackingParams =
	AsyncBackingParams { max_candidate_depth: 4, allowed_ancestry_len: 3 };

// Some deterministic genesis hash for req/res protocol names
const GENESIS_HASH: Hash = Hash::repeat_byte(0xff);

struct TestConfig {
	validator_count: usize,
	// how many validators to place in each group.
	group_size: usize,
	// whether the local node should be a validator
	local_validator: bool,
	async_backing_params: Option<AsyncBackingParams>,
}

#[derive(Debug, Clone)]
struct TestLocalValidator {
	validator_index: ValidatorIndex,
	group_index: GroupIndex,
}

struct TestState {
	config: TestConfig,
	local: Option<TestLocalValidator>,
	validators: Vec<ValidatorPair>,
	session_info: SessionInfo,
	req_sender: async_channel::Sender<sc_network::config::IncomingRequest>,
}

impl TestState {
	fn from_config(
		config: TestConfig,
		req_sender: async_channel::Sender<sc_network::config::IncomingRequest>,
		rng: &mut impl Rng,
	) -> Self {
		if config.group_size == 0 {
			panic!("group size cannot be 0");
		}

		let mut validators = Vec::new();
		let mut discovery_keys = Vec::new();
		let mut assignment_keys = Vec::new();
		let mut validator_groups = Vec::new();

		let local_validator_pos = if config.local_validator {
			// ensure local validator is always in a full group.
			Some(rng.gen_range(0..config.validator_count).saturating_sub(config.group_size - 1))
		} else {
			None
		};

		for i in 0..config.validator_count {
			let validator_pair = if Some(i) == local_validator_pos {
				// Note: the specific key is used to ensure the keystore holds
				// this key and the subsystem can detect that it is a validator.
				Sr25519Keyring::Ferdie.pair().into()
			} else {
				ValidatorPair::generate().0
			};
			let assignment_id = AssignmentPair::generate().0.public();
			let discovery_id = AuthorityDiscoveryPair::generate().0.public();

			let group_index = i / config.group_size;
			validators.push(validator_pair);
			discovery_keys.push(discovery_id);
			assignment_keys.push(assignment_id);
			if validator_groups.len() == group_index {
				validator_groups.push(vec![ValidatorIndex(i as _)]);
			} else {
				validator_groups.last_mut().unwrap().push(ValidatorIndex(i as _));
			}
		}

		let local = if let Some(local_pos) = local_validator_pos {
			Some(TestLocalValidator {
				validator_index: ValidatorIndex(local_pos as _),
				group_index: GroupIndex((local_pos / config.group_size) as _),
			})
		} else {
			None
		};

		let validator_public = validator_pubkeys(&validators);
		let session_info = SessionInfo {
			validators: validator_public,
			discovery_keys,
			validator_groups: IndexedVec::from(validator_groups),
			assignment_keys,
			n_cores: 0,
			zeroth_delay_tranche_width: 0,
			relay_vrf_modulo_samples: 0,
			n_delay_tranches: 0,
			no_show_slots: 0,
			needed_approvals: 0,
			active_validator_indices: vec![],
			dispute_period: 6,
			random_seed: [0u8; 32],
		};

		TestState { config, local, validators, session_info, req_sender }
	}

	fn make_dummy_leaf(&self, relay_parent: Hash) -> TestLeaf {
		TestLeaf {
			number: 1,
			hash: relay_parent,
			parent_hash: Hash::repeat_byte(0),
			session: 1,
			availability_cores: self.make_availability_cores(|i| {
				CoreState::Scheduled(ScheduledCore {
					para_id: ParaId::from(i as u32),
					collator: None,
				})
			}),
			para_data: (0..self.session_info.validator_groups.len())
				.map(|i| (ParaId::from(i as u32), PerParaData::new(1, vec![1, 2, 3].into())))
				.collect(),
		}
	}

	fn make_availability_cores(&self, f: impl Fn(usize) -> CoreState) -> Vec<CoreState> {
		(0..self.session_info.validator_groups.len()).map(f).collect()
	}

	fn make_dummy_topology(&self) -> NewGossipTopology {
		let validator_count = self.config.validator_count;
		NewGossipTopology {
			session: 1,
			topology: SessionGridTopology::new(
				(0..validator_count).collect(),
				(0..validator_count)
					.map(|i| TopologyPeerInfo {
						peer_ids: Vec::new(),
						validator_index: ValidatorIndex(i as u32),
						discovery_id: AuthorityDiscoveryPair::generate().0.public(),
					})
					.collect(),
			),
			local_index: self.local.as_ref().map(|local| local.validator_index),
		}
	}

	fn group_validators(
		&self,
		group_index: GroupIndex,
		exclude_local: bool,
	) -> Vec<ValidatorIndex> {
		self.session_info
			.validator_groups
			.get(group_index)
			.unwrap()
			.iter()
			.cloned()
			.filter(|&i| {
				self.local.as_ref().map_or(true, |l| !exclude_local || l.validator_index != i)
			})
			.collect()
	}

	fn discovery_id(&self, validator_index: ValidatorIndex) -> AuthorityDiscoveryId {
		self.session_info.discovery_keys[validator_index.0 as usize].clone()
	}

	fn sign_statement(
		&self,
		validator_index: ValidatorIndex,
		statement: CompactStatement,
		context: &SigningContext,
	) -> SignedStatement {
		let payload = statement.signing_payload(context);
		let pair = &self.validators[validator_index.0 as usize];
		let signature = pair.sign(&payload[..]);

		SignedStatement::new(statement, validator_index, signature, context, &pair.public())
			.unwrap()
	}

	fn sign_full_statement(
		&self,
		validator_index: ValidatorIndex,
		statement: Statement,
		context: &SigningContext,
		pvd: PersistedValidationData,
	) -> SignedFullStatementWithPVD {
		let payload = statement.to_compact().signing_payload(context);
		let pair = &self.validators[validator_index.0 as usize];
		let signature = pair.sign(&payload[..]);

		SignedFullStatementWithPVD::new(
			statement.supply_pvd(pvd),
			validator_index,
			signature,
			context,
			&pair.public(),
		)
		.unwrap()
	}

	// send a request out, returning a future which expects a response.
	async fn send_request(
		&mut self,
		peer: PeerId,
		request: AttestedCandidateRequest,
	) -> impl Future<Output = sc_network::config::OutgoingResponse> {
		let (tx, rx) = futures::channel::oneshot::channel();
		let req = sc_network::config::IncomingRequest {
			peer,
			payload: request.encode(),
			pending_response: tx,
		};
		self.req_sender.send(req).await.unwrap();

		rx.map(|r| r.unwrap())
	}
}

fn test_harness<T: Future<Output = VirtualOverseer>>(
	config: TestConfig,
	test: impl FnOnce(TestState, VirtualOverseer) -> T,
) {
	let pool = sp_core::testing::TaskExecutor::new();
	let keystore = if config.local_validator {
		test_helpers::mock::make_ferdie_keystore()
	} else {
		Arc::new(LocalKeystore::in_memory()) as KeystorePtr
	};
	let req_protocol_names = ReqProtocolNames::new(&GENESIS_HASH, None);
	let (statement_req_receiver, _) = IncomingRequest::get_config_receiver(&req_protocol_names);
	let (candidate_req_receiver, req_cfg) =
		IncomingRequest::get_config_receiver(&req_protocol_names);
	let mut rng = rand_chacha::ChaCha8Rng::seed_from_u64(0);

	let test_state = TestState::from_config(config, req_cfg.inbound_queue.unwrap(), &mut rng);

	let (context, virtual_overseer) = test_helpers::make_subsystem_context(pool.clone());
	let subsystem = async move {
		let subsystem = crate::StatementDistributionSubsystem {
			keystore,
			v1_req_receiver: Some(statement_req_receiver),
			req_receiver: Some(candidate_req_receiver),
			metrics: Default::default(),
			rng,
			reputation: ReputationAggregator::new(|_| true),
		};

		if let Err(e) = subsystem.run(context).await {
			panic!("Fatal error: {:?}", e);
		}
	};

	let test_fut = test(test_state, virtual_overseer);

	futures::pin_mut!(test_fut);
	futures::pin_mut!(subsystem);
	futures::executor::block_on(future::join(
		async move {
			let mut virtual_overseer = test_fut.await;
			// Ensure we have handled all responses.
			if let Ok(Some(msg)) = virtual_overseer.rx.try_next() {
				panic!("Did not handle all responses: {:?}", msg);
			}
			// Conclude.
			virtual_overseer.send(FromOrchestra::Signal(OverseerSignal::Conclude)).await;
		},
		subsystem,
	));
}

struct PerParaData {
	min_relay_parent: BlockNumber,
	head_data: HeadData,
}

impl PerParaData {
	pub fn new(min_relay_parent: BlockNumber, head_data: HeadData) -> Self {
		Self { min_relay_parent, head_data }
	}
}

struct TestLeaf {
	number: BlockNumber,
	hash: Hash,
	parent_hash: Hash,
	session: SessionIndex,
	availability_cores: Vec<CoreState>,
	para_data: Vec<(ParaId, PerParaData)>,
}

impl TestLeaf {
	pub fn para_data(&self, para_id: ParaId) -> &PerParaData {
		self.para_data
			.iter()
			.find_map(|(p_id, data)| if *p_id == para_id { Some(data) } else { None })
			.unwrap()
	}
}

async fn activate_leaf(
	virtual_overseer: &mut VirtualOverseer,
	leaf: &TestLeaf,
	test_state: &TestState,
	expect_session_info_request: bool,
) {
	let activated = ActivatedLeaf {
		hash: leaf.hash,
		number: leaf.number,
		status: LeafStatus::Fresh,
		span: Arc::new(jaeger::Span::Disabled),
	};

	virtual_overseer
		.send(FromOrchestra::Signal(OverseerSignal::ActiveLeaves(ActiveLeavesUpdate::start_work(
			activated,
		))))
		.await;

	handle_leaf_activation(virtual_overseer, leaf, test_state, expect_session_info_request).await;
}

async fn handle_leaf_activation(
	virtual_overseer: &mut VirtualOverseer,
	leaf: &TestLeaf,
	test_state: &TestState,
	expect_session_info_request: bool,
) {
	let TestLeaf { number, hash, parent_hash, para_data, session, availability_cores } = leaf;

	assert_matches!(
		virtual_overseer.recv().await,
		AllMessages::RuntimeApi(
			RuntimeApiMessage::Request(parent, RuntimeApiRequest::StagingAsyncBackingParams(tx))
		) if parent == *hash => {
			tx.send(Ok(test_state.config.async_backing_params.unwrap_or(DEFAULT_ASYNC_BACKING_PARAMETERS))).unwrap();
		}
	);

	let mrp_response: Vec<(ParaId, BlockNumber)> = para_data
		.iter()
		.map(|(para_id, data)| (*para_id, data.min_relay_parent))
		.collect();
	assert_matches!(
		virtual_overseer.recv().await,
		AllMessages::ProspectiveParachains(
			ProspectiveParachainsMessage::GetMinimumRelayParents(parent, tx)
		) if parent == *hash => {
			tx.send(mrp_response).unwrap();
		}
	);

	let header = Header {
		parent_hash: *parent_hash,
		number: *number,
		state_root: Hash::zero(),
		extrinsics_root: Hash::zero(),
		digest: Default::default(),
	};
	assert_matches!(
		virtual_overseer.recv().await,
		AllMessages::ChainApi(
			ChainApiMessage::BlockHeader(parent, tx)
		) if parent == *hash => {
			tx.send(Ok(Some(header))).unwrap();
		}
	);

	assert_matches!(
		virtual_overseer.recv().await,
		AllMessages::RuntimeApi(
			RuntimeApiMessage::Request(parent, RuntimeApiRequest::SessionIndexForChild(tx))) if parent == *hash => {
			tx.send(Ok(*session)).unwrap();
		}
	);

	assert_matches!(
		virtual_overseer.recv().await,
		AllMessages::RuntimeApi(
			RuntimeApiMessage::Request(parent, RuntimeApiRequest::AvailabilityCores(tx))) if parent == *hash => {
			tx.send(Ok(availability_cores.clone())).unwrap();
		}
	);

	let validator_groups = test_state.session_info.validator_groups.to_vec();
	let group_rotation_info =
		GroupRotationInfo { session_start_block: 1, group_rotation_frequency: 12, now: 1 };
	assert_matches!(
		virtual_overseer.recv().await,
		AllMessages::RuntimeApi(
			RuntimeApiMessage::Request(parent, RuntimeApiRequest::ValidatorGroups(tx))) if parent == *hash => {
			tx.send(Ok((validator_groups, group_rotation_info))).unwrap();
		}
	);

	if expect_session_info_request {
		assert_matches!(
			virtual_overseer.recv().await,
			AllMessages::RuntimeApi(
				RuntimeApiMessage::Request(parent, RuntimeApiRequest::SessionInfo(s, tx))) if parent == *hash && s == *session => {
				tx.send(Ok(Some(test_state.session_info.clone()))).unwrap();
			}
		);
	}
}

/// Intercepts an outgoing request, checks the fields, and sends the response.
async fn handle_sent_request(
	virtual_overseer: &mut VirtualOverseer,
	peer: PeerId,
	candidate_hash: CandidateHash,
	mask: StatementFilter,
	candidate_receipt: CommittedCandidateReceipt,
	persisted_validation_data: PersistedValidationData,
	statements: Vec<UncheckedSignedStatement>,
) {
	assert_matches!(
		virtual_overseer.recv().await,
		AllMessages::NetworkBridgeTx(NetworkBridgeTxMessage::SendRequests(mut requests, IfDisconnected::ImmediateError)) => {
			assert_eq!(requests.len(), 1);
			assert_matches!(
				requests.pop().unwrap(),
				Requests::AttestedCandidateVStaging(outgoing) => {
					assert_eq!(outgoing.peer, Recipient::Peer(peer));
					assert_eq!(outgoing.payload.candidate_hash, candidate_hash);
					assert_eq!(outgoing.payload.mask, mask);

					let res = AttestedCandidateResponse {
						candidate_receipt,
						persisted_validation_data,
						statements,
					};
					outgoing.pending_response.send(Ok(res.encode())).unwrap();
				}
			);
		}
	);
}

async fn answer_expected_hypothetical_depth_request(
	virtual_overseer: &mut VirtualOverseer,
	responses: Vec<(HypotheticalCandidate, FragmentTreeMembership)>,
	expected_leaf_hash: Option<Hash>,
	expected_backed_in_path_only: bool,
) {
	assert_matches!(
		virtual_overseer.recv().await,
		AllMessages::ProspectiveParachains(
			ProspectiveParachainsMessage::GetHypotheticalFrontier(req, tx)
		) => {
			assert_eq!(req.fragment_tree_relay_parent, expected_leaf_hash);
			assert_eq!(req.backed_in_path_only, expected_backed_in_path_only);
			for (i, (candidate, _)) in responses.iter().enumerate() {
				assert!(
					req.candidates.iter().any(|c| &c == &candidate),
					"did not receive request for hypothetical candidate {}",
					i,
				);
			}

			tx.send(responses).unwrap();
		}
	)
}

fn validator_pubkeys(val_ids: &[ValidatorPair]) -> IndexedVec<ValidatorIndex, ValidatorId> {
	val_ids.iter().map(|v| v.public().into()).collect()
}

async fn connect_peer(
	virtual_overseer: &mut VirtualOverseer,
	peer: PeerId,
	authority_ids: Option<HashSet<AuthorityDiscoveryId>>,
) {
	virtual_overseer
		.send(FromOrchestra::Communication {
			msg: StatementDistributionMessage::NetworkBridgeUpdate(
				NetworkBridgeEvent::PeerConnected(
					peer,
					ObservedRole::Authority,
					ValidationVersion::VStaging.into(),
					authority_ids,
				),
			),
		})
		.await;
}

// TODO: Add some tests using this?
#[allow(dead_code)]
async fn disconnect_peer(virtual_overseer: &mut VirtualOverseer, peer: PeerId) {
	virtual_overseer
		.send(FromOrchestra::Communication {
			msg: StatementDistributionMessage::NetworkBridgeUpdate(
				NetworkBridgeEvent::PeerDisconnected(peer),
			),
		})
		.await;
}

async fn send_peer_view_change(virtual_overseer: &mut VirtualOverseer, peer: PeerId, view: View) {
	virtual_overseer
		.send(FromOrchestra::Communication {
			msg: StatementDistributionMessage::NetworkBridgeUpdate(
				NetworkBridgeEvent::PeerViewChange(peer, view),
			),
		})
		.await;
}

async fn send_peer_message(
	virtual_overseer: &mut VirtualOverseer,
	peer: PeerId,
	message: protocol_vstaging::StatementDistributionMessage,
) {
	virtual_overseer
		.send(FromOrchestra::Communication {
			msg: StatementDistributionMessage::NetworkBridgeUpdate(
				NetworkBridgeEvent::PeerMessage(peer, Versioned::VStaging(message)),
			),
		})
		.await;
}

async fn send_new_topology(virtual_overseer: &mut VirtualOverseer, topology: NewGossipTopology) {
	virtual_overseer
		.send(FromOrchestra::Communication {
			msg: StatementDistributionMessage::NetworkBridgeUpdate(
				NetworkBridgeEvent::NewGossipTopology(topology),
			),
		})
		.await;
}

async fn overseer_recv_with_timeout(
	overseer: &mut VirtualOverseer,
	timeout: Duration,
) -> Option<AllMessages> {
	gum::trace!("waiting for message...");
	overseer.recv().timeout(timeout).await
}

fn next_group_index(
	group_index: GroupIndex,
	validator_count: usize,
	group_size: usize,
) -> GroupIndex {
	let next_group = group_index.0 + 1;
	let num_groups =
		validator_count / group_size + if validator_count % group_size > 0 { 1 } else { 0 };
	GroupIndex::from(next_group % num_groups as u32)
}
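The group-assignment arithmetic used by `TestState::from_config` and `next_group_index` above can be sketched standalone: validator `i` lands in group `i / group_size`, the number of groups is the ceiling of `validator_count / group_size` (so the last group may be partial), and rotation wraps back to group 0. This is a minimal illustrative sketch, not part of the test file; the helper names `group_of`, `num_groups`, and `next_group` are invented here for clarity.

```rust
// Illustrative sketch of the grouping math in the tests above.
// All three helpers mirror expressions from `from_config` / `next_group_index`.

fn group_of(i: usize, group_size: usize) -> usize {
    // Validator `i` is placed in group `i / group_size`.
    i / group_size
}

fn num_groups(validator_count: usize, group_size: usize) -> usize {
    // Ceiling division: a remainder creates one extra (partial) group.
    validator_count / group_size + if validator_count % group_size > 0 { 1 } else { 0 }
}

fn next_group(group: usize, validator_count: usize, group_size: usize) -> usize {
    // Rotation wraps from the last group back to group 0.
    (group + 1) % num_groups(validator_count, group_size)
}

fn main() {
    // 7 validators in groups of 3 -> groups {0,1,2}, {3,4,5}, {6}.
    assert_eq!(group_of(6, 3), 2);
    assert_eq!(num_groups(7, 3), 3);
    assert_eq!(next_group(2, 7, 3), 0);
    // An even split (6 validators, groups of 3) yields no partial group.
    assert_eq!(num_groups(6, 3), 2);
    println!("ok");
}
```

This also shows why `from_config` pins the local validator into a full group: positions drawn near the tail could otherwise land in the trailing partial group.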