Mirror of https://github.com/pezkuwichain/pezkuwi-subxt.git, synced 2026-04-26 16:57:58 +00:00
Asynchronous Backing MegaPR (#5022)
* inclusion emulator logic for asynchronous backing (#4790) * initial stab at candidate_context * fmt * docs & more TODOs * some cleanups * reframe as inclusion_emulator * documentations yes * update types * add constraint modifications * watermark * produce modifications * v2 primitives: re-export all v1 for consistency * vstaging primitives * emulator constraints: handle code upgrades * produce outbound HRMP modifications * stack. * method for applying modifications * method just for sanity-checking modifications * fragments produce modifications, not prospectives * make linear * add some TODOs * remove stacking; handle code upgrades * take `fragment` private * reintroduce stacking. * fragment constructor * add TODO * allow validating fragments against future constraints * docs * relay-parent number and min code size checks * check code upgrade restriction * check max hrmp per candidate * fmt * remove GoAhead logic because it wasn't helpful * docs on code upgrade failure * test stacking * test modifications against constraints * fmt * test fragments * descending or duplicate test * fmt * remove unused imports in vstaging * wrong primitives * spellcheck * Runtime changes for Asynchronous Backing (#4786) * inclusion: utility for allowed relay-parents * inclusion: use prev number instead of prev hash * track most recent context of paras * inclusion: accept previous relay-parents * update dmp advancement rule for async backing * fmt * add a comment about validation outputs * clean up a couple of TODOs * weights * fix weights * fmt * Resolve dmp todo * Restore inclusion tests * Restore paras_inherent tests * MostRecentContext test * Benchmark for new paras dispatchable * Prepare check_validation_outputs for upgrade * cargo run --quiet --profile=production --features=runtime-benchmarks -- benchmark --chain=kusama-dev --steps=50 --repeat=20 --pallet=runtime_parachains::paras --extrinsic=* --execution=wasm --wasm-execution=compiled --heap-pages=4096 
--header=./file_header.txt --output=./runtime/kusama/src/weights/runtime_parachains_paras.rs * cargo run --quiet --profile=production --features=runtime-benchmarks -- benchmark --chain=westend-dev --steps=50 --repeat=20 --pallet=runtime_parachains::paras --extrinsic=* --execution=wasm --wasm-execution=compiled --heap-pages=4096 --header=./file_header.txt --output=./runtime/westend/src/weights/runtime_parachains_paras.rs * cargo run --quiet --profile=production --features=runtime-benchmarks -- benchmark --chain=polkadot-dev --steps=50 --repeat=20 --pallet=runtime_parachains::paras --extrinsic=* --execution=wasm --wasm-execution=compiled --heap-pages=4096 --header=./file_header.txt --output=./runtime/polkadot/src/weights/runtime_parachains_paras.rs * cargo run --quiet --profile=production --features=runtime-benchmarks -- benchmark --chain=rococo-dev --steps=50 --repeat=20 --pallet=runtime_parachains::paras --extrinsic=* --execution=wasm --wasm-execution=compiled --heap-pages=4096 --header=./file_header.txt --output=./runtime/rococo/src/weights/runtime_parachains_paras.rs * Implementers guide changes * More tests for allowed relay parents * Add a github issue link * Compute group index based on relay parent * Storage migration * Move allowed parents tracker to shared * Compile error * Get group assigned to core at the next block * Test group assignment * fmt * Error instead of panic * Update guide * Extend doc-comment * Update runtime/parachains/src/shared.rs Co-authored-by: Robert Habermeier <rphmeier@gmail.com> Co-authored-by: Chris Sosnin <chris125_@live.com> Co-authored-by: Parity Bot <admin@parity.io> Co-authored-by: Chris Sosnin <48099298+slumber@users.noreply.github.com> * Prospective Parachains Subsystem (#4913) * docs and skeleton * subsystem skeleton * main loop * fragment tree basics & fmt * begin fragment trees & view * flesh out more of view update logic * further flesh out update logic * some refcount functions for fragment trees * add fatal/non-fatal 
errors * use non-fatal results * clear up some TODOs * ideal format for scheduling info * add a bunch of TODOs * some more fluff * extract fragment graph to submodule * begin fragment graph API * trees, not graphs * improve docs * scope and constructor for trees * add some test TODOs * limit max ancestors and store constraints * constructor * constraints: fix bug in HRMP watermarks * fragment tree population logic * set::retain * extract population logic * implement add_and_populate * fmt * add some TODOs in tests * implement child-selection * strip out old stuff based on wrong assumptions * use fatality * implement pruning * remove unused ancestor constraints * fragment tree instantiation * remove outdated comment * add message/request types and skeleton for handling * fmt * implement handle_candidate_seconded * candidate storage: handle backed * implement handle_candidate_backed * implement answer_get_backable_candidate * remove async where not needed * implement fetch_ancestry * add logic for run_iteration * add some docs * remove global allow(unused), fix warnings * make spellcheck happy (despite English) * fmt * bump Cargo.lock * replace tracing with gum * introduce PopulateFrom trait * implement GetHypotheticalDepths * revise docs slightly * first fragment tree scope test * more scope tests * test add_candidate * fmt * test retain * refactor test code * test populate is recursive * test contiguity of depth 0 is maintained * add_and_populate tests * cycle tests * remove PopulateFrom trait * fmt * test hypothetical depths (non-recursive) * have CandidateSeconded return membership * tree membership requests * Add a ProspectiveParachainsSubsystem struct * add a staging API for base constraints * add a `From` impl * add runtime API for staging_validity_constraints * implement fetch_base_constraints * implement `fetch_upcoming_paras` * remove reconstruction of candidate receipt; no obvious usecase * fmt * export message to broader module * remove last TODO * 
correctly export * fix compilation and add GetMinimumRelayParent request * make provisioner into a real subsystem with proper message bounds * fmt * fix ChannelsOut in overseer test * fix overseer tests * fix again * fmt * Integrate prospective parachains subsystem into backing: Part 1 (#5557) * BEGIN ASYNC candidate-backing CHANGES * rename & document modes * answer prospective validation data requests * GetMinimumRelayParents request is now plural * implement an implicit view utility for backing subsystems * implicit-view: get allowed relay parents * refactorings and improvements to implicit view * add some TODOs for tests * split implicit view updates into 2 functions * backing: define State to prepare for functional refactor * add some docs * backing: implement bones of new leaf activation logic * backing: create per-relay-parent-states * use new handle_active_leaves_update * begin extracting logic from CandidateBackingJob * mostly extract statement import from job logic * handle statement imports outside of job logic * do some TODO planning for prospective parachains integration * finish rewriting backing subsystem in functional style * add prospective parachains mode to relay parent entries * fmt * add a RejectedByProspectiveParachains error * notify prospective parachains of seconded and backed candidates * always validate candidates exhaustively in backing.
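The implicit-view utility introduced in the backing commits above can be sketched roughly as follows. This is an illustrative model only: the type, the `u64` stand-in hashes, and the fixed `max_depth` parameter are our assumptions, not the real subsystem's API.

```rust
use std::collections::{HashMap, HashSet};

/// Illustrative sketch of the "implicit view": each active leaf permits
/// building not only on itself but on a bounded window of its ancestors.
struct ImplicitView {
    /// Leaf hash -> allowed relay-parents under that leaf (leaf included).
    ancestry: HashMap<u64, Vec<u64>>,
}

impl ImplicitView {
    fn new() -> Self {
        ImplicitView { ancestry: HashMap::new() }
    }

    /// Record a new active leaf and the ancestors (most recent first) that
    /// candidates may still anchor to, capped at `max_depth`.
    fn activate_leaf(&mut self, leaf: u64, ancestors: &[u64], max_depth: usize) {
        let mut allowed = vec![leaf];
        allowed.extend(ancestors.iter().copied().take(max_depth));
        self.ancestry.insert(leaf, allowed);
    }

    /// All relay-parents a candidate may be built on under any active leaf.
    fn allowed_relay_parents(&self) -> HashSet<u64> {
        self.ancestry.values().flatten().copied().collect()
    }
}
```

In this shape, "get allowed relay parents" is just the union over active leaves, which is what lets backing accept candidates anchored to recent ancestors rather than only the leaf itself.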
* return persisted_validation_data from validation * handle rejections by prospective parachains * implement seconding sanity check * invoke validate_and_second * Alter statement table to allow multiple seconded messages per validator * refactor backing to have statements carry PVD * clean up all warnings * Add tests for implicit view * Improve doc comments * Prospective parachains mode based on Runtime API version * Add a TODO * Rework seconding_sanity_check * Iterate over responses * Update backing tests * collator-protocol: load PVD from runtime * Fix validator side tests * Update statement-distribution to fetch PVD * Fix statement-distribution tests * Backing tests with prospective paras #1 * fix per_relay_parent pruning in backing * Test multiple leaves * Test seconding sanity check * Import statement order Before creating an entry in `PerCandidateState` map wait for the approval from the prospective parachains * Add a test for correct state updates * Second multiple candidates per relay parent test * Add backing tests with prospective paras * Second more than one test without prospective paras * Add a test for prospective para blocks * Update malus * typos Co-authored-by: Chris Sosnin <chris125_@live.com> * Track occupied depth in backing per parachain (#5778) * provisioner: async backing changes (#5711) * Provisioner changes for async backing * Select candidates based on prospective paras mode * Revert naming * Update tests * Update TODO comment * review * provisioner: async backing changes (#5711) * Provisioner changes for async backing * Select candidates based on prospective paras mode * Revert naming * Update tests * Update TODO comment * review * fmt * Network bridge changes for asynchronous backing + update subsystems to handle versioned packets (#5991) * BEGIN STATEMENT DISTRIBUTION WORK create a vstaging network protocol which is the same as v1 * mostly make network bridge amenable to vstaging * network-bridge: fully adapt to vstaging * add some 
TODOs for tests * fix fallout in bitfield-distribution * bitfield distribution tests + TODOs * fix fallout in gossip-support * collator-protocol: fix message fallout * collator-protocol: load PVD from runtime * add TODO for vstaging tests * make things compile * set used network protocol version using a feature * fmt * get approval-distribution building * fix approval-distribution tests * spellcheck * nits * approval distribution net protocol test * bitfield distribution net protocol test * Revert "collator-protocol: fix message fallout" This reverts commit 07cc887303e16c6b3843ecb25cdc7cc2080e2ed1. * Network bridge tests Co-authored-by: Chris Sosnin <chris125_@live.com> * remove max_pov_size requirement from prospective pvd request (#6014) * remove max_pov_size requirement from prospective pvd request * fmt * Extract legacy statement distribution to its own module (#6026) * add compatibility type to v2 statement distribution message * warning cleanup * handle compatibility layer for v2 * clean up an unimplemented!() block * circulate statements based on version * extract legacy v1 code into separate module * remove unimplemented * clean up naming of from_requester/responder * remove TODOs * have backing share seconded statements with PVD * fmt * fix warning * Quick fix unused warning for not yet implemented/used staging messages. * Fix network bridge test * Fix wrong merge. We now have 23 subsystems (network bridge split + prospective parachains) Co-authored-by: Robert Klotzner <robert.klotzner@gmx.at> * Version 3 is already live. * Fix tests (#6055) * Fix backing tests * Fix warnings. 
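The version-gated circulation in the network-bridge commits above can be pictured with a small dispatch like this; the enum, `Vec<u8>` payloads, and version numbers are illustrative stand-ins for the real versioned network messages.

```rust
/// Sketch: a statement is forwarded to a peer only in an encoding that the
/// peer's negotiated protocol version understands.
enum VersionedStatement {
    /// Legacy wire format.
    V1(Vec<u8>),
    /// Staging wire format used by the async-backing protocol.
    VStaging(Vec<u8>),
}

/// Choose the bytes to send to a peer, or `None` if the message cannot be
/// represented at the peer's version.
fn encode_for_peer(msg: &VersionedStatement, peer_version: u8) -> Option<Vec<u8>> {
    match (msg, peer_version) {
        // v1 messages are understood everywhere in this sketch.
        (VersionedStatement::V1(bytes), _) => Some(bytes.clone()),
        (VersionedStatement::VStaging(bytes), v) if v >= 2 => Some(bytes.clone()),
        // No v1 downgrade for staging-only messages here.
        (VersionedStatement::VStaging(_), _) => None,
    }
}
```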
* fmt * collator-protocol: asynchronous backing changes (#5740) * Draft collator side changes * Start working on collations management * Handle peer's view change * Versioning on advertising * Versioned collation fetching request * Handle versioned messages * Improve docs for collation requests * Add spans * Add request receiver to overseer * Fix collator side tests * Extract relay parent mode to lib * Validator side draft * Add more checks for advertisement * Request pvd based on async backing mode * review * Validator side improvements * Make old tests green * More fixes * Collator side tests draft * Send collation test * fmt * Collator side network protocol versioning * cleanup * merge artifacts * Validator side net protocol versioning * Remove fragment tree membership request * Resolve todo * Collator side core state test * Improve net protocol compatibility * Validator side tests * more improvements * style fixes * downgrade log * Track implicit assignments * Limit the number of seconded candidates per para * Add a sanity check * Handle fetched candidate * fix tests * Retry fetch * Guard against dequeueing while already fetching * Reintegrate connection management * Timeout on advertisements * fmt * spellcheck * update tests after merge * validator assignment fixes for backing and collator protocol (#6158) * Rename depth->ancestry len in tests * Refactor group assignments * Remove implicit assignments * backing: consider occupied core assignments * Track a single para on validator side * Refactor prospective parachains mode request (#6179) * Extract prospective parachains mode into util * Skip activations depending on the mode * backing: don't send backed candidate to provisioner (#6185) * backing: introduce `CanSecond` request for advertisements filtering (#6225) * Drop BoundToRelayParent * draft changes * fix backing tests * Fix genesis ancestry * Fix validator side tests * more tests * cargo generate-lockfile * Implement `StagingValidityConstraints` Runtime 
API method (#6258) * Implement StagingValidityConstraints * spellcheck * fix ump params * Update hrmp comment * Introduce ump per candidate limit * hypothetical earliest block * refactor primitives usage * hypothetical earliest block number test * fix build * Prepare the Runtime for asynchronous backing upgrade (#6287) * Introduce async backing params to runtime config * fix cumulus config * use config * finish runtimes * Introduce new staging API * Update collator protocol * Update provisioner * Update prospective parachains * Update backing * Move async backing params lower in the config * make naming consistent * misc * Use real prospective parachains subsystem (#6407) * Backport `HypotheticalFrontier` into the feature branch (#6605) * implement more general HypotheticalFrontier * fmt * drop unneeded request Co-authored-by: Robert Habermeier <rphmeier@gmail.com> * Resolve todo about legacy leaf activation (#6447) * fix bug/warning in handling membership answers * Remove `HypotheticalDepthRequest` in favor of `HypotheticalFrontierRequest` (#6521) * Remove `HypotheticalDepthRequest` for `HypotheticalFrontierRequest` * Update tests * Fix (removed wrong docstring) * Fix can_second request * Patch some dead_code errors --------- Co-authored-by: Chris Sosnin <chris125_@live.com> * Async Backing: Send Statement Distribution "Backed" messages (#6634) * Backing: Send Statement Distribution "Backed" messages Closes #6590. 
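The commits above add a `StagingValidityConstraints` runtime API and a per-candidate UMP limit. A minimal sketch of how the inclusion emulator described earlier applies a fragment's modifications to such constraints, assuming illustrative field names rather than the real primitives:

```rust
/// Illustrative constraints a fragment must satisfy.
#[derive(Clone, Debug, PartialEq)]
struct Constraints {
    /// HRMP watermark: may only move forward.
    hrmp_watermark: u32,
    /// Remaining UMP messages the para may still send.
    ump_remaining: u32,
}

/// What a single fragment changed.
struct ConstraintModifications {
    new_hrmp_watermark: Option<u32>,
    ump_sent: u32,
}

#[derive(Debug, PartialEq)]
enum ModificationError {
    HrmpWatermarkRegressed,
    UmpOverflow,
}

impl Constraints {
    /// Apply a fragment's modifications, yielding the constraints that the
    /// next fragment in the chain must satisfy.
    fn apply_modifications(
        &self,
        m: &ConstraintModifications,
    ) -> Result<Constraints, ModificationError> {
        let mut next = self.clone();
        if let Some(hrmp) = m.new_hrmp_watermark {
            if hrmp < next.hrmp_watermark {
                return Err(ModificationError::HrmpWatermarkRegressed);
            }
            next.hrmp_watermark = hrmp;
        }
        next.ump_remaining = next
            .ump_remaining
            .checked_sub(m.ump_sent)
            .ok_or(ModificationError::UmpOverflow)?;
        Ok(next)
    }
}
```

Stacking fragments is then repeated application: each fragment is checked against, and produces, the constraints for its successor.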
**TODO:** - [ ] Adjust tests * Fix compile errors * (Mostly) fix tests * Fix comment * Fix test and compile error * Test that `StatementDistributionMessage::Backed` is sent * Fix compile error * Fix some clippy errors * Add prospective parachains subsystem tests (#6454) * Add prospective parachains subsystem test * Add `should_do_no_work_if_async_backing_disabled_for_leaf` test * Implement `activate_leaf` helper, up to getting ancestry * Finish implementing `activate_leaf` * Small refactor in `activate_leaf` * Get `CandidateSeconded` working * Finish `send_candidate_and_check_if_found` test * Refactor; send more leaves & candidates * Refactor test * Implement `check_candidate_parent_leaving_view` test * Start work on `check_candidate_on_multiple_forks` test * Don’t associate specific parachains with leaf * Finish `correctly_updates_leaves` test * Fix cycle due to reused head data * Fix `check_backable_query` test * Fix `check_candidate_on_multiple_forks` test * Add `check_depth_and_pvd_queries` test * Address review comments * Remove TODO * add a new index for output head data to candidate storage * Resolve test TODOs * Fix compile errors * test candidate storage pruning, make sure new index is cleaned up --------- Co-authored-by: Robert Habermeier <rphmeier@gmail.com> * Node-side metrics for asynchronous backing (#6549) * Add metrics for `prune_view_candidate_storage` * Add metrics for `request_unblocked_collations` * Fix docstring * Couple fixes from review comments * Fix `check_depth_query` test * inclusion-emulator: mirror advancement rule check (#6361) * inclusion-emulator: mirror advancement rule check * fix build * prospective-parachains: introduce `backed_in_path_only` flag for advertisements (#6649) * Introduce `backed_in_path_only` flag for depth request * fmt * update doc comment * fmt * Add async-backing zombienet tests (#6314) * Async backing: impl guide for statement distribution (#6738) Co-authored-by: Bradley Olson 
<34992650+BradleyOlson64@users.noreply.github.com> Co-authored-by: alexgparity <115470171+alexgparity@users.noreply.github.com> * Asynchronous backing statement distribution: Take III (#5999) * add notification types for v2 statement-distribution * improve protocol docs * add empty vstaging module * fmt * add backed candidate packet request types * start putting down structure of new logic * handle activated leaf * some sanity-checking on outbound statements * fmt * update vstaging share to use statements with PVD * tiny refactor, candidate_hash location * import local statements * refactor statement import * first stab at broadcast logic * fmt * fill out some TODOs * start on handling incoming * split off session info into separate map * start in on a knowledge tracker * address some grumbles * format * missed comment * some docs for direct * add note on slashing * amend * simplify 'direct' code * finish up the 'direct' logic * add a bunch of tests for the direct-in-group logic * rename 'direct' to 'cluster', begin a candidate_entry module * distill candidate_entry * start in on a statement-store module * some utilities for the statement store * rewrite 'send_statement_direct' using new tools * filter sending logic on peers which have the relay-parent in their view. 
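The sending filter just mentioned ("filter sending logic on peers which have the relay-parent in their view") amounts to a view check before dispatch; a sketch under assumed types, with peer names and `u64` hashes purely illustrative:

```rust
use std::collections::{HashMap, HashSet};

/// Send a statement only to peers that already have its relay-parent in
/// their advertised view.
fn peers_to_send(
    relay_parent: u64,
    peer_views: &HashMap<&'static str, HashSet<u64>>,
) -> Vec<&'static str> {
    let mut out: Vec<_> = peer_views
        .iter()
        .filter(|(_, view)| view.contains(&relay_parent))
        .map(|(peer, _)| *peer)
        .collect();
    out.sort(); // deterministic order for the sketch
    out
}
```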
* some more logic for handling incoming statements * req/res: BackedCandidatePacket -> AttestedCandidate + tweaks * add a `validated_in_group` bitfield to BackedCandidateInventory * BackedCandidateInventory -> Manifest * start in on requester module * add outgoing request for attested candidate * add a priority mechanism for requester * some request dispatch logic * add seconded mask to tagged-request * amend manifest to hold group index * handle errors and set up scaffold for response validation * validate attested candidate responses * requester -> requests * add some utilities for manipulating requests * begin integrating requester * start grid module * tiny * refactor grid topology to expose more info to subsystems * fix grid_topology test * fix overseer test * implement topology group-based view construction logic * fmt * flesh out grid slightly more * add indexed groups utility * integrate Groups into per-session info * refactor statement store to borrow Groups * implement manifest knowledge utility * add a test for topology setup * don't send to group members * test for conflicting manifests * manifest knowledge tests * fmt * rename field * garbage collection for grid tracker * routines for finding correct/incorrect advertisers * add manifest import logic * tweak naming * more tests for manifest import * add comment * rework candidates into a view-wide tracker * fmt * start writing boilerplate for grid sending * fmt * some more group boilerplate * refactor handling of topology and authority IDs * fmt * send statements directly to grid peers where possible * send to cluster only if statement belongs to cluster * improve handling of cluster statements * handle incoming statements along the grid * API for introduction of candidates into the tree * backing: use new prospective parachains API * fmt prospective parachains changes * fmt statement-dist * fix condition * get ready for tracking importable candidates * prospective parachains: add Cow logic * incomplete 
and complete hypothetical candidates * remove keep_if_unneeded * fmt * implement more general HypotheticalFrontier * fmt, cleanup * add a by_parent_hash index to candidate tracker * more framework for future code * utilities for getting all hypothetical candidates for frontier * track origin in statement store * fmt * requests should return peer * apply post-confirmation reckoning * flesh out import/announce/circulate logic on new statements * adjust * adjust TODO comment * fix backing tests * update statement-distribution to use new indexedvec * fmt * query hypothetical candidates * implement `note_importable_under` * extract common utility of fragment tree updates * add a helper function for getting statements unknown by backing * import fresh statements to backing * send announcements and acknowledgements over grid * provide freshly importable statements also avoid tracking backed candidates in statement distribution * do not issue requests on newly importable candidates * add TODO for later when confirming candidate * write a routine for handling backed candidate notifications * simplify grid substantially * add some test TODOs * handle confirmed candidates & grid announcements * finish implementing manifest handling, including follow up statements * send follow-up statements when acknowledging freshly backed * fmt * handle incoming acknowledgements * a little DRYing * wire up network messages to handlers * fmt * some skeleton code for peer view update handling * more peer view skeleton stuff * Fix async backing statement distribution tests (#6621) * Fix compile errors in tests * Cargo fmt * Resolve some todos in async backing statement-distribution branch (#6482) * Implement `remove_by_relay_parent` * Extract `minimum_votes` to shared primitives. 
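The grid-topology view construction referenced above can be sketched with the standard row/column scheme: validators laid out row-major on a near-square grid, gossiping along shared rows and columns. The exact upstream layout rules may differ; this shows the shape of the idea.

```rust
/// Neighbors of a validator on a near-square gossip grid: everyone sharing
/// its row or its column.
fn grid_neighbors(index: usize, n_validators: usize) -> Vec<usize> {
    // Smallest width whose square covers all validators.
    let width = (n_validators as f64).sqrt().ceil() as usize;
    let (row, col) = (index / width, index % width);
    (0..n_validators)
        .filter(|&other| other != index)
        .filter(|&other| other / width == row || other % width == col)
        .collect()
}
```

Two hops along such a grid reach every validator, which is why manifests and acknowledgements only need to travel between row/column neighbors.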
* Add `can_send_statements_received_with_prejudice` test * Fix test * Update docstrings * Cargo fmt * Fix compile error * Fix compile errors in tests * Cargo fmt * Add module docs; write `test_priority_ordering` (first draft) * Fix `test_priority_ordering` * Move `insert_or_update_priority`: `Drop` -> `set_cluster_priority` * Address review comments * Remove `Entry::get_mut` * fix test compilation * add a TODO for a test * clean up a couple of TODOs * implement sending pending cluster statements * refactor utility function for sending acknowledgement and statements * mostly implement catching peers up via grid * Fix clippy error * alter grid to track all pending statements * fix more TODOs and format * tweak a TODO in requests * some logic for dispatching requests * fmt * skeleton for response receiving * Async backing statement distribution: cluster tests (#6678) * Add `pending_statements_set_when_receiving_fresh_statements` * Add `pending_statements_updated_when_sending_statements` test * fix up * fmt * update TODO * rework seconded mask in requests * change doc * change unhandledresponse not to borrow request manager * only accept responses sufficient to back * finish implementing response handling * extract statement filter to protocol crate * rework requests: use statement filter in network protocol * dispatch cluster requests correctly * rework cluster statement sending * implement request answering * fmt * only send confirmed candidate statement messages on unified relay-parent * Fix Tests In Statement Distribution Branch * Async Backing: Integrate `vstaging` of statement distribution into `lib.rs` (#6715) * Integrate `handle_active_leaves_update` * Integrate `share_local_statement`/`handle_backed_candidate_message` * Start hooking up request/response flow * Finish hooking up request/response flow * Limit number of parallel requests in responder * Fix test compilation errors * Fix missing check for prospective parachains mode * Fix some more compile errors * 
clean up some review comments * clean up warnings * Async backing statement distribution: grid tests (#6673) * Add `manifest_import_returns_ok_true` test * cargo fmt * Add pending_communication_receiving_manifest_on_confirmed_candidate * Add `senders_can_provide_manifests_in_acknowledgement` test * Add a couple of tests for pending statements * Add `pending_statements_cleared_when_sending` test * Add `pending_statements_respect_remote_knowledge` test * Refactor group creation in tests * Clarify docs * Address some review comments * Make some clarifications * Fix post-merge errors * Clarify test `senders_can_provide_manifests_in_acknowledgement` * Try writing `pending_statements_are_updated_after_manifest_exchange` * Document "seconding limit" and `reject_overflowing_manifests` test * Test that seconding counts are not updated for validators on error * Fix tests * Fix manifest exchange test * Add more tests in `requests.rs` (#6707) This resolves remaining TODOs in this file. * remove outdated inventory terminology * Async backing statement distribution: `Candidates` tests (#6658) * Async Backing: Fix clippy errors in statement distribution branch (#6720) * Integrate `handle_active_leaves_update` * Integrate `share_local_statement`/`handle_backed_candidate_message` * Start hooking up request/response flow * Finish hooking up request/response flow * Limit number of parallel requests in responder * Fix test compilation errors * Fix missing check for prospective parachains mode * Fix some more compile errors * Async Backing: Fix clippy errors in statement distribution branch * Fix some more clippy lints * add tests module * fix warnings in existing tests * create basic test harness * create a test state struct * fmt * create empty cluster & grid modules for tests * some TODOs for cluster test suite * describe test-suite for grid logic * describe request test suite * fix seconding-limit bug * Remove extraneous `pub` This somehow made it into my clippy PR. 
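The statement filter extracted to the protocol crate above can be modeled as two bit-sets recording which group members' Seconded/Valid statements the requester already has, so a responder sends only what is missing. In this sketch, `Vec<bool>` stands in for the real bitvec type and the method names are ours.

```rust
/// Per-group knowledge carried alongside an attested-candidate request.
struct StatementFilter {
    seconded_in_group: Vec<bool>,
    validated_in_group: Vec<bool>,
}

impl StatementFilter {
    /// A filter claiming no knowledge at all for a group of `group_size`.
    fn blank(group_size: usize) -> Self {
        StatementFilter {
            seconded_in_group: vec![false; group_size],
            validated_in_group: vec![false; group_size],
        }
    }

    /// Indices of group validators whose `Seconded` statements are still
    /// missing on the requesting side.
    fn missing_seconded(&self) -> Vec<usize> {
        self.seconded_in_group
            .iter()
            .enumerate()
            .filter(|(_, known)| !**known)
            .map(|(i, _)| i)
            .collect()
    }
}
```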
* Fix some test compile warnings * Remove some unneeded `allow`s * adapt some new test helpers from Marcin * add helper for activating a gossip topology * add utility for signing statements * helpers for connecting/disconnecting peers * round out network utilities * fmt * fix bug in initializing validator-meta * fix compilation * implement first cluster test * TODOs for incoming request tests * Remove unneeded `make_committed_candidate` helper * fmt * some more tests for cluster * add a TODO about grid senders * integrate inbound req/res into test harness * polish off initial cluster test suite * keep introduce candidate request * fix tests after introduce candidate request * fmt * Add grid protocol to module docs * Fix comments * Test `backed_in_path_only: true` * Update node/network/protocol/src/lib.rs Co-authored-by: Chris Sosnin <48099298+slumber@users.noreply.github.com> * Update node/network/protocol/src/request_response/mod.rs Co-authored-by: Chris Sosnin <48099298+slumber@users.noreply.github.com> * Mark receiver with `vstaging` * validate grid senders based on manifest kind * fix mask_seconded/valid * fix unwanted-mask check * fix build * resolve todo on leaf mode * Unify protocol naming to vstaging * fmt, fix grid test after topology change * typo Co-authored-by: Chris Sosnin <48099298+slumber@users.noreply.github.com> * address review * adjust comment, make easier to understand * Fix typo --------- Co-authored-by: Marcin S <marcin@bytedude.com> Co-authored-by: Marcin S <marcin@realemail.net> Co-authored-by: Chris Sosnin <48099298+slumber@users.noreply.github.com> Co-authored-by: Chris Sosnin <chris125_@live.com> * miscellaneous fixes to make asynchronous backing work (#6791) * propagate network-protocol-staging feature * add feature to adder-collator as well * allow collation-generation of occupied cores * prospective parachains: special treatment for pending availability candidates * runtime: fetch candidates pending availability * lazily construct PVD 
for pending candidates * fix fallout in prospective parachains hypothetical/select_child * runtime: enact candidates when creating paras-inherent * make tests compile * test pending availability in the scope * add prospective parachains test * fix validity constraints leftovers * drop prints * Fix typos --------- Co-authored-by: Chris Sosnin <chris125_@live.com> Co-authored-by: Marcin S <marcin@realemail.net> * Remove restart from test (#6840) * Async Backing: Statement Distribution Tests (#6755)
clean up some review comments * clean up warnings * Async backing statement distribution: grid tests (#6673) * Add `manifest_import_returns_ok_true` test * cargo fmt * Add pending_communication_receiving_manifest_on_confirmed_candidate * Add `senders_can_provide_manifests_in_acknowledgement` test * Add a couple of tests for pending statements * Add `pending_statements_cleared_when_sending` test * Add `pending_statements_respect_remote_knowledge` test * Refactor group creation in tests * Clarify docs * Address some review comments * Make some clarifications * Fix post-merge errors * Clarify test `senders_can_provide_manifests_in_acknowledgement` * Try writing `pending_statements_are_updated_after_manifest_exchange` * Document "seconding limit" and `reject_overflowing_manifests` test * Test that seconding counts are not updated for validators on error * Fix tests * Fix manifest exchange test * Add more tests in `requests.rs` (#6707) This resolves remaining TODOs in this file. * remove outdated inventory terminology * Async backing statement distribution: `Candidates` tests (#6658) * Async Backing: Fix clippy errors in statement distribution branch (#6720) * Integrate `handle_active_leaves_update` * Integrate `share_local_statement`/`handle_backed_candidate_message` * Start hooking up request/response flow * Finish hooking up request/response flow * Limit number of parallel requests in responder * Fix test compilation errors * Fix missing check for prospective parachains mode * Fix some more compile errors * Async Backing: Fix clippy errors in statement distribution branch * Fix some more clippy lints * add tests module * fix warnings in existing tests * create basic test harness * create a test state struct * fmt * create empty cluster & grid modules for tests * some TODOs for cluster test suite * describe test-suite for grid logic * describe request test suite * fix seconding-limit bug * Remove extraneous `pub` This somehow made it into my clippy PR. 
* Fix some test compile warnings * Remove some unneeded `allow`s * adapt some new test helpers from Marcin * add helper for activating a gossip topology * add utility for signing statements * helpers for connecting/disconnecting peers * round out network utilities * fmt * fix bug in initializing validator-meta * fix compilation * implement first cluster test * TODOs for incoming request tests * Remove unneeded `make_committed_candidate` helper * fmt * Hook up request sender * Add `valid_statement_without_prior_seconded_is_ignored` test * Fix `valid_statement_without_prior_seconded_is_ignored` test * some more tests for cluster * add a TODO about grid senders * integrate inbound req/res into test harness * polish off initial cluster test suite * keep introduce candidate request * fix tests after introduce candidate request * fmt * Add grid protocol to module docs * Remove obsolete test * Fix comments * Test `backed_in_path_only: true` * Update node/network/protocol/src/lib.rs Co-authored-by: Chris Sosnin <48099298+slumber@users.noreply.github.com> * Update node/network/protocol/src/request_response/mod.rs Co-authored-by: Chris Sosnin <48099298+slumber@users.noreply.github.com> * Mark receiver with `vstaging` * First draft of `ensure_seconding_limit_is_respected` test * validate grid senders based on manifest kind * fix mask_seconded/valid * fix unwanted-mask check * fix build * resolve todo on leaf mode * Unify protocol naming to vstaging * Fix `ensure_seconding_limit_is_respected` test * Start `backed_candidate_leads_to_advertisement` test * fmt, fix grid test after topology change * Send Backed notification * Finish `backed_candidate_leads_to_advertisement` test * Finish `peer_reported_for_duplicate_statements` test * Finish `received_advertisement_before_confirmation_leads_to_request` * Add `advertisements_rejected_from_incorrect_peers` test * Add `manifest_rejected_*` tests * Add `manifest_rejected_when_group_does_not_match_para` test * Add 
`local_node_sanity_checks_incoming_requests` test * Add `local_node_respects_statement_mask` test * Add tests where peer is reported for providing invalid signatures * Add `cluster_peer_allowed_to_send_incomplete_statements` test * Add `received_advertisement_after_backing_leads_to_acknowledgement` * Add `received_advertisement_after_confirmation_before_backing` test * peer_reported_for_advertisement_conflicting_with_confirmed_candidate * Add `peer_reported_for_not_enough_statements` test * Add `peer_reported_for_providing_statements_meant_to_be_masked_out` * Add `additional_statements_are_shared_after_manifest_exchange` * Add `grid_statements_imported_to_backing` test * Add `relay_parent_entering_peer_view_leads_to_advertisement` test * Add `advertisement_not_re_sent_when_peer_re_enters_view` test * Update node/network/statement-distribution/src/vstaging/tests/grid.rs Co-authored-by: asynchronous rob <rphmeier@gmail.com> * Resolve TODOs, update test * Address unused code * Add check after every test for unhandled requests * Refactor (`make_dummy_leaf` and `handle_sent_request`) * Refactor (`make_dummy_topology`) * Minor refactor --------- Co-authored-by: Robert Habermeier <rphmeier@gmail.com> Co-authored-by: Chris Sosnin <48099298+slumber@users.noreply.github.com> Co-authored-by: Chris Sosnin <chris125_@live.com> * Fix some clippy lints in tests * Async backing: minor fixes (#6920) * bitfield-distribution test * implicit view tests * Refactor parameters -> params * scheduler: update storage migration (#6963) * update scheduler migration * Adjust weight to account for storage read * Statement Distribution Guide Edits (#7025) * Statement distribution guide edits * Addressed Marcin's comments * Add attested candidate request retry timeouts (#6833) Co-authored-by: Chris Sosnin <48099298+slumber@users.noreply.github.com> Co-authored-by: asynchronous rob <rphmeier@gmail.com> Co-authored-by: Robert Habermeier <rphmeier@gmail.com> Co-authored-by: Chris Sosnin 
<chris125_@live.com> Fix async backing statement distribution tests (#6621) Resolve some todos in async backing statement-distribution branch (#6482) Fix clippy errors in statement distribution branch (#6720) * Async backing: add Prospective Parachains impl guide (#6933) Co-authored-by: Bradley Olson <34992650+BradleyOlson64@users.noreply.github.com> * Updates to Provisioner Guide for Async Backing (#7106) * Initial corrections and clarifications * Partial first draft * Finished first draft * Adding back wrongly removed test bit * fmt * Update roadmap/implementers-guide/src/node/utility/provisioner.md Co-authored-by: Marcin S. <marcin@realemail.net> * Addressing comments * Reorganization * fmt --------- Co-authored-by: Marcin S. <marcin@realemail.net> * fmt * Renaming Parathread Mentions (#7287) * Renaming parathreads * Renaming module to pallet * More updates * PVF: Refactor workers into separate crates, remove host dependency (#7253) * PVF: Refactor workers into separate crates, remove host dependency * Fix compile error * Remove some leftover code * Fix compile errors * Update Cargo.lock * Remove worker main.rs files I accidentally copied these from the other PR. This PR isn't intended to introduce standalone workers yet. 
* Address review comments * cargo fmt * Update a couple of comments * Update log targets * Update quote to 1.0.27 (#7280) Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io> Co-authored-by: parity-processbot <> * pallets: implement `Default` for `GenesisConfig` in `no_std` (#7271) * pallets: implement Default for GenesisConfig in no_std This change is follow-up of: https://github.com/paritytech/substrate/pull/14108 It is a step towards: https://github.com/paritytech/substrate/issues/13334 * Cargo.lock updated * update lockfile for {"substrate"} --------- Co-authored-by: parity-processbot <> * cli: enable BEEFY by default on test networks (#7293) We consider BEEFY mature enough to run by default on all nodes for test networks (Rococo/Wococo/Versi). Right now, most nodes are not running it since it's opt-in using --beefy flag. Switch to an opt-out model for test networks. Replace --beefy flag from CLI with --no-beefy and have BEEFY client start by default on test networks. Signed-off-by: acatangiu <adrian@parity.io> * runtime: past session slashing runtime API (#6667) * runtime/vstaging: unapplied_slashes runtime API * runtime/vstaging: key_ownership_proof runtime API * runtime/ParachainHost: submit_report_dispute_lost * fix key_ownership_proof API * runtime: submit_report_dispute_lost runtime API * nits * Update node/subsystem-types/src/messages.rs Co-authored-by: Marcin S. <marcin@bytedude.com> * revert unrelated fmt changes * post merge fixes * fix compilation --------- Co-authored-by: Marcin S. 
<marcin@bytedude.com> * Correcting git mishap * Document usage of `gum` crate (#7294) * Document usage of gum crate * Small fix * Add some more basic info * Update node/gum/src/lib.rs Co-authored-by: Andrei Sandu <54316454+sandreim@users.noreply.github.com> * Update target docs --------- Co-authored-by: Andrei Sandu <54316454+sandreim@users.noreply.github.com> * XCM: Fix issue with RequestUnlock (#7278) * XCM: Fix issue with RequestUnlock * Leave API changes for v4 * Fix clippy errors * Fix tests --------- Co-authored-by: parity-processbot <> * Companion for Substrate#14228 (#7295) * Companion for Substrate#14228 https://github.com/paritytech/substrate/pull/14228 * update lockfile for {"substrate"} --------- Co-authored-by: parity-processbot <> * Companion for #14237: Use latest sp-crates (#7300) * To revert: Update substrate branch to "lexnv/bump_sp_crates" Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io> * Revert "To revert: Update substrate branch to "lexnv/bump_sp_crates"" This reverts commit 5f1db84eac4a226c37b7f6ce6ee19b49dc7e2008. 
* Update cargo lock Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io> * Update cargo.lock Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io> * Update cargo.lock Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io> --------- Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io> * bounded-collections bump to 0.1.7 (#7305) * bounded-collections bump to 0.1.7 Companion for: paritytech/substrate#14225 * update lockfile for {"substrate"} --------- Co-authored-by: parity-processbot <> * bump to quote 1.0.28 (#7306) * `RollingSessionWindow` cleanup (#7204) * Replace `RollingSessionWindow` with `RuntimeInfo` - initial commit * Fix tests in import * Fix the rest of the tests * Remove dead code * Fix todos * Simplify session caching * Comments for `SessionInfoProvider` * Separate `SessionInfoProvider` from `State` * `cache_session_info_for_head` becomes freestanding function * Remove unneeded `mut` usage * fn session_info -> fn get_session_info() to avoid name clashes. 
The function also tries to initialize `SessionInfoProvider` * Fix SessionInfo retrieval * Code cleanup * Don't wrap `SessionInfoProvider` in an `Option` * Remove `earliest_session()` * Remove pre-caching -> wip * Fix some tests and code cleanup * Fix all tests * Fixes in tests * Fix comments, variable names and small style changes * Fix a warning * impl From<SessionWindowSize> for NonZeroUsize * Fix logging for `get_session_info` - remove redundant logs and decrease log level to DEBUG * Code review feedback * Storage migration removing `COL_SESSION_WINDOW_DATA` from parachains db * Remove `col_session_data` usages * Storage migration clearing columns w/o removing them * Remove session data column usages from `approval-voting` and `dispute-coordinator` tests * Add some test cases from `RollingSessionWindow` to `dispute-coordinator` tests * Fix formatting in initialized.rs * Fix a corner case in `SessionInfo` caching for `dispute-coordinator` * Remove `RollingSessionWindow` ;( * Revert "Fix formatting in initialized.rs" This reverts commit 0f94664ec9f3a7e3737a30291195990e1e7065fc. 
* v2 to v3 migration drops `COL_DISPUTE_COORDINATOR_DATA` instead of clearing it * Fix `NUM_COLUMNS` in `approval-voting` * Use `columns::v3::NUM_COLUMNS` when opening db * Update node/service/src/parachains_db/upgrade.rs Co-authored-by: Andrei Sandu <54316454+sandreim@users.noreply.github.com> * Don't write in `COL_DISPUTE_COORDINATOR_DATA` for `test_rocksdb_migrate_2_to_3` * Fix `NUM+COLUMNS` in approval_voting * Fix formatting * Fix columns usage * Clarification comments about the different db versions --------- Co-authored-by: Andrei Sandu <54316454+sandreim@users.noreply.github.com> * pallet-para-config: Remove remnant WeightInfo functions (#7308) * pallet-para-config: Remove remnant WeightInfo functions Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io> * set_config_with_weight begone Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io> * ".git/.scripts/commands/bench/bench.sh" runtime kusama-dev runtime_parachains::configuration --------- Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io> Co-authored-by: command-bot <> * XCM: PayOverXcm config (#6900) * Move XCM query functionality to trait * Fix tests * Add PayOverXcm implementation * fix the PayOverXcm trait to compile * moved doc comment out of trait implmeentation and to the trait * PayOverXCM documentation * Change documentation a bit * Added empty benchmark methods implementation and changed docs * update PayOverXCM to convert AccountIds to MultiLocations * Implement benchmarking method * Change v3 to latest * Descend origin to an asset sender (#6970) * descend origin to an asset sender * sender as tuple of dest and sender * Add more variants to the QueryResponseStatus enum * Change Beneficiary to Into<[u8; 32]> * update PayOverXcm to return concrete errors and use AccountId as sender * use polkadot-primitives for AccountId * fix dependency to use polkadot-core-primitives * force Unpaid instruction to the top of the instructions list * modify report_outcome to accept 
interior argument * use new_query directly for building final xcm query, instead of report_outcome * fix usage of new_query to use the XcmQueryHandler * fix usage of new_query to use the XcmQueryHandler * tiny method calling fix * xcm query handler (#7198) * drop redundant query status * rename ReportQueryStatus to OuterQueryStatus * revert rename of QueryResponseStatus * update mapping * Update xcm/xcm-builder/src/pay.rs Co-authored-by: Gavin Wood <gavin@parity.io> * Updates * Docs * Fix benchmarking stuff * Destination can be determined based on asset_kind * Tweaking API to minimise clones * Some repotting and docs --------- Co-authored-by: Anthony Alaribe <anthonyalaribe@gmail.com> Co-authored-by: Muharem Ismailov <ismailov.m.h@gmail.com> Co-authored-by: Anthony Alaribe <anthony.alaribe@parity.io> Co-authored-by: Gavin Wood <gavin@parity.io> * Companion for #14265 (#7307) * Update Cargo.lock Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io> * Update Cargo.lock Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io> --------- Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io> Co-authored-by: parity-processbot <> * bump serde to 1.0.163 (#7315) * bump serde to 1.0.163 * bump ci * update lockfile for {"substrate"} --------- Co-authored-by: parity-processbot <> * fmt * Updated fmt * Removing changes accidentally pulled from master * fix another master pull issue * Another master pull fix * fmt * Fixing implementers guide build * Revert "Merge branch 'rh-async-backing-feature-while-frozen' of https://github.com/paritytech/polkadot into brad-rename-parathread" This reverts commit bebc24af52ab61155e3fe02cb3ce66a592bce49c, reversing changes made to 1b2de662dfb11173679d6da5bd0da9d149c85547. 
--------- Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io> Signed-off-by: acatangiu <adrian@parity.io> Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io> Co-authored-by: Marcin S <marcin@realemail.net> Co-authored-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io> Co-authored-by: Michal Kucharczyk <1728078+michalkucharczyk@users.noreply.github.com> Co-authored-by: Adrian Catangiu <adrian@parity.io> Co-authored-by: ordian <write@reusable.software> Co-authored-by: Marcin S. <marcin@bytedude.com> Co-authored-by: Andrei Sandu <54316454+sandreim@users.noreply.github.com> Co-authored-by: Francisco Aguirre <franciscoaguirreperez@gmail.com> Co-authored-by: Bastian Köcher <git@kchr.de> Co-authored-by: Alexandru Vasile <60601340+lexnv@users.noreply.github.com> Co-authored-by: Sam Johnson <sam@durosoft.com> Co-authored-by: Tsvetomir Dimitrov <tsvetomir@parity.io> Co-authored-by: Anthony Alaribe <anthonyalaribe@gmail.com> Co-authored-by: Muharem Ismailov <ismailov.m.h@gmail.com> Co-authored-by: Anthony Alaribe <anthony.alaribe@parity.io> Co-authored-by: Gavin Wood <gavin@parity.io> * fix bitfield distribution test * approval distribution tests * fix bridge tests * update Cargo.lock * [async-backing-branch] Optimize collator-protocol validator-side request fetching (#7457) * Optimize collator-protocol validator-side request fetching * address feedback: replace tuples with structs * feedback: add doc comments * move collation types to subfolder --------- Signed-off-by: alindima <alin@parity.io> * Update collation generation for asynchronous backing (#7405) * break candidate receipt construction and distribution into own function * update implementers' guide to include SubmitCollation * implement SubmitCollation for collation-generation * fmt * fix test compilation & remove unnecessary submodule * add some TODOs for a test suite. 
* Update roadmap/implementers-guide/src/types/overseer-protocol.md Co-authored-by: Andrei Sandu <54316454+sandreim@users.noreply.github.com> * add new test harness and first test * refactor to avoid requiring background sender * ensure collation gets packaged and distributed * tests for the fallback case with no hint * add parent rp-number hint tests * fmt * update uses of CollationGenerationConfig * fix remaining test * address review comments * use subsystemsender for background tasks * fmt * remove ValidationCodeHashHint and related tests --------- Co-authored-by: Andrei Sandu <54316454+sandreim@users.noreply.github.com> * fix some more fallout from merge * fmt * remove staging APIs from Rococo & Westend (#7513) * send network messages on main protocol name (#7515) * misc async backing improvements for allowed ancestry blocks (#7532) * shared: fix acquire_info * backwards-compat test for prospective parachains * same relay parent is allowed * provisioner: request candidate receipt by relay parent (#7527) * return candidates hash from prospective parachains * update provisioner * update tests * guide changes * send a single message to backing * fix test * revert to old `handle_new_activations` logic in some cases (#7514) * revert to old `handle_new_activations` logic * gate sending messages on scheduled cores to max_depth >= 2 * fmt * 2->1 * Omnibus asynchronous backing bugfix PR (#7529) * fix a bug in backing * add some more logs * prospective parachains: take ancestry only up to session bounds * add test * fix zombienet tests (#7614) Signed-off-by: Andrei Sandu <andrei-mihail@parity.io> * fix runtime compilation * make bitfield distribution tests compile * attempt to fix zombienet disputes (#7618) * update metric name * update some metric names * avoid cycles when creating fake candidates * make undying collator more friendly to malformed parents * fix a bug in malus * fmt * clippy * add RUN_IN_CONTAINER to new ZombieNet tests (#7631) * remove duplicated 
migration happened because of master-merge --------- Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io> Signed-off-by: acatangiu <adrian@parity.io> Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io> Signed-off-by: alindima <alin@parity.io> Signed-off-by: Andrei Sandu <andrei-mihail@parity.io> Co-authored-by: Chris Sosnin <chris125_@live.com> Co-authored-by: Parity Bot <admin@parity.io> Co-authored-by: Chris Sosnin <48099298+slumber@users.noreply.github.com> Co-authored-by: Robert Klotzner <robert.klotzner@gmx.at> Co-authored-by: Robert Klotzner <eskimor@users.noreply.github.com> Co-authored-by: Marcin S <marcin@bytedude.com> Co-authored-by: Marcin S <marcin@realemail.net> Co-authored-by: Mattia L.V. Bradascio <28816406+bredamatt@users.noreply.github.com> Co-authored-by: Bradley Olson <34992650+BradleyOlson64@users.noreply.github.com> Co-authored-by: alexgparity <115470171+alexgparity@users.noreply.github.com> Co-authored-by: BradleyOlson64 <lotrftw9@gmail.com> Co-authored-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io> Co-authored-by: Michal Kucharczyk <1728078+michalkucharczyk@users.noreply.github.com> Co-authored-by: Adrian Catangiu <adrian@parity.io> Co-authored-by: ordian <write@reusable.software> Co-authored-by: Andrei Sandu <54316454+sandreim@users.noreply.github.com> Co-authored-by: Francisco Aguirre <franciscoaguirreperez@gmail.com> Co-authored-by: Bastian Köcher <git@kchr.de> Co-authored-by: Alexandru Vasile <60601340+lexnv@users.noreply.github.com> Co-authored-by: Sam Johnson <sam@durosoft.com> Co-authored-by: Tsvetomir Dimitrov <tsvetomir@parity.io> Co-authored-by: Anthony Alaribe <anthonyalaribe@gmail.com> Co-authored-by: Muharem Ismailov <ismailov.m.h@gmail.com> Co-authored-by: Anthony Alaribe <anthony.alaribe@parity.io> Co-authored-by: Gavin Wood <gavin@parity.io> Co-authored-by: Alin Dima <alin@parity.io>
@@ -25,8 +25,9 @@ use polkadot_node_jaeger as jaeger;
 use polkadot_node_network_protocol::{
     self as net_protocol,
     grid_topology::{RandomRouting, RequiredRouting, SessionGridTopologies, SessionGridTopology},
-    peer_set::MAX_NOTIFICATION_SIZE,
-    v1 as protocol_v1, PeerId, UnifiedReputationChange as Rep, Versioned, View,
+    peer_set::{ValidationVersion, MAX_NOTIFICATION_SIZE},
+    v1 as protocol_v1, vstaging as protocol_vstaging, PeerId, UnifiedReputationChange as Rep,
+    Versioned, VersionedValidationProtocol, View,
 };
 use polkadot_node_primitives::approval::{
     AssignmentCert, BlockApprovalMeta, IndirectAssignmentCert, IndirectSignedApprovalVote,
@@ -159,6 +160,15 @@ enum Resend {
     No,
 }
 
+/// Data stored on a per-peer basis.
+#[derive(Debug)]
+struct PeerData {
+    /// The peer's view.
+    view: View,
+    /// The peer's protocol version.
+    version: ValidationVersion,
+}
+
 /// The [`State`] struct is responsible for tracking the overall state of the subsystem.
 ///
 /// It tracks metadata about our view of the unfinalized chain,
@@ -179,7 +189,7 @@ struct State {
     pending_known: HashMap<Hash, Vec<(PeerId, PendingMessage)>>,
 
     /// Peer data is partially stored here, and partially inline within the [`BlockEntry`]s
-    peer_views: HashMap<PeerId, View>,
+    peer_data: HashMap<PeerId, PeerData>,
 
     /// Keeps a topology for various different sessions.
     topologies: SessionGridTopologies,
@@ -349,14 +359,30 @@ impl State {
         rng: &mut (impl CryptoRng + Rng),
     ) {
         match event {
-            NetworkBridgeEvent::PeerConnected(peer_id, role, _, _) => {
+            NetworkBridgeEvent::PeerConnected(peer_id, role, version, _) => {
                 // insert a blank view if none already present
                 gum::trace!(target: LOG_TARGET, ?peer_id, ?role, "Peer connected");
-                self.peer_views.entry(peer_id).or_default();
+                let version = match ValidationVersion::try_from(version).ok() {
+                    Some(v) => v,
+                    None => {
+                        // sanity: network bridge is supposed to detect this already.
+                        gum::error!(
+                            target: LOG_TARGET,
+                            ?peer_id,
+                            ?version,
+                            "Unsupported protocol version"
+                        );
+                        return
+                    },
+                };
+
+                self.peer_data
+                    .entry(peer_id)
+                    .or_insert_with(|| PeerData { version, view: Default::default() });
             },
             NetworkBridgeEvent::PeerDisconnected(peer_id) => {
                 gum::trace!(target: LOG_TARGET, ?peer_id, "Peer disconnected");
-                self.peer_views.remove(&peer_id);
+                self.peer_data.remove(&peer_id);
                 self.blocks.iter_mut().for_each(|(_hash, entry)| {
                     entry.known_by.remove(&peer_id);
                 })
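The `PeerConnected` arm above hinges on a fallible conversion from the raw wire protocol version into a known `ValidationVersion`, bailing out (after logging) for unknown versions. Here is a minimal standalone sketch of that pattern; the types (`ValidationVersion`, `PeerData`, `View`, a `u64` peer id) are simplified stand-ins, not the real `polkadot-node-network-protocol` types:

```rust
use std::collections::HashMap;

// Simplified stand-ins for the diff's `ValidationVersion` and `PeerData`.
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
enum ValidationVersion {
    V1,
    VStaging,
}

impl TryFrom<u32> for ValidationVersion {
    type Error = ();

    fn try_from(raw: u32) -> Result<Self, Self::Error> {
        match raw {
            1 => Ok(ValidationVersion::V1),
            2 => Ok(ValidationVersion::VStaging),
            _ => Err(()), // unknown version: caller ignores the peer
        }
    }
}

#[derive(Debug, Default)]
struct View; // stand-in for the real per-peer `View`

#[derive(Debug)]
struct PeerData {
    version: ValidationVersion,
    view: View,
}

fn on_peer_connected(peers: &mut HashMap<u64, PeerData>, peer_id: u64, raw_version: u32) {
    // Mirror of the diff's flow: reject an unsupported version up front,
    // otherwise record the peer with a default (empty) view.
    let version = match ValidationVersion::try_from(raw_version).ok() {
        Some(v) => v,
        None => return, // the real code logs an error here
    };

    peers
        .entry(peer_id)
        .or_insert_with(|| PeerData { version, view: View::default() });
}

fn main() {
    let mut peers = HashMap::new();
    on_peer_connected(&mut peers, 7, 1); // known version: stored
    on_peer_connected(&mut peers, 9, 99); // unknown version: ignored
    println!("{} {:?}", peers.len(), peers.get(&7).map(|p| p.version));
}
```

Keeping the conversion fallible (rather than defaulting unknown peers to V1) is what lets the subsystem later split sends cleanly by protocol version.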
@@ -393,12 +419,12 @@ impl State {
                     live
                 });
             },
+            NetworkBridgeEvent::PeerMessage(peer_id, msg) => {
+                self.process_incoming_peer_message(ctx, metrics, peer_id, msg, rng).await;
+            },
             NetworkBridgeEvent::UpdatedAuthorityIds { .. } => {
                 // The approval-distribution subsystem doesn't deal with `AuthorityDiscoveryId`s.
             },
-            NetworkBridgeEvent::PeerMessage(peer_id, Versioned::V1(msg)) => {
-                self.process_incoming_peer_message(ctx, metrics, peer_id, msg, rng).await;
-            },
         }
     }
 
@@ -455,16 +481,18 @@ impl State {
 
     {
         let sender = ctx.sender();
-        for (peer_id, view) in self.peer_views.iter() {
-            let intersection = view.iter().filter(|h| new_hashes.contains(h));
-            let view_intersection = View::new(intersection.cloned(), view.finalized_number);
+        for (peer_id, data) in self.peer_data.iter() {
+            let intersection = data.view.iter().filter(|h| new_hashes.contains(h));
+            let view_intersection =
+                View::new(intersection.cloned(), data.view.finalized_number);
             Self::unify_with_peer(
                 sender,
                 metrics,
                 &mut self.blocks,
                 &self.topologies,
-                self.peer_views.len(),
+                self.peer_data.len(),
                 *peer_id,
+                data.version,
                 view_intersection,
                 rng,
             )
@@ -547,6 +575,7 @@ impl State {
 
         adjust_required_routing_and_propagate(
             ctx,
+            &self.peer_data,
             &mut self.blocks,
             &self.topologies,
             |block_entry| block_entry.session == session,
@@ -566,13 +595,16 @@ impl State {
         ctx: &mut Context,
         metrics: &Metrics,
         peer_id: PeerId,
-        msg: protocol_v1::ApprovalDistributionMessage,
+        msg: net_protocol::ApprovalDistributionMessage,
         rng: &mut R,
     ) where
         R: CryptoRng + Rng,
     {
         match msg {
-            protocol_v1::ApprovalDistributionMessage::Assignments(assignments) => {
+            Versioned::V1(protocol_v1::ApprovalDistributionMessage::Assignments(assignments)) |
+            Versioned::VStaging(protocol_vstaging::ApprovalDistributionMessage::Assignments(
+                assignments,
+            )) => {
                 gum::trace!(
                     target: LOG_TARGET,
                     peer_id = %peer_id,
@@ -611,7 +643,10 @@ impl State {
                     .await;
                 }
             },
-            protocol_v1::ApprovalDistributionMessage::Approvals(approvals) => {
+            Versioned::V1(protocol_v1::ApprovalDistributionMessage::Approvals(approvals)) |
+            Versioned::VStaging(protocol_vstaging::ApprovalDistributionMessage::Approvals(
+                approvals,
+            )) => {
                 gum::trace!(
                     target: LOG_TARGET,
                     peer_id = %peer_id,
@@ -664,9 +699,14 @@ impl State {
     {
         gum::trace!(target: LOG_TARGET, ?view, "Peer view change");
         let finalized_number = view.finalized_number;
-        let old_view =
-            self.peer_views.get_mut(&peer_id).map(|d| std::mem::replace(d, view.clone()));
-        let old_finalized_number = old_view.map(|v| v.finalized_number).unwrap_or(0);
+        let (peer_protocol_version, old_finalized_number) = match self
+            .peer_data
+            .get_mut(&peer_id)
+            .map(|d| (d.version, std::mem::replace(&mut d.view, view.clone())))
+        {
+            Some((v, view)) => (v, view.finalized_number),
+            None => return, // unknown peer
+        };
 
         // we want to prune every block known_by peer up to (including) view.finalized_number
         let blocks = &mut self.blocks;
@@ -691,8 +731,9 @@ impl State {
             metrics,
             &mut self.blocks,
             &self.topologies,
-            self.peer_views.len(),
+            self.peer_data.len(),
             peer_id,
+            peer_protocol_version,
             view,
             rng,
         )
@@ -992,7 +1033,7 @@ impl State {
         // then messages will be sent when we get it.
 
         let assignments = vec![(assignment, claimed_candidate_index)];
-        let n_peers_total = self.peer_views.len();
+        let n_peers_total = self.peer_data.len();
         let source_peer = source.peer_id();
 
         let mut peer_filter = move |peer| {
@@ -1019,31 +1060,53 @@ impl State {
             route_random
         };
 
-        let peers = entry.known_by.keys().filter(|p| peer_filter(p)).cloned().collect::<Vec<_>>();
-
-        // Add the metadata of the assignment to the knowledge of each peer.
-        for peer in peers.iter() {
-            // we already filtered peers above, so this should always be Some
-            if let Some(peer_knowledge) = entry.known_by.get_mut(peer) {
-                peer_knowledge.sent.insert(message_subject.clone(), message_kind);
-            }
-        }
-
-        if !peers.is_empty() {
-            gum::trace!(
-                target: LOG_TARGET,
-                ?block_hash,
-                ?claimed_candidate_index,
-                local = source.peer_id().is_none(),
-                num_peers = peers.len(),
-                "Sending an assignment to peers",
-            );
-
-            ctx.send_message(NetworkBridgeTxMessage::SendValidationMessage(
-                peers,
-                Versioned::V1(protocol_v1::ValidationProtocol::ApprovalDistribution(
-                    protocol_v1::ApprovalDistributionMessage::Assignments(assignments),
-                )),
-            ))
-            .await;
-        }
+        let (v1_peers, vstaging_peers) = {
+            let peer_data = &self.peer_data;
+            let peers = entry
+                .known_by
+                .keys()
+                .filter_map(|p| peer_data.get_key_value(p))
+                .filter(|(p, _)| peer_filter(p))
+                .map(|(p, peer_data)| (*p, peer_data.version))
+                .collect::<Vec<_>>();
+
+            // Add the metadata of the assignment to the knowledge of each peer.
+            for (peer, _) in peers.iter() {
+                // we already filtered peers above, so this should always be Some
+                if let Some(peer_knowledge) = entry.known_by.get_mut(peer) {
+                    peer_knowledge.sent.insert(message_subject.clone(), message_kind);
+                }
+            }
+
+            if !peers.is_empty() {
+                gum::trace!(
+                    target: LOG_TARGET,
+                    ?block_hash,
+                    ?claimed_candidate_index,
+                    local = source.peer_id().is_none(),
+                    num_peers = peers.len(),
+                    "Sending an assignment to peers",
+                );
+            }
+
+            let v1_peers = filter_peers_by_version(&peers, ValidationVersion::V1);
+            let vstaging_peers = filter_peers_by_version(&peers, ValidationVersion::VStaging);
+
+            (v1_peers, vstaging_peers)
+        };
+
+        if !v1_peers.is_empty() {
+            ctx.send_message(NetworkBridgeTxMessage::SendValidationMessage(
+                v1_peers,
+                versioned_assignments_packet(ValidationVersion::V1, assignments.clone()),
+            ))
+            .await;
+        }
+
+        if !vstaging_peers.is_empty() {
+            ctx.send_message(NetworkBridgeTxMessage::SendValidationMessage(
+                vstaging_peers,
+                versioned_assignments_packet(ValidationVersion::VStaging, assignments.clone()),
+            ))
+            .await;
+        }
@@ -1332,38 +1395,55 @@ impl State {
 			in_topology || knowledge.sent.contains(message_subject, MessageKind::Assignment)
 		};

-		let peers = entry
-			.known_by
-			.iter()
-			.filter(|(p, k)| peer_filter(p, k))
-			.map(|(p, _)| p)
-			.cloned()
-			.collect::<Vec<_>>();
-
-		// Add the metadata of the assignment to the knowledge of each peer.
-		for peer in peers.iter() {
-			// we already filtered peers above, so this should always be Some
-			if let Some(entry) = entry.known_by.get_mut(peer) {
-				entry.sent.insert(message_subject.clone(), message_kind);
-			}
-		}
-
-		if !peers.is_empty() {
-			let approvals = vec![vote];
-			gum::trace!(
-				target: LOG_TARGET,
-				?block_hash,
-				?candidate_index,
-				local = source.peer_id().is_none(),
-				num_peers = peers.len(),
-				"Sending an approval to peers",
-			);
-
-			ctx.send_message(NetworkBridgeTxMessage::SendValidationMessage(
-				peers,
-				Versioned::V1(protocol_v1::ValidationProtocol::ApprovalDistribution(
-					protocol_v1::ApprovalDistributionMessage::Approvals(approvals),
-				)),
-			))
-			.await;
-		}
+		let (v1_peers, vstaging_peers) = {
+			let peer_data = &self.peer_data;
+			let peers = entry
+				.known_by
+				.iter()
+				.filter_map(|(p, k)| peer_data.get(&p).map(|pd| (p, k, pd.version)))
+				.filter(|(p, k, _)| peer_filter(p, k))
+				.map(|(p, _, v)| (*p, v))
+				.collect::<Vec<_>>();
+
+			// Add the metadata of the assignment to the knowledge of each peer.
+			for (peer, _) in peers.iter() {
+				// we already filtered peers above, so this should always be Some
+				if let Some(peer_knowledge) = entry.known_by.get_mut(peer) {
+					peer_knowledge.sent.insert(message_subject.clone(), message_kind);
+				}
+			}
+
+			if !peers.is_empty() {
+				gum::trace!(
+					target: LOG_TARGET,
+					?block_hash,
+					?candidate_index,
+					local = source.peer_id().is_none(),
+					num_peers = peers.len(),
+					"Sending an approval to peers",
+				);
+			}
+
+			let v1_peers = filter_peers_by_version(&peers, ValidationVersion::V1);
+			let vstaging_peers = filter_peers_by_version(&peers, ValidationVersion::VStaging);
+
+			(v1_peers, vstaging_peers)
+		};
+
+		let approvals = vec![vote];
+
+		if !v1_peers.is_empty() {
+			ctx.send_message(NetworkBridgeTxMessage::SendValidationMessage(
+				v1_peers,
+				versioned_approvals_packet(ValidationVersion::V1, approvals.clone()),
+			))
+			.await;
+		}
+
+		if !vstaging_peers.is_empty() {
+			ctx.send_message(NetworkBridgeTxMessage::SendValidationMessage(
+				vstaging_peers,
+				versioned_approvals_packet(ValidationVersion::VStaging, approvals),
+			))
+			.await;
+		}
@@ -1427,6 +1507,7 @@ impl State {
 		topologies: &SessionGridTopologies,
 		total_peers: usize,
 		peer_id: PeerId,
+		peer_protocol_version: ValidationVersion,
 		view: View,
 		rng: &mut (impl CryptoRng + Rng),
 	) {
@@ -1536,7 +1617,8 @@ impl State {
 				"Sending assignments to unified peer",
 			);

-			send_assignments_batched(sender, assignments_to_send, peer_id).await;
+			send_assignments_batched(sender, assignments_to_send, peer_id, peer_protocol_version)
+				.await;
 		}

 		if !approvals_to_send.is_empty() {
@@ -1547,7 +1629,7 @@ impl State {
 				"Sending approvals to unified peer",
 			);

-			send_approvals_batched(sender, approvals_to_send, peer_id).await;
+			send_approvals_batched(sender, approvals_to_send, peer_id, peer_protocol_version).await;
 		}
 	}

@@ -1583,6 +1665,7 @@ impl State {

 		adjust_required_routing_and_propagate(
 			ctx,
+			&self.peer_data,
 			&mut self.blocks,
 			&self.topologies,
 			|block_entry| {
@@ -1610,6 +1693,7 @@ impl State {

 		adjust_required_routing_and_propagate(
 			ctx,
+			&self.peer_data,
 			&mut self.blocks,
 			&self.topologies,
 			|block_entry| {
@@ -1669,6 +1753,7 @@ impl State {
 #[overseer::contextbounds(ApprovalDistribution, prefix = self::overseer)]
 async fn adjust_required_routing_and_propagate<Context, BlockFilter, RoutingModifier>(
 	ctx: &mut Context,
+	peer_data: &HashMap<PeerId, PeerData>,
 	blocks: &mut HashMap<Hash, BlockEntry>,
 	topologies: &SessionGridTopologies,
 	block_filter: BlockFilter,
@@ -1758,11 +1843,22 @@ async fn adjust_required_routing_and_propagate<Context, BlockFilter, RoutingModifier>(
 	// Send messages in accumulated packets, assignments preceding approvals.

 	for (peer, assignments_packet) in peer_assignments {
-		send_assignments_batched(ctx.sender(), assignments_packet, peer).await;
+		let peer_protocol_version = match peer_data.get(&peer).map(|pd| pd.version) {
+			None => continue,
+			Some(v) => v,
+		};
+
+		send_assignments_batched(ctx.sender(), assignments_packet, peer, peer_protocol_version)
+			.await;
 	}

 	for (peer, approvals_packet) in peer_approvals {
-		send_approvals_batched(ctx.sender(), approvals_packet, peer).await;
+		let peer_protocol_version = match peer_data.get(&peer).map(|pd| pd.version) {
+			None => continue,
+			Some(v) => v,
+		};
+
+		send_approvals_batched(ctx.sender(), approvals_packet, peer, peer_protocol_version).await;
 	}
 }

@@ -1912,6 +2008,49 @@ impl ApprovalDistribution {
 	}
 }

+fn versioned_approvals_packet(
+	version: ValidationVersion,
+	approvals: Vec<IndirectSignedApprovalVote>,
+) -> VersionedValidationProtocol {
+	match version {
+		ValidationVersion::V1 =>
+			Versioned::V1(protocol_v1::ValidationProtocol::ApprovalDistribution(
+				protocol_v1::ApprovalDistributionMessage::Approvals(approvals),
+			)),
+		ValidationVersion::VStaging =>
+			Versioned::VStaging(protocol_vstaging::ValidationProtocol::ApprovalDistribution(
+				protocol_vstaging::ApprovalDistributionMessage::Approvals(approvals),
+			)),
+	}
+}
+
+fn versioned_assignments_packet(
+	version: ValidationVersion,
+	assignments: Vec<(IndirectAssignmentCert, CandidateIndex)>,
+) -> VersionedValidationProtocol {
+	match version {
+		ValidationVersion::V1 =>
+			Versioned::V1(protocol_v1::ValidationProtocol::ApprovalDistribution(
+				protocol_v1::ApprovalDistributionMessage::Assignments(assignments),
+			)),
+		ValidationVersion::VStaging =>
+			Versioned::VStaging(protocol_vstaging::ValidationProtocol::ApprovalDistribution(
+				protocol_vstaging::ApprovalDistributionMessage::Assignments(assignments),
+			)),
+	}
+}
+
+fn filter_peers_by_version(
+	peers: &[(PeerId, ValidationVersion)],
+	version: ValidationVersion,
+) -> Vec<PeerId> {
+	peers
+		.iter()
+		.filter(|(_, v)| v == &version)
+		.map(|(peer_id, _)| *peer_id)
+		.collect()
+}
+
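The version-bucketing helper added above can be exercised in isolation. Below is a minimal, self-contained sketch of the same partitioning that `filter_peers_by_version` performs; the `PeerId` and `ValidationVersion` types here are simplified stand-ins for the real polkadot network types, not the actual definitions:

```rust
// Simplified stand-in: the real PeerId is a libp2p identity, here a plain number.
type PeerId = u64;

#[derive(Clone, Copy, PartialEq, Eq, Debug)]
enum ValidationVersion {
    V1,
    VStaging,
}

// Same shape as the diff's `filter_peers_by_version`: keep only the peers
// that negotiated the requested protocol version, preserving order.
fn filter_peers_by_version(
    peers: &[(PeerId, ValidationVersion)],
    version: ValidationVersion,
) -> Vec<PeerId> {
    peers
        .iter()
        .filter(|(_, v)| v == &version)
        .map(|(peer_id, _)| *peer_id)
        .collect()
}

fn main() {
    let peers = vec![
        (1, ValidationVersion::V1),
        (2, ValidationVersion::VStaging),
        (3, ValidationVersion::V1),
    ];
    // Each peer lands in exactly one bucket, so every message is encoded
    // once per version and sent once per peer.
    assert_eq!(filter_peers_by_version(&peers, ValidationVersion::V1), vec![1, 3]);
    assert_eq!(filter_peers_by_version(&peers, ValidationVersion::VStaging), vec![2]);
    println!("ok");
}
```

Calling the function twice (once per version) rather than partitioning in one pass is a deliberate simplicity trade-off: the peer lists are small and the code stays obvious.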
 #[overseer::subsystem(ApprovalDistribution, error=SubsystemError, prefix=self::overseer)]
 impl<Context> ApprovalDistribution {
 	fn start(self, ctx: Context) -> SpawnedSubsystem {
@@ -1954,19 +2093,16 @@ pub(crate) async fn send_assignments_batched(
 	sender: &mut impl overseer::ApprovalDistributionSenderTrait,
 	assignments: Vec<(IndirectAssignmentCert, CandidateIndex)>,
 	peer: PeerId,
+	protocol_version: ValidationVersion,
 ) {
 	let mut batches = assignments.into_iter().peekable();

 	while batches.peek().is_some() {
 		let batch: Vec<_> = batches.by_ref().take(MAX_ASSIGNMENT_BATCH_SIZE).collect();
+		let versioned = versioned_assignments_packet(protocol_version, batch);

 		sender
-			.send_message(NetworkBridgeTxMessage::SendValidationMessage(
-				vec![peer],
-				Versioned::V1(protocol_v1::ValidationProtocol::ApprovalDistribution(
-					protocol_v1::ApprovalDistributionMessage::Assignments(batch),
-				)),
-			))
+			.send_message(NetworkBridgeTxMessage::SendValidationMessage(vec![peer], versioned))
 			.await;
 	}
 }
@@ -1976,19 +2112,16 @@ pub(crate) async fn send_approvals_batched(
 	sender: &mut impl overseer::ApprovalDistributionSenderTrait,
 	approvals: Vec<IndirectSignedApprovalVote>,
 	peer: PeerId,
+	protocol_version: ValidationVersion,
 ) {
 	let mut batches = approvals.into_iter().peekable();

 	while batches.peek().is_some() {
 		let batch: Vec<_> = batches.by_ref().take(MAX_APPROVAL_BATCH_SIZE).collect();
+		let versioned = versioned_approvals_packet(protocol_version, batch);

 		sender
-			.send_message(NetworkBridgeTxMessage::SendValidationMessage(
-				vec![peer],
-				Versioned::V1(protocol_v1::ValidationProtocol::ApprovalDistribution(
-					protocol_v1::ApprovalDistributionMessage::Approvals(batch),
-				)),
-			))
+			.send_message(NetworkBridgeTxMessage::SendValidationMessage(vec![peer], versioned))
 			.await;
 	}
 }
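Both batched senders above drain their input with the same `peekable` / `take(N)` idiom, so each network message carries at most one batch. Here is a minimal sketch of just that loop, detached from the subsystem; the item type and the batch-size constant are illustrative, not the real `MAX_ASSIGNMENT_BATCH_SIZE` / `MAX_APPROVAL_BATCH_SIZE` values:

```rust
// Illustrative batch size; the real constants live in the subsystem.
const MAX_BATCH_SIZE: usize = 2;

// Same shape as the loops in `send_assignments_batched` /
// `send_approvals_batched`: drain an iterator in fixed-size chunks.
fn into_batches(items: Vec<u32>) -> Vec<Vec<u32>> {
    let mut out = Vec::new();
    let mut iter = items.into_iter().peekable();
    // Peek to check whether anything is left, then take at most
    // MAX_BATCH_SIZE items for the next "message".
    while iter.peek().is_some() {
        let batch: Vec<_> = iter.by_ref().take(MAX_BATCH_SIZE).collect();
        out.push(batch);
    }
    out
}

fn main() {
    let batches = into_batches(vec![1, 2, 3, 4, 5]);
    // Five items at batch size 2 yield three messages: 2 + 2 + 1.
    assert_eq!(batches, vec![vec![1, 2], vec![3, 4], vec![5]]);
    println!("{} batches", batches.len());
}
```

The `peek` guard matters: without it, an empty input would still send one empty message per call.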
@@ -219,6 +219,7 @@ async fn setup_gossip_topology(
 async fn setup_peer_with_view(
 	virtual_overseer: &mut VirtualOverseer,
 	peer_id: &PeerId,
+	validation_version: ValidationVersion,
 	view: View,
 ) {
 	overseer_send(
@@ -226,7 +227,7 @@ async fn setup_peer_with_view(
 		ApprovalDistributionMessage::NetworkBridgeUpdate(NetworkBridgeEvent::PeerConnected(
 			*peer_id,
 			ObservedRole::Full,
-			ValidationVersion::V1.into(),
+			validation_version.into(),
 			None,
 		)),
 	)
@@ -243,13 +244,12 @@ async fn setup_peer_with_view(
 async fn send_message_from_peer(
 	virtual_overseer: &mut VirtualOverseer,
 	peer_id: &PeerId,
-	msg: protocol_v1::ApprovalDistributionMessage,
+	msg: net_protocol::ApprovalDistributionMessage,
 ) {
 	overseer_send(
 		virtual_overseer,
 		ApprovalDistributionMessage::NetworkBridgeUpdate(NetworkBridgeEvent::PeerMessage(
-			*peer_id,
-			Versioned::V1(msg),
+			*peer_id, msg,
 		)),
 	)
 	.await;
@@ -331,9 +331,9 @@ fn try_import_the_same_assignment() {
 	let _ = test_harness(state_without_reputation_delay(), |mut virtual_overseer| async move {
 		let overseer = &mut virtual_overseer;
 		// setup peers
-		setup_peer_with_view(overseer, &peer_a, view![]).await;
-		setup_peer_with_view(overseer, &peer_b, view![hash]).await;
-		setup_peer_with_view(overseer, &peer_c, view![hash]).await;
+		setup_peer_with_view(overseer, &peer_a, ValidationVersion::V1, view![]).await;
+		setup_peer_with_view(overseer, &peer_b, ValidationVersion::V1, view![hash]).await;
+		setup_peer_with_view(overseer, &peer_c, ValidationVersion::V1, view![hash]).await;

 		// new block `hash_a` with 1 candidates
 		let meta = BlockApprovalMeta {
@@ -353,7 +353,7 @@ fn try_import_the_same_assignment() {
 		let assignments = vec![(cert.clone(), 0u32)];

 		let msg = protocol_v1::ApprovalDistributionMessage::Assignments(assignments.clone());
-		send_message_from_peer(overseer, &peer_a, msg).await;
+		send_message_from_peer(overseer, &peer_a, Versioned::V1(msg)).await;

 		expect_reputation_change(overseer, &peer_a, COST_UNEXPECTED_MESSAGE).await;

@@ -386,11 +386,11 @@ fn try_import_the_same_assignment() {
 		);

 		// setup new peer
-		setup_peer_with_view(overseer, &peer_d, view![]).await;
+		setup_peer_with_view(overseer, &peer_d, ValidationVersion::V1, view![]).await;

 		// send the same assignment from peer_d
 		let msg = protocol_v1::ApprovalDistributionMessage::Assignments(assignments);
-		send_message_from_peer(overseer, &peer_d, msg).await;
+		send_message_from_peer(overseer, &peer_d, Versioned::V1(msg)).await;

 		expect_reputation_change(overseer, &peer_d, COST_UNEXPECTED_MESSAGE).await;
 		expect_reputation_change(overseer, &peer_d, BENEFIT_VALID_MESSAGE).await;
@@ -413,7 +413,7 @@ fn delay_reputation_change() {
 		let overseer = &mut virtual_overseer;

 		// Setup peers
-		setup_peer_with_view(overseer, &peer, view![]).await;
+		setup_peer_with_view(overseer, &peer, ValidationVersion::V1, view![]).await;

 		// new block `hash_a` with 1 candidates
 		let meta = BlockApprovalMeta {
@@ -433,7 +433,7 @@ fn delay_reputation_change() {
 		let assignments = vec![(cert.clone(), 0u32)];

 		let msg = protocol_v1::ApprovalDistributionMessage::Assignments(assignments.clone());
-		send_message_from_peer(overseer, &peer, msg).await;
+		send_message_from_peer(overseer, &peer, Versioned::V1(msg)).await;

 		// send an `Accept` message from the Approval Voting subsystem
 		assert_matches!(
@@ -474,7 +474,7 @@ fn spam_attack_results_in_negative_reputation_change() {
 	let _ = test_harness(state_without_reputation_delay(), |mut virtual_overseer| async move {
 		let overseer = &mut virtual_overseer;
 		let peer = &peer_a;
-		setup_peer_with_view(overseer, peer, view![]).await;
+		setup_peer_with_view(overseer, peer, ValidationVersion::V1, view![]).await;

 		// new block `hash_b` with 20 candidates
 		let candidates_count = 20;
@@ -501,7 +501,7 @@ fn spam_attack_results_in_negative_reputation_change() {
 			.collect();

 		let msg = protocol_v1::ApprovalDistributionMessage::Assignments(assignments.clone());
-		send_message_from_peer(overseer, peer, msg.clone()).await;
+		send_message_from_peer(overseer, peer, Versioned::V1(msg.clone())).await;

 		for i in 0..candidates_count {
 			expect_reputation_change(overseer, peer, COST_UNEXPECTED_MESSAGE).await;
@@ -533,7 +533,7 @@ fn spam_attack_results_in_negative_reputation_change() {
 		.await;

 		// send the assignments again
-		send_message_from_peer(overseer, peer, msg.clone()).await;
+		send_message_from_peer(overseer, peer, Versioned::V1(msg.clone())).await;

 		// each of them will incur `COST_UNEXPECTED_MESSAGE`, not only the first one
 		for _ in 0..candidates_count {
@@ -558,7 +558,7 @@ fn peer_sending_us_the_same_we_just_sent_them_is_ok() {
 	let _ = test_harness(state_without_reputation_delay(), |mut virtual_overseer| async move {
 		let overseer = &mut virtual_overseer;
 		let peer = &peer_a;
-		setup_peer_with_view(overseer, peer, view![]).await;
+		setup_peer_with_view(overseer, peer, ValidationVersion::V1, view![]).await;

 		// new block `hash` with 1 candidates
 		let meta = BlockApprovalMeta {
@@ -610,12 +610,12 @@ fn peer_sending_us_the_same_we_just_sent_them_is_ok() {
 		// the peer could send us it as well
 		let assignments = vec![(cert, candidate_index)];
 		let msg = protocol_v1::ApprovalDistributionMessage::Assignments(assignments);
-		send_message_from_peer(overseer, peer, msg.clone()).await;
+		send_message_from_peer(overseer, peer, Versioned::V1(msg.clone())).await;

 		assert!(overseer.recv().timeout(TIMEOUT).await.is_none(), "we should not punish the peer");

 		// send the assignments again
-		send_message_from_peer(overseer, peer, msg).await;
+		send_message_from_peer(overseer, peer, Versioned::V1(msg)).await;

 		// now we should
 		expect_reputation_change(overseer, peer, COST_DUPLICATE_MESSAGE).await;
@@ -634,9 +634,9 @@ fn import_approval_happy_path() {
 	let _ = test_harness(state_without_reputation_delay(), |mut virtual_overseer| async move {
 		let overseer = &mut virtual_overseer;
 		// setup peers
-		setup_peer_with_view(overseer, &peer_a, view![]).await;
-		setup_peer_with_view(overseer, &peer_b, view![hash]).await;
-		setup_peer_with_view(overseer, &peer_c, view![hash]).await;
+		setup_peer_with_view(overseer, &peer_a, ValidationVersion::V1, view![]).await;
+		setup_peer_with_view(overseer, &peer_b, ValidationVersion::V1, view![hash]).await;
+		setup_peer_with_view(overseer, &peer_c, ValidationVersion::V1, view![hash]).await;

 		// new block `hash_a` with 1 candidates
 		let meta = BlockApprovalMeta {
@@ -681,7 +681,7 @@ fn import_approval_happy_path() {
 			signature: dummy_signature(),
 		};
 		let msg = protocol_v1::ApprovalDistributionMessage::Approvals(vec![approval.clone()]);
-		send_message_from_peer(overseer, &peer_b, msg).await;
+		send_message_from_peer(overseer, &peer_b, Versioned::V1(msg)).await;

 		assert_matches!(
 			overseer_recv(overseer).await,
@@ -722,8 +722,8 @@ fn import_approval_bad() {
 	let _ = test_harness(state_without_reputation_delay(), |mut virtual_overseer| async move {
 		let overseer = &mut virtual_overseer;
 		// setup peers
-		setup_peer_with_view(overseer, &peer_a, view![]).await;
-		setup_peer_with_view(overseer, &peer_b, view![hash]).await;
+		setup_peer_with_view(overseer, &peer_a, ValidationVersion::V1, view![]).await;
+		setup_peer_with_view(overseer, &peer_b, ValidationVersion::V1, view![hash]).await;

 		// new block `hash_a` with 1 candidates
 		let meta = BlockApprovalMeta {
@@ -749,14 +749,14 @@ fn import_approval_bad() {
 			signature: dummy_signature(),
 		};
 		let msg = protocol_v1::ApprovalDistributionMessage::Approvals(vec![approval.clone()]);
-		send_message_from_peer(overseer, &peer_b, msg).await;
+		send_message_from_peer(overseer, &peer_b, Versioned::V1(msg)).await;

 		expect_reputation_change(overseer, &peer_b, COST_UNEXPECTED_MESSAGE).await;

 		// now import an assignment from peer_b
 		let assignments = vec![(cert.clone(), candidate_index)];
 		let msg = protocol_v1::ApprovalDistributionMessage::Assignments(assignments);
-		send_message_from_peer(overseer, &peer_b, msg).await;
+		send_message_from_peer(overseer, &peer_b, Versioned::V1(msg)).await;

 		assert_matches!(
 			overseer_recv(overseer).await,
@@ -775,7 +775,7 @@ fn import_approval_bad() {

 		// and try again
 		let msg = protocol_v1::ApprovalDistributionMessage::Approvals(vec![approval.clone()]);
-		send_message_from_peer(overseer, &peer_b, msg).await;
+		send_message_from_peer(overseer, &peer_b, Versioned::V1(msg)).await;

 		assert_matches!(
 			overseer_recv(overseer).await,
@@ -916,7 +916,7 @@ fn update_peer_view() {
 		overseer_send(overseer, ApprovalDistributionMessage::DistributeAssignment(cert_b, 0)).await;

 		// connect a peer
-		setup_peer_with_view(overseer, peer, view![hash_a]).await;
+		setup_peer_with_view(overseer, peer, ValidationVersion::V1, view![hash_a]).await;

 		// we should send relevant assignments to the peer
 		assert_matches!(
@@ -934,7 +934,7 @@ fn update_peer_view() {
 		virtual_overseer
 	});

-	assert_eq!(state.peer_views.get(peer).map(|v| v.finalized_number), Some(0));
+	assert_eq!(state.peer_data.get(peer).map(|data| data.view.finalized_number), Some(0));
 	assert_eq!(
 		state
 			.blocks
@@ -986,7 +986,7 @@ fn update_peer_view() {
 		virtual_overseer
 	});

-	assert_eq!(state.peer_views.get(peer).map(|v| v.finalized_number), Some(2));
+	assert_eq!(state.peer_data.get(peer).map(|data| data.view.finalized_number), Some(2));
 	assert_eq!(
 		state
 			.blocks
@@ -1016,7 +1016,10 @@ fn update_peer_view() {
 		virtual_overseer
 	});

-	assert_eq!(state.peer_views.get(peer).map(|v| v.finalized_number), Some(finalized_number));
+	assert_eq!(
+		state.peer_data.get(peer).map(|data| data.view.finalized_number),
+		Some(finalized_number)
+	);
 	assert!(state.blocks.get(&hash_c).unwrap().known_by.get(peer).is_none());
 }

@@ -1031,7 +1034,7 @@ fn import_remotely_then_locally() {
 	let _ = test_harness(state_without_reputation_delay(), |mut virtual_overseer| async move {
 		let overseer = &mut virtual_overseer;
 		// setup the peer
-		setup_peer_with_view(overseer, peer, view![hash]).await;
+		setup_peer_with_view(overseer, peer, ValidationVersion::V1, view![hash]).await;

 		// new block `hash_a` with 1 candidates
 		let meta = BlockApprovalMeta {
@@ -1051,7 +1054,7 @@ fn import_remotely_then_locally() {
 		let cert = fake_assignment_cert(hash, validator_index);
 		let assignments = vec![(cert.clone(), candidate_index)];
 		let msg = protocol_v1::ApprovalDistributionMessage::Assignments(assignments.clone());
-		send_message_from_peer(overseer, peer, msg).await;
+		send_message_from_peer(overseer, peer, Versioned::V1(msg)).await;

 		// send an `Accept` message from the Approval Voting subsystem
 		assert_matches!(
@@ -1086,7 +1089,7 @@ fn import_remotely_then_locally() {
 			signature: dummy_signature(),
 		};
 		let msg = protocol_v1::ApprovalDistributionMessage::Approvals(vec![approval.clone()]);
-		send_message_from_peer(overseer, peer, msg).await;
+		send_message_from_peer(overseer, peer, Versioned::V1(msg)).await;

 		assert_matches!(
 			overseer_recv(overseer).await,
@@ -1152,7 +1155,7 @@ fn sends_assignments_even_when_state_is_approved() {
 		.await;

 		// connect the peer.
-		setup_peer_with_view(overseer, peer, view![hash]).await;
+		setup_peer_with_view(overseer, peer, ValidationVersion::V1, view![hash]).await;

 		let assignments = vec![(cert.clone(), candidate_index)];
 		let approvals = vec![approval.clone()];
@@ -1216,7 +1219,7 @@ fn race_condition_in_local_vs_remote_view_update() {
 		};

 		// This will send a peer view that is ahead of our view
-		setup_peer_with_view(overseer, peer, view![hash_b]).await;
+		setup_peer_with_view(overseer, peer, ValidationVersion::V1, view![hash_b]).await;

 		// Send our view update to include a new head
 		overseer_send(
@@ -1237,7 +1240,7 @@ fn race_condition_in_local_vs_remote_view_update() {
 			.collect();

 		let msg = protocol_v1::ApprovalDistributionMessage::Assignments(assignments.clone());
-		send_message_from_peer(overseer, peer, msg.clone()).await;
+		send_message_from_peer(overseer, peer, Versioned::V1(msg.clone())).await;

 		// This will handle pending messages being processed
 		let msg = ApprovalDistributionMessage::NewBlocks(vec![meta]);
@@ -1280,7 +1283,7 @@ fn propagates_locally_generated_assignment_to_both_dimensions() {

 		// Connect all peers.
 		for (peer, _) in &peers {
-			setup_peer_with_view(overseer, peer, view![hash]).await;
+			setup_peer_with_view(overseer, peer, ValidationVersion::V1, view![hash]).await;
 		}

 		// Set up a gossip topology.
@@ -1385,7 +1388,7 @@ fn propagates_assignments_along_unshared_dimension() {

 		// Connect all peers.
 		for (peer, _) in &peers {
-			setup_peer_with_view(overseer, peer, view![hash]).await;
+			setup_peer_with_view(overseer, peer, ValidationVersion::V1, view![hash]).await;
 		}

 		// Set up a gossip topology.
@@ -1421,7 +1424,7 @@ fn propagates_assignments_along_unshared_dimension() {

 			// Issuer of the message is important, not the peer we receive from.
 			// 99 deliberately chosen because it's not in X or Y.
-			send_message_from_peer(overseer, &peers[99].0, msg).await;
+			send_message_from_peer(overseer, &peers[99].0, Versioned::V1(msg)).await;
 			assert_matches!(
 				overseer_recv(overseer).await,
 				AllMessages::ApprovalVoting(ApprovalVotingMessage::CheckAndImportAssignment(
@@ -1470,7 +1473,7 @@ fn propagates_assignments_along_unshared_dimension() {

 			// Issuer of the message is important, not the peer we receive from.
 			// 99 deliberately chosen because it's not in X or Y.
-			send_message_from_peer(overseer, &peers[99].0, msg).await;
+			send_message_from_peer(overseer, &peers[99].0, Versioned::V1(msg)).await;
 			assert_matches!(
 				overseer_recv(overseer).await,
 				AllMessages::ApprovalVoting(ApprovalVotingMessage::CheckAndImportAssignment(
@@ -1527,7 +1530,7 @@ fn propagates_to_required_after_connect() {
 		// Connect all peers except omitted.
 		for (i, (peer, _)) in peers.iter().enumerate() {
 			if !omitted.contains(&i) {
-				setup_peer_with_view(overseer, peer, view![hash]).await;
+				setup_peer_with_view(overseer, peer, ValidationVersion::V1, view![hash]).await;
 			}
 		}

@@ -1616,7 +1619,7 @@ fn propagates_to_required_after_connect() {
 		);

 		for i in omitted.iter().copied() {
-			setup_peer_with_view(overseer, &peers[i].0, view![hash]).await;
+			setup_peer_with_view(overseer, &peers[i].0, ValidationVersion::V1, view![hash]).await;

 			assert_matches!(
 				overseer_recv(overseer).await,
@@ -1665,7 +1668,7 @@ fn sends_to_more_peers_after_getting_topology() {

 		// Connect all peers except omitted.
 		for (peer, _) in &peers {
-			setup_peer_with_view(overseer, peer, view![hash]).await;
+			setup_peer_with_view(overseer, peer, ValidationVersion::V1, view![hash]).await;
 		}

 		// new block `hash_a` with 1 candidates
@@ -1817,7 +1820,7 @@ fn originator_aggression_l1() {

 		// Connect all peers except omitted.
 		for (peer, _) in &peers {
-			setup_peer_with_view(overseer, peer, view![hash]).await;
+			setup_peer_with_view(overseer, peer, ValidationVersion::V1, view![hash]).await;
 		}

 		// new block `hash_a` with 1 candidates
@@ -1976,7 +1979,7 @@ fn non_originator_aggression_l1() {

 		// Connect all peers except omitted.
 		for (peer, _) in &peers {
-			setup_peer_with_view(overseer, peer, view![hash]).await;
+			setup_peer_with_view(overseer, peer, ValidationVersion::V1, view![hash]).await;
 		}

 		// new block `hash_a` with 1 candidates
@@ -2010,7 +2013,7 @@ fn non_originator_aggression_l1() {

 		// Issuer of the message is important, not the peer we receive from.
 		// 99 deliberately chosen because it's not in X or Y.
-		send_message_from_peer(overseer, &peers[99].0, msg).await;
+		send_message_from_peer(overseer, &peers[99].0, Versioned::V1(msg)).await;
 		assert_matches!(
 			overseer_recv(overseer).await,
 			AllMessages::ApprovalVoting(ApprovalVotingMessage::CheckAndImportAssignment(
@@ -2081,7 +2084,7 @@ fn non_originator_aggression_l2() {

 		// Connect all peers except omitted.
 		for (peer, _) in &peers {
-			setup_peer_with_view(overseer, peer, view![hash]).await;
+			setup_peer_with_view(overseer, peer, ValidationVersion::V1, view![hash]).await;
 		}

 		// new block `hash_a` with 1 candidates
@@ -2115,7 +2118,7 @@ fn non_originator_aggression_l2() {

 		// Issuer of the message is important, not the peer we receive from.
 		// 99 deliberately chosen because it's not in X or Y.
-		send_message_from_peer(overseer, &peers[99].0, msg).await;
+		send_message_from_peer(overseer, &peers[99].0, Versioned::V1(msg)).await;
 		assert_matches!(
 			overseer_recv(overseer).await,
 			AllMessages::ApprovalVoting(ApprovalVotingMessage::CheckAndImportAssignment(
@@ -2246,7 +2249,7 @@ fn resends_messages_periodically() {

 		// Connect all peers.
 		for (peer, _) in &peers {
-			setup_peer_with_view(overseer, peer, view![hash]).await;
+			setup_peer_with_view(overseer, peer, ValidationVersion::V1, view![hash]).await;
 		}

 		// Set up a gossip topology.
@@ -2281,7 +2284,7 @@ fn resends_messages_periodically() {

 		// Issuer of the message is important, not the peer we receive from.
 		// 99 deliberately chosen because it's not in X or Y.
-		send_message_from_peer(overseer, &peers[99].0, msg).await;
+		send_message_from_peer(overseer, &peers[99].0, Versioned::V1(msg)).await;
 		assert_matches!(
 			overseer_recv(overseer).await,
 			AllMessages::ApprovalVoting(ApprovalVotingMessage::CheckAndImportAssignment(
@@ -2372,6 +2375,126 @@ fn resends_messages_periodically() {
 	});
 }

+/// Tests that peers correctly receive versioned messages.
+#[test]
+fn import_versioned_approval() {
+	let peer_a = PeerId::random();
+	let peer_b = PeerId::random();
+	let peer_c = PeerId::random();
+	let parent_hash = Hash::repeat_byte(0xFF);
+	let hash = Hash::repeat_byte(0xAA);
+
+	let state = state_without_reputation_delay();
+	let _ = test_harness(state, |mut virtual_overseer| async move {
+		let overseer = &mut virtual_overseer;
+		// All peers are aware of relay parent.
+		setup_peer_with_view(overseer, &peer_a, ValidationVersion::VStaging, view![hash]).await;
+		setup_peer_with_view(overseer, &peer_b, ValidationVersion::V1, view![hash]).await;
+		setup_peer_with_view(overseer, &peer_c, ValidationVersion::VStaging, view![hash]).await;
+
+		// new block `hash_a` with 1 candidates
+		let meta = BlockApprovalMeta {
+			hash,
+			parent_hash,
+			number: 1,
+			candidates: vec![Default::default(); 1],
+			slot: 1.into(),
+			session: 1,
+		};
+		let msg = ApprovalDistributionMessage::NewBlocks(vec![meta]);
+		overseer_send(overseer, msg).await;
+
+		// import an assignment related to `hash` locally
+		let validator_index = ValidatorIndex(0);
+		let candidate_index = 0u32;
+		let cert = fake_assignment_cert(hash, validator_index);
+		overseer_send(
+			overseer,
+			ApprovalDistributionMessage::DistributeAssignment(cert, candidate_index),
+		)
+		.await;
+
+		assert_matches!(
+			overseer_recv(overseer).await,
+			AllMessages::NetworkBridgeTx(NetworkBridgeTxMessage::SendValidationMessage(
+				peers,
+				Versioned::V1(protocol_v1::ValidationProtocol::ApprovalDistribution(
+					protocol_v1::ApprovalDistributionMessage::Assignments(assignments)
+				))
+			)) => {
+				assert_eq!(peers, vec![peer_b]);
+				assert_eq!(assignments.len(), 1);
+			}
+		);
+
+		assert_matches!(
+			overseer_recv(overseer).await,
+			AllMessages::NetworkBridgeTx(NetworkBridgeTxMessage::SendValidationMessage(
+				peers,
+				Versioned::VStaging(protocol_vstaging::ValidationProtocol::ApprovalDistribution(
+					protocol_vstaging::ApprovalDistributionMessage::Assignments(assignments)
+				))
+			)) => {
+				assert_eq!(peers.len(), 2);
+				assert!(peers.contains(&peer_a));
+				assert!(peers.contains(&peer_c));
+
+				assert_eq!(assignments.len(), 1);
+			}
+		);
+
+		// send an approval from peer_a
+		let approval = IndirectSignedApprovalVote {
+			block_hash: hash,
+			candidate_index,
+			validator: validator_index,
+			signature: dummy_signature(),
+		};
+		let msg = protocol_vstaging::ApprovalDistributionMessage::Approvals(vec![approval.clone()]);
+		send_message_from_peer(overseer, &peer_a, Versioned::VStaging(msg)).await;
+
+		assert_matches!(
+			overseer_recv(overseer).await,
+			AllMessages::ApprovalVoting(ApprovalVotingMessage::CheckAndImportApproval(
+				vote,
+				tx,
+			)) => {
+				assert_eq!(vote, approval);
+				tx.send(ApprovalCheckResult::Accepted).unwrap();
+			}
+		);
+
+		expect_reputation_change(overseer, &peer_a, BENEFIT_VALID_MESSAGE_FIRST).await;
+
+		// Peers b and c receive versioned approval messages.
+		assert_matches!(
+			overseer_recv(overseer).await,
+			AllMessages::NetworkBridgeTx(NetworkBridgeTxMessage::SendValidationMessage(
+				peers,
+				Versioned::V1(protocol_v1::ValidationProtocol::ApprovalDistribution(
+					protocol_v1::ApprovalDistributionMessage::Approvals(approvals)
+				))
+			)) => {
+				assert_eq!(peers, vec![peer_b]);
+				assert_eq!(approvals.len(), 1);
+			}
+		);
+		assert_matches!(
+			overseer_recv(overseer).await,
+			AllMessages::NetworkBridgeTx(NetworkBridgeTxMessage::SendValidationMessage(
+				peers,
+				Versioned::VStaging(protocol_vstaging::ValidationProtocol::ApprovalDistribution(
+					protocol_vstaging::ApprovalDistributionMessage::Approvals(approvals
|
||||
))
|
||||
)) => {
|
||||
assert_eq!(peers, vec![peer_c]);
|
||||
assert_eq!(approvals.len(), 1);
|
||||
}
|
||||
);
|
||||
virtual_overseer
|
||||
});
|
||||
}
|
||||
|
||||
fn batch_test_round(message_count: usize) {
|
||||
use polkadot_node_subsystem::SubsystemContext;
|
||||
let pool = sp_core::testing::TaskExecutor::new();
|
||||
@@ -2402,8 +2525,9 @@ fn batch_test_round(message_count: usize) {
 		.collect();
 
 	let peer = PeerId::random();
-	send_assignments_batched(&mut sender, assignments.clone(), peer).await;
-	send_approvals_batched(&mut sender, approvals.clone(), peer).await;
+	send_assignments_batched(&mut sender, assignments.clone(), peer, ValidationVersion::V1)
+		.await;
+	send_approvals_batched(&mut sender, approvals.clone(), peer, ValidationVersion::V1).await;
 
 	// Check expected assignments batches.
 	for assignment_index in (0..assignments.len()).step_by(super::MAX_ASSIGNMENT_BATCH_SIZE) {