Mirror of https://github.com/pezkuwichain/pezkuwi-subxt.git (synced 2026-04-26 02:57:57 +00:00)
Asynchronous Backing MegaPR (#5022)
* inclusion emulator logic for asynchronous backing (#4790)
* initial stab at candidate_context
* fmt
* docs & more TODOs
* some cleanups
* reframe as inclusion_emulator
* documentations yes
* update types
* add constraint modifications
* watermark
* produce modifications
* v2 primitives: re-export all v1 for consistency
* vstaging primitives
* emulator constraints: handle code upgrades
* produce outbound HRMP modifications
* stack.
* method for applying modifications
* method just for sanity-checking modifications
* fragments produce modifications, not prospectives
* make linear
* add some TODOs
* remove stacking; handle code upgrades
* take `fragment` private
* reintroduce stacking.
* fragment constructor
* add TODO
* allow validating fragments against future constraints
* docs
* relay-parent number and min code size checks
* check code upgrade restriction
* check max hrmp per candidate
* fmt
* remove GoAhead logic because it wasn't helpful
* docs on code upgrade failure
* test stacking
* test modifications against constraints
* fmt
* test fragments
* descending or duplicate test
* fmt
* remove unused imports in vstaging
* wrong primitives
* spellcheck
* Runtime changes for Asynchronous Backing (#4786)
* inclusion: utility for allowed relay-parents
* inclusion: use prev number instead of prev hash
* track most recent context of paras
* inclusion: accept previous relay-parents
* update dmp advancement rule for async backing
* fmt
* add a comment about validation outputs
* clean up a couple of TODOs
* weights
* fix weights
* fmt
* Resolve dmp todo
* Restore inclusion tests
* Restore paras_inherent tests
* MostRecentContext test
* Benchmark for new paras dispatchable
* Prepare check_validation_outputs for upgrade
* cargo run --quiet --profile=production --features=runtime-benchmarks -- benchmark --chain=kusama-dev --steps=50 --repeat=20 --pallet=runtime_parachains::paras --extrinsic=* --execution=wasm --wasm-execution=compiled --heap-pages=4096 --header=./file_header.txt --output=./runtime/kusama/src/weights/runtime_parachains_paras.rs
* cargo run --quiet --profile=production --features=runtime-benchmarks -- benchmark --chain=westend-dev --steps=50 --repeat=20 --pallet=runtime_parachains::paras --extrinsic=* --execution=wasm --wasm-execution=compiled --heap-pages=4096 --header=./file_header.txt --output=./runtime/westend/src/weights/runtime_parachains_paras.rs
* cargo run --quiet --profile=production --features=runtime-benchmarks -- benchmark --chain=polkadot-dev --steps=50 --repeat=20 --pallet=runtime_parachains::paras --extrinsic=* --execution=wasm --wasm-execution=compiled --heap-pages=4096 --header=./file_header.txt --output=./runtime/polkadot/src/weights/runtime_parachains_paras.rs
* cargo run --quiet --profile=production --features=runtime-benchmarks -- benchmark --chain=rococo-dev --steps=50 --repeat=20 --pallet=runtime_parachains::paras --extrinsic=* --execution=wasm --wasm-execution=compiled --heap-pages=4096 --header=./file_header.txt --output=./runtime/rococo/src/weights/runtime_parachains_paras.rs
* Implementers guide changes
* More tests for allowed relay parents
* Add a github issue link
* Compute group index based on relay parent
* Storage migration
* Move allowed parents tracker to shared
* Compile error
* Get group assigned to core at the next block
* Test group assignment
* fmt
* Error instead of panic
* Update guide
* Extend doc-comment
* Update runtime/parachains/src/shared.rs

Co-authored-by: Robert Habermeier <rphmeier@gmail.com>
Co-authored-by: Chris Sosnin <chris125_@live.com>
Co-authored-by: Parity Bot <admin@parity.io>
Co-authored-by: Chris Sosnin <48099298+slumber@users.noreply.github.com>

* Prospective Parachains Subsystem (#4913)
* docs and skeleton
* subsystem skeleton
* main loop
* fragment tree basics & fmt
* begin fragment trees & view
* flesh out more of view update logic
* further flesh out update logic
* some refcount functions for fragment trees
* add fatal/non-fatal errors
* use non-fatal results
* clear up some TODOs
* ideal format for scheduling info
* add a bunch of TODOs
* some more fluff
* extract fragment graph to submodule
* begin fragment graph API
* trees, not graphs
* improve docs
* scope and constructor for trees
* add some test TODOs
* limit max ancestors and store constraints
* constructor
* constraints: fix bug in HRMP watermarks
* fragment tree population logic
* set::retain
* extract population logic
* implement add_and_populate
* fmt
* add some TODOs in tests
* implement child-selection
* strip out old stuff based on wrong assumptions
* use fatality
* implement pruning
* remove unused ancestor constraints
* fragment tree instantiation
* remove outdated comment
* add message/request types and skeleton for handling
* fmt
* implement handle_candidate_seconded
* candidate storage: handle backed
* implement handle_candidate_backed
* implement answer_get_backable_candidate
* remove async where not needed
* implement fetch_ancestry
* add logic for run_iteration
* add some docs
* remove global allow(unused), fix warnings
* make spellcheck happy (despite English)
* fmt
* bump Cargo.lock
* replace tracing with gum
* introduce PopulateFrom trait
* implement GetHypotheticalDepths
* revise docs slightly
* first fragment tree scope test
* more scope tests
* test add_candidate
* fmt
* test retain
* refactor test code
* test populate is recursive
* test contiguity of depth 0 is maintained
* add_and_populate tests
* cycle tests
* remove PopulateFrom trait
* fmt
* test hypothetical depths (non-recursive)
* have CandidateSeconded return membership
* tree membership requests
* Add a ProspectiveParachainsSubsystem struct
* add a staging API for base constraints
* add a `From` impl
* add runtime API for staging_validity_constraints
* implement fetch_base_constraints
* implement `fetch_upcoming_paras`
* remove reconstruction of candidate receipt; no obvious usecase
* fmt
* export message to broader module
* remove last TODO
* correctly export
* fix compilation and add GetMinimumRelayParent request
* make provisioner into a real subsystem with proper message bounds
* fmt
* fix ChannelsOut in overseer test
* fix overseer tests
* fix again
* fmt
* Integrate prospective parachains subsystem into backing: Part 1 (#5557)
* BEGIN ASYNC candidate-backing CHANGES
* rename & document modes
* answer prospective validation data requests
* GetMinimumRelayParents request is now plural
* implement an implicit view utility for backing subsystems
* implicit-view: get allowed relay parents
* refactorings and improvements to implicit view
* add some TODOs for tests
* split implicit view updates into 2 functions
* backing: define State to prepare for functional refactor
* add some docs
* backing: implement bones of new leaf activation logic
* backing: create per-relay-parent-states
* use new handle_active_leaves_update
* begin extracting logic from CandidateBackingJob
* mostly extract statement import from job logic
* handle statement imports outside of job logic
* do some TODO planning for prospective parachains integration
* finish rewriting backing subsystem in functional style
* add prospective parachains mode to relay parent entries
* fmt
* add a RejectedByProspectiveParachains error
* notify prospective parachains of seconded and backed candidates
* always validate candidates exhaustively in backing.
* return persisted_validation_data from validation
* handle rejections by prospective parachains
* implement seconding sanity check
* invoke validate_and_second
* Alter statement table to allow multiple seconded messages per validator
* refactor backing to have statements carry PVD
* clean up all warnings
* Add tests for implicit view
* Improve doc comments
* Prospective parachains mode based on Runtime API version
* Add a TODO
* Rework seconding_sanity_check
* Iterate over responses
* Update backing tests
* collator-protocol: load PVD from runtime
* Fix validator side tests
* Update statement-distribution to fetch PVD
* Fix statement-distribution tests
* Backing tests with prospective paras #1
* fix per_relay_parent pruning in backing
* Test multiple leaves
* Test seconding sanity check
* Import statement order. Before creating an entry in `PerCandidateState` map wait for the approval from the prospective parachains
* Add a test for correct state updates
* Second multiple candidates per relay parent test
* Add backing tests with prospective paras
* Second more than one test without prospective paras
* Add a test for prospective para blocks
* Update malus
* typos

Co-authored-by: Chris Sosnin <chris125_@live.com>

* Track occupied depth in backing per parachain (#5778)
* provisioner: async backing changes (#5711)
* Provisioner changes for async backing
* Select candidates based on prospective paras mode
* Revert naming
* Update tests
* Update TODO comment
* review
* provisioner: async backing changes (#5711)
* Provisioner changes for async backing
* Select candidates based on prospective paras mode
* Revert naming
* Update tests
* Update TODO comment
* review
* fmt
* Network bridge changes for asynchronous backing + update subsystems to handle versioned packets (#5991)
* BEGIN STATEMENT DISTRIBUTION WORK. Create a vstaging network protocol which is the same as v1
* mostly make network bridge amenable to vstaging
* network-bridge: fully adapt to vstaging
* add some TODOs for tests
* fix fallout in bitfield-distribution
* bitfield distribution tests + TODOs
* fix fallout in gossip-support
* collator-protocol: fix message fallout
* collator-protocol: load PVD from runtime
* add TODO for vstaging tests
* make things compile
* set used network protocol version using a feature
* fmt
* get approval-distribution building
* fix approval-distribution tests
* spellcheck
* nits
* approval distribution net protocol test
* bitfield distribution net protocol test
* Revert "collator-protocol: fix message fallout". This reverts commit 07cc887303e16c6b3843ecb25cdc7cc2080e2ed1.
* Network bridge tests

Co-authored-by: Chris Sosnin <chris125_@live.com>

* remove max_pov_size requirement from prospective pvd request (#6014)
* remove max_pov_size requirement from prospective pvd request
* fmt
* Extract legacy statement distribution to its own module (#6026)
* add compatibility type to v2 statement distribution message
* warning cleanup
* handle compatibility layer for v2
* clean up an unimplemented!() block
* circulate statements based on version
* extract legacy v1 code into separate module
* remove unimplemented
* clean up naming of from_requester/responder
* remove TODOs
* have backing share seconded statements with PVD
* fmt
* fix warning
* Quick fix unused warning for not yet implemented/used staging messages.
* Fix network bridge test
* Fix wrong merge. We now have 23 subsystems (network bridge split + prospective parachains)

Co-authored-by: Robert Klotzner <robert.klotzner@gmx.at>

* Version 3 is already live.
* Fix tests (#6055)
* Fix backing tests
* Fix warnings.
* fmt
* collator-protocol: asynchronous backing changes (#5740)
* Draft collator side changes
* Start working on collations management
* Handle peer's view change
* Versioning on advertising
* Versioned collation fetching request
* Handle versioned messages
* Improve docs for collation requests
* Add spans
* Add request receiver to overseer
* Fix collator side tests
* Extract relay parent mode to lib
* Validator side draft
* Add more checks for advertisement
* Request pvd based on async backing mode
* review
* Validator side improvements
* Make old tests green
* More fixes
* Collator side tests draft
* Send collation test
* fmt
* Collator side network protocol versioning
* cleanup
* merge artifacts
* Validator side net protocol versioning
* Remove fragment tree membership request
* Resolve todo
* Collator side core state test
* Improve net protocol compatibility
* Validator side tests
* more improvements
* style fixes
* downgrade log
* Track implicit assignments
* Limit the number of seconded candidates per para
* Add a sanity check
* Handle fetched candidate
* fix tests
* Retry fetch
* Guard against dequeueing while already fetching
* Reintegrate connection management
* Timeout on advertisements
* fmt
* spellcheck
* update tests after merge
* validator assignment fixes for backing and collator protocol (#6158)
* Rename depth->ancestry len in tests
* Refactor group assignments
* Remove implicit assignments
* backing: consider occupied core assignments
* Track a single para on validator side
* Refactor prospective parachains mode request (#6179)
* Extract prospective parachains mode into util
* Skip activations depending on the mode
* backing: don't send backed candidate to provisioner (#6185)
* backing: introduce `CanSecond` request for advertisements filtering (#6225)
* Drop BoundToRelayParent
* draft changes
* fix backing tests
* Fix genesis ancestry
* Fix validator side tests
* more tests
* cargo generate-lockfile
* Implement `StagingValidityConstraints` Runtime API method (#6258)
* Implement StagingValidityConstraints
* spellcheck
* fix ump params
* Update hrmp comment
* Introduce ump per candidate limit
* hypothetical earliest block
* refactor primitives usage
* hypothetical earliest block number test
* fix build
* Prepare the Runtime for asynchronous backing upgrade (#6287)
* Introduce async backing params to runtime config
* fix cumulus config
* use config
* finish runtimes
* Introduce new staging API
* Update collator protocol
* Update provisioner
* Update prospective parachains
* Update backing
* Move async backing params lower in the config
* make naming consistent
* misc
* Use real prospective parachains subsystem (#6407)
* Backport `HypotheticalFrontier` into the feature branch (#6605)
* implement more general HypotheticalFrontier
* fmt
* drop unneeded request

Co-authored-by: Robert Habermeier <rphmeier@gmail.com>

* Resolve todo about legacy leaf activation (#6447)
* fix bug/warning in handling membership answers
* Remove `HypotheticalDepthRequest` in favor of `HypotheticalFrontierRequest` (#6521)
* Remove `HypotheticalDepthRequest` for `HypotheticalFrontierRequest`
* Update tests
* Fix (removed wrong docstring)
* Fix can_second request
* Patch some dead_code errors

---------

Co-authored-by: Chris Sosnin <chris125_@live.com>

* Async Backing: Send Statement Distribution "Backed" messages (#6634)
* Backing: Send Statement Distribution "Backed" messages. Closes #6590. **TODO:** - [ ] Adjust tests
* Fix compile errors
* (Mostly) fix tests
* Fix comment
* Fix test and compile error
* Test that `StatementDistributionMessage::Backed` is sent
* Fix compile error
* Fix some clippy errors
* Add prospective parachains subsystem tests (#6454)
* Add prospective parachains subsystem test
* Add `should_do_no_work_if_async_backing_disabled_for_leaf` test
* Implement `activate_leaf` helper, up to getting ancestry
* Finish implementing `activate_leaf`
* Small refactor in `activate_leaf`
* Get `CandidateSeconded` working
* Finish `send_candidate_and_check_if_found` test
* Refactor; send more leaves & candidates
* Refactor test
* Implement `check_candidate_parent_leaving_view` test
* Start work on `check_candidate_on_multiple_forks` test
* Don't associate specific parachains with leaf
* Finish `correctly_updates_leaves` test
* Fix cycle due to reused head data
* Fix `check_backable_query` test
* Fix `check_candidate_on_multiple_forks` test
* Add `check_depth_and_pvd_queries` test
* Address review comments
* Remove TODO
* add a new index for output head data to candidate storage
* Resolve test TODOs
* Fix compile errors
* test candidate storage pruning, make sure new index is cleaned up

---------

Co-authored-by: Robert Habermeier <rphmeier@gmail.com>

* Node-side metrics for asynchronous backing (#6549)
* Add metrics for `prune_view_candidate_storage`
* Add metrics for `request_unblocked_collations`
* Fix docstring
* Couple fixes from review comments
* Fix `check_depth_query` test
* inclusion-emulator: mirror advancement rule check (#6361)
* inclusion-emulator: mirror advancement rule check
* fix build
* prospective-parachains: introduce `backed_in_path_only` flag for advertisements (#6649)
* Introduce `backed_in_path_only` flag for depth request
* fmt
* update doc comment
* fmt
* Add async-backing zombienet tests (#6314)
* Async backing: impl guide for statement distribution (#6738)

Co-authored-by: Bradley Olson <34992650+BradleyOlson64@users.noreply.github.com>
Co-authored-by: alexgparity <115470171+alexgparity@users.noreply.github.com>

* Asynchronous backing statement distribution: Take III (#5999)
* add notification types for v2 statement-distribution
* improve protocol docs
* add empty vstaging module
* fmt
* add backed candidate packet request types
* start putting down structure of new logic
* handle activated leaf
* some sanity-checking on outbound statements
* fmt
* update vstaging share to use statements with PVD
* tiny refactor, candidate_hash location
* import local statements
* refactor statement import
* first stab at broadcast logic
* fmt
* fill out some TODOs
* start on handling incoming
* split off session info into separate map
* start in on a knowledge tracker
* address some grumbles
* format
* missed comment
* some docs for direct
* add note on slashing
* amend
* simplify 'direct' code
* finish up the 'direct' logic
* add a bunch of tests for the direct-in-group logic
* rename 'direct' to 'cluster', begin a candidate_entry module
* distill candidate_entry
* start in on a statement-store module
* some utilities for the statement store
* rewrite 'send_statement_direct' using new tools
* filter sending logic on peers which have the relay-parent in their view.
* some more logic for handling incoming statements
* req/res: BackedCandidatePacket -> AttestedCandidate + tweaks
* add a `validated_in_group` bitfield to BackedCandidateInventory
* BackedCandidateInventory -> Manifest
* start in on requester module
* add outgoing request for attested candidate
* add a priority mechanism for requester
* some request dispatch logic
* add seconded mask to tagged-request
* amend manifest to hold group index
* handle errors and set up scaffold for response validation
* validate attested candidate responses
* requester -> requests
* add some utilities for manipulating requests
* begin integrating requester
* start grid module
* tiny
* refactor grid topology to expose more info to subsystems
* fix grid_topology test
* fix overseer test
* implement topology group-based view construction logic
* fmt
* flesh out grid slightly more
* add indexed groups utility
* integrate Groups into per-session info
* refactor statement store to borrow Groups
* implement manifest knowledge utility
* add a test for topology setup
* don't send to group members
* test for conflicting manifests
* manifest knowledge tests
* fmt
* rename field
* garbage collection for grid tracker
* routines for finding correct/incorrect advertisers
* add manifest import logic
* tweak naming
* more tests for manifest import
* add comment
* rework candidates into a view-wide tracker
* fmt
* start writing boilerplate for grid sending
* fmt
* some more group boilerplate
* refactor handling of topology and authority IDs
* fmt
* send statements directly to grid peers where possible
* send to cluster only if statement belongs to cluster
* improve handling of cluster statements
* handle incoming statements along the grid
* API for introduction of candidates into the tree
* backing: use new prospective parachains API
* fmt prospective parachains changes
* fmt statement-dist
* fix condition
* get ready for tracking importable candidates
* prospective parachains: add Cow logic
* incomplete and complete hypothetical candidates
* remove keep_if_unneeded
* fmt
* implement more general HypotheticalFrontier
* fmt, cleanup
* add a by_parent_hash index to candidate tracker
* more framework for future code
* utilities for getting all hypothetical candidates for frontier
* track origin in statement store
* fmt
* requests should return peer
* apply post-confirmation reckoning
* flesh out import/announce/circulate logic on new statements
* adjust
* adjust TODO comment
* fix backing tests
* update statement-distribution to use new indexedvec
* fmt
* query hypothetical candidates
* implement `note_importable_under`
* extract common utility of fragment tree updates
* add a helper function for getting statements unknown by backing
* import fresh statements to backing
* send announcements and acknowledgements over grid
* provide freshly importable statements; also avoid tracking backed candidates in statement distribution
* do not issue requests on newly importable candidates
* add TODO for later when confirming candidate
* write a routine for handling backed candidate notifications
* simplify grid substantially
* add some test TODOs
* handle confirmed candidates & grid announcements
* finish implementing manifest handling, including follow up statements
* send follow-up statements when acknowledging freshly backed
* fmt
* handle incoming acknowledgements
* a little DRYing
* wire up network messages to handlers
* fmt
* some skeleton code for peer view update handling
* more peer view skeleton stuff
* Fix async backing statement distribution tests (#6621)
* Fix compile errors in tests
* Cargo fmt
* Resolve some todos in async backing statement-distribution branch (#6482)
* Implement `remove_by_relay_parent`
* Extract `minimum_votes` to shared primitives.
* Add `can_send_statements_received_with_prejudice` test
* Fix test
* Update docstrings
* Cargo fmt
* Fix compile error
* Fix compile errors in tests
* Cargo fmt
* Add module docs; write `test_priority_ordering` (first draft)
* Fix `test_priority_ordering`
* Move `insert_or_update_priority`: `Drop` -> `set_cluster_priority`
* Address review comments
* Remove `Entry::get_mut`
* fix test compilation
* add a TODO for a test
* clean up a couple of TODOs
* implement sending pending cluster statements
* refactor utility function for sending acknowledgement and statements
* mostly implement catching peers up via grid
* Fix clippy error
* alter grid to track all pending statements
* fix more TODOs and format
* tweak a TODO in requests
* some logic for dispatching requests
* fmt
* skeleton for response receiving
* Async backing statement distribution: cluster tests (#6678)
* Add `pending_statements_set_when_receiving_fresh_statements`
* Add `pending_statements_updated_when_sending_statements` test
* fix up
* fmt
* update TODO
* rework seconded mask in requests
* change doc
* change unhandledresponse not to borrow request manager
* only accept responses sufficient to back
* finish implementing response handling
* extract statement filter to protocol crate
* rework requests: use statement filter in network protocol
* dispatch cluster requests correctly
* rework cluster statement sending
* implement request answering
* fmt
* only send confirmed candidate statement messages on unified relay-parent
* Fix Tests In Statement Distribution Branch
* Async Backing: Integrate `vstaging` of statement distribution into `lib.rs` (#6715)
* Integrate `handle_active_leaves_update`
* Integrate `share_local_statement`/`handle_backed_candidate_message`
* Start hooking up request/response flow
* Finish hooking up request/response flow
* Limit number of parallel requests in responder
* Fix test compilation errors
* Fix missing check for prospective parachains mode
* Fix some more compile errors
* clean up some review comments
* clean up warnings
* Async backing statement distribution: grid tests (#6673)
* Add `manifest_import_returns_ok_true` test
* cargo fmt
* Add pending_communication_receiving_manifest_on_confirmed_candidate
* Add `senders_can_provide_manifests_in_acknowledgement` test
* Add a couple of tests for pending statements
* Add `pending_statements_cleared_when_sending` test
* Add `pending_statements_respect_remote_knowledge` test
* Refactor group creation in tests
* Clarify docs
* Address some review comments
* Make some clarifications
* Fix post-merge errors
* Clarify test `senders_can_provide_manifests_in_acknowledgement`
* Try writing `pending_statements_are_updated_after_manifest_exchange`
* Document "seconding limit" and `reject_overflowing_manifests` test
* Test that seconding counts are not updated for validators on error
* Fix tests
* Fix manifest exchange test
* Add more tests in `requests.rs` (#6707). This resolves remaining TODOs in this file.
* remove outdated inventory terminology
* Async backing statement distribution: `Candidates` tests (#6658)
* Async Backing: Fix clippy errors in statement distribution branch (#6720)
* Integrate `handle_active_leaves_update`
* Integrate `share_local_statement`/`handle_backed_candidate_message`
* Start hooking up request/response flow
* Finish hooking up request/response flow
* Limit number of parallel requests in responder
* Fix test compilation errors
* Fix missing check for prospective parachains mode
* Fix some more compile errors
* Async Backing: Fix clippy errors in statement distribution branch
* Fix some more clippy lints
* add tests module
* fix warnings in existing tests
* create basic test harness
* create a test state struct
* fmt
* create empty cluster & grid modules for tests
* some TODOs for cluster test suite
* describe test-suite for grid logic
* describe request test suite
* fix seconding-limit bug
* Remove extraneous `pub`. This somehow made it into my clippy PR.
* Fix some test compile warnings
* Remove some unneeded `allow`s
* adapt some new test helpers from Marcin
* add helper for activating a gossip topology
* add utility for signing statements
* helpers for connecting/disconnecting peers
* round out network utilities
* fmt
* fix bug in initializing validator-meta
* fix compilation
* implement first cluster test
* TODOs for incoming request tests
* Remove unneeded `make_committed_candidate` helper
* fmt
* some more tests for cluster
* add a TODO about grid senders
* integrate inbound req/res into test harness
* polish off initial cluster test suite
* keep introduce candidate request
* fix tests after introduce candidate request
* fmt
* Add grid protocol to module docs
* Fix comments
* Test `backed_in_path_only: true`
* Update node/network/protocol/src/lib.rs

Co-authored-by: Chris Sosnin <48099298+slumber@users.noreply.github.com>

* Update node/network/protocol/src/request_response/mod.rs

Co-authored-by: Chris Sosnin <48099298+slumber@users.noreply.github.com>

* Mark receiver with `vstaging`
* validate grid senders based on manifest kind
* fix mask_seconded/valid
* fix unwanted-mask check
* fix build
* resolve todo on leaf mode
* Unify protocol naming to vstaging
* fmt, fix grid test after topology change
* typo

Co-authored-by: Chris Sosnin <48099298+slumber@users.noreply.github.com>

* address review
* adjust comment, make easier to understand
* Fix typo

---------

Co-authored-by: Marcin S <marcin@bytedude.com>
Co-authored-by: Marcin S <marcin@realemail.net>
Co-authored-by: Chris Sosnin <48099298+slumber@users.noreply.github.com>
Co-authored-by: Chris Sosnin <chris125_@live.com>

* miscellaneous fixes to make asynchronous backing work (#6791)
* propagate network-protocol-staging feature
* add feature to adder-collator as well
* allow collation-generation of occupied cores
* prospective parachains: special treatment for pending availability candidates
* runtime: fetch candidates pending availability
* lazily construct PVD for pending candidates
* fix fallout in prospective parachains hypothetical/select_child
* runtime: enact candidates when creating paras-inherent
* make tests compile
* test pending availability in the scope
* add prospective parachains test
* fix validity constraints leftovers
* drop prints
* Fix typos

---------

Co-authored-by: Chris Sosnin <chris125_@live.com>
Co-authored-by: Marcin S <marcin@realemail.net>

* Remove restart from test (#6840)
* Async Backing: Statement Distribution Tests (#6755)
* start on handling incoming
* split off session info into separate map
* start in on a knowledge tracker
* address some grumbles
* format
* missed comment
* some docs for direct
* add note on slashing
* amend
* simplify 'direct' code
* finish up the 'direct' logic
* add a bunch of tests for the direct-in-group logic
* rename 'direct' to 'cluster', begin a candidate_entry module
* distill candidate_entry
* start in on a statement-store module
* some utilities for the statement store
* rewrite 'send_statement_direct' using new tools
* filter sending logic on peers which have the relay-parent in their view.
* some more logic for handling incoming statements
* req/res: BackedCandidatePacket -> AttestedCandidate + tweaks
* add a `validated_in_group` bitfield to BackedCandidateInventory
* BackedCandidateInventory -> Manifest
* start in on requester module
* add outgoing request for attested candidate
* add a priority mechanism for requester
* some request dispatch logic
* add seconded mask to tagged-request
* amend manifest to hold group index
* handle errors and set up scaffold for response validation
* validate attested candidate responses
* requester -> requests
* add some utilities for manipulating requests
* begin integrating requester
* start grid module
* tiny
* refactor grid topology to expose more info to subsystems
* fix grid_topology test
* fix overseer test
* implement topology group-based view construction logic
* fmt
* flesh out grid slightly more
* add indexed groups utility
* integrate Groups into per-session info
* refactor statement store to borrow Groups
* implement manifest knowledge utility
* add a test for topology setup
* don't send to group members
* test for conflicting manifests
* manifest knowledge tests
* fmt
* rename field
* garbage collection for grid tracker
* routines for finding correct/incorrect advertisers
* add manifest import logic
* tweak naming
* more tests for manifest import
* add comment
* rework candidates into a view-wide tracker
* fmt
* start writing boilerplate for grid sending
* fmt
* some more group boilerplate
* refactor handling of topology and authority IDs
* fmt
* send statements directly to grid peers where possible
* send to cluster only if statement belongs to cluster
* improve handling of cluster statements
* handle incoming statements along the grid
* API for introduction of candidates into the tree
* backing: use new prospective parachains API
* fmt prospective parachains changes
* fmt statement-dist
* fix condition
* get ready for tracking importable candidates
* prospective parachains: add Cow logic
* incomplete and complete hypothetical candidates
* remove keep_if_unneeded
* fmt
* implement more general HypotheticalFrontier
* fmt, cleanup
* add a by_parent_hash index to candidate tracker
* more framework for future code
* utilities for getting all hypothetical candidates for frontier
* track origin in statement store
* fmt
* requests should return peer
* apply post-confirmation reckoning
* flesh out import/announce/circulate logic on new statements
* adjust
* adjust TODO comment
* fix backing tests
* update statement-distribution to use new indexedvec
* fmt
* query hypothetical candidates
* implement `note_importable_under`
* extract common utility of fragment tree updates
* add a helper function for getting statements unknown by backing
* import fresh statements to backing
* send announcements and acknowledgements over grid
* provide freshly importable statements; also avoid tracking backed candidates in statement distribution
* do not issue requests on newly importable candidates
* add TODO for later when confirming candidate
* write a routine for handling backed candidate notifications
* simplify grid substantially
* add some test TODOs
* handle confirmed candidates & grid announcements
* finish implementing manifest handling, including follow up statements
* send follow-up statements when acknowledging freshly backed
* fmt
* handle incoming acknowledgements
* a little DRYing
* wire up network messages to handlers
* fmt
* some skeleton code for peer view update handling
* more peer view skeleton stuff
* Fix async backing statement distribution tests (#6621)
* Fix compile errors in tests
* Cargo fmt
* Resolve some todos in async backing statement-distribution branch (#6482)
* Implement `remove_by_relay_parent`
* Extract `minimum_votes` to shared primitives.
* Add `can_send_statements_received_with_prejudice` test
* Fix test
* Update docstrings
* Cargo fmt
* Fix compile error
* Fix compile errors in tests
* Cargo fmt
* Add module docs; write `test_priority_ordering` (first draft)
* Fix `test_priority_ordering`
* Move `insert_or_update_priority`: `Drop` -> `set_cluster_priority`
* Address review comments
* Remove `Entry::get_mut`
* fix test compilation
* add a TODO for a test
* clean up a couple of TODOs
* implement sending pending cluster statements
* refactor utility function for sending acknowledgement and statements
* mostly implement catching peers up via grid
* Fix clippy error
* alter grid to track all pending statements
* fix more TODOs and format
* tweak a TODO in requests
* some logic for dispatching requests
* fmt
* skeleton for response receiving
* Async backing statement distribution: cluster tests (#6678)
* Add `pending_statements_set_when_receiving_fresh_statements`
* Add `pending_statements_updated_when_sending_statements` test
* fix up
* fmt
* update TODO
* rework seconded mask in requests
* change doc
* change unhandledresponse not to borrow request manager
* only accept responses sufficient to back
* finish implementing response handling
* extract statement filter to protocol crate
* rework requests: use statement filter in network protocol
* dispatch cluster requests correctly
* rework cluster statement sending
* implement request answering
* fmt
* only send confirmed candidate statement messages on unified relay-parent
* Fix Tests In Statement Distribution Branch
* Async Backing: Integrate `vstaging` of statement distribution into `lib.rs` (#6715)
* Integrate `handle_active_leaves_update`
* Integrate `share_local_statement`/`handle_backed_candidate_message`
* Start hooking up request/response flow
* Finish hooking up request/response flow
* Limit number of parallel requests in responder
* Fix test compilation errors
* Fix missing check for prospective parachains mode
* Fix some more compile errors
clean up some review comments * clean up warnings * Async backing statement distribution: grid tests (#6673) * Add `manifest_import_returns_ok_true` test * cargo fmt * Add pending_communication_receiving_manifest_on_confirmed_candidate * Add `senders_can_provide_manifests_in_acknowledgement` test * Add a couple of tests for pending statements * Add `pending_statements_cleared_when_sending` test * Add `pending_statements_respect_remote_knowledge` test * Refactor group creation in tests * Clarify docs * Address some review comments * Make some clarifications * Fix post-merge errors * Clarify test `senders_can_provide_manifests_in_acknowledgement` * Try writing `pending_statements_are_updated_after_manifest_exchange` * Document "seconding limit" and `reject_overflowing_manifests` test * Test that seconding counts are not updated for validators on error * Fix tests * Fix manifest exchange test * Add more tests in `requests.rs` (#6707) This resolves remaining TODOs in this file. * remove outdated inventory terminology * Async backing statement distribution: `Candidates` tests (#6658) * Async Backing: Fix clippy errors in statement distribution branch (#6720) * Integrate `handle_active_leaves_update` * Integrate `share_local_statement`/`handle_backed_candidate_message` * Start hooking up request/response flow * Finish hooking up request/response flow * Limit number of parallel requests in responder * Fix test compilation errors * Fix missing check for prospective parachains mode * Fix some more compile errors * Async Backing: Fix clippy errors in statement distribution branch * Fix some more clippy lints * add tests module * fix warnings in existing tests * create basic test harness * create a test state struct * fmt * create empty cluster & grid modules for tests * some TODOs for cluster test suite * describe test-suite for grid logic * describe request test suite * fix seconding-limit bug * Remove extraneous `pub` This somehow made it into my clippy PR. 
* Fix some test compile warnings * Remove some unneeded `allow`s * adapt some new test helpers from Marcin * add helper for activating a gossip topology * add utility for signing statements * helpers for connecting/disconnecting peers * round out network utilities * fmt * fix bug in initializing validator-meta * fix compilation * implement first cluster test * TODOs for incoming request tests * Remove unneeded `make_committed_candidate` helper * fmt * Hook up request sender * Add `valid_statement_without_prior_seconded_is_ignored` test * Fix `valid_statement_without_prior_seconded_is_ignored` test * some more tests for cluster * add a TODO about grid senders * integrate inbound req/res into test harness * polish off initial cluster test suite * keep introduce candidate request * fix tests after introduce candidate request * fmt * Add grid protocol to module docs * Remove obsolete test * Fix comments * Test `backed_in_path_only: true` * Update node/network/protocol/src/lib.rs Co-authored-by: Chris Sosnin <48099298+slumber@users.noreply.github.com> * Update node/network/protocol/src/request_response/mod.rs Co-authored-by: Chris Sosnin <48099298+slumber@users.noreply.github.com> * Mark receiver with `vstaging` * First draft of `ensure_seconding_limit_is_respected` test * validate grid senders based on manifest kind * fix mask_seconded/valid * fix unwanted-mask check * fix build * resolve todo on leaf mode * Unify protocol naming to vstaging * Fix `ensure_seconding_limit_is_respected` test * Start `backed_candidate_leads_to_advertisement` test * fmt, fix grid test after topology change * Send Backed notification * Finish `backed_candidate_leads_to_advertisement` test * Finish `peer_reported_for_duplicate_statements` test * Finish `received_advertisement_before_confirmation_leads_to_request` * Add `advertisements_rejected_from_incorrect_peers` test * Add `manifest_rejected_*` tests * Add `manifest_rejected_when_group_does_not_match_para` test * Add 
`local_node_sanity_checks_incoming_requests` test * Add `local_node_respects_statement_mask` test * Add tests where peer is reported for providing invalid signatures * Add `cluster_peer_allowed_to_send_incomplete_statements` test * Add `received_advertisement_after_backing_leads_to_acknowledgement` * Add `received_advertisement_after_confirmation_before_backing` test * peer_reported_for_advertisement_conflicting_with_confirmed_candidate * Add `peer_reported_for_not_enough_statements` test * Add `peer_reported_for_providing_statements_meant_to_be_masked_out` * Add `additional_statements_are_shared_after_manifest_exchange` * Add `grid_statements_imported_to_backing` test * Add `relay_parent_entering_peer_view_leads_to_advertisement` test * Add `advertisement_not_re_sent_when_peer_re_enters_view` test * Update node/network/statement-distribution/src/vstaging/tests/grid.rs Co-authored-by: asynchronous rob <rphmeier@gmail.com> * Resolve TODOs, update test * Address unused code * Add check after every test for unhandled requests * Refactor (`make_dummy_leaf` and `handle_sent_request`) * Refactor (`make_dummy_topology`) * Minor refactor --------- Co-authored-by: Robert Habermeier <rphmeier@gmail.com> Co-authored-by: Chris Sosnin <48099298+slumber@users.noreply.github.com> Co-authored-by: Chris Sosnin <chris125_@live.com> * Fix some clippy lints in tests * Async backing: minor fixes (#6920) * bitfield-distribution test * implicit view tests * Refactor parameters -> params * scheduler: update storage migration (#6963) * update scheduler migration * Adjust weight to account for storage read * Statement Distribution Guide Edits (#7025) * Statement distribution guide edits * Addressed Marcin's comments * Add attested candidate request retry timeouts (#6833) Co-authored-by: Chris Sosnin <48099298+slumber@users.noreply.github.com> Co-authored-by: asynchronous rob <rphmeier@gmail.com> Co-authored-by: Robert Habermeier <rphmeier@gmail.com> Co-authored-by: Chris Sosnin 
<chris125_@live.com> Fix async backing statement distribution tests (#6621) Resolve some todos in async backing statement-distribution branch (#6482) Fix clippy errors in statement distribution branch (#6720) * Async backing: add Prospective Parachains impl guide (#6933) Co-authored-by: Bradley Olson <34992650+BradleyOlson64@users.noreply.github.com> * Updates to Provisioner Guide for Async Backing (#7106) * Initial corrections and clarifications * Partial first draft * Finished first draft * Adding back wrongly removed test bit * fmt * Update roadmap/implementers-guide/src/node/utility/provisioner.md Co-authored-by: Marcin S. <marcin@realemail.net> * Addressing comments * Reorganization * fmt --------- Co-authored-by: Marcin S. <marcin@realemail.net> * fmt * Renaming Parathread Mentions (#7287) * Renaming parathreads * Renaming module to pallet * More updates * PVF: Refactor workers into separate crates, remove host dependency (#7253) * PVF: Refactor workers into separate crates, remove host dependency * Fix compile error * Remove some leftover code * Fix compile errors * Update Cargo.lock * Remove worker main.rs files I accidentally copied these from the other PR. This PR isn't intended to introduce standalone workers yet. 
* Address review comments * cargo fmt * Update a couple of comments * Update log targets * Update quote to 1.0.27 (#7280) Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io> Co-authored-by: parity-processbot <> * pallets: implement `Default` for `GenesisConfig` in `no_std` (#7271) * pallets: implement Default for GenesisConfig in no_std This change is follow-up of: https://github.com/paritytech/substrate/pull/14108 It is a step towards: https://github.com/paritytech/substrate/issues/13334 * Cargo.lock updated * update lockfile for {"substrate"} --------- Co-authored-by: parity-processbot <> * cli: enable BEEFY by default on test networks (#7293) We consider BEEFY mature enough to run by default on all nodes for test networks (Rococo/Wococo/Versi). Right now, most nodes are not running it since it's opt-in using --beefy flag. Switch to an opt-out model for test networks. Replace --beefy flag from CLI with --no-beefy and have BEEFY client start by default on test networks. Signed-off-by: acatangiu <adrian@parity.io> * runtime: past session slashing runtime API (#6667) * runtime/vstaging: unapplied_slashes runtime API * runtime/vstaging: key_ownership_proof runtime API * runtime/ParachainHost: submit_report_dispute_lost * fix key_ownership_proof API * runtime: submit_report_dispute_lost runtime API * nits * Update node/subsystem-types/src/messages.rs Co-authored-by: Marcin S. <marcin@bytedude.com> * revert unrelated fmt changes * post merge fixes * fix compilation --------- Co-authored-by: Marcin S. 
<marcin@bytedude.com> * Correcting git mishap * Document usage of `gum` crate (#7294) * Document usage of gum crate * Small fix * Add some more basic info * Update node/gum/src/lib.rs Co-authored-by: Andrei Sandu <54316454+sandreim@users.noreply.github.com> * Update target docs --------- Co-authored-by: Andrei Sandu <54316454+sandreim@users.noreply.github.com> * XCM: Fix issue with RequestUnlock (#7278) * XCM: Fix issue with RequestUnlock * Leave API changes for v4 * Fix clippy errors * Fix tests --------- Co-authored-by: parity-processbot <> * Companion for Substrate#14228 (#7295) * Companion for Substrate#14228 https://github.com/paritytech/substrate/pull/14228 * update lockfile for {"substrate"} --------- Co-authored-by: parity-processbot <> * Companion for #14237: Use latest sp-crates (#7300) * To revert: Update substrate branch to "lexnv/bump_sp_crates" Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io> * Revert "To revert: Update substrate branch to "lexnv/bump_sp_crates"" This reverts commit 5f1db84eac4a226c37b7f6ce6ee19b49dc7e2008. 
* Update cargo lock Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io> * Update cargo.lock Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io> * Update cargo.lock Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io> --------- Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io> * bounded-collections bump to 0.1.7 (#7305) * bounded-collections bump to 0.1.7 Companion for: paritytech/substrate#14225 * update lockfile for {"substrate"} --------- Co-authored-by: parity-processbot <> * bump to quote 1.0.28 (#7306) * `RollingSessionWindow` cleanup (#7204) * Replace `RollingSessionWindow` with `RuntimeInfo` - initial commit * Fix tests in import * Fix the rest of the tests * Remove dead code * Fix todos * Simplify session caching * Comments for `SessionInfoProvider` * Separate `SessionInfoProvider` from `State` * `cache_session_info_for_head` becomes freestanding function * Remove unneeded `mut` usage * fn session_info -> fn get_session_info() to avoid name clashes. 
The function also tries to initialize `SessionInfoProvider` * Fix SessionInfo retrieval * Code cleanup * Don't wrap `SessionInfoProvider` in an `Option` * Remove `earliest_session()` * Remove pre-caching -> wip * Fix some tests and code cleanup * Fix all tests * Fixes in tests * Fix comments, variable names and small style changes * Fix a warning * impl From<SessionWindowSize> for NonZeroUsize * Fix logging for `get_session_info` - remove redundant logs and decrease log level to DEBUG * Code review feedback * Storage migration removing `COL_SESSION_WINDOW_DATA` from parachains db * Remove `col_session_data` usages * Storage migration clearing columns w/o removing them * Remove session data column usages from `approval-voting` and `dispute-coordinator` tests * Add some test cases from `RollingSessionWindow` to `dispute-coordinator` tests * Fix formatting in initialized.rs * Fix a corner case in `SessionInfo` caching for `dispute-coordinator` * Remove `RollingSessionWindow` ;( * Revert "Fix formatting in initialized.rs" This reverts commit 0f94664ec9f3a7e3737a30291195990e1e7065fc. 
* v2 to v3 migration drops `COL_DISPUTE_COORDINATOR_DATA` instead of clearing it * Fix `NUM_COLUMNS` in `approval-voting` * Use `columns::v3::NUM_COLUMNS` when opening db * Update node/service/src/parachains_db/upgrade.rs Co-authored-by: Andrei Sandu <54316454+sandreim@users.noreply.github.com> * Don't write in `COL_DISPUTE_COORDINATOR_DATA` for `test_rocksdb_migrate_2_to_3` * Fix `NUM+COLUMNS` in approval_voting * Fix formatting * Fix columns usage * Clarification comments about the different db versions --------- Co-authored-by: Andrei Sandu <54316454+sandreim@users.noreply.github.com> * pallet-para-config: Remove remnant WeightInfo functions (#7308) * pallet-para-config: Remove remnant WeightInfo functions Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io> * set_config_with_weight begone Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io> * ".git/.scripts/commands/bench/bench.sh" runtime kusama-dev runtime_parachains::configuration --------- Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io> Co-authored-by: command-bot <> * XCM: PayOverXcm config (#6900) * Move XCM query functionality to trait * Fix tests * Add PayOverXcm implementation * fix the PayOverXcm trait to compile * moved doc comment out of trait implmeentation and to the trait * PayOverXCM documentation * Change documentation a bit * Added empty benchmark methods implementation and changed docs * update PayOverXCM to convert AccountIds to MultiLocations * Implement benchmarking method * Change v3 to latest * Descend origin to an asset sender (#6970) * descend origin to an asset sender * sender as tuple of dest and sender * Add more variants to the QueryResponseStatus enum * Change Beneficiary to Into<[u8; 32]> * update PayOverXcm to return concrete errors and use AccountId as sender * use polkadot-primitives for AccountId * fix dependency to use polkadot-core-primitives * force Unpaid instruction to the top of the instructions list * modify report_outcome to accept 
interior argument * use new_query directly for building final xcm query, instead of report_outcome * fix usage of new_query to use the XcmQueryHandler * fix usage of new_query to use the XcmQueryHandler * tiny method calling fix * xcm query handler (#7198) * drop redundant query status * rename ReportQueryStatus to OuterQueryStatus * revert rename of QueryResponseStatus * update mapping * Update xcm/xcm-builder/src/pay.rs Co-authored-by: Gavin Wood <gavin@parity.io> * Updates * Docs * Fix benchmarking stuff * Destination can be determined based on asset_kind * Tweaking API to minimise clones * Some repotting and docs --------- Co-authored-by: Anthony Alaribe <anthonyalaribe@gmail.com> Co-authored-by: Muharem Ismailov <ismailov.m.h@gmail.com> Co-authored-by: Anthony Alaribe <anthony.alaribe@parity.io> Co-authored-by: Gavin Wood <gavin@parity.io> * Companion for #14265 (#7307) * Update Cargo.lock Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io> * Update Cargo.lock Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io> --------- Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io> Co-authored-by: parity-processbot <> * bump serde to 1.0.163 (#7315) * bump serde to 1.0.163 * bump ci * update lockfile for {"substrate"} --------- Co-authored-by: parity-processbot <> * fmt * Updated fmt * Removing changes accidentally pulled from master * fix another master pull issue * Another master pull fix * fmt * Fixing implementers guide build * Revert "Merge branch 'rh-async-backing-feature-while-frozen' of https://github.com/paritytech/polkadot into brad-rename-parathread" This reverts commit bebc24af52ab61155e3fe02cb3ce66a592bce49c, reversing changes made to 1b2de662dfb11173679d6da5bd0da9d149c85547. 
--------- Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io> Signed-off-by: acatangiu <adrian@parity.io> Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io> Co-authored-by: Marcin S <marcin@realemail.net> Co-authored-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io> Co-authored-by: Michal Kucharczyk <1728078+michalkucharczyk@users.noreply.github.com> Co-authored-by: Adrian Catangiu <adrian@parity.io> Co-authored-by: ordian <write@reusable.software> Co-authored-by: Marcin S. <marcin@bytedude.com> Co-authored-by: Andrei Sandu <54316454+sandreim@users.noreply.github.com> Co-authored-by: Francisco Aguirre <franciscoaguirreperez@gmail.com> Co-authored-by: Bastian Köcher <git@kchr.de> Co-authored-by: Alexandru Vasile <60601340+lexnv@users.noreply.github.com> Co-authored-by: Sam Johnson <sam@durosoft.com> Co-authored-by: Tsvetomir Dimitrov <tsvetomir@parity.io> Co-authored-by: Anthony Alaribe <anthonyalaribe@gmail.com> Co-authored-by: Muharem Ismailov <ismailov.m.h@gmail.com> Co-authored-by: Anthony Alaribe <anthony.alaribe@parity.io> Co-authored-by: Gavin Wood <gavin@parity.io> * fix bitfield distribution test * approval distribution tests * fix bridge tests * update Cargo.lock * [async-backing-branch] Optimize collator-protocol validator-side request fetching (#7457) * Optimize collator-protocol validator-side request fetching * address feedback: replace tuples with structs * feedback: add doc comments * move collation types to subfolder --------- Signed-off-by: alindima <alin@parity.io> * Update collation generation for asynchronous backing (#7405) * break candidate receipt construction and distribution into own function * update implementers' guide to include SubmitCollation * implement SubmitCollation for collation-generation * fmt * fix test compilation & remove unnecessary submodule * add some TODOs for a test suite. 
* Update roadmap/implementers-guide/src/types/overseer-protocol.md Co-authored-by: Andrei Sandu <54316454+sandreim@users.noreply.github.com> * add new test harness and first test * refactor to avoid requiring background sender * ensure collation gets packaged and distributed * tests for the fallback case with no hint * add parent rp-number hint tests * fmt * update uses of CollationGenerationConfig * fix remaining test * address review comments * use subsystemsender for background tasks * fmt * remove ValidationCodeHashHint and related tests --------- Co-authored-by: Andrei Sandu <54316454+sandreim@users.noreply.github.com> * fix some more fallout from merge * fmt * remove staging APIs from Rococo & Westend (#7513) * send network messages on main protocol name (#7515) * misc async backing improvements for allowed ancestry blocks (#7532) * shared: fix acquire_info * backwards-compat test for prospective parachains * same relay parent is allowed * provisioner: request candidate receipt by relay parent (#7527) * return candidates hash from prospective parachains * update provisioner * update tests * guide changes * send a single message to backing * fix test * revert to old `handle_new_activations` logic in some cases (#7514) * revert to old `handle_new_activations` logic * gate sending messages on scheduled cores to max_depth >= 2 * fmt * 2->1 * Omnibus asynchronous backing bugfix PR (#7529) * fix a bug in backing * add some more logs * prospective parachains: take ancestry only up to session bounds * add test * fix zombienet tests (#7614) Signed-off-by: Andrei Sandu <andrei-mihail@parity.io> * fix runtime compilation * make bitfield distribution tests compile * attempt to fix zombienet disputes (#7618) * update metric name * update some metric names * avoid cycles when creating fake candidates * make undying collator more friendly to malformed parents * fix a bug in malus * fmt * clippy * add RUN_IN_CONTAINER to new ZombieNet tests (#7631) * remove duplicated 
migration happened because of master-merge --------- Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io> Signed-off-by: acatangiu <adrian@parity.io> Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io> Signed-off-by: alindima <alin@parity.io> Signed-off-by: Andrei Sandu <andrei-mihail@parity.io> Co-authored-by: Chris Sosnin <chris125_@live.com> Co-authored-by: Parity Bot <admin@parity.io> Co-authored-by: Chris Sosnin <48099298+slumber@users.noreply.github.com> Co-authored-by: Robert Klotzner <robert.klotzner@gmx.at> Co-authored-by: Robert Klotzner <eskimor@users.noreply.github.com> Co-authored-by: Marcin S <marcin@bytedude.com> Co-authored-by: Marcin S <marcin@realemail.net> Co-authored-by: Mattia L.V. Bradascio <28816406+bredamatt@users.noreply.github.com> Co-authored-by: Bradley Olson <34992650+BradleyOlson64@users.noreply.github.com> Co-authored-by: alexgparity <115470171+alexgparity@users.noreply.github.com> Co-authored-by: BradleyOlson64 <lotrftw9@gmail.com> Co-authored-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io> Co-authored-by: Michal Kucharczyk <1728078+michalkucharczyk@users.noreply.github.com> Co-authored-by: Adrian Catangiu <adrian@parity.io> Co-authored-by: ordian <write@reusable.software> Co-authored-by: Andrei Sandu <54316454+sandreim@users.noreply.github.com> Co-authored-by: Francisco Aguirre <franciscoaguirreperez@gmail.com> Co-authored-by: Bastian Köcher <git@kchr.de> Co-authored-by: Alexandru Vasile <60601340+lexnv@users.noreply.github.com> Co-authored-by: Sam Johnson <sam@durosoft.com> Co-authored-by: Tsvetomir Dimitrov <tsvetomir@parity.io> Co-authored-by: Anthony Alaribe <anthonyalaribe@gmail.com> Co-authored-by: Muharem Ismailov <ismailov.m.h@gmail.com> Co-authored-by: Anthony Alaribe <anthony.alaribe@parity.io> Co-authored-by: Gavin Wood <gavin@parity.io> Co-authored-by: Alin Dima <alin@parity.io>
@@ -0,0 +1,739 @@
// Copyright 2022 Parity Technologies (UK) Ltd.
// This file is part of Polkadot.

// Polkadot is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.

// Polkadot is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.

// You should have received a copy of the GNU General Public License
// along with Polkadot. If not, see <http://www.gnu.org/licenses/>.

use futures::channel::oneshot;
use polkadot_node_subsystem::{
	errors::ChainApiError,
	messages::{ChainApiMessage, ProspectiveParachainsMessage},
	SubsystemSender,
};
use polkadot_primitives::vstaging::{BlockNumber, Hash, Id as ParaId};

use std::collections::HashMap;

// Always aim to retain 1 block before the active leaves.
const MINIMUM_RETAIN_LENGTH: BlockNumber = 2;

/// Handles the implicit view of the relay chain derived from the immediate view, which
/// is composed of active leaves, and the minimum relay-parents allowed for
/// candidates of various parachains at those leaves.
#[derive(Default, Clone)]
pub struct View {
	leaves: HashMap<Hash, ActiveLeafPruningInfo>,
	block_info_storage: HashMap<Hash, BlockInfo>,
}

// Minimum relay parents implicitly relative to a particular block.
#[derive(Debug, Clone)]
struct AllowedRelayParents {
	// Minimum relay parents can only be fetched for active leaves,
	// so this will be empty for all blocks that haven't ever been
	// witnessed as active leaves.
	minimum_relay_parents: HashMap<ParaId, BlockNumber>,
	// Ancestry, in descending order, starting from the block hash itself down
	// to and including the minimum of `minimum_relay_parents`.
	allowed_relay_parents_contiguous: Vec<Hash>,
}

impl AllowedRelayParents {
	fn allowed_relay_parents_for(
		&self,
		para_id: Option<ParaId>,
		base_number: BlockNumber,
	) -> &[Hash] {
		let para_id = match para_id {
			None => return &self.allowed_relay_parents_contiguous[..],
			Some(p) => p,
		};

		let para_min = match self.minimum_relay_parents.get(&para_id) {
			Some(p) => *p,
			None => return &[],
		};

		if base_number < para_min {
			return &[]
		}

		let diff = base_number - para_min;

		// A difference of 0 should lead to a slice len of 1.
		let slice_len = ((diff + 1) as usize).min(self.allowed_relay_parents_contiguous.len());
		&self.allowed_relay_parents_contiguous[..slice_len]
	}
}
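The windowing arithmetic in `allowed_relay_parents_for` can be illustrated with a minimal standalone sketch. This is not the subsystem code itself: plain `u32` block numbers, `&str` hashes, and `u32` para IDs stand in for the real `BlockNumber`/`Hash`/`ParaId` types, and `allowed_window` is a hypothetical free function mirroring the method's logic.

```rust
use std::collections::HashMap;

// Mirrors `allowed_relay_parents_for`: given the per-para minimum relay-parent
// height and the ancestry in descending order (leaf first), return the
// contiguous window of allowed relay parents.
fn allowed_window<'a>(
    minimum_relay_parents: &HashMap<u32, u32>,
    ancestry_descending: &'a [&'a str],
    para_id: u32,
    base_number: u32,
) -> &'a [&'a str] {
    let para_min = match minimum_relay_parents.get(&para_id) {
        Some(m) => *m,
        None => return &[],
    };
    if base_number < para_min {
        return &[]
    }
    // A difference of 0 yields a window of length 1: just the block itself.
    let slice_len = ((base_number - para_min + 1) as usize).min(ancestry_descending.len());
    &ancestry_descending[..slice_len]
}

fn main() {
    // Leaf at height 10; para 2000 may use relay parents back to height 8.
    let mins: HashMap<u32, u32> = [(2000, 8)].into_iter().collect();
    let ancestry = ["0xA", "0x9", "0x8", "0x7"]; // heights 10, 9, 8, 7
    assert_eq!(allowed_window(&mins, &ancestry, 2000, 10), &["0xA", "0x9", "0x8"]);
    // An unknown para gets the empty window.
    assert_eq!(allowed_window(&mins, &ancestry, 3000, 10), &[] as &[&str]);
    println!("ok");
}
```

Note how the empty slice (unknown or unscheduled para) is distinct from a one-element slice (only the block itself allowed), matching the `diff + 1` convention in the method above.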

#[derive(Debug, Clone)]
struct ActiveLeafPruningInfo {
	// The minimum block in the same branch of the relay-chain that should be
	// preserved.
	retain_minimum: BlockNumber,
}

#[derive(Debug, Clone)]
struct BlockInfo {
	block_number: BlockNumber,
	// If this was previously an active leaf, this will be `Some`
	// and is useful for understanding the views of peers in the network
	// which may not be in perfect synchrony with our own view.
	//
	// If they are ahead of us in getting a new leaf, there's nothing we
	// can do as it's an unrecognized block hash. But if they're behind us,
	// it's useful for us to retain some information about previous leaves'
	// implicit views so we can continue to send relevant messages to them
	// until they catch up.
	maybe_allowed_relay_parents: Option<AllowedRelayParents>,
	parent_hash: Hash,
}

impl View {
	/// Get an iterator over active leaves in the view.
	pub fn leaves(&self) -> impl Iterator<Item = &Hash> {
		self.leaves.keys()
	}

	/// Activate a leaf in the view.
	/// This will request the minimum relay parents from the
	/// Prospective Parachains subsystem for each leaf and will load headers in the ancestry of each
	/// leaf in the view as needed. These are the 'implicit ancestors' of the leaf.
	///
	/// To maximize reuse of outdated leaves, it's best to activate new leaves before
	/// deactivating old ones.
	///
	/// This returns a list of para-ids which are relevant to the leaf,
	/// and the allowed relay parents for these paras under this leaf can be
	/// queried with [`View::known_allowed_relay_parents_under`].
	///
	/// Returns an `AlreadyKnown` error for leaves that are already in the view.
	pub async fn activate_leaf<Sender>(
		&mut self,
		sender: &mut Sender,
		leaf_hash: Hash,
	) -> Result<Vec<ParaId>, FetchError>
	where
		Sender: SubsystemSender<ChainApiMessage>,
		Sender: SubsystemSender<ProspectiveParachainsMessage>,
	{
		if self.leaves.contains_key(&leaf_hash) {
			return Err(FetchError::AlreadyKnown)
		}

		let res = fetch_fresh_leaf_and_insert_ancestry(
			leaf_hash,
			&mut self.block_info_storage,
			&mut *sender,
		)
		.await;

		match res {
			Ok(fetched) => {
				// Retain at least `MINIMUM_RETAIN_LENGTH` blocks in storage.
				// This helps to avoid Chain API calls when activating leaves in the
				// same chain.
				let retain_minimum = std::cmp::min(
					fetched.minimum_ancestor_number,
					fetched.leaf_number.saturating_sub(MINIMUM_RETAIN_LENGTH),
				);

				self.leaves.insert(leaf_hash, ActiveLeafPruningInfo { retain_minimum });

				Ok(fetched.relevant_paras)
			},
			Err(e) => Err(e),
		}
	}
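The retention rule computed in `activate_leaf` can be checked in isolation. The sketch below extracts it into a hypothetical free function `retain_minimum` over plain `u32` block numbers; the constant value matches the `MINIMUM_RETAIN_LENGTH` defined at the top of the file.

```rust
// Retain everything back to whichever is lower: the minimum allowed ancestor,
// or `MINIMUM_RETAIN_LENGTH` blocks behind the leaf.
const MINIMUM_RETAIN_LENGTH: u32 = 2;

fn retain_minimum(minimum_ancestor_number: u32, leaf_number: u32) -> u32 {
    std::cmp::min(minimum_ancestor_number, leaf_number.saturating_sub(MINIMUM_RETAIN_LENGTH))
}

fn main() {
    // Leaf at height 100 whose allowed ancestry only reaches back to 99:
    // the retention rule still keeps blocks down to height 98.
    assert_eq!(retain_minimum(99, 100), 98);
    // If the allowed ancestry reaches further back (height 95), that wins.
    assert_eq!(retain_minimum(95, 100), 95);
    // Near genesis, the subtraction saturates at 0 rather than underflowing.
    assert_eq!(retain_minimum(0, 1), 0);
    println!("ok");
}
```

The `saturating_sub` is what makes the rule safe for leaves within `MINIMUM_RETAIN_LENGTH` blocks of genesis.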

	/// Deactivate a leaf in the view. This prunes any outdated implicit ancestors as well.
	///
	/// Returns hashes of blocks pruned from storage.
	pub fn deactivate_leaf(&mut self, leaf_hash: Hash) -> Vec<Hash> {
		let mut removed = Vec::new();

		if self.leaves.remove(&leaf_hash).is_none() {
			return removed
		}

		// Prune everything before the minimum out of all leaves,
		// pruning absolutely everything if there are no leaves (empty view).
		//
		// Pruning by block number does leave behind orphaned forks slightly longer
		// but the memory overhead is negligible.
		{
			let minimum = self.leaves.values().map(|l| l.retain_minimum).min();

			self.block_info_storage.retain(|hash, i| {
				let keep = minimum.map_or(false, |m| i.block_number >= m);
				if !keep {
					removed.push(*hash);
				}
				keep
			});

			removed
		}
	}
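The pruning rule in `deactivate_leaf` hinges on `minimum.map_or(false, ..)`: with no leaves left, `min()` is `None` and everything is dropped. A minimal standalone sketch, with a hypothetical `prune` function and toy `&str` hashes / `u32` heights in place of the real types:

```rust
use std::collections::HashMap;

// Drop every stored block below the minimum `retain_minimum` across the
// remaining leaves; with no leaves remaining, drop everything.
fn prune(storage: &mut HashMap<&'static str, u32>, retain_minimums: &[u32]) -> Vec<&'static str> {
    let minimum = retain_minimums.iter().copied().min();
    let mut removed = Vec::new();
    storage.retain(|hash, number| {
        // `map_or(false, ..)` means an empty view keeps nothing.
        let keep = minimum.map_or(false, |m| *number >= m);
        if !keep {
            removed.push(*hash);
        }
        keep
    });
    removed
}

fn main() {
    let mut storage: HashMap<&str, u32> =
        [("0x7", 7), ("0x8", 8), ("0x9", 9)].into_iter().collect();
    // One remaining leaf retains down to height 8: only block 7 is pruned.
    let removed = prune(&mut storage, &[8]);
    assert_eq!(removed, vec!["0x7"]);
    assert_eq!(storage.len(), 2);
    // No leaves left: everything else is pruned too.
    let removed = prune(&mut storage, &[]);
    assert_eq!(removed.len(), 2);
    assert!(storage.is_empty());
    println!("ok");
}
```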

	/// Get an iterator over all allowed relay-parents in the view, in no particular order.
	///
	/// **Important**: not all blocks are guaranteed to be allowed for some leaves; it may
	/// happen that a block's info is only kept in the view storage because of a retaining rule.
	///
	/// For getting relay-parents that are valid for parachain candidates, use
	/// [`View::known_allowed_relay_parents_under`].
	pub fn all_allowed_relay_parents(&self) -> impl Iterator<Item = &Hash> {
		self.block_info_storage.keys()
	}

	/// Get the known, allowed relay-parents that are valid for parachain candidates
	/// which could be backed in a child of a given block for a given para ID.
	///
	/// This is expressed as a contiguous slice of relay-chain block hashes which may
	/// include the provided block hash itself.
	///
	/// If `para_id` is `None`, this returns all valid relay-parents across all paras
	/// for the leaf.
	///
	/// `None` indicates that the block hash isn't part of the implicit view or that
	/// there are no known allowed relay parents.
	///
	/// This always returns `Some` for active leaves or for blocks that previously
	/// were active leaves.
	///
	/// This can return the empty slice, which indicates that no relay-parents are allowed
	/// for the para, e.g. if the para is not scheduled at the given block hash.
	pub fn known_allowed_relay_parents_under(
		&self,
		block_hash: &Hash,
		para_id: Option<ParaId>,
	) -> Option<&[Hash]> {
		let block_info = self.block_info_storage.get(block_hash)?;
		block_info
			.maybe_allowed_relay_parents
			.as_ref()
			.map(|mins| mins.allowed_relay_parents_for(para_id, block_info.block_number))
	}
}

/// Errors when fetching a leaf and associated ancestry.
#[fatality::fatality]
pub enum FetchError {
	/// Activated leaf is already present in view.
	#[error("Leaf was already known")]
	AlreadyKnown,

	/// Request to the prospective parachains subsystem failed.
	#[error("The prospective parachains subsystem was unavailable")]
	ProspectiveParachainsUnavailable,

	/// Failed to fetch the block header.
	#[error("A block header was unavailable")]
	BlockHeaderUnavailable(Hash, BlockHeaderUnavailableReason),

	/// A block header was unavailable due to a chain API error.
	#[error("A block header was unavailable due to a chain API error")]
	ChainApiError(Hash, ChainApiError),

	/// Request to the Chain API subsystem failed.
	#[error("The chain API subsystem was unavailable")]
	ChainApiUnavailable,
}

/// Reasons a block header might have been unavailable.
#[derive(Debug)]
pub enum BlockHeaderUnavailableReason {
	/// Block header simply unknown.
	Unknown,
	/// Internal Chain API error.
	Internal(ChainApiError),
	/// The subsystem was unavailable.
	SubsystemUnavailable,
}

struct FetchSummary {
	minimum_ancestor_number: BlockNumber,
	leaf_number: BlockNumber,
	relevant_paras: Vec<ParaId>,
}

async fn fetch_fresh_leaf_and_insert_ancestry<Sender>(
	leaf_hash: Hash,
	block_info_storage: &mut HashMap<Hash, BlockInfo>,
	sender: &mut Sender,
) -> Result<FetchSummary, FetchError>
where
	Sender: SubsystemSender<ChainApiMessage>,
	Sender: SubsystemSender<ProspectiveParachainsMessage>,
{
	let min_relay_parents_raw = {
		let (tx, rx) = oneshot::channel();
		sender
			.send_message(ProspectiveParachainsMessage::GetMinimumRelayParents(leaf_hash, tx))
			.await;

		match rx.await {
			Ok(m) => m,
			Err(_) => return Err(FetchError::ProspectiveParachainsUnavailable),
		}
	};

	let leaf_header = {
		let (tx, rx) = oneshot::channel();
		sender.send_message(ChainApiMessage::BlockHeader(leaf_hash, tx)).await;

		match rx.await {
			Ok(Ok(Some(header))) => header,
			Ok(Ok(None)) =>
				return Err(FetchError::BlockHeaderUnavailable(
					leaf_hash,
					BlockHeaderUnavailableReason::Unknown,
				)),
			Ok(Err(e)) =>
				return Err(FetchError::BlockHeaderUnavailable(
					leaf_hash,
					BlockHeaderUnavailableReason::Internal(e),
				)),
			Err(_) =>
				return Err(FetchError::BlockHeaderUnavailable(
					leaf_hash,
					BlockHeaderUnavailableReason::SubsystemUnavailable,
				)),
		}
	};

	let min_min = min_relay_parents_raw.iter().map(|x| x.1).min().unwrap_or(leaf_header.number);
	let relevant_paras = min_relay_parents_raw.iter().map(|x| x.0).collect();
	let expected_ancestry_len = (leaf_header.number.saturating_sub(min_min) as usize) + 1;

	let ancestry = if leaf_header.number > 0 {
		let mut next_ancestor_number = leaf_header.number - 1;
		let mut next_ancestor_hash = leaf_header.parent_hash;

		let mut ancestry = Vec::with_capacity(expected_ancestry_len);
		ancestry.push(leaf_hash);

		// Ensure all ancestors up to and including `min_min` are in the
		// block storage. When views advance incrementally, everything
		// should already be present.
		while next_ancestor_number >= min_min {
			let parent_hash = if let Some(info) = block_info_storage.get(&next_ancestor_hash) {
				info.parent_hash
			} else {
				// load the header and insert into block storage.
				let (tx, rx) = oneshot::channel();
				sender.send_message(ChainApiMessage::BlockHeader(next_ancestor_hash, tx)).await;

				let header = match rx.await {
					Ok(Ok(Some(header))) => header,
					Ok(Ok(None)) =>
						return Err(FetchError::BlockHeaderUnavailable(
							next_ancestor_hash,
							BlockHeaderUnavailableReason::Unknown,
						)),
					Ok(Err(e)) =>
						return Err(FetchError::BlockHeaderUnavailable(
							next_ancestor_hash,
							BlockHeaderUnavailableReason::Internal(e),
						)),
					Err(_) =>
						return Err(FetchError::BlockHeaderUnavailable(
							next_ancestor_hash,
							BlockHeaderUnavailableReason::SubsystemUnavailable,
						)),
				};

				block_info_storage.insert(
					next_ancestor_hash,
					BlockInfo {
						block_number: next_ancestor_number,
						parent_hash: header.parent_hash,
						maybe_allowed_relay_parents: None,
					},
				);

				header.parent_hash
			};

			ancestry.push(next_ancestor_hash);
			if next_ancestor_number == 0 {
				break
			}

			next_ancestor_number -= 1;
			next_ancestor_hash = parent_hash;
		}

		ancestry
	} else {
		vec![leaf_hash]
	};

	let fetched_ancestry = FetchSummary {
		minimum_ancestor_number: min_min,
		leaf_number: leaf_header.number,
		relevant_paras,
	};

	let allowed_relay_parents = AllowedRelayParents {
		minimum_relay_parents: min_relay_parents_raw.iter().cloned().collect(),
		allowed_relay_parents_contiguous: ancestry,
	};

	let leaf_block_info = BlockInfo {
		parent_hash: leaf_header.parent_hash,
		block_number: leaf_header.number,
		maybe_allowed_relay_parents: Some(allowed_relay_parents),
	};

	block_info_storage.insert(leaf_hash, leaf_block_info);

	Ok(fetched_ancestry)
}

#[cfg(test)]
mod tests {
	use super::*;
	use crate::TimeoutExt;
	use assert_matches::assert_matches;
	use futures::future::{join, FutureExt};
	use polkadot_node_subsystem::AllMessages;
	use polkadot_node_subsystem_test_helpers::{
		make_subsystem_context, TestSubsystemContextHandle,
	};
	use polkadot_overseer::SubsystemContext;
	use polkadot_primitives::Header;
	use sp_core::testing::TaskExecutor;
	use std::time::Duration;

	const PARA_A: ParaId = ParaId::new(0);
	const PARA_B: ParaId = ParaId::new(1);
	const PARA_C: ParaId = ParaId::new(2);

	const GENESIS_HASH: Hash = Hash::repeat_byte(0xFF);
	const GENESIS_NUMBER: BlockNumber = 0;

	// Chains A and B are forks of genesis.

	const CHAIN_A: &[Hash] =
		&[Hash::repeat_byte(0x01), Hash::repeat_byte(0x02), Hash::repeat_byte(0x03)];

	const CHAIN_B: &[Hash] = &[
		Hash::repeat_byte(0x04),
		Hash::repeat_byte(0x05),
		Hash::repeat_byte(0x06),
		Hash::repeat_byte(0x07),
		Hash::repeat_byte(0x08),
		Hash::repeat_byte(0x09),
	];

	type VirtualOverseer = TestSubsystemContextHandle<AllMessages>;

	const TIMEOUT: Duration = Duration::from_secs(2);

	async fn overseer_recv(virtual_overseer: &mut VirtualOverseer) -> AllMessages {
		virtual_overseer
			.recv()
			.timeout(TIMEOUT)
			.await
			.expect("overseer `recv` timed out")
	}

	fn default_header() -> Header {
		Header {
			parent_hash: Hash::zero(),
			number: 0,
			state_root: Hash::zero(),
			extrinsics_root: Hash::zero(),
			digest: Default::default(),
		}
	}

	fn get_block_header(chain: &[Hash], hash: &Hash) -> Option<Header> {
		let idx = chain.iter().position(|h| h == hash)?;
		let parent_hash = idx.checked_sub(1).map(|i| chain[i]).unwrap_or(GENESIS_HASH);
		let number =
			if *hash == GENESIS_HASH { GENESIS_NUMBER } else { GENESIS_NUMBER + idx as u32 + 1 };
		Some(Header { parent_hash, number, ..default_header() })
	}

	async fn assert_block_header_requests(
		virtual_overseer: &mut VirtualOverseer,
		chain: &[Hash],
		blocks: &[Hash],
	) {
		for block in blocks.iter().rev() {
			assert_matches!(
				overseer_recv(virtual_overseer).await,
				AllMessages::ChainApi(
					ChainApiMessage::BlockHeader(hash, tx)
				) => {
					assert_eq!(*block, hash, "unexpected block header request");
					let header = if block == &GENESIS_HASH {
						Header {
							number: GENESIS_NUMBER,
							..default_header()
						}
					} else {
						get_block_header(chain, block).expect("unknown block")
					};

					tx.send(Ok(Some(header))).unwrap();
				}
			);
		}
	}

	async fn assert_min_relay_parents_request(
		virtual_overseer: &mut VirtualOverseer,
		leaf: &Hash,
		response: Vec<(ParaId, u32)>,
	) {
		assert_matches!(
			overseer_recv(virtual_overseer).await,
			AllMessages::ProspectiveParachains(
				ProspectiveParachainsMessage::GetMinimumRelayParents(
					leaf_hash,
					tx
				)
			) => {
				assert_eq!(*leaf, leaf_hash, "received unexpected leaf hash");
				tx.send(response).unwrap();
			}
		);
	}

	#[test]
	fn construct_fresh_view() {
		let pool = TaskExecutor::new();
		let (mut ctx, mut ctx_handle) = make_subsystem_context::<AllMessages, _>(pool);

		let mut view = View::default();

		// Chain B.
		const PARA_A_MIN_PARENT: u32 = 4;
		const PARA_B_MIN_PARENT: u32 = 3;

		let prospective_response = vec![(PARA_A, PARA_A_MIN_PARENT), (PARA_B, PARA_B_MIN_PARENT)];

		let leaf = CHAIN_B.last().unwrap();
		let min_min_idx = (PARA_B_MIN_PARENT - GENESIS_NUMBER - 1) as usize;

		let fut = view.activate_leaf(ctx.sender(), *leaf).timeout(TIMEOUT).map(|res| {
			let paras = res.expect("`activate_leaf` timed out").unwrap();
			assert_eq!(paras, vec![PARA_A, PARA_B]);
		});
		let overseer_fut = async {
			assert_min_relay_parents_request(&mut ctx_handle, leaf, prospective_response).await;
			assert_block_header_requests(&mut ctx_handle, CHAIN_B, &CHAIN_B[min_min_idx..]).await;
		};
		futures::executor::block_on(join(fut, overseer_fut));

		for i in min_min_idx..(CHAIN_B.len() - 1) {
			// No allowed relay parents constructed for ancestry.
			assert!(view.known_allowed_relay_parents_under(&CHAIN_B[i], None).is_none());
		}

		let leaf_info =
			view.block_info_storage.get(leaf).expect("block must be present in storage");
		assert_matches!(
			leaf_info.maybe_allowed_relay_parents,
			Some(ref allowed_relay_parents) => {
				assert_eq!(allowed_relay_parents.minimum_relay_parents[&PARA_A], PARA_A_MIN_PARENT);
				assert_eq!(allowed_relay_parents.minimum_relay_parents[&PARA_B], PARA_B_MIN_PARENT);
				let expected_ancestry: Vec<Hash> =
					CHAIN_B[min_min_idx..].iter().rev().copied().collect();
				assert_eq!(
					allowed_relay_parents.allowed_relay_parents_contiguous,
					expected_ancestry
				);
			}
		);

		// Suppose the whole test chain A is allowed up to genesis for para C.
		const PARA_C_MIN_PARENT: u32 = 0;
		let prospective_response = vec![(PARA_C, PARA_C_MIN_PARENT)];
		let leaf = CHAIN_A.last().unwrap();
		let blocks = [&[GENESIS_HASH], CHAIN_A].concat();

		let fut = view.activate_leaf(ctx.sender(), *leaf).timeout(TIMEOUT).map(|res| {
			let paras = res.expect("`activate_leaf` timed out").unwrap();
			assert_eq!(paras, vec![PARA_C]);
		});
		let overseer_fut = async {
			assert_min_relay_parents_request(&mut ctx_handle, leaf, prospective_response).await;
			assert_block_header_requests(&mut ctx_handle, CHAIN_A, &blocks).await;
		};
		futures::executor::block_on(join(fut, overseer_fut));

		assert_eq!(view.leaves.len(), 2);
	}

	#[test]
	fn reuse_block_info_storage() {
		let pool = TaskExecutor::new();
		let (mut ctx, mut ctx_handle) = make_subsystem_context::<AllMessages, _>(pool);

		let mut view = View::default();

		const PARA_A_MIN_PARENT: u32 = 1;
		let leaf_a_number = 3;
		let leaf_a = CHAIN_B[leaf_a_number - 1];
		let min_min_idx = (PARA_A_MIN_PARENT - GENESIS_NUMBER - 1) as usize;

		let prospective_response = vec![(PARA_A, PARA_A_MIN_PARENT)];

		let fut = view.activate_leaf(ctx.sender(), leaf_a).timeout(TIMEOUT).map(|res| {
			let paras = res.expect("`activate_leaf` timed out").unwrap();
			assert_eq!(paras, vec![PARA_A]);
		});
		let overseer_fut = async {
			assert_min_relay_parents_request(&mut ctx_handle, &leaf_a, prospective_response).await;
			assert_block_header_requests(
				&mut ctx_handle,
				CHAIN_B,
				&CHAIN_B[min_min_idx..leaf_a_number],
			)
			.await;
		};
		futures::executor::block_on(join(fut, overseer_fut));

		// Blocks up to the 3rd are present in storage.
		const PARA_B_MIN_PARENT: u32 = 2;
		let leaf_b_number = 5;
		let leaf_b = CHAIN_B[leaf_b_number - 1];

		let prospective_response = vec![(PARA_B, PARA_B_MIN_PARENT)];

		let fut = view.activate_leaf(ctx.sender(), leaf_b).timeout(TIMEOUT).map(|res| {
			let paras = res.expect("`activate_leaf` timed out").unwrap();
			assert_eq!(paras, vec![PARA_B]);
		});
		let overseer_fut = async {
			assert_min_relay_parents_request(&mut ctx_handle, &leaf_b, prospective_response).await;
			assert_block_header_requests(
				&mut ctx_handle,
				CHAIN_B,
				&CHAIN_B[leaf_a_number..leaf_b_number], // Note the expected range.
			)
			.await;
		};
		futures::executor::block_on(join(fut, overseer_fut));

		// Allowed relay parents for leaf A are preserved.
		let leaf_a_info =
			view.block_info_storage.get(&leaf_a).expect("block must be present in storage");
		assert_matches!(
			leaf_a_info.maybe_allowed_relay_parents,
			Some(ref allowed_relay_parents) => {
				assert_eq!(allowed_relay_parents.minimum_relay_parents[&PARA_A], PARA_A_MIN_PARENT);
				let expected_ancestry: Vec<Hash> =
					CHAIN_B[min_min_idx..leaf_a_number].iter().rev().copied().collect();
				let ancestry =
					view.known_allowed_relay_parents_under(&leaf_a, Some(PARA_A)).unwrap().to_vec();
				assert_eq!(ancestry, expected_ancestry);
			}
		);
	}

	#[test]
	fn pruning() {
		let pool = TaskExecutor::new();
		let (mut ctx, mut ctx_handle) = make_subsystem_context::<AllMessages, _>(pool);

		let mut view = View::default();

		const PARA_A_MIN_PARENT: u32 = 3;
		let leaf_a = CHAIN_B.iter().rev().nth(1).unwrap();
		let leaf_a_idx = CHAIN_B.len() - 2;
		let min_a_idx = (PARA_A_MIN_PARENT - GENESIS_NUMBER - 1) as usize;

		let prospective_response = vec![(PARA_A, PARA_A_MIN_PARENT)];

		let fut = view
			.activate_leaf(ctx.sender(), *leaf_a)
			.timeout(TIMEOUT)
			.map(|res| res.unwrap().unwrap());
		let overseer_fut = async {
			assert_min_relay_parents_request(&mut ctx_handle, &leaf_a, prospective_response).await;
			assert_block_header_requests(
				&mut ctx_handle,
				CHAIN_B,
				&CHAIN_B[min_a_idx..=leaf_a_idx],
			)
			.await;
		};
		futures::executor::block_on(join(fut, overseer_fut));

		// Also activate a leaf with a lesser minimum relay parent.
		const PARA_B_MIN_PARENT: u32 = 2;
		let leaf_b = CHAIN_B.last().unwrap();
		let min_b_idx = (PARA_B_MIN_PARENT - GENESIS_NUMBER - 1) as usize;

		let prospective_response = vec![(PARA_B, PARA_B_MIN_PARENT)];
		// Headers will be requested for the minimum block and the leaf.
		let blocks = &[CHAIN_B[min_b_idx], *leaf_b];

		let fut = view
			.activate_leaf(ctx.sender(), *leaf_b)
			.timeout(TIMEOUT)
			.map(|res| res.expect("`activate_leaf` timed out").unwrap());
		let overseer_fut = async {
			assert_min_relay_parents_request(&mut ctx_handle, &leaf_b, prospective_response).await;
			assert_block_header_requests(&mut ctx_handle, CHAIN_B, blocks).await;
		};
		futures::executor::block_on(join(fut, overseer_fut));

		// Prune implicit ancestor (no-op).
		let block_info_len = view.block_info_storage.len();
		view.deactivate_leaf(CHAIN_B[leaf_a_idx - 1]);
		assert_eq!(block_info_len, view.block_info_storage.len());

		// Prune a leaf with a greater minimum relay parent.
		view.deactivate_leaf(*leaf_b);
		for hash in CHAIN_B.iter().take(PARA_B_MIN_PARENT as usize) {
			assert!(!view.block_info_storage.contains_key(hash));
		}

		// Prune the last leaf.
		view.deactivate_leaf(*leaf_a);
		assert!(view.block_info_storage.is_empty());
	}

	#[test]
	fn genesis_ancestry() {
		let pool = TaskExecutor::new();
		let (mut ctx, mut ctx_handle) = make_subsystem_context::<AllMessages, _>(pool);

		let mut view = View::default();

		const PARA_A_MIN_PARENT: u32 = 0;

		let prospective_response = vec![(PARA_A, PARA_A_MIN_PARENT)];
		let fut = view.activate_leaf(ctx.sender(), GENESIS_HASH).timeout(TIMEOUT).map(|res| {
			let paras = res.expect("`activate_leaf` timed out").unwrap();
			assert_eq!(paras, vec![PARA_A]);
		});
		let overseer_fut = async {
			assert_min_relay_parents_request(&mut ctx_handle, &GENESIS_HASH, prospective_response)
				.await;
			assert_block_header_requests(&mut ctx_handle, &[GENESIS_HASH], &[GENESIS_HASH]).await;
		};
		futures::executor::block_on(join(fut, overseer_fut));

		assert_matches!(
			view.known_allowed_relay_parents_under(&GENESIS_HASH, None),
			Some(hashes) if !hashes.is_empty()
		);
	}
}

@@ -0,0 +1,14 @@
// Copyright 2017-2022 Parity Technologies (UK) Ltd.
// This file is part of Polkadot.

// Polkadot is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.

// Polkadot is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.

pub mod staging;
@@ -43,11 +43,11 @@ use futures::channel::{mpsc, oneshot};
use parity_scale_codec::Encode;

use polkadot_primitives::{
	AuthorityDiscoveryId, CandidateEvent, CandidateHash, CommittedCandidateReceipt, CoreState,
	EncodeAs, GroupIndex, GroupRotationInfo, Hash, Id as ParaId, OccupiedCoreAssumption,
	PersistedValidationData, ScrapedOnChainVotes, SessionIndex, SessionInfo, Signed,
	SigningContext, ValidationCode, ValidationCodeHash, ValidatorId, ValidatorIndex,
	ValidatorSignature,
	vstaging as vstaging_primitives, AuthorityDiscoveryId, CandidateEvent, CandidateHash,
	CommittedCandidateReceipt, CoreState, EncodeAs, GroupIndex, GroupRotationInfo, Hash,
	Id as ParaId, OccupiedCoreAssumption, PersistedValidationData, ScrapedOnChainVotes,
	SessionIndex, SessionInfo, Signed, SigningContext, ValidationCode, ValidationCodeHash,
	ValidatorId, ValidatorIndex, ValidatorSignature,
};
pub use rand;
use sp_application_crypto::AppCrypto;
@@ -67,11 +67,17 @@ pub mod reexports {
	pub use polkadot_overseer::gen::{SpawnedSubsystem, Spawner, Subsystem, SubsystemContext};
}

/// Convenient and efficient runtime info access.
pub mod runtime;

/// A utility for managing the implicit view of the relay-chain derived from active
/// leaves and the minimum allowed relay-parents that parachain candidates can have
/// and be backed in those leaves' children.
pub mod backing_implicit_view;
/// Database trait for subsystem.
pub mod database;
/// An emulator for node-side code to predict the results of on-chain parachain inclusion
/// and predict future constraints.
pub mod inclusion_emulator;
/// Convenient and efficient runtime info access.
pub mod runtime;

/// Nested message sending
///
@@ -200,6 +206,7 @@ macro_rules! specialize_requests {
}

specialize_requests! {
	fn request_runtime_api_version() -> u32; Version;
	fn request_authorities() -> Vec<AuthorityDiscoveryId>; Authorities;
	fn request_validators() -> Vec<ValidatorId>; Validators;
	fn request_validator_groups() -> (Vec<Vec<ValidatorIndex>>, GroupRotationInfo); ValidatorGroups;
@@ -219,6 +226,8 @@ specialize_requests! {
	fn request_unapplied_slashes() -> Vec<(SessionIndex, CandidateHash, slashing::PendingSlashes)>; UnappliedSlashes;
	fn request_key_ownership_proof(validator_id: ValidatorId) -> Option<slashing::OpaqueKeyOwnershipProof>; KeyOwnershipProof;
	fn request_submit_report_dispute_lost(dp: slashing::DisputeProof, okop: slashing::OpaqueKeyOwnershipProof) -> Option<()>; SubmitReportDisputeLost;

	fn request_staging_async_backing_params() -> vstaging_primitives::AsyncBackingParams; StagingAsyncBackingParams;
}

/// Requests executor parameters from the runtime effective at given relay-parent. First obtains
@@ -270,17 +279,20 @@ pub async fn executor_params_at_relay_parent(
}

/// From the given set of validators, find the first key we can sign with, if any.
pub fn signing_key(validators: &[ValidatorId], keystore: &KeystorePtr) -> Option<ValidatorId> {
pub fn signing_key<'a>(
	validators: impl IntoIterator<Item = &'a ValidatorId>,
	keystore: &KeystorePtr,
) -> Option<ValidatorId> {
	signing_key_and_index(validators, keystore).map(|(k, _)| k)
}

/// From the given set of validators, find the first key we can sign with, if any, and return it
/// along with the validator index.
pub fn signing_key_and_index(
	validators: &[ValidatorId],
pub fn signing_key_and_index<'a>(
	validators: impl IntoIterator<Item = &'a ValidatorId>,
	keystore: &KeystorePtr,
) -> Option<(ValidatorId, ValidatorIndex)> {
	for (i, v) in validators.iter().enumerate() {
	for (i, v) in validators.into_iter().enumerate() {
		if keystore.has_keys(&[(v.to_raw_vec(), ValidatorId::ID)]) {
			return Some((v.clone(), ValidatorIndex(i as _)))
		}

@@ -25,7 +25,9 @@ use sp_application_crypto::AppCrypto;
use sp_core::crypto::ByteArray;
use sp_keystore::{Keystore, KeystorePtr};

use polkadot_node_subsystem::{messages::RuntimeApiMessage, overseer, SubsystemSender};
use polkadot_node_subsystem::{
	errors::RuntimeApiError, messages::RuntimeApiMessage, overseer, SubsystemSender,
};
use polkadot_primitives::{
	vstaging, CandidateEvent, CandidateHash, CoreState, EncodeAs, GroupIndex, GroupRotationInfo,
	Hash, IndexedVec, OccupiedCore, ScrapedOnChainVotes, SessionIndex, SessionInfo, Signed,
@@ -36,8 +38,8 @@ use polkadot_primitives::{
use crate::{
	request_availability_cores, request_candidate_events, request_key_ownership_proof,
	request_on_chain_votes, request_session_index_for_child, request_session_info,
	request_submit_report_dispute_lost, request_unapplied_slashes, request_validation_code_by_hash,
	request_validator_groups,
	request_staging_async_backing_params, request_submit_report_dispute_lost,
	request_unapplied_slashes, request_validation_code_by_hash, request_validator_groups,
};

/// Errors that can happen on runtime fetches.
@@ -46,6 +48,8 @@ mod error;
use error::{recv_runtime, Result};
pub use error::{Error, FatalError, JfyiError};

const LOG_TARGET: &'static str = "parachain::runtime-info";

/// Configuration for constructing a `RuntimeInfo`.
pub struct Config {
	/// Needed for retrieval of `ValidatorInfo`
@@ -393,3 +397,62 @@ where
	)
	.await
}

/// Prospective parachains mode of a relay parent. Defined by
/// the Runtime API version.
///
/// Needed for the period of transition to asynchronous backing.
#[derive(Debug, Copy, Clone)]
pub enum ProspectiveParachainsMode {
	/// Runtime API without support of `async_backing_params`: no prospective parachains.
	Disabled,
	/// vstaging runtime API: prospective parachains.
	Enabled {
		/// The maximum number of para blocks between the para head in a relay parent
		/// and a new candidate. Restricts nodes from building arbitrarily long chains
		/// and spamming other validators.
		max_candidate_depth: usize,
		/// The number of relay-parent ancestors that candidates are allowed to be
		/// built on top of.
		allowed_ancestry_len: usize,
	},
}

impl ProspectiveParachainsMode {
	/// Returns `true` if the mode is enabled, `false` otherwise.
	pub fn is_enabled(&self) -> bool {
		matches!(self, ProspectiveParachainsMode::Enabled { .. })
	}
}

/// Requests the prospective parachains mode for a given relay parent based on
/// the Runtime API version.
pub async fn prospective_parachains_mode<Sender>(
	sender: &mut Sender,
	relay_parent: Hash,
) -> Result<ProspectiveParachainsMode>
where
	Sender: SubsystemSender<RuntimeApiMessage>,
{
	let result =
		recv_runtime(request_staging_async_backing_params(relay_parent, sender).await).await;

	if let Err(error::Error::RuntimeRequest(RuntimeApiError::NotSupported { runtime_api_name })) =
		&result
	{
		gum::trace!(
			target: LOG_TARGET,
			?relay_parent,
			"Prospective parachains are disabled, {} is not supported by the current Runtime API",
			runtime_api_name,
		);

		Ok(ProspectiveParachainsMode::Disabled)
	} else {
		let vstaging::AsyncBackingParams { max_candidate_depth, allowed_ancestry_len } = result?;
		Ok(ProspectiveParachainsMode::Enabled {
			max_candidate_depth: max_candidate_depth as _,
			allowed_ancestry_len: allowed_ancestry_len as _,
		})
	}
}
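The version-gating pattern in `prospective_parachains_mode` can be isolated into a small, self-contained sketch: a "not supported" runtime API error is mapped to `Disabled` rather than propagated, so nodes behave sensibly across the upgrade boundary. The `Mode`, `ApiError`, and `mode_from_result` names below are hypothetical stand-ins for this illustration:

```rust
// Simplified mirror of `ProspectiveParachainsMode` and the fallback logic.
#[derive(Debug, PartialEq)]
enum Mode {
    Disabled,
    Enabled { max_candidate_depth: usize, allowed_ancestry_len: usize },
}

enum ApiError {
    NotSupported,
    Other(String),
}

fn mode_from_result(res: Result<(u32, u32), ApiError>) -> Result<Mode, String> {
    match res {
        // Old runtime: the API is simply absent, which is not a failure.
        Err(ApiError::NotSupported) => Ok(Mode::Disabled),
        // Any other error is a genuine failure and is propagated.
        Err(ApiError::Other(e)) => Err(e),
        Ok((max_candidate_depth, allowed_ancestry_len)) => Ok(Mode::Enabled {
            max_candidate_depth: max_candidate_depth as usize,
            allowed_ancestry_len: allowed_ancestry_len as usize,
        }),
    }
}

fn main() {
    assert_eq!(mode_from_result(Err(ApiError::NotSupported)).unwrap(), Mode::Disabled);
    assert!(mode_from_result(Err(ApiError::Other("boom".into()))).is_err());
    assert!(matches!(
        mode_from_result(Ok((4, 3))).unwrap(),
        Mode::Enabled { max_candidate_depth: 4, allowed_ancestry_len: 3 }
    ));
    println!("ok");
}
```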