mirror of
https://github.com/pezkuwichain/pezkuwi-subxt.git
synced 2026-04-26 07:37:57 +00:00
5174b9d2d7
* inclusion emulator logic for asynchronous backing (#4790) * initial stab at candidate_context * fmt * docs & more TODOs * some cleanups * reframe as inclusion_emulator * documentations yes * update types * add constraint modifications * watermark * produce modifications * v2 primitives: re-export all v1 for consistency * vstaging primitives * emulator constraints: handle code upgrades * produce outbound HRMP modifications * stack. * method for applying modifications * method just for sanity-checking modifications * fragments produce modifications, not prospectives * make linear * add some TODOs * remove stacking; handle code upgrades * take `fragment` private * reintroduce stacking. * fragment constructor * add TODO * allow validating fragments against future constraints * docs * relay-parent number and min code size checks * check code upgrade restriction * check max hrmp per candidate * fmt * remove GoAhead logic because it wasn't helpful * docs on code upgrade failure * test stacking * test modifications against constraints * fmt * test fragments * descending or duplicate test * fmt * remove unused imports in vstaging * wrong primitives * spellcheck * Runtime changes for Asynchronous Backing (#4786) * inclusion: utility for allowed relay-parents * inclusion: use prev number instead of prev hash * track most recent context of paras * inclusion: accept previous relay-parents * update dmp advancement rule for async backing * fmt * add a comment about validation outputs * clean up a couple of TODOs * weights * fix weights * fmt * Resolve dmp todo * Restore inclusion tests * Restore paras_inherent tests * MostRecentContext test * Benchmark for new paras dispatchable * Prepare check_validation_outputs for upgrade * cargo run --quiet --profile=production --features=runtime-benchmarks -- benchmark --chain=kusama-dev --steps=50 --repeat=20 --pallet=runtime_parachains::paras --extrinsic=* --execution=wasm --wasm-execution=compiled --heap-pages=4096 
--header=./file_header.txt --output=./runtime/kusama/src/weights/runtime_parachains_paras.rs * cargo run --quiet --profile=production --features=runtime-benchmarks -- benchmark --chain=westend-dev --steps=50 --repeat=20 --pallet=runtime_parachains::paras --extrinsic=* --execution=wasm --wasm-execution=compiled --heap-pages=4096 --header=./file_header.txt --output=./runtime/westend/src/weights/runtime_parachains_paras.rs * cargo run --quiet --profile=production --features=runtime-benchmarks -- benchmark --chain=polkadot-dev --steps=50 --repeat=20 --pallet=runtime_parachains::paras --extrinsic=* --execution=wasm --wasm-execution=compiled --heap-pages=4096 --header=./file_header.txt --output=./runtime/polkadot/src/weights/runtime_parachains_paras.rs * cargo run --quiet --profile=production --features=runtime-benchmarks -- benchmark --chain=rococo-dev --steps=50 --repeat=20 --pallet=runtime_parachains::paras --extrinsic=* --execution=wasm --wasm-execution=compiled --heap-pages=4096 --header=./file_header.txt --output=./runtime/rococo/src/weights/runtime_parachains_paras.rs * Implementers guide changes * More tests for allowed relay parents * Add a github issue link * Compute group index based on relay parent * Storage migration * Move allowed parents tracker to shared * Compile error * Get group assigned to core at the next block * Test group assignment * fmt * Error instead of panic * Update guide * Extend doc-comment * Update runtime/parachains/src/shared.rs Co-authored-by: Robert Habermeier <rphmeier@gmail.com> Co-authored-by: Chris Sosnin <chris125_@live.com> Co-authored-by: Parity Bot <admin@parity.io> Co-authored-by: Chris Sosnin <48099298+slumber@users.noreply.github.com> * Prospective Parachains Subsystem (#4913) * docs and skeleton * subsystem skeleton * main loop * fragment tree basics & fmt * begin fragment trees & view * flesh out more of view update logic * further flesh out update logic * some refcount functions for fragment trees * add fatal/non-fatal 
errors * use non-fatal results * clear up some TODOs * ideal format for scheduling info * add a bunch of TODOs * some more fluff * extract fragment graph to submodule * begin fragment graph API * trees, not graphs * improve docs * scope and constructor for trees * add some test TODOs * limit max ancestors and store constraints * constructor * constraints: fix bug in HRMP watermarks * fragment tree population logic * set::retain * extract population logic * implement add_and_populate * fmt * add some TODOs in tests * implement child-selection * strip out old stuff based on wrong assumptions * use fatality * implement pruning * remove unused ancestor constraints * fragment tree instantiation * remove outdated comment * add message/request types and skeleton for handling * fmt * implement handle_candidate_seconded * candidate storage: handle backed * implement handle_candidate_backed * implement answer_get_backable_candidate * remove async where not needed * implement fetch_ancestry * add logic for run_iteration * add some docs * remove global allow(unused), fix warnings * make spellcheck happy (despite English) * fmt * bump Cargo.lock * replace tracing with gum * introduce PopulateFrom trait * implement GetHypotheticalDepths * revise docs slightly * first fragment tree scope test * more scope tests * test add_candidate * fmt * test retain * refactor test code * test populate is recursive * test contiguity of depth 0 is maintained * add_and_populate tests * cycle tests * remove PopulateFrom trait * fmt * test hypothetical depths (non-recursive) * have CandidateSeconded return membership * tree membership requests * Add a ProspectiveParachainsSubsystem struct * add a staging API for base constraints * add a `From` impl * add runtime API for staging_validity_constraints * implement fetch_base_constraints * implement `fetch_upcoming_paras` * remove reconstruction of candidate receipt; no obvious usecase * fmt * export message to broader module * remove last TODO * 
correctly export * fix compilation and add GetMinimumRelayParent request * make provisioner into a real subsystem with proper message bounds * fmt * fix ChannelsOut in overseer test * fix overseer tests * fix again * fmt * Integrate prospective parachains subsystem into backing: Part 1 (#5557) * BEGIN ASYNC candidate-backing CHANGES * rename & document modes * answer prospective validation data requests * GetMinimumRelayParents request is now plural * implement an implicit view utility for backing subsystems * implicit-view: get allowed relay parents * refactorings and improvements to implicit view * add some TODOs for tests * split implicit view updates into 2 functions * backing: define State to prepare for functional refactor * add some docs * backing: implement bones of new leaf activation logic * backing: create per-relay-parent-states * use new handle_active_leaves_update * begin extracting logic from CandidateBackingJob * mostly extract statement import from job logic * handle statement imports outside of job logic * do some TODO planning for prospective parachains integration * finish rewriting backing subsystem in functional style * add prospective parachains mode to relay parent entries * fmt * add a RejectedByProspectiveParachains error * notify prospective parachains of seconded and backed candidates * always validate candidates exhaustively in backing. 
* return persisted_validation_data from validation * handle rejections by prospective parachains * implement seconding sanity check * invoke validate_and_second * Alter statement table to allow multiple seconded messages per validator * refactor backing to have statements carry PVD * clean up all warnings * Add tests for implicit view * Improve doc comments * Prospective parachains mode based on Runtime API version * Add a TODO * Rework seconding_sanity_check * Iterate over responses * Update backing tests * collator-protocol: load PVD from runtime * Fix validator side tests * Update statement-distribution to fetch PVD * Fix statement-distribution tests * Backing tests with prospective paras #1 * fix per_relay_parent pruning in backing * Test multiple leaves * Test seconding sanity check * Import statement order Before creating an entry in `PerCandidateState` map wait for the approval from the prospective parachains * Add a test for correct state updates * Second multiple candidates per relay parent test * Add backing tests with prospective paras * Second more than one test without prospective paras * Add a test for prospective para blocks * Update malus * typos Co-authored-by: Chris Sosnin <chris125_@live.com> * Track occupied depth in backing per parachain (#5778) * provisioner: async backing changes (#5711) * Provisioner changes for async backing * Select candidates based on prospective paras mode * Revert naming * Update tests * Update TODO comment * review * fmt * Network bridge changes for asynchronous backing + update subsystems to handle versioned packets (#5991) * BEGIN STATEMENT DISTRIBUTION WORK create a vstaging network protocol which is the same as v1 * mostly make network bridge amenable to vstaging * network-bridge: fully adapt to vstaging * add some 
TODOs for tests * fix fallout in bitfield-distribution * bitfield distribution tests + TODOs * fix fallout in gossip-support * collator-protocol: fix message fallout * collator-protocol: load PVD from runtime * add TODO for vstaging tests * make things compile * set used network protocol version using a feature * fmt * get approval-distribution building * fix approval-distribution tests * spellcheck * nits * approval distribution net protocol test * bitfield distribution net protocol test * Revert "collator-protocol: fix message fallout" This reverts commit 07cc887303e16c6b3843ecb25cdc7cc2080e2ed1. * Network bridge tests Co-authored-by: Chris Sosnin <chris125_@live.com> * remove max_pov_size requirement from prospective pvd request (#6014) * remove max_pov_size requirement from prospective pvd request * fmt * Extract legacy statement distribution to its own module (#6026) * add compatibility type to v2 statement distribution message * warning cleanup * handle compatibility layer for v2 * clean up an unimplemented!() block * circulate statements based on version * extract legacy v1 code into separate module * remove unimplemented * clean up naming of from_requester/responder * remove TODOs * have backing share seconded statements with PVD * fmt * fix warning * Quick fix unused warning for not yet implemented/used staging messages. * Fix network bridge test * Fix wrong merge. We now have 23 subsystems (network bridge split + prospective parachains) Co-authored-by: Robert Klotzner <robert.klotzner@gmx.at> * Version 3 is already live. * Fix tests (#6055) * Fix backing tests * Fix warnings. 
* fmt * collator-protocol: asynchronous backing changes (#5740) * Draft collator side changes * Start working on collations management * Handle peer's view change * Versioning on advertising * Versioned collation fetching request * Handle versioned messages * Improve docs for collation requests * Add spans * Add request receiver to overseer * Fix collator side tests * Extract relay parent mode to lib * Validator side draft * Add more checks for advertisement * Request pvd based on async backing mode * review * Validator side improvements * Make old tests green * More fixes * Collator side tests draft * Send collation test * fmt * Collator side network protocol versioning * cleanup * merge artifacts * Validator side net protocol versioning * Remove fragment tree membership request * Resolve todo * Collator side core state test * Improve net protocol compatibility * Validator side tests * more improvements * style fixes * downgrade log * Track implicit assignments * Limit the number of seconded candidates per para * Add a sanity check * Handle fetched candidate * fix tests * Retry fetch * Guard against dequeueing while already fetching * Reintegrate connection management * Timeout on advertisements * fmt * spellcheck * update tests after merge * validator assignment fixes for backing and collator protocol (#6158) * Rename depth->ancestry len in tests * Refactor group assignments * Remove implicit assignments * backing: consider occupied core assignments * Track a single para on validator side * Refactor prospective parachains mode request (#6179) * Extract prospective parachains mode into util * Skip activations depending on the mode * backing: don't send backed candidate to provisioner (#6185) * backing: introduce `CanSecond` request for advertisements filtering (#6225) * Drop BoundToRelayParent * draft changes * fix backing tests * Fix genesis ancestry * Fix validator side tests * more tests * cargo generate-lockfile * Implement `StagingValidityConstraints` Runtime 
API method (#6258) * Implement StagingValidityConstraints * spellcheck * fix ump params * Update hrmp comment * Introduce ump per candidate limit * hypothetical earliest block * refactor primitives usage * hypothetical earliest block number test * fix build * Prepare the Runtime for asynchronous backing upgrade (#6287) * Introduce async backing params to runtime config * fix cumulus config * use config * finish runtimes * Introduce new staging API * Update collator protocol * Update provisioner * Update prospective parachains * Update backing * Move async backing params lower in the config * make naming consistent * misc * Use real prospective parachains subsystem (#6407) * Backport `HypotheticalFrontier` into the feature branch (#6605) * implement more general HypotheticalFrontier * fmt * drop unneeded request Co-authored-by: Robert Habermeier <rphmeier@gmail.com> * Resolve todo about legacy leaf activation (#6447) * fix bug/warning in handling membership answers * Remove `HypotheticalDepthRequest` in favor of `HypotheticalFrontierRequest` (#6521) * Remove `HypotheticalDepthRequest` for `HypotheticalFrontierRequest` * Update tests * Fix (removed wrong docstring) * Fix can_second request * Patch some dead_code errors --------- Co-authored-by: Chris Sosnin <chris125_@live.com> * Async Backing: Send Statement Distribution "Backed" messages (#6634) * Backing: Send Statement Distribution "Backed" messages Closes #6590. 
**TODO:** - [ ] Adjust tests * Fix compile errors * (Mostly) fix tests * Fix comment * Fix test and compile error * Test that `StatementDistributionMessage::Backed` is sent * Fix compile error * Fix some clippy errors * Add prospective parachains subsystem tests (#6454) * Add prospective parachains subsystem test * Add `should_do_no_work_if_async_backing_disabled_for_leaf` test * Implement `activate_leaf` helper, up to getting ancestry * Finish implementing `activate_leaf` * Small refactor in `activate_leaf` * Get `CandidateSeconded` working * Finish `send_candidate_and_check_if_found` test * Refactor; send more leaves & candidates * Refactor test * Implement `check_candidate_parent_leaving_view` test * Start work on `check_candidate_on_multiple_forks` test * Don’t associate specific parachains with leaf * Finish `correctly_updates_leaves` test * Fix cycle due to reused head data * Fix `check_backable_query` test * Fix `check_candidate_on_multiple_forks` test * Add `check_depth_and_pvd_queries` test * Address review comments * Remove TODO * add a new index for output head data to candidate storage * Resolve test TODOs * Fix compile errors * test candidate storage pruning, make sure new index is cleaned up --------- Co-authored-by: Robert Habermeier <rphmeier@gmail.com> * Node-side metrics for asynchronous backing (#6549) * Add metrics for `prune_view_candidate_storage` * Add metrics for `request_unblocked_collations` * Fix docstring * Couple fixes from review comments * Fix `check_depth_query` test * inclusion-emulator: mirror advancement rule check (#6361) * inclusion-emulator: mirror advancement rule check * fix build * prospective-parachains: introduce `backed_in_path_only` flag for advertisements (#6649) * Introduce `backed_in_path_only` flag for depth request * fmt * update doc comment * fmt * Add async-backing zombienet tests (#6314) * Async backing: impl guide for statement distribution (#6738) Co-authored-by: Bradley Olson 
<34992650+BradleyOlson64@users.noreply.github.com> Co-authored-by: alexgparity <115470171+alexgparity@users.noreply.github.com> * Asynchronous backing statement distribution: Take III (#5999) * add notification types for v2 statement-distribution * improve protocol docs * add empty vstaging module * fmt * add backed candidate packet request types * start putting down structure of new logic * handle activated leaf * some sanity-checking on outbound statements * fmt * update vstaging share to use statements with PVD * tiny refactor, candidate_hash location * import local statements * refactor statement import * first stab at broadcast logic * fmt * fill out some TODOs * start on handling incoming * split off session info into separate map * start in on a knowledge tracker * address some grumbles * format * missed comment * some docs for direct * add note on slashing * amend * simplify 'direct' code * finish up the 'direct' logic * add a bunch of tests for the direct-in-group logic * rename 'direct' to 'cluster', begin a candidate_entry module * distill candidate_entry * start in on a statement-store module * some utilities for the statement store * rewrite 'send_statement_direct' using new tools * filter sending logic on peers which have the relay-parent in their view. 
* some more logic for handling incoming statements * req/res: BackedCandidatePacket -> AttestedCandidate + tweaks * add a `validated_in_group` bitfield to BackedCandidateInventory * BackedCandidateInventory -> Manifest * start in on requester module * add outgoing request for attested candidate * add a priority mechanism for requester * some request dispatch logic * add seconded mask to tagged-request * amend manifest to hold group index * handle errors and set up scaffold for response validation * validate attested candidate responses * requester -> requests * add some utilities for manipulating requests * begin integrating requester * start grid module * tiny * refactor grid topology to expose more info to subsystems * fix grid_topology test * fix overseer test * implement topology group-based view construction logic * fmt * flesh out grid slightly more * add indexed groups utility * integrate Groups into per-session info * refactor statement store to borrow Groups * implement manifest knowledge utility * add a test for topology setup * don't send to group members * test for conflicting manifests * manifest knowledge tests * fmt * rename field * garbage collection for grid tracker * routines for finding correct/incorrect advertisers * add manifest import logic * tweak naming * more tests for manifest import * add comment * rework candidates into a view-wide tracker * fmt * start writing boilerplate for grid sending * fmt * some more group boilerplate * refactor handling of topology and authority IDs * fmt * send statements directly to grid peers where possible * send to cluster only if statement belongs to cluster * improve handling of cluster statements * handle incoming statements along the grid * API for introduction of candidates into the tree * backing: use new prospective parachains API * fmt prospective parachains changes * fmt statement-dist * fix condition * get ready for tracking importable candidates * prospective parachains: add Cow logic * incomplete 
and complete hypothetical candidates * remove keep_if_unneeded * fmt * implement more general HypotheticalFrontier * fmt, cleanup * add a by_parent_hash index to candidate tracker * more framework for future code * utilities for getting all hypothetical candidates for frontier * track origin in statement store * fmt * requests should return peer * apply post-confirmation reckoning * flesh out import/announce/circulate logic on new statements * adjust * adjust TODO comment * fix backing tests * update statement-distribution to use new indexedvec * fmt * query hypothetical candidates * implement `note_importable_under` * extract common utility of fragment tree updates * add a helper function for getting statements unknown by backing * import fresh statements to backing * send announcements and acknowledgements over grid * provide freshly importable statements also avoid tracking backed candidates in statement distribution * do not issue requests on newly importable candidates * add TODO for later when confirming candidate * write a routine for handling backed candidate notifications * simplify grid substantially * add some test TODOs * handle confirmed candidates & grid announcements * finish implementing manifest handling, including follow up statements * send follow-up statements when acknowledging freshly backed * fmt * handle incoming acknowledgements * a little DRYing * wire up network messages to handlers * fmt * some skeleton code for peer view update handling * more peer view skeleton stuff * Fix async backing statement distribution tests (#6621) * Fix compile errors in tests * Cargo fmt * Resolve some todos in async backing statement-distribution branch (#6482) * Implement `remove_by_relay_parent` * Extract `minimum_votes` to shared primitives. 
* Add `can_send_statements_received_with_prejudice` test * Fix test * Update docstrings * Cargo fmt * Fix compile error * Fix compile errors in tests * Cargo fmt * Add module docs; write `test_priority_ordering` (first draft) * Fix `test_priority_ordering` * Move `insert_or_update_priority`: `Drop` -> `set_cluster_priority` * Address review comments * Remove `Entry::get_mut` * fix test compilation * add a TODO for a test * clean up a couple of TODOs * implement sending pending cluster statements * refactor utility function for sending acknowledgement and statements * mostly implement catching peers up via grid * Fix clippy error * alter grid to track all pending statements * fix more TODOs and format * tweak a TODO in requests * some logic for dispatching requests * fmt * skeleton for response receiving * Async backing statement distribution: cluster tests (#6678) * Add `pending_statements_set_when_receiving_fresh_statements` * Add `pending_statements_updated_when_sending_statements` test * fix up * fmt * update TODO * rework seconded mask in requests * change doc * change unhandledresponse not to borrow request manager * only accept responses sufficient to back * finish implementing response handling * extract statement filter to protocol crate * rework requests: use statement filter in network protocol * dispatch cluster requests correctly * rework cluster statement sending * implement request answering * fmt * only send confirmed candidate statement messages on unified relay-parent * Fix Tests In Statement Distribution Branch * Async Backing: Integrate `vstaging` of statement distribution into `lib.rs` (#6715) * Integrate `handle_active_leaves_update` * Integrate `share_local_statement`/`handle_backed_candidate_message` * Start hooking up request/response flow * Finish hooking up request/response flow * Limit number of parallel requests in responder * Fix test compilation errors * Fix missing check for prospective parachains mode * Fix some more compile errors * 
clean up some review comments * clean up warnings * Async backing statement distribution: grid tests (#6673) * Add `manifest_import_returns_ok_true` test * cargo fmt * Add pending_communication_receiving_manifest_on_confirmed_candidate * Add `senders_can_provide_manifests_in_acknowledgement` test * Add a couple of tests for pending statements * Add `pending_statements_cleared_when_sending` test * Add `pending_statements_respect_remote_knowledge` test * Refactor group creation in tests * Clarify docs * Address some review comments * Make some clarifications * Fix post-merge errors * Clarify test `senders_can_provide_manifests_in_acknowledgement` * Try writing `pending_statements_are_updated_after_manifest_exchange` * Document "seconding limit" and `reject_overflowing_manifests` test * Test that seconding counts are not updated for validators on error * Fix tests * Fix manifest exchange test * Add more tests in `requests.rs` (#6707) This resolves remaining TODOs in this file. * remove outdated inventory terminology * Async backing statement distribution: `Candidates` tests (#6658) * Async Backing: Fix clippy errors in statement distribution branch (#6720) * Integrate `handle_active_leaves_update` * Integrate `share_local_statement`/`handle_backed_candidate_message` * Start hooking up request/response flow * Finish hooking up request/response flow * Limit number of parallel requests in responder * Fix test compilation errors * Fix missing check for prospective parachains mode * Fix some more compile errors * Async Backing: Fix clippy errors in statement distribution branch * Fix some more clippy lints * add tests module * fix warnings in existing tests * create basic test harness * create a test state struct * fmt * create empty cluster & grid modules for tests * some TODOs for cluster test suite * describe test-suite for grid logic * describe request test suite * fix seconding-limit bug * Remove extraneous `pub` This somehow made it into my clippy PR. 
* Fix some test compile warnings * Remove some unneeded `allow`s * adapt some new test helpers from Marcin * add helper for activating a gossip topology * add utility for signing statements * helpers for connecting/disconnecting peers * round out network utilities * fmt * fix bug in initializing validator-meta * fix compilation * implement first cluster test * TODOs for incoming request tests * Remove unneeded `make_committed_candidate` helper * fmt * some more tests for cluster * add a TODO about grid senders * integrate inbound req/res into test harness * polish off initial cluster test suite * keep introduce candidate request * fix tests after introduce candidate request * fmt * Add grid protocol to module docs * Fix comments * Test `backed_in_path_only: true` * Update node/network/protocol/src/lib.rs Co-authored-by: Chris Sosnin <48099298+slumber@users.noreply.github.com> * Update node/network/protocol/src/request_response/mod.rs Co-authored-by: Chris Sosnin <48099298+slumber@users.noreply.github.com> * Mark receiver with `vstaging` * validate grid senders based on manifest kind * fix mask_seconded/valid * fix unwanted-mask check * fix build * resolve todo on leaf mode * Unify protocol naming to vstaging * fmt, fix grid test after topology change * typo Co-authored-by: Chris Sosnin <48099298+slumber@users.noreply.github.com> * address review * adjust comment, make easier to understand * Fix typo --------- Co-authored-by: Marcin S <marcin@bytedude.com> Co-authored-by: Marcin S <marcin@realemail.net> Co-authored-by: Chris Sosnin <48099298+slumber@users.noreply.github.com> Co-authored-by: Chris Sosnin <chris125_@live.com> * miscellaneous fixes to make asynchronous backing work (#6791) * propagate network-protocol-staging feature * add feature to adder-collator as well * allow collation-generation of occupied cores * prospective parachains: special treatment for pending availability candidates * runtime: fetch candidates pending availability * lazily construct PVD 
for pending candidates * fix fallout in prospective parachains hypothetical/select_child * runtime: enact candidates when creating paras-inherent * make tests compile * test pending availability in the scope * add prospective parachains test * fix validity constraints leftovers * drop prints * Fix typos --------- Co-authored-by: Chris Sosnin <chris125_@live.com> Co-authored-by: Marcin S <marcin@realemail.net> * Remove restart from test (#6840) * Async Backing: Statement Distribution Tests (#6755)
clean up some review comments * clean up warnings * Async backing statement distribution: grid tests (#6673) * Add `manifest_import_returns_ok_true` test * cargo fmt * Add pending_communication_receiving_manifest_on_confirmed_candidate * Add `senders_can_provide_manifests_in_acknowledgement` test * Add a couple of tests for pending statements * Add `pending_statements_cleared_when_sending` test * Add `pending_statements_respect_remote_knowledge` test * Refactor group creation in tests * Clarify docs * Address some review comments * Make some clarifications * Fix post-merge errors * Clarify test `senders_can_provide_manifests_in_acknowledgement` * Try writing `pending_statements_are_updated_after_manifest_exchange` * Document "seconding limit" and `reject_overflowing_manifests` test * Test that seconding counts are not updated for validators on error * Fix tests * Fix manifest exchange test * Add more tests in `requests.rs` (#6707) This resolves remaining TODOs in this file. * remove outdated inventory terminology * Async backing statement distribution: `Candidates` tests (#6658) * Async Backing: Fix clippy errors in statement distribution branch (#6720) * Integrate `handle_active_leaves_update` * Integrate `share_local_statement`/`handle_backed_candidate_message` * Start hooking up request/response flow * Finish hooking up request/response flow * Limit number of parallel requests in responder * Fix test compilation errors * Fix missing check for prospective parachains mode * Fix some more compile errors * Async Backing: Fix clippy errors in statement distribution branch * Fix some more clippy lints * add tests module * fix warnings in existing tests * create basic test harness * create a test state struct * fmt * create empty cluster & grid modules for tests * some TODOs for cluster test suite * describe test-suite for grid logic * describe request test suite * fix seconding-limit bug * Remove extraneous `pub` This somehow made it into my clippy PR. 
* Fix some test compile warnings * Remove some unneeded `allow`s * adapt some new test helpers from Marcin * add helper for activating a gossip topology * add utility for signing statements * helpers for connecting/disconnecting peers * round out network utilities * fmt * fix bug in initializing validator-meta * fix compilation * implement first cluster test * TODOs for incoming request tests * Remove unneeded `make_committed_candidate` helper * fmt * Hook up request sender * Add `valid_statement_without_prior_seconded_is_ignored` test * Fix `valid_statement_without_prior_seconded_is_ignored` test * some more tests for cluster * add a TODO about grid senders * integrate inbound req/res into test harness * polish off initial cluster test suite * keep introduce candidate request * fix tests after introduce candidate request * fmt * Add grid protocol to module docs * Remove obsolete test * Fix comments * Test `backed_in_path_only: true` * Update node/network/protocol/src/lib.rs Co-authored-by: Chris Sosnin <48099298+slumber@users.noreply.github.com> * Update node/network/protocol/src/request_response/mod.rs Co-authored-by: Chris Sosnin <48099298+slumber@users.noreply.github.com> * Mark receiver with `vstaging` * First draft of `ensure_seconding_limit_is_respected` test * validate grid senders based on manifest kind * fix mask_seconded/valid * fix unwanted-mask check * fix build * resolve todo on leaf mode * Unify protocol naming to vstaging * Fix `ensure_seconding_limit_is_respected` test * Start `backed_candidate_leads_to_advertisement` test * fmt, fix grid test after topology change * Send Backed notification * Finish `backed_candidate_leads_to_advertisement` test * Finish `peer_reported_for_duplicate_statements` test * Finish `received_advertisement_before_confirmation_leads_to_request` * Add `advertisements_rejected_from_incorrect_peers` test * Add `manifest_rejected_*` tests * Add `manifest_rejected_when_group_does_not_match_para` test * Add 
`local_node_sanity_checks_incoming_requests` test * Add `local_node_respects_statement_mask` test * Add tests where peer is reported for providing invalid signatures * Add `cluster_peer_allowed_to_send_incomplete_statements` test * Add `received_advertisement_after_backing_leads_to_acknowledgement` * Add `received_advertisement_after_confirmation_before_backing` test * peer_reported_for_advertisement_conflicting_with_confirmed_candidate * Add `peer_reported_for_not_enough_statements` test * Add `peer_reported_for_providing_statements_meant_to_be_masked_out` * Add `additional_statements_are_shared_after_manifest_exchange` * Add `grid_statements_imported_to_backing` test * Add `relay_parent_entering_peer_view_leads_to_advertisement` test * Add `advertisement_not_re_sent_when_peer_re_enters_view` test * Update node/network/statement-distribution/src/vstaging/tests/grid.rs Co-authored-by: asynchronous rob <rphmeier@gmail.com> * Resolve TODOs, update test * Address unused code * Add check after every test for unhandled requests * Refactor (`make_dummy_leaf` and `handle_sent_request`) * Refactor (`make_dummy_topology`) * Minor refactor --------- Co-authored-by: Robert Habermeier <rphmeier@gmail.com> Co-authored-by: Chris Sosnin <48099298+slumber@users.noreply.github.com> Co-authored-by: Chris Sosnin <chris125_@live.com> * Fix some clippy lints in tests * Async backing: minor fixes (#6920) * bitfield-distribution test * implicit view tests * Refactor parameters -> params * scheduler: update storage migration (#6963) * update scheduler migration * Adjust weight to account for storage read * Statement Distribution Guide Edits (#7025) * Statement distribution guide edits * Addressed Marcin's comments * Add attested candidate request retry timeouts (#6833) Co-authored-by: Chris Sosnin <48099298+slumber@users.noreply.github.com> Co-authored-by: asynchronous rob <rphmeier@gmail.com> Co-authored-by: Robert Habermeier <rphmeier@gmail.com> Co-authored-by: Chris Sosnin 
<chris125_@live.com> Fix async backing statement distribution tests (#6621) Resolve some todos in async backing statement-distribution branch (#6482) Fix clippy errors in statement distribution branch (#6720) * Async backing: add Prospective Parachains impl guide (#6933) Co-authored-by: Bradley Olson <34992650+BradleyOlson64@users.noreply.github.com> * Updates to Provisioner Guide for Async Backing (#7106) * Initial corrections and clarifications * Partial first draft * Finished first draft * Adding back wrongly removed test bit * fmt * Update roadmap/implementers-guide/src/node/utility/provisioner.md Co-authored-by: Marcin S. <marcin@realemail.net> * Addressing comments * Reorganization * fmt --------- Co-authored-by: Marcin S. <marcin@realemail.net> * fmt * Renaming Parathread Mentions (#7287) * Renaming parathreads * Renaming module to pallet * More updates * PVF: Refactor workers into separate crates, remove host dependency (#7253) * PVF: Refactor workers into separate crates, remove host dependency * Fix compile error * Remove some leftover code * Fix compile errors * Update Cargo.lock * Remove worker main.rs files I accidentally copied these from the other PR. This PR isn't intended to introduce standalone workers yet. 
* Address review comments * cargo fmt * Update a couple of comments * Update log targets * Update quote to 1.0.27 (#7280) Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io> Co-authored-by: parity-processbot <> * pallets: implement `Default` for `GenesisConfig` in `no_std` (#7271) * pallets: implement Default for GenesisConfig in no_std This change is follow-up of: https://github.com/paritytech/substrate/pull/14108 It is a step towards: https://github.com/paritytech/substrate/issues/13334 * Cargo.lock updated * update lockfile for {"substrate"} --------- Co-authored-by: parity-processbot <> * cli: enable BEEFY by default on test networks (#7293) We consider BEEFY mature enough to run by default on all nodes for test networks (Rococo/Wococo/Versi). Right now, most nodes are not running it since it's opt-in using --beefy flag. Switch to an opt-out model for test networks. Replace --beefy flag from CLI with --no-beefy and have BEEFY client start by default on test networks. Signed-off-by: acatangiu <adrian@parity.io> * runtime: past session slashing runtime API (#6667) * runtime/vstaging: unapplied_slashes runtime API * runtime/vstaging: key_ownership_proof runtime API * runtime/ParachainHost: submit_report_dispute_lost * fix key_ownership_proof API * runtime: submit_report_dispute_lost runtime API * nits * Update node/subsystem-types/src/messages.rs Co-authored-by: Marcin S. <marcin@bytedude.com> * revert unrelated fmt changes * post merge fixes * fix compilation --------- Co-authored-by: Marcin S. 
<marcin@bytedude.com> * Correcting git mishap * Document usage of `gum` crate (#7294) * Document usage of gum crate * Small fix * Add some more basic info * Update node/gum/src/lib.rs Co-authored-by: Andrei Sandu <54316454+sandreim@users.noreply.github.com> * Update target docs --------- Co-authored-by: Andrei Sandu <54316454+sandreim@users.noreply.github.com> * XCM: Fix issue with RequestUnlock (#7278) * XCM: Fix issue with RequestUnlock * Leave API changes for v4 * Fix clippy errors * Fix tests --------- Co-authored-by: parity-processbot <> * Companion for Substrate#14228 (#7295) * Companion for Substrate#14228 https://github.com/paritytech/substrate/pull/14228 * update lockfile for {"substrate"} --------- Co-authored-by: parity-processbot <> * Companion for #14237: Use latest sp-crates (#7300) * To revert: Update substrate branch to "lexnv/bump_sp_crates" Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io> * Revert "To revert: Update substrate branch to "lexnv/bump_sp_crates"" This reverts commit 5f1db84eac4a226c37b7f6ce6ee19b49dc7e2008. 
* Update cargo lock Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io> * Update cargo.lock Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io> * Update cargo.lock Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io> --------- Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io> * bounded-collections bump to 0.1.7 (#7305) * bounded-collections bump to 0.1.7 Companion for: paritytech/substrate#14225 * update lockfile for {"substrate"} --------- Co-authored-by: parity-processbot <> * bump to quote 1.0.28 (#7306) * `RollingSessionWindow` cleanup (#7204) * Replace `RollingSessionWindow` with `RuntimeInfo` - initial commit * Fix tests in import * Fix the rest of the tests * Remove dead code * Fix todos * Simplify session caching * Comments for `SessionInfoProvider` * Separate `SessionInfoProvider` from `State` * `cache_session_info_for_head` becomes freestanding function * Remove unneeded `mut` usage * fn session_info -> fn get_session_info() to avoid name clashes. 
The function also tries to initialize `SessionInfoProvider` * Fix SessionInfo retrieval * Code cleanup * Don't wrap `SessionInfoProvider` in an `Option` * Remove `earliest_session()` * Remove pre-caching -> wip * Fix some tests and code cleanup * Fix all tests * Fixes in tests * Fix comments, variable names and small style changes * Fix a warning * impl From<SessionWindowSize> for NonZeroUsize * Fix logging for `get_session_info` - remove redundant logs and decrease log level to DEBUG * Code review feedback * Storage migration removing `COL_SESSION_WINDOW_DATA` from parachains db * Remove `col_session_data` usages * Storage migration clearing columns w/o removing them * Remove session data column usages from `approval-voting` and `dispute-coordinator` tests * Add some test cases from `RollingSessionWindow` to `dispute-coordinator` tests * Fix formatting in initialized.rs * Fix a corner case in `SessionInfo` caching for `dispute-coordinator` * Remove `RollingSessionWindow` ;( * Revert "Fix formatting in initialized.rs" This reverts commit 0f94664ec9f3a7e3737a30291195990e1e7065fc. 
* v2 to v3 migration drops `COL_DISPUTE_COORDINATOR_DATA` instead of clearing it * Fix `NUM_COLUMNS` in `approval-voting` * Use `columns::v3::NUM_COLUMNS` when opening db * Update node/service/src/parachains_db/upgrade.rs Co-authored-by: Andrei Sandu <54316454+sandreim@users.noreply.github.com> * Don't write in `COL_DISPUTE_COORDINATOR_DATA` for `test_rocksdb_migrate_2_to_3` * Fix `NUM+COLUMNS` in approval_voting * Fix formatting * Fix columns usage * Clarification comments about the different db versions --------- Co-authored-by: Andrei Sandu <54316454+sandreim@users.noreply.github.com> * pallet-para-config: Remove remnant WeightInfo functions (#7308) * pallet-para-config: Remove remnant WeightInfo functions Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io> * set_config_with_weight begone Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io> * ".git/.scripts/commands/bench/bench.sh" runtime kusama-dev runtime_parachains::configuration --------- Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io> Co-authored-by: command-bot <> * XCM: PayOverXcm config (#6900) * Move XCM query functionality to trait * Fix tests * Add PayOverXcm implementation * fix the PayOverXcm trait to compile * moved doc comment out of trait implmeentation and to the trait * PayOverXCM documentation * Change documentation a bit * Added empty benchmark methods implementation and changed docs * update PayOverXCM to convert AccountIds to MultiLocations * Implement benchmarking method * Change v3 to latest * Descend origin to an asset sender (#6970) * descend origin to an asset sender * sender as tuple of dest and sender * Add more variants to the QueryResponseStatus enum * Change Beneficiary to Into<[u8; 32]> * update PayOverXcm to return concrete errors and use AccountId as sender * use polkadot-primitives for AccountId * fix dependency to use polkadot-core-primitives * force Unpaid instruction to the top of the instructions list * modify report_outcome to accept 
interior argument * use new_query directly for building final xcm query, instead of report_outcome * fix usage of new_query to use the XcmQueryHandler * fix usage of new_query to use the XcmQueryHandler * tiny method calling fix * xcm query handler (#7198) * drop redundant query status * rename ReportQueryStatus to OuterQueryStatus * revert rename of QueryResponseStatus * update mapping * Update xcm/xcm-builder/src/pay.rs Co-authored-by: Gavin Wood <gavin@parity.io> * Updates * Docs * Fix benchmarking stuff * Destination can be determined based on asset_kind * Tweaking API to minimise clones * Some repotting and docs --------- Co-authored-by: Anthony Alaribe <anthonyalaribe@gmail.com> Co-authored-by: Muharem Ismailov <ismailov.m.h@gmail.com> Co-authored-by: Anthony Alaribe <anthony.alaribe@parity.io> Co-authored-by: Gavin Wood <gavin@parity.io> * Companion for #14265 (#7307) * Update Cargo.lock Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io> * Update Cargo.lock Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io> --------- Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io> Co-authored-by: parity-processbot <> * bump serde to 1.0.163 (#7315) * bump serde to 1.0.163 * bump ci * update lockfile for {"substrate"} --------- Co-authored-by: parity-processbot <> * fmt * Updated fmt * Removing changes accidentally pulled from master * fix another master pull issue * Another master pull fix * fmt * Fixing implementers guide build * Revert "Merge branch 'rh-async-backing-feature-while-frozen' of https://github.com/paritytech/polkadot into brad-rename-parathread" This reverts commit bebc24af52ab61155e3fe02cb3ce66a592bce49c, reversing changes made to 1b2de662dfb11173679d6da5bd0da9d149c85547. 
--------- Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io> Signed-off-by: acatangiu <adrian@parity.io> Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io> Co-authored-by: Marcin S <marcin@realemail.net> Co-authored-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io> Co-authored-by: Michal Kucharczyk <1728078+michalkucharczyk@users.noreply.github.com> Co-authored-by: Adrian Catangiu <adrian@parity.io> Co-authored-by: ordian <write@reusable.software> Co-authored-by: Marcin S. <marcin@bytedude.com> Co-authored-by: Andrei Sandu <54316454+sandreim@users.noreply.github.com> Co-authored-by: Francisco Aguirre <franciscoaguirreperez@gmail.com> Co-authored-by: Bastian Köcher <git@kchr.de> Co-authored-by: Alexandru Vasile <60601340+lexnv@users.noreply.github.com> Co-authored-by: Sam Johnson <sam@durosoft.com> Co-authored-by: Tsvetomir Dimitrov <tsvetomir@parity.io> Co-authored-by: Anthony Alaribe <anthonyalaribe@gmail.com> Co-authored-by: Muharem Ismailov <ismailov.m.h@gmail.com> Co-authored-by: Anthony Alaribe <anthony.alaribe@parity.io> Co-authored-by: Gavin Wood <gavin@parity.io> * fix bitfield distribution test * approval distribution tests * fix bridge tests * update Cargo.lock * [async-backing-branch] Optimize collator-protocol validator-side request fetching (#7457) * Optimize collator-protocol validator-side request fetching * address feedback: replace tuples with structs * feedback: add doc comments * move collation types to subfolder --------- Signed-off-by: alindima <alin@parity.io> * Update collation generation for asynchronous backing (#7405) * break candidate receipt construction and distribution into own function * update implementers' guide to include SubmitCollation * implement SubmitCollation for collation-generation * fmt * fix test compilation & remove unnecessary submodule * add some TODOs for a test suite. 
* Update roadmap/implementers-guide/src/types/overseer-protocol.md Co-authored-by: Andrei Sandu <54316454+sandreim@users.noreply.github.com> * add new test harness and first test * refactor to avoid requiring background sender * ensure collation gets packaged and distributed * tests for the fallback case with no hint * add parent rp-number hint tests * fmt * update uses of CollationGenerationConfig * fix remaining test * address review comments * use subsystemsender for background tasks * fmt * remove ValidationCodeHashHint and related tests --------- Co-authored-by: Andrei Sandu <54316454+sandreim@users.noreply.github.com> * fix some more fallout from merge * fmt * remove staging APIs from Rococo & Westend (#7513) * send network messages on main protocol name (#7515) * misc async backing improvements for allowed ancestry blocks (#7532) * shared: fix acquire_info * backwards-compat test for prospective parachains * same relay parent is allowed * provisioner: request candidate receipt by relay parent (#7527) * return candidates hash from prospective parachains * update provisioner * update tests * guide changes * send a single message to backing * fix test * revert to old `handle_new_activations` logic in some cases (#7514) * revert to old `handle_new_activations` logic * gate sending messages on scheduled cores to max_depth >= 2 * fmt * 2->1 * Omnibus asynchronous backing bugfix PR (#7529) * fix a bug in backing * add some more logs * prospective parachains: take ancestry only up to session bounds * add test * fix zombienet tests (#7614) Signed-off-by: Andrei Sandu <andrei-mihail@parity.io> * fix runtime compilation * make bitfield distribution tests compile * attempt to fix zombienet disputes (#7618) * update metric name * update some metric names * avoid cycles when creating fake candidates * make undying collator more friendly to malformed parents * fix a bug in malus * fmt * clippy * add RUN_IN_CONTAINER to new ZombieNet tests (#7631) * remove duplicated 
migration happened because of master-merge --------- Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io> Signed-off-by: acatangiu <adrian@parity.io> Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io> Signed-off-by: alindima <alin@parity.io> Signed-off-by: Andrei Sandu <andrei-mihail@parity.io> Co-authored-by: Chris Sosnin <chris125_@live.com> Co-authored-by: Parity Bot <admin@parity.io> Co-authored-by: Chris Sosnin <48099298+slumber@users.noreply.github.com> Co-authored-by: Robert Klotzner <robert.klotzner@gmx.at> Co-authored-by: Robert Klotzner <eskimor@users.noreply.github.com> Co-authored-by: Marcin S <marcin@bytedude.com> Co-authored-by: Marcin S <marcin@realemail.net> Co-authored-by: Mattia L.V. Bradascio <28816406+bredamatt@users.noreply.github.com> Co-authored-by: Bradley Olson <34992650+BradleyOlson64@users.noreply.github.com> Co-authored-by: alexgparity <115470171+alexgparity@users.noreply.github.com> Co-authored-by: BradleyOlson64 <lotrftw9@gmail.com> Co-authored-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io> Co-authored-by: Michal Kucharczyk <1728078+michalkucharczyk@users.noreply.github.com> Co-authored-by: Adrian Catangiu <adrian@parity.io> Co-authored-by: ordian <write@reusable.software> Co-authored-by: Andrei Sandu <54316454+sandreim@users.noreply.github.com> Co-authored-by: Francisco Aguirre <franciscoaguirreperez@gmail.com> Co-authored-by: Bastian Köcher <git@kchr.de> Co-authored-by: Alexandru Vasile <60601340+lexnv@users.noreply.github.com> Co-authored-by: Sam Johnson <sam@durosoft.com> Co-authored-by: Tsvetomir Dimitrov <tsvetomir@parity.io> Co-authored-by: Anthony Alaribe <anthonyalaribe@gmail.com> Co-authored-by: Muharem Ismailov <ismailov.m.h@gmail.com> Co-authored-by: Anthony Alaribe <anthony.alaribe@parity.io> Co-authored-by: Gavin Wood <gavin@parity.io> Co-authored-by: Alin Dima <alin@parity.io>
990 lines
29 KiB
Rust
// Copyright (C) Parity Technologies (UK) Ltd.
// This file is part of Polkadot.

// Polkadot is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.

// Polkadot is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.

// You should have received a copy of the GNU General Public License
// along with Polkadot. If not, see <http://www.gnu.org/licenses/>.

//! The statement table: generic implementation.
//!
//! This stores messages other authorities issue about candidates.
//!
//! These messages are used to create a proposal submitted to a BFT consensus process.
//!
//! Each parachain is associated with a committee of authorities, who issue statements
//! indicating whether the candidate is valid or invalid. Once a threshold of the committee
//! has signed validity statements, the candidate may be marked includable.
use std::{
	collections::hash_map::{self, Entry, HashMap},
	fmt::Debug,
	hash::Hash,
};

use primitives::{ValidatorSignature, ValidityAttestation as PrimitiveValidityAttestation};

use parity_scale_codec::{Decode, Encode};

/// Context for the statement table.
pub trait Context {
	/// An authority ID.
	type AuthorityId: Debug + Hash + Eq + Clone;
	/// The digest (hash or other unique attribute) of a candidate.
	type Digest: Debug + Hash + Eq + Clone;
	/// The group ID type.
	type GroupId: Debug + Hash + Ord + Eq + Clone;
	/// A signature type.
	type Signature: Debug + Eq + Clone;
	/// Candidate type. In practice this will be a candidate receipt.
	type Candidate: Debug + Ord + Eq + Clone;

	/// Get the digest of a candidate.
	fn candidate_digest(candidate: &Self::Candidate) -> Self::Digest;

	/// Get the group of a candidate.
	fn candidate_group(candidate: &Self::Candidate) -> Self::GroupId;

	/// Whether an authority is a member of a group.
	/// Members are meant to submit candidates and vote on validity.
	fn is_member_of(&self, authority: &Self::AuthorityId, group: &Self::GroupId) -> bool;

	/// Requisite number of votes for validity from a group.
	fn requisite_votes(&self, group: &Self::GroupId) -> usize;
}

/// Table configuration.
pub struct Config {
	/// When this is true, the table will allow multiple seconded candidates
	/// per authority. This flag means that higher-level code is responsible for
	/// bounding the number of candidates.
	pub allow_multiple_seconded: bool,
}

/// Statements circulated among peers.
#[derive(PartialEq, Eq, Debug, Clone, Encode, Decode)]
pub enum Statement<Candidate, Digest> {
	/// Broadcast by an authority to indicate that this is its candidate for inclusion.
	///
	/// Broadcasting two different candidate messages per round is not allowed.
	#[codec(index = 1)]
	Seconded(Candidate),
	/// Broadcast by an authority to attest that the candidate with given digest is valid.
	#[codec(index = 2)]
	Valid(Digest),
}

/// A signed statement.
#[derive(PartialEq, Eq, Debug, Clone, Encode, Decode)]
pub struct SignedStatement<Candidate, Digest, AuthorityId, Signature> {
	/// The statement.
	pub statement: Statement<Candidate, Digest>,
	/// The signature.
	pub signature: Signature,
	/// The sender.
	pub sender: AuthorityId,
}

/// Misbehavior: voting more than one way on candidate validity.
///
/// Since there are three possible ways to vote, a double vote is possible in
/// three possible combinations (unordered).
#[derive(PartialEq, Eq, Debug, Clone)]
pub enum ValidityDoubleVote<Candidate, Digest, Signature> {
	/// Implicit vote by issuing and explicitly voting validity.
	IssuedAndValidity((Candidate, Signature), (Digest, Signature)),
}

impl<Candidate, Digest, Signature> ValidityDoubleVote<Candidate, Digest, Signature> {
	/// Deconstruct this misbehavior into two `(Statement, Signature)` pairs, erasing the
	/// information about precisely what the problem was.
	pub fn deconstruct<Ctx>(
		self,
	) -> ((Statement<Candidate, Digest>, Signature), (Statement<Candidate, Digest>, Signature))
	where
		Ctx: Context<Candidate = Candidate, Digest = Digest, Signature = Signature>,
		Candidate: Debug + Ord + Eq + Clone,
		Digest: Debug + Hash + Eq + Clone,
		Signature: Debug + Eq + Clone,
	{
		match self {
			Self::IssuedAndValidity((c, s1), (d, s2)) =>
				((Statement::Seconded(c), s1), (Statement::Valid(d), s2)),
		}
	}
}

/// Misbehavior: multiple signatures on the same statement.
#[derive(PartialEq, Eq, Debug, Clone)]
pub enum DoubleSign<Candidate, Digest, Signature> {
	/// On candidate.
	Seconded(Candidate, Signature, Signature),
	/// On validity.
	Validity(Digest, Signature, Signature),
}

impl<Candidate, Digest, Signature> DoubleSign<Candidate, Digest, Signature> {
	/// Deconstruct this misbehavior into a statement with two signatures, erasing the information
	/// about precisely where in the process the issue was detected.
	pub fn deconstruct(self) -> (Statement<Candidate, Digest>, Signature, Signature) {
		match self {
			Self::Seconded(candidate, a, b) => (Statement::Seconded(candidate), a, b),
			Self::Validity(digest, a, b) => (Statement::Valid(digest), a, b),
		}
	}
}

/// Misbehavior: declaring multiple candidates.
#[derive(PartialEq, Eq, Debug, Clone)]
pub struct MultipleCandidates<Candidate, Signature> {
	/// The first candidate seen.
	pub first: (Candidate, Signature),
	/// The second candidate seen.
	pub second: (Candidate, Signature),
}

/// Misbehavior: submitted statement for wrong group.
#[derive(PartialEq, Eq, Debug, Clone)]
pub struct UnauthorizedStatement<Candidate, Digest, AuthorityId, Signature> {
	/// A signed statement which was submitted without proper authority.
	pub statement: SignedStatement<Candidate, Digest, AuthorityId, Signature>,
}

/// Different kinds of misbehavior. All of these kinds of malicious misbehavior
/// are easily provable and extremely disincentivized.
#[derive(PartialEq, Eq, Debug, Clone)]
pub enum Misbehavior<Candidate, Digest, AuthorityId, Signature> {
	/// Voted invalid and valid on validity.
	ValidityDoubleVote(ValidityDoubleVote<Candidate, Digest, Signature>),
	/// Submitted multiple candidates.
	MultipleCandidates(MultipleCandidates<Candidate, Signature>),
	/// Submitted a message that was unauthorized.
	UnauthorizedStatement(UnauthorizedStatement<Candidate, Digest, AuthorityId, Signature>),
	/// Submitted two valid signatures for the same message.
	DoubleSign(DoubleSign<Candidate, Digest, Signature>),
}

/// Type alias for misbehavior corresponding to context type.
pub type MisbehaviorFor<Ctx> = Misbehavior<
	<Ctx as Context>::Candidate,
	<Ctx as Context>::Digest,
	<Ctx as Context>::AuthorityId,
	<Ctx as Context>::Signature,
>;

// Kinds of votes for validity on a particular candidate.
#[derive(Clone, PartialEq, Eq)]
enum ValidityVote<Signature: Eq + Clone> {
	// Implicit validity vote.
	Issued(Signature),
	// Direct validity vote.
	Valid(Signature),
}

/// A summary of import of a statement.
#[derive(Clone, PartialEq, Eq, Debug)]
pub struct Summary<Digest, Group> {
	/// The digest of the candidate referenced.
	pub candidate: Digest,
	/// The group that the candidate is in.
	pub group_id: Group,
	/// How many validity votes are currently witnessed.
	pub validity_votes: usize,
}

/// A validity attestation.
#[derive(Clone, PartialEq, Decode, Encode)]
pub enum ValidityAttestation<Signature> {
	/// Implicit validity attestation by issuing.
	/// This corresponds to issuance of a `Candidate` statement.
	Implicit(Signature),
	/// An explicit attestation. This corresponds to issuance of a
	/// `Valid` statement.
	Explicit(Signature),
}

impl Into<PrimitiveValidityAttestation> for ValidityAttestation<ValidatorSignature> {
	fn into(self) -> PrimitiveValidityAttestation {
		match self {
			Self::Implicit(s) => PrimitiveValidityAttestation::Implicit(s),
			Self::Explicit(s) => PrimitiveValidityAttestation::Explicit(s),
		}
	}
}

/// An attested-to candidate.
#[derive(Clone, PartialEq, Decode, Encode)]
pub struct AttestedCandidate<Group, Candidate, AuthorityId, Signature> {
	/// The group ID that the candidate is in.
	pub group_id: Group,
	/// The candidate data.
	pub candidate: Candidate,
	/// Validity attestations.
	pub validity_votes: Vec<(AuthorityId, ValidityAttestation<Signature>)>,
}

/// Stores votes and data about a candidate.
pub struct CandidateData<Ctx: Context> {
	group_id: Ctx::GroupId,
	candidate: Ctx::Candidate,
	validity_votes: HashMap<Ctx::AuthorityId, ValidityVote<Ctx::Signature>>,
}

impl<Ctx: Context> CandidateData<Ctx> {
	/// Yield a full attestation for a candidate.
	/// If the candidate can be included, it will return `Some`.
	pub fn attested(
		&self,
		validity_threshold: usize,
	) -> Option<AttestedCandidate<Ctx::GroupId, Ctx::Candidate, Ctx::AuthorityId, Ctx::Signature>> {
		let valid_votes = self.validity_votes.len();
		if valid_votes < validity_threshold {
			return None
		}

		let validity_votes = self
			.validity_votes
			.iter()
			.map(|(a, v)| match *v {
				ValidityVote::Valid(ref s) => (a.clone(), ValidityAttestation::Explicit(s.clone())),
				ValidityVote::Issued(ref s) =>
					(a.clone(), ValidityAttestation::Implicit(s.clone())),
			})
			.collect();

		Some(AttestedCandidate {
			group_id: self.group_id.clone(),
			candidate: self.candidate.clone(),
			validity_votes,
		})
	}

	fn summary(&self, digest: Ctx::Digest) -> Summary<Ctx::Digest, Ctx::GroupId> {
		Summary {
			candidate: digest,
			group_id: self.group_id.clone(),
			validity_votes: self.validity_votes.len(),
		}
	}
}

// authority metadata
struct AuthorityData<Ctx: Context> {
	proposals: Vec<(Ctx::Digest, Ctx::Signature)>,
}

impl<Ctx: Context> Default for AuthorityData<Ctx> {
	fn default() -> Self {
		AuthorityData { proposals: Vec::new() }
	}
}

/// Type alias for the result of a statement import.
pub type ImportResult<Ctx> = Result<
	Option<Summary<<Ctx as Context>::Digest, <Ctx as Context>::GroupId>>,
	MisbehaviorFor<Ctx>,
>;
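`ImportResult` is a tri-state: `Ok(Some(_))` carries a fresh summary, `Ok(None)` is a no-op (duplicate statement or unknown candidate), and `Err(_)` carries a misbehavior report for the sender. A self-contained sketch (simplified stand-in types, not the crate's real ones) of how a caller distinguishes the three cases:

```rust
// Simplified stand-ins: a vote count instead of `Summary`, a string instead
// of `MisbehaviorFor`.
#[derive(Debug, PartialEq)]
enum Outcome {
	Summarized(usize), // fresh summary: current validity vote count
	Ignored,           // duplicate or unknown candidate
	Punished,          // provable misbehavior, to be recorded
}

fn handle(res: Result<Option<usize>, &'static str>) -> Outcome {
	match res {
		Ok(Some(votes)) => Outcome::Summarized(votes),
		Ok(None) => Outcome::Ignored,
		Err(_misbehavior) => Outcome::Punished,
	}
}

fn main() {
	assert_eq!(handle(Ok(Some(2))), Outcome::Summarized(2));
	assert_eq!(handle(Ok(None)), Outcome::Ignored);
	assert_eq!(handle(Err("double sign")), Outcome::Punished);
	println!("ok");
}
```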

/// Stores votes and detected misbehavior for candidates and authorities.
pub struct Table<Ctx: Context> {
	authority_data: HashMap<Ctx::AuthorityId, AuthorityData<Ctx>>,
	detected_misbehavior: HashMap<Ctx::AuthorityId, Vec<MisbehaviorFor<Ctx>>>,
	candidate_votes: HashMap<Ctx::Digest, CandidateData<Ctx>>,
	config: Config,
}

impl<Ctx: Context> Table<Ctx> {
	/// Create a new `Table` from a `Config`.
	pub fn new(config: Config) -> Self {
		Table {
			authority_data: HashMap::default(),
			detected_misbehavior: HashMap::default(),
			candidate_votes: HashMap::default(),
			config,
		}
	}

	/// Get the attested candidate for `digest`.
	///
	/// Returns `Some(_)` if the candidate exists and is includable.
	pub fn attested_candidate(
		&self,
		digest: &Ctx::Digest,
		context: &Ctx,
	) -> Option<AttestedCandidate<Ctx::GroupId, Ctx::Candidate, Ctx::AuthorityId, Ctx::Signature>> {
		self.candidate_votes.get(digest).and_then(|data| {
			let v_threshold = context.requisite_votes(&data.group_id);
			data.attested(v_threshold)
		})
	}

	/// Import a signed statement. Signatures should be checked for validity, and the
	/// sender should be checked to actually be an authority.
	///
	/// Validity and invalidity statements are only valid if the corresponding
	/// candidate has already been imported.
	///
	/// If this returns `None`, the statement was either a duplicate or invalid.
	pub fn import_statement(
		&mut self,
		context: &Ctx,
		statement: SignedStatement<Ctx::Candidate, Ctx::Digest, Ctx::AuthorityId, Ctx::Signature>,
	) -> Option<Summary<Ctx::Digest, Ctx::GroupId>> {
		let SignedStatement { statement, signature, sender: signer } = statement;

		let res = match statement {
			Statement::Seconded(candidate) =>
				self.import_candidate(context, signer.clone(), candidate, signature),
			Statement::Valid(digest) =>
				self.validity_vote(context, signer.clone(), digest, ValidityVote::Valid(signature)),
		};

		match res {
			Ok(maybe_summary) => maybe_summary,
			Err(misbehavior) => {
				// All misbehavior in agreement is provable and actively malicious.
				// Punishments may be cumulative.
				self.detected_misbehavior.entry(signer).or_default().push(misbehavior);
				None
			},
		}
	}

	/// Get a candidate by digest.
	pub fn get_candidate(&self, digest: &Ctx::Digest) -> Option<&Ctx::Candidate> {
		self.candidate_votes.get(digest).map(|d| &d.candidate)
	}

	/// Access all witnessed misbehavior.
	pub fn get_misbehavior(&self) -> &HashMap<Ctx::AuthorityId, Vec<MisbehaviorFor<Ctx>>> {
		&self.detected_misbehavior
	}

	/// Create a draining iterator of misbehaviors.
	///
	/// This consumes all detected misbehaviors, even if the iterator is not completely consumed.
	pub fn drain_misbehaviors(&mut self) -> DrainMisbehaviors<'_, Ctx> {
		self.detected_misbehavior.drain().into()
	}

	fn import_candidate(
		&mut self,
		context: &Ctx,
		authority: Ctx::AuthorityId,
		candidate: Ctx::Candidate,
		signature: Ctx::Signature,
	) -> ImportResult<Ctx> {
		let group = Ctx::candidate_group(&candidate);
		if !context.is_member_of(&authority, &group) {
			return Err(Misbehavior::UnauthorizedStatement(UnauthorizedStatement {
				statement: SignedStatement {
					signature,
					statement: Statement::Seconded(candidate),
					sender: authority,
				},
			}))
		}

		// check that authority hasn't already specified another candidate.
		let digest = Ctx::candidate_digest(&candidate);

		let new_proposal = match self.authority_data.entry(authority.clone()) {
			Entry::Occupied(mut occ) => {
				// if digest is different, fetch candidate and
				// note misbehavior.
				let existing = occ.get_mut();

				if !self.config.allow_multiple_seconded && existing.proposals.len() == 1 {
					let (old_digest, old_sig) = &existing.proposals[0];

					if old_digest != &digest {
						const EXISTENCE_PROOF: &str =
							"when proposal first received from authority, candidate \
							votes entry is created. proposal here is `Some`, therefore \
							candidate votes entry exists; qed";

						let old_candidate = self
							.candidate_votes
							.get(old_digest)
							.expect(EXISTENCE_PROOF)
							.candidate
							.clone();

						return Err(Misbehavior::MultipleCandidates(MultipleCandidates {
							first: (old_candidate, old_sig.clone()),
							second: (candidate, signature.clone()),
						}))
					}

					false
				} else if self.config.allow_multiple_seconded &&
					existing.proposals.iter().any(|(ref od, _)| od == &digest)
				{
					false
				} else {
					existing.proposals.push((digest.clone(), signature.clone()));
					true
				}
			},
			Entry::Vacant(vacant) => {
				vacant
					.insert(AuthorityData { proposals: vec![(digest.clone(), signature.clone())] });
				true
			},
		};

		// NOTE: altering this code may affect the existence proof above. ensure it remains
		// valid.
		if new_proposal {
			self.candidate_votes
				.entry(digest.clone())
				.or_insert_with(move || CandidateData {
					group_id: group,
					candidate,
					validity_votes: HashMap::new(),
				});
		}

		self.validity_vote(context, authority, digest, ValidityVote::Issued(signature))
	}

	fn validity_vote(
		&mut self,
		context: &Ctx,
		from: Ctx::AuthorityId,
		digest: Ctx::Digest,
		vote: ValidityVote<Ctx::Signature>,
	) -> ImportResult<Ctx> {
		let votes = match self.candidate_votes.get_mut(&digest) {
			None => return Ok(None),
			Some(votes) => votes,
		};

		// check that this authority actually can vote in this group.
		if !context.is_member_of(&from, &votes.group_id) {
			let sig = match vote {
				ValidityVote::Valid(s) => s,
				ValidityVote::Issued(_) => panic!(
					"implicit issuance vote only cast from `import_candidate` after \
					checking group membership of issuer; qed"
				),
			};

			return Err(Misbehavior::UnauthorizedStatement(UnauthorizedStatement {
				statement: SignedStatement {
					signature: sig,
					sender: from,
					statement: Statement::Valid(digest),
				},
			}))
		}

		// check for double votes.
		match votes.validity_votes.entry(from.clone()) {
			Entry::Occupied(occ) => {
				let make_vdv = |v| Misbehavior::ValidityDoubleVote(v);
				let make_ds = |ds| Misbehavior::DoubleSign(ds);
				return if occ.get() != &vote {
					Err(match (occ.get().clone(), vote) {
						// valid vote conflicting with candidate statement
						(ValidityVote::Issued(iss), ValidityVote::Valid(good)) |
						(ValidityVote::Valid(good), ValidityVote::Issued(iss)) =>
							make_vdv(ValidityDoubleVote::IssuedAndValidity(
								(votes.candidate.clone(), iss),
								(digest, good),
							)),

						// two signatures on same candidate
						(ValidityVote::Issued(a), ValidityVote::Issued(b)) =>
							make_ds(DoubleSign::Seconded(votes.candidate.clone(), a, b)),

						// two signatures on same validity vote
						(ValidityVote::Valid(a), ValidityVote::Valid(b)) =>
							make_ds(DoubleSign::Validity(digest, a, b)),
					})
				} else {
					Ok(None)
				}
			},
			Entry::Vacant(vacant) => {
				vacant.insert(vote);
			},
		}

		Ok(Some(votes.summary(digest)))
	}
}

type Drain<'a, Ctx> = hash_map::Drain<'a, <Ctx as Context>::AuthorityId, Vec<MisbehaviorFor<Ctx>>>;

struct MisbehaviorForAuthority<Ctx: Context> {
	id: Ctx::AuthorityId,
	misbehaviors: Vec<MisbehaviorFor<Ctx>>,
}

impl<Ctx: Context> From<(Ctx::AuthorityId, Vec<MisbehaviorFor<Ctx>>)>
	for MisbehaviorForAuthority<Ctx>
{
	fn from((id, mut misbehaviors): (Ctx::AuthorityId, Vec<MisbehaviorFor<Ctx>>)) -> Self {
		// We're going to be popping items off this list in the iterator, so reverse it now to
		// preserve the original ordering.
		misbehaviors.reverse();
		Self { id, misbehaviors }
	}
}

impl<Ctx: Context> Iterator for MisbehaviorForAuthority<Ctx> {
	type Item = (Ctx::AuthorityId, MisbehaviorFor<Ctx>);

	fn next(&mut self) -> Option<Self::Item> {
		self.misbehaviors.pop().map(|misbehavior| (self.id.clone(), misbehavior))
	}
}

/// A draining iterator over detected misbehavior, yielded as
/// `(AuthorityId, Misbehavior)` pairs.
pub struct DrainMisbehaviors<'a, Ctx: Context> {
	drain: Drain<'a, Ctx>,
	in_progress: Option<MisbehaviorForAuthority<Ctx>>,
}

impl<'a, Ctx: Context> From<Drain<'a, Ctx>> for DrainMisbehaviors<'a, Ctx> {
	fn from(drain: Drain<'a, Ctx>) -> Self {
		Self { drain, in_progress: None }
	}
}

impl<'a, Ctx: Context> DrainMisbehaviors<'a, Ctx> {
	fn maybe_item(&mut self) -> Option<(Ctx::AuthorityId, MisbehaviorFor<Ctx>)> {
		self.in_progress.as_mut().and_then(Iterator::next)
	}
}

impl<'a, Ctx: Context> Iterator for DrainMisbehaviors<'a, Ctx> {
	type Item = (Ctx::AuthorityId, MisbehaviorFor<Ctx>);

	fn next(&mut self) -> Option<Self::Item> {
		// Note: this implementation will prematurely return `None` if `self.drain.next()` ever
		// returns a tuple whose vector is empty. That will never currently happen, as the only
		// modification to the backing map is currently via `drain` and
		// `entry(...).or_default().push(...)`. However, future code changes might change that
		// property.
		self.maybe_item().or_else(|| {
			self.in_progress = self.drain.next().map(Into::into);
			self.maybe_item()
		})
	}
}
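The drain machinery above flattens a drained `HashMap<AuthorityId, Vec<Misbehavior>>` into `(id, misbehavior)` pairs, reversing each vector up front so that popping off the back preserves insertion order. A standalone, eager sketch of the same pattern (hypothetical `drain_flat` helper, not part of this crate):

```rust
use std::collections::HashMap;

// Drain a map of per-key vectors into flat (key, value) pairs, keeping the
// per-key insertion order. Eager version of the lazy iterator above.
fn drain_flat<K: Clone + std::hash::Hash + Eq, V>(
	map: &mut HashMap<K, Vec<V>>,
) -> Vec<(K, V)> {
	let mut out = Vec::new();
	for (k, mut vs) in map.drain() {
		vs.reverse(); // so that pop() yields items in insertion order
		while let Some(v) = vs.pop() {
			out.push((k.clone(), v));
		}
	}
	out
}

fn main() {
	let mut map = HashMap::new();
	map.entry("a").or_insert_with(Vec::new).extend([1, 2, 3]);
	let drained = drain_flat(&mut map);
	assert_eq!(drained, vec![("a", 1), ("a", 2), ("a", 3)]);
	assert!(map.is_empty()); // drain consumes the backing map
	println!("ok");
}
```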

#[cfg(test)]
mod tests {
	use super::*;
	use std::collections::HashMap;

	fn create_single_seconded<Ctx: Context>() -> Table<Ctx> {
		Table::new(Config { allow_multiple_seconded: false })
	}

	fn create_many_seconded<Ctx: Context>() -> Table<Ctx> {
		Table::new(Config { allow_multiple_seconded: true })
	}

	#[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)]
	struct AuthorityId(usize);

	#[derive(Debug, Copy, Clone, Hash, PartialOrd, Ord, PartialEq, Eq)]
	struct GroupId(usize);

	// group, body
	#[derive(Debug, Copy, Clone, Hash, PartialOrd, Ord, PartialEq, Eq)]
	struct Candidate(usize, usize);

	#[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)]
	struct Signature(usize);

	#[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)]
	struct Digest(usize);

	#[derive(Debug, PartialEq, Eq)]
	struct TestContext {
		// validator -> parachain group
		authorities: HashMap<AuthorityId, GroupId>,
	}

	impl Context for TestContext {
		type AuthorityId = AuthorityId;
		type Digest = Digest;
		type Candidate = Candidate;
		type GroupId = GroupId;
		type Signature = Signature;

		fn candidate_digest(candidate: &Candidate) -> Digest {
			Digest(candidate.1)
		}

		fn candidate_group(candidate: &Candidate) -> GroupId {
			GroupId(candidate.0)
		}

		fn is_member_of(&self, authority: &AuthorityId, group: &GroupId) -> bool {
			self.authorities.get(authority).map(|v| v == group).unwrap_or(false)
		}

		fn requisite_votes(&self, id: &GroupId) -> usize {
			let mut total_validity = 0;

			for validity in self.authorities.values() {
				if validity == id {
					total_validity += 1
				}
			}

			total_validity / 2 + 1
		}
	}

	#[test]
	fn submitting_two_candidates_can_be_misbehavior() {
		let context = TestContext {
			authorities: {
				let mut map = HashMap::new();
				map.insert(AuthorityId(1), GroupId(2));
				map
			},
		};

		let mut table = create_single_seconded();
		let statement_a = SignedStatement {
			statement: Statement::Seconded(Candidate(2, 100)),
			signature: Signature(1),
			sender: AuthorityId(1),
		};

		let statement_b = SignedStatement {
			statement: Statement::Seconded(Candidate(2, 999)),
			signature: Signature(1),
			sender: AuthorityId(1),
		};

		table.import_statement(&context, statement_a);
		assert!(!table.detected_misbehavior.contains_key(&AuthorityId(1)));

		table.import_statement(&context, statement_b);
		assert_eq!(
			table.detected_misbehavior[&AuthorityId(1)][0],
			Misbehavior::MultipleCandidates(MultipleCandidates {
				first: (Candidate(2, 100), Signature(1)),
				second: (Candidate(2, 999), Signature(1)),
			})
		);
	}

	#[test]
	fn submitting_two_candidates_can_be_allowed() {
		let context = TestContext {
			authorities: {
				let mut map = HashMap::new();
				map.insert(AuthorityId(1), GroupId(2));
				map
			},
		};

		let mut table = create_many_seconded();
		let statement_a = SignedStatement {
			statement: Statement::Seconded(Candidate(2, 100)),
			signature: Signature(1),
			sender: AuthorityId(1),
		};

		let statement_b = SignedStatement {
			statement: Statement::Seconded(Candidate(2, 999)),
			signature: Signature(1),
			sender: AuthorityId(1),
		};

		table.import_statement(&context, statement_a);
		assert!(!table.detected_misbehavior.contains_key(&AuthorityId(1)));

		table.import_statement(&context, statement_b);
		assert!(!table.detected_misbehavior.contains_key(&AuthorityId(1)));
	}

	#[test]
	fn submitting_candidate_from_wrong_group_is_misbehavior() {
		let context = TestContext {
			authorities: {
				let mut map = HashMap::new();
				map.insert(AuthorityId(1), GroupId(3));
				map
			},
		};

		let mut table = create_single_seconded();
		let statement = SignedStatement {
			statement: Statement::Seconded(Candidate(2, 100)),
			signature: Signature(1),
			sender: AuthorityId(1),
		};

		table.import_statement(&context, statement);

		assert_eq!(
			table.detected_misbehavior[&AuthorityId(1)][0],
			Misbehavior::UnauthorizedStatement(UnauthorizedStatement {
				statement: SignedStatement {
					statement: Statement::Seconded(Candidate(2, 100)),
					signature: Signature(1),
					sender: AuthorityId(1),
				},
			})
		);
	}

	#[test]
	fn unauthorized_votes() {
		let context = TestContext {
			authorities: {
				let mut map = HashMap::new();
				map.insert(AuthorityId(1), GroupId(2));
				map.insert(AuthorityId(2), GroupId(3));
				map
			},
		};

		let mut table = create_single_seconded();

		let candidate_a = SignedStatement {
			statement: Statement::Seconded(Candidate(2, 100)),
			signature: Signature(1),
			sender: AuthorityId(1),
		};
		let candidate_a_digest = Digest(100);

		table.import_statement(&context, candidate_a);
		assert!(!table.detected_misbehavior.contains_key(&AuthorityId(1)));
		assert!(!table.detected_misbehavior.contains_key(&AuthorityId(2)));

		// Authority 2 votes for validity on 1's candidate, but is not a member
		// of the candidate's group.
		let bad_validity_vote = SignedStatement {
			statement: Statement::Valid(candidate_a_digest),
			signature: Signature(2),
			sender: AuthorityId(2),
		};
		table.import_statement(&context, bad_validity_vote);

		assert_eq!(
			table.detected_misbehavior[&AuthorityId(2)][0],
			Misbehavior::UnauthorizedStatement(UnauthorizedStatement {
				statement: SignedStatement {
					statement: Statement::Valid(candidate_a_digest),
					signature: Signature(2),
					sender: AuthorityId(2),
				},
			})
		);
	}

	#[test]
	fn candidate_double_signature_is_misbehavior() {
		let context = TestContext {
			authorities: {
				let mut map = HashMap::new();
				map.insert(AuthorityId(1), GroupId(2));
				map.insert(AuthorityId(2), GroupId(2));
				map
			},
		};

		let mut table = create_single_seconded();
		let statement = SignedStatement {
			statement: Statement::Seconded(Candidate(2, 100)),
			signature: Signature(1),
			sender: AuthorityId(1),
		};

		table.import_statement(&context, statement);
		assert!(!table.detected_misbehavior.contains_key(&AuthorityId(1)));

		let invalid_statement = SignedStatement {
			statement: Statement::Seconded(Candidate(2, 100)),
			signature: Signature(999),
			sender: AuthorityId(1),
		};

		table.import_statement(&context, invalid_statement);
		assert!(table.detected_misbehavior.contains_key(&AuthorityId(1)));
	}

	#[test]
	fn issue_and_vote_is_misbehavior() {
		let context = TestContext {
			authorities: {
				let mut map = HashMap::new();
				map.insert(AuthorityId(1), GroupId(2));
				map
			},
		};

		let mut table = create_single_seconded();
		let statement = SignedStatement {
			statement: Statement::Seconded(Candidate(2, 100)),
			signature: Signature(1),
			sender: AuthorityId(1),
		};
		let candidate_digest = Digest(100);

		table.import_statement(&context, statement);
		assert!(!table.detected_misbehavior.contains_key(&AuthorityId(1)));

		let extra_vote = SignedStatement {
			statement: Statement::Valid(candidate_digest),
			signature: Signature(1),
			sender: AuthorityId(1),
		};

		table.import_statement(&context, extra_vote);
		assert_eq!(
			table.detected_misbehavior[&AuthorityId(1)][0],
			Misbehavior::ValidityDoubleVote(ValidityDoubleVote::IssuedAndValidity(
				(Candidate(2, 100), Signature(1)),
				(Digest(100), Signature(1)),
			))
		);
	}

	#[test]
	fn candidate_attested_works() {
		let validity_threshold = 6;

		let mut candidate = CandidateData::<TestContext> {
			group_id: GroupId(4),
			candidate: Candidate(4, 12345),
			validity_votes: HashMap::new(),
		};

		assert!(candidate.attested(validity_threshold).is_none());

		for i in 0..validity_threshold {
			candidate
				.validity_votes
				.insert(AuthorityId(i + 100), ValidityVote::Valid(Signature(i + 100)));
		}

		assert!(candidate.attested(validity_threshold).is_some());

		candidate.validity_votes.insert(
			AuthorityId(validity_threshold + 100),
			ValidityVote::Valid(Signature(validity_threshold + 100)),
		);

		assert!(candidate.attested(validity_threshold).is_some());
	}

	#[test]
	fn includability_works() {
		let context = TestContext {
			authorities: {
				let mut map = HashMap::new();
				map.insert(AuthorityId(1), GroupId(2));
				map.insert(AuthorityId(2), GroupId(2));
				map.insert(AuthorityId(3), GroupId(2));
				map
			},
		};

		// have 2/3 validity guarantors note validity.
		let mut table = create_single_seconded();
		let statement = SignedStatement {
			statement: Statement::Seconded(Candidate(2, 100)),
			signature: Signature(1),
			sender: AuthorityId(1),
		};
		let candidate_digest = Digest(100);

		table.import_statement(&context, statement);

		assert!(!table.detected_misbehavior.contains_key(&AuthorityId(1)));
		assert!(table.attested_candidate(&candidate_digest, &context).is_none());

		let vote = SignedStatement {
			statement: Statement::Valid(candidate_digest),
			signature: Signature(2),
			sender: AuthorityId(2),
		};

		table.import_statement(&context, vote);
		assert!(!table.detected_misbehavior.contains_key(&AuthorityId(2)));
		assert!(table.attested_candidate(&candidate_digest, &context).is_some());
	}

	#[test]
	fn candidate_import_gives_summary() {
		let context = TestContext {
			authorities: {
				let mut map = HashMap::new();
				map.insert(AuthorityId(1), GroupId(2));
				map
			},
		};

		let mut table = create_single_seconded();
		let statement = SignedStatement {
			statement: Statement::Seconded(Candidate(2, 100)),
			signature: Signature(1),
			sender: AuthorityId(1),
		};

		let summary = table
			.import_statement(&context, statement)
			.expect("candidate import to give summary");

		assert_eq!(summary.candidate, Digest(100));
		assert_eq!(summary.group_id, GroupId(2));
		assert_eq!(summary.validity_votes, 1);
	}

	#[test]
	fn candidate_vote_gives_summary() {
		let context = TestContext {
			authorities: {
				let mut map = HashMap::new();
				map.insert(AuthorityId(1), GroupId(2));
				map.insert(AuthorityId(2), GroupId(2));
				map
			},
		};

		let mut table = create_single_seconded();
		let statement = SignedStatement {
			statement: Statement::Seconded(Candidate(2, 100)),
			signature: Signature(1),
			sender: AuthorityId(1),
		};
		let candidate_digest = Digest(100);

		table.import_statement(&context, statement);
		assert!(!table.detected_misbehavior.contains_key(&AuthorityId(1)));

		let vote = SignedStatement {
			statement: Statement::Valid(candidate_digest),
			signature: Signature(2),
			sender: AuthorityId(2),
		};

		let summary =
			table.import_statement(&context, vote).expect("candidate vote to give summary");

		assert!(!table.detected_misbehavior.contains_key(&AuthorityId(2)));

		assert_eq!(summary.candidate, Digest(100));
		assert_eq!(summary.group_id, GroupId(2));
		assert_eq!(summary.validity_votes, 2);
	}
}