Collators need to join the validation network to tell their connected
relay-chain peers which leaf they listen on. This is required so that the
parachain validators also send their signed statements to the collators.
* update local chain name in docker-compose and docs
The name of the local network changed from `local` to `polkadot-local`,
which broke some local test environments; ticket #965 was created for
this.
* use the --alice CLI flag in local dev,
as this directly adds the required keys to the keystore
Co-authored-by: Christian Seidemann <christian.seidemann@t-systems.com>
* service/src/lib: Enable authority discovery on sentry nodes
When run as a sentry node, the authority discovery module does not
publish any addresses to the DHT, but still discovers validators and
the sentry nodes of validators.
* bin/node/cli/src/service: Use expressions instead of statements
* Cargo.lock: Run `cargo update`
* service/src/lib: Fix compile error
* Adds an offchain call to submit double vote reports
* Some tweaks
* Remove unnecessary IdentifyAccount impls
* Adds ValidateDoubleVoteReports to test runtime
* sp-application-crypto is only a dev dependency
* Initial draft
* More work
* Build
* Docs
* Insert westend keys
* Add badBlock to fork from old chain
* Updated spec to reset westend
* Use raw spec
* Fix spec format and use westend2 for both IDs
* Correct public key for bootnode 3
* Build
* Extra space
* Fix build
* Lock
* Update lock
* Fixes
* Fix for the startup text
* Bump
Co-authored-by: Gav Wood <gavin@parity.io>
* Use tempdir for tests
* Rename tmp to tempdir
* Update Cargo.lock
* Update expect error message in run_command_and_kill
* Call with .path() rather than creating a string slice
* Call tempdir with arg instead of args
* Update tests/purge_chain_works.rs
Co-authored-by: Bastian Köcher <bkchr@users.noreply.github.com>
* upgrade primitives to allow changing validation function
* set up storage schema for old parachains code
* fix compilation errors
* fix test compilation
* add some tests for past code meta
* most of the runtime logic for code upgrades
* implement old-code pruning
* add a couple tests
* clean up remaining TODOs
* add a whole bunch of tests for runtime functionality
* remove unused function
* fix runtime compilation
* extract some primitives to parachain crate
* add validation-code upgrades to validation params and result
* extend validation params with code upgrade fields
* provide maximums to validation params
* port test-parachains
* add a code-upgrader test-parachain and tests
* fix collator tests
* move test-parachains to own folder to work around compilation errors
* fix test compilation
* update the Cargo.lock
* fix parachains tests
* remove dbg! invocation
* use new pool in code-upgrader
* bump lockfile
* link TODO to issue
* Ensure that table router is always built
This PR ensures that the table router is always built, i.e. the future is
resolved. This is important, as the table router internally spawns tasks
to handle gossip messages. Handling gossip messages is required not only
on parachain validators, but also on relay-chain validators in order to
receive collations.
Tests are added to ensure that these assumptions hold.
* Fix compilation
* Switch to closures
* Remove empty line
* Revert "Remove empty line"
This reverts commit 0d4aaba1780aec1c8d61e1d5dcf7768918af02d9.
* Revert "Switch to closures"
This reverts commit d128c4ecc02c911552a3bfd2142b5a4f7b1338ba.
* Hybrid approach
* Rename test
* Make trait crate local
* Companion PR to splitting Roles
* Fix network tests
* Fix service build
* Even more fixing
* Oops, quick fix
* use is_network_authority in grandpa service config
Co-authored-by: André Silva <andre.beat@gmail.com>
* add dummy parachains.toml
* flesh out parachains.toml
* finish phase-1 rendering
* render to svg instead
* put graphviz SVG through sanitizer so GitHub can render it
* return to PNG
Up to now, consensus instances used the main channel to communicate with
the background network worker. This led to a race condition when
sending a local collation and then dropping the router before driving the
send-local-collation future to completion. This PR changes the
communication between the worker and the instances to use their own
channels. This has the advantage that we no longer need an extra
`DropConsensusNetworking` message, as the network is dropped
automatically when the last sender is dropped.