Setting the weight to zero may be a source of problems: a rogue validator
can shove in a lot of duplicated votes and thus fill a block with work
that may well exceed the weight limit.
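A minimal, self-contained sketch of the failure mode, with made-up numbers
(`BLOCK_WEIGHT_LIMIT`, the vote count) rather than the real weight
accounting:

```rust
/// Illustrative only: if each vote is weighed as zero, the accumulated
/// block weight never grows, so the "block full" check never trips no
/// matter how many duplicated votes are shoved in.
fn main() {
    const BLOCK_WEIGHT_LIMIT: u64 = 2_000_000_000;
    let vote_weight: u64 = 0; // the problematic zero weight
    let mut used: u64 = 0;

    for _ in 0..1_000_000 {
        // Every duplicated vote is accepted because it "costs" nothing.
        used = used.saturating_add(vote_weight);
    }
    assert!(used < BLOCK_WEIGHT_LIMIT); // the block never appears full
}
```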
This commit refactors the consistency checks. Instead of each individual
setter performing its checks locally, we delegate those checks to the
already existing function `check_consistency`. This removes duplication
and simplifies the logic.
A motivating example is the next PR in the stack, which introduces a
check for a field whose validity depends on the validity of two other
fields. Without this refactoring we would have to place a check not only
on the field in question, but also on the other two fields, so that
changing them does not violate the consistency criteria. It's easy to
imagine how unwieldy this can get as the number of checks grows.
This also adds a test that verifies that the default chain spec host
configuration is consistent.
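A simplified sketch of the shape of this refactoring; the fields and
error variant here are illustrative stand-ins, not the real
`HostConfiguration`:

```rust
/// Stand-in for the real host configuration (fields are illustrative).
#[derive(Clone)]
struct HostConfiguration {
    max_code_size: u32,
    max_pov_size: u32,
}

#[derive(Debug)]
enum InconsistentError {
    ZeroMaxCodeSize,
}

impl HostConfiguration {
    /// All invariants live in one place instead of being duplicated
    /// across every setter.
    fn check_consistency(&self) -> Result<(), InconsistentError> {
        if self.max_code_size == 0 {
            return Err(InconsistentError::ZeroMaxCodeSize);
        }
        Ok(())
    }
}

/// Every setter mutates a copy and then runs the shared consistency check.
fn update_config(
    current: &HostConfiguration,
    mutate: impl FnOnce(&mut HostConfiguration),
) -> Result<HostConfiguration, InconsistentError> {
    let mut next = current.clone();
    mutate(&mut next);
    next.check_consistency()?;
    Ok(next)
}

fn main() {
    let cfg = HostConfiguration { max_code_size: 1024, max_pov_size: 4096 };
    let updated = update_config(&cfg, |c| c.max_pov_size = 8192).unwrap();
    // A setter that would break an invariant is rejected by the one
    // shared check, without a local check in the setter itself.
    assert!(update_config(&updated, |c| c.max_code_size = 0).is_err());
}
```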
* zombienet: fixed adder-collator image for the smoke test
* update COL_IMAGE
* bump zombienet version
* try a different version
Co-authored-by: Javier Viola <javier@parity.io>
Make `check-dependent-*` jobs execute only in PRs instead of in both PRs
and master.
Reason 1: The companion is not merged at the same time as the parent PR
([1](https://github.com/paritytech/parity-processbot/issues/347#issuecomment-994729950)),
therefore the pipeline will fail on master since the companion PR is not yet
merged in the other repository. This scenario is demonstrated by the pipeline
of
https://github.com/paritytech/substrate/commit/82cc3746450ae9722a249f4ddf83b8de59ba6e0d.
Reason 2: The job can still fail on master due to a new commit on the companion
PR's repository which was merged after `bot merge` happened, as demonstrated by
the following scheme:
1. Parent PR is merged
2. Companion PR is updated and set to merge in the future
3. In the meantime a new commit is merged into the companion PR repository's
master branch
4. The `check-dependent-*` job runs on master but, due to the new commit, it
fails for unrelated reasons
While "Reason 2" can be used as an argument against this PR, in that it would
be useful to know if the integration is failing on master, "Reason 1" should be
taken care of due to this inherent flaw of the current companion build system
design.
* merge master (does not compile)
* fix
* lock
* update lock
* Update to refactoring.
* runtime version
* fmt
* remove trie patch
* remove patch
* No layout alias for bridge proof.
* update deps
* No switch until migration.
* master lock
* test
* test
* Revert "test"
This reverts commit 57325ef73332bf4b054aa4a667bb716fcf8a0d89.
* Revert "test"
This reverts commit ce74d0e2062806f72c0e9e9ca07b14165f43521e.
* rename feature
* state version as parameter, use the feature only on runtimes.
* update
* update to state version in runtime
* state version from storage
* update lockfile for substrate
Co-authored-by: parity-processbot <>
The function `has_api` checks that the api + version matches, which
isn't true anymore after bumping the version. The fix is to just check
that the runtime API version is at least `1`.
The runtime version check `runtime_version <= version` was wrong; it
needs to be `>=`. Besides that, `WIDELY_DEPLOYED_API_VERSION` is removed.
The runtime version is already cached, i.e. we don't always call into the
runtime when requesting the runtime version, so there is no need to
"optimize" this.
* First step in implementing https://github.com/paritytech/polkadot/issues/4386
This PR:
- Reduces MAX_UNSHARED_UPLOAD_TIME to 150ms
- Increases timeout on collation fetching to 1200ms
- Reduces limit on needed backing votes in the runtime
This PR does not yet reduce the number of needed backing votes on the
node as this can only be meaningfully enacted once the changed limit in
the runtime is live.
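For reference, the two timing values mentioned above, sketched as they
might be declared; where and how these constants actually live in the
codebase may differ:

```rust
use std::time::Duration;

// Sketch of the new values described in this PR.
const MAX_UNSHARED_UPLOAD_TIME: Duration = Duration::from_millis(150);
const COLLATION_FETCHING_TIMEOUT: Duration = Duration::from_millis(1200);

fn main() {
    // The upload budget stays well below the fetching timeout.
    assert!(MAX_UNSHARED_UPLOAD_TIME < COLLATION_FETCHING_TIMEOUT);
}
```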
* Fix tests.
* Guide updates.
* Review remarks.
* Bump minimum required backing votes to 2 in runtime.
* Make sure node side code won't make runtime vomit.
* cargo +nightly fmt
Refactor the configuration module's `initializer_on_new_session` in such
a way that it returns the configuration. This makes it in line with other
special initialization routines like those of `shared` or `paras`.
This will be useful in a following PR that will check the consistency of
the configuration before setting it.
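A schematic sketch of the shape change, with simplified stand-in types:

```rust
/// Simplified stand-ins for the real types.
struct HostConfiguration;
struct SessionIndex(u32);

// Before: applied any pending update as a side effect and returned
// nothing, so callers had to re-read the configuration from storage:
//
//     fn initializer_on_new_session(session_index: &SessionIndex);

/// After: returns the (possibly updated) configuration, matching the
/// style of the `shared` and `paras` initialization routines.
fn initializer_on_new_session(_session_index: &SessionIndex) -> HostConfiguration {
    // ...apply any pending update, then hand the result to the caller.
    HostConfiguration
}

fn main() {
    let _config = initializer_on_new_session(&SessionIndex(1));
}
```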
* alter currently-checking-set to launch work only on new candidates
* fmt
* fix compilation
* address review
* Introduce approvals cache test that ensures approval work is only triggered once for each Candidate Hash
* Fix formatting
* Address Feedback
* Move final message await into handle function
Co-authored-by: Chris Sosnin <chris125_@live.com>
Co-authored-by: Lldenaurois <Ljdenaurois@gmail.com>
This commit hooks up the API provided by #4457 to the runtime API
subsystem. In a following PR this API will be consumed by the PVF
pre-checking subsystem.
Co-authored-by: Chris Sosnin <chris125_@live.com>
* parachains: Fix configuration module
Closes #4529, closes #4533
I figured that trying to avoid redundant updates is not really worth
keeping. This is because we seem to not update the configuration often,
and when we do, we approach it carefully. Thus the possibility of a
redundant update is really negligible. At the same time, if such a
redundant update does happen, its effects are really small: just some
wasted storage interactions.
On the other hand, making it work was a little bit annoying. With the
proper fix for the pending updates this would be even more annoying,
since we would have to add combinatorially more cases to test this.
So I figured I would just scrap that and simplify the code.
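Schematically, the simplification trades a redundancy check for an
unconditional write; the struct and field below are hypothetical
stand-ins, not the real module:

```rust
#[derive(Clone, PartialEq)]
struct HostConfiguration {
    validation_upgrade_delay: u32,
}

/// Before (roughly): skip the write when the value is unchanged, which
/// requires reasoning about pending updates in every case.
fn set_checked(
    current: &HostConfiguration,
    new: HostConfiguration,
) -> Option<HostConfiguration> {
    if *current == new {
        None // redundant update avoided, at the cost of extra complexity
    } else {
        Some(new)
    }
}

/// After (roughly): always stage the update; a redundant one merely
/// costs some wasted storage interactions.
fn set_unconditional(new: HostConfiguration) -> HostConfiguration {
    new
}

fn main() {
    let cfg = HostConfiguration { validation_upgrade_delay: 10 };
    assert!(set_checked(&cfg, cfg.clone()).is_none());
    let _ = set_unconditional(cfg);
}
```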
* cargo run --quiet --release --features=runtime-benchmarks -- benchmark --chain=kusama-dev --steps=50 --repeat=20 --pallet=runtime_parachains::configuration --extrinsic=* --execution=wasm --wasm-execution=compiled --heap-pages=4096 --header=./file_header.txt --output=./runtime/kusama/src/weights/runtime_parachains_configuration.rs
* cargo run --quiet --release --features=runtime-benchmarks -- benchmark --chain=polkadot-dev --steps=50 --repeat=20 --pallet=runtime_parachains::configuration --extrinsic=* --execution=wasm --wasm-execution=compiled --heap-pages=4096 --header=./file_header.txt --output=./runtime/polkadot/src/weights/runtime_parachains_configuration.rs
* cargo run --quiet --release --features=runtime-benchmarks -- benchmark --chain=westend-dev --steps=50 --repeat=20 --pallet=runtime_parachains::configuration --extrinsic=* --execution=wasm --wasm-execution=compiled --heap-pages=4096 --header=./file_header.txt --output=./runtime/westend/src/weights/runtime_parachains_configuration.rs
* review fixes
Co-authored-by: Parity Bot <admin@parity.io>
* Create a README for XCMv2 detailing notable changes
* Typo
* Appease spellcheck and cargo fmt
* Add XCM pallet item to be aware of
* cargo fmt
* Remove all XCM README.md
* enable disputes, for all known chains but polkadot
* chore: fmt
* don't propagate disputes either
* review
* remove disputes feature
* remove superfluous line
* Update node/service/src/lib.rs
Co-authored-by: Andronik Ordian <write@reusable.software>
* fixup
* allow being a dummy
* rialto
* add an enum, to make things work better
* overseer
* fix test
* comments
* move condition out
* excess arg
Co-authored-by: Andronik Ordian <write@reusable.software>
* test new version of zombienet
* use default chain spec command
* use commit ref to get test files
* change test spec for the new zombienet version: zombienet_tests/parachains/0001-parachains-smoke-test.toml
* changes for new version of zombienet
* use paritytech zombienet version
* pvf-precheck: Integrate PVF pre-checking into paras module
Closes #4009
This is most of the runtime-side change needed for #3211.
Here is how it works.
PVF pre-checking can be triggered either by an upgrade or by onboarding
(i.e. calling `schedule_para_initialize`). A PVF pre-checking process is
identified by the PVF code hash that is being voted on. If there is
already a PVF pre-checking process running for that hash, then no new
process is started; instead, we just subscribe to the existing one.
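A simplified sketch of that bookkeeping; the names (`PrecheckState`,
`ActiveVote`, and friends) are illustrative, not the actual storage
items:

```rust
use std::collections::HashMap;

type ValidationCodeHash = [u8; 32];

/// Why a given PVF is being pre-checked; both onboarding and upgrades
/// can subscribe to the same vote.
enum PvfCheckCause {
    Onboarding { para_id: u32 },
    Upgrade { para_id: u32 },
}

struct ActiveVote {
    causes: Vec<PvfCheckCause>,
    // ...vote tallies, the session the vote was created in, etc.
}

struct PrecheckState {
    active_votes: HashMap<ValidationCodeHash, ActiveVote>,
}

impl PrecheckState {
    /// Start a new pre-checking process for `code_hash`, or subscribe
    /// the cause to an already running one.
    fn initiate(&mut self, code_hash: ValidationCodeHash, cause: PvfCheckCause) {
        self.active_votes
            .entry(code_hash)
            .or_insert_with(|| ActiveVote { causes: Vec::new() })
            .causes
            .push(cause);
    }
}

fn main() {
    let mut state = PrecheckState { active_votes: HashMap::new() };
    let hash = [0u8; 32];
    state.initiate(hash, PvfCheckCause::Onboarding { para_id: 2000 });
    // A second request for the same code hash subscribes instead of
    // starting a new process.
    state.initiate(hash, PvfCheckCause::Upgrade { para_id: 2001 });
    assert_eq!(state.active_votes[&hash].causes.len(), 2);
}
```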
If there is no PVF pre-checking process running but the PVF code hash
was already saved in storage, that necessarily means (I invite the
reviewers to double-check this invariant) that the PVF already passed
pre-checking. This is equivalent to instant approval of the PVF.
The pre-checking process can be concluded either by obtaining a
supermajority or if it expires.
Each validator checks the list of PVFs available for voting. The vote is
binary, i.e. accept or reject a given PVF. As soon as a supermajority of
votes is collected for one side of the vote, the voting is concluded in
that direction and its effects are enacted.
Only validators from the active set can participate in the vote. The set
of active validators can change each session, which is why we reset the
votes each session. A vote that has been observed for a certain number
of sessions without conclusion will be rejected.
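A schematic tally for this binary vote; the two-thirds threshold and the
expiry-as-rejection rule are written out explicitly, with illustrative
types rather than the actual implementation:

```rust
struct VoteState {
    votes_accept: u32,
    votes_reject: u32,
    active_validators: u32,
    sessions_observed: u32,
}

#[derive(Debug, PartialEq)]
enum Outcome {
    Accepted,
    Rejected,
    Pending,
}

/// A supermajority here means more than two thirds of the active set.
fn supermajority(n_validators: u32) -> u32 {
    n_validators * 2 / 3 + 1
}

fn conclude(state: &VoteState, session_deadline: u32) -> Outcome {
    let threshold = supermajority(state.active_validators);
    if state.votes_accept >= threshold {
        Outcome::Accepted
    } else if state.votes_reject >= threshold {
        Outcome::Rejected
    } else if state.sessions_observed >= session_deadline {
        // An expired vote is treated as a rejection.
        Outcome::Rejected
    } else {
        Outcome::Pending
    }
}

fn main() {
    let state = VoteState {
        votes_accept: 21,
        votes_reject: 3,
        active_validators: 30,
        sessions_observed: 0,
    };
    // 21 of 30 clears the two-thirds-plus-one threshold.
    assert_eq!(conclude(&state, 2), Outcome::Accepted);
}
```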
The effects of accepting a PVF depend on the operations that requested it:
1. All onboardings subscribed to the approved PVF pre-checking process will
be scheduled, and after passing 2 session boundaries they will be onboarded.
2. All upgrades subscribed to the approved PVF pre-checking process will
be scheduled very similarly to the existing process. Upgrades with
pre-checking are really the same process, just delayed by the time
required for the pre-checking vote. In the case of instant approval the
mechanism is exactly the same. This is important from a parachain
compatibility standpoint, since following the delayed upgrade requires
the parachain to implement
https://github.com/paritytech/cumulus/pull/517.
If the PVF pre-checking process concludes with rejection, all the
requesting operations are cancelled. For onboarding this means it is
left without movement: the lifecycle of such a parachain is terminated
in the `Onboarding` state, and after rejection the lifecycle is none.
That in turn means the caller can attempt to register the parachain once
more. For upgrading it means the upgrade process is aborted: the
go-ahead signal is flashed with the `Abort` flag.
Rejection leads to removing the allegedly bad validation code from the
chain storage. Among other things, this implies that the operation can
be re-requested, which allows retrying an operation in case there was
some bug. At the same time it does not look like a DoS vector, due to
the caching performed by the nodes.
PVF pre-checking can be enabled and disabled. Initially, according to
the changes in #4420, this mechanism is disabled. Triggering PVF
pre-checking when it is disabled just means that we instantly approve
the requesting operation, so the behavior should be unchanged.
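A sketch of that disabled path; the function and enum here are
hypothetical, merely naming the behavior described above:

```rust
enum PrecheckOutcome {
    InstantlyApproved,
    VoteStarted,
}

/// Hypothetical entry point: with pre-checking disabled, the requesting
/// operation is approved immediately, so observable behavior stays
/// unchanged.
fn initiate_precheck(pvf_checking_enabled: bool) -> PrecheckOutcome {
    if !pvf_checking_enabled {
        return PrecheckOutcome::InstantlyApproved;
    }
    PrecheckOutcome::VoteStarted
}

fn main() {
    // The mechanism starts out disabled per #4420.
    assert!(matches!(
        initiate_precheck(false),
        PrecheckOutcome::InstantlyApproved
    ));
}
```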
Follow-ups:
- expose runtime APIs
* cargo run --quiet --release --features=runtime-benchmarks -- benchmark --chain=polkadot-dev --steps=50 --repeat=20 --pallet=runtime_parachains::paras --extrinsic=* --execution=wasm --wasm-execution=compiled --heap-pages=4096 --header=./file_header.txt --output=./runtime/polkadot/src/weights/runtime_parachains_paras.rs
* cargo run --quiet --release --features=runtime-benchmarks -- benchmark --chain=westend-dev --steps=50 --repeat=20 --pallet=runtime_parachains::paras --extrinsic=* --execution=wasm --wasm-execution=compiled --heap-pages=4096 --header=./file_header.txt --output=./runtime/westend/src/weights/runtime_parachains_paras.rs
* cargo run --quiet --release --features=runtime-benchmarks -- benchmark --chain=kusama-dev --steps=50 --repeat=20 --pallet=runtime_parachains::paras --extrinsic=* --execution=wasm --wasm-execution=compiled --heap-pages=4096 --header=./file_header.txt --output=./runtime/kusama/src/weights/runtime_parachains_paras.rs
* cargo run --quiet --release --features runtime-benchmarks -- benchmark --chain=rococo-dev --steps=50 --repeat=20 --pallet=runtime_parachains::paras --extrinsic=* --execution=wasm --wasm-execution=compiled --heap-pages=4096 --header=./file_header.txt --output=./runtime/rococo/src/weights/runtime_parachains_paras.rs
* Review fixes
Co-authored-by: Parity Bot <admin@parity.io>