We currently use a bit of a hack in `.cargo/config` to keep clippy from
being too annoying, by specifying the list of lints there.
There is now a stable way to define lints for a workspace. The only
downside is that every crate seems to have to opt into this, so there
are a *few* files modified in this PR.
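For reference, a minimal sketch of the stable mechanism, assuming illustrative lint names (not necessarily the exact list this PR moves over):

```toml
# Workspace root Cargo.toml
[workspace.lints.rust]
unused_imports = "warn"

[workspace.lints.clippy]
type_complexity = "allow"
```

Each member crate then opts in from its own Cargo.toml:

```toml
[lints]
workspace = true
```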
Dependencies:
- [x] PR that upgrades CI to use Rust 1.74 is merged.
---------
Co-authored-by: joe petrowski <25483142+joepetrowski@users.noreply.github.com>
Co-authored-by: Branislav Kontur <bkontur@gmail.com>
Co-authored-by: Liam Aharon <liam.aharon@hotmail.com>
Initial implementation for the plan discussed here: https://github.com/paritytech/polkadot-sdk/issues/701
Built on top of https://github.com/paritytech/polkadot-sdk/pull/1178
v0: https://github.com/paritytech/polkadot/pull/7554,
## Overall idea
When approval-voting checks a candidate and is ready to advertise the
approval, defer it in a per-relay-chain-block queue until we either have
MAX_APPROVAL_COALESCE_COUNT candidates to sign or a candidate has stayed
MAX_APPROVALS_COALESCE_TICKS in the queue; in both cases we sign
whatever candidates we have available.
This should allow us to reduce the number of approval messages we have
to create/send/verify. The parameters are configurable, so we should
find values that balance:
- Security of the network: Delaying the broadcasting of an approval
shouldn't put finality at risk, and to make sure that never happens we
won't delay sending a vote if we are past 2/3 of the no-show time.
- Scalability of the network: MAX_APPROVAL_COALESCE_COUNT = 1 &
MAX_APPROVALS_COALESCE_TICKS = 0 is what we have now, and we know from
the measurements we did on Versi that it bottlenecks
approval-distribution/approval-voting when the number of validators and
parachains increases significantly.
- Block storage: In case of disputes we have to import these votes on
chain, and that increases the necessary storage by
MAX_APPROVAL_COALESCE_COUNT * CandidateHash per vote (e.g. with a count
of 6 and 32-byte candidate hashes, at most ~192 extra bytes per vote).
Given that disputes are not the normal mode of operation for the network
and we will limit MAX_APPROVAL_COALESCE_COUNT to single-digit numbers,
this should be good enough. Alternatively, we could create a better way
to store this on-chain through indirection, if that's needed.
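Putting those pieces together, a minimal sketch of the deferral rule, assuming hypothetical names (`PendingApprovals`, `should_sign_now`, the tick fields); the real logic lives in approval-voting:

```rust
type Tick = u64;
type CandidateHash = [u8; 32];

struct PendingApprovals {
    /// Candidates checked and waiting for a coalesced signature.
    candidates: Vec<CandidateHash>,
    /// Tick at which the oldest entry was queued.
    earliest_queued_tick: Tick,
    /// Tick at which we would be counted as a no-show.
    no_show_tick: Tick,
}

fn should_sign_now(p: &PendingApprovals, now: Tick, max_count: usize, max_ticks: Tick) -> bool {
    // Two thirds of the way from queueing to the no-show deadline.
    let two_thirds_of_no_show = p.earliest_queued_tick
        + p.no_show_tick.saturating_sub(p.earliest_queued_tick) * 2 / 3;
    // Enough candidates accumulated to amortize one signature over many ...
    p.candidates.len() >= max_count
        // ... or the oldest deferred approval has waited long enough ...
        || now.saturating_sub(p.earliest_queued_tick) >= max_ticks
        // ... or delaying any further would risk a no-show.
        || now >= two_thirds_of_no_show
}
```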
## Other fixes:
- Fixed the fact that we were sending random assignments to
non-validators; that was wrong because they won't do anything with them,
and they won't gossip them either because they do not have a grid
topology set, so the random assignments were wasted.
- Added metrics to be able to debug potential no-shows and
mis-processing of approvals/assignments.
## TODO:
- [x] Get feedback, that this is moving in the right direction. @ordian
@sandreim @eskimor @burdges, let me know what you think.
- [x] More and more testing.
- [x] Test in versi.
- [x] Make MAX_APPROVAL_COALESCE_COUNT &
MAX_APPROVAL_COALESCE_WAIT_MILLIS a parachain host configuration.
- [x] Make sure backwards compatibility works correctly
- [x] Make sure this direction is compatible with other streams of work:
https://github.com/paritytech/polkadot-sdk/issues/635 &
https://github.com/paritytech/polkadot-sdk/issues/742
- [x] Final versi burn-in before merging
---------
Signed-off-by: Alexandru Gheorghe <alexandru.gheorghe@parity.io>
Using taplo, this fixes all our broken and inconsistent TOML formatting
and adds CI to keep it tidy.
If people want, we can customise the format rules as described here:
https://taplo.tamasfe.dev/configuration/formatter-options.html
@ggwpez, I suggest zepter is used only for checking that features are
propagated, leaving formatting to taplo to avoid duplicate work and
conflicts.
TODO
- [x] Use `exclude = [...]` syntax in taplo file to ignore zombienet
tests instead of deleting the dir
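A hedged sketch of what the taplo config could look like (globs and option values are illustrative; key names follow taplo's documentation):

```toml
# .taplo.toml
include = ["**/*.toml"]
# Ignore zombienet test manifests instead of deleting the dir.
exclude = ["zombienet/**"]

[formatting]
reorder_keys = false
column_width = 120
```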
---------
Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
Co-authored-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
Co-authored-by: Bastian Köcher <git@kchr.de>
Hey guys, as discussed I've changed the name to a more general one,
`PvfExecKind`. Is this good, or too general?
Creating this as a draft; I still have to fix the tests.
Closes #1585
Kusama address: FkB6QEo8VnV3oifugNj5NeVG3Mvq1zFbrUu4P5YwRoe5mQN
---------
Co-authored-by: command-bot <>
Co-authored-by: Marcin S <marcin@realemail.net>
Adds a `NodeFeatures` bitfield value to the runtime `HostConfiguration`,
with the purpose of coordinating the enabling of node-side features,
such as: https://github.com/paritytech/polkadot-sdk/issues/628 and
https://github.com/paritytech/polkadot-sdk/issues/598.
These are features that require all validators to enable them at the
same time, once all/most nodes have upgraded their node versions.
This PR doesn't add any feature yet. These are coming in future PRs.
Also adds a runtime API for querying the state of the client features
and an extrinsic for setting/unsetting a feature by its index in the bitfield.
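As a rough illustration of the bitfield manipulation involved, a hedged sketch using `bitvec` (the alias and function are hypothetical, not the runtime's actual code):

```rust
use bitvec::prelude::*;

// Stand-in for the feature bitfield stored in the host configuration.
type NodeFeatures = BitVec<u8, Lsb0>;

// Set or unset a feature by its index, growing the bitfield on demand.
fn set_node_feature(features: &mut NodeFeatures, index: usize, value: bool) {
    if index >= features.len() {
        features.resize(index + 1, false);
    }
    features.set(index, value);
}
```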
Note: originally part of:
https://github.com/paritytech/polkadot-sdk/pull/1644, but posted as
standalone to be reused by other PRs until the initial PR is merged
**_PR migrated from https://github.com/paritytech/polkadot/pull/6782_**
This PR will upgrade the network protocol to a new version, VStaging
(which will later be renamed to V3). This version introduces a new kind of
assignment certificate that will be used for tranche0 assignments.
Instead of issuing/importing one tranche0 assignment per candidate,
there will be just one certificate per relay chain block per validator.
However, we will not be sending out the new assignment certificates
yet, so everything should work exactly as before. Once the majority of
the validators have been upgraded to the new protocol version we will
enable the new certificates (starting at a specific relay chain block)
with a new client update.
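To sketch the shape of the change, a hedged illustration (variant and field names are illustrative, not necessarily the PR's exact ones):

```rust
use bitvec::prelude::*;

// Which of the relay chain block's candidates a certificate claims.
type CandidateBitfield = BitVec<u8, Lsb0>;

#[allow(dead_code)]
enum AssignmentCertKindV2 {
    // Old style: one assignment issued/imported per candidate.
    RelayVRFModulo { sample: u32 },
    // New style: a single certificate per relay chain block per
    // validator, claiming all its tranche0 candidates via a bitfield.
    RelayVRFModuloCompact { core_bitfield: CandidateBitfield },
}
```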
There are still a few things that need to be done:
- [x] Use bitfield instead of `Vec<CandidateIndex>`:
https://github.com/paritytech/polkadot/pull/6802
- [x] Fix existing approval-distribution and approval-voting tests
- [x] Fix bitfield-distribution and statement-distribution tests
- [x] Fix network bridge tests
- [x] Implement todos in the code
- [x] Add tests to cover new code
- [x] Update metrics
- [x] Remove the approval distribution aggression levels: TBD PR
- [x] Parachains DB migration
- [x] Test network protocol upgrade on Versi
- [x] Versi Load test
- [x] Add Zombienet test
- [x] Documentation updates
- [x] Fix for sending DistributeAssignment for each candidate claimed by
a v2 assignment (warning: Importing locally an already known assignment)
- [x] Fix AcceptedDuplicate
- [x] Fix DB migration so that we can still keep old data.
- [x] Final Versi burn in
---------
Signed-off-by: Andrei Sandu <andrei-mihail@parity.io>
Signed-off-by: Alexandru Gheorghe <alexandru.gheorghe@parity.io>
Co-authored-by: Alexandru Gheorghe <alexandru.gheorghe@parity.io>
While investigating some db migrations that make node startup fail, I
noticed that the node wasn't exiting and that the log files were growing
exponentially, until my whole system was freezing; that made it really
hard to actually find why it was failing in the first place.
E.g.:
```
ls -lh /tmp/zombie-01a04c2a2c0265d85f6440cf01c0f44a_-51319-uyggzuD4wEpV/bob.log
32,6G oct 27 11:16 /tmp/zombie-01a04c2a2c0265d85f6440cf01c0f44a_-51319-uyggzuD4wEpV/bob.log
```
This was happening because the following errors were being printed
continuously without the subsystem main loop exiting:
From dispute-coordinator:
```
WARN tokio-runtime-worker parachain::dispute-coordinator: error=Subsystem(Generated(Context("Signal channel is terminated and empty.")))
```
From availability recovery:
```
Erasure task channel closed. Node shutting down ?
```
Signed-off-by: Alexandru Gheorghe <alexandru.gheorghe@parity.io>
closes #695
Could potentially be helpful for preserving caches when applicable, as
discussed in #685
kusama address: FvpsvV1GQAAbwqX6oyRjemgdKV11QU5bXsMg9xsonD1FLGK
closes #622
Pros:
* simpler interface, just functions:
`create_runtime_from_artifact_bytes()` and `execute_artifact()`
Cons:
* extra overhead of constructing executor semantics each time
I could make it a combination of
* `create_runtime_config(params)` (such that we could clone the
constructed semantics)
* `create_runtime(blob, config)`
* `execute_artifact(blob, config, params)`
Not sure if it's worth it though.
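For comparison, a hedged sketch of the two shapes with stub types (these are not the real polkadot-node-core-pvf signatures):

```rust
// Stub types standing in for the real executor machinery.
struct Config;
struct ExecutorParams;
type Error = String;

// Current PR: plain functions; executor semantics are rebuilt per call.
fn execute_artifact(_blob: &[u8], _params: &ExecutorParams) -> Result<Vec<u8>, Error> {
    Ok(Vec::new())
}

// Alternative: split out the config so constructed semantics can be
// cloned and reused across executions.
fn create_runtime_config(_params: &ExecutorParams) -> Config {
    Config
}

fn execute_artifact_with_config(
    _blob: &[u8],
    _config: &Config,
    _params: &ExecutorParams,
) -> Result<Vec<u8>, Error> {
    Ok(Vec::new())
}
```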
---------
Co-authored-by: Bastian Köcher <git@kchr.de>
# Description
In a couple of cases, there were links pointing to the w3f github pages
domain. In other instances, there were links pointing to the old
polkadot repo's github pages. Both of these are now pointing to the
relevant links in
https://paritytech.github.io/polkadot-sdk/book/index.html.
These changes were made specifically because the w3f github pages
return a 404; while fixing those links, the old polkadot repo links were
touched up as well, even though they do redirect properly.
This shouldn't affect anything as these are documentation link changes
only.
- Async-backing related primitives are stable `primitives::v6`
- Async-backing API is now part of `api_version(7)`
- It's enabled on Rococo and Westend runtimes
---------
Signed-off-by: Andrei Sandu <andrei-mihail@parity.io>
Co-authored-by: Andrei Sandu <54316454+sandreim@users.noreply.github.com>
It seems the old strategy has been deprecated for more than one year,
so maybe it's time to clean up the old strategy for the wasm executor.
---
polkadot address: 15ouFh2SHpGbHtDPsJ6cXQfes9Cx1gEFnJJsJVqPGzBSTudr
---------
Co-authored-by: Bastian Köcher <git@kchr.de>
Co-authored-by: Koute <koute@users.noreply.github.com>
As a follow-up to https://github.com/paritytech/polkadot-sdk/pull/1518,
this adds extra tests for inclusion pruning, primarily focusing on
various cases surrounding candidates included in different forks (with
different relay parents).
All cases fall into a few buckets based on three degrees of freedom:
number of candidates, number of blocks (height), and number of forks,
plus an extra case for pruning multiple heights at once.
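A hedged sketch of enumerating that matrix (names and ranges are hypothetical):

```rust
// Run a pruning test case for every combination of the three degrees
// of freedom; the ranges here are illustrative.
fn run_all_pruning_cases(run_case: impl Fn(usize, usize, usize)) {
    for n_candidates in 1..=3 {
        for n_blocks in 1..=3 {
            for n_forks in 1..=3 {
                run_case(n_candidates, n_blocks, n_forks);
            }
        }
    }
}
```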
Added a small tweak to the original pruning function to disregard stale
candidate duplicates, which should keep the same behaviour.
---------
Co-authored-by: Tsvetomir Dimitrov <tsvetomir@parity.io>