### What's been done
- `subsystem-bench` has been split into two parts: a CLI benchmark
runner and a library.
- The CLI runner is quite simple: it just runs `.yaml`-based test
sequences. For now it should only be used to run benchmarks during
development.
- The library is used in the cli runner and in regression tests. Some
code is changed to make the library independent of the runner.
- Added first regression tests for availability read and write that
replicate existing test sequences.
### How we run regression tests
- Regression tests are simply Rust integration tests without test
harnesses.
- They should only be compiled under the `subsystem-benchmarks` feature
to prevent them from running with other tests.
- This doesn't work when running tests with `nextest` in CI, so
additional filters have been added to the `nextest` runs.
- Early benchmark runs take varying amounts of time, so we "warm up" the
tests until the CPU usage of consecutive runs differs by no more than 1%.
- After the warm-up, we run the benchmarks a few more times and compare
the average with the expectation, within a configured precision (see the
sketch below).
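
A rough sketch of that warm-up-then-compare logic; the names and helper
closure are illustrative, not the actual subsystem-bench API:

```rust
// Illustrative only: `run_once` stands in for a single benchmark run
// returning its measured CPU time in seconds.
fn check_regression(
    mut run_once: impl FnMut() -> f64,
    expected_cpu_s: f64,
    precision: f64,
) -> bool {
    // Warm up until two consecutive runs differ by no more than 1%.
    let mut prev = run_once();
    loop {
        let cur = run_once();
        if (cur - prev).abs() / prev <= 0.01 {
            break;
        }
        prev = cur;
    }
    // Then average a few more runs and compare against the expectation.
    const RUNS: usize = 5;
    let avg = (0..RUNS).map(|_| run_once()).sum::<f64>() / RUNS as f64;
    (avg - expected_cpu_s).abs() / expected_cpu_s <= precision
}
```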
### What is still wrong?
- I haven't managed to set up approval voting tests. The spread of their
results is too large and can't be narrowed down in a reasonable amount
of time in the warm-up phase.
- The tests start an unconfigurable Prometheus endpoint internally, which
causes errors because they all use the same port 9999. I disable it with
a flag, but I think it's better to move launching the endpoint outside
the tests, as we already do with `valgrind` and `pyroscope` (we would
still use `prometheus` inside the tests).
### Future work
* https://github.com/paritytech/polkadot-sdk/issues/3528
* https://github.com/paritytech/polkadot-sdk/issues/3529
* https://github.com/paritytech/polkadot-sdk/issues/3530
* https://github.com/paritytech/polkadot-sdk/issues/3531
---------
Co-authored-by: Alexander Samusev <41779041+alvicsam@users.noreply.github.com>
Add more debug logs to understand if statement-distribution is in a bad
state; this should be useful for debugging
https://github.com/paritytech/polkadot-sdk/issues/3314 on production
networks.
Additionally, increase the number of parallel requests we make, since I
noticed that requests take around 100ms on Kusama, and the limit of 5
parallel requests was picked mostly at random; there is no reason why we
can't do more than that.
---------
Signed-off-by: Alexandru Gheorghe <alexandru.gheorghe@parity.io>
Co-authored-by: ordian <write@reusable.software>
Lifting some more dependencies to the workspace. Just using the
most-often updated ones for now.
This can be reproduced locally:
```sh
# First you can check if there would be semver incompatible bumps (looks good in this case):
$ zepter transpose dependency lift-to-workspace --ignore-errors syn quote thiserror "regex:^serde.*"
# Then apply the changes:
$ zepter transpose dependency lift-to-workspace --version-resolver=highest syn quote thiserror "regex:^serde.*" --fix
# And format the changes:
$ taplo format --config .config/taplo.toml
```
---------
Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
~The previous fix was actually incomplete because we updated the
authorities only in the situation where we decided to reconnect because
we had a low-connectivity issue. Now the problem is that
update_authority_ids uses the list of connected peers, which on restart
does not contain anything, so calling it immediately after
issue_connection_request won't detect all authorities; we would need to
also check every block as the comment said, but that did not match the
code.~
Actually the fix was correct. The flow is as follows: if more than 1/3 of
the authorities cannot be resolved, we set last_failure and call
`ConnectToResolvedValidators`.
We will call UpdateAuthorities for all the authorities that are already
connected and whose address we already know; for the ones that connect
later, `PeerConnected` will have the AuthorityId field set, because it is
already known, so approval-distribution will update its cached topology.
---------
Signed-off-by: Alexandru Gheorghe <alexandru.gheorghe@parity.io>
Co-authored-by: Alexander Samusev <41779041+alvicsam@users.noreply.github.com>
Changes (partial https://github.com/paritytech/polkadot-sdk/issues/994):
- Set log to `0.4.20` everywhere
- Lift `log` to the workspace
Starting with a simpler one after seeing
https://github.com/paritytech/polkadot-sdk/pull/2065 from @jsdw.
This sets the `default-features` to `false` in the root and then
overwrites that in each crate to its original value. This is necessary
since otherwise the `default` features are additive and it's impossible
to disable them in the crate again once they are enabled in the
workspace.
I am using a tool to do this, so it's mostly a test to see that it works
as expected.
---------
Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
With grid distribution, messages have two paths for reaching a node, so
there is the possibility of a race when two peers send each other the
same statement at around the same time. The statement's local_knowledge
would then tell us that the peer should not have sent the statement,
because we had already sent it to that peer.
Fix this by also keeping track of the statements we received from a
given peer and penalizing it only if it sends us the same statement more
than once.
Fixes: https://github.com/paritytech/polkadot-sdk/issues/2346
Additionally, use different Cost labels for the different paths to make
it easier to debug things.
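
A simplified sketch of that idea, with placeholder types rather than the
real statement-distribution structures:

```rust
// Track, per peer, which statements we have already received from that
// peer and only penalize on an actual duplicate from the same peer.
use std::collections::{HashMap, HashSet};

type PeerId = u64;
type StatementFingerprint = [u8; 32];

#[derive(Default)]
struct PeerKnowledge {
    received: HashSet<StatementFingerprint>,
}

#[derive(Default)]
struct Tracker {
    peers: HashMap<PeerId, PeerKnowledge>,
}

enum Outcome {
    FirstFromThisPeer,
    DuplicateFromThisPeer, // only this case costs reputation
}

impl Tracker {
    fn note_received(&mut self, peer: PeerId, stmt: StatementFingerprint) -> Outcome {
        let knowledge = self.peers.entry(peer).or_default();
        if knowledge.received.insert(stmt) {
            // The peer may simply have raced us on the other grid path;
            // having sent the statement ourselves is no reason to punish it.
            Outcome::FirstFromThisPeer
        } else {
            Outcome::DuplicateFromThisPeer
        }
    }
}
```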
---------
Signed-off-by: Alexandru Gheorghe <alexandru.gheorghe@parity.io>
Fixes: https://github.com/paritytech/polkadot-sdk/issues/2138.
Especially on restart, the AuthorityDiscovery cache is not populated, so
we create an invalid topology and messages won't be routed correctly for
the entire session. This PR proposes to fix this by updating the
topology as soon as we know the Authority/PeerId mapping, which should
improve the situation dramatically.
[This issue was hit
yesterday](https://grafana.teleport.parity.io/goto/o9q2625Sg?orgId=1),
on Westend and resulted in stalling the finality.
# TODO
- [x] Unit tests
- [x] Test impact on versi
---------
Signed-off-by: Alexandru Gheorghe <alexandru.gheorghe@parity.io>
Found the issue while investigating the recent finality stall on Westend
after upgrading to 1.6.0. Approval-distribution aggression is supposed
to trade off bandwidth by re-sending assignments/approvals until enough
approvals have been received by at least 2/3 of the validators. This is
supposed to be a catch-all mechanism for when network connectivity goes
south or many validators reboot at the same time.
This fix ensures that we always resend approvals starting with the first
unfinalized block even in the case when it appears approved from the
node's perspective.
TODO:
- [x] Versi test
---------
Signed-off-by: Andrei Sandu <andrei-mihail@parity.io>
Step towards https://github.com/paritytech/polkadot-sdk/issues/1975
As reported
https://github.com/paritytech/polkadot-sdk/issues/1975#issuecomment-1774534225
I'd like to encapsulate crypto-related stuff in a dedicated folder.
Currently all the cryptographic primitive wrappers are scattered across
`substrate/core`, which contains "misc core" stuff.
To simplify the process, as a first step this PR proposes to move the
cryptographic hashing there.
The `substrate/crypto` folder was already created to contain the
`ec-utils` crate.
Notes:
- rename `sp-core-hashing` to `sp-crypto-hashing`
- rename `sp-core-hashing-proc-macro` to `sp-crypto-hashing-proc-macro`
- As the crates' names changed, I took the liberty of restarting from
version 0.1.0 for both crates
---------
Co-authored-by: Robert Hambrock <roberthambrock@gmail.com>
Bumps [indexmap](https://github.com/bluss/indexmap) from 1.9.3 to 2.0.0.
<details>
<summary>Changelog</summary>

*Sourced from [indexmap's changelog](https://github.com/bluss/indexmap/blob/master/RELEASES.md).*

- 2.0.0
  - **MSRV**: Rust 1.64.0 or later is now required.
  - The `"std"` feature is no longer auto-detected. It is included in the
    default feature set, or else can be enabled like any other Cargo feature.
  - The `"serde-1"` feature has been removed, leaving just the optional
    `"serde"` dependency to be enabled like a feature itself.
  - `IndexMap::get_index_mut` now returns `Option<(&K, &mut V)>`, changing
    the key part from `&mut K` to `&K`. There is also a new alternative
    `MutableKeys::get_index_mut2` to access the former behavior.
  - The new `map::Slice<K, V>` and `set::Slice<T>` offer a linear view of
    maps and sets, behaving a lot like normal `[(K, V)]` and `[T]` slices.
    Notably, comparison traits like `Eq` only consider items in order,
    rather than hash lookups, and slices even implement `Hash`.
  - `IndexMap` and `IndexSet` now have `sort_by_cached_key` and
    `par_sort_by_cached_key` methods which perform stable sorts in place
    using a key extraction function.
  - `IndexMap` and `IndexSet` now have `reserve_exact`, `try_reserve`, and
    `try_reserve_exact` methods that correspond to the same methods on
    `Vec`. However, exactness only applies to the direct capacity for
    items, while the raw hash table still follows its own rules for
    capacity and load factor.
  - The `Equivalent` trait is now re-exported from the `equivalent` crate,
    intended as a common base to allow types to work with multiple map
    types.
  - The `hashbrown` dependency has been updated to version 0.14.
  - The `serde_seq` module has been moved from the crate root to below the
    `map` module.

</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Previously, it was only possible to retry the same request on a
different protocol name that had the exact same binary payloads.
Introduce a way of trying a different request on a different protocol if
the first one fails with an "unsupported protocol" error.
This helps with adding new req-response versions in polkadot while
preserving compatibility with unupgraded nodes.
The way req-response protocols were bumped previously was that they were
bundled with some other notifications protocol upgrade, like for async
backing (but that is more complicated, especially if the feature does
not require any changes to a notifications protocol). This will be needed
for implementing https://github.com/polkadot-fellows/RFCs/pull/47.
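
A hypothetical sketch of the fallback flow; the function names and error
type below are placeholders, not the actual networking API:

```rust
#[derive(Debug)]
enum RequestError {
    UnsupportedProtocol,
    Other(String),
}

// Stand-in for the real outbound request machinery.
fn send_request(protocol: &str, payload: &[u8]) -> Result<Vec<u8>, RequestError> {
    let _ = (protocol, payload);
    Err(RequestError::UnsupportedProtocol)
}

fn request_with_fallback(
    preferred: (&str, &[u8]),
    fallback: (&str, &[u8]),
) -> Result<Vec<u8>, RequestError> {
    match send_request(preferred.0, preferred.1) {
        // Only retry when the peer does not support the newer protocol;
        // any other outcome is returned as-is.
        Err(RequestError::UnsupportedProtocol) => send_request(fallback.0, fallback.1),
        other => other,
    }
}
```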
TODO:
- [x] add tests
- [x] add guidance docs in polkadot about req-response protocol
versioning
Closes #1591.
The purpose of this PR is to filter out backing statements received from
the network that are signed by disabled validators. This is just an
optimization, since we will do the filtering in the runtime in #1863, to
avoid nodes having to filter garbage out at block production time.
- [x] Ensure it's ok to fiddle with the mask of manifests
- [x] Write more unit tests
- [x] Test locally
- [x] simple zombienet test
- [x] PRDoc
---------
Co-authored-by: Tsvetomir Dimitrov <tsvetomir@parity.io>
This tool makes it easy to run parachain consensus stress/performance
testing on your development machine or in CI.
## Motivation
The parachain consensus node implementation spans many modules which we
call subsystems. Each subsystem is responsible for a small part of the
logic of the parachain consensus pipeline, but in general most of the
load and performance issues are localized in just a few core subsystems
like `availability-recovery`, `approval-voting` or
`dispute-coordinator`. In the absence of such a tool, we would have to
run large test nets to load/stress-test these parts of the system.
Setting up such a large test and making sense of the amount of data it
produces is very expensive, hard to orchestrate and a huge development
time sink.
## PR contents
- CLI tool
- Data Availability Read test
- reusable mockups and components needed so far
- Documentation on how to get started
### Data Availability Read test
An overseer is built using a real `availability-recovery` subsystem
instance, while dependent subsystems like `av-store`, `network-bridge`
and `runtime-api` are mocked. The network bridge emulates all the
network peers and their answers to requests.
The test runs for a number of blocks. For each block it generates and
sends a "RecoverAvailableData" request for an arbitrary number of
candidates, as sketched below. We wait for the subsystem to respond to
all requests before moving to the next block.
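
A simplified sketch of that per-block loop; the message plumbing below is
a placeholder for the real "RecoverAvailableData" request:

```rust
use futures::channel::oneshot;
use futures::future;

// Illustrative: `send_recover` stands in for sending one recovery
// request per candidate into the overseer.
async fn run_block(
    candidates: Vec<u64>,
    mut send_recover: impl FnMut(u64, oneshot::Sender<bool>),
) {
    let mut pending = Vec::new();
    for candidate in candidates {
        // One recovery request per candidate, answered over a oneshot.
        let (tx, rx) = oneshot::channel();
        send_recover(candidate, tx);
        pending.push(rx);
    }
    // Wait for every recovery to finish before moving on to the next block.
    let _results = future::join_all(pending).await;
}
```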
At the same time we collect the usual subsystem metrics and task CPU
metrics and show some nice progress reports while running.
### Here is what the CLI output looks like:
```
[2023-11-28T13:06:27Z INFO subsystem_bench::core::display] n_validators = 1000, n_cores = 20, pov_size = 5120 - 5120, error = 3, latency = Some(PeerLatency { min_latency: 1ms, max_latency: 100ms })
[2023-11-28T13:06:27Z INFO subsystem-bench::availability] Generating template candidate index=0 pov_size=5242880
[2023-11-28T13:06:27Z INFO subsystem-bench::availability] Created test environment.
[2023-11-28T13:06:27Z INFO subsystem-bench::availability] Pre-generating 60 candidates.
[2023-11-28T13:06:30Z INFO subsystem-bench::core] Initializing network emulation for 1000 peers.
[2023-11-28T13:06:30Z INFO subsystem-bench::availability] Current block 1/3
[2023-11-28T13:06:30Z INFO substrate_prometheus_endpoint] 〽️ Prometheus exporter started at 127.0.0.1:9999
[2023-11-28T13:06:30Z INFO subsystem_bench::availability] 20 recoveries pending
[2023-11-28T13:06:37Z INFO subsystem_bench::availability] Block time 6262ms
[2023-11-28T13:06:37Z INFO subsystem-bench::availability] Sleeping till end of block (0ms)
[2023-11-28T13:06:37Z INFO subsystem-bench::availability] Current block 2/3
[2023-11-28T13:06:37Z INFO subsystem_bench::availability] 20 recoveries pending
[2023-11-28T13:06:43Z INFO subsystem_bench::availability] Block time 6369ms
[2023-11-28T13:06:43Z INFO subsystem-bench::availability] Sleeping till end of block (0ms)
[2023-11-28T13:06:43Z INFO subsystem-bench::availability] Current block 3/3
[2023-11-28T13:06:43Z INFO subsystem_bench::availability] 20 recoveries pending
[2023-11-28T13:06:49Z INFO subsystem_bench::availability] Block time 6194ms
[2023-11-28T13:06:49Z INFO subsystem-bench::availability] Sleeping till end of block (0ms)
[2023-11-28T13:06:49Z INFO subsystem_bench::availability] All blocks processed in 18829ms
[2023-11-28T13:06:49Z INFO subsystem_bench::availability] Throughput: 102400 KiB/block
[2023-11-28T13:06:49Z INFO subsystem_bench::availability] Block time: 6276 ms
[2023-11-28T13:06:49Z INFO subsystem_bench::availability]
Total received from network: 415 MiB
Total sent to network: 724 KiB
Total subsystem CPU usage 24.00s
CPU usage per block 8.00s
Total test environment CPU usage 0.15s
CPU usage per block 0.05s
```
### Prometheus/Grafana stack in action
<img width="1246" alt="Screenshot 2023-11-28 at 15 11 10"
src="https://github.com/paritytech/polkadot-sdk/assets/54316454/eaa47422-4a5e-4a3a-aaef-14ca644c1574">
<img width="1246" alt="Screenshot 2023-11-28 at 15 12 01"
src="https://github.com/paritytech/polkadot-sdk/assets/54316454/237329d6-1710-4c27-8f67-5fb11d7f66ea">
<img width="1246" alt="Screenshot 2023-11-28 at 15 12 38"
src="https://github.com/paritytech/polkadot-sdk/assets/54316454/a07119e8-c9f1-4810-a1b3-f1b7b01cf357">
---------
Signed-off-by: Andrei Sandu <andrei-mihail@parity.io>
We currently use a bit of a hack in `.cargo/config` to make sure that
clippy isn't too annoying by specifying the list of lints.
There is now a stable way to define lints for a workspace. The only
downside is that every crate seems to have to opt into this, so there are
a *few* files modified in this PR.
Dependencies:
- [x] PR that upgrades CI to use rust 1.74 is merged.
---------
Co-authored-by: joe petrowski <25483142+joepetrowski@users.noreply.github.com>
Co-authored-by: Branislav Kontur <bkontur@gmail.com>
Co-authored-by: Liam Aharon <liam.aharon@hotmail.com>
Initial implementation for the plan discussed here: https://github.com/paritytech/polkadot-sdk/issues/701
Built on top of https://github.com/paritytech/polkadot-sdk/pull/1178
v0: https://github.com/paritytech/polkadot/pull/7554,
## Overall idea
When approval-voting checks a candidate and is ready to advertise the
approval, defer it in a per-relay-chain-block queue until we either have
MAX_APPROVAL_COALESCE_COUNT candidates to sign or a candidate has stayed
MAX_APPROVALS_COALESCE_TICKS in the queue; in both cases we sign whatever
candidates we have available, as sketched below.
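
A sketch of that signing rule, with placeholder constants and types
rather than the real approval-voting code:

```rust
// Placeholder values; the real limits are parachain host configuration.
const MAX_APPROVAL_COALESCE_COUNT: usize = 4;
const MAX_APPROVAL_COALESCE_TICKS: u64 = 12;

struct PendingApprovals {
    candidates: Vec<[u8; 32]>, // candidate hashes waiting to be signed
    oldest_tick: Option<u64>,  // tick at which the first candidate was queued
}

impl PendingApprovals {
    fn should_sign_now(&self, current_tick: u64) -> bool {
        // Sign either when enough candidates have accumulated...
        self.candidates.len() >= MAX_APPROVAL_COALESCE_COUNT
            // ...or when the oldest queued candidate has waited long enough.
            || self
                .oldest_tick
                .map_or(false, |t| current_tick.saturating_sub(t) >= MAX_APPROVAL_COALESCE_TICKS)
    }
}
```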
This should allow us to reduce the number of approval messages we have
to create/send/verify. The parameters are configurable, so we should
find some values that balance:
- Security of the network: Delaying the broadcast of an approval
shouldn't put finality at risk, and to make sure that never happens we
won't delay sending a vote if we are past 2/3 of the no-show time.
- Scalability of the network: MAX_APPROVAL_COALESCE_COUNT = 1 &
MAX_APPROVALS_COALESCE_TICKS = 0 is what we have now, and we know from
the measurements we did on Versi that it bottlenecks
approval-distribution/approval-voting when the number of validators and
parachains is increased significantly.
- Block storage: In case of disputes we have to import these votes on
chain, and that increases the necessary storage by
MAX_APPROVAL_COALESCE_COUNT * CandidateHash per vote. Given that
disputes are not the normal mode of operation for the network and we will
limit MAX_APPROVAL_COALESCE_COUNT to single-digit numbers, this should be
good enough. Alternatively, we could try to create a better way to store
this on-chain through indirection, if that's needed.
## Other fixes:
- Fixed the fact that we were sending random assignments to
non-validators; that was wrong because they won't do anything with them
and won't gossip them either, since they do not have a grid topology set,
so we would waste the random assignments.
- Added metrics to be able to debug potential no-shows and
mis-processing of approvals/assignments.
## TODO:
- [x] Get feedback, that this is moving in the right direction. @ordian
@sandreim @eskimor @burdges, let me know what you think.
- [x] More and more testing.
- [x] Test in versi.
- [x] Make MAX_APPROVAL_COALESCE_COUNT &
MAX_APPROVAL_COALESCE_WAIT_MILLIS a parachain host configuration.
- [x] Make sure the backwards compatibility works correctly
- [x] Make sure this direction is compatible with other streams of work:
https://github.com/paritytech/polkadot-sdk/issues/635 &
https://github.com/paritytech/polkadot-sdk/issues/742
- [x] Final versi burn-in before merging
---------
Signed-off-by: Alexandru Gheorghe <alexandru.gheorghe@parity.io>
Using taplo, this fixes all our broken and inconsistent TOML formatting
and adds CI to keep it tidy.
If people want we can customise the format rules as described here
https://taplo.tamasfe.dev/configuration/formatter-options.html
@ggwpez, I suggest zepter is used only for checking that features are
propagated, leaving formatting to taplo to avoid duplicate work and
conflicts.
TODO
- [x] Use `exclude = [...]` syntax in taplo file to ignore zombienet
tests instead of deleting the dir
---------
Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
Co-authored-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
Co-authored-by: Bastian Köcher <git@kchr.de>
In gossip-support, we shuffled the list of authorities using the
`rand::seq::SliceRandom::shuffle`. This function's behavior is
unspecified beyond being `O(n)` and could change in the future, leading
to network issues between nodes using different shuffling algorithms. In
practice, the implementation was a Fisher-Yates shuffle. This PR
replaces the call with a re-implementation of Fisher-Yates and adds a
test to ensure the behavior is the same between the two at the moment.
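
For illustration, a minimal Fisher-Yates sketch; the real
re-implementation in gossip-support supplies its own randomness, which
this toy version leaves to the caller:

```rust
/// Shuffle `items` in place with the classic Fisher-Yates algorithm.
/// `rand_below(n)` must return a uniformly random index in `0..n`.
fn fisher_yates_shuffle<T>(items: &mut [T], mut rand_below: impl FnMut(usize) -> usize) {
    for i in (1..items.len()).rev() {
        // Swap element `i` with a uniformly chosen element at index 0..=i.
        let j = rand_below(i + 1);
        items.swap(i, j);
    }
}
```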
This commit introduces a new concept called `NotificationService` which
allows Polkadot protocols to communicate with the underlying
notification protocol implementation directly, without routing events
through `NetworkWorker`. This implies that each protocol has its own
service which it uses to communicate with remote peers and that each
`NotificationService` is unique with respect to the underlying
notification protocol, meaning `NotificationService` for the transaction
protocol can only be used to send and receive transaction-related
notifications.
The `NotificationService` concept introduces two additional benefits:
* allow protocols to start using custom handshakes
* allow protocols to accept/reject inbound peers
Previously the validation of inbound connections was solely the
responsibility of `ProtocolController`. This caused issues with light
peers and `SyncingEngine` as `ProtocolController` would accept more
peers than `SyncingEngine` could accept which caused peers to have
differing views of their own states. `SyncingEngine` would reject excess
peers but these rejections were not properly communicated to those peers
causing them to assume that they were accepted.
With `NotificationService`, the local handshake is not sent to the remote
peer if that peer is rejected, which allows it to detect that it was
rejected.
This commit also deprecates the use of `NetworkEventStream` for all
notification-related events; going forward, only DHT events are provided
through `NetworkEventStream`. If protocols wish to follow each other's
events, they must introduce additional abstractions, as is done for the
GRANDPA and transactions protocols by following the syncing protocol
through `SyncEventStream`.
Fixes https://github.com/paritytech/polkadot-sdk/issues/512
Fixes https://github.com/paritytech/polkadot-sdk/issues/514
Fixes https://github.com/paritytech/polkadot-sdk/issues/515
Fixes https://github.com/paritytech/polkadot-sdk/issues/554
Fixes https://github.com/paritytech/polkadot-sdk/issues/556
---
These changes are transferred from
https://github.com/paritytech/substrate/pull/14197 but there are no
functional changes compared to that PR
---------
Co-authored-by: Dmitry Markin <dmitry@markin.tech>
Co-authored-by: Alexandru Vasile <60601340+lexnv@users.noreply.github.com>
Collators were previously reencoding the available data and checking the
erasure root.
Replace that with just checking the PoV hash, which consumes much less
CPU and takes less time.
We also don't need to check the `PersistedValidationData` hash, as
collators don't use it.
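
A minimal sketch of the cheaper check, assuming `sp_core::hashing::blake2_256`
is available for the PoV hash; the exact hashing and encoding are defined
by the candidate descriptor, not by this snippet:

```rust
use sp_core::hashing::blake2_256;

/// Illustrative only: compare the candidate descriptor's `pov_hash`
/// against a freshly computed hash of the (encoded) PoV bytes. Hashing
/// is far cheaper than re-running erasure coding to check the erasure root.
fn pov_matches(descriptor_pov_hash: [u8; 32], encoded_pov: &[u8]) -> bool {
    blake2_256(encoded_pov) == descriptor_pov_hash
}
```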
Reason:
https://github.com/paritytech/polkadot-sdk/issues/575#issuecomment-1806572230
After systematic chunks recovery is merged, collators will no longer do
any reed-solomon encoding/decoding, which has proven to be a major CPU
consumer.
Signed-off-by: alindima <alin@parity.io>
**_PR migrated from https://github.com/paritytech/polkadot/pull/6782_**
This PR will upgrade the network protocol to a new version, VStaging,
which will later be renamed to V3. This version introduces a new kind of
assignment certificate that will be used for tranche0 assignments.
Instead of issuing/importing one tranche0 assignment per candidate,
there will be just one certificate per relay chain block per validator.
However, we will not be sending out the new assignment certificates,
yet. So everything should work exactly as before. Once the majority of
the validators have been upgraded to the new protocol version we will
enable the new certificates (starting at a specific relay chain block)
with a new client update.
There are still a few things that need to be done:
- [x] Use bitfield instead of Vec<CandidateIndex>:
https://github.com/paritytech/polkadot/pull/6802
- [x] Fix existing approval-distribution and approval-voting tests
- [x] Fix bitfield-distribution and statement-distribution tests
- [x] Fix network bridge tests
- [x] Implement todos in the code
- [x] Add tests to cover new code
- [x] Update metrics
- [x] Remove the approval distribution aggression levels: TBD PR
- [x] Parachains DB migration
- [x] Test network protocol upgrade on Versi
- [x] Versi Load test
- [x] Add Zombienet test
- [x] Documentation updates
- [x] Fix for sending DistributeAssignment for each candidate claimed by
a v2 assignment (warning: Importing locally an already known assignment)
- [x] Fix AcceptedDuplicate
- [x] Fix DB migration so that we can still keep old data.
- [x] Final Versi burn in
---------
Signed-off-by: Andrei Sandu <andrei-mihail@parity.io>
Signed-off-by: Alexandru Gheorghe <alexandru.gheorghe@parity.io>
Co-authored-by: Alexandru Gheorghe <alexandru.gheorghe@parity.io>
While investigating some DB migrations that make node startup fail, I
noticed that the node wasn't exiting and that the log files were growing
exponentially, until my whole system was freezing, which made it really
hard to actually find out why it was failing in the first place.
E.g:
```
ls -lh /tmp/zombie-01a04c2a2c0265d85f6440cf01c0f44a_-51319-uyggzuD4wEpV/bob.log
32,6G oct 27 11:16 /tmp/zombie-01a04c2a2c0265d85f6440cf01c0f44a_-51319-uyggzuD4wEpV/bob.log
```
This was happening because the following errors were being printed
continuously without the subsystem main loop exiting:
From dispute-coordinator:
```
WARN tokio-runtime-worker parachain::dispute-coordinator: error=Subsystem(Generated(Context("Signal channel is terminated and empty.")))
```
From availability recovery:
```
Erasure task channel closed. Node shutting down ?
```
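
A self-contained toy of the intended behavior: when the channel reports a
terminal error, the loop exits instead of logging forever (names are
illustrative, not the real subsystem API):

```rust
use std::sync::mpsc;

fn main() {
    let (tx, rx) = mpsc::channel::<u32>();
    drop(tx); // simulate "Signal channel is terminated and empty"

    loop {
        match rx.recv() {
            Ok(signal) => println!("handling signal {signal}"),
            Err(err) => {
                // Previously the error was only logged and the loop kept
                // spinning; now we propagate the shutdown instead.
                eprintln!("signal channel closed ({err}), shutting down");
                break;
            }
        }
    }
}
```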
Signed-off-by: Alexandru Gheorghe <alexandru.gheorghe@parity.io>
Validators check whether async backing is enabled by checking the version
of the runtime API. Once the runtime API is upgraded by a runtime
upgrade, validators also enable the async backing logic. However, just
because async backing is enabled doesn't mean that all collators and
parachain runtimes have upgraded. This pull request fixes an issue with
advertising collations to a relay chain that has async backing enabled
while the collator is still using the old networking protocol. The
implementation is actually backwards compatible, as we cannot expect
everyone to upgrade immediately; however, the collation advertisement
logic was requiring V2 networking messages once async backing was
enabled, which was wrong. This is now fixed by this pull request.
Closes: https://github.com/paritytech/polkadot-sdk/issues/1923
---------
Co-authored-by: eskimor <eskimor@users.noreply.github.com>
# Description
In a couple of cases, there were links pointing to the w3f github pages
domain. In other instances, there were links pointing to the old
polkadot repo's github pages. Both of these are now pointing to the
relevant links in
https://paritytech.github.io/polkadot-sdk/book/index.html.
These changes were made specifically because the w3f github pages
returns a 404, and while fixing the links, the old polkadot repo links
were touched up as well even if they do redirect properly.
This shouldn't affect anything as these are documentation link changes
only.
- Async-backing related primitives are stable `primitives::v6`
- Async-backing API is now part of `api_version(7)`
- It's enabled on Rococo and Westend runtimes
---------
Signed-off-by: Andrei Sandu <andrei-mihail@parity.io>
Co-authored-by: Andrei Sandu <54316454+sandreim@users.noreply.github.com>
Refactors availability-recovery strategies to allow for easily adding
new hotpaths and failover mechanisms.
The new interface allows chaining multiple `RecoveryStrategy`-es
together, to cleanly express the relationship between them and share
state and code where necessary/possible.
This was done in order to aid in implementing new hotpaths like
[systematic chunks
recovery](https://github.com/paritytech/polkadot-sdk/issues/598) and
[fetching from approval
checkers](https://github.com/paritytech/polkadot-sdk/issues/575).
Thanks to this design, intermediate state can be shared between the
strategies. For example, if the systematic chunks recovery retrieved
less than the needed amount of chunks, pass them over to the next
FetchChunks strategy, which will only need to recover the remaining
number of chunks.
Draft example of how a systematic chunk recovery strategy would look:
https://github.com/paritytech/polkadot-sdk/commit/667d870bdf1470525d66c13929d5eac7249dd995
(notice how easy it was to add and reuse code)
Note that this PR doesn't itself add any new strategy; it should fully
preserve backwards compatibility in terms of functionality. Follow-up PRs
adding new strategies will come.
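
A simplified sketch of the chaining idea; the trait and state here are
placeholders for the real `RecoveryStrategy` interface:

```rust
// Shared state flows from one strategy to the next, so a later strategy
// only has to fetch whatever is still missing.
struct RecoveryState {
    recovered_chunks: usize,
    needed_chunks: usize,
}

trait RecoveryStrategy {
    /// Try to make progress; return `true` if recovery is complete.
    fn run(&mut self, state: &mut RecoveryState) -> bool;
}

fn recover(strategies: &mut [Box<dyn RecoveryStrategy>], state: &mut RecoveryState) -> bool {
    for strategy in strategies.iter_mut() {
        // Each strategy sees the chunks already fetched by the previous one.
        if strategy.run(state) {
            return true;
        }
    }
    false
}
```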
The futures channels that are used by default have a side effect:
cloning a `Sender` effectively increases the capacity of the bounded
channel by one. This PR fixes the undesired removal of backpressure that
was caused by #1409. The issue was discovered by @sandreim during Versi
testing and needs to be treated as critical; #1409 should not be included
in any release without this reversion. This PR restores the original
behaviour.
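
A minimal demonstration of that side effect with `futures::channel::mpsc`
(a buffer of 0 means the guaranteed capacity equals the number of senders):

```rust
use futures::channel::mpsc;
use futures::SinkExt;

fn main() {
    futures::executor::block_on(async {
        // With a buffer of 0, capacity comes only from the senders' slots.
        let (mut tx_a, _rx) = mpsc::channel::<u32>(0);
        // Cloning the sender grants one extra guaranteed slot...
        let mut tx_b = tx_a.clone();
        // ...so both sends complete even though nothing has been consumed yet.
        tx_a.send(1).await.unwrap();
        tx_b.send(2).await.unwrap();
    });
}
```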
This PR addresses multiple pending issues:
* [x] Update orchestra to the recent version and test how the node
performs
* [x] Add some useful metrics for outbound network bridge
* [x] Try to send incoming network requests to all subsystems without
blocking on some particular subsystem in that loop
* [x] Fix all incompatibilities between orchestra and polkadot code
(e.g. malus node)
* polkadot: propagate UnpinHandle to ActiveLeafUpdate
Also extract the leaf creation for tests
into a common function.
* dispute-coordinator: try pinned blocks for slashing
* apparently 1.72 is smarter than 1.70
* address nits
* rename fresh_leaf to new_leaf