With the deprecation of Rococo, Encointer needs a new staging
environment. Paseo will be Polkadot-focused and Westend Kusama-focused,
so we propose to use Westend.
## Problem
During the bumping of the `polkadot-fellows` repository to
`polkadot-sdk@1.6.0`, I encountered a situation where the benchmarks
`teleport_assets` and `reserve_transfer_assets` in AssetHubKusama
started to fail. This issue arose due to a decreased ED balance for
AssetHubs introduced
[here](https://github.com/polkadot-fellows/runtimes/pull/158/files#diff-80668ff8e793b64f36a9a3ec512df5cbca4ad448c157a5d81abda1b15f35f1daR213),
and also because of a [missing CI
pipeline](https://github.com/polkadot-fellows/runtimes/issues/197) to
check the benchmarks, which went unnoticed.
These benchmarks expect the `caller` to have enough:
1. balance to transfer (BTT)
2. balance for paying delivery (BFPD).
So the initial balance was calculated as `ED * 100`, which seems
reasonable:
```
const ED_MULTIPLIER: u32 = 100;
let balance = existential_deposit.saturating_mul(ED_MULTIPLIER.into());
```
The problem arises when the price for delivery is 100 times higher than
the existential deposit. In other words, when `ED * 100` does not cover
`BTT` + `BFPD`.
I checked AHR/AHW/AHK/AHP, and only AssetHubKusama has this problem:
```
ED: 3333333
calculated price to parent delivery: 1031666634 (from xcm logs from the benchmark)
---
3333333 * 100 - BTT(3333333) - BFPD(1031666634) = −701666667
```
which results in the error:
```
2024-02-23 09:19:42 Unable to charge fee with error Module(ModuleError { index: 31, error: [17, 0, 0, 0], message: Some("FeesNotMet") })
Error: Input("Benchmark pallet_xcm::reserve_transfer_assets failed: FeesNotMet")
```
## Solution
The benchmarks `teleport_assets` and `reserve_transfer_assets` were
fixed by removing `ED * 100` and replacing it with `DeliveryHelper`
logic, which calculates the (almost real) price for delivery and sets it
along with the existential deposit as the initial balance for the
account used in the benchmark.
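As a rough illustration of why the old funding scheme fails on AssetHubKusama and why deriving the balance from the delivery price fixes it, here is a small standalone sketch using the numbers quoted above; the variable names and the exact funding expression are illustrative, not the actual benchmark code:
```rust
fn main() {
    // Numbers taken from the AssetHubKusama logs quoted above.
    let ed: i128 = 3_333_333;               // existential deposit
    let delivery_fee: i128 = 1_031_666_634; // calculated price to parent delivery
    let btt = ed;                           // balance to transfer in the benchmark

    // Old funding scheme: `ED * 100` goes negative, hence `FeesNotMet`.
    let old_funding = ed * 100;
    println!("old leftover: {}", old_funding - btt - delivery_fee); // -701666667

    // New funding scheme (the DeliveryHelper idea): fund the caller with the
    // (almost real) delivery price plus the existential deposit and the amount
    // to transfer, so the account always covers BTT + BFPD.
    let new_funding = ed + btt + delivery_fee;
    println!("new leftover: {}", new_funding - btt - delivery_fee); // 3333333 (the ED remains)
}
```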
## TODO
- [ ] patch for 1.6 -
https://github.com/paritytech/polkadot-sdk/pull/3466
- [ ] patch for 1.7 -
https://github.com/paritytech/polkadot-sdk/pull/3465
- [ ] patch for 1.8 - TODO: PR
---------
Co-authored-by: Francisco Aguirre <franciscoaguirreperez@gmail.com>
Add more debug logs to understand whether statement-distribution is in a bad
state; this should be useful for debugging
https://github.com/paritytech/polkadot-sdk/issues/3314 on production
networks.
Additionally, increase the number of parallel requests the subsystem should
make, since I noticed that requests take around 100ms on Kusama, and the
limit of 5 parallel requests was picked mostly at random; there is no reason
why we cannot do more than that.
---------
Signed-off-by: Alexandru Gheorghe <alexandru.gheorghe@parity.io>
Co-authored-by: ordian <write@reusable.software>
Fixes https://github.com/paritytech/polkadot-sdk/issues/3144
Builds on top of https://github.com/paritytech/polkadot-sdk/pull/3229
### Summary
Some preparations for the runtime to support elastic scaling, guarded by the
node-features configuration bit `FeatureIndex::ElasticScalingMVP`. This PR
introduces a per-candidate `CoreIndex`, but does so in a hacky way to
avoid changing the `CandidateCommitments` and `CandidateReceipts` primitives
and the networking protocols.
#### Including `CoreIndex` in `BackedCandidate`
If the `ElasticScalingMVP` feature bit is enabled then
`BackedCandidate::validator_indices` is extended by 8 bits.
The value stored in these bits represents the assumed core index for the
candidate.
It is a temporary solution that works by creating a mapping from
`BackedCandidate` to `CoreIndex`, assuming the `CoreIndex` can be
discovered by checking which validator group the validator that
signed the statement belongs to.
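A minimal, self-contained sketch of the bitfield trick (this is not the actual `polkadot-primitives` encoding, just an illustration of appending and stripping 8 core-index bits on the validator bitfield):
```rust
// Append 8 extra bits carrying the assumed core index to the validator
// bitfield, and strip them again when reading the candidate back.
fn inject_core_index(validator_indices: &mut Vec<bool>, core_index: u8) {
    for bit in 0..8 {
        validator_indices.push(((core_index >> bit) & 1) == 1);
    }
}

fn extract_core_index(validator_indices: &mut Vec<bool>) -> u8 {
    let tail = validator_indices.split_off(validator_indices.len() - 8);
    tail.iter()
        .enumerate()
        .fold(0u8, |acc, (i, bit)| acc | ((*bit as u8) << i))
}

fn main() {
    // Bitfield of which validators in the group signed the candidate.
    let mut indices = vec![true, false, true];
    inject_core_index(&mut indices, 3);
    assert_eq!(extract_core_index(&mut indices), 3);
    assert_eq!(indices, vec![true, false, true]);
}
```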
TODO:
- [x] fix tests
- [x] add new tests
- [x] Bump runtime API for Kusama, so we have that node features thing!
-> https://github.com/polkadot-fellows/runtimes/pull/194
---------
Signed-off-by: Andrei Sandu <andrei-mihail@parity.io>
Signed-off-by: alindima <alin@parity.io>
Co-authored-by: alindima <alin@parity.io>
The `fee` should be calculated with the reanchored asset, otherwise it
could lead to a failure where the set-aside fee ends up not being
enough.
@acatangiu
---------
Co-authored-by: Adrian Catangiu <adrian@parity.io>
This introduces a check to ensure that the parachain code matches the
validation code stored in the relay chain state. If not, it will print a
warning. This should be mainly useful for parachain builders to make
sure they have set up everything correctly.
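Conceptually, the check boils down to something like the following sketch; the function, parameter names, and warning text are illustrative, not the actual node code:
```rust
// Compare the locally built parachain code against the validation code hash
// stored in the relay-chain state and warn on mismatch.
fn warn_on_code_mismatch(
    local_code: &[u8],
    on_chain_validation_code_hash: &[u8; 32],
    hash: impl Fn(&[u8]) -> [u8; 32],
) {
    if &hash(local_code) != on_chain_validation_code_hash {
        eprintln!(
            "WARNING: the parachain code does not match the validation code \
             registered on the relay chain; check your node/runtime setup"
        );
    }
}

fn main() {
    // Toy hash for demonstration only: XOR all bytes into the first slot.
    let toy_hash = |code: &[u8]| {
        let mut out = [0u8; 32];
        out[0] = code.iter().fold(0u8, |acc, b| acc ^ b);
        out
    };
    warn_on_code_mismatch(b"parachain code", &[0u8; 32], toy_hash);
}
```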
First step in implementing
https://github.com/paritytech/polkadot-sdk/issues/3144
### Summary of changes
- switch statement `Table` candidate mapping from `ParaId` to
`CoreIndex`
- introduce experimental `InjectCoreIndex` node feature.
- determine and assume a `CoreIndex` for a candidate based on the statement's
validator index. If the signature is valid, it means the validator controls
that validator index, and we can easily map it to a validator
group/core.
- introduce a temporary provisioner fix until we fully enable elastic
scaling in the subsystem. The fix ensures we don't fetch the same
backable candidate when calling `get_backable_candidate` for each core.
TODO:
- [x] fix backing tests
- [x] fix statement table tests
- [x] add new test
---------
Signed-off-by: Andrei Sandu <andrei-mihail@parity.io>
Signed-off-by: alindima <alin@parity.io>
Co-authored-by: alindima <alin@parity.io>
The rationale behind this is that it may be useful for some users to
disable RPC batch requests entirely or to limit them by length instead of
by the total size in bytes of the batch.
This PR adds two new CLI options:
```
--rpc-disable-batch-requests - disable batch requests on the server
--rpc-max-batch-request-len <LEN> - limit batches to LEN on the server.
```
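Conceptually, the two flags map onto a batch-request policy roughly as in the sketch below; the enum and function names are illustrative and not the actual server API:
```rust
/// Illustrative batch-request policy the two new flags could map to.
#[derive(Debug)]
enum BatchRequestConfig {
    Unlimited,
    Disabled,
    MaxLen(u32),
}

fn batch_config(disable_batches: bool, max_batch_len: Option<u32>) -> BatchRequestConfig {
    match (disable_batches, max_batch_len) {
        (true, _) => BatchRequestConfig::Disabled,
        (false, Some(len)) => BatchRequestConfig::MaxLen(len),
        (false, None) => BatchRequestConfig::Unlimited,
    }
}

fn main() {
    // --rpc-disable-batch-requests
    println!("{:?}", batch_config(true, None));
    // --rpc-max-batch-request-len 64
    println!("{:?}", batch_config(false, Some(64)));
}
```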
Lifting some more dependencies to the workspace. Just using the
most-often updated ones for now.
It can be reproduced locally.
```sh
# First you can check if there would be semver incompatible bumps (looks good in this case):
$ zepter transpose dependency lift-to-workspace --ignore-errors syn quote thiserror "regex:^serde.*"
# Then apply the changes:
$ zepter transpose dependency lift-to-workspace --version-resolver=highest syn quote thiserror "regex:^serde.*" --fix
# And format the changes:
$ taplo format --config .config/taplo.toml
```
---------
Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
## Xcm changes:
- Fix `pallet_xcm::execute`: move the logic into the `ExecuteController`
so it can be shared with anything that implements that trait.
- Make `ExecuteController::execute` return `DispatchErrorWithPostInfo`
instead of `DispatchError`, so that we don't charge the full
`max_weight` provided if the execution is incomplete (useful for
`force_batch` or contract calls); see the sketch after this list.
- Fix docstring for `pallet_xcm::execute`, to reflect the changes from
#2405
- Update the signature for `ExecuteController::execute`, we don't need
to return the `Outcome` anymore since we only care about
`Outcome::Complete`
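To illustrate the "don't charge the full `max_weight`" point from the list above, the general pattern is to report the actually consumed weight back through `PostDispatchInfo`. This is a hedged sketch assuming the standard `frame_support` types, not the real controller code:
```rust
use frame_support::dispatch::{DispatchResultWithPostInfo, Pays, PostDispatchInfo};
use frame_support::weights::Weight;

// Report the weight actually used so the dispatcher refunds the difference
// from the pre-charged `max_weight` when execution is incomplete.
fn execute_with_refund(max_weight: Weight, actually_used: Weight) -> DispatchResultWithPostInfo {
    debug_assert!(actually_used.all_lte(max_weight));
    Ok(PostDispatchInfo { actual_weight: Some(actually_used), pays_fee: Pays::Yes })
}
```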
## Contracts changes:
- Update the host fn `xcm_execute`: we don't need to write the `Outcome` to
the sandbox memory anymore. This was also not charged before, so this
fixes that as well.
- One of the issues was that the dry run of a contract that calls
`xcm_execute` would exhaust the `gas_limit`.
This is because `XcmExecuteController::execute` takes a `max_weight`
argument, and since we don't want the user to specify it manually, we
were passing everything left by pre-charging
`ctx.ext.gas_meter().gas_left()`.
- To fix it, I added a `fn influence_lowest_limit` on the `Token` trait
and made it return false for `RuntimeCost::XcmExecute`.
- Got rid of the `RuntimeToken` indirection; we can just use
`RuntimeCost` directly.
---------
Co-authored-by: command-bot <>
Adds the coretime and on demand pallets to enable Coretime on Westend.
In order for the migration to run successfully, we need the
Broker/Coretime parachain to be live.
TODO:
- [ ] Broker parachain is live
https://github.com/paritytech/polkadot-sdk/pull/3272
---------
Co-authored-by: command-bot <>
Co-authored-by: Bastian Köcher <info@kchr.de>
~The previous fix was actually incomplete because we update the
authorities only in the situation where we decided to reconnect because
we had a low-connectivity issue. Now the problem is that
`update_authority_ids` uses the list of connected peers, so on restart that
does not contain anything, so calling it immediately after
`issue_connection_request` won't detect all authorities, so we need to
also check every block as the comment said, but that did not match the
code.~
Actually, the fix was correct. The flow is as follows: if more than 1/3 of
the authorities cannot be resolved, we set `last_failure` and call
`ConnectToResolvedValidators`.
We will call `UpdateAuthorities` for all the authorities already connected
and for which we already know the address; for the ones that connect
later, `PeerConnected` will have the `AuthorityId` field set,
because it is already known, so approval-distribution will update its
cached topology.
---------
Signed-off-by: Alexandru Gheorghe <alexandru.gheorghe@parity.io>
Co-authored-by: Alexander Samusev <41779041+alvicsam@users.noreply.github.com>
This is a follow-up to the `im-online` pallet removal that cleans up
its off-chain storage. It must not be merged before #2265 is enacted.
Related: #1964
---------
Co-authored-by: Bastian Köcher <git@kchr.de>
Add RPC server rate limiting, which can be enabled via the CLI option
`--rpc-rate-limit <calls/per minute>`.
Resolves first part of
https://github.com/paritytech/polkadot-sdk/issues/3028
//cc @PierreBesson @kogeler you might be interested in this one
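A minimal sketch of a calls-per-minute limiter of the kind the flag suggests (a simple fixed-window counter per connection; the actual middleware in the RPC server may be implemented differently):
```rust
use std::time::{Duration, Instant};

/// Fixed-window rate limit: at most `max_per_minute` calls per 60-second window.
struct RateLimit {
    max_per_minute: u32,
    window_start: Instant,
    used: u32,
}

impl RateLimit {
    fn new(max_per_minute: u32) -> Self {
        Self { max_per_minute, window_start: Instant::now(), used: 0 }
    }

    /// Returns `true` if the call is allowed under the current window.
    fn try_call(&mut self) -> bool {
        if self.window_start.elapsed() >= Duration::from_secs(60) {
            // Window expired: start a fresh one.
            self.window_start = Instant::now();
            self.used = 0;
        }
        if self.used < self.max_per_minute {
            self.used += 1;
            true
        } else {
            false
        }
    }
}

fn main() {
    let mut limit = RateLimit::new(2);
    assert!(limit.try_call());
    assert!(limit.try_call());
    assert!(!limit.try_call()); // third call within the same minute is rejected
}
```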
---------
Co-authored-by: James Wilson <james@jsdw.me>
Co-authored-by: Xiliang Chen <xlchen1291@gmail.com>
Fixes #3014
This PR adds retry mechanics to `pallet-scheduler`, as described in the
issue above.
Users can now set a retry configuration for a task so that, in case its
scheduled run fails, it will be retried after a number of blocks, for a
specified number of times or until it succeeds.
If a retried task runs successfully before running out of retries, its
remaining retry counter will be reset to the initial value. If a retried
task runs out of retries, it will be removed from the schedule.
Tasks which need to be scheduled for a retry are still subject to weight
metering and agenda space, same as a regular task. Periodic tasks will
have their periodic schedule put on hold while the task is retrying.
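A hedged sketch of the retry bookkeeping described above; the struct, field, and function names are illustrative and not necessarily the pallet's exact ones:
```rust
/// Retry configuration attached to a scheduled task.
struct RetryConfig<BlockNumber> {
    /// How many retries the task was originally given.
    total_retries: u8,
    /// Retries still available; restored to `total_retries` after a successful run.
    remaining: u8,
    /// How many blocks to wait before the next retry attempt.
    period: BlockNumber,
}

/// Decide what to do after one execution attempt of a retryable task.
/// Returns the block at which to retry, or `None` if no retry is scheduled.
fn after_attempt(cfg: &mut RetryConfig<u32>, now: u32, succeeded: bool) -> Option<u32> {
    if succeeded {
        // Successful run: restore the full retry budget for future failures.
        cfg.remaining = cfg.total_retries;
        None
    } else if cfg.remaining > 0 {
        // Failed run with retries left: schedule another attempt `period` blocks later.
        cfg.remaining -= 1;
        Some(now + cfg.period)
    } else {
        // Out of retries: the task is dropped from the schedule.
        None
    }
}

fn main() {
    let mut cfg = RetryConfig { total_retries: 3, remaining: 3, period: 10u32 };
    assert_eq!(after_attempt(&mut cfg, 100, false), Some(110)); // failed: retry at block 110
    assert_eq!(after_attempt(&mut cfg, 110, true), None);       // succeeded: counter reset
    assert_eq!(cfg.remaining, cfg.total_retries);
}
```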
---------
Signed-off-by: georgepisaltu <george.pisaltu@parity.io>
Co-authored-by: command-bot <>
This PR implements an (optional) cap of the era inflation that is
allocated to staking rewards. The remainder is minted directly into the
[`RewardRemainder`](https://github.com/paritytech/polkadot-sdk/blob/fb0fd3e62445eb2dee2b2456a0c8574d1ecdcc73/substrate/frame/staking/src/pallet/mod.rs#L160)
account, which is the treasury pot account in Polkadot and Kusama.
The staking pallet now has a percent storage item, `MaxStakersRewards`,
which defines the max percentage of the era inflation that should be
allocated to staking rewards. The remaining era inflation (i.e.
`remaining = max_era_payout - staking_payout.min(staking_payout *
MaxStakersRewards)`) is minted directly into the treasury.
The `MaxStakersRewards` can be set by a privileged origin through the
`set_staking_configs` extrinsic.
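In rough terms, the capping arithmetic looks like the following standalone sketch; the numbers, names, and the exact capping expression are illustrative (following the prose above, where `MaxStakersRewards` caps the stakers' share of the era inflation), not the pallet's code:
```rust
fn main() {
    // Example era values (illustrative numbers only).
    let max_era_payout: u128 = 1_000_000;
    let staking_payout: u128 = 950_000;
    let max_stakers_rewards_pct: u128 = 90; // MaxStakersRewards = 90%

    // Cap the share of the era inflation that goes to stakers.
    let stakers = staking_payout.min(max_era_payout * max_stakers_rewards_pct / 100);
    // Everything left over is minted into RewardRemainder (the treasury pot).
    let remaining = max_era_payout - stakers;

    println!("stakers: {stakers}, treasury: {remaining}");
}
```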
**To finish**
- [x] run benchmarks for westend-runtime
Replaces https://github.com/paritytech/polkadot-sdk/pull/1483
Closes https://github.com/paritytech/polkadot-sdk/issues/403
---------
Co-authored-by: command-bot <>
Close #2992
Breaking changes:
- rpc server grafana metric `substrate_rpc_requests_started` is removed
(not possible to implement anymore)
- rpc server grafana metric `substrate_rpc_requests_finished` is removed
(not possible to implement anymore)
- if RPC server WS ping/pong messages are not ACK:ed within 30 seconds more
than three times, the connection will be closed
Added:
- rpc server grafana metric `substrate_rpc_sessions_time` is added to
get the duration of each websocket session
Currently, anyone can register code that exceeds the code size limit
when performing an upgrade through the registrar. This PR fixes that and
adds a new test to cover it.
cc @bkchr @eskimor
I found out during the cleanup of this deprecation message in the
`polkadot-fellows` repository that we deprecated `CurrencyAdapter`
without making the recommended changes.
## TODO
- [ ] fix `polkadot-fellows` bump to 1.6.0
https://github.com/polkadot-fellows/runtimes/pull/159
---------
Co-authored-by: Francisco Aguirre <franciscoaguirreperez@gmail.com>
Changes (partial https://github.com/paritytech/polkadot-sdk/issues/994):
- Set log to `0.4.20` everywhere
- Lift `log` to the workspace
Starting with a simpler one after seeing
https://github.com/paritytech/polkadot-sdk/pull/2065 from @jsdw.
This sets the `default-features` to `false` in the root and then
overwrites that in each crate to its original value. This is necessary
since otherwise the `default` features are additive and it's impossible
to disable them in the crate again once they are enabled in the
workspace.
I am using a tool to do this, so it's mostly a test to see that it works
as expected.
---------
Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
On grid distribution, messages have two paths for reaching a node, so
there is the possibility of a race when two peers send each other the
same statement around the same time. The statement's local_knowledge will
tell us that the peer should not have sent the statement because we sent
it to the peer.
Fix it by also keeping track of the statements we received from a
given peer and penalizing it only if it sends us the same statement more
than once.
Fixes: https://github.com/paritytech/polkadot-sdk/issues/2346
Additionally, use different `Cost` labels for different paths to make
it easier to debug things.
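A simplified sketch of the bookkeeping rule described above; the types are placeholders, and the real statement-distribution code is more involved:
```rust
use std::collections::{HashMap, HashSet};

type PeerId = u64;               // placeholder for the real peer identifier
type StatementFingerprint = u64; // placeholder for a statement's fingerprint

#[derive(Default)]
struct ReceivedStatements {
    per_peer: HashMap<PeerId, HashSet<StatementFingerprint>>,
}

impl ReceivedStatements {
    /// Record a statement received from `peer`. Returns `true` if the peer
    /// should be penalized, i.e. it already sent us this exact statement;
    /// merely receiving a statement we already knew (e.g. because we sent it
    /// to the peer ourselves) is not punished.
    fn note_received(&mut self, peer: PeerId, statement: StatementFingerprint) -> bool {
        !self.per_peer.entry(peer).or_default().insert(statement)
    }
}

fn main() {
    let mut tracker = ReceivedStatements::default();
    assert!(!tracker.note_received(1, 42)); // first time from peer 1: fine
    assert!(tracker.note_received(1, 42));  // duplicate from the same peer: penalize
    assert!(!tracker.note_received(2, 42)); // same statement from another peer: fine
}
```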
---------
Signed-off-by: Alexandru Gheorghe <alexandru.gheorghe@parity.io>
Add [forklift
caching](https://gitlab.parity.io/parity/infrastructure/ci_cd/forklift/forklift)
to the remaining jobs
by .sh and .py scripts:
- cargo-check-each-crate x6 (`.gitlab/check-each-crate.py`)
- build-linux-stable (`polkadot/scripts/build-only-wasm.sh`)
by before_script:
- build-linux-substrate
- build-subkey-linux (with `.build-subkey` job)
- cargo-check-benches x2
**To disable the feature, set the FORKLIFT_BYPASS variable to true in
[project settings in
gitlab](https://gitlab.parity.io/parity/mirrors/polkadot-sdk/-/settings/ci_cd)**
(forklift now handles FORKLIFT_BYPASS by itself)
The `TotalValueLocked` storage value in the nomination pools pallet may get
out of sync if the staking pallet does implicit withdrawal of unlocking
chunks belonging to a bonded pool stash. This fix is based on a new
method in the `OnStakingUpdate` trait, `on_withdraw`, which allows the
nomination pools pallet to adjust the `TotalValueLocked` every time
there is an implicit or explicit withdrawal from a bonded pool's stash.
This PR also adds a migration that checks and updates the on-chain TVL
if it got out of sync due to the bug this PR fixes.
**Changes to `trait OnStakingUpdate`**
In order for staking to notify the nomination pools pallet that chunks
were withdrawn, we add a new method, `on_withdraw`, to the
`OnStakingUpdate` trait. The nomination pools pallet filters the
withdrawals that are related to bonded pool accounts and updates the
`TotalValueLocked` accordingly.
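A hedged sketch of the hook's shape and how a pools-like implementation might react; the real `OnStakingUpdate` trait lives in `sp-staking` and carries more methods and context than shown here, and the helper functions are placeholders:
```rust
/// Simplified stand-in for the staking update notification trait.
trait OnStakingUpdate<AccountId, Balance> {
    /// Called whenever `amount` is withdrawn from `stash`, whether the
    /// withdrawal was explicit or implicit (e.g. unlocking chunks removed
    /// as a side effect of another operation).
    fn on_withdraw(_stash: &AccountId, _amount: Balance) {}
}

/// Only withdrawals from bonded pool accounts reduce the tracked TVL.
struct Pools;

impl OnStakingUpdate<u64, u128> for Pools {
    fn on_withdraw(stash: &u64, amount: u128) {
        if is_bonded_pool_account(stash) {
            reduce_total_value_locked(amount);
        }
    }
}

// Placeholders for the pool-side bookkeeping.
fn is_bonded_pool_account(_stash: &u64) -> bool { true }
fn reduce_total_value_locked(_amount: u128) {}

fn main() {
    <Pools as OnStakingUpdate<u64, u128>>::on_withdraw(&42, 1_000);
}
```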
**Others**
- Adds try-state checks to the EPM/staking e2e tests
- Adds tests for auto withdrawing in the context of nomination pools
**To-do**
- [x] check if we need a migration to fix the current `TotalValueLocked`
(run try-runtime)
- [x] migrations to fix the current on-chain TVL value
✅ **Kusama**:
```
TotalValueLocked: 99.4559 kKSM
TotalValueLocked (calculated) 99.4559 kKSM
```
⚠️ **Westend**:
```
TotalValueLocked: 18.4060 kWND
TotalValueLocked (calculated) 18.4050 kWND
```
**Polkadot**: TVL not released yet.
Closes https://github.com/paritytech/polkadot-sdk/issues/3055
---------
Co-authored-by: command-bot <>
Co-authored-by: Ross Bulat <ross@parity.io>
Co-authored-by: Dónal Murray <donal.murray@parity.io>
Preparation for https://github.com/paritytech/polkadot-sdk/issues/2664
Changes:
- Only require `Hash` instead of `Block` for the benchmarking
- Refactor DB types to do the same
## Integration
This breaking change can easily be integrated into your node via:
```patch
- cmd.run::<Block, ()>(config)
+ cmd.run::<HashingFor<Block>, ()>(config)
```
Status: waiting for CI checks
---------
Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
Co-authored-by: Bastian Köcher <git@kchr.de>
Co-authored-by: cheme <emericchevalier.pro@gmail.com>
This PR removes the configuration of subsystem benchmarks via CLI
arguments. After this, we keep configurations only in yaml files.
It also removes unnecessary code duplication:
1. Benchmark results are collected in a single struct.
2. The output of the results is prettified.
3. The result struct is used to save the output as yaml and store it in
artifacts in a CI job.
```
$ cargo run -p polkadot-subsystem-bench --release -- test-sequence --path polkadot/node/subsystem-bench/examples/availability_read.yaml | tee output.txt
$ cat output.txt
polkadot/node/subsystem-bench/examples/availability_read.yaml #1
Network usage, KiB total per block
Received from peers 510796.000 170265.333
Sent to peers 221.000 73.667
CPU usage, s total per block
availability-recovery 38.671 12.890
Test environment 0.255 0.085
polkadot/node/subsystem-bench/examples/availability_read.yaml #2
Network usage, KiB total per block
Received from peers 413633.000 137877.667
Sent to peers 353.000 117.667
CPU usage, s total per block
availability-recovery 52.630 17.543
Test environment 0.271 0.090
polkadot/node/subsystem-bench/examples/availability_read.yaml #3
Network usage, KiB total per block
Received from peers 424379.000 141459.667
Sent to peers 703.000 234.333
CPU usage, s total per block
availability-recovery 51.128 17.043
Test environment 0.502 0.167
```
```
$ cargo run -p polkadot-subsystem-bench --release -- --ci test-sequence --path polkadot/node/subsystem-bench/examples/availability_read.yaml | tee output.txt
$ cat output.txt
- benchmark_name: 'polkadot/node/subsystem-bench/examples/availability_read.yaml #1'
  network:
    - resource: Received from peers
      total: 509011.0
      per_block: 169670.33333333334
    - resource: Sent to peers
      total: 220.0
      per_block: 73.33333333333333
  cpu:
    - resource: availability-recovery
      total: 31.845848445
      per_block: 10.615282815
    - resource: Test environment
      total: 0.23582828799999941
      per_block: 0.07860942933333313
- benchmark_name: 'polkadot/node/subsystem-bench/examples/availability_read.yaml #2'
  network:
    - resource: Received from peers
      total: 411738.0
      per_block: 137246.0
    - resource: Sent to peers
      total: 351.0
      per_block: 117.0
  cpu:
    - resource: availability-recovery
      total: 18.93596025099999
      per_block: 6.31198675033333
    - resource: Test environment
      total: 0.2541994199999979
      per_block: 0.0847331399999993
- benchmark_name: 'polkadot/node/subsystem-bench/examples/availability_read.yaml #3'
  network:
    - resource: Received from peers
      total: 424548.0
      per_block: 141516.0
    - resource: Sent to peers
      total: 703.0
      per_block: 234.33333333333334
  cpu:
    - resource: availability-recovery
      total: 16.54178526900001
      per_block: 5.513928423000003
    - resource: Test environment
      total: 0.43960946299999537
      per_block: 0.14653648766666513
```
---------
Co-authored-by: Andrei Sandu <54316454+sandreim@users.noreply.github.com>
Supersedes https://github.com/paritytech/polkadot-sdk/pull/1245
This PR is a migration of the
https://github.com/paritytech/substrate/pull/14577.
The PR added associated types (`AddOrigin` & `RemoveOrigin`) to
`Config`. It allows you to decouple types and areas of responsibility,
since at the moment the same types are responsible for adding and
promoting (removing and demoting). This will improve the flexibility of
the pallet configuration.
```
/// The origin required to add a member.
type AddOrigin: EnsureOrigin<Self::RuntimeOrigin, Success = ()>;
/// The origin required to remove a member. The success value indicates the
/// maximum rank *from which* the removal may be.
type RemoveOrigin: EnsureOrigin<Self::RuntimeOrigin, Success = Rank>;
```
To achieve the backward compatibility, the users of the pallet can use
the old type via the new morph:
```
type AddOrigin = MapSuccess<Self::PromoteOrigin, Ignore>;
type RemoveOrigin = Self::DemoteOrigin;
```
---------
Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
Co-authored-by: PraetorP <praetorian281@gmail.com>
Co-authored-by: Pavel Orlov <45266194+PraetorP@users.noreply.github.com>
## Summary
Built on top of the tooling and ideas introduced in
https://github.com/paritytech/polkadot-sdk/pull/2528, this PR introduces
a synthetic benchmark for measuring and assessing the performance
characteristics of the approval-voting and approval-distribution
subsystems.
Currently this allows us to simulate the behaviour of these subsystems
based on the following dimensions:
```
TestConfiguration:
# Test 1
- objective: !ApprovalsTest
    last_considered_tranche: 89
    min_coalesce: 1
    max_coalesce: 6
    enable_assignments_v2: true
    send_till_tranche: 60
    stop_when_approved: false
    coalesce_tranche_diff: 12
    workdir_prefix: "/tmp"
    num_no_shows_per_candidate: 0
    approval_distribution_expected_tof: 6.0
    approval_distribution_cpu_ms: 3.0
    approval_voting_cpu_ms: 4.30
  n_validators: 500
  n_cores: 100
  n_included_candidates: 100
  min_pov_size: 1120
  max_pov_size: 5120
  peer_bandwidth: 524288000000
  bandwidth: 524288000000
  latency:
    min_latency:
      secs: 0
      nanos: 1000000
    max_latency:
      secs: 0
      nanos: 100000000
  error: 0
  num_blocks: 10
```
## The approach
1. We build a real overseer with the real implementations for
approval-voting and approval-distribution subsystems.
2. For a given network size, for each validator we pre-compute all
potential assignments and approvals it would send; because this is a
computation-heavy operation, the result is cached in a file on disk and
re-used if the generation parameters don't change.
3. The messages are then sent according to the configured parameters,
split into 3 main benchmarking scenarios.
## Benchmarking scenarios
### Best case scenario *approvals_throughput_best_case.yaml*
It sends to approval-distribution only the minimum required tranches
to gather the needed_approvals, so that a candidate is approved.
### Behaviour in the presence of no-shows *approvals_no_shows.yaml*
It sends the tranches needed to approve a candidate when we have a
maximum of *num_no_shows_per_candidate* tranches with no-shows for each
candidate.
### Maximum throughput *approvals_throughput.yaml*
It sends all the tranches for each block and measures the CPU used and
the network bandwidth needed by the approval-voting and
approval-distribution subsystems.
## How to run it
```
cargo run -p polkadot-subsystem-bench --release -- test-sequence --path polkadot/node/subsystem-bench/examples/approvals_throughput.yaml
```
## Evaluating performance
### Use the real subsystems metrics
If you follow the steps in
https://github.com/paritytech/polkadot-sdk/tree/master/polkadot/node/subsystem-bench#install-grafana
for installing Prometheus and Grafana locally, all real metrics for the
`approval-distribution`, `approval-voting` and overseer are available.
E.g.:
<img width="2149" alt="Screenshot 2023-12-05 at 11 07 46"
src="https://github.com/paritytech/polkadot-sdk/assets/49718502/cb8ae2dd-178b-4922-bfa4-dc37e572ed38">
<img width="2551" alt="Screenshot 2023-12-05 at 11 09 42"
src="https://github.com/paritytech/polkadot-sdk/assets/49718502/8b4542ba-88b9-46f9-9b70-cc345366081b">
<img width="2154" alt="Screenshot 2023-12-05 at 11 10 15"
src="https://github.com/paritytech/polkadot-sdk/assets/49718502/b8874d8d-632e-443a-9840-14ad8e90c54f">
<img width="2535" alt="Screenshot 2023-12-05 at 11 10 52"
src="https://github.com/paritytech/polkadot-sdk/assets/49718502/779a439f-fd18-4985-bb80-85d5afad78e2">
### Profile with pyroscope
1. Set up pyroscope following the steps in
https://github.com/paritytech/polkadot-sdk/tree/master/polkadot/node/subsystem-bench#install-pyroscope,
then run any of the benchmark scenarios with `--profile` as an
argument.
2. Open the pyroscope dashboard in grafana, e.g:
<img width="2544" alt="Screenshot 2024-01-09 at 17 09 58"
src="https://github.com/paritytech/polkadot-sdk/assets/49718502/58f50c99-a910-4d20-951a-8b16639303d9">
### Useful logs
1. Network bandwidth requirements:
```
Payload bytes received from peers: 503993 KiB total, 50399 KiB/block
Payload bytes sent to peers: 629971 KiB total, 62997 KiB/block
```
2. CPU usage by the approval-distribution/approval-voting subsystems.
```
approval-distribution CPU usage 84.061s
approval-distribution CPU usage per block 8.406s
approval-voting CPU usage 96.532s
approval-voting CPU usage per block 9.653s
```
3. Time passed until a given block is approved
```
Chain selection approved after 3500 ms hash=0x0101010101010101010101010101010101010101010101010101010101010101
Chain selection approved after 4500 ms hash=0x0202020202020202020202020202020202020202020202020202020202020202
```
### Using the benchmark to quantify improvements from
https://github.com/paritytech/polkadot-sdk/pull/1178 +
https://github.com/paritytech/polkadot-sdk/pull/1191
Using a versi node, we compare the scenario where all new optimisations
are disabled with a scenario where tranche0 assignments are sent in a
single message, and a conservative simulation where the coalescing of
approvals gives us just a 50% reduction in the number of messages we send.
Overall, what we see is a speedup of around 30-40% in the time it takes
to process the necessary messages and a 30-40% reduction in the
necessary bandwidth.
#### Best case scenario comparison (minimum required tranches sent).
Unoptimised
```
Number of blocks: 10
Payload bytes received from peers: 53289 KiB total, 5328 KiB/block
Payload bytes sent to peers: 52489 KiB total, 5248 KiB/block
approval-distribution CPU usage 6.732s
approval-distribution CPU usage per block 0.673s
approval-voting CPU usage 9.523s
approval-voting CPU usage per block 0.952s
```
vs Optimisation enabled
```
Number of blocks: 10
Payload bytes received from peers: 32141 KiB total, 3214 KiB/block
Payload bytes sent to peers: 37314 KiB total, 3731 KiB/block
approval-distribution CPU usage 4.658s
approval-distribution CPU usage per block 0.466s
approval-voting CPU usage 6.236s
approval-voting CPU usage per block 0.624s
```
#### Worst case: all tranches sent; very unlikely, happens when sharding
breaks.
Unoptimised
```
Number of blocks: 10
Payload bytes received from peers: 746393 KiB total, 74639 KiB/block
Payload bytes sent to peers: 729151 KiB total, 72915 KiB/block
approval-distribution CPU usage 118.681s
approval-distribution CPU usage per block 11.868s
approval-voting CPU usage 124.118s
approval-voting CPU usage per block 12.412s
```
vs optimised
```
Number of blocks: 10
Payload bytes received from peers: 503993 KiB total, 50399 KiB/block
Payload bytes sent to peers: 629971 KiB total, 62997 KiB/block
approval-distribution CPU usage 84.061s
approval-distribution CPU usage per block 8.406s
approval-voting CPU usage 96.532s
approval-voting CPU usage per block 9.653s
```
## TODOs
- [x] Polish implementation.
- [x] Use what we have so far to evaluate
https://github.com/paritytech/polkadot-sdk/pull/1191 before merging.
- [x] List of features and additional dimensions we want to use for
benchmarking.
- [x] Run benchmark on hardware similar to versi and kusama nodes.
- [ ] Add benchmark to be run in CI for catching regressions in
performance.
- [ ] Rebase on latest changes for network emulation.
---------
Signed-off-by: Andrei Sandu <andrei-mihail@parity.io>
Signed-off-by: Alexandru Gheorghe <alexandru.gheorghe@parity.io>
Co-authored-by: Andrei Sandu <andrei-mihail@parity.io>
Co-authored-by: Andrei Sandu <54316454+sandreim@users.noreply.github.com>
This change is mainly for people running the local variants. They can
directly start with async backing.
---------
Signed-off-by: Alexandru Gheorghe <alexandru.gheorghe@parity.io>
Co-authored-by: Alexandru Gheorghe <alexandru.gheorghe@parity.io>
The PR contains small fixes for:
* A test for the `HostConfig` v11 storage migration - v11 was compared with
itself instead of with `v10`.
* Outdated comment for `ClaimQueue`
* Typos
This PR addresses two issues:
- It modifies `DepositAsset`'s asset filter from `All` to
`AllCounted(1)` to prevent potentially charging excessive weight/fees.
This adjustment avoids situations where fees could be calculated based
on the count of assets, as illustrated
[here](https://github.com/paritytech/polkadot-sdk/blob/master/cumulus/parachains/runtimes/bridge-hubs/bridge-hub-rococo/src/weights/xcm/mod.rs#L38-L46).
- It encapsulates `DepositAsset` with `SetAppendix` to ensure that
`fees` are not trapped in any case. For instance, this prevents issues
when `ExportXcm::validate` encounters an error during the processing of
`ExportMessage`.
Bumps [bounded-collections](https://github.com/paritytech/parity-common)
from 0.1.9 to 0.2.0.
<details>
<summary>Commits</summary>
<ul>
<li>See full diff in <a
href="https://github.com/paritytech/parity-common/commits/impl-rlp-v0.2.0">compare
view</a></li>
</ul>
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Liam Aharon <liam.aharon@hotmail.com>