This PR implements an (optional) cap on the era inflation that is
allocated to staking rewards. The remainder is minted directly into the
[`RewardRemainder`](https://github.com/paritytech/polkadot-sdk/blob/fb0fd3e62445eb2dee2b2456a0c8574d1ecdcc73/substrate/frame/staking/src/pallet/mod.rs#L160)
account, which is the treasury pot account in Polkadot and Kusama.
The staking pallet now has a percent storage item, `MaxStakersRewards`,
which defines the max percentage of the era inflation that should be
allocated to staking rewards. The remaining era inflation (i.e.
`remaining = max_era_payout - staking_payout.min(staking_payout * MaxStakersRewards)`)
is minted directly into the treasury.
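A minimal sketch of that split, assuming `MaxStakersRewards` is a `Percent` (illustrative, not the actual pallet code):
```rust
use sp_runtime::Percent;

/// Returns `(stakers_payout, treasury_remainder)` for one era.
fn split_era_payout(max_era_payout: u128, staking_payout: u128, cap: Percent) -> (u128, u128) {
    // Stakers receive at most `MaxStakersRewards` percent of the staking payout.
    let stakers = staking_payout.min(cap * staking_payout);
    // The rest of the era inflation is minted into `RewardRemainder`.
    let remainder = max_era_payout.saturating_sub(stakers);
    (stakers, remainder)
}
```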
The `MaxStakersRewards` can be set by a privileged origin through the
`set_staking_configs` extrinsic.
**To finish**
- [x] run benchmarks for westend-runtime
Replaces https://github.com/paritytech/polkadot-sdk/pull/1483
Closes https://github.com/paritytech/polkadot-sdk/issues/403
---------
Co-authored-by: command-bot <>
Closes #2992
Breaking changes:
- rpc server grafana metric `substrate_rpc_requests_started` is removed
(not possible to implement anymore)
- rpc server grafana metric `substrate_rpc_requests_finished` is removed
(not possible to implement anymore)
- if the rpc server's websocket ping/pong is not ACK:ed within 30 seconds
more than three times, the connection will be closed
Added:
- rpc server grafana metric `substrate_rpc_sessions_time` is added to
get the duration for each websocket session
Currently, anyone can register code that exceeds the code size limit
when performing the upgrade from the registrar. This PR fixes that and
adds a new test to cover it.
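Something along these lines is what the fix enforces (hypothetical names, not the actual registrar code):
```rust
/// Reject code blobs exceeding the configured size limit before
/// scheduling the upgrade.
fn validate_code_size(code: &[u8], max_code_size: u32) -> Result<(), &'static str> {
    if code.len() as u32 > max_code_size {
        return Err("code exceeds the maximum code size");
    }
    Ok(())
}
```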
cc @bkchr @eskimor
This PR addresses an issue where calling chainHead_unpin with duplicate
hashes could lead to unintended side effects.
This backports:
https://github.com/paritytech/json-rpc-interface-spec/pull/135
While at it, I have added a test to check that the global reference
count is decremented only once on unpin.
cc @paritytech/subxt-team
---------
Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>
Co-authored-by: Davide Galassi <davxy@datawok.net>
Given how block production is driven for parachains right now, enabling
async backing would lead to two blocks being produced per slot.
Until we have a proper collator implementation, the "hack" is to prevent
the production of multiple blocks per slot.
Closes: https://github.com/paritytech/polkadot-sdk/issues/3282
This PR should supersede
https://github.com/paritytech/polkadot-sdk/pull/2814 and accomplish the
same with fewer changes. It's needed to run sync strategies in parallel,
like running `ChainSync` and `GapSync` as independent strategies, and
running `ChainSync` and Sync 2.0 alongside each other.
The difference with https://github.com/paritytech/polkadot-sdk/pull/2814
is that we allow simultaneous requests to remote peers initiated by
different strategies, as this is not tracked on the remote node in any
way. Therefore, `PeerPool` is not needed.
CC @skunert
---------
Co-authored-by: Sebastian Kunert <skunert49@gmail.com>
I found out during the cleanup of this deprecation message in the
`polkadot-fellows` repository that we deprecated `CurrencyAdapter`
without making the recommended changes.
## TODO
- [ ] fix `polkadot-fellows` bump to 1.6.0
https://github.com/polkadot-fellows/runtimes/pull/159
---------
Co-authored-by: Francisco Aguirre <franciscoaguirreperez@gmail.com>
It's a follow-up of #2949. It enables the lookahead collator to
dynamically adjust the aura slot size, which may change during a
runtime upgrade. It also addresses a couple of issues with time
constants that I missed in the original PR.
Good news: it works. The parachain successfully switches from sync
backing with 12s slots to async backing with 6s slots.
Bad news: during the transitional period of a single block in which the
actual runtime upgrade is performed, it still gets the old slot duration
of 12s (as it gets it from the best block), resulting in a runtime panic
(logs follow). That doesn't affect the following block production of the
parachain. Ideas on how to improve the situation are appreciated.
<details>
```
2024-02-05 12:59:36.373 INFO tokio-runtime-worker sc_basic_authorship::basic_authorship: [Parachain] 🙌 Starting consensus session on top of parent 0x6fd2d8f904f12c22531bfabf77b16dc84a6a29e45d9ae358aa6547fbf3f0438b
2024-02-05 12:59:36.373 ERROR tokio-runtime-worker runtime: [Parachain] panicked at /home/s0me0ne/wrk/parity/polkadot-sdk/cumulus/pallets/aura-ext/src/consensus_hook.rs:69:9:
assertion `left == right` failed: slot number mismatch
left: Slot(142261198)
right: Slot(284522396)
2024-02-05 12:59:36.373 WARN tokio-runtime-worker sp_state_machine::overlayed_changes::changeset: [Parachain] 1 storage transactions are left open by the runtime. Those will be rolled back.
2024-02-05 12:59:36.373 WARN tokio-runtime-worker sp_state_machine::overlayed_changes::changeset: [Parachain] 1 storage transactions are left open by the runtime. Those will be rolled back.
2024-02-05 12:59:36.373 WARN tokio-runtime-worker basic-authorship: [Parachain] ❗ Inherent extrinsic returned unexpected error: Error at calling runtime api: Execution failed: Execution aborted due to trap: wasm trap: wasm `unreachable` instruction executed
WASM backtrace:
error while executing at wasm backtrace:
0: 0x4e4a3b - <unknown>!rust_begin_unwind
1: 0x46cf57 - <unknown>!core::panicking::panic_fmt::h3c280dba88683724
2: 0x46d238 - <unknown>!core::panicking::assert_failed_inner::hebac5970933beb4d
3: 0x3d00fc - <unknown>!core::panicking::assert_failed::h640a47e2fb5dfb4b
4: 0xd0db3 - <unknown>!frame_support::storage::transactional::with_transaction::hcbc31515f81b2ee1
5: 0x34d654 - <unknown>!<cumulus_pallet_parachain_system::pallet::Call<T> as frame_support::traits::dispatch::UnfilteredDispatchable>::dispatch_bypass_filter::{{closure}}::hb7c2c9a11fa88301
6: 0x3547db - <unknown>!environmental::local_key::LocalKey<T>::with::h783f2605ae27d6d3
7: 0x7f454 - <unknown>!<asset_hub_rococo_runtime::RuntimeCall as frame_support::traits::dispatch::UnfilteredDispatchable>::dispatch_bypass_filter::h5e11a01ab97c06c7
8: 0x7f237 - <unknown>!<asset_hub_rococo_runtime::RuntimeCall as sp_runtime::traits::Dispatchable>::dispatch::h7f8ae4a8fede71ca
9: 0x26a0f3 - <unknown>!frame_executive::Executive<System,Block,Context,UnsignedValidator,AllPalletsWithSystem,COnRuntimeUpgrade>::apply_extrinsic::h75e524ff34738391
10: 0x282211 - <unknown>!BlockBuilder_apply_extrinsic. Dropping.
2024-02-05 12:59:36.374 ERROR tokio-runtime-worker runtime: [Parachain] panicked at /home/s0me0ne/wrk/parity/polkadot-sdk/substrate/frame/aura/src/lib.rs:416:9:
assertion `left == right` failed: Timestamp slot must match `CurrentSlot`
left: Slot(142261198)
right: Slot(284522396)
2024-02-05 12:59:36.374 WARN tokio-runtime-worker sp_state_machine::overlayed_changes::changeset: [Parachain] 1 storage transactions are left open by the runtime. Those will be rolled back.
2024-02-05 12:59:36.374 WARN tokio-runtime-worker sp_state_machine::overlayed_changes::changeset: [Parachain] 1 storage transactions are left open by the runtime. Those will be rolled back.
2024-02-05 12:59:36.374 WARN tokio-runtime-worker basic-authorship: [Parachain] ❗ Inherent extrinsic returned unexpected error: Error at calling runtime api: Execution failed: Execution aborted due to trap: wasm trap: wasm `unreachable` instruction executed
WASM backtrace:
error while executing at wasm backtrace:
0: 0x4e4a3b - <unknown>!rust_begin_unwind
1: 0x46cf57 - <unknown>!core::panicking::panic_fmt::h3c280dba88683724
2: 0x46d238 - <unknown>!core::panicking::assert_failed_inner::hebac5970933beb4d
3: 0x3d00fc - <unknown>!core::panicking::assert_failed::h640a47e2fb5dfb4b
4: 0x9ece6 - <unknown>!frame_support::storage::transactional::with_transaction::h26f75cb9f9462088
5: 0x356d7e - <unknown>!environmental::local_key::LocalKey<T>::with::hbcf2d4e17b48fdb5
6: 0x7f507 - <unknown>!<asset_hub_rococo_runtime::RuntimeCall as frame_support::traits::dispatch::UnfilteredDispatchable>::dispatch_bypass_filter::h5e11a01ab97c06c7
7: 0x7f237 - <unknown>!<asset_hub_rococo_runtime::RuntimeCall as sp_runtime::traits::Dispatchable>::dispatch::h7f8ae4a8fede71ca
8: 0x26a0f3 - <unknown>!frame_executive::Executive<System,Block,Context,UnsignedValidator,AllPalletsWithSystem,COnRuntimeUpgrade>::apply_extrinsic::h75e524ff34738391
9: 0x282211 - <unknown>!BlockBuilder_apply_extrinsic. Dropping.
2024-02-05 12:59:36.374 DEBUG tokio-runtime-worker runtime::xcmp-queue-migration: [Parachain] Lazy migration finished: item gone
2024-02-05 12:59:36.374 ERROR tokio-runtime-worker runtime: [Parachain] panicked at /home/s0me0ne/wrk/parity/polkadot-sdk/cumulus/pallets/parachain-system/src/lib.rs:265:18:
set_validation_data inherent needs to be present in every block!
2024-02-05 12:59:36.374 ERROR tokio-runtime-worker aura::cumulus: [Parachain] err=Error { inner: Proposing
Caused by:
0: Error at calling runtime api: Execution failed: Execution aborted due to trap: wasm trap: wasm `unreachable` instruction executed
WASM backtrace:
error while executing at wasm backtrace:
0: 0x4e4a3b - <unknown>!rust_begin_unwind
1: 0x46cf57 - <unknown>!core::panicking::panic_fmt::h3c280dba88683724
2: 0x46da8b - <unknown>!core::option::expect_failed::hdf18d99c3adabca7
3: 0x2134cb - <unknown>!<cumulus_pallet_parachain_system::pallet::Pallet<T> as frame_support::traits::hooks::OnFinalize<<<<T as frame_system::pallet::Config>::Block as sp_runtime::traits::HeaderProvider>::HeaderT as sp_runtime::traits::Header>::Number>>::on_finalize::hf98aac39802896ba
4: 0x26a9d6 - <unknown>!frame_executive::Executive<System,Block,Context,UnsignedValidator,AllPalletsWithSystem,COnRuntimeUpgrade>::idle_and_finalize_hook::h32775c0df0749d92
5: 0x26ad9f - <unknown>!frame_executive::Executive<System,Block,Context,UnsignedValidator,AllPalletsWithSystem,COnRuntimeUpgrade>::finalize_block::h15e5a1a6b9ca8032
6: 0x2822bd - <unknown>!BlockBuilder_finalize_block
1: Execution failed: Execution aborted due to trap: wasm trap: wasm `unreachable` instruction executed
WASM backtrace:
error while executing at wasm backtrace:
0: 0x4e4a3b - <unknown>!rust_begin_unwind
1: 0x46cf57 - <unknown>!core::panicking::panic_fmt::h3c280dba88683724
2: 0x46da8b - <unknown>!core::option::expect_failed::hdf18d99c3adabca7
3: 0x2134cb - <unknown>!<cumulus_pallet_parachain_system::pallet::Pallet<T> as frame_support::traits::hooks::OnFinalize<<<<T as frame_system::pallet::Config>::Block as sp_runtime::traits::HeaderProvider>::HeaderT as sp_runtime::traits::Header>::Number>>::on_finalize::hf98aac39802896ba
4: 0x26a9d6 - <unknown>!frame_executive::Executive<System,Block,Context,UnsignedValidator,AllPalletsWithSystem,COnRuntimeUpgrade>::idle_and_finalize_hook::h32775c0df0749d92
5: 0x26ad9f - <unknown>!frame_executive::Executive<System,Block,Context,UnsignedValidator,AllPalletsWithSystem,COnRuntimeUpgrade>::finalize_block::h15e5a1a6b9ca8032
6: 0x2822bd - <unknown>!BlockBuilder_finalize_block }
```
</details>
---------
Co-authored-by: Bastian Köcher <git@kchr.de>
This PR removes `pull_request_target` from the gitspiegel trigger because
it breaks the logic: with `pull_request_target`, the action runs in any
case, even for first-time contributors.
cc @mutantcornholio
Refactor in accordance with
https://github.com/paritytech/polkadot-sdk/issues/2245#issuecomment-1937025951
Prior to this PR, the `remote_tests` test module would either use
`TEST_WS` or `DEFAULT_HTTP_ENDPOINT`.
With the PR, `TEST_WS` is the default for the `remote_tests` test module
and the fallback is `DEFAULT_HTTP_ENDPOINT`.
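A minimal sketch of the selection logic described above (the real constant and its value live in `frame-remote-externalities`; the endpoint below is a placeholder):
```rust
// Placeholder value; the actual default endpoint differs.
const DEFAULT_HTTP_ENDPOINT: &str = "https://example-node:443";

fn test_endpoint() -> String {
    // Prefer the `TEST_WS` env var; fall back to the default endpoint.
    std::env::var("TEST_WS").unwrap_or_else(|_| DEFAULT_HTTP_ENDPOINT.to_owned())
}
```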
The only downside I see to this PR is that one might want to use a
different HTTP endpoint for particular tests in the `remote_tests`
module. In that case, the endpoint would have to be manually hardcoded
for that particular test.
Note: The `TEST_WS` node should fulfill the role for all test cases e.g.
include child tries.
Give it a _try_:
```
TEST_WS=wss://rococo-try-runtime-node.parity-chains.parity.io:443 cargo test --features=remote-test -p frame-remote-externalities -- --nocapture
```
---------
Co-authored-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
This error message suggests using either the `unvote` or `reap_vote`
call, neither of which exists in the pallet. The only available call for
this is `remove_vote`.
EDIT: Please ignore my earlier write-up. I was able to delegate with
conviction after calling `remove_vote` on all decided proposals.
---------
Co-authored-by: command-bot <>
This PR implements the
[transaction_unstable_broadcast](https://github.com/paritytech/json-rpc-interface-spec/blob/main/src/api/transaction_unstable_broadcast.md)
and
[transaction_unstable_stop](https://github.com/paritytech/json-rpc-interface-spec/blob/main/src/api/transaction_unstable_stop.md).
The
[transaction_unstable_broadcast](https://github.com/paritytech/json-rpc-interface-spec/blob/main/src/api/transaction_unstable_broadcast.md)
submits the provided transaction at the best block of the chain.
If the transaction is dropped or declared invalid, the API tries to
resubmit the transaction at the next available best block.
### Broadcasting
The broadcasting operation continues until either:
- the user calls `transaction_unstable_stop` with the operation ID that
identifies the broadcasting operation
- the transaction state is one of the following:
  - `Finalized`: the transaction is part of the chain
  - `FinalityTimeout`: we have waited for 256 finalized blocks and timed out
  - `Usurped`: the transaction has been replaced in the tx pool
The broadcasting retries submitting the transaction when the transaction
state is:
- `Invalid`: the transaction might become valid at a later time
- `Dropped`: the transaction pool's capacity is full at the moment, but
might clear when other transactions are finalized/dropped
### Stopping
The `transaction_unstable_broadcast` spawns an abortable future and
tracks the abort handler.
When the
[transaction_unstable_stop](https://github.com/paritytech/json-rpc-interface-spec/blob/main/src/api/transaction_unstable_stop.md)
is called with a valid operation ID, the abort handler of the
corresponding `transaction_unstable_broadcast` future is invoked. This
behavior ensures the broadcast future finishes on its next poll.
When `transaction_unstable_stop` is called with an invalid operation
ID, a jsonrpc-specific error object signaling the invalid operation is
returned.
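A minimal sketch of the abortable-future pattern described above, using `futures::future::abortable` and assuming a tokio executor (illustrative, not the actual RPC server code):
```rust
use futures::future::{abortable, AbortHandle};

// Long-running resubmission loop for a single broadcast operation.
async fn broadcast_loop() {
    // Submit at the current best block; on `Invalid`/`Dropped`, resubmit
    // at the next best block; exit on a final transaction state.
}

fn start_broadcast() -> AbortHandle {
    let (fut, handle) = abortable(broadcast_loop());
    // The server keeps `handle` in a map keyed by operation ID;
    // `transaction_unstable_stop` looks it up and calls `handle.abort()`,
    // which makes `fut` resolve to `Err(Aborted)` on its next poll.
    tokio::spawn(fut);
    handle
}
```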
### Testing
This PR adds the testing harness of the transaction API and validates
two basic scenarios:
- transaction enters and exits the transaction pool
- transaction stop returns appropriate values when called with valid and
invalid operation IDs
Closes: https://github.com/paritytech/polkadot-sdk/issues/3039
Note that the API should be enabled after:
https://github.com/paritytech/polkadot-sdk/issues/3084.
cc @paritytech/subxt-team
---------
Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>
Co-authored-by: Sebastian Kunert <skunert49@gmail.com>
Leases can be force set, but since `Leases` is a `StorageValue`, if a
lease misses its sale rotation in which it should expire, it can never
be cleared.
This can happen if a lease is added with an `until` timeslice that lies
in a region whose sale has already started or has passed, even if the
timeslice itself hasn't passed.
Trappist is currently trapped in a lease that will never end, so this
will remove it at the next sale rotation.
A fix was introduced in
https://github.com/paritytech/polkadot-sdk/pull/3213 but this missed the
1.7 release. This PR bumps the `coretime-rococo` version to get these
changes on Rococo.
Changes (partial https://github.com/paritytech/polkadot-sdk/issues/994):
- Set `log` to `0.4.20` everywhere
- Lift `log` to the workspace
Starting with a simpler one after seeing
https://github.com/paritytech/polkadot-sdk/pull/2065 from @jsdw.
This sets the `default-features` to `false` in the root and then
overwrites that in each crate to its original value. This is necessary
since otherwise the `default` features are additive and it's impossible
to disable them in the crate again once they are enabled in the
workspace.
I am using a tool to do this, so it's mostly a test to see that it works
as expected.
---------
Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
On the grid, distribution messages have two paths for reaching a node,
so there is the possibility of a race when two peers send each other the
same statement around the same time. The statement's `local_knowledge`
will tell us that the peer should not have sent the statement because we
sent it to the peer first.
Fix this by also keeping track of the statements we received from a
given peer and penalizing the peer only if it sends us the same
statement more than once.
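A minimal sketch of the idea (simplified types, not the actual statement-distribution code):
```rust
use std::collections::HashSet;

type Fingerprint = u64; // stand-in for the real statement fingerprint

#[derive(Default)]
struct PeerKnowledge {
    sent: HashSet<Fingerprint>,     // statements we sent to this peer
    received: HashSet<Fingerprint>, // statements this peer sent to us
}

impl PeerKnowledge {
    /// Returns true if the peer should be penalized for this statement.
    fn note_received(&mut self, fp: Fingerprint) -> bool {
        // A statement we only *sent* to the peer may still legitimately
        // arrive from it (the race on the grid); only a repeat of
        // something the peer itself already sent us is penalized.
        !self.received.insert(fp)
    }
}
```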
Fixes: https://github.com/paritytech/polkadot-sdk/issues/2346
Additionally, use different `Cost` labels for the different paths to
make it easier to debug things.
---------
Signed-off-by: Alexandru Gheorghe <alexandru.gheorghe@parity.io>
Related to https://github.com/paritytech/polkadot-sdk/issues/3242
Reorganizing the bridge zombienet tests in order to:
- separate the environment spawning from the actual tests
- offer better control over the tests and some possibility to
orchestrate them as opposed to running everything from the zndsl file
Only the asset transfer test was rewritten using this new "framework".
The old logic and old tests weren't functionally modified or deleted.
The plan is to get feedback on this approach first and, if it is agreed
upon, migrate the other 2 tests in separate PRs and also make other
improvements later.
This PR improves the transaction status documentation.
- Added doc references for describing the main states
- Extra comment wrt pool ready / future queues
- `FinalityTimeout` no longer describes a lagging finality gadget; it
signals that the maximum number of finality gadgets has been reached
A few helper methods are added to indicate when:
- a final event is generated by the transaction pool for a given event
- a final event is provided, although the transaction might become valid
at a later time and could be re-submitted
The helper methods are taken from (and used by)
https://github.com/paritytech/polkadot-sdk/pull/3079 to help us better
keep the two in sync.
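A simplified sketch of the helpers' shape (local enum mirroring `TransactionStatus`; the real method names and signatures may differ):
```rust
enum TxStatus {
    Ready, Future, Broadcast, InBlock, Retracted,
    FinalityTimeout, Finalized, Usurped, Dropped, Invalid,
}

impl TxStatus {
    /// The last event the pool emits for this transaction.
    fn is_final(&self) -> bool {
        use TxStatus::*;
        matches!(self, Finalized | FinalityTimeout | Usurped | Dropped | Invalid)
    }

    /// Final, but the transaction might become valid later and be
    /// re-submitted.
    fn is_retriable(&self) -> bool {
        matches!(self, TxStatus::Dropped | TxStatus::Invalid)
    }
}
```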
cc @paritytech/subxt-team
---------
Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>
Add [forklift
caching](https://gitlab.parity.io/parity/infrastructure/ci_cd/forklift/forklift)
to the remaining jobs
by .sh and .py scripts:
- cargo-check-each-crate x6 (`.gitlab/check-each-crate.py`)
- build-linux-stable (`polkadot/scripts/build-only-wasm.sh`)
by before_script:
- build-linux-substrate
- build-subkey-linux (with `.build-subkey` job)
- cargo-check-benches x2
**To disable the feature, set the FORKLIFT_BYPASS variable to true in [project
settings in
gitlab](https://gitlab.parity.io/parity/mirrors/polkadot-sdk/-/settings/ci_cd)**
(forklift now handles FORKLIFT_BYPASS by itself)
Closes #169
Fork of the `orml-parameters-pallet` as introduced by
https://github.com/open-web3-stack/open-runtime-module-library/pull/927
(cc @xlc)
It greatly changes how the macros work, but keeps the pallet the same.
The downside of my code is that it only supports constant keys
in the form of types, not value-bearing keys.
I think this is an acceptable trade-off, given that it can be used by
*any* pallet without any changes.
The pallet allows parameters to be set dynamically and used in pallet
configs, while also restricting updates on a per-key basis.
The rust-docs contains a complete example.
Changes:
- Add `parameters-pallet`
- Use in the kitchensink as demonstration
- Add experimental attribute to define dynamic params in the runtime.
- Add a bunch of traits to `frame_support::traits::dynamic_params`
that can be re-used by the ORML macros
## Example
First, define the parameters in the runtime file. The syntax is very
explicit about the codec index and errors if one is missing.
```rust
#[dynamic_params(RuntimeParameters, pallet_parameters::Parameters::<Runtime>)]
pub mod dynamic_params {
    use super::*;

    #[dynamic_pallet_params]
    #[codec(index = 0)]
    pub mod storage {
        /// Configures the base deposit of storing some data.
        #[codec(index = 0)]
        pub static BaseDeposit: Balance = 1 * DOLLARS;

        /// Configures the per-byte deposit of storing some data.
        #[codec(index = 1)]
        pub static ByteDeposit: Balance = 1 * CENTS;
    }

    #[dynamic_pallet_params]
    #[codec(index = 1)]
    pub mod contracts {
        #[codec(index = 0)]
        pub static DepositPerItem: Balance = deposit(1, 0);

        #[codec(index = 1)]
        pub static DepositPerByte: Balance = deposit(0, 1);
    }
}
```
Then the pallet is configured with the aggregate:
```rust
impl pallet_parameters::Config for Runtime {
    type AggregratedKeyValue = RuntimeParameters;
    type AdminOrigin = EnsureRootWithSuccess<AccountId, ConstBool<true>>;
    // ...
}
```
And then the parameters can be used in a pallet config:
```rust
impl pallet_preimage::Config for Runtime {
    type DepositBase = dynamic_params::storage::DepositBase;
}
```
A custom origin can be defined like this:
```rust
pub struct DynamicParametersManagerOrigin;

impl EnsureOriginWithArg<RuntimeOrigin, RuntimeParametersKey> for DynamicParametersManagerOrigin {
    type Success = ();

    fn try_origin(
        origin: RuntimeOrigin,
        key: &RuntimeParametersKey,
    ) -> Result<Self::Success, RuntimeOrigin> {
        match key {
            RuntimeParametersKey::Storage(_) => {
                frame_system::ensure_root(origin.clone()).map_err(|_| origin)?;
                return Ok(())
            },
            RuntimeParametersKey::Contract(_) => {
                frame_system::ensure_root(origin.clone()).map_err(|_| origin)?;
                return Ok(())
            },
        }
    }

    #[cfg(feature = "runtime-benchmarks")]
    fn try_successful_origin(_key: &RuntimeParametersKey) -> Result<RuntimeOrigin, ()> {
        Ok(RuntimeOrigin::Root)
    }
}
```
---------
Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
Co-authored-by: Nikhil Gupta <17176722+gupnik@users.noreply.github.com>
Co-authored-by: Kian Paimani <5588131+kianenigma@users.noreply.github.com>
Co-authored-by: command-bot <>
The `TotalValueLocked` storage value in the nomination pools pallet may
get out of sync if the staking pallet does an implicit withdrawal of
unlocking chunks belonging to a bonded pool stash. This fix is based on
a new method in the `OnStakingUpdate` trait, `on_withdraw`, which allows
the nomination pools pallet to adjust the `TotalValueLocked` every time
there is an implicit or explicit withdrawal from a bonded pool's stash.
This PR also adds a migration that checks and updates the on-chain TVL
if it got out of sync due to the bug this PR fixes.
**Changes to `trait OnStakingUpdate`**
In order for staking to notify the nomination pools pallet that chunks
were withdrawn, we add a new method, `on_withdraw`, to the
`OnStakingUpdate` trait. The nomination pools pallet filters the
withdrawals that are related to bonded pool accounts and updates the
`TotalValueLocked` accordingly.
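A minimal sketch of the new hook's shape, inferred from the description (the exact signature in the trait may differ):
```rust
/// Hypothetical sketch; generic over the staking interface's types.
pub trait OnStakingUpdate<AccountId, Balance> {
    /// Called on every withdrawal from a stash, explicit or implicit
    /// (e.g. when unlocking chunks are auto-withdrawn by staking).
    /// Defaults to a no-op so existing implementors keep compiling.
    fn on_withdraw(_stash: &AccountId, _amount: Balance) {}
}
```
The nomination pools implementation would then check whether the stash belongs to a bonded pool and, if so, reduce `TotalValueLocked` by the withdrawn amount.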
**Others**
- Adds try-state checks to the EPM/staking e2e tests
- Adds tests for auto withdrawing in the context of nomination pools
**To-do**
- [x] check if we need a migration to fix the current `TotalValueLocked`
(run try-runtime)
- [x] migrations to fix the current on-chain TVL value
✅ **Kusama**:
```
TotalValueLocked: 99.4559 kKSM
TotalValueLocked (calculated) 99.4559 kKSM
```
⚠️ **Westend**:
```
TotalValueLocked: 18.4060 kWND
TotalValueLocked (calculated) 18.4050 kWND
```
**Polkadot**: TVL not released yet.
Closes https://github.com/paritytech/polkadot-sdk/issues/3055
---------
Co-authored-by: command-bot <>
Co-authored-by: Ross Bulat <ross@parity.io>
Co-authored-by: Dónal Murray <donal.murray@parity.io>
Preparation for https://github.com/paritytech/polkadot-sdk/issues/2664
Changes:
- Only require `Hash` instead of `Block` for the benchmarking
- Refactor DB types to do the same
## Integration
This breaking change can easily be integrated into your node via:
```patch
- cmd.run::<Block, ()>(config)
+ cmd.run::<HashingFor<Block>, ()>(config)
```
Status: waiting for CI checks
---------
Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
Co-authored-by: Bastian Köcher <git@kchr.de>
Co-authored-by: cheme <emericchevalier.pro@gmail.com>
This is a significant step toward making the BEEFY client able to handle
both ECDSA and (ECDSA, BLS) signature types. The idea is that making the
BEEFY client generic over the crypto types makes migration to new types
smoother.
This makes the BEEFY keystore generic over `AuthorityId` and extends its
tests to cover the case when the `AuthorityId` is of type (ECDSA,
BLS12-377).
---------
Co-authored-by: Davide Galassi <davxy@datawok.net>
Co-authored-by: Robert Hambrock <roberthambrock@gmail.com>
When switching from the instrumented gas metering to the wasmi gas
metering, we also removed all imposed limits regarding Wasm module
internals. All those things do not interact with the host and have to be
handled by wasmi. For example, wasmi charges additional gas for the
parameters of each function because they incur some overhead.
Back then we took the opportunity to remove the dependency on the
deprecated `parity-wasm`, which was used to enforce those limits.
This PR merely removes them from the `Schedule`; they haven't been
enforced for a while.
Those were used for some ad-hoc comparisons of solang vs. ink! with
regards to ERC20 transfers. They haven't been used for a while.
Benchmarking is done here now:
[smart-bench](https://github.com/paritytech/smart-bench): weight-based
benchmarks to test how many transactions actually fit into a block with
the current weights
[schlau](https://github.com/ascjones/schlau): time-based benchmarks to
compare performance
When doing a cross contract call, you can supply an optional Weight
limit for that call. If one doesn't specify the limit (setting it to 0),
the sub call will have all the remaining gas available. If one does
specify the limit, we subtract that amount eagerly from the Weight meter
and fail fast if not enough `Weight` is available.
This is quite annoying because setting a fixed limit will set the
`gas_required` in the gas estimation according to the specified limit.
Even if in that dry-run the actual call didn't consume that whole
amount. It effectively discards the more precise measurement it should
have from the dry-run.
This PR changes the behaviour so that the supplied limit is an actual
limit: we do the cross contract call even if the limit is higher than
the remaining `Weight`. We then fail and roll back in the sub call in
case there is not enough weight.
This makes the weight estimation in the dry-run no longer dependent on
the weight limit supplied when doing a cross contract call.
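A sketch contrasting the two behaviours (hypothetical meter type, not the actual pallet-contracts internals):
```rust
struct WeightMeter {
    remaining: u64,
}

impl WeightMeter {
    /// Old behaviour: eagerly reserve the full limit and fail fast.
    fn reserve_eager(&mut self, limit: u64) -> Result<(), &'static str> {
        if self.remaining < limit {
            return Err("out of weight before the sub call even starts");
        }
        self.remaining -= limit;
        Ok(())
    }

    /// New behaviour: start the sub call with whatever is available
    /// (capped at the supplied limit) and only fail and roll back
    /// *inside* the sub call if it actually runs out of weight.
    fn sub_call_budget(&self, limit: u64) -> u64 {
        limit.min(self.remaining)
    }
}
```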
---------
Co-authored-by: PG Herveou <pgherveou@gmail.com>
This PR removes the configuration of subsystem benchmarks via CLI
arguments. After this, we keep configurations only in YAML files.
It removes unnecessary code duplication.
Leases can be force set, but since `Leases` is a `StorageValue`, if a
lease misses its sale rotation in which it should expire, it can never
be cleared.
This can happen if a lease is added with an `until` timeslice that lies
in a region whose sale has already started or has passed, even if the
timeslice itself hasn't passed.
This solves that issue in a minimal way, with all expired leases being
cleaned up in each sale rotation, not just the ones that are expiring in
the coming region.
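A minimal sketch of the cleanup (hypothetical types, not the actual broker pallet code):
```rust
type Timeslice = u32;

struct LeaseRecordItem {
    until: Timeslice,
    // ... task id, etc.
}

/// Called on each sale rotation: drop *every* expired lease, not just
/// the ones expiring in the coming region, so a lease that missed its
/// sale rotation is still removed.
fn cleanup_leases(leases: &mut Vec<LeaseRecordItem>, now: Timeslice) {
    leases.retain(|lease| lease.until > now);
}
```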
TODO:
- [x] Write test
1. Benchmark results are collected in a single struct.
2. The output of the results is prettified.
3. The result struct is used to save the output as YAML and store it in
artifacts in a CI job.
```
$ cargo run -p polkadot-subsystem-bench --release -- test-sequence --path polkadot/node/subsystem-bench/examples/availability_read.yaml | tee output.txt
$ cat output.txt
polkadot/node/subsystem-bench/examples/availability_read.yaml #1
Network usage, KiB total per block
Received from peers 510796.000 170265.333
Sent to peers 221.000 73.667
CPU usage, s total per block
availability-recovery 38.671 12.890
Test environment 0.255 0.085
polkadot/node/subsystem-bench/examples/availability_read.yaml #2
Network usage, KiB total per block
Received from peers 413633.000 137877.667
Sent to peers 353.000 117.667
CPU usage, s total per block
availability-recovery 52.630 17.543
Test environment 0.271 0.090
polkadot/node/subsystem-bench/examples/availability_read.yaml #3
Network usage, KiB total per block
Received from peers 424379.000 141459.667
Sent to peers 703.000 234.333
CPU usage, s total per block
availability-recovery 51.128 17.043
Test environment 0.502 0.167
```
```
$ cargo run -p polkadot-subsystem-bench --release -- --ci test-sequence --path polkadot/node/subsystem-bench/examples/availability_read.yaml | tee output.txt
$ cat output.txt
- benchmark_name: 'polkadot/node/subsystem-bench/examples/availability_read.yaml #1'
network:
- resource: Received from peers
total: 509011.0
per_block: 169670.33333333334
- resource: Sent to peers
total: 220.0
per_block: 73.33333333333333
cpu:
- resource: availability-recovery
total: 31.845848445
per_block: 10.615282815
- resource: Test environment
total: 0.23582828799999941
per_block: 0.07860942933333313
- benchmark_name: 'polkadot/node/subsystem-bench/examples/availability_read.yaml #2'
network:
- resource: Received from peers
total: 411738.0
per_block: 137246.0
- resource: Sent to peers
total: 351.0
per_block: 117.0
cpu:
- resource: availability-recovery
total: 18.93596025099999
per_block: 6.31198675033333
- resource: Test environment
total: 0.2541994199999979
per_block: 0.0847331399999993
- benchmark_name: 'polkadot/node/subsystem-bench/examples/availability_read.yaml #3'
network:
- resource: Received from peers
total: 424548.0
per_block: 141516.0
- resource: Sent to peers
total: 703.0
per_block: 234.33333333333334
cpu:
- resource: availability-recovery
total: 16.54178526900001
per_block: 5.513928423000003
- resource: Test environment
total: 0.43960946299999537
per_block: 0.14653648766666513
```
---------
Co-authored-by: Andrei Sandu <54316454+sandreim@users.noreply.github.com>
This PR improves compatibility with RISC-V and PolkaVM, allowing more
runtimes to successfully compile.
In particular, it makes the following changes:
- The `sp-mmr-primitives` and `sp-consensus-beefy` crates
unconditionally required `std`-only dependencies; now they only require
those dependencies when the `std` feature is actually enabled. (Our
RISC-V target is, unlike WASM, a true `no_std` target where you can't
accidentally use stuff from `std` anymore.)
- One of our dependencies (the `bitvec` crate) uses a crate called
`radium` which doesn't compile under RISC-V due to incomplete
autodetection logic in their `build.rs` file. The good news is that this
is already fixed in the newest upstream version of `radium`, and the
newest version of `bitvec` uses it. The bad news is that the newest
version of `bitvec` is not currently released on crates.io, so we can't
use it. I've [created an
issue](https://github.com/ferrilab/ferrilab/issues/5) asking for a new
release, but in the meantime I forked the currently used `radium` 0.7,
[fixed the faulty
logic](https://github.com/paritytech/radium-0.7-fork/commit/ed66c8a294b138c67f93499644051d97d4c7fbda)
and used cargo's patching capabilities to use it for the RISC-V runtime
builds. This might be a little hacky, but it is the least intrusive way
to fix the problem, doesn't affect WASM builds at all, and we can
trivially remove it once a new `bitvec` is released.
- The new runtimes are added to the CI to make sure their compilation
doesn't break.