This moves the macro-related re-exports to `__private` to make it more
obvious to downstream users that they are using an internal API.
---------
Co-authored-by: command-bot <>
Building on top of the new builder pattern for creating XCM programs, I'm
adding some more APIs:
```rust
let paying_fees: Xcm<()> = Xcm::builder() // Only allow paying for fees
    .withdraw_asset() // First instruction has to load the holding register
    .buy_execution() // Second instruction has to be `buy_execution`
    .build();

let paying_fees_invalid: Xcm<()> = Xcm::builder()
    .withdraw_asset()
    .build(); // Invalid, need to pay for fees

let not_paying_fees: Xcm<()> = Xcm::builder_unpaid()
    .unpaid_execution() // Needed
    .withdraw_asset()
    .deposit_asset()
    .build();

let all_goes: Xcm<()> = Xcm::builder_unsafe() // You can do anything
    .withdraw_asset()
    .deposit_asset()
    .build();
```
The invalid combinations fail to compile because the methods simply don't
exist on the intermediate builder types you'd be calling them on.
---------
Co-authored-by: command-bot <>
This PR removes:
- `New`, `Generate`, `Edit` commands,
- `kitchensink` dependency
from the `chain-spec-builder` util.
New `convert-to-raw` and `update-code` commands were added.
Additionally, the `runtime` command (which was added in #1256) is renamed
to `create`.
---------
Co-authored-by: Sebastian Kunert <skunert49@gmail.com>
Co-authored-by: Bastian Köcher <git@kchr.de>
Co-authored-by: command-bot <>
# Description
Sometimes changing the file descriptor limit is not allowed, but there is
no need to crash the node if/when this happens. Since `fdlimit`'s author
decided to use panics instead of returning a `Result`, we need to catch
the panic.
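A minimal sketch of the approach, assuming the pre-`Result` `fdlimit` API
where `raise_fd_limit()` panics on failure (the actual call site in the
node differs):
```rust
use std::panic;

// Wrap the panicking call so a failure to raise the fd limit degrades to
// a log line instead of crashing the node.
fn try_raise_fd_limit() -> Option<u64> {
    match panic::catch_unwind(fdlimit::raise_fd_limit) {
        Ok(limit) => limit,
        Err(_) => {
            // Raising the limit was not permitted; keep going with the
            // current limit.
            log::warn!("Failed to raise the file descriptor limit, continuing with the current one");
            None
        },
    }
}
```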
# Checklist
- [x] My PR includes a detailed description as outlined in the
"Description" section above
- [ ] My PR follows the [labeling requirements](CONTRIBUTING.md#Process)
of this project (at minimum one label for `T`
required)
- [ ] I have made corresponding changes to the documentation (if
applicable)
- [ ] I have added tests that prove my fix is effective or that my
feature works (if applicable)
---------
Co-authored-by: Koute <koute@users.noreply.github.com>
The `lazy_static` crate does not work well in `no-std`: it requires the
`spin_no_std` feature, which also propagates into `std` builds when
enabled. This is not what we want.
This PR provides a simple address-URI parser, which allows getting rid of
the _regex_ crate that was used to parse address URIs, which in turn
allows removing `lazy_static`.
Three regular expressions
(`SS58_REGEX`, `SECRET_PHRASE_REGEX`, `JUNCTION_REGEX`) were replaced
with a single parser that unifies all of them.
The new parser does not support Unicode; it is ASCII-only.
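For illustration, roughly the shape of what the unified parser extracts
(struct and field names here are assumptions for the sketch, not
necessarily the final API):
```rust
/// Illustrative sketch of a parsed address URI of the form
/// `<phrase or SS58 address>[/soft-junction|//hard-junction]*[///password]`.
struct AddressUri<'a> {
    /// Secret phrase or SS58 address (previously matched by
    /// `SS58_REGEX` / `SECRET_PHRASE_REGEX`).
    phrase: &'a str,
    /// The `/soft` and `//hard` junctions (previously `JUNCTION_REGEX`).
    paths: Vec<&'a str>,
    /// The optional `///password` suffix.
    pass: Option<&'a str>,
}
```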
Related to: #2044
---------
Co-authored-by: Bastian Köcher <git@kchr.de>
Co-authored-by: Koute <koute@users.noreply.github.com>
Co-authored-by: command-bot <>
Adds a function for querying the last runtime upgrade spec version. This
can be useful when writing runtime-level migrations to ensure that they
are not executed multiple times. An example would be a session key
migration, as sketched below.
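A hedged sketch of the intended usage; the spec version and migration
body are illustrative:
```rust
use frame_support::{traits::OnRuntimeUpgrade, weights::Weight};

// Only run the one-off migration if the last runtime upgrade happened
// below the spec version that shipped it. `1_005_000` is a placeholder.
pub struct SessionKeysMigration<T>(core::marker::PhantomData<T>);

impl<T: frame_system::Config> OnRuntimeUpgrade for SessionKeysMigration<T> {
    fn on_runtime_upgrade() -> Weight {
        if frame_system::Pallet::<T>::last_runtime_upgrade_spec_version() >= 1_005_000 {
            // Already executed during a previous upgrade; do nothing.
            return T::DbWeight::get().reads(1);
        }
        // ... migrate the session keys here ...
        T::DbWeight::get().reads_writes(1, 1)
    }
}
```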
---------
Co-authored-by: Liam Aharon <liam.aharon@hotmail.com>
Co-authored-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
Add the collectives and glutton Westend parachain runtimes to prepare for
#1737.
The removal of system parachain native runtimes #1737 is blocked until
chainspecs and runtime APIs can be dealt with cleanly (merge of #1256
and follow up PRs).
In the meantime, these additions are ready to be merged to `master`, so
I have separated them out into this PR.
Also marked `bridge-hub-westend` as unimplemented in line with [this
issue](https://github.com/paritytech/parity-bridges-common/issues/2602).
TODO
- [x] add to `command-bot` benchmarks
- [x] add to `command-bot-scripts` benchmarks
- [x] generate weights
---------
Co-authored-by: joe petrowski <25483142+joepetrowski@users.noreply.github.com>
Co-authored-by: Muharem <ismailov.m.h@gmail.com>
Co-authored-by: command-bot <>
Co-authored-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
Co-authored-by: Branislav Kontur <bkontur@gmail.com>
The goal of this PR is to migrate Identity deposits from the Relay Chain
to a system parachain.
The problem I want to solve is that `IdentityOf` and `SubsOf` both store
an amount that's held in reserve as a storage deposit. When migrating to
a parachain, we can take a snapshot of the actual `IdentityInfo` and
sub-account mappings, but should migrate (off chain) the `deposit`s to
zero, since the chain (and by extension, accounts) won't have any funds
at genesis.
The good news is that we expect parachain deposits to be significantly
lower (possibly 100x) on the parachain. That is, a deposit of 21 DOT on
the Relay Chain would need 0.21 DOT on a parachain. This PR proposes to
migrate the deposits in the following way:
1. Introduces a new pallet with two extrinsics:
- `reap_identity`: Has a configurable `ReapOrigin`, which would be set
to `EnsureSigned` on the Relay Chain (i.e. callable by anyone) and
`EnsureRoot` on the parachain (we don't want identities reaped from
there).
- `poke_deposit`: Checks what deposit the pallet holds (at genesis,
zero) and attempts to update the amount based on the calculated deposit
for storage data.
2. `reap_identity` clears all storage data for a `target` account and
unreserves their deposit.
3. A `ReapIdentityHandler` teleports the necessary DOT to the parachain
and calls `poke_deposit`. Since the parachain deposit is much lower, and
was just unreserved, we know we have enough.
One awkwardness I ran into was that the XCMv3 instruction set does not
provide a way for the system to teleport assets without a fee being
deducted on reception. Users shouldn't have to pay a fee for the system
to migrate their info to a more efficient location. So I wrote my own
program and did the `InitiateTeleport` accounting on my own to send a
program with `UnpaidExecution`. I have discussed an
`InitiateUnpaidTeleport` instruction with @franciscoaguirre. Obviously,
any chain executing this would have to pass a `Barrier` allowing free
execution.
TODO:
- [x] Confirm People Chain ParaId
- [x] Confirm People Chain deposit rates (determined in
https://github.com/paritytech/polkadot-sdk/pull/2281)
- [x] Add pallet to Westend
---------
Co-authored-by: Bastian Köcher <git@kchr.de>
This PR introduces:
- XCM host functions `xcm_send` and `xcm_execute`
- An `Xcm` trait in the config that proxies these functions to
`pallet_xcm`, or disables their usage when set to `()`.
- mock_network and xcm_test files to test the newly added XCM-related
functions.
---------
Co-authored-by: Keith Yeung <kungfukeith11@gmail.com>
Co-authored-by: Sasha Gryaznov <hi@agryaznov.com>
Co-authored-by: command-bot <>
Co-authored-by: Francisco Aguirre <franciscoaguirreperez@gmail.com>
Co-authored-by: Alexander Theißen <alex.theissen@me.com>
Adds a `NodeFeatures` bitfield value to the runtime `HostConfiguration`,
with the purpose of coordinating the enabling of node-side features,
such as: https://github.com/paritytech/polkadot-sdk/issues/628 and
https://github.com/paritytech/polkadot-sdk/issues/598.
These are features that require all validators to enable them at the same
time, once all/most nodes have upgraded their node versions.
This PR doesn't add any feature yet; these are coming in future PRs.
It also adds a runtime API for querying the state of the node features,
and an extrinsic for setting/unsetting a feature by its index in the bitfield.
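For a sense of the intended shape, a purely illustrative sketch of
checking a feature bit (the real types and API surface may differ):
```rust
use bitvec::vec::BitVec;

// Illustrative only: `NodeFeatures` as a bitfield, with per-feature bits
// addressed by index as coordinated through the `HostConfiguration`.
type NodeFeatures = BitVec<u8>;

/// Returns whether the feature at `index` is enabled, treating
/// out-of-range indices (e.g. from an older runtime) as disabled.
fn is_feature_enabled(features: &NodeFeatures, index: usize) -> bool {
    features.get(index).map_or(false, |bit| *bit)
}
```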
Note: originally part of:
https://github.com/paritytech/polkadot-sdk/pull/1644, but posted as
standalone to be reused by other PRs until the initial PR is merged
This PR adds support for multiple hashes being passed to the
`chainHead_unpin` parameters.
The `hash` parameter is renamed to `hash_or_hashes` per
https://github.com/paritytech/json-rpc-interface-spec/pull/111.
While at it, a new integration test is added to check the unpinning of
multiple hashes. The API is checked against a hash or a vector of
hashes.
cc @paritytech/subxt-team
---------
Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>
## Motivation
`pallet-xcm` is the main user-facing interface for XCM functionality,
including asset manipulation functions like the `teleport_assets()` and
`reserve_transfer_assets()` calls.
While `teleport_assets()` works both ways, `reserve_transfer_assets()`
works only for sending reserve-based assets to a remote destination and
beneficiary when the reserve is the _local chain_.
## Solution
This PR enhances `pallet_xcm::(limited_)reserve_transfer_assets` to
support transfers when the reserve is another chain.
This will allow complete, **bi-directional** reserve-based asset
transfers user stories using `pallet-xcm`.
Enables the following scenarios:
- transferring assets with a local reserve (previously supported only if
the asset used as fees also had a local reserve; now it works in all
cases),
- transferring assets with the reserve on the destination,
- transferring assets with the reserve on a remote/third-party chain (iff
the assets and fees have the same remote reserve),
- transferring assets whose reserve differs from the reserve of the asset
used as fees, meaning this can be used to transfer a random asset with a
local/dest reserve while using DOT for fees on all involved chains, even
if DOT's local/dest reserve doesn't match the asset's reserve,
- transferring assets with any type of local/dest reserve while using
fees that can be teleported between the involved chains.
All of the above is done by the pallet's inner logic, without the user
having to specify the scenario/reserves/teleports/etc. The correct
scenario and corresponding XCM programs are identified and built
automatically, based on the runtime configuration of trusted teleporters
and trusted reserves.
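For illustration, a hedged sketch of the user-facing call; `dest`,
`beneficiary`, `assets` and the `XcmPallet` runtime name are placeholders,
and the pallet derives the scenario internally:
```rust
XcmPallet::limited_reserve_transfer_assets(
    RuntimeOrigin::signed(sender),
    Box::new(dest.into()),        // destination chain
    Box::new(beneficiary.into()), // beneficiary account on the destination
    Box::new(assets.into()),      // assets to transfer, including the fee asset
    0,                            // `fee_asset_item`: index of the fee asset in `assets`
    WeightLimit::Unlimited,       // weight cap for remote execution
)?;
```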
#### Current limitations:
- while `fees` and "non-fee" `assets` CAN have different reserves (or
fees CAN be teleported), the remaining "non-fee" `assets` CANNOT, among
themselves, have different reserve locations (this is also implicitly
enforced by `MAX_ASSETS_FOR_TRANSFER=2`, but this can be safely
increased in the future).
- `fees` and "non-fee" `assets` CANNOT have **different remote**
reserves (this could also be supported in the future, but adds even more
complexity while possibly not being worth it - we'll see what the future
holds).
Fixes https://github.com/paritytech/polkadot-sdk/issues/1584
Fixes https://github.com/paritytech/polkadot-sdk/issues/2055
---------
Co-authored-by: Francisco Aguirre <franciscoaguirreperez@gmail.com>
Co-authored-by: Branislav Kontur <bkontur@gmail.com>
Fixes https://github.com/paritytech/polkadot-sdk/issues/1725
This PR adds the following changes:
1. An attribute `pallet::feeless_if` that can be optionally attached to
a call like so:
```rust
#[pallet::feeless_if(|_origin: &OriginFor<T>, something: &u32| -> bool {
    *something == 0
})]
pub fn do_something(origin: OriginFor<T>, something: u32) -> DispatchResult {
    // ... call logic here ...
    Ok(())
}
```
The closure passed accepts references to arguments as specified in the
call fn. It returns a boolean that denotes the conditions required for
this call to be "feeless".
2. A signed extension `SkipCheckIfFeeless<T: SignedExtension>` that
wraps a transaction payment processor such as
`pallet_transaction_payment::ChargeTransactionPayment`. It checks for
all calls annotated with `pallet::feeless_if` to see if the conditions
are met. If so, the wrapped signed extension is not called, essentially
making the call feeless.
In order to use this, you can simply replace your existing signed
extension that manages transaction payment like so:
```diff
- pallet_transaction_payment::ChargeTransactionPayment<Runtime>,
+ pallet_skip_feeless_payment::SkipCheckIfFeeless<
+ Runtime,
+ pallet_transaction_payment::ChargeTransactionPayment<Runtime>,
+ >,
```
### Todo
- [x] Tests
- [x] Docs
- [x] Prdoc
---------
Co-authored-by: Nikhil Gupta <>
Co-authored-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
Co-authored-by: Francisco Aguirre <franciscoaguirreperez@gmail.com>
Co-authored-by: Liam Aharon <liam.aharon@hotmail.com>
This PR contains some fixes and cleanups for parachain nodes:
1. When using async backing, the node no longer complains about being
unable to reach the prospective-parachains subsystem.
2. Parachain warp sync now informs users that the finalized para block
has been retrieved.
```
2023-11-08 13:24:42 [Parachain] 🎉 Received finalized parachain header #5747719 (0xa0aa…674b) from the relay chain.
```
3. When a user supplied an invalid `--relay-chain-rpc-url`, we were
crashing with a very verbose message. Removed the `expect` and improved
the error message.
```
2023-11-08 13:57:56 [Parachain] No valid RPC url found. Stopping RPC worker.
2023-11-08 13:57:56 [Parachain] Essential task `relay-chain-rpc-worker` failed. Shutting down service.
Error: Service(Application(WorkerCommunicationError("RPC worker channel closed. This can hint and connectivity issues with the supplied RPC endpoints. Message: oneshot canceled")))
```
Closes:
- #1383
  - Declared chains can now be imported and reused in a different crate.
  - Chain declarations are now generic over a type `N` (the network).
- #1389
  - Solves #1383: chains and networks declarations can be restructured to
avoid having to compile all chains when running integration tests where
they are not needed.
- Chains are now declared in their own crates (removed from
`integration-tests-common`)
- Networks are now declared in their own crates (removed from
`integration-tests-common`)
- Integration tests will import only the relevant network crate
- `integration-tests-common` is renamed to
`emulated-integration-tests-common`
All this is necessary to be able to implement what is described here:
https://github.com/paritytech/roadmap/issues/56#issuecomment-1777010553
---------
Co-authored-by: command-bot <>
The trie cache implementation was ignoring the `storage_root` when
setting up the value cache. The problem with this is that the value cache
works using `storage_keys`, and these keys are not unique across
different tries; a block can actually have multiple tries (the main trie
and multiple child tries). This pull request fixes the issue by not
ignoring the `storage_root` and returning a unique `value_cache` per
`storage_root`. It also adds a test for the observed bug and improves the
documentation so that this doesn't happen again.
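In pseudocode terms, the fix amounts to widening the cache key
(illustrative types, not the actual trie-cache internals):
```rust
use std::collections::HashMap;

// Keying the value cache by `(storage_root, storage_key)` instead of
// `storage_key` alone, so the same key in the main trie and in a child
// trie can no longer alias each other's cached value.
type StorageRoot = [u8; 32];
type ValueCache = HashMap<(StorageRoot, Vec<u8>), Vec<u8>>;

fn cached_value<'a>(
    cache: &'a ValueCache,
    storage_root: StorageRoot,
    storage_key: &[u8],
) -> Option<&'a Vec<u8>> {
    cache.get(&(storage_root, storage_key.to_vec()))
}
```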
`bridge-hub-westend-runtime` was added to cumulus/parachains, but wasn't
hooked up to xcm-emulator to run tests against it.
This commit addresses that.
Signed-off-by: Adrian Catangiu <adrian@parity.io>
Added a proc macro to be able to write XCMs using the builder pattern.
This means we go from having to do this:
```rust
let message: Xcm<()> = Xcm(vec![
    WithdrawAsset(assets),
    BuyExecution { fees: asset, weight_limit: Unlimited },
    DepositAsset { assets, beneficiary },
]);
```
to this:
```rust
let message: Xcm<()> = Xcm::builder()
    .withdraw_asset(assets)
    .buy_execution(asset, Unlimited)
    .deposit_asset(assets, beneficiary)
    .build();
```
---------
Co-authored-by: Keith Yeung <kungfukeith11@gmail.com>
Co-authored-by: command-bot <>
**_PR migrated from https://github.com/paritytech/polkadot/pull/6782_**
This PR upgrades the network protocol to version 3, currently named
`VStaging`, which will later be renamed to `V3`. This version introduces
a new kind of assignment certificate that will be used for tranche0
assignments. Instead of issuing/importing one tranche0 assignment per
candidate, there will be just one certificate per relay chain block per
validator. However, we will not be sending out the new assignment
certificates yet, so everything should work exactly as before. Once the
majority of the validators have upgraded to the new protocol version, we
will enable the new certificates (starting at a specific relay chain
block) with a new client update.
There are still a few things that need to be done:
- [x] Use bitfield instead of Vec<CandidateIndex>:
https://github.com/paritytech/polkadot/pull/6802
- [x] Fix existing approval-distribution and approval-voting tests
- [x] Fix bitfield-distribution and statement-distribution tests
- [x] Fix network bridge tests
- [x] Implement todos in the code
- [x] Add tests to cover new code
- [x] Update metrics
- [x] Remove the approval distribution aggression levels: TBD PR
- [x] Parachains DB migration
- [x] Test network protocol upgrade on Versi
- [x] Versi Load test
- [x] Add Zombienet test
- [x] Documentation updates
- [x] Fix for sending DistributeAssignment for each candidate claimed by
a v2 assignment (warning: Importing locally an already known assignment)
- [x] Fix AcceptedDuplicate
- [x] Fix DB migration so that we can still keep old data.
- [x] Final Versi burn in
---------
Signed-off-by: Andrei Sandu <andrei-mihail@parity.io>
Signed-off-by: Alexandru Gheorghe <alexandru.gheorghe@parity.io>
Co-authored-by: Alexandru Gheorghe <alexandru.gheorghe@parity.io>
The `BlockBuilderProvider` trait was defined in `sc-block-builder` and
implemented for `Client`. This basically meant that you needed to import
`sc-block-builder` anyway to have access to the block builder, so the
trait was not providing any real value. This pull request removes the
said trait. Instead, it introduces a builder for creating a
`BlockBuilder`. The builder currently has the quite fabulous name
`BlockBuilderBuilder` (I'm open to any better name 😅). The rest of the
pull request is about replacing the old trait with the new builder.
# Downstream code changes
If you used `new_block` or `new_block_at` before you now need to switch
it over to the new `BlockBuilderBuilder` pattern:
```rust
// `new` requires a type that implements `CallApiAt`.
let mut block_builder = BlockBuilderBuilder::new(client)
    // Then you need to specify the hash of the parent block the block will be built on top of.
    .on_parent_block(at)
    // The block builder also needs the block number of the parent block.
    // Here it is fetched from the given `client` using the `HeaderBackend`.
    // However, there also exists `with_parent_block_number` for passing the number directly.
    .fetch_parent_block_number(client)
    .unwrap()
    // Enable proof recording if required. This call is optional.
    .enable_proof_recording()
    // Pass the digests. This call is optional.
    .with_inherent_digests(digests)
    .build()
    .expect("Creates new block builder");
```
---------
Co-authored-by: Sebastian Kunert <skunert49@gmail.com>
Co-authored-by: command-bot <>
The `check_hardware` function does not give us much information as to
what is failing, so let's return the list of failed metrics so that
callers can print it.
This makes debugging easier than trying to guess which dimension is
actually failing.
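A hypothetical sketch of the new shape (names are illustrative, not the
actual `sc-sysinfo` API):
```rust
// The point of the change: surface *which* checks failed, not a bool.
#[derive(Debug)]
struct FailedMetric {
    name: &'static str,
    expected: f64,
    found: f64,
}

fn report(failed: Vec<FailedMetric>) {
    if !failed.is_empty() {
        // Callers can now print exactly which dimensions failed instead
        // of guessing from a plain pass/fail result.
        eprintln!("Hardware check failed for: {failed:?}");
    }
}
```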
Signed-off-by: Alexandru Gheorghe <alexandru.gheorghe@parity.io>
Previously, governance could only control the byte-fee component of
Rococo <> Westend message fees (paid at the Asset Hubs). This PR changes
it a bit:
1) governance is now allowed to control both fee components - the byte
fee and the base fee;
2) the base fee now includes the cost of "default" delivery and
confirmation transactions, in addition to the `ExportMessage` instruction
cost.
(imported from https://github.com/paritytech/cumulus/pull/2157)
## Changes
This MR refactors the XCMP, Parachains System and DMP pallets to use
the [MessageQueue](https://github.com/paritytech/substrate/pull/12485)
for delayed execution of incoming messages. The DMP pallet is entirely
replaced by the MQ and thereby removed. This allows for PoV-bounded
execution and resolves a number of issues that stem from the current
workaround.
All System Parachains adopt this change.
The most important changes are in `primitives/core/src/lib.rs`,
`parachains/common/src/process_xcm_message.rs`,
`pallets/parachain-system/src/lib.rs`, `pallets/xcmp-queue/src/lib.rs`
and the runtime configs.
### DMP Queue Pallet
The pallet got removed and its logic refactored into parachain-system.
Overweight message management can be done directly through the MQ
pallet.
Final undeployment migrations are provided by
`cumulus_pallet_dmp_queue::UndeployDmpQueue` and `DeleteDmpQueue` that
can be configured with an aux config trait like:
```rust
parameter_types! {
    pub const DmpQueuePalletName: &'static str = "DmpQueue"; // <- CHANGE ME
    pub const RelayOrigin: AggregateMessageOrigin = AggregateMessageOrigin::Parent;
}

impl cumulus_pallet_dmp_queue::MigrationConfig for Runtime {
    type PalletName = DmpQueuePalletName;
    type DmpHandler = frame_support::traits::EnqueueWithOrigin<MessageQueue, RelayOrigin>;
    type DbWeight = <Runtime as frame_system::Config>::DbWeight;
}

// And adding them to your Migrations tuple:
pub type Migrations = (
    // ...
    cumulus_pallet_dmp_queue::UndeployDmpQueue<Runtime>,
    cumulus_pallet_dmp_queue::DeleteDmpQueue<Runtime>,
);
```
### XCMP Queue pallet
Removed all dispatch-queue functionality. Incoming XCMP messages are now
either immediately handled if they are Signals, or enqueued into the MQ
pallet otherwise.
New config items for the XCMP queue pallet:
```rust
/// The actual queue implementation that retains the messages for later processing.
type XcmpQueue: EnqueueMessage<ParaId>;

/// How an XCM over HRMP from a sibling parachain should be processed.
type XcmpProcessor: ProcessMessage<Origin = ParaId>;

/// The maximal number of suspended XCMP channels at the same time.
#[pallet::constant]
type MaxInboundSuspended: Get<u32>;
```
How to configure those:
```rust
// Use the MessageQueue pallet to store messages for later processing. The `TransformOrigin` is
// needed since the MQ pallet itself operates on `AggregateMessageOrigin`, but we want to enqueue
// `ParaId`s.
type XcmpQueue = TransformOrigin<MessageQueue, AggregateMessageOrigin, ParaId, ParaIdToSibling>;

// Process XCMP messages from siblings. This is type-safe to only accept `ParaId`s. They will be
// dispatched with origin `Junction::Sibling(…)`.
type XcmpProcessor = ProcessFromSibling<
    ProcessXcmMessage<
        AggregateMessageOrigin,
        xcm_executor::XcmExecutor<xcm_config::XcmConfig>,
        RuntimeCall,
    >,
>;

// Not really important what to choose here. Just something larger than the maximal number of channels.
type MaxInboundSuspended = sp_core::ConstU32<1_000>;
```
The `InboundXcmpStatus` storage item was replaced by
`InboundXcmpSuspended`, since it now only tracks inbound queue suspension
and no message indices anymore.
The pallet now only sends the most recent channel `Signals`, as all prior
ones are outdated anyway.
### Parachain System pallet
For `DMP` messages, instead of forwarding them to the `DMP` pallet, the
pallet now pushes them to the configured `DmpQueue`. The message
processing which was previously triggered in `set_validation_data` is now
done by the MQ pallet in `on_initialize`.
XCMP messages are still handed off to the `XcmpMessageHandler`
(XCMP-Queue pallet) - no change here.
New config items for the parachain system pallet:
```rust
/// Queues inbound downward messages for delayed processing.
///
/// Analogous to the `XcmpQueue` of the XCMP queue pallet.
type DmpQueue: EnqueueMessage<AggregateMessageOrigin>;
```
How to configure:
```rust
/// Use the MQ pallet to store DMP messages for delayed processing.
type DmpQueue = MessageQueue;
```
## Message Flow
The flow of messages on the parachain side. Messages come in from the
left via the `Validation Data` and finally end up at the `Xcm Executor`
on the right.

## Further changes
- Bumped the default suspension, drop and resume thresholds in
`QueueConfigData::default()`.
- `XcmpQueue::{suspend_xcm_execution, resume_xcm_execution}` now error
when they would be a noop.
- Properly validate the `QueueConfigData` before setting it.
- Marked weight files as auto-generated so they won't auto-expand in the
MR files view.
- Moved the `hypothetical` asserts to `frame_support` under the name
`experimental_hypothetically`.
Questions:
- [ ] What about the ugly `#[cfg(feature = "runtime-benchmarks")]` in
the runtimes? Not sure how to best fix it. Just having them like this
makes tests fail that rely on the real message processor when the feature
is enabled.
- [ ] Need a good weight for `MessageQueueServiceWeight`. The scheduler
already takes 80% so I put it to 10% but that is quite low.
TODO:
- [x] Remove c&p code after
https://github.com/paritytech/polkadot/pull/6271
- [x] Use `HandleMessage` once it is public in Substrate
- [x] fix `runtime-benchmarks` feature
https://github.com/paritytech/polkadot/pull/6966
- [x] Benchmarks
- [x] Tests
- [ ] Migrate `InboundXcmpStatus` to `InboundXcmpSuspended`
- [x] Possibly cleanup Migrations (DMP+XCMP)
- [x] optional: create `TransformProcessMessageOrigin` in Substrate and
replace `ProcessFromSibling`
- [ ] Rerun weights on ref HW
---------
Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
Co-authored-by: Liam Aharon <liam.aharon@hotmail.com>
Co-authored-by: joe petrowski <25483142+joepetrowski@users.noreply.github.com>
Co-authored-by: Kian Paimani <5588131+kianenigma@users.noreply.github.com>
Co-authored-by: command-bot <>
## Summary
Asset bridging support for AssetHub**Rococo** <-> AssetHub**Wococo** was
added [here](https://github.com/paritytech/polkadot-sdk/pull/1215), so
now we aim to bridge AssetHub**Rococo** and AssetHub**Westend**. (And
perhaps retire AssetHubWococo and the Wococo chains).
## Solution
**bridge-hub-westend-runtime**
- added new runtime as a copy of `bridge-hub-rococo-runtime`
- added support for bridging to `BridgeHubRococo`
- added tests and benchmarks
**bridge-hub-rococo-runtime**
- added support for bridging to `BridgeHubWestend`
- added tests and benchmarks
- internal refactoring by splitting bridge configuration per network,
e.g., `bridge_to_whatevernetwork_config.rs`.
**asset-hub-rococo-runtime**
- added support for asset bridging to `AssetHubWestend` (allows receiving
only WNDs)
- added new xcm router for `Westend`
- added tests and benchmarks
**asset-hub-westend-runtime**
- added support for asset bridging to `AssetHubRococo` (allows receiving
only ROCs)
- added new xcm router for `Rococo`
- added tests and benchmarks
## Deployment
All changes will be deployed as a part of
https://github.com/paritytech/polkadot-sdk/issues/1988.
## TODO
- [x] benchmarks for all pallet instances
- [x] integration tests
- [x] local run scripts
Relates to:
https://github.com/paritytech/parity-bridges-common/issues/2602
Relates to: https://github.com/paritytech/polkadot-sdk/issues/1988
---------
Co-authored-by: command-bot <>
Co-authored-by: Adrian Catangiu <adrian@parity.io>
Co-authored-by: joe petrowski <25483142+joepetrowski@users.noreply.github.com>
helps https://github.com/paritytech/polkadot-sdk/issues/439.
closes https://github.com/paritytech/polkadot-sdk/issues/473.
PR link in the older substrate repository:
https://github.com/paritytech/substrate/pull/13498.
# Context
Rewards payout is processed today in a single block and limited to
`MaxNominatorRewardedPerValidator`. This number is currently 512 on both
Kusama and Polkadot.
This PR tries to scale the nominator payouts to an unlimited count in a
multi-block fashion. Exposures are stored in pages, with each page capped
to a certain number of nominators (`MaxExposurePageSize`). Starting out,
this number would be the same as `MaxNominatorRewardedPerValidator`, but
eventually it can be lowered through runtime upgrades to limit the
rewardable nominators per dispatched call.
The changes in the PR are backward compatible.
## How payouts would work like after this change
Staking exposes two calls, 1) the existing `payout_stakers` and 2)
`payout_stakers_by_page`.
### payout_stakers
This remains backward compatible with no signature change. If for a
given era a validator has multiple pages, they can call `payout_stakers`
multiple times. The pages are executed in an ascending sequence and the
runtime takes care of preventing double claims.
### payout_stakers_by_page
Very similar to `payout_stakers`, but also accepts an extra param
`page_index`. An account can choose to pay out rewards only for an
explicitly passed `page_index`.
**Let's look at an example scenario**
Suppose an active validator on Kusama has 1100 nominators and
`MaxExposurePageSize` is set to 512 for era `e`. In order to pay out
rewards to all nominators, the caller would need to call `payout_stakers`
3 times.
- `payout_stakers(origin, stash, e)` => pays the first 512 nominators.
- `payout_stakers(origin, stash, e)` => pays the second set of 512
nominators.
- `payout_stakers(origin, stash, e)` => pays the last set of 76
nominators.
- `payout_stakers(origin, stash, e)` => calling it a 4th time returns the
error `InvalidPage`.
The above calls can also be replaced by `payout_stakers_by_page`, passing
a `page_index` explicitly, as in the sketch below.
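A hedged sketch of that alternative; the runtime names, `caller`, `stash`
and `e` are placeholders:
```rust
// Pay out the three pages of era `e` explicitly instead of calling
// `payout_stakers` three times.
for page in 0..3 {
    Staking::payout_stakers_by_page(
        RuntimeOrigin::signed(caller.clone()),
        stash.clone(),
        e,
        page,
    )?;
}
```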
## Commission note
Validator commission is paid out in chunks across all the pages, where
each commission chunk is proportional to the total stake of the current
page: the higher the total stake of a page, the higher its commission
chunk. If all the pages of a validator for a single era are paid out, the
sum of the commission paid across all pages equals what the commission
would have been with a non-paged exposure.
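A small worked example with illustrative numbers:
```rust
// A validator's commission for one era, split across pages
// proportionally to each page's total stake.
let page_stakes: [u128; 3] = [500, 300, 200];
let total_stake: u128 = page_stakes.iter().sum(); // 1000
let total_commission: u128 = 100; // commission under non-paged exposure

let per_page: Vec<u128> =
    page_stakes.iter().map(|s| total_commission * s / total_stake).collect();

// Each chunk is proportional to the page's stake, and the chunks sum to
// the commission a non-paged exposure would have paid.
assert_eq!(per_page, vec![50, 30, 20]);
assert_eq!(per_page.iter().sum::<u128>(), total_commission);
```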
### Migration Note
Strictly speaking, we did not need to bump our storage version since
there is no migration of storage in this PR. But it is still useful to
mark a storage upgrade for the following reasons:
- New storage items are introduced in this PR while some older storage
items are deprecated.
- For the next `HistoryDepth` eras, the exposure would be incrementally
migrated to its corresponding paged storage item.
- Runtimes using staking pallet would strictly need to wait at least
`HistoryDepth` eras with current upgraded version (14) for the migration
to complete. At some era `E` such that `E >
era_at_which_V14_gets_into_effect + HistoryDepth`, we will upgrade to
version X which will remove the deprecated storage items.
In other words, it is a strict requirement that E<sub>x</sub> -
E<sub>14</sub> > `HistoryDepth`, where
E<sub>x</sub> = Era at which deprecated storages are removed from
runtime,
E<sub>14</sub> = Era at which runtime is upgraded to version 14.
- For Polkadot and Kusama, there is a [tracker
ticket](https://github.com/paritytech/polkadot-sdk/issues/433) to clean
up the deprecated storage items.
### Storage Changes
#### Added
- ErasStakersOverview
- ClaimedRewards
- ErasStakersPaged
#### Deprecated
The following can be cleaned up after 84 eras which is tracked
[here](https://github.com/paritytech/polkadot-sdk/issues/433).
- ErasStakers.
- ErasStakersClipped.
- StakingLedger.claimed_rewards, renamed to
StakingLedger.legacy_claimed_rewards.
### Config Changes
- Renamed MaxNominatorRewardedPerValidator to MaxExposurePageSize.
### TODO
- [x] Tracker ticket for cleaning up the old code after 84 eras.
- [x] Add companion.
- [x] Redo benchmarks before merge.
- [x] Add Changelog for pallet_staking.
- [x] Pallet should be configurable to enable/disable paged rewards.
- [x] Commission payouts are distributed across pages.
- [x] Review documentation thoroughly.
- [x] Rename `MaxNominatorRewardedPerValidator` ->
`MaxExposurePageSize`.
- [x] NMap for `ErasStakersPaged`.
- [x] Deprecate ErasStakers.
- [x] Integrity tests.
### Followup issues
[Runtime api for deprecated ErasStakers storage
item](https://github.com/paritytech/polkadot-sdk/issues/426)
---------
Co-authored-by: Javier Viola <javier@parity.io>
Co-authored-by: Ross Bulat <ross@parity.io>
Co-authored-by: command-bot <>
This PR moves syncing-related code from `sc-network-common` to
`sc-network-sync`.
Unfortunately, some parts are tightly integrated with networking, so
they were left in `sc-network-common` for now:
1. `SyncMode` in `common/src/sync.rs` (used in `NetworkConfiguration`).
2. `BlockAnnouncesHandshake`, `BlockRequest`, `BlockResponse`, etc. in
`common/src/sync/message.rs` (used in `src/protocol.rs` and
`src/protocol/message.rs`).
More substantial refactoring is needed to decouple syncing and
networking completely, including getting rid of the hardcoded sync
protocol.
## Release notes
Move syncing-related code from `sc-network-common` to `sc-network-sync`.
Delete `ChainSync` trait as it's never used (the only implementation is
accessed directly from `SyncingEngine` and exposes a lot of public
methods that are not part of the trait). Some new trait(s) for syncing
will likely be introduced as part of Sync 2.0 refactoring to represent
syncing strategies.
# Description
The `trigger_defensive` call has been added to the `root-testing`
pallet. The idea is to have this pallet running on `Rococo/Westend` and
use it to verify if the runtime monitoring works end-to-end.
To accomplish this, `trigger_defensive` dispatches an event when it is
called.
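A hedged sketch of what the call looks like; the event name and weight
are illustrative:
```rust
#[pallet::call_index(1)]
#[pallet::weight(Weight::zero())]
pub fn trigger_defensive(origin: OriginFor<T>) -> DispatchResult {
    ensure_root(origin)?;
    // Fire the defensive-path machinery and emit an event so external
    // monitoring can verify the pipeline end-to-end.
    frame_support::defensive!("root-testing pallet called `trigger_defensive`");
    Self::deposit_event(Event::DefensiveTestCall);
    Ok(())
}
```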
Closes #1953
# Checklist
- [x] My PR includes a detailed description as outlined in the
"Description" section above
- [ ] My PR follows the [labeling requirements](CONTRIBUTING.md#Process)
of this project (at minimum one label for `T`
required)
- [ ] I have made corresponding changes to the documentation (if
applicable)
- [ ] I have added tests that prove my fix is effective or that my
feature works (if applicable)
---------
Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
Co-authored-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
- Use the newly published
[arkworks-extensions](https://github.com/paritytech/arkworks-extensions)
crates. Hooks are internally defined to jump into the proper host
functions.
- Conditional compilation of each curve (gated by a feature with the
curve name)
- Separation into smaller host-function sets, divided by curve (fits
nicely with the previous point)
The change adds a test to show the failure scenario that caused #1812 to
be rolled back (more context:
https://github.com/paritytech/polkadot-sdk/issues/493#issuecomment-1772009924).
Summary of the scenario:
1. The node has finished downloading up to block 1000 from the peers, on
the canonical chain.
2. The peers are undergoing a re-org around this time. One of the peers
has switched to a non-canonical chain and announces block 1001 from that
chain.
3. The node downloads 1001 from the peer and tries to import it, which
fails (as we don't have the parent block 1000 from the other chain).
---------
Co-authored-by: Dmitry Markin <dmitry@markin.tech>