This PR adds `test-linux-stable-int` and `quick-benchmarks` as GitHub
Actions jobs. They are copies of the `test-linux-stable-int` and
`quick-benchmarks` jobs from GitLab CI and are needed to run a stress
test for the self-hosted GitHub runners. `test-linux-stable-int` and
`quick-benchmarks` in GitLab are still `Required`, whereas this workflow
is allowed to fail.
cc https://github.com/paritytech/infrastructure/issues/46
Introduce a `touch` call designed to address operational prerequisites
before providing liquidity to a pool.
This function ensures that essential requirements, such as the presence
of the pool's accounts, are fulfilled. It is particularly beneficial in
scenarios where a pool creator removes the pool's accounts without
providing liquidity.
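As an illustrative sketch only (the names and in-memory storage below are hypothetical stand-ins, not the real pallet API), the intent is: `touch` restores the pool's account so a later liquidity provision cannot fail on it.

```rust
use std::collections::HashMap;

// Hypothetical stand-in for the pallet's view of pool accounts.
#[derive(Default)]
struct Pools {
    // pool id -> whether the pool's account currently exists on chain
    account_exists: HashMap<u32, bool>,
}

impl Pools {
    // `touch` re-establishes operational prerequisites (here, the pool's
    // account) so that providing liquidity cannot fail on a pool whose
    // account was removed by its creator.
    fn touch(&mut self, pool_id: u32) {
        self.account_exists.insert(pool_id, true);
    }

    fn add_liquidity(&self, pool_id: u32) -> Result<(), &'static str> {
        match self.account_exists.get(&pool_id) {
            Some(true) => Ok(()),
            _ => Err("pool account missing"),
        }
    }
}

fn main() {
    let mut pools = Pools::default();
    pools.account_exists.insert(7, false); // creator removed the pool's account
    assert!(pools.add_liquidity(7).is_err());
    pools.touch(7); // fulfil prerequisites first
    assert!(pools.add_liquidity(7).is_ok());
}
```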
---------
Co-authored-by: command-bot <>
- Moved `substrate/frame/contracts/src/tests/builder.rs` into a pub
test_utils module, so we can use that in the
`pallet-contracts-mock-network` tests
- Refactor xcm tests to use XCM builders, and simplify the use case for
xcm-send
Preparation for https://github.com/paritytech/polkadot-sdk/pull/3935
Changes:
- Add some `default-features = false` for cases where both a crate and
its dependency support no-std builds.
- Shuffle files around in some benchmarking-only crates. These
conditionally disabled the `cfg_attr` for no-std and pulled in libstd.
Example [here](https://github.com/ggwpez/zepter/pull/95). The actual
logic is moved into an `inner.rs` to preserve the no-std capability of
the crate when the benchmarking feature is disabled.
- Add some `use sp_std::vec` where needed.
- Remove some `optional = true` in cases where it was not optional.
- Removed one superfluous `cfg_attr(not(feature = "std"), no_std..`.
All in all, this should be a logical no-op.
---------
Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
This PR ensures that the reported pruned blocks are unique.
While at it, ensure that the best block event is properly generated when
the last best block is a fork that will be pruned in the future.
To achieve this, the chainHead keeps an LRU set of reported pruned
blocks to ensure the following are not reported twice:
```bash
finalized -> block 1 -> block 2 -> block 3
-> block 2 -> block 4 -> block 5
-> block 1 -> block 2_f -> block 6 -> block 7 -> block 8
```
When block 7 is finalized the branch [block 2; block 3] is reported as
pruned.
When block 8 is finalized the branch [block 2; block 4; block 5] should
be reported as pruned, however block 2 was already reported as pruned at
the previous step.
This is a side-effect of the pruned blocks being reported at level N -
1. If, instead, all pruned forks were reported at first encounter (when
block 6 is finalized we already know that block 3 and block 5 are
stale), we would not need the LRU cache.
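A minimal sketch of the dedup idea (the capacity and types here are illustrative, not the actual chainHead implementation):

```rust
use std::collections::{HashSet, VecDeque};

// Bounded set remembering which pruned blocks were already reported,
// mirroring the LRU set described above.
struct ReportedPruned {
    order: VecDeque<u64>,
    seen: HashSet<u64>,
    capacity: usize,
}

impl ReportedPruned {
    fn new(capacity: usize) -> Self {
        Self { order: VecDeque::new(), seen: HashSet::new(), capacity }
    }

    // Returns only the blocks not reported before, and remembers them.
    fn report(&mut self, pruned: impl IntoIterator<Item = u64>) -> Vec<u64> {
        let mut fresh = Vec::new();
        for block in pruned {
            if self.seen.insert(block) {
                self.order.push_back(block);
                fresh.push(block);
                // Evict the oldest entry once over capacity.
                if self.order.len() > self.capacity {
                    if let Some(old) = self.order.pop_front() {
                        self.seen.remove(&old);
                    }
                }
            }
        }
        fresh
    }
}

fn main() {
    let mut lru = ReportedPruned::new(16);
    // Finalizing block 7 prunes [2, 3]; finalizing block 8 prunes
    // [2, 4, 5], but block 2 must not be reported twice.
    assert_eq!(lru.report([2, 3]), vec![2, 3]);
    assert_eq!(lru.report([2, 4, 5]), vec![4, 5]);
}
```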
cc @paritytech/subxt-team
Closes https://github.com/paritytech/polkadot-sdk/issues/3658
---------
Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>
Co-authored-by: Sebastian Kunert <skunert49@gmail.com>
Improve doc:
* the pallet macro actually refers to 2 places, the module and the
struct placeholder, but doesn't really clarify it (I should have named
the latter just `pallet_struct` or something, but it is a bit late)
* The doc of `with_default` is a bit confusing too IMO.
CC @kianenigma
---------
Co-authored-by: Liam Aharon <liam.aharon@hotmail.com>
Introduce `PalletId` as an additional seed parameter for the pool's
account id derivation.
The PR also introduces the `pallet_asset_conversion_ops` pallet with a
call to migrate a given pool to the new account. It additionally
introduces the `fungibles::lifetime::ResetTeam` and
`fungible::lifetime::Refund` traits to facilitate the migration of
pools.
---------
Co-authored-by: command-bot <>
This PR introduces changes enabling the transfer of coretime regions via
XCM.
TL;DR: There are two primary issues that are resolved in this PR:
1. The `mint` and `burn` functions were not implemented for coretime
regions. These operations are essential for moving assets to and from
the XCM holding register.
2. The transfer of non-fungible assets through XCM was previously
disallowed. This was due to incorrect benchmarking of non-fungible asset
transfers via XCM, which led to assigning them a weight of
`Weight::MAX`, effectively preventing their execution.
### `mint_into` and `burn` implementation
This PR addresses the issue with cross-chain transferring of regions
back to the Coretime chain. Remote reserve transfers are performed by
withdrawing and depositing the asset to and from the holding register,
which requires the asset to support burning and minting functionality.
This PR adds burning and minting; however, they work a bit differently
than usual so that the associated region record is not lost when
burning. Instead of removing all the data, burning sets the owner of the
region to `None`, and minting it back sets the owner to an actual value.
So, when cross-chain transferring, withdrawing into the register removes
the region from its original owner, and depositing it from the register
sets its owner to another account.
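A sketch of that behaviour with simplified types (the real `RegionRecord` and the trait implementations live in pallet-broker; the names and fields here are illustrative):

```rust
// Burning clears only the owner so the region record survives in
// storage; minting restores an owner.
#[derive(Debug, PartialEq)]
struct RegionRecord {
    owner: Option<u64>, // None while the region sits in the holding register
    end: u32,           // rest of the record is preserved across burn/mint
}

fn burn(record: &mut RegionRecord) {
    // Withdraw: remove the region from its original owner, keep the record.
    record.owner = None;
}

fn mint_into(record: &mut RegionRecord, who: u64) {
    // Deposit: assign the new owner.
    record.owner = Some(who);
}

fn main() {
    let mut region = RegionRecord { owner: Some(1), end: 42 };
    burn(&mut region);
    assert_eq!(region.owner, None);
    assert_eq!(region.end, 42); // record data not lost
    mint_into(&mut region, 2);
    assert_eq!(region.owner, Some(2));
}
```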
This was originally implemented in PR #3455; however, we decided to move
all of it into this single PR
(https://github.com/paritytech/polkadot-sdk/pull/3455#discussion_r1547324892).
### Fixes made in this PR
- Update the `XcmReserveTransferFilter` on the coretime chain, since it
is meant to be the reserve chain for coretime regions.
- Update the XCM benchmark to use `AssetTransactor` instead of assuming
`pallet-balances` for fungible transfers.
- Update the XCM benchmark to properly measure weight consumption for
non-fungible reserve asset transfers. At the moment, reserve transfers
via the extrinsic do not work since their weight is set to `Weight::MAX`.
Closes: https://github.com/paritytech/polkadot-sdk/issues/865
---------
Co-authored-by: Branislav Kontur <bkontur@gmail.com>
Co-authored-by: Francisco Aguirre <franciscoaguirreperez@gmail.com>
Co-authored-by: Dónal Murray <donalm@seadanda.dev>
This PR adjusts `xcm-bridge-hub-router` to be usable in the chain of
routers when a `NotApplicable` error occurs.
Closes: https://github.com/paritytech/polkadot-sdk/issues/4133
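The convention the router now follows can be sketched like this (simplified stand-ins for the real `SendXcm` machinery, with string destinations for illustration): `NotApplicable` lets the chain try the next router, while any other outcome is final.

```rust
#[derive(Debug, PartialEq)]
enum SendError {
    NotApplicable, // this router cannot handle the destination; try the next
    Transport,     // a hard error; abort the whole chain
}

trait Router {
    fn send(&self, dest: &str) -> Result<(), SendError>;
}

struct BridgeHubRouter;
impl Router for BridgeHubRouter {
    fn send(&self, dest: &str) -> Result<(), SendError> {
        if dest.starts_with("bridged:") { Ok(()) } else { Err(SendError::NotApplicable) }
    }
}

struct LocalRouter;
impl Router for LocalRouter {
    fn send(&self, dest: &str) -> Result<(), SendError> {
        if dest.starts_with("local:") { Ok(()) } else { Err(SendError::NotApplicable) }
    }
}

fn send_via_chain(routers: &[&dyn Router], dest: &str) -> Result<(), SendError> {
    for router in routers {
        match router.send(dest) {
            Err(SendError::NotApplicable) => continue, // fall through to next router
            other => return other,                     // success or hard error stops here
        }
    }
    Err(SendError::NotApplicable)
}

fn main() {
    let chain: [&dyn Router; 2] = [&BridgeHubRouter, &LocalRouter];
    assert!(send_via_chain(&chain, "bridged:asset-hub").is_ok());
    assert!(send_via_chain(&chain, "local:para-2000").is_ok());
    assert_eq!(send_via_chain(&chain, "unknown"), Err(SendError::NotApplicable));
}
```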
## TODO
- [ ] backport to polkadot-sdk 1.10.0 crates.io release
This PR sends the GrandpaNeighbor packet to light clients, similarly to
full nodes.
Previously, a light client would receive a GrandpaNeighbor packet only
when the voter set changed.
Related to: https://github.com/paritytech/polkadot-sdk/issues/4120
Next steps:
- [ ] check with lightclient
---------
Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>
Introduce types to define a 1:1 balance conversion for the native asset
across its different relative asset ids/locations.
Example: the native asset on Asset Hub, represented as the
`VersionedLocatableAsset` type in the context of the Relay Chain, is
```
{
  location: (0, Parachain(1000)),
  asset_id: (1, Here),
}
```
and its balance should be converted 1:1 by implementations of the
`ConversionToAssetBalance` trait.
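A minimal sketch of the expected behaviour (the trait shape is simplified relative to the real `ConversionToAssetBalance`, and the names are illustrative):

```rust
// Identity conversion: whichever relative location identifies the native
// asset, its balance maps 1:1 to the native balance.
trait ConvertToAssetBalance {
    fn to_asset_balance(native_balance: u128) -> u128;
}

struct NativeOneToOne;

impl ConvertToAssetBalance for NativeOneToOne {
    fn to_asset_balance(native_balance: u128) -> u128 {
        native_balance // 1:1, no exchange rate applied
    }
}

fn main() {
    assert_eq!(NativeOneToOne::to_asset_balance(1_000_000), 1_000_000);
}
```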
---------
Co-authored-by: Branislav Kontur <bkontur@gmail.com>
Updating the prdoc doc file to be a bit more useful for new contributors
and adding a SemVer section.
---------
Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
Changes:
- Saturate in the input validation of the drop-history function of
pallet-broker.
---------
Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
`LiveAsset` is an error to be returned when an asset is not supposed to
be live.
And `AssetNotLive` is an error to be returned when an asset is supposed
to be live, I don't think frozen qualifies as live.
The first test proves that parachains that were migrated over on a
legacy lease can renew without downtime.
The exception is if their lease expires in period 0 - aka within
`region_length` timeslices after `start_sales` is called. The second
test is designed such that it passes if the issue exists and should be
fixed.
This will require an intervention on Kusama to add these renewals to
storage as it is too tight to schedule a runtime upgrade before the
start_sales call. All leases will still have at least two full regions
of coretime.
This PR ensures the proper logging target (i.e. `libp2p_tcp` or `beefy`)
is displayed.
The issue has been introduced in:
https://github.com/paritytech/polkadot-sdk/pull/4059, which removes the
normalized metadata of logs.
From
[documentation](https://docs.rs/tracing-log/latest/tracing_log/trait.NormalizeEvent.html#tymethod.normalized_metadata):
> In tracing-log, an Event produced by a log (through
[AsTrace](https://docs.rs/tracing-log/latest/tracing_log/trait.AsTrace.html))
has an hard coded “log” target
>
[normalized_metadata](https://docs.rs/tracing-log/latest/tracing_log/trait.NormalizeEvent.html#tymethod.normalized_metadata):
If this Event comes from a log, this method provides a new normalized
Metadata which has all available attributes from the original log,
including file, line, module_path and target
The implications are low if a version containing the mentioned pull
request was deployed: we would lose the ability to distinguish between
log targets.
### Before this PR
```
2024-04-15 12:45:40.327 INFO main log: Parity Polkadot
2024-04-15 12:45:40.328 INFO main log: ✌️ version 1.10.0-d1b0ef76a8b
2024-04-15 12:45:40.328 INFO main log: ❤️ by Parity Technologies <admin@parity.io>, 2017-2024
2024-04-15 12:45:40.328 INFO main log: 📋 Chain specification: Development
2024-04-15 12:45:40.328 INFO main log: 🏷 Node name: yellow-eyes-2963
2024-04-15 12:45:40.328 INFO main log: 👤 Role: AUTHORITY
2024-04-15 12:45:40.328 INFO main log: 💾 Database: RocksDb at /tmp/substrated39i9J/chains/rococo_dev/db/full
2024-04-15 12:45:44.508 WARN main log: Took active validators from set with wrong size
...
2024-04-15 12:45:45.805 INFO main log: 👶 Starting BABE Authorship worker
2024-04-15 12:45:45.806 INFO tokio-runtime-worker log: 🥩 BEEFY gadget waiting for BEEFY pallet to become available...
2024-04-15 12:45:45.806 DEBUG tokio-runtime-worker log: New listen address: /ip6/::1/tcp/30333
2024-04-15 12:45:45.806 DEBUG tokio-runtime-worker log: New listen address: /ip4/127.0.0.1/tcp/30333
```
### After this PR
```
2024-04-15 12:59:45.623 INFO main sc_cli:🏃 Parity Polkadot
2024-04-15 12:59:45.623 INFO main sc_cli:🏃✌️ version 1.10.0-d1b0ef76a8b
2024-04-15 12:59:45.623 INFO main sc_cli:🏃❤️ by Parity Technologies <admin@parity.io>, 2017-2024
2024-04-15 12:59:45.623 INFO main sc_cli:🏃📋 Chain specification: Development
2024-04-15 12:59:45.623 INFO main sc_cli:🏃 🏷 Node name: helpless-lizards-0550
2024-04-15 12:59:45.623 INFO main sc_cli:🏃👤 Role: AUTHORITY
...
2024-04-15 12:59:50.204 INFO tokio-runtime-worker beefy: 🥩 BEEFY gadget waiting for BEEFY pallet to become available...
2024-04-15 12:59:50.204 DEBUG tokio-runtime-worker libp2p_tcp: New listen address: /ip6/::1/tcp/30333
2024-04-15 12:59:50.204 DEBUG tokio-runtime-worker libp2p_tcp: New listen address: /ip4/127.0.0.1/tcp/30333
```
Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>
Closes https://github.com/paritytech/opstooling/issues/174
Added a new step in the action that triggers review-bot, to stop
approvals from carrying over to new pushes.
This step works in the following way:
- If the **author of the PR**, who **is not** a member of the org,
pushed a new commit, then:
  - Review-Trigger requests new reviews from the reviewers and fails.
It *does not dismiss reviews*; it simply requests them again, and the
existing ones remain available.
This way, if the author changed something in the code, they will still
need to have this latest change approved, stopping them from uploading
malicious code.
Find the requested issue linked to this PR (it is from a private repo so
I can't link it here)
`sp-api-proc-macro` isn't used directly and thus should have the same
features enabled as `sp-api`. However, I have seen issues around
`frame-metadata` being enabled for `sp-api-proc-macro` but not for
`sp-api`. This can be prevented by using the `frame_metadata_enabled`
macro from `sp-api`, which ensures we have the same feature set between
both crates.
As discovered during the investigation of
https://github.com/paritytech/polkadot-sdk/issues/3314 and
https://github.com/paritytech/polkadot-sdk/issues/3673, there are active
validators which might accidentally change their network key during a
restart. That is not a safe operation while in the active set: because
of the distributed nature of the DHT, the old records still exist in the
network until they expire (36h). So, unless they have a good reason,
validators should avoid changing their key when they restart their
nodes.
There is a parallel effort to improve this situation
(https://github.com/paritytech/polkadot-sdk/pull/3786), but those
changes are way more intrusive and will need more rigorous testing.
Additionally, while they will reduce the expiry time to less than 36h,
propagation won't be instant anyway, so not changing your network key
during a restart remains the safest way to run your node, unless you
have a really good reason to change it.
## Proposal
1. Do not auto-generate the network if the network file does not exist
in the provided path. Nodes where the key file does not exist will get
the following error:
```
Error:
0: Starting an authorithy without network key in /home/alexggh/.local/share/polkadot/chains/ksmcc3/network/secret_ed25519.
This is not a safe operation because the old identity still lives in the dht for 36 hours.
Because of it your node might suffer from not being properly connected to other nodes for validation purposes.
If it is the first time running your node you could use one of the following methods.
1. Pass --unsafe-force-node-key-generation and make sure you remove it for subsequent node restarts
2. Separetly generate the key with: polkadot key generate-node-key --file <YOUR_PATH_TO_NODE_KEY>
```
2. Add an explicit parameter, `--unsafe-force-node-key-generation`, for
nodes that do want to change their network key despite the warnings, or
that run the node for the first time.
3. For `polkadot key generate-node-key`, add two new mutually exclusive
parameters, `base_path` and `default_base_path`, to help generate the
key in the same path the main polkadot command would expect it.
4. Modify the installation scripts to auto-generate a key in the default
path if one was not already present there; this should help make the
executable work out of the box after an installation.
## Notes
Nodes that do not already have the key persisted will fail to start
after this change. However, I consider that better than the current
situation, where they start but silently hide that they might not be
properly connected to their peers.
## TODO
- [x] Make sure only nodes that are authorities on production chains
will be affected by this restriction.
- [x] Proper PRDOC, to make sure node operators are aware this is
coming.
---------
Signed-off-by: Alexandru Gheorghe <alexandru.gheorghe@parity.io>
Co-authored-by: Dmitry Markin <dmitry@markin.tech>
Co-authored-by: s0me0ne-unkn0wn <48632512+s0me0ne-unkn0wn@users.noreply.github.com>
Co-authored-by: Bastian Köcher <git@kchr.de>
This workflow will automatically add issues related to async backing to
the Parachain team board, updating a custom "meta" field.
Requested by @the-right-joyce
This PR introduces `BlockHashProvider` into `pallet_mmr::Config`.
This type is used to get the `block_hash` for a given `block_number`
rather than directly using `frame_system::Pallet::block_hash`.
The `DefaultBlockHashProvider` uses `frame_system::Pallet::block_hash`
to get the `block_hash`.
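The indirection can be sketched as follows (types are simplified mocks; the real default provider reads `frame_system` storage rather than a map):

```rust
use std::collections::HashMap;

// The MMR pallet asks a `BlockHashProvider` for hashes instead of
// reading frame_system directly, so runtimes can plug in another source.
trait BlockHashProvider {
    fn block_hash(&self, number: u32) -> [u8; 32];
}

// Default behaviour: delegate to the chain's own record, mocked here.
struct DefaultBlockHashProvider {
    chain: HashMap<u32, [u8; 32]>,
}

impl BlockHashProvider for DefaultBlockHashProvider {
    fn block_hash(&self, number: u32) -> [u8; 32] {
        self.chain.get(&number).copied().unwrap_or([0u8; 32])
    }
}

fn main() {
    let mut chain = HashMap::new();
    chain.insert(1, [7u8; 32]);
    let provider = DefaultBlockHashProvider { chain };
    assert_eq!(provider.block_hash(1), [7u8; 32]);
    assert_eq!(provider.block_hash(9), [0u8; 32]); // unknown block
}
```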
Closes: #4062
This PR mainly removes `xcm::v3` stuff from `assets-common` to make it
more generic and facilitate the transition to newer XCM versions. Some
of the implementations here used hard-coded `xcm::v3::Location`, but now
it's up to the runtime to configure according to its needs.
Additional/consequent changes:
- the `penpal` runtime now uses `xcm::latest::Location` for
`pallet_assets` as `AssetId`, because we don't care about migrations
here
- it pretty much simplifies the xcm-emulator integration tests, where we
no longer need a lot of boilerplate conversions:
```
v3::Location::try_from(...).expect("conversion works")
```
- xcm-emulator tests
- split the macro `impl_assets_helpers_for_parachain` into
`impl_assets_helpers_for_parachain` and
`impl_foreign_assets_helpers_for_parachain` (avoids using the hard-coded
`xcm::v3::Location`)
1. The `CustomFmtContext::ContextWithFormatFields` enum variant isn't
actually used, so we don't need the enum anymore.
2. We don't do anything with most of the normalized metadata created by
calling `event.normalized_metadata()`; the `target` we can get from
`event.metadata.target()` and the level from `event.metadata.level()`,
so let's just call them directly to simplify things (`event.metadata()`
is just a field call under the hood).
Changelog: no functional changes; might run a tad faster with lots of
logging turned on.
---------
Co-authored-by: Bastian Köcher <git@kchr.de>
# Description
Add `transfer_assets_using()` for transferring assets from the local
chain to a destination chain using explicit XCM transfer types such as:
- `TransferType::LocalReserve`: transfer assets to sovereign account of
destination chain and forward a notification XCM to `dest` to mint and
deposit reserve-based assets to `beneficiary`.
- `TransferType::DestinationReserve`: burn local assets and forward a
notification to `dest` chain to withdraw the reserve assets from this
chain's sovereign account and deposit them to `beneficiary`.
- `TransferType::RemoteReserve(reserve)`: burn local assets, forward XCM
to `reserve` chain to move reserves from this chain's SA to `dest`
chain's SA, and forward another XCM to `dest` to mint and deposit
reserve-based assets to `beneficiary`. Typically the remote `reserve` is
Asset Hub.
- `TransferType::Teleport`: burn local assets and forward XCM to `dest`
chain to mint/teleport assets and deposit them to `beneficiary`.
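The four explicit transfer types above can be sketched as an enum; this is a simplified illustration of the local-side dispatch described in the bullets, not the actual `xcm` crate definition:

```rust
#[derive(Debug, PartialEq)]
enum TransferType {
    LocalReserve,
    DestinationReserve,
    RemoteReserve(&'static str), // the reserve chain, e.g. Asset Hub
    Teleport,
}

// What happens to the asset on the local chain, per the description above.
fn local_effect(transfer: &TransferType) -> &'static str {
    match transfer {
        // keep assets in the destination's sovereign account on this chain
        TransferType::LocalReserve => "move to dest sovereign account",
        // all other types burn locally and instruct remote chains via XCM
        TransferType::DestinationReserve
        | TransferType::RemoteReserve(_)
        | TransferType::Teleport => "burn locally",
    }
}

fn main() {
    assert_eq!(
        local_effect(&TransferType::LocalReserve),
        "move to dest sovereign account"
    );
    assert_eq!(local_effect(&TransferType::RemoteReserve("AssetHub")), "burn locally");
    assert_eq!(local_effect(&TransferType::Teleport), "burn locally");
}
```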
By default, an asset's reserve is its origin chain. But sometimes we may
want to explicitly use another chain as reserve (as long as allowed by
runtime `IsReserve` filter).
This is very helpful for transferring assets with multiple configured
reserves (such as Asset Hub ForeignAssets), when the transfer strictly
depends on the used reserve.
E.g. For transferring Foreign Assets over a bridge, Asset Hub must be
used as the reserve location.
# Example usage scenarios
## Transfer bridged Ethereum ERC20-tokenX between ecosystem parachains
ERC20-tokenX is registered on AssetHub as a ForeignAsset by the
Polkadot<>Ethereum bridge (Snowbridge). Its asset_id is something like
`(parents:2, (GlobalConsensus(Ethereum), Address(tokenX_contract)))`.
Its _original_ reserve is Ethereum (only we can't use Ethereum as a
reserve in local transfers); but, since tokenX is also registered on
AssetHub as a ForeignAsset, we can use AssetHub as a reserve.
With this PR we can transfer tokenX from ParaA to ParaB while using
AssetHub as a reserve.
## Transfer AssetHub ForeignAssets between parachains
AssetA is created on ParaA but is also registered as a foreign asset on
Asset Hub, so AssetHub can be used as a reserve.
And all of the above can be done while still controlling the transfer
type for `fees`, so mixing assets in the same transfer is supported.
# Tests
Added integration tests for showcasing:
- transferring local (not bridged) assets from parachain over bridge
using local Asset Hub reserve,
- transferring foreign assets from parachain to Asset Hub,
- transferring foreign assets from Asset Hub to parachain,
- transferring foreign assets from parachain to parachain using local
Asset Hub reserve.
---------
Co-authored-by: Branislav Kontur <bkontur@gmail.com>
Co-authored-by: command-bot <>
Small adjustments which should make understanding what is going on much
easier for future readers.
Initialization is a bit messy; the very least we should do is add
documentation to make it harder to use wrongly.
I was thinking about calling `request_core_count` right from
`start_sales`, but as explained in the docs, this is not necessarily
what you want.
---------
Co-authored-by: eskimor <eskimor@no-such-url.com>
Co-authored-by: Bastian Köcher <git@kchr.de>
Co-authored-by: Dónal Murray <donal.murray@parity.io>
One more try to make this test robust from a resource perspective.
---------
Signed-off-by: Andrei Sandu <andrei-mihail@parity.io>
Co-authored-by: Javier Viola <javier@parity.io>
Implements the idea from
https://github.com/paritytech/polkadot-sdk/pull/3899
- Removed latencies
- Number of runs reduced from 50 to 5; according to local runs, that's
quite enough
- Network message is always sent in a spawned task, even if latency is
zero. Without it, CPU time sometimes spikes.
- Removed the `testnet` profile because we probably don't need those
debug additions.
After the local tests, I can't say that it brings a significant
improvement in the stability of the results. However, I believe it is
worth trying and looking at the results over time.
Currently, the `subsystem-regression-tests` job fails if the first
benchmark fails, leaving no result for the second benchmark. Also,
dividing the job makes the pipeline faster (it is currently the longest
job).
cc https://github.com/paritytech/ci_cd/issues/969
cc @AndreiEres
---------
Co-authored-by: Andrei Eres <eresav@me.com>
This is phase 2 of async backing enablement for the system parachains on
the production networks. ~~It should be merged after
polkadot-fellows/runtimes#228 is enacted. After it is released,~~ all
the system parachain collators should be upgraded, and then we can
proceed with phase 3, which will enable async backing in the runtimes.
UPDATE: Indeed, we don't need to wait for the runtime upgrades to be
enacted. The lookahead collator handles the transition by itself, so we
can upgrade ASAP.
## Scope of changes
Here, we eliminate the dichotomy of having "generic Aura collators" for
the production system parachains and "lookahead Aura collators" for the
testnet system parachains. Now, all the collators are started as
lookahead ones, preserving the logic of transferring from the shell node
to Aura-enabled collators for the asset hubs. So, indeed, it simplifies
the parachain service logic, which cannot but rejoice.