* Replace `futures-channel` with `async-channel` in `out_events`
* Apply suggestions from code review
Co-authored-by: Koute <koute@users.noreply.github.com>
* Also print the backtrace of `send()` call
* Switch from `backtrace` crate to `std::backtrace`
* Remove outdated `backtrace` dependency
* Remove `backtrace` from `Cargo.lock`
---------
Co-authored-by: Koute <koute@users.noreply.github.com>
* Speed up storage iteration from within the runtime
* Move the cached iterator into an `Option`
* Use `RefCell` in no_std
* Simplify the code slightly
* Use `Option::replace`
* Update doc comment for `next_storage_key_slow`
* Temporary commit to make the Substrate CI happy
* Revert "Temporary commit to make the Substrate CI happy"
This reverts commit 9eb2fd223c3e36312242d4fda4ebacf3dd732547.
* Align to substrate master
* Update lock
* Adjust some naming according to the new substrate crates
* Bump default 'additional_trie_layers' to two
The default here only works for extremely small runtimes, which have
no more than 16 storage prefixes. This is changed to a "sane" default
of 2, which is safe for runtimes with up to 4096 storage prefixes (e.g. `StorageValue`s).
Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
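The numbers in the message above can be sanity-checked under one assumption (mine, not stated in the patch): Substrate's state trie is radix-16, so if `additional_trie_layers` counts extra branch levels below the root, then `layers` of them distinguish `16^(layers + 1)` storage prefixes, which matches both the 16-prefix limit of the old default and the 4096-prefix bound of the new one. A minimal sketch:

```rust
// Hypothetical sanity check, not code from the patch: assuming a radix-16
// trie where `additional_trie_layers` counts branch levels below the root,
// each extra level multiplies the number of distinguishable prefixes by 16.
fn max_prefixes(additional_trie_layers: u32) -> u64 {
    16u64.pow(additional_trie_layers + 1)
}

fn main() {
    // The old setting only covered runtimes with up to 16 storage prefixes.
    assert_eq!(max_prefixes(0), 16);
    // The new default of 2 is safe for up to 4096 prefixes.
    assert_eq!(max_prefixes(2), 4096);
}
```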
* Update tests and test weights
Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
* Fix PoV weights
Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
* ".git/.scripts/commands/bench/bench.sh" pallet dev pallet_balances
* ".git/.scripts/commands/bench/bench.sh" pallet dev pallet_message_queue
* ".git/.scripts/commands/bench/bench.sh" pallet dev pallet_glutton
* ".git/.scripts/commands/bench/bench.sh" pallet dev pallet_glutton
* Fix sanity check
>0 would also do as a check, but let's try this.
Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
---------
Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
Co-authored-by: command-bot <>
Establish fewer outbound connections in an attempt to allow publicly
available nodes to accept more full nodes.
Maintain the overall number of connections the node should establish.
* `pallet-treasury`: Ensure we respect `max_amount` for spend across batch calls
When calling `spend` the origin defines the `max_amount` of tokens it is allowed to spend. The
problem is that someone can send a `batch(spend, spend)` to circumvent this restriction as we don't
check across different calls that the `max_amount` is respected. This pull request fixes this
behavior by introducing a so-called dispatch context. This dispatch context is created once per
outermost `dispatch` call. For more information, see the docs in this PR. The treasury then uses
this dispatch context to attach information about already spent funds per `max_amount` (we assume
that each origin has a different `max_amount` configured). So, a `batch(spend, spend)` is now
checked to stay inside the allowed spending bounds.
Fixes: https://github.com/paritytech/substrate/issues/13167
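The mechanism described above can be sketched in standalone Rust. This is an illustrative model only, not the pallet's actual implementation: the names `with_dispatch_context`, `spend`, and the thread-local `SPENT` map are my own, and `u128` stands in for the balance type. The key idea from the commit message is that the context is created once per outermost dispatch, so all calls inside a `batch` share one running total per `max_amount`:

```rust
use std::cell::RefCell;
use std::collections::HashMap;

// Running total of funds spent, keyed by `max_amount` (the commit assumes
// each origin has a distinct `max_amount` configured). `None` means we are
// not inside a dispatch.
thread_local! {
    static SPENT: RefCell<Option<HashMap<u128, u128>>> = RefCell::new(None);
}

// Create the context only at the outermost dispatch; nested (batched)
// calls reuse the existing one and therefore share its running totals.
fn with_dispatch_context<R>(f: impl FnOnce() -> R) -> R {
    let created = SPENT.with(|c| {
        let mut c = c.borrow_mut();
        if c.is_none() {
            *c = Some(HashMap::new());
            true
        } else {
            false
        }
    });
    let result = f();
    if created {
        SPENT.with(|c| *c.borrow_mut() = None);
    }
    result
}

// A spend is checked against everything already spent under the same
// `max_amount` within the current dispatch, not just its own `amount`.
fn spend(max_amount: u128, amount: u128) -> Result<(), &'static str> {
    SPENT.with(|c| {
        let mut c = c.borrow_mut();
        let ctx = c.as_mut().ok_or("spend called outside a dispatch")?;
        let spent = ctx.entry(max_amount).or_insert(0);
        if *spent + amount > max_amount {
            return Err("total spend exceeds max_amount");
        }
        *spent += amount;
        Ok(())
    })
}

fn main() {
    // Model of `batch(spend, spend)`: the second call sees what the first
    // already spent, so it can no longer circumvent `max_amount`.
    let (first, second) = with_dispatch_context(|| (spend(100, 80), spend(100, 80)));
    assert!(first.is_ok());
    assert!(second.is_err());
}
```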
* Import `Box` for wasm
* FMT
* Westmint test for ReceiveTeleportedAsset
* Missing fix for `weigh_multi_assets`
* Added tests for statemine/statemint
* [Enhancement] Use XCM V3 for initiate_teleport weight calc (#2102)
* [Enhancement] Use XCM V3 for initiate_teleport weight calc
* deref
* replicate in all the runtimes
* fmt
* better handling for AllOf
* fmt
* small type fix
* replicate the fix for all runtimes
---------
Co-authored-by: parity-processbot <>
* removed `frame_support::sp_tracing::try_init_simple();`
* Review fixes
* Removed `as u64`
---------
Co-authored-by: Branislav Kontur <bkontur@gmail.com>
Co-authored-by: Roman Useinov <roman.useinov@gmail.com>
* Remove `Backend::apply_to_key_values_while`
* Add `IterArgs::start_at_exclusive`
* Use `start_at_exclusive` in functions which used `Backend::apply_to_key_values_while`
* Remove `Backend::apply_to_keys_while`
* Remove `for_keys_with_prefix`, `for_key_values_with_prefix` and `for_child_keys_with_prefix`
* Remove unnecessary `to_vec` calls
* Fix unused method warning in no_std
* Remove unnecessary import
* Also check proof sizes in the test
* Iterate over both keys and values in `prove_range_read_with_size` and add a test
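The shape of this refactor, replacing callback-driven traversal (`apply_to_key_values_while` and friends) with a plain iterator whose starting bound can be exclusive, can be sketched against a sorted map standing in for the state backend. This is a simplified model under my own assumptions, not the actual `sp-state-machine` API; only the `IterArgs`/`start_at_exclusive` names come from the commit messages:

```rust
use std::collections::BTreeMap;
use std::ops::Bound;

// Simplified stand-in for the backend's iterator arguments. The real
// `IterArgs` has more fields; `start_at_exclusive` is the one added here,
// so callers resuming iteration can skip the key they already returned.
struct IterArgs {
    start_at: Option<Vec<u8>>,
    start_at_exclusive: bool,
}

// Instead of pushing a `while`-style callback into the backend, hand out
// an ordinary iterator over keys, starting at the requested bound.
fn keys<'a>(
    db: &'a BTreeMap<Vec<u8>, Vec<u8>>,
    args: IterArgs,
) -> impl Iterator<Item = &'a [u8]> + 'a {
    let lower = match (args.start_at, args.start_at_exclusive) {
        (Some(k), true) => Bound::Excluded(k),
        (Some(k), false) => Bound::Included(k),
        (None, _) => Bound::Unbounded,
    };
    db.range((lower, Bound::Unbounded)).map(|(k, _)| k.as_slice())
}

fn main() {
    let mut db = BTreeMap::new();
    for k in [b"a".to_vec(), b"b".to_vec(), b"c".to_vec()] {
        db.insert(k, Vec::new());
    }
    // Resume after "a": the exclusive bound yields "b" first.
    let args = IterArgs { start_at: Some(b"a".to_vec()), start_at_exclusive: true };
    assert_eq!(keys(&db, args).next(), Some(&b"b"[..]));
}
```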
* improve error message
* removed unused argument
* docs: disconnect_peer_inner no longer accepts `ban`
* remove redundant trace message
```
sync: Too many full nodes, rejecting 12D3KooWSQAP2fh4qBkLXBW4mvCtbAiK8sqMnExWHHTZtVAxZ8bQ
sync: 12D3KooWSQAP2fh4qBkLXBW4mvCtbAiK8sqMnExWHHTZtVAxZ8bQ disconnected
```
is enough to understand that we've refused to connect to the given peer
* Revert "removed unused argument"
This reverts commit c87f755b1fd03494fb446b604fe25c2418da7c87.
* ban peer for 10s after disconnect
* do not accept incoming conns if peer was banned
* Revert "do not accept incoming conns if peer was banned"
This reverts commit 7e59d05975765f2547468e9dcfd1361516c41e06.
* Revert "ban peer for 10s after disconnect"
This reverts commit 3859201ced42a5b2d18c0600e29efd20962a7289.
* Revert "Revert "removed unused argument""
This reverts commit f1dc623646dc5a69e1822c35f428e90dffe34d95.
* format code
* Revert "remove redundant trace message"
This reverts commit a87e65f08553dbe69027e9aa4f7ca4779ccaa7f2.
* [ci] Deduplicate variables: sections in pipeline specs
The prettier yaml parser doesn't like these.
* [ci] provide git clean filter to format pipeline specs
* [ci] Reformat pipeline specs with prettier