* ed25519_verify: Support using dalek for historical blocks
The switch from `ed25519-dalek` to `ed25519-zebra` was actually a breaking change, as `ed25519-zebra`
is more permissive. To support syncing historical blocks, this pull request introduces
an externalities extension, `UseDalekExt`. The extension is purely a signaling mechanism that tells
`ed25519_verify` to use `ed25519-dalek` when it is present. Together with `ExtensionBeforeBlock`, it
can be used to set up a node so that it can sync historical blocks that require `ed25519-dalek`, because
they included a transaction that verified differently than it would under `ed25519-zebra`.
This feature can be enabled as follows: in the chain's service file, directly after the
client is created, add the following code:
```
use sc_client_api::ExecutorProvider;

client.execution_extensions().set_extensions_factory(
    sc_client_api::execution_extensions::ExtensionBeforeBlock::<Block, sp_io::UseDalekExt>::new(
        BLOCK_NUMBER_UNTIL_DALEK_SHOULD_BE_USED,
    ),
);
```
* Fix doc
* More fixes
* Update client/api/src/execution_extensions.rs
Co-authored-by: André Silva <123550+andresilva@users.noreply.github.com>
* Fix merge and warning
* Fix docs
Co-authored-by: André Silva <123550+andresilva@users.noreply.github.com>
* CI: Explicitly unset RUSTC_WRAPPER=sccache environment variable
* Try with `rusty-cachier` disabled
* Re-enable `rusty-cachier` and try with the staging image
* Bring back `production` image
* Sort crates before splitting them into groups (+ some improvements) (#12755)
* sort crates before splitting them into groups
this is useful so that crates always get routed to a specific group for a given version of the source code, which means that jobs for each batch can be reliably retried individually
* more verbose output
* misc improvements
* put uniq after sort
uniq filters by adjacent lines
* shellcheck
* rm useless backslashes
* handle edge case of no crates detected
* Revert "Sort crates before splitting them into groups (+ some improvements) (#12755)"
This reverts commit fde839183a12a2bd51efc7143ebcddeed81ea6fa.
Co-authored-by: João Paulo Silva de Souza <77391175+joao-paulo-parity@users.noreply.github.com>
* allow fellows to abdicate voting rights
* rename founders to founding fellows, give equal power
* Drop FoundingFellow role and veto call (#12762)
* drop FoundingFellow role
* drop veto call
* Storage migration to remove founder role (#12766)
* storage migration to remove founder role
* skip migration if no members
* truncate the final fellows set if it overflows
* change log - action order
* MemberAbdicated -> FellowAbdicated
Co-authored-by: Muharem Ismailov <ismailov.m.h@gmail.com>
* sort crates before splitting them into groups
this is useful so that crates always get routed to a specific group for a given version of the source code, which means that jobs for each batch can be reliably retried individually
* more verbose output
* misc improvements
* put uniq after sort
uniq filters by adjacent lines
* shellcheck
* rm useless backslashes
* handle edge case of no crates detected
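The batching idea above (sort so grouping is deterministic for a given source tree, dedupe, split into groups, round the division up, guarantee at least one crate per group, handle the no-crates edge case) can be sketched in Rust. The function name and inputs here are illustrative; the real implementation lives in the CI shell scripts.

```rust
// Sketch of the CI batching logic: sort crate names so a given source tree
// always produces the same groups (making per-batch job retries reliable),
// dedupe (like `sort | uniq` -- uniq only collapses adjacent duplicates,
// hence sort first), then split using ceiling division.
fn split_into_groups(mut crates: Vec<String>, max_groups: usize) -> Vec<Vec<String>> {
    crates.sort();
    crates.dedup(); // only removes adjacent repeats, so sort must come first
    if crates.is_empty() {
        return Vec::new(); // edge case: no crates detected
    }
    // Ceiling division rounds up so no crate is dropped from the last group;
    // the max(1, ..) ensures a minimum of one crate per group.
    let per_group = std::cmp::max(1, (crates.len() + max_groups - 1) / max_groups);
    crates.chunks(per_group).map(|c| c.to_vec()).collect()
}

fn main() {
    let crates: Vec<String> = vec!["sp-core", "sc-cli", "sp-core", "frame-system"]
        .into_iter()
        .map(String::from)
        .collect();
    let groups = split_into_groups(crates, 2);
    // Deterministic result: [["frame-system", "sc-cli"], ["sp-core"]]
    assert_eq!(groups.len(), 2);
    println!("{:?}", groups);
}
```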
* client/beefy: remove high-freq network events from main loop
Network events are many and very frequent; remove the net-event-stream
from the main voter loop to drastically reduce BEEFY voter task
'wakeups'.
Instead have the `GossipValidator` track known peers as it already
has callbacks for that coming from `GossipEngine`.
Signed-off-by: acatangiu <adrian@parity.io>
* client/beefy: prepare worker for persisting state
* client/beefy: persist voter state
* client/beefy: initialize persistent state
* client/beefy: try to vote from the very beginning
Now that the voter is initialized from persistent state, it makes
sense for it to attempt voting right away. This also helps
the genesis case, where we consider block `One` mandatory.
* client/beefy: add tests for voter state db
* client/beefy: persist voter state as soon as initialized
* client/beefy: make sure min-block-delta is at least 1
* client/beefy: persist state after voting
Persist state after handling self vote to avoid double voting in case
of voter restarts.
* client/beefy: persist state after handling mandatory block vote
For mandatory blocks we want to make sure we're not losing votes
in case of crashes or restarts, since the voter will not make further
progress without finalizing them.
* frame/beefy: use GENESIS_AUTHORITY_SET_ID on pallet genesis
* client/beefy: initialize voter at either genesis or last finalized
To guarantee unbroken chain of mandatory blocks justifications, voter
will always resume from either last BEEFY-justified block or
`pallet-beefy` genesis, whichever is more recent.
Initialization walks back the chain from latest GRANDPA finalized
block looking for one of the above. Along the way, it also records
and enqueues for processing any BEEFY mandatory blocks that have
been already GRANDPA finalized but not BEEFY finalized.
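The initialization rule described above can be sketched with mock types. All names here (`init_voter`, `InitState`, the numeric block numbers) are illustrative, not the actual BEEFY client API, and the real code walks the chain backwards by header rather than iterating a numeric range.

```rust
// Illustrative sketch (not the real BEEFY client API) of voter init:
// resume from the more recent of last-BEEFY-justified block and
// pallet-beefy genesis, and enqueue mandatory (session-boundary) blocks
// that are already GRANDPA-finalized but not yet BEEFY-finalized.
#[derive(Debug, PartialEq)]
struct InitState {
    resume_from: u64,
    pending_mandatory: Vec<u64>, // mandatory blocks awaiting BEEFY finality
}

fn init_voter(
    grandpa_finalized: u64,
    beefy_genesis: u64,
    last_beefy_justified: Option<u64>,
    is_mandatory: impl Fn(u64) -> bool, // e.g. first block of each session
) -> InitState {
    // Whichever is more recent: last justified block or pallet genesis.
    let resume_from = last_beefy_justified
        .map(|j| j.max(beefy_genesis))
        .unwrap_or(beefy_genesis);
    // Everything between the resume point and the GRANDPA-finalized tip
    // that is mandatory still needs a BEEFY justification.
    let pending_mandatory: Vec<u64> = (resume_from + 1..=grandpa_finalized)
        .filter(|n| is_mandatory(*n))
        .collect();
    InitState { resume_from, pending_mandatory }
}

fn main() {
    // Sessions every 10 blocks; BEEFY genesis at 10, last justified at 20,
    // GRANDPA finalized up to 45 -> blocks 30 and 40 are still pending.
    let state = init_voter(45, 10, Some(20), |n| n % 10 == 0);
    assert_eq!(state.resume_from, 20);
    assert_eq!(state.pending_mandatory, vec![30, 40]);
}
```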
* client/beefy: decouple voter init from aux db state load
* client/beefy: fix voter init tests
* remove debug prints
* gadget future must be of type `()`
* fix init from last justification
Signed-off-by: Adrian Catangiu <adrian@parity.io>
* check all crates individually
It's important to check workspace crates individually because otherwise compilation problems
due to feature misconfigurations won't be caught, as exemplified by
https://github.com/paritytech/substrate/issues/12705
* adapt to lack of multiple macos runners
https://github.com/paritytech/substrate/pull/12709#discussion_r1022868752
* fix cancel-pipeline-cargo-check-each-crate-macos
* fix cargo-check-each-crate-macos again
* time command execution
* fix YAML anchors
* add explanation for rounding division
* ensure the minimum of one crate per group
* collect artifacts for pipeline stopper
* revert hardcoded crates_per_group
* re-add crates_per_group=1
Add an example of how to test for events to the example pallet. Right now, this information is pretty hard to find without looking into pallet tests or tracking down particular posts on Stack Overflow.
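The pattern the example documents can be sketched without the framework. In a real FRAME test you would dispatch a call and then use `System::assert_last_event(Event::SomethingStored { .. }.into())`; here a plain `Vec` stands in for the runtime's event storage, and all type names are illustrative.

```rust
// Framework-free sketch of the event-testing pattern: record events as a
// dispatchable would, then assert on the most recent one.
#[derive(Debug, Clone, PartialEq)]
enum Event {
    SomethingStored { value: u32, who: u64 },
}

#[derive(Default)]
struct MockSystem {
    events: Vec<Event>,
}

impl MockSystem {
    fn deposit_event(&mut self, event: Event) {
        self.events.push(event);
    }
    // Mirrors the shape of the assertion helper used in pallet tests.
    fn assert_last_event(&self, expected: Event) {
        assert_eq!(self.events.last(), Some(&expected));
    }
}

fn main() {
    let mut system = MockSystem::default();
    // A successful dispatchable would deposit this event.
    system.deposit_event(Event::SomethingStored { value: 42, who: 1 });
    system.assert_last_event(Event::SomethingStored { value: 42, who: 1 });
}
```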
Before, this was using `build_storage`, and `assimilate_storage` was returning an error. However, there
was no real reason for `assimilate_storage` to return an error. This PR implements
`assimilate_storage` and uses the trait's default `build_storage`.
* Support repeated destroys to safely destroy large assets
* require freezing accounts before destroying
* support only deleting the asset as the final stage, when there are no assets left
* pre: introduce the RemoveKeyLimit config parameter
* debug_ensure empty account in the right if block
* update to having separate max values for accounts and approvals
* add tests and use RemoveKeyLimit constant
* add useful comments to the extrinsics, and calculate returned weight
* add benchmarking for start_destroy and finish destroy
* push failing benchmark logic
* add benchmark tests for new functions
* update weights via local benchmarks
* remove extra weight file
* Update frame/assets/src/lib.rs
Co-authored-by: joe petrowski <25483142+joepetrowski@users.noreply.github.com>
* Update frame/assets/src/types.rs
Co-authored-by: joe petrowski <25483142+joepetrowski@users.noreply.github.com>
* Update frame/assets/src/lib.rs
Co-authored-by: joe petrowski <25483142+joepetrowski@users.noreply.github.com>
* effect some changes from codereview
* use NotFrozen error
* remove origin checks, as anyone can complete destruction after the owner has begun the process; add a live check for the other extrinsics
* fix comments about Origin behaviour
* add AssetStatus docs
* modularize logic to allow calling logic in on_idle and on_initialize hooks
* introduce simple migration for assets details
* reintroduce logging in the migrations
* move deposit_event out of the mutate block
* Update frame/assets/src/functions.rs
Co-authored-by: Muharem Ismailov <ismailov.m.h@gmail.com>
* Update frame/assets/src/migration.rs
Co-authored-by: Muharem Ismailov <ismailov.m.h@gmail.com>
* move AssetNotLive checkout out of the mutate blocks
* rename RemoveKeysLimit to RemoveItemsLimit
* update docs
* fix event name in benchmark
* fix cargo fmt.
* fix lint in benchmarking
* Empty commit to trigger CI
* Update frame/assets/src/lib.rs
Co-authored-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
* Update frame/assets/src/lib.rs
Co-authored-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
* Update frame/assets/src/functions.rs
Co-authored-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
* Update frame/assets/src/functions.rs
Co-authored-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
* Update frame/assets/src/functions.rs
Co-authored-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
* Update frame/assets/src/lib.rs
Co-authored-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
* Update frame/assets/src/functions.rs
Co-authored-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
* effect change suggested during code review
* move limit to a single location
* Update frame/assets/src/functions.rs
Co-authored-by: joe petrowski <25483142+joepetrowski@users.noreply.github.com>
* rename events
* fix weight typo, using rocksdb instead of T::DbWeight. Pending generating weights
* switch to using dead_account.len()
* rename event in the benchmarks
* empty to retrigger CI
* trigger CI to check cumulus dependency
* trigger CI for dependent cumulus
* Update frame/assets/src/migration.rs
Co-authored-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
* move `is_frozen` into the `AssetStatus` enum (#12547)
* add pre and post migration hooks
* update do_transfer logic to add new assert for more correct error messages
* trigger CI
* switch checking AssetStatus from checking Destroying state to checking live state
* fix error type in tests from Frozen to AssetNotLive
* trigger CI
* change ensure check for fn reducible_balance()
* change the error type to Error:<T,I>::IncorrectStatus to be clearer
* Trigger CI
Co-authored-by: joe petrowski <25483142+joepetrowski@users.noreply.github.com>
Co-authored-by: parity-processbot <>
Co-authored-by: Muharem Ismailov <ismailov.m.h@gmail.com>
Co-authored-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
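The staged destruction flow introduced by the assets changes above can be sketched as a small state machine. The names mirror the PR (`AssetStatus`, `start_destroy`, `destroy_accounts`, `finish_destroy`, `RemoveItemsLimit`), but this is a simplified illustration, not the pallet code: freezing is modeled at the asset level and accounts as a plain list.

```rust
// Simplified illustration of repeated, bounded destruction: start_destroy
// flips the asset into Destroying, accounts are then removed in batches of
// at most REMOVE_ITEMS_LIMIT per call (so each call does bounded work), and
// only when nothing remains does finish_destroy delete the asset itself.
#[derive(Debug, Clone, Copy, PartialEq)]
enum AssetStatus {
    Live,
    Frozen,
    Destroying,
}

struct Asset {
    status: AssetStatus,
    accounts: Vec<u64>,
}

const REMOVE_ITEMS_LIMIT: usize = 2; // illustrative bound per call

impl Asset {
    fn start_destroy(&mut self) -> Result<(), &'static str> {
        // Destruction may only begin once the asset is frozen.
        if self.status != AssetStatus::Frozen {
            return Err("NotFrozen");
        }
        self.status = AssetStatus::Destroying;
        Ok(())
    }

    /// Remove up to REMOVE_ITEMS_LIMIT accounts; safe to call repeatedly.
    fn destroy_accounts(&mut self) -> Result<usize, &'static str> {
        if self.status != AssetStatus::Destroying {
            return Err("IncorrectStatus");
        }
        let n = self.accounts.len().min(REMOVE_ITEMS_LIMIT);
        let keep = self.accounts.len() - n;
        self.accounts.truncate(keep);
        Ok(n)
    }

    fn finish_destroy(self) -> Result<(), &'static str> {
        if self.status != AssetStatus::Destroying || !self.accounts.is_empty() {
            return Err("IncorrectStatus");
        }
        Ok(()) // asset fully removed as the final stage
    }
}

fn main() {
    let mut asset = Asset { status: AssetStatus::Frozen, accounts: vec![1, 2, 3] };
    asset.start_destroy().unwrap();
    // Repeated bounded calls until all accounts are gone.
    while asset.destroy_accounts().unwrap() > 0 {}
    asset.finish_destroy().unwrap();
}
```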
* init
* clean
* remove the manual getter for `ReferendumStatus` in favor of changing `pub(crate)` to `pub` for the `ReferendumStatus`, `DecidingStatus` and `Deposit` types
* rm status getters because fields are pub now
* Remove `sp_tasks::spawn` API and related code
* Remove `RuntimeTasks::{spawn, join}` host functions
* remove unused
* Remove a few more tests that I forgot to remove
Co-authored-by: Shawn Tabrizi <shawntabrizi@gmail.com>
The grandpa crate derives `Debug` only when the `std` feature is enabled. `RuntimeDebug` can be
forced to derive `Debug` in `no_std` as well, and the two don't work together. So, we should feature gate
`Debug` on `no_std`.
* `sp-runtime`: make `parity-util-mem` dependency optional
* Use default-features = false for sp-runtime in sp-keyring
* Remove parity-util-mem from sp-core
* Cargo.lock
* Restore default-features for keyring dependency
* histor. batch proof: make best block arg optional
* correct testing range
* make generate_batch_proof stub for historical
* merge generate_{historical_}batch_proof functions
* merge generate_{batch_}proof functions
* merge verify_{batch_}proof functions
* merge verify_{batch_}proof_stateless functions
* remove {Leaf}Proof
Not utilized by API anymore, so superfluous.
Removal consistent with prior changes to just use "batch" proof API.
* rename BatchProof->Proof
no need to qualify it when there is only one universal proof type.
* cleanup
* expose verify_proof rpc api
* document verify_proof
* expose verify_proof_stateless rpc api
* add optional BlockHash to mmr_root rpc api
* fixup! expose verify_proof rpc api
* fix documentation phrasing
Co-authored-by: Adrian Catangiu <adrian@parity.io>
* documentation grammar
Co-authored-by: Adrian Catangiu <adrian@parity.io>
* define mmr error msgs together with error enum
Co-authored-by: Serban Iorga <serban@parity.io>
* fixup! define mmr error msgs together with error enum
* map decoding errors to CallError::InvalidParams
Co-authored-by: Serban Iorga <serban@parity.io>
* fixup! map decoding errors to CallError::InvalidParams
Co-authored-by: Adrian Catangiu <adrian@parity.io>
Co-authored-by: parity-processbot <>
Co-authored-by: Serban Iorga <serban@parity.io>