mirror of
https://github.com/pezkuwichain/pezkuwi-subxt.git
synced 2026-04-26 23:57:56 +00:00
fd5f9292f5
Closes #2160

First part of [Extrinsic Horizon](https://github.com/paritytech/polkadot-sdk/issues/2415)

Introduces a new trait `TransactionExtension` to replace `SignedExtension`. Introduces the idea of transactions which obey the runtime's extensions and carry the according Extension data (né Extra data) yet do not have hard-coded signatures.

Deprecates the terminology "Unsigned" when used for transactions/extrinsics, owing to there now being "proper" unsigned transactions which obey the extension framework and "old-style" unsigned ones which do not. Instead we have __*General*__ for the former and __*Bare*__ for the latter. (Ultimately, the latter will be phased out as a type of transaction, and Bare will only be used for Inherents.)

Types of extrinsic are now therefore:
- Bare (no hardcoded signature, no Extra data; used to be known as "Unsigned"):
  - Bare transactions (deprecated): Gossiped, validated with `ValidateUnsigned` (deprecated) and the `_bare_compat` bits of `TransactionExtension` (deprecated).
  - Inherents: Not gossiped, validated with `ProvideInherent`.
- Extended (Extra data): Gossiped, validated via `TransactionExtension`:
  - Signed transactions (with a hardcoded signature).
  - General transactions (without a hardcoded signature).

`TransactionExtension` differs from `SignedExtension` because:
- A signature on the underlying transaction may validly not be present.
- It may alter the origin during validation.
- `pre_dispatch` is renamed to `prepare` and need not contain the checks present in `validate`.
- `validate` and `prepare` are passed an `Origin` rather than an `AccountId`.
- `validate` may pass arbitrary information into `prepare` via a new user-specifiable type `Val`.
- `AdditionalSigned`/`additional_signed` is renamed to `Implicit`/`implicit`. It is encoded *for the entire transaction* and passed in to each extension as a new argument to `validate`. This facilitates the ability of extensions to act as underlying crypto.
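The resulting taxonomy can be sketched with a small, self-contained Rust model. This is illustrative only: `ExtrinsicFormat` here is a condensed stand-in for the real generic Substrate type, and the bit layout is an assumption inferred from the version-discriminator bytes listed further down (top bit for Signed, next bit for General, low bits for the format version).

```rust
/// Condensed, hypothetical model of the three kinds of extrinsic; the
/// real Substrate type is generic over address, call, signature and
/// extension types.
#[derive(Debug, PartialEq)]
enum ExtrinsicFormat {
    /// No hardcoded signature and no Extra data (inherents; formerly "Unsigned").
    Bare,
    /// Old-school transaction: hardcoded signature plus extension data.
    Signed,
    /// New-school transaction: extension data only; authorization is
    /// established by the extensions themselves.
    General,
}

// Assumed bit layout of the leading version byte.
const SIGNED_BIT: u8 = 0b1000_0000;
const GENERAL_BIT: u8 = 0b0100_0000;

/// Split a version byte into (format, format version number).
fn decode_version(byte: u8) -> Option<(ExtrinsicFormat, u8)> {
    let version = byte & 0b0011_1111;
    let format = match byte & (SIGNED_BIT | GENERAL_BIT) {
        0 => ExtrinsicFormat::Bare,
        b if b == SIGNED_BIT => ExtrinsicFormat::Signed,
        b if b == GENERAL_BIT => ExtrinsicFormat::General,
        // Both discriminator bits set at once is invalid.
        _ => return None,
    };
    Some((format, version))
}

fn main() {
    assert_eq!(decode_version(0b0000_0100), Some((ExtrinsicFormat::Bare, 4)));
    assert_eq!(decode_version(0b1000_0100), Some((ExtrinsicFormat::Signed, 4)));
    assert_eq!(decode_version(0b0100_0100), Some((ExtrinsicFormat::General, 4)));
    assert_eq!(decode_version(0b1100_0100), None);
}
```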
There is a new `DispatchTransaction` trait which contains only default function impls and is impl'ed for any `TransactionExtension` impler. It provides several utility functions which reduce some of the tedium of using `TransactionExtension` (indeed, none of its regular functions should now need to be called directly).

Three transaction version discriminators ("versions") are now permissible:
- `0b00000100`: Bare (used to be called "Unsigned"): contains neither Signature nor Extra (extension data). After bare transactions are no longer supported, this will strictly identify Inherents only.
- `0b10000100`: Old-school "Signed" Transaction: contains Signature and Extra (extension data).
- `0b01000100`: New-school "General" Transaction: contains Extra (extension data), but no Signature.

For the New-school General Transaction, it becomes trivial for authors to publish extensions to the mechanism for authorizing an Origin, e.g. through new kinds of key-signing schemes, ZK proofs, pallet state, mutations over pre-authenticated origins or any combination of the above.

## Code Migration

### NOW: Getting it to build

Wrap your `SignedExtension`s in `AsTransactionExtension`. This should be accompanied by renaming your aggregate type in line with the new terminology. E.g.

Before:
```rust
/// The SignedExtension to the basic transaction logic.
pub type SignedExtra = (
	/* snip */
	MySpecialSignedExtension,
);
/// Unchecked extrinsic type as expected by this runtime.
pub type UncheckedExtrinsic =
	generic::UncheckedExtrinsic<Address, RuntimeCall, Signature, SignedExtra>;
```

After:
```rust
/// The extension to the basic transaction logic.
pub type TxExtension = (
	/* snip */
	AsTransactionExtension<MySpecialSignedExtension>,
);
/// Unchecked extrinsic type as expected by this runtime.
pub type UncheckedExtrinsic =
	generic::UncheckedExtrinsic<Address, RuntimeCall, Signature, TxExtension>;
```

You'll also need to alter any transaction building logic to add a `.into()` to make the conversion happen. E.g.

Before:
```rust
fn construct_extrinsic(
	/* snip */
) -> UncheckedExtrinsic {
	let extra: SignedExtra = (
		/* snip */
		MySpecialSignedExtension::new(/* snip */),
	);
	let payload = SignedPayload::new(call.clone(), extra.clone()).unwrap();
	let signature = payload.using_encoded(|e| sender.sign(e));
	UncheckedExtrinsic::new_signed(
		/* snip */
		Signature::Sr25519(signature),
		extra,
	)
}
```

After:
```rust
fn construct_extrinsic(
	/* snip */
) -> UncheckedExtrinsic {
	let tx_ext: TxExtension = (
		/* snip */
		MySpecialSignedExtension::new(/* snip */).into(),
	);
	let payload = SignedPayload::new(call.clone(), tx_ext.clone()).unwrap();
	let signature = payload.using_encoded(|e| sender.sign(e));
	UncheckedExtrinsic::new_signed(
		/* snip */
		Signature::Sr25519(signature),
		tx_ext,
	)
}
```

### SOON: Migrating to `TransactionExtension`

Most `SignedExtension`s can be trivially converted to become a `TransactionExtension`. There are a few things to know.

- Instead of a single trait like `SignedExtension`, you should now implement two traits individually: `TransactionExtensionBase` and `TransactionExtension`.
- Weights are now a thing and must be provided via the new function `fn weight`.

#### `TransactionExtensionBase`

This trait takes care of anything which is not dependent on types specific to your runtime, most notably `Call`.

- `AdditionalSigned`/`additional_signed` is renamed to `Implicit`/`implicit`.
- Weight must be returned by implementing the `weight` function. If your extension is associated with a pallet, you'll probably want to do this via the pallet's existing benchmarking infrastructure.

#### `TransactionExtension`

Generally:
- `pre_dispatch` is now `prepare` and you *should not re-execute the `validate` functionality in there*!
- You don't get an account ID any more; you get an origin instead. If you need to presume an account ID, then you can use the trait function `AsSystemOriginSigner::as_system_origin_signer`.
- You get an additional ticket, similar to `Pre`, called `Val`. This defines data which is passed from `validate` into `prepare`. This is important: since you should not be duplicating logic from `validate` in `prepare`, you need a way of passing your working from the former into the latter. This is it.
- This trait takes two type parameters: `Call` and `Context`. `Call` is the runtime call type which used to be an associated type; you can just move it to become a type parameter for your trait impl. `Context` is not currently used and you can safely implement over it as an unbounded type.
- There's no `AccountId` associated type any more. Just remove it.

Regarding `validate`:
- You get three new parameters in `validate`; all can be ignored when migrating from `SignedExtension`.
- `validate` returns a tuple on success; the second item in the tuple is the new ticket type `Self::Val` which gets passed in to `prepare`. If you use any information extracted during `validate` (off-chain and on-chain, non-mutating) in `prepare` (on-chain, mutating) then you can pass it through with this. For the tuple's last item, just return the `origin` argument.

Regarding `prepare`:
- This is renamed from `pre_dispatch`, but there is one change:
  - FUNCTIONALITY TO VALIDATE THE TRANSACTION NEED NOT BE DUPLICATED FROM `validate`!
  - (This is different to `SignedExtension`, which was required to run the same checks in `pre_dispatch` as in `validate`.)

Regarding `post_dispatch`:
- Since there are no unsigned transactions handled by `TransactionExtension`, `Pre` is always defined, so the first parameter is `Self::Pre` rather than `Option<Self::Pre>`.
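The `validate` → `Val` → `prepare` flow above can be illustrated with a toy, self-contained model. This is a sketch, not the real `TransactionExtension` signature (which also carries the call, implicit data, context and more); a bare `u64` stands in for both the account ID and the nonce, and the "storage read" is faked.

```rust
// Toy model of the two-phase extension flow: `validate` computes a
// ticket (`Val`) which `prepare` consumes, so checks are never re-run.
trait TxExtension {
    /// Working data computed in `validate` (non-mutating) and handed
    /// to `prepare` (mutating) so the checks need not be duplicated.
    type Val;
    type Pre;

    fn validate(&self, origin: Option<u64>) -> Result<(Self::Val, Option<u64>), &'static str>;
    fn prepare(self, val: Self::Val) -> Self::Pre;
}

/// Example: a nonce check that looks the nonce up once, in `validate`,
/// and passes it into `prepare` instead of re-reading it there.
struct CheckNonce {
    expected: u64,
}

impl TxExtension for CheckNonce {
    type Val = u64; // the nonce we already verified
    type Pre = u64;

    fn validate(&self, origin: Option<u64>) -> Result<(u64, Option<u64>), &'static str> {
        // With no account origin we cannot check a nonce; a real
        // extension would decide whether that is acceptable.
        let who = origin.ok_or("no signer")?;
        let on_chain_nonce = who; // stand-in for a storage read
        if on_chain_nonce != self.expected {
            return Err("bad nonce");
        }
        Ok((on_chain_nonce, origin))
    }

    fn prepare(self, val: u64) -> u64 {
        // No duplicated check: we trust the ticket from `validate`.
        val + 1 // stand-in for "bump the stored nonce"
    }
}

fn main() {
    let ext = CheckNonce { expected: 7 };
    let (val, origin) = ext.validate(Some(7)).unwrap();
    assert_eq!((val, origin), (7, Some(7)));
    assert_eq!(ext.prepare(val), 8);
    assert!(CheckNonce { expected: 1 }.validate(None).is_err());
}
```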
If you make use of `SignedExtension::validate_unsigned` or `SignedExtension::pre_dispatch_unsigned`, then:
- Just use the regular versions of these functions instead.
- Have your logic execute in the case that the `origin` is `None`.
- Ensure your transaction creation logic creates a General Transaction rather than a Bare Transaction; this means having to include all `TransactionExtension`s' data.
- `ValidateUnsigned` can still be used (for now) if you need to be able to construct transactions which contain none of the extension data. However, these will be phased out in stage 2 of the Transactions Horizon, so you should consider moving to an extension-centric design.

## TODO

- [x] Introduce `CheckSignature` impl of `TransactionExtension` to ensure it's possible to have crypto be done wholly in a `TransactionExtension`.
- [x] Deprecate `SignedExtension` and move all uses in the codebase to `TransactionExtension`.
  - [x] `ChargeTransactionPayment`
  - [x] `DummyExtension`
  - [x] `ChargeAssetTxPayment` (asset-tx-payment)
  - [x] `ChargeAssetTxPayment` (asset-conversion-tx-payment)
  - [x] `CheckWeight`
  - [x] `CheckTxVersion`
  - [x] `CheckSpecVersion`
  - [x] `CheckNonce`
  - [x] `CheckNonZeroSender`
  - [x] `CheckMortality`
  - [x] `CheckGenesis`
  - [x] `CheckOnlySudoAccount`
  - [x] `WatchDummy`
  - [x] `PrevalidateAttests`
  - [x] `GenericSignedExtension`
  - [x] `SignedExtension` (chain-polkadot-bulletin)
  - [x] `RefundSignedExtensionAdapter`
- [x] Implement `fn weight` across the board.
- [ ] Go through all pre-existing extensions which assume an account signer and explicitly handle the possibility of another kind of origin.
  - [x] `CheckNonce` should probably succeed in the case of a non-account origin.
  - [x] `CheckNonZeroSender` should succeed in the case of a non-account origin.
  - [x] `ChargeTransactionPayment` and family should fail in the case of a non-account origin.
- [x] Fix any broken tests.
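The "execute when the `origin` is `None`" guidance above can be sketched with an illustrative stand-in for `AsSystemOriginSigner`. The `RuntimeOrigin` enum and the `u64` account ID here are hypothetical simplifications, not the actual Substrate types.

```rust
// Minimal stand-in for recovering the signing account from an origin:
// only a plain signed origin yields a signer; everything else
// (including the no-signer case formerly handled by
// `validate_unsigned`) yields `None`.
enum RuntimeOrigin {
    /// Ordinary account-signed origin.
    Signed(u64),
    /// No signer at all (what `validate_unsigned` logic migrates to).
    None,
    /// Any other kind of origin (root, collective, custom...).
    Other,
}

trait AsSystemOriginSigner {
    fn as_system_origin_signer(&self) -> Option<&u64>;
}

impl AsSystemOriginSigner for RuntimeOrigin {
    fn as_system_origin_signer(&self) -> Option<&u64> {
        match self {
            RuntimeOrigin::Signed(who) => Some(who),
            _ => None,
        }
    }
}

fn main() {
    assert_eq!(RuntimeOrigin::Signed(5).as_system_origin_signer(), Some(&5));
    // An extension's `validate` would branch on `None` here to run the
    // logic that used to live in `validate_unsigned`.
    assert_eq!(RuntimeOrigin::None.as_system_origin_signer(), None);
    assert_eq!(RuntimeOrigin::Other.as_system_origin_signer(), None);
}
```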
---------

Signed-off-by: georgepisaltu <george.pisaltu@parity.io>
Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>
Signed-off-by: dependabot[bot] <support@github.com>
Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
Signed-off-by: Alexandru Gheorghe <alexandru.gheorghe@parity.io>
Signed-off-by: Andrei Sandu <andrei-mihail@parity.io>
Co-authored-by: Nikhil Gupta <17176722+gupnik@users.noreply.github.com>
Co-authored-by: georgepisaltu <52418509+georgepisaltu@users.noreply.github.com>
Co-authored-by: Chevdor <chevdor@users.noreply.github.com>
Co-authored-by: Bastian Köcher <git@kchr.de>
Co-authored-by: Maciej <maciej.zyszkiewicz@parity.io>
Co-authored-by: Javier Viola <javier@parity.io>
Co-authored-by: Marcin S. <marcin@realemail.net>
Co-authored-by: Tsvetomir Dimitrov <tsvetomir@parity.io>
Co-authored-by: Javier Bullrich <javier@bullrich.dev>
Co-authored-by: Koute <koute@users.noreply.github.com>
Co-authored-by: Adrian Catangiu <adrian@parity.io>
Co-authored-by: Vladimir Istyufeev <vladimir@parity.io>
Co-authored-by: Ross Bulat <ross@parity.io>
Co-authored-by: Gonçalo Pestana <g6pestana@gmail.com>
Co-authored-by: Liam Aharon <liam.aharon@hotmail.com>
Co-authored-by: Svyatoslav Nikolsky <svyatonik@gmail.com>
Co-authored-by: André Silva <123550+andresilva@users.noreply.github.com>
Co-authored-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
Co-authored-by: s0me0ne-unkn0wn <48632512+s0me0ne-unkn0wn@users.noreply.github.com>
Co-authored-by: ordian <write@reusable.software>
Co-authored-by: Sebastian Kunert <skunert49@gmail.com>
Co-authored-by: Aaro Altonen <48052676+altonen@users.noreply.github.com>
Co-authored-by: Dmitry Markin <dmitry@markin.tech>
Co-authored-by: Alexandru Vasile <60601340+lexnv@users.noreply.github.com>
Co-authored-by: Alexander Samusev <41779041+alvicsam@users.noreply.github.com>
Co-authored-by: Julian Eager <eagr@tutanota.com>
Co-authored-by: Michal Kucharczyk <1728078+michalkucharczyk@users.noreply.github.com>
Co-authored-by: Davide Galassi <davxy@datawok.net>
Co-authored-by: Dónal Murray <donal.murray@parity.io>
Co-authored-by: yjh <yjh465402634@gmail.com>
Co-authored-by: Tom Mi <tommi@niemi.lol>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Will | Paradox | ParaNodes.io <79228812+paradox-tt@users.noreply.github.com>
Co-authored-by: Bastian Köcher <info@kchr.de>
Co-authored-by: Joshy Orndorff <JoshOrndorff@users.noreply.github.com>
Co-authored-by: Joshy Orndorff <git-user-email.h0ly5@simplelogin.com>
Co-authored-by: PG Herveou <pgherveou@gmail.com>
Co-authored-by: Alexander Theißen <alex.theissen@me.com>
Co-authored-by: Kian Paimani <5588131+kianenigma@users.noreply.github.com>
Co-authored-by: Juan Girini <juangirini@gmail.com>
Co-authored-by: bader y <ibnbassem@gmail.com>
Co-authored-by: James Wilson <james@jsdw.me>
Co-authored-by: joe petrowski <25483142+joepetrowski@users.noreply.github.com>
Co-authored-by: asynchronous rob <rphmeier@gmail.com>
Co-authored-by: Parth <desaiparth08@gmail.com>
Co-authored-by: Andrew Jones <ascjones@gmail.com>
Co-authored-by: Jonathan Udd <jonathan@dwellir.com>
Co-authored-by: Serban Iorga <serban@parity.io>
Co-authored-by: Egor_P <egor@parity.io>
Co-authored-by: Branislav Kontur <bkontur@gmail.com>
Co-authored-by: Evgeny Snitko <evgeny@parity.io>
Co-authored-by: Just van Stam <vstam1@users.noreply.github.com>
Co-authored-by: Francisco Aguirre <franciscoaguirreperez@gmail.com>
Co-authored-by: gupnik <nikhilgupta.iitk@gmail.com>
Co-authored-by: dzmitry-lahoda <dzmitry@lahoda.pro>
Co-authored-by: zhiqiangxu <652732310@qq.com>
Co-authored-by: Nazar Mokrynskyi <nazar@mokrynskyi.com>
Co-authored-by: Anwesh <anweshknayak@gmail.com>
Co-authored-by: cheme <emericchevalier.pro@gmail.com>
Co-authored-by: Sam Johnson <sam@durosoft.com>
Co-authored-by: kianenigma <kian@parity.io>
Co-authored-by: Jegor Sidorenko <5252494+jsidorenko@users.noreply.github.com>
Co-authored-by: Muharem <ismailov.m.h@gmail.com>
Co-authored-by: joepetrowski <joe@parity.io>
Co-authored-by: Alexandru Gheorghe <49718502+alexggh@users.noreply.github.com>
Co-authored-by: Gabriel Facco de Arruda <arrudagates@gmail.com>
Co-authored-by: Squirrel <gilescope@gmail.com>
Co-authored-by: Andrei Sandu <54316454+sandreim@users.noreply.github.com>
Co-authored-by: georgepisaltu <george.pisaltu@parity.io>
Co-authored-by: command-bot <>
313 lines
8.2 KiB
Rust
// This file is part of Substrate.

// Copyright (C) Parity Technologies (UK) Ltd.
// SPDX-License-Identifier: GPL-3.0-or-later WITH Classpath-exception-2.0

// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.

// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.

// You should have received a copy of the GNU General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
use criterion::{criterion_group, criterion_main, BatchSize, Criterion};
use rand::{distributions::Uniform, rngs::StdRng, Rng, SeedableRng};
use sc_client_api::{Backend as _, BlockImportOperation, NewBlockState, StateBackend};
use sc_client_db::{Backend, BlocksPruning, DatabaseSettings, DatabaseSource, PruningMode};
use sp_core::H256;
use sp_runtime::{
	generic::UncheckedExtrinsic,
	testing::{Block as RawBlock, Header, MockCallU64},
	StateVersion, Storage,
};
use tempfile::TempDir;

pub(crate) type Block = RawBlock<UncheckedExtrinsic<u64, MockCallU64, (), ()>>;
fn insert_blocks(db: &Backend<Block>, storage: Vec<(Vec<u8>, Vec<u8>)>) -> H256 {
	let mut op = db.begin_operation().unwrap();
	let mut header = Header {
		number: 0,
		parent_hash: Default::default(),
		state_root: Default::default(),
		digest: Default::default(),
		extrinsics_root: Default::default(),
	};

	header.state_root = op
		.set_genesis_state(
			Storage {
				top: vec![(
					sp_core::storage::well_known_keys::CODE.to_vec(),
					kitchensink_runtime::wasm_binary_unwrap().to_vec(),
				)]
				.into_iter()
				.collect(),
				children_default: Default::default(),
			},
			true,
			StateVersion::V1,
		)
		.unwrap();

	op.set_block_data(header.clone(), Some(vec![]), None, None, NewBlockState::Best)
		.unwrap();

	db.commit_operation(op).unwrap();

	let mut number = 1;
	let mut parent_hash = header.hash();

	for i in 0..10 {
		let mut op = db.begin_operation().unwrap();

		db.begin_state_operation(&mut op, parent_hash).unwrap();

		let mut header = Header {
			number,
			parent_hash,
			state_root: Default::default(),
			digest: Default::default(),
			extrinsics_root: Default::default(),
		};

		let changes = storage
			.iter()
			.skip(i * 100_000)
			.take(100_000)
			.map(|(k, v)| (k.clone(), Some(v.clone())))
			.collect::<Vec<_>>();

		let (state_root, tx) = db.state_at(parent_hash).unwrap().storage_root(
			changes.iter().map(|(k, v)| (k.as_slice(), v.as_deref())),
			StateVersion::V1,
		);
		header.state_root = state_root;

		op.update_db_storage(tx).unwrap();
		op.update_storage(changes.clone(), Default::default()).unwrap();

		op.set_block_data(header.clone(), Some(vec![]), None, None, NewBlockState::Best)
			.unwrap();

		db.commit_operation(op).unwrap();

		number += 1;
		parent_hash = header.hash();
	}

	parent_hash
}
enum BenchmarkConfig {
	NoCache,
	TrieNodeCache,
}
fn create_backend(config: BenchmarkConfig, temp_dir: &TempDir) -> Backend<Block> {
	let path = temp_dir.path().to_owned();

	let trie_cache_maximum_size = match config {
		BenchmarkConfig::NoCache => None,
		BenchmarkConfig::TrieNodeCache => Some(2 * 1024 * 1024 * 1024),
	};

	let settings = DatabaseSettings {
		trie_cache_maximum_size,
		state_pruning: Some(PruningMode::ArchiveAll),
		source: DatabaseSource::ParityDb { path },
		blocks_pruning: BlocksPruning::KeepAll,
	};

	Backend::new(settings, 100).expect("Creates backend")
}
/// Generate the storage that will be used for the benchmark
///
/// Returns the `Vec<key>` and the `Vec<(key, value)>`
fn generate_storage() -> (Vec<Vec<u8>>, Vec<(Vec<u8>, Vec<u8>)>) {
	let mut rng = StdRng::seed_from_u64(353893213);

	let mut storage = Vec::new();
	let mut keys = Vec::new();

	for _ in 0..1_000_000 {
		let key_len: usize = rng.gen_range(32..128);
		let key = (&mut rng)
			.sample_iter(Uniform::new_inclusive(0, 255))
			.take(key_len)
			.collect::<Vec<u8>>();

		let value_len: usize = rng.gen_range(20..60);
		let value = (&mut rng)
			.sample_iter(Uniform::new_inclusive(0, 255))
			.take(value_len)
			.collect::<Vec<u8>>();

		keys.push(key.clone());
		storage.push((key, value));
	}

	(keys, storage)
}
fn state_access_benchmarks(c: &mut Criterion) {
	sp_tracing::try_init_simple();

	let (keys, storage) = generate_storage();
	let path = TempDir::new().expect("Creates temporary directory");

	let block_hash = {
		let backend = create_backend(BenchmarkConfig::NoCache, &path);
		insert_blocks(&backend, storage.clone())
	};

	let mut group = c.benchmark_group("Reading entire state");
	group.sample_size(20);

	let mut bench_multiple_values = |config, desc, multiplier| {
		let backend = create_backend(config, &path);

		group.bench_function(desc, |b| {
			b.iter_batched(
				|| backend.state_at(block_hash).expect("Creates state"),
				|state| {
					for key in keys.iter().cycle().take(keys.len() * multiplier) {
						let _ = state.storage(&key).expect("Doesn't fail").unwrap();
					}
				},
				BatchSize::SmallInput,
			)
		});
	};

	bench_multiple_values(
		BenchmarkConfig::TrieNodeCache,
		"with trie node cache and reading each key once",
		1,
	);
	bench_multiple_values(BenchmarkConfig::NoCache, "no cache and reading each key once", 1);

	bench_multiple_values(
		BenchmarkConfig::TrieNodeCache,
		"with trie node cache and reading 4 times each key in a row",
		4,
	);
	bench_multiple_values(
		BenchmarkConfig::NoCache,
		"no cache and reading 4 times each key in a row",
		4,
	);

	group.finish();

	let mut group = c.benchmark_group("Reading a single value");

	let mut bench_single_value = |config, desc, multiplier| {
		let backend = create_backend(config, &path);

		group.bench_function(desc, |b| {
			b.iter_batched(
				|| backend.state_at(block_hash).expect("Creates state"),
				|state| {
					for key in keys.iter().take(1).cycle().take(multiplier) {
						let _ = state.storage(&key).expect("Doesn't fail").unwrap();
					}
				},
				BatchSize::SmallInput,
			)
		});
	};

	bench_single_value(
		BenchmarkConfig::TrieNodeCache,
		"with trie node cache and reading the key once",
		1,
	);
	bench_single_value(BenchmarkConfig::NoCache, "no cache and reading the key once", 1);

	bench_single_value(
		BenchmarkConfig::TrieNodeCache,
		"with trie node cache and reading 4 times each key in a row",
		4,
	);
	bench_single_value(
		BenchmarkConfig::NoCache,
		"no cache and reading 4 times each key in a row",
		4,
	);

	group.finish();

	let mut group = c.benchmark_group("Hashing a value");

	let mut bench_single_value = |config, desc, multiplier| {
		let backend = create_backend(config, &path);

		group.bench_function(desc, |b| {
			b.iter_batched(
				|| backend.state_at(block_hash).expect("Creates state"),
				|state| {
					for key in keys.iter().take(1).cycle().take(multiplier) {
						let _ = state.storage_hash(&key).expect("Doesn't fail").unwrap();
					}
				},
				BatchSize::SmallInput,
			)
		});
	};

	bench_single_value(
		BenchmarkConfig::TrieNodeCache,
		"with trie node cache and hashing the key once",
		1,
	);
	bench_single_value(BenchmarkConfig::NoCache, "no cache and hashing the key once", 1);

	bench_single_value(
		BenchmarkConfig::TrieNodeCache,
		"with trie node cache and hashing 4 times each key in a row",
		4,
	);
	bench_single_value(
		BenchmarkConfig::NoCache,
		"no cache and hashing 4 times each key in a row",
		4,
	);

	group.finish();

	let mut group = c.benchmark_group("Hashing `:code`");

	let mut bench_single_value = |config, desc| {
		let backend = create_backend(config, &path);

		group.bench_function(desc, |b| {
			b.iter_batched(
				|| backend.state_at(block_hash).expect("Creates state"),
				|state| {
					let _ = state
						.storage_hash(sp_core::storage::well_known_keys::CODE)
						.expect("Doesn't fail")
						.unwrap();
				},
				BatchSize::SmallInput,
			)
		});
	};

	bench_single_value(BenchmarkConfig::TrieNodeCache, "with trie node cache");
	bench_single_value(BenchmarkConfig::NoCache, "no cache");

	group.finish();
}

criterion_group!(benches, state_access_benchmarks);
criterion_main!(benches);