feat: initialize Kurdistan SDK - independent fork of Polkadot SDK

This commit is contained in:
2025-12-13 15:44:15 +03:00
commit e4778b4576
6838 changed files with 1847450 additions and 0 deletions
@@ -0,0 +1,111 @@
# The `benchmark storage` command
The cost of storage operations in a Substrate chain depends on the current chain state.
It is therefore important to regularly update these weights as the chain grows.
This sub-command measures the cost of storage operations for a concrete snapshot.
For the Substrate node it looks like this (when debugging, you can use `--release` for a faster build):
```sh
cargo run --profile=production -- benchmark storage --dev --state-version=1
```
Running the command on Substrate itself is not very meaningful, since the genesis state of the `--dev` chain spec is
used.
The output for the PezkuwiChain client with a recent chain snapshot will give you a better impression. A recent snapshot can
be downloaded from [PezkuwiChain Snapshots].
Then run (remove the `--db=paritydb` if you have a RocksDB snapshot):
```sh
cargo run --profile=production -- benchmark storage --dev --state-version=0 --db=paritydb --weight-path runtime/pezkuwi/constants/src/weights
```
This takes a while, since it reads and writes all keys from the snapshot:
```pre
# The 'read' benchmark
Preparing keys from block BlockId::Number(9939462)
Reading 1379083 keys
Time summary [ns]:
Total: 19668919930
Min: 6450, Max: 1217259
Average: 14262, Median: 14190, Stddev: 3035.79
Percentiles 99th, 95th, 75th: 18270, 16190, 14819
Value size summary:
Total: 265702275
Min: 1, Max: 1381859
Average: 192, Median: 80, Stddev: 3427.53
Percentiles 99th, 95th, 75th: 3368, 383, 80
# The 'write' benchmark
Preparing keys from block BlockId::Number(9939462)
Writing 1379083 keys
Time summary [ns]:
Total: 98393809781
Min: 12969, Max: 13282577
Average: 71347, Median: 69499, Stddev: 25145.27
Percentiles 99th, 95th, 75th: 135839, 106129, 79239
Value size summary:
Total: 265702275
Min: 1, Max: 1381859
Average: 192, Median: 80, Stddev: 3427.53
Percentiles 99th, 95th, 75th: 3368, 383, 80
Writing weights to "paritydb_weights.rs"
```
You will see that the [paritydb_weights.rs] file was modified and now contains new weights. The exact command for
PezkuwiChain can be seen at the top of the file.
This uses the most recent block from your snapshot, which is printed at the top.
The value size summary tells us that the pruned PezkuwiChain chain state is ~253 MiB in size.
Reading a value takes on average (in this example) 14.3 µs and writing one takes 71.3 µs.
The interesting part in the generated weight file tells us the weight constants and some statistics about the
measurements:
```rust
/// Time to read one storage item.
/// Calculated by multiplying the *Average* of all values with `1.1` and adding `0`.
///
/// Stats [NS]:
/// Min, Max: 6_450, 1_217_259
/// Average: 14_262
/// Median: 14_190
/// Std-Dev: 3035.79
///
/// Percentiles [NS]:
/// 99th: 18_270
/// 95th: 16_190
/// 75th: 14_819
read: 14_262 * constants::WEIGHT_REF_TIME_PER_NANOS,
/// Time to write one storage item.
/// Calculated by multiplying the *Average* of all values with `1.1` and adding `0`.
///
/// Stats [NS]:
/// Min, Max: 12_969, 13_282_577
/// Average: 71_347
/// Median: 69_499
/// Std-Dev: 25145.27
///
/// Percentiles [NS]:
/// 99th: 135_839
/// 95th: 106_129
/// 75th: 79_239
write: 71_347 * constants::WEIGHT_REF_TIME_PER_NANOS,
```
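These constants are consumed by the runtime through `RuntimeDbWeight`, which pallets use to charge for their
storage accesses. A minimal sketch of the consuming side (the function and its access counts are illustrative,
not part of the generated file):
```rust
use sp_weights::{RuntimeDbWeight, Weight};

/// Hypothetical weight of an extrinsic that touches two accounts:
/// two storage reads plus two storage writes.
fn transfer_db_weight(db: RuntimeDbWeight) -> Weight {
    db.reads_writes(2, 2)
}
```
In a runtime, the generated constant (here `ParityDbWeight`) is typically wired into `frame_system::Config` as
the `DbWeight` associated type.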
## Arguments
- `--db` Specify which database backend to use. This greatly influences the results.
- `--state-version` Set the version of the state encoding that this snapshot uses. Should be set to `1` for Substrate
`--dev` and `0` for PezkuwiChain et al. Using the wrong version can corrupt the snapshot.
- [`--mul`](../shared/README.md#arguments)
- [`--add`](../shared/README.md#arguments)
- [`--metric`](../shared/README.md#arguments)
- [`--weight-path`](../shared/README.md#arguments)
- `--json-read-path` Write the raw 'read' results to this file or directory.
- `--json-write-path` Write the raw 'write' results to this file or directory.
- [`--header`](../shared/README.md#arguments)
License: Apache-2.0
<!-- LINKS -->
[PezkuwiChain Snapshots]: https://snapshots.polkadot.io
[paritydb_weights.rs]:
https://github.com/paritytech/polkadot/blob/c254e5975711a6497af256f6831e9a6c752d28f5/runtime/polkadot/constants/src/weights/paritydb_weights.rs#L60
@@ -0,0 +1,310 @@
// This file is part of Substrate.
// Copyright (C) Parity Technologies (UK) Ltd.
// SPDX-License-Identifier: Apache-2.0
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use sc_cli::{CliConfiguration, DatabaseParams, PruningParams, Result, SharedParams};
use sc_client_api::{Backend as ClientBackend, StorageProvider, UsageProvider};
use sc_client_db::DbHash;
use sc_service::Configuration;
use sp_api::CallApiAt;
use sp_blockchain::HeaderBackend;
use sp_database::{ColumnId, Database};
use sp_runtime::traits::{Block as BlockT, HashingFor};
use sp_state_machine::Storage;
use sp_storage::{ChildInfo, ChildType, PrefixedStorageKey, StateVersion};
use clap::{Args, Parser, ValueEnum};
use log::info;
use rand::prelude::*;
use serde::Serialize;
use sp_runtime::generic::BlockId;
use std::{fmt::Debug, path::PathBuf, sync::Arc};
use super::template::TemplateData;
use crate::shared::{new_rng, HostInfoParams, WeightParams};
/// The mode in which to run the storage benchmark.
#[derive(Default, Debug, Clone, Copy, PartialEq, Eq, Serialize, ValueEnum)]
pub enum StorageBenchmarkMode {
/// Run the benchmark for block import.
#[default]
ImportBlock,
/// Run the benchmark for block validation.
ValidateBlock,
}
/// Benchmark the storage speed of a chain snapshot.
#[derive(Debug, Parser)]
pub struct StorageCmd {
#[allow(missing_docs)]
#[clap(flatten)]
pub shared_params: SharedParams,
#[allow(missing_docs)]
#[clap(flatten)]
pub database_params: DatabaseParams,
#[allow(missing_docs)]
#[clap(flatten)]
pub pruning_params: PruningParams,
#[allow(missing_docs)]
#[clap(flatten)]
pub params: StorageParams,
}
/// Parameters for modifying the benchmark behaviour and the post processing of the results.
#[derive(Debug, Default, Serialize, Clone, PartialEq, Args)]
pub struct StorageParams {
#[allow(missing_docs)]
#[clap(flatten)]
pub weight_params: WeightParams,
#[allow(missing_docs)]
#[clap(flatten)]
pub hostinfo: HostInfoParams,
/// Skip the `read` benchmark.
#[arg(long)]
pub skip_read: bool,
/// Skip the `write` benchmark.
#[arg(long)]
pub skip_write: bool,
/// Specify the Handlebars template to use for outputting benchmark results.
#[arg(long)]
pub template_path: Option<PathBuf>,
/// Add a header to the generated weight output file.
///
/// Good for adding LICENSE headers.
#[arg(long, value_name = "PATH")]
pub header: Option<PathBuf>,
/// Path to write the raw 'read' results in JSON format to. Can be a file or directory.
#[arg(long)]
pub json_read_path: Option<PathBuf>,
/// Path to write the raw 'write' results in JSON format to. Can be a file or directory.
#[arg(long)]
pub json_write_path: Option<PathBuf>,
/// Rounds of warmups before measuring.
#[arg(long, default_value_t = 1)]
pub warmups: u32,
/// The `StateVersion` to use. Substrate `--dev` should use `V1` and Pezkuwi `V0`.
/// Selecting the wrong version can corrupt the DB.
#[arg(long, value_parser = clap::value_parser!(u8).range(0..=1))]
pub state_version: u8,
/// Trie cache size in bytes.
///
/// Providing `0` will disable the cache.
#[arg(long, value_name = "Bytes", default_value_t = 67108864)]
pub trie_cache_size: usize,
/// Enable the Trie cache.
///
/// This should only be used for performance analysis and not for final results.
#[arg(long)]
pub enable_trie_cache: bool,
/// Include child trees in benchmark.
#[arg(long)]
pub include_child_trees: bool,
/// Disable PoV recorder.
///
/// The recorder has an impact on performance when benchmarking with the TrieCache enabled.
/// If the chain records a proof while building/importing a block, the PoV recorder
/// should be activated.
///
/// Hence, when generating weights for a teyrchain the recorder should be activated, and when
/// generating weights for a standalone chain it should be deactivated.
#[arg(long, default_value = "false")]
pub disable_pov_recorder: bool,
/// The batch size for the read/write benchmark.
///
/// Since the write size needs to also include the cost of computing the storage root, which is
/// done once at the end of the block, the batch size is used to simulate multiple writes in a
/// block.
#[arg(long, default_value_t = 100_000)]
pub batch_size: usize,
/// The mode in which to run the storage benchmark.
///
/// PoV recorder must be activated to provide a storage proof for block validation at runtime.
#[arg(long, value_enum, default_value_t = StorageBenchmarkMode::ImportBlock)]
pub mode: StorageBenchmarkMode,
/// Number of rounds to execute block validation during the benchmark.
///
/// We need to run the benchmark several times to avoid fluctuations during runtime setup.
/// This is only used when `mode` is `validate-block`.
#[arg(long, default_value_t = 20)]
pub validate_block_rounds: u32,
}
impl StorageParams {
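/// Returns `true` when the benchmark runs in block-import mode.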
pub fn is_import_block_mode(&self) -> bool {
matches!(self.mode, StorageBenchmarkMode::ImportBlock)
}
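/// Returns `true` when the benchmark runs in block-validation mode.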
pub fn is_validate_block_mode(&self) -> bool {
matches!(self.mode, StorageBenchmarkMode::ValidateBlock)
}
}
impl StorageCmd {
/// Calls into the Read and Write benchmarking functions.
/// Processes the output and writes it into files and stdout.
pub fn run<Block, BA, C>(
&self,
cfg: Configuration,
client: Arc<C>,
db: (Arc<dyn Database<DbHash>>, ColumnId),
storage: Arc<dyn Storage<HashingFor<Block>>>,
shared_trie_cache: Option<sp_trie::cache::SharedTrieCache<HashingFor<Block>>>,
) -> Result<()>
where
BA: ClientBackend<Block>,
Block: BlockT<Hash = DbHash>,
C: UsageProvider<Block>
+ StorageProvider<Block, BA>
+ HeaderBackend<Block>
+ CallApiAt<Block>,
{
let mut template = TemplateData::new(&cfg, &self.params)?;
let block_id = BlockId::<Block>::Number(client.usage_info().chain.best_number);
template.set_block_number(block_id.to_string());
if !self.params.skip_read {
self.bench_warmup(&client)?;
let record = self.bench_read(client.clone(), shared_trie_cache.clone())?;
if let Some(path) = &self.params.json_read_path {
record.save_json(&cfg, path, "read")?;
}
let stats = record.calculate_stats()?;
info!("Time summary [ns]:\n{:?}\nValue size summary:\n{:?}", stats.0, stats.1);
template.set_stats(Some(stats), None)?;
}
if !self.params.skip_write {
self.bench_warmup(&client)?;
let record = self.bench_write(client, db, storage, shared_trie_cache)?;
if let Some(path) = &self.params.json_write_path {
record.save_json(&cfg, path, "write")?;
}
let stats = record.calculate_stats()?;
info!("Time summary [ns]:\n{:?}\nValue size summary:\n{:?}", stats.0, stats.1);
template.set_stats(None, Some(stats))?;
}
template.write(&self.params.weight_params.weight_path, &self.params.template_path)
}
/// Returns the specified state version.
pub(crate) fn state_version(&self) -> StateVersion {
match self.params.state_version {
0 => StateVersion::V0,
1 => StateVersion::V1,
_ => unreachable!("Clap set to only allow 0 and 1"),
}
}
/// Returns `Some` if the key is a child-trie key and `None` if it is a regular key.
pub(crate) fn is_child_key(&self, key: Vec<u8>) -> Option<ChildInfo> {
if let Some((ChildType::ParentKeyId, storage_key)) =
ChildType::from_prefixed_key(&PrefixedStorageKey::new(key))
{
return Some(ChildInfo::new_default(storage_key));
}
None
}
/// Run some rounds of the (read) benchmark as warmup.
/// See `frame_benchmarking_cli::storage::read::bench_read` for detailed comments.
fn bench_warmup<B, BA, C>(&self, client: &Arc<C>) -> Result<()>
where
C: UsageProvider<B> + StorageProvider<B, BA>,
B: BlockT + Debug,
BA: ClientBackend<B>,
{
let hash = client.usage_info().chain.best_hash;
let mut keys: Vec<_> = client.storage_keys(hash, None, None)?.collect();
let (mut rng, _) = new_rng(None);
keys.shuffle(&mut rng);
for i in 0..self.params.warmups {
info!("Warmup round {}/{}", i + 1, self.params.warmups);
let mut child_nodes = Vec::new();
for key in keys.as_slice() {
let _ = client
.storage(hash, &key)
.expect("Checked above to exist")
.ok_or("Value unexpectedly empty");
if let Some(info) = self
.params
.include_child_trees
.then(|| self.is_child_key(key.clone().0))
.flatten()
{
// child tree key
for ck in client.child_storage_keys(hash, info.clone(), None, None)? {
child_nodes.push((ck.clone(), info.clone()));
}
}
}
for (key, info) in child_nodes.as_slice() {
client
.child_storage(hash, info, key)
.expect("Checked above to exist")
.ok_or("Value unexpectedly empty")?;
}
}
Ok(())
}
}
// Boilerplate
impl CliConfiguration for StorageCmd {
fn shared_params(&self) -> &SharedParams {
&self.shared_params
}
fn database_params(&self) -> Option<&DatabaseParams> {
Some(&self.database_params)
}
fn pruning_params(&self) -> Option<&PruningParams> {
Some(&self.pruning_params)
}
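/// Returns the trie cache size to use, or `None` when the cache is disabled.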
fn trie_cache_maximum_size(&self) -> Result<Option<usize>> {
if self.params.enable_trie_cache && self.params.trie_cache_size > 0 {
Ok(Some(self.params.trie_cache_size))
} else {
Ok(None)
}
}
}
@@ -0,0 +1,57 @@
// This file is part of Substrate.
// Copyright (C) Parity Technologies (UK) Ltd.
// SPDX-License-Identifier: Apache-2.0
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
pub mod cmd;
pub mod read;
pub mod template;
pub mod write;
pub use cmd::StorageCmd;
/// Empirically, the maximum batch size for block validation should be no more than 10,000.
/// Bigger sizes may cause problems with runtime memory allocation.
pub(crate) const MAX_BATCH_SIZE_FOR_BLOCK_VALIDATION: usize = 10_000;
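/// Instantiates the storage-access test runtime as a wasmtime module.
///
/// The benchmarks call its `validate_block` export to measure storage access costs in the
/// block-validation context.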
pub(crate) fn get_wasm_module() -> Box<dyn sc_executor_common::wasm_runtime::WasmModule> {
let blob = sc_executor_common::runtime_blob::RuntimeBlob::uncompress_if_needed(
frame_storage_access_test_runtime::WASM_BINARY
.expect("You need to build the WASM binaries to run the benchmark!"),
)
.expect("Failed to create runtime blob");
let config = sc_executor_wasmtime::Config {
allow_missing_func_imports: true,
cache_path: None,
semantics: sc_executor_wasmtime::Semantics {
heap_alloc_strategy: sc_executor_common::wasm_runtime::HeapAllocStrategy::Dynamic {
maximum_pages: Some(4096),
},
instantiation_strategy: sc_executor::WasmtimeInstantiationStrategy::PoolingCopyOnWrite,
deterministic_stack_limit: None,
canonicalize_nans: false,
parallel_compilation: false,
wasm_multi_value: false,
wasm_bulk_memory: false,
wasm_reference_types: false,
wasm_simd: false,
},
};
Box::new(
sc_executor_wasmtime::create_runtime::<sp_io::SubstrateHostFunctions>(blob, config)
.expect("Unable to create wasm module."),
)
}
@@ -0,0 +1,263 @@
// This file is part of Substrate.
// Copyright (C) Parity Technologies (UK) Ltd.
// SPDX-License-Identifier: Apache-2.0
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use codec::Encode;
use frame_storage_access_test_runtime::StorageAccessParams;
use log::{debug, info};
use rand::prelude::*;
use sc_cli::{Error, Result};
use sc_client_api::{Backend as ClientBackend, StorageProvider, UsageProvider};
use sp_api::CallApiAt;
use sp_runtime::traits::{Block as BlockT, HashingFor, Header as HeaderT};
use sp_state_machine::{backend::AsTrieBackend, Backend};
use sp_storage::ChildInfo;
use sp_trie::StorageProof;
use std::{fmt::Debug, sync::Arc, time::Instant};
use super::{cmd::StorageCmd, get_wasm_module, MAX_BATCH_SIZE_FOR_BLOCK_VALIDATION};
use crate::shared::{new_rng, BenchRecord};
impl StorageCmd {
/// Benchmarks the time it takes to read a single Storage item.
/// Uses the latest state that is available for the given client.
pub(crate) fn bench_read<B, BA, C>(
&self,
client: Arc<C>,
_shared_trie_cache: Option<sp_trie::cache::SharedTrieCache<HashingFor<B>>>,
) -> Result<BenchRecord>
where
C: UsageProvider<B> + StorageProvider<B, BA> + CallApiAt<B>,
B: BlockT + Debug,
BA: ClientBackend<B>,
<<B as BlockT>::Header as HeaderT>::Number: From<u32>,
{
if self.params.is_validate_block_mode() && self.params.disable_pov_recorder {
return Err("PoV recorder must be activated to provide a storage proof for block validation at runtime. Remove `--disable-pov-recorder` from the command line.".into());
}
if self.params.is_validate_block_mode() &&
self.params.batch_size > MAX_BATCH_SIZE_FOR_BLOCK_VALIDATION
{
return Err(format!("Batch size is too large. This may cause problems with runtime memory allocation. Please set `--batch-size {}` or less.", MAX_BATCH_SIZE_FOR_BLOCK_VALIDATION).into());
}
let mut record = BenchRecord::default();
let best_hash = client.usage_info().chain.best_hash;
info!("Preparing keys from block {}", best_hash);
// Load all keys and randomly shuffle them.
let mut keys: Vec<_> = client.storage_keys(best_hash, None, None)?.collect();
let (mut rng, _) = new_rng(None);
keys.shuffle(&mut rng);
if keys.is_empty() {
return Err("Can't process benchmarking with empty storage".into());
}
let mut child_nodes = Vec::new();
// Interesting part here:
// Read all the keys in the database and measure the time it takes to access each.
info!("Reading {} keys", keys.len());
// Read using the same TrieBackend and recorder for up to `batch_size` keys.
// This lets us measure the amortized cost of reading a key.
let state = client
.state_at(best_hash)
.map_err(|_err| Error::Input("State not found".into()))?;
// We reassign the backend and recorder for every batch.
// Using a new recorder for every read vs using the same one for the entire batch
// produces significantly different results. Since in the real use case we use a
// single recorder per block, we simulate the same behavior by creating a new
// recorder for every batch, so that the amortized cost of reading a key is
// measured under conditions closer to the real world.
let (mut backend, mut recorder) = self.create_backend::<B, C>(&state);
let mut read_in_batch = 0;
let mut on_validation_batch = vec![];
let mut on_validation_size = 0;
let last_key = keys.last().expect("Checked above to be non-empty");
for key in keys.as_slice() {
match (self.params.include_child_trees, self.is_child_key(key.clone().0)) {
(true, Some(info)) => {
// child tree key
for ck in client.child_storage_keys(best_hash, info.clone(), None, None)? {
child_nodes.push((ck, info.clone()));
}
},
_ => {
// regular key
on_validation_batch.push((key.0.clone(), None));
let start = Instant::now();
let v = backend
.storage(key.0.as_ref())
.expect("Checked above to exist")
.ok_or("Value unexpectedly empty")?;
on_validation_size += v.len();
if self.params.is_import_block_mode() {
record.append(v.len(), start.elapsed())?;
}
},
}
read_in_batch += 1;
let is_batch_full = read_in_batch >= self.params.batch_size || key == last_key;
// Read keys on block validation
if is_batch_full && self.params.is_validate_block_mode() {
let root = backend.root();
let storage_proof = recorder
.clone()
.map(|r| r.drain_storage_proof())
.expect("Storage proof must exist for block validation");
let elapsed = measure_block_validation::<B>(
*root,
storage_proof,
on_validation_batch.clone(),
self.params.validate_block_rounds,
);
record.append(on_validation_size / on_validation_batch.len(), elapsed)?;
on_validation_batch = vec![];
on_validation_size = 0;
}
// Reload recorder
if is_batch_full {
(backend, recorder) = self.create_backend::<B, C>(&state);
read_in_batch = 0;
}
}
if self.params.include_child_trees && !child_nodes.is_empty() {
child_nodes.shuffle(&mut rng);
info!("Reading {} child keys", child_nodes.len());
let (last_child_key, last_child_info) =
child_nodes.last().expect("Checked above to be non-empty");
for (key, info) in child_nodes.as_slice() {
on_validation_batch.push((key.0.clone(), Some(info.clone())));
let start = Instant::now();
let v = backend
.child_storage(info, key.0.as_ref())
.expect("Checked above to exist")
.ok_or("Value unexpectedly empty")?;
on_validation_size += v.len();
if self.params.is_import_block_mode() {
record.append(v.len(), start.elapsed())?;
}
read_in_batch += 1;
let is_batch_full = read_in_batch >= self.params.batch_size ||
(last_child_key == key && last_child_info == info);
// Read child keys on block validation
if is_batch_full && self.params.is_validate_block_mode() {
let root = backend.root();
let storage_proof = recorder
.clone()
.map(|r| r.drain_storage_proof())
.expect("Storage proof must exist for block validation");
let elapsed = measure_block_validation::<B>(
*root,
storage_proof,
on_validation_batch.clone(),
self.params.validate_block_rounds,
);
record.append(on_validation_size / on_validation_batch.len(), elapsed)?;
on_validation_batch = vec![];
on_validation_size = 0;
}
// Reload recorder
if is_batch_full {
(backend, recorder) = self.create_backend::<B, C>(&state);
read_in_batch = 0;
}
}
}
Ok(record)
}
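/// Wraps the given state in a fresh `TrieBackend` with an optional PoV recorder, mirroring
/// how a single recorder is used per block in the real world.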
fn create_backend<'a, B, C>(
&self,
state: &'a C::StateBackend,
) -> (
sp_state_machine::TrieBackend<
&'a <C::StateBackend as AsTrieBackend<HashingFor<B>>>::TrieBackendStorage,
HashingFor<B>,
&'a sp_trie::cache::LocalTrieCache<HashingFor<B>>,
>,
Option<sp_trie::recorder::Recorder<HashingFor<B>>>,
)
where
C: CallApiAt<B>,
B: BlockT + Debug,
{
let recorder = (!self.params.disable_pov_recorder).then(|| Default::default());
let backend = sp_state_machine::TrieBackendBuilder::wrap(state.as_trie_backend())
.with_optional_recorder(recorder.clone())
.build();
(backend, recorder)
}
}
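/// Calls `validate_block` in the wasm instance for the given batch over several rounds and
/// returns the average time per key, with the dry-run time (no storage access) subtracted
/// to isolate the storage cost.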
fn measure_block_validation<B: BlockT + Debug>(
root: B::Hash,
storage_proof: StorageProof,
on_validation_batch: Vec<(Vec<u8>, Option<ChildInfo>)>,
rounds: u32,
) -> std::time::Duration {
debug!(
"POV: len {:?} {:?}",
storage_proof.len(),
storage_proof.clone().encoded_compact_size::<HashingFor<B>>(root)
);
let batch_size = on_validation_batch.len();
let wasm_module = get_wasm_module();
let mut instance = wasm_module.new_instance().expect("Failed to create wasm instance");
let params = StorageAccessParams::<B>::new_read(root, storage_proof, on_validation_batch);
let dry_run_encoded = params.as_dry_run().encode();
let encoded = params.encode();
let mut durations_in_nanos = Vec::new();
for i in 1..=rounds {
info!("validate_block with {} keys, round {}/{}", batch_size, i, rounds);
// Dry run to get the time it takes without storage access
let dry_run_start = Instant::now();
instance
.call_export("validate_block", &dry_run_encoded)
.expect("Failed to call validate_block");
let dry_run_elapsed = dry_run_start.elapsed();
debug!("validate_block dry-run time {:?}", dry_run_elapsed);
let start = Instant::now();
instance
.call_export("validate_block", &encoded)
.expect("Failed to call validate_block");
let elapsed = start.elapsed();
debug!("validate_block time {:?}", elapsed);
durations_in_nanos
.push(elapsed.saturating_sub(dry_run_elapsed).as_nanos() as u64 / batch_size as u64);
}
std::time::Duration::from_nanos(
durations_in_nanos.iter().sum::<u64>() / durations_in_nanos.len() as u64,
)
}
@@ -0,0 +1,153 @@
// This file is part of Substrate.
// Copyright (C) Parity Technologies (UK) Ltd.
// SPDX-License-Identifier: Apache-2.0
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use sc_cli::Result;
use sc_service::Configuration;
use log::info;
use serde::Serialize;
use std::{env, fs, path::PathBuf};
use super::cmd::StorageParams;
use crate::shared::{Stats, UnderscoreHelper};
static VERSION: &str = env!("CARGO_PKG_VERSION");
static TEMPLATE: &str = include_str!("./weights.hbs");
/// Data consumed by Handlebar to fill out the `weights.hbs` template.
#[derive(Serialize, Default, Debug, Clone)]
pub(crate) struct TemplateData {
/// Name of the database used.
db_name: String,
/// Block number that was used.
block_number: String,
/// Name of the runtime. Taken from the chain spec.
runtime_name: String,
/// Version of the benchmarking CLI used.
version: String,
/// Date that the template was filled out.
date: String,
/// Hostname of the machine that executed the benchmarks.
hostname: String,
/// CPU name of the machine that executed the benchmarks.
cpuname: String,
/// Header for the generated file.
header: String,
/// Command line arguments that were passed to the CLI.
args: Vec<String>,
/// Storage params of the executed command.
params: StorageParams,
/// The weight for one `read`.
read_weight: u64,
/// The weight for one `write`.
write_weight: u64,
/// Stats about a `read` benchmark. Contains *time* and *value size* stats.
/// The *value size* stats are currently not used in the template.
read: Option<(Stats, Stats)>,
/// Stats about a `write` benchmark. Contains *time* and *value size* stats.
/// The *value size* stats are currently not used in the template.
write: Option<(Stats, Stats)>,
}
impl TemplateData {
/// Returns a new [`Self`] from the given configuration.
pub fn new(cfg: &Configuration, params: &StorageParams) -> Result<Self> {
let header = params
.header
.as_ref()
.map(|p| std::fs::read_to_string(p))
.transpose()?
.unwrap_or_default();
Ok(TemplateData {
db_name: if params.is_validate_block_mode() {
String::from("InMemoryDb")
} else {
format!("{}", cfg.database)
},
runtime_name: cfg.chain_spec.name().into(),
version: VERSION.into(),
date: chrono::Utc::now().format("%Y-%m-%d (Y/M/D)").to_string(),
hostname: params.hostinfo.hostname(),
cpuname: params.hostinfo.cpuname(),
header,
args: env::args().collect::<Vec<String>>(),
params: params.clone(),
..Default::default()
})
}
/// Sets the stats and calculates the final weights.
pub fn set_stats(
&mut self,
read: Option<(Stats, Stats)>,
write: Option<(Stats, Stats)>,
) -> Result<()> {
if let Some(read) = read {
self.read_weight = self.params.weight_params.calc_weight(&read.0)?;
self.read = Some(read);
}
if let Some(write) = write {
self.write_weight = self.params.weight_params.calc_weight(&write.0)?;
self.write = Some(write);
}
Ok(())
}
/// Sets the block id that was used.
pub fn set_block_number(&mut self, block_number: String) {
self.block_number = block_number
}
/// Fills out the `weights.hbs` or specified HBS template with its own data.
/// Writes the result to `path` which can be a directory or file.
pub fn write(&self, path: &Option<PathBuf>, hbs_template: &Option<PathBuf>) -> Result<()> {
let mut handlebars = handlebars::Handlebars::new();
// Format large integers with underscore.
handlebars.register_helper("underscore", Box::new(UnderscoreHelper));
// Don't HTML escape any characters.
handlebars.register_escape_fn(|s| -> String { s.to_string() });
// Use custom template if provided.
let template = match hbs_template {
Some(template) if template.is_file() => fs::read_to_string(template)?,
Some(_) => return Err("Handlebars template is not a valid file!".into()),
None => TEMPLATE.to_string(),
};
let out_path = self.build_path(path);
let mut fd = fs::File::create(&out_path)?;
info!("Writing weights to {:?}", fs::canonicalize(&out_path)?);
handlebars
.render_template_to_write(&template, &self, &mut fd)
.map_err(|e| format!("HBS template write: {:?}", e).into())
}
/// Builds a path for the weight file.
fn build_path(&self, weight_out: &Option<PathBuf>) -> PathBuf {
let mut path = match weight_out {
Some(p) => PathBuf::from(p),
None => PathBuf::new(),
};
if path.is_dir() || path.as_os_str().is_empty() {
path.push(format!("{}_weights", self.db_name.to_lowercase()));
path.set_extension("rs");
}
path
}
}
@@ -0,0 +1,99 @@
{{header}}
//! THIS FILE WAS AUTO-GENERATED USING THE SUBSTRATE BENCHMARK CLI VERSION {{version}}
//! DATE: {{date}}
//! HOSTNAME: `{{hostname}}`, CPU: `{{cpuname}}`
//!
//! DATABASE: `{{db_name}}`, RUNTIME: `{{runtime_name}}`
//! BLOCK-NUM: `{{block_number}}`
//! SKIP-WRITE: `{{params.skip_write}}`, SKIP-READ: `{{params.skip_read}}`, WARMUPS: `{{params.warmups}}`
//! STATE-VERSION: `V{{params.state_version}}`, TRIE-CACHE-SIZE: `{{params.trie_cache_size}}`
//! WEIGHT-PATH: `{{params.weight_params.weight_path}}`
//! METRIC: `{{params.weight_params.weight_metric}}`, WEIGHT-MUL: `{{params.weight_params.weight_mul}}`, WEIGHT-ADD: `{{params.weight_params.weight_add}}`
// Executed Command:
{{#each args as |arg|}}
// {{arg}}
{{/each}}
/// Storage DB weights for the `{{runtime_name}}` runtime and `{{db_name}}`.
pub mod constants {
use frame_support::weights::constants;
use sp_core::parameter_types;
use sp_weights::RuntimeDbWeight;
parameter_types! {
{{#if (eq db_name "InMemoryDb")}}
/// `InMemoryDb` weights are measured in the context of the validation functions.
/// To avoid submitting overweight blocks to the relay chain this is the configuration
/// parachains should use.
{{else if (eq db_name "ParityDb")}}
/// `ParityDB` can be enabled with a feature flag, but is still experimental. These weights
/// are available for brave runtime engineers who may want to try this out as default.
{{else}}
/// By default, Substrate uses `RocksDB`, so this will be the weight used throughout
/// the runtime.
{{/if}}
pub const {{db_name}}Weight: RuntimeDbWeight = RuntimeDbWeight {
/// Time to read one storage item.
/// Calculated by multiplying the *{{params.weight_params.weight_metric}}* of all values with `{{params.weight_params.weight_mul}}` and adding `{{params.weight_params.weight_add}}`.
///
/// Stats nanoseconds:
/// Min, Max: {{underscore read.0.min}}, {{underscore read.0.max}}
/// Average: {{underscore read.0.avg}}
/// Median: {{underscore read.0.median}}
/// Std-Dev: {{read.0.stddev}}
///
/// Percentiles nanoseconds:
/// 99th: {{underscore read.0.p99}}
/// 95th: {{underscore read.0.p95}}
/// 75th: {{underscore read.0.p75}}
read: {{underscore read_weight}} * constants::WEIGHT_REF_TIME_PER_NANOS,
/// Time to write one storage item.
/// Calculated by multiplying the *{{params.weight_params.weight_metric}}* of all values with `{{params.weight_params.weight_mul}}` and adding `{{params.weight_params.weight_add}}`.
///
/// Stats nanoseconds:
/// Min, Max: {{underscore write.0.min}}, {{underscore write.0.max}}
/// Average: {{underscore write.0.avg}}
/// Median: {{underscore write.0.median}}
/// Std-Dev: {{write.0.stddev}}
///
/// Percentiles nanoseconds:
/// 99th: {{underscore write.0.p99}}
/// 95th: {{underscore write.0.p95}}
/// 75th: {{underscore write.0.p75}}
write: {{underscore write_weight}} * constants::WEIGHT_REF_TIME_PER_NANOS,
};
}
#[cfg(test)]
mod test_db_weights {
use super::constants::{{db_name}}Weight as W;
use sp_weights::constants;
/// Checks that all weights exist and have sane values.
// NOTE: If this test fails but you are sure that the generated values are fine,
// you can delete it.
#[test]
fn bound() {
// At least 1 µs.
assert!(
W::get().reads(1).ref_time() >= constants::WEIGHT_REF_TIME_PER_MICROS,
"Read weight should be at least 1 µs."
);
assert!(
W::get().writes(1).ref_time() >= constants::WEIGHT_REF_TIME_PER_MICROS,
"Write weight should be at least 1 µs."
);
// At most 1 ms.
assert!(
W::get().reads(1).ref_time() <= constants::WEIGHT_REF_TIME_PER_MILLIS,
"Read weight should be at most 1 ms."
);
assert!(
W::get().writes(1).ref_time() <= constants::WEIGHT_REF_TIME_PER_MILLIS,
"Write weight should be at most 1 ms."
);
}
}
}
@@ -0,0 +1,436 @@
// This file is part of Substrate.
// Copyright (C) Parity Technologies (UK) Ltd.
// SPDX-License-Identifier: Apache-2.0
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use codec::Encode;
use frame_storage_access_test_runtime::StorageAccessParams;
use log::{debug, info, trace, warn};
use rand::prelude::*;
use sc_cli::Result;
use sc_client_api::{Backend as ClientBackend, StorageProvider, UsageProvider};
use sc_client_db::{DbHash, DbState, DbStateBuilder};
use sp_blockchain::HeaderBackend;
use sp_database::{ColumnId, Transaction};
use sp_runtime::traits::{Block as BlockT, HashingFor, Header as HeaderT};
use sp_state_machine::Backend as StateBackend;
use sp_storage::{ChildInfo, StateVersion};
use sp_trie::{recorder::Recorder, PrefixedMemoryDB};
use std::{
fmt::Debug,
sync::Arc,
time::{Duration, Instant},
};
use super::{cmd::StorageCmd, get_wasm_module, MAX_BATCH_SIZE_FOR_BLOCK_VALIDATION};
use crate::shared::{new_rng, BenchRecord};
impl StorageCmd {
/// Benchmarks the time it takes to write a single Storage item.
///
/// Uses the latest state that is available for the given client.
///
/// Unlike the read benchmark, where we read every single key, here we write a batch of keys at
/// once. Writing a final remainder much smaller than the batch size would dramatically distort
/// the results, so we skip those remaining keys.
pub(crate) fn bench_write<Block, BA, H, C>(
&self,
client: Arc<C>,
(db, state_col): (Arc<dyn sp_database::Database<DbHash>>, ColumnId),
storage: Arc<dyn sp_state_machine::Storage<HashingFor<Block>>>,
shared_trie_cache: Option<sp_trie::cache::SharedTrieCache<HashingFor<Block>>>,
) -> Result<BenchRecord>
where
Block: BlockT<Header = H, Hash = DbHash> + Debug,
H: HeaderT<Hash = DbHash>,
BA: ClientBackend<Block>,
C: UsageProvider<Block> + HeaderBackend<Block> + StorageProvider<Block, BA>,
{
if self.params.is_validate_block_mode() && self.params.disable_pov_recorder {
return Err("PoV recorder must be activated to provide a storage proof for block validation at runtime. Remove `--disable-pov-recorder`.".into());
}
if self.params.is_validate_block_mode() &&
self.params.batch_size > MAX_BATCH_SIZE_FOR_BLOCK_VALIDATION
{
return Err(format!("Batch size is too large. This may cause problems with runtime memory allocation. Please set `--batch-size {}` or less.", MAX_BATCH_SIZE_FOR_BLOCK_VALIDATION).into());
}
// Store the time that it took to write each value.
let mut record = BenchRecord::default();
let best_hash = client.usage_info().chain.best_hash;
let header = client.header(best_hash)?.ok_or("Header not found")?;
let original_root = *header.state_root();
let (trie, _) = self.create_trie_backend::<Block, H>(
original_root,
&storage,
shared_trie_cache.as_ref(),
);
info!("Preparing keys from block {}", best_hash);
// Load all KV pairs and randomly shuffle them.
let mut kvs: Vec<_> = trie.pairs(Default::default())?.collect();
let (mut rng, _) = new_rng(None);
kvs.shuffle(&mut rng);
if kvs.is_empty() {
return Err("Can't process benchmarking with empty storage".into());
}
info!("Writing {} keys in batches of {}", kvs.len(), self.params.batch_size);
let remainder = kvs.len() % self.params.batch_size;
if self.params.is_validate_block_mode() && remainder != 0 {
info!("Remaining `{remainder}` keys will be skipped");
}
let mut child_nodes = Vec::new();
let mut batched_keys = Vec::new();
// Generate all random values first; make sure there are no collisions with existing
// db entries, so we can roll back all additions without corrupting existing entries.
for key_value in kvs {
let (k, original_v) = key_value?;
match (self.params.include_child_trees, self.is_child_key(k.to_vec())) {
(true, Some(info)) => {
let child_keys = client
.child_storage_keys(best_hash, info.clone(), None, None)?
.collect::<Vec<_>>();
child_nodes.push((child_keys, info.clone()));
},
_ => {
// regular key
let mut new_v = vec![0; original_v.len()];
loop {
// Create a random value to overwrite with.
// NOTE: We use possibly higher entropy than the original value. This could
// be improved, but it acts as an over-estimation, which is fine for now.
rng.fill_bytes(&mut new_v[..]);
if check_new_value::<Block>(
db.clone(),
&trie,
&k.to_vec(),
&new_v,
self.state_version(),
state_col,
None,
) {
break;
}
}
batched_keys.push((k.to_vec(), new_v.to_vec()));
if batched_keys.len() < self.params.batch_size {
continue;
}
// Write each value in one commit.
let (size, duration) = if self.params.is_validate_block_mode() {
self.measure_per_key_amortised_validate_block_write_cost::<Block, H>(
original_root,
&storage,
shared_trie_cache.as_ref(),
batched_keys.clone(),
None,
)?
} else {
self.measure_per_key_amortised_import_block_write_cost::<Block, H>(
original_root,
&storage,
shared_trie_cache.as_ref(),
db.clone(),
batched_keys.clone(),
self.state_version(),
state_col,
None,
)?
};
record.append(size, duration)?;
batched_keys.clear();
},
}
}
if self.params.include_child_trees && !child_nodes.is_empty() {
info!("Writing {} child keys", child_nodes.iter().map(|(c, _)| c.len()).sum::<usize>());
for (mut child_keys, info) in child_nodes {
if child_keys.len() < self.params.batch_size {
warn!(
"{} child keys will be skipped because it's less than batch size",
child_keys.len()
);
continue;
}
child_keys.shuffle(&mut rng);
for key in child_keys {
if let Some(original_v) = client
.child_storage(best_hash, &info, &key)
.expect("Checked above to exist")
{
let mut new_v = vec![0; original_v.0.len()];
loop {
rng.fill_bytes(&mut new_v[..]);
if check_new_value::<Block>(
db.clone(),
&trie,
&key.0,
&new_v,
self.state_version(),
state_col,
Some(&info),
) {
break;
}
}
batched_keys.push((key.0, new_v.to_vec()));
if batched_keys.len() < self.params.batch_size {
continue;
}
let (size, duration) = if self.params.is_validate_block_mode() {
self.measure_per_key_amortised_validate_block_write_cost::<Block, H>(
original_root,
&storage,
shared_trie_cache.as_ref(),
batched_keys.clone(),
None,
)?
} else {
self.measure_per_key_amortised_import_block_write_cost::<Block, H>(
original_root,
&storage,
shared_trie_cache.as_ref(),
db.clone(),
batched_keys.clone(),
self.state_version(),
state_col,
Some(&info),
)?
};
record.append(size, duration)?;
batched_keys.clear();
}
}
}
}
Ok(record)
}
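/// Builds a `DbState` trie backend at `original_root` with an optional shared trie cache
/// and an optional PoV recorder.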
fn create_trie_backend<Block, H>(
&self,
original_root: Block::Hash,
storage: &Arc<dyn sp_state_machine::Storage<HashingFor<Block>>>,
shared_trie_cache: Option<&sp_trie::cache::SharedTrieCache<HashingFor<Block>>>,
) -> (DbState<HashingFor<Block>>, Option<Recorder<HashingFor<Block>>>)
where
Block: BlockT<Header = H, Hash = DbHash> + Debug,
H: HeaderT<Hash = DbHash>,
{
let recorder = (!self.params.disable_pov_recorder).then(|| Default::default());
let trie = DbStateBuilder::<HashingFor<Block>>::new(storage.clone(), original_root)
.with_optional_cache(shared_trie_cache.map(|c| c.local_cache_trusted()))
.with_optional_recorder(recorder.clone())
.build();
(trie, recorder)
}
/// Measures the write benchmark.
/// If `child_info` is `Some`, the keys belong to a child tree.
fn measure_per_key_amortised_import_block_write_cost<Block, H>(
&self,
original_root: Block::Hash,
storage: &Arc<dyn sp_state_machine::Storage<HashingFor<Block>>>,
shared_trie_cache: Option<&sp_trie::cache::SharedTrieCache<HashingFor<Block>>>,
db: Arc<dyn sp_database::Database<DbHash>>,
changes: Vec<(Vec<u8>, Vec<u8>)>,
version: StateVersion,
col: ColumnId,
child_info: Option<&ChildInfo>,
) -> Result<(usize, Duration)>
where
Block: BlockT<Header = H, Hash = DbHash> + Debug,
H: HeaderT<Hash = DbHash>,
{
let batch_size = changes.len();
let average_len = changes.iter().map(|(_, v)| v.len()).sum::<usize>() / batch_size;
// For every batched write use a different trie instance and recorder, so we
// don't benefit from past runs.
let (trie, _recorder) =
self.create_trie_backend::<Block, H>(original_root, storage, shared_trie_cache);
let start = Instant::now();
// Create a TX that will modify the Trie in the DB and
// calculate the root hash of the Trie after the modification.
let replace = changes
.iter()
.map(|(key, new_v)| (key.as_ref(), Some(new_v.as_ref())))
.collect::<Vec<_>>();
let stx = match child_info {
Some(info) => trie.child_storage_root(info, replace.iter().cloned(), version).2,
None => trie.storage_root(replace.iter().cloned(), version).1,
};
// Only keep the insertions, since we do not want to benchmark pruning.
let tx = convert_tx::<Block>(db.clone(), stx.clone(), false, col);
db.commit(tx).map_err(|e| format!("Writing to the Database: {}", e))?;
let result = (average_len, start.elapsed() / batch_size as u32);
// Now undo the changes by removing what was added.
let tx = convert_tx::<Block>(db.clone(), stx.clone(), true, col);
db.commit(tx).map_err(|e| format!("Writing to the Database: {}", e))?;
Ok(result)
}
/// Measures the write benchmark during block validation.
/// If `child_info` is `Some`, the keys belong to a child tree.
fn measure_per_key_amortised_validate_block_write_cost<Block, H>(
&self,
original_root: Block::Hash,
storage: &Arc<dyn sp_state_machine::Storage<HashingFor<Block>>>,
shared_trie_cache: Option<&sp_trie::cache::SharedTrieCache<HashingFor<Block>>>,
changes: Vec<(Vec<u8>, Vec<u8>)>,
maybe_child_info: Option<&ChildInfo>,
) -> Result<(usize, Duration)>
where
Block: BlockT<Header = H, Hash = DbHash> + Debug,
H: HeaderT<Hash = DbHash>,
{
let batch_size = changes.len();
let average_len = changes.iter().map(|(_, v)| v.len()).sum::<usize>() / batch_size;
let (trie, recorder) =
self.create_trie_backend::<Block, H>(original_root, storage, shared_trie_cache);
for (key, _) in changes.iter() {
let _v = trie
.storage(key)
.expect("Checked above to exist")
.ok_or("Value unexpectedly empty")?;
}
let storage_proof = recorder
.map(|r| r.drain_storage_proof())
.expect("Storage proof must exist for block validation");
let root = trie.root();
debug!(
"POV: len {:?} {:?}",
storage_proof.len(),
storage_proof.clone().encoded_compact_size::<HashingFor<Block>>(*root)
);
let params = StorageAccessParams::<Block>::new_write(
*root,
storage_proof,
(changes, maybe_child_info.cloned()),
);
let mut durations_in_nanos = Vec::new();
let wasm_module = get_wasm_module();
let mut instance = wasm_module.new_instance().expect("Failed to create wasm instance");
let dry_run_encoded = params.as_dry_run().encode();
let encoded = params.encode();
for i in 1..=self.params.validate_block_rounds {
info!(
"validate_block with {} keys, round {}/{}",
batch_size, i, self.params.validate_block_rounds
);
// Dry run to get the time it takes without storage access
let dry_run_start = Instant::now();
instance
.call_export("validate_block", &dry_run_encoded)
.expect("Failed to call validate_block");
let dry_run_elapsed = dry_run_start.elapsed();
debug!("validate_block dry-run time {:?}", dry_run_elapsed);
let start = Instant::now();
instance
.call_export("validate_block", &encoded)
.expect("Failed to call validate_block");
let elapsed = start.elapsed();
debug!("validate_block time {:?}", elapsed);
durations_in_nanos.push(
elapsed.saturating_sub(dry_run_elapsed).as_nanos() as u64 / batch_size as u64,
);
}
let result = (
average_len,
std::time::Duration::from_nanos(
durations_in_nanos.iter().sum::<u64>() / durations_in_nanos.len() as u64,
),
);
Ok(result)
}
}
/// Converts a Trie transaction into a DB transaction.
/// Removals are ignored and will not be included in the final tx.
/// `invert_inserts` replaces all inserts with removals.
fn convert_tx<B: BlockT>(
db: Arc<dyn sp_database::Database<DbHash>>,
mut tx: PrefixedMemoryDB<HashingFor<B>>,
invert_inserts: bool,
col: ColumnId,
) -> Transaction<DbHash> {
let mut ret = Transaction::<DbHash>::default();
for (mut k, (v, rc)) in tx.drain().into_iter() {
if rc > 0 {
db.sanitize_key(&mut k);
if invert_inserts {
ret.remove(col, &k);
} else {
ret.set(col, &k, &v);
}
}
// < 0 means removal - ignored.
// 0 means no modification.
}
ret
}
/// Checks whether a new value causes any collision in the tree updates.
/// Returns `true` if there is no collision.
/// If `child_info` is `Some`, the key belongs to a child tree.
fn check_new_value<Block: BlockT>(
db: Arc<dyn sp_database::Database<DbHash>>,
trie: &DbState<HashingFor<Block>>,
key: &Vec<u8>,
new_v: &Vec<u8>,
version: StateVersion,
col: ColumnId,
child_info: Option<&ChildInfo>,
) -> bool {
let new_kv = vec![(key.as_ref(), Some(new_v.as_ref()))];
let mut stx = match child_info {
Some(info) => trie.child_storage_root(info, new_kv.iter().cloned(), version).2,
None => trie.storage_root(new_kv.iter().cloned(), version).1,
};
for (mut k, (_, rc)) in stx.drain().into_iter() {
if rc > 0 {
db.sanitize_key(&mut k);
if db.get(col, &k).is_some() {
trace!("Benchmark-store key creation: Key collision detected, retry");
return false;
}
}
}
true
}