mirror of
https://github.com/pezkuwichain/pezkuwi-subxt.git
synced 2026-04-26 04:07:57 +00:00
Add Proof Size to Weight Output (#11637)
* initial impl
* add template test
* linear fit proof size
* always record proof when tracking storage
* calculate worst case pov
* remove duplicate worst case
* cargo run --quiet --profile=production --features=runtime-benchmarks --manifest-path=bin/node/cli/Cargo.toml -- benchmark pallet --chain=dev --steps=50 --repeat=20 --pallet=pallet_assets --extrinsic=* --execution=wasm --wasm-execution=compiled --heap-pages=4096 --output=./frame/assets/src/weights.rs --template=./.maintain/frame-weight-template.hbs
* more comment output
* add cli for worst case map size
* update name
* clap does not support underscores
* rename
* expose worst case map values
* improve some comments
* cargo run (re-run of the pallet_assets benchmark command above)
* update template
* cargo run (re-run of the pallet_assets benchmark command above)
* fix fmt
* more fmt
* more fmt
* Don't panic when there is no proof
* Fix test features
* Whitelist :extrinsic_index
* Use whitelist when recording proof
* Add logs
* Add PoV testing pallet
* Deploy PoV testing pallet
* Storage benches reside in the PoV pallet
* Linear regress PoV per component

  Splits the PoV calculation into "measured" and "estimated". The measured part is reported by the proof recorder and linear-regressed over all components at once. The estimated part is calculated as the worst case by using the max PoV size per storage access and calculating one linear regression per component. This gives each component a (possibly) independent PoV. For now the measured size will always be lower than the PoV on Polkadot, since it is measured on an empty snapshot; the measured part is therefore only used as a diagnostic for debugging.

* Put PoV into the weight templates
* fmt
* Extra analysis choice for PoV
* Add+Fix tests
* Make benches faster
* Cleanup
* Use same template comments
* ".git/.scripts/bench-bot.sh" pallet dev pallet_balances
* ".git/.scripts/bench-bot.sh" pallet dev pallet_democracy
* Update referenda mock BlockWeights
* Take measured value size into account
* clippy
* ".git/.scripts/bench-bot.sh" pallet dev pallet_scheduler
* WIP
* proof_size: None
* WIP (several intermediate commits)
* ugly, but works
* WIP (several more intermediate commits)
* Add pov_mode attribute to the benchmarks! macro
* Use pov_mode attribute in PoV benchmarking
* Update tests
* Scheduler, Whitelist: Add pov_mode attr
* Update PoV weights
* Add CLI arg: default-pov-mode
* Fix tests
* fmt
* fix
* Revert "Update PoV weights" (reverts commit 2f3ac2387396470b118122a6ff8fa4ee12216f4b)
* Revert "WIP" (reverts commit c34b538cd2bc45da4544e887180184e30957904a)
* Revert first approach (reverts commit range 8ddaa2fffe5930f225a30bee314d0b7c94c344dd^..4c84f8748e5395852a9e0e25b0404953fee1a59e)
* Clippy
* Add extra benchmarks
* ".git/.scripts/commands/bench/bench.sh" pallet dev pallet_alliance
* ".git/.scripts/commands/bench/bench.sh" pallet dev pallet_whitelist
* ".git/.scripts/commands/bench/bench.sh" pallet dev pallet_scheduler
* fmt
* Clippy
* Clippy 🤦
* Add reference benchmarks
* Fix doc comments
* Undo logging
* Add 'Ignored' pov_mode
* Allow multiple attributes per benchmark

  Turns out that the current benchmarking syntax does not support multiple attributes per bench 🤦. Changed it to support that, since otherwise `pov_mode` would conflict with the other attributes.

* Validate pov_mode syntax
* Ignore PoV for all contract benchmarks
* Test
* test
* Bump macro recursion limit
* fmt
* Update contract weights (they don't have a PoV component anymore)
* fix test ffs
* pov_mode is unsupported in V2 syntax
* Fix pallet ui tests
* update pallet ui
* Fix pallet ui tests
* Update weights

Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
Co-authored-by: Parity Bot <admin@parity.io>
Co-authored-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
Co-authored-by: command-bot <>
Co-authored-by: Your Name <you@example.com>
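Several of the changelog entries above ("linear fit proof size", "Linear regress PoV per component") revolve around fitting recorded proof sizes linearly over a benchmark component. A minimal, self-contained sketch of such a fit (ordinary least squares over a single component; the real analysis lives in `frame_benchmarking::Analysis` and handles several components at once, so the names and numbers here are purely illustrative):

```rust
// Hedged sketch: least-squares fit of recorded proof sizes over one benchmark
// component, yielding (base, slope) so that proof_size ≈ base + slope * component.
fn linear_fit(points: &[(f64, f64)]) -> (f64, f64) {
    let n = points.len() as f64;
    let (sx, sy) = points.iter().fold((0.0, 0.0), |(a, b), (x, y)| (a + x, b + y));
    let (mx, my) = (sx / n, sy / n);
    let num: f64 = points.iter().map(|(x, y)| (x - mx) * (y - my)).sum();
    let den: f64 = points.iter().map(|(x, _)| (x - mx) * (x - mx)).sum();
    let slope = if den == 0.0 { 0.0 } else { num / den };
    (my - slope * mx, slope)
}

fn main() {
    // Illustrative data: proof size grows ~100 bytes per map entry over a 512 byte base.
    let pts: Vec<(f64, f64)> = (0..=10).map(|c| (c as f64, 512.0 + 100.0 * c as f64)).collect();
    let (base, slope) = linear_fit(&pts);
    println!("base={} slope={}", base, slope);
}
```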
@@ -38,7 +38,7 @@ use sp_externalities::Extensions;
 use sp_keystore::{testing::KeyStore, KeystoreExt, SyncCryptoStorePtr};
 use sp_runtime::traits::{Block as BlockT, Header as HeaderT};
 use sp_state_machine::StateMachine;
-use std::{collections::HashMap, fmt::Debug, fs, sync::Arc, time};
+use std::{collections::HashMap, fmt::Debug, fs, str::FromStr, sync::Arc, time};
 
 /// Logging target
 const LOG_TARGET: &'static str = "frame::benchmark::pallet";
@@ -54,6 +54,34 @@ pub(crate) struct ComponentRange {
 	max: u32,
 }
 
+/// How the PoV size of a storage item should be estimated.
+#[derive(clap::ValueEnum, Debug, Eq, PartialEq, Clone, Copy)]
+pub enum PovEstimationMode {
+	/// Use the maximal encoded length as provided by [`codec::MaxEncodedLen`].
+	MaxEncodedLen,
+	/// Measure the accessed value size in the pallet benchmarking and add some trie overhead.
+	Measured,
+	/// Do not estimate the PoV size for this storage item or benchmark.
+	Ignored,
+}
+
+impl FromStr for PovEstimationMode {
+	type Err = &'static str;
+
+	fn from_str(s: &str) -> std::result::Result<Self, Self::Err> {
+		match s {
+			"MaxEncodedLen" => Ok(Self::MaxEncodedLen),
+			"Measured" => Ok(Self::Measured),
+			"Ignored" => Ok(Self::Ignored),
+			_ => unreachable!("The benchmark! macro should have prevented this"),
+		}
+	}
+}
+
+/// Maps (pallet, benchmark) -> ((pallet, storage) -> PovEstimationMode)
+pub(crate) type PovModesMap =
+	HashMap<(Vec<u8>, Vec<u8>), HashMap<(String, String), PovEstimationMode>>;
+
 // This takes multiple benchmark batches and combines all the results where the pallet, instance,
 // and benchmark are the same.
 fn combine_batches(
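The `FromStr` impl in the hunk above only ever sees strings that the `benchmarks!` macro has already validated, hence the `unreachable!` arm. A standalone sketch of the same mapping (re-declaring the enum without the `clap` derive, and returning an `Err` instead of panicking so it can run outside the macro's guarantees):

```rust
use std::str::FromStr;

// Simplified stand-in for the PovEstimationMode in the diff above (no clap derive).
#[derive(Debug, Eq, PartialEq, Clone, Copy)]
enum PovEstimationMode {
    MaxEncodedLen,
    Measured,
    Ignored,
}

impl FromStr for PovEstimationMode {
    type Err = &'static str;

    fn from_str(s: &str) -> Result<Self, Self::Err> {
        match s {
            "MaxEncodedLen" => Ok(Self::MaxEncodedLen),
            "Measured" => Ok(Self::Measured),
            "Ignored" => Ok(Self::Ignored),
            // The real code uses unreachable!() here because the macro pre-validates input.
            _ => Err("unknown pov_mode"),
        }
    }
}

fn main() {
    assert_eq!(PovEstimationMode::from_str("Measured"), Ok(PovEstimationMode::Measured));
    assert!(PovEstimationMode::from_str("bogus").is_err());
    println!("ok");
}
```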
@@ -165,11 +193,19 @@ impl PalletCmd {
 			let state_with_tracking = BenchmarkingState::<BB>::new(
 				genesis_storage.clone(),
 				cache_size,
-				self.record_proof,
+				// Record proof size
+				true,
+				// Enable storage tracking
+				true,
 			)?;
-			let state_without_tracking =
-				BenchmarkingState::<BB>::new(genesis_storage, cache_size, self.record_proof, false)?;
+			let state_without_tracking = BenchmarkingState::<BB>::new(
+				genesis_storage,
+				cache_size,
+				// Do not record proof size
+				false,
+				// Do not enable storage tracking
+				false,
+			)?;
 			let executor = NativeElseWasmExecutor::<ExecDispatch>::new(
 				execution_method_from_cli(self.wasm_method, self.wasmtime_instantiation_strategy),
 				self.heap_pages,
@@ -222,10 +258,27 @@ impl PalletCmd {
 						item.pallet.clone(),
 						benchmark.name.clone(),
 						benchmark.components.clone(),
+						benchmark.pov_modes.clone(),
 					))
 				}
 			}
 		});
+		// Convert `Vec<u8>` to `String` for better readability.
+		let benchmarks_to_run: Vec<_> = benchmarks_to_run
+			.into_iter()
+			.map(|b| {
+				(
+					b.0,
+					b.1,
+					b.2,
+					b.3.into_iter()
+						.map(|(p, s)| {
+							(String::from_utf8(p).unwrap(), String::from_utf8(s).unwrap())
+						})
+						.collect(),
+				)
+			})
+			.collect();
 
 		if benchmarks_to_run.is_empty() {
 			return Err("No benchmarks found which match your input.".into())
@@ -243,8 +296,9 @@ impl PalletCmd {
 		let mut timer = time::SystemTime::now();
 		// Maps (pallet, extrinsic) to its component ranges.
 		let mut component_ranges = HashMap::<(Vec<u8>, Vec<u8>), Vec<ComponentRange>>::new();
+		let pov_modes = Self::parse_pov_modes(&benchmarks_to_run)?;
 
-		for (pallet, extrinsic, components) in benchmarks_to_run {
+		for (pallet, extrinsic, components, _) in benchmarks_to_run.clone() {
 			log::info!(
 				target: LOG_TARGET,
 				"Starting benchmark: {}::{}",
@@ -429,7 +483,7 @@ impl PalletCmd {
 		// Combine all of the benchmark results, so that benchmarks of the same pallet/function
 		// are together.
 		let batches = combine_batches(batches, batches_db);
-		self.output(&batches, &storage_info, &component_ranges)
+		self.output(&batches, &storage_info, &component_ranges, pov_modes)
 	}
 
 	fn output(
@@ -437,21 +491,31 @@ impl PalletCmd {
 		batches: &[BenchmarkBatchSplitResults],
 		storage_info: &[StorageInfo],
 		component_ranges: &HashMap<(Vec<u8>, Vec<u8>), Vec<ComponentRange>>,
+		pov_modes: PovModesMap,
 	) -> Result<()> {
 		// Jsonify the result and write it to a file or stdout if desired.
 		if !self.jsonify(&batches)? {
 			// Print the summary only if `jsonify` did not write to stdout.
-			self.print_summary(&batches, &storage_info)
+			self.print_summary(&batches, &storage_info, pov_modes.clone())
 		}
 
 		// Create the weights.rs file.
 		if let Some(output_path) = &self.output {
-			writer::write_results(&batches, &storage_info, &component_ranges, output_path, self)?;
+			writer::write_results(
+				&batches,
+				&storage_info,
+				&component_ranges,
+				pov_modes,
+				self.default_pov_mode,
+				output_path,
+				self,
+			)?;
 		}
 
 		Ok(())
 	}
 
 	/// Re-analyze a batch historic benchmark timing data. Will not take the PoV into account.
 	fn output_from_results(&self, batches: &[BenchmarkBatchSplitResults]) -> Result<()> {
 		let mut component_ranges =
 			HashMap::<(Vec<u8>, Vec<u8>), HashMap<String, (u32, u32)>>::new();
@@ -484,7 +548,7 @@ impl PalletCmd {
 			})
 			.collect();
 
-		self.output(batches, &[], &component_ranges)
+		self.output(batches, &[], &component_ranges, Default::default())
 	}
 
 	/// Jsonifies the passed batches and writes them to stdout or into a file.
@@ -507,7 +571,12 @@ impl PalletCmd {
 	}
 
 	/// Prints the results as human-readable summary without raw timing data.
-	fn print_summary(&self, batches: &[BenchmarkBatchSplitResults], storage_info: &[StorageInfo]) {
+	fn print_summary(
+		&self,
+		batches: &[BenchmarkBatchSplitResults],
+		storage_info: &[StorageInfo],
+		pov_modes: PovModesMap,
+	) {
 		for batch in batches.iter() {
 			// Print benchmark metadata
 			println!(
@@ -526,13 +595,32 @@ impl PalletCmd {
 			}
 
 			if !self.no_storage_info {
-				let mut comments: Vec<String> = Default::default();
-				writer::add_storage_comments(&mut comments, &batch.db_results, storage_info);
+				let mut storage_per_prefix = HashMap::<Vec<u8>, Vec<BenchmarkResult>>::new();
+				let pov_mode = pov_modes
+					.get(&(batch.pallet.clone(), batch.benchmark.clone()))
+					.cloned()
+					.unwrap_or_default();
+
+				let comments = writer::process_storage_results(
+					&mut storage_per_prefix,
+					&batch.db_results,
+					storage_info,
+					&pov_mode,
+					self.default_pov_mode,
+					self.worst_case_map_values,
+					self.additional_trie_layers,
+				);
 				println!("Raw Storage Info\n========");
 				for comment in comments {
 					println!("{}", comment);
 				}
 				println!();
+
+				println!("-- Proof Sizes --\n");
+				for result in batch.db_results.iter() {
+					println!("{} bytes", result.proof_size);
+				}
+				println!();
 			}
 
 			// Conduct analysis.
@@ -553,6 +641,11 @@ impl PalletCmd {
 				{
 					println!("Writes = {:?}", analysis);
 				}
+				if let Some(analysis) =
+					Analysis::median_slopes(&batch.db_results, BenchmarkSelector::ProofSize)
+				{
+					println!("Recorded proof Size = {:?}", analysis);
+				}
 				println!();
 			}
 			if !self.no_min_squares {
@@ -572,10 +665,60 @@ impl PalletCmd {
 				{
 					println!("Writes = {:?}", analysis);
 				}
+				if let Some(analysis) =
+					Analysis::min_squares_iqr(&batch.db_results, BenchmarkSelector::ProofSize)
+				{
+					println!("Recorded proof Size = {:?}", analysis);
+				}
 				println!();
 			}
 		}
 	}
 
+	/// Parses the PoV modes per benchmark that were specified by the `#[pov_mode]` attribute.
+	fn parse_pov_modes(
+		benchmarks: &Vec<(
+			Vec<u8>,
+			Vec<u8>,
+			Vec<(BenchmarkParameter, u32, u32)>,
+			Vec<(String, String)>,
+		)>,
+	) -> Result<PovModesMap> {
+		use std::collections::hash_map::Entry;
+		let mut parsed = PovModesMap::new();
+
+		for (pallet, call, _components, pov_modes) in benchmarks {
+			for (pallet_storage, mode) in pov_modes {
+				let mode = PovEstimationMode::from_str(&mode)?;
+				let splits = pallet_storage.split("::").collect::<Vec<_>>();
+				if splits.is_empty() || splits.len() > 2 {
+					return Err(format!(
+						"Expected 'Pallet::Storage' as storage name but got: {}",
+						pallet_storage
+					)
+					.into())
+				}
+				let (pov_pallet, pov_storage) = (splits[0], splits.get(1).unwrap_or(&"ALL"));
+
+				match parsed
+					.entry((pallet.clone(), call.clone()))
+					.or_default()
+					.entry((pov_pallet.to_string(), pov_storage.to_string()))
+				{
+					Entry::Occupied(_) =>
+						return Err(format!(
+							"Cannot specify pov_mode tag twice for the same key: {}",
+							pallet_storage
+						)
+						.into()),
+					Entry::Vacant(e) => {
+						e.insert(mode);
+					},
+				}
+			}
+		}
+		Ok(parsed)
+	}
 }
 
 impl CliConfiguration for PalletCmd {
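`parse_pov_modes` above splits each `Pallet::Storage` key on `::` and falls back to the sentinel `ALL` when only a pallet name is given. The splitting rule in isolation (a hedged sketch; `parse_key` is an illustrative name, not part of the PR):

```rust
// Sketch of the key-splitting logic from parse_pov_modes: accepts "Pallet" or
// "Pallet::Storage", rejects anything with more than one "::" separator.
fn parse_key(pallet_storage: &str) -> Result<(String, String), String> {
    let splits = pallet_storage.split("::").collect::<Vec<_>>();
    if splits.is_empty() || splits.len() > 2 {
        return Err(format!(
            "Expected 'Pallet::Storage' as storage name but got: {}",
            pallet_storage
        ))
    }
    // A bare pallet name applies the mode to ALL of its storage items.
    let (pov_pallet, pov_storage) = (splits[0], *splits.get(1).unwrap_or(&"ALL"));
    Ok((pov_pallet.to_string(), pov_storage.to_string()))
}

fn main() {
    assert_eq!(
        parse_key("Balances::Account").unwrap(),
        ("Balances".to_string(), "Account".to_string())
    );
    assert_eq!(parse_key("Balances").unwrap(), ("Balances".to_string(), "ALL".to_string()));
    assert!(parse_key("A::B::C").is_err());
    println!("ok");
}
```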
@@ -592,9 +735,16 @@ impl CliConfiguration for PalletCmd {
 }
 
 /// List the benchmarks available in the runtime, in a CSV friendly format.
-fn list_benchmark(benchmarks_to_run: Vec<(Vec<u8>, Vec<u8>, Vec<(BenchmarkParameter, u32, u32)>)>) {
+fn list_benchmark(
+	benchmarks_to_run: Vec<(
+		Vec<u8>,
+		Vec<u8>,
+		Vec<(BenchmarkParameter, u32, u32)>,
+		Vec<(String, String)>,
+	)>,
+) {
 	println!("pallet, benchmark");
-	for (pallet, extrinsic, _components) in benchmarks_to_run {
+	for (pallet, extrinsic, _, _) in benchmarks_to_run {
 		println!("{}, {}", String::from_utf8_lossy(&pallet), String::from_utf8_lossy(&extrinsic));
 	}
 }
@@ -103,6 +103,14 @@ pub struct PalletCmd {
 	#[arg(long)]
 	pub output_analysis: Option<String>,
 
+	/// Which analysis function to use when analyzing measured proof sizes.
+	#[arg(long, default_value("median-slopes"))]
+	pub output_pov_analysis: Option<String>,
+
+	/// The PoV estimation mode of a benchmark if no `pov_mode` attribute is present.
+	#[arg(long, default_value("max-encoded-len"), value_enum)]
+	pub default_pov_mode: command::PovEstimationMode,
+
 	/// Set the heap pages while running benchmarks. If not set, the default value from the client
 	/// is used.
 	#[arg(long)]
@@ -117,10 +125,6 @@ pub struct PalletCmd {
 	#[arg(long)]
 	pub extra: bool,
 
-	/// Estimate PoV size.
-	#[arg(long)]
-	pub record_proof: bool,
-
 	#[allow(missing_docs)]
 	#[clap(flatten)]
 	pub shared_params: sc_cli::SharedParams,
@@ -167,6 +171,25 @@ pub struct PalletCmd {
 	#[arg(long)]
 	pub no_storage_info: bool,
 
+	/// The assumed default maximum size of any `StorageMap`.
+	///
+	/// When the maximum size of a map is not defined by the runtime developer,
+	/// this value is used as a worst case scenario. It will affect the calculated worst case
+	/// PoV size for accessing a value in a map, since the PoV will need to include the trie
+	/// nodes down to the underlying value.
+	#[clap(long = "map-size", default_value = "1000000")]
+	pub worst_case_map_values: u32,
+
+	/// Adjust the PoV estimation by adding additional trie layers to it.
+	///
+	/// This should be set to `log16(n)` where `n` is the number of top-level storage items in the
+	/// runtime, eg. `StorageMap`s and `StorageValue`s. A value of 2 to 3 is usually sufficient.
+	/// Each layer will result in an additional 495 bytes PoV per distinct top-level access.
+	/// Therefore multiple `StorageMap` accesses only suffer from this increase once. The exact
+	/// number of storage items depends on the runtime and the deployed pallets.
+	#[clap(long, default_value = "0")]
+	pub additional_trie_layers: u8,
+
 	/// A path to a `.json` file with existing benchmark results generated with `--json` or
 	/// `--json-file`. When specified the benchmarks are not actually executed, and the data for
 	/// the analysis is read from this file.
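The `--additional-trie-layers` help text above implies a small calculation: pick `log16(n)` layers for `n` top-level storage items, and each extra layer adds 495 bytes of PoV per distinct top-level access. A sketch of that arithmetic (the function name is illustrative, not part of the PR):

```rust
// Hedged arithmetic sketch of the --additional-trie-layers guidance: the
// suggested value is log16(n) rounded up for n top-level storage items, and
// each layer adds 495 bytes of PoV per distinct top-level access.
fn suggested_layers(top_level_items: u32) -> u32 {
    (top_level_items as f64).log(16.0).ceil() as u32
}

fn main() {
    // e.g. a runtime with ~3000 top-level storage items
    let layers = suggested_layers(3000);
    println!("layers={} extra_pov_bytes={}", layers, layers * 495);
}
```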
@@ -2,7 +2,8 @@
 //! Autogenerated weights for `{{pallet}}`
 //!
 //! THIS FILE WAS AUTO-GENERATED USING THE SUBSTRATE BENCHMARK CLI VERSION {{version}}
-//! DATE: {{date}}, STEPS: `{{cmd.steps}}`, REPEAT: {{cmd.repeat}}, LOW RANGE: `{{cmd.lowest_range_values}}`, HIGH RANGE: `{{cmd.highest_range_values}}`
+//! DATE: {{date}}, STEPS: `{{cmd.steps}}`, REPEAT: `{{cmd.repeat}}`, LOW RANGE: `{{cmd.lowest_range_values}}`, HIGH RANGE: `{{cmd.highest_range_values}}`
+//! WORST CASE MAP SIZE: `{{cmd.worst_case_map_values}}`
 //! HOSTNAME: `{{hostname}}`, CPU: `{{cpuname}}`
 //! EXECUTION: {{cmd.execution}}, WASM-EXECUTION: {{cmd.wasm_execution}}, CHAIN: {{cmd.chain}}, DB CACHE: {{cmd.db_cache}}
 
@@ -23,7 +24,7 @@ pub struct WeightInfo<T>(PhantomData<T>);
 impl<T: frame_system::Config> {{pallet}}::WeightInfo for WeightInfo<T> {
 	{{#each benchmarks as |benchmark|}}
 	{{#each benchmark.comments as |comment|}}
-	// {{comment}}
+	/// {{comment}}
 	{{/each}}
 	{{#each benchmark.component_ranges as |range|}}
 	/// The range of component `{{range.name}}` is `[{{range.min}}, {{range.max}}]`.
@@ -33,8 +34,15 @@ impl<T: frame_system::Config> {{pallet}}::WeightInfo for WeightInfo<T> {
 	{{~#each benchmark.components as |c| ~}}
 	{{~#if (not c.is_used)}}_{{/if}}{{c.name}}: u32, {{/each~}}
 	) -> Weight {
+		// Proof Size summary in bytes:
+		//  Measured:  `{{benchmark.base_recorded_proof_size}}{{#each benchmark.component_recorded_proof_size as |cp|}} + {{cp.name}} * ({{cp.slope}} ±{{underscore cp.error}}){{/each}}`
+		//  Estimated: `{{benchmark.base_calculated_proof_size}}{{#each benchmark.component_calculated_proof_size as |cp|}} + {{cp.name}} * ({{cp.slope}} ±{{underscore cp.error}}){{/each}}`
 		// Minimum execution time: {{underscore benchmark.min_execution_time}} nanoseconds.
+		{{#if (ne benchmark.base_calculated_proof_size "0")}}
+		Weight::from_parts({{underscore benchmark.base_weight}}, {{benchmark.base_calculated_proof_size}})
+		{{else}}
 		Weight::from_ref_time({{underscore benchmark.base_weight}})
+		{{/if}}
 		{{#each benchmark.component_weight as |cw|}}
 		// Standard Error: {{underscore cw.error}}
 		.saturating_add(Weight::from_ref_time({{underscore cw.slope}}).saturating_mul({{cw.name}}.into()))
@@ -51,6 +59,9 @@ impl<T: frame_system::Config> {{pallet}}::WeightInfo for WeightInfo<T> {
 	{{#each benchmark.component_writes as |cw|}}
 	.saturating_add(T::DbWeight::get().writes(({{cw.slope}}_u64).saturating_mul({{cw.name}}.into())))
 	{{/each}}
+	{{#each benchmark.component_calculated_proof_size as |cp|}}
+	.saturating_add(Weight::from_proof_size({{cp.slope}}).saturating_mul({{cp.name}}.into()))
+	{{/each}}
 	}
 	{{/each}}
 }
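The template changes above generate two-dimensional weights: a base `Weight::from_parts(ref_time, proof_size)` plus one proof-size slope per component. A freestanding sketch of that shape with a stand-in `Weight` type (the real type lives in Substrate's weight crates and the constants here are made up for illustration):

```rust
// Stand-in for Substrate's two-dimensional Weight type, mirroring the
// constructor names used by the template above (assumption: illustrative only).
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
struct Weight {
    ref_time: u64,
    proof_size: u64,
}

impl Weight {
    fn from_parts(ref_time: u64, proof_size: u64) -> Self {
        Self { ref_time, proof_size }
    }
    fn from_proof_size(proof_size: u64) -> Self {
        Self { ref_time: 0, proof_size }
    }
    fn saturating_add(self, o: Self) -> Self {
        Self {
            ref_time: self.ref_time.saturating_add(o.ref_time),
            proof_size: self.proof_size.saturating_add(o.proof_size),
        }
    }
    fn saturating_mul(self, m: u64) -> Self {
        Self {
            ref_time: self.ref_time.saturating_mul(m),
            proof_size: self.proof_size.saturating_mul(m),
        }
    }
}

// Shape of a generated weight function: base (time, pov) plus a per-component
// proof-size slope, as in the template's component_calculated_proof_size loop.
// The numbers are invented for the example.
fn example_weight(n: u64) -> Weight {
    Weight::from_parts(25_000_000, 3_593)
        .saturating_add(Weight::from_proof_size(2_591).saturating_mul(n))
}

fn main() {
    let w = example_weight(4);
    println!("ref_time={} proof_size={}", w.ref_time, w.proof_size);
}
```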
File diff suppressed because it is too large