Introduce trie level cache and remove state cache (#11407)

* trie state cache

* Also cache missing access on read.

* fix comp

* bis

* fix

* use has_lru

* remove local storage cache on size 0.

* No cache.

* local cache only

* trie cache and local cache

* storage cache (with local)

* trie cache no local cache

* Add state access benchmark

* Remove warnings etc

* Add trie cache benchmark

* No extra "clone" required

* Change benchmark to use multiple blocks

* Use patches

* Integrate shitty implementation

* More stuff

* Revert "Merge branch 'master' into trie_state_cache"

This reverts commit 947cd8e6d43fced10e21b76d5b92ffa57b57c318, reversing
changes made to 29ff036463.

* Improve benchmark

* Adapt to latest changes

* Adapt to changes in trie

* Add a test that uses iterator

* Start fixing it

* Remove obsolete file

* Make it compile

* Start rewriting the trie node cache

* More work on the cache

* More docs and code etc

* Make data cache an optional

* Tests

* Remove debug stuff

* Recorder

* Some docs and a simple test for the recorder

* Compile fixes

* Make it compile

* More fixes

* More fixes

* Fix fix fix

* Make sure cache and recorder work together for basic stuff

* Test that data caching and recording works

* Test `TrieDBMut` with caching

* Try something

* Fixes, fixes, fixes

* Forward the recorder

* Make it compile

* Use recorder in more places

* Switch to new `with_optional_recorder` fn

* Refactor and cleanups

* Move `ProvingBackend` tests

* Simplify

* Move over all functionality to the essence

* Fix compilation

* Implement estimate encoded size for StorageProof

* Start using the `cache` everywhere

* Use the cache everywhere

* Fix compilation

* Fix tests

* Adds `TrieBackendBuilder` and enhances the tests

* Ensure that recorder drain checks that values are found as expected

* Switch over to `TrieBackendBuilder`

* Start fixing the problem with child tries and recording

* Fix recording of child tries

* Make it compile

* Overwrite `storage_hash` in `TrieBackend`

* Add `storage_cache` to the benchmarks

* Fix `no_std` build

* Speed up cache lookup

* Extend the state access benchmark to also hash a runtime

* Fix build

* Fix compilation

* Rewrite value cache

* Add lru cache

* Ensure that the cache lru works

* Value cache should not be optional

* Add support for keeping the shared node cache in its bounds

* Make the cache configurable

* Check that the cache respects the bounds

* Adds a new test

* Fixes

* Docs and some renamings

* More docs

* Start using the new recorder

* Fix more code

* Take `self` argument

* Remove warnings

* Fix benchmark

* Fix accounting

* Rip off the state cache

* Start fixing fallout after removing the state cache

* Make it compile after trie changes

* Fix test

* Add some logging

* Some docs

* Some fixups and clean ups

* Fix benchmark

* Remove unneeded file

* Use git for patching

* Make CI happy

* Update primitives/trie/Cargo.toml

Co-authored-by: Koute <koute@users.noreply.github.com>

* Update primitives/state-machine/src/trie_backend.rs

Co-authored-by: cheme <emericchevalier.pro@gmail.com>

* Introduce new `AsTrieBackend` trait

* Make the LocalTrieCache not clonable

* Make it work in no_std and add docs

* Remove duplicate dependency

* Switch to ahash for better performance

* Speedup value cache merge

* Output errors on underflow

* Ensure the internal LRU map doesn't grow too much

* Use const fn to calculate the value cache element size

* Remove cache configuration

* Fix

* Clear the cache in between for more testing

* Try to come up with a failing test case

* Make the test fail

* Fix the child trie recording

* Make everything compile after the changes to trie

* Adapt to latest trie-db changes

* Fix on stable

* Update primitives/trie/src/cache.rs

Co-authored-by: cheme <emericchevalier.pro@gmail.com>

* Fix wrong merge

* Docs

* Fix warnings

* Cargo.lock

* Bump pin-project

* Fix warnings

* Switch to released crate version

* More fixes

* Make clippy and rustdocs happy

* More clippy

* Print error when using deprecated `--state-cache-size`

* 🤦

* Fixes

* Fix storage_hash linkings

* Update client/rpc/src/dev/mod.rs

Co-authored-by: Arkadiy Paronyan <arkady.paronyan@gmail.com>

* Review feedback

* encode bound

* Rework the shared value cache

Instead of using a `u64` to represent the key, we now use an `Arc<[u8]>`. This arc is also stored in
an extra `HashSet` to de-duplicate the keys across different storage roots. When the last usage of a
key is dropped from the LRU, we also remove the key from the `HashSet`.
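The de-duplication idea described above can be sketched as follows. This is a hypothetical, simplified model, not the actual `SharedValueCache` code in `sp-trie` (which also tracks sizes and does the bookkeeping via its LRU); the `ValueCache` type, its fields, the explicit `evict` method, and the fixed 32-byte root are illustrative assumptions:

```rust
use std::collections::{HashMap, HashSet};
use std::sync::Arc;

/// Simplified sketch: each distinct storage key is allocated once as an
/// `Arc<[u8]>` and shared by all cache entries, whatever storage root
/// they belong to.
struct ValueCache {
    /// One shared allocation per distinct key.
    known_keys: HashSet<Arc<[u8]>>,
    /// `(storage_root, key) -> value`; the `Arc`s here are clones of the
    /// allocations stored in `known_keys`.
    values: HashMap<([u8; 32], Arc<[u8]>), Vec<u8>>,
}

impl ValueCache {
    fn new() -> Self {
        Self { known_keys: HashSet::new(), values: HashMap::new() }
    }

    fn insert(&mut self, root: [u8; 32], key: &[u8], value: Vec<u8>) {
        // Re-use the existing allocation if the key is already known under
        // any storage root (`Arc<[u8]>: Borrow<[u8]>` allows lookup by slice).
        let shared: Arc<[u8]> = match self.known_keys.get(key) {
            Some(k) => k.clone(),
            None => {
                let k: Arc<[u8]> = Arc::from(key);
                self.known_keys.insert(k.clone());
                k
            },
        };
        self.values.insert((root, shared), value);
    }

    fn evict(&mut self, root: [u8; 32], key: &[u8]) {
        let Some(k) = self.known_keys.get(key).cloned() else { return };
        self.values.remove(&(root, k.clone()));
        // Two references left (the one in `known_keys` plus our local clone)
        // means no cached value uses this key any more, so drop it from the set.
        if Arc::strong_count(&k) == 2 {
            drop(k);
            self.known_keys.remove(key);
        }
    }
}
```

In the actual implementation the key is removed when its last usage falls out of the LRU rather than through an explicit `evict`, but the reference-counting idea is the same.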

* Improve the cache by merging the old and new solutions

* FMT

* Please stop coming back all the time :crying:

* Update primitives/trie/src/cache/shared_cache.rs

Co-authored-by: Arkadiy Paronyan <arkady.paronyan@gmail.com>

* Fixes

* Make clippy happy

* Ensure we don't deadlock

* Only use one lock to simplify the code

* Do not depend on `Hasher`

* Fix tests

* FMT

* Clippy 🤦

Co-authored-by: cheme <emericchevalier.pro@gmail.com>
Co-authored-by: Koute <koute@users.noreply.github.com>
Co-authored-by: Arkadiy Paronyan <arkady.paronyan@gmail.com>
Author: Bastian Köcher
Date: 2022-08-18 20:59:22 +02:00
Committed by: GitHub
parent: d46f6f0d34
commit: 73d9ae3284
55 changed files with 3977 additions and 1344 deletions
+22 -10
@@ -2798,9 +2798,9 @@ dependencies = [
[[package]]
name = "hashbrown"
version = "0.12.0"
version = "0.12.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8c21d40587b92fa6a6c6e3c1bdbf87d75511db5672f9c93175574b3a00df1758"
checksum = "8a9ee70c43aaf417c914396645a0fa852624801b24ebb7ae78fe8272889ac888"
dependencies = [
"ahash",
]
@@ -4352,7 +4352,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6566c70c1016f525ced45d7b7f97730a2bafb037c788211d0c186ef5b2189f0a"
dependencies = [
"hash-db",
"hashbrown 0.12.0",
"hashbrown 0.12.3",
"parity-util-mem",
]
@@ -6572,7 +6572,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c32561d248d352148124f036cac253a644685a21dc9fea383eb4907d7bd35a8f"
dependencies = [
"cfg-if 1.0.0",
"hashbrown 0.12.0",
"hashbrown 0.12.3",
"impl-trait-for-tuples",
"parity-util-mem-derive",
"parking_lot 0.12.0",
@@ -7505,7 +7505,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1f08c8062c1fe1253064043b8fc07bfea1b9702b71b4a86c11ea3588183b12e1"
dependencies = [
"bytecheck",
"hashbrown 0.12.0",
"hashbrown 0.12.3",
"ptr_meta",
"rend",
"rkyv_derive",
@@ -7884,7 +7884,9 @@ dependencies = [
name = "sc-client-db"
version = "0.10.0-dev"
dependencies = [
"criterion",
"hash-db",
"kitchensink-runtime",
"kvdb",
"kvdb-memorydb",
"kvdb-rocksdb",
@@ -7894,6 +7896,7 @@ dependencies = [
"parity-scale-codec",
"parking_lot 0.12.0",
"quickcheck",
"rand 0.8.4",
"sc-client-api",
"sc-state-db",
"sp-arithmetic",
@@ -9396,6 +9399,7 @@ dependencies = [
"sp-state-machine",
"sp-std",
"sp-test-primitives",
"sp-trie",
"sp-version",
"thiserror",
]
@@ -10060,6 +10064,7 @@ dependencies = [
"sp-trie",
"thiserror",
"tracing",
"trie-db",
"trie-root",
]
@@ -10157,16 +10162,23 @@ dependencies = [
name = "sp-trie"
version = "6.0.0"
dependencies = [
"ahash",
"criterion",
"hash-db",
"hashbrown 0.12.3",
"hex-literal",
"lazy_static",
"lru",
"memory-db",
"nohash-hasher",
"parity-scale-codec",
"parking_lot 0.12.0",
"scale-info",
"sp-core",
"sp-runtime",
"sp-std",
"thiserror",
"tracing",
"trie-bench",
"trie-db",
"trie-root",
@@ -10963,9 +10975,9 @@ checksum = "a7f741b240f1a48843f9b8e0444fb55fb2a4ff67293b50a9179dfd5ea67f8d41"
[[package]]
name = "trie-bench"
version = "0.30.0"
version = "0.31.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "57ecec5d10427b35e9ae374b059dccc0801d02d832617c04c78afc7a8c5c4a34"
checksum = "c5704f0d6130bd83608e4370c19e20c8a6ec03e80363e493d0234efca005265a"
dependencies = [
"criterion",
"hash-db",
@@ -10979,12 +10991,12 @@ dependencies = [
[[package]]
name = "trie-db"
version = "0.23.1"
version = "0.24.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d32d034c0d3db64b43c31de38e945f15b40cd4ca6d2dcfc26d4798ce8de4ab83"
checksum = "004e1e8f92535694b4cb1444dc5a8073ecf0815e3357f729638b9f8fc4062908"
dependencies = [
"hash-db",
"hashbrown 0.12.0",
"hashbrown 0.12.3",
"log",
"rustc-hex",
"smallvec",
+3 -2
@@ -20,7 +20,7 @@ use std::{collections::HashMap, sync::Arc};
use kvdb::KeyValueDB;
use node_primitives::Hash;
use sp_trie::{trie_types::TrieDBMutV1, TrieMut};
use sp_trie::{trie_types::TrieDBMutBuilderV1, TrieMut};
use crate::simple_trie::SimpleTrie;
@@ -43,7 +43,8 @@ pub fn generate_trie(
);
let mut trie = SimpleTrie { db, overlay: &mut overlay };
{
let mut trie_db = TrieDBMutV1::<crate::simple_trie::Hasher>::new(&mut trie, &mut root);
let mut trie_db =
TrieDBMutBuilderV1::<crate::simple_trie::Hasher>::new(&mut trie, &mut root).build();
for (key, value) in key_values {
trie_db.insert(&key, &value).expect("trie insertion failed");
}
+3 -4
@@ -23,7 +23,7 @@ use kvdb::KeyValueDB;
use lazy_static::lazy_static;
use rand::Rng;
use sp_state_machine::Backend as _;
use sp_trie::{trie_types::TrieDBMutV1, TrieMut as _};
use sp_trie::{trie_types::TrieDBMutBuilderV1, TrieMut as _};
use std::{borrow::Cow, collections::HashMap, sync::Arc};
use node_primitives::Hash;
@@ -180,7 +180,7 @@ impl core::Benchmark for TrieReadBenchmark {
let storage: Arc<dyn sp_state_machine::Storage<sp_core::Blake2Hasher>> =
Arc::new(Storage(db.open(self.database_type)));
let trie_backend = sp_state_machine::TrieBackend::new(storage, self.root);
let trie_backend = sp_state_machine::TrieBackendBuilder::new(storage, self.root).build();
for (warmup_key, warmup_value) in self.warmup_keys.iter() {
let value = trie_backend
.storage(&warmup_key[..])
@@ -286,8 +286,7 @@ impl core::Benchmark for TrieWriteBenchmark {
let mut overlay = HashMap::new();
let mut trie = SimpleTrie { db: kvdb.clone(), overlay: &mut overlay };
let mut trie_db_mut = TrieDBMutV1::from_existing(&mut trie, &mut new_root)
.expect("Failed to create TrieDBMut");
let mut trie_db_mut = TrieDBMutBuilderV1::from_existing(&mut trie, &mut new_root).build();
for (warmup_key, warmup_value) in self.warmup_keys.iter() {
let value = trie_db_mut
@@ -72,8 +72,7 @@ fn new_node(tokio_handle: Handle) -> node_cli::service::NewFullBase {
keystore: KeystoreConfig::InMemory,
keystore_remote: Default::default(),
database: DatabaseSource::RocksDb { path: root.join("db"), cache_size: 128 },
state_cache_size: 67108864,
state_cache_child_ratio: None,
trie_cache_maximum_size: Some(64 * 1024 * 1024),
state_pruning: Some(PruningMode::ArchiveAll),
blocks_pruning: BlocksPruning::All,
chain_spec: spec,
@@ -66,8 +66,7 @@ fn new_node(tokio_handle: Handle) -> node_cli::service::NewFullBase {
keystore: KeystoreConfig::InMemory,
keystore_remote: Default::default(),
database: DatabaseSource::RocksDb { path: root.join("db"), cache_size: 128 },
state_cache_size: 67108864,
state_cache_child_ratio: None,
trie_cache_maximum_size: Some(64 * 1024 * 1024),
state_pruning: Some(PruningMode::ArchiveAll),
blocks_pruning: BlocksPruning::All,
chain_spec: spec,
+1 -2
@@ -388,8 +388,7 @@ impl BenchDb {
keyring: &BenchKeyring,
) -> (Client, std::sync::Arc<Backend>, TaskExecutor) {
let db_config = sc_client_db::DatabaseSettings {
state_cache_size: 16 * 1024 * 1024,
state_cache_child_ratio: Some((0, 100)),
trie_cache_maximum_size: Some(16 * 1024 * 1024),
state_pruning: Some(PruningMode::ArchiveAll),
source: database_type.into_settings(dir.into()),
blocks_pruning: sc_client_db::BlocksPruning::All,
+8 -2
@@ -32,7 +32,8 @@ use sp_runtime::{
Justification, Justifications, StateVersion, Storage,
};
use sp_state_machine::{
ChildStorageCollection, IndexOperation, OffchainChangesCollection, StorageCollection,
backend::AsTrieBackend, ChildStorageCollection, IndexOperation, OffchainChangesCollection,
StorageCollection,
};
use sp_storage::{ChildInfo, StorageData, StorageKey};
use std::collections::{HashMap, HashSet};
@@ -448,7 +449,12 @@ pub trait Backend<Block: BlockT>: AuxStore + Send + Sync {
/// Associated blockchain backend type.
type Blockchain: BlockchainBackend<Block>;
/// Associated state backend type.
type State: StateBackend<HashFor<Block>> + Send;
type State: StateBackend<HashFor<Block>>
+ Send
+ AsTrieBackend<
HashFor<Block>,
TrieBackendStorage = <Self::State as StateBackend<HashFor<Block>>>::TrieBackendStorage,
>;
/// Offchain workers local storage.
type OffchainStorage: OffchainStorage;
@@ -855,10 +855,18 @@ mod tests {
.expect("header get error")
.expect("there should be header");
let extrinsics_num = 4;
let extrinsics = (0..extrinsics_num)
.map(|v| Extrinsic::IncludeData(vec![v as u8; 10]))
.collect::<Vec<_>>();
let extrinsics_num = 5;
let extrinsics = std::iter::once(
Transfer {
from: AccountKeyring::Alice.into(),
to: AccountKeyring::Bob.into(),
amount: 100,
nonce: 0,
}
.into_signed_tx(),
)
.chain((0..extrinsics_num - 1).map(|v| Extrinsic::IncludeData(vec![v as u8; 10])))
.collect::<Vec<_>>();
let block_limit = genesis_header.encoded_size() +
extrinsics
@@ -922,8 +930,9 @@ mod tests {
.unwrap();
// The block limit didn't changed, but we now include the proof in the estimation of the
// block size and thus, one less transaction should fit into the limit.
assert_eq!(block.extrinsics().len(), extrinsics_num - 2);
// block size and thus, only the `Transfer` will fit into the block. It reads more data
// than we have reserved in the block limit.
assert_eq!(block.extrinsics().len(), 1);
}
#[test]
@@ -73,8 +73,7 @@ impl ChainInfoCmd {
B: BlockT,
{
let db_config = sc_client_db::DatabaseSettings {
state_cache_size: config.state_cache_size,
state_cache_child_ratio: config.state_cache_child_ratio.map(|v| (v, 100)),
trie_cache_maximum_size: config.trie_cache_maximum_size,
state_pruning: config.state_pruning.clone(),
source: config.database.clone(),
blocks_pruning: config.blocks_pruning,
+5 -12
@@ -230,18 +230,12 @@ pub trait CliConfiguration<DCV: DefaultConfigurationValues = ()>: Sized {
})
}
/// Get the state cache size.
/// Get the trie cache maximum size.
///
/// By default this is retrieved from `ImportParams` if it is available. Otherwise its `0`.
fn state_cache_size(&self) -> Result<usize> {
Ok(self.import_params().map(|x| x.state_cache_size()).unwrap_or_default())
}
/// Get the state cache child ratio (if any).
///
/// By default this is `None`.
fn state_cache_child_ratio(&self) -> Result<Option<usize>> {
Ok(Default::default())
/// If `None` is returned the trie cache is disabled.
fn trie_cache_maximum_size(&self) -> Result<Option<usize>> {
Ok(self.import_params().map(|x| x.trie_cache_maximum_size()).unwrap_or_default())
}
/// Get the state pruning mode.
@@ -533,8 +527,7 @@ pub trait CliConfiguration<DCV: DefaultConfigurationValues = ()>: Sized {
keystore_remote,
keystore,
database: self.database_config(&config_dir, database_cache_size, database)?,
state_cache_size: self.state_cache_size()?,
state_cache_child_ratio: self.state_cache_child_ratio()?,
trie_cache_maximum_size: self.trie_cache_maximum_size()?,
state_pruning: self.state_pruning()?,
blocks_pruning: self.blocks_pruning()?,
wasm_method: self.wasm_method()?,
@@ -95,14 +95,30 @@ pub struct ImportParams {
pub execution_strategies: ExecutionStrategiesParams,
/// Specify the state cache size.
///
/// Providing `0` will disable the cache.
#[clap(long, value_name = "Bytes", default_value = "67108864")]
pub state_cache_size: usize,
pub trie_cache_size: usize,
/// DEPRECATED
///
/// Switch to `--trie-cache-size`.
#[clap(long)]
state_cache_size: Option<usize>,
}
impl ImportParams {
/// Specify the state cache size.
pub fn state_cache_size(&self) -> usize {
self.state_cache_size
/// Specify the trie cache maximum size.
pub fn trie_cache_maximum_size(&self) -> Option<usize> {
if self.state_cache_size.is_some() {
eprintln!("`--state-cache-size` was deprecated. Please switch to `--trie-cache-size`.");
}
if self.trie_cache_size == 0 {
None
} else {
Some(self.trie_cache_size)
}
}
/// Get the WASM execution method from the parameters
+11 -1
@@ -35,9 +35,12 @@ sp-state-machine = { version = "0.12.0", path = "../../primitives/state-machine"
sp-trie = { version = "6.0.0", path = "../../primitives/trie" }
[dev-dependencies]
criterion = "0.3.3"
kvdb-rocksdb = "0.15.1"
rand = "0.8.4"
tempfile = "3.1.0"
quickcheck = { version = "1.0.3", default-features = false }
tempfile = "3"
kitchensink-runtime = { path = "../../bin/node/runtime" }
sp-tracing = { version = "5.0.0", path = "../../primitives/tracing" }
substrate-test-runtime-client = { version = "2.0.0", path = "../../test-utils/runtime/client" }
@@ -46,3 +49,10 @@ default = []
test-helpers = []
runtime-benchmarks = []
rocksdb = ["kvdb-rocksdb"]
[[bench]]
name = "state_access"
harness = false
[lib]
bench = false
+312
@@ -0,0 +1,312 @@
// This file is part of Substrate.
// Copyright (C) 2021 Parity Technologies (UK) Ltd.
// SPDX-License-Identifier: GPL-3.0-or-later WITH Classpath-exception-2.0
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
use criterion::{criterion_group, criterion_main, BatchSize, Criterion};
use rand::{distributions::Uniform, rngs::StdRng, Rng, SeedableRng};
use sc_client_api::{Backend as _, BlockImportOperation, NewBlockState, StateBackend};
use sc_client_db::{Backend, BlocksPruning, DatabaseSettings, DatabaseSource, PruningMode};
use sp_core::H256;
use sp_runtime::{
generic::BlockId,
testing::{Block as RawBlock, ExtrinsicWrapper, Header},
StateVersion, Storage,
};
use tempfile::TempDir;
pub(crate) type Block = RawBlock<ExtrinsicWrapper<u64>>;
fn insert_blocks(db: &Backend<Block>, storage: Vec<(Vec<u8>, Vec<u8>)>) -> H256 {
let mut op = db.begin_operation().unwrap();
let mut header = Header {
number: 0,
parent_hash: Default::default(),
state_root: Default::default(),
digest: Default::default(),
extrinsics_root: Default::default(),
};
header.state_root = op
.set_genesis_state(
Storage {
top: vec![(
sp_core::storage::well_known_keys::CODE.to_vec(),
kitchensink_runtime::wasm_binary_unwrap().to_vec(),
)]
.into_iter()
.collect(),
children_default: Default::default(),
},
true,
StateVersion::V1,
)
.unwrap();
op.set_block_data(header.clone(), Some(vec![]), None, None, NewBlockState::Best)
.unwrap();
db.commit_operation(op).unwrap();
let mut number = 1;
let mut parent_hash = header.hash();
for i in 0..10 {
let mut op = db.begin_operation().unwrap();
db.begin_state_operation(&mut op, BlockId::Hash(parent_hash)).unwrap();
let mut header = Header {
number,
parent_hash,
state_root: Default::default(),
digest: Default::default(),
extrinsics_root: Default::default(),
};
let changes = storage
.iter()
.skip(i * 100_000)
.take(100_000)
.map(|(k, v)| (k.clone(), Some(v.clone())))
.collect::<Vec<_>>();
let (state_root, tx) = db.state_at(BlockId::Number(number - 1)).unwrap().storage_root(
changes.iter().map(|(k, v)| (k.as_slice(), v.as_deref())),
StateVersion::V1,
);
header.state_root = state_root;
op.update_db_storage(tx).unwrap();
op.update_storage(changes.clone(), Default::default()).unwrap();
op.set_block_data(header.clone(), Some(vec![]), None, None, NewBlockState::Best)
.unwrap();
db.commit_operation(op).unwrap();
number += 1;
parent_hash = header.hash();
}
parent_hash
}
enum BenchmarkConfig {
NoCache,
TrieNodeCache,
}
fn create_backend(config: BenchmarkConfig, temp_dir: &TempDir) -> Backend<Block> {
let path = temp_dir.path().to_owned();
let trie_cache_maximum_size = match config {
BenchmarkConfig::NoCache => None,
BenchmarkConfig::TrieNodeCache => Some(2 * 1024 * 1024 * 1024),
};
let settings = DatabaseSettings {
trie_cache_maximum_size,
state_pruning: Some(PruningMode::ArchiveAll),
source: DatabaseSource::ParityDb { path },
blocks_pruning: BlocksPruning::All,
};
Backend::new(settings, 100).expect("Creates backend")
}
/// Generate the storage that will be used for the benchmark
///
/// Returns the `Vec<key>` and the `Vec<(key, value)>`
fn generate_storage() -> (Vec<Vec<u8>>, Vec<(Vec<u8>, Vec<u8>)>) {
let mut rng = StdRng::seed_from_u64(353893213);
let mut storage = Vec::new();
let mut keys = Vec::new();
for _ in 0..1_000_000 {
let key_len: usize = rng.gen_range(32..128);
let key = (&mut rng)
.sample_iter(Uniform::new_inclusive(0, 255))
.take(key_len)
.collect::<Vec<u8>>();
let value_len: usize = rng.gen_range(20..60);
let value = (&mut rng)
.sample_iter(Uniform::new_inclusive(0, 255))
.take(value_len)
.collect::<Vec<u8>>();
keys.push(key.clone());
storage.push((key, value));
}
(keys, storage)
}
fn state_access_benchmarks(c: &mut Criterion) {
sp_tracing::try_init_simple();
let (keys, storage) = generate_storage();
let path = TempDir::new().expect("Creates temporary directory");
let block_hash = {
let backend = create_backend(BenchmarkConfig::NoCache, &path);
insert_blocks(&backend, storage.clone())
};
let mut group = c.benchmark_group("Reading entire state");
group.sample_size(20);
let mut bench_multiple_values = |config, desc, multiplier| {
let backend = create_backend(config, &path);
group.bench_function(desc, |b| {
b.iter_batched(
|| backend.state_at(BlockId::Hash(block_hash)).expect("Creates state"),
|state| {
for key in keys.iter().cycle().take(keys.len() * multiplier) {
let _ = state.storage(&key).expect("Doesn't fail").unwrap();
}
},
BatchSize::SmallInput,
)
});
};
bench_multiple_values(
BenchmarkConfig::TrieNodeCache,
"with trie node cache and reading each key once",
1,
);
bench_multiple_values(BenchmarkConfig::NoCache, "no cache and reading each key once", 1);
bench_multiple_values(
BenchmarkConfig::TrieNodeCache,
"with trie node cache and reading 4 times each key in a row",
4,
);
bench_multiple_values(
BenchmarkConfig::NoCache,
"no cache and reading 4 times each key in a row",
4,
);
group.finish();
let mut group = c.benchmark_group("Reading a single value");
let mut bench_single_value = |config, desc, multiplier| {
let backend = create_backend(config, &path);
group.bench_function(desc, |b| {
b.iter_batched(
|| backend.state_at(BlockId::Hash(block_hash)).expect("Creates state"),
|state| {
for key in keys.iter().take(1).cycle().take(multiplier) {
let _ = state.storage(&key).expect("Doesn't fail").unwrap();
}
},
BatchSize::SmallInput,
)
});
};
bench_single_value(
BenchmarkConfig::TrieNodeCache,
"with trie node cache and reading the key once",
1,
);
bench_single_value(BenchmarkConfig::NoCache, "no cache and reading the key once", 1);
bench_single_value(
BenchmarkConfig::TrieNodeCache,
"with trie node cache and reading 4 times each key in a row",
4,
);
bench_single_value(
BenchmarkConfig::NoCache,
"no cache and reading 4 times each key in a row",
4,
);
group.finish();
let mut group = c.benchmark_group("Hashing a value");
let mut bench_single_value = |config, desc, multiplier| {
let backend = create_backend(config, &path);
group.bench_function(desc, |b| {
b.iter_batched(
|| backend.state_at(BlockId::Hash(block_hash)).expect("Creates state"),
|state| {
for key in keys.iter().take(1).cycle().take(multiplier) {
let _ = state.storage_hash(&key).expect("Doesn't fail").unwrap();
}
},
BatchSize::SmallInput,
)
});
};
bench_single_value(
BenchmarkConfig::TrieNodeCache,
"with trie node cache and hashing the key once",
1,
);
bench_single_value(BenchmarkConfig::NoCache, "no cache and hashing the key once", 1);
bench_single_value(
BenchmarkConfig::TrieNodeCache,
"with trie node cache and hashing 4 times each key in a row",
4,
);
bench_single_value(
BenchmarkConfig::NoCache,
"no cache and hashing 4 times each key in a row",
4,
);
group.finish();
let mut group = c.benchmark_group("Hashing `:code`");
let mut bench_single_value = |config, desc| {
let backend = create_backend(config, &path);
group.bench_function(desc, |b| {
b.iter_batched(
|| backend.state_at(BlockId::Hash(block_hash)).expect("Creates state"),
|state| {
let _ = state
.storage_hash(sp_core::storage::well_known_keys::CODE)
.expect("Doesn't fail")
.unwrap();
},
BatchSize::SmallInput,
)
});
};
bench_single_value(BenchmarkConfig::TrieNodeCache, "with trie node cache");
bench_single_value(BenchmarkConfig::NoCache, "no cache");
group.finish();
}
criterion_group!(benches, state_access_benchmarks);
criterion_main!(benches);
+53 -54
@@ -18,13 +18,7 @@
//! State backend that's useful for benchmarking
use std::{
cell::{Cell, RefCell},
collections::HashMap,
sync::Arc,
};
use crate::storage_cache::{new_shared_cache, CachingState, SharedCache};
use crate::{DbState, DbStateBuilder};
use hash_db::{Hasher, Prefix};
use kvdb::{DBTransaction, KeyValueDB};
use linked_hash_map::LinkedHashMap;
@@ -37,40 +31,31 @@ use sp_runtime::{
StateVersion, Storage,
};
use sp_state_machine::{
backend::Backend as StateBackend, ChildStorageCollection, DBValue, ProofRecorder,
StorageCollection,
backend::Backend as StateBackend, ChildStorageCollection, DBValue, StorageCollection,
};
use sp_trie::{
cache::{CacheSize, SharedTrieCache},
prefixed_key, MemoryDB,
};
use std::{
cell::{Cell, RefCell},
collections::HashMap,
sync::Arc,
};
use sp_trie::{prefixed_key, MemoryDB};
type DbState<B> =
sp_state_machine::TrieBackend<Arc<dyn sp_state_machine::Storage<HashFor<B>>>, HashFor<B>>;
type State<B> = CachingState<DbState<B>, B>;
type State<B> = DbState<B>;
struct StorageDb<Block: BlockT> {
db: Arc<dyn KeyValueDB>,
proof_recorder: Option<ProofRecorder<Block::Hash>>,
_block: std::marker::PhantomData<Block>,
}
impl<Block: BlockT> sp_state_machine::Storage<HashFor<Block>> for StorageDb<Block> {
fn get(&self, key: &Block::Hash, prefix: Prefix) -> Result<Option<DBValue>, String> {
let prefixed_key = prefixed_key::<HashFor<Block>>(key, prefix);
if let Some(recorder) = &self.proof_recorder {
if let Some(v) = recorder.get(key) {
return Ok(v)
}
let backend_value = self
.db
.get(0, &prefixed_key)
.map_err(|e| format!("Database backend error: {:?}", e))?;
recorder.record(*key, backend_value.clone());
Ok(backend_value)
} else {
self.db
.get(0, &prefixed_key)
.map_err(|e| format!("Database backend error: {:?}", e))
}
self.db
.get(0, &prefixed_key)
.map_err(|e| format!("Database backend error: {:?}", e))
}
}
@@ -82,7 +67,6 @@ pub struct BenchmarkingState<B: BlockT> {
db: Cell<Option<Arc<dyn KeyValueDB>>>,
genesis: HashMap<Vec<u8>, (Vec<u8>, i32)>,
record: Cell<Vec<Vec<u8>>>,
shared_cache: SharedCache<B>, // shared cache is always empty
/// Key tracker for keys in the main trie.
/// We track the total number of reads and writes to these keys,
/// not de-duplicated for repeats.
@@ -93,9 +77,10 @@ pub struct BenchmarkingState<B: BlockT> {
/// not de-duplicated for repeats.
child_key_tracker: RefCell<LinkedHashMap<Vec<u8>, LinkedHashMap<Vec<u8>, TrackedStorageKey>>>,
whitelist: RefCell<Vec<TrackedStorageKey>>,
proof_recorder: Option<ProofRecorder<B::Hash>>,
proof_recorder: Option<sp_trie::recorder::Recorder<HashFor<B>>>,
proof_recorder_root: Cell<B::Hash>,
enable_tracking: bool,
shared_trie_cache: SharedTrieCache<HashFor<B>>,
}
impl<B: BlockT> BenchmarkingState<B> {
@@ -109,7 +94,7 @@ impl<B: BlockT> BenchmarkingState<B> {
let state_version = sp_runtime::StateVersion::default();
let mut root = B::Hash::default();
let mut mdb = MemoryDB::<HashFor<B>>::default();
sp_state_machine::TrieDBMutV1::<HashFor<B>>::new(&mut mdb, &mut root);
sp_trie::trie_types::TrieDBMutBuilderV1::<HashFor<B>>::new(&mut mdb, &mut root).build();
let mut state = BenchmarkingState {
state: RefCell::new(None),
@@ -118,13 +103,14 @@ impl<B: BlockT> BenchmarkingState<B> {
genesis: Default::default(),
genesis_root: Default::default(),
record: Default::default(),
shared_cache: new_shared_cache(0, (1, 10)),
main_key_tracker: Default::default(),
child_key_tracker: Default::default(),
whitelist: Default::default(),
proof_recorder: record_proof.then(Default::default),
proof_recorder_root: Cell::new(root),
enable_tracking,
// Enable the cache, but do not sync anything to the shared state.
shared_trie_cache: SharedTrieCache::new(CacheSize::Maximum(0)),
};
state.add_whitelist_to_tracker();
@@ -160,16 +146,13 @@ impl<B: BlockT> BenchmarkingState<B> {
recorder.reset();
self.proof_recorder_root.set(self.root.get());
}
let storage_db = Arc::new(StorageDb::<B> {
db,
proof_recorder: self.proof_recorder.clone(),
_block: Default::default(),
});
*self.state.borrow_mut() = Some(State::new(
DbState::<B>::new(storage_db, self.root.get()),
self.shared_cache.clone(),
None,
));
let storage_db = Arc::new(StorageDb::<B> { db, _block: Default::default() });
*self.state.borrow_mut() = Some(
DbStateBuilder::<B>::new(storage_db, self.root.get())
.with_optional_recorder(self.proof_recorder.clone())
.with_cache(self.shared_trie_cache.local_cache())
.build(),
);
Ok(())
}
@@ -324,6 +307,19 @@ impl<B: BlockT> StateBackend<HashFor<B>> for BenchmarkingState<B> {
.child_storage(child_info, key)
}
fn child_storage_hash(
&self,
child_info: &ChildInfo,
key: &[u8],
) -> Result<Option<B::Hash>, Self::Error> {
self.add_read_key(Some(child_info.storage_key()), key);
self.state
.borrow()
.as_ref()
.ok_or_else(state_err)?
.child_storage_hash(child_info, key)
}
fn exists_storage(&self, key: &[u8]) -> Result<bool, Self::Error> {
self.add_read_key(None, key);
self.state.borrow().as_ref().ok_or_else(state_err)?.exists_storage(key)
@@ -604,22 +600,25 @@ impl<B: BlockT> StateBackend<HashFor<B>> for BenchmarkingState<B> {
fn proof_size(&self) -> Option<u32> {
self.proof_recorder.as_ref().map(|recorder| {
let proof_size = recorder.estimate_encoded_size() as u32;
let proof = recorder.to_storage_proof();
let proof_recorder_root = self.proof_recorder_root.get();
if proof_recorder_root == Default::default() || proof_size == 1 {
// empty trie
proof_size
} else if let Some(size) = proof.encoded_compact_size::<HashFor<B>>(proof_recorder_root)
{
size as u32
} else {
panic!(
"proof rec root {:?}, root {:?}, genesis {:?}, rec_len {:?}",
self.proof_recorder_root.get(),
self.root.get(),
self.genesis_root,
proof_size,
);
if let Some(size) = proof.encoded_compact_size::<HashFor<B>>(proof_recorder_root) {
size as u32
} else {
panic!(
"proof rec root {:?}, root {:?}, genesis {:?}, rec_len {:?}",
self.proof_recorder_root.get(),
self.root.get(),
self.genesis_root,
proof_size,
);
}
}
})
}
+85 -96
@@ -34,8 +34,8 @@ pub mod bench;
mod children;
mod parity_db;
mod record_stats_state;
mod stats;
mod storage_cache;
#[cfg(any(feature = "rocksdb", test))]
mod upgrade;
mod utils;
@@ -51,8 +51,8 @@ use std::{
};
use crate::{
record_stats_state::RecordStatsState,
stats::StateUsageStats,
storage_cache::{new_shared_cache, CachingState, SharedCache, SyncingCachingState},
utils::{meta_keys, read_db, read_meta, DatabaseType, Meta},
};
use codec::{Decode, Encode};
@@ -83,10 +83,11 @@ use sp_runtime::{
Justification, Justifications, StateVersion, Storage,
};
use sp_state_machine::{
backend::Backend as StateBackend, ChildStorageCollection, DBValue, IndexOperation,
OffchainChangesCollection, StateMachineStats, StorageCollection, UsageInfo as StateUsageInfo,
backend::{AsTrieBackend, Backend as StateBackend},
ChildStorageCollection, DBValue, IndexOperation, OffchainChangesCollection, StateMachineStats,
StorageCollection, UsageInfo as StateUsageInfo,
};
use sp_trie::{prefixed_key, MemoryDB, PrefixedMemoryDB};
use sp_trie::{cache::SharedTrieCache, prefixed_key, MemoryDB, PrefixedMemoryDB};
// Re-export the Database trait so that one can pass an implementation of it.
pub use sc_state_db::PruningMode;
@@ -96,13 +97,16 @@ pub use bench::BenchmarkingState;
const CACHE_HEADERS: usize = 8;
/// Default value for storage cache child ratio.
const DEFAULT_CHILD_RATIO: (usize, usize) = (1, 10);
/// DB-backed patricia trie state, transaction type is an overlay of changes to commit.
pub type DbState<B> =
sp_state_machine::TrieBackend<Arc<dyn sp_state_machine::Storage<HashFor<B>>>, HashFor<B>>;
/// Builder for [`DbState`].
pub type DbStateBuilder<B> = sp_state_machine::TrieBackendBuilder<
Arc<dyn sp_state_machine::Storage<HashFor<B>>>,
HashFor<B>,
>;
/// Length of a [`DbHash`].
const DB_HASH_LEN: usize = 32;
@@ -174,6 +178,14 @@ impl<B: BlockT> StateBackend<HashFor<B>> for RefTrackingState<B> {
self.state.child_storage(child_info, key)
}
fn child_storage_hash(
&self,
child_info: &ChildInfo,
key: &[u8],
) -> Result<Option<B::Hash>, Self::Error> {
self.state.child_storage_hash(child_info, key)
}
fn exists_storage(&self, key: &[u8]) -> Result<bool, Self::Error> {
self.state.exists_storage(key)
}
@@ -272,12 +284,6 @@ impl<B: BlockT> StateBackend<HashFor<B>> for RefTrackingState<B> {
self.state.child_keys(child_info, prefix)
}
fn as_trie_backend(
&self,
) -> Option<&sp_state_machine::TrieBackend<Self::TrieBackendStorage, HashFor<B>>> {
self.state.as_trie_backend()
}
fn register_overlay_stats(&self, stats: &StateMachineStats) {
self.state.register_overlay_stats(stats);
}
@@ -287,12 +293,22 @@ impl<B: BlockT> StateBackend<HashFor<B>> for RefTrackingState<B> {
}
}
impl<B: BlockT> AsTrieBackend<HashFor<B>> for RefTrackingState<B> {
type TrieBackendStorage = <DbState<B> as StateBackend<HashFor<B>>>::TrieBackendStorage;
fn as_trie_backend(
&self,
) -> &sp_state_machine::TrieBackend<Self::TrieBackendStorage, HashFor<B>> {
self.state.as_trie_backend()
}
}
/// Database settings.
pub struct DatabaseSettings {
/// State cache size.
pub state_cache_size: usize,
/// Ratio of cache size dedicated to child tries.
pub state_cache_child_ratio: Option<(usize, usize)>,
/// The maximum trie cache size in bytes.
///
/// If `None` is given, the cache is disabled.
pub trie_cache_maximum_size: Option<usize>,
/// Requested state pruning mode.
pub state_pruning: Option<PruningMode>,
/// Where to find the database.
@@ -730,7 +746,7 @@ impl<Block: BlockT> HeaderMetadata<Block> for BlockchainDb<Block> {
/// Database transaction
pub struct BlockImportOperation<Block: BlockT> {
old_state: SyncingCachingState<RefTrackingState<Block>, Block>,
old_state: RecordStatsState<RefTrackingState<Block>, Block>,
db_updates: PrefixedMemoryDB<HashFor<Block>>,
storage_updates: StorageCollection,
child_storage_updates: ChildStorageCollection,
@@ -800,7 +816,7 @@ impl<Block: BlockT> BlockImportOperation<Block> {
impl<Block: BlockT> sc_client_api::backend::BlockImportOperation<Block>
for BlockImportOperation<Block>
{
type State = SyncingCachingState<RefTrackingState<Block>, Block>;
type State = RecordStatsState<RefTrackingState<Block>, Block>;
fn state(&self) -> ClientResult<Option<&Self::State>> {
Ok(Some(&self.old_state))
@@ -949,7 +965,7 @@ impl<Block: BlockT> EmptyStorage<Block> {
let mut root = Block::Hash::default();
let mut mdb = MemoryDB::<HashFor<Block>>::default();
// Both `TrieDBMut` versions (V0/V1) produce the same root for empty storage.
sp_state_machine::TrieDBMutV1::<HashFor<Block>>::new(&mut mdb, &mut root);
sp_trie::trie_types::TrieDBMutBuilderV1::<HashFor<Block>>::new(&mut mdb, &mut root).build();
EmptyStorage(root)
}
}
@@ -1009,13 +1025,13 @@ pub struct Backend<Block: BlockT> {
offchain_storage: offchain::LocalStorage,
blockchain: BlockchainDb<Block>,
canonicalization_delay: u64,
shared_cache: SharedCache<Block>,
import_lock: Arc<RwLock<()>>,
is_archive: bool,
blocks_pruning: BlocksPruning,
io_stats: FrozenForDuration<(kvdb::IoStats, StateUsageInfo)>,
state_usage: Arc<StateUsageStats>,
genesis_state: RwLock<Option<Arc<DbGenesisStorage<Block>>>>,
shared_trie_cache: Option<sp_trie::cache::SharedTrieCache<HashFor<Block>>>,
}
impl<Block: BlockT> Backend<Block> {
@@ -1053,8 +1069,7 @@ impl<Block: BlockT> Backend<Block> {
let db = kvdb_memorydb::create(crate::utils::NUM_COLUMNS);
let db = sp_database::as_database(db);
let db_setting = DatabaseSettings {
state_cache_size: 16777216,
state_cache_child_ratio: Some((50, 100)),
trie_cache_maximum_size: Some(16 * 1024 * 1024),
state_pruning: Some(PruningMode::blocks_pruning(blocks_pruning)),
source: DatabaseSource::Custom { db, require_create_flag: true },
blocks_pruning: BlocksPruning::Some(blocks_pruning),
@@ -1116,16 +1131,15 @@ impl<Block: BlockT> Backend<Block> {
offchain_storage,
blockchain,
canonicalization_delay,
shared_cache: new_shared_cache(
config.state_cache_size,
config.state_cache_child_ratio.unwrap_or(DEFAULT_CHILD_RATIO),
),
import_lock: Default::default(),
is_archive: is_archive_pruning,
io_stats: FrozenForDuration::new(std::time::Duration::from_secs(1)),
state_usage: Arc::new(StateUsageStats::new()),
blocks_pruning: config.blocks_pruning,
genesis_state: RwLock::new(None),
shared_trie_cache: config.trie_cache_maximum_size.map(|maximum_size| {
SharedTrieCache::new(sp_trie::cache::CacheSize::Maximum(maximum_size))
}),
};
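The backend now holds an optional `SharedTrieCache`, created with a byte budget via `CacheSize::Maximum`, and later reports `used_memory_size()` in the usage info. A minimal sketch of that idea — a byte-budgeted map that evicts the oldest entries to stay under the budget — with a hypothetical `BoundedCache` type (the real shared trie cache is concurrent and tracks trie nodes and value hashes, not plain key/value pairs):

```rust
use std::collections::{HashMap, VecDeque};

/// Illustrative byte-budgeted cache: insertions evict the oldest entries
/// until the new entry fits under `max_bytes`.
struct BoundedCache {
    max_bytes: usize,
    used_bytes: usize,
    map: HashMap<Vec<u8>, Vec<u8>>,
    order: VecDeque<Vec<u8>>, // insertion order, oldest first
}

impl BoundedCache {
    fn new(max_bytes: usize) -> Self {
        Self { max_bytes, used_bytes: 0, map: HashMap::new(), order: VecDeque::new() }
    }

    fn insert(&mut self, key: Vec<u8>, value: Vec<u8>) {
        let cost = key.len() + value.len();
        // Evict oldest entries until the new entry fits the budget.
        while self.used_bytes + cost > self.max_bytes {
            let Some(old) = self.order.pop_front() else { break };
            if let Some(v) = self.map.remove(&old) {
                self.used_bytes -= old.len() + v.len();
            }
        }
        self.used_bytes += cost;
        self.order.push_back(key.clone());
        self.map.insert(key, value);
    }

    fn get(&self, key: &[u8]) -> Option<&Vec<u8>> {
        self.map.get(key)
    }

    /// Mirrors the `used_memory_size()` reported in `UsageInfo`.
    fn used_memory_size(&self) -> usize {
        self.used_bytes
    }
}
```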
// Older DB versions have no last state key. Check if the state is available and set it.
@@ -1194,7 +1208,7 @@ impl<Block: BlockT> Backend<Block> {
(&r.number, &r.hash)
);
return Err(::sp_blockchain::Error::NotInFinalizedChain)
return Err(sp_blockchain::Error::NotInFinalizedChain)
}
retracted.push(r.hash);
@@ -1358,10 +1372,8 @@ impl<Block: BlockT> Backend<Block> {
// blocks are keyed by number + hash.
let lookup_key = utils::number_and_hash_to_lookup_key(number, hash)?;
let (enacted, retracted) = if pending_block.leaf_state.is_best() {
self.set_head_with_transaction(&mut transaction, parent_hash, (number, hash))?
} else {
(Default::default(), Default::default())
if pending_block.leaf_state.is_best() {
self.set_head_with_transaction(&mut transaction, parent_hash, (number, hash))?;
};
utils::insert_hash_to_key_mapping(&mut transaction, columns::KEY_LOOKUP, number, hash)?;
@@ -1488,14 +1500,22 @@ impl<Block: BlockT> Backend<Block> {
let header = &pending_block.header;
let is_best = pending_block.leaf_state.is_best();
debug!(target: "db",
debug!(
target: "db",
"DB Commit {:?} ({}), best={}, state={}, existing={}, finalized={}",
hash, number, is_best, operation.commit_state, existing_header, finalized,
hash,
number,
is_best,
operation.commit_state,
existing_header,
finalized,
);
self.state_usage.merge_sm(operation.old_state.usage_info());
// release state reference so that it can be finalized
let cache = operation.old_state.into_cache_changes();
// VERY IMPORTANT
drop(operation.old_state);
if finalized {
// TODO: ensure best chain contains this block.
@@ -1584,20 +1604,20 @@ impl<Block: BlockT> Backend<Block> {
is_finalized: finalized,
with_state: operation.commit_state,
});
Some((pending_block.header, number, hash, enacted, retracted, is_best, cache))
Some((pending_block.header, hash))
} else {
None
};
let cache_update = if let Some(set_head) = operation.set_head {
if let Some(set_head) = operation.set_head {
if let Some(header) =
sc_client_api::blockchain::HeaderBackend::header(&self.blockchain, set_head)?
{
let number = header.number();
let hash = header.hash();
let (enacted, retracted) =
self.set_head_with_transaction(&mut transaction, hash, (*number, hash))?;
self.set_head_with_transaction(&mut transaction, hash, (*number, hash))?;
meta_updates.push(MetaUpdate {
hash,
number: *number,
@@ -1605,40 +1625,24 @@ impl<Block: BlockT> Backend<Block> {
is_finalized: false,
with_state: false,
});
Some((enacted, retracted))
} else {
return Err(sp_blockchain::Error::UnknownBlock(format!(
"Cannot set head {:?}",
set_head
)))
}
} else {
None
};
}
self.storage.db.commit(transaction)?;
// Apply all in-memory state changes.
// Code beyond this point can't fail.
if let Some((header, number, hash, enacted, retracted, is_best, mut cache)) = imported {
if let Some((header, hash)) = imported {
trace!(target: "db", "DB Commit done {:?}", hash);
let header_metadata = CachedHeaderMetadata::from(&header);
self.blockchain.insert_header_metadata(header_metadata.hash, header_metadata);
cache_header(&mut self.blockchain.header_cache.lock(), hash, Some(header));
cache.sync_cache(
&enacted,
&retracted,
operation.storage_updates,
operation.child_storage_updates,
Some(hash),
Some(number),
is_best,
);
}
if let Some((enacted, retracted)) = cache_update {
self.shared_cache.write().sync(&enacted, &retracted);
}
for m in meta_updates {
@@ -1770,17 +1774,13 @@ impl<Block: BlockT> Backend<Block> {
Ok(())
}
fn empty_state(&self) -> ClientResult<SyncingCachingState<RefTrackingState<Block>, Block>> {
fn empty_state(&self) -> ClientResult<RecordStatsState<RefTrackingState<Block>, Block>> {
let root = EmptyStorage::<Block>::new().0; // Empty trie
let db_state = DbState::<Block>::new(self.storage.clone(), root);
let db_state = DbStateBuilder::<Block>::new(self.storage.clone(), root)
.with_optional_cache(self.shared_trie_cache.as_ref().map(|c| c.local_cache()))
.build();
let state = RefTrackingState::new(db_state, self.storage.clone(), None);
let caching_state = CachingState::new(state, self.shared_cache.clone(), None);
Ok(SyncingCachingState::new(
caching_state,
self.state_usage.clone(),
self.blockchain.meta.clone(),
self.import_lock.clone(),
))
Ok(RecordStatsState::new(state, None, self.state_usage.clone()))
}
}
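The `empty_state` change above shows the new builder shape: optional extras such as the local trie cache are attached fluently with `with_optional_cache(...)` and folded into the final backend. A minimal sketch of that pattern, with illustrative names that are not the real `sp-state-machine` API:

```rust
/// Stand-ins for the optional extras a backend can carry.
struct Cache;
struct Recorder;

/// The built backend owns whatever extras were configured.
struct Backend {
    root: [u8; 32],
    cache: Option<Cache>,
    recorder: Option<Recorder>,
}

struct BackendBuilder {
    root: [u8; 32],
    cache: Option<Cache>,
    recorder: Option<Recorder>,
}

impl BackendBuilder {
    fn new(root: [u8; 32]) -> Self {
        Self { root, cache: None, recorder: None }
    }

    /// Accepting an `Option` lets callers forward "cache if configured"
    /// without branching, as in `with_optional_cache(shared.map(|c| c.local_cache()))`.
    fn with_optional_cache(mut self, cache: Option<Cache>) -> Self {
        self.cache = cache;
        self
    }

    fn with_recorder(mut self, recorder: Recorder) -> Self {
        self.recorder = Some(recorder);
        self
    }

    fn build(self) -> Backend {
        Backend { root: self.root, cache: self.cache, recorder: self.recorder }
    }
}
```

The design choice mirrors the PR: by moving optional configuration into a builder, `TrieBackend` itself no longer needs a constructor per combination of cache/recorder.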
@@ -1902,16 +1902,13 @@ where
impl<Block: BlockT> sc_client_api::backend::Backend<Block> for Backend<Block> {
type BlockImportOperation = BlockImportOperation<Block>;
type Blockchain = BlockchainDb<Block>;
type State = SyncingCachingState<RefTrackingState<Block>, Block>;
type State = RecordStatsState<RefTrackingState<Block>, Block>;
type OffchainStorage = offchain::LocalStorage;
fn begin_operation(&self) -> ClientResult<Self::BlockImportOperation> {
let mut old_state = self.empty_state()?;
old_state.disable_syncing();
Ok(BlockImportOperation {
pending_block: None,
old_state,
old_state: self.empty_state()?,
db_updates: PrefixedMemoryDB::default(),
storage_updates: Default::default(),
child_storage_updates: Default::default(),
@@ -1934,7 +1931,6 @@ impl<Block: BlockT> sc_client_api::backend::Backend<Block> for Backend<Block> {
} else {
operation.old_state = self.state_at(block)?;
}
operation.old_state.disable_syncing();
operation.commit_state = true;
Ok(())
@@ -2035,8 +2031,9 @@ impl<Block: BlockT> sc_client_api::backend::Backend<Block> for Backend<Block> {
)
});
let database_cache = MemorySize::from_bytes(0);
let state_cache =
MemorySize::from_bytes(self.shared_cache.read().used_storage_cache_size());
let state_cache = MemorySize::from_bytes(
self.shared_trie_cache.as_ref().map_or(0, |c| c.used_memory_size()),
);
let state_db = self.storage.state_db.memory_info();
Some(UsageInfo {
@@ -2278,17 +2275,13 @@ impl<Block: BlockT> sc_client_api::backend::Backend<Block> for Backend<Block> {
};
if is_genesis {
if let Some(genesis_state) = &*self.genesis_state.read() {
let db_state = DbState::<Block>::new(genesis_state.clone(), genesis_state.root);
let root = genesis_state.root;
let db_state = DbStateBuilder::<Block>::new(genesis_state.clone(), root)
.with_optional_cache(self.shared_trie_cache.as_ref().map(|c| c.local_cache()))
.build();
let state = RefTrackingState::new(db_state, self.storage.clone(), None);
let caching_state = CachingState::new(state, self.shared_cache.clone(), None);
let mut state = SyncingCachingState::new(
caching_state,
self.state_usage.clone(),
self.blockchain.meta.clone(),
self.import_lock.clone(),
);
state.disable_syncing();
return Ok(state)
return Ok(RecordStatsState::new(state, None, self.state_usage.clone()))
}
}
@@ -2309,16 +2302,13 @@ impl<Block: BlockT> sc_client_api::backend::Backend<Block> for Backend<Block> {
}
if let Ok(()) = self.storage.state_db.pin(&hash) {
let root = hdr.state_root;
let db_state = DbState::<Block>::new(self.storage.clone(), root);
let db_state = DbStateBuilder::<Block>::new(self.storage.clone(), root)
.with_optional_cache(
self.shared_trie_cache.as_ref().map(|c| c.local_cache()),
)
.build();
let state = RefTrackingState::new(db_state, self.storage.clone(), Some(hash));
let caching_state =
CachingState::new(state, self.shared_cache.clone(), Some(hash));
Ok(SyncingCachingState::new(
caching_state,
self.state_usage.clone(),
self.blockchain.meta.clone(),
self.import_lock.clone(),
))
Ok(RecordStatsState::new(state, Some(hash), self.state_usage.clone()))
} else {
Err(sp_blockchain::Error::UnknownBlock(format!(
"State already discarded for {:?}",
@@ -2494,8 +2484,7 @@ pub(crate) mod tests {
let backend = Backend::<Block>::new(
DatabaseSettings {
state_cache_size: 16777216,
state_cache_child_ratio: Some((50, 100)),
trie_cache_maximum_size: Some(16 * 1024 * 1024),
state_pruning: Some(PruningMode::blocks_pruning(1)),
source: DatabaseSource::Custom { db: backing, require_create_flag: false },
blocks_pruning: BlocksPruning::All,
@@ -0,0 +1,230 @@
// This file is part of Substrate.
// Copyright (C) 2019-2022 Parity Technologies (UK) Ltd.
// SPDX-License-Identifier: GPL-3.0-or-later WITH Classpath-exception-2.0
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
//! Provides [`RecordStatsState`] for recording stats about state access.
use crate::stats::StateUsageStats;
use sp_core::storage::ChildInfo;
use sp_runtime::{
traits::{Block as BlockT, HashFor},
StateVersion,
};
use sp_state_machine::{
backend::{AsTrieBackend, Backend as StateBackend},
TrieBackend,
};
use std::sync::Arc;
/// State abstraction for recording stats about state access.
pub struct RecordStatsState<S, B: BlockT> {
/// Usage statistics
usage: StateUsageStats,
/// State machine registered stats
overlay_stats: sp_state_machine::StateMachineStats,
/// Backing state.
state: S,
/// The hash of the block this state belongs to.
block_hash: Option<B::Hash>,
/// The usage statistics of the backend. These will be updated on drop.
state_usage: Arc<StateUsageStats>,
}
impl<S, B: BlockT> std::fmt::Debug for RecordStatsState<S, B> {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "Block {:?}", self.block_hash)
}
}
impl<S, B: BlockT> Drop for RecordStatsState<S, B> {
fn drop(&mut self) {
self.state_usage.merge_sm(self.usage.take());
}
}
impl<S: StateBackend<HashFor<B>>, B: BlockT> RecordStatsState<S, B> {
/// Create a new instance wrapping a generic state and recording usage statistics.
pub(crate) fn new(
state: S,
block_hash: Option<B::Hash>,
state_usage: Arc<StateUsageStats>,
) -> Self {
RecordStatsState {
usage: StateUsageStats::new(),
overlay_stats: sp_state_machine::StateMachineStats::default(),
state,
block_hash,
state_usage,
}
}
}
impl<S: StateBackend<HashFor<B>>, B: BlockT> StateBackend<HashFor<B>> for RecordStatsState<S, B> {
type Error = S::Error;
type Transaction = S::Transaction;
type TrieBackendStorage = S::TrieBackendStorage;
fn storage(&self, key: &[u8]) -> Result<Option<Vec<u8>>, Self::Error> {
let value = self.state.storage(key)?;
self.usage.tally_key_read(key, value.as_ref(), false);
Ok(value)
}
fn storage_hash(&self, key: &[u8]) -> Result<Option<B::Hash>, Self::Error> {
self.state.storage_hash(key)
}
fn child_storage(
&self,
child_info: &ChildInfo,
key: &[u8],
) -> Result<Option<Vec<u8>>, Self::Error> {
let key = (child_info.storage_key().to_vec(), key.to_vec());
let value = self.state.child_storage(child_info, &key.1)?;
// just pass it through the usage counter
let value = self.usage.tally_child_key_read(&key, value, false);
Ok(value)
}
fn child_storage_hash(
&self,
child_info: &ChildInfo,
key: &[u8],
) -> Result<Option<B::Hash>, Self::Error> {
self.state.child_storage_hash(child_info, key)
}
fn exists_storage(&self, key: &[u8]) -> Result<bool, Self::Error> {
self.state.exists_storage(key)
}
fn exists_child_storage(
&self,
child_info: &ChildInfo,
key: &[u8],
) -> Result<bool, Self::Error> {
self.state.exists_child_storage(child_info, key)
}
fn apply_to_key_values_while<F: FnMut(Vec<u8>, Vec<u8>) -> bool>(
&self,
child_info: Option<&ChildInfo>,
prefix: Option<&[u8]>,
start_at: Option<&[u8]>,
f: F,
allow_missing: bool,
) -> Result<bool, Self::Error> {
self.state
.apply_to_key_values_while(child_info, prefix, start_at, f, allow_missing)
}
fn apply_to_keys_while<F: FnMut(&[u8]) -> bool>(
&self,
child_info: Option<&ChildInfo>,
prefix: Option<&[u8]>,
start_at: Option<&[u8]>,
f: F,
) {
self.state.apply_to_keys_while(child_info, prefix, start_at, f)
}
fn next_storage_key(&self, key: &[u8]) -> Result<Option<Vec<u8>>, Self::Error> {
self.state.next_storage_key(key)
}
fn next_child_storage_key(
&self,
child_info: &ChildInfo,
key: &[u8],
) -> Result<Option<Vec<u8>>, Self::Error> {
self.state.next_child_storage_key(child_info, key)
}
fn for_keys_with_prefix<F: FnMut(&[u8])>(&self, prefix: &[u8], f: F) {
self.state.for_keys_with_prefix(prefix, f)
}
fn for_key_values_with_prefix<F: FnMut(&[u8], &[u8])>(&self, prefix: &[u8], f: F) {
self.state.for_key_values_with_prefix(prefix, f)
}
fn for_child_keys_with_prefix<F: FnMut(&[u8])>(
&self,
child_info: &ChildInfo,
prefix: &[u8],
f: F,
) {
self.state.for_child_keys_with_prefix(child_info, prefix, f)
}
fn storage_root<'a>(
&self,
delta: impl Iterator<Item = (&'a [u8], Option<&'a [u8]>)>,
state_version: StateVersion,
) -> (B::Hash, Self::Transaction)
where
B::Hash: Ord,
{
self.state.storage_root(delta, state_version)
}
fn child_storage_root<'a>(
&self,
child_info: &ChildInfo,
delta: impl Iterator<Item = (&'a [u8], Option<&'a [u8]>)>,
state_version: StateVersion,
) -> (B::Hash, bool, Self::Transaction)
where
B::Hash: Ord,
{
self.state.child_storage_root(child_info, delta, state_version)
}
fn pairs(&self) -> Vec<(Vec<u8>, Vec<u8>)> {
self.state.pairs()
}
fn keys(&self, prefix: &[u8]) -> Vec<Vec<u8>> {
self.state.keys(prefix)
}
fn child_keys(&self, child_info: &ChildInfo, prefix: &[u8]) -> Vec<Vec<u8>> {
self.state.child_keys(child_info, prefix)
}
fn register_overlay_stats(&self, stats: &sp_state_machine::StateMachineStats) {
self.overlay_stats.add(stats);
}
fn usage_info(&self) -> sp_state_machine::UsageInfo {
let mut info = self.usage.take();
info.include_state_machine_states(&self.overlay_stats);
info
}
}
impl<S: StateBackend<HashFor<B>> + AsTrieBackend<HashFor<B>>, B: BlockT> AsTrieBackend<HashFor<B>>
for RecordStatsState<S, B>
{
type TrieBackendStorage = <S as AsTrieBackend<HashFor<B>>>::TrieBackendStorage;
fn as_trie_backend(&self) -> &TrieBackend<Self::TrieBackendStorage, HashFor<B>> {
self.state.as_trie_backend()
}
}
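`RecordStatsState` is a pass-through backend that tallies reads locally and merges them into the backend-wide statistics in its `Drop` impl. A self-contained sketch of that wrapper pattern, using a toy `Backend` trait and a shared counter in place of `StateUsageStats` (all names here are illustrative):

```rust
use std::cell::Cell;
use std::rc::Rc;

/// Toy state backend: just key/value lookup.
trait Backend {
    fn storage(&self, key: &[u8]) -> Option<Vec<u8>>;
}

/// Wrapper that counts reads, flushing the total into a shared counter
/// when dropped (mirroring `RecordStatsState`'s `Drop` impl).
struct RecordStats<S: Backend> {
    inner: S,
    reads: Cell<u64>,
    shared_reads: Rc<Cell<u64>>,
}

impl<S: Backend> RecordStats<S> {
    fn new(inner: S, shared_reads: Rc<Cell<u64>>) -> Self {
        Self { inner, reads: Cell::new(0), shared_reads }
    }
}

impl<S: Backend> Backend for RecordStats<S> {
    fn storage(&self, key: &[u8]) -> Option<Vec<u8>> {
        // Forward the read, then tally it (like `tally_key_read`).
        let value = self.inner.storage(key);
        self.reads.set(self.reads.get() + 1);
        value
    }
}

impl<S: Backend> Drop for RecordStats<S> {
    fn drop(&mut self) {
        // Merge local tallies into the backend-wide statistics.
        self.shared_reads.set(self.shared_reads.get() + self.reads.get());
    }
}

struct MapBackend(std::collections::HashMap<Vec<u8>, Vec<u8>>);
impl Backend for MapBackend {
    fn storage(&self, key: &[u8]) -> Option<Vec<u8>> {
        self.0.get(key).cloned()
    }
}
```

Interior mutability (`Cell`) matches the real code's need to tally from `&self` methods of the backend trait.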
@@ -32,6 +32,9 @@ pub enum Error {
/// The re-execution of the specified block failed.
#[error("Failed to re-execute the specified block")]
BlockExecutionFailed,
/// Failed to extract the proof.
#[error("Failed to extract the proof")]
ProofExtractionFailed,
/// The witness compaction failed.
#[error("Failed to compact the witness")]
WitnessCompactionFailed,
@@ -54,6 +57,8 @@ impl From<Error> for JsonRpseeError {
CallError::Custom(ErrorObject::owned(BASE_ERROR + 3, msg, None::<()>)),
Error::WitnessCompactionFailed =>
CallError::Custom(ErrorObject::owned(BASE_ERROR + 4, msg, None::<()>)),
Error::ProofExtractionFailed =>
CallError::Custom(ErrorObject::owned(BASE_ERROR + 5, msg, None::<()>)),
Error::UnsafeRpcCalled(e) => e.into(),
}
.into()
@@ -206,8 +206,7 @@ where
let (client, backend) = {
let db_config = sc_client_db::DatabaseSettings {
state_cache_size: config.state_cache_size,
state_cache_child_ratio: config.state_cache_child_ratio.map(|v| (v, 100)),
trie_cache_maximum_size: config.trie_cache_maximum_size,
state_pruning: config.state_pruning.clone(),
source: config.database.clone(),
blocks_pruning: config.blocks_pruning,
@@ -28,7 +28,7 @@ use sp_core::{
use sp_externalities::Extensions;
use sp_runtime::{generic::BlockId, traits::Block as BlockT};
use sp_state_machine::{
self, backend::Backend as _, ExecutionManager, ExecutionStrategy, Ext, OverlayedChanges,
backend::AsTrieBackend, ExecutionManager, ExecutionStrategy, Ext, OverlayedChanges,
StateMachine, StorageProof,
};
use std::{cell::RefCell, panic::UnwindSafe, result, sync::Arc};
@@ -224,15 +224,11 @@ where
match recorder {
Some(recorder) => {
let trie_state = state.as_trie_backend().ok_or_else(|| {
Box::new(sp_state_machine::ExecutionError::UnableToGenerateProof)
as Box<dyn sp_state_machine::Error>
})?;
let trie_state = state.as_trie_backend();
let backend = sp_state_machine::ProvingBackend::new_with_recorder(
trie_state,
recorder.clone(),
);
let backend = sp_state_machine::TrieBackendBuilder::wrap(&trie_state)
.with_recorder(recorder.clone())
.build();
let mut state_machine = StateMachine::new(
&backend,
@@ -294,10 +290,7 @@ where
) -> sp_blockchain::Result<(Vec<u8>, StorageProof)> {
let state = self.backend.state_at(*at)?;
let trie_backend = state.as_trie_backend().ok_or_else(|| {
Box::new(sp_state_machine::ExecutionError::UnableToGenerateProof)
as Box<dyn sp_state_machine::Error>
})?;
let trie_backend = state.as_trie_backend();
let state_runtime_code = sp_state_machine::backend::BackendRuntimeCode::new(trie_backend);
let runtime_code =
@@ -1327,7 +1327,7 @@ where
Some(&root),
)
.map_err(|e| sp_blockchain::Error::from_state(Box::new(e)))?;
let proving_backend = sp_state_machine::TrieBackend::new(db, root);
let proving_backend = sp_state_machine::TrieBackendBuilder::new(db, root).build();
let state = read_range_proof_check_with_child_on_proving_backend::<HashFor<Block>>(
&proving_backend,
start_key,
@@ -1689,6 +1689,10 @@ where
fn runtime_version_at(&self, at: &BlockId<Block>) -> Result<RuntimeVersion, sp_api::ApiError> {
CallExecutor::runtime_version(&self.executor, at).map_err(Into::into)
}
fn state_at(&self, at: &BlockId<Block>) -> Result<Self::StateBackend, sp_api::ApiError> {
self.state_at(at).map_err(Into::into)
}
}
/// NOTE: only use this implementation when you are sure there are NO consensus-level BlockImport
@@ -70,10 +70,10 @@ pub struct Configuration {
pub keystore_remote: Option<String>,
/// Configuration for the database.
pub database: DatabaseSource,
/// Size of internal state cache in Bytes
pub state_cache_size: usize,
/// Size in percent of cache size dedicated to child tries
pub state_cache_child_ratio: Option<usize>,
/// Maximum size of internal trie cache in bytes.
///
/// If `None` is given, the cache is disabled.
pub trie_cache_maximum_size: Option<usize>,
/// State pruning settings.
pub state_pruning: Option<PruningMode>,
/// Number of blocks to keep in the db.
@@ -1197,8 +1197,7 @@ fn doesnt_import_blocks_that_revert_finality() {
let backend = Arc::new(
Backend::new(
DatabaseSettings {
state_cache_size: 1 << 20,
state_cache_child_ratio: None,
trie_cache_maximum_size: Some(1 << 20),
state_pruning: Some(PruningMode::ArchiveAll),
blocks_pruning: BlocksPruning::All,
source: DatabaseSource::RocksDb { path: tmp.path().into(), cache_size: 1024 },
@@ -1424,8 +1423,7 @@ fn returns_status_for_pruned_blocks() {
let backend = Arc::new(
Backend::new(
DatabaseSettings {
state_cache_size: 1 << 20,
state_cache_child_ratio: None,
trie_cache_maximum_size: Some(1 << 20),
state_pruning: Some(PruningMode::blocks_pruning(1)),
blocks_pruning: BlocksPruning::All,
source: DatabaseSource::RocksDb { path: tmp.path().into(), cache_size: 1024 },
@@ -232,8 +232,7 @@ fn node_config<
keystore_remote: Default::default(),
keystore: KeystoreConfig::Path { path: root.join("key"), password: None },
database: DatabaseSource::RocksDb { path: root.join("db"), cache_size: 128 },
state_cache_size: 16777216,
state_cache_child_ratio: None,
trie_cache_maximum_size: Some(16 * 1024 * 1024),
state_pruning: Default::default(),
blocks_pruning: BlocksPruning::All,
chain_spec: Box::new((*spec).clone()),
@@ -39,8 +39,8 @@ use sp_session::{MembershipProof, ValidatorCount};
use sp_staking::SessionIndex;
use sp_std::prelude::*;
use sp_trie::{
trie_types::{TrieDB, TrieDBMutV0},
MemoryDB, Recorder, Trie, TrieMut, EMPTY_PREFIX,
trie_types::{TrieDBBuilder, TrieDBMutBuilderV0},
LayoutV0, MemoryDB, Recorder, Trie, TrieMut, EMPTY_PREFIX,
};
use frame_support::{
@@ -236,7 +236,7 @@ impl<T: Config> ProvingTrie<T> {
let mut root = Default::default();
{
let mut trie = TrieDBMutV0::new(&mut db, &mut root);
let mut trie = TrieDBMutBuilderV0::new(&mut db, &mut root).build();
for (i, (validator, full_id)) in validators.into_iter().enumerate() {
let i = i as u32;
let keys = match <Session<T>>::load_keys(&validator) {
@@ -278,19 +278,20 @@ impl<T: Config> ProvingTrie<T> {
/// Prove the full verification data for a given key and key ID.
pub fn prove(&self, key_id: KeyTypeId, key_data: &[u8]) -> Option<Vec<Vec<u8>>> {
let trie = TrieDB::new(&self.db, &self.root).ok()?;
let mut recorder = Recorder::new();
let val_idx = (key_id, key_data).using_encoded(|s| {
trie.get_with(s, &mut recorder)
.ok()?
.and_then(|raw| u32::decode(&mut &*raw).ok())
})?;
let mut recorder = Recorder::<LayoutV0<T::Hashing>>::new();
{
let trie =
TrieDBBuilder::new(&self.db, &self.root).with_recorder(&mut recorder).build();
let val_idx = (key_id, key_data).using_encoded(|s| {
trie.get(s).ok()?.and_then(|raw| u32::decode(&mut &*raw).ok())
})?;
val_idx.using_encoded(|s| {
trie.get_with(s, &mut recorder)
.ok()?
.and_then(|raw| <IdentificationTuple<T>>::decode(&mut &*raw).ok())
})?;
val_idx.using_encoded(|s| {
trie.get(s)
.ok()?
.and_then(|raw| <IdentificationTuple<T>>::decode(&mut &*raw).ok())
})?;
}
Some(recorder.drain().into_iter().map(|r| r.data).collect())
}
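The rewritten `prove` illustrates the new recorder API's borrow discipline: the trie is built *with* a mutable borrow of the recorder, so all reads happen inside an inner scope, and only after the trie is dropped can the recorder be drained into proof nodes. A self-contained sketch of that scoped-recorder pattern (toy `Trie`/`Recorder` types, not the `sp_trie` ones):

```rust
/// Illustrative recorder: collects every item touched by a lookup.
struct Recorder {
    nodes: Vec<Vec<u8>>,
}

/// Illustrative trie that holds a mutable borrow of the recorder,
/// like `TrieDBBuilder::new(..).with_recorder(&mut recorder).build()`.
struct Trie<'a> {
    data: &'a [(Vec<u8>, Vec<u8>)],
    recorder: &'a mut Recorder,
}

impl<'a> Trie<'a> {
    fn get(&mut self, key: &[u8]) -> Option<Vec<u8>> {
        let found = self
            .data
            .iter()
            .find(|(k, _)| k.as_slice() == key)
            .map(|(_, v)| v.clone());
        if let Some(v) = &found {
            // Record what the lookup touched.
            self.recorder.nodes.push(v.clone());
        }
        found
    }
}

fn prove(data: &[(Vec<u8>, Vec<u8>)], key: &[u8]) -> Vec<Vec<u8>> {
    let mut recorder = Recorder { nodes: Vec::new() };
    {
        // Inner scope: the trie holds the only mutable borrow of the recorder.
        let mut trie = Trie { data, recorder: &mut recorder };
        trie.get(key);
    }
    // Borrow ended; the accumulated nodes can now be drained into a proof.
    recorder.nodes
}
```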
@@ -303,7 +304,7 @@ impl<T: Config> ProvingTrie<T> {
// Check a proof contained within the current memory-db. Returns `None` if the
// nodes within the current `MemoryDB` are insufficient to query the item.
fn query(&self, key_id: KeyTypeId, key_data: &[u8]) -> Option<IdentificationTuple<T>> {
let trie = TrieDB::new(&self.db, &self.root).ok()?;
let trie = TrieDBBuilder::new(&self.db, &self.root).build();
let val_idx = (key_id, key_data)
.using_encoded(|s| trie.get(s))
.ok()?
@@ -20,6 +20,7 @@ sp-std = { version = "4.0.0", default-features = false, path = "../std" }
sp-runtime = { version = "6.0.0", default-features = false, path = "../runtime" }
sp-version = { version = "5.0.0", default-features = false, path = "../version" }
sp-state-machine = { version = "0.12.0", optional = true, path = "../state-machine" }
sp-trie = { version = "6.0.0", optional = true, path = "../trie" }
hash-db = { version = "0.15.2", optional = true }
thiserror = { version = "1.0.30", optional = true }
@@ -36,6 +37,7 @@ std = [
"sp-std/std",
"sp-runtime/std",
"sp-state-machine",
"sp-trie",
"sp-version/std",
"hash-db",
"thiserror",
@@ -277,9 +277,13 @@ fn generate_runtime_api_base_structures() -> Result<TokenStream> {
std::clone::Clone::clone(&self.recorder)
}
fn extract_proof(&mut self) -> std::option::Option<#crate_::StorageProof> {
std::option::Option::take(&mut self.recorder)
.map(|recorder| #crate_::ProofRecorder::<Block>::to_storage_proof(&recorder))
fn extract_proof(
&mut self,
) -> std::option::Option<#crate_::StorageProof> {
let recorder = std::option::Option::take(&mut self.recorder);
std::option::Option::map(recorder, |recorder| {
#crate_::ProofRecorder::<Block>::drain_storage_proof(recorder)
})
}
fn into_storage_changes(
@@ -104,7 +104,9 @@ fn implement_common_api_traits(block_type: TypePath, self_ty: Type) -> Result<To
unimplemented!("`record_proof` not implemented for runtime api mocks")
}
fn extract_proof(&mut self) -> Option<#crate_::StorageProof> {
fn extract_proof(
&mut self,
) -> Option<#crate_::StorageProof> {
unimplemented!("`extract_proof` not implemented for runtime api mocks")
}
@@ -99,7 +99,8 @@ pub use sp_runtime::{
#[doc(hidden)]
#[cfg(feature = "std")]
pub use sp_state_machine::{
Backend as StateBackend, InMemoryBackend, OverlayedChanges, StorageProof,
backend::AsTrieBackend, Backend as StateBackend, InMemoryBackend, OverlayedChanges,
StorageProof, TrieBackend, TrieBackendBuilder,
};
#[cfg(feature = "std")]
use sp_std::result;
@@ -454,7 +455,7 @@ pub use sp_api_proc_macro::mock_impl_runtime_apis;
/// A type that records all accessed trie nodes and generates a proof out of it.
#[cfg(feature = "std")]
pub type ProofRecorder<B> = sp_state_machine::ProofRecorder<<B as BlockT>::Hash>;
pub type ProofRecorder<B> = sp_trie::recorder::Recorder<HashFor<B>>;
/// A type that is used as cache for the storage transactions.
#[cfg(feature = "std")]
@@ -518,6 +519,8 @@ pub enum ApiError {
#[source]
error: codec::Error,
},
#[error("The given `StateBackend` isn't a `TrieBackend`.")]
StateBackendIsNotTrie,
#[error(transparent)]
Application(#[from] Box<dyn std::error::Error + Send + Sync>),
}
@@ -613,7 +616,7 @@ pub struct CallApiAtParams<'a, Block: BlockT, NC, Backend: StateBackend<HashFor<
#[cfg(feature = "std")]
pub trait CallApiAt<Block: BlockT> {
/// The state backend that is used to store the block states.
type StateBackend: StateBackend<HashFor<Block>>;
type StateBackend: StateBackend<HashFor<Block>> + AsTrieBackend<HashFor<Block>>;
/// Calls the given api function with the given encoded arguments at the given block and returns
/// the encoded result.
@@ -627,6 +630,9 @@ pub trait CallApiAt<Block: BlockT> {
/// Returns the runtime version at the given block.
fn runtime_version_at(&self, at: &BlockId<Block>) -> Result<RuntimeVersion, ApiError>;
/// Get the state `at` the given block.
fn state_at(&self, at: &BlockId<Block>) -> Result<Self::StateBackend, ApiError>;
}
/// Auxiliary wrapper that holds an api instance and binds it to the given lifetime.
@@ -35,6 +35,7 @@ hex-literal = "0.3.4"
pretty_assertions = "1.2.1"
rand = "0.7.2"
sp-runtime = { version = "6.0.0", path = "../runtime" }
trie-db = "0.24.0"
assert_matches = "1.5"
[features]
@@ -17,9 +17,11 @@
//! State machine backends. These manage the code and storage of contracts.
#[cfg(feature = "std")]
use crate::trie_backend::TrieBackend;
use crate::{
trie_backend::TrieBackend, trie_backend_essence::TrieBackendStorage, ChildStorageCollection,
StorageCollection, StorageKey, StorageValue, UsageInfo,
trie_backend_essence::TrieBackendStorage, ChildStorageCollection, StorageCollection,
StorageKey, StorageValue, UsageInfo,
};
use codec::Encode;
use hash_db::Hasher;
@@ -46,9 +48,7 @@ pub trait Backend<H: Hasher>: sp_std::fmt::Debug {
fn storage(&self, key: &[u8]) -> Result<Option<StorageValue>, Self::Error>;
/// Get keyed storage value hash or None if there is nothing associated.
fn storage_hash(&self, key: &[u8]) -> Result<Option<H::Out>, Self::Error> {
self.storage(key).map(|v| v.map(|v| H::hash(&v)))
}
fn storage_hash(&self, key: &[u8]) -> Result<Option<H::Out>, Self::Error>;
/// Get keyed child storage or None if there is nothing associated.
fn child_storage(
@@ -62,13 +62,11 @@ pub trait Backend<H: Hasher>: sp_std::fmt::Debug {
&self,
child_info: &ChildInfo,
key: &[u8],
) -> Result<Option<H::Out>, Self::Error> {
self.child_storage(child_info, key).map(|v| v.map(|v| H::hash(&v)))
}
) -> Result<Option<H::Out>, Self::Error>;
/// true if a key exists in storage.
fn exists_storage(&self, key: &[u8]) -> Result<bool, Self::Error> {
Ok(self.storage(key)?.is_some())
Ok(self.storage_hash(key)?.is_some())
}
/// true if a key exists in child storage.
@@ -77,7 +75,7 @@ pub trait Backend<H: Hasher>: sp_std::fmt::Debug {
child_info: &ChildInfo,
key: &[u8],
) -> Result<bool, Self::Error> {
Ok(self.child_storage(child_info, key)?.is_some())
Ok(self.child_storage_hash(child_info, key)?.is_some())
}
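The change above flips the default implementations: existence checks are now answered from the value-hash lookup rather than by fetching and hashing the full value, which a trie backend (and the new trie cache) can serve more cheaply. A small sketch of the idea with a toy `Store` (illustrative names, and `DefaultHasher` standing in for the trie hasher):

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::hash::{Hash, Hasher};

/// Toy store: the hash lookup is the primitive, existence derives from it.
struct Store {
    values: HashMap<Vec<u8>, Vec<u8>>,
}

impl Store {
    /// Returns only the value hash; a real trie backend can answer this
    /// without materializing the full value.
    fn storage_hash(&self, key: &[u8]) -> Option<u64> {
        self.values.get(key).map(|v| {
            let mut h = DefaultHasher::new();
            v.hash(&mut h);
            h.finish()
        })
    }

    /// `exists_storage` in terms of the hash, mirroring the new default.
    fn exists_storage(&self, key: &[u8]) -> bool {
        self.storage_hash(key).is_some()
    }
}
```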
/// Return the next key in storage in lexicographic order or `None` if there is no value.
@@ -175,10 +173,6 @@ pub trait Backend<H: Hasher>: sp_std::fmt::Debug {
all
}
/// Try convert into trie backend.
fn as_trie_backend(&self) -> Option<&TrieBackend<Self::TrieBackendStorage, H>> {
None
}
/// Calculate the storage root, with given delta over what is already stored
/// in the backend, and produce a "transaction" that can be used to commit.
/// Does include child storage updates.
@@ -273,6 +267,16 @@ pub trait Backend<H: Hasher>: sp_std::fmt::Debug {
}
}
/// Something that can be converted into a [`TrieBackend`].
#[cfg(feature = "std")]
pub trait AsTrieBackend<H: Hasher, C = sp_trie::cache::LocalTrieCache<H>> {
/// Type of trie backend storage.
type TrieBackendStorage: TrieBackendStorage<H>;
/// Return the type as [`TrieBackend`].
fn as_trie_backend(&self) -> &TrieBackend<Self::TrieBackendStorage, H, C>;
}
/// Trait that allows consolidating two transactions together.
pub trait Consolidate {
/// Consolidate two transactions into one.
@@ -19,6 +19,7 @@
use crate::{
backend::Backend, trie_backend::TrieBackend, StorageCollection, StorageKey, StorageValue,
TrieBackendBuilder,
};
use codec::Codec;
use hash_db::Hasher;
@@ -46,7 +47,7 @@ where
{
let db = GenericMemoryDB::default();
// V1 is the same as V0 for an empty trie.
TrieBackend::new(db, empty_trie_root::<LayoutV1<H>>())
TrieBackendBuilder::new(db, empty_trie_root::<LayoutV1<H>>()).build()
}
impl<H: Hasher, KF> TrieBackend<GenericMemoryDB<H, KF>, H>
@@ -87,14 +88,14 @@ where
pub fn update_backend(&self, root: H::Out, changes: GenericMemoryDB<H, KF>) -> Self {
let mut clone = self.backend_storage().clone();
clone.consolidate(changes);
Self::new(clone, root)
TrieBackendBuilder::new(clone, root).build()
}
/// Apply the given transaction to this backend and set the root to the given value.
pub fn apply_transaction(&mut self, root: H::Out, transaction: GenericMemoryDB<H, KF>) {
let mut storage = sp_std::mem::take(self).into_storage();
storage.consolidate(transaction);
*self = TrieBackend::new(storage, root);
*self = TrieBackendBuilder::new(storage, root).build();
}
/// Compare with another in-memory backend.
@@ -109,7 +110,7 @@ where
KF: KeyFunction<H> + Send + Sync,
{
fn clone(&self) -> Self {
TrieBackend::new(self.backend_storage().clone(), *self.root())
TrieBackendBuilder::new(self.backend_storage().clone(), *self.root()).build()
}
}
@@ -203,7 +204,7 @@ where
#[cfg(test)]
mod tests {
use super::*;
use crate::backend::Backend;
use crate::backend::{AsTrieBackend, Backend};
use sp_core::storage::StateVersion;
use sp_runtime::traits::BlakeTwo256;
@@ -218,7 +219,7 @@ mod tests {
vec![(Some(child_info.clone()), vec![(b"2".to_vec(), Some(b"3".to_vec()))])],
state_version,
);
let trie_backend = storage.as_trie_backend().unwrap();
let trie_backend = storage.as_trie_backend();
assert_eq!(trie_backend.child_storage(child_info, b"2").unwrap(), Some(b"3".to_vec()));
let storage_key = child_info.prefixed_storage_key();
assert!(trie_backend.storage(storage_key.as_slice()).unwrap().is_some());
@@ -29,8 +29,6 @@ mod ext;
mod in_memory_backend;
pub(crate) mod overlayed_changes;
#[cfg(feature = "std")]
mod proving_backend;
#[cfg(feature = "std")]
mod read_only;
mod stats;
#[cfg(feature = "std")]
@@ -134,7 +132,7 @@ pub use crate::{
StorageTransactionCache, StorageValue,
},
stats::{StateMachineStats, UsageInfo, UsageUnit},
trie_backend::TrieBackend,
trie_backend::{TrieBackend, TrieBackendBuilder},
trie_backend_essence::{Storage, TrieBackendStorage},
};
@@ -144,11 +142,9 @@ mod std_reexport {
basic::BasicExternalities,
error::{Error, ExecutionError},
in_memory_backend::{new_in_mem, new_in_mem_hash_key},
proving_backend::{
create_proof_check_backend, ProofRecorder, ProvingBackend, ProvingBackendRecorder,
},
read_only::{InspectState, ReadOnlyExternalities},
testing::TestExternalities,
trie_backend::create_proof_check_backend,
};
pub use sp_trie::{
trie_types::{TrieDBMutV0, TrieDBMutV1},
@@ -158,6 +154,8 @@ mod std_reexport {
#[cfg(feature = "std")]
mod execution {
use crate::backend::AsTrieBackend;
use super::*;
use codec::{Codec, Decode, Encode};
use hash_db::Hasher;
@@ -188,9 +186,6 @@ mod execution {
/// Trie backend with in-memory storage.
pub type InMemoryBackend<H> = TrieBackend<MemoryDB<H>, H>;
/// Proving Trie backend with in-memory storage.
pub type InMemoryProvingBackend<'a, H> = ProvingBackend<'a, MemoryDB<H>, H>;
/// Strategy for executing a call into the runtime.
#[derive(Copy, Clone, Eq, PartialEq, Debug)]
pub enum ExecutionStrategy {
@@ -562,15 +557,13 @@ mod execution {
runtime_code: &RuntimeCode,
) -> Result<(Vec<u8>, StorageProof), Box<dyn Error>>
where
B: Backend<H>,
B: AsTrieBackend<H>,
H: Hasher,
H::Out: Ord + 'static + codec::Codec,
Exec: CodeExecutor + Clone + 'static,
Spawn: SpawnNamed + Send + 'static,
{
let trie_backend = backend
.as_trie_backend()
.ok_or_else(|| Box::new(ExecutionError::UnableToGenerateProof) as Box<dyn Error>)?;
let trie_backend = backend.as_trie_backend();
prove_execution_on_trie_backend::<_, _, _, _>(
trie_backend,
overlay,
@@ -607,23 +600,31 @@ mod execution {
Exec: CodeExecutor + 'static + Clone,
Spawn: SpawnNamed + Send + 'static,
{
let proving_backend = proving_backend::ProvingBackend::new(trie_backend);
let mut sm = StateMachine::<_, H, Exec>::new(
&proving_backend,
overlay,
exec,
method,
call_data,
Extensions::default(),
runtime_code,
spawn_handle,
);
let proving_backend =
TrieBackendBuilder::wrap(trie_backend).with_recorder(Default::default()).build();
let result = {
let mut sm = StateMachine::<_, H, Exec>::new(
&proving_backend,
overlay,
exec,
method,
call_data,
Extensions::default(),
runtime_code,
spawn_handle,
);
sm.execute_using_consensus_failure_handler::<_, NeverNativeValue, fn() -> _>(
always_wasm(),
None,
)?
};
let proof = proving_backend
.extract_proof()
.expect("A recorder was set and thus, a storage proof can be extracted; qed");
let result = sm.execute_using_consensus_failure_handler::<_, NeverNativeValue, fn() -> _>(
always_wasm(),
None,
)?;
let proof = sm.backend.extract_proof();
Ok((result.into_encoded(), proof))
}
@@ -639,7 +640,7 @@ mod execution {
runtime_code: &RuntimeCode,
) -> Result<Vec<u8>, Box<dyn Error>>
where
H: Hasher,
H: Hasher + 'static,
Exec: CodeExecutor + Clone + 'static,
H::Out: Ord + 'static + codec::Codec,
Spawn: SpawnNamed + Send + 'static,
@@ -693,15 +694,13 @@ mod execution {
/// Generate storage read proof.
pub fn prove_read<B, H, I>(backend: B, keys: I) -> Result<StorageProof, Box<dyn Error>>
where
B: Backend<H>,
B: AsTrieBackend<H>,
H: Hasher,
H::Out: Ord + Codec,
I: IntoIterator,
I::Item: AsRef<[u8]>,
{
let trie_backend = backend
.as_trie_backend()
.ok_or_else(|| Box::new(ExecutionError::UnableToGenerateProof) as Box<dyn Error>)?;
let trie_backend = backend.as_trie_backend();
prove_read_on_trie_backend(trie_backend, keys)
}
@@ -829,13 +828,11 @@ mod execution {
start_at: &[Vec<u8>],
) -> Result<(StorageProof, u32), Box<dyn Error>>
where
B: Backend<H>,
B: AsTrieBackend<H>,
H: Hasher,
H::Out: Ord + Codec,
{
let trie_backend = backend
.as_trie_backend()
.ok_or_else(|| Box::new(ExecutionError::UnableToGenerateProof) as Box<dyn Error>)?;
let trie_backend = backend.as_trie_backend();
prove_range_read_with_child_with_size_on_trie_backend(trie_backend, size_limit, start_at)
}
@@ -856,7 +853,9 @@ mod execution {
return Err(Box::new("Invalid start of range."))
}
let proving_backend = proving_backend::ProvingBackend::<S, H>::new(trie_backend);
let recorder = sp_trie::recorder::Recorder::default();
let proving_backend =
TrieBackendBuilder::wrap(trie_backend).with_recorder(recorder.clone()).build();
let mut count = 0;
let mut child_roots = HashSet::new();
@@ -924,7 +923,7 @@ mod execution {
// do not add two child tries with the same root
true
}
} else if proving_backend.estimate_encoded_size() <= size_limit {
} else if recorder.estimate_encoded_size() <= size_limit {
count += 1;
true
} else {
@@ -948,7 +947,11 @@ mod execution {
start_at = None;
}
}
Ok((proving_backend.extract_proof(), count))
let proof = proving_backend
.extract_proof()
.expect("A recorder was set and thus, a storage proof can be extracted; qed");
Ok((proof, count))
}
/// Generate range storage read proof.
@@ -960,13 +963,11 @@ mod execution {
start_at: Option<&[u8]>,
) -> Result<(StorageProof, u32), Box<dyn Error>>
where
B: Backend<H>,
B: AsTrieBackend<H>,
H: Hasher,
H::Out: Ord + Codec,
{
let trie_backend = backend
.as_trie_backend()
.ok_or_else(|| Box::new(ExecutionError::UnableToGenerateProof) as Box<dyn Error>)?;
let trie_backend = backend.as_trie_backend();
prove_range_read_with_size_on_trie_backend(
trie_backend,
child_info,
@@ -989,7 +990,9 @@ mod execution {
H: Hasher,
H::Out: Ord + Codec,
{
let proving_backend = proving_backend::ProvingBackend::<S, H>::new(trie_backend);
let recorder = sp_trie::recorder::Recorder::default();
let proving_backend =
TrieBackendBuilder::wrap(trie_backend).with_recorder(recorder.clone()).build();
let mut count = 0;
proving_backend
.apply_to_key_values_while(
@@ -997,7 +1000,7 @@ mod execution {
prefix,
start_at,
|_key, _value| {
if count == 0 || proving_backend.estimate_encoded_size() <= size_limit {
if count == 0 || recorder.estimate_encoded_size() <= size_limit {
count += 1;
true
} else {
@@ -1007,7 +1010,11 @@ mod execution {
false,
)
.map_err(|e| Box::new(e) as Box<dyn Error>)?;
Ok((proving_backend.extract_proof(), count))
let proof = proving_backend
.extract_proof()
.expect("A recorder was set and thus, a storage proof can be extracted; qed");
Ok((proof, count))
}
/// Generate child storage read proof.
@@ -1017,15 +1024,13 @@ mod execution {
keys: I,
) -> Result<StorageProof, Box<dyn Error>>
where
B: Backend<H>,
B: AsTrieBackend<H>,
H: Hasher,
H::Out: Ord + Codec,
I: IntoIterator,
I::Item: AsRef<[u8]>,
{
let trie_backend = backend
.as_trie_backend()
.ok_or_else(|| Box::new(ExecutionError::UnableToGenerateProof) as Box<dyn Error>)?;
let trie_backend = backend.as_trie_backend();
prove_child_read_on_trie_backend(trie_backend, child_info, keys)
}
@@ -1041,13 +1046,17 @@ mod execution {
I: IntoIterator,
I::Item: AsRef<[u8]>,
{
let proving_backend = proving_backend::ProvingBackend::<_, H>::new(trie_backend);
let proving_backend =
TrieBackendBuilder::wrap(trie_backend).with_recorder(Default::default()).build();
for key in keys.into_iter() {
proving_backend
.storage(key.as_ref())
.map_err(|e| Box::new(e) as Box<dyn Error>)?;
}
Ok(proving_backend.extract_proof())
Ok(proving_backend
.extract_proof()
.expect("A recorder was set and thus, a storage proof can be extracted; qed"))
}
/// Generate storage read proof on pre-created trie backend.
@@ -1063,13 +1072,17 @@ mod execution {
I: IntoIterator,
I::Item: AsRef<[u8]>,
{
let proving_backend = proving_backend::ProvingBackend::<_, H>::new(trie_backend);
let proving_backend =
TrieBackendBuilder::wrap(trie_backend).with_recorder(Default::default()).build();
for key in keys.into_iter() {
proving_backend
.child_storage(child_info, key.as_ref())
.map_err(|e| Box::new(e) as Box<dyn Error>)?;
}
Ok(proving_backend.extract_proof())
Ok(proving_backend
.extract_proof()
.expect("A recorder was set and thus, a storage proof can be extracted; qed"))
}
/// Check storage read proof, generated by `prove_read` call.
@@ -1079,7 +1092,7 @@ mod execution {
keys: I,
) -> Result<HashMap<Vec<u8>, Option<Vec<u8>>>, Box<dyn Error>>
where
H: Hasher,
H: Hasher + 'static,
H::Out: Ord + Codec,
I: IntoIterator,
I::Item: AsRef<[u8]>,
@@ -1104,7 +1117,7 @@ mod execution {
start_at: &[Vec<u8>],
) -> Result<(KeyValueStates, usize), Box<dyn Error>>
where
H: Hasher,
H: Hasher + 'static,
H::Out: Ord + Codec,
{
let proving_backend = create_proof_check_backend::<H>(root, proof)?;
@@ -1121,7 +1134,7 @@ mod execution {
start_at: Option<&[u8]>,
) -> Result<(Vec<(Vec<u8>, Vec<u8>)>, bool), Box<dyn Error>>
where
H: Hasher,
H: Hasher + 'static,
H::Out: Ord + Codec,
{
let proving_backend = create_proof_check_backend::<H>(root, proof)?;
@@ -1142,7 +1155,7 @@ mod execution {
keys: I,
) -> Result<HashMap<Vec<u8>, Option<Vec<u8>>>, Box<dyn Error>>
where
H: Hasher,
H: Hasher + 'static,
H::Out: Ord + Codec,
I: IntoIterator,
I::Item: AsRef<[u8]>,
@@ -1346,7 +1359,7 @@ mod execution {
#[cfg(test)]
mod tests {
use super::{ext::Ext, *};
use super::{backend::AsTrieBackend, ext::Ext, *};
use crate::{execution::CallResult, in_memory_backend::new_in_mem_hash_key};
use assert_matches::assert_matches;
use codec::{Decode, Encode};
@@ -1358,6 +1371,7 @@ mod tests {
NativeOrEncoded, NeverNativeValue,
};
use sp_runtime::traits::BlakeTwo256;
use sp_trie::trie_types::{TrieDBMutBuilderV0, TrieDBMutBuilderV1};
use std::{
collections::{BTreeMap, HashMap},
panic::UnwindSafe,
@@ -1419,7 +1433,7 @@ mod tests {
execute_works_inner(StateVersion::V1);
}
fn execute_works_inner(state_version: StateVersion) {
let backend = trie_backend::tests::test_trie(state_version);
let backend = trie_backend::tests::test_trie(state_version, None, None);
let mut overlayed_changes = Default::default();
let wasm_code = RuntimeCode::empty();
@@ -1447,7 +1461,7 @@ mod tests {
execute_works_with_native_else_wasm_inner(StateVersion::V1);
}
fn execute_works_with_native_else_wasm_inner(state_version: StateVersion) {
let backend = trie_backend::tests::test_trie(state_version);
let backend = trie_backend::tests::test_trie(state_version, None, None);
let mut overlayed_changes = Default::default();
let wasm_code = RuntimeCode::empty();
@@ -1476,7 +1490,7 @@ mod tests {
}
fn dual_execution_strategy_detects_consensus_failure_inner(state_version: StateVersion) {
let mut consensus_failed = false;
let backend = trie_backend::tests::test_trie(state_version);
let backend = trie_backend::tests::test_trie(state_version, None, None);
let mut overlayed_changes = Default::default();
let wasm_code = RuntimeCode::empty();
@@ -1520,7 +1534,7 @@ mod tests {
};
// fetch execution proof from 'remote' full node
let mut remote_backend = trie_backend::tests::test_trie(state_version);
let mut remote_backend = trie_backend::tests::test_trie(state_version, None, None);
let remote_root = remote_backend.storage_root(std::iter::empty(), state_version).0;
let (remote_result, remote_proof) = prove_execution(
&mut remote_backend,
@@ -1560,7 +1574,7 @@ mod tests {
b"bbb".to_vec() => b"3".to_vec()
];
let state = InMemoryBackend::<BlakeTwo256>::from((initial, StateVersion::default()));
let backend = state.as_trie_backend().unwrap();
let backend = state.as_trie_backend();
let mut overlay = OverlayedChanges::default();
overlay.set_storage(b"aba".to_vec(), Some(b"1312".to_vec()));
@@ -1716,7 +1730,7 @@ mod tests {
let child_info = ChildInfo::new_default(b"sub1");
let child_info = &child_info;
let state = new_in_mem_hash_key::<BlakeTwo256>();
let backend = state.as_trie_backend().unwrap();
let backend = state.as_trie_backend();
let mut overlay = OverlayedChanges::default();
let mut cache = StorageTransactionCache::default();
let mut ext = Ext::new(&mut overlay, &mut cache, backend, None);
@@ -1732,7 +1746,7 @@ mod tests {
let reference_data = vec![b"data1".to_vec(), b"2".to_vec(), b"D3".to_vec(), b"d4".to_vec()];
let key = b"key".to_vec();
let state = new_in_mem_hash_key::<BlakeTwo256>();
let backend = state.as_trie_backend().unwrap();
let backend = state.as_trie_backend();
let mut overlay = OverlayedChanges::default();
let mut cache = StorageTransactionCache::default();
{
@@ -1769,7 +1783,7 @@ mod tests {
let key = b"events".to_vec();
let mut cache = StorageTransactionCache::default();
let state = new_in_mem_hash_key::<BlakeTwo256>();
let backend = state.as_trie_backend().unwrap();
let backend = state.as_trie_backend();
let mut overlay = OverlayedChanges::default();
// For example, block initialization with event.
@@ -1840,7 +1854,7 @@ mod tests {
let child_info = &child_info;
let missing_child_info = &missing_child_info;
// fetch read proof from 'remote' full node
let remote_backend = trie_backend::tests::test_trie(state_version);
let remote_backend = trie_backend::tests::test_trie(state_version, None, None);
let remote_root = remote_backend.storage_root(std::iter::empty(), state_version).0;
let remote_proof = prove_read(remote_backend, &[b"value2"]).unwrap();
let remote_proof = test_compact(remote_proof, &remote_root);
@@ -1857,7 +1871,7 @@ mod tests {
);
assert_eq!(local_result2, false);
// on child trie
let remote_backend = trie_backend::tests::test_trie(state_version);
let remote_backend = trie_backend::tests::test_trie(state_version, None, None);
let remote_root = remote_backend.storage_root(std::iter::empty(), state_version).0;
let remote_proof = prove_child_read(remote_backend, child_info, &[b"value3"]).unwrap();
let remote_proof = test_compact(remote_proof, &remote_root);
@@ -1924,8 +1938,8 @@ mod tests {
let trie: InMemoryBackend<BlakeTwo256> =
(storage.clone(), StateVersion::default()).into();
let trie_root = trie.root();
let backend = crate::ProvingBackend::new(&trie);
let trie_root = *trie.root();
let backend = TrieBackendBuilder::wrap(&trie).with_recorder(Default::default()).build();
let mut queries = Vec::new();
for c in 0..(5 + nb_child_trie / 2) {
// random existing query
@@ -1970,10 +1984,10 @@ mod tests {
}
}
let storage_proof = backend.extract_proof();
let storage_proof = backend.extract_proof().expect("Failed to extract proof");
let remote_proof = test_compact(storage_proof, &trie_root);
let proof_check =
create_proof_check_backend::<BlakeTwo256>(*trie_root, remote_proof).unwrap();
create_proof_check_backend::<BlakeTwo256>(trie_root, remote_proof).unwrap();
for (child_info, key, expected) in queries {
assert_eq!(
@@ -1987,7 +2001,7 @@ mod tests {
#[test]
fn prove_read_with_size_limit_works() {
let state_version = StateVersion::V0;
let remote_backend = trie_backend::tests::test_trie(state_version);
let remote_backend = trie_backend::tests::test_trie(state_version, None, None);
let remote_root = remote_backend.storage_root(::std::iter::empty(), state_version).0;
let (proof, count) =
prove_range_read_with_size(remote_backend, None, None, 0, None).unwrap();
@@ -1995,7 +2009,7 @@ mod tests {
assert_eq!(proof.into_memory_db::<BlakeTwo256>().drain().len(), 3);
assert_eq!(count, 1);
let remote_backend = trie_backend::tests::test_trie(state_version);
let remote_backend = trie_backend::tests::test_trie(state_version, None, None);
let (proof, count) =
prove_range_read_with_size(remote_backend, None, None, 800, Some(&[])).unwrap();
assert_eq!(proof.clone().into_memory_db::<BlakeTwo256>().drain().len(), 9);
@@ -2018,7 +2032,7 @@ mod tests {
assert_eq!(results.len() as u32, 101);
assert_eq!(completed, false);
let remote_backend = trie_backend::tests::test_trie(state_version);
let remote_backend = trie_backend::tests::test_trie(state_version, None, None);
let (proof, count) =
prove_range_read_with_size(remote_backend, None, None, 50000, Some(&[])).unwrap();
assert_eq!(proof.clone().into_memory_db::<BlakeTwo256>().drain().len(), 11);
@@ -2035,7 +2049,7 @@ mod tests {
let mut state_version = StateVersion::V0;
let (mut mdb, mut root) = trie_backend::tests::test_db(state_version);
{
let mut trie = TrieDBMutV0::from_existing(&mut mdb, &mut root).unwrap();
let mut trie = TrieDBMutBuilderV0::from_existing(&mut mdb, &mut root).build();
trie.insert(b"foo", vec![1u8; 1_000].as_slice()) // big inner hash
.expect("insert failed");
trie.insert(b"foo2", vec![3u8; 16].as_slice()) // no inner hash
@@ -2045,7 +2059,7 @@ mod tests {
}
let check_proof = |mdb, root, state_version| -> StorageProof {
let remote_backend = TrieBackend::new(mdb, root);
let remote_backend = TrieBackendBuilder::new(mdb, root).build();
let remote_root = remote_backend.storage_root(std::iter::empty(), state_version).0;
let remote_proof = prove_read(remote_backend, &[b"foo222"]).unwrap();
// check proof locally
@@ -2069,7 +2083,7 @@ mod tests {
// do switch
state_version = StateVersion::V1;
{
let mut trie = TrieDBMutV1::from_existing(&mut mdb, &mut root).unwrap();
let mut trie = TrieDBMutBuilderV1::from_existing(&mut mdb, &mut root).build();
trie.insert(b"foo222", vec![5u8; 100].as_slice()) // inner hash
.expect("insert failed");
// update with same value do change
@@ -2088,10 +2102,10 @@ mod tests {
#[test]
fn prove_range_with_child_works() {
let state_version = StateVersion::V0;
let remote_backend = trie_backend::tests::test_trie(state_version);
let remote_backend = trie_backend::tests::test_trie(state_version, None, None);
let remote_root = remote_backend.storage_root(std::iter::empty(), state_version).0;
let mut start_at = smallvec::SmallVec::<[Vec<u8>; 2]>::new();
let trie_backend = remote_backend.as_trie_backend().unwrap();
let trie_backend = remote_backend.as_trie_backend();
let max_iter = 1000;
let mut nb_loop = 0;
loop {
@@ -2138,7 +2152,7 @@ mod tests {
let child_info2 = ChildInfo::new_default(b"sub2");
// this root will be included in the proof
let child_info3 = ChildInfo::new_default(b"sub");
let remote_backend = trie_backend::tests::test_trie(state_version);
let remote_backend = trie_backend::tests::test_trie(state_version, None, None);
let long_vec: Vec<u8> = (0..1024usize).map(|_| 8u8).collect();
let (remote_root, transaction) = remote_backend.full_storage_root(
std::iter::empty(),
@@ -2170,9 +2184,9 @@ mod tests {
.into_iter(),
state_version,
);
let mut remote_storage = remote_backend.into_storage();
let mut remote_storage = remote_backend.backend_storage().clone();
remote_storage.consolidate(transaction);
let remote_backend = TrieBackend::new(remote_storage, remote_root);
let remote_backend = TrieBackendBuilder::new(remote_storage, remote_root).build();
let remote_proof = prove_child_read(remote_backend, &child_info1, &[b"key1"]).unwrap();
let size = remote_proof.encoded_size();
let remote_proof = test_compact(remote_proof, &remote_root);
@@ -2198,7 +2212,7 @@ mod tests {
let mut overlay = OverlayedChanges::default();
let mut transaction = {
let backend = test_trie(state_version);
let backend = test_trie(state_version, None, None);
let mut cache = StorageTransactionCache::default();
let mut ext = Ext::new(&mut overlay, &mut cache, &backend, None);
ext.set_child_storage(&child_info_1, b"abc".to_vec(), b"def".to_vec());
@@ -2224,7 +2238,7 @@ mod tests {
b"bbb".to_vec() => b"".to_vec()
];
let state = InMemoryBackend::<BlakeTwo256>::from((initial, StateVersion::default()));
let backend = state.as_trie_backend().unwrap();
let backend = state.as_trie_backend();
let mut overlay = OverlayedChanges::default();
overlay.start_transaction();
@@ -2255,7 +2269,7 @@ mod tests {
struct DummyExt(u32);
}
let backend = trie_backend::tests::test_trie(state_version);
let backend = trie_backend::tests::test_trie(state_version, None, None);
let mut overlayed_changes = Default::default();
let wasm_code = RuntimeCode::empty();
@@ -1,611 +0,0 @@
// This file is part of Substrate.
// Copyright (C) 2017-2022 Parity Technologies (UK) Ltd.
// SPDX-License-Identifier: Apache-2.0
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//! Proving state machine backend.
use crate::{
trie_backend::TrieBackend,
trie_backend_essence::{Ephemeral, TrieBackendEssence, TrieBackendStorage},
Backend, DBValue, Error, ExecutionError,
};
use codec::{Codec, Decode, Encode};
use hash_db::{HashDB, Hasher, Prefix, EMPTY_PREFIX};
use log::debug;
use parking_lot::RwLock;
use sp_core::storage::{ChildInfo, StateVersion};
pub use sp_trie::trie_types::TrieError;
use sp_trie::{
empty_child_trie_root, read_child_trie_value_with, read_trie_value_with, record_all_keys,
LayoutV1, MemoryDB, Recorder, StorageProof,
};
use std::{
collections::{hash_map::Entry, HashMap},
sync::Arc,
};
/// Patricia trie-based backend specialized in get value proofs.
pub struct ProvingBackendRecorder<'a, S: 'a + TrieBackendStorage<H>, H: 'a + Hasher> {
pub(crate) backend: &'a TrieBackendEssence<S, H>,
pub(crate) proof_recorder: &'a mut Recorder<H::Out>,
}
impl<'a, S, H> ProvingBackendRecorder<'a, S, H>
where
S: TrieBackendStorage<H>,
H: Hasher,
H::Out: Codec,
{
/// Produce proof for a key query.
pub fn storage(&mut self, key: &[u8]) -> Result<Option<Vec<u8>>, String> {
let mut read_overlay = S::Overlay::default();
let eph = Ephemeral::new(self.backend.backend_storage(), &mut read_overlay);
let map_e = |e| format!("Trie lookup error: {}", e);
// V1 is equivalent to V0 on read.
read_trie_value_with::<LayoutV1<H>, _, Ephemeral<S, H>>(
&eph,
self.backend.root(),
key,
&mut *self.proof_recorder,
)
.map_err(map_e)
}
/// Produce proof for a child key query.
pub fn child_storage(
&mut self,
child_info: &ChildInfo,
key: &[u8],
) -> Result<Option<Vec<u8>>, String> {
let storage_key = child_info.storage_key();
let root = self
.storage(storage_key)?
.and_then(|r| Decode::decode(&mut &r[..]).ok())
// V1 is equivalent to V0 on empty trie
.unwrap_or_else(empty_child_trie_root::<LayoutV1<H>>);
let mut read_overlay = S::Overlay::default();
let eph = Ephemeral::new(self.backend.backend_storage(), &mut read_overlay);
let map_e = |e| format!("Trie lookup error: {}", e);
// V1 is equivalent to V0 on read
read_child_trie_value_with::<LayoutV1<H>, _, _>(
child_info.keyspace(),
&eph,
root.as_ref(),
key,
&mut *self.proof_recorder,
)
.map_err(map_e)
}
/// Produce proof for the whole backend.
pub fn record_all_keys(&mut self) {
let mut read_overlay = S::Overlay::default();
let eph = Ephemeral::new(self.backend.backend_storage(), &mut read_overlay);
let mut iter = move || -> Result<(), Box<TrieError<H::Out>>> {
let root = self.backend.root();
// V1 is equivalent to V0 on read, and the recorder records the keys that are read.
record_all_keys::<LayoutV1<H>, _>(&eph, root, &mut *self.proof_recorder)
};
if let Err(e) = iter() {
debug!(target: "trie", "Error while recording all keys: {}", e);
}
}
}
#[derive(Default)]
struct ProofRecorderInner<Hash> {
/// All the records that we have stored so far.
records: HashMap<Hash, Option<DBValue>>,
/// The encoded size of all recorded values.
encoded_size: usize,
}
/// Global proof recorder, acts as a layer over a hash db for recording queried data.
#[derive(Clone, Default)]
pub struct ProofRecorder<Hash> {
inner: Arc<RwLock<ProofRecorderInner<Hash>>>,
}
impl<Hash: std::hash::Hash + Eq> ProofRecorder<Hash> {
/// Record the given `key` => `val` combination.
pub fn record(&self, key: Hash, val: Option<DBValue>) {
let mut inner = self.inner.write();
let encoded_size = if let Entry::Vacant(entry) = inner.records.entry(key) {
let encoded_size = val.as_ref().map(Encode::encoded_size).unwrap_or(0);
entry.insert(val);
encoded_size
} else {
0
};
inner.encoded_size += encoded_size;
}
/// Returns the value at the given `key`.
pub fn get(&self, key: &Hash) -> Option<Option<DBValue>> {
self.inner.read().records.get(key).cloned()
}
/// Returns the estimated encoded size of the proof.
///
/// The estimate may be bigger (by at most 4 bytes), but is never smaller than the actual
/// encoded proof.
pub fn estimate_encoded_size(&self) -> usize {
let inner = self.inner.read();
inner.encoded_size + codec::Compact(inner.records.len() as u32).encoded_size()
}
/// Convert into a [`StorageProof`].
pub fn to_storage_proof(&self) -> StorageProof {
StorageProof::new(
self.inner
.read()
.records
.iter()
.filter_map(|(_k, v)| v.as_ref().map(|v| v.to_vec())),
)
}
/// Reset the internal state.
pub fn reset(&self) {
let mut inner = self.inner.write();
inner.records.clear();
inner.encoded_size = 0;
}
}
/// Patricia trie-based backend which also tracks all touched storage trie values.
/// These can be sent to remote node and used as a proof of execution.
pub struct ProvingBackend<'a, S: 'a + TrieBackendStorage<H>, H: 'a + Hasher>(
TrieBackend<ProofRecorderBackend<'a, S, H>, H>,
);
/// Trie backend storage with its proof recorder.
pub struct ProofRecorderBackend<'a, S: 'a + TrieBackendStorage<H>, H: 'a + Hasher> {
backend: &'a S,
proof_recorder: ProofRecorder<H::Out>,
}
impl<'a, S: 'a + TrieBackendStorage<H>, H: 'a + Hasher> ProvingBackend<'a, S, H>
where
H::Out: Codec,
{
/// Create new proving backend.
pub fn new(backend: &'a TrieBackend<S, H>) -> Self {
let proof_recorder = Default::default();
Self::new_with_recorder(backend, proof_recorder)
}
/// Create new proving backend with the given recorder.
pub fn new_with_recorder(
backend: &'a TrieBackend<S, H>,
proof_recorder: ProofRecorder<H::Out>,
) -> Self {
let essence = backend.essence();
let root = *essence.root();
let recorder = ProofRecorderBackend { backend: essence.backend_storage(), proof_recorder };
ProvingBackend(TrieBackend::new(recorder, root))
}
/// Extract the gathered unordered proof.
pub fn extract_proof(&self) -> StorageProof {
self.0.essence().backend_storage().proof_recorder.to_storage_proof()
}
/// Returns the estimated encoded size of the proof.
///
/// The estimate may be bigger (by at most 4 bytes), but is never smaller than the actual
/// encoded proof.
pub fn estimate_encoded_size(&self) -> usize {
self.0.essence().backend_storage().proof_recorder.estimate_encoded_size()
}
/// Clear the proof recorded data.
pub fn clear_recorder(&self) {
self.0.essence().backend_storage().proof_recorder.reset()
}
}
impl<'a, S: 'a + TrieBackendStorage<H>, H: 'a + Hasher> TrieBackendStorage<H>
for ProofRecorderBackend<'a, S, H>
{
type Overlay = S::Overlay;
fn get(&self, key: &H::Out, prefix: Prefix) -> Result<Option<DBValue>, String> {
if let Some(v) = self.proof_recorder.get(key) {
return Ok(v)
}
let backend_value = self.backend.get(key, prefix)?;
self.proof_recorder.record(*key, backend_value.clone());
Ok(backend_value)
}
}
impl<'a, S: 'a + TrieBackendStorage<H>, H: 'a + Hasher> std::fmt::Debug
for ProvingBackend<'a, S, H>
{
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "ProvingBackend")
}
}
impl<'a, S, H> Backend<H> for ProvingBackend<'a, S, H>
where
S: 'a + TrieBackendStorage<H>,
H: 'a + Hasher,
H::Out: Ord + Codec,
{
type Error = String;
type Transaction = S::Overlay;
type TrieBackendStorage = S;
fn storage(&self, key: &[u8]) -> Result<Option<Vec<u8>>, Self::Error> {
self.0.storage(key)
}
fn child_storage(
&self,
child_info: &ChildInfo,
key: &[u8],
) -> Result<Option<Vec<u8>>, Self::Error> {
self.0.child_storage(child_info, key)
}
fn apply_to_key_values_while<F: FnMut(Vec<u8>, Vec<u8>) -> bool>(
&self,
child_info: Option<&ChildInfo>,
prefix: Option<&[u8]>,
start_at: Option<&[u8]>,
f: F,
allow_missing: bool,
) -> Result<bool, Self::Error> {
self.0.apply_to_key_values_while(child_info, prefix, start_at, f, allow_missing)
}
fn apply_to_keys_while<F: FnMut(&[u8]) -> bool>(
&self,
child_info: Option<&ChildInfo>,
prefix: Option<&[u8]>,
start_at: Option<&[u8]>,
f: F,
) {
self.0.apply_to_keys_while(child_info, prefix, start_at, f)
}
fn next_storage_key(&self, key: &[u8]) -> Result<Option<Vec<u8>>, Self::Error> {
self.0.next_storage_key(key)
}
fn next_child_storage_key(
&self,
child_info: &ChildInfo,
key: &[u8],
) -> Result<Option<Vec<u8>>, Self::Error> {
self.0.next_child_storage_key(child_info, key)
}
fn for_keys_with_prefix<F: FnMut(&[u8])>(&self, prefix: &[u8], f: F) {
self.0.for_keys_with_prefix(prefix, f)
}
fn for_key_values_with_prefix<F: FnMut(&[u8], &[u8])>(&self, prefix: &[u8], f: F) {
self.0.for_key_values_with_prefix(prefix, f)
}
fn for_child_keys_with_prefix<F: FnMut(&[u8])>(
&self,
child_info: &ChildInfo,
prefix: &[u8],
f: F,
) {
self.0.for_child_keys_with_prefix(child_info, prefix, f)
}
fn pairs(&self) -> Vec<(Vec<u8>, Vec<u8>)> {
self.0.pairs()
}
fn keys(&self, prefix: &[u8]) -> Vec<Vec<u8>> {
self.0.keys(prefix)
}
fn child_keys(&self, child_info: &ChildInfo, prefix: &[u8]) -> Vec<Vec<u8>> {
self.0.child_keys(child_info, prefix)
}
fn storage_root<'b>(
&self,
delta: impl Iterator<Item = (&'b [u8], Option<&'b [u8]>)>,
state_version: StateVersion,
) -> (H::Out, Self::Transaction)
where
H::Out: Ord,
{
self.0.storage_root(delta, state_version)
}
fn child_storage_root<'b>(
&self,
child_info: &ChildInfo,
delta: impl Iterator<Item = (&'b [u8], Option<&'b [u8]>)>,
state_version: StateVersion,
) -> (H::Out, bool, Self::Transaction)
where
H::Out: Ord,
{
self.0.child_storage_root(child_info, delta, state_version)
}
fn register_overlay_stats(&self, _stats: &crate::stats::StateMachineStats) {}
fn usage_info(&self) -> crate::stats::UsageInfo {
self.0.usage_info()
}
}
/// Create a backend used for checking the proof, using `H` as hasher.
///
/// `proof` and `root` must match, i.e. `root` must be the correct root of `proof` nodes.
pub fn create_proof_check_backend<H>(
root: H::Out,
proof: StorageProof,
) -> Result<TrieBackend<MemoryDB<H>, H>, Box<dyn Error>>
where
H: Hasher,
H::Out: Codec,
{
let db = proof.into_memory_db();
if db.contains(&root, EMPTY_PREFIX) {
Ok(TrieBackend::new(db, root))
} else {
Err(Box::new(ExecutionError::InvalidProof))
}
}
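The check above can be sketched with toy types: a proof is just a bag of nodes keyed by their hash, and validating it means building the in-memory db and confirming the claimed root node is actually present. The names and the `u64` "hash" below are illustrative stand-ins, not the real `StorageProof`/`MemoryDB` API.

```rust
use std::collections::HashMap;

// Illustrative hash; the real code uses the `Hasher` trait's output.
fn hash(data: &[u8]) -> u64 {
    use std::hash::{Hash, Hasher};
    let mut h = std::collections::hash_map::DefaultHasher::new();
    data.hash(&mut h);
    h.finish()
}

// Toy `create_proof_check_backend`: build a db from the proof nodes and
// reject the proof when it does not contain the claimed root node.
fn create_proof_check_backend(
    root: u64,
    proof_nodes: Vec<Vec<u8>>,
) -> Result<HashMap<u64, Vec<u8>>, String> {
    let db: HashMap<u64, Vec<u8>> =
        proof_nodes.into_iter().map(|n| (hash(&n), n)).collect();
    if db.contains_key(&root) {
        Ok(db)
    } else {
        Err("invalid proof: root not found".into())
    }
}

fn main() {
    let node = b"root node".to_vec();
    let root = hash(&node);
    assert!(create_proof_check_backend(root, vec![node]).is_ok());
    // An empty proof cannot contain the root, mirroring
    // `proof_is_invalid_when_does_not_contains_root` below.
    assert!(create_proof_check_backend(root, vec![]).is_err());
}
```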
#[cfg(test)]
mod tests {
use super::*;
use crate::{
proving_backend::create_proof_check_backend, trie_backend::tests::test_trie,
InMemoryBackend,
};
use sp_core::H256;
use sp_runtime::traits::BlakeTwo256;
use sp_trie::PrefixedMemoryDB;
fn test_proving(
trie_backend: &TrieBackend<PrefixedMemoryDB<BlakeTwo256>, BlakeTwo256>,
) -> ProvingBackend<PrefixedMemoryDB<BlakeTwo256>, BlakeTwo256> {
ProvingBackend::new(trie_backend)
}
#[test]
fn proof_is_empty_until_value_is_read() {
proof_is_empty_until_value_is_read_inner(StateVersion::V0);
proof_is_empty_until_value_is_read_inner(StateVersion::V1);
}
fn proof_is_empty_until_value_is_read_inner(test_hash: StateVersion) {
let trie_backend = test_trie(test_hash);
assert!(test_proving(&trie_backend).extract_proof().is_empty());
}
#[test]
fn proof_is_non_empty_after_value_is_read() {
proof_is_non_empty_after_value_is_read_inner(StateVersion::V0);
proof_is_non_empty_after_value_is_read_inner(StateVersion::V1);
}
fn proof_is_non_empty_after_value_is_read_inner(test_hash: StateVersion) {
let trie_backend = test_trie(test_hash);
let backend = test_proving(&trie_backend);
assert_eq!(backend.storage(b"key").unwrap(), Some(b"value".to_vec()));
assert!(!backend.extract_proof().is_empty());
}
#[test]
fn proof_is_invalid_when_does_not_contains_root() {
let result = create_proof_check_backend::<BlakeTwo256>(
H256::from_low_u64_be(1),
StorageProof::empty(),
);
assert!(result.is_err());
}
#[test]
fn passes_through_backend_calls() {
passes_through_backend_calls_inner(StateVersion::V0);
passes_through_backend_calls_inner(StateVersion::V1);
}
fn passes_through_backend_calls_inner(state_version: StateVersion) {
let trie_backend = test_trie(state_version);
let proving_backend = test_proving(&trie_backend);
assert_eq!(trie_backend.storage(b"key").unwrap(), proving_backend.storage(b"key").unwrap());
assert_eq!(trie_backend.pairs(), proving_backend.pairs());
let (trie_root, mut trie_mdb) =
trie_backend.storage_root(std::iter::empty(), state_version);
let (proving_root, mut proving_mdb) =
proving_backend.storage_root(std::iter::empty(), state_version);
assert_eq!(trie_root, proving_root);
assert_eq!(trie_mdb.drain(), proving_mdb.drain());
}
#[test]
fn proof_recorded_and_checked_top() {
proof_recorded_and_checked_inner(StateVersion::V0);
proof_recorded_and_checked_inner(StateVersion::V1);
}
fn proof_recorded_and_checked_inner(state_version: StateVersion) {
let size_content = 34; // above the hashable value threshold.
let value_range = 0..64;
let contents = value_range
.clone()
.map(|i| (vec![i], Some(vec![i; size_content])))
.collect::<Vec<_>>();
let in_memory = InMemoryBackend::<BlakeTwo256>::default();
let in_memory = in_memory.update(vec![(None, contents)], state_version);
let in_memory_root = in_memory.storage_root(std::iter::empty(), state_version).0;
value_range.clone().for_each(|i| {
assert_eq!(in_memory.storage(&[i]).unwrap().unwrap(), vec![i; size_content])
});
let trie = in_memory.as_trie_backend().unwrap();
let trie_root = trie.storage_root(std::iter::empty(), state_version).0;
assert_eq!(in_memory_root, trie_root);
value_range
.for_each(|i| assert_eq!(trie.storage(&[i]).unwrap().unwrap(), vec![i; size_content]));
let proving = ProvingBackend::new(trie);
assert_eq!(proving.storage(&[42]).unwrap().unwrap(), vec![42; size_content]);
let proof = proving.extract_proof();
let proof_check = create_proof_check_backend::<BlakeTwo256>(in_memory_root, proof).unwrap();
assert_eq!(proof_check.storage(&[42]).unwrap().unwrap(), vec![42; size_content]);
}
#[test]
fn proof_recorded_and_checked_with_child() {
proof_recorded_and_checked_with_child_inner(StateVersion::V0);
proof_recorded_and_checked_with_child_inner(StateVersion::V1);
}
fn proof_recorded_and_checked_with_child_inner(state_version: StateVersion) {
let child_info_1 = ChildInfo::new_default(b"sub1");
let child_info_2 = ChildInfo::new_default(b"sub2");
let child_info_1 = &child_info_1;
let child_info_2 = &child_info_2;
let contents = vec![
(None, (0..64).map(|i| (vec![i], Some(vec![i]))).collect::<Vec<_>>()),
(Some(child_info_1.clone()), (28..65).map(|i| (vec![i], Some(vec![i]))).collect()),
(Some(child_info_2.clone()), (10..15).map(|i| (vec![i], Some(vec![i]))).collect()),
];
let in_memory = InMemoryBackend::<BlakeTwo256>::default();
let in_memory = in_memory.update(contents, state_version);
let child_storage_keys = vec![child_info_1.to_owned(), child_info_2.to_owned()];
let in_memory_root = in_memory
.full_storage_root(
std::iter::empty(),
child_storage_keys.iter().map(|k| (k, std::iter::empty())),
state_version,
)
.0;
(0..64).for_each(|i| assert_eq!(in_memory.storage(&[i]).unwrap().unwrap(), vec![i]));
(28..65).for_each(|i| {
assert_eq!(in_memory.child_storage(child_info_1, &[i]).unwrap().unwrap(), vec![i])
});
(10..15).for_each(|i| {
assert_eq!(in_memory.child_storage(child_info_2, &[i]).unwrap().unwrap(), vec![i])
});
let trie = in_memory.as_trie_backend().unwrap();
let trie_root = trie.storage_root(std::iter::empty(), state_version).0;
assert_eq!(in_memory_root, trie_root);
(0..64).for_each(|i| assert_eq!(trie.storage(&[i]).unwrap().unwrap(), vec![i]));
let proving = ProvingBackend::new(trie);
assert_eq!(proving.storage(&[42]).unwrap().unwrap(), vec![42]);
let proof = proving.extract_proof();
let proof_check = create_proof_check_backend::<BlakeTwo256>(in_memory_root, proof).unwrap();
assert!(proof_check.storage(&[0]).is_err());
assert_eq!(proof_check.storage(&[42]).unwrap().unwrap(), vec![42]);
// Note: `[41]` is included in the proof because the trie nodes recorded for `[42]` also cover the neighboring key.
assert_eq!(proof_check.storage(&[41]).unwrap().unwrap(), vec![41]);
assert_eq!(proof_check.storage(&[64]).unwrap(), None);
let proving = ProvingBackend::new(trie);
assert_eq!(proving.child_storage(child_info_1, &[64]), Ok(Some(vec![64])));
let proof = proving.extract_proof();
let proof_check = create_proof_check_backend::<BlakeTwo256>(in_memory_root, proof).unwrap();
assert_eq!(proof_check.child_storage(child_info_1, &[64]).unwrap().unwrap(), vec![64]);
}
#[test]
fn storage_proof_encoded_size_estimation_works() {
storage_proof_encoded_size_estimation_works_inner(StateVersion::V0);
storage_proof_encoded_size_estimation_works_inner(StateVersion::V1);
}
fn storage_proof_encoded_size_estimation_works_inner(state_version: StateVersion) {
let trie_backend = test_trie(state_version);
let backend = test_proving(&trie_backend);
let check_estimation =
|backend: &ProvingBackend<'_, PrefixedMemoryDB<BlakeTwo256>, BlakeTwo256>| {
let storage_proof = backend.extract_proof();
let estimation =
backend.0.essence().backend_storage().proof_recorder.estimate_encoded_size();
assert_eq!(storage_proof.encoded_size(), estimation);
};
assert_eq!(backend.storage(b"key").unwrap(), Some(b"value".to_vec()));
check_estimation(&backend);
assert_eq!(backend.storage(b"value1").unwrap(), Some(vec![42]));
check_estimation(&backend);
assert_eq!(backend.storage(b"value2").unwrap(), Some(vec![24]));
check_estimation(&backend);
assert!(backend.storage(b"doesnotexist").unwrap().is_none());
check_estimation(&backend);
assert!(backend.storage(b"doesnotexist2").unwrap().is_none());
check_estimation(&backend);
}
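The invariant this test checks can be sketched in isolation: a recorder keeps a running size estimate that is updated on every recorded access, and the estimate must always equal the encoded size of the proof it would emit. The one-length-byte encoding below is illustrative only, not the real SCALE encoding used by `StorageProof`.

```rust
// Toy recorder with an incrementally maintained size estimate.
#[derive(Default)]
struct Recorder {
    entries: Vec<Vec<u8>>,
    estimated_size: usize,
}

impl Recorder {
    fn record(&mut self, value: Vec<u8>) {
        // Keep the estimate in sync on every access: 1 length byte + payload.
        self.estimated_size += 1 + value.len();
        self.entries.push(value);
    }

    fn estimate_encoded_size(&self) -> usize {
        self.estimated_size
    }

    // "Encode" the proof: length-prefixed entries (assumes entries < 256 bytes).
    fn to_proof(&self) -> Vec<u8> {
        let mut out = Vec::new();
        for e in &self.entries {
            out.push(e.len() as u8);
            out.extend_from_slice(e);
        }
        out
    }
}

fn main() {
    let mut r = Recorder::default();
    for v in [b"value".to_vec(), vec![42], vec![]] {
        r.record(v);
        // The estimate must match the actual encoded size after every access.
        assert_eq!(r.to_proof().len(), r.estimate_encoded_size());
    }
}
```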
#[test]
fn proof_recorded_for_same_execution_should_be_deterministic() {
let storage_changes = vec![
(H256::random(), Some(b"value1".to_vec())),
(H256::random(), Some(b"value2".to_vec())),
(H256::random(), Some(b"value3".to_vec())),
(H256::random(), Some(b"value4".to_vec())),
(H256::random(), Some(b"value5".to_vec())),
(H256::random(), Some(b"value6".to_vec())),
(H256::random(), Some(b"value7".to_vec())),
(H256::random(), Some(b"value8".to_vec())),
];
let proof_recorder =
ProofRecorder::<H256> { inner: Arc::new(RwLock::new(ProofRecorderInner::default())) };
storage_changes
.clone()
.into_iter()
.for_each(|(key, val)| proof_recorder.record(key, val));
let proof1 = proof_recorder.to_storage_proof();
let proof_recorder =
ProofRecorder::<H256> { inner: Arc::new(RwLock::new(ProofRecorderInner::default())) };
storage_changes
.into_iter()
.for_each(|(key, val)| proof_recorder.record(key, val));
let proof2 = proof_recorder.to_storage_proof();
assert_eq!(proof1, proof2);
}
}
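The determinism test above relies on the recorder emitting its entries in a canonical order, independent of the order in which they were recorded. A minimal sketch of that property, using a `BTreeMap` as the ordered store (the real `ProofRecorder` maps `H::Out` to values; these types are illustrative):

```rust
use std::collections::BTreeMap;

// Toy recorder: keyed by a BTreeMap, so the emitted "proof" does not depend
// on insertion order.
#[derive(Default)]
struct Recorder {
    inner: BTreeMap<Vec<u8>, Option<Vec<u8>>>,
}

impl Recorder {
    fn record(&mut self, key: Vec<u8>, value: Option<Vec<u8>>) {
        self.inner.entry(key).or_insert(value);
    }

    // Deterministic output: the recorded values in key order.
    fn to_storage_proof(&self) -> Vec<Vec<u8>> {
        self.inner.values().flatten().cloned().collect()
    }
}

fn main() {
    let changes = vec![
        (vec![3u8], Some(b"value3".to_vec())),
        (vec![1], Some(b"value1".to_vec())),
        (vec![2], Some(b"value2".to_vec())),
    ];
    let mut r1 = Recorder::default();
    for (k, v) in changes.clone() {
        r1.record(k, v);
    }
    // Record the same changes in reverse order.
    let mut r2 = Recorder::default();
    for (k, v) in changes.into_iter().rev() {
        r2.record(k, v);
    }
    assert_eq!(r1.to_storage_proof(), r2.to_storage_proof());
}
```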
@@ -23,7 +23,6 @@ use hash_db::Hasher;
use sp_core::{
storage::{ChildInfo, StateVersion, TrackedStorageKey},
traits::Externalities,
Blake2Hasher,
};
use sp_externalities::MultiRemovalResults;
use std::{
@@ -44,7 +43,10 @@ pub trait InspectState<H: Hasher, B: Backend<H>> {
fn inspect_state<F: FnOnce() -> R, R>(&self, f: F) -> R;
}
impl<H: Hasher, B: Backend<H>> InspectState<H, B> for B {
impl<H: Hasher, B: Backend<H>> InspectState<H, B> for B
where
H::Out: Encode,
{
fn inspect_state<F: FnOnce() -> R, R>(&self, f: F) -> R {
ReadOnlyExternalities::from(self).execute_with(f)
}
@@ -66,7 +68,10 @@ impl<'a, H: Hasher, B: 'a + Backend<H>> From<&'a B> for ReadOnlyExternalities<'a
}
}
impl<'a, H: Hasher, B: 'a + Backend<H>> ReadOnlyExternalities<'a, H, B> {
impl<'a, H: Hasher, B: 'a + Backend<H>> ReadOnlyExternalities<'a, H, B>
where
H::Out: Encode,
{
/// Execute the given closure while `self` is set as externalities.
///
/// Returns the result of the given closure.
@@ -75,7 +80,10 @@ impl<'a, H: Hasher, B: 'a + Backend<H>> ReadOnlyExternalities<'a, H, B> {
}
}
impl<'a, H: Hasher, B: 'a + Backend<H>> Externalities for ReadOnlyExternalities<'a, H, B> {
impl<'a, H: Hasher, B: 'a + Backend<H>> Externalities for ReadOnlyExternalities<'a, H, B>
where
H::Out: Encode,
{
fn set_offchain_storage(&mut self, _key: &[u8], _value: Option<&[u8]>) {
panic!("Should not be used in read-only externalities!")
}
@@ -87,7 +95,10 @@ impl<'a, H: Hasher, B: 'a + Backend<H>> Externalities for ReadOnlyExternalities<
}
fn storage_hash(&self, key: &[u8]) -> Option<Vec<u8>> {
self.storage(key).map(|v| Blake2Hasher::hash(&v).encode())
self.backend
.storage_hash(key)
.expect("Backend failed for storage_hash in ReadOnlyExternalities")
.map(|h| h.encode())
}
fn child_storage(&self, child_info: &ChildInfo, key: &[u8]) -> Option<StorageValue> {
@@ -97,7 +108,10 @@ impl<'a, H: Hasher, B: 'a + Backend<H>> Externalities for ReadOnlyExternalities<
}
fn child_storage_hash(&self, child_info: &ChildInfo, key: &[u8]) -> Option<Vec<u8>> {
self.child_storage(child_info, key).map(|v| Blake2Hasher::hash(&v).encode())
self.backend
.child_storage_hash(child_info, key)
.expect("Backend failed for child_storage_hash in ReadOnlyExternalities")
.map(|h| h.encode())
}
fn next_storage_key(&self, key: &[u8]) -> Option<StorageKey> {
@@ -24,7 +24,7 @@ use std::{
use crate::{
backend::Backend, ext::Ext, InMemoryBackend, OverlayedChanges, StorageKey,
StorageTransactionCache, StorageValue,
StorageTransactionCache, StorageValue, TrieBackendBuilder,
};
use hash_db::Hasher;
@@ -41,8 +41,9 @@ use sp_externalities::{Extension, ExtensionStore, Extensions};
use sp_trie::StorageProof;
/// Simple HashMap-based Externalities impl.
pub struct TestExternalities<H: Hasher>
pub struct TestExternalities<H>
where
H: Hasher + 'static,
H::Out: codec::Codec + Ord,
{
/// The overlay changed storage.
@@ -58,8 +59,9 @@ where
pub state_version: StateVersion,
}
impl<H: Hasher> TestExternalities<H>
impl<H> TestExternalities<H>
where
H: Hasher + 'static,
H::Out: Ord + 'static + codec::Codec,
{
/// Get externalities implementation.
@@ -202,7 +204,9 @@ where
/// This implementation will wipe the proof recorded in between calls. Consecutive calls will
/// get their own proof from scratch.
pub fn execute_and_prove<R>(&mut self, execute: impl FnOnce() -> R) -> (R, StorageProof) {
let proving_backend = crate::InMemoryProvingBackend::new(&self.backend);
let proving_backend = TrieBackendBuilder::wrap(&self.backend)
.with_recorder(Default::default())
.build();
let mut proving_ext = Ext::new(
&mut self.overlay,
&mut self.storage_transaction_cache,
@@ -211,7 +215,7 @@ where
);
let outcome = sp_externalities::set_and_run_with_externalities(&mut proving_ext, execute);
let proof = proving_backend.extract_proof();
let proof = proving_backend.extract_proof().expect("Failed to extract storage proof");
(outcome, proof)
}
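The `execute_and_prove` shape above — wrap the backend with a *fresh* recorder on every call, run the closure, then extract the proof — can be sketched with toy types. `Backend`, `ProvingBackend`, and the `Vec`-of-keys "proof" below are illustrative, not the sp-state-machine API.

```rust
use std::cell::RefCell;
use std::collections::BTreeSet;

struct Backend {
    data: Vec<(Vec<u8>, Vec<u8>)>,
}

struct ProvingBackend<'a> {
    backend: &'a Backend,
    recorder: RefCell<BTreeSet<Vec<u8>>>,
}

impl<'a> ProvingBackend<'a> {
    fn storage(&self, key: &[u8]) -> Option<Vec<u8>> {
        // Record every key accessed during execution.
        self.recorder.borrow_mut().insert(key.to_vec());
        self.backend.data.iter().find(|(k, _)| k == key).map(|(_, v)| v.clone())
    }

    fn extract_proof(self) -> Vec<Vec<u8>> {
        self.recorder.into_inner().into_iter().collect()
    }
}

fn execute_and_prove<R>(
    backend: &Backend,
    execute: impl FnOnce(&ProvingBackend) -> R,
) -> (R, Vec<Vec<u8>>) {
    // A fresh recorder per call: consecutive calls get their own proof.
    let proving = ProvingBackend { backend, recorder: RefCell::new(BTreeSet::new()) };
    let outcome = execute(&proving);
    (outcome, proving.extract_proof())
}

fn main() {
    let backend = Backend { data: vec![(b"key".to_vec(), b"value".to_vec())] };
    let (out, proof) = execute_and_prove(&backend, |b| b.storage(b"key"));
    assert_eq!(out, Some(b"value".to_vec()));
    assert_eq!(proof, vec![b"key".to_vec()]);
    // A second call starts from scratch with an empty recorder.
    let (_, proof2) = execute_and_prove(&backend, |_| ());
    assert!(proof2.is_empty());
}
```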
File diff suppressed because it is too large.
@@ -18,23 +18,32 @@
//! Trie-based state machine backend essence used to read values
//! from storage.
use crate::{backend::Consolidate, debug, warn, StorageKey, StorageValue};
use codec::Encode;
use crate::{
backend::Consolidate, debug, trie_backend::AsLocalTrieCache, warn, StorageKey, StorageValue,
};
use codec::Codec;
use hash_db::{self, AsHashDB, HashDB, HashDBRef, Hasher, Prefix};
#[cfg(feature = "std")]
use parking_lot::RwLock;
use sp_core::storage::{ChildInfo, ChildType, StateVersion};
#[cfg(not(feature = "std"))]
use sp_std::marker::PhantomData;
use sp_std::{boxed::Box, vec::Vec};
#[cfg(feature = "std")]
use sp_trie::recorder::Recorder;
use sp_trie::{
child_delta_trie_root, delta_trie_root, empty_child_trie_root, read_child_trie_value,
read_trie_value,
trie_types::{TrieDB, TrieError},
DBValue, KeySpacedDB, LayoutV1 as Layout, Trie, TrieDBIterator, TrieDBKeyIterator,
child_delta_trie_root, delta_trie_root, empty_child_trie_root, read_child_trie_hash,
read_child_trie_value, read_trie_value,
trie_types::{TrieDBBuilder, TrieError},
DBValue, KeySpacedDB, NodeCodec, Trie, TrieCache, TrieDBIterator, TrieDBKeyIterator,
TrieRecorder,
};
#[cfg(feature = "std")]
use std::collections::HashMap;
#[cfg(feature = "std")]
use std::sync::Arc;
use std::{collections::HashMap, sync::Arc};
// In this module, we only use the layout for read operations and the empty root,
// where V1 and V0 are equivalent.
use sp_trie::LayoutV1 as Layout;
#[cfg(not(feature = "std"))]
macro_rules! format {
@@ -68,18 +77,21 @@ impl<H> Cache<H> {
}
/// Patricia trie-based pairs storage essence.
pub struct TrieBackendEssence<S: TrieBackendStorage<H>, H: Hasher> {
pub struct TrieBackendEssence<S: TrieBackendStorage<H>, H: Hasher, C> {
storage: S,
root: H::Out,
empty: H::Out,
#[cfg(feature = "std")]
pub(crate) cache: Arc<RwLock<Cache<H::Out>>>,
#[cfg(feature = "std")]
pub(crate) trie_node_cache: Option<C>,
#[cfg(feature = "std")]
pub(crate) recorder: Option<Recorder<H>>,
#[cfg(not(feature = "std"))]
_phantom: PhantomData<C>,
}
impl<S: TrieBackendStorage<H>, H: Hasher> TrieBackendEssence<S, H>
where
H::Out: Encode,
{
impl<S: TrieBackendStorage<H>, H: Hasher, C> TrieBackendEssence<S, H, C> {
/// Create new trie-based backend.
pub fn new(storage: S, root: H::Out) -> Self {
TrieBackendEssence {
@@ -88,6 +100,30 @@ where
empty: H::hash(&[0u8]),
#[cfg(feature = "std")]
cache: Arc::new(RwLock::new(Cache::new())),
#[cfg(feature = "std")]
trie_node_cache: None,
#[cfg(feature = "std")]
recorder: None,
#[cfg(not(feature = "std"))]
_phantom: PhantomData,
}
}
/// Create new trie-based backend with an optional cache and recorder.
#[cfg(feature = "std")]
pub fn new_with_cache_and_recorder(
storage: S,
root: H::Out,
cache: Option<C>,
recorder: Option<Recorder<H>>,
) -> Self {
TrieBackendEssence {
storage,
root,
empty: H::hash(&[0u8]),
cache: Arc::new(RwLock::new(Cache::new())),
trie_node_cache: cache,
recorder,
}
}
@@ -96,6 +132,11 @@ where
&self.storage
}
/// Get backend storage mutable reference.
pub fn backend_storage_mut(&mut self) -> &mut S {
&mut self.storage
}
/// Get trie root.
pub fn root(&self) -> &H::Out {
&self.root
@@ -120,7 +161,97 @@ where
pub fn into_storage(self) -> S {
self.storage
}
}
impl<S: TrieBackendStorage<H>, H: Hasher, C: AsLocalTrieCache<H>> TrieBackendEssence<S, H, C> {
/// Call the given closure passing it the recorder and the cache.
///
/// If the given `storage_root` is `None`, `self.root` will be used.
#[cfg(feature = "std")]
fn with_recorder_and_cache<R>(
&self,
storage_root: Option<H::Out>,
callback: impl FnOnce(
Option<&mut dyn TrieRecorder<H::Out>>,
Option<&mut dyn TrieCache<NodeCodec<H>>>,
) -> R,
) -> R {
let storage_root = storage_root.unwrap_or_else(|| self.root);
let mut recorder = self.recorder.as_ref().map(|r| r.as_trie_recorder());
let recorder = recorder.as_mut().map(|r| r as _);
let mut cache = self
.trie_node_cache
.as_ref()
.map(|c| c.as_local_trie_cache().as_trie_db_cache(storage_root));
let cache = cache.as_mut().map(|c| c as _);
callback(recorder, cache)
}
#[cfg(not(feature = "std"))]
fn with_recorder_and_cache<R>(
&self,
_: Option<H::Out>,
callback: impl FnOnce(
Option<&mut dyn TrieRecorder<H::Out>>,
Option<&mut dyn TrieCache<NodeCodec<H>>>,
) -> R,
) -> R {
callback(None, None)
}
/// Call the given closure passing it the recorder and the cache.
///
/// This function must only be used when the operation in `callback` is
/// calculating a `storage_root`. It is expected that `callback` returns
/// the new storage root. This is required to register the changes in the cache
/// for the correct storage root.
#[cfg(feature = "std")]
fn with_recorder_and_cache_for_storage_root<R>(
&self,
callback: impl FnOnce(
Option<&mut dyn TrieRecorder<H::Out>>,
Option<&mut dyn TrieCache<NodeCodec<H>>>,
) -> (Option<H::Out>, R),
) -> R {
let mut recorder = self.recorder.as_ref().map(|r| r.as_trie_recorder());
let recorder = recorder.as_mut().map(|r| r as _);
let result = if let Some(local_cache) = self.trie_node_cache.as_ref() {
let mut cache = local_cache.as_local_trie_cache().as_trie_db_mut_cache();
let (new_root, r) = callback(recorder, Some(&mut cache));
if let Some(new_root) = new_root {
cache.merge_into(local_cache.as_local_trie_cache(), new_root);
}
r
} else {
callback(recorder, None).1
};
result
}
#[cfg(not(feature = "std"))]
fn with_recorder_and_cache_for_storage_root<R>(
&self,
callback: impl FnOnce(
Option<&mut dyn TrieRecorder<H::Out>>,
Option<&mut dyn TrieCache<NodeCodec<H>>>,
) -> (Option<H::Out>, R),
) -> R {
callback(None, None).1
}
}
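The contract documented for `with_recorder_and_cache_for_storage_root` can be illustrated in miniature: the callback returns the *new* storage root alongside its result, and the thread-local cache is merged into the shared, per-root cache only when a new root was actually produced. The types and `u64` roots below are illustrative stand-ins for the real trie cache.

```rust
use std::collections::HashMap;

// Toy shared cache, keyed by storage root.
#[derive(Default)]
struct SharedCache {
    per_root: HashMap<u64, HashMap<Vec<u8>, Vec<u8>>>,
}

impl SharedCache {
    fn with_cache_for_storage_root<R>(
        &mut self,
        callback: impl FnOnce(&mut HashMap<Vec<u8>, Vec<u8>>) -> (Option<u64>, R),
    ) -> R {
        let mut local = HashMap::new();
        let (new_root, r) = callback(&mut local);
        if let Some(new_root) = new_root {
            // Register the recorded entries under the root they belong to.
            self.per_root.entry(new_root).or_default().extend(local);
        }
        r
    }
}

fn main() {
    let mut cache = SharedCache::default();
    let root = cache.with_cache_for_storage_root(|local| {
        local.insert(b"key".to_vec(), b"value".to_vec());
        (Some(42), 42u64)
    });
    assert_eq!(root, 42);
    assert_eq!(cache.per_root[&42][b"key".as_slice()], b"value".to_vec());
    // On failure the callback returns `None` and nothing is merged.
    cache.with_cache_for_storage_root(|_| (None, 0u64));
    assert_eq!(cache.per_root.len(), 1);
}
```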
impl<S: TrieBackendStorage<H>, H: Hasher, C: AsLocalTrieCache<H> + Send + Sync>
TrieBackendEssence<S, H, C>
where
H::Out: Codec + Ord,
{
/// Return the next key in the trie, i.e. the smallest key that is strictly greater than `key`
/// in lexicographic order.
pub fn next_storage_key(&self, key: &[u8]) -> Result<Option<StorageKey>> {
@@ -184,39 +315,82 @@ where
dyn_eph = self;
}
let trie =
TrieDB::<H>::new(dyn_eph, root).map_err(|e| format!("TrieDB creation error: {}", e))?;
let mut iter = trie.key_iter().map_err(|e| format!("TrieDB iteration error: {}", e))?;
self.with_recorder_and_cache(Some(*root), |recorder, cache| {
let trie = TrieDBBuilder::<H>::new(dyn_eph, root)
.with_optional_recorder(recorder)
.with_optional_cache(cache)
.build();
// The key just after the one given in input, basically `key++0`.
// Note: We are sure this is the next key if:
// * size of key has no limit (i.e. we can always add 0 to the path),
// * and no keys can be inserted between `key` and `key++0` (this is ensured by sp-io).
let mut potential_next_key = Vec::with_capacity(key.len() + 1);
potential_next_key.extend_from_slice(key);
potential_next_key.push(0);
let mut iter = trie.key_iter().map_err(|e| format!("TrieDB iteration error: {}", e))?;
iter.seek(&potential_next_key)
.map_err(|e| format!("TrieDB iterator seek error: {}", e))?;
// The key just after the one given in input, basically `key++0`.
// Note: We are sure this is the next key if:
// * size of key has no limit (i.e. we can always add 0 to the path),
// * and no keys can be inserted between `key` and `key++0` (this is ensured by sp-io).
let mut potential_next_key = Vec::with_capacity(key.len() + 1);
potential_next_key.extend_from_slice(key);
potential_next_key.push(0);
let next_element = iter.next();
iter.seek(&potential_next_key)
.map_err(|e| format!("TrieDB iterator seek error: {}", e))?;
let next_key = if let Some(next_element) = next_element {
let next_key =
next_element.map_err(|e| format!("TrieDB iterator next error: {}", e))?;
Some(next_key)
} else {
None
};
let next_element = iter.next();
Ok(next_key)
let next_key = if let Some(next_element) = next_element {
let next_key =
next_element.map_err(|e| format!("TrieDB iterator next error: {}", e))?;
Some(next_key)
} else {
None
};
Ok(next_key)
})
}
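The "seek to `key ++ 0`" trick used above can be sketched against any ordered map: appending a zero byte yields the smallest key strictly greater than `key`, so seeking there and taking the first entry gives the successor. A minimal sketch with `BTreeMap` standing in for the trie iterator:

```rust
use std::collections::BTreeMap;

fn next_storage_key(trie: &BTreeMap<Vec<u8>, Vec<u8>>, key: &[u8]) -> Option<Vec<u8>> {
    // The key just after the one given in input: `key ++ 0`.
    // This is the next possible key because key size is unbounded and no key
    // can sort between `key` and `key ++ 0`.
    let mut potential_next_key = Vec::with_capacity(key.len() + 1);
    potential_next_key.extend_from_slice(key);
    potential_next_key.push(0);
    // "Seek" to `key ++ 0` and return the first key at or after it.
    trie.range(potential_next_key..).next().map(|(k, _)| k.clone())
}

fn main() {
    let trie: BTreeMap<_, _> = [b"3".to_vec(), b"4".to_vec(), b"6".to_vec()]
        .into_iter()
        .map(|k| (k, vec![1u8]))
        .collect();
    assert_eq!(next_storage_key(&trie, b"2"), Some(b"3".to_vec()));
    assert_eq!(next_storage_key(&trie, b"4"), Some(b"6".to_vec()));
    assert_eq!(next_storage_key(&trie, b"6"), None);
}
```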
/// Returns the hash value
pub fn storage_hash(&self, key: &[u8]) -> Result<Option<H::Out>> {
let map_e = |e| format!("Trie lookup error: {}", e);
self.with_recorder_and_cache(None, |recorder, cache| {
TrieDBBuilder::new(self, &self.root)
.with_optional_cache(cache)
.with_optional_recorder(recorder)
.build()
.get_hash(key)
.map_err(map_e)
})
}
/// Get the value of storage at given key.
pub fn storage(&self, key: &[u8]) -> Result<Option<StorageValue>> {
let map_e = |e| format!("Trie lookup error: {}", e);
read_trie_value::<Layout<H>, _>(self, &self.root, key).map_err(map_e)
self.with_recorder_and_cache(None, |recorder, cache| {
read_trie_value::<Layout<H>, _>(self, &self.root, key, recorder, cache).map_err(map_e)
})
}
/// Returns the hash value
pub fn child_storage_hash(&self, child_info: &ChildInfo, key: &[u8]) -> Result<Option<H::Out>> {
let child_root = match self.child_root(child_info)? {
Some(root) => root,
None => return Ok(None),
};
let map_e = |e| format!("Trie lookup error: {}", e);
self.with_recorder_and_cache(Some(child_root), |recorder, cache| {
read_child_trie_hash::<Layout<H>, _>(
child_info.keyspace(),
self,
&child_root,
key,
recorder,
cache,
)
.map_err(map_e)
})
}
/// Get the value of child storage at given key.
@@ -225,15 +399,24 @@ where
child_info: &ChildInfo,
key: &[u8],
) -> Result<Option<StorageValue>> {
let root = match self.child_root(child_info)? {
let child_root = match self.child_root(child_info)? {
Some(root) => root,
None => return Ok(None),
};
let map_e = |e| format!("Trie lookup error: {}", e);
read_child_trie_value::<Layout<H>, _>(child_info.keyspace(), self, &root, key)
self.with_recorder_and_cache(Some(child_root), |recorder, cache| {
read_child_trie_value::<Layout<H>, _>(
child_info.keyspace(),
self,
&child_root,
key,
recorder,
cache,
)
.map_err(map_e)
})
}
/// Retrieve all entries keys of storage and call `f` for each of those keys.
@@ -338,28 +521,33 @@ where
maybe_start_at: Option<&[u8]>,
) {
let mut iter = move |db| -> sp_std::result::Result<(), Box<TrieError<H::Out>>> {
let trie = TrieDB::<H>::new(db, root)?;
let prefix = maybe_prefix.unwrap_or(&[]);
let iter = match maybe_start_at {
Some(start_at) =>
TrieDBKeyIterator::new_prefixed_then_seek(&trie, prefix, start_at),
None => TrieDBKeyIterator::new_prefixed(&trie, prefix),
}?;
self.with_recorder_and_cache(Some(*root), |recorder, cache| {
let trie = TrieDBBuilder::<H>::new(db, root)
.with_optional_recorder(recorder)
.with_optional_cache(cache)
.build();
let prefix = maybe_prefix.unwrap_or(&[]);
let iter = match maybe_start_at {
Some(start_at) =>
TrieDBKeyIterator::new_prefixed_then_seek(&trie, prefix, start_at),
None => TrieDBKeyIterator::new_prefixed(&trie, prefix),
}?;
for x in iter {
let key = x?;
for x in iter {
let key = x?;
debug_assert!(maybe_prefix
.as_ref()
.map(|prefix| key.starts_with(prefix))
.unwrap_or(true));
debug_assert!(maybe_prefix
.as_ref()
.map(|prefix| key.starts_with(prefix))
.unwrap_or(true));
if !f(&key) {
break
if !f(&key) {
break
}
}
}
Ok(())
Ok(())
})
};
let result = if let Some(child_info) = child_info {
@@ -383,25 +571,30 @@ where
allow_missing_nodes: bool,
) -> Result<bool> {
let mut iter = move |db| -> sp_std::result::Result<bool, Box<TrieError<H::Out>>> {
let trie = TrieDB::<H>::new(db, root)?;
self.with_recorder_and_cache(Some(*root), |recorder, cache| {
let trie = TrieDBBuilder::<H>::new(db, root)
.with_optional_recorder(recorder)
.with_optional_cache(cache)
.build();
let prefix = prefix.unwrap_or(&[]);
let iterator = if let Some(start_at) = start_at {
TrieDBIterator::new_prefixed_then_seek(&trie, prefix, start_at)?
} else {
TrieDBIterator::new_prefixed(&trie, prefix)?
};
for x in iterator {
let (key, value) = x?;
let prefix = prefix.unwrap_or(&[]);
let iterator = if let Some(start_at) = start_at {
TrieDBIterator::new_prefixed_then_seek(&trie, prefix, start_at)?
} else {
TrieDBIterator::new_prefixed(&trie, prefix)?
};
for x in iterator {
let (key, value) = x?;
debug_assert!(key.starts_with(prefix));
debug_assert!(key.starts_with(prefix));
if !f(key, value) {
return Ok(false)
if !f(key, value) {
return Ok(false)
}
}
}
Ok(true)
Ok(true)
})
};
let result = if let Some(child_info) = child_info {
@@ -436,14 +629,20 @@ where
/// Returns all `(key, value)` pairs in the trie.
pub fn pairs(&self) -> Vec<(StorageKey, StorageValue)> {
let collect_all = || -> sp_std::result::Result<_, Box<TrieError<H::Out>>> {
let trie = TrieDB::<H>::new(self, &self.root)?;
let mut v = Vec::new();
for x in trie.iter()? {
let (key, value) = x?;
v.push((key.to_vec(), value.to_vec()));
}
self.with_recorder_and_cache(None, |recorder, cache| {
let trie = TrieDBBuilder::<H>::new(self, self.root())
.with_optional_cache(cache)
.with_optional_recorder(recorder)
.build();
Ok(v)
let mut v = Vec::new();
for x in trie.iter()? {
let (key, value) = x?;
v.push((key.to_vec(), value.to_vec()));
}
Ok(v)
})
};
match collect_all() {
@@ -467,27 +666,28 @@ where
&self,
delta: impl Iterator<Item = (&'a [u8], Option<&'a [u8]>)>,
state_version: StateVersion,
) -> (H::Out, S::Overlay)
where
H::Out: Ord,
{
) -> (H::Out, S::Overlay) {
let mut write_overlay = S::Overlay::default();
let mut root = self.root;
{
let root = self.with_recorder_and_cache_for_storage_root(|recorder, cache| {
let mut eph = Ephemeral::new(self.backend_storage(), &mut write_overlay);
let res = match state_version {
StateVersion::V0 =>
delta_trie_root::<sp_trie::LayoutV0<H>, _, _, _, _, _>(&mut eph, root, delta),
StateVersion::V1 =>
delta_trie_root::<sp_trie::LayoutV1<H>, _, _, _, _, _>(&mut eph, root, delta),
StateVersion::V0 => delta_trie_root::<sp_trie::LayoutV0<H>, _, _, _, _, _>(
&mut eph, self.root, delta, recorder, cache,
),
StateVersion::V1 => delta_trie_root::<sp_trie::LayoutV1<H>, _, _, _, _, _>(
&mut eph, self.root, delta, recorder, cache,
),
};
match res {
Ok(ret) => root = ret,
Err(e) => warn!(target: "trie", "Failed to write to trie: {}", e),
Ok(ret) => (Some(ret), ret),
Err(e) => {
warn!(target: "trie", "Failed to write to trie: {}", e);
(None, self.root)
},
}
}
});
(root, write_overlay)
}
@@ -499,15 +699,12 @@ where
child_info: &ChildInfo,
delta: impl Iterator<Item = (&'a [u8], Option<&'a [u8]>)>,
state_version: StateVersion,
) -> (H::Out, bool, S::Overlay)
where
H::Out: Ord,
{
) -> (H::Out, bool, S::Overlay) {
let default_root = match child_info.child_type() {
ChildType::ParentKeyId => empty_child_trie_root::<sp_trie::LayoutV1<H>>(),
};
let mut write_overlay = S::Overlay::default();
let mut root = match self.child_root(child_info) {
let child_root = match self.child_root(child_info) {
Ok(Some(hash)) => hash,
Ok(None) => default_root,
Err(e) => {
@@ -516,32 +713,39 @@ where
},
};
{
let new_child_root = self.with_recorder_and_cache_for_storage_root(|recorder, cache| {
let mut eph = Ephemeral::new(self.backend_storage(), &mut write_overlay);
match match state_version {
StateVersion::V0 =>
child_delta_trie_root::<sp_trie::LayoutV0<H>, _, _, _, _, _, _>(
child_info.keyspace(),
&mut eph,
root,
child_root,
delta,
recorder,
cache,
),
StateVersion::V1 =>
child_delta_trie_root::<sp_trie::LayoutV1<H>, _, _, _, _, _, _>(
child_info.keyspace(),
&mut eph,
root,
child_root,
delta,
recorder,
cache,
),
} {
Ok(ret) => root = ret,
Err(e) => warn!(target: "trie", "Failed to write to trie: {}", e),
Ok(ret) => (Some(ret), ret),
Err(e) => {
warn!(target: "trie", "Failed to write to trie: {}", e);
(None, child_root)
},
}
}
});
let is_default = root == default_root;
let is_default = new_child_root == default_root;
(root, is_default, write_overlay)
(new_child_root, is_default, write_overlay)
}
}
@@ -615,6 +819,14 @@ pub trait TrieBackendStorage<H: Hasher>: Send + Sync {
fn get(&self, key: &H::Out, prefix: Prefix) -> Result<Option<DBValue>>;
}
impl<T: TrieBackendStorage<H>, H: Hasher> TrieBackendStorage<H> for &T {
type Overlay = T::Overlay;
fn get(&self, key: &H::Out, prefix: Prefix) -> Result<Option<DBValue>> {
(*self).get(key, prefix)
}
}
// This implementation is used by normal storage trie clients.
#[cfg(feature = "std")]
impl<H: Hasher> TrieBackendStorage<H> for Arc<dyn Storage<H>> {
@@ -637,7 +849,9 @@ where
}
}
impl<S: TrieBackendStorage<H>, H: Hasher> AsHashDB<H, DBValue> for TrieBackendEssence<S, H> {
impl<S: TrieBackendStorage<H>, H: Hasher, C: AsLocalTrieCache<H> + Send + Sync> AsHashDB<H, DBValue>
for TrieBackendEssence<S, H, C>
{
fn as_hash_db<'b>(&'b self) -> &'b (dyn HashDB<H, DBValue> + 'b) {
self
}
@@ -646,7 +860,9 @@ impl<S: TrieBackendStorage<H>, H: Hasher> AsHashDB<H, DBValue> for TrieBackendEs
}
}
impl<S: TrieBackendStorage<H>, H: Hasher> HashDB<H, DBValue> for TrieBackendEssence<S, H> {
impl<S: TrieBackendStorage<H>, H: Hasher, C: AsLocalTrieCache<H> + Send + Sync> HashDB<H, DBValue>
for TrieBackendEssence<S, H, C>
{
fn get(&self, key: &H::Out, prefix: Prefix) -> Option<DBValue> {
if *key == self.empty {
return Some([0u8].to_vec())
@@ -677,7 +893,9 @@ impl<S: TrieBackendStorage<H>, H: Hasher> HashDB<H, DBValue> for TrieBackendEsse
}
}
impl<S: TrieBackendStorage<H>, H: Hasher> HashDBRef<H, DBValue> for TrieBackendEssence<S, H> {
impl<S: TrieBackendStorage<H>, H: Hasher, C: AsLocalTrieCache<H> + Send + Sync>
HashDBRef<H, DBValue> for TrieBackendEssence<S, H, C>
{
fn get(&self, key: &H::Out, prefix: Prefix) -> Option<DBValue> {
HashDB::get(self, key, prefix)
}
@@ -692,7 +910,8 @@ mod test {
use super::*;
use sp_core::{Blake2Hasher, H256};
use sp_trie::{
trie_types::TrieDBMutV1 as TrieDBMut, KeySpacedDBMut, PrefixedMemoryDB, TrieMut,
cache::LocalTrieCache, trie_types::TrieDBMutBuilderV1 as TrieDBMutBuilder, KeySpacedDBMut,
PrefixedMemoryDB, TrieMut,
};
#[test]
@@ -706,7 +925,7 @@ mod test {
let mut mdb = PrefixedMemoryDB::<Blake2Hasher>::default();
{
let mut trie = TrieDBMut::new(&mut mdb, &mut root_1);
let mut trie = TrieDBMutBuilder::new(&mut mdb, &mut root_1).build();
trie.insert(b"3", &[1]).expect("insert failed");
trie.insert(b"4", &[1]).expect("insert failed");
trie.insert(b"6", &[1]).expect("insert failed");
@@ -715,18 +934,18 @@ mod test {
let mut mdb = KeySpacedDBMut::new(&mut mdb, child_info.keyspace());
// Reusing root_1 implicitly asserts that the child trie root is the same
// as the top trie root (the contents must remain the same).
let mut trie = TrieDBMut::new(&mut mdb, &mut root_1);
let mut trie = TrieDBMutBuilder::new(&mut mdb, &mut root_1).build();
trie.insert(b"3", &[1]).expect("insert failed");
trie.insert(b"4", &[1]).expect("insert failed");
trie.insert(b"6", &[1]).expect("insert failed");
}
{
let mut trie = TrieDBMut::new(&mut mdb, &mut root_2);
let mut trie = TrieDBMutBuilder::new(&mut mdb, &mut root_2).build();
trie.insert(child_info.prefixed_storage_key().as_slice(), root_1.as_ref())
.expect("insert failed");
};
let essence_1 = TrieBackendEssence::new(mdb, root_1);
let essence_1 = TrieBackendEssence::<_, _, LocalTrieCache<_>>::new(mdb, root_1);
assert_eq!(essence_1.next_storage_key(b"2"), Ok(Some(b"3".to_vec())));
assert_eq!(essence_1.next_storage_key(b"3"), Ok(Some(b"4".to_vec())));
@@ -734,8 +953,8 @@ mod test {
assert_eq!(essence_1.next_storage_key(b"5"), Ok(Some(b"6".to_vec())));
assert_eq!(essence_1.next_storage_key(b"6"), Ok(None));
let mdb = essence_1.into_storage();
let essence_2 = TrieBackendEssence::new(mdb, root_2);
let mdb = essence_1.backend_storage().clone();
let essence_2 = TrieBackendEssence::<_, _, LocalTrieCache<_>>::new(mdb, root_2);
assert_eq!(essence_2.next_child_storage_key(child_info, b"2"), Ok(Some(b"3".to_vec())));
assert_eq!(essence_2.next_child_storage_key(child_info, b"3"), Ok(Some(b"4".to_vec())));
@@ -200,7 +200,8 @@ pub mod registration {
let mut transaction_root = sp_trie::empty_trie_root::<TrieLayout>();
{
let mut trie =
-sp_trie::TrieDBMut::<TrieLayout>::new(&mut db, &mut transaction_root);
+sp_trie::TrieDBMutBuilder::<TrieLayout>::new(&mut db, &mut transaction_root)
+	.build();
let chunks = transaction.chunks(CHUNK_SIZE).map(|c| c.to_vec());
for (index, chunk) in chunks.enumerate() {
let index = encode_index(index as u32);
@@ -18,12 +18,19 @@ name = "bench"
harness = false
[dependencies]
+ahash = { version = "0.7.6", optional = true }
codec = { package = "parity-scale-codec", version = "3.0.0", default-features = false }
+hashbrown = { version = "0.12.3", optional = true }
hash-db = { version = "0.15.2", default-features = false }
+lazy_static = { version = "1.4.0", optional = true }
+lru = { version = "0.7.5", optional = true }
memory-db = { version = "0.29.0", default-features = false }
+nohash-hasher = { version = "0.2.0", optional = true }
+parking_lot = { version = "0.12.0", optional = true }
scale-info = { version = "2.1.1", default-features = false, features = ["derive"] }
thiserror = { version = "1.0.30", optional = true }
-trie-db = { version = "0.23.1", default-features = false }
+tracing = { version = "0.1.29", optional = true }
+trie-db = { version = "0.24.0", default-features = false }
trie-root = { version = "0.17.0", default-features = false }
sp-core = { version = "6.0.0", default-features = false, path = "../core" }
sp-std = { version = "4.0.0", default-features = false, path = "../std" }
@@ -31,20 +38,27 @@ sp-std = { version = "4.0.0", default-features = false, path = "../std" }
[dev-dependencies]
criterion = "0.3.3"
hex-literal = "0.3.4"
-trie-bench = "0.30.0"
+trie-bench = "0.31.0"
trie-standardmap = "0.15.2"
sp-runtime = { version = "6.0.0", path = "../runtime" }
[features]
default = ["std"]
std = [
+"ahash",
"codec/std",
+"hashbrown",
"hash-db/std",
+"lazy_static",
+"lru",
"memory-db/std",
+"nohash-hasher",
+"parking_lot",
"scale-info/std",
"sp-core/std",
"sp-std/std",
"thiserror",
+"tracing",
"trie-db/std",
"trie-root/std",
]
@@ -0,0 +1,686 @@
// This file is part of Substrate.
// Copyright (C) 2022 Parity Technologies (UK) Ltd.
// SPDX-License-Identifier: Apache-2.0
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//! Trie Cache
//!
//! Provides an implementation of the [`TrieCache`](trie_db::TrieCache) trait.
//! The implementation is split into three types [`SharedTrieCache`], [`LocalTrieCache`] and
//! [`TrieCache`]. The [`SharedTrieCache`] is the instance that should be kept around for the entire
//! lifetime of the node. It will store all cached trie nodes and values on a global level. Then
//! there is the [`LocalTrieCache`] that should be kept around per state instance requested from the
//! backend. As there are very likely multiple accesses to the state per instance, this
//! [`LocalTrieCache`] is used to cache the nodes and the values before they are merged back to the
//! shared instance. Last but not least there is the [`TrieCache`] that is being used per access to
//! the state. It will use the [`SharedTrieCache`] and the [`LocalTrieCache`] to fulfill cache
//! requests. If neither of them provides the requested data, it will be inserted into the
//! [`LocalTrieCache`] and then later into the [`SharedTrieCache`].
//!
//! The [`SharedTrieCache`] is bound to some maximum number of bytes. It is ensured that it never
//! runs above this limit. However, as long as data is cached inside a [`LocalTrieCache`] it isn't
//! taken into account when limiting the [`SharedTrieCache`]. This means that for the lifetime of a
//! [`LocalTrieCache`] the actual memory usage could be above the allowed maximum.
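The three-level flow described above (shared cache first, then the local cache, then a backend fetch that is remembered locally and merged back to the shared instance later) can be sketched with plain `std` maps. `Shared`, `Local`, and `get_or_fetch` are illustrative names for this sketch, not the actual `sp_trie` API:

```rust
use std::collections::HashMap;

/// Simplified stand-in for the node-global shared cache.
struct Shared(HashMap<u32, String>);
/// Simplified stand-in for the per-state-instance local cache.
struct Local(HashMap<u32, String>);

impl Local {
    /// Per-access lookup: try the shared cache first, then the local one,
    /// otherwise fetch from the backend and remember the result locally.
    fn get_or_fetch(
        &mut self,
        shared: &Shared,
        key: u32,
        fetch: impl FnOnce() -> String,
    ) -> String {
        if let Some(v) = shared.0.get(&key) {
            return v.clone();
        }
        self.0.entry(key).or_insert_with(fetch).clone()
    }

    /// When the real `LocalTrieCache` is dropped, its locally cached items
    /// are merged back into the shared cache; this mimics that step.
    fn merge_into(self, shared: &mut Shared) {
        shared.0.extend(self.0);
    }
}

fn main() {
    let mut shared = Shared(HashMap::new());
    let mut local = Local(HashMap::new());
    // First access misses both caches and hits the "backend" closure.
    assert_eq!(local.get_or_fetch(&shared, 1, || "node".into()), "node");
    local.merge_into(&mut shared);
    // After merging, the shared cache can serve the value directly.
    assert_eq!(shared.0.get(&1).map(String::as_str), Some("node"));
}
```

Note that, as the module docs state, data held only in the local map is not counted against the shared cache's size limit until the merge happens.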
use crate::{Error, NodeCodec};
use hash_db::Hasher;
use hashbrown::HashSet;
use nohash_hasher::BuildNoHashHasher;
use parking_lot::{Mutex, MutexGuard, RwLockReadGuard};
use shared_cache::{SharedValueCache, ValueCacheKey};
use std::{
collections::{hash_map::Entry as MapEntry, HashMap},
sync::Arc,
};
use trie_db::{node::NodeOwned, CachedValue};
mod shared_cache;
pub use shared_cache::SharedTrieCache;
use self::shared_cache::{SharedTrieCacheInner, ValueCacheKeyHash};
const LOG_TARGET: &str = "trie-cache";
/// The size of the cache.
#[derive(Debug, Clone, Copy)]
pub enum CacheSize {
/// Do not limit the cache size.
Unlimited,
/// Let the cache in maximum use the given amount of bytes.
Maximum(usize),
}
impl CacheSize {
/// Returns `true` if the `current_size` exceeds the allowed size.
fn exceeds(&self, current_size: usize) -> bool {
match self {
Self::Unlimited => false,
Self::Maximum(max) => *max < current_size,
}
}
}
/// The local trie cache.
///
/// This cache should be used per state instance created by the backend. One state instance refers
/// to the state of one block. It will cache all accesses to the state that could not be fulfilled
/// by the [`SharedTrieCache`]. These locally cached items are merged back to the shared trie cache
/// when this instance is dropped.
///
/// When using [`Self::as_trie_db_cache`] or [`Self::as_trie_db_mut_cache`], internal `Mutex`es are
/// locked. It is therefore important not to call these methods again while a previously returned
/// [`TrieCache`] is still alive, as doing so would deadlock.
pub struct LocalTrieCache<H: Hasher> {
/// The shared trie cache that created this instance.
shared: SharedTrieCache<H>,
/// The local cache for the trie nodes.
node_cache: Mutex<HashMap<H::Out, NodeOwned<H::Out>>>,
/// Keeps track of all the trie nodes accessed in the shared cache.
///
/// This will be used to ensure that these nodes are brought to the front of the lru when this
/// local instance is merged back to the shared cache.
shared_node_cache_access: Mutex<HashSet<H::Out>>,
/// The local cache for the values.
value_cache: Mutex<
HashMap<
ValueCacheKey<'static, H::Out>,
CachedValue<H::Out>,
BuildNoHashHasher<ValueCacheKey<'static, H::Out>>,
>,
>,
/// Keeps track of all values accessed in the shared cache.
///
/// This will be used to ensure that these values are brought to the front of the lru when this
/// local instance is merged back to the shared cache. This can actually lead to collisions when
/// two [`ValueCacheKey`]s with different storage roots and keys map to the same hash. However,
/// as we only use this set to update the lru position it is fine, even if we bring the wrong
/// value to the top. The important part is that we always get the correct value from the value
/// cache for a given key.
shared_value_cache_access:
Mutex<HashSet<ValueCacheKeyHash, BuildNoHashHasher<ValueCacheKeyHash>>>,
}
impl<H: Hasher> LocalTrieCache<H> {
/// Return self as a [`TrieDB`](trie_db::TrieDB) compatible cache.
///
/// The given `storage_root` needs to be the storage root of the trie this cache is used for.
pub fn as_trie_db_cache(&self, storage_root: H::Out) -> TrieCache<'_, H> {
let shared_inner = self.shared.read_lock_inner();
let value_cache = ValueCache::ForStorageRoot {
storage_root,
local_value_cache: self.value_cache.lock(),
shared_value_cache_access: self.shared_value_cache_access.lock(),
};
TrieCache {
shared_inner,
local_cache: self.node_cache.lock(),
value_cache,
shared_node_cache_access: self.shared_node_cache_access.lock(),
}
}
/// Return self as [`TrieDBMut`](trie_db::TrieDBMut) compatible cache.
///
/// After finishing all operations with [`TrieDBMut`](trie_db::TrieDBMut) and having obtained
/// the new storage root, [`TrieCache::merge_into`] should be called to update this local
/// cache instance. If the function is not called, cached data is just thrown away and not
/// propagated to the shared cache. So, accessing these new items will be slower, but nothing
/// would break because of this.
pub fn as_trie_db_mut_cache(&self) -> TrieCache<'_, H> {
TrieCache {
shared_inner: self.shared.read_lock_inner(),
local_cache: self.node_cache.lock(),
value_cache: ValueCache::Fresh(Default::default()),
shared_node_cache_access: self.shared_node_cache_access.lock(),
}
}
}
impl<H: Hasher> Drop for LocalTrieCache<H> {
fn drop(&mut self) {
let mut shared_inner = self.shared.write_lock_inner();
shared_inner
.node_cache_mut()
.update(self.node_cache.lock().drain(), self.shared_node_cache_access.lock().drain());
shared_inner
.value_cache_mut()
.update(self.value_cache.lock().drain(), self.shared_value_cache_access.lock().drain());
}
}
/// The abstraction of the value cache for the [`TrieCache`].
enum ValueCache<'a, H> {
/// The value cache is fresh, aka not yet associated to any storage root.
/// This is used for example when a new trie is being built, to cache new values.
Fresh(HashMap<Arc<[u8]>, CachedValue<H>>),
/// The value cache is already bound to a specific storage root.
ForStorageRoot {
shared_value_cache_access: MutexGuard<
'a,
HashSet<ValueCacheKeyHash, nohash_hasher::BuildNoHashHasher<ValueCacheKeyHash>>,
>,
local_value_cache: MutexGuard<
'a,
HashMap<
ValueCacheKey<'static, H>,
CachedValue<H>,
nohash_hasher::BuildNoHashHasher<ValueCacheKey<'static, H>>,
>,
>,
storage_root: H,
},
}
impl<H: AsRef<[u8]> + std::hash::Hash + Eq + Clone + Copy> ValueCache<'_, H> {
/// Get the value for the given `key`.
fn get<'a>(
&'a mut self,
key: &[u8],
shared_value_cache: &'a SharedValueCache<H>,
) -> Option<&CachedValue<H>> {
match self {
Self::Fresh(map) => map.get(key),
Self::ForStorageRoot { local_value_cache, shared_value_cache_access, storage_root } => {
let key = ValueCacheKey::new_ref(key, *storage_root);
// We first need to look up in the local cache and then the shared cache.
// It can happen that some value is cached in the shared cache, but the
// weak reference of the data can not be upgraded anymore. This for example
// happens when the node is dropped that contains the strong reference to the data.
//
// So, the trie logic would look up the data and the node again and store both
// in our local caches.
local_value_cache
.get(unsafe {
// SAFETY
//
// We need to convert the lifetime to make the compiler happy. However, as
// we only use the `key` for looking up the value, this lifetime conversion is
// safe.
std::mem::transmute::<&ValueCacheKey<'_, H>, &ValueCacheKey<'static, H>>(
&key,
)
})
.or_else(|| {
shared_value_cache.get(&key).map(|v| {
shared_value_cache_access.insert(key.get_hash());
v
})
})
},
}
}
/// Insert some new `value` under the given `key`.
fn insert(&mut self, key: &[u8], value: CachedValue<H>) {
match self {
Self::Fresh(map) => {
map.insert(key.into(), value);
},
Self::ForStorageRoot { local_value_cache, storage_root, .. } => {
local_value_cache.insert(ValueCacheKey::new_value(key, *storage_root), value);
},
}
}
}
/// The actual [`TrieCache`](trie_db::TrieCache) implementation.
///
/// If this instance was created for using it with a [`TrieDBMut`](trie_db::TrieDBMut), it needs to
/// be merged back into the [`LocalTrieCache`] with [`Self::merge_into`] after all operations are
/// done.
pub struct TrieCache<'a, H: Hasher> {
shared_inner: RwLockReadGuard<'a, SharedTrieCacheInner<H>>,
shared_node_cache_access: MutexGuard<'a, HashSet<H::Out>>,
local_cache: MutexGuard<'a, HashMap<H::Out, NodeOwned<H::Out>>>,
value_cache: ValueCache<'a, H::Out>,
}
impl<'a, H: Hasher> TrieCache<'a, H> {
/// Merge this cache into the given [`LocalTrieCache`].
///
/// This function is only required to be called when this instance was created through
/// [`LocalTrieCache::as_trie_db_mut_cache`], otherwise this method is a no-op. The given
/// `storage_root` is the new storage root that was obtained after finishing all operations
/// using the [`TrieDBMut`](trie_db::TrieDBMut).
pub fn merge_into(self, local: &LocalTrieCache<H>, storage_root: H::Out) {
let cache = if let ValueCache::Fresh(cache) = self.value_cache { cache } else { return };
if !cache.is_empty() {
let mut value_cache = local.value_cache.lock();
let partial_hash = ValueCacheKey::hash_partial_data(&storage_root);
cache
.into_iter()
.map(|(k, v)| {
let hash =
ValueCacheKeyHash::from_hasher_and_storage_key(partial_hash.clone(), &k);
(ValueCacheKey::Value { storage_key: k, storage_root, hash }, v)
})
.for_each(|(k, v)| {
value_cache.insert(k, v);
});
}
}
}
impl<'a, H: Hasher> trie_db::TrieCache<NodeCodec<H>> for TrieCache<'a, H> {
fn get_or_insert_node(
&mut self,
hash: H::Out,
fetch_node: &mut dyn FnMut() -> trie_db::Result<NodeOwned<H::Out>, H::Out, Error<H::Out>>,
) -> trie_db::Result<&NodeOwned<H::Out>, H::Out, Error<H::Out>> {
if let Some(res) = self.shared_inner.node_cache().get(&hash) {
tracing::trace!(target: LOG_TARGET, ?hash, "Serving node from shared cache");
self.shared_node_cache_access.insert(hash);
return Ok(res)
}
match self.local_cache.entry(hash) {
MapEntry::Occupied(res) => {
tracing::trace!(target: LOG_TARGET, ?hash, "Serving node from local cache");
Ok(res.into_mut())
},
MapEntry::Vacant(vacant) => {
let node = (*fetch_node)();
tracing::trace!(
target: LOG_TARGET,
?hash,
fetch_successful = node.is_ok(),
"Node not found, needed to fetch it."
);
Ok(vacant.insert(node?))
},
}
}
fn get_node(&mut self, hash: &H::Out) -> Option<&NodeOwned<H::Out>> {
if let Some(node) = self.shared_inner.node_cache().get(hash) {
tracing::trace!(target: LOG_TARGET, ?hash, "Getting node from shared cache");
self.shared_node_cache_access.insert(*hash);
return Some(node)
}
let res = self.local_cache.get(hash);
tracing::trace!(
target: LOG_TARGET,
?hash,
found = res.is_some(),
"Getting node from local cache"
);
res
}
fn lookup_value_for_key(&mut self, key: &[u8]) -> Option<&CachedValue<H::Out>> {
let res = self.value_cache.get(key, self.shared_inner.value_cache());
tracing::trace!(
target: LOG_TARGET,
key = ?sp_core::hexdisplay::HexDisplay::from(&key),
found = res.is_some(),
"Looked up value for key",
);
res
}
fn cache_value_for_key(&mut self, key: &[u8], data: CachedValue<H::Out>) {
tracing::trace!(
target: LOG_TARGET,
key = ?sp_core::hexdisplay::HexDisplay::from(&key),
"Caching value for key",
);
self.value_cache.insert(key.into(), data);
}
}
#[cfg(test)]
mod tests {
use super::*;
use trie_db::{Bytes, Trie, TrieDBBuilder, TrieDBMutBuilder, TrieHash, TrieMut};
type MemoryDB = crate::MemoryDB<sp_core::Blake2Hasher>;
type Layout = crate::LayoutV1<sp_core::Blake2Hasher>;
type Cache = super::SharedTrieCache<sp_core::Blake2Hasher>;
type Recorder = crate::recorder::Recorder<sp_core::Blake2Hasher>;
const TEST_DATA: &[(&[u8], &[u8])] =
&[(b"key1", b"val1"), (b"key2", &[2; 64]), (b"key3", b"val3"), (b"key4", &[4; 64])];
const CACHE_SIZE_RAW: usize = 1024 * 10;
const CACHE_SIZE: CacheSize = CacheSize::Maximum(CACHE_SIZE_RAW);
fn create_trie() -> (MemoryDB, TrieHash<Layout>) {
let mut db = MemoryDB::default();
let mut root = Default::default();
{
let mut trie = TrieDBMutBuilder::<Layout>::new(&mut db, &mut root).build();
for (k, v) in TEST_DATA {
trie.insert(k, v).expect("Inserts data");
}
}
(db, root)
}
#[test]
fn basic_cache_works() {
let (db, root) = create_trie();
let shared_cache = Cache::new(CACHE_SIZE);
let local_cache = shared_cache.local_cache();
{
let mut cache = local_cache.as_trie_db_cache(root);
let trie = TrieDBBuilder::<Layout>::new(&db, &root).with_cache(&mut cache).build();
assert_eq!(TEST_DATA[0].1.to_vec(), trie.get(TEST_DATA[0].0).unwrap().unwrap());
}
// Local cache wasn't dropped yet, so there should be nothing in the shared caches.
assert!(shared_cache.read_lock_inner().value_cache().lru.is_empty());
assert!(shared_cache.read_lock_inner().node_cache().lru.is_empty());
drop(local_cache);
// Now we should have the cached items in the shared cache.
assert!(shared_cache.read_lock_inner().node_cache().lru.len() >= 1);
let cached_data = shared_cache
.read_lock_inner()
.value_cache()
.lru
.peek(&ValueCacheKey::new_value(TEST_DATA[0].0, root))
.unwrap()
.clone();
assert_eq!(Bytes::from(TEST_DATA[0].1.to_vec()), cached_data.data().flatten().unwrap());
let fake_data = Bytes::from(&b"fake_data"[..]);
let local_cache = shared_cache.local_cache();
shared_cache.write_lock_inner().value_cache_mut().lru.put(
ValueCacheKey::new_value(TEST_DATA[1].0, root),
(fake_data.clone(), Default::default()).into(),
);
{
let mut cache = local_cache.as_trie_db_cache(root);
let trie = TrieDBBuilder::<Layout>::new(&db, &root).with_cache(&mut cache).build();
// We should now get the "fake_data", because we inserted this manually to the cache.
assert_eq!(b"fake_data".to_vec(), trie.get(TEST_DATA[1].0).unwrap().unwrap());
}
}
#[test]
fn trie_db_mut_cache_works() {
let (mut db, root) = create_trie();
let new_key = b"new_key".to_vec();
// Use some long value to not have it inlined
let new_value = vec![23; 64];
let shared_cache = Cache::new(CACHE_SIZE);
let mut new_root = root;
{
let local_cache = shared_cache.local_cache();
let mut cache = local_cache.as_trie_db_mut_cache();
{
let mut trie = TrieDBMutBuilder::<Layout>::from_existing(&mut db, &mut new_root)
.with_cache(&mut cache)
.build();
trie.insert(&new_key, &new_value).unwrap();
}
cache.merge_into(&local_cache, new_root);
}
// After the local cache is dropped, all changes should have been merged back to the shared
// cache.
let cached_data = shared_cache
.read_lock_inner()
.value_cache()
.lru
.peek(&ValueCacheKey::new_value(new_key, new_root))
.unwrap()
.clone();
assert_eq!(Bytes::from(new_value), cached_data.data().flatten().unwrap());
}
#[test]
fn trie_db_cache_and_recorder_work_together() {
let (db, root) = create_trie();
let shared_cache = Cache::new(CACHE_SIZE);
for i in 0..5 {
// Clear some of the caches.
if i == 2 {
shared_cache.reset_node_cache();
} else if i == 3 {
shared_cache.reset_value_cache();
}
let local_cache = shared_cache.local_cache();
let recorder = Recorder::default();
{
let mut cache = local_cache.as_trie_db_cache(root);
let mut recorder = recorder.as_trie_recorder();
let trie = TrieDBBuilder::<Layout>::new(&db, &root)
.with_cache(&mut cache)
.with_recorder(&mut recorder)
.build();
for (key, value) in TEST_DATA {
assert_eq!(*value, trie.get(&key).unwrap().unwrap());
}
}
let storage_proof = recorder.drain_storage_proof();
let memory_db: MemoryDB = storage_proof.into_memory_db();
{
let trie = TrieDBBuilder::<Layout>::new(&memory_db, &root).build();
for (key, value) in TEST_DATA {
assert_eq!(*value, trie.get(&key).unwrap().unwrap());
}
}
}
}
#[test]
fn trie_db_mut_cache_and_recorder_work_together() {
const DATA_TO_ADD: &[(&[u8], &[u8])] = &[(b"key11", &[45; 78]), (b"key33", &[78; 89])];
let (db, root) = create_trie();
let shared_cache = Cache::new(CACHE_SIZE);
// Run this multiple times so that subsequent runs make use of the caches.
for i in 0..5 {
// Clear some of the caches.
if i == 2 {
shared_cache.reset_node_cache();
} else if i == 3 {
shared_cache.reset_value_cache();
}
let recorder = Recorder::default();
let local_cache = shared_cache.local_cache();
let mut new_root = root;
{
let mut db = db.clone();
let mut cache = local_cache.as_trie_db_cache(root);
let mut recorder = recorder.as_trie_recorder();
let mut trie = TrieDBMutBuilder::<Layout>::from_existing(&mut db, &mut new_root)
.with_cache(&mut cache)
.with_recorder(&mut recorder)
.build();
for (key, value) in DATA_TO_ADD {
trie.insert(key, value).unwrap();
}
}
let storage_proof = recorder.drain_storage_proof();
let mut memory_db: MemoryDB = storage_proof.into_memory_db();
let mut proof_root = root;
{
let mut trie =
TrieDBMutBuilder::<Layout>::from_existing(&mut memory_db, &mut proof_root)
.build();
for (key, value) in DATA_TO_ADD {
trie.insert(key, value).unwrap();
}
}
assert_eq!(new_root, proof_root)
}
}
#[test]
fn cache_lru_works() {
let (db, root) = create_trie();
let shared_cache = Cache::new(CACHE_SIZE);
{
let local_cache = shared_cache.local_cache();
let mut cache = local_cache.as_trie_db_cache(root);
let trie = TrieDBBuilder::<Layout>::new(&db, &root).with_cache(&mut cache).build();
for (k, _) in TEST_DATA {
trie.get(k).unwrap().unwrap();
}
}
// Check that all items are there.
assert!(shared_cache
.read_lock_inner()
.value_cache()
.lru
.iter()
.map(|d| d.0)
.all(|l| TEST_DATA.iter().any(|d| l.storage_key().unwrap() == d.0)));
{
let local_cache = shared_cache.local_cache();
let mut cache = local_cache.as_trie_db_cache(root);
let trie = TrieDBBuilder::<Layout>::new(&db, &root).with_cache(&mut cache).build();
for (k, _) in TEST_DATA.iter().take(2) {
trie.get(k).unwrap().unwrap();
}
}
// Ensure that the accessed items are most recently used items of the shared value cache.
assert!(shared_cache
.read_lock_inner()
.value_cache()
.lru
.iter()
.take(2)
.map(|d| d.0)
.all(|l| { TEST_DATA.iter().take(2).any(|d| l.storage_key().unwrap() == d.0) }));
let most_recently_used_nodes = shared_cache
.read_lock_inner()
.node_cache()
.lru
.iter()
.map(|d| *d.0)
.collect::<Vec<_>>();
// Delete the value cache, so that we access the nodes.
shared_cache.reset_value_cache();
{
let local_cache = shared_cache.local_cache();
let mut cache = local_cache.as_trie_db_cache(root);
let trie = TrieDBBuilder::<Layout>::new(&db, &root).with_cache(&mut cache).build();
for (k, _) in TEST_DATA.iter().take(2) {
trie.get(k).unwrap().unwrap();
}
}
// Ensure that the most recently used nodes changed as well.
assert_ne!(
most_recently_used_nodes,
shared_cache
.read_lock_inner()
.node_cache()
.lru
.iter()
.map(|d| *d.0)
.collect::<Vec<_>>()
);
}
#[test]
fn cache_respects_bounds() {
let (mut db, root) = create_trie();
let shared_cache = Cache::new(CACHE_SIZE);
{
let local_cache = shared_cache.local_cache();
let mut new_root = root;
{
let mut cache = local_cache.as_trie_db_cache(root);
{
let mut trie =
TrieDBMutBuilder::<Layout>::from_existing(&mut db, &mut new_root)
.with_cache(&mut cache)
.build();
let value = vec![10u8; 100];
// Ensure we add enough data that would overflow the cache.
for i in 0..CACHE_SIZE_RAW / 100 * 2 {
trie.insert(format!("key{}", i).as_bytes(), &value).unwrap();
}
}
cache.merge_into(&local_cache, new_root);
}
}
let node_cache_size = shared_cache.read_lock_inner().node_cache().size_in_bytes;
let value_cache_size = shared_cache.read_lock_inner().value_cache().size_in_bytes;
assert!(node_cache_size + value_cache_size < CACHE_SIZE_RAW);
}
}
+677
View File
@@ -0,0 +1,677 @@
// This file is part of Substrate.
// Copyright (C) 2022 Parity Technologies (UK) Ltd.
// SPDX-License-Identifier: Apache-2.0
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//! Provides the [`SharedNodeCache`], the [`SharedValueCache`] and the [`SharedTrieCache`]
//! that combines both caches and is exported to the outside.
use super::{CacheSize, LOG_TARGET};
use hash_db::Hasher;
use hashbrown::{hash_set::Entry as SetEntry, HashSet};
use lru::LruCache;
use nohash_hasher::BuildNoHashHasher;
use parking_lot::{RwLock, RwLockReadGuard, RwLockWriteGuard};
use std::{
hash::{BuildHasher, Hasher as _},
mem,
sync::Arc,
};
use trie_db::{node::NodeOwned, CachedValue};
lazy_static::lazy_static! {
static ref RANDOM_STATE: ahash::RandomState = ahash::RandomState::default();
}
/// No hashing [`LruCache`].
type NoHashingLruCache<K, T> = lru::LruCache<K, T, BuildNoHashHasher<K>>;
/// The shared node cache.
///
/// Internally this stores all cached nodes in a [`LruCache`]. It ensures that, when updating the
/// cache, the cache stays within its allowed bounds.
pub(super) struct SharedNodeCache<H> {
/// The cached nodes, ordered by least recently used.
pub(super) lru: LruCache<H, NodeOwned<H>>,
/// The size of [`Self::lru`] in bytes.
pub(super) size_in_bytes: usize,
/// The maximum cache size of [`Self::lru`].
maximum_cache_size: CacheSize,
}
impl<H: AsRef<[u8]> + Eq + std::hash::Hash> SharedNodeCache<H> {
/// Create a new instance.
fn new(cache_size: CacheSize) -> Self {
Self { lru: LruCache::unbounded(), size_in_bytes: 0, maximum_cache_size: cache_size }
}
/// Get the node for `key`.
///
/// This doesn't change the least recently used order in the internal [`LruCache`].
pub fn get(&self, key: &H) -> Option<&NodeOwned<H>> {
self.lru.peek(key)
}
/// Update the cache with the `added` nodes and the `accessed` nodes.
///
/// The `added` nodes are the ones that have been collected by doing operations on the trie and
/// now should be stored in the shared cache. The `accessed` nodes are only referenced by hash
/// and represent the nodes that were retrieved from this shared cache through [`Self::get`].
/// These `accessed` nodes are being put to the front of the internal [`LruCache`] like the
/// `added` ones.
///
/// After the internal [`LruCache`] was updated, it is ensured that the internal [`LruCache`] is
/// inside its bounds ([`Self::maximum_cache_size`]).
pub fn update(
&mut self,
added: impl IntoIterator<Item = (H, NodeOwned<H>)>,
accessed: impl IntoIterator<Item = H>,
) {
let update_size_in_bytes = |size_in_bytes: &mut usize, key: &H, node: &NodeOwned<H>| {
if let Some(new_size_in_bytes) =
size_in_bytes.checked_sub(key.as_ref().len() + node.size_in_bytes())
{
*size_in_bytes = new_size_in_bytes;
} else {
*size_in_bytes = 0;
tracing::error!(target: LOG_TARGET, "`SharedNodeCache` underflow detected!",);
}
};
accessed.into_iter().for_each(|key| {
// Access every node in the lru to put it to the front.
self.lru.get(&key);
});
added.into_iter().for_each(|(key, node)| {
self.size_in_bytes += key.as_ref().len() + node.size_in_bytes();
if let Some((r_key, r_node)) = self.lru.push(key, node) {
update_size_in_bytes(&mut self.size_in_bytes, &r_key, &r_node);
}
// Directly ensure that we respect the maximum size. By doing it directly here we ensure
// that the internal map of the [`LruCache`] doesn't grow too much.
while self.maximum_cache_size.exceeds(self.size_in_bytes) {
// This should always be `Some(_)`, otherwise something is wrong!
if let Some((key, node)) = self.lru.pop_lru() {
update_size_in_bytes(&mut self.size_in_bytes, &key, &node);
}
}
});
}
/// Reset the cache.
fn reset(&mut self) {
self.size_in_bytes = 0;
self.lru.clear();
}
}
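The eviction logic in `update` above (add first, then pop least recently used entries while the byte bound is exceeded) can be isolated into a std-only sketch. `BoundedLru` and its fields are illustrative, not the `lru` crate API; a `VecDeque` whose back is "most recently used" stands in for the real `LruCache`:

```rust
use std::collections::VecDeque;

/// Minimal byte-bounded LRU: entries at the back are most recently used.
struct BoundedLru {
    entries: VecDeque<(String, Vec<u8>)>,
    size_in_bytes: usize,
    max_bytes: usize,
}

impl BoundedLru {
    fn push(&mut self, key: String, node: Vec<u8>) {
        // Account for the new entry before checking the bound, like the
        // real code does with `key.as_ref().len() + node.size_in_bytes()`.
        self.size_in_bytes += key.len() + node.len();
        self.entries.push_back((key, node));
        // Evict least recently used entries until the bound is respected,
        // mirroring the `while self.maximum_cache_size.exceeds(..)` loop.
        while self.size_in_bytes > self.max_bytes {
            if let Some((k, n)) = self.entries.pop_front() {
                self.size_in_bytes -= k.len() + n.len();
            }
        }
    }
}

fn main() {
    let mut lru = BoundedLru { entries: VecDeque::new(), size_in_bytes: 0, max_bytes: 16 };
    lru.push("a".into(), vec![0; 7]); // 8 bytes total
    lru.push("b".into(), vec![0; 7]); // 16 bytes total, still within bounds
    lru.push("c".into(), vec![0; 7]); // 24 bytes -> evicts "a"
    assert_eq!(lru.entries.len(), 2);
    assert!(lru.size_in_bytes <= lru.max_bytes);
}
```

Evicting inside `push` keeps the backing map from ever growing far past the bound, which is the same design choice the comment in `update` calls out.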
/// The hash of [`ValueCacheKey`].
#[derive(Eq, Clone, Copy)]
pub struct ValueCacheKeyHash(u64);
impl ValueCacheKeyHash {
pub fn from_hasher_and_storage_key(
mut hasher: impl std::hash::Hasher,
storage_key: &[u8],
) -> Self {
hasher.write(storage_key);
Self(hasher.finish())
}
}
impl PartialEq for ValueCacheKeyHash {
fn eq(&self, other: &Self) -> bool {
self.0 == other.0
}
}
impl std::hash::Hash for ValueCacheKeyHash {
fn hash<Hasher: std::hash::Hasher>(&self, state: &mut Hasher) {
state.write_u64(self.0);
}
}
impl nohash_hasher::IsEnabled for ValueCacheKeyHash {}
/// A type that can only be constructed inside of this file.
///
/// It "requires" that the user has read the docs, to prevent misuse.
#[derive(Eq, PartialEq)]
pub(super) struct IReadTheDocumentation(());
/// The key type that is being used to address a [`CachedValue`].
///
/// This type is implemented as an `enum` to improve the performance when accessing the value
/// cache. The problem is that we would need to calculate the `hash` of [`Self`] up to three times
/// in the worst case when trying to find a value in the value cache: first to look up the local
/// cache, then the shared cache and, if we found it in the shared cache, a third time to insert it
/// into the list of accessed values. To work around this, each variant stores the `hash` that
/// identifies a unique combination of `storage_key` and `storage_root`. However, be aware that
/// this `hash` can lead to collisions when two different `storage_key`/`storage_root` pairs map to
/// the same `hash`. This type also has a `Hash` variant, which should only be used for updating
/// the lru position of a key. When using only the `Hash` variant to get a value from a hash map,
/// a wrong value could be returned when another key in the same hash map maps to the same `hash`.
/// The [`PartialEq`] implementation is written in a way that, when one of the two compared
/// instances is the `Hash` variant, only the hashes are compared. This ensures that we can use the
/// `Hash` variant to bring values up in the lru.
#[derive(Eq)]
pub(super) enum ValueCacheKey<'a, H> {
/// Variant that stores the `storage_key` by value.
Value {
/// The storage root of the trie this key belongs to.
storage_root: H,
/// The key to access the value in the storage.
storage_key: Arc<[u8]>,
/// The hash identifying this combination of `storage_root` and `storage_key`.
hash: ValueCacheKeyHash,
},
/// Variant that only references the `storage_key`.
Ref {
/// The storage root of the trie this key belongs to.
storage_root: H,
/// The key to access the value in the storage.
storage_key: &'a [u8],
/// The hash identifying this combination of `storage_root` and `storage_key`.
hash: ValueCacheKeyHash,
},
/// Variant that only stores the hash that represents the `storage_root` and `storage_key`.
///
/// This should be used with caution, because it can lead to accessing the wrong value in a
/// hash map/set when there exist two different `storage_root`s and `storage_key`s that
/// map to the same `hash`.
Hash { hash: ValueCacheKeyHash, _i_read_the_documentation: IReadTheDocumentation },
}
impl<'a, H> ValueCacheKey<'a, H> {
/// Constructs [`Self::Value`].
pub fn new_value(storage_key: impl Into<Arc<[u8]>>, storage_root: H) -> Self
where
H: AsRef<[u8]>,
{
let storage_key = storage_key.into();
let hash = Self::hash_data(&storage_key, &storage_root);
Self::Value { storage_root, storage_key, hash }
}
/// Constructs [`Self::Ref`].
pub fn new_ref(storage_key: &'a [u8], storage_root: H) -> Self
where
H: AsRef<[u8]>,
{
let storage_key = storage_key.into();
let hash = Self::hash_data(storage_key, &storage_root);
Self::Ref { storage_root, storage_key, hash }
}
/// Returns a hasher prepared to build the final hash to identify [`Self`].
///
/// See [`Self::hash_data`] for building the hash directly.
pub fn hash_partial_data(storage_root: &H) -> impl std::hash::Hasher + Clone
where
H: AsRef<[u8]>,
{
let mut hasher = RANDOM_STATE.build_hasher();
hasher.write(storage_root.as_ref());
hasher
}
/// Hash the `key` and `storage_root` that identify [`Self`].
///
/// Returns a [`ValueCacheKeyHash`] representing the unique hash for the given inputs.
pub fn hash_data(key: &[u8], storage_root: &H) -> ValueCacheKeyHash
where
H: AsRef<[u8]>,
{
let hasher = Self::hash_partial_data(storage_root);
ValueCacheKeyHash::from_hasher_and_storage_key(hasher, key)
}
/// Returns the `hash` that identifies the current instance.
pub fn get_hash(&self) -> ValueCacheKeyHash {
match self {
Self::Value { hash, .. } | Self::Ref { hash, .. } | Self::Hash { hash, .. } => *hash,
}
}
/// Returns the stored storage root.
pub fn storage_root(&self) -> Option<&H> {
match self {
Self::Value { storage_root, .. } | Self::Ref { storage_root, .. } => Some(storage_root),
Self::Hash { .. } => None,
}
}
/// Returns the stored storage key.
pub fn storage_key(&self) -> Option<&[u8]> {
match self {
Self::Ref { storage_key, .. } => Some(&storage_key),
Self::Value { storage_key, .. } => Some(storage_key),
Self::Hash { .. } => None,
}
}
}
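The split between `hash_partial_data` and `hash_data` above amortizes hashing the `storage_root` across many keys: the root is fed into a hasher once, and the partially fed hasher is cloned per key. A std-only sketch of the same idea, using `DefaultHasher` in place of the `ahash`-based random state (the helper names `partial`/`finish` are illustrative):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::Hasher;

/// Feed the storage root into a hasher once; callers clone this partial
/// hasher instead of re-hashing the root for every storage key.
fn partial(storage_root: &[u8]) -> DefaultHasher {
    let mut hasher = DefaultHasher::new();
    hasher.write(storage_root);
    hasher
}

/// Finish the hash for one storage key on top of the partial root hash.
fn finish(mut partial: DefaultHasher, storage_key: &[u8]) -> u64 {
    partial.write(storage_key);
    partial.finish()
}

fn main() {
    let base = partial(b"root");
    // Cloning the partial hasher amortizes hashing the root across keys.
    let h1 = finish(base.clone(), b"key1");
    let h2 = finish(base.clone(), b"key2");
    assert_ne!(h1, h2);
    // Hashing from scratch yields the same result as reusing the partial.
    assert_eq!(h1, finish(partial(b"root"), b"key1"));
}
```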
// Implement manually to ensure that the `Value` and `Hash` are treated equally.
impl<H: std::hash::Hash> std::hash::Hash for ValueCacheKey<'_, H> {
fn hash<Hasher: std::hash::Hasher>(&self, state: &mut Hasher) {
self.get_hash().hash(state)
}
}
impl<H> nohash_hasher::IsEnabled for ValueCacheKey<'_, H> {}
// Implement manually to ensure that the `Value` and `Hash` are treated equally.
impl<H: PartialEq> PartialEq for ValueCacheKey<'_, H> {
fn eq(&self, other: &Self) -> bool {
// First check if `self` or `other` is only the `Hash`.
// Then we only compare the `hash`. So, there could actually be a collision
// if two different storage roots and keys map to the same hash. See the
// [`ValueCacheKey`] docs for more information.
match (self, other) {
(Self::Hash { hash, .. }, Self::Hash { hash: other_hash, .. }) => hash == other_hash,
(Self::Hash { hash, .. }, _) => *hash == other.get_hash(),
(_, Self::Hash { hash: other_hash, .. }) => self.get_hash() == *other_hash,
// If both are not the `Hash` variant, we compare all the values.
_ =>
self.get_hash() == other.get_hash() &&
self.storage_root() == other.storage_root() &&
self.storage_key() == other.storage_key(),
}
}
}
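The hash-only equality above can be sketched in isolation. The following is a minimal, self-contained sketch (the type and hashing scheme are hypothetical stand-ins, not the crate's API): a `HashOnly` probe compares equal to any full key carrying the same precomputed hash, accepting the (unlikely) collision risk, while two full keys compare all their fields.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Minimal sketch of a lookup key that can stand in for a full key
/// by carrying only its precomputed 64-bit hash (hypothetical type).
#[derive(Debug)]
enum LookupKey {
    Full { root: u64, key: Vec<u8>, hash: u64 },
    HashOnly { hash: u64 },
}

impl LookupKey {
    fn new_full(root: u64, key: &[u8]) -> Self {
        let mut h = DefaultHasher::new();
        root.hash(&mut h);
        key.hash(&mut h);
        let hash = h.finish();
        LookupKey::Full { root, key: key.to_vec(), hash }
    }

    fn hash_value(&self) -> u64 {
        match self {
            LookupKey::Full { hash, .. } | LookupKey::HashOnly { hash } => *hash,
        }
    }
}

impl PartialEq for LookupKey {
    fn eq(&self, other: &Self) -> bool {
        match (self, other) {
            // If either side is hash-only, compare hashes alone:
            // collisions are possible but accepted by construction.
            (LookupKey::HashOnly { .. }, _) | (_, LookupKey::HashOnly { .. }) =>
                self.hash_value() == other.hash_value(),
            // Two full keys compare all fields.
            (
                LookupKey::Full { root: r1, key: k1, .. },
                LookupKey::Full { root: r2, key: k2, .. },
            ) => r1 == r2 && k1 == k2,
        }
    }
}

fn main() {
    let full = LookupKey::new_full(1, b"balance");
    let probe = LookupKey::HashOnly { hash: full.hash_value() };
    assert!(probe == full);
    assert!(full == probe);
    // A different full key is unequal even though the hash comparison is cheap.
    let other = LookupKey::new_full(1, b"nonce");
    assert!(full != other);
}
```

The asymmetric `match` mirrors the `ValueCacheKey` implementation above: the `HashOnly` arm is checked first, so mixed comparisons never touch the full key data.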
/// The shared value cache.
///
/// The cache ensures that it stays within the configured size bounds.
pub(super) struct SharedValueCache<H> {
/// The cached values, ordered by least recently used.
pub(super) lru: NoHashingLruCache<ValueCacheKey<'static, H>, CachedValue<H>>,
/// The size of [`Self::lru`] in bytes.
pub(super) size_in_bytes: usize,
/// The maximum cache size of [`Self::lru`].
maximum_cache_size: CacheSize,
/// All known storage keys that are stored in [`Self::lru`].
///
/// This is used to de-duplicate keys in memory that use the
/// same [`ValueCacheKey::storage_key`], but have a different
/// [`ValueCacheKey::storage_root`].
known_storage_keys: HashSet<Arc<[u8]>>,
}
impl<H: Eq + std::hash::Hash + Clone + Copy + AsRef<[u8]>> SharedValueCache<H> {
/// Create a new instance.
fn new(cache_size: CacheSize) -> Self {
Self {
lru: NoHashingLruCache::unbounded_with_hasher(Default::default()),
size_in_bytes: 0,
maximum_cache_size: cache_size,
known_storage_keys: Default::default(),
}
}
/// Get the [`CachedValue`] for `key`.
///
/// This doesn't change the least recently used order in the internal [`LruCache`].
pub fn get<'a>(&'a self, key: &ValueCacheKey<H>) -> Option<&'a CachedValue<H>> {
debug_assert!(
!matches!(key, ValueCacheKey::Hash { .. }),
"`get` can not be called with the `Hash` variant as this may return the wrong value."
);
self.lru.peek(unsafe {
// SAFETY
//
// We need to convert the lifetime to make the compiler happy. However, as
// we only use the `key` for looking up the value, this lifetime conversion is
// safe.
mem::transmute::<&ValueCacheKey<'_, H>, &ValueCacheKey<'static, H>>(key)
})
}
/// Update the cache with the `added` values and the `accessed` values.
///
/// The `added` values are the ones that have been collected by doing operations on the trie and
/// now should be stored in the shared cache. The `accessed` values are only referenced by the
/// [`ValueCacheKeyHash`] and represent the values that were retrieved from this shared cache
/// through [`Self::get`]. These `accessed` values are moved to the front of the internal
/// [`LruCache`], just like the `added` ones.
///
/// After the internal [`LruCache`] was updated, it is ensured that it stays
/// within its bounds ([`Self::maximum_cache_size`]).
pub fn update(
&mut self,
added: impl IntoIterator<Item = (ValueCacheKey<'static, H>, CachedValue<H>)>,
accessed: impl IntoIterator<Item = ValueCacheKeyHash>,
) {
// The base size in memory per ([`ValueCacheKey<H>`], [`CachedValue`]).
let base_size = mem::size_of::<ValueCacheKey<H>>() + mem::size_of::<CachedValue<H>>();
let known_keys_entry_size = mem::size_of::<Arc<[u8]>>();
let update_size_in_bytes =
|size_in_bytes: &mut usize, r_key: Arc<[u8]>, known_keys: &mut HashSet<Arc<[u8]>>| {
// If the `strong_count == 2`, it means this is the last instance of the key.
// One being `r_key` and the other being stored in `known_storage_keys`.
let last_instance = Arc::strong_count(&r_key) == 2;
let key_len = if last_instance {
known_keys.remove(&r_key);
r_key.len() + known_keys_entry_size
} else {
// The key is still in `known_keys`, because it is still used by another
// `ValueCacheKey<H>`.
0
};
if let Some(new_size_in_bytes) = size_in_bytes.checked_sub(key_len + base_size) {
*size_in_bytes = new_size_in_bytes;
} else {
*size_in_bytes = 0;
tracing::error!(target: LOG_TARGET, "`SharedValueCache` underflow detected!",);
}
};
accessed.into_iter().for_each(|key| {
// Access every node in the lru to put it to the front.
// As we are using the `Hash` variant here, it may lead to putting the wrong value to
// the front. However, the only consequence of this is that we may prune a recently
// used value too early.
self.lru.get(&ValueCacheKey::Hash {
hash: key,
_i_read_the_documentation: IReadTheDocumentation(()),
});
});
added.into_iter().for_each(|(key, value)| {
let (storage_root, storage_key, key_hash) = match key {
ValueCacheKey::Hash { .. } => {
// Ignore the hash variant and try the next.
tracing::error!(
target: LOG_TARGET,
"`SharedValueCache::update` was called with a key to add \
that uses the `Hash` variant. This would lead to potential hash collision!",
);
return
},
ValueCacheKey::Ref { storage_key, storage_root, hash } =>
(storage_root, storage_key.into(), hash),
ValueCacheKey::Value { storage_root, storage_key, hash } =>
(storage_root, storage_key, hash),
};
let (size_update, storage_key) =
match self.known_storage_keys.entry(storage_key.clone()) {
SetEntry::Vacant(v) => {
let len = v.get().len();
v.insert();
// If the key was unknown, we need to also take its length and the size of
// the entry of `known_keys` into account.
(len + base_size + known_keys_entry_size, storage_key)
},
SetEntry::Occupied(o) => {
// Key is known
(base_size, o.get().clone())
},
};
self.size_in_bytes += size_update;
if let Some((r_key, _)) = self
.lru
.push(ValueCacheKey::Value { storage_key, storage_root, hash: key_hash }, value)
{
if let ValueCacheKey::Value { storage_key, .. } = r_key {
update_size_in_bytes(
&mut self.size_in_bytes,
storage_key,
&mut self.known_storage_keys,
);
}
}
// Directly ensure that we respect the maximum size. By doing it directly here we
// ensure that the internal map of the [`LruCache`] doesn't grow too much.
while self.maximum_cache_size.exceeds(self.size_in_bytes) {
// This should always be `Some(_)`, otherwise something is wrong!
if let Some((r_key, _)) = self.lru.pop_lru() {
if let ValueCacheKey::Value { storage_key, .. } = r_key {
update_size_in_bytes(
&mut self.size_in_bytes,
storage_key,
&mut self.known_storage_keys,
);
}
}
}
});
}
/// Reset the cache.
fn reset(&mut self) {
self.size_in_bytes = 0;
self.lru.clear();
self.known_storage_keys.clear();
}
}
/// The inner of [`SharedTrieCache`].
pub(super) struct SharedTrieCacheInner<H: Hasher> {
node_cache: SharedNodeCache<H::Out>,
value_cache: SharedValueCache<H::Out>,
}
impl<H: Hasher> SharedTrieCacheInner<H> {
/// Returns a reference to the [`SharedValueCache`].
pub(super) fn value_cache(&self) -> &SharedValueCache<H::Out> {
&self.value_cache
}
/// Returns a mutable reference to the [`SharedValueCache`].
pub(super) fn value_cache_mut(&mut self) -> &mut SharedValueCache<H::Out> {
&mut self.value_cache
}
/// Returns a reference to the [`SharedNodeCache`].
pub(super) fn node_cache(&self) -> &SharedNodeCache<H::Out> {
&self.node_cache
}
/// Returns a mutable reference to the [`SharedNodeCache`].
pub(super) fn node_cache_mut(&mut self) -> &mut SharedNodeCache<H::Out> {
&mut self.node_cache
}
}
/// The shared trie cache.
///
/// It should be instantiated once per node. It holds the trie nodes and values of all
/// operations on the state. To avoid using all available memory, it ensures that it stays
/// within the bounds given via the [`CacheSize`] at startup.
///
/// The instance of this object can be shared between multiple threads.
pub struct SharedTrieCache<H: Hasher> {
inner: Arc<RwLock<SharedTrieCacheInner<H>>>,
}
impl<H: Hasher> Clone for SharedTrieCache<H> {
fn clone(&self) -> Self {
Self { inner: self.inner.clone() }
}
}
impl<H: Hasher> SharedTrieCache<H> {
/// Create a new [`SharedTrieCache`].
pub fn new(cache_size: CacheSize) -> Self {
let (node_cache_size, value_cache_size) = match cache_size {
CacheSize::Maximum(max) => {
// Allocate 20% for the value cache.
let value_cache_size_in_bytes = (max as f32 * 0.20) as usize;
(
CacheSize::Maximum(max - value_cache_size_in_bytes),
CacheSize::Maximum(value_cache_size_in_bytes),
)
},
CacheSize::Unlimited => (CacheSize::Unlimited, CacheSize::Unlimited),
};
Self {
inner: Arc::new(RwLock::new(SharedTrieCacheInner {
node_cache: SharedNodeCache::new(node_cache_size),
value_cache: SharedValueCache::new(value_cache_size),
})),
}
}
/// Create a new [`LocalTrieCache`](super::LocalTrieCache) instance from this shared cache.
pub fn local_cache(&self) -> super::LocalTrieCache<H> {
super::LocalTrieCache {
shared: self.clone(),
node_cache: Default::default(),
value_cache: Default::default(),
shared_node_cache_access: Default::default(),
shared_value_cache_access: Default::default(),
}
}
/// Returns the used memory size of this cache in bytes.
pub fn used_memory_size(&self) -> usize {
let inner = self.inner.read();
let value_cache_size = inner.value_cache.size_in_bytes;
let node_cache_size = inner.node_cache.size_in_bytes;
node_cache_size + value_cache_size
}
/// Reset the node cache.
pub fn reset_node_cache(&self) {
self.inner.write().node_cache.reset();
}
/// Reset the value cache.
pub fn reset_value_cache(&self) {
self.inner.write().value_cache.reset();
}
/// Reset the entire cache.
pub fn reset(&self) {
self.reset_node_cache();
self.reset_value_cache();
}
/// Returns the read locked inner.
pub(super) fn read_lock_inner(&self) -> RwLockReadGuard<'_, SharedTrieCacheInner<H>> {
self.inner.read()
}
/// Returns the write locked inner.
pub(super) fn write_lock_inner(&self) -> RwLockWriteGuard<'_, SharedTrieCacheInner<H>> {
self.inner.write()
}
}
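The 80/20 split performed in `SharedTrieCache::new` above can be shown as a standalone sketch. The `CacheSize` enum below is a simplified stand-in for the crate's type, and `split_cache_size` is a hypothetical free function; only the arithmetic mirrors the implementation.

```rust
/// Simplified stand-in for the crate's `CacheSize` (illustrative only).
#[derive(Debug, Clone, Copy, PartialEq)]
enum CacheSize {
    Maximum(usize),
    Unlimited,
}

/// Split a total cache budget: ~20% for the value cache, the rest for the
/// node cache, mirroring the allocation in `SharedTrieCache::new`.
fn split_cache_size(total: CacheSize) -> (CacheSize, CacheSize) {
    match total {
        CacheSize::Maximum(max) => {
            // Allocate 20% for the value cache; truncation rounds down.
            let value = (max as f32 * 0.20) as usize;
            (CacheSize::Maximum(max - value), CacheSize::Maximum(value))
        },
        CacheSize::Unlimited => (CacheSize::Unlimited, CacheSize::Unlimited),
    }
}

fn main() {
    let (node, value) = split_cache_size(CacheSize::Maximum(100));
    assert_eq!(node, CacheSize::Maximum(80));
    assert_eq!(value, CacheSize::Maximum(20));
}
```

Because the value budget is computed first and subtracted, the two parts always sum exactly to the original maximum, even when the 20% share truncates.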
#[cfg(test)]
mod tests {
use super::*;
use sp_core::H256 as Hash;
#[test]
fn shared_value_cache_works() {
let base_size = mem::size_of::<CachedValue<Hash>>() + mem::size_of::<ValueCacheKey<Hash>>();
let arc_size = mem::size_of::<Arc<[u8]>>();
let mut cache = SharedValueCache::<sp_core::H256>::new(CacheSize::Maximum(
(base_size + arc_size + 10) * 10,
));
let key = vec![0; 10];
let root0 = Hash::repeat_byte(1);
let root1 = Hash::repeat_byte(2);
cache.update(
vec![
(ValueCacheKey::new_value(&key[..], root0), CachedValue::NonExisting),
(ValueCacheKey::new_value(&key[..], root1), CachedValue::NonExisting),
],
vec![],
);
// Ensure that the basics are working
assert_eq!(1, cache.known_storage_keys.len());
assert_eq!(3, Arc::strong_count(cache.known_storage_keys.get(&key[..]).unwrap()));
assert_eq!(base_size * 2 + key.len() + arc_size, cache.size_in_bytes);
// Just accessing a key should not change anything on the size and number of entries.
cache.update(vec![], vec![ValueCacheKey::hash_data(&key[..], &root0)]);
assert_eq!(1, cache.known_storage_keys.len());
assert_eq!(3, Arc::strong_count(cache.known_storage_keys.get(&key[..]).unwrap()));
assert_eq!(base_size * 2 + key.len() + arc_size, cache.size_in_bytes);
// Add 9 other entries and this should move out the key for `root1`.
cache.update(
(1..10)
.map(|i| vec![i; 10])
.map(|key| (ValueCacheKey::new_value(&key[..], root0), CachedValue::NonExisting)),
vec![],
);
assert_eq!(10, cache.known_storage_keys.len());
assert_eq!(2, Arc::strong_count(cache.known_storage_keys.get(&key[..]).unwrap()));
assert_eq!((base_size + key.len() + arc_size) * 10, cache.size_in_bytes);
assert!(matches!(
cache.get(&ValueCacheKey::new_ref(&key, root0)).unwrap(),
CachedValue::<Hash>::NonExisting
));
assert!(cache.get(&ValueCacheKey::new_ref(&key, root1)).is_none());
cache.update(
vec![(ValueCacheKey::new_value(vec![10; 10], root0), CachedValue::NonExisting)],
vec![],
);
assert!(cache.known_storage_keys.get(&key[..]).is_none());
}
#[test]
fn value_cache_key_eq_works() {
let storage_key = &b"something"[..];
let storage_key2 = &b"something2"[..];
let storage_root = Hash::random();
let value = ValueCacheKey::new_value(storage_key, storage_root);
// Ref gets the same hash, but a different storage key
let ref_ =
ValueCacheKey::Ref { storage_root, storage_key: storage_key2, hash: value.get_hash() };
let hash = ValueCacheKey::Hash {
hash: value.get_hash(),
_i_read_the_documentation: IReadTheDocumentation(()),
};
// Ensure that the hash variant is equal to `value`, `ref_` and itself.
assert!(hash == value);
assert!(value == hash);
assert!(hash == ref_);
assert!(ref_ == hash);
assert!(hash == hash);
// But when we compare `value` and `ref_` the different storage key is detected.
assert!(value != ref_);
assert!(ref_ != value);
}
}
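The key de-duplication exercised by `shared_value_cache_works` (one `Arc<[u8]>` per distinct storage key, shared between the cache entries and a `HashSet`, with the `strong_count == 2` check deciding when the last user is gone) can be sketched without the cache types. The `KeyInterner` type and its method names below are illustrative, not part of the crate.

```rust
use std::collections::HashSet;
use std::sync::Arc;

/// Sketch of the key de-duplication used by the shared value cache:
/// every distinct storage key is stored once as an `Arc<[u8]>` in a set,
/// and users clone the `Arc` instead of copying the bytes.
struct KeyInterner {
    known: HashSet<Arc<[u8]>>,
}

impl KeyInterner {
    fn new() -> Self {
        Self { known: HashSet::new() }
    }

    /// Returns a shared handle for `key`, inserting it on first use.
    fn intern(&mut self, key: &[u8]) -> Arc<[u8]> {
        if let Some(existing) = self.known.get(key) {
            existing.clone()
        } else {
            let arc: Arc<[u8]> = Arc::from(key);
            self.known.insert(arc.clone());
            arc
        }
    }

    /// Drop one use of `key`. When only the set and the caller's handle
    /// remain (`strong_count == 2`), the key is removed from the set,
    /// mirroring the `last_instance` check in the cache's size accounting.
    fn release(&mut self, key: Arc<[u8]>) -> bool {
        let last_instance = Arc::strong_count(&key) == 2;
        if last_instance {
            self.known.remove(&key);
        }
        last_instance
    }
}

fn main() {
    let mut interner = KeyInterner::new();
    let a = interner.intern(b"key");
    let b = interner.intern(b"key");
    // Both handles plus the set's copy share one allocation.
    assert_eq!(Arc::strong_count(&a), 3);
    assert_eq!(interner.known.len(), 1);
    // Releasing the first handle keeps the key; another user remains.
    assert!(!interner.release(a));
    // Releasing the last handle removes it from the set.
    assert!(interner.release(b));
    assert!(interner.known.is_empty());
}
```

The `HashSet<Arc<[u8]>>` lookup by `&[u8]` works because `Arc<[u8]>` implements `Borrow<[u8]>`, which is the same property the cache relies on for `known_storage_keys`.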
@@ -15,18 +15,33 @@
// See the License for the specific language governing permissions and
// limitations under the License.
/// Error for trie node decoding.
use sp_std::{boxed::Box, vec::Vec};
/// Error type used for trie related errors.
#[derive(Debug, PartialEq, Eq, Clone)]
#[cfg_attr(feature = "std", derive(thiserror::Error))]
pub enum Error {
pub enum Error<H> {
#[cfg_attr(feature = "std", error("Bad format"))]
BadFormat,
#[cfg_attr(feature = "std", error("Decoding failed: {0}"))]
Decode(#[cfg_attr(feature = "std", source)] codec::Error),
#[cfg_attr(
feature = "std",
error("Recorded key ({0:x?}) access with value as found={1}, but could not confirm with trie.")
)]
InvalidRecording(Vec<u8>, bool),
#[cfg_attr(feature = "std", error("Trie error: {0:?}"))]
TrieError(Box<trie_db::TrieError<H, Self>>),
}
impl From<codec::Error> for Error {
impl<H> From<codec::Error> for Error<H> {
fn from(x: codec::Error) -> Self {
Error::Decode(x)
}
}
impl<H> From<Box<trie_db::TrieError<H, Self>>> for Error<H> {
fn from(x: Box<trie_db::TrieError<H, Self>>) -> Self {
Error::TrieError(x)
}
}
@@ -19,9 +19,13 @@
#![cfg_attr(not(feature = "std"), no_std)]
#[cfg(feature = "std")]
pub mod cache;
mod error;
mod node_codec;
mod node_header;
#[cfg(feature = "std")]
pub mod recorder;
mod storage_proof;
mod trie_codec;
mod trie_stream;
@@ -46,8 +50,8 @@ use trie_db::proof::{generate_proof, verify_proof};
pub use trie_db::{
nibble_ops,
node::{NodePlan, ValuePlan},
CError, DBValue, Query, Recorder, Trie, TrieConfiguration, TrieDBIterator, TrieDBKeyIterator,
TrieLayout, TrieMut,
CError, DBValue, Query, Recorder, Trie, TrieCache, TrieConfiguration, TrieDBIterator,
TrieDBKeyIterator, TrieLayout, TrieMut, TrieRecorder,
};
/// The Substrate format implementation of `TrieStream`.
pub use trie_stream::TrieStream;
@@ -167,11 +171,15 @@ pub type MemoryDB<H> = memory_db::MemoryDB<H, memory_db::HashKey<H>, trie_db::DB
pub type GenericMemoryDB<H, KF> = memory_db::MemoryDB<H, KF, trie_db::DBValue, MemTracker>;
/// Persistent trie database read-access interface for a given hasher.
pub type TrieDB<'a, L> = trie_db::TrieDB<'a, L>;
pub type TrieDB<'a, 'cache, L> = trie_db::TrieDB<'a, 'cache, L>;
/// Builder for creating a [`TrieDB`].
pub type TrieDBBuilder<'a, 'cache, L> = trie_db::TrieDBBuilder<'a, 'cache, L>;
/// Persistent trie database write-access interface for a given hasher.
pub type TrieDBMut<'a, L> = trie_db::TrieDBMut<'a, L>;
/// Builder for creating a [`TrieDBMut`].
pub type TrieDBMutBuilder<'a, L> = trie_db::TrieDBMutBuilder<'a, L>;
/// Querying interface, as in `trie_db` but less generic.
pub type Lookup<'a, L, Q> = trie_db::Lookup<'a, L, Q>;
pub type Lookup<'a, 'cache, L, Q> = trie_db::Lookup<'a, 'cache, L, Q>;
/// Hash type for a trie layout.
pub type TrieHash<L> = <<L as TrieLayout>::Hash as Hasher>::Out;
/// This module is for non generic definition of trie type.
@@ -180,18 +188,23 @@ pub mod trie_types {
use super::*;
/// Persistent trie database read-access interface for a given hasher.
///
/// As this is read-only, V1 and V0 are compatible, thus we always use V1.
pub type TrieDB<'a, H> = super::TrieDB<'a, LayoutV1<H>>;
pub type TrieDB<'a, 'cache, H> = super::TrieDB<'a, 'cache, LayoutV1<H>>;
/// Builder for creating a [`TrieDB`].
pub type TrieDBBuilder<'a, 'cache, H> = super::TrieDBBuilder<'a, 'cache, LayoutV1<H>>;
/// Persistent trie database write-access interface for a given hasher.
pub type TrieDBMutV0<'a, H> = super::TrieDBMut<'a, LayoutV0<H>>;
/// Builder for creating a [`TrieDBMutV0`].
pub type TrieDBMutBuilderV0<'a, H> = super::TrieDBMutBuilder<'a, LayoutV0<H>>;
/// Persistent trie database write-access interface for a given hasher.
pub type TrieDBMutV1<'a, H> = super::TrieDBMut<'a, LayoutV1<H>>;
/// Builder for creating a [`TrieDBMutV1`].
pub type TrieDBMutBuilderV1<'a, H> = super::TrieDBMutBuilder<'a, LayoutV1<H>>;
/// Querying interface, as in `trie_db` but less generic.
pub type LookupV0<'a, H, Q> = trie_db::Lookup<'a, LayoutV0<H>, Q>;
/// Querying interface, as in `trie_db` but less generic.
pub type LookupV1<'a, H, Q> = trie_db::Lookup<'a, LayoutV1<H>, Q>;
pub type Lookup<'a, 'cache, H, Q> = trie_db::Lookup<'a, 'cache, LayoutV1<H>, Q>;
/// As in `trie_db`, but less generic, error type for the crate.
pub type TrieError<H> = trie_db::TrieError<H, super::Error>;
pub type TrieError<H> = trie_db::TrieError<H, super::Error<H>>;
}
/// Create a proof for a subset of keys in a trie.
@@ -213,9 +226,7 @@ where
K: 'a + AsRef<[u8]>,
DB: hash_db::HashDBRef<L::Hash, trie_db::DBValue>,
{
// Can use default layout (read only).
let trie = TrieDB::<L>::new(db, &root)?;
generate_proof(&trie, keys)
generate_proof::<_, L, _, _>(db, &root, keys)
}
/// Verify a set of key-value pairs against a trie root and a proof.
@@ -245,6 +256,8 @@ pub fn delta_trie_root<L: TrieConfiguration, I, A, B, DB, V>(
db: &mut DB,
mut root: TrieHash<L>,
delta: I,
recorder: Option<&mut dyn trie_db::TrieRecorder<TrieHash<L>>>,
cache: Option<&mut dyn TrieCache<L::Codec>>,
) -> Result<TrieHash<L>, Box<TrieError<L>>>
where
I: IntoIterator<Item = (A, B)>,
@@ -254,7 +267,10 @@ where
DB: hash_db::HashDB<L::Hash, trie_db::DBValue>,
{
{
let mut trie = TrieDBMut::<L>::from_existing(db, &mut root)?;
let mut trie = TrieDBMutBuilder::<L>::from_existing(db, &mut root)
.with_optional_cache(cache)
.with_optional_recorder(recorder)
.build();
let mut delta = delta.into_iter().collect::<Vec<_>>();
delta.sort_by(|l, r| l.0.borrow().cmp(r.0.borrow()));
@@ -271,33 +287,32 @@ where
}
/// Read a value from the trie.
pub fn read_trie_value<L, DB>(
pub fn read_trie_value<L: TrieLayout, DB: hash_db::HashDBRef<L::Hash, trie_db::DBValue>>(
db: &DB,
root: &TrieHash<L>,
key: &[u8],
) -> Result<Option<Vec<u8>>, Box<TrieError<L>>>
where
L: TrieConfiguration,
DB: hash_db::HashDBRef<L::Hash, trie_db::DBValue>,
{
TrieDB::<L>::new(db, root)?.get(key).map(|x| x.map(|val| val.to_vec()))
recorder: Option<&mut dyn TrieRecorder<TrieHash<L>>>,
cache: Option<&mut dyn TrieCache<L::Codec>>,
) -> Result<Option<Vec<u8>>, Box<TrieError<L>>> {
TrieDBBuilder::<L>::new(db, root)
.with_optional_cache(cache)
.with_optional_recorder(recorder)
.build()
.get(key)
}
/// Read a value from the trie with given Query.
pub fn read_trie_value_with<L, Q, DB>(
pub fn read_trie_value_with<
L: TrieLayout,
Q: Query<L::Hash, Item = Vec<u8>>,
DB: hash_db::HashDBRef<L::Hash, trie_db::DBValue>,
>(
db: &DB,
root: &TrieHash<L>,
key: &[u8],
query: Q,
) -> Result<Option<Vec<u8>>, Box<TrieError<L>>>
where
L: TrieConfiguration,
Q: Query<L::Hash, Item = DBValue>,
DB: hash_db::HashDBRef<L::Hash, trie_db::DBValue>,
{
TrieDB::<L>::new(db, root)?
.get_with(key, query)
.map(|x| x.map(|val| val.to_vec()))
) -> Result<Option<Vec<u8>>, Box<TrieError<L>>> {
TrieDBBuilder::<L>::new(db, root).build().get_with(key, query)
}
/// Determine the empty trie root.
@@ -328,6 +343,8 @@ pub fn child_delta_trie_root<L: TrieConfiguration, I, A, B, DB, RD, V>(
db: &mut DB,
root_data: RD,
delta: I,
recorder: Option<&mut dyn TrieRecorder<TrieHash<L>>>,
cache: Option<&mut dyn TrieCache<L::Codec>>,
) -> Result<<L::Hash as Hasher>::Out, Box<TrieError<L>>>
where
I: IntoIterator<Item = (A, B)>,
@@ -341,32 +358,8 @@ where
// root is fetched from DB, not writable by runtime, so it's always valid.
root.as_mut().copy_from_slice(root_data.as_ref());
let mut db = KeySpacedDBMut::new(&mut *db, keyspace);
delta_trie_root::<L, _, _, _, _, _>(&mut db, root, delta)
}
/// Record all keys for a given root.
pub fn record_all_keys<L: TrieConfiguration, DB>(
db: &DB,
root: &TrieHash<L>,
recorder: &mut Recorder<TrieHash<L>>,
) -> Result<(), Box<TrieError<L>>>
where
DB: hash_db::HashDBRef<L::Hash, trie_db::DBValue>,
{
let trie = TrieDB::<L>::new(db, root)?;
let iter = trie.iter()?;
for x in iter {
let (key, _) = x?;
// there's currently no API like iter_with()
// => use iter to enumerate all keys AND lookup each
// key using get_with
trie.get_with(&key, &mut *recorder)?;
}
Ok(())
let mut db = KeySpacedDBMut::new(db, keyspace);
delta_trie_root::<L, _, _, _, _, _>(&mut db, root, delta, recorder, cache)
}
/// Read a value from the child trie.
@@ -375,12 +368,39 @@ pub fn read_child_trie_value<L: TrieConfiguration, DB>(
db: &DB,
root: &TrieHash<L>,
key: &[u8],
recorder: Option<&mut dyn TrieRecorder<TrieHash<L>>>,
cache: Option<&mut dyn TrieCache<L::Codec>>,
) -> Result<Option<Vec<u8>>, Box<TrieError<L>>>
where
DB: hash_db::HashDBRef<L::Hash, trie_db::DBValue>,
{
let db = KeySpacedDB::new(db, keyspace);
TrieDB::<L>::new(&db, root)?.get(key).map(|x| x.map(|val| val.to_vec()))
TrieDBBuilder::<L>::new(&db, &root)
.with_optional_recorder(recorder)
.with_optional_cache(cache)
.build()
.get(key)
.map(|x| x.map(|val| val.to_vec()))
}
/// Read a hash from the child trie.
pub fn read_child_trie_hash<L: TrieConfiguration, DB>(
keyspace: &[u8],
db: &DB,
root: &TrieHash<L>,
key: &[u8],
recorder: Option<&mut dyn TrieRecorder<TrieHash<L>>>,
cache: Option<&mut dyn TrieCache<L::Codec>>,
) -> Result<Option<TrieHash<L>>, Box<TrieError<L>>>
where
DB: hash_db::HashDBRef<L::Hash, trie_db::DBValue>,
{
let db = KeySpacedDB::new(db, keyspace);
TrieDBBuilder::<L>::new(&db, &root)
.with_optional_recorder(recorder)
.with_optional_cache(cache)
.build()
.get_hash(key)
}
/// Read a value from the child trie with given query.
@@ -401,20 +421,21 @@ where
root.as_mut().copy_from_slice(root_slice);
let db = KeySpacedDB::new(db, keyspace);
TrieDB::<L>::new(&db, &root)?
TrieDBBuilder::<L>::new(&db, &root)
.build()
.get_with(key, query)
.map(|x| x.map(|val| val.to_vec()))
}
/// `HashDB` implementation that appends an encoded prefix (unique id bytes) in addition to
/// the prefix of every key value.
pub struct KeySpacedDB<'a, DB, H>(&'a DB, &'a [u8], PhantomData<H>);
pub struct KeySpacedDB<'a, DB: ?Sized, H>(&'a DB, &'a [u8], PhantomData<H>);
/// `HashDBMut` implementation that appends an encoded prefix (unique id bytes) in addition to
/// the prefix of every key value.
///
/// Mutable variant of `KeySpacedDB`, see [`KeySpacedDB`].
pub struct KeySpacedDBMut<'a, DB, H>(&'a mut DB, &'a [u8], PhantomData<H>);
pub struct KeySpacedDBMut<'a, DB: ?Sized, H>(&'a mut DB, &'a [u8], PhantomData<H>);
/// Utility function used to merge some byte data (keyspace) and `prefix` data
/// before calling key value database primitives.
@@ -425,20 +446,14 @@ fn keyspace_as_prefix_alloc(ks: &[u8], prefix: Prefix) -> (Vec<u8>, Option<u8>)
(result, prefix.1)
}
impl<'a, DB, H> KeySpacedDB<'a, DB, H>
where
H: Hasher,
{
impl<'a, DB: ?Sized, H> KeySpacedDB<'a, DB, H> {
/// Instantiate a new keyspaced db.
pub fn new(db: &'a DB, ks: &'a [u8]) -> Self {
KeySpacedDB(db, ks, PhantomData)
}
}
impl<'a, DB, H> KeySpacedDBMut<'a, DB, H>
where
H: Hasher,
{
impl<'a, DB: ?Sized, H> KeySpacedDBMut<'a, DB, H> {
/// Instantiate a new keyspaced db.
pub fn new(db: &'a mut DB, ks: &'a [u8]) -> Self {
KeySpacedDBMut(db, ks, PhantomData)
@@ -447,7 +462,7 @@ where
impl<'a, DB, H, T> hash_db::HashDBRef<H, T> for KeySpacedDB<'a, DB, H>
where
DB: hash_db::HashDBRef<H, T>,
DB: hash_db::HashDBRef<H, T> + ?Sized,
H: Hasher,
T: From<&'static [u8]>,
{
@@ -550,7 +565,7 @@ mod tests {
let persistent = {
let mut memdb = MemoryDBMeta::default();
let mut root = Default::default();
let mut t = TrieDBMut::<T>::new(&mut memdb, &mut root);
let mut t = TrieDBMutBuilder::<T>::new(&mut memdb, &mut root).build();
for (x, y) in input.iter().rev() {
t.insert(x, y).unwrap();
}
@@ -564,13 +579,13 @@ mod tests {
let mut memdb = MemoryDBMeta::default();
let mut root = Default::default();
{
let mut t = TrieDBMut::<T>::new(&mut memdb, &mut root);
let mut t = TrieDBMutBuilder::<T>::new(&mut memdb, &mut root).build();
for (x, y) in input.clone() {
t.insert(x, y).unwrap();
}
}
{
let t = TrieDB::<T>::new(&memdb, &root).unwrap();
let t = TrieDBBuilder::<T>::new(&memdb, &root).build();
assert_eq!(
input.iter().map(|(i, j)| (i.to_vec(), j.to_vec())).collect::<Vec<_>>(),
t.iter()
@@ -592,7 +607,7 @@ mod tests {
fn default_trie_root() {
let mut db = MemoryDB::default();
let mut root = TrieHash::<LayoutV1>::default();
let mut empty = TrieDBMut::<LayoutV1>::new(&mut db, &mut root);
let mut empty = TrieDBMutBuilder::<LayoutV1>::new(&mut db, &mut root).build();
empty.commit();
let root1 = empty.root().as_ref().to_vec();
let root2: Vec<u8> = LayoutV1::trie_root::<_, Vec<u8>, Vec<u8>>(std::iter::empty())
@@ -695,15 +710,12 @@ mod tests {
check_input(&input);
}
fn populate_trie<'db, T>(
fn populate_trie<'db, T: TrieConfiguration>(
db: &'db mut dyn HashDB<T::Hash, DBValue>,
root: &'db mut TrieHash<T>,
v: &[(Vec<u8>, Vec<u8>)],
) -> TrieDBMut<'db, T>
where
T: TrieConfiguration,
{
let mut t = TrieDBMut::<T>::new(db, root);
) -> TrieDBMut<'db, T> {
let mut t = TrieDBMutBuilder::<T>::new(db, root).build();
for i in 0..v.len() {
let key: &[u8] = &v[i].0;
let val: &[u8] = &v[i].1;
@@ -841,7 +853,7 @@ mod tests {
let mut root = Default::default();
let _ = populate_trie::<Layout>(&mut mdb, &mut root, &pairs);
let trie = TrieDB::<Layout>::new(&mdb, &root).unwrap();
let trie = TrieDBBuilder::<Layout>::new(&mdb, &root).build();
let iter = trie.iter().unwrap();
let mut iter_pairs = Vec::new();
@@ -954,12 +966,16 @@ mod tests {
&mut proof_db.clone(),
storage_root,
valid_delta,
None,
None,
)
.unwrap();
let second_storage_root = delta_trie_root::<LayoutV0, _, _, _, _, _>(
&mut proof_db.clone(),
storage_root,
invalid_delta,
None,
None,
)
.unwrap();
@@ -25,7 +25,7 @@ use sp_std::{borrow::Borrow, marker::PhantomData, ops::Range, vec::Vec};
use trie_db::{
nibble_ops,
node::{NibbleSlicePlan, NodeHandlePlan, NodePlan, Value, ValuePlan},
ChildReference, NodeCodec as NodeCodecT, Partial,
ChildReference, NodeCodec as NodeCodecT,
};
/// Helper struct for trie node decoder. This implements `codec::Input` on a byte slice, while
@@ -85,7 +85,7 @@ where
H: Hasher,
{
const ESCAPE_HEADER: Option<u8> = Some(trie_constants::ESCAPE_COMPACT_HEADER);
type Error = Error;
type Error = Error<H::Out>;
type HashOut = H::Out;
fn hashed_null_node() -> <H as Hasher>::Out {
@@ -185,19 +185,19 @@ where
&[trie_constants::EMPTY_TRIE]
}
fn leaf_node(partial: Partial, value: Value) -> Vec<u8> {
fn leaf_node(partial: impl Iterator<Item = u8>, number_nibble: usize, value: Value) -> Vec<u8> {
let contains_hash = matches!(&value, Value::Node(..));
let mut output = if contains_hash {
partial_encode(partial, NodeKind::HashedValueLeaf)
partial_from_iterator_encode(partial, number_nibble, NodeKind::HashedValueLeaf)
} else {
partial_encode(partial, NodeKind::Leaf)
partial_from_iterator_encode(partial, number_nibble, NodeKind::Leaf)
};
match value {
Value::Inline(value) => {
Compact(value.len() as u32).encode_to(&mut output);
output.extend_from_slice(value);
},
Value::Node(hash, _) => {
Value::Node(hash) => {
debug_assert!(hash.len() == H::LENGTH);
output.extend_from_slice(hash);
},
@@ -244,7 +244,7 @@ where
Compact(value.len() as u32).encode_to(&mut output);
output.extend_from_slice(value);
},
Some(Value::Node(hash, _)) => {
Some(Value::Node(hash)) => {
debug_assert!(hash.len() == H::LENGTH);
output.extend_from_slice(hash);
},
@@ -295,31 +295,6 @@ fn partial_from_iterator_encode<I: Iterator<Item = u8>>(
output
}
/// Encode and allocate node type header (type and size), and partial value.
/// Same as `partial_from_iterator_encode` but uses non encoded `Partial` as input.
fn partial_encode(partial: Partial, node_kind: NodeKind) -> Vec<u8> {
let number_nibble_encoded = (partial.0).0 as usize;
let nibble_count = partial.1.len() * nibble_ops::NIBBLE_PER_BYTE + number_nibble_encoded;
let nibble_count = sp_std::cmp::min(trie_constants::NIBBLE_SIZE_BOUND, nibble_count);
let mut output = Vec::with_capacity(4 + partial.1.len());
match node_kind {
NodeKind::Leaf => NodeHeader::Leaf(nibble_count).encode_to(&mut output),
NodeKind::BranchWithValue => NodeHeader::Branch(true, nibble_count).encode_to(&mut output),
NodeKind::BranchNoValue => NodeHeader::Branch(false, nibble_count).encode_to(&mut output),
NodeKind::HashedValueLeaf =>
NodeHeader::HashedValueLeaf(nibble_count).encode_to(&mut output),
NodeKind::HashedValueBranch =>
NodeHeader::HashedValueBranch(nibble_count).encode_to(&mut output),
};
if number_nibble_encoded > 0 {
output.push(nibble_ops::pad_right((partial.0).1));
}
output.extend_from_slice(partial.1);
output
}
const BITMAP_LENGTH: usize = 2;
/// Radix 16 trie, bitmap encoding implementation,
@@ -329,7 +304,7 @@ const BITMAP_LENGTH: usize = 2;
pub(crate) struct Bitmap(u16);
impl Bitmap {
pub fn decode(mut data: &[u8]) -> Result<Self, Error> {
pub fn decode(mut data: &[u8]) -> Result<Self, codec::Error> {
Ok(Bitmap(u16::decode(&mut data)?))
}
@@ -0,0 +1,284 @@
// This file is part of Substrate.
// Copyright (C) 2021 Parity Technologies (UK) Ltd.
// SPDX-License-Identifier: Apache-2.0
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//! Trie recorder
//!
//! Provides an implementation of the [`TrieRecorder`](trie_db::TrieRecorder) trait. It can be used
//! to record storage accesses to the state to generate a [`StorageProof`].
use crate::{NodeCodec, StorageProof};
use codec::Encode;
use hash_db::Hasher;
use parking_lot::Mutex;
use std::{
collections::HashMap,
marker::PhantomData,
mem,
ops::DerefMut,
sync::{
atomic::{AtomicUsize, Ordering},
Arc,
},
};
use trie_db::{RecordedForKey, TrieAccess};
const LOG_TARGET: &str = "trie-recorder";
/// The internals of [`Recorder`].
struct RecorderInner<H> {
/// The keys for which we have recorded the trie nodes and whether we have recorded up to the value.
recorded_keys: HashMap<Vec<u8>, RecordedForKey>,
/// The encoded nodes we accessed while recording.
accessed_nodes: HashMap<H, Vec<u8>>,
}
impl<H> Default for RecorderInner<H> {
fn default() -> Self {
Self { recorded_keys: Default::default(), accessed_nodes: Default::default() }
}
}
/// The trie recorder.
///
/// It can be used to record accesses to the trie and then to convert them into a [`StorageProof`].
pub struct Recorder<H: Hasher> {
inner: Arc<Mutex<RecorderInner<H::Out>>>,
/// The estimated encoded size of the storage proof this recorder will produce.
///
/// We store this in an atomic to be able to fetch the value while the `inner` may be locked.
encoded_size_estimation: Arc<AtomicUsize>,
}
impl<H: Hasher> Default for Recorder<H> {
fn default() -> Self {
Self { inner: Default::default(), encoded_size_estimation: Arc::new(0.into()) }
}
}
impl<H: Hasher> Clone for Recorder<H> {
fn clone(&self) -> Self {
Self {
inner: self.inner.clone(),
encoded_size_estimation: self.encoded_size_estimation.clone(),
}
}
}
impl<H: Hasher> Recorder<H> {
/// Returns the recorder as [`TrieRecorder`](trie_db::TrieRecorder) compatible type.
pub fn as_trie_recorder(&self) -> impl trie_db::TrieRecorder<H::Out> + '_ {
TrieRecorder::<H, _> {
inner: self.inner.lock(),
encoded_size_estimation: self.encoded_size_estimation.clone(),
_phantom: PhantomData,
}
}
/// Drain the recording into a [`StorageProof`].
///
/// While a recorder can be cloned, all clones share the same internal state. After calling this
/// function, all other instances will have their internal state reset as well.
///
/// If you don't want to drain the recorded state, use [`Self::to_storage_proof`].
///
/// Returns the [`StorageProof`].
pub fn drain_storage_proof(self) -> StorageProof {
let mut recorder = mem::take(&mut *self.inner.lock());
StorageProof::new(recorder.accessed_nodes.drain().map(|(_, v)| v))
}
/// Convert the recording to a [`StorageProof`].
///
/// In contrast to [`Self::drain_storage_proof`], this doesn't consume `self` and doesn't clear
/// the recordings.
///
/// Returns the [`StorageProof`].
pub fn to_storage_proof(&self) -> StorageProof {
let recorder = self.inner.lock();
StorageProof::new(recorder.accessed_nodes.iter().map(|(_, v)| v.clone()))
}
/// Returns the estimated encoded size of the proof.
///
/// The estimation is based on all the nodes that were accessed until now while
/// accessing the trie.
pub fn estimate_encoded_size(&self) -> usize {
self.encoded_size_estimation.load(Ordering::Relaxed)
}
/// Reset the state.
///
/// This discards all recorded data.
pub fn reset(&self) {
mem::take(&mut *self.inner.lock());
self.encoded_size_estimation.store(0, Ordering::Relaxed);
}
}
/// The [`TrieRecorder`](trie_db::TrieRecorder) implementation.
struct TrieRecorder<H: Hasher, I> {
inner: I,
encoded_size_estimation: Arc<AtomicUsize>,
_phantom: PhantomData<H>,
}
impl<H: Hasher, I: DerefMut<Target = RecorderInner<H::Out>>> trie_db::TrieRecorder<H::Out>
for TrieRecorder<H, I>
{
fn record<'b>(&mut self, access: TrieAccess<'b, H::Out>) {
let mut encoded_size_update = 0;
match access {
TrieAccess::NodeOwned { hash, node_owned } => {
tracing::trace!(
target: LOG_TARGET,
hash = ?hash,
"Recording node",
);
self.inner.accessed_nodes.entry(hash).or_insert_with(|| {
let node = node_owned.to_encoded::<NodeCodec<H>>();
encoded_size_update += node.encoded_size();
node
});
},
TrieAccess::EncodedNode { hash, encoded_node } => {
tracing::trace!(
target: LOG_TARGET,
hash = ?hash,
"Recording node",
);
self.inner.accessed_nodes.entry(hash).or_insert_with(|| {
let node = encoded_node.into_owned();
encoded_size_update += node.encoded_size();
node
});
},
TrieAccess::Value { hash, value, full_key } => {
tracing::trace!(
target: LOG_TARGET,
hash = ?hash,
key = ?sp_core::hexdisplay::HexDisplay::from(&full_key),
"Recording value",
);
self.inner.accessed_nodes.entry(hash).or_insert_with(|| {
let value = value.into_owned();
encoded_size_update += value.encoded_size();
value
});
self.inner
.recorded_keys
.entry(full_key.to_vec())
.and_modify(|e| *e = RecordedForKey::Value)
.or_insert(RecordedForKey::Value);
},
TrieAccess::Hash { full_key } => {
tracing::trace!(
target: LOG_TARGET,
key = ?sp_core::hexdisplay::HexDisplay::from(&full_key),
"Recorded hash access for key",
);
// We don't need to update the `encoded_size_update` as the hash was already
// accounted for by the recorded node that holds the hash.
self.inner
.recorded_keys
.entry(full_key.to_vec())
.or_insert(RecordedForKey::Hash);
},
TrieAccess::NonExisting { full_key } => {
tracing::trace!(
target: LOG_TARGET,
key = ?sp_core::hexdisplay::HexDisplay::from(&full_key),
"Recorded non-existing value access for key",
);
// Non-existing access means we recorded all trie nodes up to the value.
// Not the actual value, as it doesn't exist, but all trie nodes to know
// that the value doesn't exist in the trie.
self.inner
.recorded_keys
.entry(full_key.to_vec())
.and_modify(|e| *e = RecordedForKey::Value)
.or_insert(RecordedForKey::Value);
},
};
self.encoded_size_estimation.fetch_add(encoded_size_update, Ordering::Relaxed);
}
fn trie_nodes_recorded_for_key(&self, key: &[u8]) -> RecordedForKey {
self.inner.recorded_keys.get(key).copied().unwrap_or(RecordedForKey::None)
}
}
#[cfg(test)]
mod tests {
use trie_db::{Trie, TrieDBBuilder, TrieDBMutBuilder, TrieHash, TrieMut};
type MemoryDB = crate::MemoryDB<sp_core::Blake2Hasher>;
type Layout = crate::LayoutV1<sp_core::Blake2Hasher>;
type Recorder = super::Recorder<sp_core::Blake2Hasher>;
const TEST_DATA: &[(&[u8], &[u8])] =
&[(b"key1", b"val1"), (b"key2", b"val2"), (b"key3", b"val3"), (b"key4", b"val4")];
fn create_trie() -> (MemoryDB, TrieHash<Layout>) {
let mut db = MemoryDB::default();
let mut root = Default::default();
{
let mut trie = TrieDBMutBuilder::<Layout>::new(&mut db, &mut root).build();
for (k, v) in TEST_DATA {
trie.insert(k, v).expect("Inserts data");
}
}
(db, root)
}
#[test]
fn recorder_works() {
let (db, root) = create_trie();
let recorder = Recorder::default();
{
let mut trie_recorder = recorder.as_trie_recorder();
let trie = TrieDBBuilder::<Layout>::new(&db, &root)
.with_recorder(&mut trie_recorder)
.build();
assert_eq!(TEST_DATA[0].1.to_vec(), trie.get(TEST_DATA[0].0).unwrap().unwrap());
}
let storage_proof = recorder.drain_storage_proof();
let memory_db: MemoryDB = storage_proof.into_memory_db();
// Check that we recorded the required data
let trie = TrieDBBuilder::<Layout>::new(&memory_db, &root).build();
assert_eq!(TEST_DATA[0].1.to_vec(), trie.get(TEST_DATA[0].0).unwrap().unwrap());
}
}
@@ -88,7 +88,7 @@ impl StorageProof {
 	pub fn into_compact_proof<H: Hasher>(
 		self,
 		root: H::Out,
-	) -> Result<CompactProof, crate::CompactProofError<H::Out, crate::Error>> {
+	) -> Result<CompactProof, crate::CompactProofError<H::Out, crate::Error<H::Out>>> {
 		crate::encode_compact::<Layout<H>>(self, root)
 	}
@@ -130,7 +130,7 @@ impl CompactProof {
 	pub fn to_storage_proof<H: Hasher>(
 		&self,
 		expected_root: Option<&H::Out>,
-	) -> Result<(StorageProof, H::Out), crate::CompactProofError<H::Out, crate::Error>> {
+	) -> Result<(StorageProof, H::Out), crate::CompactProofError<H::Out, crate::Error<H::Out>>> {
 		let mut db = crate::MemoryDB::<H>::new(&[]);
 		let root = crate::decode_compact::<Layout<H>, _, _>(
 			&mut db,
@@ -157,7 +157,8 @@ impl CompactProof {
 	pub fn to_memory_db<H: Hasher>(
 		&self,
 		expected_root: Option<&H::Out>,
-	) -> Result<(crate::MemoryDB<H>, H::Out), crate::CompactProofError<H::Out, crate::Error>> {
+	) -> Result<(crate::MemoryDB<H>, H::Out), crate::CompactProofError<H::Out, crate::Error<H::Out>>>
+	{
 		let mut db = crate::MemoryDB::<H>::new(&[]);
 		let root = crate::decode_compact::<Layout<H>, _, _>(
 			&mut db,
@@ -78,7 +78,7 @@ where
 	let mut child_tries = Vec::new();
 	{
 		// fetch child trie roots
-		let trie = crate::TrieDB::<L>::new(db, &top_root)?;
+		let trie = crate::TrieDBBuilder::<L>::new(db, &top_root).build();
 		let mut iter = trie.iter()?;
@@ -159,7 +159,7 @@ where
 	let mut child_tries = Vec::new();
 	let partial_db = proof.into_memory_db();
 	let mut compact_proof = {
-		let trie = crate::TrieDB::<L>::new(&partial_db, &root)?;
+		let trie = crate::TrieDBBuilder::<L>::new(&partial_db, &root).build();
 		let mut iter = trie.iter()?;
@@ -197,7 +197,7 @@ where
 			continue
 		}
-		let trie = crate::TrieDB::<L>::new(&partial_db, &child_root)?;
+		let trie = crate::TrieDBBuilder::<L>::new(&partial_db, &child_root).build();
 		let child_proof = trie_db::encode_compact::<L>(&trie)?;
 		compact_proof.extend(child_proof);
@@ -41,7 +41,7 @@ pallet-timestamp = { version = "4.0.0-dev", default-features = false, path = "..
 sp-finality-grandpa = { version = "4.0.0-dev", default-features = false, path = "../../primitives/finality-grandpa" }
 sp-trie = { version = "6.0.0", default-features = false, path = "../../primitives/trie" }
 sp-transaction-pool = { version = "4.0.0-dev", default-features = false, path = "../../primitives/transaction-pool" }
-trie-db = { version = "0.23.1", default-features = false }
+trie-db = { version = "0.24.0", default-features = false }
 parity-util-mem = { version = "0.11.0", default-features = false, features = ["primitive-types"] }
 sc-service = { version = "0.10.0-dev", default-features = false, optional = true, features = ["test-helpers"], path = "../../client/service" }
 sp-state-machine = { version = "0.12.0", default-features = false, path = "../../primitives/state-machine" }
@@ -29,7 +29,10 @@ use sp_std::{marker::PhantomData, prelude::*};
 use sp_application_crypto::{ecdsa, ed25519, sr25519, RuntimeAppPublic};
 use sp_core::{offchain::KeyTypeId, OpaqueMetadata, RuntimeDebug};
-use sp_trie::{trie_types::TrieDB, PrefixedMemoryDB, StorageProof};
+use sp_trie::{
+	trie_types::{TrieDBBuilder, TrieDBMutBuilderV1},
+	PrefixedMemoryDB, StorageProof,
+};
 use trie_db::{Trie, TrieMut};
 use cfg_if::cfg_if;
@@ -59,8 +62,6 @@ use sp_runtime::{
 #[cfg(any(feature = "std", test))]
 use sp_version::NativeVersion;
 use sp_version::RuntimeVersion;
-// bench on latest state.
-use sp_trie::trie_types::TrieDBMutV1 as TrieDBMut;
 // Ensure Babe and Aura use the same crypto to simplify things a bit.
 pub use sp_consensus_babe::{AllowedSlots, AuthorityId, Slot};
@@ -663,25 +664,19 @@ fn code_using_trie() -> u64 {
 	let mut mdb = PrefixedMemoryDB::default();
 	let mut root = sp_std::default::Default::default();
-	let _ = {
-		let mut t = TrieDBMut::<Hashing>::new(&mut mdb, &mut root);
+	{
+		let mut t = TrieDBMutBuilderV1::<Hashing>::new(&mut mdb, &mut root).build();
 		for (key, value) in &pairs {
 			if t.insert(key, value).is_err() {
 				return 101
 			}
 		}
-		t
-	};
-	if let Ok(trie) = TrieDB::<Hashing>::new(&mdb, &root) {
-		if let Ok(iter) = trie.iter() {
-			iter.flatten().count() as u64
-		} else {
-			102
-		}
-	} else {
-		103
-	}
+	}
+	let trie = TrieDBBuilder::<Hashing>::new(&mdb, &root).build();
+	let res = if let Ok(iter) = trie.iter() { iter.flatten().count() as u64 } else { 102 };
+	res
 }
impl_opaque_keys! {
@@ -1277,7 +1272,7 @@ fn test_read_child_storage() {
 fn test_witness(proof: StorageProof, root: crate::Hash) {
 	use sp_externalities::Externalities;
 	let db: sp_trie::MemoryDB<crate::Hashing> = proof.into_memory_db();
-	let backend = sp_state_machine::TrieBackend::<_, crate::Hashing>::new(db, root);
+	let backend = sp_state_machine::TrieBackendBuilder::<_, crate::Hashing>::new(db, root).build();
 	let mut overlay = sp_state_machine::OverlayedChanges::default();
 	let mut cache = sp_state_machine::StorageTransactionCache::<_, _>::default();
 	let mut ext = sp_state_machine::Ext::new(
@@ -1354,7 +1349,8 @@ mod tests {
 	let mut root = crate::Hash::default();
 	let mut mdb = sp_trie::MemoryDB::<crate::Hashing>::default();
 	{
-		let mut trie = sp_trie::trie_types::TrieDBMutV1::new(&mut mdb, &mut root);
+		let mut trie =
+			sp_trie::trie_types::TrieDBMutBuilderV1::new(&mut mdb, &mut root).build();
 		trie.insert(b"value3", &[142]).expect("insert failed");
 		trie.insert(b"value4", &[124]).expect("insert failed");
 	};
@@ -1364,7 +1360,8 @@ mod tests {
 #[test]
 fn witness_backend_works() {
 	let (db, root) = witness_backend();
-	let backend = sp_state_machine::TrieBackend::<_, crate::Hashing>::new(db, root);
+	let backend =
+		sp_state_machine::TrieBackendBuilder::<_, crate::Hashing>::new(db, root).build();
 	let proof = sp_state_machine::prove_read(backend, vec![b"value3"]).unwrap();
 	let client =
 		TestClientBuilder::new().set_execution_strategy(ExecutionStrategy::Both).build();
@@ -93,9 +93,9 @@ impl CliConfiguration for BenchmarkCmd {
 		}
 	}
-	fn state_cache_size(&self) -> Result<usize> {
+	fn trie_cache_maximum_size(&self) -> Result<Option<usize>> {
 		unwrap_cmd! {
-			self, cmd, cmd.state_cache_size()
+			self, cmd, cmd.trie_cache_maximum_size()
 		}
 	}
@@ -96,9 +96,11 @@ pub struct StorageParams {
 	#[clap(long, possible_values = ["0", "1"])]
 	pub state_version: u8,
-	/// State cache size.
-	#[clap(long, default_value = "0")]
-	pub state_cache_size: usize,
+	/// Trie cache size in bytes.
+	///
+	/// Providing `0` will disable the cache.
+	#[clap(long, default_value = "1024")]
+	pub trie_cache_size: usize,
 	/// Include child trees in benchmark.
 	#[clap(long)]
@@ -211,7 +213,11 @@ impl CliConfiguration for StorageCmd {
 		Some(&self.pruning_params)
 	}
-	fn state_cache_size(&self) -> Result<usize> {
-		Ok(self.params.state_cache_size)
+	fn trie_cache_maximum_size(&self) -> Result<Option<usize>> {
+		if self.params.trie_cache_size == 0 {
+			Ok(None)
+		} else {
+			Ok(Some(self.params.trie_cache_size))
+		}
 	}
 }
@@ -17,7 +17,7 @@
 use sc_cli::Result;
 use sc_client_api::{Backend as ClientBackend, StorageProvider, UsageProvider};
-use sc_client_db::{DbHash, DbState};
+use sc_client_db::{DbHash, DbState, DbStateBuilder};
 use sp_api::StateBackend;
 use sp_blockchain::HeaderBackend;
 use sp_database::{ColumnId, Transaction};
@@ -60,7 +60,7 @@ impl StorageCmd {
 	let block = BlockId::Number(client.usage_info().chain.best_number);
 	let header = client.header(block)?.ok_or("Header not found")?;
 	let original_root = *header.state_root();
-	let trie = DbState::<Block>::new(storage.clone(), original_root);
+	let trie = DbStateBuilder::<Block>::new(storage.clone(), original_root).build();
 	info!("Preparing keys from block {}", block);
 	// Load all KV pairs and randomly shuffle them.
@@ -23,7 +23,7 @@ sp-io = { path = "../../../../primitives/io" }
 sp-core = { path = "../../../../primitives/core" }
 sp-state-machine = { path = "../../../../primitives/state-machine" }
 sp-trie = { path = "../../../../primitives/trie" }
-trie-db = { version = "0.23.1" }
+trie-db = "0.24.0"
 jsonrpsee = { version = "0.15.1", features = ["server", "macros"] }
@@ -31,8 +31,11 @@ use sp_core::{
 	storage::{ChildInfo, ChildType, PrefixedStorageKey},
 	Hasher,
 };
-use sp_state_machine::Backend;
-use sp_trie::{trie_types::TrieDB, KeySpacedDB, Trie};
+use sp_state_machine::backend::AsTrieBackend;
+use sp_trie::{
+	trie_types::{TrieDB, TrieDBBuilder},
+	KeySpacedDB, Trie,
+};
 use trie_db::{
 	node::{NodePlan, ValuePlan},
 	TrieDBNodeIterator,
@@ -41,9 +44,9 @@ use trie_db::{
 fn count_migrate<'a, H: Hasher>(
 	storage: &'a dyn trie_db::HashDBRef<H, Vec<u8>>,
 	root: &'a H::Out,
-) -> std::result::Result<(u64, TrieDB<'a, H>), String> {
+) -> std::result::Result<(u64, TrieDB<'a, 'a, H>), String> {
 	let mut nb = 0u64;
-	let trie = TrieDB::new(storage, root).map_err(|e| format!("TrieDB creation error: {}", e))?;
+	let trie = TrieDBBuilder::new(storage, root).build();
 	let iter_node =
 		TrieDBNodeIterator::new(&trie).map_err(|e| format!("TrieDB node iterator error: {}", e))?;
 	for node in iter_node {
@@ -68,13 +71,9 @@ pub fn migration_status<H, B>(backend: &B) -> std::result::Result<(u64, u64), St
 where
 	H: Hasher,
 	H::Out: codec::Codec,
-	B: Backend<H>,
+	B: AsTrieBackend<H>,
 {
-	let trie_backend = if let Some(backend) = backend.as_trie_backend() {
-		backend
-	} else {
-		return Err("No access to trie from backend.".to_string())
-	};
+	let trie_backend = backend.as_trie_backend();
 	let essence = trie_backend.essence();
 	let (nb_to_migrate, trie) = count_migrate(essence, essence.root())?;
@@ -293,7 +293,7 @@ use sp_runtime::{
 	traits::{Block as BlockT, NumberFor},
 	DeserializeOwned,
 };
-use sp_state_machine::{InMemoryProvingBackend, OverlayedChanges, StateMachine};
+use sp_state_machine::{OverlayedChanges, StateMachine, TrieBackendBuilder};
 use std::{fmt::Debug, path::PathBuf, str::FromStr};
 mod commands;
@@ -746,9 +746,11 @@ pub(crate) fn state_machine_call_with_proof<Block: BlockT, D: NativeExecutionDis
 	let mut changes = Default::default();
 	let backend = ext.backend.clone();
-	let proving_backend = InMemoryProvingBackend::new(&backend);
-	let runtime_code_backend = sp_state_machine::backend::BackendRuntimeCode::new(&backend);
+	let proving_backend =
+		TrieBackendBuilder::wrap(&backend).with_recorder(Default::default()).build();
+	let runtime_code_backend = sp_state_machine::backend::BackendRuntimeCode::new(&proving_backend);
 	let runtime_code = runtime_code_backend.runtime_code()?;
 	let pre_root = *backend.root();
@@ -767,7 +769,9 @@ pub(crate) fn state_machine_call_with_proof<Block: BlockT, D: NativeExecutionDis
 	.map_err(|e| format!("failed to execute {}: {}", method, e))
 	.map_err::<sc_cli::Error, _>(Into::into)?;
-	let proof = proving_backend.extract_proof();
+	let proof = proving_backend
+		.extract_proof()
+		.expect("A recorder was set and thus, a storage proof can be extracted; qed");
 	let proof_size = proof.encoded_size();
 	let compact_proof = proof
 		.clone()