Introduce trie level cache and remove state cache (#11407)

* trie state cache

* Also cache missing access on read.

* fix comp

* bis

* fix

* use has_lru

* remove local storage cache on size 0.

* No cache.

* local cache only

* trie cache and local cache

* storage cache (with local)

* trie cache no local cache

* Add state access benchmark

* Remove warnings etc

* Add trie cache benchmark

* No extra "clone" required

* Change benchmark to use multiple blocks

* Use patches

* Integrate shitty implementation

* More stuff

* Revert "Merge branch 'master' into trie_state_cache"

This reverts commit 947cd8e6d43fced10e21b76d5b92ffa57b57c318, reversing
changes made to 29ff036463.

* Improve benchmark

* Adapt to latest changes

* Adapt to changes in trie

* Add a test that uses iterator

* Start fixing it

* Remove obsolete file

* Make it compile

* Start rewriting the trie node cache

* More work on the cache

* More docs and code etc

* Make data cache an optional

* Tests

* Remove debug stuff

* Recorder

* Some docs and a simple test for the recorder

* Compile fixes

* Make it compile

* More fixes

* More fixes

* Fix fix fix

* Make sure cache and recorder work together for basic stuff

* Test that data caching and recording works

* Test `TrieDBMut` with caching

* Try something

* Fixes, fixes, fixes

* Forward the recorder

* Make it compile

* Use recorder in more places

* Switch to new `with_optional_recorder` fn

* Refactor and cleanups

* Move `ProvingBackend` tests

* Simplify

* Move over all functionality to the essence

* Fix compilation

* Implement estimate encoded size for StorageProof

* Start using the `cache` everywhere

* Use the cache everywhere

* Fix compilation

* Fix tests

* Adds `TrieBackendBuilder` and enhances the tests

* Ensure that recorder drain checks that values are found as expected

* Switch over to `TrieBackendBuilder`

* Start fixing the problem with child tries and recording

* Fix recording of child tries

* Make it compile

* Overwrite `storage_hash` in `TrieBackend`

* Add `storage_cache` to the benchmarks

* Fix `no_std` build

* Speed up cache lookup

* Extend the state access benchmark to also hash a runtime

* Fix build

* Fix compilation

* Rewrite value cache

* Add lru cache

* Ensure that the cache lru works

* Value cache should not be optional

* Add support for keeping the shared node cache in its bounds

* Make the cache configurable

* Check that the cache respects the bounds

* Adds a new test

* Fixes

* Docs and some renamings

* More docs

* Start using the new recorder

* Fix more code

* Take `self` argument

* Remove warnings

* Fix benchmark

* Fix accounting

* Rip off the state cache

* Start fixing fallout after removing the state cache

* Make it compile after trie changes

* Fix test

* Add some logging

* Some docs

* Some fixups and clean ups

* Fix benchmark

* Remove unneeded file

* Use git for patching

* Make CI happy

* Update primitives/trie/Cargo.toml

Co-authored-by: Koute <koute@users.noreply.github.com>

* Update primitives/state-machine/src/trie_backend.rs

Co-authored-by: cheme <emericchevalier.pro@gmail.com>

* Introduce new `AsTrieBackend` trait

* Make the LocalTrieCache not clonable

* Make it work in no_std and add docs

* Remove duplicate dependency

* Switch to ahash for better performance

* Speedup value cache merge

* Output errors on underflow

* Ensure the internal LRU map doesn't grow too much

* Use const fn to calculate the value cache element size

* Remove cache configuration

* Fix

* Clear the cache in between for more testing

* Try to come up with a failing test case

* Make the test fail

* Fix the child trie recording

* Make everything compile after the changes to trie

* Adapt to latest trie-db changes

* Fix on stable

* Update primitives/trie/src/cache.rs

Co-authored-by: cheme <emericchevalier.pro@gmail.com>

* Fix wrong merge

* Docs

* Fix warnings

* Cargo.lock

* Bump pin-project

* Fix warnings

* Switch to released crate version

* More fixes

* Make clippy and rustdocs happy

* More clippy

* Print error when using deprecated `--state-cache-size`

* 🤦

* Fixes

* Fix storage_hash linkings

* Update client/rpc/src/dev/mod.rs

Co-authored-by: Arkadiy Paronyan <arkady.paronyan@gmail.com>

* Review feedback

* encode bound

* Rework the shared value cache

Instead of using a `u64` to represent the key we now use an `Arc<[u8]>`. The keys are additionally
stored in an extra `HashSet` to de-duplicate them across different storage roots. When the last
usage of a key is dropped from the LRU, the key is also removed from the `HashSet`.
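The de-duplication scheme described above can be sketched as follows. This is a minimal illustration, not the actual `sp-trie` implementation: the names `KeyInterner`, `intern`, and `gc` are hypothetical, and the real cache additionally tracks LRU order and per-storage-root entries.

```rust
use std::collections::HashSet;
use std::sync::Arc;

/// Sketch of de-duplicating value-cache keys across storage roots by
/// interning each raw key as an `Arc<[u8]>` held in a shared `HashSet`.
#[derive(Default)]
struct KeyInterner {
    known_keys: HashSet<Arc<[u8]>>,
}

impl KeyInterner {
    /// Return a shared handle for `key`, reusing the existing allocation
    /// when the same bytes were interned before.
    fn intern(&mut self, key: &[u8]) -> Arc<[u8]> {
        // `Arc<[u8]>: Borrow<[u8]>`, so we can look up by the raw slice.
        if let Some(existing) = self.known_keys.get(key) {
            existing.clone()
        } else {
            let arc: Arc<[u8]> = Arc::from(key);
            self.known_keys.insert(arc.clone());
            arc
        }
    }

    /// Drop keys that no cache entry references any more, i.e. keys whose
    /// only remaining `Arc` is the interner's own copy.
    fn gc(&mut self) {
        self.known_keys.retain(|k| Arc::strong_count(k) > 1);
    }
}
```

Because all cache entries for the same key share one `Arc`, the key bytes are stored once regardless of how many storage roots reference them, and `Arc::strong_count` gives a cheap liveness check when evicting.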

* Improve the cache by merging the old and new solutions

* FMT

* Please stop coming back all the time :crying:

* Update primitives/trie/src/cache/shared_cache.rs

Co-authored-by: Arkadiy Paronyan <arkady.paronyan@gmail.com>

* Fixes

* Make clippy happy

* Ensure we don't deadlock

* Only use one lock to simplify the code

* Do not depend on `Hasher`

* Fix tests

* FMT

* Clippy 🤦

Co-authored-by: cheme <emericchevalier.pro@gmail.com>
Co-authored-by: Koute <koute@users.noreply.github.com>
Co-authored-by: Arkadiy Paronyan <arkady.paronyan@gmail.com>
This commit is contained in:
Bastian Köcher, 2022-08-18 20:59:22 +02:00, committed by GitHub
parent d46f6f0d34, commit 73d9ae3284
55 changed files with 3977 additions and 1344 deletions
@@ -17,9 +17,11 @@
//! State machine backends. These manage the code and storage of contracts.
#[cfg(feature = "std")]
use crate::trie_backend::TrieBackend;
use crate::{
trie_backend::TrieBackend, trie_backend_essence::TrieBackendStorage, ChildStorageCollection,
StorageCollection, StorageKey, StorageValue, UsageInfo,
trie_backend_essence::TrieBackendStorage, ChildStorageCollection, StorageCollection,
StorageKey, StorageValue, UsageInfo,
};
use codec::Encode;
use hash_db::Hasher;
@@ -46,9 +48,7 @@ pub trait Backend<H: Hasher>: sp_std::fmt::Debug {
fn storage(&self, key: &[u8]) -> Result<Option<StorageValue>, Self::Error>;
/// Get keyed storage value hash or None if there is nothing associated.
fn storage_hash(&self, key: &[u8]) -> Result<Option<H::Out>, Self::Error> {
self.storage(key).map(|v| v.map(|v| H::hash(&v)))
}
fn storage_hash(&self, key: &[u8]) -> Result<Option<H::Out>, Self::Error>;
/// Get keyed child storage or None if there is nothing associated.
fn child_storage(
@@ -62,13 +62,11 @@ pub trait Backend<H: Hasher>: sp_std::fmt::Debug {
&self,
child_info: &ChildInfo,
key: &[u8],
) -> Result<Option<H::Out>, Self::Error> {
self.child_storage(child_info, key).map(|v| v.map(|v| H::hash(&v)))
}
) -> Result<Option<H::Out>, Self::Error>;
/// true if a key exists in storage.
fn exists_storage(&self, key: &[u8]) -> Result<bool, Self::Error> {
Ok(self.storage(key)?.is_some())
Ok(self.storage_hash(key)?.is_some())
}
/// true if a key exists in child storage.
@@ -77,7 +75,7 @@ pub trait Backend<H: Hasher>: sp_std::fmt::Debug {
child_info: &ChildInfo,
key: &[u8],
) -> Result<bool, Self::Error> {
Ok(self.child_storage(child_info, key)?.is_some())
Ok(self.child_storage_hash(child_info, key)?.is_some())
}
/// Return the next key in storage in lexicographic order or `None` if there is no value.
@@ -175,10 +173,6 @@ pub trait Backend<H: Hasher>: sp_std::fmt::Debug {
all
}
/// Try convert into trie backend.
fn as_trie_backend(&self) -> Option<&TrieBackend<Self::TrieBackendStorage, H>> {
None
}
/// Calculate the storage root, with given delta over what is already stored
/// in the backend, and produce a "transaction" that can be used to commit.
/// Does include child storage updates.
@@ -273,6 +267,16 @@ pub trait Backend<H: Hasher>: sp_std::fmt::Debug {
}
}
/// Something that can be converted into a [`TrieBackend`].
#[cfg(feature = "std")]
pub trait AsTrieBackend<H: Hasher, C = sp_trie::cache::LocalTrieCache<H>> {
/// Type of trie backend storage.
type TrieBackendStorage: TrieBackendStorage<H>;
/// Return the type as [`TrieBackend`].
fn as_trie_backend(&self) -> &TrieBackend<Self::TrieBackendStorage, H, C>;
}
/// Trait that allows consolidate two transactions together.
pub trait Consolidate {
/// Consolidate two transactions into one.
@@ -19,6 +19,7 @@
use crate::{
backend::Backend, trie_backend::TrieBackend, StorageCollection, StorageKey, StorageValue,
TrieBackendBuilder,
};
use codec::Codec;
use hash_db::Hasher;
@@ -46,7 +47,7 @@ where
{
let db = GenericMemoryDB::default();
// V1 is same as V0 for an empty trie.
TrieBackend::new(db, empty_trie_root::<LayoutV1<H>>())
TrieBackendBuilder::new(db, empty_trie_root::<LayoutV1<H>>()).build()
}
impl<H: Hasher, KF> TrieBackend<GenericMemoryDB<H, KF>, H>
@@ -87,14 +88,14 @@ where
pub fn update_backend(&self, root: H::Out, changes: GenericMemoryDB<H, KF>) -> Self {
let mut clone = self.backend_storage().clone();
clone.consolidate(changes);
Self::new(clone, root)
TrieBackendBuilder::new(clone, root).build()
}
/// Apply the given transaction to this backend and set the root to the given value.
pub fn apply_transaction(&mut self, root: H::Out, transaction: GenericMemoryDB<H, KF>) {
let mut storage = sp_std::mem::take(self).into_storage();
storage.consolidate(transaction);
*self = TrieBackend::new(storage, root);
*self = TrieBackendBuilder::new(storage, root).build();
}
/// Compare with another in-memory backend.
@@ -109,7 +110,7 @@ where
KF: KeyFunction<H> + Send + Sync,
{
fn clone(&self) -> Self {
TrieBackend::new(self.backend_storage().clone(), *self.root())
TrieBackendBuilder::new(self.backend_storage().clone(), *self.root()).build()
}
}
@@ -203,7 +204,7 @@ where
#[cfg(test)]
mod tests {
use super::*;
use crate::backend::Backend;
use crate::backend::{AsTrieBackend, Backend};
use sp_core::storage::StateVersion;
use sp_runtime::traits::BlakeTwo256;
@@ -218,7 +219,7 @@ mod tests {
vec![(Some(child_info.clone()), vec![(b"2".to_vec(), Some(b"3".to_vec()))])],
state_version,
);
let trie_backend = storage.as_trie_backend().unwrap();
let trie_backend = storage.as_trie_backend();
assert_eq!(trie_backend.child_storage(child_info, b"2").unwrap(), Some(b"3".to_vec()));
let storage_key = child_info.prefixed_storage_key();
assert!(trie_backend.storage(storage_key.as_slice()).unwrap().is_some());
@@ -29,8 +29,6 @@ mod ext;
mod in_memory_backend;
pub(crate) mod overlayed_changes;
#[cfg(feature = "std")]
mod proving_backend;
#[cfg(feature = "std")]
mod read_only;
mod stats;
#[cfg(feature = "std")]
@@ -134,7 +132,7 @@ pub use crate::{
StorageTransactionCache, StorageValue,
},
stats::{StateMachineStats, UsageInfo, UsageUnit},
trie_backend::TrieBackend,
trie_backend::{TrieBackend, TrieBackendBuilder},
trie_backend_essence::{Storage, TrieBackendStorage},
};
@@ -144,11 +142,9 @@ mod std_reexport {
basic::BasicExternalities,
error::{Error, ExecutionError},
in_memory_backend::{new_in_mem, new_in_mem_hash_key},
proving_backend::{
create_proof_check_backend, ProofRecorder, ProvingBackend, ProvingBackendRecorder,
},
read_only::{InspectState, ReadOnlyExternalities},
testing::TestExternalities,
trie_backend::create_proof_check_backend,
};
pub use sp_trie::{
trie_types::{TrieDBMutV0, TrieDBMutV1},
@@ -158,6 +154,8 @@ mod std_reexport {
#[cfg(feature = "std")]
mod execution {
use crate::backend::AsTrieBackend;
use super::*;
use codec::{Codec, Decode, Encode};
use hash_db::Hasher;
@@ -188,9 +186,6 @@ mod execution {
/// Trie backend with in-memory storage.
pub type InMemoryBackend<H> = TrieBackend<MemoryDB<H>, H>;
/// Proving Trie backend with in-memory storage.
pub type InMemoryProvingBackend<'a, H> = ProvingBackend<'a, MemoryDB<H>, H>;
/// Strategy for executing a call into the runtime.
#[derive(Copy, Clone, Eq, PartialEq, Debug)]
pub enum ExecutionStrategy {
@@ -562,15 +557,13 @@ mod execution {
runtime_code: &RuntimeCode,
) -> Result<(Vec<u8>, StorageProof), Box<dyn Error>>
where
B: Backend<H>,
B: AsTrieBackend<H>,
H: Hasher,
H::Out: Ord + 'static + codec::Codec,
Exec: CodeExecutor + Clone + 'static,
Spawn: SpawnNamed + Send + 'static,
{
let trie_backend = backend
.as_trie_backend()
.ok_or_else(|| Box::new(ExecutionError::UnableToGenerateProof) as Box<dyn Error>)?;
let trie_backend = backend.as_trie_backend();
prove_execution_on_trie_backend::<_, _, _, _>(
trie_backend,
overlay,
@@ -607,23 +600,31 @@ mod execution {
Exec: CodeExecutor + 'static + Clone,
Spawn: SpawnNamed + Send + 'static,
{
let proving_backend = proving_backend::ProvingBackend::new(trie_backend);
let mut sm = StateMachine::<_, H, Exec>::new(
&proving_backend,
overlay,
exec,
method,
call_data,
Extensions::default(),
runtime_code,
spawn_handle,
);
let proving_backend =
TrieBackendBuilder::wrap(trie_backend).with_recorder(Default::default()).build();
let result = {
let mut sm = StateMachine::<_, H, Exec>::new(
&proving_backend,
overlay,
exec,
method,
call_data,
Extensions::default(),
runtime_code,
spawn_handle,
);
sm.execute_using_consensus_failure_handler::<_, NeverNativeValue, fn() -> _>(
always_wasm(),
None,
)?
};
let proof = proving_backend
.extract_proof()
.expect("A recorder was set and thus, a storage proof can be extracted; qed");
let result = sm.execute_using_consensus_failure_handler::<_, NeverNativeValue, fn() -> _>(
always_wasm(),
None,
)?;
let proof = sm.backend.extract_proof();
Ok((result.into_encoded(), proof))
}
@@ -639,7 +640,7 @@ mod execution {
runtime_code: &RuntimeCode,
) -> Result<Vec<u8>, Box<dyn Error>>
where
H: Hasher,
H: Hasher + 'static,
Exec: CodeExecutor + Clone + 'static,
H::Out: Ord + 'static + codec::Codec,
Spawn: SpawnNamed + Send + 'static,
@@ -693,15 +694,13 @@ mod execution {
/// Generate storage read proof.
pub fn prove_read<B, H, I>(backend: B, keys: I) -> Result<StorageProof, Box<dyn Error>>
where
B: Backend<H>,
B: AsTrieBackend<H>,
H: Hasher,
H::Out: Ord + Codec,
I: IntoIterator,
I::Item: AsRef<[u8]>,
{
let trie_backend = backend
.as_trie_backend()
.ok_or_else(|| Box::new(ExecutionError::UnableToGenerateProof) as Box<dyn Error>)?;
let trie_backend = backend.as_trie_backend();
prove_read_on_trie_backend(trie_backend, keys)
}
@@ -829,13 +828,11 @@ mod execution {
start_at: &[Vec<u8>],
) -> Result<(StorageProof, u32), Box<dyn Error>>
where
B: Backend<H>,
B: AsTrieBackend<H>,
H: Hasher,
H::Out: Ord + Codec,
{
let trie_backend = backend
.as_trie_backend()
.ok_or_else(|| Box::new(ExecutionError::UnableToGenerateProof) as Box<dyn Error>)?;
let trie_backend = backend.as_trie_backend();
prove_range_read_with_child_with_size_on_trie_backend(trie_backend, size_limit, start_at)
}
@@ -856,7 +853,9 @@ mod execution {
return Err(Box::new("Invalid start of range."))
}
let proving_backend = proving_backend::ProvingBackend::<S, H>::new(trie_backend);
let recorder = sp_trie::recorder::Recorder::default();
let proving_backend =
TrieBackendBuilder::wrap(trie_backend).with_recorder(recorder.clone()).build();
let mut count = 0;
let mut child_roots = HashSet::new();
@@ -924,7 +923,7 @@ mod execution {
// do not add two child trie with same root
true
}
} else if proving_backend.estimate_encoded_size() <= size_limit {
} else if recorder.estimate_encoded_size() <= size_limit {
count += 1;
true
} else {
@@ -948,7 +947,11 @@ mod execution {
start_at = None;
}
}
Ok((proving_backend.extract_proof(), count))
let proof = proving_backend
.extract_proof()
.expect("A recorder was set and thus, a storage proof can be extracted; qed");
Ok((proof, count))
}
/// Generate range storage read proof.
@@ -960,13 +963,11 @@ mod execution {
start_at: Option<&[u8]>,
) -> Result<(StorageProof, u32), Box<dyn Error>>
where
B: Backend<H>,
B: AsTrieBackend<H>,
H: Hasher,
H::Out: Ord + Codec,
{
let trie_backend = backend
.as_trie_backend()
.ok_or_else(|| Box::new(ExecutionError::UnableToGenerateProof) as Box<dyn Error>)?;
let trie_backend = backend.as_trie_backend();
prove_range_read_with_size_on_trie_backend(
trie_backend,
child_info,
@@ -989,7 +990,9 @@ mod execution {
H: Hasher,
H::Out: Ord + Codec,
{
let proving_backend = proving_backend::ProvingBackend::<S, H>::new(trie_backend);
let recorder = sp_trie::recorder::Recorder::default();
let proving_backend =
TrieBackendBuilder::wrap(trie_backend).with_recorder(recorder.clone()).build();
let mut count = 0;
proving_backend
.apply_to_key_values_while(
@@ -997,7 +1000,7 @@ mod execution {
prefix,
start_at,
|_key, _value| {
if count == 0 || proving_backend.estimate_encoded_size() <= size_limit {
if count == 0 || recorder.estimate_encoded_size() <= size_limit {
count += 1;
true
} else {
@@ -1007,7 +1010,11 @@ mod execution {
false,
)
.map_err(|e| Box::new(e) as Box<dyn Error>)?;
Ok((proving_backend.extract_proof(), count))
let proof = proving_backend
.extract_proof()
.expect("A recorder was set and thus, a storage proof can be extracted; qed");
Ok((proof, count))
}
/// Generate child storage read proof.
@@ -1017,15 +1024,13 @@ mod execution {
keys: I,
) -> Result<StorageProof, Box<dyn Error>>
where
B: Backend<H>,
B: AsTrieBackend<H>,
H: Hasher,
H::Out: Ord + Codec,
I: IntoIterator,
I::Item: AsRef<[u8]>,
{
let trie_backend = backend
.as_trie_backend()
.ok_or_else(|| Box::new(ExecutionError::UnableToGenerateProof) as Box<dyn Error>)?;
let trie_backend = backend.as_trie_backend();
prove_child_read_on_trie_backend(trie_backend, child_info, keys)
}
@@ -1041,13 +1046,17 @@ mod execution {
I: IntoIterator,
I::Item: AsRef<[u8]>,
{
let proving_backend = proving_backend::ProvingBackend::<_, H>::new(trie_backend);
let proving_backend =
TrieBackendBuilder::wrap(trie_backend).with_recorder(Default::default()).build();
for key in keys.into_iter() {
proving_backend
.storage(key.as_ref())
.map_err(|e| Box::new(e) as Box<dyn Error>)?;
}
Ok(proving_backend.extract_proof())
Ok(proving_backend
.extract_proof()
.expect("A recorder was set and thus, a storage proof can be extracted; qed"))
}
/// Generate storage read proof on pre-created trie backend.
@@ -1063,13 +1072,17 @@ mod execution {
I: IntoIterator,
I::Item: AsRef<[u8]>,
{
let proving_backend = proving_backend::ProvingBackend::<_, H>::new(trie_backend);
let proving_backend =
TrieBackendBuilder::wrap(trie_backend).with_recorder(Default::default()).build();
for key in keys.into_iter() {
proving_backend
.child_storage(child_info, key.as_ref())
.map_err(|e| Box::new(e) as Box<dyn Error>)?;
}
Ok(proving_backend.extract_proof())
Ok(proving_backend
.extract_proof()
.expect("A recorder was set and thus, a storage proof can be extracted; qed"))
}
/// Check storage read proof, generated by `prove_read` call.
@@ -1079,7 +1092,7 @@ mod execution {
keys: I,
) -> Result<HashMap<Vec<u8>, Option<Vec<u8>>>, Box<dyn Error>>
where
H: Hasher,
H: Hasher + 'static,
H::Out: Ord + Codec,
I: IntoIterator,
I::Item: AsRef<[u8]>,
@@ -1104,7 +1117,7 @@ mod execution {
start_at: &[Vec<u8>],
) -> Result<(KeyValueStates, usize), Box<dyn Error>>
where
H: Hasher,
H: Hasher + 'static,
H::Out: Ord + Codec,
{
let proving_backend = create_proof_check_backend::<H>(root, proof)?;
@@ -1121,7 +1134,7 @@ mod execution {
start_at: Option<&[u8]>,
) -> Result<(Vec<(Vec<u8>, Vec<u8>)>, bool), Box<dyn Error>>
where
H: Hasher,
H: Hasher + 'static,
H::Out: Ord + Codec,
{
let proving_backend = create_proof_check_backend::<H>(root, proof)?;
@@ -1142,7 +1155,7 @@ mod execution {
keys: I,
) -> Result<HashMap<Vec<u8>, Option<Vec<u8>>>, Box<dyn Error>>
where
H: Hasher,
H: Hasher + 'static,
H::Out: Ord + Codec,
I: IntoIterator,
I::Item: AsRef<[u8]>,
@@ -1346,7 +1359,7 @@ mod execution {
#[cfg(test)]
mod tests {
use super::{ext::Ext, *};
use super::{backend::AsTrieBackend, ext::Ext, *};
use crate::{execution::CallResult, in_memory_backend::new_in_mem_hash_key};
use assert_matches::assert_matches;
use codec::{Decode, Encode};
@@ -1358,6 +1371,7 @@ mod tests {
NativeOrEncoded, NeverNativeValue,
};
use sp_runtime::traits::BlakeTwo256;
use sp_trie::trie_types::{TrieDBMutBuilderV0, TrieDBMutBuilderV1};
use std::{
collections::{BTreeMap, HashMap},
panic::UnwindSafe,
@@ -1419,7 +1433,7 @@ mod tests {
execute_works_inner(StateVersion::V1);
}
fn execute_works_inner(state_version: StateVersion) {
let backend = trie_backend::tests::test_trie(state_version);
let backend = trie_backend::tests::test_trie(state_version, None, None);
let mut overlayed_changes = Default::default();
let wasm_code = RuntimeCode::empty();
@@ -1447,7 +1461,7 @@ mod tests {
execute_works_with_native_else_wasm_inner(StateVersion::V1);
}
fn execute_works_with_native_else_wasm_inner(state_version: StateVersion) {
let backend = trie_backend::tests::test_trie(state_version);
let backend = trie_backend::tests::test_trie(state_version, None, None);
let mut overlayed_changes = Default::default();
let wasm_code = RuntimeCode::empty();
@@ -1476,7 +1490,7 @@ mod tests {
}
fn dual_execution_strategy_detects_consensus_failure_inner(state_version: StateVersion) {
let mut consensus_failed = false;
let backend = trie_backend::tests::test_trie(state_version);
let backend = trie_backend::tests::test_trie(state_version, None, None);
let mut overlayed_changes = Default::default();
let wasm_code = RuntimeCode::empty();
@@ -1520,7 +1534,7 @@ mod tests {
};
// fetch execution proof from 'remote' full node
let mut remote_backend = trie_backend::tests::test_trie(state_version);
let mut remote_backend = trie_backend::tests::test_trie(state_version, None, None);
let remote_root = remote_backend.storage_root(std::iter::empty(), state_version).0;
let (remote_result, remote_proof) = prove_execution(
&mut remote_backend,
@@ -1560,7 +1574,7 @@ mod tests {
b"bbb".to_vec() => b"3".to_vec()
];
let state = InMemoryBackend::<BlakeTwo256>::from((initial, StateVersion::default()));
let backend = state.as_trie_backend().unwrap();
let backend = state.as_trie_backend();
let mut overlay = OverlayedChanges::default();
overlay.set_storage(b"aba".to_vec(), Some(b"1312".to_vec()));
@@ -1716,7 +1730,7 @@ mod tests {
let child_info = ChildInfo::new_default(b"sub1");
let child_info = &child_info;
let state = new_in_mem_hash_key::<BlakeTwo256>();
let backend = state.as_trie_backend().unwrap();
let backend = state.as_trie_backend();
let mut overlay = OverlayedChanges::default();
let mut cache = StorageTransactionCache::default();
let mut ext = Ext::new(&mut overlay, &mut cache, backend, None);
@@ -1732,7 +1746,7 @@ mod tests {
let reference_data = vec![b"data1".to_vec(), b"2".to_vec(), b"D3".to_vec(), b"d4".to_vec()];
let key = b"key".to_vec();
let state = new_in_mem_hash_key::<BlakeTwo256>();
let backend = state.as_trie_backend().unwrap();
let backend = state.as_trie_backend();
let mut overlay = OverlayedChanges::default();
let mut cache = StorageTransactionCache::default();
{
@@ -1769,7 +1783,7 @@ mod tests {
let key = b"events".to_vec();
let mut cache = StorageTransactionCache::default();
let state = new_in_mem_hash_key::<BlakeTwo256>();
let backend = state.as_trie_backend().unwrap();
let backend = state.as_trie_backend();
let mut overlay = OverlayedChanges::default();
// For example, block initialization with event.
@@ -1840,7 +1854,7 @@ mod tests {
let child_info = &child_info;
let missing_child_info = &missing_child_info;
// fetch read proof from 'remote' full node
let remote_backend = trie_backend::tests::test_trie(state_version);
let remote_backend = trie_backend::tests::test_trie(state_version, None, None);
let remote_root = remote_backend.storage_root(std::iter::empty(), state_version).0;
let remote_proof = prove_read(remote_backend, &[b"value2"]).unwrap();
let remote_proof = test_compact(remote_proof, &remote_root);
@@ -1857,7 +1871,7 @@ mod tests {
);
assert_eq!(local_result2, false);
// on child trie
let remote_backend = trie_backend::tests::test_trie(state_version);
let remote_backend = trie_backend::tests::test_trie(state_version, None, None);
let remote_root = remote_backend.storage_root(std::iter::empty(), state_version).0;
let remote_proof = prove_child_read(remote_backend, child_info, &[b"value3"]).unwrap();
let remote_proof = test_compact(remote_proof, &remote_root);
@@ -1924,8 +1938,8 @@ mod tests {
let trie: InMemoryBackend<BlakeTwo256> =
(storage.clone(), StateVersion::default()).into();
let trie_root = trie.root();
let backend = crate::ProvingBackend::new(&trie);
let trie_root = *trie.root();
let backend = TrieBackendBuilder::wrap(&trie).with_recorder(Default::default()).build();
let mut queries = Vec::new();
for c in 0..(5 + nb_child_trie / 2) {
// random existing query
@@ -1970,10 +1984,10 @@ mod tests {
}
}
let storage_proof = backend.extract_proof();
let storage_proof = backend.extract_proof().expect("Failed to extract proof");
let remote_proof = test_compact(storage_proof, &trie_root);
let proof_check =
create_proof_check_backend::<BlakeTwo256>(*trie_root, remote_proof).unwrap();
create_proof_check_backend::<BlakeTwo256>(trie_root, remote_proof).unwrap();
for (child_info, key, expected) in queries {
assert_eq!(
@@ -1987,7 +2001,7 @@ mod tests {
#[test]
fn prove_read_with_size_limit_works() {
let state_version = StateVersion::V0;
let remote_backend = trie_backend::tests::test_trie(state_version);
let remote_backend = trie_backend::tests::test_trie(state_version, None, None);
let remote_root = remote_backend.storage_root(::std::iter::empty(), state_version).0;
let (proof, count) =
prove_range_read_with_size(remote_backend, None, None, 0, None).unwrap();
@@ -1995,7 +2009,7 @@ mod tests {
assert_eq!(proof.into_memory_db::<BlakeTwo256>().drain().len(), 3);
assert_eq!(count, 1);
let remote_backend = trie_backend::tests::test_trie(state_version);
let remote_backend = trie_backend::tests::test_trie(state_version, None, None);
let (proof, count) =
prove_range_read_with_size(remote_backend, None, None, 800, Some(&[])).unwrap();
assert_eq!(proof.clone().into_memory_db::<BlakeTwo256>().drain().len(), 9);
@@ -2018,7 +2032,7 @@ mod tests {
assert_eq!(results.len() as u32, 101);
assert_eq!(completed, false);
let remote_backend = trie_backend::tests::test_trie(state_version);
let remote_backend = trie_backend::tests::test_trie(state_version, None, None);
let (proof, count) =
prove_range_read_with_size(remote_backend, None, None, 50000, Some(&[])).unwrap();
assert_eq!(proof.clone().into_memory_db::<BlakeTwo256>().drain().len(), 11);
@@ -2035,7 +2049,7 @@ mod tests {
let mut state_version = StateVersion::V0;
let (mut mdb, mut root) = trie_backend::tests::test_db(state_version);
{
let mut trie = TrieDBMutV0::from_existing(&mut mdb, &mut root).unwrap();
let mut trie = TrieDBMutBuilderV0::from_existing(&mut mdb, &mut root).build();
trie.insert(b"foo", vec![1u8; 1_000].as_slice()) // big inner hash
.expect("insert failed");
trie.insert(b"foo2", vec![3u8; 16].as_slice()) // no inner hash
@@ -2045,7 +2059,7 @@ mod tests {
}
let check_proof = |mdb, root, state_version| -> StorageProof {
let remote_backend = TrieBackend::new(mdb, root);
let remote_backend = TrieBackendBuilder::new(mdb, root).build();
let remote_root = remote_backend.storage_root(std::iter::empty(), state_version).0;
let remote_proof = prove_read(remote_backend, &[b"foo222"]).unwrap();
// check proof locally
@@ -2069,7 +2083,7 @@ mod tests {
// do switch
state_version = StateVersion::V1;
{
let mut trie = TrieDBMutV1::from_existing(&mut mdb, &mut root).unwrap();
let mut trie = TrieDBMutBuilderV1::from_existing(&mut mdb, &mut root).build();
trie.insert(b"foo222", vec![5u8; 100].as_slice()) // inner hash
.expect("insert failed");
// update with same value do change
@@ -2088,10 +2102,10 @@ mod tests {
#[test]
fn prove_range_with_child_works() {
let state_version = StateVersion::V0;
let remote_backend = trie_backend::tests::test_trie(state_version);
let remote_backend = trie_backend::tests::test_trie(state_version, None, None);
let remote_root = remote_backend.storage_root(std::iter::empty(), state_version).0;
let mut start_at = smallvec::SmallVec::<[Vec<u8>; 2]>::new();
let trie_backend = remote_backend.as_trie_backend().unwrap();
let trie_backend = remote_backend.as_trie_backend();
let max_iter = 1000;
let mut nb_loop = 0;
loop {
@@ -2138,7 +2152,7 @@ mod tests {
let child_info2 = ChildInfo::new_default(b"sub2");
// this root will be include in proof
let child_info3 = ChildInfo::new_default(b"sub");
let remote_backend = trie_backend::tests::test_trie(state_version);
let remote_backend = trie_backend::tests::test_trie(state_version, None, None);
let long_vec: Vec<u8> = (0..1024usize).map(|_| 8u8).collect();
let (remote_root, transaction) = remote_backend.full_storage_root(
std::iter::empty(),
@@ -2170,9 +2184,9 @@ mod tests {
.into_iter(),
state_version,
);
let mut remote_storage = remote_backend.into_storage();
let mut remote_storage = remote_backend.backend_storage().clone();
remote_storage.consolidate(transaction);
let remote_backend = TrieBackend::new(remote_storage, remote_root);
let remote_backend = TrieBackendBuilder::new(remote_storage, remote_root).build();
let remote_proof = prove_child_read(remote_backend, &child_info1, &[b"key1"]).unwrap();
let size = remote_proof.encoded_size();
let remote_proof = test_compact(remote_proof, &remote_root);
@@ -2198,7 +2212,7 @@ mod tests {
let mut overlay = OverlayedChanges::default();
let mut transaction = {
let backend = test_trie(state_version);
let backend = test_trie(state_version, None, None);
let mut cache = StorageTransactionCache::default();
let mut ext = Ext::new(&mut overlay, &mut cache, &backend, None);
ext.set_child_storage(&child_info_1, b"abc".to_vec(), b"def".to_vec());
@@ -2224,7 +2238,7 @@ mod tests {
b"bbb".to_vec() => b"".to_vec()
];
let state = InMemoryBackend::<BlakeTwo256>::from((initial, StateVersion::default()));
let backend = state.as_trie_backend().unwrap();
let backend = state.as_trie_backend();
let mut overlay = OverlayedChanges::default();
overlay.start_transaction();
@@ -2255,7 +2269,7 @@ mod tests {
struct DummyExt(u32);
}
let backend = trie_backend::tests::test_trie(state_version);
let backend = trie_backend::tests::test_trie(state_version, None, None);
let mut overlayed_changes = Default::default();
let wasm_code = RuntimeCode::empty();
@@ -1,611 +0,0 @@
// This file is part of Substrate.
// Copyright (C) 2017-2022 Parity Technologies (UK) Ltd.
// SPDX-License-Identifier: Apache-2.0
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//! Proving state machine backend.
use crate::{
trie_backend::TrieBackend,
trie_backend_essence::{Ephemeral, TrieBackendEssence, TrieBackendStorage},
Backend, DBValue, Error, ExecutionError,
};
use codec::{Codec, Decode, Encode};
use hash_db::{HashDB, Hasher, Prefix, EMPTY_PREFIX};
use log::debug;
use parking_lot::RwLock;
use sp_core::storage::{ChildInfo, StateVersion};
pub use sp_trie::trie_types::TrieError;
use sp_trie::{
empty_child_trie_root, read_child_trie_value_with, read_trie_value_with, record_all_keys,
LayoutV1, MemoryDB, Recorder, StorageProof,
};
use std::{
collections::{hash_map::Entry, HashMap},
sync::Arc,
};
/// Patricia trie-based backend specialized in get value proofs.
pub struct ProvingBackendRecorder<'a, S: 'a + TrieBackendStorage<H>, H: 'a + Hasher> {
pub(crate) backend: &'a TrieBackendEssence<S, H>,
pub(crate) proof_recorder: &'a mut Recorder<H::Out>,
}
impl<'a, S, H> ProvingBackendRecorder<'a, S, H>
where
S: TrieBackendStorage<H>,
H: Hasher,
H::Out: Codec,
{
/// Produce proof for a key query.
pub fn storage(&mut self, key: &[u8]) -> Result<Option<Vec<u8>>, String> {
let mut read_overlay = S::Overlay::default();
let eph = Ephemeral::new(self.backend.backend_storage(), &mut read_overlay);
let map_e = |e| format!("Trie lookup error: {}", e);
// V1 is equivalent to V0 on read.
read_trie_value_with::<LayoutV1<H>, _, Ephemeral<S, H>>(
&eph,
self.backend.root(),
key,
&mut *self.proof_recorder,
)
.map_err(map_e)
}
/// Produce proof for a child key query.
pub fn child_storage(
&mut self,
child_info: &ChildInfo,
key: &[u8],
) -> Result<Option<Vec<u8>>, String> {
let storage_key = child_info.storage_key();
let root = self
.storage(storage_key)?
.and_then(|r| Decode::decode(&mut &r[..]).ok())
// V1 is equivalent to V0 on empty trie
.unwrap_or_else(empty_child_trie_root::<LayoutV1<H>>);
let mut read_overlay = S::Overlay::default();
let eph = Ephemeral::new(self.backend.backend_storage(), &mut read_overlay);
let map_e = |e| format!("Trie lookup error: {}", e);
// V1 is equivalent to V0 on read
read_child_trie_value_with::<LayoutV1<H>, _, _>(
child_info.keyspace(),
&eph,
root.as_ref(),
key,
&mut *self.proof_recorder,
)
.map_err(map_e)
}
/// Produce proof for the whole backend.
pub fn record_all_keys(&mut self) {
let mut read_overlay = S::Overlay::default();
let eph = Ephemeral::new(self.backend.backend_storage(), &mut read_overlay);
let mut iter = move || -> Result<(), Box<TrieError<H::Out>>> {
let root = self.backend.root();
		// V1 is equivalent to V0 on read; the recorder only tracks the keys read.
record_all_keys::<LayoutV1<H>, _>(&eph, root, &mut *self.proof_recorder)
};
if let Err(e) = iter() {
debug!(target: "trie", "Error while recording all keys: {}", e);
}
}
}
#[derive(Default)]
struct ProofRecorderInner<Hash> {
/// All the records that we have stored so far.
records: HashMap<Hash, Option<DBValue>>,
/// The encoded size of all recorded values.
encoded_size: usize,
}
/// Global proof recorder, acting as a layer over a hash db that records all queried data.
#[derive(Clone, Default)]
pub struct ProofRecorder<Hash> {
inner: Arc<RwLock<ProofRecorderInner<Hash>>>,
}
impl<Hash: std::hash::Hash + Eq> ProofRecorder<Hash> {
/// Record the given `key` => `val` combination.
pub fn record(&self, key: Hash, val: Option<DBValue>) {
let mut inner = self.inner.write();
let encoded_size = if let Entry::Vacant(entry) = inner.records.entry(key) {
let encoded_size = val.as_ref().map(Encode::encoded_size).unwrap_or(0);
entry.insert(val);
encoded_size
} else {
0
};
inner.encoded_size += encoded_size;
}
/// Returns the value at the given `key`.
pub fn get(&self, key: &Hash) -> Option<Option<DBValue>> {
self.inner.read().records.get(key).cloned()
}
/// Returns the estimated encoded size of the proof.
///
	/// The estimate may be larger (by at most 4 bytes), but never smaller than the actual
	/// encoded proof.
pub fn estimate_encoded_size(&self) -> usize {
let inner = self.inner.read();
inner.encoded_size + codec::Compact(inner.records.len() as u32).encoded_size()
}
/// Convert into a [`StorageProof`].
pub fn to_storage_proof(&self) -> StorageProof {
StorageProof::new(
self.inner
.read()
.records
.iter()
.filter_map(|(_k, v)| v.as_ref().map(|v| v.to_vec())),
)
}
/// Reset the internal state.
pub fn reset(&self) {
let mut inner = self.inner.write();
inner.records.clear();
inner.encoded_size = 0;
}
}
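The bookkeeping above can be illustrated with a hedged, standalone sketch (hypothetical names, no `codec` crate): each key is counted once on first insertion, recorded misses add nothing, and the estimate adds a SCALE compact length prefix for the entry count — whose size rule (1, 2, 4, or 5 bytes) is where the "at most 4 bytes" slack in `estimate_encoded_size` comes from.

```rust
use std::collections::HashMap;

// Size in bytes of a SCALE compact-encoded u32: values < 2^6 fit in 1 byte,
// < 2^14 in 2, < 2^30 in 4, otherwise 5.
fn compact_prefix_size(n: u32) -> usize {
	if n < 1 << 6 {
		1
	} else if n < 1 << 14 {
		2
	} else if n < 1 << 30 {
		4
	} else {
		5
	}
}

// Simplified mirror of `ProofRecorder`'s bookkeeping (illustrative only).
#[derive(Default)]
struct Recorder {
	records: HashMap<u64, Option<Vec<u8>>>,
	encoded_size: usize,
}

impl Recorder {
	fn record(&mut self, key: u64, val: Option<Vec<u8>>) {
		if !self.records.contains_key(&key) {
			// A `Vec<u8>` encodes as a compact length prefix plus its bytes;
			// a recorded miss contributes nothing to the size.
			self.encoded_size += val
				.as_ref()
				.map(|v| compact_prefix_size(v.len() as u32) + v.len())
				.unwrap_or(0);
			self.records.insert(key, val);
		}
	}

	fn estimate_encoded_size(&self) -> usize {
		self.encoded_size + compact_prefix_size(self.records.len() as u32)
	}
}

fn main() {
	let mut r = Recorder::default();
	r.record(1, Some(vec![7; 10]));
	r.record(1, Some(vec![7; 99])); // duplicate key: ignored
	r.record(2, None); // a recorded miss adds nothing
	assert_eq!(r.estimate_encoded_size(), 1 + 10 + 1);
	println!("ok");
}
```

The estimate counts `None` entries in the length prefix while the final proof drops them, which is why the estimate can only over-shoot.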
/// Patricia trie-based backend which also tracks all touched storage trie values.
/// These can be sent to a remote node and used as a proof of execution.
pub struct ProvingBackend<'a, S: 'a + TrieBackendStorage<H>, H: 'a + Hasher>(
TrieBackend<ProofRecorderBackend<'a, S, H>, H>,
);
/// Trie backend storage with its proof recorder.
pub struct ProofRecorderBackend<'a, S: 'a + TrieBackendStorage<H>, H: 'a + Hasher> {
backend: &'a S,
proof_recorder: ProofRecorder<H::Out>,
}
impl<'a, S: 'a + TrieBackendStorage<H>, H: 'a + Hasher> ProvingBackend<'a, S, H>
where
H::Out: Codec,
{
/// Create new proving backend.
pub fn new(backend: &'a TrieBackend<S, H>) -> Self {
let proof_recorder = Default::default();
Self::new_with_recorder(backend, proof_recorder)
}
/// Create new proving backend with the given recorder.
pub fn new_with_recorder(
backend: &'a TrieBackend<S, H>,
proof_recorder: ProofRecorder<H::Out>,
) -> Self {
let essence = backend.essence();
let root = *essence.root();
let recorder = ProofRecorderBackend { backend: essence.backend_storage(), proof_recorder };
ProvingBackend(TrieBackend::new(recorder, root))
}
	/// Extract the gathered (unordered) proof.
pub fn extract_proof(&self) -> StorageProof {
self.0.essence().backend_storage().proof_recorder.to_storage_proof()
}
/// Returns the estimated encoded size of the proof.
///
	/// The estimate may be larger (by at most 4 bytes), but never smaller than the actual
	/// encoded proof.
pub fn estimate_encoded_size(&self) -> usize {
self.0.essence().backend_storage().proof_recorder.estimate_encoded_size()
}
/// Clear the proof recorded data.
pub fn clear_recorder(&self) {
self.0.essence().backend_storage().proof_recorder.reset()
}
}
impl<'a, S: 'a + TrieBackendStorage<H>, H: 'a + Hasher> TrieBackendStorage<H>
for ProofRecorderBackend<'a, S, H>
{
type Overlay = S::Overlay;
fn get(&self, key: &H::Out, prefix: Prefix) -> Result<Option<DBValue>, String> {
if let Some(v) = self.proof_recorder.get(key) {
return Ok(v)
}
let backend_value = self.backend.get(key, prefix)?;
self.proof_recorder.record(*key, backend_value.clone());
Ok(backend_value)
}
}
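The `get` implementation above is a read-through recording layer: consult the recorder first, otherwise fetch from the backend and record the result — including misses — so repeated queries never hit the backend twice and the recorded set is exactly the data touched. A standalone sketch of the same pattern (all names hypothetical):

```rust
use std::cell::RefCell;
use std::collections::HashMap;

// A read-through layer that memoizes every backend lookup, hits and misses
// alike, mirroring `ProofRecorderBackend::get`.
struct RecordingBackend<'a> {
	backend: &'a HashMap<u64, Vec<u8>>, // stands in for the trie node db
	recorded: RefCell<HashMap<u64, Option<Vec<u8>>>>, // the "proof recorder"
}

impl<'a> RecordingBackend<'a> {
	fn get(&self, key: u64) -> Option<Vec<u8>> {
		if let Some(v) = self.recorded.borrow().get(&key) {
			return v.clone() // already recorded (hit or miss)
		}
		let value = self.backend.get(&key).cloned();
		self.recorded.borrow_mut().insert(key, value.clone());
		value
	}
}

fn main() {
	let mut db = HashMap::new();
	db.insert(1, b"node".to_vec());
	let layer = RecordingBackend { backend: &db, recorded: RefCell::new(HashMap::new()) };
	assert_eq!(layer.get(1), Some(b"node".to_vec()));
	assert_eq!(layer.get(2), None);
	// Both the hit and the miss are recorded.
	assert_eq!(layer.recorded.borrow().len(), 2);
	println!("ok");
}
```

Recording misses matters for determinism: a re-execution against the proof must observe the same absent keys as the original run.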
impl<'a, S: 'a + TrieBackendStorage<H>, H: 'a + Hasher> std::fmt::Debug
for ProvingBackend<'a, S, H>
{
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "ProvingBackend")
}
}
impl<'a, S, H> Backend<H> for ProvingBackend<'a, S, H>
where
S: 'a + TrieBackendStorage<H>,
H: 'a + Hasher,
H::Out: Ord + Codec,
{
type Error = String;
type Transaction = S::Overlay;
type TrieBackendStorage = S;
fn storage(&self, key: &[u8]) -> Result<Option<Vec<u8>>, Self::Error> {
self.0.storage(key)
}
fn child_storage(
&self,
child_info: &ChildInfo,
key: &[u8],
) -> Result<Option<Vec<u8>>, Self::Error> {
self.0.child_storage(child_info, key)
}
fn apply_to_key_values_while<F: FnMut(Vec<u8>, Vec<u8>) -> bool>(
&self,
child_info: Option<&ChildInfo>,
prefix: Option<&[u8]>,
start_at: Option<&[u8]>,
f: F,
allow_missing: bool,
) -> Result<bool, Self::Error> {
self.0.apply_to_key_values_while(child_info, prefix, start_at, f, allow_missing)
}
fn apply_to_keys_while<F: FnMut(&[u8]) -> bool>(
&self,
child_info: Option<&ChildInfo>,
prefix: Option<&[u8]>,
start_at: Option<&[u8]>,
f: F,
) {
self.0.apply_to_keys_while(child_info, prefix, start_at, f)
}
fn next_storage_key(&self, key: &[u8]) -> Result<Option<Vec<u8>>, Self::Error> {
self.0.next_storage_key(key)
}
fn next_child_storage_key(
&self,
child_info: &ChildInfo,
key: &[u8],
) -> Result<Option<Vec<u8>>, Self::Error> {
self.0.next_child_storage_key(child_info, key)
}
fn for_keys_with_prefix<F: FnMut(&[u8])>(&self, prefix: &[u8], f: F) {
self.0.for_keys_with_prefix(prefix, f)
}
fn for_key_values_with_prefix<F: FnMut(&[u8], &[u8])>(&self, prefix: &[u8], f: F) {
self.0.for_key_values_with_prefix(prefix, f)
}
fn for_child_keys_with_prefix<F: FnMut(&[u8])>(
&self,
child_info: &ChildInfo,
prefix: &[u8],
f: F,
) {
self.0.for_child_keys_with_prefix(child_info, prefix, f)
}
fn pairs(&self) -> Vec<(Vec<u8>, Vec<u8>)> {
self.0.pairs()
}
fn keys(&self, prefix: &[u8]) -> Vec<Vec<u8>> {
self.0.keys(prefix)
}
fn child_keys(&self, child_info: &ChildInfo, prefix: &[u8]) -> Vec<Vec<u8>> {
self.0.child_keys(child_info, prefix)
}
fn storage_root<'b>(
&self,
delta: impl Iterator<Item = (&'b [u8], Option<&'b [u8]>)>,
state_version: StateVersion,
) -> (H::Out, Self::Transaction)
where
H::Out: Ord,
{
self.0.storage_root(delta, state_version)
}
fn child_storage_root<'b>(
&self,
child_info: &ChildInfo,
delta: impl Iterator<Item = (&'b [u8], Option<&'b [u8]>)>,
state_version: StateVersion,
) -> (H::Out, bool, Self::Transaction)
where
H::Out: Ord,
{
self.0.child_storage_root(child_info, delta, state_version)
}
fn register_overlay_stats(&self, _stats: &crate::stats::StateMachineStats) {}
fn usage_info(&self) -> crate::stats::UsageInfo {
self.0.usage_info()
}
}
/// Create a backend used for checking the proof, using `H` as the hasher.
///
/// `proof` and `root` must match, i.e. `root` must be the correct root of `proof` nodes.
pub fn create_proof_check_backend<H>(
root: H::Out,
proof: StorageProof,
) -> Result<TrieBackend<MemoryDB<H>, H>, Box<dyn Error>>
where
H: Hasher,
H::Out: Codec,
{
let db = proof.into_memory_db();
if db.contains(&root, EMPTY_PREFIX) {
Ok(TrieBackend::new(db, root))
} else {
Err(Box::new(ExecutionError::InvalidProof))
}
}
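The check above boils down to: key every proof node by its hash in an in-memory db, then accept the proof only if the claimed root hash is among the nodes. A hedged sketch of that idea, with `DefaultHasher` standing in for the real cryptographic hasher:

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::hash::{Hash, Hasher};

// Illustrative stand-in for the runtime hasher.
fn node_hash(node: &[u8]) -> u64 {
	let mut h = DefaultHasher::new();
	node.hash(&mut h);
	h.finish()
}

// Sketch of `create_proof_check_backend`: build the node db from the proof
// and reject the proof if the expected root is not one of its nodes.
fn proof_check_db(
	root: u64,
	proof_nodes: &[Vec<u8>],
) -> Result<HashMap<u64, Vec<u8>>, &'static str> {
	let db: HashMap<u64, Vec<u8>> =
		proof_nodes.iter().map(|n| (node_hash(n), n.clone())).collect();
	if db.contains_key(&root) {
		Ok(db)
	} else {
		Err("InvalidProof")
	}
}

fn main() {
	let nodes = vec![b"root-node".to_vec(), b"leaf".to_vec()];
	let root = node_hash(&nodes[0]);
	assert!(proof_check_db(root, &nodes).is_ok());
	assert!(proof_check_db(0xdead, &nodes).is_err());
	println!("ok");
}
```

Because every trie node is addressed by its hash, a present root transitively authenticates every node reachable from it; lookups that need a node missing from the proof simply fail during trie traversal.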
#[cfg(test)]
mod tests {
use super::*;
use crate::{
proving_backend::create_proof_check_backend, trie_backend::tests::test_trie,
InMemoryBackend,
};
use sp_core::H256;
use sp_runtime::traits::BlakeTwo256;
use sp_trie::PrefixedMemoryDB;
fn test_proving(
trie_backend: &TrieBackend<PrefixedMemoryDB<BlakeTwo256>, BlakeTwo256>,
) -> ProvingBackend<PrefixedMemoryDB<BlakeTwo256>, BlakeTwo256> {
ProvingBackend::new(trie_backend)
}
#[test]
fn proof_is_empty_until_value_is_read() {
proof_is_empty_until_value_is_read_inner(StateVersion::V0);
proof_is_empty_until_value_is_read_inner(StateVersion::V1);
}
fn proof_is_empty_until_value_is_read_inner(test_hash: StateVersion) {
let trie_backend = test_trie(test_hash);
assert!(test_proving(&trie_backend).extract_proof().is_empty());
}
#[test]
fn proof_is_non_empty_after_value_is_read() {
proof_is_non_empty_after_value_is_read_inner(StateVersion::V0);
proof_is_non_empty_after_value_is_read_inner(StateVersion::V1);
}
fn proof_is_non_empty_after_value_is_read_inner(test_hash: StateVersion) {
let trie_backend = test_trie(test_hash);
let backend = test_proving(&trie_backend);
assert_eq!(backend.storage(b"key").unwrap(), Some(b"value".to_vec()));
assert!(!backend.extract_proof().is_empty());
}
#[test]
fn proof_is_invalid_when_does_not_contains_root() {
let result = create_proof_check_backend::<BlakeTwo256>(
H256::from_low_u64_be(1),
StorageProof::empty(),
);
assert!(result.is_err());
}
#[test]
fn passes_through_backend_calls() {
passes_through_backend_calls_inner(StateVersion::V0);
passes_through_backend_calls_inner(StateVersion::V1);
}
fn passes_through_backend_calls_inner(state_version: StateVersion) {
let trie_backend = test_trie(state_version);
let proving_backend = test_proving(&trie_backend);
assert_eq!(trie_backend.storage(b"key").unwrap(), proving_backend.storage(b"key").unwrap());
assert_eq!(trie_backend.pairs(), proving_backend.pairs());
let (trie_root, mut trie_mdb) =
trie_backend.storage_root(std::iter::empty(), state_version);
let (proving_root, mut proving_mdb) =
proving_backend.storage_root(std::iter::empty(), state_version);
assert_eq!(trie_root, proving_root);
assert_eq!(trie_mdb.drain(), proving_mdb.drain());
}
#[test]
fn proof_recorded_and_checked_top() {
proof_recorded_and_checked_inner(StateVersion::V0);
proof_recorded_and_checked_inner(StateVersion::V1);
}
fn proof_recorded_and_checked_inner(state_version: StateVersion) {
		let size_content = 34; // above the hashable value threshold.
let value_range = 0..64;
let contents = value_range
.clone()
.map(|i| (vec![i], Some(vec![i; size_content])))
.collect::<Vec<_>>();
let in_memory = InMemoryBackend::<BlakeTwo256>::default();
let in_memory = in_memory.update(vec![(None, contents)], state_version);
let in_memory_root = in_memory.storage_root(std::iter::empty(), state_version).0;
value_range.clone().for_each(|i| {
assert_eq!(in_memory.storage(&[i]).unwrap().unwrap(), vec![i; size_content])
});
let trie = in_memory.as_trie_backend().unwrap();
let trie_root = trie.storage_root(std::iter::empty(), state_version).0;
assert_eq!(in_memory_root, trie_root);
value_range
.for_each(|i| assert_eq!(trie.storage(&[i]).unwrap().unwrap(), vec![i; size_content]));
let proving = ProvingBackend::new(trie);
assert_eq!(proving.storage(&[42]).unwrap().unwrap(), vec![42; size_content]);
let proof = proving.extract_proof();
let proof_check = create_proof_check_backend::<BlakeTwo256>(in_memory_root, proof).unwrap();
assert_eq!(proof_check.storage(&[42]).unwrap().unwrap(), vec![42; size_content]);
}
#[test]
fn proof_recorded_and_checked_with_child() {
proof_recorded_and_checked_with_child_inner(StateVersion::V0);
proof_recorded_and_checked_with_child_inner(StateVersion::V1);
}
fn proof_recorded_and_checked_with_child_inner(state_version: StateVersion) {
let child_info_1 = ChildInfo::new_default(b"sub1");
let child_info_2 = ChildInfo::new_default(b"sub2");
let child_info_1 = &child_info_1;
let child_info_2 = &child_info_2;
let contents = vec![
(None, (0..64).map(|i| (vec![i], Some(vec![i]))).collect::<Vec<_>>()),
(Some(child_info_1.clone()), (28..65).map(|i| (vec![i], Some(vec![i]))).collect()),
(Some(child_info_2.clone()), (10..15).map(|i| (vec![i], Some(vec![i]))).collect()),
];
let in_memory = InMemoryBackend::<BlakeTwo256>::default();
let in_memory = in_memory.update(contents, state_version);
let child_storage_keys = vec![child_info_1.to_owned(), child_info_2.to_owned()];
let in_memory_root = in_memory
.full_storage_root(
std::iter::empty(),
child_storage_keys.iter().map(|k| (k, std::iter::empty())),
state_version,
)
.0;
(0..64).for_each(|i| assert_eq!(in_memory.storage(&[i]).unwrap().unwrap(), vec![i]));
(28..65).for_each(|i| {
assert_eq!(in_memory.child_storage(child_info_1, &[i]).unwrap().unwrap(), vec![i])
});
(10..15).for_each(|i| {
assert_eq!(in_memory.child_storage(child_info_2, &[i]).unwrap().unwrap(), vec![i])
});
let trie = in_memory.as_trie_backend().unwrap();
let trie_root = trie.storage_root(std::iter::empty(), state_version).0;
assert_eq!(in_memory_root, trie_root);
(0..64).for_each(|i| assert_eq!(trie.storage(&[i]).unwrap().unwrap(), vec![i]));
let proving = ProvingBackend::new(trie);
assert_eq!(proving.storage(&[42]).unwrap().unwrap(), vec![42]);
let proof = proving.extract_proof();
let proof_check = create_proof_check_backend::<BlakeTwo256>(in_memory_root, proof).unwrap();
assert!(proof_check.storage(&[0]).is_err());
assert_eq!(proof_check.storage(&[42]).unwrap().unwrap(), vec![42]);
		// note that [41] is also readable: its small value is inlined in a trie node
		// that the proof already contains.
assert_eq!(proof_check.storage(&[41]).unwrap().unwrap(), vec![41]);
assert_eq!(proof_check.storage(&[64]).unwrap(), None);
let proving = ProvingBackend::new(trie);
assert_eq!(proving.child_storage(child_info_1, &[64]), Ok(Some(vec![64])));
let proof = proving.extract_proof();
let proof_check = create_proof_check_backend::<BlakeTwo256>(in_memory_root, proof).unwrap();
assert_eq!(proof_check.child_storage(child_info_1, &[64]).unwrap().unwrap(), vec![64]);
}
#[test]
fn storage_proof_encoded_size_estimation_works() {
storage_proof_encoded_size_estimation_works_inner(StateVersion::V0);
storage_proof_encoded_size_estimation_works_inner(StateVersion::V1);
}
fn storage_proof_encoded_size_estimation_works_inner(state_version: StateVersion) {
let trie_backend = test_trie(state_version);
let backend = test_proving(&trie_backend);
let check_estimation =
|backend: &ProvingBackend<'_, PrefixedMemoryDB<BlakeTwo256>, BlakeTwo256>| {
let storage_proof = backend.extract_proof();
let estimation =
backend.0.essence().backend_storage().proof_recorder.estimate_encoded_size();
assert_eq!(storage_proof.encoded_size(), estimation);
};
assert_eq!(backend.storage(b"key").unwrap(), Some(b"value".to_vec()));
check_estimation(&backend);
assert_eq!(backend.storage(b"value1").unwrap(), Some(vec![42]));
check_estimation(&backend);
assert_eq!(backend.storage(b"value2").unwrap(), Some(vec![24]));
check_estimation(&backend);
assert!(backend.storage(b"doesnotexist").unwrap().is_none());
check_estimation(&backend);
assert!(backend.storage(b"doesnotexist2").unwrap().is_none());
check_estimation(&backend);
}
#[test]
fn proof_recorded_for_same_execution_should_be_deterministic() {
let storage_changes = vec![
(H256::random(), Some(b"value1".to_vec())),
(H256::random(), Some(b"value2".to_vec())),
(H256::random(), Some(b"value3".to_vec())),
(H256::random(), Some(b"value4".to_vec())),
(H256::random(), Some(b"value5".to_vec())),
(H256::random(), Some(b"value6".to_vec())),
(H256::random(), Some(b"value7".to_vec())),
(H256::random(), Some(b"value8".to_vec())),
];
let proof_recorder =
ProofRecorder::<H256> { inner: Arc::new(RwLock::new(ProofRecorderInner::default())) };
storage_changes
.clone()
.into_iter()
.for_each(|(key, val)| proof_recorder.record(key, val));
let proof1 = proof_recorder.to_storage_proof();
let proof_recorder =
ProofRecorder::<H256> { inner: Arc::new(RwLock::new(ProofRecorderInner::default())) };
storage_changes
.into_iter()
.for_each(|(key, val)| proof_recorder.record(key, val));
let proof2 = proof_recorder.to_storage_proof();
assert_eq!(proof1, proof2);
}
}
@@ -23,7 +23,6 @@ use hash_db::Hasher;
use sp_core::{
storage::{ChildInfo, StateVersion, TrackedStorageKey},
traits::Externalities,
Blake2Hasher,
};
use sp_externalities::MultiRemovalResults;
use std::{
@@ -44,7 +43,10 @@ pub trait InspectState<H: Hasher, B: Backend<H>> {
fn inspect_state<F: FnOnce() -> R, R>(&self, f: F) -> R;
}
impl<H: Hasher, B: Backend<H>> InspectState<H, B> for B {
impl<H: Hasher, B: Backend<H>> InspectState<H, B> for B
where
H::Out: Encode,
{
fn inspect_state<F: FnOnce() -> R, R>(&self, f: F) -> R {
ReadOnlyExternalities::from(self).execute_with(f)
}
@@ -66,7 +68,10 @@ impl<'a, H: Hasher, B: 'a + Backend<H>> From<&'a B> for ReadOnlyExternalities<'a
}
}
impl<'a, H: Hasher, B: 'a + Backend<H>> ReadOnlyExternalities<'a, H, B> {
impl<'a, H: Hasher, B: 'a + Backend<H>> ReadOnlyExternalities<'a, H, B>
where
H::Out: Encode,
{
/// Execute the given closure while `self` is set as externalities.
///
/// Returns the result of the given closure.
@@ -75,7 +80,10 @@ impl<'a, H: Hasher, B: 'a + Backend<H>> ReadOnlyExternalities<'a, H, B> {
}
}
impl<'a, H: Hasher, B: 'a + Backend<H>> Externalities for ReadOnlyExternalities<'a, H, B> {
impl<'a, H: Hasher, B: 'a + Backend<H>> Externalities for ReadOnlyExternalities<'a, H, B>
where
H::Out: Encode,
{
fn set_offchain_storage(&mut self, _key: &[u8], _value: Option<&[u8]>) {
panic!("Should not be used in read-only externalities!")
}
@@ -87,7 +95,10 @@ impl<'a, H: Hasher, B: 'a + Backend<H>> Externalities for ReadOnlyExternalities<
}
fn storage_hash(&self, key: &[u8]) -> Option<Vec<u8>> {
self.storage(key).map(|v| Blake2Hasher::hash(&v).encode())
self.backend
.storage_hash(key)
			.expect("Backend failed for storage_hash in ReadOnlyExternalities")
.map(|h| h.encode())
}
fn child_storage(&self, child_info: &ChildInfo, key: &[u8]) -> Option<StorageValue> {
@@ -97,7 +108,10 @@ impl<'a, H: Hasher, B: 'a + Backend<H>> Externalities for ReadOnlyExternalities<
}
fn child_storage_hash(&self, child_info: &ChildInfo, key: &[u8]) -> Option<Vec<u8>> {
self.child_storage(child_info, key).map(|v| Blake2Hasher::hash(&v).encode())
self.backend
.child_storage_hash(child_info, key)
			.expect("Backend failed for child_storage_hash in ReadOnlyExternalities")
.map(|h| h.encode())
}
fn next_storage_key(&self, key: &[u8]) -> Option<StorageKey> {
@@ -24,7 +24,7 @@ use std::{
use crate::{
backend::Backend, ext::Ext, InMemoryBackend, OverlayedChanges, StorageKey,
StorageTransactionCache, StorageValue,
StorageTransactionCache, StorageValue, TrieBackendBuilder,
};
use hash_db::Hasher;
@@ -41,8 +41,9 @@ use sp_externalities::{Extension, ExtensionStore, Extensions};
use sp_trie::StorageProof;
/// Simple HashMap-based Externalities impl.
pub struct TestExternalities<H: Hasher>
pub struct TestExternalities<H>
where
H: Hasher + 'static,
H::Out: codec::Codec + Ord,
{
/// The overlay changed storage.
@@ -58,8 +59,9 @@ where
pub state_version: StateVersion,
}
impl<H: Hasher> TestExternalities<H>
impl<H> TestExternalities<H>
where
H: Hasher + 'static,
H::Out: Ord + 'static + codec::Codec,
{
/// Get externalities implementation.
@@ -202,7 +204,9 @@ where
/// This implementation will wipe the proof recorded in between calls. Consecutive calls will
/// get their own proof from scratch.
pub fn execute_and_prove<R>(&mut self, execute: impl FnOnce() -> R) -> (R, StorageProof) {
let proving_backend = crate::InMemoryProvingBackend::new(&self.backend);
let proving_backend = TrieBackendBuilder::wrap(&self.backend)
.with_recorder(Default::default())
.build();
let mut proving_ext = Ext::new(
&mut self.overlay,
&mut self.storage_transaction_cache,
@@ -211,7 +215,7 @@ where
);
let outcome = sp_externalities::set_and_run_with_externalities(&mut proving_ext, execute);
let proof = proving_backend.extract_proof();
let proof = proving_backend.extract_proof().expect("Failed to extract storage proof");
(outcome, proof)
}
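The `execute_and_prove` change above replaces the dedicated `InMemoryProvingBackend` with a builder that wraps an existing backend and optionally attaches a recorder. A minimal sketch of that builder shape (types here are illustrative placeholders, not the real `TrieBackendBuilder` API):

```rust
#[derive(Default)]
struct Backend;

#[derive(Default)]
struct Recorder;

// Builder wrapping a borrowed backend; the recorder is opt-in, so the same
// type serves both plain execution and proof recording.
struct TrieBackendBuilder<'a> {
	backend: &'a Backend,
	recorder: Option<Recorder>,
}

impl<'a> TrieBackendBuilder<'a> {
	fn wrap(backend: &'a Backend) -> Self {
		Self { backend, recorder: None }
	}

	fn with_recorder(mut self, recorder: Recorder) -> Self {
		self.recorder = Some(recorder);
		self
	}

	fn build(self) -> (&'a Backend, Option<Recorder>) {
		(self.backend, self.recorder)
	}
}

fn main() {
	let backend = Backend::default();
	let (_backend, recorder) = TrieBackendBuilder::wrap(&backend)
		.with_recorder(Recorder::default())
		.build();
	assert!(recorder.is_some());
	println!("ok");
}
```

The design choice: instead of a separate proving backend type, one backend type carries an `Option<Recorder>`, so recording becomes a configuration of the normal backend rather than a parallel implementation.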
@@ -18,23 +18,32 @@
//! Trie-based state machine backend essence used to read values
//! from storage.
use crate::{backend::Consolidate, debug, warn, StorageKey, StorageValue};
use codec::Encode;
use crate::{
backend::Consolidate, debug, trie_backend::AsLocalTrieCache, warn, StorageKey, StorageValue,
};
use codec::Codec;
use hash_db::{self, AsHashDB, HashDB, HashDBRef, Hasher, Prefix};
#[cfg(feature = "std")]
use parking_lot::RwLock;
use sp_core::storage::{ChildInfo, ChildType, StateVersion};
#[cfg(not(feature = "std"))]
use sp_std::marker::PhantomData;
use sp_std::{boxed::Box, vec::Vec};
#[cfg(feature = "std")]
use sp_trie::recorder::Recorder;
use sp_trie::{
child_delta_trie_root, delta_trie_root, empty_child_trie_root, read_child_trie_value,
read_trie_value,
trie_types::{TrieDB, TrieError},
DBValue, KeySpacedDB, LayoutV1 as Layout, Trie, TrieDBIterator, TrieDBKeyIterator,
child_delta_trie_root, delta_trie_root, empty_child_trie_root, read_child_trie_hash,
read_child_trie_value, read_trie_value,
trie_types::{TrieDBBuilder, TrieError},
DBValue, KeySpacedDB, NodeCodec, Trie, TrieCache, TrieDBIterator, TrieDBKeyIterator,
TrieRecorder,
};
#[cfg(feature = "std")]
use std::collections::HashMap;
#[cfg(feature = "std")]
use std::sync::Arc;
use std::{collections::HashMap, sync::Arc};
// In this module, the layout is only used for read operations and the empty root,
// for which V1 and V0 are equivalent.
use sp_trie::LayoutV1 as Layout;
#[cfg(not(feature = "std"))]
macro_rules! format {
@@ -68,18 +77,21 @@ impl<H> Cache<H> {
}
/// Patricia trie-based pairs storage essence.
pub struct TrieBackendEssence<S: TrieBackendStorage<H>, H: Hasher> {
pub struct TrieBackendEssence<S: TrieBackendStorage<H>, H: Hasher, C> {
storage: S,
root: H::Out,
empty: H::Out,
#[cfg(feature = "std")]
pub(crate) cache: Arc<RwLock<Cache<H::Out>>>,
#[cfg(feature = "std")]
pub(crate) trie_node_cache: Option<C>,
#[cfg(feature = "std")]
pub(crate) recorder: Option<Recorder<H>>,
#[cfg(not(feature = "std"))]
_phantom: PhantomData<C>,
}
impl<S: TrieBackendStorage<H>, H: Hasher> TrieBackendEssence<S, H>
where
H::Out: Encode,
{
impl<S: TrieBackendStorage<H>, H: Hasher, C> TrieBackendEssence<S, H, C> {
/// Create new trie-based backend.
pub fn new(storage: S, root: H::Out) -> Self {
TrieBackendEssence {
@@ -88,6 +100,30 @@ where
empty: H::hash(&[0u8]),
#[cfg(feature = "std")]
cache: Arc::new(RwLock::new(Cache::new())),
#[cfg(feature = "std")]
trie_node_cache: None,
#[cfg(feature = "std")]
recorder: None,
#[cfg(not(feature = "std"))]
_phantom: PhantomData,
}
}
	/// Create a new trie-based backend with an optional trie node cache and recorder.
#[cfg(feature = "std")]
pub fn new_with_cache_and_recorder(
storage: S,
root: H::Out,
cache: Option<C>,
recorder: Option<Recorder<H>>,
) -> Self {
TrieBackendEssence {
storage,
root,
empty: H::hash(&[0u8]),
cache: Arc::new(RwLock::new(Cache::new())),
trie_node_cache: cache,
recorder,
}
}
@@ -96,6 +132,11 @@ where
&self.storage
}
/// Get backend storage mutable reference.
pub fn backend_storage_mut(&mut self) -> &mut S {
&mut self.storage
}
/// Get trie root.
pub fn root(&self) -> &H::Out {
&self.root
@@ -120,7 +161,97 @@ where
pub fn into_storage(self) -> S {
self.storage
}
}
impl<S: TrieBackendStorage<H>, H: Hasher, C: AsLocalTrieCache<H>> TrieBackendEssence<S, H, C> {
/// Call the given closure passing it the recorder and the cache.
///
/// If the given `storage_root` is `None`, `self.root` will be used.
#[cfg(feature = "std")]
fn with_recorder_and_cache<R>(
&self,
storage_root: Option<H::Out>,
callback: impl FnOnce(
Option<&mut dyn TrieRecorder<H::Out>>,
Option<&mut dyn TrieCache<NodeCodec<H>>>,
) -> R,
) -> R {
let storage_root = storage_root.unwrap_or_else(|| self.root);
let mut recorder = self.recorder.as_ref().map(|r| r.as_trie_recorder());
let recorder = recorder.as_mut().map(|r| r as _);
let mut cache = self
.trie_node_cache
.as_ref()
.map(|c| c.as_local_trie_cache().as_trie_db_cache(storage_root));
let cache = cache.as_mut().map(|c| c as _);
callback(recorder, cache)
}
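The two-step `as_mut().map(|r| r as _)` dance above is worth a note: the concrete guard must be kept alive in a local `Option` while only a reborrowed `&mut dyn ...` trait object is handed to the callback. A standalone sketch of the same coercion (trait and types are illustrative):

```rust
trait TrieRecorder {
	fn record(&mut self, key: u8);
}

struct ConcreteRecorder {
	keys: Vec<u8>,
}

impl TrieRecorder for ConcreteRecorder {
	fn record(&mut self, key: u8) {
		self.keys.push(key)
	}
}

// The callee only sees the trait object, not the concrete recorder type.
fn lookup(recorder: Option<&mut dyn TrieRecorder>, key: u8) -> u8 {
	if let Some(r) = recorder {
		r.record(key)
	}
	key
}

fn main() {
	let mut recorder = Some(ConcreteRecorder { keys: Vec::new() });
	// `as _` coerces `&mut ConcreteRecorder` to `&mut dyn TrieRecorder`;
	// the concrete value stays owned by the local `Option`.
	let r = recorder.as_mut().map(|r| r as _);
	assert_eq!(lookup(r, 7), 7);
	assert_eq!(recorder.unwrap().keys, vec![7]);
	println!("ok");
}
```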
#[cfg(not(feature = "std"))]
fn with_recorder_and_cache<R>(
&self,
_: Option<H::Out>,
callback: impl FnOnce(
Option<&mut dyn TrieRecorder<H::Out>>,
Option<&mut dyn TrieCache<NodeCodec<H>>>,
) -> R,
) -> R {
callback(None, None)
}
/// Call the given closure passing it the recorder and the cache.
///
/// This function must only be used when the operation in `callback` is
/// calculating a `storage_root`. It is expected that `callback` returns
/// the new storage root. This is required to register the changes in the cache
/// for the correct storage root.
#[cfg(feature = "std")]
fn with_recorder_and_cache_for_storage_root<R>(
&self,
callback: impl FnOnce(
Option<&mut dyn TrieRecorder<H::Out>>,
Option<&mut dyn TrieCache<NodeCodec<H>>>,
) -> (Option<H::Out>, R),
) -> R {
let mut recorder = self.recorder.as_ref().map(|r| r.as_trie_recorder());
let recorder = recorder.as_mut().map(|r| r as _);
let result = if let Some(local_cache) = self.trie_node_cache.as_ref() {
let mut cache = local_cache.as_local_trie_cache().as_trie_db_mut_cache();
let (new_root, r) = callback(recorder, Some(&mut cache));
if let Some(new_root) = new_root {
cache.merge_into(local_cache.as_local_trie_cache(), new_root);
}
r
} else {
callback(recorder, None).1
};
result
}
#[cfg(not(feature = "std"))]
fn with_recorder_and_cache_for_storage_root<R>(
&self,
callback: impl FnOnce(
Option<&mut dyn TrieRecorder<H::Out>>,
Option<&mut dyn TrieCache<NodeCodec<H>>>,
) -> (Option<H::Out>, R),
) -> R {
callback(None, None).1
}
}
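The storage-root variant above has a subtlety the doc comment calls out: cache entries produced while computing a new root are only valid under that root, so lookups go into a fresh local cache that is merged into the shared per-root cache only once the new root is known. A hedged sketch of that flow (all names illustrative):

```rust
use std::collections::HashMap;

// Shared cache, partitioned by the storage root its entries are valid for.
#[derive(Default)]
struct SharedCache {
	by_root: HashMap<u64, HashMap<Vec<u8>, Vec<u8>>>,
}

impl SharedCache {
	fn merge_into(&mut self, root: u64, local: HashMap<Vec<u8>, Vec<u8>>) {
		self.by_root.entry(root).or_default().extend(local);
	}
}

// Mirrors `with_recorder_and_cache_for_storage_root`: the callback fills a
// local cache and reports the new root; only then is the local cache
// registered under that root.
fn compute_root_with_cache(
	shared: &mut SharedCache,
	callback: impl FnOnce(&mut HashMap<Vec<u8>, Vec<u8>>) -> (Option<u64>, u64),
) -> u64 {
	let mut local = HashMap::new();
	let (new_root, result) = callback(&mut local);
	if let Some(root) = new_root {
		shared.merge_into(root, local);
	}
	result
}

fn main() {
	let mut shared = SharedCache::default();
	let root = compute_root_with_cache(&mut shared, |local| {
		local.insert(b"key".to_vec(), b"value".to_vec());
		(Some(42), 42)
	});
	assert_eq!(root, 42);
	assert!(shared.by_root[&42].contains_key(b"key".as_slice()));
	println!("ok");
}
```

If the root computation fails (the callback returns `None`), the local cache is simply dropped, so the shared cache never holds entries under a root that was never produced.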
impl<S: TrieBackendStorage<H>, H: Hasher, C: AsLocalTrieCache<H> + Send + Sync>
TrieBackendEssence<S, H, C>
where
H::Out: Codec + Ord,
{
	/// Return the next key in the trie, i.e. the minimum key that is strictly greater than `key`
	/// in lexicographic order.
pub fn next_storage_key(&self, key: &[u8]) -> Result<Option<StorageKey>> {
@@ -184,39 +315,82 @@ where
dyn_eph = self;
}
let trie =
TrieDB::<H>::new(dyn_eph, root).map_err(|e| format!("TrieDB creation error: {}", e))?;
let mut iter = trie.key_iter().map_err(|e| format!("TrieDB iteration error: {}", e))?;
self.with_recorder_and_cache(Some(*root), |recorder, cache| {
let trie = TrieDBBuilder::<H>::new(dyn_eph, root)
.with_optional_recorder(recorder)
.with_optional_cache(cache)
.build();
// The key just after the one given in input, basically `key++0`.
// Note: We are sure this is the next key if:
// * size of key has no limit (i.e. we can always add 0 to the path),
// * and no keys can be inserted between `key` and `key++0` (this is ensured by sp-io).
let mut potential_next_key = Vec::with_capacity(key.len() + 1);
potential_next_key.extend_from_slice(key);
potential_next_key.push(0);
let mut iter = trie.key_iter().map_err(|e| format!("TrieDB iteration error: {}", e))?;
iter.seek(&potential_next_key)
.map_err(|e| format!("TrieDB iterator seek error: {}", e))?;
// The key just after the one given in input, basically `key++0`.
// Note: We are sure this is the next key if:
// * size of key has no limit (i.e. we can always add 0 to the path),
// * and no keys can be inserted between `key` and `key++0` (this is ensured by sp-io).
let mut potential_next_key = Vec::with_capacity(key.len() + 1);
potential_next_key.extend_from_slice(key);
potential_next_key.push(0);
let next_element = iter.next();
iter.seek(&potential_next_key)
.map_err(|e| format!("TrieDB iterator seek error: {}", e))?;
let next_key = if let Some(next_element) = next_element {
let next_key =
next_element.map_err(|e| format!("TrieDB iterator next error: {}", e))?;
Some(next_key)
} else {
None
};
let next_element = iter.next();
Ok(next_key)
let next_key = if let Some(next_element) = next_element {
let next_key =
next_element.map_err(|e| format!("TrieDB iterator next error: {}", e))?;
Some(next_key)
} else {
None
};
Ok(next_key)
})
}
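The `key ++ [0]` seek trick above can be demonstrated in isolation: appending a zero byte yields the immediate lexicographic successor of `key`, so seeking to `key ++ [0]` finds the first key strictly greater than `key`. A standalone sketch over a `BTreeSet` in place of the trie iterator:

```rust
use std::collections::BTreeSet;

// First key strictly greater than `key`, found by seeking to `key ++ [0]`,
// mirroring the seek in `next_storage_key`.
fn next_key(keys: &BTreeSet<Vec<u8>>, key: &[u8]) -> Option<Vec<u8>> {
	let mut potential_next_key = Vec::with_capacity(key.len() + 1);
	potential_next_key.extend_from_slice(key);
	potential_next_key.push(0);
	keys.range(potential_next_key..).next().cloned()
}

fn main() {
	let keys: BTreeSet<Vec<u8>> = [vec![1], vec![1, 0], vec![2]].into_iter().collect();
	// `key ++ [0]` itself may be the next key...
	assert_eq!(next_key(&keys, &[1]), Some(vec![1, 0]));
	// ...or the seek lands on whatever follows it.
	assert_eq!(next_key(&keys, &[1, 0]), Some(vec![2]));
	assert_eq!(next_key(&keys, &[2]), None);
	println!("ok");
}
```

This only works because, as the comment in the source notes, key length is unbounded and no key can sort between `key` and `key ++ [0]`.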
	/// Returns the hash of the value at the given `key`.
pub fn storage_hash(&self, key: &[u8]) -> Result<Option<H::Out>> {
let map_e = |e| format!("Trie lookup error: {}", e);
self.with_recorder_and_cache(None, |recorder, cache| {
TrieDBBuilder::new(self, &self.root)
.with_optional_cache(cache)
.with_optional_recorder(recorder)
.build()
.get_hash(key)
.map_err(map_e)
})
}
/// Get the value of storage at given key.
pub fn storage(&self, key: &[u8]) -> Result<Option<StorageValue>> {
let map_e = |e| format!("Trie lookup error: {}", e);
read_trie_value::<Layout<H>, _>(self, &self.root, key).map_err(map_e)
self.with_recorder_and_cache(None, |recorder, cache| {
read_trie_value::<Layout<H>, _>(self, &self.root, key, recorder, cache).map_err(map_e)
})
}
	/// Returns the hash of the value at the given child storage `key`.
pub fn child_storage_hash(&self, child_info: &ChildInfo, key: &[u8]) -> Result<Option<H::Out>> {
let child_root = match self.child_root(child_info)? {
Some(root) => root,
None => return Ok(None),
};
let map_e = |e| format!("Trie lookup error: {}", e);
self.with_recorder_and_cache(Some(child_root), |recorder, cache| {
read_child_trie_hash::<Layout<H>, _>(
child_info.keyspace(),
self,
&child_root,
key,
recorder,
cache,
)
.map_err(map_e)
})
}
/// Get the value of child storage at given key.
@@ -225,15 +399,24 @@ where
child_info: &ChildInfo,
key: &[u8],
) -> Result<Option<StorageValue>> {
let root = match self.child_root(child_info)? {
let child_root = match self.child_root(child_info)? {
Some(root) => root,
None => return Ok(None),
};
let map_e = |e| format!("Trie lookup error: {}", e);
read_child_trie_value::<Layout<H>, _>(child_info.keyspace(), self, &root, key)
self.with_recorder_and_cache(Some(child_root), |recorder, cache| {
read_child_trie_value::<Layout<H>, _>(
child_info.keyspace(),
self,
&child_root,
key,
recorder,
cache,
)
.map_err(map_e)
})
}
/// Retrieve all entries keys of storage and call `f` for each of those keys.
@@ -338,28 +521,33 @@ where
maybe_start_at: Option<&[u8]>,
) {
let mut iter = move |db| -> sp_std::result::Result<(), Box<TrieError<H::Out>>> {
let trie = TrieDB::<H>::new(db, root)?;
let prefix = maybe_prefix.unwrap_or(&[]);
let iter = match maybe_start_at {
Some(start_at) =>
TrieDBKeyIterator::new_prefixed_then_seek(&trie, prefix, start_at),
None => TrieDBKeyIterator::new_prefixed(&trie, prefix),
}?;
self.with_recorder_and_cache(Some(*root), |recorder, cache| {
let trie = TrieDBBuilder::<H>::new(db, root)
.with_optional_recorder(recorder)
.with_optional_cache(cache)
.build();
let prefix = maybe_prefix.unwrap_or(&[]);
let iter = match maybe_start_at {
Some(start_at) =>
TrieDBKeyIterator::new_prefixed_then_seek(&trie, prefix, start_at),
None => TrieDBKeyIterator::new_prefixed(&trie, prefix),
}?;
for x in iter {
let key = x?;
for x in iter {
let key = x?;
debug_assert!(maybe_prefix
.as_ref()
.map(|prefix| key.starts_with(prefix))
.unwrap_or(true));
debug_assert!(maybe_prefix
.as_ref()
.map(|prefix| key.starts_with(prefix))
.unwrap_or(true));
if !f(&key) {
break
if !f(&key) {
break
}
}
}
Ok(())
Ok(())
})
};
let result = if let Some(child_info) = child_info {
@@ -383,25 +571,30 @@ where
allow_missing_nodes: bool,
) -> Result<bool> {
let mut iter = move |db| -> sp_std::result::Result<bool, Box<TrieError<H::Out>>> {
let trie = TrieDB::<H>::new(db, root)?;
self.with_recorder_and_cache(Some(*root), |recorder, cache| {
let trie = TrieDBBuilder::<H>::new(db, root)
.with_optional_recorder(recorder)
.with_optional_cache(cache)
.build();
let prefix = prefix.unwrap_or(&[]);
let iterator = if let Some(start_at) = start_at {
TrieDBIterator::new_prefixed_then_seek(&trie, prefix, start_at)?
} else {
TrieDBIterator::new_prefixed(&trie, prefix)?
};
for x in iterator {
let (key, value) = x?;
debug_assert!(key.starts_with(prefix));
if !f(key, value) {
return Ok(false)
}
}
Ok(true)
})
};
let result = if let Some(child_info) = child_info {
@@ -436,14 +629,20 @@ where
/// Returns all `(key, value)` pairs in the trie.
pub fn pairs(&self) -> Vec<(StorageKey, StorageValue)> {
let collect_all = || -> sp_std::result::Result<_, Box<TrieError<H::Out>>> {
self.with_recorder_and_cache(None, |recorder, cache| {
let trie = TrieDBBuilder::<H>::new(self, self.root())
.with_optional_cache(cache)
.with_optional_recorder(recorder)
.build();
let mut v = Vec::new();
for x in trie.iter()? {
let (key, value) = x?;
v.push((key.to_vec(), value.to_vec()));
}
Ok(v)
})
};
match collect_all() {
@@ -467,27 +666,28 @@ where
&self,
delta: impl Iterator<Item = (&'a [u8], Option<&'a [u8]>)>,
state_version: StateVersion,
) -> (H::Out, S::Overlay) {
let mut write_overlay = S::Overlay::default();
let root = self.with_recorder_and_cache_for_storage_root(|recorder, cache| {
let mut eph = Ephemeral::new(self.backend_storage(), &mut write_overlay);
let res = match state_version {
StateVersion::V0 => delta_trie_root::<sp_trie::LayoutV0<H>, _, _, _, _, _>(
&mut eph, self.root, delta, recorder, cache,
),
StateVersion::V1 => delta_trie_root::<sp_trie::LayoutV1<H>, _, _, _, _, _>(
&mut eph, self.root, delta, recorder, cache,
),
};
match res {
Ok(ret) => (Some(ret), ret),
Err(e) => {
warn!(target: "trie", "Failed to write to trie: {}", e);
(None, self.root)
},
}
});
(root, write_overlay)
}
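For readers following the `storage_root` change above: the closure passed to `with_recorder_and_cache_for_storage_root` now returns a pair `(Option<new_root>, result)`, so the helper can persist cached trie data under the new root only when the write succeeded. Below is a minimal toy sketch of that pattern; the names `Essence` and `with_cache_for_storage_root` are illustrative stand-ins, not the real `sp_state_machine` types.

```rust
// Toy sketch of the "(Option<new_root>, result)" closure contract: the helper
// commits the new root only when the closure signals success with `Some`.
struct Essence {
    root: u64,
    cache: Option<Vec<u64>>, // stand-in for a per-root trie cache
}

impl Essence {
    fn with_cache_for_storage_root<R>(
        &mut self,
        f: impl FnOnce(Option<&mut Vec<u64>>, u64) -> (Option<u64>, R),
    ) -> R {
        let root = self.root;
        let (new_root, result) = f(self.cache.as_mut(), root);
        if let Some(new_root) = new_root {
            // Success: adopt the new root (the real code also merges cached
            // node/value data under this root here).
            self.root = new_root;
        }
        result
    }

    fn storage_root(&mut self, delta: u64) -> u64 {
        self.with_cache_for_storage_root(|_cache, root| match root.checked_add(delta) {
            Some(ret) => (Some(ret), ret),
            // Failure: keep the old root, mirroring the `warn!` + `(None, self.root)` arm.
            None => (None, root),
        })
    }
}

fn main() {
    let mut e = Essence { root: 10, cache: Some(Vec::new()) };
    assert_eq!(e.storage_root(5), 15);
    assert_eq!(e.root, 15);
    // Overflow fails the "write": the committed root is unchanged.
    assert_eq!(e.storage_root(u64::MAX), 15);
    assert_eq!(e.root, 15);
}
```

This mirrors why both the `V0`/`V1` arms now forward `recorder` and `cache` into `delta_trie_root`: the same closure owns both the trie mutation and the decision of whether its results are safe to cache.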
@@ -499,15 +699,12 @@ where
child_info: &ChildInfo,
delta: impl Iterator<Item = (&'a [u8], Option<&'a [u8]>)>,
state_version: StateVersion,
) -> (H::Out, bool, S::Overlay) {
let default_root = match child_info.child_type() {
ChildType::ParentKeyId => empty_child_trie_root::<sp_trie::LayoutV1<H>>(),
};
let mut write_overlay = S::Overlay::default();
let child_root = match self.child_root(child_info) {
Ok(Some(hash)) => hash,
Ok(None) => default_root,
Err(e) => {
@@ -516,32 +713,39 @@ where
},
};
let new_child_root = self.with_recorder_and_cache_for_storage_root(|recorder, cache| {
let mut eph = Ephemeral::new(self.backend_storage(), &mut write_overlay);
match match state_version {
StateVersion::V0 =>
child_delta_trie_root::<sp_trie::LayoutV0<H>, _, _, _, _, _, _>(
child_info.keyspace(),
&mut eph,
child_root,
delta,
recorder,
cache,
),
StateVersion::V1 =>
child_delta_trie_root::<sp_trie::LayoutV1<H>, _, _, _, _, _, _>(
child_info.keyspace(),
&mut eph,
child_root,
delta,
recorder,
cache,
),
} {
Ok(ret) => (Some(ret), ret),
Err(e) => {
warn!(target: "trie", "Failed to write to trie: {}", e);
(None, child_root)
},
}
});
let is_default = new_child_root == default_root;
(new_child_root, is_default, write_overlay)
}
}
@@ -615,6 +819,14 @@ pub trait TrieBackendStorage<H: Hasher>: Send + Sync {
fn get(&self, key: &H::Out, prefix: Prefix) -> Result<Option<DBValue>>;
}
impl<T: TrieBackendStorage<H>, H: Hasher> TrieBackendStorage<H> for &T {
type Overlay = T::Overlay;
fn get(&self, key: &H::Out, prefix: Prefix) -> Result<Option<DBValue>> {
(*self).get(key, prefix)
}
}
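The blanket `impl TrieBackendStorage<H> for &T` added above lets a shared reference to any storage act as a storage itself, so a proving or caching backend can be layered over borrowed storage without cloning it. A minimal self-contained sketch of the same delegation pattern (toy `Storage` trait, not the real one):

```rust
// Toy sketch of a blanket `impl Trait for &T` that delegates via `(*self)`,
// the same shape as the new `TrieBackendStorage<H> for &T` impl.
trait Storage {
    fn get(&self, key: &str) -> Option<Vec<u8>>;
}

struct MemStorage;

impl Storage for MemStorage {
    fn get(&self, key: &str) -> Option<Vec<u8>> {
        (key == "a").then(|| vec![1, 2, 3])
    }
}

// `&T` is a `Storage` whenever `T` is: forward through the reference,
// exactly as `(*self).get(key, prefix)` does in the diff.
impl<T: Storage> Storage for &T {
    fn get(&self, key: &str) -> Option<Vec<u8>> {
        (*self).get(key)
    }
}

// A consumer that takes the storage by value; thanks to the blanket impl it
// can be handed either an owned storage or a borrowed one.
fn reads<S: Storage>(storage: S) -> Option<Vec<u8>> {
    storage.get("a")
}

fn main() {
    let storage = MemStorage;
    assert_eq!(reads(&storage), Some(vec![1, 2, 3])); // borrowed
    assert_eq!(reads(storage), Some(vec![1, 2, 3])); // owned
}
```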
// This implementation is used by normal storage trie clients.
#[cfg(feature = "std")]
impl<H: Hasher> TrieBackendStorage<H> for Arc<dyn Storage<H>> {
@@ -637,7 +849,9 @@ where
}
}
impl<S: TrieBackendStorage<H>, H: Hasher, C: AsLocalTrieCache<H> + Send + Sync> AsHashDB<H, DBValue>
for TrieBackendEssence<S, H, C>
{
fn as_hash_db<'b>(&'b self) -> &'b (dyn HashDB<H, DBValue> + 'b) {
self
}
@@ -646,7 +860,9 @@ impl<S: TrieBackendStorage<H>, H: Hasher> AsHashDB<H, DBValue> for TrieBackendEs
}
}
impl<S: TrieBackendStorage<H>, H: Hasher, C: AsLocalTrieCache<H> + Send + Sync> HashDB<H, DBValue>
for TrieBackendEssence<S, H, C>
{
fn get(&self, key: &H::Out, prefix: Prefix) -> Option<DBValue> {
if *key == self.empty {
return Some([0u8].to_vec())
@@ -677,7 +893,9 @@ impl<S: TrieBackendStorage<H>, H: Hasher> HashDB<H, DBValue> for TrieBackendEsse
}
}
impl<S: TrieBackendStorage<H>, H: Hasher, C: AsLocalTrieCache<H> + Send + Sync>
HashDBRef<H, DBValue> for TrieBackendEssence<S, H, C>
{
fn get(&self, key: &H::Out, prefix: Prefix) -> Option<DBValue> {
HashDB::get(self, key, prefix)
}
@@ -692,7 +910,8 @@ mod test {
use super::*;
use sp_core::{Blake2Hasher, H256};
use sp_trie::{
cache::LocalTrieCache, trie_types::TrieDBMutBuilderV1 as TrieDBMutBuilder, KeySpacedDBMut,
PrefixedMemoryDB, TrieMut,
};
#[test]
@@ -706,7 +925,7 @@ mod test {
let mut mdb = PrefixedMemoryDB::<Blake2Hasher>::default();
{
let mut trie = TrieDBMutBuilder::new(&mut mdb, &mut root_1).build();
trie.insert(b"3", &[1]).expect("insert failed");
trie.insert(b"4", &[1]).expect("insert failed");
trie.insert(b"6", &[1]).expect("insert failed");
@@ -715,18 +934,18 @@ mod test {
let mut mdb = KeySpacedDBMut::new(&mut mdb, child_info.keyspace());
// reuse of root_1 implicitly assert child trie root is same
// as top trie (contents must remain the same).
let mut trie = TrieDBMutBuilder::new(&mut mdb, &mut root_1).build();
trie.insert(b"3", &[1]).expect("insert failed");
trie.insert(b"4", &[1]).expect("insert failed");
trie.insert(b"6", &[1]).expect("insert failed");
}
{
let mut trie = TrieDBMutBuilder::new(&mut mdb, &mut root_2).build();
trie.insert(child_info.prefixed_storage_key().as_slice(), root_1.as_ref())
.expect("insert failed");
};
let essence_1 = TrieBackendEssence::<_, _, LocalTrieCache<_>>::new(mdb, root_1);
assert_eq!(essence_1.next_storage_key(b"2"), Ok(Some(b"3".to_vec())));
assert_eq!(essence_1.next_storage_key(b"3"), Ok(Some(b"4".to_vec())));
@@ -734,8 +953,8 @@ mod test {
assert_eq!(essence_1.next_storage_key(b"5"), Ok(Some(b"6".to_vec())));
assert_eq!(essence_1.next_storage_key(b"6"), Ok(None));
let mdb = essence_1.backend_storage().clone();
let essence_2 = TrieBackendEssence::<_, _, LocalTrieCache<_>>::new(mdb, root_2);
assert_eq!(essence_2.next_child_storage_key(child_info, b"2"), Ok(Some(b"3".to_vec())));
assert_eq!(essence_2.next_child_storage_key(child_info, b"3"), Ok(Some(b"4".to_vec())));
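The test changes above move from direct constructors (`TrieDBMut::new`) to a builder API with `with_optional_recorder`/`with_optional_cache` setters. The point of the `with_optional_*` shape is that callers can forward whatever `Option`s the surrounding context hands them without branching. A minimal toy sketch of that builder style (illustrative types only, not the real `sp_trie` API):

```rust
// Toy sketch of a `TrieDBBuilder`-style API: `with_optional_*` takes an
// `Option`, so passing `None` is a no-op and call sites need no `if let`.
#[derive(Default)]
struct TrieDb {
    recorder: Option<String>,
    cache: Option<String>,
}

#[derive(Default)]
struct TrieDbBuilder {
    recorder: Option<String>,
    cache: Option<String>,
}

impl TrieDbBuilder {
    // Consuming-builder setters, chained fluently like the diff's
    // `.with_optional_recorder(recorder).with_optional_cache(cache).build()`.
    fn with_optional_recorder(mut self, recorder: Option<String>) -> Self {
        self.recorder = recorder;
        self
    }

    fn with_optional_cache(mut self, cache: Option<String>) -> Self {
        self.cache = cache;
        self
    }

    fn build(self) -> TrieDb {
        TrieDb { recorder: self.recorder, cache: self.cache }
    }
}

fn main() {
    // Forward `Option`s directly, as the closures in `with_recorder_and_cache` do.
    let trie = TrieDbBuilder::default()
        .with_optional_recorder(Some("recorder".into()))
        .with_optional_cache(None)
        .build();
    assert!(trie.recorder.is_some());
    assert!(trie.cache.is_none());
}
```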