Introduce trie level cache and remove state cache (#11407)

* trie state cache

* Also cache missing access on read.

* fix comp

* bis

* fix

* use has_lru

* remove local storage cache on size 0.

* No cache.

* local cache only

* trie cache and local cache

* storage cache (with local)

* trie cache no local cache

* Add state access benchmark

* Remove warnings etc

* Add trie cache benchmark

* No extra "clone" required

* Change benchmark to use multiple blocks

* Use patches

* Integrate shitty implementation

* More stuff

* Revert "Merge branch 'master' into trie_state_cache"

This reverts commit 947cd8e6d43fced10e21b76d5b92ffa57b57c318, reversing
changes made to 29ff036463.

* Improve benchmark

* Adapt to latest changes

* Adapt to changes in trie

* Add a test that uses iterator

* Start fixing it

* Remove obsolete file

* Make it compile

* Start rewriting the trie node cache

* More work on the cache

* More docs and code etc

* Make data cache an optional

* Tests

* Remove debug stuff

* Recorder

* Some docs and a simple test for the recorder

* Compile fixes

* Make it compile

* More fixes

* More fixes

* Fix fix fix

* Make sure cache and recorder work together for basic stuff

* Test that data caching and recording works

* Test `TrieDBMut` with caching

* Try something

* Fixes, fixes, fixes

* Forward the recorder

* Make it compile

* Use recorder in more places

* Switch to new `with_optional_recorder` fn

* Refactor and cleanups

* Move `ProvingBackend` tests

* Simplify

* Move over all functionality to the essence

* Fix compilation

* Implement estimate encoded size for StorageProof

* Start using the `cache` everywhere

* Use the cache everywhere

* Fix compilation

* Fix tests

* Adds `TrieBackendBuilder` and enhances the tests

* Ensure that recorder drain checks that values are found as expected

* Switch over to `TrieBackendBuilder`

* Start fixing the problem with child tries and recording

* Fix recording of child tries

* Make it compile

* Overwrite `storage_hash` in `TrieBackend`

* Add `storage_cache` to the benchmarks

* Fix `no_std` build

* Speed up cache lookup

* Extend the state access benchmark to also hash a runtime

* Fix build

* Fix compilation

* Rewrite value cache

* Add lru cache

* Ensure that the cache lru works

* Value cache should not be optional

* Add support for keeping the shared node cache in its bounds

* Make the cache configurable

* Check that the cache respects the bounds

* Adds a new test

* Fixes

* Docs and some renamings

* More docs

* Start using the new recorder

* Fix more code

* Take `self` argument

* Remove warnings

* Fix benchmark

* Fix accounting

* Rip off the state cache

* Start fixing fallout after removing the state cache

* Make it compile after trie changes

* Fix test

* Add some logging

* Some docs

* Some fixups and clean ups

* Fix benchmark

* Remove unneeded file

* Use git for patching

* Make CI happy

* Update primitives/trie/Cargo.toml

Co-authored-by: Koute <koute@users.noreply.github.com>

* Update primitives/state-machine/src/trie_backend.rs

Co-authored-by: cheme <emericchevalier.pro@gmail.com>

* Introduce new `AsTrieBackend` trait

* Make the LocalTrieCache not clonable

* Make it work in no_std and add docs

* Remove duplicate dependency

* Switch to ahash for better performance

* Speedup value cache merge

* Output errors on underflow

* Ensure the internal LRU map doesn't grow too much

* Use const fn to calculate the value cache element size

* Remove cache configuration

* Fix

* Clear the cache in between for more testing

* Try to come up with a failing test case

* Make the test fail

* Fix the child trie recording

* Make everything compile after the changes to trie

* Adapt to latest trie-db changes

* Fix on stable

* Update primitives/trie/src/cache.rs

Co-authored-by: cheme <emericchevalier.pro@gmail.com>

* Fix wrong merge

* Docs

* Fix warnings

* Cargo.lock

* Bump pin-project

* Fix warnings

* Switch to released crate version

* More fixes

* Make clippy and rustdocs happy

* More clippy

* Print error when using deprecated `--state-cache-size`

* 🤦

* Fixes

* Fix storage_hash linkings

* Update client/rpc/src/dev/mod.rs

Co-authored-by: Arkadiy Paronyan <arkady.paronyan@gmail.com>

* Review feedback

* encode bound

* Rework the shared value cache

Instead of using a `u64` to represent the key, we now use an `Arc<[u8]>`. The keys are additionally stored in an extra `HashSet` to de-duplicate them across different storage roots. When the last usage of a key is dropped from the LRU, we also remove the key from the `HashSet`.

* Improve the cache by merging the old and new solutions

* FMT

* Please stop coming back all the time :crying:

* Update primitives/trie/src/cache/shared_cache.rs

Co-authored-by: Arkadiy Paronyan <arkady.paronyan@gmail.com>

* Fixes

* Make clippy happy

* Ensure we don't deadlock

* Only use one lock to simplify the code

* Do not depend on `Hasher`

* Fix tests

* FMT

* Clippy 🤦

Co-authored-by: cheme <emericchevalier.pro@gmail.com>
Co-authored-by: Koute <koute@users.noreply.github.com>
Co-authored-by: Arkadiy Paronyan <arkady.paronyan@gmail.com>
This commit is contained in:
Bastian Köcher
2022-08-18 20:59:22 +02:00
committed by GitHub
parent d46f6f0d34
commit 73d9ae3284
55 changed files with 3977 additions and 1344 deletions
+53 -54
@@ -18,13 +18,7 @@
//! State backend that's useful for benchmarking
use std::{
cell::{Cell, RefCell},
collections::HashMap,
sync::Arc,
};
use crate::storage_cache::{new_shared_cache, CachingState, SharedCache};
use crate::{DbState, DbStateBuilder};
use hash_db::{Hasher, Prefix};
use kvdb::{DBTransaction, KeyValueDB};
use linked_hash_map::LinkedHashMap;
@@ -37,40 +31,31 @@ use sp_runtime::{
StateVersion, Storage,
};
use sp_state_machine::{
backend::Backend as StateBackend, ChildStorageCollection, DBValue, ProofRecorder,
StorageCollection,
backend::Backend as StateBackend, ChildStorageCollection, DBValue, StorageCollection,
};
use sp_trie::{
cache::{CacheSize, SharedTrieCache},
prefixed_key, MemoryDB,
};
use std::{
cell::{Cell, RefCell},
collections::HashMap,
sync::Arc,
};
use sp_trie::{prefixed_key, MemoryDB};
type DbState<B> =
sp_state_machine::TrieBackend<Arc<dyn sp_state_machine::Storage<HashFor<B>>>, HashFor<B>>;
type State<B> = CachingState<DbState<B>, B>;
type State<B> = DbState<B>;
struct StorageDb<Block: BlockT> {
db: Arc<dyn KeyValueDB>,
proof_recorder: Option<ProofRecorder<Block::Hash>>,
_block: std::marker::PhantomData<Block>,
}
impl<Block: BlockT> sp_state_machine::Storage<HashFor<Block>> for StorageDb<Block> {
fn get(&self, key: &Block::Hash, prefix: Prefix) -> Result<Option<DBValue>, String> {
let prefixed_key = prefixed_key::<HashFor<Block>>(key, prefix);
if let Some(recorder) = &self.proof_recorder {
if let Some(v) = recorder.get(key) {
return Ok(v)
}
let backend_value = self
.db
.get(0, &prefixed_key)
.map_err(|e| format!("Database backend error: {:?}", e))?;
recorder.record(*key, backend_value.clone());
Ok(backend_value)
} else {
self.db
.get(0, &prefixed_key)
.map_err(|e| format!("Database backend error: {:?}", e))
}
self.db
.get(0, &prefixed_key)
.map_err(|e| format!("Database backend error: {:?}", e))
}
}
@@ -82,7 +67,6 @@ pub struct BenchmarkingState<B: BlockT> {
db: Cell<Option<Arc<dyn KeyValueDB>>>,
genesis: HashMap<Vec<u8>, (Vec<u8>, i32)>,
record: Cell<Vec<Vec<u8>>>,
shared_cache: SharedCache<B>, // shared cache is always empty
/// Key tracker for keys in the main trie.
/// We track the total number of reads and writes to these keys,
/// not de-duplicated for repeats.
@@ -93,9 +77,10 @@ pub struct BenchmarkingState<B: BlockT> {
/// not de-duplicated for repeats.
child_key_tracker: RefCell<LinkedHashMap<Vec<u8>, LinkedHashMap<Vec<u8>, TrackedStorageKey>>>,
whitelist: RefCell<Vec<TrackedStorageKey>>,
proof_recorder: Option<ProofRecorder<B::Hash>>,
proof_recorder: Option<sp_trie::recorder::Recorder<HashFor<B>>>,
proof_recorder_root: Cell<B::Hash>,
enable_tracking: bool,
shared_trie_cache: SharedTrieCache<HashFor<B>>,
}
impl<B: BlockT> BenchmarkingState<B> {
@@ -109,7 +94,7 @@ impl<B: BlockT> BenchmarkingState<B> {
let state_version = sp_runtime::StateVersion::default();
let mut root = B::Hash::default();
let mut mdb = MemoryDB::<HashFor<B>>::default();
sp_state_machine::TrieDBMutV1::<HashFor<B>>::new(&mut mdb, &mut root);
sp_trie::trie_types::TrieDBMutBuilderV1::<HashFor<B>>::new(&mut mdb, &mut root).build();
let mut state = BenchmarkingState {
state: RefCell::new(None),
@@ -118,13 +103,14 @@ impl<B: BlockT> BenchmarkingState<B> {
genesis: Default::default(),
genesis_root: Default::default(),
record: Default::default(),
shared_cache: new_shared_cache(0, (1, 10)),
main_key_tracker: Default::default(),
child_key_tracker: Default::default(),
whitelist: Default::default(),
proof_recorder: record_proof.then(Default::default),
proof_recorder_root: Cell::new(root),
enable_tracking,
// Enable the cache, but do not sync anything to the shared state.
shared_trie_cache: SharedTrieCache::new(CacheSize::Maximum(0)),
};
state.add_whitelist_to_tracker();
@@ -160,16 +146,13 @@ impl<B: BlockT> BenchmarkingState<B> {
recorder.reset();
self.proof_recorder_root.set(self.root.get());
}
let storage_db = Arc::new(StorageDb::<B> {
db,
proof_recorder: self.proof_recorder.clone(),
_block: Default::default(),
});
*self.state.borrow_mut() = Some(State::new(
DbState::<B>::new(storage_db, self.root.get()),
self.shared_cache.clone(),
None,
));
let storage_db = Arc::new(StorageDb::<B> { db, _block: Default::default() });
*self.state.borrow_mut() = Some(
DbStateBuilder::<B>::new(storage_db, self.root.get())
.with_optional_recorder(self.proof_recorder.clone())
.with_cache(self.shared_trie_cache.local_cache())
.build(),
);
Ok(())
}
@@ -324,6 +307,19 @@ impl<B: BlockT> StateBackend<HashFor<B>> for BenchmarkingState<B> {
.child_storage(child_info, key)
}
fn child_storage_hash(
&self,
child_info: &ChildInfo,
key: &[u8],
) -> Result<Option<B::Hash>, Self::Error> {
self.add_read_key(Some(child_info.storage_key()), key);
self.state
.borrow()
.as_ref()
.ok_or_else(state_err)?
.child_storage_hash(child_info, key)
}
fn exists_storage(&self, key: &[u8]) -> Result<bool, Self::Error> {
self.add_read_key(None, key);
self.state.borrow().as_ref().ok_or_else(state_err)?.exists_storage(key)
@@ -604,22 +600,25 @@ impl<B: BlockT> StateBackend<HashFor<B>> for BenchmarkingState<B> {
fn proof_size(&self) -> Option<u32> {
self.proof_recorder.as_ref().map(|recorder| {
let proof_size = recorder.estimate_encoded_size() as u32;
let proof = recorder.to_storage_proof();
let proof_recorder_root = self.proof_recorder_root.get();
if proof_recorder_root == Default::default() || proof_size == 1 {
// empty trie
proof_size
} else if let Some(size) = proof.encoded_compact_size::<HashFor<B>>(proof_recorder_root)
{
size as u32
} else {
panic!(
"proof rec root {:?}, root {:?}, genesis {:?}, rec_len {:?}",
self.proof_recorder_root.get(),
self.root.get(),
self.genesis_root,
proof_size,
);
if let Some(size) = proof.encoded_compact_size::<HashFor<B>>(proof_recorder_root) {
size as u32
} else {
panic!(
"proof rec root {:?}, root {:?}, genesis {:?}, rec_len {:?}",
self.proof_recorder_root.get(),
self.root.get(),
self.genesis_root,
proof_size,
);
}
}
})
}
+85 -96
@@ -34,8 +34,8 @@ pub mod bench;
mod children;
mod parity_db;
mod record_stats_state;
mod stats;
mod storage_cache;
#[cfg(any(feature = "rocksdb", test))]
mod upgrade;
mod utils;
@@ -51,8 +51,8 @@ use std::{
};
use crate::{
record_stats_state::RecordStatsState,
stats::StateUsageStats,
storage_cache::{new_shared_cache, CachingState, SharedCache, SyncingCachingState},
utils::{meta_keys, read_db, read_meta, DatabaseType, Meta},
};
use codec::{Decode, Encode};
@@ -83,10 +83,11 @@ use sp_runtime::{
Justification, Justifications, StateVersion, Storage,
};
use sp_state_machine::{
backend::Backend as StateBackend, ChildStorageCollection, DBValue, IndexOperation,
OffchainChangesCollection, StateMachineStats, StorageCollection, UsageInfo as StateUsageInfo,
backend::{AsTrieBackend, Backend as StateBackend},
ChildStorageCollection, DBValue, IndexOperation, OffchainChangesCollection, StateMachineStats,
StorageCollection, UsageInfo as StateUsageInfo,
};
use sp_trie::{prefixed_key, MemoryDB, PrefixedMemoryDB};
use sp_trie::{cache::SharedTrieCache, prefixed_key, MemoryDB, PrefixedMemoryDB};
// Re-export the Database trait so that one can pass an implementation of it.
pub use sc_state_db::PruningMode;
@@ -96,13 +97,16 @@ pub use bench::BenchmarkingState;
const CACHE_HEADERS: usize = 8;
/// Default value for storage cache child ratio.
const DEFAULT_CHILD_RATIO: (usize, usize) = (1, 10);
/// DB-backed patricia trie state, transaction type is an overlay of changes to commit.
pub type DbState<B> =
sp_state_machine::TrieBackend<Arc<dyn sp_state_machine::Storage<HashFor<B>>>, HashFor<B>>;
/// Builder for [`DbState`].
pub type DbStateBuilder<B> = sp_state_machine::TrieBackendBuilder<
Arc<dyn sp_state_machine::Storage<HashFor<B>>>,
HashFor<B>,
>;
/// Length of a [`DbHash`].
const DB_HASH_LEN: usize = 32;
@@ -174,6 +178,14 @@ impl<B: BlockT> StateBackend<HashFor<B>> for RefTrackingState<B> {
self.state.child_storage(child_info, key)
}
fn child_storage_hash(
&self,
child_info: &ChildInfo,
key: &[u8],
) -> Result<Option<B::Hash>, Self::Error> {
self.state.child_storage_hash(child_info, key)
}
fn exists_storage(&self, key: &[u8]) -> Result<bool, Self::Error> {
self.state.exists_storage(key)
}
@@ -272,12 +284,6 @@ impl<B: BlockT> StateBackend<HashFor<B>> for RefTrackingState<B> {
self.state.child_keys(child_info, prefix)
}
fn as_trie_backend(
&self,
) -> Option<&sp_state_machine::TrieBackend<Self::TrieBackendStorage, HashFor<B>>> {
self.state.as_trie_backend()
}
fn register_overlay_stats(&self, stats: &StateMachineStats) {
self.state.register_overlay_stats(stats);
}
@@ -287,12 +293,22 @@ impl<B: BlockT> StateBackend<HashFor<B>> for RefTrackingState<B> {
}
}
impl<B: BlockT> AsTrieBackend<HashFor<B>> for RefTrackingState<B> {
type TrieBackendStorage = <DbState<B> as StateBackend<HashFor<B>>>::TrieBackendStorage;
fn as_trie_backend(
&self,
) -> &sp_state_machine::TrieBackend<Self::TrieBackendStorage, HashFor<B>> {
self.state.as_trie_backend()
}
}
/// Database settings.
pub struct DatabaseSettings {
/// State cache size.
pub state_cache_size: usize,
/// Ratio of cache size dedicated to child tries.
pub state_cache_child_ratio: Option<(usize, usize)>,
/// The maximum trie cache size in bytes.
///
/// If `None` is given, the cache is disabled.
pub trie_cache_maximum_size: Option<usize>,
/// Requested state pruning mode.
pub state_pruning: Option<PruningMode>,
/// Where to find the database.
@@ -730,7 +746,7 @@ impl<Block: BlockT> HeaderMetadata<Block> for BlockchainDb<Block> {
/// Database transaction
pub struct BlockImportOperation<Block: BlockT> {
old_state: SyncingCachingState<RefTrackingState<Block>, Block>,
old_state: RecordStatsState<RefTrackingState<Block>, Block>,
db_updates: PrefixedMemoryDB<HashFor<Block>>,
storage_updates: StorageCollection,
child_storage_updates: ChildStorageCollection,
@@ -800,7 +816,7 @@ impl<Block: BlockT> BlockImportOperation<Block> {
impl<Block: BlockT> sc_client_api::backend::BlockImportOperation<Block>
for BlockImportOperation<Block>
{
type State = SyncingCachingState<RefTrackingState<Block>, Block>;
type State = RecordStatsState<RefTrackingState<Block>, Block>;
fn state(&self) -> ClientResult<Option<&Self::State>> {
Ok(Some(&self.old_state))
@@ -949,7 +965,7 @@ impl<Block: BlockT> EmptyStorage<Block> {
let mut root = Block::Hash::default();
let mut mdb = MemoryDB::<HashFor<Block>>::default();
// both triedbmut are the same on empty storage.
sp_state_machine::TrieDBMutV1::<HashFor<Block>>::new(&mut mdb, &mut root);
sp_trie::trie_types::TrieDBMutBuilderV1::<HashFor<Block>>::new(&mut mdb, &mut root).build();
EmptyStorage(root)
}
}
@@ -1009,13 +1025,13 @@ pub struct Backend<Block: BlockT> {
offchain_storage: offchain::LocalStorage,
blockchain: BlockchainDb<Block>,
canonicalization_delay: u64,
shared_cache: SharedCache<Block>,
import_lock: Arc<RwLock<()>>,
is_archive: bool,
blocks_pruning: BlocksPruning,
io_stats: FrozenForDuration<(kvdb::IoStats, StateUsageInfo)>,
state_usage: Arc<StateUsageStats>,
genesis_state: RwLock<Option<Arc<DbGenesisStorage<Block>>>>,
shared_trie_cache: Option<sp_trie::cache::SharedTrieCache<HashFor<Block>>>,
}
impl<Block: BlockT> Backend<Block> {
@@ -1053,8 +1069,7 @@ impl<Block: BlockT> Backend<Block> {
let db = kvdb_memorydb::create(crate::utils::NUM_COLUMNS);
let db = sp_database::as_database(db);
let db_setting = DatabaseSettings {
state_cache_size: 16777216,
state_cache_child_ratio: Some((50, 100)),
trie_cache_maximum_size: Some(16 * 1024 * 1024),
state_pruning: Some(PruningMode::blocks_pruning(blocks_pruning)),
source: DatabaseSource::Custom { db, require_create_flag: true },
blocks_pruning: BlocksPruning::Some(blocks_pruning),
@@ -1116,16 +1131,15 @@ impl<Block: BlockT> Backend<Block> {
offchain_storage,
blockchain,
canonicalization_delay,
shared_cache: new_shared_cache(
config.state_cache_size,
config.state_cache_child_ratio.unwrap_or(DEFAULT_CHILD_RATIO),
),
import_lock: Default::default(),
is_archive: is_archive_pruning,
io_stats: FrozenForDuration::new(std::time::Duration::from_secs(1)),
state_usage: Arc::new(StateUsageStats::new()),
blocks_pruning: config.blocks_pruning,
genesis_state: RwLock::new(None),
shared_trie_cache: config.trie_cache_maximum_size.map(|maximum_size| {
SharedTrieCache::new(sp_trie::cache::CacheSize::Maximum(maximum_size))
}),
};
// Older DB versions have no last state key. Check if the state is available and set it.
@@ -1194,7 +1208,7 @@ impl<Block: BlockT> Backend<Block> {
(&r.number, &r.hash)
);
return Err(::sp_blockchain::Error::NotInFinalizedChain)
return Err(sp_blockchain::Error::NotInFinalizedChain)
}
retracted.push(r.hash);
@@ -1358,10 +1372,8 @@ impl<Block: BlockT> Backend<Block> {
// blocks are keyed by number + hash.
let lookup_key = utils::number_and_hash_to_lookup_key(number, hash)?;
let (enacted, retracted) = if pending_block.leaf_state.is_best() {
self.set_head_with_transaction(&mut transaction, parent_hash, (number, hash))?
} else {
(Default::default(), Default::default())
if pending_block.leaf_state.is_best() {
self.set_head_with_transaction(&mut transaction, parent_hash, (number, hash))?;
};
utils::insert_hash_to_key_mapping(&mut transaction, columns::KEY_LOOKUP, number, hash)?;
@@ -1488,14 +1500,22 @@ impl<Block: BlockT> Backend<Block> {
let header = &pending_block.header;
let is_best = pending_block.leaf_state.is_best();
debug!(target: "db",
debug!(
target: "db",
"DB Commit {:?} ({}), best={}, state={}, existing={}, finalized={}",
hash, number, is_best, operation.commit_state, existing_header, finalized,
hash,
number,
is_best,
operation.commit_state,
existing_header,
finalized,
);
self.state_usage.merge_sm(operation.old_state.usage_info());
// release state reference so that it can be finalized
let cache = operation.old_state.into_cache_changes();
// VERY IMPORTANT
drop(operation.old_state);
if finalized {
// TODO: ensure best chain contains this block.
@@ -1584,20 +1604,20 @@ impl<Block: BlockT> Backend<Block> {
is_finalized: finalized,
with_state: operation.commit_state,
});
Some((pending_block.header, number, hash, enacted, retracted, is_best, cache))
Some((pending_block.header, hash))
} else {
None
};
let cache_update = if let Some(set_head) = operation.set_head {
if let Some(set_head) = operation.set_head {
if let Some(header) =
sc_client_api::blockchain::HeaderBackend::header(&self.blockchain, set_head)?
{
let number = header.number();
let hash = header.hash();
let (enacted, retracted) =
self.set_head_with_transaction(&mut transaction, hash, (*number, hash))?;
self.set_head_with_transaction(&mut transaction, hash, (*number, hash))?;
meta_updates.push(MetaUpdate {
hash,
number: *number,
@@ -1605,40 +1625,24 @@ impl<Block: BlockT> Backend<Block> {
is_finalized: false,
with_state: false,
});
Some((enacted, retracted))
} else {
return Err(sp_blockchain::Error::UnknownBlock(format!(
"Cannot set head {:?}",
set_head
)))
}
} else {
None
};
}
self.storage.db.commit(transaction)?;
// Apply all in-memory state changes.
// Code beyond this point can't fail.
if let Some((header, number, hash, enacted, retracted, is_best, mut cache)) = imported {
if let Some((header, hash)) = imported {
trace!(target: "db", "DB Commit done {:?}", hash);
let header_metadata = CachedHeaderMetadata::from(&header);
self.blockchain.insert_header_metadata(header_metadata.hash, header_metadata);
cache_header(&mut self.blockchain.header_cache.lock(), hash, Some(header));
cache.sync_cache(
&enacted,
&retracted,
operation.storage_updates,
operation.child_storage_updates,
Some(hash),
Some(number),
is_best,
);
}
if let Some((enacted, retracted)) = cache_update {
self.shared_cache.write().sync(&enacted, &retracted);
}
for m in meta_updates {
@@ -1770,17 +1774,13 @@ impl<Block: BlockT> Backend<Block> {
Ok(())
}
fn empty_state(&self) -> ClientResult<SyncingCachingState<RefTrackingState<Block>, Block>> {
fn empty_state(&self) -> ClientResult<RecordStatsState<RefTrackingState<Block>, Block>> {
let root = EmptyStorage::<Block>::new().0; // Empty trie
let db_state = DbState::<Block>::new(self.storage.clone(), root);
let db_state = DbStateBuilder::<Block>::new(self.storage.clone(), root)
.with_optional_cache(self.shared_trie_cache.as_ref().map(|c| c.local_cache()))
.build();
let state = RefTrackingState::new(db_state, self.storage.clone(), None);
let caching_state = CachingState::new(state, self.shared_cache.clone(), None);
Ok(SyncingCachingState::new(
caching_state,
self.state_usage.clone(),
self.blockchain.meta.clone(),
self.import_lock.clone(),
))
Ok(RecordStatsState::new(state, None, self.state_usage.clone()))
}
}
@@ -1902,16 +1902,13 @@ where
impl<Block: BlockT> sc_client_api::backend::Backend<Block> for Backend<Block> {
type BlockImportOperation = BlockImportOperation<Block>;
type Blockchain = BlockchainDb<Block>;
type State = SyncingCachingState<RefTrackingState<Block>, Block>;
type State = RecordStatsState<RefTrackingState<Block>, Block>;
type OffchainStorage = offchain::LocalStorage;
fn begin_operation(&self) -> ClientResult<Self::BlockImportOperation> {
let mut old_state = self.empty_state()?;
old_state.disable_syncing();
Ok(BlockImportOperation {
pending_block: None,
old_state,
old_state: self.empty_state()?,
db_updates: PrefixedMemoryDB::default(),
storage_updates: Default::default(),
child_storage_updates: Default::default(),
@@ -1934,7 +1931,6 @@ impl<Block: BlockT> sc_client_api::backend::Backend<Block> for Backend<Block> {
} else {
operation.old_state = self.state_at(block)?;
}
operation.old_state.disable_syncing();
operation.commit_state = true;
Ok(())
@@ -2035,8 +2031,9 @@ impl<Block: BlockT> sc_client_api::backend::Backend<Block> for Backend<Block> {
)
});
let database_cache = MemorySize::from_bytes(0);
let state_cache =
MemorySize::from_bytes(self.shared_cache.read().used_storage_cache_size());
let state_cache = MemorySize::from_bytes(
self.shared_trie_cache.as_ref().map_or(0, |c| c.used_memory_size()),
);
let state_db = self.storage.state_db.memory_info();
Some(UsageInfo {
@@ -2278,17 +2275,13 @@ impl<Block: BlockT> sc_client_api::backend::Backend<Block> for Backend<Block> {
};
if is_genesis {
if let Some(genesis_state) = &*self.genesis_state.read() {
let db_state = DbState::<Block>::new(genesis_state.clone(), genesis_state.root);
let root = genesis_state.root;
let db_state = DbStateBuilder::<Block>::new(genesis_state.clone(), root)
.with_optional_cache(self.shared_trie_cache.as_ref().map(|c| c.local_cache()))
.build();
let state = RefTrackingState::new(db_state, self.storage.clone(), None);
let caching_state = CachingState::new(state, self.shared_cache.clone(), None);
let mut state = SyncingCachingState::new(
caching_state,
self.state_usage.clone(),
self.blockchain.meta.clone(),
self.import_lock.clone(),
);
state.disable_syncing();
return Ok(state)
return Ok(RecordStatsState::new(state, None, self.state_usage.clone()))
}
}
@@ -2309,16 +2302,13 @@ impl<Block: BlockT> sc_client_api::backend::Backend<Block> for Backend<Block> {
}
if let Ok(()) = self.storage.state_db.pin(&hash) {
let root = hdr.state_root;
let db_state = DbState::<Block>::new(self.storage.clone(), root);
let db_state = DbStateBuilder::<Block>::new(self.storage.clone(), root)
.with_optional_cache(
self.shared_trie_cache.as_ref().map(|c| c.local_cache()),
)
.build();
let state = RefTrackingState::new(db_state, self.storage.clone(), Some(hash));
let caching_state =
CachingState::new(state, self.shared_cache.clone(), Some(hash));
Ok(SyncingCachingState::new(
caching_state,
self.state_usage.clone(),
self.blockchain.meta.clone(),
self.import_lock.clone(),
))
Ok(RecordStatsState::new(state, Some(hash), self.state_usage.clone()))
} else {
Err(sp_blockchain::Error::UnknownBlock(format!(
"State already discarded for {:?}",
@@ -2494,8 +2484,7 @@ pub(crate) mod tests {
let backend = Backend::<Block>::new(
DatabaseSettings {
state_cache_size: 16777216,
state_cache_child_ratio: Some((50, 100)),
trie_cache_maximum_size: Some(16 * 1024 * 1024),
state_pruning: Some(PruningMode::blocks_pruning(1)),
source: DatabaseSource::Custom { db: backing, require_create_flag: false },
blocks_pruning: BlocksPruning::All,
@@ -0,0 +1,230 @@
// This file is part of Substrate.
// Copyright (C) 2019-2022 Parity Technologies (UK) Ltd.
// SPDX-License-Identifier: GPL-3.0-or-later WITH Classpath-exception-2.0
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
//! Provides [`RecordStatsState`] for recording stats about state access.
use crate::stats::StateUsageStats;
use sp_core::storage::ChildInfo;
use sp_runtime::{
traits::{Block as BlockT, HashFor},
StateVersion,
};
use sp_state_machine::{
backend::{AsTrieBackend, Backend as StateBackend},
TrieBackend,
};
use std::sync::Arc;
/// State abstraction for recording stats about state access.
pub struct RecordStatsState<S, B: BlockT> {
/// Usage statistics
usage: StateUsageStats,
/// State machine registered stats
overlay_stats: sp_state_machine::StateMachineStats,
/// Backing state.
state: S,
/// The hash of the block this state belongs to.
block_hash: Option<B::Hash>,
/// The usage statistics of the backend. These will be updated on drop.
state_usage: Arc<StateUsageStats>,
}
impl<S, B: BlockT> std::fmt::Debug for RecordStatsState<S, B> {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "Block {:?}", self.block_hash)
}
}
impl<S, B: BlockT> Drop for RecordStatsState<S, B> {
fn drop(&mut self) {
self.state_usage.merge_sm(self.usage.take());
}
}
impl<S: StateBackend<HashFor<B>>, B: BlockT> RecordStatsState<S, B> {
/// Create a new instance wrapping a generic state backend.
pub(crate) fn new(
state: S,
block_hash: Option<B::Hash>,
state_usage: Arc<StateUsageStats>,
) -> Self {
RecordStatsState {
usage: StateUsageStats::new(),
overlay_stats: sp_state_machine::StateMachineStats::default(),
state,
block_hash,
state_usage,
}
}
}
impl<S: StateBackend<HashFor<B>>, B: BlockT> StateBackend<HashFor<B>> for RecordStatsState<S, B> {
type Error = S::Error;
type Transaction = S::Transaction;
type TrieBackendStorage = S::TrieBackendStorage;
fn storage(&self, key: &[u8]) -> Result<Option<Vec<u8>>, Self::Error> {
let value = self.state.storage(key)?;
self.usage.tally_key_read(key, value.as_ref(), false);
Ok(value)
}
fn storage_hash(&self, key: &[u8]) -> Result<Option<B::Hash>, Self::Error> {
self.state.storage_hash(key)
}
fn child_storage(
&self,
child_info: &ChildInfo,
key: &[u8],
) -> Result<Option<Vec<u8>>, Self::Error> {
let key = (child_info.storage_key().to_vec(), key.to_vec());
let value = self.state.child_storage(child_info, &key.1)?;
// just pass it through the usage counter
let value = self.usage.tally_child_key_read(&key, value, false);
Ok(value)
}
fn child_storage_hash(
&self,
child_info: &ChildInfo,
key: &[u8],
) -> Result<Option<B::Hash>, Self::Error> {
self.state.child_storage_hash(child_info, key)
}
fn exists_storage(&self, key: &[u8]) -> Result<bool, Self::Error> {
self.state.exists_storage(key)
}
fn exists_child_storage(
&self,
child_info: &ChildInfo,
key: &[u8],
) -> Result<bool, Self::Error> {
self.state.exists_child_storage(child_info, key)
}
fn apply_to_key_values_while<F: FnMut(Vec<u8>, Vec<u8>) -> bool>(
&self,
child_info: Option<&ChildInfo>,
prefix: Option<&[u8]>,
start_at: Option<&[u8]>,
f: F,
allow_missing: bool,
) -> Result<bool, Self::Error> {
self.state
.apply_to_key_values_while(child_info, prefix, start_at, f, allow_missing)
}
fn apply_to_keys_while<F: FnMut(&[u8]) -> bool>(
&self,
child_info: Option<&ChildInfo>,
prefix: Option<&[u8]>,
start_at: Option<&[u8]>,
f: F,
) {
self.state.apply_to_keys_while(child_info, prefix, start_at, f)
}
fn next_storage_key(&self, key: &[u8]) -> Result<Option<Vec<u8>>, Self::Error> {
self.state.next_storage_key(key)
}
fn next_child_storage_key(
&self,
child_info: &ChildInfo,
key: &[u8],
) -> Result<Option<Vec<u8>>, Self::Error> {
self.state.next_child_storage_key(child_info, key)
}
fn for_keys_with_prefix<F: FnMut(&[u8])>(&self, prefix: &[u8], f: F) {
self.state.for_keys_with_prefix(prefix, f)
}
fn for_key_values_with_prefix<F: FnMut(&[u8], &[u8])>(&self, prefix: &[u8], f: F) {
self.state.for_key_values_with_prefix(prefix, f)
}
fn for_child_keys_with_prefix<F: FnMut(&[u8])>(
&self,
child_info: &ChildInfo,
prefix: &[u8],
f: F,
) {
self.state.for_child_keys_with_prefix(child_info, prefix, f)
}
fn storage_root<'a>(
&self,
delta: impl Iterator<Item = (&'a [u8], Option<&'a [u8]>)>,
state_version: StateVersion,
) -> (B::Hash, Self::Transaction)
where
B::Hash: Ord,
{
self.state.storage_root(delta, state_version)
}
fn child_storage_root<'a>(
&self,
child_info: &ChildInfo,
delta: impl Iterator<Item = (&'a [u8], Option<&'a [u8]>)>,
state_version: StateVersion,
) -> (B::Hash, bool, Self::Transaction)
where
B::Hash: Ord,
{
self.state.child_storage_root(child_info, delta, state_version)
}
fn pairs(&self) -> Vec<(Vec<u8>, Vec<u8>)> {
self.state.pairs()
}
fn keys(&self, prefix: &[u8]) -> Vec<Vec<u8>> {
self.state.keys(prefix)
}
fn child_keys(&self, child_info: &ChildInfo, prefix: &[u8]) -> Vec<Vec<u8>> {
self.state.child_keys(child_info, prefix)
}
fn register_overlay_stats(&self, stats: &sp_state_machine::StateMachineStats) {
self.overlay_stats.add(stats);
}
fn usage_info(&self) -> sp_state_machine::UsageInfo {
let mut info = self.usage.take();
info.include_state_machine_states(&self.overlay_stats);
info
}
}
impl<S: StateBackend<HashFor<B>> + AsTrieBackend<HashFor<B>>, B: BlockT> AsTrieBackend<HashFor<B>>
for RecordStatsState<S, B>
{
type TrieBackendStorage = <S as AsTrieBackend<HashFor<B>>>::TrieBackendStorage;
fn as_trie_backend(&self) -> &TrieBackend<Self::TrieBackendStorage, HashFor<B>> {
self.state.as_trie_backend()
}
}