Typed Storage Keys (#1419)

* first iteration on storage multi keys

* decoding values from concat style hashers

* move util functions and remove comments

* change codegen for storage keys and fix examples

* trait bounds don't match scale value...

* fix trait bounds and examples

* reconstruct storage keys in iterations

* build(deps): bump js-sys from 0.3.67 to 0.3.68 (#1428)

Bumps [js-sys](https://github.com/rustwasm/wasm-bindgen) from 0.3.67 to 0.3.68.
- [Release notes](https://github.com/rustwasm/wasm-bindgen/releases)
- [Changelog](https://github.com/rustwasm/wasm-bindgen/blob/main/CHANGELOG.md)
- [Commits](https://github.com/rustwasm/wasm-bindgen/commits)

---
updated-dependencies:
- dependency-name: js-sys
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* build(deps): bump clap from 4.4.18 to 4.5.0 (#1427)

Bumps [clap](https://github.com/clap-rs/clap) from 4.4.18 to 4.5.0.
- [Release notes](https://github.com/clap-rs/clap/releases)
- [Changelog](https://github.com/clap-rs/clap/blob/master/CHANGELOG.md)
- [Commits](https://github.com/clap-rs/clap/compare/v4.4.18...clap_complete-v4.5.0)

---
updated-dependencies:
- dependency-name: clap
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* build(deps): bump either from 1.9.0 to 1.10.0 (#1425)

Bumps [either](https://github.com/rayon-rs/either) from 1.9.0 to 1.10.0.
- [Commits](https://github.com/rayon-rs/either/compare/1.9.0...1.10.0)

---
updated-dependencies:
- dependency-name: either
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* build(deps): bump thiserror from 1.0.56 to 1.0.57 (#1424)

Bumps [thiserror](https://github.com/dtolnay/thiserror) from 1.0.56 to 1.0.57.
- [Release notes](https://github.com/dtolnay/thiserror/releases)
- [Commits](https://github.com/dtolnay/thiserror/compare/1.0.56...1.0.57)

---
updated-dependencies:
- dependency-name: thiserror
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* build(deps): bump jsonrpsee from 0.21.0 to 0.22.0 (#1426)

* build(deps): bump jsonrpsee from 0.21.0 to 0.22.0

Bumps [jsonrpsee](https://github.com/paritytech/jsonrpsee) from 0.21.0 to 0.22.0.
- [Release notes](https://github.com/paritytech/jsonrpsee/releases)
- [Changelog](https://github.com/paritytech/jsonrpsee/blob/master/CHANGELOG.md)
- [Commits](https://github.com/paritytech/jsonrpsee/compare/v0.21.0...v0.22.0)

---
updated-dependencies:
- dependency-name: jsonrpsee
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

* Update Cargo.lock

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: James Wilson <james@jsdw.me>
Co-authored-by: Niklas Adolfsson <niklasadolfsson1@gmail.com>

* subxt: Derive `std::cmp` traits for subxt payloads and addresses (#1429)

* subxt/tx: Derive std::cmp traits

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* subxt/runtime_api: Derive std::cmp traits

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* subxt/constants: Derive std::cmp traits

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* subxt/custom_values: Derive std::cmp traits

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* subxt/storage: Derive std::cmp traits

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* subxt: Fix non_canonical_partial_ord_impl clippy introduced in 1.73

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* subxt: Add comment wrt derivative issue that triggers clippy warning

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* Update subxt/src/backend/mod.rs

* Update subxt/src/constants/constant_address.rs

---------

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>
Co-authored-by: Niklas Adolfsson <niklasadolfsson1@gmail.com>

* fix clippy

* add integration tests

* fix doc tests

* change hashing logic for hashers=1

* refactor

* clippy and fmt

* regenerate polkadot file which got changed by the automatic PR

* nested design for storage keys

* refactor codegen

* codegen adjustments

* fix storage hasher codegen test

* Suggestions for storage value decoding (#1457)

* Storage decode tweaks

* doc tweak

* more precise error when leftover or not enough bytes

* integrate nits from PR

* add fuzztest for storage keys, fix decoding bug

* clippy and fmt

* clippy

* apply Niklas' suggestions

* lifetime issues and iterator impls

* fmt and clippy

* regenerate polkadot.rs

* fix storage key encoding for empty keys

* rename trait methods for storage keys

* fix hasher bug...

* impl nits, add iterator struct separate from `StorageHashers`

* clippy fix

* remove println

---------

Signed-off-by: dependabot[bot] <support@github.com>
Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: James Wilson <james@jsdw.me>
Co-authored-by: Niklas Adolfsson <niklasadolfsson1@gmail.com>
Co-authored-by: Alexandru Vasile <60601340+lexnv@users.noreply.github.com>
This commit is contained in:
Author: Tadeo Hepperle
Date: 2024-03-06 14:04:51 +01:00
Committed by: GitHub
Parent: 55e87c86ba
Commit: a2ee750365
16 changed files with 1960 additions and 1488 deletions
+124 -25
@@ -8,7 +8,7 @@ use quote::{format_ident, quote};
use scale_info::TypeDef;
use scale_typegen::{typegen::type_path::TypePath, TypeGenerator};
use subxt_metadata::{
PalletMetadata, StorageEntryMetadata, StorageEntryModifier, StorageEntryType,
PalletMetadata, StorageEntryMetadata, StorageEntryModifier, StorageEntryType, StorageHasher,
};
use super::CodegenError;
@@ -75,8 +75,15 @@ fn generate_storage_entry_fns(
let alias_module_name = format_ident!("{snake_case_name}");
let alias_storage_path = quote!( types::#alias_module_name::#alias_name );
let storage_entry_map = |idx, id| {
let ident: Ident = format_ident!("_{}", idx);
struct MapEntryKey {
arg_name: Ident,
alias_type_def: TokenStream,
alias_type_path: TokenStream,
hasher: StorageHasher,
}
let map_entry_key = |idx, id, hasher| -> MapEntryKey {
let arg_name: Ident = format_ident!("_{}", idx);
let ty_path = type_gen
.resolve_type_path(id)
.expect("type is in metadata; qed");
@@ -84,34 +91,67 @@ fn generate_storage_entry_fns(
let alias_name = format_ident!("Param{}", idx);
let alias_type = primitive_type_alias(&ty_path);
let alias_type = quote!( pub type #alias_name = #alias_type; );
let path_to_alias = quote!( types::#alias_module_name::#alias_name );
let alias_type_def = quote!( pub type #alias_name = #alias_type; );
let alias_type_path = quote!( types::#alias_module_name::#alias_name );
(ident, alias_type, path_to_alias)
MapEntryKey {
arg_name,
alias_type_def,
alias_type_path,
hasher,
}
};
let keys: Vec<(Ident, TokenStream, TokenStream)> = match storage_entry.entry_type() {
let keys: Vec<MapEntryKey> = match storage_entry.entry_type() {
StorageEntryType::Plain(_) => vec![],
StorageEntryType::Map { key_ty, .. } => {
StorageEntryType::Map {
key_ty, hashers, ..
} => {
match &type_gen
.resolve_type(*key_ty)
.expect("key type should be present")
.type_def
{
// An N-map; return each of the keys separately.
TypeDef::Tuple(tuple) => tuple
.fields
.iter()
.enumerate()
.map(|(idx, f)| storage_entry_map(idx, f.id))
.collect::<Vec<_>>(),
TypeDef::Tuple(tuple) => {
let key_count = tuple.fields.len();
let hasher_count = hashers.len();
if hasher_count != 1 && hasher_count != key_count {
return Err(CodegenError::InvalidStorageHasherCount {
storage_entry_name: storage_entry.name().to_owned(),
key_count,
hasher_count,
});
}
let mut map_entry_keys: Vec<MapEntryKey> = vec![];
for (idx, field) in tuple.fields.iter().enumerate() {
// Note: these are in bounds because of the checks above, qed;
let hasher = if idx >= hasher_count {
hashers[0]
} else {
hashers[idx]
};
map_entry_keys.push(map_entry_key(idx, field.id, hasher));
}
map_entry_keys
}
// A map with a single key; return the single key.
_ => {
vec![storage_entry_map(0, *key_ty)]
let Some(hasher) = hashers.first() else {
return Err(CodegenError::InvalidStorageHasherCount {
storage_entry_name: storage_entry.name().to_owned(),
key_count: 1,
hasher_count: 0,
});
};
vec![map_entry_key(0, *key_ty, *hasher)]
}
}
}
};
let pallet_name = pallet.name();
let storage_name = storage_entry.name();
let Some(storage_hash) = pallet.storage_hash(storage_name) else {
@@ -133,6 +173,10 @@ fn generate_storage_entry_fns(
StorageEntryModifier::Optional => quote!(()),
};
// Note: putting `#crate_path::storage::address::StaticStorageKey` into this variable is necessary
// to get the line width below a certain limit. If not done, rustfmt will refuse to format the following big expression.
// for more information see [this post](https://users.rust-lang.org/t/rustfmt-silently-fails-to-work/75485/4).
let static_storage_key: TokenStream = quote!(#crate_path::storage::address::StaticStorageKey);
let all_fns = (0..=keys.len()).map(|n_keys| {
let keys_slice = &keys[..n_keys];
let (fn_name, is_fetchable, is_iterable) = if n_keys == keys.len() {
@@ -146,12 +190,65 @@ fn generate_storage_entry_fns(
};
(fn_name, false, true)
};
let is_fetchable_type = is_fetchable.then_some(quote!(#crate_path::storage::address::Yes)).unwrap_or(quote!(()));
let is_iterable_type = is_iterable.then_some(quote!(#crate_path::storage::address::Yes)).unwrap_or(quote!(()));
let key_impls = keys_slice.iter().map(|(field_name, _, _)| quote!( #crate_path::storage::address::make_static_storage_map_key(#field_name.borrow()) ));
let key_args = keys_slice.iter().map(|(field_name, _, path_to_alias )| {
quote!( #field_name: impl ::std::borrow::Borrow<#path_to_alias> )
});
let is_fetchable_type = is_fetchable
.then_some(quote!(#crate_path::storage::address::Yes))
.unwrap_or(quote!(()));
let is_iterable_type = is_iterable
.then_some(quote!(#crate_path::storage::address::Yes))
.unwrap_or(quote!(()));
let (keys, keys_type) = match keys_slice.len() {
0 => (quote!(()), quote!(())),
1 => {
let key = &keys_slice[0];
if key.hasher.ends_with_key() {
let arg = &key.arg_name;
let keys = quote!(#static_storage_key::new(#arg.borrow()));
let path = &key.alias_type_path;
let path = quote!(#static_storage_key<#path>);
(keys, path)
} else {
(quote!(()), quote!(()))
}
}
_ => {
let keys_iter = keys_slice.iter().map(
|MapEntryKey {
arg_name, hasher, ..
}| {
if hasher.ends_with_key() {
quote!( #static_storage_key::new(#arg_name.borrow()) )
} else {
quote!(())
}
},
);
let keys = quote!( (#(#keys_iter,)*) );
let paths_iter = keys_slice.iter().map(
|MapEntryKey {
alias_type_path,
hasher,
..
}| {
if hasher.ends_with_key() {
quote!( #static_storage_key<#alias_type_path> )
} else {
quote!(())
}
},
);
let paths = quote!( (#(#paths_iter,)*) );
(keys, paths)
}
};
let key_args = keys_slice.iter().map(
|MapEntryKey {
arg_name,
alias_type_path,
..
}| quote!( #arg_name: impl ::std::borrow::Borrow<#alias_type_path> ),
);
quote!(
#docs
@@ -159,7 +256,7 @@ fn generate_storage_entry_fns(
&self,
#(#key_args,)*
) -> #crate_path::storage::address::Address::<
#crate_path::storage::address::StaticStorageMapKey,
#keys_type,
#alias_storage_path,
#is_fetchable_type,
#is_defaultable_type,
@@ -168,14 +265,16 @@ fn generate_storage_entry_fns(
#crate_path::storage::address::Address::new_static(
#pallet_name,
#storage_name,
vec![#(#key_impls,)*],
#keys,
[#(#storage_hash,)*]
)
}
)
});
let alias_types = keys.iter().map(|(_, alias_type, _)| alias_type);
let alias_types = keys
.iter()
.map(|MapEntryKey { alias_type_def, .. }| alias_type_def);
let types_mod_ident = type_gen.types_mod_ident();
// Generate type alias for the return type only, since
@@ -231,7 +330,7 @@ mod tests {
name,
modifier: v15::StorageEntryModifier::Optional,
ty: v15::StorageEntryType::Map {
hashers: vec![],
hashers: vec![v15::StorageHasher::Blake2_128Concat],
key,
value: meta_type::<bool>(),
},
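The codegen above enforces the rule that a map's hasher count must either equal its key count or be exactly 1, in which case the single hasher is reused for every key. A minimal std-only sketch of that assignment logic, using a hypothetical `Hasher` enum as a stand-in for `subxt_metadata::StorageHasher`:

```rust
// Hypothetical stand-in for subxt_metadata::StorageHasher.
#[derive(Clone, Copy, Debug, PartialEq)]
enum Hasher {
    Blake2_128Concat,
    Twox64Concat,
}

/// Pair each key with a hasher, mirroring the codegen rule:
/// a single hasher is reused for all keys; otherwise counts must match.
fn assign_hashers(hashers: &[Hasher], key_count: usize) -> Result<Vec<Hasher>, String> {
    if hashers.len() != 1 && hashers.len() != key_count {
        return Err(format!(
            "{key_count} keys but {} hashers; counts must match or there must be exactly 1 hasher",
            hashers.len()
        ));
    }
    Ok((0..key_count)
        // In bounds by the check above: either len == key_count, or len == 1
        // and every index falls back to hashers[0].
        .map(|idx| if idx >= hashers.len() { hashers[0] } else { hashers[idx] })
        .collect())
}

fn main() {
    // One hasher shared by two keys:
    let shared = assign_hashers(&[Hasher::Twox64Concat], 2).unwrap();
    assert_eq!(shared, vec![Hasher::Twox64Concat, Hasher::Twox64Concat]);
    // Mismatched counts are rejected:
    assert!(assign_hashers(&[Hasher::Twox64Concat, Hasher::Blake2_128Concat], 3).is_err());
}
```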
+12 -6
@@ -39,15 +39,21 @@ pub enum CodegenError {
#[error("Call variant for type {0} must have all named fields. Make sure you are providing a valid substrate-based metadata")]
InvalidCallVariant(u32),
/// Type should be a variant/enum.
#[error(
"{0} type should be a variant/enum type. Make sure you are providing a valid substrate-based metadata"
)]
#[error("{0} type should be a variant/enum type. Make sure you are providing a valid substrate-based metadata")]
InvalidType(String),
/// Extrinsic call type could not be found.
#[error(
"Extrinsic call type could not be found. Make sure you are providing a valid substrate-based metadata"
)]
#[error("Extrinsic call type could not be found. Make sure you are providing a valid substrate-based metadata")]
MissingCallType,
/// There are too many or too few hashers.
#[error("Could not generate functions for storage entry {storage_entry_name}. There are {key_count} keys, but only {hasher_count} hashers. The number of hashers must equal the number of keys or be exactly 1.")]
InvalidStorageHasherCount {
/// The name of the storage entry
storage_entry_name: String,
/// Number of keys
key_count: usize,
/// Number of hashers
hasher_count: usize,
},
/// Cannot generate types.
#[error("Type Generation failed: {0}")]
TypeGeneration(#[from] TypegenError),
+2 -6
@@ -316,9 +316,7 @@ fn generate_outer_enums(
) -> Result<v15::OuterEnums<scale_info::form::PortableForm>, TryFromError> {
let find_type = |name: &str| {
metadata.types.types.iter().find_map(|ty| {
let Some(ident) = ty.ty.path.ident() else {
return None;
};
let ident = ty.ty.path.ident()?;
if ident != name {
return None;
@@ -368,9 +366,7 @@ fn generate_outer_error_enum_type(
.pallets
.iter()
.filter_map(|pallet| {
let Some(error) = &pallet.error else {
return None;
};
let error = pallet.error.as_ref()?;
// Note: using the `alloc::format!` macro like in `let path = format!("{}Error", pallet.name);`
// leads to linker errors about extern function `_Unwind_Resume` not being defined.
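The two refactors above replace a `let Some(x) = … else { return None };` with the `?` operator, which short-circuits on `Option` inside any closure that itself returns an `Option`. A minimal illustration with hypothetical types:

```rust
// Hypothetical stand-in for a metadata type entry.
struct Ty {
    ident: Option<String>,
}

fn find_named(types: &[Ty], name: &str) -> Option<usize> {
    types.iter().enumerate().find_map(|(i, ty)| {
        // `?` returns `None` from the closure, exactly like the
        // `let Some(ident) = … else { return None };` it replaces.
        let ident = ty.ident.as_ref()?;
        (ident.as_str() == name).then_some(i)
    })
}

fn main() {
    let types = vec![
        Ty { ident: None },
        Ty { ident: Some("RuntimeError".into()) },
    ];
    assert_eq!(find_named(&types, "RuntimeError"), Some(1));
    assert_eq!(find_named(&types, "Missing"), None);
}
```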
+29
@@ -475,6 +475,35 @@ pub enum StorageHasher {
Identity,
}
impl StorageHasher {
/// The hash produced by a [`StorageHasher`] can have these two components, in order:
///
/// 1. A fixed size hash. (not present for [`StorageHasher::Identity`]).
/// 2. The SCALE encoded key that was used as an input to the hasher (only present for
/// [`StorageHasher::Twox64Concat`], [`StorageHasher::Blake2_128Concat`] or [`StorageHasher::Identity`]).
///
/// This function returns the number of bytes used to represent the first of these.
pub fn len_excluding_key(&self) -> usize {
match self {
StorageHasher::Blake2_128Concat => 16,
StorageHasher::Twox64Concat => 8,
StorageHasher::Blake2_128 => 16,
StorageHasher::Blake2_256 => 32,
StorageHasher::Twox128 => 16,
StorageHasher::Twox256 => 32,
StorageHasher::Identity => 0,
}
}
/// Returns true if the key used to produce the hash is appended to the hash itself.
pub fn ends_with_key(&self) -> bool {
matches!(
self,
StorageHasher::Blake2_128Concat | StorageHasher::Twox64Concat | StorageHasher::Identity
)
}
}
/// Is the storage entry optional, or does it have a default value.
#[derive(Debug, Clone, Copy, Eq, PartialEq)]
pub enum StorageEntryModifier {
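The two helpers above are what make typed key decoding possible: only concat-style and identity hashers append the SCALE-encoded key after the hash, and `len_excluding_key` tells the decoder how many prefix bytes to skip to reach it. A self-contained replica of the logic, for illustration:

```rust
// Replica of the `StorageHasher` variants and the two new helpers.
#[derive(Clone, Copy, Debug)]
enum StorageHasher {
    Blake2_128Concat,
    Twox64Concat,
    Blake2_128,
    Blake2_256,
    Twox128,
    Twox256,
    Identity,
}

impl StorageHasher {
    /// Size of the fixed hash component that precedes any appended key.
    fn len_excluding_key(&self) -> usize {
        match self {
            StorageHasher::Blake2_128Concat
            | StorageHasher::Blake2_128
            | StorageHasher::Twox128 => 16,
            StorageHasher::Twox64Concat => 8,
            StorageHasher::Blake2_256 | StorageHasher::Twox256 => 32,
            StorageHasher::Identity => 0,
        }
    }

    /// True if the SCALE-encoded key follows the hash, so it can be recovered.
    fn ends_with_key(&self) -> bool {
        matches!(
            self,
            StorageHasher::Blake2_128Concat | StorageHasher::Twox64Concat | StorageHasher::Identity
        )
    }
}

fn main() {
    // Only concat-style and identity hashers keep the key recoverable:
    assert!(StorageHasher::Blake2_128Concat.ends_with_key());
    assert!(!StorageHasher::Twox128.ends_with_key());
    // To reach the appended key, skip the fixed-size hash prefix:
    assert_eq!(StorageHasher::Twox64Concat.len_excluding_key(), 8);
    assert_eq!(StorageHasher::Identity.len_excluding_key(), 0);
}
```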
+4 -3
@@ -16,9 +16,10 @@ async fn main() -> Result<(), Box<dyn std::error::Error>> {
// a time from the node, but we always iterate over one at a time).
let mut results = api.storage().at_latest().await?.iter(storage_query).await?;
while let Some(Ok((key, value))) = results.next().await {
println!("Key: 0x{}", hex::encode(&key));
println!("Value: {:?}", value);
while let Some(Ok(kv)) = results.next().await {
println!("Keys decoded: {:?}", kv.keys);
println!("Key: 0x{}", hex::encode(&kv.key_bytes));
println!("Value: {:?}", kv.value);
}
Ok(())
+6 -5
@@ -7,16 +7,17 @@ async fn main() -> Result<(), Box<dyn std::error::Error>> {
let api = OnlineClient::<PolkadotConfig>::new().await?;
// Build a dynamic storage query to iterate account information.
// With a dynamic query, we can just provide an empty Vec as the keys to iterate over all entries.
let keys = Vec::<()>::new();
// With a dynamic query, we can just provide an empty vector as the keys to iterate over all entries.
let keys: Vec<scale_value::Value> = vec![];
let storage_query = subxt::dynamic::storage("System", "Account", keys);
// Use that query to return an iterator over the results.
let mut results = api.storage().at_latest().await?.iter(storage_query).await?;
while let Some(Ok((key, value))) = results.next().await {
println!("Key: 0x{}", hex::encode(&key));
println!("Value: {:?}", value.to_value()?);
while let Some(Ok(kv)) = results.next().await {
println!("Keys decoded: {:?}", kv.keys);
println!("Key: 0x{}", hex::encode(&kv.key_bytes));
println!("Value: {:?}", kv.value.to_value()?);
}
Ok(())
+4 -4
@@ -38,11 +38,11 @@ async fn main() -> Result<(), Box<dyn std::error::Error>> {
// Get back an iterator of results.
let mut results = api.storage().at_latest().await?.iter(storage_query).await?;
while let Some(Ok((key, value))) = results.next().await {
println!("Key: 0x{}", hex::encode(&key));
println!("Value: {:?}", value);
while let Some(Ok(kv)) = results.next().await {
println!("Keys decoded: {:?}", kv.keys);
println!("Key: 0x{}", hex::encode(&kv.key_bytes));
println!("Value: {:?}", kv.value);
}
Ok(())
}
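The example updates above switch iteration from yielding a `(key_bytes, value)` tuple to a named key-value struct. A hypothetical sketch of that shape (the real subxt type is `StorageKeyValuePair`; field types here are placeholders):

```rust
// Hypothetical shape of the pair yielded by storage iteration after
// this change; the real type in subxt is `StorageKeyValuePair`.
#[derive(Debug)]
struct KeyValuePair<K, V> {
    keys: K,            // the decoded, typed keys
    key_bytes: Vec<u8>, // the raw storage key bytes
    value: V,           // the decoded value
}

// Minimal hex helper so the example stays dependency-free.
fn hex_encode(bytes: &[u8]) -> String {
    bytes.iter().map(|b| format!("{b:02x}")).collect()
}

fn main() {
    let kv = KeyValuePair { keys: (42u32,), key_bytes: vec![0xde, 0xad], value: true };
    // Callers read named fields instead of destructuring a tuple:
    println!("Keys decoded: {:?}", kv.keys);
    println!("Key: 0x{}", hex_encode(&kv.key_bytes));
    println!("Value: {:?}", kv.value);
    assert_eq!(kv.keys.0, 42);
}
```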
+1 -1
@@ -49,7 +49,7 @@
//! // A static query capable of iterating over accounts:
//! let storage_query = polkadot::storage().system().account_iter();
//! // A dynamic query to do the same:
//! let storage_query = subxt::dynamic::storage("System", "Account", Vec::<u8>::new());
//! let storage_query = subxt::dynamic::storage("System", "Account", ());
//! ```
//!
//! Some storage entries are maps with multiple keys. As an example, we might end up with
+20 -7
@@ -14,6 +14,7 @@ crate::macros::cfg_unstable_light_client! {
pub use dispatch_error::{
ArithmeticError, DispatchError, ModuleError, TokenError, TransactionalError,
};
use subxt_metadata::StorageHasher;
// Re-expose the errors we use from other crates here:
pub use crate::config::ExtrinsicParamsError;
@@ -187,14 +188,9 @@ pub enum TransactionError {
#[derive(Clone, Debug, thiserror::Error)]
#[non_exhaustive]
pub enum StorageAddressError {
/// Storage map type must be a composite type.
#[error("Storage map type must be a composite type")]
MapTypeMustBeTuple,
/// Storage lookup does not have the expected number of keys.
#[error("Storage lookup requires {expected} keys but got {actual} keys")]
WrongNumberOfKeys {
/// The actual number of keys needed, based on the metadata.
actual: usize,
#[error("Storage lookup requires {expected} keys but more keys have been provided.")]
TooManyKeys {
/// The number of keys provided in the storage address.
expected: usize,
},
@@ -206,6 +202,23 @@ pub enum StorageAddressError {
/// The number of fields in the metadata for this storage entry.
fields: usize,
},
/// We weren't given enough bytes to decode the storage address/key.
#[error("Not enough remaining bytes to decode the storage address/key")]
NotEnoughBytes,
/// We have leftover bytes after decoding the storage address.
#[error("We have leftover bytes after decoding the storage address")]
TooManyBytes,
/// The bytes of a storage address are not the expected address for decoding the storage keys of the address.
#[error("Storage address bytes are not the expected format. Addresses need to be at least 16 bytes (pallet ++ entry) and follow a structure given by the hashers defined in the metadata")]
UnexpectedAddressBytes,
/// An invalid hasher was used to reconstruct a value from a chunk of bytes that is part of a storage address. Hashers where the hash does not contain the original value are invalid for this purpose.
#[error("An invalid hasher was used to reconstruct a value with type ID {ty_id} from a hash formed by a {hasher:?} hasher. This is only possible for concat-style hashers or the identity hasher")]
HasherCannotReconstructKey {
/// Type id of the key's type.
ty_id: u32,
/// The invalid hasher that caused this error.
hasher: StorageHasher,
},
}
/// Something went wrong trying to access details in the metadata.
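The new `NotEnoughBytes`/`TooManyBytes` variants make key decoding strict about byte consumption: decoding must use exactly the bytes it is given. A hypothetical std-only sketch of that pattern for a fixed-width value:

```rust
// Hypothetical error type mirroring the two new variants.
#[derive(Debug, PartialEq)]
enum DecodeError {
    NotEnoughBytes,
    TooManyBytes,
}

/// Decode a u32 and insist that the input is consumed exactly:
/// too few bytes and too many bytes are both errors.
fn decode_u32_exact(bytes: &[u8]) -> Result<u32, DecodeError> {
    if bytes.len() < 4 {
        return Err(DecodeError::NotEnoughBytes);
    }
    if bytes.len() > 4 {
        return Err(DecodeError::TooManyBytes);
    }
    Ok(u32::from_le_bytes([bytes[0], bytes[1], bytes[2], bytes[3]]))
}

fn main() {
    assert_eq!(decode_u32_exact(&[1, 0, 0, 0]), Ok(1));
    assert_eq!(decode_u32_exact(&[1, 0]), Err(DecodeError::NotEnoughBytes));
    assert_eq!(decode_u32_exact(&[1, 0, 0, 0, 9]), Err(DecodeError::TooManyBytes));
}
```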
+7 -7
@@ -6,23 +6,23 @@
mod storage_address;
mod storage_client;
mod storage_key;
mod storage_type;
pub mod utils;
mod utils;
pub use storage_client::StorageClient;
pub use storage_type::Storage;
pub use storage_type::{Storage, StorageKeyValuePair};
/// Types representing an address which describes where a storage
/// entry lives and how to properly decode it.
pub mod address {
pub use super::storage_address::{
dynamic, make_static_storage_map_key, Address, DynamicAddress, StaticStorageMapKey,
StorageAddress, Yes,
};
pub use super::storage_address::{dynamic, Address, DynamicAddress, StorageAddress, Yes};
pub use super::storage_key::{StaticStorageKey, StorageKey};
}
pub use storage_key::StorageKey;
// For consistency with other modules, also expose
// the basic address stuff at the root of the module.
pub use storage_address::{dynamic, Address, DynamicAddress, StorageAddress};
+47 -136
@@ -4,20 +4,22 @@
use crate::{
dynamic::DecodedValueThunk,
error::{Error, MetadataError, StorageAddressError},
metadata::{DecodeWithMetadata, EncodeWithMetadata, Metadata},
utils::{Encoded, Static},
error::{Error, MetadataError},
metadata::{DecodeWithMetadata, Metadata},
};
use derivative::Derivative;
use scale_info::TypeDef;
use std::borrow::Cow;
use subxt_metadata::{StorageEntryType, StorageHasher};
use super::{storage_key::StorageHashers, StorageKey};
/// This represents a storage address. Anything implementing this trait
/// can be used to fetch and iterate over storage entries.
pub trait StorageAddress {
/// The target type of the value that lives at this address.
type Target: DecodeWithMetadata;
/// The keys type used to construct this address.
type Keys: StorageKey;
/// Can an entry be fetched from this address?
/// Set this type to [`Yes`] to enable the corresponding calls to be made.
type IsFetchable;
@@ -54,64 +56,69 @@ pub struct Yes;
/// via the `subxt` macro) or dynamic values via [`dynamic`].
#[derive(Derivative)]
#[derivative(
Clone(bound = "StorageKey: Clone"),
Debug(bound = "StorageKey: std::fmt::Debug"),
Eq(bound = "StorageKey: std::cmp::Eq"),
Ord(bound = "StorageKey: std::cmp::Ord"),
PartialEq(bound = "StorageKey: std::cmp::PartialEq"),
PartialOrd(bound = "StorageKey: std::cmp::PartialOrd")
Clone(bound = "Keys: Clone"),
Debug(bound = "Keys: std::fmt::Debug"),
Eq(bound = "Keys: std::cmp::Eq"),
Ord(bound = "Keys: std::cmp::Ord"),
PartialEq(bound = "Keys: std::cmp::PartialEq"),
PartialOrd(bound = "Keys: std::cmp::PartialOrd")
)]
pub struct Address<StorageKey, ReturnTy, Fetchable, Defaultable, Iterable> {
pub struct Address<Keys: StorageKey, ReturnTy, Fetchable, Defaultable, Iterable> {
pallet_name: Cow<'static, str>,
entry_name: Cow<'static, str>,
storage_entry_keys: Vec<StorageKey>,
keys: Keys,
validation_hash: Option<[u8; 32]>,
_marker: std::marker::PhantomData<(ReturnTy, Fetchable, Defaultable, Iterable)>,
}
/// A typical storage address constructed at runtime rather than via the `subxt` macro; this
/// has no restriction on what it can be used for (since we don't statically know).
pub type DynamicAddress<StorageKey> = Address<StorageKey, DecodedValueThunk, Yes, Yes, Yes>;
pub type DynamicAddress<Keys> = Address<Keys, DecodedValueThunk, Yes, Yes, Yes>;
impl<StorageKey, ReturnTy, Fetchable, Defaultable, Iterable>
Address<StorageKey, ReturnTy, Fetchable, Defaultable, Iterable>
where
StorageKey: EncodeWithMetadata,
ReturnTy: DecodeWithMetadata,
{
/// Create a new [`Address`] to use to access a storage entry.
pub fn new(
pallet_name: impl Into<String>,
entry_name: impl Into<String>,
storage_entry_keys: Vec<StorageKey>,
) -> Self {
impl<Keys: StorageKey> DynamicAddress<Keys> {
/// Creates a new dynamic address. As `Keys` you can use a `Vec<scale_value::Value>`
pub fn new(pallet_name: impl Into<String>, entry_name: impl Into<String>, keys: Keys) -> Self {
Self {
pallet_name: Cow::Owned(pallet_name.into()),
entry_name: Cow::Owned(entry_name.into()),
storage_entry_keys: storage_entry_keys.into_iter().collect(),
keys,
validation_hash: None,
_marker: std::marker::PhantomData,
}
}
}
impl<Keys, ReturnTy, Fetchable, Defaultable, Iterable>
Address<Keys, ReturnTy, Fetchable, Defaultable, Iterable>
where
Keys: StorageKey,
ReturnTy: DecodeWithMetadata,
{
/// Create a new [`Address`] using static strings for the pallet and call name.
/// This is only expected to be used from codegen.
#[doc(hidden)]
pub fn new_static(
pallet_name: &'static str,
entry_name: &'static str,
storage_entry_keys: Vec<StorageKey>,
keys: Keys,
hash: [u8; 32],
) -> Self {
Self {
pallet_name: Cow::Borrowed(pallet_name),
entry_name: Cow::Borrowed(entry_name),
storage_entry_keys: storage_entry_keys.into_iter().collect(),
keys,
validation_hash: Some(hash),
_marker: std::marker::PhantomData,
}
}
}
impl<Keys, ReturnTy, Fetchable, Defaultable, Iterable>
Address<Keys, ReturnTy, Fetchable, Defaultable, Iterable>
where
Keys: StorageKey,
ReturnTy: DecodeWithMetadata,
{
/// Do not validate this storage entry prior to accessing it.
pub fn unvalidated(self) -> Self {
Self {
@@ -128,13 +135,14 @@ where
}
}
impl<StorageKey, ReturnTy, Fetchable, Defaultable, Iterable> StorageAddress
for Address<StorageKey, ReturnTy, Fetchable, Defaultable, Iterable>
impl<Keys, ReturnTy, Fetchable, Defaultable, Iterable> StorageAddress
for Address<Keys, ReturnTy, Fetchable, Defaultable, Iterable>
where
StorageKey: EncodeWithMetadata,
Keys: StorageKey,
ReturnTy: DecodeWithMetadata,
{
type Target = ReturnTy;
type Keys = Keys;
type IsFetchable = Fetchable;
type IsDefaultable = Defaultable;
type IsIterable = Iterable;
@@ -156,78 +164,10 @@ where
.entry_by_name(self.entry_name())
.ok_or_else(|| MetadataError::StorageEntryNotFound(self.entry_name().to_owned()))?;
match entry.entry_type() {
StorageEntryType::Plain(_) => {
if !self.storage_entry_keys.is_empty() {
Err(StorageAddressError::WrongNumberOfKeys {
expected: 0,
actual: self.storage_entry_keys.len(),
}
.into())
} else {
Ok(())
}
}
StorageEntryType::Map {
hashers, key_ty, ..
} => {
let ty = metadata
.types()
.resolve(*key_ty)
.ok_or(MetadataError::TypeNotFound(*key_ty))?;
// If the provided keys are empty, the storage address must be
// equal to the storage root address.
if self.storage_entry_keys.is_empty() {
return Ok(());
}
// If the key is a tuple, we encode each value to the corresponding tuple type.
// If the key is not a tuple, encode a single value to the key type.
let type_ids = match &ty.type_def {
TypeDef::Tuple(tuple) => {
either::Either::Left(tuple.fields.iter().map(|f| f.id))
}
_other => either::Either::Right(std::iter::once(*key_ty)),
};
if type_ids.len() < self.storage_entry_keys.len() {
// Provided more keys than fields.
return Err(StorageAddressError::WrongNumberOfKeys {
expected: type_ids.len(),
actual: self.storage_entry_keys.len(),
}
.into());
}
if hashers.len() == 1 {
// One hasher; hash a tuple of all SCALE encoded bytes with the one hash function.
let mut input = Vec::new();
let iter = self.storage_entry_keys.iter().zip(type_ids);
for (key, type_id) in iter {
key.encode_with_metadata(type_id, metadata, &mut input)?;
}
hash_bytes(&input, &hashers[0], bytes);
Ok(())
} else if hashers.len() >= type_ids.len() {
let iter = self.storage_entry_keys.iter().zip(type_ids).zip(hashers);
// A hasher per field; encode and hash each field independently.
for ((key, type_id), hasher) in iter {
let mut input = Vec::new();
key.encode_with_metadata(type_id, metadata, &mut input)?;
hash_bytes(&input, hasher, bytes);
}
Ok(())
} else {
// Provided more fields than hashers.
Err(StorageAddressError::WrongNumberOfHashers {
hashers: hashers.len(),
fields: type_ids.len(),
}
.into())
}
}
}
let hashers = StorageHashers::new(entry.entry_type(), metadata.types())?;
self.keys
.encode_storage_key(bytes, &mut hashers.iter(), metadata.types())?;
Ok(())
}
fn validation_hash(&self) -> Option<[u8; 32]> {
@@ -235,40 +175,11 @@ where
}
}
/// A static storage key; this is some pre-encoded bytes
/// likely provided by the generated interface.
pub type StaticStorageMapKey = Static<Encoded>;
// Used in codegen to construct the above.
#[doc(hidden)]
pub fn make_static_storage_map_key<T: codec::Encode>(t: T) -> StaticStorageMapKey {
Static(Encoded(t.encode()))
}
/// Construct a new dynamic storage lookup.
pub fn dynamic<StorageKey: EncodeWithMetadata>(
pub fn dynamic<Keys: StorageKey>(
pallet_name: impl Into<String>,
entry_name: impl Into<String>,
storage_entry_keys: Vec<StorageKey>,
) -> DynamicAddress<StorageKey> {
storage_entry_keys: Keys,
) -> DynamicAddress<Keys> {
DynamicAddress::new(pallet_name, entry_name, storage_entry_keys)
}
/// Take some SCALE encoded bytes and a [`StorageHasher`] and hash the bytes accordingly.
fn hash_bytes(input: &[u8], hasher: &StorageHasher, bytes: &mut Vec<u8>) {
match hasher {
StorageHasher::Identity => bytes.extend(input),
StorageHasher::Blake2_128 => bytes.extend(sp_core_hashing::blake2_128(input)),
StorageHasher::Blake2_128Concat => {
bytes.extend(sp_core_hashing::blake2_128(input));
bytes.extend(input);
}
StorageHasher::Blake2_256 => bytes.extend(sp_core_hashing::blake2_256(input)),
StorageHasher::Twox128 => bytes.extend(sp_core_hashing::twox_128(input)),
StorageHasher::Twox256 => bytes.extend(sp_core_hashing::twox_256(input)),
StorageHasher::Twox64Concat => {
bytes.extend(sp_core_hashing::twox_64(input));
bytes.extend(input);
}
}
}
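The removed `hash_bytes` helper shows why concat-style hashers permit key reconstruction: the output is `fixed_hash ++ scale_encoded_key`, so the key is recoverable by skipping the hash prefix. A std-only round-trip sketch using a stand-in 8-byte hash (the real code uses `sp_core_hashing::twox_64`, which is not replicated here):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Stand-in for an 8-byte hash like twox_64 (NOT the real algorithm).
fn fake_hash_8(input: &[u8]) -> [u8; 8] {
    let mut h = DefaultHasher::new();
    input.hash(&mut h);
    h.finish().to_le_bytes()
}

// Concat-style hashing: fixed-size hash followed by the original key bytes.
fn concat_hash(key: &[u8], out: &mut Vec<u8>) {
    out.extend(fake_hash_8(key));
    out.extend(key);
}

// Recover the key by skipping the fixed-size hash prefix
// (8 bytes here, i.e. the hasher's `len_excluding_key`).
fn recover_key(bytes: &[u8]) -> &[u8] {
    &bytes[8..]
}

fn main() {
    let key = b"account-42";
    let mut storage_key = Vec::new();
    concat_hash(key, &mut storage_key);
    assert_eq!(storage_key.len(), 8 + key.len());
    assert_eq!(recover_key(&storage_key), key);
}
```

A non-concat hasher like `Twox128` would emit only the hash, which is why the new code represents such keys as `()`: there is nothing to decode back out of them.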
+466
@@ -0,0 +1,466 @@
use crate::{
error::{Error, MetadataError, StorageAddressError},
utils::{Encoded, Static},
};
use scale_decode::{visitor::IgnoreVisitor, DecodeAsType};
use scale_encode::EncodeAsType;
use scale_info::{PortableRegistry, TypeDef};
use scale_value::Value;
use subxt_metadata::{StorageEntryType, StorageHasher};
use derivative::Derivative;
use super::utils::hash_bytes;
/// A collection of storage hashers paired with the type ids of the types they should hash.
/// Can be created for each storage entry in the metadata via [`StorageHashers::new()`].
#[derive(Debug)]
pub(crate) struct StorageHashers {
hashers_and_ty_ids: Vec<(StorageHasher, u32)>,
}
impl StorageHashers {
/// Creates new [`StorageHashers`] from a storage entry. Looks at the [`StorageEntryType`] and
/// assigns a hasher to each type id that makes up the key.
pub fn new(storage_entry: &StorageEntryType, types: &PortableRegistry) -> Result<Self, Error> {
let mut hashers_and_ty_ids = vec![];
if let StorageEntryType::Map {
hashers, key_ty, ..
} = storage_entry
{
let ty = types
.resolve(*key_ty)
.ok_or(MetadataError::TypeNotFound(*key_ty))?;
if let TypeDef::Tuple(tuple) = &ty.type_def {
if hashers.len() == 1 {
// use the same hasher for all fields, if only 1 hasher present:
let hasher = hashers[0];
for f in tuple.fields.iter() {
hashers_and_ty_ids.push((hasher, f.id));
}
} else if hashers.len() < tuple.fields.len() {
return Err(StorageAddressError::WrongNumberOfHashers {
hashers: hashers.len(),
fields: tuple.fields.len(),
}
.into());
} else {
for (i, f) in tuple.fields.iter().enumerate() {
hashers_and_ty_ids.push((hashers[i], f.id));
}
}
} else {
if hashers.len() != 1 {
return Err(StorageAddressError::WrongNumberOfHashers {
hashers: hashers.len(),
fields: 1,
}
.into());
}
hashers_and_ty_ids.push((hashers[0], *key_ty));
};
}
Ok(Self { hashers_and_ty_ids })
}
/// Creates an iterator over the storage hashers and type ids.
pub fn iter(&self) -> StorageHashersIter<'_> {
StorageHashersIter {
hashers: self,
idx: 0,
}
}
}
/// An iterator over all type ids of the key and the respective hashers.
/// See [`StorageHashers::iter()`].
#[derive(Debug)]
pub struct StorageHashersIter<'a> {
hashers: &'a StorageHashers,
idx: usize,
}
impl<'a> StorageHashersIter<'a> {
fn next_or_err(&mut self) -> Result<(StorageHasher, u32), Error> {
self.next().ok_or_else(|| {
StorageAddressError::TooManyKeys {
expected: self.hashers.hashers_and_ty_ids.len(),
}
.into()
})
}
}
impl<'a> Iterator for StorageHashersIter<'a> {
type Item = (StorageHasher, u32);
fn next(&mut self) -> Option<Self::Item> {
let item = self.hashers.hashers_and_ty_ids.get(self.idx).copied()?;
self.idx += 1;
Some(item)
}
}
impl<'a> ExactSizeIterator for StorageHashersIter<'a> {
fn len(&self) -> usize {
self.hashers.hashers_and_ty_ids.len() - self.idx
}
}
/// This trait should be implemented by anything that can be used as one or multiple storage keys.
pub trait StorageKey {
/// Encodes the storage key into some bytes
fn encode_storage_key(
&self,
bytes: &mut Vec<u8>,
hashers: &mut StorageHashersIter,
types: &PortableRegistry,
) -> Result<(), Error>;
/// Attempts to decode the StorageKey given some bytes and a set of hashers and type IDs that they are meant to represent.
/// The bytes passed to `decode_storage_key` should start with:
/// - 1. some fixed size hash (for all hashers except `Identity`)
/// - 2. the plain key value itself (for `Identity`, `Blake2_128Concat` and `Twox64Concat` hashers)
fn decode_storage_key(
bytes: &mut &[u8],
hashers: &mut StorageHashersIter,
types: &PortableRegistry,
) -> Result<Self, Error>
where
Self: Sized + 'static;
}
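The byte layout in the doc comment above can be sketched without any hashing crates: for a concat-style hasher, the key bytes are `hash(key) ++ key`, so decoding skips a fixed-size hash and reads the plain key back out. This is a hedged, std-only illustration (not subxt's API); the real twox64 hash is stubbed with zeros, since only its 8-byte length matters here, and the key size is hardcoded where real code would consult the type registry.

```rust
// Length of a twox64 hash output; concat-style hashers prefix the key with it.
const TWOX64_LEN: usize = 8;

// A concat-style hasher writes `hash(key) ++ key`. The zeros are a stand-in
// for the real 8-byte twox64 hash.
fn fake_twox64_concat(scale_encoded_key: &[u8]) -> Vec<u8> {
    let mut out = vec![0u8; TWOX64_LEN];
    out.extend_from_slice(scale_encoded_key);
    out
}

// Decoding skips the fixed-size hash, then reads the plain key. Real code
// uses the type registry to learn the key's size; we hardcode a 4-byte u32.
fn recover_u32_key(bytes: &mut &[u8]) -> Option<u32> {
    if bytes.len() < TWOX64_LEN + 4 {
        return None;
    }
    *bytes = &bytes[TWOX64_LEN..];
    let (key, rest) = bytes.split_at(4);
    *bytes = rest;
    Some(u32::from_le_bytes(key.try_into().unwrap()))
}
```

Non-concat hashers (e.g. `Blake2_256`) have no trailing key bytes, which is why only `Identity`, `Blake2_128Concat` and `Twox64Concat` keys can be recovered.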
/// Implement `StorageKey` for `()` which can be used for keyless storage entries,
/// or to otherwise just ignore some entry.
impl StorageKey for () {
fn encode_storage_key(
&self,
_bytes: &mut Vec<u8>,
hashers: &mut StorageHashersIter,
_types: &PortableRegistry,
) -> Result<(), Error> {
_ = hashers.next_or_err();
Ok(())
}
fn decode_storage_key(
bytes: &mut &[u8],
hashers: &mut StorageHashersIter,
types: &PortableRegistry,
) -> Result<Self, Error> {
let (hasher, ty_id) = match hashers.next_or_err() {
Ok((hasher, ty_id)) => (hasher, ty_id),
Err(_) if bytes.is_empty() => return Ok(()),
Err(err) => return Err(err),
};
consume_hash_returning_key_bytes(bytes, hasher, ty_id, types)?;
Ok(())
}
}
/// A storage key for static encoded values.
/// The original value is only present at construction, but can be decoded from the contained bytes.
#[derive(Derivative)]
#[derivative(Clone(bound = ""), Debug(bound = ""))]
pub struct StaticStorageKey<K: ?Sized> {
bytes: Static<Encoded>,
_marker: std::marker::PhantomData<K>,
}
impl<K: codec::Encode + ?Sized> StaticStorageKey<K> {
/// Creates a new static storage key
pub fn new(key: &K) -> Self {
StaticStorageKey {
bytes: Static(Encoded(key.encode())),
_marker: std::marker::PhantomData,
}
}
}
impl<K: codec::Decode + ?Sized> StaticStorageKey<K> {
/// Decodes the encoded inner bytes into the type `K`.
pub fn decoded(&self) -> Result<K, Error> {
let decoded = K::decode(&mut self.bytes())?;
Ok(decoded)
}
}
impl<K: ?Sized> StaticStorageKey<K> {
/// Returns the scale-encoded bytes that make up this key
pub fn bytes(&self) -> &[u8] {
&self.bytes.0 .0
}
}
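The `bytes()` above are the SCALE encoding of the key. Two encoding facts this file leans on can be sketched by hand (a simplified illustration, without the `codec` crate): a `u32` encodes as its 4 little-endian bytes, and a string encodes as a compact length prefix followed by its UTF-8 bytes, where a small length `n` (< 64) is the single byte `n << 2`.

```rust
// SCALE encodes fixed-width integers as their little-endian bytes.
fn scale_encode_u32(x: u32) -> Vec<u8> {
    x.to_le_bytes().to_vec()
}

// SCALE encodes a string as compact(len) ++ utf8 bytes. Only the
// single-byte compact mode (lengths 0..=63) is sketched here.
fn scale_encode_str(s: &str) -> Vec<u8> {
    assert!(s.len() < 64, "single-byte compact mode only");
    let mut out = vec![(s.len() as u8) << 2]; // compact length, mode 0b00
    out.extend_from_slice(s.as_bytes());
    out
}
```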
// Note: The ?Sized bound is necessary to support e.g. `StaticStorageKey<[u8]>`.
impl<K: ?Sized> StorageKey for StaticStorageKey<K> {
fn encode_storage_key(
&self,
bytes: &mut Vec<u8>,
hashers: &mut StorageHashersIter,
types: &PortableRegistry,
) -> Result<(), Error> {
let (hasher, ty_id) = hashers.next_or_err()?;
let encoded_value = self.bytes.encode_as_type(ty_id, types)?;
hash_bytes(&encoded_value, hasher, bytes);
Ok(())
}
fn decode_storage_key(
bytes: &mut &[u8],
hashers: &mut StorageHashersIter,
types: &PortableRegistry,
) -> Result<Self, Error>
where
Self: Sized + 'static,
{
let (hasher, ty_id) = hashers.next_or_err()?;
let key_bytes = consume_hash_returning_key_bytes(bytes, hasher, ty_id, types)?;
// if the hasher had no key appended, we can't decode it into a `StaticStorageKey`.
let Some(key_bytes) = key_bytes else {
return Err(StorageAddressError::HasherCannotReconstructKey { ty_id, hasher }.into());
};
// Return the key bytes.
let key = StaticStorageKey {
bytes: Static(Encoded(key_bytes.to_vec())),
_marker: std::marker::PhantomData::<K>,
};
Ok(key)
}
}
impl StorageKey for Vec<scale_value::Value> {
fn encode_storage_key(
&self,
bytes: &mut Vec<u8>,
hashers: &mut StorageHashersIter,
types: &PortableRegistry,
) -> Result<(), Error> {
for value in self.iter() {
let (hasher, ty_id) = hashers.next_or_err()?;
let encoded_value = value.encode_as_type(ty_id, types)?;
hash_bytes(&encoded_value, hasher, bytes);
}
Ok(())
}
fn decode_storage_key(
bytes: &mut &[u8],
hashers: &mut StorageHashersIter,
types: &PortableRegistry,
) -> Result<Self, Error>
where
Self: Sized + 'static,
{
let mut result: Vec<scale_value::Value> = vec![];
for (hasher, ty_id) in hashers.by_ref() {
match consume_hash_returning_key_bytes(bytes, hasher, ty_id, types)? {
Some(value_bytes) => {
let value = Value::decode_as_type(&mut &*value_bytes, ty_id, types)?;
result.push(value.remove_context());
}
None => {
result.push(Value::unnamed_composite([]));
}
}
}
// We've consumed all of the hashers, so we expect to also consume all of the bytes:
if !bytes.is_empty() {
return Err(StorageAddressError::TooManyBytes.into());
}
Ok(result)
}
}
// Skip over the hash bytes (including any key at the end), returning bytes
// representing the key if one exists, or None if the hasher has no key appended.
fn consume_hash_returning_key_bytes<'a>(
bytes: &mut &'a [u8],
hasher: StorageHasher,
ty_id: u32,
types: &PortableRegistry,
) -> Result<Option<&'a [u8]>, Error> {
// Strip the bytes off for the actual hash, consuming them.
let bytes_to_strip = hasher.len_excluding_key();
if bytes.len() < bytes_to_strip {
return Err(StorageAddressError::NotEnoughBytes.into());
}
*bytes = &bytes[bytes_to_strip..];
// Now, find the bytes representing the key, consuming them.
let before_key = *bytes;
if hasher.ends_with_key() {
scale_decode::visitor::decode_with_visitor(bytes, ty_id, types, IgnoreVisitor)
.map_err(|err| Error::Decode(err.into()))?;
// Return the key bytes, having advanced the input cursor past them.
let key_bytes = &before_key[..before_key.len() - bytes.len()];
Ok(Some(key_bytes))
} else {
// There are no key bytes, so return None.
Ok(None)
}
}
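`consume_hash_returning_key_bytes` relies on exactly two properties of each hasher: the fixed size of its hash output and whether the plain key follows it. The stand-in enum below sketches those properties (this mirrors `subxt_metadata::StorageHasher` in spirit, but the methods and byte lengths here are assumptions based on Substrate's standard hash output sizes, not subxt's actual API).

```rust
// Stand-in for `subxt_metadata::StorageHasher`, sketching the two properties
// the decoder needs.
#[derive(Clone, Copy)]
enum Hasher {
    Blake2_128,
    Blake2_128Concat,
    Blake2_256,
    Identity,
    Twox128,
    Twox256,
    Twox64Concat,
}

impl Hasher {
    // How many bytes the hash itself occupies, not counting any appended key.
    fn len_excluding_key(self) -> usize {
        match self {
            Hasher::Identity => 0,
            Hasher::Twox64Concat => 8,
            Hasher::Blake2_128 | Hasher::Blake2_128Concat | Hasher::Twox128 => 16,
            Hasher::Blake2_256 | Hasher::Twox256 => 32,
        }
    }

    // Whether the plain key bytes follow the hash, i.e. whether the key can
    // be reconstructed from the storage key bytes.
    fn ends_with_key(self) -> bool {
        matches!(
            self,
            Hasher::Identity | Hasher::Blake2_128Concat | Hasher::Twox64Concat
        )
    }
}
```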
/// Generates StorageKey implementations for tuples
macro_rules! impl_tuples {
($($ty:ident $n:tt),+) => {{
impl<$($ty: StorageKey),+> StorageKey for ($( $ty ),+) {
fn encode_storage_key(
&self,
bytes: &mut Vec<u8>,
hashers: &mut StorageHashersIter,
types: &PortableRegistry,
) -> Result<(), Error> {
$( self.$n.encode_storage_key(bytes, hashers, types)?; )+
Ok(())
}
fn decode_storage_key(
bytes: &mut &[u8],
hashers: &mut StorageHashersIter,
types: &PortableRegistry,
) -> Result<Self, Error>
where
Self: Sized + 'static,
{
Ok( ( $( $ty::decode_storage_key(bytes, hashers, types)?, )+ ) )
}
}
}};
}
#[rustfmt::skip]
const _: () = {
impl_tuples!(A 0, B 1);
impl_tuples!(A 0, B 1, C 2);
impl_tuples!(A 0, B 1, C 2, D 3);
impl_tuples!(A 0, B 1, C 2, D 3, E 4);
impl_tuples!(A 0, B 1, C 2, D 3, E 4, F 5);
impl_tuples!(A 0, B 1, C 2, D 3, E 4, F 5, G 6);
impl_tuples!(A 0, B 1, C 2, D 3, E 4, F 5, G 6, H 7);
};
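The tuple impls above thread a single hashers iterator through every element, so each leaf key consumes exactly one `(hasher, type id)` pair and nesting does not change the order of consumption. A toy, std-only model of that threading (names like `Key` and `Leaf` are illustrative, not subxt types):

```rust
// Each leaf consumes the next item from a shared iterator, like
// `next_or_err` above; tuples just recurse left-to-right.
trait Key {
    fn encode(&self, hashers: &mut dyn Iterator<Item = u32>, out: &mut Vec<u32>);
}

struct Leaf;

impl Key for Leaf {
    fn encode(&self, hashers: &mut dyn Iterator<Item = u32>, out: &mut Vec<u32>) {
        if let Some(ty_id) = hashers.next() {
            out.push(ty_id);
        }
    }
}

impl<A: Key, B: Key> Key for (A, B) {
    fn encode(&self, hashers: &mut dyn Iterator<Item = u32>, out: &mut Vec<u32>) {
        self.0.encode(hashers, out);
        self.1.encode(hashers, out);
    }
}
```

This is why the fuzz test below can decode the same bytes as `T4A`, `T4B` or `T4C`: the flat consumption order is identical for all three nestings.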
#[cfg(test)]
mod tests {
use codec::Encode;
use scale_info::{meta_type, PortableRegistry, Registry, TypeInfo};
use subxt_metadata::StorageHasher;
use crate::utils::Era;
use super::{StaticStorageKey, StorageKey};
struct KeyBuilder {
registry: Registry,
bytes: Vec<u8>,
hashers_and_ty_ids: Vec<(StorageHasher, u32)>,
}
impl KeyBuilder {
fn new() -> KeyBuilder {
KeyBuilder {
registry: Registry::new(),
bytes: vec![],
hashers_and_ty_ids: vec![],
}
}
fn add<T: TypeInfo + Encode + 'static>(mut self, value: T, hasher: StorageHasher) -> Self {
let id = self.registry.register_type(&meta_type::<T>()).id;
self.hashers_and_ty_ids.push((hasher, id));
for _i in 0..hasher.len_excluding_key() {
self.bytes.push(0);
}
value.encode_to(&mut self.bytes);
self
}
fn build(self) -> (PortableRegistry, Vec<u8>, Vec<(StorageHasher, u32)>) {
(self.registry.into(), self.bytes, self.hashers_and_ty_ids)
}
}
#[test]
fn storage_key_decoding_fuzz() {
let hashers = [
StorageHasher::Blake2_128,
StorageHasher::Blake2_128Concat,
StorageHasher::Blake2_256,
StorageHasher::Identity,
StorageHasher::Twox128,
StorageHasher::Twox256,
StorageHasher::Twox64Concat,
];
let key_preserving_hashers = [
StorageHasher::Blake2_128Concat,
StorageHasher::Identity,
StorageHasher::Twox64Concat,
];
type T4A = (
(),
StaticStorageKey<u32>,
StaticStorageKey<String>,
StaticStorageKey<Era>,
);
type T4B = (
(),
(StaticStorageKey<u32>, StaticStorageKey<String>),
StaticStorageKey<Era>,
);
type T4C = (
((), StaticStorageKey<u32>),
(StaticStorageKey<String>, StaticStorageKey<Era>),
);
let era = Era::Immortal;
for h0 in hashers {
for h1 in key_preserving_hashers {
for h2 in key_preserving_hashers {
for h3 in key_preserving_hashers {
let (types, bytes, hashers_and_ty_ids) = KeyBuilder::new()
.add((), h0)
.add(13u32, h1)
.add("Hello", h2)
.add(era, h3)
.build();
let hashers = super::StorageHashers { hashers_and_ty_ids };
let keys_a =
T4A::decode_storage_key(&mut &bytes[..], &mut hashers.iter(), &types)
.unwrap();
let keys_b =
T4B::decode_storage_key(&mut &bytes[..], &mut hashers.iter(), &types)
.unwrap();
let keys_c =
T4C::decode_storage_key(&mut &bytes[..], &mut hashers.iter(), &types)
.unwrap();
assert_eq!(keys_a.1.decoded().unwrap(), 13);
assert_eq!(keys_b.1 .0.decoded().unwrap(), 13);
assert_eq!(keys_c.0 .1.decoded().unwrap(), 13);
assert_eq!(keys_a.2.decoded().unwrap(), "Hello");
assert_eq!(keys_b.1 .1.decoded().unwrap(), "Hello");
assert_eq!(keys_c.1 .0.decoded().unwrap(), "Hello");
assert_eq!(keys_a.3.decoded().unwrap(), era);
assert_eq!(keys_b.2.decoded().unwrap(), era);
assert_eq!(keys_c.1 .1.decoded().unwrap(), era);
}
}
}
}
}
}
@@ -3,18 +3,22 @@
// see LICENSE for license details.
use super::storage_address::{StorageAddress, Yes};
use super::storage_key::StorageHashers;
use super::StorageKey;
use crate::{
backend::{BackendExt, BlockRef},
client::OnlineClientT,
error::{Error, MetadataError},
error::{Error, MetadataError, StorageAddressError},
metadata::{DecodeWithMetadata, Metadata},
Config,
};
use codec::Decode;
use derivative::Derivative;
use futures::StreamExt;
use std::{future::Future, marker::PhantomData};
use subxt_metadata::{PalletMetadata, StorageEntryMetadata, StorageEntryType};
/// This is returned from a couple of storage functions.
@@ -197,18 +201,19 @@ where
/// .await
/// .unwrap();
///
/// while let Some(Ok((key, value))) = iter.next().await {
/// println!("Key: 0x{}", hex::encode(&key));
/// println!("Value: {}", value);
/// while let Some(Ok(kv)) = iter.next().await {
/// println!("Key bytes: 0x{}", hex::encode(&kv.key_bytes));
/// println!("Value: {}", kv.value);
/// }
/// # }
/// ```
pub fn iter<Address>(
&self,
address: Address,
) -> impl Future<Output = Result<StreamOfResults<(Vec<u8>, Address::Target)>, Error>> + 'static
) -> impl Future<Output = Result<StreamOfResults<StorageKeyValuePair<Address>>, Error>> + 'static
where
Address: StorageAddress<IsIterable = Yes> + 'static,
Address::Keys: 'static + Sized,
{
let client = self.client.clone();
let block_ref = self.block_ref.clone();
@@ -226,11 +231,13 @@ where
// Look up the return type for flexible decoding. Do this once here to avoid
// potentially doing it every iteration if we used `decode_storage_with_metadata`
// in the iterator.
let return_type_id = return_type_from_storage_entry_type(entry.entry_type());
let entry = entry.entry_type();
let return_type_id = entry.value_ty();
let hashers = StorageHashers::new(entry, metadata.types())?;
// The address bytes of this entry:
let address_bytes = super::utils::storage_address_bytes(&address, &metadata)?;
let s = client
.backend()
.storage_fetch_descendant_values(address_bytes, block_ref.hash())
@@ -240,12 +247,27 @@ where
Ok(kv) => kv,
Err(e) => return Err(e),
};
let val = Address::Target::decode_with_metadata(
let value = Address::Target::decode_with_metadata(
&mut &*kv.value,
return_type_id,
&metadata,
)?;
Ok((kv.key, val))
let key_bytes = kv.key;
let cursor = &mut &key_bytes[..];
strip_storage_address_root_bytes(cursor)?;
let keys = <Address::Keys as StorageKey>::decode_storage_key(
cursor,
&mut hashers.iter(),
metadata.types(),
)?;
Ok(StorageKeyValuePair::<Address> {
keys,
key_bytes,
value,
})
});
let s = StreamOfResults::new(Box::pin(s));
@@ -290,6 +312,28 @@ where
}
}
/// Strips the first 32 bytes (16 for the pallet hash, 16 for the entry hash) off some storage address bytes.
fn strip_storage_address_root_bytes(address_bytes: &mut &[u8]) -> Result<(), StorageAddressError> {
if address_bytes.len() >= 32 {
*address_bytes = &address_bytes[32..];
Ok(())
} else {
Err(StorageAddressError::UnexpectedAddressBytes)
}
}
/// A pair of keys and values, together with all the bytes that make up the storage address.
/// Note that decoding `keys` from the address bytes only succeeds for hashers that append the
/// plain key (e.g. concat-style or `Identity` hashers); for other hashers the keys cannot be
/// reconstructed from the `key_bytes`.
#[derive(Clone, Debug, Eq, PartialEq, Ord, PartialOrd)]
pub struct StorageKeyValuePair<T: StorageAddress> {
/// The bytes that make up the address of the storage entry.
pub key_bytes: Vec<u8>,
/// The keys that can be used to construct the address of this storage entry.
pub keys: T::Keys,
/// The value of the storage entry.
pub value: T::Target,
}
/// Validate a storage address against the metadata.
pub(crate) fn validate_storage_address<Address: StorageAddress>(
address: &Address,
@@ -6,12 +6,14 @@
//! aren't things that should ever be overridden, and so don't exist on
//! the trait itself.
use subxt_metadata::StorageHasher;
use super::StorageAddress;
use crate::{error::Error, metadata::Metadata};
/// Return the root of a given [`StorageAddress`]: hash the pallet name and entry name
/// and append those bytes to the output.
pub(crate) fn write_storage_address_root_bytes<Address: StorageAddress>(
pub fn write_storage_address_root_bytes<Address: StorageAddress>(
addr: &Address,
out: &mut Vec<u8>,
) {
@@ -21,7 +23,7 @@ pub(crate) fn write_storage_address_root_bytes<Address: StorageAddress>(
/// Outputs the [`storage_address_root_bytes`] as well as any additional bytes that represent
/// a lookup in a storage map at that location.
pub(crate) fn storage_address_bytes<Address: StorageAddress>(
pub fn storage_address_bytes<Address: StorageAddress>(
addr: &Address,
metadata: &Metadata,
) -> Result<Vec<u8>, Error> {
@@ -32,8 +34,27 @@ pub(crate) fn storage_address_bytes<Address: StorageAddress>(
}
/// Outputs a vector containing the bytes written by [`write_storage_address_root_bytes`].
pub(crate) fn storage_address_root_bytes<Address: StorageAddress>(addr: &Address) -> Vec<u8> {
pub fn storage_address_root_bytes<Address: StorageAddress>(addr: &Address) -> Vec<u8> {
let mut bytes = Vec::new();
write_storage_address_root_bytes(addr, &mut bytes);
bytes
}
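`storage_address_root_bytes` produces `twox128(pallet_name) ++ twox128(entry_name)`, a 32-byte prefix that precedes any hashed map keys. A std-only sketch of that layout, with the real 128-bit twox hash stubbed by zeros (only its 16-byte output size matters for the layout):

```rust
// Stand-in for sp_core_hashing::twox_128: same 16-byte output size,
// but a dummy all-zero value.
fn fake_twox128(_input: &[u8]) -> [u8; 16] {
    [0u8; 16]
}

// The storage address root: hash of the pallet name followed by the hash
// of the entry name, 16 bytes each.
fn root_bytes(pallet: &str, entry: &str) -> Vec<u8> {
    let mut out = Vec::with_capacity(32);
    out.extend(fake_twox128(pallet.as_bytes()));
    out.extend(fake_twox128(entry.as_bytes()));
    out
}
```

This fixed 32-byte length is what lets the iteration code strip the root prefix off returned keys before decoding the per-entry key bytes.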
/// Take some SCALE encoded bytes and a [`StorageHasher`] and hash the bytes accordingly.
pub fn hash_bytes(input: &[u8], hasher: StorageHasher, bytes: &mut Vec<u8>) {
match hasher {
StorageHasher::Identity => bytes.extend(input),
StorageHasher::Blake2_128 => bytes.extend(sp_core_hashing::blake2_128(input)),
StorageHasher::Blake2_128Concat => {
bytes.extend(sp_core_hashing::blake2_128(input));
bytes.extend(input);
}
StorageHasher::Blake2_256 => bytes.extend(sp_core_hashing::blake2_256(input)),
StorageHasher::Twox128 => bytes.extend(sp_core_hashing::twox_128(input)),
StorageHasher::Twox256 => bytes.extend(sp_core_hashing::twox_256(input)),
StorageHasher::Twox64Concat => {
bytes.extend(sp_core_hashing::twox_64(input));
bytes.extend(input);
}
}
}
File diff suppressed because it is too large.
@@ -56,10 +56,6 @@ async fn storage_map_lookup() -> Result<(), subxt::Error> {
Ok(())
}
// This fails until the fix in https://github.com/paritytech/subxt/pull/458 is introduced.
// Here we create a key that looks a bit like a StorageNMap key, but should in fact be
// treated as a StorageKey (ie we should hash both values together with one hasher, rather
// than hash both values separately, or ignore the second value).
#[tokio::test]
async fn storage_n_mapish_key_is_properly_created() -> Result<(), subxt::Error> {
use codec::Encode;
@@ -73,18 +69,21 @@ async fn storage_n_mapish_key_is_properly_created() -> Result<(), subxt::Error>
.session()
.key_owner(KeyTypeId([1, 2, 3, 4]), [5u8, 6, 7, 8]);
let actual_key_bytes = api.storage().address_bytes(&actual_key)?;
// Let's manually hash to what we assume it should be and compare:
let expected_key_bytes = {
// Hash the prefix to the storage entry:
let mut bytes = sp_core::twox_128("Session".as_bytes()).to_vec();
bytes.extend(&sp_core::twox_128("KeyOwner".as_bytes())[..]);
// twox64_concat a *tuple* of the args expected:
let suffix = (KeyTypeId([1, 2, 3, 4]), vec![5u8, 6, 7, 8]).encode();
bytes.extend(sp_core::twox_64(&suffix));
bytes.extend(&suffix);
// Both keys, use twox64_concat hashers:
let key1 = KeyTypeId([1, 2, 3, 4]).encode();
let key2 = vec![5u8, 6, 7, 8].encode();
bytes.extend(sp_core::twox_64(&key1));
bytes.extend(&key1);
bytes.extend(sp_core::twox_64(&key2));
bytes.extend(&key2);
bytes
};
assert_eq!(actual_key_bytes, expected_key_bytes);
Ok(())
@@ -167,9 +166,9 @@ async fn storage_partial_lookup() -> Result<(), subxt::Error> {
let addr_bytes = api.storage().address_bytes(&addr)?;
let mut results = api.storage().at_latest().await?.iter(addr).await?;
let mut approvals = Vec::new();
while let Some(Ok((key, value))) = results.next().await {
assert!(key.starts_with(&addr_bytes));
approvals.push(value);
while let Some(Ok(kv)) = results.next().await {
assert!(kv.key_bytes.starts_with(&addr_bytes));
approvals.push(kv.value);
}
assert_eq!(approvals.len(), assets.len());
let mut amounts = approvals.iter().map(|a| a.amount).collect::<Vec<_>>();
@@ -188,9 +187,10 @@ async fn storage_partial_lookup() -> Result<(), subxt::Error> {
let mut results = api.storage().at_latest().await?.iter(addr).await?;
let mut approvals = Vec::new();
while let Some(Ok((key, value))) = results.next().await {
assert!(key.starts_with(&addr_bytes));
approvals.push(value);
while let Some(Ok(kv)) = results.next().await {
assert!(kv.key_bytes.starts_with(&addr_bytes));
assert!(kv.keys.decoded().is_ok());
approvals.push(kv.value);
}
assert_eq!(approvals.len(), 1);
assert_eq!(approvals[0].amount, amount);