Static Metadata Validation (#478)

* metadata: Implement MetadataHashable for deterministic hashing

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* metadata: Hash `scale_info::Field`

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* metadata: Hash `scale_info::Variant`

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* metadata: Hash `scale_info::TypeDef`

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* metadata: Hash pallet metadata

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* metadata: Avoid data representation collision via unique identifiers

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>
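The collision problem this commit addresses can be illustrated with a minimal std-only sketch (hypothetical names, not subxt's actual API): if field bytes are fed to the hasher back to back, two different field layouts can produce the identical byte stream, so a unique identifier (here, a per-field length prefix) must frame each piece of data.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Naive version: concatenates field bytes directly, so the layouts
// ["ab", "c"] and ["a", "bc"] feed the same byte stream to the hasher
// and collide.
fn concat_hash(fields: &[&str]) -> u64 {
    let mut h = DefaultHasher::new();
    for f in fields {
        h.write(f.as_bytes());
    }
    h.finish()
}

// Framed version: a length prefix per field uniquely identifies where
// each field starts and ends, so different groupings of the same bytes
// hash differently.
fn framed_hash(fields: &[&str]) -> u64 {
    let mut h = DefaultHasher::new();
    for f in fields {
        (f.len() as u64).hash(&mut h);
        h.write(f.as_bytes());
    }
    h.finish()
}
```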

* metadata: Finalize hashing on recursive types

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* metadata: Cache recursive calls

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>
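The recursion-breaking idea from the two commits above can be sketched with a toy type registry (hypothetical names, std-only; the real implementation walks `scale_info` types): a visited set stops the traversal when a type refers back to itself, mixing in the type id instead of recursing forever.

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashSet;
use std::hash::{Hash, Hasher};

// Toy registry entry: a type is a name plus the ids of the types its
// fields reference (which may include its own id for recursive types,
// e.g. `enum Expr { Leaf, Node(Box<Expr>) }`).
struct TypeDef {
    name: &'static str,
    field_types: Vec<usize>,
}

// Walk a type, tracking visited ids. On re-entry into a type that is
// already being hashed, hash the id itself and stop, which makes the
// recursion terminate while still distinguishing cyclic shapes.
fn hash_type(
    registry: &[TypeDef],
    id: usize,
    visited: &mut HashSet<usize>,
    hasher: &mut DefaultHasher,
) {
    if !visited.insert(id) {
        id.hash(hasher); // cycle detected: break the recursion here
        return;
    }
    let ty = &registry[id];
    ty.name.hash(hasher);
    for &field_id in &ty.field_types {
        hash_type(registry, field_id, visited, hasher);
    }
}

// Deterministic hash for the type with the given id.
fn type_hash(registry: &[TypeDef], id: usize) -> u64 {
    let mut hasher = DefaultHasher::new();
    hash_type(registry, id, &mut HashSet::new(), &mut hasher);
    hasher.finish()
}
```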

* metadata: Move `MetadataHashable` to codegen to avoid cyclic dependency

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* codegen: Add pallet unique hash

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* metadata: Wrap metadata as owned

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* subxt: Use MetadataHashable wrapper for clients

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* subxt: Generate runtime pallet uid from metadata

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* Validate metadata compatibility at the pallet level

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* Update polkadot.rs

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* Modify examples and tests for the new API

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* codegen: Implement metadata uid

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* Update polkadot with TryFrom implementation

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* client: Change `to_runtime_api` to reflect TryFrom changes

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* client: Skip full metadata validation option

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* codegen: Add option to skip pallet validation for TransactionApi

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* codegen: Add option to skip pallet validation for StorageApi

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* Update polkadot.rs with ability to skip pallet validation

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* codegen: Change `MetadataHashable` to per function implementation

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* codegen: Use metadata hashes functions

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* subxt: Use metadata hashes functions

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* codegen: Make `get_type_uid` private

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* codegen, subxt: Rename metadata functions `*_uid` to `*_hash`

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* codegen: Update `get_field_hash` to use `codec::Encode`

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* subxt: Update polkadot.rs

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* codegen, subxt: Move metadata check from client to subxt::Metadata

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* codegen, subxt: Rename metadata check functions to follow `*_hash` naming

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* codegen: Update polkadot.rs to reflect naming changes

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* codegen: Use `encode_to` for metadata generation

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* codegen: Update polkadot.rs to reflect `encode_to` changes

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* codegen: Specific name for visited set

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* metadata: Provide cache to hashing functions

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* metadata: Compute metadata hash by sorted pallets

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>
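Why sorting matters here: two runtimes can register the same pallets in a different order, and the combined hash should not change. A minimal sketch, assuming a precomputed `u64` hash per pallet (the real code hashes full pallet metadata):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Combine per-pallet hashes into one metadata hash. Sorting by pallet
// name first makes the result independent of the registration order.
fn metadata_hash(pallets: &[(&str, u64)]) -> u64 {
    let mut sorted: Vec<(&str, u64)> = pallets.to_vec();
    sorted.sort_by(|a, b| a.0.cmp(b.0));
    let mut hasher = DefaultHasher::new();
    for (name, pallet_hash) in sorted {
        name.hash(&mut hasher);
        pallet_hash.hash(&mut hasher);
    }
    hasher.finish()
}
```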

* metadata: Get extrinsic hash

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* metadata: Extend metadata hash with extrinsic and metadata type

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* metadata: Add cache as metadata parameter

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* codegen, subxt: Update metadata hash to use cache

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* metadata: Implement Default trait for MetadataHasherCache

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* metadata: Add cache for pallets

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* metadata: Move functionality to metadata crate

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* codegen, subxt: Use subxt-metadata crate

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* codegen: Remove metadata hashing functionality

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* metadata: Add documentation

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* metadata: Fix vector capacity to include extrinsic and type hash

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* metadata: Add empty CLI

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* metadata-cli: Fetch metadata from substrate nodes

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* metadata-cli: Log metadata hashes of provided nodes

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* metadata-cli: Group compatible nodes by metadata

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>
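The grouping step can be sketched as a map from metadata hash to node URLs — nodes that land in the same bucket are metadata-compatible with each other (hypothetical inputs; the CLI computes the hex hash from fetched runtime metadata):

```rust
use std::collections::HashMap;

// Group node URLs by their metadata hash. Each bucket holds the set of
// nodes that share identical metadata.
fn group_by_hash(nodes: &[(&str, &str)]) -> HashMap<String, Vec<String>> {
    let mut groups: HashMap<String, Vec<String>> = HashMap::new();
    for (url, hash) in nodes {
        groups
            .entry(hash.to_string())
            .or_insert_with(Vec::new)
            .push(url.to_string());
    }
    groups
}
```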

* metadata-cli: Simplify hash map insertion

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* metadata-cli: Move full metadata check to function

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* metadata-cli: Group metadata validation at the pallet level

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* subxt: Persist metadata cache

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* metadata: Move compatibility cli from subxt-metadata to subxt-cli

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* metadata: Remove cli from subxt-metadata

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* cli: Fix clippy

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* metadata: Fix compatible metadata when pallets are registered in different order

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* tests: Handle result of pallet hashing

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* metadata: Remove type cache for deterministic hashing

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* metadata: Add test assets from `substrate-node-template` tag `polkadot-v0.9.17`

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* metadata-tests: Check cache hashing for Balances pallet

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* metadata: Fix `get_type_hash` clippy issue

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* metadata-tests: Compare one time cache with persistent cache

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* metadata-test: Check metadata hash populates cache for pallets

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* metadata-tests: Simplify `cache_deterministic_hash` test

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* metadata-tests: Check deterministic metadata for different order pallets

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* metadata: Fix clippy

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* codegen: Implement TransactionApiUnchecked for skipping pallet validation

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* codegen: Implement StorageApiUnchecked for skipping pallet validation

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* codegen: Remove skip_pallet_validation boolean

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* subxt: Implement ClientUnchecked for skipping metadata validation

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* Update polkadot.rs

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* Update examples of rpc_call to skip metadata

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* subxt: Remove heck dependency

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* codegen: Add pallet name as an identifier for pallet hashing

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* metadata: Implement MetadataHashDetails

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* metadata: Adjust testing to `MetadataHashDetails` interface

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* codegen: Remove extra `pallet_name`

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* subxt: Update polkadot.rs

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* subxt: Fix clippy issue

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* codegen: Change StorageApi to support `_unchecked` methods

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* codegen: Change TransactionApi to support `_unchecked` methods

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* subxt: Switch back from `TryFrom` to `From` for `subxt::Client`

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* codegen, subxt: Remove `ClientUnchecked`

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* codegen: Expose `validate_metadata` as validation of compatibility method

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* examples: Update to the new interface

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* subxt: Update test integration to latest interface

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* subxt: Update polkadot.rs

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* metadata/tests: Check different pallet index order

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* metadata/tests: Check recursive type hashing

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* metadata/tests: Check recursive types registered in different order

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* metadata/tests: Fix recursive types warning

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* metadata/tests: Remove test assets

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* metadata/tests: Extend tests to verify cached pallet values

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* examples: Add metadata compatibility example

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* examples: Revert balance_transfer to initial form

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* codegen: Add ConstantsApi metadata check

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* tests: Modify tests to accommodate ConstantsApi changes

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* examples: Modify verified version

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* subxt: Generate polkadot.rs from `0.9.18-4542a603cc-aarch64-macos`

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* examples: Update polkadot_metadata.scale from `0.9.18-4542a603cc-aarch64-macos`

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* metadata: Update documentation

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* tests: Modify default pallet usage

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* metadata/tests: Remove hex dependency

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* metadata/tests: Add MetadataTestType to capture complex types

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* metadata/tests: Update tests to use complex types

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* metadata/tests: Check metadata correctness via extending pallets

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* metadata/tests: Extend pallet hash with Events

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* metadata/tests: Extend pallet hash with constants

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* metadata/tests: Extend pallet hash with error

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* examples: Extend metadata compatibility with StorageApi and ConstantsApi

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* Modify comments and documentation

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* metadata: Benchmarks for full validation and pallet validation

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* metadata/benches: Fix clippy

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* metadata: Hash metadata just by inspecting the provided pallets

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* metadata: Make pallets generic over T for `AsRef<str>`

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* codegen: Expose the name of the pallets composing the metadata

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* subxt/tests: Update polkadot.rs with pallets name

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* codegen: Obtain metadata hash only by inspecting pallets

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* codegen,subxt: Extend the metadata hash to utilize just pallets

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* subxt/tests: Update polkadot.rs with client metadata hash per pallet

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* metadata/tests: Test `get_metadata_per_pallet_hash` correctness

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* metadata: Fix clippy

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* metadata/benches: Fix decode of metadata

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* metadata: Fix clippy

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* [static metadata] validate storage, calls and constants per call (#507)

* validate storage, calls and constants per call

* fix typo

* cache per-thing hashing, move an Arc, remove some unused bits

* create hash_cache to simplify metadata call/constant/storage caching

* simplify/robustify the caching logic to help prevent mismatch between get and set
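The get/set-mismatch concern above can be avoided by exposing only a get-or-compute entry point, so a value can never be read before it is written. A hedged sketch (hypothetical names, not the actual `hash_cache` module):

```rust
use std::collections::HashMap;

// Cache keyed by (pallet, item). The closure runs only on a miss, so
// reads and writes go through a single code path and cannot drift apart.
struct HashCache {
    inner: HashMap<(String, String), u64>,
}

impl HashCache {
    fn new() -> Self {
        HashCache { inner: HashMap::new() }
    }

    // Return the cached hash, computing and storing it on first access.
    fn get_or_insert_with(
        &mut self,
        pallet: &str,
        item: &str,
        compute: impl FnOnce() -> u64,
    ) -> u64 {
        *self
            .inner
            .entry((pallet.to_string(), item.to_string()))
            .or_insert_with(compute)
    }
}
```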

* cargo fmt

* Fix clippy

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* bench the per-call metadata functions

* metadata: Add test for `node_template_runtime_variant`

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* ensure criterion cli opts work

* group benchmarks and avoid unwrap issues

* metadata: Check template runtime for handling the pallet swap order case

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* metadata: Remove debug logs

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* metadata: Optimise by removing field's name and type_name and type's path

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* metadata: Refactor `get_type_hash` to break recursion earlier

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* subxt: Add tests for `hash_cache`

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* subxt: Add tests for checking Metadata Inner cache

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* metadata: Check semantic changes inside enum and struct fields

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* metadata: Add enums named differently with compatible semantic meaning

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* metadata: Guard testing of release versions for `node_template_runtime`

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* Improve documentation

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* Update polkadot.rs

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* metadata/tests: Manually construct type of `node_template_runtime::Call`

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* no more special Call handling, avoid a little cloning, and actually sort by name

* remove unused deps and fmt

* RuntimeMetadataLastVersion => RuntimeMetadataV14

* remove a bunch of allocations in the metadata hashing, speed up from ~17ms to ~5ms

* update release docs to release metadata crate too

* subxt: Remove codegen dependency

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* subxt: Replace std RwLock with parking_lot

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* subxt/tests: Add ws address to `TestNodeProcess`

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* subxt/tests: Add metadata validation integration test

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* subxt: Allow setting metadata on the ClientBuilder

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* subxt/tests: Check incompatible metadata

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* metadata: Fix constant hashing for deterministic output

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* subxt/tests: Check metadata validation for constants

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* subxt/tests: Test validation for calls

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* subxt/tests: Test validation for storage

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* subxt: Expose `set_metadata` for testing only

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* subxt: Guard metadata tests under integration-tests

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

Co-authored-by: James Wilson <james@jsdw.me>
Alexandru Vasile, 2022-04-28 12:37:07 +03:00 (committed by GitHub)
parent 5605fbd308, commit 1fd1eee72a
44 changed files with 17760 additions and 4624 deletions
+1
@@ -5,6 +5,7 @@ members = [
"codegen",
"examples",
"macro",
"metadata",
"subxt",
"test-runtime"
]
+2 -1
@@ -67,7 +67,8 @@ We also assume that ongoing work done is being merged directly to the `master` b
a little time in between each to let crates.io catch up with what we've published).
```
(cd codegen && cargo publish) && \
(cd metadata && cargo publish) && \
(cd codegen && cargo publish) && \
sleep 10 && \
(cd macro && cargo publish) && \
sleep 10 && \
+4
@@ -17,6 +17,10 @@ path = "src/main.rs"
[dependencies]
# perform subxt codegen
subxt-codegen = { version = "0.20.0", path = "../codegen" }
# perform node compatibility
subxt-metadata = { version = "0.20.0", path = "../metadata" }
# information of portable registry
scale-info = "2.0.0"
# parse command line args
structopt = "0.3.25"
# make the request to a substrate node to get the metadata
+126 -1
@@ -18,12 +18,22 @@ use color_eyre::eyre::{
self,
WrapErr,
};
use frame_metadata::RuntimeMetadataPrefixed;
use frame_metadata::{
RuntimeMetadata,
RuntimeMetadataPrefixed,
RuntimeMetadataV14,
META_RESERVED,
};
use scale::{
Decode,
Input,
};
use serde::{
Deserialize,
Serialize,
};
use std::{
collections::HashMap,
fs,
io::{
self,
@@ -34,6 +44,10 @@ use std::{
};
use structopt::StructOpt;
use subxt_codegen::GeneratedTypeDerives;
use subxt_metadata::{
get_metadata_hash,
get_pallet_hash,
};
/// Utilities for working with substrate metadata for subxt.
#[derive(Debug, StructOpt)]
@@ -75,6 +89,18 @@ enum Command {
#[structopt(long = "derive")]
derives: Vec<String>,
},
/// Verify metadata compatibility between substrate nodes.
Compatibility {
/// Urls of the substrate nodes to verify for metadata compatibility.
#[structopt(name = "nodes", long, use_delimiter = true, parse(try_from_str))]
nodes: Vec<url::Url>,
/// Check the compatibility of metadata for a particular pallet.
///
/// ### Note
/// The validation will omit the full metadata check and focus instead on the pallet.
#[structopt(long, parse(try_from_str))]
pallet: Option<String>,
},
}
fn main() -> color_eyre::Result<()> {
@@ -126,6 +152,105 @@ fn main() -> color_eyre::Result<()> {
codegen(&mut &bytes[..], derives)?;
Ok(())
}
Command::Compatibility { nodes, pallet } => {
match pallet {
Some(pallet) => handle_pallet_metadata(nodes.as_slice(), pallet.as_str()),
None => handle_full_metadata(nodes.as_slice()),
}
}
}
}
fn handle_pallet_metadata(nodes: &[url::Url], name: &str) -> color_eyre::Result<()> {
#[derive(Serialize, Deserialize, Default)]
#[serde(rename_all = "camelCase")]
struct CompatibilityPallet {
pallet_present: HashMap<String, Vec<String>>,
pallet_not_found: Vec<String>,
}
let mut compatibility: CompatibilityPallet = Default::default();
for node in nodes.iter() {
let metadata = fetch_runtime_metadata(node)?;
match metadata.pallets.iter().find(|pallet| pallet.name == name) {
Some(pallet_metadata) => {
let hash = get_pallet_hash(&metadata.types, pallet_metadata);
let hex_hash = hex::encode(hash);
println!(
"Node {:?} has pallet metadata hash {:?}",
node.as_str(),
hex_hash
);
compatibility
.pallet_present
.entry(hex_hash)
.or_insert_with(Vec::new)
.push(node.as_str().to_string());
}
None => {
compatibility
.pallet_not_found
.push(node.as_str().to_string());
}
}
}
println!(
"\nCompatible nodes by pallet\n{}",
serde_json::to_string_pretty(&compatibility)
.context("Failed to parse compatibility map")?
);
Ok(())
}
fn handle_full_metadata(nodes: &[url::Url]) -> color_eyre::Result<()> {
let mut compatibility_map: HashMap<String, Vec<String>> = HashMap::new();
for node in nodes.iter() {
let metadata = fetch_runtime_metadata(node)?;
let hash = get_metadata_hash(&metadata);
let hex_hash = hex::encode(hash);
println!("Node {:?} has metadata hash {:?}", node.as_str(), hex_hash,);
compatibility_map
.entry(hex_hash)
.or_insert_with(Vec::new)
.push(node.as_str().to_string());
}
println!(
"\nCompatible nodes\n{}",
serde_json::to_string_pretty(&compatibility_map)
.context("Failed to parse compatibility map")?
);
Ok(())
}
fn fetch_runtime_metadata(url: &url::Url) -> color_eyre::Result<RuntimeMetadataV14> {
let (_, bytes) = fetch_metadata(url)?;
let metadata = <RuntimeMetadataPrefixed as Decode>::decode(&mut &bytes[..])?;
if metadata.0 != META_RESERVED {
return Err(eyre::eyre!(
"Node {:?} has invalid metadata prefix: {:?} expected prefix: {:?}",
url.as_str(),
metadata.0,
META_RESERVED
))
}
match metadata.1 {
RuntimeMetadata::V14(v14) => Ok(v14),
_ => {
Err(eyre::eyre!(
"Node {:?} with unsupported metadata version: {:?}",
url.as_str(),
metadata.1
))
}
}
}
+2
@@ -22,6 +22,8 @@ proc-macro-error = "1.0.4"
quote = "1.0.8"
syn = "1.0.58"
scale-info = { version = "2.0.0", features = ["bit-vec"] }
sp-core = { version = "6.0.0" }
subxt-metadata = { version = "0.20.0", path = "../metadata" }
[dev-dependencies]
bitvec = { version = "1.0.0", default-features = false, features = ["alloc"] }
+17 -9
@@ -19,6 +19,7 @@ use crate::types::{
TypeGenerator,
};
use frame_metadata::{
v14::RuntimeMetadataV14,
PalletCallMetadata,
PalletMetadata,
};
@@ -35,6 +36,7 @@ use quote::{
use scale_info::form::PortableForm;
pub fn generate_calls(
metadata: &RuntimeMetadataV14,
type_gen: &TypeGenerator,
pallet: &PalletMetadata<PortableForm>,
call: &PalletCallMetadata<PortableForm>,
@@ -48,7 +50,7 @@ pub fn generate_calls(
);
let (call_structs, call_fns): (Vec<_>, Vec<_>) = struct_defs
.iter_mut()
.map(|struct_def| {
.map(|(variant_name, struct_def)| {
let (call_fn_args, call_args): (Vec<_>, Vec<_>) =
match struct_def.fields {
CompositeDefFields::Named(ref named_fields) => {
@@ -74,10 +76,12 @@ pub fn generate_calls(
};
let pallet_name = &pallet.name;
let call_struct_name = &struct_def.name;
let function_name = struct_def.name.to_string().to_snake_case();
let fn_name = format_ident!("{}", function_name);
let call_name = &variant_name;
let struct_name = &struct_def.name;
let call_hash = subxt_metadata::get_call_hash(metadata, pallet_name, call_name)
.unwrap_or_else(|_| abort_call_site!("Metadata information for the call {}_{} could not be found", pallet_name, call_name));
let fn_name = format_ident!("{}", variant_name.to_snake_case());
// Propagate the documentation just to `TransactionApi` methods, while
// draining the documentation of inner call structures.
let docs = struct_def.docs.take();
@@ -85,9 +89,9 @@ pub fn generate_calls(
let call_struct = quote! {
#struct_def
impl ::subxt::Call for #call_struct_name {
impl ::subxt::Call for #struct_name {
const PALLET: &'static str = #pallet_name;
const FUNCTION: &'static str = #function_name;
const FUNCTION: &'static str = #call_name;
}
};
let client_fn = quote! {
@@ -95,9 +99,13 @@ pub fn generate_calls(
pub fn #fn_name(
&self,
#( #call_fn_args, )*
) -> ::subxt::SubmittableExtrinsic<'a, T, X, #call_struct_name, DispatchError, root_mod::Event> {
let call = #call_struct_name { #( #call_args, )* };
::subxt::SubmittableExtrinsic::new(self.client, call)
) -> Result<::subxt::SubmittableExtrinsic<'a, T, X, #struct_name, DispatchError, root_mod::Event>, ::subxt::BasicError> {
if self.client.metadata().call_hash::<#struct_name>()? == [#(#call_hash,)*] {
let call = #struct_name { #( #call_args, )* };
Ok(::subxt::SubmittableExtrinsic::new(self.client, call))
} else {
Err(::subxt::MetadataError::IncompatibleMetadata.into())
}
}
};
(call_struct, client_fn)
+14 -4
@@ -16,11 +16,13 @@
use crate::types::TypeGenerator;
use frame_metadata::{
v14::RuntimeMetadataV14,
PalletConstantMetadata,
PalletMetadata,
};
use heck::ToSnakeCase as _;
use proc_macro2::TokenStream as TokenStream2;
use proc_macro_error::abort_call_site;
use quote::{
format_ident,
quote,
@@ -28,6 +30,7 @@ use quote::{
use scale_info::form::PortableForm;
pub fn generate_constants(
metadata: &RuntimeMetadataV14,
type_gen: &TypeGenerator,
pallet: &PalletMetadata<PortableForm>,
constants: &[PalletConstantMetadata<PortableForm>],
@@ -37,16 +40,23 @@ pub fn generate_constants(
let fn_name = format_ident!("{}", constant.name.to_snake_case());
let pallet_name = &pallet.name;
let constant_name = &constant.name;
let constant_hash = subxt_metadata::get_constant_hash(metadata, pallet_name, constant_name)
.unwrap_or_else(|_| abort_call_site!("Metadata information for the constant {}_{} could not be found", pallet_name, constant_name));
let return_ty = type_gen.resolve_type_path(constant.ty.id(), &[]);
let docs = &constant.docs;
quote! {
#( #[doc = #docs ] )*
pub fn #fn_name(&self) -> ::core::result::Result<#return_ty, ::subxt::BasicError> {
let pallet = self.client.metadata().pallet(#pallet_name)?;
let constant = pallet.constant(#constant_name)?;
let value = ::subxt::codec::Decode::decode(&mut &constant.value[..])?;
Ok(value)
if self.client.metadata().constant_hash(#pallet_name, #constant_name)? == [#(#constant_hash,)*] {
let pallet = self.client.metadata().pallet(#pallet_name)?;
let constant = pallet.constant(#constant_name)?;
let value = ::subxt::codec::Decode::decode(&mut &constant.value[..])?;
Ok(value)
} else {
Err(::subxt::MetadataError::IncompatibleMetadata.into())
}
}
}
});
+2 -2
@@ -35,10 +35,10 @@ pub fn generate_events(
|name| name.into(),
"Event",
);
let event_structs = struct_defs.iter().map(|struct_def| {
let event_structs = struct_defs.iter().map(|(variant_name, struct_def)| {
let pallet_name = &pallet.name;
let event_struct = &struct_def.name;
let event_name = struct_def.name.to_string();
let event_name = variant_name;
quote! {
#struct_def
+65 -23
@@ -37,6 +37,8 @@ mod errors;
mod events;
mod storage;
use subxt_metadata::get_metadata_per_pallet_hash;
use super::GeneratedTypeDerives;
use crate::{
ir,
@@ -181,9 +183,28 @@ impl RuntimeGenerator {
})
.collect::<Vec<_>>();
// Pallet names and their length are used to create PALLETS array.
// The array is used to identify the pallets composing the metadata for
// validation of just those pallets.
let pallet_names: Vec<_> = self
.metadata
.pallets
.iter()
.map(|pallet| &pallet.name)
.collect();
let pallet_names_len = pallet_names.len();
let metadata_hash = get_metadata_per_pallet_hash(&self.metadata, &pallet_names);
let modules = pallets_with_mod_names.iter().map(|(pallet, mod_name)| {
let calls = if let Some(ref calls) = pallet.calls {
calls::generate_calls(&type_gen, pallet, calls, types_mod_ident)
calls::generate_calls(
&self.metadata,
&type_gen,
pallet,
calls,
types_mod_ident,
)
} else {
quote!()
};
@@ -195,13 +216,20 @@ impl RuntimeGenerator {
};
let storage_mod = if let Some(ref storage) = pallet.storage {
storage::generate_storage(&type_gen, pallet, storage, types_mod_ident)
storage::generate_storage(
&self.metadata,
&type_gen,
pallet,
storage,
types_mod_ident,
)
} else {
quote!()
};
let constants_mod = if !pallet.constants.is_empty() {
constants::generate_constants(
&self.metadata,
&type_gen,
pallet,
&pallet.constants,
@@ -244,24 +272,26 @@ impl RuntimeGenerator {
};
let mod_ident = item_mod_ir.ident;
let pallets_with_constants =
pallets_with_mod_names
.iter()
.filter_map(|(pallet, pallet_mod_name)| {
(!pallet.constants.is_empty()).then(|| pallet_mod_name)
});
let pallets_with_storage =
pallets_with_mod_names
.iter()
.filter_map(|(pallet, pallet_mod_name)| {
pallet.storage.as_ref().map(|_| pallet_mod_name)
});
let pallets_with_calls =
pallets_with_mod_names
.iter()
.filter_map(|(pallet, pallet_mod_name)| {
pallet.calls.as_ref().map(|_| pallet_mod_name)
});
let pallets_with_constants: Vec<_> = pallets_with_mod_names
.iter()
.filter_map(|(pallet, pallet_mod_name)| {
(!pallet.constants.is_empty()).then(|| pallet_mod_name)
})
.collect();
let pallets_with_storage: Vec<_> = pallets_with_mod_names
.iter()
.filter_map(|(pallet, pallet_mod_name)| {
pallet.storage.as_ref().map(|_| pallet_mod_name)
})
.collect();
let pallets_with_calls: Vec<_> = pallets_with_mod_names
.iter()
.filter_map(|(pallet, pallet_mod_name)| {
pallet.calls.as_ref().map(|_| pallet_mod_name)
})
.collect();
let has_module_error_impl =
errors::generate_has_module_error_impl(&self.metadata, types_mod_ident);
@@ -271,6 +301,8 @@ impl RuntimeGenerator {
pub mod #mod_ident {
// Make it easy to access the root via `root_mod` at different levels:
use super::#mod_ident as root_mod;
// Identify the pallets composing the static metadata by name.
pub static PALLETS: [&str; #pallet_names_len] = [ #(#pallet_names,)* ];
#outer_event
#( #modules )*
@@ -301,6 +333,14 @@ impl RuntimeGenerator {
T: ::subxt::Config,
X: ::subxt::extrinsic::ExtrinsicParams<T>,
{
pub fn validate_metadata(&'a self) -> Result<(), ::subxt::MetadataError> {
if self.client.metadata().metadata_hash(&PALLETS) != [ #(#metadata_hash,)* ] {
Err(::subxt::MetadataError::IncompatibleMetadata)
} else {
Ok(())
}
}
pub fn constants(&'a self) -> ConstantsApi<'a, T> {
ConstantsApi { client: &self.client }
}
@@ -384,12 +424,13 @@ impl RuntimeGenerator {
}
}
/// Return a vector of tuples of variant names and corresponding struct definitions.
pub fn generate_structs_from_variants<'a, F>(
type_gen: &'a TypeGenerator,
type_id: u32,
variant_to_struct_name: F,
error_message_type_name: &str,
) -> Vec<CompositeDef>
) -> Vec<(String, CompositeDef)>
where
F: Fn(&str) -> std::borrow::Cow<str>,
{
@@ -406,14 +447,15 @@ where
&[],
type_gen,
);
CompositeDef::struct_def(
let struct_def = CompositeDef::struct_def(
struct_name.as_ref(),
Default::default(),
fields,
Some(parse_quote!(pub)),
type_gen,
var.docs(),
)
);
(var.name().to_string(), struct_def)
})
.collect()
} else {
@@ -16,6 +16,7 @@
use crate::types::TypeGenerator;
use frame_metadata::{
v14::RuntimeMetadataV14,
PalletMetadata,
PalletStorageMetadata,
StorageEntryMetadata,
@@ -36,6 +37,7 @@ use scale_info::{
};
pub fn generate_storage(
metadata: &RuntimeMetadataV14,
type_gen: &TypeGenerator,
pallet: &PalletMetadata<PortableForm>,
storage: &PalletStorageMetadata<PortableForm>,
@@ -44,7 +46,7 @@ pub fn generate_storage(
let (storage_structs, storage_fns): (Vec<_>, Vec<_>) = storage
.entries
.iter()
.map(|entry| generate_storage_entry_fns(type_gen, pallet, entry))
.map(|entry| generate_storage_entry_fns(metadata, type_gen, pallet, entry))
.unzip();
quote! {
@@ -69,6 +71,7 @@ pub fn generate_storage(
}
fn generate_storage_entry_fns(
metadata: &RuntimeMetadataV14,
type_gen: &TypeGenerator,
pallet: &PalletMetadata<PortableForm>,
storage_entry: &StorageEntryMetadata<PortableForm>,
@@ -205,8 +208,19 @@ fn generate_storage_entry_fns(
}
}
};
let pallet_name = &pallet.name;
let storage_name = &storage_entry.name;
let storage_hash =
subxt_metadata::get_storage_hash(metadata, pallet_name, storage_name)
.unwrap_or_else(|_| {
abort_call_site!(
"Metadata information for the storage entry {}_{} could not be found",
pallet_name,
storage_name
)
});
let fn_name = format_ident!("{}", storage_entry.name.to_snake_case());
let fn_name_iter = format_ident!("{}_iter", fn_name);
let storage_entry_ty = match storage_entry.ty {
@@ -255,9 +269,13 @@ fn generate_storage_entry_fns(
#docs_token
pub async fn #fn_name_iter(
&self,
hash: ::core::option::Option<T::Hash>,
block_hash: ::core::option::Option<T::Hash>,
) -> ::core::result::Result<::subxt::KeyIter<'a, T, #entry_struct_ident #lifetime_param>, ::subxt::BasicError> {
self.client.storage().iter(hash).await
if self.client.metadata().storage_hash::<#entry_struct_ident>()? == [#(#storage_hash,)*] {
self.client.storage().iter(block_hash).await
} else {
Err(::subxt::MetadataError::IncompatibleMetadata.into())
}
}
)
} else {
@@ -280,10 +298,14 @@ fn generate_storage_entry_fns(
pub async fn #fn_name(
&self,
#( #key_args, )*
hash: ::core::option::Option<T::Hash>,
block_hash: ::core::option::Option<T::Hash>,
) -> ::core::result::Result<#return_ty, ::subxt::BasicError> {
let entry = #constructor;
self.client.storage().#fetch(&entry, hash).await
if self.client.metadata().storage_hash::<#entry_struct_ident>()? == [#(#storage_hash,)*] {
let entry = #constructor;
self.client.storage().#fetch(&entry, block_hash).await
} else {
Err(::subxt::MetadataError::IncompatibleMetadata.into())
}
}
#client_iter_fn
@@ -14,7 +14,7 @@
// You should have received a copy of the GNU General Public License
// along with subxt. If not, see <http://www.gnu.org/licenses/>.
//! To run this example, a local polkadot node should be running. Example verified against polkadot 0.9.18-f6d6ab005d-aarch64-macos.
//! To run this example, a local polkadot node should be running. Example verified against polkadot 0.9.18-4542a603cc-aarch64-macos.
//!
//! E.g.
//! ```bash
@@ -47,7 +47,7 @@ async fn main() -> Result<(), Box<dyn std::error::Error>> {
let hash = api
.tx()
.balances()
.transfer(dest, 123_456_789_012_345)
.transfer(dest, 123_456_789_012_345)?
.sign_and_submit_default(&signer)
.await?;
@@ -14,7 +14,7 @@
// You should have received a copy of the GNU General Public License
// along with subxt. If not, see <http://www.gnu.org/licenses/>.
//! To run this example, a local polkadot node should be running. Example verified against polkadot 0.9.18-f6d6ab005d-aarch64-macos.
//! To run this example, a local polkadot node should be running. Example verified against polkadot 0.9.18-4542a603cc-aarch64-macos.
//!
//! E.g.
//! ```bash
@@ -59,7 +59,7 @@ async fn main() -> Result<(), Box<dyn std::error::Error>> {
let hash = api
.tx()
.balances()
.transfer(dest, 123_456_789_012_345)
.transfer(dest, 123_456_789_012_345)?
.sign_and_submit(&signer, tx_params)
.await?;
@@ -14,7 +14,7 @@
// You should have received a copy of the GNU General Public License
// along with subxt. If not, see <http://www.gnu.org/licenses/>.
//! To run this example, a local polkadot node should be running. Example verified against polkadot 0.9.18-f6d6ab005d-aarch64-macos.
//! To run this example, a local polkadot node should be running. Example verified against polkadot 0.9.18-4542a603cc-aarch64-macos.
//!
//! E.g.
//! ```bash
@@ -68,7 +68,7 @@ async fn main() -> Result<(), Box<dyn std::error::Error>> {
let hash = api
.tx()
.balances()
.transfer(dest, 10_000)
.transfer(dest, 10_000)?
.sign_and_submit_default(&signer)
.await?;
@@ -14,7 +14,7 @@
// You should have received a copy of the GNU General Public License
// along with subxt. If not, see <http://www.gnu.org/licenses/>.
//! Example verified against polkadot 0.9.18-f6d6ab005d-aarch64-macos.
//! Example verified against polkadot 0.9.18-4542a603cc-aarch64-macos.
#![allow(clippy::redundant_clone)]
@@ -14,7 +14,7 @@
// You should have received a copy of the GNU General Public License
// along with subxt. If not, see <http://www.gnu.org/licenses/>.
//! To run this example, a local polkadot node should be running. Example verified against polkadot 0.9.18-f6d6ab005d-aarch64-macos.
//! To run this example, a local polkadot node should be running. Example verified against polkadot 0.9.18-4542a603cc-aarch64-macos.
//!
//! E.g.
//! ```bash
@@ -14,7 +14,7 @@
// You should have received a copy of the GNU General Public License
// along with subxt. If not, see <http://www.gnu.org/licenses/>.
//! To run this example, a local polkadot node should be running. Example verified against polkadot 0.9.13-d96d3bea85-aarch64-macos.
//! To run this example, a local polkadot node should be running. Example verified against polkadot 0.9.18-4542a603cc-aarch64-macos.
//!
//! E.g.
//! ```bash
@@ -0,0 +1,52 @@
// Copyright 2019-2022 Parity Technologies (UK) Ltd.
// This file is part of subxt.
//
// subxt is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// subxt is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with subxt. If not, see <http://www.gnu.org/licenses/>.
//! To run this example, a local polkadot node should be running. Example verified against polkadot 0.9.18-4542a603cc-aarch64-macos.
//!
//! E.g.
//! ```bash
//! curl "https://github.com/paritytech/polkadot/releases/download/v0.9.13/polkadot" --output /usr/local/bin/polkadot --location
//! polkadot --dev --tmp
//! ```
use subxt::{
ClientBuilder,
DefaultConfig,
PolkadotExtrinsicParams,
};
#[subxt::subxt(runtime_metadata_path = "examples/polkadot_metadata.scale")]
pub mod polkadot {}
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
env_logger::init();
let api = ClientBuilder::new()
.build()
.await?
.to_runtime_api::<polkadot::RuntimeApi<DefaultConfig, PolkadotExtrinsicParams<DefaultConfig>>>();
// Full metadata validation is not enabled by default; instead, individual calls,
// storage requests and constant accesses are checked at runtime against the node
// metadata to ensure they are compatible with the generated code.
//
// To make sure that all of our statically generated pallets are compatible with the
// runtime node, we can run this check:
api.validate_metadata()?;
Ok(())
}
Binary file not shown.
@@ -14,7 +14,7 @@
// You should have received a copy of the GNU General Public License
// along with subxt. If not, see <http://www.gnu.org/licenses/>.
//! To run this example, a local polkadot node should be running. Example verified against polkadot 0.9.18-f6d6ab005d-aarch64-macos.
//! To run this example, a local polkadot node should be running. Example verified against polkadot 0.9.18-4542a603cc-aarch64-macos.
//!
//! E.g.
//! ```bash
@@ -14,7 +14,7 @@
// You should have received a copy of the GNU General Public License
// along with subxt. If not, see <http://www.gnu.org/licenses/>.
//! To run this example, a local polkadot node should be running. Example verified against polkadot 0.9.18-f6d6ab005d-aarch64-macos.
//! To run this example, a local polkadot node should be running. Example verified against polkadot 0.9.18-4542a603cc-aarch64-macos.
//!
//! E.g.
//! ```bash
@@ -60,7 +60,7 @@ async fn simple_transfer() -> Result<(), Box<dyn std::error::Error>> {
let balance_transfer = api
.tx()
.balances()
.transfer(dest, 10_000)
.transfer(dest, 10_000)?
.sign_and_submit_then_watch_default(&signer)
.await?
.wait_for_finalized_success()
@@ -92,7 +92,7 @@ async fn simple_transfer_separate_events() -> Result<(), Box<dyn std::error::Err
let balance_transfer = api
.tx()
.balances()
.transfer(dest, 10_000)
.transfer(dest, 10_000)?
.sign_and_submit_then_watch_default(&signer)
.await?
.wait_for_finalized()
@@ -143,7 +143,7 @@ async fn handle_transfer_events() -> Result<(), Box<dyn std::error::Error>> {
let mut balance_transfer_progress = api
.tx()
.balances()
.transfer(dest, 10_000)
.transfer(dest, 10_000)?
.sign_and_submit_then_watch_default(&signer)
.await?;
@@ -14,7 +14,7 @@
// You should have received a copy of the GNU General Public License
// along with subxt. If not, see <http://www.gnu.org/licenses/>.
//! To run this example, a local polkadot node should be running. Example verified against polkadot 0.9.18-f6d6ab005d-aarch64-macos.
//! To run this example, a local polkadot node should be running. Example verified against polkadot 0.9.18-4542a603cc-aarch64-macos.
//!
//! E.g.
//! ```bash
@@ -69,6 +69,7 @@ async fn main() -> Result<(), Box<dyn std::error::Error>> {
api.tx()
.balances()
.transfer(AccountKeyring::Bob.to_account_id().into(), transfer_amount)
.expect("compatible transfer call on runtime node")
.sign_and_submit_default(&signer)
.await
.unwrap();
@@ -14,7 +14,7 @@
// You should have received a copy of the GNU General Public License
// along with subxt. If not, see <http://www.gnu.org/licenses/>.
//! To run this example, a local polkadot node should be running. Example verified against polkadot 0.9.18-f6d6ab005d-aarch64-macos.
//! To run this example, a local polkadot node should be running. Example verified against polkadot 0.9.18-4542a603cc-aarch64-macos.
//!
//! E.g.
//! ```bash
@@ -73,6 +73,7 @@ async fn main() -> Result<(), Box<dyn std::error::Error>> {
api.tx()
.balances()
.transfer(AccountKeyring::Bob.to_account_id().into(), 1_000_000_000)
.expect("compatible transfer call on runtime node")
.sign_and_submit_default(&signer)
.await
.unwrap();
@@ -14,7 +14,7 @@
// You should have received a copy of the GNU General Public License
// along with subxt. If not, see <http://www.gnu.org/licenses/>.
//! To run this example, a local polkadot node should be running. Example verified against polkadot 0.9.18-f6d6ab005d-aarch64-macos.
//! To run this example, a local polkadot node should be running. Example verified against polkadot 0.9.18-4542a603cc-aarch64-macos.
//!
//! E.g.
//! ```bash
@@ -74,6 +74,7 @@ async fn main() -> Result<(), Box<dyn std::error::Error>> {
api.tx()
.balances()
.transfer(AccountKeyring::Bob.to_account_id().into(), 1_000_000_000)
.expect("compatible transfer call on runtime node")
.sign_and_submit_default(&signer)
.await
.unwrap();
@@ -0,0 +1,32 @@
[package]
name = "subxt-metadata"
version = "0.20.0"
authors = ["Parity Technologies <admin@parity.io>"]
edition = "2021"
autotests = false
license = "GPL-3.0"
repository = "https://github.com/paritytech/subxt"
documentation = "https://docs.rs/subxt"
homepage = "https://www.parity.io/"
description = "Command line utilities for checking metadata compatibility between nodes."
[dependencies]
codec = { package = "parity-scale-codec", version = "3.0.0", default-features = false, features = ["derive", "full"] }
frame-metadata = "15.0.0"
scale-info = "2.0.0"
sp-core = { version = "6.0.0" }
[dev-dependencies]
bitvec = { version = "1.0.0", default-features = false, features = ["alloc"] }
criterion = "0.3"
scale-info = { version = "2.0.0", features = ["bit-vec"] }
test-runtime = { path = "../test-runtime" }
[lib]
# Without this, libtest cli opts interfere with criterion benches:
bench = false
[[bench]]
name = "bench"
harness = false
@@ -0,0 +1,145 @@
// Copyright 2019-2022 Parity Technologies (UK) Ltd.
// This file is part of subxt.
//
// subxt is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// subxt is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with subxt. If not, see <http://www.gnu.org/licenses/>.
use codec::Decode;
use criterion::*;
use frame_metadata::{
RuntimeMetadata::V14,
RuntimeMetadataPrefixed,
RuntimeMetadataV14,
};
use scale_info::{
form::PortableForm,
TypeDef,
TypeDefVariant,
};
use subxt_metadata::{
get_call_hash,
get_constant_hash,
get_metadata_hash,
get_pallet_hash,
get_storage_hash,
};
fn load_metadata() -> RuntimeMetadataV14 {
let bytes = test_runtime::METADATA;
let meta: RuntimeMetadataPrefixed =
Decode::decode(&mut &*bytes).expect("Cannot decode scale metadata");
match meta.1 {
V14(v14) => v14,
_ => panic!("Unsupported metadata version {:?}", meta.1),
}
}
fn expect_variant(def: &TypeDef<PortableForm>) -> &TypeDefVariant<PortableForm> {
match def {
TypeDef::Variant(variant) => variant,
_ => panic!("Expected a variant type, got {def:?}"),
}
}
fn bench_get_metadata_hash(c: &mut Criterion) {
let metadata = load_metadata();
c.bench_function("get_metadata_hash", |b| {
b.iter(|| get_metadata_hash(&metadata))
});
}
fn bench_get_pallet_hash(c: &mut Criterion) {
let metadata = load_metadata();
let mut group = c.benchmark_group("get_pallet_hash");
for pallet in metadata.pallets.iter() {
let pallet_name = &pallet.name;
group.bench_function(pallet_name, |b| {
b.iter(|| get_pallet_hash(&metadata.types, pallet))
});
}
}
fn bench_get_call_hash(c: &mut Criterion) {
let metadata = load_metadata();
let mut group = c.benchmark_group("get_call_hash");
for pallet in metadata.pallets.iter() {
let pallet_name = &pallet.name;
let call_type_id = match &pallet.calls {
Some(calls) => calls.ty.id(),
None => continue,
};
let call_type = metadata.types.resolve(call_type_id).unwrap();
let variants = expect_variant(call_type.type_def());
for variant in variants.variants() {
let call_name = variant.name();
let bench_name = format!("{pallet_name}/{call_name}");
group.bench_function(&bench_name, |b| {
b.iter(|| get_call_hash(&metadata, &pallet.name, call_name))
});
}
}
}
fn bench_get_constant_hash(c: &mut Criterion) {
let metadata = load_metadata();
let mut group = c.benchmark_group("get_constant_hash");
for pallet in metadata.pallets.iter() {
let pallet_name = &pallet.name;
for constant in &pallet.constants {
let constant_name = &constant.name;
let bench_name = format!("{pallet_name}/{constant_name}");
group.bench_function(&bench_name, |b| {
b.iter(|| get_constant_hash(&metadata, &pallet.name, constant_name))
});
}
}
}
fn bench_get_storage_hash(c: &mut Criterion) {
let metadata = load_metadata();
let mut group = c.benchmark_group("get_storage_hash");
for pallet in metadata.pallets.iter() {
let pallet_name = &pallet.name;
let storage_entries = match &pallet.storage {
Some(storage) => &storage.entries,
None => continue,
};
for storage in storage_entries {
let storage_name = &storage.name;
let bench_name = format!("{pallet_name}/{storage_name}");
group.bench_function(&bench_name, |b| {
b.iter(|| get_storage_hash(&metadata, &pallet.name, storage_name))
});
}
}
}
criterion_group!(
name = benches;
config = Criterion::default();
targets =
bench_get_metadata_hash,
bench_get_pallet_hash,
bench_get_call_hash,
bench_get_constant_hash,
bench_get_storage_hash,
);
criterion_main!(benches);
@@ -0,0 +1,883 @@
// Copyright 2019-2022 Parity Technologies (UK) Ltd.
// This file is part of subxt.
//
// subxt is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// subxt is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with subxt. If not, see <http://www.gnu.org/licenses/>.
use frame_metadata::{
ExtrinsicMetadata,
RuntimeMetadataV14,
StorageEntryMetadata,
StorageEntryType,
};
use scale_info::{
form::PortableForm,
Field,
PortableRegistry,
TypeDef,
Variant,
};
use std::collections::HashSet;
/// Internal byte representation for various metadata types utilized for
/// generating deterministic hashes between different rust versions.
#[repr(u8)]
enum TypeBeingHashed {
Composite,
Variant,
Sequence,
Array,
Tuple,
Primitive,
Compact,
BitSequence,
}
/// Hashing function utilized internally.
fn hash(bytes: &[u8]) -> [u8; 32] {
sp_core::hashing::twox_256(bytes)
}
/// XOR two hashes together. If we have two pseudorandom hashes, then this will
/// lead to another pseudorandom value. If there is potentially some pattern to
/// the hashes we are xoring (eg we might be xoring the same hashes a few times),
/// prefer `hash_hashes` to give us stronger pseudorandomness guarantees.
fn xor(a: [u8; 32], b: [u8; 32]) -> [u8; 32] {
let mut out = [0u8; 32];
for (idx, (a, b)) in a.into_iter().zip(b).enumerate() {
out[idx] = a ^ b;
}
out
}
/// Combine two hashes or hash-like sets of bytes together into a single hash.
/// `xor` is OK for one-off combinations of bytes, but if we are merging
/// potentially identical hashes, this is a safer way to ensure the result is
/// unique.
fn hash_hashes(a: [u8; 32], b: [u8; 32]) -> [u8; 32] {
let mut out = [0u8; 32 * 2];
for (idx, byte) in a.into_iter().chain(b).enumerate() {
out[idx] = byte;
}
hash(&out)
}
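The distinction between `xor` and `hash_hashes` above can be illustrated with a short, self-contained sketch. The `xor` helper here mirrors the one in the diff, while the real hashing (twox_256 via `sp_core`) is only described in comments so the example needs no external crates:

```rust
// Why `hash_hashes` is preferred over `xor` when merging potentially
// identical or patterned hashes.

fn xor(a: [u8; 32], b: [u8; 32]) -> [u8; 32] {
    let mut out = [0u8; 32];
    for i in 0..32 {
        out[i] = a[i] ^ b[i];
    }
    out
}

fn main() {
    // XOR-ing a hash with itself cancels to all zeroes, so two children with
    // identical hashes would vanish from the combined result entirely:
    let h = [0xABu8; 32];
    assert_eq!(xor(h, h), [0u8; 32]);

    // XOR is also order-insensitive: swapping children gives the same result.
    let a = [0x01u8; 32];
    let b = [0x02u8; 32];
    assert_eq!(xor(a, b), xor(b, a));

    // `hash_hashes` instead concatenates the two inputs and re-hashes the
    // 64 bytes, which is neither self-cancelling nor order-insensitive.
}
```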
/// Obtain the hash representation of a `scale_info::Field`.
fn get_field_hash(
registry: &PortableRegistry,
field: &Field<PortableForm>,
visited_ids: &mut HashSet<u32>,
) -> [u8; 32] {
let mut bytes = get_type_hash(registry, field.ty().id(), visited_ids);
// XOR the field name and type name into the type hash, if present.
if let Some(name) = field.name() {
bytes = xor(bytes, hash(name.as_bytes()));
}
if let Some(name) = field.type_name() {
bytes = xor(bytes, hash(name.as_bytes()));
}
bytes
}
/// Obtain the hash representation of a `scale_info::Variant`.
fn get_variant_hash(
registry: &PortableRegistry,
var: &Variant<PortableForm>,
visited_ids: &mut HashSet<u32>,
) -> [u8; 32] {
// Start from the hash of the variant name, then fold in each field's hash.
let mut bytes = hash(var.name().as_bytes());
for field in var.fields() {
bytes = hash_hashes(bytes, get_field_hash(registry, field, visited_ids))
}
bytes
}
/// Obtain the hash representation of a `scale_info::TypeDef`.
fn get_type_def_hash(
registry: &PortableRegistry,
ty_def: &TypeDef<PortableForm>,
visited_ids: &mut HashSet<u32>,
) -> [u8; 32] {
match ty_def {
TypeDef::Composite(composite) => {
let mut bytes = hash(&[TypeBeingHashed::Composite as u8]);
for field in composite.fields() {
bytes = hash_hashes(bytes, get_field_hash(registry, field, visited_ids));
}
bytes
}
TypeDef::Variant(variant) => {
let mut bytes = hash(&[TypeBeingHashed::Variant as u8]);
for var in variant.variants().iter() {
bytes = hash_hashes(bytes, get_variant_hash(registry, var, visited_ids));
}
bytes
}
TypeDef::Sequence(sequence) => {
let bytes = hash(&[TypeBeingHashed::Sequence as u8]);
xor(
bytes,
get_type_hash(registry, sequence.type_param().id(), visited_ids),
)
}
TypeDef::Array(array) => {
// Take length into account; different length must lead to different hash.
let len_bytes = array.len().to_be_bytes();
let bytes = hash(&[
TypeBeingHashed::Array as u8,
len_bytes[0],
len_bytes[1],
len_bytes[2],
len_bytes[3],
]);
xor(
bytes,
get_type_hash(registry, array.type_param().id(), visited_ids),
)
}
TypeDef::Tuple(tuple) => {
let mut bytes = hash(&[TypeBeingHashed::Tuple as u8]);
for field in tuple.fields() {
bytes =
hash_hashes(bytes, get_type_hash(registry, field.id(), visited_ids));
}
bytes
}
TypeDef::Primitive(primitive) => {
// Cloning the 'primitive' type should essentially be a copy.
hash(&[TypeBeingHashed::Primitive as u8, primitive.clone() as u8])
}
TypeDef::Compact(compact) => {
let bytes = hash(&[TypeBeingHashed::Compact as u8]);
xor(
bytes,
get_type_hash(registry, compact.type_param().id(), visited_ids),
)
}
TypeDef::BitSequence(bitseq) => {
let mut bytes = hash(&[TypeBeingHashed::BitSequence as u8]);
bytes = xor(
bytes,
get_type_hash(registry, bitseq.bit_order_type().id(), visited_ids),
);
bytes = xor(
bytes,
get_type_hash(registry, bitseq.bit_store_type().id(), visited_ids),
);
bytes
}
}
}
/// Obtain the hash representation of a `scale_info::Type` identified by id.
fn get_type_hash(
registry: &PortableRegistry,
id: u32,
visited_ids: &mut HashSet<u32>,
) -> [u8; 32] {
// Guard against recursive types and return a fixed arbitrary hash
if !visited_ids.insert(id) {
return hash(&[123u8])
}
let ty = registry.resolve(id).unwrap();
get_type_def_hash(registry, ty.type_def(), visited_ids)
}
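The cycle guard in `get_type_hash` relies on `HashSet::insert` returning `false` for an id that was already visited. A minimal stand-alone sketch of that mechanism (the `visit` function and the `depth` counter are illustrative, not part of the real code):

```rust
// Recursion guard sketch: the first visit of an id inserts it into the set
// and recurses; any revisit short-circuits, so mutually recursive types like
// `A { b: B }` / `B { a: A }` terminate with a fixed sentinel hash.
use std::collections::HashSet;

fn visit(id: u32, visited: &mut HashSet<u32>, depth: &mut u32) -> bool {
    if !visited.insert(id) {
        return false; // already seen: stop recursing, as get_type_hash does
    }
    *depth += 1;
    // Pretend the type at `id` references itself.
    visit(id, visited, depth)
}

fn main() {
    let mut visited = HashSet::new();
    let mut depth = 0u32;
    visit(7, &mut visited, &mut depth);
    // The self-referential "type" was only expanded once.
    assert_eq!(depth, 1);
}
```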
/// Obtain the hash representation of a `frame_metadata::ExtrinsicMetadata`.
fn get_extrinsic_hash(
registry: &PortableRegistry,
extrinsic: &ExtrinsicMetadata<PortableForm>,
) -> [u8; 32] {
let mut visited_ids = HashSet::<u32>::new();
let mut bytes = get_type_hash(registry, extrinsic.ty.id(), &mut visited_ids);
bytes = xor(bytes, hash(&[extrinsic.version]));
for signed_extension in extrinsic.signed_extensions.iter() {
let mut ext_bytes = hash(signed_extension.identifier.as_bytes());
ext_bytes = xor(
ext_bytes,
get_type_hash(registry, signed_extension.ty.id(), &mut visited_ids),
);
ext_bytes = xor(
ext_bytes,
get_type_hash(
registry,
signed_extension.additional_signed.id(),
&mut visited_ids,
),
);
bytes = hash_hashes(bytes, ext_bytes);
}
bytes
}
/// Get the hash corresponding to a single storage entry.
fn get_storage_entry_hash(
registry: &PortableRegistry,
entry: &StorageEntryMetadata<PortableForm>,
visited_ids: &mut HashSet<u32>,
) -> [u8; 32] {
let mut bytes = hash(entry.name.as_bytes());
// Cloning 'entry.modifier' should essentially be a copy.
bytes = xor(bytes, hash(&[entry.modifier.clone() as u8]));
bytes = xor(bytes, hash(&entry.default));
match &entry.ty {
StorageEntryType::Plain(ty) => {
bytes = xor(bytes, get_type_hash(registry, ty.id(), visited_ids));
}
StorageEntryType::Map {
hashers,
key,
value,
} => {
for hasher in hashers {
// Cloning the hasher should essentially be a copy.
bytes = hash_hashes(bytes, [hasher.clone() as u8; 32]);
}
bytes = xor(bytes, get_type_hash(registry, key.id(), visited_ids));
bytes = xor(bytes, get_type_hash(registry, value.id(), visited_ids));
}
}
bytes
}
/// Obtain the hash for a specific storage item, or an error if it's not found.
pub fn get_storage_hash(
metadata: &RuntimeMetadataV14,
pallet_name: &str,
storage_name: &str,
) -> Result<[u8; 32], NotFound> {
let pallet = metadata
.pallets
.iter()
.find(|p| p.name == pallet_name)
.ok_or(NotFound::Pallet)?;
let storage = pallet.storage.as_ref().ok_or(NotFound::Item)?;
let entry = storage
.entries
.iter()
.find(|s| s.name == storage_name)
.ok_or(NotFound::Item)?;
let hash = get_storage_entry_hash(&metadata.types, entry, &mut HashSet::new());
Ok(hash)
}
/// Obtain the hash for a specific constant, or an error if it's not found.
pub fn get_constant_hash(
metadata: &RuntimeMetadataV14,
pallet_name: &str,
constant_name: &str,
) -> Result<[u8; 32], NotFound> {
let pallet = metadata
.pallets
.iter()
.find(|p| p.name == pallet_name)
.ok_or(NotFound::Pallet)?;
let constant = pallet
.constants
.iter()
.find(|c| c.name == constant_name)
.ok_or(NotFound::Item)?;
let mut bytes = get_type_hash(&metadata.types, constant.ty.id(), &mut HashSet::new());
bytes = xor(bytes, hash(constant.name.as_bytes()));
bytes = xor(bytes, hash(&constant.value));
Ok(bytes)
}
/// Obtain the hash for a specific call, or an error if it's not found.
pub fn get_call_hash(
metadata: &RuntimeMetadataV14,
pallet_name: &str,
call_name: &str,
) -> Result<[u8; 32], NotFound> {
let pallet = metadata
.pallets
.iter()
.find(|p| p.name == pallet_name)
.ok_or(NotFound::Pallet)?;
let call_id = pallet.calls.as_ref().ok_or(NotFound::Item)?.ty.id();
let call_ty = metadata.types.resolve(call_id).ok_or(NotFound::Item)?;
let call_variants = match call_ty.type_def() {
TypeDef::Variant(variant) => variant.variants(),
_ => return Err(NotFound::Item),
};
let variant = call_variants
.iter()
.find(|v| v.name() == call_name)
.ok_or(NotFound::Item)?;
// hash the specific variant representing the call we are interested in.
let hash = get_variant_hash(&metadata.types, variant, &mut HashSet::new());
Ok(hash)
}
/// Obtain the hash representation of a `frame_metadata::PalletMetadata`.
pub fn get_pallet_hash(
registry: &PortableRegistry,
pallet: &frame_metadata::PalletMetadata<PortableForm>,
) -> [u8; 32] {
// Begin with some arbitrary hash (we don't really care what it is).
let mut bytes = hash(&[19]);
let mut visited_ids = HashSet::<u32>::new();
if let Some(calls) = &pallet.calls {
bytes = xor(
bytes,
get_type_hash(registry, calls.ty.id(), &mut visited_ids),
);
}
if let Some(ref event) = pallet.event {
bytes = xor(
bytes,
get_type_hash(registry, event.ty.id(), &mut visited_ids),
);
}
for constant in pallet.constants.iter() {
bytes = xor(bytes, hash(constant.name.as_bytes()));
bytes = xor(bytes, hash(&constant.value));
bytes = xor(
bytes,
get_type_hash(registry, constant.ty.id(), &mut visited_ids),
);
}
if let Some(ref error) = pallet.error {
bytes = xor(
bytes,
get_type_hash(registry, error.ty.id(), &mut visited_ids),
);
}
if let Some(ref storage) = pallet.storage {
bytes = xor(bytes, hash(storage.prefix.as_bytes()));
for entry in storage.entries.iter() {
bytes = hash_hashes(
bytes,
get_storage_entry_hash(registry, entry, &mut visited_ids),
);
}
}
bytes
}
/// Obtain the hash representation of a `frame_metadata::RuntimeMetadataV14`.
pub fn get_metadata_hash(metadata: &RuntimeMetadataV14) -> [u8; 32] {
// Collect all pairs of (pallet name, pallet hash).
let mut pallets: Vec<(&str, [u8; 32])> = metadata
.pallets
.iter()
.map(|pallet| {
let hash = get_pallet_hash(&metadata.types, pallet);
(&*pallet.name, hash)
})
.collect();
// Sort by pallet name to create a deterministic representation of the underlying metadata.
pallets.sort_by_key(|&(name, _hash)| name);
// Note: pallet name is excluded from hashing.
// Each pallet has a hash of 32 bytes, and the vector is extended with
// extrinsic hash and metadata ty hash (2 * 32).
let mut bytes = Vec::with_capacity(pallets.len() * 32 + 64);
for (_, hash) in pallets.iter() {
bytes.extend(hash)
}
bytes.extend(get_extrinsic_hash(&metadata.types, &metadata.extrinsic));
let mut visited_ids = HashSet::<u32>::new();
bytes.extend(get_type_hash(
&metadata.types,
metadata.ty.id(),
&mut visited_ids,
));
hash(&bytes)
}
/// Obtain the hash representation of a `frame_metadata::RuntimeMetadataV14`
/// hashing only the provided pallets.
///
/// **Note:** This is similar to `get_metadata_hash`, but performs hashing only of the provided
/// pallets if they exist. There are cases where the runtime metadata contains a subset of
/// the pallets from the static metadata. In those cases, the static API can communicate
/// properly with the subset of pallets from the runtime node.
pub fn get_metadata_per_pallet_hash<T: AsRef<str>>(
metadata: &RuntimeMetadataV14,
pallets: &[T],
) -> [u8; 32] {
// Collect all pairs of (pallet name, pallet hash).
let mut pallets_hashed: Vec<(&str, [u8; 32])> = metadata
.pallets
.iter()
.filter_map(|pallet| {
// Make sure to filter just the pallets we are interested in.
let in_pallet = pallets
.iter()
.any(|pallet_ref| pallet_ref.as_ref() == pallet.name);
if in_pallet {
let hash = get_pallet_hash(&metadata.types, pallet);
Some((&*pallet.name, hash))
} else {
None
}
})
.collect();
// Sort by pallet name to create a deterministic representation of the underlying metadata.
pallets_hashed.sort_by_key(|&(name, _hash)| name);
// Note: the pallet name is excluded from hashing.
// Each pallet contributes a hash of 32 bytes.
let mut bytes = Vec::with_capacity(pallets_hashed.len() * 32);
for (_, hash) in pallets_hashed.iter() {
bytes.extend(hash)
}
hash(&bytes)
}
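The determinism argument used by both `get_metadata_hash` and `get_metadata_per_pallet_hash` is that sorting the `(name, hash)` pairs by pallet name before concatenation makes the digest independent of registration order. A simplified stand-in (4-byte hashes, no final re-hash) demonstrates the property:

```rust
// Sorting by pallet name makes the concatenated bytes, and hence the final
// hash, independent of the order in which pallets were registered.
fn combine(mut pallets: Vec<(&str, [u8; 4])>) -> Vec<u8> {
    pallets.sort_by_key(|&(name, _)| name);
    let mut bytes = Vec::new();
    for (_, h) in pallets {
        bytes.extend(h);
    }
    bytes // the real code finishes with hash(&bytes)
}

fn main() {
    let a = ("Balances", [1, 2, 3, 4]);
    let b = ("System", [5, 6, 7, 8]);
    // Registration order does not matter once sorted by name:
    assert_eq!(combine(vec![a, b]), combine(vec![b, a]));
}
```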
/// An error returned if we attempt to get the hash for a specific call, constant
/// or storage item that doesn't exist.
#[derive(Clone, Debug)]
pub enum NotFound {
Pallet,
Item,
}
#[cfg(test)]
mod tests {
use super::*;
use bitvec::{
order::Lsb0,
vec::BitVec,
};
use frame_metadata::{
ExtrinsicMetadata,
PalletCallMetadata,
PalletConstantMetadata,
PalletErrorMetadata,
PalletEventMetadata,
PalletMetadata,
PalletStorageMetadata,
RuntimeMetadataV14,
StorageEntryMetadata,
StorageEntryModifier,
};
use scale_info::meta_type;
// Define recursive types.
#[allow(dead_code)]
#[derive(scale_info::TypeInfo)]
struct A {
pub b: Box<B>,
}
#[allow(dead_code)]
#[derive(scale_info::TypeInfo)]
struct B {
pub a: Box<A>,
}
// Define TypeDef supported types.
#[allow(dead_code)]
#[derive(scale_info::TypeInfo)]
// TypeDef::Composite with TypeDef::Array with TypeDef::Primitive.
struct AccountId32([u8; 32]);
#[allow(dead_code)]
#[derive(scale_info::TypeInfo)]
// TypeDef::Variant.
enum DigestItem {
PreRuntime(
// TypeDef::Array with primitive.
[::core::primitive::u8; 4usize],
// TypeDef::Sequence.
::std::vec::Vec<::core::primitive::u8>,
),
Other(::std::vec::Vec<::core::primitive::u8>),
// Nested TypeDef::Tuple.
RuntimeEnvironmentUpdated(((i8, i16), (u32, u64))),
// TypeDef::Compact.
Index(#[codec(compact)] ::core::primitive::u8),
// TypeDef::BitSequence.
BitSeq(BitVec<u8, Lsb0>),
}
#[allow(dead_code)]
#[derive(scale_info::TypeInfo)]
// Ensure recursive types and TypeDef variants are captured.
struct MetadataTestType {
recursive: A,
composite: AccountId32,
type_def: DigestItem,
}
#[allow(dead_code)]
#[derive(scale_info::TypeInfo)]
// Simulate a PalletCallMetadata.
enum Call {
#[codec(index = 0)]
FillBlock { ratio: AccountId32 },
#[codec(index = 1)]
Remark { remark: DigestItem },
}
fn build_default_extrinsic() -> ExtrinsicMetadata {
ExtrinsicMetadata {
ty: meta_type::<()>(),
version: 0,
signed_extensions: vec![],
}
}
fn default_pallet() -> PalletMetadata {
PalletMetadata {
name: "Test",
storage: None,
calls: None,
event: None,
constants: vec![],
error: None,
index: 0,
}
}
fn build_default_pallets() -> Vec<PalletMetadata> {
vec![
PalletMetadata {
name: "First",
calls: Some(PalletCallMetadata {
ty: meta_type::<MetadataTestType>(),
}),
..default_pallet()
},
PalletMetadata {
name: "Second",
index: 1,
calls: Some(PalletCallMetadata {
ty: meta_type::<(DigestItem, AccountId32, A)>(),
}),
..default_pallet()
},
]
}
fn pallets_to_metadata(pallets: Vec<PalletMetadata>) -> RuntimeMetadataV14 {
RuntimeMetadataV14::new(pallets, build_default_extrinsic(), meta_type::<()>())
}
#[test]
fn different_pallet_index() {
let pallets = build_default_pallets();
let mut pallets_swap = pallets.clone();
let metadata = pallets_to_metadata(pallets);
// Change the order in which pallets are registered.
pallets_swap.swap(0, 1);
pallets_swap[0].index = 0;
pallets_swap[1].index = 1;
let metadata_swap = pallets_to_metadata(pallets_swap);
let hash = get_metadata_hash(&metadata);
let hash_swap = get_metadata_hash(&metadata_swap);
// Changing pallet order must still result in a deterministic unique hash.
assert_eq!(hash, hash_swap);
}
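The order-independence asserted above can be obtained by combining per-pallet digests with a commutative operation. A minimal sketch of the idea (XOR-combining `u64` digests from the standard library hasher — not subxt's actual algorithm, which produces 32-byte hashes):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Hash a single pallet-like item (name + index) to a u64 digest.
fn item_hash(name: &str, index: u8) -> u64 {
    let mut h = DefaultHasher::new();
    name.hash(&mut h);
    index.hash(&mut h);
    h.finish()
}

// XOR is commutative and associative, so the registration order
// of the items cannot affect the combined digest.
fn combined_hash(items: &[(&str, u8)]) -> u64 {
    items.iter().fold(0u64, |acc, (n, i)| acc ^ item_hash(n, *i))
}

fn main() {
    let a = combined_hash(&[("First", 0), ("Second", 1)]);
    let b = combined_hash(&[("Second", 1), ("First", 0)]);
    assert_eq!(a, b); // swapping registration order leaves the hash unchanged
}
```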
#[test]
fn recursive_type() {
let mut pallet = default_pallet();
pallet.calls = Some(PalletCallMetadata {
ty: meta_type::<A>(),
});
let metadata = pallets_to_metadata(vec![pallet]);
// Check hashing algorithm finishes on a recursive type.
get_metadata_hash(&metadata);
}
#[test]
/// Ensure correctness of hashing when parsing the `metadata.types`.
///
/// Having a recursive structure `A: { B }` and `B: { A }` registered in different order
/// `types: { { id: 0, A }, { id: 1, B } }` and `types: { { id: 0, B }, { id: 1, A } }`
/// must produce the same deterministic hashing value.
fn recursive_types_different_order() {
let mut pallets = build_default_pallets();
pallets[0].calls = Some(PalletCallMetadata {
ty: meta_type::<A>(),
});
pallets[1].calls = Some(PalletCallMetadata {
ty: meta_type::<B>(),
});
pallets[1].index = 1;
let mut pallets_swap = pallets.clone();
let metadata = pallets_to_metadata(pallets);
pallets_swap.swap(0, 1);
pallets_swap[0].index = 0;
pallets_swap[1].index = 1;
let metadata_swap = pallets_to_metadata(pallets_swap);
let hash = get_metadata_hash(&metadata);
let hash_swap = get_metadata_hash(&metadata_swap);
// Changing pallet order must still result in a deterministic unique hash.
assert_eq!(hash, hash_swap);
}
#[test]
fn pallet_hash_correctness() {
let compare_pallets_hash = |lhs: &PalletMetadata, rhs: &PalletMetadata| {
let metadata = pallets_to_metadata(vec![lhs.clone()]);
let hash = get_metadata_hash(&metadata);
let metadata = pallets_to_metadata(vec![rhs.clone()]);
let new_hash = get_metadata_hash(&metadata);
assert_ne!(hash, new_hash);
};
// Build metadata progressively from an empty pallet to a fully populated pallet.
let mut pallet = default_pallet();
let pallet_lhs = pallet.clone();
pallet.storage = Some(PalletStorageMetadata {
prefix: "Storage",
entries: vec![StorageEntryMetadata {
name: "BlockWeight",
modifier: StorageEntryModifier::Default,
ty: StorageEntryType::Plain(meta_type::<u8>()),
default: vec![],
docs: vec![],
}],
});
compare_pallets_hash(&pallet_lhs, &pallet);
let pallet_lhs = pallet.clone();
// Calls are similar to:
//
// ```
// pub enum Call {
// call_name_01 { arg01: type },
// call_name_02 { arg01: type, arg02: type }
// }
// ```
pallet.calls = Some(PalletCallMetadata {
ty: meta_type::<Call>(),
});
compare_pallets_hash(&pallet_lhs, &pallet);
let pallet_lhs = pallet.clone();
// Events are similar to Calls.
pallet.event = Some(PalletEventMetadata {
ty: meta_type::<Call>(),
});
compare_pallets_hash(&pallet_lhs, &pallet);
let pallet_lhs = pallet.clone();
pallet.constants = vec![PalletConstantMetadata {
name: "BlockHashCount",
ty: meta_type::<u64>(),
value: vec![96u8, 0, 0, 0],
docs: vec![],
}];
compare_pallets_hash(&pallet_lhs, &pallet);
let pallet_lhs = pallet.clone();
pallet.error = Some(PalletErrorMetadata {
ty: meta_type::<MetadataTestType>(),
});
compare_pallets_hash(&pallet_lhs, &pallet);
}
#[test]
fn metadata_per_pallet_hash_correctness() {
let pallets = build_default_pallets();
// Build metadata with just the first pallet.
let metadata_one = pallets_to_metadata(vec![pallets[0].clone()]);
// Build metadata with both pallets.
let metadata_both = pallets_to_metadata(pallets);
// Hashing will ignore any non-existent pallet and return the same result.
let hash = get_metadata_per_pallet_hash(&metadata_one, &["First", "Second"]);
let hash_rhs = get_metadata_per_pallet_hash(&metadata_one, &["First"]);
assert_eq!(hash, hash_rhs, "hashing should ignore non-existent pallets");
// Hashing one pallet from metadata with 2 pallets inserted will ignore the second pallet.
let hash_second = get_metadata_per_pallet_hash(&metadata_both, &["First"]);
assert_eq!(
hash_second, hash,
"hashing one pallet should ignore the others"
);
// Check hashing with all pallets.
let hash_second =
get_metadata_per_pallet_hash(&metadata_both, &["First", "Second"]);
assert_ne!(hash_second, hash, "hashing both pallets should produce a different result from hashing just one pallet");
}
#[test]
fn field_semantic_changes() {
// Get a hash representation of the provided meta type,
// inserted in the context of pallet metadata call.
let to_hash = |meta_ty| {
let pallet = PalletMetadata {
calls: Some(PalletCallMetadata { ty: meta_ty }),
..default_pallet()
};
let metadata = pallets_to_metadata(vec![pallet]);
get_metadata_hash(&metadata)
};
#[allow(dead_code)]
#[derive(scale_info::TypeInfo)]
enum EnumFieldNotNamedA {
First(u8),
}
#[allow(dead_code)]
#[derive(scale_info::TypeInfo)]
enum EnumFieldNotNamedB {
First(u8),
}
// Semantic hashing applies only to field names, not to the names of the
// structs / enums themselves. This is considered a good tradeoff for hashing
// performance, because renaming a struct or enum is less likely to be a
// breaking change.
// Even though the enums have different names, `EnumFieldNotNamedA` and
// `EnumFieldNotNamedB`, they are equal in meaning (i.e., both contain `First(u8)`).
assert_eq!(
to_hash(meta_type::<EnumFieldNotNamedA>()),
to_hash(meta_type::<EnumFieldNotNamedB>())
);
#[allow(dead_code)]
#[derive(scale_info::TypeInfo)]
struct StructFieldNotNamedA([u8; 32]);
#[allow(dead_code)]
#[derive(scale_info::TypeInfo)]
struct StructFieldNotNamedSecondB([u8; 32]);
// Similarly to enums, semantic changes apply only inside the structure fields.
assert_eq!(
to_hash(meta_type::<StructFieldNotNamedA>()),
to_hash(meta_type::<StructFieldNotNamedSecondB>())
);
#[allow(dead_code)]
#[derive(scale_info::TypeInfo)]
enum EnumFieldNotNamed {
First(u8),
}
#[allow(dead_code)]
#[derive(scale_info::TypeInfo)]
enum EnumFieldNotNamedSecond {
Second(u8),
}
// The enums are binary compatible, but they carry different semantic
// meanings: `First(u8)` versus `Second(u8)`.
assert_ne!(
to_hash(meta_type::<EnumFieldNotNamed>()),
to_hash(meta_type::<EnumFieldNotNamedSecond>())
);
#[allow(dead_code)]
#[derive(scale_info::TypeInfo)]
enum EnumFieldNamed {
First { a: u8 },
}
#[allow(dead_code)]
#[derive(scale_info::TypeInfo)]
enum EnumFieldNamedSecond {
First { b: u8 },
}
// Named fields contain a different semantic meaning ('a' and 'b').
assert_ne!(
to_hash(meta_type::<EnumFieldNamed>()),
to_hash(meta_type::<EnumFieldNamedSecond>())
);
#[allow(dead_code)]
#[derive(scale_info::TypeInfo)]
struct StructFieldNamed {
a: u32,
}
#[allow(dead_code)]
#[derive(scale_info::TypeInfo)]
struct StructFieldNamedSecond {
b: u32,
}
// Similar to enums, struct fields contain a different semantic meaning ('a' and 'b').
assert_ne!(
to_hash(meta_type::<StructFieldNamed>()),
to_hash(meta_type::<StructFieldNamedSecond>())
);
#[allow(dead_code)]
#[derive(scale_info::TypeInfo)]
enum EnumField {
First,
// Field is unnamed, but has type name `u8`.
Second(u8),
// Field is named and has type name `u8`.
Third { named: u8 },
}
#[allow(dead_code)]
#[derive(scale_info::TypeInfo)]
enum EnumFieldSwap {
Second(u8),
First,
Third { named: u8 },
}
// Swapping the registration order should also be taken into account.
assert_ne!(
to_hash(meta_type::<EnumField>()),
to_hash(meta_type::<EnumFieldSwap>())
);
#[allow(dead_code)]
#[derive(scale_info::TypeInfo)]
struct StructField {
a: u32,
b: u32,
}
#[allow(dead_code)]
#[derive(scale_info::TypeInfo)]
struct StructFieldSwap {
b: u32,
a: u32,
}
assert_ne!(
to_hash(meta_type::<StructField>()),
to_hash(meta_type::<StructFieldSwap>())
);
}
}
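The field-semantics rules exercised above boil down to one property: the hasher folds in field names and the fields' structural types, but not the containing type's own name. A hypothetical sketch of that rule (`FieldRepr` and the stand-in `type_hash` values are illustrative, not subxt's types):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// A field as seen by the hasher: an optional field name plus a stand-in
// for the resolved type's structural hash. The *containing* type's name
// is deliberately absent, so renaming a struct/enum does not change the hash.
struct FieldRepr<'a> {
    name: Option<&'a str>,
    type_hash: u64,
}

fn hash_fields(fields: &[FieldRepr]) -> u64 {
    let mut h = DefaultHasher::new();
    for f in fields {
        f.name.hash(&mut h); // field names are semantic...
        f.type_hash.hash(&mut h); // ...and so is each field's structural type
    }
    h.finish()
}

fn main() {
    let u8_hash = 1; // stand-in structural hash for `u8`
    // `First { a: u8 }` vs `First { b: u8 }`: field names differ, hashes differ.
    let a = hash_fields(&[FieldRepr { name: Some("a"), type_hash: u8_hash }]);
    let b = hash_fields(&[FieldRepr { name: Some("b"), type_hash: u8_hash }]);
    assert_ne!(a, b);
}
```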
@@ -25,8 +25,10 @@ log = "0.4.14"
serde = { version = "1.0.124", features = ["derive"] }
serde_json = "1.0.64"
thiserror = "1.0.24"
parking_lot = "0.12.0"
subxt-macro = { version = "0.20.0", path = "../macro" }
subxt-metadata = { version = "0.20.0", path = "../metadata" }
sp-core = { version = "6.0.0", default-features = false }
sp-runtime = "6.0.0"
@@ -46,13 +46,13 @@ use codec::{
Encode,
};
use derivative::Derivative;
use std::sync::Arc;
/// ClientBuilder for constructing a Client.
#[derive(Default)]
pub struct ClientBuilder {
url: Option<String>,
client: Option<RpcClient>,
metadata: Option<Metadata>,
page_size: Option<u32>,
}
@@ -62,6 +62,7 @@ impl ClientBuilder {
Self {
url: None,
client: None,
metadata: None,
page_size: None,
}
}
@@ -84,6 +85,15 @@ impl ClientBuilder {
self
}
/// Set the metadata.
///
/// *Note:* Metadata will no longer be downloaded from the runtime node.
#[cfg(integration_tests)]
pub fn set_metadata(mut self, metadata: Metadata) -> Self {
self.metadata = Some(metadata);
self
}
/// Creates a new Client.
pub async fn build<T: Config>(self) -> Result<Client<T>, BasicError> {
let client = if let Some(client) = self.client {
@@ -93,19 +103,23 @@ impl ClientBuilder {
crate::rpc::ws_client(url).await?
};
let rpc = Rpc::new(client);
let (metadata, genesis_hash, runtime_version, properties) = future::join4(
rpc.metadata(),
let (genesis_hash, runtime_version, properties) = future::join3(
rpc.genesis_hash(),
rpc.runtime_version(None),
rpc.system_properties(),
)
.await;
let metadata = metadata?;
let metadata = if let Some(metadata) = self.metadata {
metadata
} else {
rpc.metadata().await?
};
Ok(Client {
rpc,
genesis_hash: genesis_hash?,
metadata: Arc::new(metadata),
metadata,
properties: properties.unwrap_or_else(|_| Default::default()),
runtime_version: runtime_version?,
iter_page_size: self.page_size.unwrap_or(10),
@@ -119,7 +133,7 @@ impl ClientBuilder {
pub struct Client<T: Config> {
rpc: Rpc<T>,
genesis_hash: T::Hash,
metadata: Arc<Metadata>,
metadata: Metadata,
properties: SystemProperties,
runtime_version: RuntimeVersion,
iter_page_size: u32,
@@ -403,7 +403,7 @@ pub(crate) mod test_utils {
ExtrinsicMetadata,
PalletEventMetadata,
PalletMetadata,
RuntimeMetadataLastVersion,
RuntimeMetadataV14,
},
RuntimeMetadataPrefixed,
};
@@ -459,7 +459,7 @@ pub(crate) mod test_utils {
signed_extensions: vec![],
};
let v14 = RuntimeMetadataLastVersion::new(pallets, extrinsic, meta_type::<()>());
let v14 = RuntimeMetadataV14::new(pallets, extrinsic, meta_type::<()>());
let runtime_metadata: RuntimeMetadataPrefixed = v14.into();
Metadata::try_from(runtime_metadata).unwrap()
@@ -0,0 +1,113 @@
// Copyright 2019-2022 Parity Technologies (UK) Ltd.
// This file is part of subxt.
//
// subxt is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// subxt is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with subxt. If not, see <http://www.gnu.org/licenses/>.
use parking_lot::RwLock;
use std::{
borrow::Cow,
collections::HashMap,
};
/// A cache with the simple goal of storing 32 byte hashes against pallet+item keys
#[derive(Default, Debug)]
pub struct HashCache {
inner: RwLock<HashMap<PalletItemKey<'static>, [u8; 32]>>,
}
impl HashCache {
/// Get a hash out of the cache by its pallet and item key. If the item doesn't exist,
/// run the provided function to obtain a hash to insert (or bail with some error on failure).
pub fn get_or_insert<F, E>(
&self,
pallet: &str,
item: &str,
f: F,
) -> Result<[u8; 32], E>
where
F: FnOnce() -> Result<[u8; 32], E>,
{
let maybe_hash = self
.inner
.read()
.get(&PalletItemKey::new(pallet, item))
.copied();
if let Some(hash) = maybe_hash {
return Ok(hash)
}
let hash = f()?;
self.inner.write().insert(
PalletItemKey::new(pallet.to_string(), item.to_string()),
hash,
);
Ok(hash)
}
}
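The `get_or_insert` above follows a read-lock fast path / write-lock slow path pattern. A self-contained sketch of the same pattern using the standard library's `RwLock` (subxt uses `parking_lot`'s, whose lock methods don't return a `Result`); the `Cache` type here is illustrative:

```rust
use std::collections::HashMap;
use std::sync::RwLock;

struct Cache {
    inner: RwLock<HashMap<String, u64>>,
}

impl Cache {
    fn get_or_insert<F: FnOnce() -> u64>(&self, key: &str, f: F) -> u64 {
        // Fast path: a shared read lock; no allocation on a cache hit.
        if let Some(v) = self.inner.read().unwrap().get(key).copied() {
            return v;
        }
        // Slow path: compute outside any lock, then take the write lock.
        // Two threads may race here and both compute the value, but the
        // value is deterministic, so the duplicate work is harmless.
        let v = f();
        self.inner.write().unwrap().insert(key.to_string(), v);
        v
    }
}

fn main() {
    let cache = Cache { inner: RwLock::new(HashMap::new()) };
    let mut calls = 0;
    let a = cache.get_or_insert("k", || { calls += 1; 42 });
    let b = cache.get_or_insert("k", || { calls += 1; 7 });
    // The second closure is never run; the value comes from the cache.
    assert_eq!((a, b, calls), (42, 42, 1));
}
```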
/// This exists so that we can look items up in the cache using &strs, without having to allocate
/// Strings first (as you'd have to do to construct something like an `&(String,String)` key).
#[derive(Debug, PartialEq, Eq, Hash)]
struct PalletItemKey<'a> {
pallet: Cow<'a, str>,
item: Cow<'a, str>,
}
impl<'a> PalletItemKey<'a> {
fn new(pallet: impl Into<Cow<'a, str>>, item: impl Into<Cow<'a, str>>) -> Self {
PalletItemKey {
pallet: pallet.into(),
item: item.into(),
}
}
}
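The `Cow`-keyed trick above lets one key type serve both roles: owned `String`s when storing (the map needs `'static` keys) and borrowed `&str`s when probing, so lookups allocate nothing. A minimal standalone sketch (the `Key` struct mirrors `PalletItemKey` but is hypothetical):

```rust
use std::borrow::Cow;
use std::collections::HashMap;

// Cow lets the same key type wrap either borrowed or owned strings,
// so the map can own its keys while lookups borrow theirs.
#[derive(Debug, PartialEq, Eq, Hash)]
struct Key<'a> {
    pallet: Cow<'a, str>,
    item: Cow<'a, str>,
}

fn main() {
    let mut map: HashMap<Key<'static>, u32> = HashMap::new();
    // Insert with owned Strings, satisfying the map's 'static key type.
    map.insert(
        Key { pallet: Cow::Owned("System".into()), item: Cow::Owned("Account".into()) },
        7,
    );
    // Probe with borrowed &strs: no String allocation needed for a lookup.
    let probe = Key { pallet: Cow::Borrowed("System"), item: Cow::Borrowed("Account") };
    assert_eq!(map.get(&probe), Some(&7));
}
```

Without `Cow`, probing a `HashMap<(String, String), _>` would force allocating two `String`s per lookup just to build the key.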
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn hash_cache_validation() {
let cache = HashCache::default();
let pallet = "System";
let item = "Account";
let mut call_number = 0;
let value = cache.get_or_insert(pallet, item, || -> Result<[u8; 32], ()> {
call_number += 1;
Ok([0; 32])
});
assert_eq!(
cache
.inner
.read()
.get(&PalletItemKey::new(pallet, item))
.unwrap(),
&value.unwrap()
);
assert_eq!(value.unwrap(), [0; 32]);
assert_eq!(call_number, 1);
// Further calls must be served from the cache (the closure is not invoked again).
let value = cache.get_or_insert(pallet, item, || -> Result<[u8; 32], ()> {
call_number += 1;
Ok([0; 32])
});
assert_eq!(call_number, 1);
assert_eq!(value.unwrap(), [0; 32]);
}
}
@@ -14,41 +14,43 @@
// You should have received a copy of the GNU General Public License
// along with subxt. If not, see <http://www.gnu.org/licenses/>.
use std::{
collections::HashMap,
convert::TryFrom,
};
use super::hash_cache::HashCache;
use crate::Call;
use codec::Error as CodecError;
use frame_metadata::{
PalletConstantMetadata,
RuntimeMetadata,
RuntimeMetadataLastVersion,
RuntimeMetadataPrefixed,
RuntimeMetadataV14,
StorageEntryMetadata,
META_RESERVED,
};
use crate::Call;
use scale_info::{
form::PortableForm,
Type,
Variant,
};
use std::{
collections::HashMap,
convert::TryFrom,
sync::{
Arc,
RwLock,
},
};
/// Metadata error.
#[derive(Debug, thiserror::Error)]
#[derive(Debug, thiserror::Error, PartialEq)]
pub enum MetadataError {
/// Module is not in metadata.
#[error("Pallet {0} not found")]
PalletNotFound(String),
#[error("Pallet not found")]
PalletNotFound,
/// Pallet is not in metadata.
#[error("Pallet index {0} not found")]
PalletIndexNotFound(u8),
/// Call is not in metadata.
#[error("Call {0} not found")]
CallNotFound(&'static str),
#[error("Call not found")]
CallNotFound,
/// Event is not in metadata.
#[error("Pallet {0}, Event {1} not found")]
EventNotFound(u8, u8),
@@ -56,8 +58,8 @@ pub enum MetadataError {
#[error("Pallet {0}, Error {1} not found")]
ErrorNotFound(u8, u8),
/// Storage is not in metadata.
#[error("Storage {0} not found")]
StorageNotFound(&'static str),
#[error("Storage not found")]
StorageNotFound,
/// Storage type does not match requested type.
#[error("Storage type error")]
StorageTypeError,
@@ -68,28 +70,48 @@ pub enum MetadataError {
#[error("Failed to decode constant value: {0}")]
ConstantValueError(CodecError),
/// Constant is not in metadata.
#[error("Constant {0} not found")]
ConstantNotFound(&'static str),
#[error("Constant not found")]
ConstantNotFound,
/// Type is not in metadata.
#[error("Type {0} missing from type registry")]
TypeNotFound(u32),
/// Runtime pallet metadata is incompatible with the static one.
#[error("Pallet {0} has incompatible metadata")]
IncompatiblePalletMetadata(&'static str),
/// Runtime metadata is not fully compatible with the static one.
#[error("Node metadata is not fully compatible")]
IncompatibleMetadata,
}
/// Runtime metadata.
#[derive(Clone, Debug)]
pub struct Metadata {
metadata: RuntimeMetadataLastVersion,
inner: Arc<MetadataInner>,
}
// We hide the innards behind an Arc so that it's easy to clone and share.
#[derive(Debug)]
struct MetadataInner {
metadata: RuntimeMetadataV14,
pallets: HashMap<String, PalletMetadata>,
events: HashMap<(u8, u8), EventMetadata>,
errors: HashMap<(u8, u8), ErrorMetadata>,
// The hashes uniquely identify parts of the metadata; different
// hashes mean some type difference exists between static and runtime
// versions. We cache them here to avoid recalculating:
cached_metadata_hash: RwLock<Option<[u8; 32]>>,
cached_call_hashes: HashCache,
cached_constant_hashes: HashCache,
cached_storage_hashes: HashCache,
}
impl Metadata {
/// Returns a reference to [`PalletMetadata`].
pub fn pallet(&self, name: &'static str) -> Result<&PalletMetadata, MetadataError> {
self.pallets
self.inner
.pallets
.get(name)
.ok_or_else(|| MetadataError::PalletNotFound(name.to_string()))
.ok_or(MetadataError::PalletNotFound)
}
/// Returns the metadata for the event at the given pallet and event indices.
@@ -99,6 +121,7 @@ impl Metadata {
event_index: u8,
) -> Result<&EventMetadata, MetadataError> {
let event = self
.inner
.events
.get(&(pallet_index, event_index))
.ok_or(MetadataError::EventNotFound(pallet_index, event_index))?;
@@ -112,6 +135,7 @@ impl Metadata {
error_index: u8,
) -> Result<&ErrorMetadata, MetadataError> {
let error = self
.inner
.errors
.get(&(pallet_index, error_index))
.ok_or(MetadataError::ErrorNotFound(pallet_index, error_index))?;
@@ -120,12 +144,90 @@ impl Metadata {
/// Resolve a type definition.
pub fn resolve_type(&self, id: u32) -> Option<&Type<PortableForm>> {
self.metadata.types.resolve(id)
self.inner.metadata.types.resolve(id)
}
/// Return the runtime metadata.
pub fn runtime_metadata(&self) -> &RuntimeMetadataLastVersion {
&self.metadata
pub fn runtime_metadata(&self) -> &RuntimeMetadataV14 {
&self.inner.metadata
}
/// Obtain the unique hash for a specific storage entry.
pub fn storage_hash<S: crate::StorageEntry>(
&self,
) -> Result<[u8; 32], MetadataError> {
self.inner
.cached_storage_hashes
.get_or_insert(S::PALLET, S::STORAGE, || {
subxt_metadata::get_storage_hash(
&self.inner.metadata,
S::PALLET,
S::STORAGE,
)
.map_err(|e| {
match e {
subxt_metadata::NotFound::Pallet => MetadataError::PalletNotFound,
subxt_metadata::NotFound::Item => MetadataError::StorageNotFound,
}
})
})
}
/// Obtain the unique hash for a constant.
pub fn constant_hash(
&self,
pallet: &str,
constant: &str,
) -> Result<[u8; 32], MetadataError> {
self.inner
.cached_constant_hashes
.get_or_insert(pallet, constant, || {
subxt_metadata::get_constant_hash(&self.inner.metadata, pallet, constant)
.map_err(|e| {
match e {
subxt_metadata::NotFound::Pallet => {
MetadataError::PalletNotFound
}
subxt_metadata::NotFound::Item => {
MetadataError::ConstantNotFound
}
}
})
})
}
/// Obtain the unique hash for a call.
pub fn call_hash<C: crate::Call>(&self) -> Result<[u8; 32], MetadataError> {
self.inner
.cached_call_hashes
.get_or_insert(C::PALLET, C::FUNCTION, || {
subxt_metadata::get_call_hash(
&self.inner.metadata,
C::PALLET,
C::FUNCTION,
)
.map_err(|e| {
match e {
subxt_metadata::NotFound::Pallet => MetadataError::PalletNotFound,
subxt_metadata::NotFound::Item => MetadataError::CallNotFound,
}
})
})
}
/// Obtain the unique hash for this metadata.
pub fn metadata_hash<T: AsRef<str>>(&self, pallets: &[T]) -> [u8; 32] {
if let Some(hash) = *self.inner.cached_metadata_hash.read().unwrap() {
return hash
}
let hash = subxt_metadata::get_metadata_per_pallet_hash(
self.runtime_metadata(),
pallets,
);
*self.inner.cached_metadata_hash.write().unwrap() = Some(hash);
hash
}
}
@@ -159,28 +261,26 @@ impl PalletMetadata {
let fn_index = *self
.calls
.get(C::FUNCTION)
.ok_or(MetadataError::CallNotFound(C::FUNCTION))?;
.ok_or(MetadataError::CallNotFound)?;
Ok(fn_index)
}
/// Return [`StorageEntryMetadata`] given some storage key.
pub fn storage(
&self,
key: &'static str,
key: &str,
) -> Result<&StorageEntryMetadata<PortableForm>, MetadataError> {
self.storage
.get(key)
.ok_or(MetadataError::StorageNotFound(key))
self.storage.get(key).ok_or(MetadataError::StorageNotFound)
}
/// Get a constant's metadata by name.
pub fn constant(
&self,
key: &'static str,
key: &str,
) -> Result<&PalletConstantMetadata<PortableForm>, MetadataError> {
self.constants
.get(key)
.ok_or(MetadataError::ConstantNotFound(key))
.ok_or(MetadataError::ConstantNotFound)
}
}
@@ -359,11 +459,134 @@ impl TryFrom<RuntimeMetadataPrefixed> for Metadata {
})
.collect();
Ok(Self {
metadata,
pallets,
events,
errors,
Ok(Metadata {
inner: Arc::new(MetadataInner {
metadata,
pallets,
events,
errors,
cached_metadata_hash: Default::default(),
cached_call_hashes: Default::default(),
cached_constant_hashes: Default::default(),
cached_storage_hashes: Default::default(),
}),
})
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::StorageEntryKey;
fn load_metadata() -> Metadata {
let bytes = test_runtime::METADATA;
let meta: RuntimeMetadataPrefixed =
codec::Decode::decode(&mut &*bytes).expect("Cannot decode scale metadata");
Metadata::try_from(meta)
.expect("Cannot translate runtime metadata to internal Metadata")
}
#[test]
fn metadata_inner_cache() {
// Note: Dependency on test_runtime can be removed if complex metadata
// is manually constructed.
let metadata = load_metadata();
let hash = metadata.metadata_hash(&["System"]);
// Check inner caching.
assert_eq!(
metadata.inner.cached_metadata_hash.read().unwrap().unwrap(),
hash
);
// Currently the caching does not take the pallet list into account,
// as the intended behavior is to call this method only once.
// Enforce this behavior in the test.
let hash_old = metadata.metadata_hash(&["Balances"]);
assert_eq!(hash_old, hash);
}
#[test]
fn metadata_call_inner_cache() {
let metadata = load_metadata();
#[derive(codec::Encode)]
struct ValidCall;
impl crate::Call for ValidCall {
const PALLET: &'static str = "System";
const FUNCTION: &'static str = "fill_block";
}
let hash = metadata.call_hash::<ValidCall>();
let mut call_number = 0;
let hash_cached = metadata.inner.cached_call_hashes.get_or_insert(
"System",
"fill_block",
|| -> Result<[u8; 32], MetadataError> {
call_number += 1;
Ok([0; 32])
},
);
// Check the function is never called (i.e., the value is fetched from the cache).
assert_eq!(call_number, 0);
assert_eq!(hash.unwrap(), hash_cached.unwrap());
}
#[test]
fn metadata_constant_inner_cache() {
let metadata = load_metadata();
let hash = metadata.constant_hash("System", "BlockWeights");
let mut call_number = 0;
let hash_cached = metadata.inner.cached_constant_hashes.get_or_insert(
"System",
"BlockWeights",
|| -> Result<[u8; 32], MetadataError> {
call_number += 1;
Ok([0; 32])
},
);
// Check the function is never called (i.e., the value is fetched from the cache).
assert_eq!(call_number, 0);
assert_eq!(hash.unwrap(), hash_cached.unwrap());
}
#[test]
fn metadata_storage_inner_cache() {
let metadata = load_metadata();
#[derive(codec::Encode)]
struct ValidStorage;
impl crate::StorageEntry for ValidStorage {
const PALLET: &'static str = "System";
const STORAGE: &'static str = "Account";
type Value = ();
fn key(&self) -> StorageEntryKey {
unreachable!("Should not be called");
}
}
let hash = metadata.storage_hash::<ValidStorage>();
let mut call_number = 0;
let hash_cached = metadata.inner.cached_storage_hashes.get_or_insert(
"System",
"Account",
|| -> Result<[u8; 32], MetadataError> {
call_number += 1;
Ok([0; 32])
},
);
// Check the function is never called (i.e., the value is fetched from the cache).
assert_eq!(call_number, 0);
assert_eq!(hash.unwrap(), hash_cached.unwrap());
}
}
@@ -0,0 +1,27 @@
// Copyright 2019-2022 Parity Technologies (UK) Ltd.
// This file is part of subxt.
//
// subxt is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// subxt is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with subxt. If not, see <http://www.gnu.org/licenses/>.
mod hash_cache;
mod metadata_type;
pub use metadata_type::{
ErrorMetadata,
EventMetadata,
InvalidMetadataError,
Metadata,
MetadataError,
PalletMetadata,
};
@@ -116,7 +116,7 @@ async fn balance_transfer_subscription() -> Result<(), subxt::BasicError> {
ctx.api
.tx()
.balances()
.transfer(bob.clone().into(), 10_000)
.transfer(bob.clone().into(), 10_000)?
.sign_and_submit_then_watch_default(&alice)
.await?;
@@ -58,7 +58,7 @@ async fn tx_basic_transfer() -> Result<(), subxt::Error<DispatchError>> {
let events = api
.tx()
.balances()
.transfer(bob_address, 10_000)
.transfer(bob_address, 10_000)?
.sign_and_submit_then_watch_default(&alice)
.await?
.wait_for_finalized_success()
@@ -114,7 +114,7 @@ async fn multiple_transfers_work_nonce_incremented(
api
.tx()
.balances()
.transfer(bob_address.clone(), 10_000)
.transfer(bob_address.clone(), 10_000)?
.sign_and_submit_then_watch_default(&alice)
.await?
.wait_for_in_block() // Don't need to wait for finalization; this is quicker.
@@ -159,7 +159,7 @@ async fn storage_balance_lock() -> Result<(), subxt::Error<DispatchError>> {
charlie.into(),
100_000_000_000_000,
runtime_types::pallet_staking::RewardDestination::Stash,
)
)?
.sign_and_submit_then_watch_default(&bob)
.await?
.wait_for_finalized_success()
@@ -200,6 +200,7 @@ async fn transfer_error() {
.tx()
.balances()
.transfer(hans_address, 100_000_000_000_000_000)
.unwrap()
.sign_and_submit_then_watch_default(&alice)
.await
.unwrap()
@@ -212,6 +213,7 @@ async fn transfer_error() {
.tx()
.balances()
.transfer(alice_addr, 100_000_000_000_000_000)
.unwrap()
.sign_and_submit_then_watch_default(&hans)
.await
.unwrap()
@@ -239,6 +241,7 @@ async fn transfer_implicit_subscription() {
.tx()
.balances()
.transfer(bob_addr, 10_000)
.unwrap()
.sign_and_submit_then_watch_default(&alice)
.await
.unwrap()
@@ -90,7 +90,7 @@ impl ContractsTestContext {
code,
vec![], // data
vec![], // salt
)
)?
.sign_and_submit_then_watch_default(&self.signer)
.await?
.wait_for_finalized_success()
@@ -130,7 +130,7 @@ impl ContractsTestContext {
code_hash,
data,
salt,
)
)?
.sign_and_submit_then_watch_default(&self.signer)
.await?
.wait_for_finalized_success()
@@ -161,7 +161,7 @@ impl ContractsTestContext {
500_000_000, // gas_limit
None, // storage_deposit_limit
input_data,
)
)?
.sign_and_submit_then_watch_default(&self.signer)
.await?;
@@ -55,6 +55,7 @@ async fn validate_with_controller_account() {
.tx()
.staking()
.validate(default_validator_prefs())
.unwrap()
.sign_and_submit_then_watch_default(&alice)
.await
.unwrap()
@@ -71,7 +72,7 @@ async fn validate_not_possible_for_stash_account() -> Result<(), Error<DispatchE
.api
.tx()
.staking()
.validate(default_validator_prefs())
.validate(default_validator_prefs())?
.sign_and_submit_then_watch_default(&alice_stash)
.await?
.wait_for_finalized_success()
@@ -93,6 +94,7 @@ async fn nominate_with_controller_account() {
.tx()
.staking()
.nominate(vec![bob.account_id().clone().into()])
.unwrap()
.sign_and_submit_then_watch_default(&alice)
.await
.unwrap()
@@ -111,7 +113,7 @@ async fn nominate_not_possible_for_stash_account() -> Result<(), Error<DispatchE
.api
.tx()
.staking()
.nominate(vec![bob.account_id().clone().into()])
.nominate(vec![bob.account_id().clone().into()])?
.sign_and_submit_then_watch_default(&alice_stash)
.await?
.wait_for_finalized_success()
@@ -135,7 +137,7 @@ async fn chill_works_for_controller_only() -> Result<(), Error<DispatchError>> {
ctx.api
.tx()
.staking()
.nominate(vec![bob_stash.account_id().clone().into()])
.nominate(vec![bob_stash.account_id().clone().into()])?
.sign_and_submit_then_watch_default(&alice)
.await?
.wait_for_finalized_success()
@@ -154,7 +156,7 @@ async fn chill_works_for_controller_only() -> Result<(), Error<DispatchError>> {
.api
.tx()
.staking()
.chill()
.chill()?
.sign_and_submit_then_watch_default(&alice_stash)
.await?
.wait_for_finalized_success()
@@ -169,7 +171,7 @@ async fn chill_works_for_controller_only() -> Result<(), Error<DispatchError>> {
.api
.tx()
.staking()
.chill()
.chill()?
.sign_and_submit_then_watch_default(&alice)
.await?
.wait_for_finalized_success()
@@ -194,6 +196,7 @@ async fn tx_bond() -> Result<(), Error<DispatchError>> {
100_000_000_000_000,
RewardDestination::Stash,
)
.unwrap()
.sign_and_submit_then_watch_default(&alice)
.await?
.wait_for_finalized_success()
@@ -210,6 +213,7 @@ async fn tx_bond() -> Result<(), Error<DispatchError>> {
100_000_000_000_000,
RewardDestination::Stash,
)
.unwrap()
.sign_and_submit_then_watch_default(&alice)
.await?
.wait_for_finalized_success()
@@ -43,7 +43,7 @@ async fn test_sudo() -> Result<(), subxt::Error<DispatchError>> {
.api
.tx()
.sudo()
.sudo(call)
.sudo(call)?
.sign_and_submit_then_watch_default(&alice)
.await?
.wait_for_finalized_success()
@@ -69,7 +69,7 @@ async fn test_sudo_unchecked_weight() -> Result<(), subxt::Error<DispatchError>>
.api
.tx()
.sudo()
.sudo_unchecked_weight(call, 0)
.sudo_unchecked_weight(call, 0)?
.sign_and_submit_then_watch_default(&alice)
.await?
.wait_for_finalized_success()
@@ -50,7 +50,7 @@ async fn tx_remark_with_event() -> Result<(), subxt::Error<DispatchError>> {
.api
.tx()
.system()
.remark_with_event(b"remarkable".to_vec())
.remark_with_event(b"remarkable".to_vec())?
.sign_and_submit_then_watch_default(&alice)
.await?
.wait_for_finalized_success()
@@ -24,6 +24,9 @@ mod events;
#[cfg(test)]
mod frame;
#[cfg(test)]
#[cfg(integration_tests)]
mod metadata;
#[cfg(test)]
mod storage;
use test_runtime::node_runtime;
@@ -0,0 +1,17 @@
// Copyright 2019-2022 Parity Technologies (UK) Ltd.
// This file is part of subxt.
//
// subxt is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// subxt is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with subxt. If not, see <http://www.gnu.org/licenses/>.
mod validation;
@@ -0,0 +1,334 @@
// Copyright 2019-2022 Parity Technologies (UK) Ltd.
// This file is part of subxt.
//
// subxt is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// subxt is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with subxt. If not, see <http://www.gnu.org/licenses/>.
use crate::{
test_context,
TestContext,
};
use frame_metadata::{
ExtrinsicMetadata,
PalletCallMetadata,
PalletMetadata,
PalletStorageMetadata,
RuntimeMetadataPrefixed,
RuntimeMetadataV14,
StorageEntryMetadata,
StorageEntryModifier,
StorageEntryType,
};
use scale_info::{
build::{
Fields,
Variants,
},
meta_type,
Path,
Type,
TypeInfo,
};
use subxt::{
ClientBuilder,
DefaultConfig,
Metadata,
SubstrateExtrinsicParams,
};
use crate::utils::node_runtime;
type RuntimeApi =
node_runtime::RuntimeApi<DefaultConfig, SubstrateExtrinsicParams<DefaultConfig>>;
async fn metadata_to_api(metadata: RuntimeMetadataV14, cxt: &TestContext) -> RuntimeApi {
let prefixed = RuntimeMetadataPrefixed::from(metadata);
let metadata = Metadata::try_from(prefixed).unwrap();
ClientBuilder::new()
.set_url(cxt.node_proc.ws_url().to_string())
.set_metadata(metadata)
.build()
.await
.unwrap()
.to_runtime_api::<node_runtime::RuntimeApi<
DefaultConfig,
SubstrateExtrinsicParams<DefaultConfig>,
>>()
}
#[tokio::test]
async fn full_metadata_check() {
    let cxt = test_context().await;
    let api = &cxt.api;

    // Runtime metadata is identical to the metadata used during API generation.
    assert!(api.validate_metadata().is_ok());

    // Modify the metadata.
    let mut metadata: RuntimeMetadataV14 =
        api.client.metadata().runtime_metadata().clone();
    metadata.pallets[0].name = "NewPallet".to_string();

    let new_api = metadata_to_api(metadata, &cxt).await;
    assert_eq!(
        new_api
            .validate_metadata()
            .err()
            .expect("Validation should fail for incompatible metadata"),
        ::subxt::MetadataError::IncompatibleMetadata
    );
}
#[tokio::test]
async fn constants_check() {
    let cxt = test_context().await;
    let api = &cxt.api;

    // Ensure that `ExistentialDeposit` is compatible before altering the metadata.
    assert!(cxt.api.constants().balances().existential_deposit().is_ok());

    // Modify the metadata.
    let mut metadata: RuntimeMetadataV14 =
        api.client.metadata().runtime_metadata().clone();
    let existential = metadata
        .pallets
        .iter_mut()
        .find(|pallet| pallet.name == "Balances")
        .expect("Metadata must contain Balances pallet")
        .constants
        .iter_mut()
        .find(|constant| constant.name == "ExistentialDeposit")
        .expect("ExistentialDeposit constant must be present");
    existential.value = vec![0u8; 32];

    let new_api = metadata_to_api(metadata, &cxt).await;
    assert!(new_api.validate_metadata().is_err());
    assert!(new_api
        .constants()
        .balances()
        .existential_deposit()
        .is_err());

    // Validation of other constants should not be impacted.
    assert!(new_api.constants().balances().max_locks().is_ok());
}
fn default_pallet() -> PalletMetadata {
    PalletMetadata {
        name: "Test",
        storage: None,
        calls: None,
        event: None,
        constants: vec![],
        error: None,
        index: 0,
    }
}

fn pallets_to_metadata(pallets: Vec<PalletMetadata>) -> RuntimeMetadataV14 {
    RuntimeMetadataV14::new(
        pallets,
        ExtrinsicMetadata {
            ty: meta_type::<()>(),
            version: 0,
            signed_extensions: vec![],
        },
        meta_type::<()>(),
    )
}
#[tokio::test]
async fn calls_check() {
    let cxt = test_context().await;

    // Ensure that the `Unbond` and `WithdrawUnbonded` calls are compatible before altering the metadata.
    assert!(cxt.api.tx().staking().unbond(123_456_789_012_345).is_ok());
    assert!(cxt.api.tx().staking().withdraw_unbonded(10).is_ok());

    // Reconstruct the `Staking` call as is.
    struct CallRec;
    impl TypeInfo for CallRec {
        type Identity = Self;
        fn type_info() -> Type {
            Type::builder()
                .path(Path::new("Call", "pallet_staking::pallet::pallet"))
                .variant(
                    Variants::new()
                        .variant("unbond", |v| {
                            v.index(0).fields(Fields::named().field(|f| {
                                f.compact::<u128>()
                                    .name("value")
                                    .type_name("BalanceOf<T>")
                            }))
                        })
                        .variant("withdraw_unbonded", |v| {
                            v.index(1).fields(Fields::named().field(|f| {
                                f.ty::<u32>().name("num_slashing_spans").type_name("u32")
                            }))
                        }),
                )
        }
    }
    let pallet = PalletMetadata {
        name: "Staking",
        calls: Some(PalletCallMetadata {
            ty: meta_type::<CallRec>(),
        }),
        ..default_pallet()
    };
    let metadata = pallets_to_metadata(vec![pallet]);
    let new_api = metadata_to_api(metadata, &cxt).await;
    assert!(new_api.tx().staking().unbond(123_456_789_012_345).is_ok());
    assert!(new_api.tx().staking().withdraw_unbonded(10).is_ok());

    // Change the `Unbond` call but leave the rest as is.
    struct CallRecSecond;
    impl TypeInfo for CallRecSecond {
        type Identity = Self;
        fn type_info() -> Type {
            Type::builder()
                .path(Path::new("Call", "pallet_staking::pallet::pallet"))
                .variant(
                    Variants::new()
                        .variant("unbond", |v| {
                            v.index(0).fields(Fields::named().field(|f| {
                                // Is of type u32 instead of u128.
                                f.compact::<u32>().name("value").type_name("BalanceOf<T>")
                            }))
                        })
                        .variant("withdraw_unbonded", |v| {
                            v.index(1).fields(Fields::named().field(|f| {
                                f.ty::<u32>().name("num_slashing_spans").type_name("u32")
                            }))
                        }),
                )
        }
    }
    let pallet = PalletMetadata {
        name: "Staking",
        calls: Some(PalletCallMetadata {
            ty: meta_type::<CallRecSecond>(),
        }),
        ..default_pallet()
    };
    let metadata = pallets_to_metadata(vec![pallet]);
    let new_api = metadata_to_api(metadata, &cxt).await;

    // The `unbond` call should fail, while `withdraw_unbonded` remains compatible.
    assert!(new_api.tx().staking().unbond(123_456_789_012_345).is_err());
    assert!(new_api.tx().staking().withdraw_unbonded(10).is_ok());
}
#[tokio::test]
async fn storage_check() {
    let cxt = test_context().await;

    // Ensure that the `ExtrinsicCount` and `AllExtrinsicsLen` storage entries are compatible
    // before altering the metadata.
    assert!(cxt
        .api
        .storage()
        .system()
        .extrinsic_count(None)
        .await
        .is_ok());
    assert!(cxt
        .api
        .storage()
        .system()
        .all_extrinsics_len(None)
        .await
        .is_ok());

    // Reconstruct the storage.
    let storage = PalletStorageMetadata {
        prefix: "System",
        entries: vec![
            StorageEntryMetadata {
                name: "ExtrinsicCount",
                modifier: StorageEntryModifier::Optional,
                ty: StorageEntryType::Plain(meta_type::<u32>()),
                default: vec![0],
                docs: vec![],
            },
            StorageEntryMetadata {
                name: "AllExtrinsicsLen",
                modifier: StorageEntryModifier::Optional,
                ty: StorageEntryType::Plain(meta_type::<u32>()),
                default: vec![0],
                docs: vec![],
            },
        ],
    };
    let pallet = PalletMetadata {
        name: "System",
        storage: Some(storage),
        ..default_pallet()
    };
    let metadata = pallets_to_metadata(vec![pallet]);
    let new_api = metadata_to_api(metadata, &cxt).await;
    assert!(new_api
        .storage()
        .system()
        .extrinsic_count(None)
        .await
        .is_ok());
    assert!(new_api
        .storage()
        .system()
        .all_extrinsics_len(None)
        .await
        .is_ok());

    // Reconstruct the storage while modifying `ExtrinsicCount`.
    let storage = PalletStorageMetadata {
        prefix: "System",
        entries: vec![
            StorageEntryMetadata {
                name: "ExtrinsicCount",
                modifier: StorageEntryModifier::Optional,
                // Previously was u32.
                ty: StorageEntryType::Plain(meta_type::<u8>()),
                default: vec![0],
                docs: vec![],
            },
            StorageEntryMetadata {
                name: "AllExtrinsicsLen",
                modifier: StorageEntryModifier::Optional,
                ty: StorageEntryType::Plain(meta_type::<u32>()),
                default: vec![0],
                docs: vec![],
            },
        ],
    };
    let pallet = PalletMetadata {
        name: "System",
        storage: Some(storage),
        ..default_pallet()
    };
    let metadata = pallets_to_metadata(vec![pallet]);
    let new_api = metadata_to_api(metadata, &cxt).await;
    assert!(new_api
        .storage()
        .system()
        .extrinsic_count(None)
        .await
        .is_err());
    assert!(new_api
        .storage()
        .system()
        .all_extrinsics_len(None)
        .await
        .is_ok());
}
@@ -48,7 +48,7 @@ async fn storage_map_lookup() -> Result<(), subxt::Error<DispatchError>> {
     ctx.api
         .tx()
         .system()
-        .remark(vec![1, 2, 3, 4, 5])
+        .remark(vec![1, 2, 3, 4, 5])?
         .sign_and_submit_then_watch_default(&signer)
         .await?
         .wait_for_finalized_success()
@@ -113,7 +113,7 @@ async fn storage_n_map_storage_lookup() -> Result<(), subxt::Error<DispatchError
     ctx.api
         .tx()
         .assets()
-        .create(99, alice.clone().into(), 1)
+        .create(99, alice.clone().into(), 1)?
         .sign_and_submit_then_watch_default(&signer)
         .await?
         .wait_for_finalized_success()
@@ -121,7 +121,7 @@ async fn storage_n_map_storage_lookup() -> Result<(), subxt::Error<DispatchError
     ctx.api
         .tx()
         .assets()
-        .approve_transfer(99, bob.clone().into(), 123)
+        .approve_transfer(99, bob.clone().into(), 123)?
         .sign_and_submit_then_watch_default(&signer)
         .await?
         .wait_for_finalized_success()
@@ -37,6 +37,8 @@ use subxt::{
 pub struct TestNodeProcess<R: Config> {
     proc: process::Child,
     client: Client<R>,
+    #[cfg(integration_tests)]
+    ws_url: String,
 }

 impl<R> Drop for TestNodeProcess<R>
@@ -75,6 +77,12 @@
     pub fn client(&self) -> &Client<R> {
         &self.client
    }

+    /// Returns the address to which the client is connected.
+    #[cfg(integration_tests)]
+    pub fn ws_url(&self) -> &str {
+        &self.ws_url
+    }
 }

 /// Construct a test node process.
@@ -137,7 +145,14 @@ impl TestNodeProcessBuilder {
         // Connect to the node with a subxt client:
         let client = ClientBuilder::new().set_url(ws_url.clone()).build().await;
         match client {
-            Ok(client) => Ok(TestNodeProcess { proc, client }),
+            Ok(client) => {
+                Ok(TestNodeProcess {
+                    proc,
+                    client,
+                    #[cfg(integration_tests)]
+                    ws_url,
+                })
+            }
             Err(err) => {
                 let err = format!("Failed to connect to node rpc at {}: {}", ws_url, err);
                 log::error!("{}", err);