Subscribe to Runtime upgrades for proper extrinsic construction (#513)

* subxt: Add subscription to runtime upgrades

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* subxt: Synchronize and expose inner `RuntimeVersion` of the `Client`

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* examples: Add runtime update example

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* subxt: Expose `RuntimeVersion` as locked

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* subxt: Expose `Metadata` as locked

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* subxt/storage: Use locked metadata

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* subxt: Use parking lot RwLock variant for locked metadata

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* subxt: Utilize locked metadata variant

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* subxt/transaction: Use locked metadata variant

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* Update subxt to use locked version of the Metadata

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* subxt: Add runtime update client wrapper

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* examples: Modify runtime update example

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* Fix clippy

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* Fix cargo check

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* codegen: Keep consistency with cargo check fix

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* subxt: Remove unnecessary Arc

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* subxt: Remove MetadataInner and use parking_lot::RwLock

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* Update polkadot.rs

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* Update polkadot.rs generation comment

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* subxt: Switch to async::Mutex

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* subxt: Block executor while decoding dynamic events

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* codegen: Use async API to handle async locking

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* Remove unused dependencies

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* Update examples and integration-tests

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* Fix test deadlock

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* Revert back to sync lock

Revert "Fix test deadlock"

This reverts commit 4a79933df23e81573611cb14be6c5b5b2b56d4df.

Revert "Update examples and integration-tests"

This reverts commit 5423f6eb4131582909d5a4ca70adff75e27cdd0e.

Revert "Remove unused dependencies"

This reverts commit e8ecbabb5b01a7ba4ae83b8bde36295a3f64daf7.

Revert "codegen: Use async API to handle async locking"

This reverts commit ced4646541c431adcb973369b1061b7b3cbfaae1.

Revert "subxt: Block executor while decoding dynamic events"

This reverts commit 8b3ba4a5eabb29f77ac1ca671450956fc479a33d.

Revert "subxt: Switch to async::Mutex"

This reverts commit f5bde9b79394a6bf61b6b9daefc36ceaa84b82be.

* subxt: Perform RuntimeVersion update before fetching metadata

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* Reintroduce MetadataInner

* Use parking lot instead of std::RwLock

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* subxt: Reduce lock metadata time when decoding events

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* codegen: Update `validate_metadata` locking pattern

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* subxt/examples: Update polkadot download link

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* tests: Wrap metadata in a helper function

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

* Update examples/examples/subscribe_runtime_updates.rs

Co-authored-by: James Wilson <james@jsdw.me>

* subxt/updates: Update runtime if version is different

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>

Co-authored-by: James Wilson <james@jsdw.me>
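The core idea running through these commits — sharing `Metadata` and `RuntimeVersion` behind `Arc<RwLock<…>>` so a background task can swap in new values after a runtime upgrade while extrinsic construction reads the current ones — can be sketched roughly as below. This is a dependency-free sketch using `std::sync::RwLock` (the PR itself uses the `parking_lot` variant, which has no `unwrap()` on lock access), and `RuntimeVersion`/`Client` here are simplified stand-ins for the real subxt types:

```rust
use std::sync::{Arc, RwLock};

// Stand-in for subxt's RuntimeVersion (the real type has more fields).
#[derive(Clone, Debug, PartialEq)]
pub struct RuntimeVersion {
    pub spec_version: u32,
    pub transaction_version: u32,
}

// Minimal stand-in for the client: readers and the update task share one lock.
pub struct Client {
    runtime_version: Arc<RwLock<RuntimeVersion>>,
}

impl Client {
    pub fn new(initial: RuntimeVersion) -> Self {
        Self {
            runtime_version: Arc::new(RwLock::new(initial)),
        }
    }

    // Like the PR's `Client::runtime_version()`: hand out the shared handle
    // so a background task can overwrite the value after a runtime upgrade.
    pub fn runtime_version(&self) -> Arc<RwLock<RuntimeVersion>> {
        Arc::clone(&self.runtime_version)
    }

    // Extrinsic construction takes a short read lock to pick up whatever
    // spec/transaction versions are current at submission time.
    pub fn current_spec_version(&self) -> u32 {
        self.runtime_version.read().unwrap().spec_version
    }
}
```

A background updater would clone the handle via `runtime_version()`, take the write lock when a new version arrives, and every subsequent read sees the updated values.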
This commit is contained in:
Alexandru Vasile
2022-05-11 14:44:43 +03:00
committed by GitHub
parent 199cfa744b
commit 115073a33d
30 changed files with 6444 additions and 3369 deletions
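A recurring pattern in the diffs below is keeping the read lock short: rather than holding the metadata lock across a whole decode loop, the lock is taken just long enough to clone the (cheaply cloneable, `Arc`-backed) `Metadata` out, and the long-running work proceeds on the clone so the update task can take the write lock in the meantime. A rough standalone sketch of that idea, with a hypothetical `Metadata` stand-in and `std::sync::RwLock` in place of `parking_lot`:

```rust
use std::sync::{Arc, RwLock};

// Stand-in for subxt's Metadata: clone is cheap because the innards are Arc-shared.
#[derive(Clone)]
struct Metadata {
    inner: Arc<Vec<String>>, // pretend pallet names
}

fn decode_events(shared: &Arc<RwLock<Metadata>>, raw: &[u8]) -> usize {
    // Take the lock only long enough to clone the handle out...
    let metadata = { shared.read().unwrap().clone() };
    // ...then do the (potentially long) decoding without holding the lock,
    // so a concurrent runtime update is never blocked by event decoding.
    raw.iter()
        .filter(|b| (**b as usize) < metadata.inner.len())
        .count()
}
```

The braces around the `read()` ensure the guard is dropped before decoding starts; that is exactly the shape of the `let metadata = { … }` blocks added to `Events` below.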
+6 -1
@@ -100,7 +100,12 @@ pub fn generate_calls(
&self,
#( #call_fn_args, )*
) -> Result<::subxt::SubmittableExtrinsic<'a, T, X, #struct_name, DispatchError, root_mod::Event>, ::subxt::BasicError> {
-if self.client.metadata().call_hash::<#struct_name>()? == [#(#call_hash,)*] {
+let runtime_call_hash = {
+    let locked_metadata = self.client.metadata();
+    let metadata = locked_metadata.read();
+    metadata.call_hash::<#struct_name>()?
+};
+if runtime_call_hash == [#(#call_hash,)*] {
let call = #struct_name { #( #call_args, )* };
Ok(::subxt::SubmittableExtrinsic::new(self.client, call))
} else {
+4 -2
@@ -49,8 +49,10 @@ pub fn generate_constants(
quote! {
#( #[doc = #docs ] )*
pub fn #fn_name(&self) -> ::core::result::Result<#return_ty, ::subxt::BasicError> {
-if self.client.metadata().constant_hash(#pallet_name, #constant_name)? == [#(#constant_hash,)*] {
-    let pallet = self.client.metadata().pallet(#pallet_name)?;
+let locked_metadata = self.client.metadata();
+let metadata = locked_metadata.read();
+if metadata.constant_hash(#pallet_name, #constant_name)? == [#(#constant_hash,)*] {
+    let pallet = metadata.pallet(#pallet_name)?;
let constant = pallet.constant(#constant_name)?;
let value = ::subxt::codec::Decode::decode(&mut &constant.value[..])?;
Ok(value)
+7 -2
@@ -312,7 +312,12 @@ impl RuntimeGenerator {
X: ::subxt::extrinsic::ExtrinsicParams<T>,
{
pub fn validate_metadata(&'a self) -> Result<(), ::subxt::MetadataError> {
-if self.client.metadata().metadata_hash(&PALLETS) != [ #(#metadata_hash,)* ] {
+let runtime_metadata_hash = {
+    let locked_metadata = self.client.metadata();
+    let metadata = locked_metadata.read();
+    metadata.metadata_hash(&PALLETS)
+};
+if runtime_metadata_hash != [ #(#metadata_hash,)* ] {
Err(::subxt::MetadataError::IncompatibleMetadata)
} else {
Ok(())
@@ -341,7 +346,7 @@ impl RuntimeGenerator {
}
impl <'a, T: ::subxt::Config> EventsApi<'a, T> {
-pub async fn at(&self, block_hash: T::Hash) -> Result<::subxt::events::Events<'a, T, Event>, ::subxt::BasicError> {
+pub async fn at(&self, block_hash: T::Hash) -> Result<::subxt::events::Events<T, Event>, ::subxt::BasicError> {
::subxt::events::at::<T, Event>(self.client, block_hash).await
}
+12 -2
@@ -271,7 +271,12 @@ fn generate_storage_entry_fns(
&self,
block_hash: ::core::option::Option<T::Hash>,
) -> ::core::result::Result<::subxt::KeyIter<'a, T, #entry_struct_ident #lifetime_param>, ::subxt::BasicError> {
-if self.client.metadata().storage_hash::<#entry_struct_ident>()? == [#(#storage_hash,)*] {
+let runtime_storage_hash = {
+    let locked_metadata = self.client.metadata();
+    let metadata = locked_metadata.read();
+    metadata.storage_hash::<#entry_struct_ident>()?
+};
+if runtime_storage_hash == [#(#storage_hash,)*] {
self.client.storage().iter(block_hash).await
} else {
Err(::subxt::MetadataError::IncompatibleMetadata.into())
@@ -300,7 +305,12 @@ fn generate_storage_entry_fns(
#( #key_args, )*
block_hash: ::core::option::Option<T::Hash>,
) -> ::core::result::Result<#return_ty, ::subxt::BasicError> {
-if self.client.metadata().storage_hash::<#entry_struct_ident>()? == [#(#storage_hash,)*] {
+let runtime_storage_hash = {
+    let locked_metadata = self.client.metadata();
+    let metadata = locked_metadata.read();
+    metadata.storage_hash::<#entry_struct_ident>()?
+};
+if runtime_storage_hash == [#(#storage_hash,)*] {
let entry = #constructor;
self.client.storage().#fetch(&entry, block_hash).await
} else {
+1 -1
@@ -18,7 +18,7 @@
//!
//! E.g.
//! ```bash
-//! curl "https://github.com/paritytech/polkadot/releases/download/v0.9.13/polkadot" --output /usr/local/bin/polkadot --location
+//! curl "https://github.com/paritytech/polkadot/releases/download/v0.9.18/polkadot" --output /usr/local/bin/polkadot --location
//! polkadot --dev --tmp
//! ```
@@ -18,7 +18,7 @@
//!
//! E.g.
//! ```bash
-//! curl "https://github.com/paritytech/polkadot/releases/download/v0.9.13/polkadot" --output /usr/local/bin/polkadot --location
+//! curl "https://github.com/paritytech/polkadot/releases/download/v0.9.18/polkadot" --output /usr/local/bin/polkadot --location
//! polkadot --dev --tmp
//! ```
+1 -1
@@ -18,7 +18,7 @@
//!
//! E.g.
//! ```bash
-//! curl "https://github.com/paritytech/polkadot/releases/download/v0.9.13/polkadot" --output /usr/local/bin/polkadot --location
+//! curl "https://github.com/paritytech/polkadot/releases/download/v0.9.18/polkadot" --output /usr/local/bin/polkadot --location
//! polkadot --dev --tmp
//! ```
+1 -1
@@ -18,7 +18,7 @@
//!
//! E.g.
//! ```bash
-//! curl "https://github.com/paritytech/polkadot/releases/download/v0.9.13/polkadot" --output /usr/local/bin/polkadot --location
+//! curl "https://github.com/paritytech/polkadot/releases/download/v0.9.18/polkadot" --output /usr/local/bin/polkadot --location
//! polkadot --dev --tmp
//! ```
+1 -1
@@ -18,7 +18,7 @@
//!
//! E.g.
//! ```bash
-//! curl "https://github.com/paritytech/polkadot/releases/download/v0.9.13/polkadot" --output /usr/local/bin/polkadot --location
+//! curl "https://github.com/paritytech/polkadot/releases/download/v0.9.18/polkadot" --output /usr/local/bin/polkadot --location
//! polkadot --dev --tmp
//! ```
+1 -1
@@ -18,7 +18,7 @@
//!
//! E.g.
//! ```bash
-//! curl "https://github.com/paritytech/polkadot/releases/download/v0.9.13/polkadot" --output /usr/local/bin/polkadot --location
+//! curl "https://github.com/paritytech/polkadot/releases/download/v0.9.18/polkadot" --output /usr/local/bin/polkadot --location
//! polkadot --dev --tmp
//! ```
+1 -1
@@ -18,7 +18,7 @@
//!
//! E.g.
//! ```bash
-//! curl "https://github.com/paritytech/polkadot/releases/download/v0.9.13/polkadot" --output /usr/local/bin/polkadot --location
+//! curl "https://github.com/paritytech/polkadot/releases/download/v0.9.18/polkadot" --output /usr/local/bin/polkadot --location
//! polkadot --dev --tmp
//! ```
+1 -1
@@ -18,7 +18,7 @@
//!
//! E.g.
//! ```bash
-//! curl "https://github.com/paritytech/polkadot/releases/download/v0.9.13/polkadot" --output /usr/local/bin/polkadot --location
+//! curl "https://github.com/paritytech/polkadot/releases/download/v0.9.18/polkadot" --output /usr/local/bin/polkadot --location
//! polkadot --dev --tmp
//! ```
+1 -1
@@ -18,7 +18,7 @@
//!
//! E.g.
//! ```bash
-//! curl "https://github.com/paritytech/polkadot/releases/download/v0.9.13/polkadot" --output /usr/local/bin/polkadot --location
+//! curl "https://github.com/paritytech/polkadot/releases/download/v0.9.18/polkadot" --output /usr/local/bin/polkadot --location
//! polkadot --dev --tmp
//! ```
+1 -1
@@ -18,7 +18,7 @@
//!
//! E.g.
//! ```bash
-//! curl "https://github.com/paritytech/polkadot/releases/download/v0.9.13/polkadot" --output /usr/local/bin/polkadot --location
+//! curl "https://github.com/paritytech/polkadot/releases/download/v0.9.18/polkadot" --output /usr/local/bin/polkadot --location
//! polkadot --dev --tmp
//! ```
@@ -0,0 +1,80 @@
// Copyright 2019-2022 Parity Technologies (UK) Ltd.
// This file is part of subxt.
//
// subxt is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// subxt is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with subxt. If not, see <http://www.gnu.org/licenses/>.
//! To run this example, a local polkadot node should be running. Example verified against polkadot 0.9.18-f6d6ab005d-aarch64-macos.
//!
//! E.g.
//! ```bash
//! curl "https://github.com/paritytech/polkadot/releases/download/v0.9.18/polkadot" --output /usr/local/bin/polkadot --location
//! polkadot --dev --tmp
//! ```
use sp_keyring::AccountKeyring;
use std::time::Duration;
use subxt::{
ClientBuilder,
DefaultConfig,
PairSigner,
PolkadotExtrinsicParams,
};
#[subxt::subxt(runtime_metadata_path = "../artifacts/polkadot_metadata.scale")]
pub mod polkadot {}
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
env_logger::init();
let api = ClientBuilder::new()
.build()
.await?
.to_runtime_api::<polkadot::RuntimeApi<DefaultConfig, PolkadotExtrinsicParams<DefaultConfig>>>();
// Start a new tokio task to perform the runtime updates while
// utilizing the API for other use cases.
let update_client = api.client.updates();
tokio::spawn(async move {
let result = update_client.perform_runtime_updates().await;
println!("Runtime update failed with result={:?}", result);
});
// Make multiple transfers to simulate a long running `subxt::Client` use-case.
//
// Meanwhile, the tokio task above will perform any necessary updates to keep in sync
// with the node we've connected to. Transactions submitted in the vicinity of a runtime
// update may still fail, however, owing to a race between the update happening and
// subxt synchronising its internal state with it.
let signer = PairSigner::new(AccountKeyring::Alice.pair());
// Make small balance transfers from Alice to Bob:
for _ in 0..10 {
let hash = api
.tx()
.balances()
.transfer(
AccountKeyring::Bob.to_account_id().into(),
123_456_789_012_345,
)
.unwrap()
.sign_and_submit_default(&signer)
.await
.unwrap();
println!("Balance transfer extrinsic submitted: {}", hash);
tokio::time::sleep(Duration::from_secs(30)).await;
}
Ok(())
}
+1 -1
@@ -18,7 +18,7 @@
//!
//! E.g.
//! ```bash
-//! curl "https://github.com/paritytech/polkadot/releases/download/v0.9.13/polkadot" --output /usr/local/bin/polkadot --location
+//! curl "https://github.com/paritytech/polkadot/releases/download/v0.9.18/polkadot" --output /usr/local/bin/polkadot --location
//! polkadot --dev --tmp
//! ```
+1 -1
@@ -20,7 +20,7 @@
/// Generate by:
///
/// - run `polkadot --dev --tmp` node locally
-/// - `cargo run --release -p subxt-cli -- codegen | rustfmt > subxt/tests/integration/codegen/polkadot.rs`
+/// - `cargo run --release -p subxt-cli -- codegen | rustfmt > integration-tests/src/codegen/polkadot.rs`
#[rustfmt::skip]
#[allow(clippy::all)]
mod polkadot;
File diff suppressed because it is too large.
+3 -1
@@ -265,7 +265,9 @@ async fn transfer_implicit_subscription() {
#[tokio::test]
async fn constant_existential_deposit() {
let cxt = test_context().await;
-let balances_metadata = cxt.client().metadata().pallet("Balances").unwrap();
+let locked_metadata = cxt.client().metadata();
+let metadata = locked_metadata.read();
+let balances_metadata = metadata.pallet("Balances").unwrap();
let constant_metadata = balances_metadata.constant("ExistentialDeposit").unwrap();
let existential_deposit = u128::decode(&mut &constant_metadata.value[..]).unwrap();
assert_eq!(existential_deposit, 100_000_000_000_000);
+6 -4
@@ -76,8 +76,9 @@ async fn full_metadata_check() {
assert!(api.validate_metadata().is_ok());
// Modify the metadata.
-let mut metadata: RuntimeMetadataV14 =
-    api.client.metadata().runtime_metadata().clone();
+let locked_client_metadata = api.client.metadata();
+let client_metadata = locked_client_metadata.read();
+let mut metadata: RuntimeMetadataV14 = client_metadata.runtime_metadata().clone();
metadata.pallets[0].name = "NewPallet".to_string();
let new_api = metadata_to_api(metadata, &cxt).await;
@@ -99,8 +100,9 @@ async fn constants_check() {
assert!(cxt.api.constants().balances().existential_deposit().is_ok());
// Modify the metadata.
-let mut metadata: RuntimeMetadataV14 =
-    api.client.metadata().runtime_metadata().clone();
+let locked_client_metadata = api.client.metadata();
+let client_metadata = locked_client_metadata.read();
+let mut metadata: RuntimeMetadataV14 = client_metadata.runtime_metadata().clone();
let mut existential = metadata
.pallets
+39 -15
@@ -35,6 +35,7 @@ use crate::{
},
storage::StorageClient,
transaction::TransactionProgress,
+updates::UpdateClient,
Call,
Config,
Encoded,
@@ -46,6 +47,8 @@ use codec::{
Encode,
};
use derivative::Derivative;
+use parking_lot::RwLock;
+use std::sync::Arc;
/// ClientBuilder for constructing a Client.
#[derive(Default)]
@@ -119,9 +122,9 @@ impl ClientBuilder {
Ok(Client {
rpc,
genesis_hash: genesis_hash?,
-metadata,
+metadata: Arc::new(RwLock::new(metadata)),
properties: properties.unwrap_or_else(|_| Default::default()),
-runtime_version: runtime_version?,
+runtime_version: Arc::new(RwLock::new(runtime_version?)),
iter_page_size: self.page_size.unwrap_or(10),
})
}
@@ -133,9 +136,9 @@ impl ClientBuilder {
pub struct Client<T: Config> {
rpc: Rpc<T>,
genesis_hash: T::Hash,
-metadata: Metadata,
+metadata: Arc<RwLock<Metadata>>,
properties: SystemProperties,
-runtime_version: RuntimeVersion,
+runtime_version: Arc<RwLock<RuntimeVersion>>,
iter_page_size: u32,
}
@@ -160,8 +163,8 @@ impl<T: Config> Client<T> {
}
/// Returns the chain metadata.
-pub fn metadata(&self) -> &Metadata {
-    &self.metadata
+pub fn metadata(&self) -> Arc<RwLock<Metadata>> {
+    Arc::clone(&self.metadata)
}
/// Returns the properties defined in the chain spec as a JSON object.
@@ -183,7 +186,16 @@ impl<T: Config> Client<T> {
/// Create a client for accessing runtime storage
pub fn storage(&self) -> StorageClient<T> {
-StorageClient::new(&self.rpc, &self.metadata, self.iter_page_size)
+StorageClient::new(&self.rpc, self.metadata(), self.iter_page_size)
}
+/// Create a wrapper for performing runtime updates on this client.
+pub fn updates(&self) -> UpdateClient<T> {
+    UpdateClient::new(
+        self.rpc.clone(),
+        self.metadata(),
+        self.runtime_version.clone(),
+    )
+}
/// Convert the client to a runtime api wrapper for custom runtime access.
@@ -193,6 +205,11 @@ impl<T: Config> Client<T> {
pub fn to_runtime_api<R: From<Self>>(self) -> R {
self.into()
}
+/// Returns the client's Runtime Version.
+pub fn runtime_version(&self) -> Arc<RwLock<RuntimeVersion>> {
+    Arc::clone(&self.runtime_version)
+}
}
/// A constructed call ready to be signed and submitted.
@@ -312,7 +329,9 @@ where
// 2. SCALE encode call data to bytes (pallet u8, call u8, call params).
let call_data = {
let mut bytes = Vec::new();
-let pallet = self.client.metadata().pallet(C::PALLET)?;
+let locked_metadata = self.client.metadata();
+let metadata = locked_metadata.read();
+let pallet = metadata.pallet(C::PALLET)?;
bytes.push(pallet.index());
bytes.push(pallet.call_index::<C>()?);
self.call.encode_to(&mut bytes);
@@ -320,13 +339,18 @@ where
};
// 3. Construct our custom additional/extra params.
-let additional_and_extra_params = X::new(
-    self.client.runtime_version.spec_version,
-    self.client.runtime_version.transaction_version,
-    account_nonce,
-    self.client.genesis_hash,
-    other_params,
-);
+let additional_and_extra_params = {
+    // Obtain spec version and transaction version from the runtime version of the client.
+    let locked_runtime = self.client.runtime_version();
+    let runtime = locked_runtime.read();
+    X::new(
+        runtime.spec_version,
+        runtime.transaction_version,
+        account_nonce,
+        self.client.genesis_hash,
+        other_params,
+    )
+};
// 4. Construct signature. This is compatible with the Encode impl
// for SignedPayload (which is this payload of bytes that we'd like)
+2 -2
@@ -165,7 +165,7 @@ pub struct EventSubscription<'a, Sub, T: Config, Evs: 'static> {
#[derivative(Debug = "ignore")]
at: Option<
std::pin::Pin<
-Box<dyn Future<Output = Result<Events<'a, T, Evs>, BasicError>> + Send + 'a>,
+Box<dyn Future<Output = Result<Events<T, Evs>, BasicError>> + Send + 'a>,
>,
>,
_event_type: std::marker::PhantomData<Evs>,
@@ -222,7 +222,7 @@ where
Sub: Stream<Item = Result<T::Header, E>> + Unpin + 'a,
E: Into<BasicError>,
{
-type Item = Result<Events<'a, T, Evs>, BasicError>;
+type Item = Result<Events<T, Evs>, BasicError>;
fn poll_next(
mut self: std::pin::Pin<&mut Self>,
+37 -22
@@ -32,11 +32,13 @@ use codec::{
Input,
};
use derivative::Derivative;
+use parking_lot::RwLock;
use sp_core::{
storage::StorageKey,
twox_128,
Bytes,
};
+use std::sync::Arc;
/// Obtain events at some block hash. The generic parameter is what we
/// will attempt to decode each event into if using [`Events::iter()`],
@@ -50,7 +52,7 @@ use sp_core::{
pub async fn at<T: Config, Evs: Decode>(
client: &'_ Client<T>,
block_hash: T::Hash,
-) -> Result<Events<'_, T, Evs>, BasicError> {
+) -> Result<Events<T, Evs>, BasicError> {
let mut event_bytes = client
.rpc()
.storage(&system_events_key(), Some(block_hash))
@@ -90,8 +92,8 @@ fn system_events_key() -> StorageKey {
/// information needed to decode and iterate over them.
#[derive(Derivative)]
#[derivative(Debug(bound = ""))]
-pub struct Events<'a, T: Config, Evs> {
-    metadata: &'a Metadata,
+pub struct Events<T: Config, Evs> {
+    metadata: Arc<RwLock<Metadata>>,
block_hash: T::Hash,
// Note; raw event bytes are prefixed with a Compact<u32> containing
// the number of events to be decoded. We should have stripped that off
@@ -101,7 +103,7 @@ pub struct Events<'a, T: Config, Evs> {
_event_type: std::marker::PhantomData<Evs>,
}
-impl<'a, T: Config, Evs: Decode> Events<'a, T, Evs> {
+impl<'a, T: Config, Evs: Decode> Events<T, Evs> {
/// The number of events.
pub fn len(&self) -> u32 {
self.num_events
@@ -183,6 +185,11 @@ impl<'a, T: Config, Evs: Decode> Events<'a, T, Evs> {
) -> impl Iterator<Item = Result<RawEventDetails, BasicError>> + '_ {
let event_bytes = &self.event_bytes;
+let metadata = {
+    let metadata = self.metadata.read();
+    metadata.clone()
+};
let mut pos = 0;
let mut index = 0;
std::iter::from_fn(move || {
@@ -192,7 +199,7 @@ impl<'a, T: Config, Evs: Decode> Events<'a, T, Evs> {
if start_len == 0 || self.num_events == index {
None
} else {
-match decode_raw_event_details::<T>(self.metadata, index, cursor) {
+match decode_raw_event_details::<T>(&metadata, index, cursor) {
Ok(raw_event) => {
// Skip over decoded bytes in next iteration:
pos += start_len - cursor.len();
@@ -228,6 +235,11 @@ impl<'a, T: Config, Evs: Decode> Events<'a, T, Evs> {
) -> impl Iterator<Item = Result<RawEventDetails, BasicError>> + 'a {
let mut pos = 0;
let mut index = 0;
+let metadata = {
+    let metadata = self.metadata.read();
+    metadata.clone()
+};
std::iter::from_fn(move || {
let cursor = &mut &self.event_bytes[pos..];
let start_len = cursor.len();
@@ -235,7 +247,7 @@ impl<'a, T: Config, Evs: Decode> Events<'a, T, Evs> {
if start_len == 0 || self.num_events == index {
None
} else {
-match decode_raw_event_details::<T>(self.metadata, index, cursor) {
+match decode_raw_event_details::<T>(&metadata, index, cursor) {
Ok(raw_event) => {
// Skip over decoded bytes in next iteration:
pos += start_len - cursor.len();
@@ -468,9 +480,9 @@ pub(crate) mod test_utils {
/// Build an `Events` object for test purposes, based on the details provided,
/// and with a default block hash.
pub fn events<E: Decode + Encode>(
-metadata: &'_ Metadata,
+metadata: Arc<RwLock<Metadata>>,
event_records: Vec<EventRecord<E>>,
-) -> Events<'_, DefaultConfig, AllEvents<E>> {
+) -> Events<DefaultConfig, AllEvents<E>> {
let num_events = event_records.len() as u32;
let mut event_bytes = Vec::new();
for ev in event_records {
@@ -482,10 +494,10 @@ pub(crate) mod test_utils {
/// Much like [`events`], but takes pre-encoded events and event count, so that we can
/// mess with the bytes in tests if we need to.
pub fn events_raw<E: Decode + Encode>(
-metadata: &'_ Metadata,
+metadata: Arc<RwLock<Metadata>>,
event_bytes: Vec<u8>,
num_events: u32,
-) -> Events<'_, DefaultConfig, AllEvents<E>> {
+) -> Events<DefaultConfig, AllEvents<E>> {
Events {
block_hash: <DefaultConfig as Config>::Hash::default(),
event_bytes,
@@ -503,7 +515,6 @@ mod tests {
event_record,
events,
events_raw,
-metadata,
AllEvents,
},
*,
@@ -512,6 +523,11 @@ mod tests {
use codec::Encode;
use scale_info::TypeInfo;
+/// Build a fake wrapped metadata.
+fn metadata<E: TypeInfo + 'static>() -> Arc<RwLock<Metadata>> {
+    Arc::new(RwLock::new(test_utils::metadata::<E>()))
+}
#[test]
fn statically_decode_single_event() {
#[derive(Clone, Debug, PartialEq, Decode, Encode, TypeInfo)]
@@ -521,11 +537,10 @@ mod tests {
// Create fake metadata that knows about our single event, above:
let metadata = metadata::<Event>();
// Encode our events in the format we expect back from a node, and
-// construst an Events object to iterate them:
+// construct an Events object to iterate them:
let events = events::<Event>(
-&metadata,
+metadata,
vec![event_record(Phase::Finalization, Event::A(1))],
);
@@ -555,7 +570,7 @@ mod tests {
// Encode our events in the format we expect back from a node, and
// construst an Events object to iterate them:
let events = events::<Event>(
-&metadata,
+metadata,
vec![
event_record(Phase::Initialization, Event::A(1)),
event_record(Phase::ApplyExtrinsic(123), Event::B(true)),
@@ -610,7 +625,7 @@ mod tests {
// Encode our events in the format we expect back from a node, and
// construst an Events object to iterate them:
let events = events_raw::<Event>(
-&metadata,
+metadata,
event_bytes,
3, // 2 "good" events, and then it'll hit the naff bytes.
);
@@ -654,7 +669,7 @@ mod tests {
// construst an Events object to iterate them:
let event = Event::A(1);
let events = events::<Event>(
-&metadata,
+metadata,
vec![event_record(Phase::ApplyExtrinsic(123), event)],
);
@@ -699,7 +714,7 @@ mod tests {
let event3 = Event::A(234);
let events = events::<Event>(
-&metadata,
+metadata,
vec![
event_record(Phase::Initialization, event1),
event_record(Phase::ApplyExtrinsic(123), event2),
@@ -773,7 +788,7 @@ mod tests {
// Encode our events in the format we expect back from a node, and
// construst an Events object to iterate them:
let events = events_raw::<Event>(
-&metadata,
+metadata,
event_bytes,
3, // 2 "good" events, and then it'll hit the naff bytes.
);
@@ -831,7 +846,7 @@ mod tests {
// Encode our events in the format we expect back from a node, and
// construst an Events object to iterate them:
let events = events::<Event>(
-&metadata,
+metadata,
vec![event_record(Phase::Finalization, Event::A(1))],
);
@@ -886,7 +901,7 @@ mod tests {
// Encode our events in the format we expect back from a node, and
// construct an Events object to iterate them:
let events = events::<Event>(
-&metadata,
+metadata,
vec![event_record(
Phase::Finalization,
Event::A(CompactWrapper(1)),
@@ -948,7 +963,7 @@ mod tests {
// Encode our events in the format we expect back from a node, and
// construct an Events object to iterate them:
let events = events::<Event>(
-&metadata,
+metadata,
vec![event_record(Phase::Finalization, Event::A(MyType::B))],
);
+17 -16
@@ -70,7 +70,7 @@ impl<'a, Sub: 'a, T: Config, Filter: EventFilter> FilterEvents<'a, Sub, T, Filte
impl<'a, Sub, T, Evs, Filter> Stream for FilterEvents<'a, Sub, T, Filter>
where
-Sub: Stream<Item = Result<Events<'a, T, Evs>, BasicError>> + Unpin + 'a,
+Sub: Stream<Item = Result<Events<T, Evs>, BasicError>> + Unpin + 'a,
T: Config,
Evs: Decode + 'static,
Filter: EventFilter,
@@ -125,7 +125,7 @@ pub trait EventFilter: private::Sealed {
type ReturnType;
/// Filter the events based on the type implementing this trait.
fn filter<'a, T: Config, Evs: Decode + 'static>(
-events: Events<'a, T, Evs>,
+events: Events<T, Evs>,
) -> Box<
dyn Iterator<
Item = Result<
@@ -150,7 +150,7 @@ impl<Ev: Event> private::Sealed for (Ev,) {}
impl<Ev: Event> EventFilter for (Ev,) {
type ReturnType = Ev;
fn filter<'a, T: Config, Evs: Decode + 'static>(
-events: Events<'a, T, Evs>,
+events: Events<T, Evs>,
) -> Box<
dyn Iterator<Item = Result<FilteredEventDetails<T::Hash, Ev>, BasicError>>
+ Send
@@ -192,7 +192,7 @@ macro_rules! impl_event_filter {
impl <$($ty: Event),+> EventFilter for ( $($ty,)+ ) {
type ReturnType = ( $(Option<$ty>,)+ );
fn filter<'a, T: Config, Evs: Decode + 'static>(
-events: Events<'a, T, Evs>
+events: Events<T, Evs>
) -> Box<dyn Iterator<Item=Result<FilteredEventDetails<T::Hash,Self::ReturnType>, BasicError>> + Send + 'a> {
let block_hash = events.block_hash();
let mut iter = events.into_iter_raw();
@@ -259,7 +259,9 @@ mod test {
Stream,
StreamExt,
};
+use parking_lot::RwLock;
use scale_info::TypeInfo;
+use std::sync::Arc;
// Some pretend events in a pallet
#[derive(Clone, Debug, PartialEq, Decode, Encode, TypeInfo)]
@@ -295,13 +297,12 @@ mod test {
// A stream of fake events for us to try filtering on.
fn events_stream(
-metadata: &'_ Metadata,
-) -> impl Stream<
-    Item = Result<Events<'_, DefaultConfig, AllEvents<PalletEvents>>, BasicError>,
-> {
+metadata: Arc<RwLock<Metadata>>,
+) -> impl Stream<Item = Result<Events<DefaultConfig, AllEvents<PalletEvents>>, BasicError>>
+{
stream::iter(vec![
events::<PalletEvents>(
-metadata,
+metadata.clone(),
vec![
event_record(Phase::Initialization, PalletEvents::A(EventA(1))),
event_record(Phase::ApplyExtrinsic(0), PalletEvents::B(EventB(true))),
@@ -309,7 +310,7 @@ mod test {
],
),
events::<PalletEvents>(
-metadata,
+metadata.clone(),
vec![event_record(
Phase::ApplyExtrinsic(1),
PalletEvents::B(EventB(false)),
@@ -328,11 +329,11 @@ mod test {
#[tokio::test]
async fn filter_one_event_from_stream() {
-let metadata = metadata::<PalletEvents>();
+let metadata = Arc::new(RwLock::new(metadata::<PalletEvents>()));
// Filter out fake event stream to select events matching `EventA` only.
let actual: Vec<_> =
-FilterEvents::<_, DefaultConfig, (EventA,)>::new(events_stream(&metadata))
+FilterEvents::<_, DefaultConfig, (EventA,)>::new(events_stream(metadata))
.map(|e| e.unwrap())
.collect()
.await;
@@ -360,11 +361,11 @@ mod test {
#[tokio::test]
async fn filter_some_events_from_stream() {
-let metadata = metadata::<PalletEvents>();
+let metadata = Arc::new(RwLock::new(metadata::<PalletEvents>()));
// Filter out fake event stream to select events matching `EventA` or `EventB`.
let actual: Vec<_> = FilterEvents::<_, DefaultConfig, (EventA, EventB)>::new(
-events_stream(&metadata),
+events_stream(metadata),
)
.map(|e| e.unwrap())
.collect()
@@ -408,11 +409,11 @@ mod test {
#[tokio::test]
async fn filter_no_events_from_stream() {
-let metadata = metadata::<PalletEvents>();
+let metadata = Arc::new(RwLock::new(metadata::<PalletEvents>()));
// Filter out fake event stream to select events matching `EventC` (none exist).
let actual: Vec<_> =
-FilterEvents::<_, DefaultConfig, (EventC,)>::new(events_stream(&metadata))
+FilterEvents::<_, DefaultConfig, (EventC,)>::new(events_stream(metadata))
.map(|e| e.unwrap())
.collect()
.await;
+1
@@ -65,6 +65,7 @@ mod metadata;
pub mod rpc;
pub mod storage;
mod transaction;
+pub mod updates;
pub use crate::{
client::{
+11 -16
@@ -25,6 +25,7 @@ use frame_metadata::{
StorageEntryMetadata,
META_RESERVED,
};
+use parking_lot::RwLock;
use scale_info::{
form::PortableForm,
Type,
@@ -33,10 +34,7 @@ use scale_info::{
use std::{
collections::HashMap,
convert::TryFrom,
-sync::{
-    Arc,
-    RwLock,
-},
+sync::Arc,
};
/// Metadata error.
@@ -83,12 +81,6 @@ pub enum MetadataError {
IncompatibleMetadata,
}
-/// Runtime metadata.
-#[derive(Clone, Debug)]
-pub struct Metadata {
-    inner: Arc<MetadataInner>,
-}
// We hide the innards behind an Arc so that it's easy to clone and share.
#[derive(Debug)]
struct MetadataInner {
@@ -105,6 +97,12 @@ struct MetadataInner {
cached_storage_hashes: HashCache,
}
+/// Runtime metadata.
+#[derive(Clone, Debug)]
+pub struct Metadata {
+    inner: Arc<MetadataInner>,
+}
impl Metadata {
/// Returns a reference to [`PalletMetadata`].
pub fn pallet(&self, name: &'static str) -> Result<&PalletMetadata, MetadataError> {
@@ -217,7 +215,7 @@ impl Metadata {
/// Obtain the unique hash for this metadata.
pub fn metadata_hash<T: AsRef<str>>(&self, pallets: &[T]) -> [u8; 32] {
-if let Some(hash) = *self.inner.cached_metadata_hash.read().unwrap() {
+if let Some(hash) = *self.inner.cached_metadata_hash.read() {
return hash
}
@@ -225,7 +223,7 @@ impl Metadata {
self.runtime_metadata(),
pallets,
);
*self.inner.cached_metadata_hash.write().unwrap() = Some(hash);
*self.inner.cached_metadata_hash.write() = Some(hash);
hash
}
@@ -547,10 +545,7 @@ mod tests {
let hash = metadata.metadata_hash(&["System"]);
// Check inner caching.
assert_eq!(
metadata.inner.cached_metadata_hash.read().unwrap().unwrap(),
hash
);
assert_eq!(metadata.inner.cached_metadata_hash.read().unwrap(), hash);
// The cache `metadata.inner.cached_metadata_hash` is already populated from
// the previous call. Therefore, changing the pallets argument must not
@@ -480,6 +480,21 @@ impl<T: Config> Rpc<T> {
Ok(subscription)
}
/// Subscribe to runtime version updates that produce changes in the metadata.
pub async fn subscribe_runtime_version(
&self,
) -> Result<Subscription<RuntimeVersion>, BasicError> {
let subscription = self
.client
.subscribe(
"state_subscribeRuntimeVersion",
rpc_params![],
"state_unsubscribeRuntimeVersion",
)
.await?;
Ok(subscription)
}
/// Create and submit an extrinsic, returning the corresponding Hash if successful
pub async fn submit_extrinsic<X: Encode>(
&self,
@@ -20,13 +20,17 @@ use codec::{
Decode,
Encode,
};
use parking_lot::RwLock;
use sp_core::storage::{
StorageChangeSet,
StorageData,
StorageKey,
};
pub use sp_runtime::traits::SignedExtension;
use std::marker::PhantomData;
use std::{
marker::PhantomData,
sync::Arc,
};
use crate::{
error::BasicError,
@@ -133,7 +137,7 @@ impl StorageMapKey {
/// Client for querying runtime storage.
pub struct StorageClient<'a, T: Config> {
rpc: &'a Rpc<T>,
metadata: &'a Metadata,
metadata: Arc<RwLock<Metadata>>,
iter_page_size: u32,
}
@@ -141,7 +145,7 @@ impl<'a, T: Config> Clone for StorageClient<'a, T> {
fn clone(&self) -> Self {
Self {
rpc: self.rpc,
metadata: self.metadata,
metadata: Arc::clone(&self.metadata),
iter_page_size: self.iter_page_size,
}
}
@@ -149,7 +153,11 @@ impl<'a, T: Config> Clone for StorageClient<'a, T> {
impl<'a, T: Config> StorageClient<'a, T> {
/// Create a new [`StorageClient`]
pub fn new(rpc: &'a Rpc<T>, metadata: &'a Metadata, iter_page_size: u32) -> Self {
pub fn new(
rpc: &'a Rpc<T>,
metadata: Arc<RwLock<Metadata>>,
iter_page_size: u32,
) -> Self {
Self {
rpc,
metadata,
@@ -199,7 +207,8 @@ impl<'a, T: Config> StorageClient<'a, T> {
if let Some(data) = self.fetch(store, hash).await? {
Ok(data)
} else {
let pallet_metadata = self.metadata.pallet(F::PALLET)?;
let metadata = self.metadata.read();
let pallet_metadata = metadata.pallet(F::PALLET)?;
let storage_metadata = pallet_metadata.storage(F::STORAGE)?;
let default = Decode::decode(&mut &storage_metadata.default[..])
.map_err(MetadataError::DefaultError)?;
@@ -165,7 +165,7 @@ impl<'client, T: Config, E: Decode + HasModuleError, Evs: Decode>
/// level [`TransactionProgress::next_item()`] API if you'd like to handle these statuses yourself.
pub async fn wait_for_finalized_success(
self,
) -> Result<TransactionEvents<'client, T, Evs>, Error<E>> {
) -> Result<TransactionEvents<T, Evs>, Error<E>> {
let evs = self.wait_for_finalized().await?.wait_for_success().await?;
Ok(evs)
}
@@ -379,9 +379,7 @@ impl<'client, T: Config, E: Decode + HasModuleError, Evs: Decode>
///
/// **Note:** This has to download block details from the node and decode events
/// from them.
pub async fn wait_for_success(
&self,
) -> Result<TransactionEvents<'client, T, Evs>, Error<E>> {
pub async fn wait_for_success(&self) -> Result<TransactionEvents<T, Evs>, Error<E>> {
let events = self.fetch_events().await?;
// Try to find any errors; return the first one we encounter.
@@ -391,9 +389,9 @@ impl<'client, T: Config, E: Decode + HasModuleError, Evs: Decode>
let dispatch_error = E::decode(&mut &*ev.data)?;
if let Some(error_data) = dispatch_error.module_error_data() {
// The error index is the first byte of the error array.
let details = self
.client
.metadata()
let locked_metadata = self.client.metadata();
let metadata = locked_metadata.read();
let details = metadata
.error(error_data.pallet_index, error_data.error_index())?;
return Err(Error::Module(ModuleError {
pallet: details.pallet().to_string(),
@@ -416,9 +414,7 @@ impl<'client, T: Config, E: Decode + HasModuleError, Evs: Decode>
///
/// **Note:** This has to download block details from the node and decode events
/// from them.
pub async fn fetch_events(
&self,
) -> Result<TransactionEvents<'client, T, Evs>, BasicError> {
pub async fn fetch_events(&self) -> Result<TransactionEvents<T, Evs>, BasicError> {
let block = self
.client
.rpc()
@@ -450,13 +446,13 @@ impl<'client, T: Config, E: Decode + HasModuleError, Evs: Decode>
/// We can iterate over the events, or look for a specific one.
#[derive(Derivative)]
#[derivative(Debug(bound = ""))]
pub struct TransactionEvents<'client, T: Config, Evs: Decode> {
pub struct TransactionEvents<T: Config, Evs: Decode> {
ext_hash: T::Hash,
ext_idx: u32,
events: Events<'client, T, Evs>,
events: Events<T, Evs>,
}
impl<'client, T: Config, Evs: Decode> TransactionEvents<'client, T, Evs> {
impl<T: Config, Evs: Decode> TransactionEvents<T, Evs> {
/// Return the hash of the block that the transaction has made it into.
pub fn block_hash(&self) -> T::Hash {
self.events.block_hash()
@@ -468,7 +464,7 @@ impl<'client, T: Config, Evs: Decode> TransactionEvents<'client, T, Evs> {
}
/// Return all of the events in the block that the transaction made it into.
pub fn all_events_in_block(&self) -> &events::Events<'client, T, Evs> {
pub fn all_events_in_block(&self) -> &events::Events<T, Evs> {
&self.events
}
@@ -0,0 +1,106 @@
// Copyright 2019-2022 Parity Technologies (UK) Ltd.
// This file is part of subxt.
//
// subxt is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// subxt is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with subxt. If not, see <http://www.gnu.org/licenses/>.
//! For performing runtime updates.
use crate::{
rpc::{
Rpc,
RuntimeVersion,
},
BasicError,
Config,
Metadata,
};
use parking_lot::RwLock;
use std::sync::Arc;
/// Client wrapper for performing runtime updates.
pub struct UpdateClient<T: Config> {
rpc: Rpc<T>,
metadata: Arc<RwLock<Metadata>>,
runtime_version: Arc<RwLock<RuntimeVersion>>,
}
impl<T: Config> UpdateClient<T> {
/// Create a new [`UpdateClient`].
pub fn new(
rpc: Rpc<T>,
metadata: Arc<RwLock<Metadata>>,
runtime_version: Arc<RwLock<RuntimeVersion>>,
) -> Self {
Self {
rpc,
metadata,
runtime_version,
}
}
/// Performs runtime updates indefinitely unless encountering an error.
///
/// *Note:* This should be called from a dedicated background task.
pub async fn perform_runtime_updates(&self) -> Result<(), BasicError> {
// Subscribe to runtime version updates to detect changes in the node's runtime.
let mut update_subscription = self.rpc.subscribe_runtime_version().await?;
while let Some(update_runtime_version) = update_subscription.next().await {
// The Runtime Version obtained via subscription.
let update_runtime_version = update_runtime_version?;
// The node sends its current runtime version immediately after the
// subscription is established. This avoids races between:
// - starting the subxt::Client (fetching runtime version / metadata)
// - subscribing to the runtime updates
//
// Because of that initial message, update the Runtime Version on the
// client if and only if the provided runtime version differs from what
// the client currently has stored.
{
// The Runtime Version of the client, as set during building the client
// or during updates.
let runtime_version = self.runtime_version.read();
if runtime_version.spec_version == update_runtime_version.spec_version {
log::debug!(
"Runtime update not performed for spec_version={}, client has spec_version={}",
update_runtime_version.spec_version, runtime_version.spec_version
);
continue
}
}
// Update the RuntimeVersion first.
{
let mut runtime_version = self.runtime_version.write();
// Update both the `RuntimeVersion` and `Metadata` of the client.
log::info!(
"Performing runtime update from {} to {}",
runtime_version.spec_version,
update_runtime_version.spec_version,
);
*runtime_version = update_runtime_version;
}
// Fetch the node's new runtime metadata.
let update_metadata = self.rpc.metadata().await?;
log::debug!("Performing metadata update");
let mut metadata = self.metadata.write();
*metadata = update_metadata;
log::debug!("Runtime update completed");
}
Ok(())
}
}