Merge remote-tracking branch 'origin/master' into na-jsonrpsee-core-client

Niklas
2022-02-02 19:16:59 +01:00
71 changed files with 15630 additions and 7119 deletions
+69
@@ -6,6 +6,75 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
## [Unreleased]
## [0.16.0] - 2022-02-01
*Note*: This is a significant release which introduces support for V14 metadata and macro based codegen, as well as making many breaking changes to the API.
### Changed
- Log debug message for JSON-RPC response ([#415](https://github.com/paritytech/subxt/pull/415))
- Only convert struct names to camel case for Call variant structs ([#412](https://github.com/paritytech/subxt/pull/412))
- Parameterize AccountData ([#409](https://github.com/paritytech/subxt/pull/409))
- Allow decoding Events containing BitVecs ([#408](https://github.com/paritytech/subxt/pull/408))
- Custom derive for cli ([#407](https://github.com/paritytech/subxt/pull/407))
- make storage-n-map fields public too ([#404](https://github.com/paritytech/subxt/pull/404))
- add constants api to codegen ([#402](https://github.com/paritytech/subxt/pull/402))
- Expose transaction::TransactionProgress as public ([#401](https://github.com/paritytech/subxt/pull/401))
- add interbtc-clients to real world usage section ([#397](https://github.com/paritytech/subxt/pull/397))
- Make own version of RuntimeVersion to avoid mismatches ([#395](https://github.com/paritytech/subxt/pull/395))
- Use the generated DispatchError instead of the hardcoded Substrate one ([#394](https://github.com/paritytech/subxt/pull/394))
- Remove bounds on Config trait that aren't strictly necessary ([#389](https://github.com/paritytech/subxt/pull/389))
- add crunch to readme ([#388](https://github.com/paritytech/subxt/pull/388))
- fix remote example ([#386](https://github.com/paritytech/subxt/pull/386))
- fetch system chain, name and version ([#385](https://github.com/paritytech/subxt/pull/385))
- Fix compact event field decoding ([#384](https://github.com/paritytech/subxt/pull/384))
- fix: use index override when decoding enums in events ([#382](https://github.com/paritytech/subxt/pull/382))
- Update to jsonrpsee 0.7 and impl Stream on TransactionProgress ([#380](https://github.com/paritytech/subxt/pull/380))
- Add links to projects using subxt ([#376](https://github.com/paritytech/subxt/pull/376))
- Use released substrate dependencies ([#375](https://github.com/paritytech/subxt/pull/375))
- Configurable Config and Extra types ([#373](https://github.com/paritytech/subxt/pull/373))
- Implement pre_dispatch for SignedExtensions ([#370](https://github.com/paritytech/subxt/pull/370))
- Export TransactionEvents ([#363](https://github.com/paritytech/subxt/pull/363))
- Rebuild test-runtime if substrate binary is updated ([#362](https://github.com/paritytech/subxt/pull/362))
- Expand the subscribe_and_watch example ([#361](https://github.com/paritytech/subxt/pull/361))
- Add TooManyConsumers variant to track latest sp-runtime addition ([#360](https://github.com/paritytech/subxt/pull/360))
- Implement new API for sign_and_submit_then_watch ([#354](https://github.com/paritytech/subxt/pull/354))
- Simpler dependencies ([#353](https://github.com/paritytech/subxt/pull/353))
- Refactor type generation, remove code duplication ([#352](https://github.com/paritytech/subxt/pull/352))
- Make system properties an arbitrary JSON object, plus CI fixes ([#349](https://github.com/paritytech/subxt/pull/349))
- Fix a couple of CI niggles ([#344](https://github.com/paritytech/subxt/pull/344))
- Add timestamp pallet test ([#340](https://github.com/paritytech/subxt/pull/340))
- Add nightly CI check against latest substrate. ([#335](https://github.com/paritytech/subxt/pull/335))
- Ensure metadata is in sync with running node during tests ([#333](https://github.com/paritytech/subxt/pull/333))
- Update to jsonrpsee 0.5.1 ([#332](https://github.com/paritytech/subxt/pull/332))
- Update substrate and hardcoded default ChargeAssetTxPayment extension ([#330](https://github.com/paritytech/subxt/pull/330))
- codegen: fix compact unnamed fields ([#327](https://github.com/paritytech/subxt/pull/327))
- Check docs and run clippy on PRs ([#326](https://github.com/paritytech/subxt/pull/326))
- Additional parameters for SignedExtra ([#322](https://github.com/paritytech/subxt/pull/322))
- fix: also processes initialize and finalize events in event subscription ([#321](https://github.com/paritytech/subxt/pull/321))
- Release initial versions of subxt-codegen and subxt-cli ([#320](https://github.com/paritytech/subxt/pull/320))
- Add some basic usage docs to README. ([#319](https://github.com/paritytech/subxt/pull/319))
- Update jsonrpsee ([#317](https://github.com/paritytech/subxt/pull/317))
- Add missing cargo metadata fields for new crates ([#311](https://github.com/paritytech/subxt/pull/311))
- fix: keep processing a block's events after encountering a dispatch error ([#310](https://github.com/paritytech/subxt/pull/310))
- Codegen: enum variant indices ([#308](https://github.com/paritytech/subxt/pull/308))
- fix extrinsics retracted ([#307](https://github.com/paritytech/subxt/pull/307))
- Add utility pallet tests ([#300](https://github.com/paritytech/subxt/pull/300))
- fix metadata constants ([#299](https://github.com/paritytech/subxt/pull/299))
- Generate runtime API from metadata ([#294](https://github.com/paritytech/subxt/pull/294))
- Add NextKeys and QueuedKeys for session module ([#291](https://github.com/paritytech/subxt/pull/291))
- deps: update jsonrpsee 0.3.0 ([#289](https://github.com/paritytech/subxt/pull/289))
- deps: update jsonrpsee 0.2.0 ([#285](https://github.com/paritytech/subxt/pull/285))
- deps: Reorg the order of deps ([#284](https://github.com/paritytech/subxt/pull/284))
- Expose the rpc client in Client ([#267](https://github.com/paritytech/subxt/pull/267))
- update jsonrpsee to 0.2.0-alpha.6 ([#266](https://github.com/paritytech/subxt/pull/266))
- Remove funty pin, upgrade codec ([#265](https://github.com/paritytech/subxt/pull/265))
- Use async-trait ([#264](https://github.com/paritytech/subxt/pull/264))
- [jsonrpsee http client]: support tokio1 & tokio02. ([#263](https://github.com/paritytech/subxt/pull/263))
- impl `From<Arc<WsClient>>` and `From<Arc<HttpClient>>` ([#257](https://github.com/paritytech/subxt/pull/257))
- update jsonrpsee ([#251](https://github.com/paritytech/subxt/pull/251))
- return none if subscription returns early ([#250](https://github.com/paritytech/subxt/pull/250))
## [0.15.0] - 2021-03-15
### Added
+10 -50
@@ -1,52 +1,12 @@
[workspace]
members = [".", "cli", "codegen", "macro", "test-runtime", "client"]
[package]
name = "subxt"
version = "0.15.0"
authors = ["Parity Technologies <admin@parity.io>"]
edition = "2021"
license = "GPL-3.0"
readme = "README.md"
repository = "https://github.com/paritytech/subxt"
documentation = "https://docs.rs/subxt"
homepage = "https://www.parity.io/"
description = "Submit extrinsics (transactions) to a substrate node via RPC"
keywords = ["parity", "substrate", "blockchain"]
include = ["Cargo.toml", "src/**/*.rs", "README.md", "LICENSE"]
[dependencies]
async-trait = "0.1.49"
bitvec = { version = "0.20.1", default-features = false, features = ["alloc"] }
codec = { package = "parity-scale-codec", version = "2", default-features = false, features = ["derive", "full", "bit-vec"] }
scale-info = { version = "1.0.0", features = ["bit-vec"] }
futures = "0.3.13"
hex = "0.4.3"
jsonrpsee = { version = "0.7.0", features = ["macros", "async-client", "client-ws-transport"] }
log = "0.4.14"
num-traits = { version = "0.2.14", default-features = false }
serde = { version = "1.0.124", features = ["derive"] }
serde_json = "1.0.64"
thiserror = "1.0.24"
subxt-macro = { version = "0.1.0", path = "macro" }
sp-core = { git = "https://github.com/paritytech/substrate/", branch = "master", default-features = false }
sp-runtime = { git = "https://github.com/paritytech/substrate/", branch = "master", default-features = false }
sp-version = { package = "sp-version", git = "https://github.com/paritytech/substrate/", branch = "master" }
frame-metadata = "14.0.0"
[dev-dependencies]
sp-arithmetic = { git = "https://github.com/paritytech/substrate/", branch = "master", default-features = false }
assert_matches = "1.5.0"
async-std = { version = "1.9.0", features = ["attributes", "tokio1"] }
env_logger = "0.8.3"
tempdir = "0.3.7"
wabt = "0.10.0"
which = "4.0.2"
test-runtime = { path = "test-runtime" }
sp-keyring = { package = "sp-keyring", git = "https://github.com/paritytech/substrate/", branch = "master" }
members = [
"cli",
"codegen",
"examples",
"macro",
"subxt",
"test-runtime",
# TODO(niklasad1): remove to separate repo
"client"
]
+1 -1
@@ -1,4 +1,4 @@
// Copyright 2019-2021 Parity Technologies (UK) Ltd.
// Copyright 2019-2022 Parity Technologies (UK) Ltd.
// This file is part of subxt.
//
// subxt is free software: you can redistribute it and/or modify
+9 -5
@@ -8,6 +8,8 @@ A library to **sub**mit e**xt**rinsics to a [substrate](https://github.com/parit
## Usage
Take a look in the [examples](./examples/examples) folder for various `subxt` usage examples.
### Downloading metadata from a Substrate node
Use the [`subxt-cli`](./cli) tool to download the metadata for your target runtime from a node.
@@ -26,7 +28,7 @@ a different node then the `metadata` command accepts a `--url` argument.
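The actual command block for this step is collapsed out of the diff; for reference, the usual flow is the following (the output filename is just an example; `subxt metadata` targets a locally running node unless `--url` is given):

```
cargo install subxt-cli
# Fetch the metadata from a local node; pass --url to target a different one.
subxt metadata -f bytes > metadata.scale
```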
### Generating the runtime API from the downloaded metadata
Declare a module and decorate it with the `subxt` attribute which points at the downloaded metadata for the
target runtime:
@@ -34,20 +36,20 @@ target runtime:
```rust
#[subxt::subxt(runtime_metadata_path = "metadata.scale")]
pub mod node_runtime { }
```
**Important:** `runtime_metadata_path` resolves to a path relative to the directory where your crate's `Cargo.toml`
resides ([`CARGO_MANIFEST_DIR`](https://doc.rust-lang.org/cargo/reference/environment-variables.html)), *not* relative to the source file.
### Initializing the API client
API is still a work in progress. See [examples](./examples) for the current usage.
API is still a work in progress. See [examples](./examples/examples) for the current usage.
### Querying Storage
API is still a work in progress. See [tests](./tests/integration/frame) for the current usage.
API is still a work in progress. See [tests](./subxt/tests/integration/frame) for the current usage.
### Submitting Extrinsics
API is still a work in progress. See [examples](./examples/polkadot_balance_transfer.rs) for the current usage.
API is still a work in progress. See [examples](./examples/examples/polkadot_balance_transfer.rs) for the current usage.
## Integration Testing
@@ -67,6 +69,8 @@ Please add your project to this list via a PR.
- [cargo-contract](https://github.com/paritytech/cargo-contract/pull/79) CLI for interacting with Wasm smart contracts.
- [xcm-cli](https://github.com/ascjones/xcm-cli) CLI for submitting XCM messages.
- [phala-pherry](https://github.com/Phala-Network/phala-blockchain/tree/master/standalone/pherry) The relayer between Phala blockchain and the off-chain Secure workers.
- [crunch](https://github.com/turboflakes/crunch) CLI to claim staking rewards in batch every Era or X hours for substrate-based chains.
- [interbtc-clients](https://github.com/interlay/interbtc-clients) Client implementations for the interBTC parachain; notably the Vault / Relayer and Oracle.
**Alternatives**
+76
@@ -0,0 +1,76 @@
# Release Checklist
These steps assume that you've checked out the Subxt repository and are in the root directory of it.
We also assume that ongoing work is merged directly to the `master` branch.
1. Ensure that everything you'd like to see released is on the `master` branch.
2. Create a release branch off `master`, for example `release-v0.17.0`. Decide how far the version needs to be bumped based
on the changes to date. If unsure what to bump the version to (e.g. is it a major, minor or patch release), check with the
Parity Tools team.
3. Check that you're happy with the current documentation.
```
cargo doc --open --all-features
```
CI checks for broken internal links at the moment. Optionally you can also confirm that any external links
are still valid like so:
```
cargo install cargo-deadlinks
cargo deadlinks --check-http -- --all-features
```
If there are minor issues with the documentation, they can be fixed in the release branch.
4. Bump the crate version in `Cargo.toml` to whatever was decided in step 2 for `subxt-codegen`, `subxt-macro`, `subxt` and `subxt-cli`.
5. Update `CHANGELOG.md` to reflect the difference between this release and the last. If you're unsure of
what to add, check with the Tools team. See the `CHANGELOG.md` file for details of the format it follows.
Any [closed PRs](https://github.com/paritytech/subxt/pulls?q=is%3Apr+is%3Aclosed) between the last release and
this release branch should be noted.
6. Commit any of the above changes to the release branch and open a PR in GitHub with a base of `master`.
7. Once the branch has been reviewed and passes CI, merge it.
8. Now, we're ready to publish the release to crates.io.
Check out `master`, ensuring we're looking at the latest merge (`git pull`).
The crates in this repository need publishing in a specific order, since they depend on each other.
Additionally, `subxt-macro` has a circular dev dependency on `subxt`, so we use `cargo hack` to remove
dev dependencies (and `--allow-dirty` to ignore the git changes as a result) to publish it.
So, first install `cargo hack` with `cargo install cargo-hack`. Next, you can run something like the following
command to publish each crate in the required order (allowing a little time in between each to let `crates.io`
catch up with what we've published).
```
(cd codegen && cargo publish) && \
sleep 10 && \
(cd macro && cargo hack publish --no-dev-deps --allow-dirty) && \
sleep 10 && \
(cd subxt && cargo publish) && \
sleep 10 && \
(cd cli && cargo publish);
```
If you run into any issues regarding crates not being able to find suitable versions of other `subxt-*` crates,
you may just need to wait a little longer and then run the remaining portion of that command.
9. If the release was successful, tag the commit that we released in the `master` branch with the
version that we just released, for example:
```
git tag -s v0.17.0 # use the version number you've just published to crates.io, not this one
git push --tags
```
Once this is pushed, go along to [the releases page on GitHub](https://github.com/paritytech/subxt/releases)
and draft a new release which points to the tag you just pushed to `master` above. Copy the changelog comments
for the current release into the release description.
+3 -3
@@ -1,6 +1,6 @@
[package]
name = "subxt-cli"
version = "0.2.0"
version = "0.16.0"
authors = ["Parity Technologies <admin@parity.io>"]
edition = "2021"
@@ -16,7 +16,7 @@ path = "src/main.rs"
[dependencies]
# perform subxt codegen
subxt-codegen = { version = "0.2.0", path = "../codegen" }
subxt-codegen = { version = "0.16.0", path = "../codegen" }
# parse command line args
structopt = "0.3.25"
# make the request to a substrate node to get the metadata
@@ -36,4 +36,4 @@ scale = { package = "parity-scale-codec", version = "2.3.0", default-features =
# handle urls to communicate with substrate nodes
url = { version = "2.2.2", features = ["serde"] }
# generate the item mod for codegen
syn = "1.0.80"
+21 -6
@@ -1,4 +1,4 @@
// Copyright 2019-2021 Parity Technologies (UK) Ltd.
// Copyright 2019-2022 Parity Technologies (UK) Ltd.
// This file is part of subxt.
//
// subxt is free software: you can redistribute it and/or modify
@@ -33,6 +33,7 @@ use std::{
path::PathBuf,
};
use structopt::StructOpt;
use subxt_codegen::GeneratedTypeDerives;
/// Utilities for working with substrate metadata for subxt.
#[derive(Debug, StructOpt)]
@@ -70,6 +71,9 @@ enum Command {
/// the path to the encoded metadata file.
#[structopt(short, long, parse(from_os_str))]
file: Option<PathBuf>,
/// Additional derives
#[structopt(long = "derive")]
derives: Vec<String>,
},
}
@@ -102,7 +106,7 @@ fn main() -> color_eyre::Result<()> {
}
}
}
Command::Codegen { url, file } => {
Command::Codegen { url, file, derives } => {
if let Some(file) = file.as_ref() {
if url.is_some() {
eyre::bail!("specify one of `--url` or `--file` but not both")
@@ -111,7 +115,7 @@ fn main() -> color_eyre::Result<()> {
let mut file = fs::File::open(file)?;
let mut bytes = Vec::new();
file.read_to_end(&mut bytes)?;
codegen(&mut &bytes[..])?;
codegen(&mut &bytes[..], derives)?;
return Ok(())
}
@@ -119,7 +123,7 @@ fn main() -> color_eyre::Result<()> {
url::Url::parse("http://localhost:9933").expect("default url is valid")
});
let (_, bytes) = fetch_metadata(&url)?;
codegen(&mut &bytes[..])?;
codegen(&mut &bytes[..], derives)?;
Ok(())
}
}
@@ -145,13 +149,24 @@ fn fetch_metadata(url: &url::Url) -> color_eyre::Result<(String, Vec<u8>)> {
Ok((hex_data, bytes))
}
fn codegen<I: Input>(encoded: &mut I) -> color_eyre::Result<()> {
fn codegen<I: Input>(
encoded: &mut I,
raw_derives: Vec<String>,
) -> color_eyre::Result<()> {
let metadata = <RuntimeMetadataPrefixed as Decode>::decode(encoded)?;
let generator = subxt_codegen::RuntimeGenerator::new(metadata);
let item_mod = syn::parse_quote!(
pub mod api {}
);
let runtime_api = generator.generate_runtime(item_mod, Default::default());
let p = raw_derives
.iter()
.map(|raw| syn::parse_str(raw))
.collect::<Result<Vec<_>, _>>()?;
let mut derives = GeneratedTypeDerives::default();
derives.append(p.into_iter());
let runtime_api = generator.generate_runtime(item_mod, derives);
println!("{}", runtime_api);
Ok(())
}
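With the new `--derive` flag wired through to `codegen`, extra derives can be requested per invocation; a hypothetical run against a previously saved metadata file (the flag repeats, one derive per use):

```
# Add Clone and Eq derives to every generated type.
subxt codegen --file metadata.scale --derive Clone --derive Eq > runtime_api.rs
```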
+4 -4
@@ -17,7 +17,7 @@ keywords = ["parity", "substrate", "blockchain"]
[dependencies]
async-std = { version = "1.8.0", features = ["tokio1"] }
futures = "0.3.9"
jsonrpsee = { version = "0.7.0", features = ["async-client"] }
jsonrpsee = { version = "0.8.0", features = ["async-client"] }
log = "0.4.13"
thiserror = "1.0.23"
serde_json = "1"
@@ -27,7 +27,7 @@ sp-keyring = { git = "https://github.com/paritytech/substrate.git", branch = "ma
sc-network = { git = "https://github.com/paritytech/substrate.git", branch = "master" }
sc-service = { git = "https://github.com/paritytech/substrate.git", branch = "master" }
tokio = { version = "1.10", features = ["rt-multi-thread"] }
tokio = { version = "1.16", features = ["rt-multi-thread"] }
[target.'cfg(target_arch="x86_64")'.dependencies]
sc-service = { git = "https://github.com/paritytech/substrate.git", branch = "master", default-features = false, features = [
@@ -36,9 +36,9 @@ sc-service = { git = "https://github.com/paritytech/substrate.git", branch = "ma
[dev-dependencies]
async-std = { version = "1.8.0", features = ["attributes"] }
env_logger = "0.8.2"
env_logger = "0.9"
tracing-subscriber = { version = "0.3.3", features = ["env-filter"] }
node-cli = { git = "https://github.com/paritytech/substrate.git", branch = "master", default-features = false }
tempdir = "0.3.7"
subxt = { path = ".." }
subxt = { path = "../subxt" }
test-runtime = { path = "../test-runtime" }
+6 -10
@@ -27,13 +27,10 @@ use sp_keyring::AccountKeyring;
use subxt::{
ClientBuilder,
PairSigner,
DefaultConfig,
DefaultExtra
};
use tempdir::TempDir;
use test_runtime::node_runtime::{
self,
system,
DefaultConfig,
};
#[async_std::test]
pub async fn test_embedded_client() {
@@ -74,12 +71,11 @@ pub async fn test_embedded_client() {
let ext_client = ClientBuilder::new()
.set_client(client)
.build::<DefaultConfig>()
.build()
.await
.unwrap();
let api: node_runtime::RuntimeApi<DefaultConfig> =
ext_client.clone().to_runtime_api();
let api: test_runtime::node_runtime::RuntimeApi<DefaultConfig, DefaultExtra<_>> = ext_client.clone().to_runtime_api();
// verify that we can read storage
api.storage()
@@ -88,7 +84,7 @@ pub async fn test_embedded_client() {
.await
.unwrap();
let alice = PairSigner::<DefaultConfig, _>::new(AccountKeyring::Alice.pair());
let alice = PairSigner::new(AccountKeyring::Alice.pair());
let bob_address = AccountKeyring::Bob.to_account_id().into();
// verify that we can call dispatchable functions
@@ -106,5 +102,5 @@ pub async fn test_embedded_client() {
panic!("{:?}", events);
// verify that we receive events
//assert!(success);
// assert!(success);
}
+2 -2
@@ -1,6 +1,6 @@
[package]
name = "subxt-codegen"
version = "0.2.0"
version = "0.16.0"
authors = ["Parity Technologies <admin@parity.io>"]
edition = "2021"
@@ -25,4 +25,4 @@ scale-info = { version = "1.0.0", features = ["bit-vec"] }
[dev-dependencies]
bitvec = { version = "0.20.1", default-features = false, features = ["alloc"] }
pretty_assertions = "0.6.1"
pretty_assertions = "1.0.0"
+49 -21
@@ -1,4 +1,4 @@
// Copyright 2019-2021 Parity Technologies (UK) Ltd.
// Copyright 2019-2022 Parity Technologies (UK) Ltd.
// This file is part of subxt.
//
// subxt is free software: you can redistribute it and/or modify
@@ -14,12 +14,18 @@
// You should have received a copy of the GNU General Public License
// along with subxt. If not, see <http://www.gnu.org/licenses/>.
use crate::types::TypeGenerator;
use crate::types::{
CompositeDefFields,
TypeGenerator,
};
use frame_metadata::{
PalletCallMetadata,
PalletMetadata,
};
use heck::SnakeCase as _;
use heck::{
CamelCase as _,
SnakeCase as _,
};
use proc_macro2::TokenStream as TokenStream2;
use proc_macro_error::abort_call_site;
use quote::{
@@ -34,22 +40,38 @@ pub fn generate_calls(
call: &PalletCallMetadata<PortableForm>,
types_mod_ident: &syn::Ident,
) -> TokenStream2 {
let struct_defs =
super::generate_structs_from_variants(type_gen, call.ty.id(), "Call");
let struct_defs = super::generate_structs_from_variants(
type_gen,
call.ty.id(),
|name| name.to_camel_case().into(),
"Call",
);
let (call_structs, call_fns): (Vec<_>, Vec<_>) = struct_defs
.iter()
.map(|struct_def| {
let (call_fn_args, call_args): (Vec<_>, Vec<_>) = struct_def
.named_fields()
.unwrap_or_else(|| {
abort_call_site!(
"Call variant for type {} must have all named fields",
call.ty.id()
)
})
.iter()
.map(|(name, ty)| (quote!( #name: #ty ), name))
.unzip();
let (call_fn_args, call_args): (Vec<_>, Vec<_>) =
match struct_def.fields {
CompositeDefFields::Named(ref named_fields) => {
named_fields
.iter()
.map(|(name, field)| {
let fn_arg_type = &field.type_path;
let call_arg = if field.is_boxed() {
quote! { #name: ::std::boxed::Box::new(#name) }
} else {
quote! { #name }
};
(quote!( #name: #fn_arg_type ), call_arg)
})
.unzip()
}
CompositeDefFields::NoFields => Default::default(),
CompositeDefFields::Unnamed(_) =>
abort_call_site!(
"Call variant for type {} must have all named fields",
call.ty.id()
)
};
let pallet_name = &pallet.name;
let call_struct_name = &struct_def.name;
@@ -68,7 +90,7 @@ pub fn generate_calls(
pub fn #fn_name(
&self,
#( #call_fn_args, )*
) -> ::subxt::SubmittableExtrinsic<'a, T, #call_struct_name> {
) -> ::subxt::SubmittableExtrinsic<'a, T, X, A, #call_struct_name, DispatchError> {
let call = #call_struct_name { #( #call_args, )* };
::subxt::SubmittableExtrinsic::new(self.client, call)
}
@@ -80,18 +102,24 @@ pub fn generate_calls(
quote! {
pub mod calls {
use super::#types_mod_ident;
type DispatchError = #types_mod_ident::sp_runtime::DispatchError;
#( #call_structs )*
pub struct TransactionApi<'a, T: ::subxt::Config + ::subxt::ExtrinsicExtraData<T>> {
pub struct TransactionApi<'a, T: ::subxt::Config, X, A> {
client: &'a ::subxt::Client<T>,
marker: ::core::marker::PhantomData<(X, A)>,
}
impl<'a, T: ::subxt::Config> TransactionApi<'a, T>
impl<'a, T, X, A> TransactionApi<'a, T, X, A>
where
T: ::subxt::Config + ::subxt::ExtrinsicExtraData<T>,
T: ::subxt::Config,
X: ::subxt::SignedExtra<T>,
A: ::subxt::AccountData,
{
pub fn new(client: &'a ::subxt::Client<T>) -> Self {
Self { client }
Self { client, marker: ::core::marker::PhantomData }
}
#( #call_fns )*
+56
@@ -0,0 +1,56 @@
// Copyright 2019-2022 Parity Technologies (UK) Ltd.
// This file is part of subxt.
//
// subxt is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// subxt is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with subxt. If not, see <http://www.gnu.org/licenses/>.
use crate::types::TypeGenerator;
use frame_metadata::PalletConstantMetadata;
use heck::SnakeCase as _;
use proc_macro2::TokenStream as TokenStream2;
use quote::{
format_ident,
quote,
};
use scale_info::form::PortableForm;
pub fn generate_constants(
type_gen: &TypeGenerator,
constants: &[PalletConstantMetadata<PortableForm>],
types_mod_ident: &syn::Ident,
) -> TokenStream2 {
let constant_fns = constants.iter().map(|constant| {
let fn_name = format_ident!("{}", constant.name.to_snake_case());
let return_ty = type_gen.resolve_type_path(constant.ty.id(), &[]);
let ref_slice = constant.value.as_slice();
quote! {
pub fn #fn_name(&self) -> ::core::result::Result<#return_ty, ::subxt::BasicError> {
Ok(::subxt::codec::Decode::decode(&mut &[#(#ref_slice,)*][..])?)
}
}
});
quote! {
pub mod constants {
use super::#types_mod_ident;
pub struct ConstantsApi;
impl ConstantsApi {
#(#constant_fns)*
}
}
}
}
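Each generated accessor embeds the constant's SCALE-encoded bytes (`constant.value.as_slice()` above) and decodes them on call. For a fixed-width integer, SCALE encoding is plain little-endian, so a stdlib-only sketch of what the emitted code boils down to (the constant name and value are hypothetical) is:

```rust
// Hypothetical SCALE-encoded bytes for a u32 constant; SCALE encodes
// fixed-width integers as little-endian, so 10_000u32 -> [16, 39, 0, 0].
const EXISTENTIAL_DEPOSIT_BYTES: [u8; 4] = [16, 39, 0, 0];

/// Mirrors the shape of a generated constant accessor, minus the
/// `subxt::codec` machinery and error handling.
pub fn existential_deposit() -> u32 {
    u32::from_le_bytes(EXISTENTIAL_DEPOSIT_BYTES)
}

fn main() {
    assert_eq!(existential_deposit(), 10_000);
}
```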
+162
@@ -0,0 +1,162 @@
// Copyright 2019-2022 Parity Technologies (UK) Ltd.
// This file is part of subxt.
//
// subxt is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// subxt is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with subxt. If not, see <http://www.gnu.org/licenses/>.
use frame_metadata::v14::RuntimeMetadataV14;
use proc_macro2::{
Span as Span2,
TokenStream as TokenStream2,
};
use quote::quote;
/// Tokens which allow us to provide static error information in the generated output.
pub struct ErrorDetails {
/// This type definition will be used in the `dispatch_error_impl_fn` and is
/// expected to be generated somewhere in scope for that to be possible.
pub type_def: TokenStream2,
// A function which will live in an impl block for our `DispatchError`,
// to statically return details for known error types:
pub dispatch_error_impl_fn: TokenStream2,
}
impl ErrorDetails {
fn emit_compile_error(err: &str) -> ErrorDetails {
let err_lit_str = syn::LitStr::new(err, Span2::call_site());
ErrorDetails {
type_def: quote!(),
dispatch_error_impl_fn: quote!(compile_error!(#err_lit_str)),
}
}
}
/// The purpose of this is to enumerate all of the possible `(module_index, error_index)` error
/// variants, so that we can convert `u8` error codes inside a generated `DispatchError` into
/// nicer error strings with documentation. To do this, we emit the type we'll return instances of,
/// and a function that returns such an instance for all of the error codes seen in the metadata.
pub fn generate_error_details(metadata: &RuntimeMetadataV14) -> ErrorDetails {
let errors = match pallet_errors(metadata) {
Ok(errors) => errors,
Err(e) => {
let err_string =
format!("Failed to generate error details from metadata: {}", e);
return ErrorDetails::emit_compile_error(&err_string)
}
};
let match_body_items = errors.into_iter().map(|err| {
let docs = err.docs;
let pallet_index = err.pallet_index;
let error_index = err.error_index;
let pallet_name = err.pallet;
let error_name = err.error;
quote! {
(#pallet_index, #error_index) => Some(ErrorDetails {
pallet: #pallet_name,
error: #error_name,
docs: #docs
})
}
});
ErrorDetails {
type_def: quote! {
pub struct ErrorDetails {
pub pallet: &'static str,
pub error: &'static str,
pub docs: &'static str,
}
},
dispatch_error_impl_fn: quote! {
pub fn details(&self) -> Option<ErrorDetails> {
if let Self::Module { index, error } = self {
match (index, error) {
#( #match_body_items ),*,
_ => None
}
} else {
None
}
}
},
}
}
fn pallet_errors(
metadata: &RuntimeMetadataV14,
) -> Result<Vec<ErrorMetadata>, InvalidMetadataError> {
let get_type_def_variant = |type_id: u32| {
let ty = metadata
.types
.resolve(type_id)
.ok_or(InvalidMetadataError::MissingType(type_id))?;
if let scale_info::TypeDef::Variant(var) = ty.type_def() {
Ok(var)
} else {
Err(InvalidMetadataError::TypeDefNotVariant(type_id))
}
};
let mut pallet_errors = vec![];
for pallet in &metadata.pallets {
let error = match &pallet.error {
Some(err) => err,
None => continue,
};
let type_def_variant = get_type_def_variant(error.ty.id())?;
for var in type_def_variant.variants().iter() {
pallet_errors.push(ErrorMetadata {
pallet_index: pallet.index,
error_index: var.index(),
pallet: pallet.name.clone(),
error: var.name().clone(),
docs: var.docs().join("\n"),
});
}
}
Ok(pallet_errors)
}
/// Information about each error that we find in the metadata;
/// used to generate the static error information.
#[derive(Clone, Debug)]
struct ErrorMetadata {
pub pallet_index: u8,
pub error_index: u8,
pub pallet: String,
pub error: String,
pub docs: String,
}
#[derive(Debug)]
enum InvalidMetadataError {
MissingType(u32),
TypeDefNotVariant(u32),
}
impl std::fmt::Display for InvalidMetadataError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match self {
InvalidMetadataError::MissingType(n) => {
write!(f, "Type {} missing from type registry", n)
}
InvalidMetadataError::TypeDefNotVariant(n) => {
write!(f, "Type {} was not a variant/enum type", n)
}
}
}
}
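The generated `details` function is ultimately a static match over `(pallet_index, error_index)` pairs, one arm per error found in the metadata. A stdlib-only sketch of that lookup (the pallet/error entries here are hypothetical, not taken from any real chain's metadata):

```rust
/// Static error information, mirroring the `ErrorDetails` type emitted above.
#[derive(Debug)]
pub struct ErrorDetails {
    pub pallet: &'static str,
    pub error: &'static str,
    pub docs: &'static str,
}

/// Sketch of the generated lookup: one match arm per (pallet, error) pair.
pub fn details(pallet_index: u8, error_index: u8) -> Option<ErrorDetails> {
    match (pallet_index, error_index) {
        (5, 0) => Some(ErrorDetails {
            pallet: "Balances",
            error: "InsufficientBalance",
            docs: "Balance too low to send value.",
        }),
        _ => None,
    }
}

fn main() {
    assert!(details(5, 0).is_some());
    assert!(details(0, 99).is_none());
}
```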
+7 -3
@@ -1,4 +1,4 @@
// Copyright 2019-2021 Parity Technologies (UK) Ltd.
// Copyright 2019-2022 Parity Technologies (UK) Ltd.
// This file is part of subxt.
//
// subxt is free software: you can redistribute it and/or modify
@@ -29,8 +29,12 @@ pub fn generate_events(
event: &PalletEventMetadata<PortableForm>,
types_mod_ident: &syn::Ident,
) -> TokenStream2 {
let struct_defs =
super::generate_structs_from_variants(type_gen, event.ty.id(), "Event");
let struct_defs = super::generate_structs_from_variants(
type_gen,
event.ty.id(),
|name| name.into(),
"Event",
);
let event_structs = struct_defs.iter().map(|struct_def| {
let pallet_name = &pallet.name;
let event_struct = &struct_def.name;
+186 -54
@@ -1,4 +1,4 @@
// Copyright 2019-2021 Parity Technologies (UK) Ltd.
// Copyright 2019-2022 Parity Technologies (UK) Ltd.
// This file is part of subxt.
//
// subxt is free software: you can redistribute it and/or modify
@@ -14,21 +14,45 @@
// You should have received a copy of the GNU General Public License
// along with subxt. If not, see <http://www.gnu.org/licenses/>.
//! Generate code for submitting extrinsics and query storage of a Substrate runtime.
//!
//! ## Note
//!
//! By default the codegen will search for the `System` pallet's `Account` storage item, which is
//! the conventional location where an account's index (aka nonce) is stored.
//!
//! If this `System::Account` storage item is discovered, then it is assumed that:
//!
//! 1. The type of the storage item is a `struct` (aka a composite type)
//! 2. There exists a field called `nonce` which contains the account index.
//!
//! These assumptions are based on the fact that the `frame_system::AccountInfo` type is the default
//! configured type, and that the vast majority of chain configurations will use this.
//!
//! If either of these conditions are not satisfied, the codegen will fail.
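The note above leans on the layout of `frame_system::AccountInfo`. As a plain-Rust illustration (field names other than `nonce` are hypothetical, loosely modelled on that type), the codegen assumes the `System::Account` value is a composite with a `nonce` field it can read directly:

```rust
// Illustrative only: the shape the codegen assumes for the value stored
// under `System::Account`. Fields besides `nonce` are hypothetical.
#[allow(dead_code)]
struct AccountInfo {
    nonce: u32, // the account index the codegen looks up by field name
    consumers: u32,
    providers: u32,
}

// The generated nonce accessor then reduces to a plain field read.
fn nonce(info: &AccountInfo) -> u32 {
    info.nonce
}

fn main() {
    let info = AccountInfo { nonce: 7, consumers: 0, providers: 1 };
    println!("{}", nonce(&info));
}
```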
mod calls;
mod constants;
mod errors;
mod events;
mod storage;
use super::GeneratedTypeDerives;
use crate::{
ir,
struct_def::StructDef,
types::TypeGenerator,
types::{
CompositeDef,
CompositeDefFields,
TypeGenerator,
},
};
use codec::Decode;
use frame_metadata::{
v14::RuntimeMetadataV14,
PalletMetadata,
RuntimeMetadata,
RuntimeMetadataPrefixed,
StorageEntryType,
};
use heck::SnakeCase as _;
use proc_macro2::TokenStream as TokenStream2;
@@ -37,6 +61,7 @@ use quote::{
format_ident,
quote,
};
use scale_info::form::PortableForm;
use std::{
collections::HashMap,
fs,
@@ -152,6 +177,7 @@ impl RuntimeGenerator {
)
})
.collect::<Vec<_>>();
let modules = pallets_with_mod_names.iter().map(|(pallet, mod_name)| {
let calls = if let Some(ref calls) = pallet.calls {
calls::generate_calls(&type_gen, pallet, calls, types_mod_ident)
@@ -171,12 +197,23 @@ impl RuntimeGenerator {
quote!()
};
let constants_mod = if !pallet.constants.is_empty() {
constants::generate_constants(
&type_gen,
&pallet.constants,
types_mod_ident,
)
} else {
quote!()
};
quote! {
pub mod #mod_name {
use super::#types_mod_ident;
#calls
#event
#storage_mod
#constants_mod
}
}
});
@@ -202,6 +239,12 @@ impl RuntimeGenerator {
};
let mod_ident = item_mod_ir.ident;
let pallets_with_constants =
pallets_with_mod_names
.iter()
.filter_map(|(pallet, pallet_mod_name)| {
(!pallet.constants.is_empty()).then(|| pallet_mod_name)
});
let pallets_with_storage =
pallets_with_mod_names
.iter()
@@ -215,6 +258,20 @@ impl RuntimeGenerator {
pallet.calls.as_ref().map(|_| pallet_mod_name)
});
let error_details = errors::generate_error_details(&self.metadata);
let error_type = error_details.type_def;
let error_fn = error_details.dispatch_error_impl_fn;
let default_account_data_ident = format_ident!("DefaultAccountData");
let default_account_data_impl = generate_default_account_data_impl(
&pallets_with_mod_names,
&default_account_data_ident,
&type_gen,
);
let type_parameter_default_impl = default_account_data_impl
.as_ref()
.map(|_| quote!( = #default_account_data_ident ));
quote! {
#[allow(dead_code, unused_imports, non_camel_case_types)]
pub mod #mod_ident {
@@ -222,76 +279,70 @@ impl RuntimeGenerator {
#( #modules )*
#types_mod
/// Default configuration of common types for a target Substrate runtime.
#[derive(Clone, Debug, Default, Eq, PartialEq)]
pub struct DefaultConfig;
/// The default error type returned when there is a runtime issue.
pub type DispatchError = self::runtime_types::sp_runtime::DispatchError;
impl ::subxt::Config for DefaultConfig {
type Index = u32;
type BlockNumber = u32;
type Hash = ::subxt::sp_core::H256;
type Hashing = ::subxt::sp_runtime::traits::BlakeTwo256;
type AccountId = ::subxt::sp_runtime::AccountId32;
type Address = ::subxt::sp_runtime::MultiAddress<Self::AccountId, u32>;
type Header = ::subxt::sp_runtime::generic::Header<
Self::BlockNumber, ::subxt::sp_runtime::traits::BlakeTwo256
>;
type Signature = ::subxt::sp_runtime::MultiSignature;
type Extrinsic = ::subxt::sp_runtime::OpaqueExtrinsic;
// Statically generate error information so that we don't need runtime metadata for it.
#error_type
impl DispatchError {
#error_fn
}
impl ::subxt::ExtrinsicExtraData<DefaultConfig> for DefaultConfig {
type AccountData = AccountData;
type Extra = ::subxt::DefaultExtra<DefaultConfig>;
}
#default_account_data_impl
pub type AccountData = self::system::storage::Account;
impl ::subxt::AccountData<DefaultConfig> for AccountData {
fn nonce(result: &<Self as ::subxt::StorageEntry>::Value) -> <DefaultConfig as ::subxt::Config>::Index {
result.nonce
}
fn storage_entry(account_id: <DefaultConfig as ::subxt::Config>::AccountId) -> Self {
Self(account_id)
}
}
pub struct RuntimeApi<T: ::subxt::Config + ::subxt::ExtrinsicExtraData<T>> {
pub struct RuntimeApi<T: ::subxt::Config, X, A #type_parameter_default_impl> {
pub client: ::subxt::Client<T>,
marker: ::core::marker::PhantomData<(X, A)>,
}
impl<T> ::core::convert::From<::subxt::Client<T>> for RuntimeApi<T>
impl<T, X, A> ::core::convert::From<::subxt::Client<T>> for RuntimeApi<T, X, A>
where
T: ::subxt::Config + ::subxt::ExtrinsicExtraData<T>,
T: ::subxt::Config,
X: ::subxt::SignedExtra<T>,
A: ::subxt::AccountData,
{
fn from(client: ::subxt::Client<T>) -> Self {
Self { client }
Self { client, marker: ::core::marker::PhantomData }
}
}
impl<'a, T> RuntimeApi<T>
impl<'a, T, X, A> RuntimeApi<T, X, A>
where
T: ::subxt::Config + ::subxt::ExtrinsicExtraData<T>,
T: ::subxt::Config,
X: ::subxt::SignedExtra<T>,
A: ::subxt::AccountData,
{
pub fn constants(&'a self) -> ConstantsApi {
ConstantsApi
}
pub fn storage(&'a self) -> StorageApi<'a, T> {
StorageApi { client: &self.client }
}
pub fn tx(&'a self) -> TransactionApi<'a, T> {
TransactionApi { client: &self.client }
pub fn tx(&'a self) -> TransactionApi<'a, T, X, A> {
TransactionApi { client: &self.client, marker: ::core::marker::PhantomData }
}
}
pub struct StorageApi<'a, T>
where
T: ::subxt::Config + ::subxt::ExtrinsicExtraData<T>,
pub struct ConstantsApi;
impl ConstantsApi
{
#(
pub fn #pallets_with_constants(&self) -> #pallets_with_constants::constants::ConstantsApi {
#pallets_with_constants::constants::ConstantsApi
}
)*
}
pub struct StorageApi<'a, T: ::subxt::Config> {
client: &'a ::subxt::Client<T>,
}
impl<'a, T> StorageApi<'a, T>
where
T: ::subxt::Config + ::subxt::ExtrinsicExtraData<T>,
T: ::subxt::Config,
{
#(
pub fn #pallets_with_storage(&self) -> #pallets_with_storage::storage::StorageApi<'a, T> {
@@ -300,16 +351,19 @@ impl RuntimeGenerator {
)*
}
pub struct TransactionApi<'a, T: ::subxt::Config + ::subxt::ExtrinsicExtraData<T>> {
pub struct TransactionApi<'a, T: ::subxt::Config, X, A> {
client: &'a ::subxt::Client<T>,
marker: ::core::marker::PhantomData<(X, A)>,
}
impl<'a, T> TransactionApi<'a, T>
impl<'a, T, X, A> TransactionApi<'a, T, X, A>
where
T: ::subxt::Config + ::subxt::ExtrinsicExtraData<T>,
T: ::subxt::Config,
X: ::subxt::SignedExtra<T>,
A: ::subxt::AccountData,
{
#(
pub fn #pallets_with_calls(&self) -> #pallets_with_calls::calls::TransactionApi<'a, T> {
pub fn #pallets_with_calls(&self) -> #pallets_with_calls::calls::TransactionApi<'a, T, X, A> {
#pallets_with_calls::calls::TransactionApi::new(self.client)
}
)*
@@ -319,21 +373,99 @@ impl RuntimeGenerator {
}
}
pub fn generate_structs_from_variants(
/// Most chains require a valid account nonce as part of the extrinsic, so the default behaviour of
/// the client is to fetch the nonce for the current account.
///
/// The account index (aka nonce) is commonly stored in the `System` pallet's `Account` storage item.
/// This function attempts to find that storage item, and if it is present will implement the
/// `subxt::AccountData` trait for it. This allows the client to construct the appropriate
/// storage key from the account id, and then retrieve the `nonce` from the resulting storage item.
fn generate_default_account_data_impl(
pallets_with_mod_names: &[(&PalletMetadata<PortableForm>, syn::Ident)],
default_impl_name: &syn::Ident,
type_gen: &TypeGenerator,
) -> Option<TokenStream2> {
let storage = pallets_with_mod_names
.iter()
.find(|(pallet, _)| pallet.name == "System")
.and_then(|(pallet, _)| pallet.storage.as_ref())?;
let storage_entry = storage
.entries
.iter()
.find(|entry| entry.name == "Account")?;
// resolve the concrete types for `AccountId` (to build the key) and `Index` to extract the
// account index (nonce) value from the result.
let (account_id_ty, account_nonce_ty) =
if let StorageEntryType::Map { key, value, .. } = &storage_entry.ty {
let account_id_ty = type_gen.resolve_type_path(key.id(), &[]);
let account_data_ty = type_gen.resolve_type(value.id());
let nonce_field = if let scale_info::TypeDef::Composite(composite) =
account_data_ty.type_def()
{
composite
.fields()
.iter()
.find(|f| f.name() == Some(&"nonce".to_string()))?
} else {
abort_call_site!("Expected a `nonce` field in the account info struct")
};
let account_nonce_ty = type_gen.resolve_type_path(nonce_field.ty().id(), &[]);
(account_id_ty, account_nonce_ty)
} else {
abort_call_site!("System::Account should be a `StorageEntryType::Map`")
};
// This path to the storage entry depends on the storage codegen.
let storage_entry_path = quote!(self::system::storage::Account);
Some(quote! {
/// The default storage entry from which to fetch an account nonce, required for
/// constructing a transaction.
pub enum #default_impl_name {}
impl ::subxt::AccountData for #default_impl_name {
type StorageEntry = #storage_entry_path;
type AccountId = #account_id_ty;
type Index = #account_nonce_ty;
fn nonce(result: &<Self::StorageEntry as ::subxt::StorageEntry>::Value) -> Self::Index {
result.nonce
}
fn storage_entry(account_id: Self::AccountId) -> Self::StorageEntry {
#storage_entry_path(account_id)
}
}
})
}
pub fn generate_structs_from_variants<'a, F>(
type_gen: &'a TypeGenerator,
type_id: u32,
variant_to_struct_name: F,
error_message_type_name: &str,
) -> Vec<StructDef> {
) -> Vec<CompositeDef>
where
F: Fn(&str) -> std::borrow::Cow<str>,
{
let ty = type_gen.resolve_type(type_id);
if let scale_info::TypeDef::Variant(variant) = ty.type_def() {
variant
.variants()
.iter()
.map(|var| {
StructDef::new(
var.name(),
let struct_name = variant_to_struct_name(var.name());
let fields = CompositeDefFields::from_scale_info_fields(
struct_name.as_ref(),
var.fields(),
Some(syn::parse_quote!(pub)),
&[],
type_gen,
);
CompositeDef::struct_def(
var.name(),
Default::default(),
fields,
Some(parse_quote!(pub)),
type_gen,
)
})
+5 -4
@@ -1,4 +1,4 @@
// Copyright 2019-2021 Parity Technologies (UK) Ltd.
// Copyright 2019-2022 Parity Technologies (UK) Ltd.
// This file is part of subxt.
//
// subxt is free software: you can redistribute it and/or modify
@@ -50,6 +50,7 @@ pub fn generate_storage(
quote! {
pub mod storage {
use super::#types_mod_ident;
#( #storage_structs )*
pub struct StorageApi<'a, T: ::subxt::Config> {
@@ -119,7 +120,7 @@ fn generate_storage_entry_fns(
fields.iter().map(|(_, field_type)| field_type);
let field_names = fields.iter().map(|(field_name, _)| field_name);
let entry_struct = quote! {
pub struct #entry_struct_ident( #( #tuple_struct_fields ),* );
pub struct #entry_struct_ident( #( pub #tuple_struct_fields ),* );
};
let constructor =
quote!( #entry_struct_ident( #( #field_names ),* ) );
@@ -195,7 +196,7 @@ fn generate_storage_entry_fns(
pub async fn #fn_name_iter(
&self,
hash: ::core::option::Option<T::Hash>,
) -> ::core::result::Result<::subxt::KeyIter<'a, T, #entry_struct_ident>, ::subxt::Error> {
) -> ::core::result::Result<::subxt::KeyIter<'a, T, #entry_struct_ident>, ::subxt::BasicError> {
self.client.storage().iter(hash).await
}
)
@@ -211,7 +212,7 @@ fn generate_storage_entry_fns(
&self,
#( #key_args, )*
hash: ::core::option::Option<T::Hash>,
) -> ::core::result::Result<#return_ty, ::subxt::Error> {
) -> ::core::result::Result<#return_ty, ::subxt::BasicError> {
let entry = #constructor;
self.client.storage().#fetch(&entry, hash).await
}
+1 -2
@@ -1,4 +1,4 @@
// Copyright 2019-2021 Parity Technologies (UK) Ltd.
// Copyright 2019-2022 Parity Technologies (UK) Ltd.
// This file is part of subxt.
//
// subxt is free software: you can redistribute it and/or modify
@@ -74,7 +74,6 @@ impl ItemMod {
#[allow(clippy::large_enum_variant)]
#[derive(Debug, PartialEq, Eq)]
#[allow(clippy::large_enum_variant)]
pub enum Item {
Rust(syn::Item),
Subxt(SubxtItem),
+2 -4
@@ -1,4 +1,4 @@
// Copyright 2019-2021 Parity Technologies (UK) Ltd.
// Copyright 2019-2022 Parity Technologies (UK) Ltd.
// This file is part of subxt.
//
// subxt is free software: you can redistribute it and/or modify
@@ -17,9 +17,7 @@
//! Library to generate an API for a Substrate runtime from its metadata.
mod api;
mod derives;
mod ir;
mod struct_def;
mod types;
pub use self::{
@@ -27,5 +25,5 @@ pub use self::{
generate_runtime_api,
RuntimeGenerator,
},
derives::GeneratedTypeDerives,
types::GeneratedTypeDerives,
};
-142
@@ -1,142 +0,0 @@
// Copyright 2019-2021 Parity Technologies (UK) Ltd.
// This file is part of subxt.
//
// subxt is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// subxt is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with subxt. If not, see <http://www.gnu.org/licenses/>.
use super::GeneratedTypeDerives;
use crate::types::{
TypeGenerator,
TypePath,
};
use heck::CamelCase as _;
use proc_macro2::TokenStream as TokenStream2;
use proc_macro_error::abort_call_site;
use quote::{
format_ident,
quote,
};
use scale_info::form::PortableForm;
#[derive(Debug)]
pub struct StructDef {
pub name: syn::Ident,
pub fields: StructDefFields,
pub field_visibility: Option<syn::Visibility>,
pub derives: GeneratedTypeDerives,
}
#[derive(Debug)]
pub enum StructDefFields {
Named(Vec<(syn::Ident, TypePath)>),
Unnamed(Vec<TypePath>),
}
impl StructDef {
pub fn new(
ident: &str,
fields: &[scale_info::Field<PortableForm>],
field_visibility: Option<syn::Visibility>,
type_gen: &TypeGenerator,
) -> Self {
let name = format_ident!("{}", ident.to_camel_case());
let fields = fields
.iter()
.map(|field| {
let name = field.name().map(|f| format_ident!("{}", f));
let ty = type_gen.resolve_type_path(field.ty().id(), &[]);
(name, ty)
})
.collect::<Vec<_>>();
let named = fields.iter().all(|(name, _)| name.is_some());
let unnamed = fields.iter().all(|(name, _)| name.is_none());
let fields = if named {
StructDefFields::Named(
fields
.iter()
.map(|(name, field)| {
let name = name.as_ref().unwrap_or_else(|| {
abort_call_site!("All fields should have a name")
});
(name.clone(), field.clone())
})
.collect(),
)
} else if unnamed {
StructDefFields::Unnamed(
fields.iter().map(|(_, field)| field.clone()).collect(),
)
} else {
abort_call_site!(
"Struct '{}': Fields should either be all named or all unnamed.",
name,
)
};
let derives = type_gen.derives().clone();
Self {
name,
fields,
field_visibility,
derives,
}
}
pub fn named_fields(&self) -> Option<&[(syn::Ident, TypePath)]> {
if let StructDefFields::Named(ref fields) = self.fields {
Some(fields)
} else {
None
}
}
}
impl quote::ToTokens for StructDef {
fn to_tokens(&self, tokens: &mut TokenStream2) {
let visibility = &self.field_visibility;
let derives = &self.derives;
tokens.extend(match self.fields {
StructDefFields::Named(ref named_fields) => {
let fields = named_fields.iter().map(|(name, ty)| {
let compact_attr =
ty.is_compact().then(|| quote!( #[codec(compact)] ));
quote! { #compact_attr #visibility #name: #ty }
});
let name = &self.name;
quote! {
#derives
pub struct #name {
#( #fields ),*
}
}
}
StructDefFields::Unnamed(ref unnamed_fields) => {
let fields = unnamed_fields.iter().map(|ty| {
let compact_attr =
ty.is_compact().then(|| quote!( #[codec(compact)] ));
quote! { #compact_attr #visibility #ty }
});
let name = &self.name;
quote! {
#derives
pub struct #name (
#( #fields ),*
);
}
}
})
}
}
+345
@@ -0,0 +1,345 @@
// Copyright 2019-2022 Parity Technologies (UK) Ltd.
// This file is part of subxt.
//
// subxt is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// subxt is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with subxt. If not, see <http://www.gnu.org/licenses/>.
use super::{
Field,
GeneratedTypeDerives,
TypeDefParameters,
TypeGenerator,
TypeParameter,
TypePath,
};
use proc_macro2::TokenStream;
use proc_macro_error::abort_call_site;
use quote::{
format_ident,
quote,
};
use scale_info::{
TypeDef,
TypeDefPrimitive,
};
/// Representation of a type which consists of a set of fields. Used to generate Rust code for
/// either a standalone `struct` definition, or an `enum` variant.
///
/// Fields can either be named or unnamed in either case.
#[derive(Debug)]
pub struct CompositeDef {
/// The name of the `struct`, or the name of the `enum` variant.
pub name: syn::Ident,
/// Generate either a standalone `struct` or an `enum` variant.
pub kind: CompositeDefKind,
/// The fields of the type, which are either all named or all unnamed.
pub fields: CompositeDefFields,
}
impl CompositeDef {
/// Construct a definition which will generate code for a standalone `struct`.
pub fn struct_def(
ident: &str,
type_params: TypeDefParameters,
fields_def: CompositeDefFields,
field_visibility: Option<syn::Visibility>,
type_gen: &TypeGenerator,
) -> Self {
let mut derives = type_gen.derives().clone();
let fields: Vec<_> = fields_def.field_types().collect();
if fields.len() == 1 {
// any single field wrapper struct with a concrete unsigned int type can derive
// CompactAs.
let field = &fields[0];
if !type_params
.params()
.iter()
.any(|tp| Some(tp.original_name.to_string()) == field.type_name)
{
let ty = type_gen.resolve_type(field.type_id);
if matches!(
ty.type_def(),
TypeDef::Primitive(
TypeDefPrimitive::U8
| TypeDefPrimitive::U16
| TypeDefPrimitive::U32
| TypeDefPrimitive::U64
| TypeDefPrimitive::U128
)
) {
derives.push_codec_compact_as()
}
}
}
let name = format_ident!("{}", ident);
Self {
name,
kind: CompositeDefKind::Struct {
derives,
type_params,
field_visibility,
},
fields: fields_def,
}
}
/// Construct a definition which will generate code for an `enum` variant.
pub fn enum_variant_def(ident: &str, fields: CompositeDefFields) -> Self {
let name = format_ident!("{}", ident);
Self {
name,
kind: CompositeDefKind::EnumVariant,
fields,
}
}
}
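The `CompactAs` eligibility check in `struct_def` above can be distilled into a standalone predicate. This is a sketch over plain strings rather than `scale_info` types; the function name and signature are illustrative:

```rust
// Sketch of the rule applied in `struct_def`: only a single-field wrapper
// over a concrete unsigned integer, whose field type is not one of the
// struct's own type parameters, gets the extra `CompactAs` derive.
fn derives_compact_as(field_type_names: &[&str], type_params: &[&str]) -> bool {
    if field_type_names.len() != 1 {
        return false;
    }
    let field = field_type_names[0];
    // A field whose type *is* a type parameter is generic, not concrete.
    if type_params.contains(&field) {
        return false;
    }
    matches!(field, "u8" | "u16" | "u32" | "u64" | "u128")
}

fn main() {
    assert!(derives_compact_as(&["u32"], &[]));
    assert!(!derives_compact_as(&["u32", "u8"], &[])); // two fields
    assert!(!derives_compact_as(&["T"], &["T"])); // generic wrapper
    assert!(!derives_compact_as(&["i32"], &[])); // signed, not eligible
    println!("ok");
}
```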
impl quote::ToTokens for CompositeDef {
fn to_tokens(&self, tokens: &mut TokenStream) {
let name = &self.name;
let decl = match &self.kind {
CompositeDefKind::Struct {
derives,
type_params,
field_visibility,
} => {
let phantom_data = type_params.unused_params_phantom_data();
let fields = self
.fields
.to_struct_field_tokens(phantom_data, field_visibility.as_ref());
let trailing_semicolon = matches!(
self.fields,
CompositeDefFields::NoFields | CompositeDefFields::Unnamed(_)
)
.then(|| quote!(;));
quote! {
#derives
pub struct #name #type_params #fields #trailing_semicolon
}
}
CompositeDefKind::EnumVariant => {
let fields = self.fields.to_enum_variant_field_tokens();
quote! {
#name #fields
}
}
};
tokens.extend(decl)
}
}
/// Which kind of composite type are we generating, either a standalone `struct` or an `enum`
/// variant.
#[derive(Debug)]
pub enum CompositeDefKind {
/// Composite type comprising a Rust `struct`.
Struct {
derives: GeneratedTypeDerives,
type_params: TypeDefParameters,
field_visibility: Option<syn::Visibility>,
},
/// Comprises a variant of a Rust `enum`.
EnumVariant,
}
/// Encapsulates the composite fields, keeping the invariant that all fields are either named or
/// unnamed.
#[derive(Debug)]
pub enum CompositeDefFields {
NoFields,
Named(Vec<(syn::Ident, CompositeDefFieldType)>),
Unnamed(Vec<CompositeDefFieldType>),
}
impl CompositeDefFields {
/// Construct a new set of composite fields from the supplied [`::scale_info::Field`]s.
pub fn from_scale_info_fields(
name: &str,
fields: &[Field],
parent_type_params: &[TypeParameter],
type_gen: &TypeGenerator,
) -> Self {
if fields.is_empty() {
return Self::NoFields
}
let mut named_fields = Vec::new();
let mut unnamed_fields = Vec::new();
for field in fields {
let type_path =
type_gen.resolve_type_path(field.ty().id(), parent_type_params);
let field_type = CompositeDefFieldType::new(
field.ty().id(),
type_path,
field.type_name().cloned(),
);
if let Some(name) = field.name() {
let field_name = format_ident!("{}", name);
named_fields.push((field_name, field_type))
} else {
unnamed_fields.push(field_type)
}
}
if !named_fields.is_empty() && !unnamed_fields.is_empty() {
abort_call_site!(
"'{}': Fields should either be all named or all unnamed.",
name,
)
}
if !named_fields.is_empty() {
Self::Named(named_fields)
} else {
Self::Unnamed(unnamed_fields)
}
}
/// Returns the set of composite fields.
pub fn field_types(&self) -> Box<dyn Iterator<Item = &CompositeDefFieldType> + '_> {
match self {
Self::NoFields => Box::new([].iter()),
Self::Named(named_fields) => Box::new(named_fields.iter().map(|(_, f)| f)),
Self::Unnamed(unnamed_fields) => Box::new(unnamed_fields.iter()),
}
}
/// Generate the code for fields which will compose a `struct`.
pub fn to_struct_field_tokens(
&self,
phantom_data: Option<syn::TypePath>,
visibility: Option<&syn::Visibility>,
) -> TokenStream {
match self {
Self::NoFields => {
if let Some(phantom_data) = phantom_data {
quote! { ( #phantom_data ) }
} else {
quote! {}
}
}
Self::Named(ref fields) => {
let fields = fields.iter().map(|(name, ty)| {
let compact_attr = ty.compact_attr();
quote! { #compact_attr #visibility #name: #ty }
});
let marker = phantom_data.map(|phantom_data| {
quote!(
#[codec(skip)]
#visibility __subxt_unused_type_params: #phantom_data
)
});
quote!(
{
#( #fields, )*
#marker
}
)
}
Self::Unnamed(ref fields) => {
let fields = fields.iter().map(|ty| {
let compact_attr = ty.compact_attr();
quote! { #compact_attr #visibility #ty }
});
let marker = phantom_data.map(|phantom_data| {
quote!(
#[codec(skip)]
#visibility #phantom_data
)
});
quote! {
(
#( #fields, )*
#marker
)
}
}
}
}
/// Generate the code for fields which will compose an `enum` variant.
pub fn to_enum_variant_field_tokens(&self) -> TokenStream {
match self {
Self::NoFields => quote! {},
Self::Named(ref fields) => {
let fields = fields.iter().map(|(name, ty)| {
let compact_attr = ty.compact_attr();
quote! { #compact_attr #name: #ty }
});
quote!( { #( #fields, )* } )
}
Self::Unnamed(ref fields) => {
let fields = fields.iter().map(|ty| {
let compact_attr = ty.compact_attr();
quote! { #compact_attr #ty }
});
quote! { ( #( #fields, )* ) }
}
}
}
}
/// Represents a field of a composite type to be generated.
#[derive(Debug)]
pub struct CompositeDefFieldType {
pub type_id: u32,
pub type_path: TypePath,
pub type_name: Option<String>,
}
impl CompositeDefFieldType {
/// Construct a new [`CompositeDefFieldType`].
pub fn new(type_id: u32, type_path: TypePath, type_name: Option<String>) -> Self {
CompositeDefFieldType {
type_id,
type_path,
type_name,
}
}
/// Returns `true` if the field is a [`::std::boxed::Box`].
pub fn is_boxed(&self) -> bool {
// Use the type name to detect a `Box` field.
// Should be updated once `Box` types are no longer erased:
// https://github.com/paritytech/scale-info/pull/82
matches!(&self.type_name, Some(ty_name) if ty_name.contains("Box<"))
}
/// Returns the `#[codec(compact)]` attribute if the type is compact.
fn compact_attr(&self) -> Option<TokenStream> {
self.type_path
.is_compact()
.then(|| quote!( #[codec(compact)] ))
}
}
impl quote::ToTokens for CompositeDefFieldType {
fn to_tokens(&self, tokens: &mut TokenStream) {
let ty_path = &self.type_path;
if self.is_boxed() {
tokens.extend(quote! { ::std::boxed::Box<#ty_path> })
} else {
tokens.extend(quote! { #ty_path })
};
}
}
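The named-or-unnamed invariant that `from_scale_info_fields` enforces can be sketched without the `scale-info` types. Names and the signature here are hypothetical; the original aborts at the call site where this sketch returns an `Err`:

```rust
#[derive(Debug, PartialEq)]
enum Fields {
    NoFields,
    Named(Vec<(String, String)>), // (field name, type path)
    Unnamed(Vec<String>),         // type paths only
}

// Mirrors the classification in `from_scale_info_fields`, but over plain
// `(Option<name>, type)` tuples instead of `scale_info::Field` values.
fn classify(fields: &[(Option<&str>, &str)]) -> Result<Fields, String> {
    if fields.is_empty() {
        return Ok(Fields::NoFields);
    }
    let mut named = Vec::new();
    let mut unnamed = Vec::new();
    for &(name, ty) in fields {
        match name {
            Some(n) => named.push((n.to_string(), ty.to_string())),
            None => unnamed.push(ty.to_string()),
        }
    }
    if !named.is_empty() && !unnamed.is_empty() {
        return Err("fields should either be all named or all unnamed".into());
    }
    Ok(if !named.is_empty() {
        Fields::Named(named)
    } else {
        Fields::Unnamed(unnamed)
    })
}

fn main() {
    assert!(matches!(classify(&[(Some("a"), "u32")]), Ok(Fields::Named(_))));
    assert!(matches!(classify(&[(None, "u32")]), Ok(Fields::Unnamed(_))));
    assert!(classify(&[(Some("a"), "u32"), (None, "u8")]).is_err());
    println!("ok");
}
```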
@@ -1,4 +1,4 @@
// Copyright 2019-2021 Parity Technologies (UK) Ltd.
// Copyright 2019-2022 Parity Technologies (UK) Ltd.
// This file is part of subxt.
//
// subxt is free software: you can redistribute it and/or modify
@@ -14,7 +14,10 @@
// You should have received a copy of the GNU General Public License
// along with subxt. If not, see <http://www.gnu.org/licenses/>.
use syn::punctuated::Punctuated;
use syn::{
parse_quote,
punctuated::Punctuated,
};
#[derive(Debug, Clone)]
pub struct GeneratedTypeDerives {
@@ -26,11 +29,20 @@ impl GeneratedTypeDerives {
Self { derives }
}
/// Add `::subxt::codec::CompactAs` to the derives.
pub fn push_codec_compact_as(&mut self) {
self.derives.push(parse_quote!(::subxt::codec::CompactAs));
}
pub fn append(&mut self, derives: impl Iterator<Item = syn::Path>) {
for derive in derives {
self.derives.push(derive)
}
}
pub fn push(&mut self, derive: syn::Path) {
self.derives.push(derive);
}
}
impl Default for GeneratedTypeDerives {
@@ -38,15 +50,18 @@ impl Default for GeneratedTypeDerives {
let mut derives = Punctuated::new();
derives.push(syn::parse_quote!(::subxt::codec::Encode));
derives.push(syn::parse_quote!(::subxt::codec::Decode));
derives.push(syn::parse_quote!(Debug));
Self::new(derives)
}
}
impl quote::ToTokens for GeneratedTypeDerives {
fn to_tokens(&self, tokens: &mut proc_macro2::TokenStream) {
let derives = &self.derives;
tokens.extend(quote::quote! {
#[derive(#derives)]
})
if !self.derives.is_empty() {
let derives = &self.derives;
tokens.extend(quote::quote! {
#[derive(#derives)]
})
}
}
}
+14 -3
@@ -1,4 +1,4 @@
// Copyright 2019-2021 Parity Technologies (UK) Ltd.
// Copyright 2019-2022 Parity Technologies (UK) Ltd.
// This file is part of subxt.
//
// subxt is free software: you can redistribute it and/or modify
@@ -14,12 +14,14 @@
// You should have received a copy of the GNU General Public License
// along with subxt. If not, see <http://www.gnu.org/licenses/>.
mod composite_def;
mod derives;
#[cfg(test)]
mod tests;
mod type_def;
mod type_def_params;
mod type_path;
use super::GeneratedTypeDerives;
use proc_macro2::{
Ident,
Span,
@@ -41,7 +43,14 @@ use std::collections::{
};
pub use self::{
composite_def::{
CompositeDef,
CompositeDefFieldType,
CompositeDefFields,
},
derives::GeneratedTypeDerives,
type_def::TypeDefGen,
type_def_params::TypeDefParameters,
type_path::{
TypeParameter,
TypePath,
@@ -50,6 +59,8 @@ pub use self::{
},
};
pub type Field = scale_info::Field<PortableForm>;
/// Generate a Rust module containing all types defined in the supplied [`PortableRegistry`].
#[derive(Debug)]
pub struct TypeGenerator<'a> {
@@ -126,7 +137,7 @@ impl<'a> TypeGenerator<'a> {
if path.len() == 1 {
child_mod
.types
.insert(ty.path().clone(), TypeDefGen { ty, type_gen: self });
.insert(ty.path().clone(), TypeDefGen::from_type(ty, self));
} else {
self.insert_type(ty, id, path[1..].to_vec(), root_mod_ident, child_mod)
}
+72 -50
@@ -1,4 +1,4 @@
// Copyright 2019-2021 Parity Technologies (UK) Ltd.
// Copyright 2019-2022 Parity Technologies (UK) Ltd.
// This file is part of subxt.
//
// subxt is free software: you can redistribute it and/or modify
@@ -64,7 +64,7 @@ fn generate_struct_with_primitives() {
pub mod tests {
use super::root;
#[derive(::subxt::codec::Encode, ::subxt::codec::Decode)]
#[derive(::subxt::codec::Encode, ::subxt::codec::Decode, Debug)]
pub struct S {
pub a: ::core::primitive::bool,
pub b: ::core::primitive::u32,
@@ -110,12 +110,12 @@ fn generate_struct_with_a_struct_field() {
pub mod tests {
use super::root;
#[derive(::subxt::codec::Encode, ::subxt::codec::Decode)]
#[derive(::subxt::codec::Encode, ::subxt::codec::Decode, Debug)]
pub struct Child {
pub a: ::core::primitive::i32,
}
#[derive(::subxt::codec::Encode, ::subxt::codec::Decode)]
#[derive(::subxt::codec::Encode, ::subxt::codec::Decode, Debug)]
pub struct Parent {
pub a: ::core::primitive::bool,
pub b: root::subxt_codegen::types::tests::Child,
@@ -155,10 +155,10 @@ fn generate_tuple_struct() {
pub mod tests {
use super::root;
#[derive(::subxt::codec::Encode, ::subxt::codec::Decode)]
#[derive(::subxt::codec::Encode, ::subxt::codec::Decode, Debug)]
pub struct Child(pub ::core::primitive::i32,);
#[derive(::subxt::codec::Encode, ::subxt::codec::Decode)]
#[derive(::subxt::codec::Encode, ::subxt::codec::Decode, Debug)]
pub struct Parent(pub ::core::primitive::bool, pub root::subxt_codegen::types::tests::Child,);
}
}
@@ -237,44 +237,34 @@ fn derive_compact_as_for_uint_wrapper_structs() {
pub mod tests {
use super::root;
#[derive(::subxt::codec::CompactAs)]
#[derive(::subxt::codec::Encode, ::subxt::codec::Decode)]
#[derive(::subxt::codec::Encode, ::subxt::codec::Decode, Debug, ::subxt::codec::CompactAs)]
pub struct Su128 { pub a: ::core::primitive::u128, }
#[derive(::subxt::codec::CompactAs)]
#[derive(::subxt::codec::Encode, ::subxt::codec::Decode)]
#[derive(::subxt::codec::Encode, ::subxt::codec::Decode, Debug, ::subxt::codec::CompactAs)]
pub struct Su16 { pub a: ::core::primitive::u16, }
#[derive(::subxt::codec::CompactAs)]
#[derive(::subxt::codec::Encode, ::subxt::codec::Decode)]
#[derive(::subxt::codec::Encode, ::subxt::codec::Decode, Debug, ::subxt::codec::CompactAs)]
pub struct Su32 { pub a: ::core::primitive::u32, }
#[derive(::subxt::codec::CompactAs)]
#[derive(::subxt::codec::Encode, ::subxt::codec::Decode)]
#[derive(::subxt::codec::Encode, ::subxt::codec::Decode, Debug, ::subxt::codec::CompactAs)]
pub struct Su64 { pub a: ::core::primitive::u64, }
#[derive(::subxt::codec::CompactAs)]
#[derive(::subxt::codec::Encode, ::subxt::codec::Decode)]
#[derive(::subxt::codec::Encode, ::subxt::codec::Decode, Debug, ::subxt::codec::CompactAs)]
pub struct Su8 { pub a: ::core::primitive::u8, }
#[derive(::subxt::codec::CompactAs)]
#[derive(::subxt::codec::Encode, ::subxt::codec::Decode)]
#[derive(::subxt::codec::Encode, ::subxt::codec::Decode, Debug, ::subxt::codec::CompactAs)]
pub struct TSu128(pub ::core::primitive::u128,);
#[derive(::subxt::codec::CompactAs)]
#[derive(::subxt::codec::Encode, ::subxt::codec::Decode)]
#[derive(::subxt::codec::Encode, ::subxt::codec::Decode, Debug, ::subxt::codec::CompactAs)]
pub struct TSu16(pub ::core::primitive::u16,);
#[derive(::subxt::codec::CompactAs)]
#[derive(::subxt::codec::Encode, ::subxt::codec::Decode)]
#[derive(::subxt::codec::Encode, ::subxt::codec::Decode, Debug, ::subxt::codec::CompactAs)]
pub struct TSu32(pub ::core::primitive::u32,);
#[derive(::subxt::codec::CompactAs)]
#[derive(::subxt::codec::Encode, ::subxt::codec::Decode)]
#[derive(::subxt::codec::Encode, ::subxt::codec::Decode, Debug, ::subxt::codec::CompactAs)]
pub struct TSu64(pub ::core::primitive::u64,);
#[derive(::subxt::codec::CompactAs)]
#[derive(::subxt::codec::Encode, ::subxt::codec::Decode)]
#[derive(::subxt::codec::Encode, ::subxt::codec::Decode, Debug, ::subxt::codec::CompactAs)]
pub struct TSu8(pub ::core::primitive::u8,);
}
}
@@ -310,7 +300,7 @@ fn generate_enum() {
quote! {
pub mod tests {
use super::root;
#[derive(::subxt::codec::Encode, ::subxt::codec::Decode)]
#[derive(::subxt::codec::Encode, ::subxt::codec::Decode, Debug)]
pub enum E {
# [codec (index = 0)]
A,
@@ -368,7 +358,7 @@ fn compact_fields() {
quote! {
pub mod tests {
use super::root;
#[derive(::subxt::codec::Encode, ::subxt::codec::Decode)]
#[derive(::subxt::codec::Encode, ::subxt::codec::Decode, Debug)]
pub enum E {
# [codec (index = 0)]
A {
@@ -379,12 +369,12 @@ fn compact_fields() {
B( #[codec(compact)] ::core::primitive::u32,),
}
#[derive(::subxt::codec::Encode, ::subxt::codec::Decode)]
#[derive(::subxt::codec::Encode, ::subxt::codec::Decode, Debug)]
pub struct S {
#[codec(compact)] pub a: ::core::primitive::u32,
}
#[derive(::subxt::codec::Encode, ::subxt::codec::Decode)]
#[derive(::subxt::codec::Encode, ::subxt::codec::Decode, Debug)]
pub struct TupleStruct(#[codec(compact)] pub ::core::primitive::u32,);
}
}
@@ -418,7 +408,7 @@ fn generate_array_field() {
quote! {
pub mod tests {
use super::root;
#[derive(::subxt::codec::Encode, ::subxt::codec::Decode)]
#[derive(::subxt::codec::Encode, ::subxt::codec::Decode, Debug)]
pub struct S {
pub a: [::core::primitive::u8; 32usize],
}
@@ -455,7 +445,7 @@ fn option_fields() {
quote! {
pub mod tests {
use super::root;
#[derive(::subxt::codec::Encode, ::subxt::codec::Decode)]
#[derive(::subxt::codec::Encode, ::subxt::codec::Decode, Debug)]
pub struct S {
pub a: ::core::option::Option<::core::primitive::bool>,
pub b: ::core::option::Option<::core::primitive::u32>,
@@ -495,7 +485,7 @@ fn box_fields_struct() {
quote! {
pub mod tests {
use super::root;
#[derive(::subxt::codec::Encode, ::subxt::codec::Decode)]
#[derive(::subxt::codec::Encode, ::subxt::codec::Decode, Debug)]
pub struct S {
pub a: ::std::boxed::Box<::core::primitive::bool>,
pub b: ::std::boxed::Box<::core::primitive::u32>,
@@ -535,7 +525,7 @@ fn box_fields_enum() {
quote! {
pub mod tests {
use super::root;
#[derive(::subxt::codec::Encode, ::subxt::codec::Decode)]
#[derive(::subxt::codec::Encode, ::subxt::codec::Decode, Debug)]
pub enum E {
# [codec (index = 0)]
A(::std::boxed::Box<::core::primitive::bool>,),
@@ -575,7 +565,7 @@ fn range_fields() {
quote! {
pub mod tests {
use super::root;
#[derive(::subxt::codec::Encode, ::subxt::codec::Decode)]
#[derive(::subxt::codec::Encode, ::subxt::codec::Decode, Debug)]
pub struct S {
pub a: ::core::ops::Range<::core::primitive::u32>,
pub b: ::core::ops::RangeInclusive<::core::primitive::u32>,
@@ -619,12 +609,12 @@ fn generics() {
quote! {
pub mod tests {
use super::root;
#[derive(::subxt::codec::Encode, ::subxt::codec::Decode)]
#[derive(::subxt::codec::Encode, ::subxt::codec::Decode, Debug)]
pub struct Bar {
pub b: root::subxt_codegen::types::tests::Foo<::core::primitive::u32>,
pub c: root::subxt_codegen::types::tests::Foo<::core::primitive::u8>,
}
#[derive(::subxt::codec::Encode, ::subxt::codec::Decode)]
#[derive(::subxt::codec::Encode, ::subxt::codec::Decode, Debug)]
pub struct Foo<_0> {
pub a: _0,
}
@@ -667,12 +657,12 @@ fn generics_nested() {
quote! {
pub mod tests {
use super::root;
#[derive(::subxt::codec::Encode, ::subxt::codec::Decode)]
#[derive(::subxt::codec::Encode, ::subxt::codec::Decode, Debug)]
pub struct Bar<_0> {
pub b: root::subxt_codegen::types::tests::Foo<_0, ::core::primitive::u32>,
}
#[derive(::subxt::codec::Encode, ::subxt::codec::Decode)]
#[derive(::subxt::codec::Encode, ::subxt::codec::Decode, Debug)]
pub struct Foo<_0, _1> {
pub a: _0,
pub b: ::core::option::Option<(_0, _1,)>,
@@ -718,7 +708,7 @@ fn generate_bitvec() {
quote! {
pub mod tests {
use super::root;
#[derive(::subxt::codec::Encode, ::subxt::codec::Decode)]
#[derive(::subxt::codec::Encode, ::subxt::codec::Decode, Debug)]
pub struct S {
pub lsb: ::subxt::bitvec::vec::BitVec<root::bitvec::order::Lsb0, ::core::primitive::u8>,
pub msb: ::subxt::bitvec::vec::BitVec<root::bitvec::order::Msb0, ::core::primitive::u16>,
@@ -771,16 +761,15 @@ fn generics_with_alias_adds_phantom_data_marker() {
quote! {
pub mod tests {
use super::root;
#[derive(::subxt::codec::CompactAs)]
#[derive(::subxt::codec::Encode, ::subxt::codec::Decode)]
#[derive(::subxt::codec::Encode, ::subxt::codec::Decode, Debug, ::subxt::codec::CompactAs)]
pub struct NamedFields<_0> {
pub b: ::core::primitive::u32,
#[codec(skip)] pub __subxt_unused_type_params: ::core::marker::PhantomData<_0>,
#[codec(skip)] pub __subxt_unused_type_params: ::core::marker::PhantomData<_0>
}
#[derive(::subxt::codec::Encode, ::subxt::codec::Decode)]
#[derive(::subxt::codec::Encode, ::subxt::codec::Decode, Debug)]
pub struct UnnamedFields<_0, _1> (
pub (::core::primitive::u32, ::core::primitive::u32,),
#[codec(skip)] pub ::core::marker::PhantomData<(_0, _1)>,
#[codec(skip)] pub ::core::marker::PhantomData<(_0, _1)>
);
}
}
@@ -794,7 +783,7 @@ fn modules() {
pub mod a {
#[allow(unused)]
#[derive(scale_info::TypeInfo)]
pub struct Foo {}
pub struct Foo;
pub mod b {
#[allow(unused)]
@@ -840,20 +829,20 @@ fn modules() {
pub mod b {
use super::root;
#[derive(::subxt::codec::Encode, ::subxt::codec::Decode)]
#[derive(::subxt::codec::Encode, ::subxt::codec::Decode, Debug)]
pub struct Bar {
pub a: root::subxt_codegen::types::tests::m::a::Foo,
}
}
#[derive(::subxt::codec::Encode, ::subxt::codec::Decode)]
pub struct Foo {}
#[derive(::subxt::codec::Encode, ::subxt::codec::Decode, Debug)]
pub struct Foo;
}
pub mod c {
use super::root;
#[derive(::subxt::codec::Encode, ::subxt::codec::Decode)]
#[derive(::subxt::codec::Encode, ::subxt::codec::Decode, Debug)]
pub struct Foo {
pub a: root::subxt_codegen::types::tests::m::a::b::Bar,
}
@@ -864,3 +853,36 @@ fn modules() {
.to_string()
)
}
#[test]
fn dont_force_struct_names_camel_case() {
#[allow(unused)]
#[derive(TypeInfo)]
struct AB;
let mut registry = Registry::new();
registry.register_type(&meta_type::<AB>());
let portable_types: PortableRegistry = registry.into();
let type_gen = TypeGenerator::new(
&portable_types,
"root",
Default::default(),
Default::default(),
);
let types = type_gen.generate_types_mod();
let tests_mod = get_mod(&types, MOD_PATH).unwrap();
assert_eq!(
tests_mod.into_token_stream().to_string(),
quote! {
pub mod tests {
use super::root;
#[derive(::subxt::codec::Encode, ::subxt::codec::Decode, Debug)]
pub struct AB;
}
}
.to_string()
)
}
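The test above guards the fix from #412 ("Only convert struct names to camel case for Call variant structs"): ordinary type names like `AB` must pass through codegen verbatim. A sketch of the rule, using a hypothetical underscore-based converter (a stand-in, not subxt's actual helper), shows why a name with no underscores is left untouched:

```rust
// Stand-in converter: capitalize each underscore-separated word.
// Not subxt's actual helper; for illustration only.
fn to_camel_case(s: &str) -> String {
    s.split('_')
        .map(|word| {
            let mut chars = word.chars();
            match chars.next() {
                Some(first) => first.to_uppercase().collect::<String>() + chars.as_str(),
                None => String::new(),
            }
        })
        .collect()
}

fn main() {
    // Underscored call names become CamelCase...
    assert_eq!(to_camel_case("transfer_keep_alive"), "TransferKeepAlive");
    // ...but a name with no underscores, like `AB`, survives unchanged,
    // which is exactly what `dont_force_struct_names_camel_case` asserts.
    assert_eq!(to_camel_case("AB"), "AB");
}
```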
@@ -1,4 +1,4 @@
// Copyright 2019-2021 Parity Technologies (UK) Ltd.
// Copyright 2019-2022 Parity Technologies (UK) Ltd.
// This file is part of subxt.
//
// subxt is free software: you can redistribute it and/or modify
@@ -15,9 +15,12 @@
// along with subxt. If not, see <http://www.gnu.org/licenses/>.
use super::{
CompositeDef,
CompositeDefFields,
GeneratedTypeDerives,
TypeDefParameters,
TypeGenerator,
TypeParameter,
TypePath,
};
use proc_macro2::TokenStream;
use quote::{
@@ -26,12 +29,9 @@ use quote::{
};
use scale_info::{
form::PortableForm,
Field,
Type,
TypeDef,
TypeDefPrimitive,
};
use std::collections::HashSet;
use syn::parse_quote;
/// Generates a Rust `struct` or `enum` definition based on the supplied [`scale_info::Type`].
@@ -40,17 +40,20 @@ use syn::parse_quote;
/// generated types in the module.
#[derive(Debug)]
pub struct TypeDefGen<'a> {
/// The type generation context, allows resolving of type paths for the fields of the
/// generated type.
pub(super) type_gen: &'a TypeGenerator<'a>,
/// Contains the definition of the type to be generated.
pub(super) ty: Type<PortableForm>,
/// The type parameters of the type to be generated
type_params: TypeDefParameters,
/// The derives with which to annotate the generated type.
derives: &'a GeneratedTypeDerives,
/// The kind of type to be generated.
ty_kind: TypeDefGenKind,
}
impl<'a> quote::ToTokens for TypeDefGen<'a> {
fn to_tokens(&self, tokens: &mut TokenStream) {
let type_params = self
.ty
impl<'a> TypeDefGen<'a> {
/// Construct a type definition for codegen from the given [`scale_info::Type`].
pub fn from_type(ty: Type<PortableForm>, type_gen: &'a TypeGenerator) -> Self {
let derives = type_gen.derives();
let type_params = ty
.type_params()
.iter()
.enumerate()
@@ -60,6 +63,7 @@ impl<'a> quote::ToTokens for TypeDefGen<'a> {
let tp_name = format_ident!("_{}", i);
Some(TypeParameter {
concrete_type_id: ty.id(),
original_name: tp.name().clone(),
name: tp_name,
})
}
@@ -68,267 +72,100 @@ impl<'a> quote::ToTokens for TypeDefGen<'a> {
})
.collect::<Vec<_>>();
let type_name = self.ty.path().ident().map(|ident| {
let type_params = if !type_params.is_empty() {
quote! { < #( #type_params ),* > }
} else {
quote! {}
};
let ty = format_ident!("{}", ident);
let path = parse_quote! { #ty #type_params};
syn::Type::Path(path)
});
let mut type_params = TypeDefParameters::new(type_params);
let derives = self.type_gen.derives();
match self.ty.type_def() {
let ty_kind = match ty.type_def() {
TypeDef::Composite(composite) => {
let type_name = type_name.expect("structs should have a name");
let (fields, _) =
self.composite_fields(composite.fields(), &type_params, true);
let derive_as_compact = if composite.fields().len() == 1 {
// any single field wrapper struct with a concrete unsigned int type can derive
// CompactAs.
let field = &composite.fields()[0];
if !self
.ty
.type_params()
.iter()
.any(|tp| Some(tp.name()) == field.type_name())
{
let ty = self.type_gen.resolve_type(field.ty().id());
if matches!(
ty.type_def(),
TypeDef::Primitive(
TypeDefPrimitive::U8
| TypeDefPrimitive::U16
| TypeDefPrimitive::U32
| TypeDefPrimitive::U64
| TypeDefPrimitive::U128
)
) {
Some(quote!( #[derive(::subxt::codec::CompactAs)] ))
} else {
None
}
} else {
None
}
} else {
None
};
let ty_toks = quote! {
#derive_as_compact
#derives
pub struct #type_name #fields
};
tokens.extend(ty_toks);
let type_name = ty.path().ident().expect("structs should have a name");
let fields = CompositeDefFields::from_scale_info_fields(
&type_name,
composite.fields(),
type_params.params(),
type_gen,
);
type_params.update_unused(fields.field_types());
let composite_def = CompositeDef::struct_def(
&type_name,
type_params.clone(),
fields,
Some(parse_quote!(pub)),
type_gen,
);
TypeDefGenKind::Struct(composite_def)
}
TypeDef::Variant(variant) => {
let type_name = type_name.expect("variants should have a name");
let mut variants = Vec::new();
let mut used_type_params = HashSet::new();
let type_params_set: HashSet<_> = type_params.iter().cloned().collect();
let type_name = ty.path().ident().expect("variants should have a name");
let variants = variant
.variants()
.iter()
.map(|v| {
let fields = CompositeDefFields::from_scale_info_fields(
v.name(),
v.fields(),
type_params.params(),
type_gen,
);
type_params.update_unused(fields.field_types());
let variant_def =
CompositeDef::enum_variant_def(v.name(), fields);
(v.index(), variant_def)
})
.collect();
for v in variant.variants() {
let variant_name = format_ident!("{}", v.name());
let (fields, unused_type_params) = if v.fields().is_empty() {
let unused = type_params_set.iter().cloned().collect::<Vec<_>>();
(quote! {}, unused)
} else {
self.composite_fields(v.fields(), &type_params, false)
};
let index = proc_macro2::Literal::u8_unsuffixed(v.index());
variants.push(quote! {
#[codec(index = #index)]
#variant_name #fields
});
let unused_params_set = unused_type_params.iter().cloned().collect();
let used_params = type_params_set.difference(&unused_params_set);
TypeDefGenKind::Enum(type_name, variants)
}
_ => TypeDefGenKind::BuiltIn,
};
for used_param in used_params {
used_type_params.insert(used_param.clone());
}
}
Self {
type_params,
derives,
ty_kind,
}
}
}
let unused_type_params = type_params_set
.difference(&used_type_params)
.cloned()
impl<'a> quote::ToTokens for TypeDefGen<'a> {
fn to_tokens(&self, tokens: &mut TokenStream) {
match &self.ty_kind {
TypeDefGenKind::Struct(composite) => composite.to_tokens(tokens),
TypeDefGenKind::Enum(type_name, variants) => {
let mut variants = variants
.iter()
.map(|(index, def)| {
let index = proc_macro2::Literal::u8_unsuffixed(*index);
quote! {
#[codec(index = #index)]
#def
}
})
.collect::<Vec<_>>();
if !unused_type_params.is_empty() {
let phantom = Self::phantom_data(&unused_type_params);
if let Some(phantom) = self.type_params.unused_params_phantom_data() {
variants.push(quote! {
__Ignore(#phantom)
})
}
let enum_ident = format_ident!("{}", type_name);
let type_params = &self.type_params;
let derives = self.derives;
let ty_toks = quote! {
#derives
pub enum #type_name {
pub enum #enum_ident #type_params {
#( #variants, )*
}
};
tokens.extend(ty_toks);
}
_ => (), // all built-in types should already be in scope
TypeDefGenKind::BuiltIn => (), /* all built-in types should already be in scope */
}
}
}
impl<'a> TypeDefGen<'a> {
fn composite_fields(
&self,
fields: &'a [Field<PortableForm>],
type_params: &'a [TypeParameter],
is_struct: bool,
) -> (TokenStream, Vec<TypeParameter>) {
let named = fields.iter().all(|f| f.name().is_some());
let unnamed = fields.iter().all(|f| f.name().is_none());
fn unused_type_params<'a>(
type_params: &'a [TypeParameter],
types: impl Iterator<Item = &'a TypePath>,
) -> Vec<TypeParameter> {
let mut used_type_params = HashSet::new();
for ty in types {
ty.parent_type_params(&mut used_type_params)
}
let type_params_set: HashSet<_> = type_params.iter().cloned().collect();
let mut unused = type_params_set
.difference(&used_type_params)
.cloned()
.collect::<Vec<_>>();
unused.sort();
unused
}
let ty_toks = |ty_name: &str, ty_path: &TypePath| {
if ty_name.contains("Box<") {
quote! { ::std::boxed::Box<#ty_path> }
} else {
quote! { #ty_path }
}
};
if named {
let fields = fields
.iter()
.map(|field| {
let name = format_ident!(
"{}",
field.name().expect("named field without a name")
);
let ty = self
.type_gen
.resolve_type_path(field.ty().id(), type_params);
(name, ty, field.type_name())
})
.collect::<Vec<_>>();
let mut fields_tokens = fields
.iter()
.map(|(name, ty, ty_name)| {
let field_type = match ty_name {
Some(ty_name) => {
let ty = ty_toks(ty_name, ty);
if is_struct {
quote! ( pub #name: #ty )
} else {
quote! ( #name: #ty )
}
}
None => {
quote! ( #name: #ty )
}
};
if ty.is_compact() {
quote!( #[codec(compact)] #field_type )
} else {
quote!( #field_type )
}
})
.collect::<Vec<_>>();
let unused_params =
unused_type_params(type_params, fields.iter().map(|(_, ty, _)| ty));
if is_struct && !unused_params.is_empty() {
let phantom = Self::phantom_data(&unused_params);
fields_tokens.push(quote! {
#[codec(skip)] pub __subxt_unused_type_params: #phantom
})
}
let fields = quote! {
{
#( #fields_tokens, )*
}
};
(fields, unused_params)
} else if unnamed {
let type_paths = fields
.iter()
.map(|field| {
let ty = self
.type_gen
.resolve_type_path(field.ty().id(), type_params);
(ty, field.type_name())
})
.collect::<Vec<_>>();
let mut fields_tokens = type_paths
.iter()
.map(|(ty, ty_name)| {
let field_type = match ty_name {
Some(ty_name) => {
let ty = ty_toks(ty_name, ty);
if is_struct {
quote! { pub #ty }
} else {
quote! { #ty }
}
}
None => {
quote! { #ty }
}
};
if ty.is_compact() {
quote!( #[codec(compact)] #field_type )
} else {
quote!( #field_type )
}
})
.collect::<Vec<_>>();
let unused_params =
unused_type_params(type_params, type_paths.iter().map(|(ty, _)| ty));
if is_struct && !unused_params.is_empty() {
let phantom_data = Self::phantom_data(&unused_params);
fields_tokens.push(quote! { #[codec(skip)] pub #phantom_data })
}
let fields = quote! { ( #( #fields_tokens, )* ) };
let fields_tokens = if is_struct {
// add a semicolon for tuple structs
quote! { #fields; }
} else {
fields
};
(fields_tokens, unused_params)
} else {
panic!("Fields must be either all named or all unnamed")
}
}
fn phantom_data(params: &[TypeParameter]) -> TokenStream {
let params = if params.len() == 1 {
let param = &params[0];
quote! { #param }
} else {
quote! { ( #( #params ), * ) }
};
quote! ( ::core::marker::PhantomData<#params> )
}
#[derive(Debug)]
pub enum TypeDefGenKind {
Struct(CompositeDef),
Enum(String, Vec<(u8, CompositeDef)>),
BuiltIn,
}
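The refactor above moves the `CompactAs` decision out of `to_tokens`, but the rule itself is unchanged: only a single-field wrapper over a concrete unsigned integer primitive qualifies. A reduced standalone model of that eligibility check (with `Prim` as a stand-in for scale-info's `TypeDefPrimitive`, so this compiles on its own):

```rust
// Stand-in for scale_info::TypeDefPrimitive, reduced to what the check needs.
#[derive(Clone, Copy)]
enum Prim {
    U8,
    U16,
    U32,
    U64,
    U128,
    Bool,
}

// A struct may derive `CompactAs` only if it wraps exactly one field and
// that field resolves to a concrete unsigned integer primitive.
fn derives_compact_as(field_count: usize, field_prim: Option<Prim>) -> bool {
    field_count == 1
        && matches!(
            field_prim,
            Some(Prim::U8 | Prim::U16 | Prim::U32 | Prim::U64 | Prim::U128)
        )
}

fn main() {
    assert!(derives_compact_as(1, Some(Prim::U64))); // e.g. `TSu64(u64)`
    assert!(!derives_compact_as(1, Some(Prim::Bool))); // non-integer field
    assert!(!derives_compact_as(2, Some(Prim::U8))); // more than one field
}
```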
@@ -0,0 +1,87 @@
// Copyright 2019-2022 Parity Technologies (UK) Ltd.
// This file is part of subxt.
//
// subxt is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// subxt is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with subxt. If not, see <http://www.gnu.org/licenses/>.
use super::TypeParameter;
use crate::types::CompositeDefFieldType;
use quote::quote;
use std::collections::BTreeSet;
/// Represents the set of generic type parameters for generating a type definition e.g. the `T` in
/// `Foo<T>`.
///
/// Additionally this allows generating a `PhantomData` type for any type params which are unused
/// in the type definition itself.
#[derive(Clone, Debug, Default)]
pub struct TypeDefParameters {
params: Vec<TypeParameter>,
unused: BTreeSet<TypeParameter>,
}
impl TypeDefParameters {
/// Create a new [`TypeDefParameters`] instance.
pub fn new(params: Vec<TypeParameter>) -> Self {
let unused = params.iter().cloned().collect();
Self { params, unused }
}
/// Update the set of unused type parameters by removing those that are used in the given
/// fields.
pub fn update_unused<'a>(
&mut self,
fields: impl Iterator<Item = &'a CompositeDefFieldType>,
) {
let mut used_type_params = BTreeSet::new();
for field in fields {
field.type_path.parent_type_params(&mut used_type_params)
}
for used_type_param in &used_type_params {
self.unused.remove(used_type_param);
}
}
/// Construct a [`core::marker::PhantomData`] for the unused type params.
pub fn unused_params_phantom_data(&self) -> Option<syn::TypePath> {
if self.unused.is_empty() {
return None
}
let params = if self.unused.len() == 1 {
let param = self
.unused
.iter()
.next()
.expect("Checked for exactly one unused param");
quote! { #param }
} else {
let params = self.unused.iter();
quote! { ( #( #params ), * ) }
};
Some(syn::parse_quote! { ::core::marker::PhantomData<#params> })
}
/// Returns the set of type parameters.
pub fn params(&self) -> &[TypeParameter] {
&self.params
}
}
impl<'a> quote::ToTokens for TypeDefParameters {
fn to_tokens(&self, tokens: &mut proc_macro2::TokenStream) {
if !self.params.is_empty() {
let params = &self.params;
tokens.extend(quote! { < #( #params ),* > })
}
}
}
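`TypeDefParameters` starts with every parameter marked unused and prunes the set as fields are inspected; whatever survives ends up in the `PhantomData` marker. A minimal standalone sketch of that bookkeeping (plain strings stand in for `TypeParameter`):

```rust
use std::collections::BTreeSet;

// Mirror of the pattern above: begin with all params in the `unused` set,
// then remove each one that some field's type path actually mentions.
fn unused_params(params: &[&str], used_in_fields: &[&str]) -> Vec<String> {
    let mut unused: BTreeSet<&str> = params.iter().copied().collect();
    for used in used_in_fields {
        unused.remove(used);
    }
    // BTreeSet iteration gives a stable, ordered result for codegen.
    unused.into_iter().map(String::from).collect()
}

fn main() {
    // `Foo<_0, _1>` where only `_0` appears in a field:
    // `_1` is what the PhantomData marker must carry.
    let unused = unused_params(&["_0", "_1"], &["_0"]);
    assert_eq!(unused, vec!["_1".to_string()]);
}
```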
@@ -1,4 +1,4 @@
// Copyright 2019-2021 Parity Technologies (UK) Ltd.
// Copyright 2019-2022 Parity Technologies (UK) Ltd.
// This file is part of subxt.
//
// subxt is free software: you can redistribute it and/or modify
@@ -28,7 +28,7 @@ use scale_info::{
TypeDef,
TypeDefPrimitive,
};
use std::collections::HashSet;
use std::collections::BTreeSet;
use syn::parse_quote;
#[derive(Clone, Debug)]
@@ -67,7 +67,7 @@ impl TypePath {
/// a: Vec<Option<T>>, // the parent type param here is `T`
/// }
/// ```
pub fn parent_type_params(&self, acc: &mut HashSet<TypeParameter>) {
pub fn parent_type_params(&self, acc: &mut BTreeSet<TypeParameter>) {
match self {
Self::Parameter(type_parameter) => {
acc.insert(type_parameter.clone());
@@ -173,7 +173,7 @@ impl TypePathType {
}
TypeDef::Compact(_) => {
let compact_type = &self.params[0];
syn::Type::Path(parse_quote! ( #compact_type ))
parse_quote! ( #compact_type )
}
TypeDef::BitSequence(_) => {
let bit_order_type = &self.params[0];
@@ -195,7 +195,7 @@ impl TypePathType {
/// a: Vec<Option<T>>, // the parent type param here is `T`
/// }
/// ```
fn parent_type_params(&self, acc: &mut HashSet<TypeParameter>) {
fn parent_type_params(&self, acc: &mut BTreeSet<TypeParameter>) {
for p in &self.params {
p.parent_type_params(acc);
}
@@ -205,6 +205,7 @@ impl TypePathType {
#[derive(Clone, Debug, Eq, PartialEq, Ord, PartialOrd, Hash)]
pub struct TypeParameter {
pub(super) concrete_type_id: u32,
pub(super) original_name: String,
pub(super) name: proc_macro2::Ident,
}
@@ -235,7 +236,7 @@ impl quote::ToTokens for TypePathSubstitute {
}
impl TypePathSubstitute {
fn parent_type_params(&self, acc: &mut HashSet<TypeParameter>) {
fn parent_type_params(&self, acc: &mut BTreeSet<TypeParameter>) {
for p in &self.params {
p.parent_type_params(acc);
}
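The `parent_type_params` methods in this file all recurse the same way, now accumulating into an ordered `BTreeSet` instead of a `HashSet`. A self-contained model of that traversal (a simplified `TypePath`, not the real type):

```rust
use std::collections::BTreeSet;

// Simplified model of the real TypePath: either a generic parameter
// or a concrete type applied to further paths.
enum TypePath {
    Parameter(String),
    Type { params: Vec<TypePath> },
}

// Walk the path, collecting every generic parameter it mentions.
fn parent_type_params(path: &TypePath, acc: &mut BTreeSet<String>) {
    match path {
        TypePath::Parameter(p) => {
            acc.insert(p.clone());
        }
        TypePath::Type { params } => {
            for p in params {
                parent_type_params(p, acc);
            }
        }
    }
}

fn main() {
    // Models `Vec<Option<T>>` from the doc example: the only parent
    // type param is `T`, however deeply it is nested.
    let path = TypePath::Type {
        params: vec![TypePath::Type {
            params: vec![TypePath::Parameter("T".into())],
        }],
    };
    let mut acc = BTreeSet::new();
    parent_type_params(&path, &mut acc);
    assert_eq!(acc.into_iter().collect::<Vec<_>>(), vec!["T".to_string()]);
}
```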
@@ -0,0 +1,21 @@
[package]
name = "subxt-examples"
version = "0.16.0"
authors = ["Parity Technologies <admin@parity.io>"]
edition = "2021"
publish = false
license = "GPL-3.0"
repository = "https://github.com/paritytech/subxt"
documentation = "https://docs.rs/subxt"
homepage = "https://www.parity.io/"
description = "Subxt example usage"
[dev-dependencies]
subxt = { path = "../subxt" }
async-std = { version = "1.9.0", features = ["attributes", "tokio1"] }
sp-keyring = "4.0.0"
env_logger = "0.9.0"
futures = "0.3.13"
codec = { package = "parity-scale-codec", version = "2", default-features = false, features = ["derive", "full", "bit-vec"] }
hex = "0.4.3"
@@ -0,0 +1,3 @@
# Subxt Examples
Take a look in the [examples](./examples) subfolder for various `subxt` usage examples.
@@ -0,0 +1,70 @@
// Copyright 2019-2022 Parity Technologies (UK) Ltd.
// This file is part of subxt.
//
// subxt is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// subxt is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with subxt. If not, see <http://www.gnu.org/licenses/>.
use sp_keyring::AccountKeyring;
use subxt::{
ClientBuilder,
Config,
DefaultConfig,
DefaultExtra,
PairSigner,
};
#[subxt::subxt(runtime_metadata_path = "examples/polkadot_metadata.scale")]
pub mod polkadot {}
/// Custom [`Config`] impl where the default types for the target chain differ from the
/// [`DefaultConfig`]
#[derive(Clone, Debug, Default, Eq, PartialEq)]
pub struct MyConfig;
impl Config for MyConfig {
// This is different from the default `u32`.
//
// *Note* that in this example it differs from the actual `Index` type in the
// polkadot runtime used, so some operations will fail. Normally, when using a
// custom `Config` impl, the types MUST match exactly those used in the actual runtime.
type Index = u64;
type BlockNumber = <DefaultConfig as Config>::BlockNumber;
type Hash = <DefaultConfig as Config>::Hash;
type Hashing = <DefaultConfig as Config>::Hashing;
type AccountId = <DefaultConfig as Config>::AccountId;
type Address = <DefaultConfig as Config>::Address;
type Header = <DefaultConfig as Config>::Header;
type Signature = <DefaultConfig as Config>::Signature;
type Extrinsic = <DefaultConfig as Config>::Extrinsic;
}
#[async_std::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
let api = ClientBuilder::new()
.build()
.await?
.to_runtime_api::<polkadot::RuntimeApi<MyConfig, DefaultExtra<MyConfig>>>();
let signer = PairSigner::new(AccountKeyring::Alice.pair());
let dest = AccountKeyring::Bob.to_account_id().into();
let hash = api
.tx()
.balances()
.transfer(dest, 10_000)
.sign_and_submit(&signer)
.await?;
println!("Balance transfer extrinsic submitted: {}", hash);
Ok(())
}
@@ -1,4 +1,4 @@
// Copyright 2019-2021 Parity Technologies (UK) Ltd.
// Copyright 2019-2022 Parity Technologies (UK) Ltd.
// This file is part of subxt.
//
// subxt is free software: you can redistribute it and/or modify
@@ -14,9 +14,13 @@
// You should have received a copy of the GNU General Public License
// along with subxt. If not, see <http://www.gnu.org/licenses/>.
#![allow(clippy::redundant_clone)]
#[subxt::subxt(
runtime_metadata_path = "examples/polkadot_metadata.scale",
generated_type_derives = "Clone, Debug"
// We can add (certain) custom derives to the generated types by providing
// a comma-separated list to the attribute below. Most useful for adding `Clone`:
generated_type_derives = "Clone, Hash"
)]
pub mod polkadot {}
@@ -25,6 +29,6 @@ use polkadot::runtime_types::frame_support::PalletId;
#[async_std::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
let pallet_id = PalletId([1u8; 8]);
let _ = <PalletId as Clone>::clone(&pallet_id);
let _ = pallet_id.clone();
Ok(())
}
@@ -1,4 +1,4 @@
// Copyright 2019-2021 Parity Technologies (UK) Ltd.
// Copyright 2019-2022 Parity Technologies (UK) Ltd.
// This file is part of subxt.
//
// subxt is free software: you can redistribute it and/or modify
@@ -22,7 +22,11 @@
//! polkadot --dev --tmp
//! ```
use subxt::ClientBuilder;
use subxt::{
ClientBuilder,
DefaultConfig,
DefaultExtra,
};
#[subxt::subxt(runtime_metadata_path = "examples/polkadot_metadata.scale")]
pub mod polkadot {}
@@ -34,7 +38,7 @@ async fn main() -> Result<(), Box<dyn std::error::Error>> {
let api = ClientBuilder::new()
.build()
.await?
.to_runtime_api::<polkadot::RuntimeApi<polkadot::DefaultConfig>>();
.to_runtime_api::<polkadot::RuntimeApi<DefaultConfig, DefaultExtra<DefaultConfig>>>();
let mut iter = api.storage().system().account_iter(None).await?;
@@ -1,4 +1,4 @@
// Copyright 2019-2021 Parity Technologies (UK) Ltd.
// Copyright 2019-2022 Parity Technologies (UK) Ltd.
// This file is part of subxt.
//
// subxt is free software: you can redistribute it and/or modify
@@ -14,7 +14,11 @@
// You should have received a copy of the GNU General Public License
// along with subxt. If not, see <http://www.gnu.org/licenses/>.
use subxt::ClientBuilder;
use subxt::{
ClientBuilder,
DefaultConfig,
DefaultExtra,
};
#[subxt::subxt(runtime_metadata_path = "examples/polkadot_metadata.scale")]
pub mod polkadot {}
@@ -24,10 +28,10 @@ async fn main() -> Result<(), Box<dyn std::error::Error>> {
env_logger::init();
let api = ClientBuilder::new()
.set_url("wss://rpc.polkadot.io")
.set_url("wss://rpc.polkadot.io:443")
.build()
.await?
.to_runtime_api::<polkadot::RuntimeApi<polkadot::DefaultConfig>>();
.to_runtime_api::<polkadot::RuntimeApi<DefaultConfig, DefaultExtra<DefaultConfig>>>();
let block_number = 1;
@@ -1,4 +1,4 @@
// Copyright 2019-2021 Parity Technologies (UK) Ltd.
// Copyright 2019-2022 Parity Technologies (UK) Ltd.
// This file is part of subxt.
//
// subxt is free software: you can redistribute it and/or modify
@@ -25,6 +25,8 @@
use sp_keyring::AccountKeyring;
use subxt::{
ClientBuilder,
DefaultConfig,
DefaultExtra,
PairSigner,
};
@@ -41,7 +43,7 @@ async fn main() -> Result<(), Box<dyn std::error::Error>> {
let api = ClientBuilder::new()
.build()
.await?
.to_runtime_api::<polkadot::RuntimeApi<polkadot::DefaultConfig>>();
.to_runtime_api::<polkadot::RuntimeApi<DefaultConfig, DefaultExtra<DefaultConfig>>>();
let hash = api
.tx()
.balances()
@@ -1,4 +1,4 @@
// Copyright 2019-2021 Parity Technologies (UK) Ltd.
// Copyright 2019-2022 Parity Technologies (UK) Ltd.
// This file is part of subxt.
//
// subxt is free software: you can redistribute it and/or modify
@@ -26,6 +26,8 @@ use futures::StreamExt;
use sp_keyring::AccountKeyring;
use subxt::{
ClientBuilder,
DefaultConfig,
DefaultExtra,
PairSigner,
};
@@ -53,7 +55,7 @@ async fn simple_transfer() -> Result<(), Box<dyn std::error::Error>> {
let api = ClientBuilder::new()
.build()
.await?
.to_runtime_api::<polkadot::RuntimeApi<polkadot::DefaultConfig>>();
.to_runtime_api::<polkadot::RuntimeApi<DefaultConfig, DefaultExtra<_>>>();
let balance_transfer = api
.tx()
@@ -85,7 +87,7 @@ async fn simple_transfer_separate_events() -> Result<(), Box<dyn std::error::Err
let api = ClientBuilder::new()
.build()
.await?
.to_runtime_api::<polkadot::RuntimeApi<polkadot::DefaultConfig>>();
.to_runtime_api::<polkadot::RuntimeApi<DefaultConfig, DefaultExtra<_>>>();
let balance_transfer = api
.tx()
@@ -136,7 +138,7 @@ async fn handle_transfer_events() -> Result<(), Box<dyn std::error::Error>> {
let api = ClientBuilder::new()
.build()
.await?
.to_runtime_api::<polkadot::RuntimeApi<polkadot::DefaultConfig>>();
.to_runtime_api::<polkadot::RuntimeApi<DefaultConfig, DefaultExtra<_>>>();
let mut balance_transfer_progress = api
.tx()
@@ -1,4 +1,4 @@
// Copyright 2019-2021 Parity Technologies (UK) Ltd.
// Copyright 2019-2022 Parity Technologies (UK) Ltd.
// This file is part of subxt.
//
// subxt is free software: you can redistribute it and/or modify
@@ -25,6 +25,8 @@
use sp_keyring::AccountKeyring;
use subxt::{
ClientBuilder,
DefaultConfig,
DefaultExtra,
EventSubscription,
PairSigner,
};
@@ -42,11 +44,11 @@ async fn main() -> Result<(), Box<dyn std::error::Error>> {
let api = ClientBuilder::new()
.build()
.await?
.to_runtime_api::<polkadot::RuntimeApi<polkadot::DefaultConfig>>();
.to_runtime_api::<polkadot::RuntimeApi<DefaultConfig, DefaultExtra<DefaultConfig>>>();
let sub = api.client.rpc().subscribe_events().await?;
let decoder = api.client.events_decoder();
let mut sub = EventSubscription::<polkadot::DefaultConfig>::new(sub, decoder);
let mut sub = EventSubscription::<DefaultConfig>::new(sub, decoder);
sub.filter_event::<polkadot::balances::events::Transfer>();
api.tx()
@@ -56,7 +58,7 @@ async fn main() -> Result<(), Box<dyn std::error::Error>> {
.await?;
let raw = sub.next().await.unwrap().unwrap();
let event = <polkadot::balances::events::Transfer as codec::Decode>::decode(
let event = <polkadot::balances::events::Transfer as subxt::codec::Decode>::decode(
&mut &raw.data[..],
);
if let Ok(e) = event {
@@ -1,6 +1,6 @@
[package]
name = "subxt-macro"
version = "0.1.0"
version = "0.16.0"
authors = ["Parity Technologies <admin@parity.io>"]
edition = "2021"
autotests = false
@@ -27,11 +27,10 @@ quote = "1.0.8"
syn = "1.0.58"
scale-info = "1.0.0"
subxt-codegen = { version = "0.2.0", path = "../codegen" }
subxt-codegen = { path = "../codegen", version = "0.16.0" }
[dev-dependencies]
pretty_assertions = "0.6.1"
subxt = { path = ".." }
pretty_assertions = "1.0.0"
subxt = { path = "../subxt", version = "0.16.0" }
trybuild = "1.0.38"
sp-keyring = { git = "https://github.com/paritytech/substrate/", branch = "master" }
sp-keyring = "4.0.0"
@@ -1,4 +1,4 @@
// Copyright 2019-2021 Parity Technologies (UK) Ltd.
// Copyright 2019-2022 Parity Technologies (UK) Ltd.
// This file is part of subxt.
//
// subxt is free software: you can redistribute it and/or modify
@@ -1,180 +0,0 @@
// Copyright 2019-2021 Parity Technologies (UK) Ltd.
// This file is part of subxt.
//
// subxt is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// subxt is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with subxt. If not, see <http://www.gnu.org/licenses/>.
use crate::{
events::EventsDecodingError,
metadata::{
InvalidMetadataError,
MetadataError,
},
Metadata,
};
use jsonrpsee::core::Error as RpcError;
use sp_core::crypto::SecretStringError;
use sp_runtime::{
transaction_validity::TransactionValidityError,
DispatchError,
};
use thiserror::Error;
/// Error enum.
#[derive(Debug, Error)]
pub enum Error {
/// Io error.
#[error("Io error: {0}")]
Io(#[from] std::io::Error),
/// Codec error.
#[error("Scale codec error: {0}")]
Codec(#[from] codec::Error),
/// Rpc error.
#[error("Rpc error: {0}")]
Rpc(#[from] RpcError),
/// Serde serialization error
#[error("Serde json error: {0}")]
Serialization(#[from] serde_json::error::Error),
/// Secret string error.
#[error("Secret String Error")]
SecretString(SecretStringError),
/// Extrinsic validity error
#[error("Transaction Validity Error: {0:?}")]
Invalid(TransactionValidityError),
/// Invalid metadata error
#[error("Invalid Metadata: {0}")]
InvalidMetadata(#[from] InvalidMetadataError),
/// Invalid metadata error
#[error("Metadata: {0}")]
Metadata(#[from] MetadataError),
/// Runtime error.
#[error("Runtime error: {0}")]
Runtime(#[from] RuntimeError),
/// Events decoding error.
#[error("Events decoding error: {0}")]
EventsDecoding(#[from] EventsDecodingError),
/// Transaction progress error.
#[error("Transaction error: {0}")]
Transaction(#[from] TransactionError),
/// Other error.
#[error("Other error: {0}")]
Other(String),
}
impl From<SecretStringError> for Error {
fn from(error: SecretStringError) -> Self {
Error::SecretString(error)
}
}
impl From<TransactionValidityError> for Error {
fn from(error: TransactionValidityError) -> Self {
Error::Invalid(error)
}
}
impl From<&str> for Error {
fn from(error: &str) -> Self {
Error::Other(error.into())
}
}
impl From<String> for Error {
fn from(error: String) -> Self {
Error::Other(error)
}
}
/// Runtime error.
#[derive(Clone, Debug, Eq, Error, PartialEq)]
pub enum RuntimeError {
/// Module error.
#[error("Runtime module error: {0}")]
Module(PalletError),
/// At least one consumer is remaining so the account cannot be destroyed.
#[error("At least one consumer is remaining so the account cannot be destroyed.")]
ConsumerRemaining,
/// There are no providers so the account cannot be created.
#[error("There are no providers so the account cannot be created.")]
NoProviders,
/// There are too many consumers so the account cannot be created.
#[error("There are too many consumers so the account cannot be created.")]
TooManyConsumers,
/// Bad origin.
    #[error("Bad origin: thrown by ensure_signed, ensure_root or ensure_none.")]
BadOrigin,
/// Cannot lookup.
#[error("Cannot lookup some information required to validate the transaction.")]
CannotLookup,
/// Other error.
#[error("Other error: {0}")]
Other(String),
}
impl RuntimeError {
/// Converts a `DispatchError` into a subxt error.
pub fn from_dispatch(
metadata: &Metadata,
error: DispatchError,
) -> Result<Self, Error> {
match error {
DispatchError::Module {
index,
error,
message: _,
} => {
let error = metadata.error(index, error)?;
Ok(Self::Module(PalletError {
pallet: error.pallet().to_string(),
error: error.error().to_string(),
description: error.description().to_vec(),
}))
}
DispatchError::BadOrigin => Ok(Self::BadOrigin),
DispatchError::CannotLookup => Ok(Self::CannotLookup),
DispatchError::ConsumerRemaining => Ok(Self::ConsumerRemaining),
DispatchError::NoProviders => Ok(Self::NoProviders),
DispatchError::TooManyConsumers => Ok(Self::TooManyConsumers),
DispatchError::Arithmetic(_math_error) => {
Ok(Self::Other("math_error".into()))
}
DispatchError::Token(_token_error) => Ok(Self::Other("token error".into())),
DispatchError::Other(msg) => Ok(Self::Other(msg.to_string())),
}
}
}
/// Module error.
#[derive(Clone, Debug, Eq, Error, PartialEq)]
#[error("{error} from {pallet}")]
pub struct PalletError {
/// The module where the error originated.
pub pallet: String,
/// The actual error code.
pub error: String,
/// The error description.
pub description: Vec<String>,
}
/// Transaction error.
#[derive(Clone, Debug, Eq, Error, PartialEq)]
pub enum TransactionError {
/// The finality subscription expired (after ~512 blocks we give up if the
/// block hasn't yet been finalized).
#[error("The finality subscription expired")]
FinalitySubscriptionTimeout,
    /// The block hash that the transaction was added to could not be found.
/// This is probably because the block was retracted before being finalized.
#[error("The block containing the transaction can no longer be found (perhaps it was on a non-finalized fork?)")]
BlockHashNotFound,
}
-373
View File
@@ -1,373 +0,0 @@
// Copyright 2019-2021 Parity Technologies (UK) Ltd.
// This file is part of subxt.
//
// subxt is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// subxt is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with subxt. If not, see <http://www.gnu.org/licenses/>.
use codec::{
Codec,
Compact,
Decode,
Encode,
Error as CodecError,
Input,
};
use std::marker::PhantomData;
use crate::{
metadata::{
EventMetadata,
MetadataError,
},
Config,
Error,
Event,
Metadata,
Phase,
};
use scale_info::{
TypeDef,
TypeDefPrimitive,
};
use sp_core::Bytes;
/// Raw bytes for an Event
#[derive(Debug)]
#[cfg_attr(test, derive(PartialEq, Clone))]
pub struct RawEvent {
    /// The name of the pallet from which the Event originated.
pub pallet: String,
    /// The index of the pallet from which the Event originated.
pub pallet_index: u8,
/// The name of the pallet Event variant.
pub variant: String,
/// The index of the pallet Event variant.
pub variant_index: u8,
/// The raw Event data
pub data: Bytes,
}
impl RawEvent {
/// Attempt to decode this [`RawEvent`] into a specific event.
pub fn as_event<E: Event>(&self) -> Result<Option<E>, CodecError> {
if self.pallet == E::PALLET && self.variant == E::EVENT {
Ok(Some(E::decode(&mut &self.data[..])?))
} else {
Ok(None)
}
}
}
/// Events decoder.
#[derive(Debug, Clone)]
pub struct EventsDecoder<T> {
metadata: Metadata,
marker: PhantomData<T>,
}
impl<T> EventsDecoder<T>
where
T: Config,
{
/// Creates a new `EventsDecoder`.
pub fn new(metadata: Metadata) -> Self {
Self {
metadata,
marker: Default::default(),
}
}
/// Decode events.
pub fn decode_events(
&self,
input: &mut &[u8],
) -> Result<Vec<(Phase, RawEvent)>, Error> {
let compact_len = <Compact<u32>>::decode(input)?;
let len = compact_len.0 as usize;
log::debug!("decoding {} events", len);
let mut r = Vec::new();
for _ in 0..len {
// decode EventRecord
let phase = Phase::decode(input)?;
let pallet_index = input.read_byte()?;
let variant_index = input.read_byte()?;
log::debug!(
"phase {:?}, pallet_index {}, event_variant: {}",
phase,
pallet_index,
variant_index
);
log::debug!("remaining input: {}", hex::encode(&input));
let event_metadata = self.metadata.event(pallet_index, variant_index)?;
let mut event_data = Vec::<u8>::new();
let result = self.decode_raw_event(event_metadata, input, &mut event_data);
let raw = match result {
Ok(()) => {
log::debug!("raw bytes: {}", hex::encode(&event_data),);
let event = RawEvent {
pallet: event_metadata.pallet().to_string(),
pallet_index,
variant: event_metadata.event().to_string(),
variant_index,
data: event_data.into(),
};
// topics come after the event data in EventRecord
let topics = Vec::<T::Hash>::decode(input)?;
log::debug!("topics: {:?}", topics);
event
}
Err(err) => return Err(err),
};
r.push((phase.clone(), raw));
}
Ok(r)
}
fn decode_raw_event(
&self,
event_metadata: &EventMetadata,
input: &mut &[u8],
output: &mut Vec<u8>,
) -> Result<(), Error> {
log::debug!(
"Decoding Event '{}::{}'",
event_metadata.pallet(),
event_metadata.event()
);
for arg in event_metadata.variant().fields() {
let type_id = arg.ty().id();
self.decode_type(type_id, input, output)?
}
Ok(())
}
fn decode_type(
&self,
type_id: u32,
input: &mut &[u8],
output: &mut Vec<u8>,
) -> Result<(), Error> {
let ty = self
.metadata
.resolve_type(type_id)
.ok_or(MetadataError::TypeNotFound(type_id))?;
fn decode_raw<T: Codec>(
input: &mut &[u8],
output: &mut Vec<u8>,
) -> Result<(), Error> {
let decoded = T::decode(input)?;
decoded.encode_to(output);
Ok(())
}
match ty.type_def() {
TypeDef::Composite(composite) => {
for field in composite.fields() {
self.decode_type(field.ty().id(), input, output)?
}
Ok(())
}
TypeDef::Variant(variant) => {
let variant_index = u8::decode(input)?;
variant_index.encode_to(output);
let variant =
variant
.variants()
.get(variant_index as usize)
.ok_or_else(|| {
Error::Other(format!("Variant {} not found", variant_index))
})?;
for field in variant.fields() {
self.decode_type(field.ty().id(), input, output)?;
}
Ok(())
}
TypeDef::Sequence(seq) => {
let len = <Compact<u32>>::decode(input)?;
len.encode_to(output);
for _ in 0..len.0 {
self.decode_type(seq.type_param().id(), input, output)?;
}
Ok(())
}
TypeDef::Array(arr) => {
for _ in 0..arr.len() {
self.decode_type(arr.type_param().id(), input, output)?;
}
Ok(())
}
TypeDef::Tuple(tuple) => {
for field in tuple.fields() {
self.decode_type(field.id(), input, output)?;
}
Ok(())
}
TypeDef::Primitive(primitive) => {
match primitive {
TypeDefPrimitive::Bool => decode_raw::<bool>(input, output),
TypeDefPrimitive::Char => {
Err(EventsDecodingError::UnsupportedPrimitive(
TypeDefPrimitive::Char,
)
.into())
}
TypeDefPrimitive::Str => decode_raw::<String>(input, output),
TypeDefPrimitive::U8 => decode_raw::<u8>(input, output),
TypeDefPrimitive::U16 => decode_raw::<u16>(input, output),
TypeDefPrimitive::U32 => decode_raw::<u32>(input, output),
TypeDefPrimitive::U64 => decode_raw::<u64>(input, output),
TypeDefPrimitive::U128 => decode_raw::<u128>(input, output),
TypeDefPrimitive::U256 => {
Err(EventsDecodingError::UnsupportedPrimitive(
TypeDefPrimitive::U256,
)
.into())
}
TypeDefPrimitive::I8 => decode_raw::<i8>(input, output),
TypeDefPrimitive::I16 => decode_raw::<i16>(input, output),
TypeDefPrimitive::I32 => decode_raw::<i32>(input, output),
TypeDefPrimitive::I64 => decode_raw::<i64>(input, output),
TypeDefPrimitive::I128 => decode_raw::<i128>(input, output),
TypeDefPrimitive::I256 => {
Err(EventsDecodingError::UnsupportedPrimitive(
TypeDefPrimitive::I256,
)
.into())
}
}
}
TypeDef::Compact(_compact) => {
let inner = self
.metadata
.resolve_type(type_id)
.ok_or(MetadataError::TypeNotFound(type_id))?;
let mut decode_compact_primitive = |primitive: &TypeDefPrimitive| {
match primitive {
TypeDefPrimitive::U8 => decode_raw::<Compact<u8>>(input, output),
TypeDefPrimitive::U16 => {
decode_raw::<Compact<u16>>(input, output)
}
TypeDefPrimitive::U32 => {
decode_raw::<Compact<u32>>(input, output)
}
TypeDefPrimitive::U64 => {
decode_raw::<Compact<u64>>(input, output)
}
TypeDefPrimitive::U128 => {
decode_raw::<Compact<u128>>(input, output)
}
prim => {
Err(EventsDecodingError::InvalidCompactPrimitive(
prim.clone(),
)
.into())
}
}
};
match inner.type_def() {
TypeDef::Primitive(primitive) => decode_compact_primitive(primitive),
TypeDef::Composite(composite) => {
match composite.fields() {
[field] => {
let field_ty = self
.metadata
.resolve_type(field.ty().id())
.ok_or_else(|| {
MetadataError::TypeNotFound(field.ty().id())
})?;
if let TypeDef::Primitive(primitive) = field_ty.type_def()
{
decode_compact_primitive(primitive)
} else {
Err(EventsDecodingError::InvalidCompactType(
"Composite type must have a single primitive field"
.into(),
)
.into())
}
}
_ => {
Err(EventsDecodingError::InvalidCompactType(
"Composite type must have a single field".into(),
)
.into())
}
}
}
_ => {
Err(EventsDecodingError::InvalidCompactType(
"Compact type must be a primitive or a composite type".into(),
)
.into())
}
}
}
TypeDef::BitSequence(_bitseq) => {
// decode_raw::<bitvec::BitVec>
unimplemented!("BitVec decoding for events not implemented yet")
}
}
}
}
#[derive(Debug, thiserror::Error)]
pub enum EventsDecodingError {
/// Unsupported primitive type
#[error("Unsupported primitive type {0:?}")]
UnsupportedPrimitive(TypeDefPrimitive),
/// Invalid compact type, must be an unsigned int.
#[error("Invalid compact primitive {0:?}")]
InvalidCompactPrimitive(TypeDefPrimitive),
#[error("Invalid compact composite type {0}")]
InvalidCompactType(String),
}
// #[cfg(test)]
// mod tests {
// use super::*;
// use std::convert::TryFrom;
//
// type DefaultConfig = crate::NodeTemplateRuntime;
//
// #[test]
// fn test_decode_option() {
// let decoder = EventsDecoder::<DefaultConfig>::new(
// Metadata::default(),
// );
//
// let value = Some(0u8);
// let input = value.encode();
// let mut output = Vec::<u8>::new();
// let mut errors = Vec::<RuntimeError>::new();
//
// decoder
// .decode_raw_bytes(
// &[EventArg::Option(Box::new(EventArg::Primitive(
// "u8".to_string(),
// )))],
// &mut &input[..],
// &mut output,
// &mut errors,
// )
// .unwrap();
//
// assert_eq!(output, vec![1, 0]);
// }
// }
+50
View File
@@ -0,0 +1,50 @@
[package]
name = "subxt"
version = "0.16.0"
authors = ["Parity Technologies <admin@parity.io>"]
edition = "2021"
license = "GPL-3.0"
readme = "README.md"
repository = "https://github.com/paritytech/subxt"
documentation = "https://docs.rs/subxt"
homepage = "https://www.parity.io/"
description = "Submit extrinsics (transactions) to a substrate node via RPC"
keywords = ["parity", "substrate", "blockchain"]
include = ["Cargo.toml", "src/**/*.rs", "README.md", "LICENSE"]
[dependencies]
async-trait = "0.1.49"
bitvec = { version = "0.20.1", default-features = false, features = ["alloc"] }
codec = { package = "parity-scale-codec", version = "2", default-features = false, features = ["derive", "full", "bit-vec"] }
scale-info = { version = "1.0.0", features = ["bit-vec"] }
futures = "0.3.13"
hex = "0.4.3"
jsonrpsee = { version = "0.8.0", features = ["macros", "async-client", "client-ws-transport"] }
log = "0.4.14"
num-traits = { version = "0.2.14", default-features = false }
serde = { version = "1.0.124", features = ["derive"] }
serde_json = "1.0.64"
thiserror = "1.0.24"
url = "2.2.1"
subxt-macro = { version = "0.16.0", path = "../macro" }
sp-core = { version = "4.0.0", default-features = false }
sp-runtime = { version = "4.0.0", default-features = false }
sp-version = "4.0.0"
frame-metadata = "14.0.0"
derivative = "2.2.0"
[dev-dependencies]
sp-arithmetic = { version = "4.0.0", default-features = false }
assert_matches = "1.5.0"
async-std = { version = "1.9.0", features = ["attributes", "tokio1"] }
env_logger = "0.9.0"
tempdir = "0.3.7"
wabt = "0.10.0"
which = "4.0.2"
test-runtime = { path = "../test-runtime" }
sp-keyring = "4.0.0"
+44 -25
View File
@@ -1,4 +1,4 @@
// Copyright 2019-2021 Parity Technologies (UK) Ltd.
// Copyright 2019-2022 Parity Technologies (UK) Ltd.
// This file is part of subxt.
//
// subxt is free software: you can redistribute it and/or modify
@@ -17,10 +17,9 @@
use futures::future;
use sp_runtime::traits::Hash;
pub use sp_runtime::traits::SignedExtension;
pub use sp_version::RuntimeVersion;
use crate::{
error::Error,
error::BasicError,
events::EventsDecoder,
extrinsic::{
self,
@@ -31,6 +30,7 @@ use crate::{
rpc::{
Rpc,
RpcClient,
RuntimeVersion,
SystemProperties,
},
storage::StorageClient,
@@ -38,9 +38,10 @@ use crate::{
AccountData,
Call,
Config,
ExtrinsicExtraData,
Metadata,
};
use codec::Decode;
use derivative::Derivative;
use std::sync::Arc;
/// ClientBuilder for constructing a Client.
@@ -80,7 +81,7 @@ impl ClientBuilder {
}
/// Creates a new Client.
pub async fn build<T: Config>(self) -> Result<Client<T>, Error> {
pub async fn build<T: Config>(self) -> Result<Client<T>, BasicError> {
let client = if let Some(client) = self.client {
client
} else {
@@ -112,7 +113,8 @@ impl ClientBuilder {
}
/// Client to interface with a substrate node.
#[derive(Clone)]
#[derive(Derivative)]
#[derivative(Clone(bound = ""))]
pub struct Client<T: Config> {
rpc: Rpc<T>,
genesis_hash: T::Hash,
@@ -131,7 +133,7 @@ impl<T: Config> std::fmt::Debug for Client<T> {
.field("metadata", &"<Metadata>")
.field("events_decoder", &"<EventsDecoder>")
.field("properties", &self.properties)
.field("runtime_version", &self.runtime_version.to_string())
.field("runtime_version", &self.runtime_version)
.field("iter_page_size", &self.iter_page_size)
.finish()
}
@@ -185,19 +187,27 @@ impl<T: Config> Client<T> {
}
/// A constructed call ready to be signed and submitted.
pub struct SubmittableExtrinsic<'client, T: Config, C> {
pub struct SubmittableExtrinsic<'client, T: Config, X, A, C, E: Decode> {
client: &'client Client<T>,
call: C,
marker: std::marker::PhantomData<(X, A, E)>,
}
impl<'client, T, C> SubmittableExtrinsic<'client, T, C>
impl<'client, T, X, A, C, E> SubmittableExtrinsic<'client, T, X, A, C, E>
where
T: Config + ExtrinsicExtraData<T>,
T: Config,
X: SignedExtra<T>,
A: AccountData,
C: Call + Send + Sync,
E: Decode,
{
/// Create a new [`SubmittableExtrinsic`].
pub fn new(client: &'client Client<T>, call: C) -> Self {
Self { client, call }
Self {
client,
call,
marker: Default::default(),
}
}
/// Creates and signs an extrinsic and submits it to the chain.
@@ -206,15 +216,20 @@ where
/// and obtain details about it, once it has made it into a block.
pub async fn sign_and_submit_then_watch(
self,
signer: &(dyn Signer<T> + Send + Sync),
) -> Result<TransactionProgress<'client, T>, Error>
signer: &(dyn Signer<T, X> + Send + Sync),
) -> Result<TransactionProgress<'client, T, E>, BasicError>
where
<<<T as ExtrinsicExtraData<T>>::Extra as SignedExtra<T>>::Extra as SignedExtension>::AdditionalSigned: Send + Sync + 'static
<<X as SignedExtra<T>>::Extra as SignedExtension>::AdditionalSigned:
Send + Sync + 'static,
<A as AccountData>::AccountId: From<<T as Config>::AccountId>,
<A as AccountData>::Index: Into<<T as Config>::Index>,
{
// Sign the call data to create our extrinsic.
let extrinsic = self.create_signed(signer, Default::default()).await?;
// Get a hash of the extrinsic (we'll need this later).
let ext_hash = T::Hashing::hash_of(&extrinsic);
// Submit and watch for transaction progress.
let sub = self.client.rpc().watch_extrinsic(extrinsic).await?;
@@ -231,10 +246,13 @@ where
/// and has been included in the transaction pool.
pub async fn sign_and_submit(
self,
signer: &(dyn Signer<T> + Send + Sync),
) -> Result<T::Hash, Error>
signer: &(dyn Signer<T, X> + Send + Sync),
) -> Result<T::Hash, BasicError>
where
<<<T as ExtrinsicExtraData<T>>::Extra as SignedExtra<T>>::Extra as SignedExtension>::AdditionalSigned: Send + Sync + 'static
<<X as SignedExtra<T>>::Extra as SignedExtension>::AdditionalSigned:
Send + Sync + 'static,
<A as AccountData>::AccountId: From<<T as Config>::AccountId>,
<A as AccountData>::Index: Into<<T as Config>::Index>,
{
let extrinsic = self.create_signed(signer, Default::default()).await?;
self.client.rpc().submit_extrinsic(extrinsic).await
@@ -243,25 +261,26 @@ where
/// Creates a signed extrinsic.
pub async fn create_signed(
&self,
signer: &(dyn Signer<T> + Send + Sync),
additional_params: <T::Extra as SignedExtra<T>>::Parameters,
) -> Result<UncheckedExtrinsic<T>, Error>
signer: &(dyn Signer<T, X> + Send + Sync),
additional_params: X::Parameters,
) -> Result<UncheckedExtrinsic<T, X>, BasicError>
where
<<<T as ExtrinsicExtraData<T>>::Extra as SignedExtra<T>>::Extra as SignedExtension>::AdditionalSigned: Send + Sync + 'static
<<X as SignedExtra<T>>::Extra as SignedExtension>::AdditionalSigned:
Send + Sync + 'static,
<A as AccountData>::AccountId: From<<T as Config>::AccountId>,
<A as AccountData>::Index: Into<<T as Config>::Index>,
{
let account_nonce = if let Some(nonce) = signer.nonce() {
nonce
} else {
let account_storage_entry =
<<T as ExtrinsicExtraData<T>>::AccountData as AccountData<T>>::storage_entry(signer.account_id().clone());
A::storage_entry(signer.account_id().clone().into());
let account_data = self
.client
.storage()
.fetch_or_default(&account_storage_entry, None)
.await?;
<<T as ExtrinsicExtraData<T>>::AccountData as AccountData<T>>::nonce(
&account_data,
)
A::nonce(&account_data).into()
};
let call = self
.client
+39 -21
View File
@@ -1,4 +1,4 @@
// Copyright 2019-2021 Parity Technologies (UK) Ltd.
// Copyright 2019-2022 Parity Technologies (UK) Ltd.
// This file is part of subxt.
//
// subxt is free software: you can redistribute it and/or modify
@@ -14,10 +14,7 @@
// You should have received a copy of the GNU General Public License
// along with subxt. If not, see <http://www.gnu.org/licenses/>.
use crate::{
SignedExtra,
StorageEntry,
};
use crate::StorageEntry;
use codec::{
Codec,
Encode,
@@ -35,7 +32,10 @@ use sp_runtime::traits::{
};
/// Runtime types.
pub trait Config: Clone + Sized + Send + Sync + 'static {
// Note: the 'static bound isn't strictly required, but currently deriving TypeInfo
// automatically applies a 'static bound to all generic types (including this one),
// and so until that is resolved, we'll keep the (easy to satisfy) constraint here.
pub trait Config: 'static {
/// Account index (aka nonce) type. This stores the number of previous
/// transactions associated with a sender account.
type Index: Parameter + Member + Default + AtLeast32Bit + Copy + scale_info::TypeInfo;
@@ -85,21 +85,39 @@ pub trait Config: Clone + Sized + Send + Sync + 'static {
pub trait Parameter: Codec + EncodeLike + Clone + Eq + Debug {}
impl<T> Parameter for T where T: Codec + EncodeLike + Clone + Eq + Debug {}
/// Trait to fetch data about an account.
///
/// Should be implemented on a type implementing `StorageEntry`,
/// usually generated by the `subxt` macro.
pub trait AccountData<T: Config>: StorageEntry {
/// Create a new storage entry key from the account id.
fn storage_entry(account_id: T::AccountId) -> Self;
/// Get the nonce from the storage entry value.
fn nonce(result: &<Self as StorageEntry>::Value) -> T::Index;
/// Default set of commonly used types by Substrate runtimes.
// Note: We only use this at the type level, so it should be impossible to
// create an instance of it.
pub enum DefaultConfig {}
impl Config for DefaultConfig {
type Index = u32;
type BlockNumber = u32;
type Hash = sp_core::H256;
type Hashing = sp_runtime::traits::BlakeTwo256;
type AccountId = sp_runtime::AccountId32;
type Address = sp_runtime::MultiAddress<Self::AccountId, u32>;
type Header =
sp_runtime::generic::Header<Self::BlockNumber, sp_runtime::traits::BlakeTwo256>;
type Signature = sp_runtime::MultiSignature;
type Extrinsic = sp_runtime::OpaqueExtrinsic;
}
/// Trait to configure the extra data for an extrinsic.
pub trait ExtrinsicExtraData<T: Config> {
/// The type of the [`StorageEntry`] which can be used to retrieve an account nonce.
type AccountData: AccountData<T>;
/// The type of extra data and additional signed data to be included in a transaction.
type Extra: SignedExtra<T> + Send + Sync + 'static;
/// Trait to fetch data about an account.
pub trait AccountData {
/// The runtime storage entry from which the account data can be fetched.
/// Usually generated by the `subxt` macro.
type StorageEntry: StorageEntry;
/// The type of the account id to fetch the account data for.
type AccountId;
/// The type of the account nonce returned from storage.
type Index;
/// Create a new storage entry key from the account id.
fn storage_entry(account_id: Self::AccountId) -> Self::StorageEntry;
/// Get the nonce from the storage entry value.
fn nonce(result: &<Self::StorageEntry as StorageEntry>::Value) -> Self::Index;
}
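The reworked `AccountData` trait decouples the storage lookup and nonce extraction from `Config`. A standalone sketch of the shape an implementation takes (all names here are illustrative mocks, not subxt's generated code):

```rust
// Mock of the AccountData pattern: a storage entry keyed by an account
// id, and a nonce extracted from the stored value.
trait StorageEntry {
    type Value;
}

// Hypothetical storage entry holding the account id as its key.
struct SystemAccount(u64);

struct AccountInfo {
    nonce: u32,
}

impl StorageEntry for SystemAccount {
    type Value = AccountInfo;
}

trait AccountData {
    type StorageEntry: StorageEntry;
    type AccountId;
    type Index;
    fn storage_entry(account_id: Self::AccountId) -> Self::StorageEntry;
    fn nonce(result: &<Self::StorageEntry as StorageEntry>::Value) -> Self::Index;
}

struct DefaultAccountData;

impl AccountData for DefaultAccountData {
    type StorageEntry = SystemAccount;
    type AccountId = u64;
    type Index = u32;
    fn storage_entry(account_id: u64) -> SystemAccount {
        SystemAccount(account_id)
    }
    fn nonce(result: &AccountInfo) -> u32 {
        result.nonce
    }
}

fn main() {
    let entry = DefaultAccountData::storage_entry(7);
    assert_eq!(entry.0, 7);
    let info = AccountInfo { nonce: 3 };
    assert_eq!(DefaultAccountData::nonce(&info), 3);
}
```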
+181
View File
@@ -0,0 +1,181 @@
// Copyright 2019-2022 Parity Technologies (UK) Ltd.
// This file is part of subxt.
//
// subxt is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// subxt is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with subxt. If not, see <http://www.gnu.org/licenses/>.
use crate::{
events::EventsDecodingError,
metadata::{
InvalidMetadataError,
MetadataError,
},
};
use core::fmt::Debug;
use jsonrpsee::core::error::Error as RequestError;
use sp_core::crypto::SecretStringError;
use sp_runtime::transaction_validity::TransactionValidityError;
/// An error that may contain some runtime error `E`
pub type Error<E> = GenericError<RuntimeError<E>>;
/// An error that will never contain a runtime error.
pub type BasicError = GenericError<std::convert::Infallible>;
/// The underlying error enum, generic over the type held by the `Runtime`
/// variant. Prefer to use the [`Error<E>`] and [`BasicError`] aliases over
/// using this type directly.
#[derive(Debug, thiserror::Error)]
pub enum GenericError<E> {
/// Io error.
#[error("Io error: {0}")]
Io(#[from] std::io::Error),
/// Codec error.
#[error("Scale codec error: {0}")]
Codec(#[from] codec::Error),
/// Rpc error.
#[error("Rpc error: {0}")]
Rpc(#[from] RequestError),
/// Serde serialization error
#[error("Serde json error: {0}")]
Serialization(#[from] serde_json::error::Error),
/// Secret string error.
#[error("Secret String Error")]
SecretString(SecretStringError),
/// Extrinsic validity error
#[error("Transaction Validity Error: {0:?}")]
Invalid(TransactionValidityError),
/// Invalid metadata error
#[error("Invalid Metadata: {0}")]
InvalidMetadata(#[from] InvalidMetadataError),
    /// Metadata error.
#[error("Metadata: {0}")]
Metadata(#[from] MetadataError),
/// Runtime error.
#[error("Runtime error: {0:?}")]
Runtime(E),
/// Events decoding error.
#[error("Events decoding error: {0}")]
EventsDecoding(#[from] EventsDecodingError),
/// Transaction progress error.
#[error("Transaction error: {0}")]
Transaction(#[from] TransactionError),
/// Other error.
#[error("Other error: {0}")]
Other(String),
}
impl<E> GenericError<E> {
/// [`GenericError`] is parameterised over the type that it holds in the `Runtime`
/// variant. This function allows us to map the Runtime error contained within (if present)
/// to a different type.
pub fn map_runtime_err<F, NewE>(self, f: F) -> GenericError<NewE>
where
F: FnOnce(E) -> NewE,
{
match self {
GenericError::Io(e) => GenericError::Io(e),
GenericError::Codec(e) => GenericError::Codec(e),
GenericError::Rpc(e) => GenericError::Rpc(e),
GenericError::Serialization(e) => GenericError::Serialization(e),
GenericError::SecretString(e) => GenericError::SecretString(e),
GenericError::Invalid(e) => GenericError::Invalid(e),
GenericError::InvalidMetadata(e) => GenericError::InvalidMetadata(e),
GenericError::Metadata(e) => GenericError::Metadata(e),
GenericError::EventsDecoding(e) => GenericError::EventsDecoding(e),
GenericError::Transaction(e) => GenericError::Transaction(e),
GenericError::Other(e) => GenericError::Other(e),
// This is the only branch we really care about:
GenericError::Runtime(e) => GenericError::Runtime(f(e)),
}
}
}
impl BasicError {
    /// Convert a [`BasicError`] into any
/// arbitrary [`Error<E>`].
pub fn into_error<E>(self) -> Error<E> {
self.map_runtime_err(|e| match e {})
}
}
impl<E> From<BasicError> for Error<E> {
fn from(err: BasicError) -> Self {
err.into_error()
}
}
impl<E> From<SecretStringError> for GenericError<E> {
fn from(error: SecretStringError) -> Self {
GenericError::SecretString(error)
}
}
impl<E> From<TransactionValidityError> for GenericError<E> {
fn from(error: TransactionValidityError) -> Self {
GenericError::Invalid(error)
}
}
impl<E> From<&str> for GenericError<E> {
fn from(error: &str) -> Self {
GenericError::Other(error.into())
}
}
impl<E> From<String> for GenericError<E> {
fn from(error: String) -> Self {
GenericError::Other(error)
}
}
/// This is used in the place of the `E` in [`GenericError<E>`] when we may have a
/// Runtime Error. We use this wrapper so that it is possible to implement
/// `From<Error<Infallible>>` for `Error<RuntimeError<E>>`.
///
/// This should not be used as a type; prefer to use the alias [`Error<E>`] when referring
/// to errors which may contain some Runtime error `E`.
#[derive(Clone, Debug, PartialEq)]
pub struct RuntimeError<E>(pub E);
impl<E> RuntimeError<E> {
/// Extract the actual runtime error from this struct.
pub fn inner(self) -> E {
self.0
}
}
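The `BasicError = GenericError<Infallible>` trick relies on `Infallible` having no values, so the `Runtime` arm can be proven unreachable with an empty `match`. A condensed standalone sketch of that conversion (only two variants, to keep it short):

```rust
use std::convert::Infallible;

#[derive(Debug, PartialEq)]
enum GenericError<E> {
    Runtime(E),
    Other(String),
}

type BasicError = GenericError<Infallible>;

impl<E> GenericError<E> {
    fn map_runtime_err<F, NewE>(self, f: F) -> GenericError<NewE>
    where
        F: FnOnce(E) -> NewE,
    {
        match self {
            GenericError::Runtime(e) => GenericError::Runtime(f(e)),
            GenericError::Other(s) => GenericError::Other(s),
        }
    }
}

impl BasicError {
    // `Infallible` has no values, so `match e {}` type-checks for any
    // target type: a BasicError widens to any GenericError<E>.
    fn into_error<E>(self) -> GenericError<E> {
        self.map_runtime_err(|e| match e {})
    }
}

fn main() {
    let basic: BasicError = GenericError::Other("io failed".into());
    let widened: GenericError<u32> = basic.into_error();
    assert_eq!(widened, GenericError::Other("io failed".to_string()));
}
```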
/// Module error.
#[derive(Clone, Debug, Eq, thiserror::Error, PartialEq)]
#[error("{error} from {pallet}")]
pub struct PalletError {
/// The module where the error originated.
pub pallet: String,
/// The actual error code.
pub error: String,
/// The error description.
pub description: Vec<String>,
}
/// Transaction error.
#[derive(Clone, Debug, Eq, thiserror::Error, PartialEq)]
pub enum TransactionError {
/// The finality subscription expired (after ~512 blocks we give up if the
/// block hasn't yet been finalized).
#[error("The finality subscription expired")]
FinalitySubscriptionTimeout,
    /// The block hash that the transaction was added to could not be found.
/// This is probably because the block was retracted before being finalized.
#[error("The block containing the transaction can no longer be found (perhaps it was on a non-finalized fork?)")]
BlockHashNotFound,
}
+646
View File
@@ -0,0 +1,646 @@
// Copyright 2019-2022 Parity Technologies (UK) Ltd.
// This file is part of subxt.
//
// subxt is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// subxt is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with subxt. If not, see <http://www.gnu.org/licenses/>.
use crate::{
error::BasicError,
metadata::{
EventMetadata,
MetadataError,
},
Config,
Event,
Metadata,
PhantomDataSendSync,
Phase,
};
use bitvec::{
order::Lsb0,
vec::BitVec,
};
use codec::{
Codec,
Compact,
Decode,
Error as CodecError,
Input,
};
use derivative::Derivative;
use scale_info::{
PortableRegistry,
TypeDef,
TypeDefPrimitive,
};
use sp_core::Bytes;
/// Raw bytes for an Event
#[derive(Debug)]
#[cfg_attr(test, derive(PartialEq, Clone))]
pub struct RawEvent {
    /// The name of the pallet from which the Event originated.
pub pallet: String,
    /// The index of the pallet from which the Event originated.
pub pallet_index: u8,
/// The name of the pallet Event variant.
pub variant: String,
/// The index of the pallet Event variant.
pub variant_index: u8,
/// The raw Event data
pub data: Bytes,
}
impl RawEvent {
/// Attempt to decode this [`RawEvent`] into a specific event.
pub fn as_event<E: Event>(&self) -> Result<Option<E>, CodecError> {
if self.pallet == E::PALLET && self.variant == E::EVENT {
Ok(Some(E::decode(&mut &self.data[..])?))
} else {
Ok(None)
}
}
}
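`as_event` only attempts a decode when both the static pallet name and variant name match the target event type. A standalone sketch of that name-based dispatch (the `Transfer` event and its byte layout here are invented for illustration, and SCALE decoding is replaced by a plain little-endian read):

```rust
use std::convert::TryInto;

// Mock of the RawEvent name check: decoding is attempted only when both
// the pallet and the variant name match the target event type.
trait Event: Sized {
    const PALLET: &'static str;
    const EVENT: &'static str;
    fn decode(data: &[u8]) -> Option<Self>;
}

struct RawEvent {
    pallet: String,
    variant: String,
    data: Vec<u8>,
}

impl RawEvent {
    fn as_event<E: Event>(&self) -> Option<E> {
        if self.pallet == E::PALLET && self.variant == E::EVENT {
            E::decode(&self.data)
        } else {
            None
        }
    }
}

#[derive(Debug, PartialEq)]
struct Transfer {
    amount: u64,
}

impl Event for Transfer {
    const PALLET: &'static str = "Balances";
    const EVENT: &'static str = "Transfer";
    fn decode(data: &[u8]) -> Option<Self> {
        let bytes: [u8; 8] = data.try_into().ok()?;
        Some(Transfer { amount: u64::from_le_bytes(bytes) })
    }
}

fn main() {
    let raw = RawEvent {
        pallet: "Balances".into(),
        variant: "Transfer".into(),
        data: 100u64.to_le_bytes().to_vec(),
    };
    assert_eq!(raw.as_event::<Transfer>(), Some(Transfer { amount: 100 }));

    // A non-matching pallet/variant yields None without decoding.
    let other = RawEvent { pallet: "System".into(), variant: "Remarked".into(), data: vec![] };
    assert_eq!(other.as_event::<Transfer>(), None);
}
```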
/// Events decoder.
#[derive(Derivative)]
#[derivative(Clone(bound = ""), Debug(bound = ""))]
pub struct EventsDecoder<T: Config> {
metadata: Metadata,
marker: PhantomDataSendSync<T>,
}
impl<T: Config> EventsDecoder<T> {
/// Creates a new `EventsDecoder`.
pub fn new(metadata: Metadata) -> Self {
Self {
metadata,
marker: Default::default(),
}
}
/// Decode events.
pub fn decode_events(
&self,
input: &mut &[u8],
) -> Result<Vec<(Phase, RawEvent)>, BasicError> {
let compact_len = <Compact<u32>>::decode(input)?;
let len = compact_len.0 as usize;
log::debug!("decoding {} events", len);
let mut r = Vec::new();
for _ in 0..len {
// decode EventRecord
let phase = Phase::decode(input)?;
let pallet_index = input.read_byte()?;
let variant_index = input.read_byte()?;
log::debug!(
"phase {:?}, pallet_index {}, event_variant: {}",
phase,
pallet_index,
variant_index
);
log::debug!("remaining input: {}", hex::encode(&input));
let event_metadata = self.metadata.event(pallet_index, variant_index)?;
let mut event_data = Vec::<u8>::new();
let result = self.decode_raw_event(event_metadata, input, &mut event_data);
let raw = match result {
Ok(()) => {
log::debug!("raw bytes: {}", hex::encode(&event_data),);
let event = RawEvent {
pallet: event_metadata.pallet().to_string(),
pallet_index,
variant: event_metadata.event().to_string(),
variant_index,
data: event_data.into(),
};
// topics come after the event data in EventRecord
let topics = Vec::<T::Hash>::decode(input)?;
log::debug!("topics: {:?}", topics);
event
}
Err(err) => return Err(err),
};
r.push((phase.clone(), raw));
}
Ok(r)
}
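`decode_events` begins by reading a SCALE compact-encoded `u32` giving the number of `EventRecord`s. The compact encoding keeps the mode in the low two bits of the first byte; the following is my own standalone sketch of the first three modes (not the `parity-scale-codec` implementation):

```rust
use std::convert::TryInto;

// Decode a SCALE Compact<u32> from the front of `input`, advancing the
// cursor past the bytes consumed. The low two bits select the mode:
//   0b00 single byte, 0b01 two bytes, 0b10 four bytes.
fn decode_compact_u32(input: &mut &[u8]) -> Option<u32> {
    let first = *input.first()?;
    match first & 0b11 {
        0b00 => {
            *input = &input[1..];
            Some((first >> 2) as u32)
        }
        0b01 => {
            let raw = u16::from_le_bytes(input.get(..2)?.try_into().ok()?);
            *input = &input[2..];
            Some((raw >> 2) as u32)
        }
        0b10 => {
            let raw = u32::from_le_bytes(input.get(..4)?.try_into().ok()?);
            *input = &input[4..];
            Some(raw >> 2)
        }
        // Big-integer mode: not needed for small event counts.
        _ => None,
    }
}

fn main() {
    // 3 events encoded as a single-byte compact: 3 << 2 = 0b0000_1100.
    let mut input: &[u8] = &[0b0000_1100, 0xAA];
    assert_eq!(decode_compact_u32(&mut input), Some(3));
    // The cursor advanced past the length prefix to the record bytes.
    assert_eq!(input, &[0xAAu8][..]);

    // 300 needs two-byte mode: (300 << 2) | 0b01 = 1201 = 0x04B1 (LE).
    let mut two: &[u8] = &[0xB1, 0x04];
    assert_eq!(decode_compact_u32(&mut two), Some(300));
}
```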
fn decode_raw_event(
&self,
event_metadata: &EventMetadata,
input: &mut &[u8],
output: &mut Vec<u8>,
) -> Result<(), BasicError> {
log::debug!(
"Decoding Event '{}::{}'",
event_metadata.pallet(),
event_metadata.event()
);
for arg in event_metadata.variant().fields() {
let type_id = arg.ty().id();
self.decode_type(type_id, input, output)?
}
Ok(())
}
fn decode_type(
&self,
type_id: u32,
input: &mut &[u8],
output: &mut Vec<u8>,
) -> Result<(), BasicError> {
let all_bytes = *input;
// consume some bytes, moving the cursor forward:
decode_and_consume_type(type_id, &self.metadata.runtime_metadata().types, input)?;
// count how many bytes were consumed based on remaining length:
let consumed_len = all_bytes.len() - input.len();
// move those consumed bytes to the output vec unaltered:
output.extend(&all_bytes[0..consumed_len]);
Ok(())
}
}
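The new `decode_type` no longer re-encodes each decoded value; it decodes purely to advance the cursor, then copies the consumed bytes unchanged by comparing slice lengths before and after. The same cursor arithmetic in isolation, with a plain little-endian `u32` standing in for an arbitrary SCALE type:

```rust
use std::convert::TryInto;

// Consume one little-endian u32 from `input`, copying exactly the
// consumed bytes into `output` via the before/after length difference.
fn consume_u32(input: &mut &[u8], output: &mut Vec<u8>) -> Option<u32> {
    let all_bytes = *input;
    let value = u32::from_le_bytes(input.get(..4)?.try_into().ok()?);
    *input = &input[4..]; // advance the cursor past the decoded value
    let consumed = all_bytes.len() - input.len();
    output.extend(&all_bytes[..consumed]); // bytes pass through unaltered
    Some(value)
}

fn main() {
    let mut input: &[u8] = &[1, 0, 0, 0, 0xFF];
    let mut output = Vec::new();
    assert_eq!(consume_u32(&mut input, &mut output), Some(1));
    // Exactly the four value bytes were copied; the rest remains.
    assert_eq!(output, vec![1, 0, 0, 0]);
    assert_eq!(input, &[0xFFu8][..]);
}
```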
// Given a type Id and a type registry, attempt to consume the bytes
// corresponding to that type from our input.
fn decode_and_consume_type(
type_id: u32,
types: &PortableRegistry,
input: &mut &[u8],
) -> Result<(), BasicError> {
let ty = types
.resolve(type_id)
.ok_or(MetadataError::TypeNotFound(type_id))?;
fn consume_type<T: Codec>(input: &mut &[u8]) -> Result<(), BasicError> {
T::decode(input)?;
Ok(())
}
match ty.type_def() {
TypeDef::Composite(composite) => {
for field in composite.fields() {
decode_and_consume_type(field.ty().id(), types, input)?
}
Ok(())
}
TypeDef::Variant(variant) => {
let variant_index = u8::decode(input)?;
let variant = variant
.variants()
.iter()
.find(|v| v.index() == variant_index)
.ok_or_else(|| {
BasicError::Other(format!("Variant {} not found", variant_index))
})?;
for field in variant.fields() {
decode_and_consume_type(field.ty().id(), types, input)?;
}
Ok(())
}
TypeDef::Sequence(seq) => {
let len = <Compact<u32>>::decode(input)?;
for _ in 0..len.0 {
decode_and_consume_type(seq.type_param().id(), types, input)?;
}
Ok(())
}
TypeDef::Array(arr) => {
for _ in 0..arr.len() {
decode_and_consume_type(arr.type_param().id(), types, input)?;
}
Ok(())
}
TypeDef::Tuple(tuple) => {
for field in tuple.fields() {
decode_and_consume_type(field.id(), types, input)?;
}
Ok(())
}
TypeDef::Primitive(primitive) => {
match primitive {
TypeDefPrimitive::Bool => consume_type::<bool>(input),
TypeDefPrimitive::Char => {
Err(
EventsDecodingError::UnsupportedPrimitive(TypeDefPrimitive::Char)
.into(),
)
}
TypeDefPrimitive::Str => consume_type::<String>(input),
TypeDefPrimitive::U8 => consume_type::<u8>(input),
TypeDefPrimitive::U16 => consume_type::<u16>(input),
TypeDefPrimitive::U32 => consume_type::<u32>(input),
TypeDefPrimitive::U64 => consume_type::<u64>(input),
TypeDefPrimitive::U128 => consume_type::<u128>(input),
TypeDefPrimitive::U256 => {
Err(
EventsDecodingError::UnsupportedPrimitive(TypeDefPrimitive::U256)
.into(),
)
}
TypeDefPrimitive::I8 => consume_type::<i8>(input),
TypeDefPrimitive::I16 => consume_type::<i16>(input),
TypeDefPrimitive::I32 => consume_type::<i32>(input),
TypeDefPrimitive::I64 => consume_type::<i64>(input),
TypeDefPrimitive::I128 => consume_type::<i128>(input),
TypeDefPrimitive::I256 => {
Err(
EventsDecodingError::UnsupportedPrimitive(TypeDefPrimitive::I256)
.into(),
)
}
}
}
TypeDef::Compact(compact) => {
let inner = types
.resolve(compact.type_param().id())
.ok_or(MetadataError::TypeNotFound(type_id))?;
let mut decode_compact_primitive = |primitive: &TypeDefPrimitive| {
match primitive {
TypeDefPrimitive::U8 => consume_type::<Compact<u8>>(input),
TypeDefPrimitive::U16 => consume_type::<Compact<u16>>(input),
TypeDefPrimitive::U32 => consume_type::<Compact<u32>>(input),
TypeDefPrimitive::U64 => consume_type::<Compact<u64>>(input),
TypeDefPrimitive::U128 => consume_type::<Compact<u128>>(input),
prim => {
Err(EventsDecodingError::InvalidCompactPrimitive(prim.clone())
.into())
}
}
};
match inner.type_def() {
TypeDef::Primitive(primitive) => decode_compact_primitive(primitive),
TypeDef::Composite(composite) => {
match composite.fields() {
[field] => {
let field_ty =
types.resolve(field.ty().id()).ok_or_else(|| {
MetadataError::TypeNotFound(field.ty().id())
})?;
if let TypeDef::Primitive(primitive) = field_ty.type_def() {
decode_compact_primitive(primitive)
} else {
Err(EventsDecodingError::InvalidCompactType(
"Composite type must have a single primitive field"
.into(),
)
.into())
}
}
_ => {
Err(EventsDecodingError::InvalidCompactType(
"Composite type must have a single field".into(),
)
.into())
}
}
}
_ => {
Err(EventsDecodingError::InvalidCompactType(
"Compact type must be a primitive or a composite type".into(),
)
.into())
}
}
}
TypeDef::BitSequence(bitseq) => {
let bit_store_def = types
.resolve(bitseq.bit_store_type().id())
.ok_or(MetadataError::TypeNotFound(type_id))?
.type_def();
// We just need to consume the correct number of bytes. Roughly, we encode this
// as a Compact<u32> length, and then a slice of T of that length, where T is the
// bit store type. So, we ignore the bit order and only care that the bit store type
// used lines up in terms of the number of bytes it will take to encode/decode it.
match bit_store_def {
TypeDef::Primitive(TypeDefPrimitive::U8) => {
consume_type::<BitVec<Lsb0, u8>>(input)
}
TypeDef::Primitive(TypeDefPrimitive::U16) => {
consume_type::<BitVec<Lsb0, u16>>(input)
}
TypeDef::Primitive(TypeDefPrimitive::U32) => {
consume_type::<BitVec<Lsb0, u32>>(input)
}
TypeDef::Primitive(TypeDefPrimitive::U64) => {
consume_type::<BitVec<Lsb0, u64>>(input)
}
store => {
return Err(EventsDecodingError::InvalidBitSequenceType(format!(
"{:?}",
store
))
.into())
}
}
}
}
}
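The `Compact` arm above exists because SCALE's compact integers are variable-length on the wire: the encoded size depends on the value, not on the declared type, so `Compact<u32>` cannot be consumed with a fixed-width read. A hedged sketch of the compact encoding rules (mode bits in the two least-significant bits; `compact_encode_u32` is an illustrative name, not part of the codec crate):

```rust
// Sketch of SCALE compact integer encoding for u32 values:
// the low two bits of the first byte select the width.
fn compact_encode_u32(value: u32) -> Vec<u8> {
    match value {
        // mode 0b00: value fits in 6 bits, single byte
        0..=0x3f => vec![(value as u8) << 2],
        // mode 0b01: value fits in 14 bits, two bytes little-endian
        0x40..=0x3fff => ((value << 2) | 0b01).to_le_bytes()[..2].to_vec(),
        // mode 0b10: value fits in 30 bits, four bytes little-endian
        0x4000..=0x3fff_ffff => ((value << 2) | 0b10).to_le_bytes().to_vec(),
        // mode 0b11: big-integer mode; header encodes payload length - 4
        _ => {
            let mut out = vec![0b11]; // (4 - 4) << 2 | 0b11
            out.extend(value.to_le_bytes());
            out
        }
    }
}

fn main() {
    assert_eq!(compact_encode_u32(1), vec![0x04]); // 1 << 2
    assert_eq!(compact_encode_u32(69), vec![0x15, 0x01]); // two-byte mode
    assert_eq!(compact_encode_u32(u32::MAX).len(), 5); // header + 4 bytes
    println!("ok");
}
```

This is why the decoder resolves the inner type of a `Compact` and dispatches to `Compact<uN>` readers rather than consuming `size_of::<uN>()` bytes.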
#[derive(Debug, thiserror::Error)]
pub enum EventsDecodingError {
/// Unsupported primitive type
#[error("Unsupported primitive type {0:?}")]
UnsupportedPrimitive(TypeDefPrimitive),
    /// Invalid compact primitive; must be an unsigned integer.
#[error("Invalid compact primitive {0:?}")]
InvalidCompactPrimitive(TypeDefPrimitive),
/// Invalid compact type; error details in string.
#[error("Invalid compact composite type {0}")]
InvalidCompactType(String),
    /// Invalid bit sequence type; the bit store type used isn't supported.
#[error("Invalid bit sequence type; bit store type {0} is not supported")]
InvalidBitSequenceType(String),
}
#[cfg(test)]
mod tests {
use super::*;
use crate::{
Config,
DefaultConfig,
Phase,
};
use codec::Encode;
use frame_metadata::{
v14::{
ExtrinsicMetadata,
PalletEventMetadata,
PalletMetadata,
RuntimeMetadataLastVersion,
},
RuntimeMetadataPrefixed,
};
use scale_info::{
meta_type,
TypeInfo,
};
use std::convert::TryFrom;
type TypeId = scale_info::interner::UntrackedSymbol<std::any::TypeId>;
#[derive(Encode)]
pub struct EventRecord<E: Encode> {
phase: Phase,
pallet_index: u8,
event: E,
topics: Vec<<DefaultConfig as Config>::Hash>,
}
fn event_record<E: Encode>(pallet_index: u8, event: E) -> EventRecord<E> {
EventRecord {
phase: Phase::Finalization,
pallet_index,
event,
topics: vec![],
}
}
fn singleton_type_registry<T: scale_info::TypeInfo + 'static>(
) -> (TypeId, PortableRegistry) {
let m = scale_info::MetaType::new::<T>();
let mut types = scale_info::Registry::new();
let id = types.register_type(&m);
let portable_registry: PortableRegistry = types.into();
(id, portable_registry)
}
fn pallet_metadata<E: TypeInfo + 'static>(pallet_index: u8) -> PalletMetadata {
let event = PalletEventMetadata {
ty: meta_type::<E>(),
};
PalletMetadata {
name: "Test",
storage: None,
calls: None,
event: Some(event),
constants: vec![],
error: None,
index: pallet_index,
}
}
fn init_decoder(pallets: Vec<PalletMetadata>) -> EventsDecoder<DefaultConfig> {
let extrinsic = ExtrinsicMetadata {
ty: meta_type::<()>(),
version: 0,
signed_extensions: vec![],
};
let v14 = RuntimeMetadataLastVersion::new(pallets, extrinsic, meta_type::<()>());
let runtime_metadata: RuntimeMetadataPrefixed = v14.into();
let metadata = Metadata::try_from(runtime_metadata).unwrap();
EventsDecoder::<DefaultConfig>::new(metadata)
}
fn decode_and_consume_type_consumes_all_bytes<
T: codec::Encode + scale_info::TypeInfo + 'static,
>(
val: T,
) {
let (type_id, registry) = singleton_type_registry::<T>();
let bytes = val.encode();
let cursor = &mut &*bytes;
decode_and_consume_type(type_id.id(), &registry, cursor).unwrap();
assert_eq!(cursor.len(), 0);
}
#[test]
fn decode_single_event() {
#[derive(Clone, Encode, TypeInfo)]
enum Event {
A(u8),
}
let pallet_index = 0;
let pallet = pallet_metadata::<Event>(pallet_index);
let decoder = init_decoder(vec![pallet]);
let event = Event::A(1);
let encoded_event = event.encode();
let event_records = vec![event_record(pallet_index, event)];
let mut input = Vec::new();
event_records.encode_to(&mut input);
let events = decoder.decode_events(&mut &input[..]).unwrap();
assert_eq!(events[0].1.variant_index, encoded_event[0]);
assert_eq!(events[0].1.data.0, encoded_event[1..]);
}
#[test]
fn decode_multiple_events() {
#[derive(Clone, Encode, TypeInfo)]
enum Event {
A(u8),
B,
C { a: u32 },
}
let pallet_index = 0;
let pallet = pallet_metadata::<Event>(pallet_index);
let decoder = init_decoder(vec![pallet]);
let event1 = Event::A(1);
let event2 = Event::B;
let event3 = Event::C { a: 3 };
let encoded_event1 = event1.encode();
let encoded_event2 = event2.encode();
let encoded_event3 = event3.encode();
let event_records = vec![
event_record(pallet_index, event1),
event_record(pallet_index, event2),
event_record(pallet_index, event3),
];
let mut input = Vec::new();
event_records.encode_to(&mut input);
let events = decoder.decode_events(&mut &input[..]).unwrap();
assert_eq!(events[0].1.variant_index, encoded_event1[0]);
assert_eq!(events[0].1.data.0, encoded_event1[1..]);
assert_eq!(events[1].1.variant_index, encoded_event2[0]);
assert_eq!(events[1].1.data.0, encoded_event2[1..]);
assert_eq!(events[2].1.variant_index, encoded_event3[0]);
assert_eq!(events[2].1.data.0, encoded_event3[1..]);
}
#[test]
fn compact_event_field() {
#[derive(Clone, Encode, TypeInfo)]
enum Event {
A(#[codec(compact)] u32),
}
let pallet_index = 0;
let pallet = pallet_metadata::<Event>(pallet_index);
let decoder = init_decoder(vec![pallet]);
let event = Event::A(u32::MAX);
let encoded_event = event.encode();
let event_records = vec![event_record(pallet_index, event)];
let mut input = Vec::new();
event_records.encode_to(&mut input);
let events = decoder.decode_events(&mut &input[..]).unwrap();
assert_eq!(events[0].1.variant_index, encoded_event[0]);
assert_eq!(events[0].1.data.0, encoded_event[1..]);
}
#[test]
fn compact_wrapper_struct_field() {
#[derive(Clone, Encode, TypeInfo)]
enum Event {
A(#[codec(compact)] CompactWrapper),
}
#[derive(Clone, codec::CompactAs, Encode, TypeInfo)]
struct CompactWrapper(u64);
let pallet_index = 0;
let pallet = pallet_metadata::<Event>(pallet_index);
let decoder = init_decoder(vec![pallet]);
let event = Event::A(CompactWrapper(0));
let encoded_event = event.encode();
let event_records = vec![event_record(pallet_index, event)];
let mut input = Vec::new();
event_records.encode_to(&mut input);
let events = decoder.decode_events(&mut &input[..]).unwrap();
assert_eq!(events[0].1.variant_index, encoded_event[0]);
assert_eq!(events[0].1.data.0, encoded_event[1..]);
}
#[test]
fn event_containing_explicit_index() {
#[derive(Clone, Encode, TypeInfo)]
#[repr(u8)]
#[allow(trivial_numeric_casts, clippy::unnecessary_cast)] // required because the Encode derive produces a warning otherwise
pub enum MyType {
B = 10u8,
}
#[derive(Clone, Encode, TypeInfo)]
enum Event {
A(MyType),
}
let pallet_index = 0;
let pallet = pallet_metadata::<Event>(pallet_index);
let decoder = init_decoder(vec![pallet]);
let event = Event::A(MyType::B);
let encoded_event = event.encode();
let event_records = vec![event_record(pallet_index, event)];
let mut input = Vec::new();
event_records.encode_to(&mut input);
// this would panic if the explicit enum item index were not correctly used
let events = decoder.decode_events(&mut &input[..]).unwrap();
assert_eq!(events[0].1.variant_index, encoded_event[0]);
assert_eq!(events[0].1.data.0, encoded_event[1..]);
}
#[test]
fn decode_bitvec() {
use bitvec::order::Msb0;
decode_and_consume_type_consumes_all_bytes(
bitvec::bitvec![Lsb0, u8; 0, 1, 1, 0, 1],
);
decode_and_consume_type_consumes_all_bytes(
bitvec::bitvec![Msb0, u8; 0, 1, 1, 0, 1, 0, 1, 0, 0],
);
decode_and_consume_type_consumes_all_bytes(
bitvec::bitvec![Lsb0, u16; 0, 1, 1, 0, 1],
);
decode_and_consume_type_consumes_all_bytes(
bitvec::bitvec![Msb0, u16; 0, 1, 1, 0, 1, 0, 1, 0, 0],
);
decode_and_consume_type_consumes_all_bytes(
bitvec::bitvec![Lsb0, u32; 0, 1, 1, 0, 1],
);
decode_and_consume_type_consumes_all_bytes(
bitvec::bitvec![Msb0, u32; 0, 1, 1, 0, 1, 0, 1, 0, 0],
);
decode_and_consume_type_consumes_all_bytes(
bitvec::bitvec![Lsb0, u64; 0, 1, 1, 0, 1],
);
decode_and_consume_type_consumes_all_bytes(
bitvec::bitvec![Msb0, u64; 0, 1, 1, 0, 1, 0, 1, 0, 0],
);
}
}
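The `decode_bitvec` test and the `BitSequence` arm both depend on the same byte accounting: a bit sequence is encoded as a compact bit count followed by enough store-type elements (`u8`/`u16`/`u32`/`u64`) to hold that many bits, which is why only the store type's width matters and the bit order can be ignored. A small sketch of that arithmetic (helper name is illustrative):

```rust
// Payload size of an encoded bit sequence, excluding the compact
// length prefix: store elements rounded up, times bytes per element.
fn encoded_payload_bytes(bits: usize, store_bits: usize) -> usize {
    let elements = (bits + store_bits - 1) / store_bits; // ceil division
    elements * (store_bits / 8)
}

fn main() {
    assert_eq!(encoded_payload_bytes(5, 8), 1); // 5 bits fit in one u8
    assert_eq!(encoded_payload_bytes(9, 8), 2); // 9 bits need two u8s
    assert_eq!(encoded_payload_bytes(5, 16), 2); // one u16 element
    assert_eq!(encoded_payload_bytes(65, 64), 16); // two u64 elements
    println!("ok");
}
```

Because `Lsb0` and `Msb0` orderings produce the same number of store elements for a given bit count, consuming with `BitVec<Lsb0, _>` is sufficient even for `Msb0`-encoded input, as the tests above exercise.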
@@ -1,4 +1,4 @@
// Copyright 2019-2021 Parity Technologies (UK) Ltd.
// Copyright 2019-2022 Parity Technologies (UK) Ltd.
// This file is part of subxt.
//
// subxt is free software: you can redistribute it and/or modify
@@ -14,14 +14,12 @@
// You should have received a copy of the GNU General Public License
// along with subxt. If not, see <http://www.gnu.org/licenses/>.
use crate::PhantomDataSendSync;
use codec::{
Decode,
Encode,
};
use core::{
fmt::Debug,
marker::PhantomData,
};
use derivative::Derivative;
use scale_info::TypeInfo;
use sp_runtime::{
generic::Era,
@@ -48,21 +46,24 @@ use crate::Config;
/// returned via `additional_signed()`.
/// Ensure the runtime version registered in the transaction is the same as at present.
#[derive(Encode, Decode, Clone, Eq, PartialEq, Debug, TypeInfo)]
#[derive(Derivative, Encode, Decode, TypeInfo)]
#[derivative(
Clone(bound = ""),
PartialEq(bound = ""),
Debug(bound = ""),
Eq(bound = "")
)]
#[scale_info(skip_type_params(T))]
pub struct CheckSpecVersion<T: Config>(
pub PhantomData<T>,
pub PhantomDataSendSync<T>,
/// Local version to be used for `AdditionalSigned`
#[codec(skip)]
pub u32,
);
impl<T> SignedExtension for CheckSpecVersion<T>
where
T: Config + Clone + Debug + Eq + Send + Sync,
{
impl<T: Config> SignedExtension for CheckSpecVersion<T> {
const IDENTIFIER: &'static str = "CheckSpecVersion";
type AccountId = u64;
type AccountId = T::AccountId;
type Call = ();
type AdditionalSigned = u32;
type Pre = ();
@@ -88,21 +89,24 @@ where
///
/// This is modified from the substrate version to allow passing in of the version, which is
/// returned via `additional_signed()`.
#[derive(Encode, Decode, Clone, Eq, PartialEq, Debug, TypeInfo)]
#[derive(Derivative, Encode, Decode, TypeInfo)]
#[derivative(
Clone(bound = ""),
PartialEq(bound = ""),
Debug(bound = ""),
Eq(bound = "")
)]
#[scale_info(skip_type_params(T))]
pub struct CheckTxVersion<T: Config>(
pub PhantomData<T>,
pub PhantomDataSendSync<T>,
/// Local version to be used for `AdditionalSigned`
#[codec(skip)]
pub u32,
);
impl<T> SignedExtension for CheckTxVersion<T>
where
T: Config + Clone + Debug + Eq + Send + Sync,
{
impl<T: Config> SignedExtension for CheckTxVersion<T> {
const IDENTIFIER: &'static str = "CheckTxVersion";
type AccountId = u64;
type AccountId = T::AccountId;
type Call = ();
type AdditionalSigned = u32;
type Pre = ();
@@ -128,21 +132,24 @@ where
///
/// This is modified from the substrate version to allow passing in of the genesis hash, which is
/// returned via `additional_signed()`.
#[derive(Encode, Decode, Clone, Eq, PartialEq, Debug, TypeInfo)]
#[derive(Derivative, Encode, Decode, TypeInfo)]
#[derivative(
Clone(bound = ""),
PartialEq(bound = ""),
Debug(bound = ""),
Eq(bound = "")
)]
#[scale_info(skip_type_params(T))]
pub struct CheckGenesis<T: Config>(
pub PhantomData<T>,
pub PhantomDataSendSync<T>,
/// Local genesis hash to be used for `AdditionalSigned`
#[codec(skip)]
pub T::Hash,
);
impl<T> SignedExtension for CheckGenesis<T>
where
T: Config + Clone + Debug + Eq + Send + Sync,
{
impl<T: Config> SignedExtension for CheckGenesis<T> {
const IDENTIFIER: &'static str = "CheckGenesis";
type AccountId = u64;
type AccountId = T::AccountId;
type Call = ();
type AdditionalSigned = T::Hash;
type Pre = ();
@@ -169,22 +176,25 @@ where
/// This is modified from the substrate version to allow passing in of the genesis hash, which is
/// returned via `additional_signed()`. It therefore assumes `Era::Immortal` (the transaction is
/// valid forever).
#[derive(Encode, Decode, Clone, Eq, PartialEq, Debug, TypeInfo)]
#[derive(Derivative, Encode, Decode, TypeInfo)]
#[derivative(
Clone(bound = ""),
PartialEq(bound = ""),
Debug(bound = ""),
Eq(bound = "")
)]
#[scale_info(skip_type_params(T))]
pub struct CheckMortality<T: Config>(
/// The default structure for the Extra encoding
pub (Era, PhantomData<T>),
pub (Era, PhantomDataSendSync<T>),
/// Local genesis hash to be used for `AdditionalSigned`
#[codec(skip)]
pub T::Hash,
);
impl<T> SignedExtension for CheckMortality<T>
where
T: Config + Clone + Debug + Eq + Send + Sync,
{
impl<T: Config> SignedExtension for CheckMortality<T> {
const IDENTIFIER: &'static str = "CheckMortality";
type AccountId = u64;
type AccountId = T::AccountId;
type Call = ();
type AdditionalSigned = T::Hash;
type Pre = ();
@@ -205,16 +215,19 @@ where
}
/// Nonce check and increment to give replay protection for transactions.
#[derive(Encode, Decode, Clone, Eq, PartialEq, Debug, TypeInfo)]
#[derive(Derivative, Encode, Decode, TypeInfo)]
#[derivative(
Clone(bound = ""),
PartialEq(bound = ""),
Debug(bound = ""),
Eq(bound = "")
)]
#[scale_info(skip_type_params(T))]
pub struct CheckNonce<T: Config>(#[codec(compact)] pub T::Index);
impl<T> SignedExtension for CheckNonce<T>
where
T: Config + Clone + Debug + Eq + Send + Sync,
{
impl<T: Config> SignedExtension for CheckNonce<T> {
const IDENTIFIER: &'static str = "CheckNonce";
type AccountId = u64;
type AccountId = T::AccountId;
type Call = ();
type AdditionalSigned = ();
type Pre = ();
@@ -235,16 +248,19 @@ where
}
/// Resource limit check.
#[derive(Encode, Decode, Clone, Eq, PartialEq, Debug, TypeInfo)]
#[derive(Derivative, Encode, Decode, TypeInfo)]
#[derivative(
Clone(bound = ""),
PartialEq(bound = ""),
Debug(bound = ""),
Eq(bound = "")
)]
#[scale_info(skip_type_params(T))]
pub struct CheckWeight<T: Config>(pub PhantomData<T>);
pub struct CheckWeight<T: Config>(pub PhantomDataSendSync<T>);
impl<T> SignedExtension for CheckWeight<T>
where
T: Config + Clone + Debug + Eq + Send + Sync,
{
impl<T: Config> SignedExtension for CheckWeight<T> {
const IDENTIFIER: &'static str = "CheckWeight";
type AccountId = u64;
type AccountId = T::AccountId;
type Call = ();
type AdditionalSigned = ();
type Pre = ();
@@ -266,19 +282,66 @@ where
/// Require the transactor pay for themselves and maybe include a tip to gain additional priority
/// in the queue.
#[derive(Encode, Decode, Clone, Eq, PartialEq, Debug, TypeInfo)]
#[derive(Derivative, Encode, Decode, TypeInfo)]
#[derivative(
Clone(bound = ""),
PartialEq(bound = ""),
Debug(bound = ""),
Eq(bound = ""),
Default(bound = "")
)]
#[scale_info(skip_type_params(T))]
pub struct ChargeAssetTxPayment {
pub struct ChargeTransactionPayment<T: Config>(
#[codec(compact)] u128,
pub PhantomDataSendSync<T>,
);
impl<T: Config> SignedExtension for ChargeTransactionPayment<T> {
const IDENTIFIER: &'static str = "ChargeTransactionPayment";
type AccountId = T::AccountId;
type Call = ();
type AdditionalSigned = ();
type Pre = ();
fn additional_signed(
&self,
) -> Result<Self::AdditionalSigned, TransactionValidityError> {
Ok(())
}
fn pre_dispatch(
self,
_who: &Self::AccountId,
_call: &Self::Call,
_info: &DispatchInfoOf<Self::Call>,
_len: usize,
) -> Result<Self::Pre, TransactionValidityError> {
Ok(())
}
}
/// Require the transactor pay for themselves and maybe include a tip to gain additional priority
/// in the queue.
#[derive(Derivative, Encode, Decode, TypeInfo)]
#[derivative(
Clone(bound = ""),
PartialEq(bound = ""),
Debug(bound = ""),
Eq(bound = ""),
Default(bound = "")
)]
#[scale_info(skip_type_params(T))]
pub struct ChargeAssetTxPayment<T: Config> {
/// The tip for the block author.
#[codec(compact)]
pub tip: u128,
/// The asset with which to pay the tip.
pub asset_id: Option<u32>,
/// Marker for unused type parameter.
pub marker: PhantomDataSendSync<T>,
}
impl SignedExtension for ChargeAssetTxPayment {
impl<T: Config> SignedExtension for ChargeAssetTxPayment<T> {
const IDENTIFIER: &'static str = "ChargeAssetTxPayment";
type AccountId = u64;
type AccountId = T::AccountId;
type Call = ();
type AdditionalSigned = ();
type Pre = ();
@@ -319,16 +382,27 @@ pub trait SignedExtra<T: Config>: SignedExtension {
}
/// Default `SignedExtra` for substrate runtimes.
#[derive(Encode, Decode, Clone, Eq, PartialEq, Debug, TypeInfo)]
#[derive(Derivative, Encode, Decode, TypeInfo)]
#[derivative(
Clone(bound = ""),
PartialEq(bound = ""),
Debug(bound = ""),
Eq(bound = "")
)]
#[scale_info(skip_type_params(T))]
pub struct DefaultExtra<T: Config> {
pub struct DefaultExtraWithTxPayment<T: Config, X> {
spec_version: u32,
tx_version: u32,
nonce: T::Index,
genesis_hash: T::Hash,
marker: PhantomDataSendSync<X>,
}
impl<T: Config + Clone + Debug + Eq + Send + Sync> SignedExtra<T> for DefaultExtra<T> {
impl<T, X> SignedExtra<T> for DefaultExtraWithTxPayment<T, X>
where
T: Config,
X: SignedExtension<AccountId = T::AccountId, Call = ()> + Default,
{
type Extra = (
CheckSpecVersion<T>,
CheckTxVersion<T>,
@@ -336,7 +410,7 @@ impl<T: Config + Clone + Debug + Eq + Send + Sync> SignedExtra<T> for DefaultExt
CheckMortality<T>,
CheckNonce<T>,
CheckWeight<T>,
ChargeAssetTxPayment,
X,
);
type Parameters = ();
@@ -347,31 +421,37 @@ impl<T: Config + Clone + Debug + Eq + Send + Sync> SignedExtra<T> for DefaultExt
genesis_hash: T::Hash,
_params: Self::Parameters,
) -> Self {
DefaultExtra {
DefaultExtraWithTxPayment {
spec_version,
tx_version,
nonce,
genesis_hash,
marker: PhantomDataSendSync::new(),
}
}
fn extra(&self) -> Self::Extra {
(
CheckSpecVersion(PhantomData, self.spec_version),
CheckTxVersion(PhantomData, self.tx_version),
CheckGenesis(PhantomData, self.genesis_hash),
CheckMortality((Era::Immortal, PhantomData), self.genesis_hash),
CheckSpecVersion(PhantomDataSendSync::new(), self.spec_version),
CheckTxVersion(PhantomDataSendSync::new(), self.tx_version),
CheckGenesis(PhantomDataSendSync::new(), self.genesis_hash),
CheckMortality(
(Era::Immortal, PhantomDataSendSync::new()),
self.genesis_hash,
),
CheckNonce(self.nonce),
CheckWeight(PhantomData),
ChargeAssetTxPayment {
tip: u128::default(),
asset_id: None,
},
CheckWeight(PhantomDataSendSync::new()),
X::default(),
)
}
}
impl<T: Config + Clone + Debug + Eq + Send + Sync> SignedExtension for DefaultExtra<T> {
impl<T, X: SignedExtension<AccountId = T::AccountId, Call = ()> + Default> SignedExtension
for DefaultExtraWithTxPayment<T, X>
where
T: Config,
X: SignedExtension,
{
const IDENTIFIER: &'static str = "DefaultExtra";
type AccountId = T::AccountId;
type Call = ();
@@ -394,3 +474,8 @@ impl<T: Config + Clone + Debug + Eq + Send + Sync> SignedExtension for DefaultEx
Ok(())
}
}
/// A default `SignedExtra` configuration, with [`ChargeTransactionPayment`] for tipping.
///
/// Note that this must match the `SignedExtra` type in the target runtime's extrinsic definition.
pub type DefaultExtra<T> = DefaultExtraWithTxPayment<T, ChargeTransactionPayment<T>>;
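The `DefaultExtraWithTxPayment<T, X>` refactor above is a plug-in pattern: one slot of the signed-extension tuple is deferred to a `Default`-constructible type parameter, and a type alias fixes the common choice. A minimal sketch of the same shape, with all names (`Extension`, `ChargeTip`, `ExtraWith`) invented for illustration:

```rust
// Plug-in slot selected by a type parameter, defaulted via an alias.
trait Extension: Default {
    fn identifier(&self) -> &'static str;
}

#[derive(Default)]
struct ChargeTip(u128); // stand-in for ChargeTransactionPayment

impl Extension for ChargeTip {
    fn identifier(&self) -> &'static str {
        "ChargeTransactionPayment"
    }
}

struct ExtraWith<X: Extension> {
    nonce: u32,
    payment: X, // the pluggable slot
}

impl<X: Extension> ExtraWith<X> {
    fn new(nonce: u32) -> Self {
        // X::default() mirrors the `X: Default` bound on SignedExtra
        Self { nonce, payment: X::default() }
    }
}

// The alias fixes the plug-in choice, mirroring `DefaultExtra<T>`:
type Defaultish = ExtraWith<ChargeTip>;

fn main() {
    let extra = Defaultish::new(7);
    assert_eq!(extra.payment.identifier(), "ChargeTransactionPayment");
    assert_eq!(extra.nonce, 7);
    println!("ok");
}
```

Swapping the alias target (e.g. to an asset-payment extension) changes the tuple's last element without touching any of the other check types, which is the point of the refactor.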
@@ -1,4 +1,4 @@
// Copyright 2019-2021 Parity Technologies (UK) Ltd.
// Copyright 2019-2022 Parity Technologies (UK) Ltd.
// This file is part of subxt.
//
// subxt is free software: you can redistribute it and/or modify
@@ -22,6 +22,7 @@ mod signer;
pub use self::{
extra::{
ChargeAssetTxPayment,
ChargeTransactionPayment,
CheckGenesis,
CheckMortality,
CheckNonce,
@@ -29,6 +30,7 @@ pub use self::{
CheckTxVersion,
CheckWeight,
DefaultExtra,
DefaultExtraWithTxPayment,
SignedExtra,
},
signer::{
@@ -38,53 +40,50 @@ pub use self::{
};
use sp_runtime::traits::SignedExtension;
use sp_version::RuntimeVersion;
use crate::{
error::BasicError,
rpc::RuntimeVersion,
Config,
Encoded,
Error,
ExtrinsicExtraData,
};
/// UncheckedExtrinsic type.
pub type UncheckedExtrinsic<T> = sp_runtime::generic::UncheckedExtrinsic<
pub type UncheckedExtrinsic<T, X> = sp_runtime::generic::UncheckedExtrinsic<
<T as Config>::Address,
Encoded,
<T as Config>::Signature,
<<T as ExtrinsicExtraData<T>>::Extra as SignedExtra<T>>::Extra,
<X as SignedExtra<T>>::Extra,
>;
/// SignedPayload type.
pub type SignedPayload<T> = sp_runtime::generic::SignedPayload<
Encoded,
<<T as ExtrinsicExtraData<T>>::Extra as SignedExtra<T>>::Extra,
>;
pub type SignedPayload<T, X> =
sp_runtime::generic::SignedPayload<Encoded, <X as SignedExtra<T>>::Extra>;
/// Creates a signed extrinsic
pub async fn create_signed<T>(
pub async fn create_signed<T, X>(
runtime_version: &RuntimeVersion,
genesis_hash: T::Hash,
nonce: T::Index,
call: Encoded,
signer: &(dyn Signer<T> + Send + Sync),
additional_params: <T::Extra as SignedExtra<T>>::Parameters,
) -> Result<UncheckedExtrinsic<T>, Error>
signer: &(dyn Signer<T, X> + Send + Sync),
additional_params: X::Parameters,
) -> Result<UncheckedExtrinsic<T, X>, BasicError>
where
T: Config + ExtrinsicExtraData<T>,
<<<T as ExtrinsicExtraData<T>>::Extra as SignedExtra<T>>::Extra as SignedExtension>::AdditionalSigned:
Send + Sync,
T: Config,
X: SignedExtra<T>,
<X::Extra as SignedExtension>::AdditionalSigned: Send + Sync,
{
let spec_version = runtime_version.spec_version;
let tx_version = runtime_version.transaction_version;
let extra = <T as ExtrinsicExtraData<T>>::Extra::new(
let extra = X::new(
spec_version,
tx_version,
nonce,
genesis_hash,
additional_params,
);
let payload = SignedPayload::<T>::new(call, extra.extra())?;
let payload = SignedPayload::<T, X>::new(call, extra.extra())?;
let signed = signer.sign(payload).await?;
Ok(signed)
}
@@ -1,4 +1,4 @@
// Copyright 2019-2021 Parity Technologies (UK) Ltd.
// Copyright 2019-2022 Parity Technologies (UK) Ltd.
// This file is part of subxt.
//
// subxt is free software: you can redistribute it and/or modify
@@ -18,14 +18,11 @@
//! [substrate](https://github.com/paritytech/substrate) node via RPC.
use super::{
SignedExtra,
SignedPayload,
UncheckedExtrinsic,
};
use crate::{
Config,
ExtrinsicExtraData,
SignedExtra,
};
use crate::Config;
use codec::Encode;
use sp_core::Pair;
use sp_runtime::traits::{
@@ -36,7 +33,7 @@ use sp_runtime::traits::{
/// Extrinsic signer.
#[async_trait::async_trait]
pub trait Signer<T: Config + ExtrinsicExtraData<T>> {
pub trait Signer<T: Config, E: SignedExtra<T>> {
/// Returns the account id.
fn account_id(&self) -> &T::AccountId;
@@ -49,21 +46,23 @@ pub trait Signer<T: Config + ExtrinsicExtraData<T>> {
/// refused the operation.
async fn sign(
&self,
extrinsic: SignedPayload<T>,
) -> Result<UncheckedExtrinsic<T>, String>;
extrinsic: SignedPayload<T, E>,
) -> Result<UncheckedExtrinsic<T, E>, String>;
}
/// Extrinsic signer using a private key.
#[derive(Clone, Debug)]
pub struct PairSigner<T: Config, P: Pair> {
pub struct PairSigner<T: Config, E, P: Pair> {
account_id: T::AccountId,
nonce: Option<T::Index>,
signer: P,
marker: std::marker::PhantomData<E>,
}
impl<T, P> PairSigner<T, P>
impl<T, E, P> PairSigner<T, E, P>
where
T: Config + ExtrinsicExtraData<T>,
T: Config,
E: SignedExtra<T>,
T::Signature: From<P::Signature>,
<T::Signature as Verify>::Signer:
From<P::Public> + IdentifyAccount<AccountId = T::AccountId>,
@@ -77,6 +76,7 @@ where
account_id,
nonce: None,
signer,
marker: Default::default(),
}
}
@@ -97,11 +97,13 @@ where
}
#[async_trait::async_trait]
impl<T, P> Signer<T> for PairSigner<T, P>
impl<T, E, P> Signer<T, E> for PairSigner<T, E, P>
where
T: Config + ExtrinsicExtraData<T>,
T: Config,
E: SignedExtra<T>,
T::AccountId: Into<T::Address> + 'static,
<<<T as ExtrinsicExtraData<T>>::Extra as SignedExtra<T>>::Extra as SignedExtension>::AdditionalSigned: Send + Sync + 'static,
<<E as SignedExtra<T>>::Extra as SignedExtension>::AdditionalSigned:
Send + Sync + 'static,
P: Pair + 'static,
P::Signature: Into<T::Signature> + 'static,
{
@@ -115,11 +117,11 @@ where
async fn sign(
&self,
extrinsic: SignedPayload<T>,
) -> Result<UncheckedExtrinsic<T>, String> {
extrinsic: SignedPayload<T, E>,
) -> Result<UncheckedExtrinsic<T, E>, String> {
let signature = extrinsic.using_encoded(|payload| self.signer.sign(payload));
let (call, extra, _) = extrinsic.deconstruct();
let extrinsic = UncheckedExtrinsic::<T>::new_signed(
let extrinsic = UncheckedExtrinsic::<T, E>::new_signed(
call,
self.account_id.clone().into(),
signature.into(),
@@ -1,4 +1,4 @@
// Copyright 2019-2021 Parity Technologies (UK) Ltd.
// Copyright 2019-2022 Parity Technologies (UK) Ltd.
// This file is part of subxt.
//
// subxt is free software: you can redistribute it and/or modify
@@ -53,10 +53,8 @@ use codec::{
DecodeAll,
Encode,
};
use core::{
fmt::Debug,
marker::PhantomData,
};
use core::fmt::Debug;
use derivative::Derivative;
mod client;
mod config;
@@ -78,12 +76,12 @@ pub use crate::{
config::{
AccountData,
Config,
ExtrinsicExtraData,
DefaultConfig,
},
error::{
BasicError,
Error,
PalletError,
RuntimeError,
TransactionError,
},
events::{
@@ -92,6 +90,7 @@ pub use crate::{
},
extrinsic::{
DefaultExtra,
DefaultExtraWithTxPayment,
PairSigner,
SignedExtra,
Signer,
@@ -165,7 +164,7 @@ impl codec::Encode for Encoded {
}
/// A phase of a block's execution.
#[derive(Clone, Debug, Eq, PartialEq, Decode)]
#[derive(Clone, Debug, Eq, PartialEq, Decode, Encode)]
pub enum Phase {
/// Applying an extrinsic.
ApplyExtrinsic(u32),
@@ -179,10 +178,17 @@ pub enum Phase {
///
/// [`WrapperKeepOpaque`] stores the type only in its opaque format, i.e. as a `Vec<u8>`. To
/// access the real type `T`, use [`Self::try_decode`].
#[derive(Debug, Eq, PartialEq, Default, Clone, Decode, Encode)]
#[derive(Derivative, Encode, Decode)]
#[derivative(
Debug(bound = ""),
Clone(bound = ""),
PartialEq(bound = ""),
Eq(bound = ""),
Default(bound = "")
)]
pub struct WrapperKeepOpaque<T> {
data: Vec<u8>,
_phantom: PhantomData<T>,
_phantom: PhantomDataSendSync<T>,
}
impl<T: Decode> WrapperKeepOpaque<T> {
@@ -207,7 +213,31 @@ impl<T: Decode> WrapperKeepOpaque<T> {
pub fn from_encoded(data: Vec<u8>) -> Self {
Self {
data,
_phantom: PhantomData,
_phantom: PhantomDataSendSync::new(),
}
}
}
/// A version of [`std::marker::PhantomData`] that is also `Send` and `Sync` (which is fine
/// because, regardless of the generic param, it is always possible to `Send` + `Sync` this
/// zero-sized type).
#[derive(Derivative, Encode, Decode, scale_info::TypeInfo)]
#[derivative(
Clone(bound = ""),
PartialEq(bound = ""),
Debug(bound = ""),
Eq(bound = ""),
Default(bound = "")
)]
#[scale_info(skip_type_params(T))]
#[doc(hidden)]
pub struct PhantomDataSendSync<T>(core::marker::PhantomData<T>);
impl<T> PhantomDataSendSync<T> {
pub(crate) fn new() -> Self {
Self(core::marker::PhantomData)
}
}
unsafe impl<T> Send for PhantomDataSendSync<T> {}
unsafe impl<T> Sync for PhantomDataSendSync<T> {}
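The motivation for `PhantomDataSendSync` is that plain `PhantomData<T>` inherits `Send`/`Sync` from `T`, while a zero-sized marker that never stores a `T` can soundly be both unconditionally. A compile-time sketch of the difference, using an invented `PhantomSendSync` name so it stands alone:

```rust
use std::marker::PhantomData;

// Zero-sized wrapper that is always Send + Sync, regardless of T.
struct PhantomSendSync<T>(PhantomData<T>);

// SAFETY: no value of type T is ever stored, so sending or sharing
// this marker across threads cannot touch non-thread-safe state.
unsafe impl<T> Send for PhantomSendSync<T> {}
unsafe impl<T> Sync for PhantomSendSync<T> {}

fn assert_send_sync<X: Send + Sync>() {}

fn main() {
    // `*const u8` is neither Send nor Sync, yet the wrapper is both;
    // the same bound on `PhantomData<*const u8>` would not compile.
    assert_send_sync::<PhantomSendSync<*const u8>>();
    // Still zero-sized, like PhantomData itself:
    assert_eq!(std::mem::size_of::<PhantomSendSync<*const u8>>(), 0);
    println!("ok");
}
```

This is why the diff swaps `PhantomData<T>` for `PhantomDataSendSync<T>` in the extension structs: it removes the need for `T: Send + Sync` bounds on every `impl`.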
@@ -1,4 +1,4 @@
// Copyright 2019-2021 Parity Technologies (UK) Ltd.
// Copyright 2019-2022 Parity Technologies (UK) Ltd.
// This file is part of subxt.
//
// subxt is free software: you can redistribute it and/or modify
@@ -84,7 +84,6 @@ pub struct Metadata {
metadata: RuntimeMetadataLastVersion,
pallets: HashMap<String, PalletMetadata>,
events: HashMap<(u8, u8), EventMetadata>,
errors: HashMap<(u8, u8), ErrorMetadata>,
}
impl Metadata {
@@ -108,19 +107,6 @@ impl Metadata {
Ok(event)
}
/// Returns the metadata for the error at the given pallet and error indices.
pub fn error(
&self,
pallet_index: u8,
error_index: u8,
) -> Result<&ErrorMetadata, MetadataError> {
let error = self
.errors
.get(&(pallet_index, error_index))
.ok_or(MetadataError::ErrorNotFound(pallet_index, error_index))?;
Ok(error)
}
/// Resolve a type definition.
pub fn resolve_type(&self, id: u32) -> Option<&Type<PortableForm>> {
self.metadata.types.resolve(id)
@@ -207,30 +193,6 @@ impl EventMetadata {
}
}
#[derive(Clone, Debug)]
pub struct ErrorMetadata {
pallet: String,
error: String,
variant: Variant<PortableForm>,
}
impl ErrorMetadata {
/// Get the name of the pallet from which the error originates.
pub fn pallet(&self) -> &str {
&self.pallet
}
/// Get the name of the specific pallet error.
pub fn error(&self) -> &str {
&self.error
}
/// Get the description of the specific pallet error.
pub fn description(&self) -> &[String] {
self.variant.docs()
}
}
#[derive(Debug, thiserror::Error)]
pub enum InvalidMetadataError {
#[error("Invalid prefix")]
@@ -331,36 +293,10 @@ impl TryFrom<RuntimeMetadataPrefixed> for Metadata {
})
.collect();
let pallet_errors = metadata
.pallets
.iter()
.filter_map(|pallet| {
pallet.error.as_ref().map(|error| {
let type_def_variant = get_type_def_variant(error.ty.id())?;
Ok((pallet, type_def_variant))
})
})
.collect::<Result<Vec<_>, _>>()?;
let errors = pallet_errors
.iter()
.flat_map(|(pallet, type_def_variant)| {
type_def_variant.variants().iter().map(move |var| {
let key = (pallet.index, var.index());
let value = ErrorMetadata {
pallet: pallet.name.clone(),
error: var.name().clone(),
variant: var.clone(),
};
(key, value)
})
})
.collect();
Ok(Self {
metadata,
pallets,
events,
errors,
})
}
}
@@ -1,4 +1,4 @@
// Copyright 2019-2021 Parity Technologies (UK) Ltd.
// Copyright 2019-2022 Parity Technologies (UK) Ltd.
// This file is part of subxt.
//
// subxt is free software: you can redistribute it and/or modify
@@ -21,8 +21,22 @@
// Related: https://github.com/paritytech/subxt/issues/66
#![allow(irrefutable_let_patterns)]
use std::sync::Arc;
use std::{
collections::HashMap,
sync::Arc,
};
use crate::{
error::BasicError,
storage::StorageKeyPrefix,
subscription::{
EventStorageSubscription,
FinalizedEventStorageSubscription,
SystemEvents,
},
Config,
Metadata,
};
use codec::{
Decode,
Encode,
@@ -72,19 +86,6 @@ use sp_runtime::generic::{
Block,
SignedBlock,
};
use sp_version::RuntimeVersion;
use crate::{
error::Error,
storage::StorageKeyPrefix,
subscription::{
EventStorageSubscription,
FinalizedEventStorageSubscription,
SystemEvents,
},
Config,
Metadata,
};
/// A number type that can be serialized both as a number or a string that encodes a number in a
/// string.
@@ -169,6 +170,33 @@ pub enum SubstrateTransactionStatus<Hash, BlockHash> {
Invalid,
}
/// This contains the runtime version information necessary to make transactions, as obtained from
/// the RPC call `state_getRuntimeVersion`.
#[derive(Debug, Clone, PartialEq, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct RuntimeVersion {
/// Version of the runtime specification. A full-node will not attempt to use its native
/// runtime in substitute for the on-chain Wasm runtime unless all of `spec_name`,
/// `spec_version` and `authoring_version` are the same between Wasm and native.
pub spec_version: u32,
/// All existing dispatches are fully compatible when this number doesn't change. If this
/// number changes, then `spec_version` must change, also.
///
/// This number must change when an existing dispatchable (module ID, dispatch ID) is changed,
/// either through an alteration in its user-level semantics, a parameter
/// added/removed/changed, a dispatchable being removed, a module being removed, or a
/// dispatchable/module changing its index.
///
/// It need *not* change when a new module is added or when a dispatchable is added.
pub transaction_version: u32,
/// The other fields present may vary and aren't necessary for `subxt`; they are preserved in
/// this map.
#[serde(flatten)]
pub other: HashMap<String, serde_json::Value>,
}
/// ReadProof struct returned by the RPC
///
/// # Note
@@ -214,7 +242,7 @@ impl<T: Config> Rpc<T> {
&self,
key: &StorageKey,
hash: Option<T::Hash>,
) -> Result<Option<StorageData>, Error> {
) -> Result<Option<StorageData>, BasicError> {
let params = rpc_params![key, hash];
let data = self.client.request("state_getStorage", params).await?;
Ok(data)
@@ -229,7 +257,7 @@ impl<T: Config> Rpc<T> {
count: u32,
start_key: Option<StorageKey>,
hash: Option<T::Hash>,
) -> Result<Vec<StorageKey>, Error> {
) -> Result<Vec<StorageKey>, BasicError> {
let prefix = prefix.map(|p| p.to_storage_key());
let params = rpc_params![prefix, count, start_key, hash];
let data = self.client.request("state_getKeysPaged", params).await?;
@@ -242,7 +270,7 @@ impl<T: Config> Rpc<T> {
keys: Vec<StorageKey>,
from: T::Hash,
to: Option<T::Hash>,
) -> Result<Vec<StorageChangeSet<T::Hash>>, Error> {
) -> Result<Vec<StorageChangeSet<T::Hash>>, BasicError> {
let params = rpc_params![keys, from, to];
self.client
.request("state_queryStorage", params)
@@ -255,7 +283,7 @@ impl<T: Config> Rpc<T> {
&self,
keys: &[StorageKey],
at: Option<T::Hash>,
) -> Result<Vec<StorageChangeSet<T::Hash>>, Error> {
) -> Result<Vec<StorageChangeSet<T::Hash>>, BasicError> {
let params = rpc_params![keys, at];
self.client
.request("state_queryStorageAt", params)
@@ -264,7 +292,7 @@ impl<T: Config> Rpc<T> {
}
/// Fetch the genesis hash
pub async fn genesis_hash(&self) -> Result<T::Hash, Error> {
pub async fn genesis_hash(&self) -> Result<T::Hash, BasicError> {
let block_zero = Some(ListOrValue::Value(NumberOrHex::Number(0)));
let params = rpc_params![block_zero];
let list_or_value: ListOrValue<Option<T::Hash>> =
@@ -278,7 +306,7 @@ impl<T: Config> Rpc<T> {
}
/// Fetch the metadata
pub async fn metadata(&self) -> Result<Metadata, Error> {
pub async fn metadata(&self) -> Result<Metadata, BasicError> {
let bytes: Bytes = self
.client
.request("state_getMetadata", rpc_params![])
@@ -289,18 +317,33 @@ impl<T: Config> Rpc<T> {
}
/// Fetch system properties
pub async fn system_properties(&self) -> Result<SystemProperties, Error> {
pub async fn system_properties(&self) -> Result<SystemProperties, BasicError> {
Ok(self
.client
.request("system_properties", rpc_params![])
.await?)
}
/// Fetch system chain
pub async fn system_chain(&self) -> Result<String, BasicError> {
Ok(self.client.request("system_chain", rpc_params![]).await?)
}
/// Fetch system name
pub async fn system_name(&self) -> Result<String, BasicError> {
Ok(self.client.request("system_name", rpc_params![]).await?)
}
/// Fetch system version
pub async fn system_version(&self) -> Result<String, BasicError> {
Ok(self.client.request("system_version", rpc_params![]).await?)
}
/// Get a header
pub async fn header(
&self,
hash: Option<T::Hash>,
) -> Result<Option<T::Header>, Error> {
) -> Result<Option<T::Header>, BasicError> {
let params = rpc_params![hash];
let header = self.client.request("chain_getHeader", params).await?;
Ok(header)
@@ -310,7 +353,7 @@ impl<T: Config> Rpc<T> {
pub async fn block_hash(
&self,
block_number: Option<BlockNumber>,
) -> Result<Option<T::Hash>, Error> {
) -> Result<Option<T::Hash>, BasicError> {
let block_number = block_number.map(ListOrValue::Value);
let params = rpc_params![block_number];
let list_or_value = self.client.request("chain_getBlockHash", params).await?;
@@ -321,7 +364,7 @@ impl<T: Config> Rpc<T> {
}
/// Get a block hash of the latest finalized block
pub async fn finalized_head(&self) -> Result<T::Hash, Error> {
pub async fn finalized_head(&self) -> Result<T::Hash, BasicError> {
let hash = self
.client
.request("chain_getFinalizedHead", rpc_params![])
@@ -333,7 +376,7 @@ impl<T: Config> Rpc<T> {
pub async fn block(
&self,
hash: Option<T::Hash>,
) -> Result<Option<ChainBlock<T>>, Error> {
) -> Result<Option<ChainBlock<T>>, BasicError> {
let params = rpc_params![hash];
let block = self.client.request("chain_getBlock", params).await?;
Ok(block)
@@ -344,7 +387,7 @@ impl<T: Config> Rpc<T> {
&self,
keys: Vec<StorageKey>,
hash: Option<T::Hash>,
) -> Result<ReadProof<T::Hash>, Error> {
) -> Result<ReadProof<T::Hash>, BasicError> {
let params = rpc_params![keys, hash];
let proof = self.client.request("state_getReadProof", params).await?;
Ok(proof)
@@ -354,7 +397,7 @@ impl<T: Config> Rpc<T> {
pub async fn runtime_version(
&self,
at: Option<T::Hash>,
) -> Result<RuntimeVersion, Error> {
) -> Result<RuntimeVersion, BasicError> {
let params = rpc_params![at];
let version = self
.client
@@ -367,7 +410,9 @@ impl<T: Config> Rpc<T> {
///
/// *WARNING* these may not be included in the finalized chain; use
/// `subscribe_finalized_events` to ensure events are finalized.
pub async fn subscribe_events(&self) -> Result<EventStorageSubscription<T>, Error> {
pub async fn subscribe_events(
&self,
) -> Result<EventStorageSubscription<T>, BasicError> {
let keys = Some(vec![StorageKey::from(SystemEvents::new())]);
let params = rpc_params![keys];
@@ -381,7 +426,7 @@ impl<T: Config> Rpc<T> {
/// Subscribe to finalized events.
pub async fn subscribe_finalized_events(
&self,
) -> Result<EventStorageSubscription<T>, Error> {
) -> Result<EventStorageSubscription<T>, BasicError> {
Ok(EventStorageSubscription::Finalized(
FinalizedEventStorageSubscription::new(
self.clone(),
@@ -391,7 +436,7 @@ impl<T: Config> Rpc<T> {
}
/// Subscribe to blocks.
pub async fn subscribe_blocks(&self) -> Result<Subscription<T::Header>, Error> {
pub async fn subscribe_blocks(&self) -> Result<Subscription<T::Header>, BasicError> {
let subscription = self
.client
.subscribe(
@@ -407,7 +452,7 @@ impl<T: Config> Rpc<T> {
/// Subscribe to finalized blocks.
pub async fn subscribe_finalized_blocks(
&self,
) -> Result<Subscription<T::Header>, Error> {
) -> Result<Subscription<T::Header>, BasicError> {
let subscription = self
.client
.subscribe(
@@ -420,10 +465,10 @@ impl<T: Config> Rpc<T> {
}
/// Create and submit an extrinsic and return corresponding Hash if successful
pub async fn submit_extrinsic<E: Encode>(
pub async fn submit_extrinsic<X: Encode>(
&self,
extrinsic: E,
) -> Result<T::Hash, Error> {
extrinsic: X,
) -> Result<T::Hash, BasicError> {
let bytes: Bytes = extrinsic.encode().into();
let params = rpc_params![bytes];
let xt_hash = self
@@ -434,10 +479,11 @@ impl<T: Config> Rpc<T> {
}
/// Create and submit an extrinsic and return a subscription to the events triggered.
pub async fn watch_extrinsic<E: Encode>(
pub async fn watch_extrinsic<X: Encode>(
&self,
extrinsic: E,
) -> Result<Subscription<SubstrateTransactionStatus<T::Hash, T::Hash>>, Error> {
extrinsic: X,
) -> Result<Subscription<SubstrateTransactionStatus<T::Hash, T::Hash>>, BasicError>
{
let bytes: Bytes = extrinsic.encode().into();
let params = rpc_params![bytes];
let subscription = self
@@ -457,14 +503,14 @@ impl<T: Config> Rpc<T> {
key_type: String,
suri: String,
public: Bytes,
) -> Result<(), Error> {
) -> Result<(), BasicError> {
let params = rpc_params![key_type, suri, public];
self.client.request("author_insertKey", params).await?;
Ok(())
}
/// Generate new session keys and returns the corresponding public keys.
pub async fn rotate_keys(&self) -> Result<Bytes, Error> {
pub async fn rotate_keys(&self) -> Result<Bytes, BasicError> {
Ok(self
.client
.request("author_rotateKeys", rpc_params![])
@@ -476,7 +522,10 @@ impl<T: Config> Rpc<T> {
/// `session_keys` is the SCALE encoded session keys object from the runtime.
///
/// Returns `true` iff all private keys could be found.
pub async fn has_session_keys(&self, session_keys: Bytes) -> Result<bool, Error> {
pub async fn has_session_keys(
&self,
session_keys: Bytes,
) -> Result<bool, BasicError> {
let params = rpc_params![session_keys];
Ok(self.client.request("author_hasSessionKeys", params).await?)
}
@@ -488,7 +537,7 @@ impl<T: Config> Rpc<T> {
&self,
public_key: Bytes,
key_type: String,
) -> Result<bool, Error> {
) -> Result<bool, BasicError> {
let params = rpc_params![public_key, key_type];
Ok(self.client.request("author_hasKey", params).await?)
}
@@ -511,3 +560,34 @@ async fn ws_transport(url: &str) -> Result<(WsSender, WsReceiver), RpcError> {
.await
.map_err(|e| RpcError::Transport(e.into()))
}
#[cfg(test)]
mod test {
use super::*;
#[test]
fn test_deser_runtime_version() {
let val: RuntimeVersion = serde_json::from_str(
r#"{
"specVersion": 123,
"transactionVersion": 456,
"foo": true,
"wibble": [1,2,3]
}"#,
)
.expect("deserializing failed");
let mut m = std::collections::HashMap::new();
m.insert("foo".to_owned(), serde_json::json!(true));
m.insert("wibble".to_owned(), serde_json::json!([1, 2, 3]));
assert_eq!(
val,
RuntimeVersion {
spec_version: 123,
transaction_version: 456,
other: m
}
);
}
}
+20 -11
@@ -1,4 +1,4 @@
// Copyright 2019-2021 Parity Technologies (UK) Ltd.
// Copyright 2019-2022 Parity Technologies (UK) Ltd.
// This file is part of subxt.
//
// subxt is free software: you can redistribute it and/or modify
@@ -30,13 +30,13 @@ pub use sp_version::RuntimeVersion;
use std::marker::PhantomData;
use crate::{
error::BasicError,
metadata::{
Metadata,
MetadataError,
},
rpc::Rpc,
Config,
Error,
StorageHasher,
};
@@ -132,13 +132,22 @@ impl StorageMapKey {
}
/// Client for querying runtime storage.
#[derive(Clone)]
pub struct StorageClient<'a, T: Config> {
rpc: &'a Rpc<T>,
metadata: &'a Metadata,
iter_page_size: u32,
}
impl<'a, T: Config> Clone for StorageClient<'a, T> {
fn clone(&self) -> Self {
Self {
rpc: self.rpc,
metadata: self.metadata,
iter_page_size: self.iter_page_size,
}
}
}
impl<'a, T: Config> StorageClient<'a, T> {
/// Create a new [`StorageClient`]
pub fn new(rpc: &'a Rpc<T>, metadata: &'a Metadata, iter_page_size: u32) -> Self {
@@ -154,7 +163,7 @@ impl<'a, T: Config> StorageClient<'a, T> {
&self,
key: StorageKey,
hash: Option<T::Hash>,
) -> Result<Option<V>, Error> {
) -> Result<Option<V>, BasicError> {
if let Some(data) = self.rpc.storage(&key, hash).await? {
Ok(Some(Decode::decode(&mut &data.0[..])?))
} else {
@@ -167,7 +176,7 @@ impl<'a, T: Config> StorageClient<'a, T> {
&self,
key: StorageKey,
hash: Option<T::Hash>,
) -> Result<Option<StorageData>, Error> {
) -> Result<Option<StorageData>, BasicError> {
self.rpc.storage(&key, hash).await
}
@@ -176,7 +185,7 @@ impl<'a, T: Config> StorageClient<'a, T> {
&self,
store: &F,
hash: Option<T::Hash>,
) -> Result<Option<F::Value>, Error> {
) -> Result<Option<F::Value>, BasicError> {
let prefix = StorageKeyPrefix::new::<F>();
let key = store.key().final_key(prefix);
self.fetch_unhashed::<F::Value>(key, hash).await
@@ -187,7 +196,7 @@ impl<'a, T: Config> StorageClient<'a, T> {
&self,
store: &F,
hash: Option<T::Hash>,
) -> Result<F::Value, Error> {
) -> Result<F::Value, BasicError> {
if let Some(data) = self.fetch(store, hash).await? {
Ok(data)
} else {
@@ -205,7 +214,7 @@ impl<'a, T: Config> StorageClient<'a, T> {
keys: Vec<StorageKey>,
from: T::Hash,
to: Option<T::Hash>,
) -> Result<Vec<StorageChangeSet<T::Hash>>, Error> {
) -> Result<Vec<StorageChangeSet<T::Hash>>, BasicError> {
self.rpc.query_storage(keys, from, to).await
}
@@ -217,7 +226,7 @@ impl<'a, T: Config> StorageClient<'a, T> {
count: u32,
start_key: Option<StorageKey>,
hash: Option<T::Hash>,
) -> Result<Vec<StorageKey>, Error> {
) -> Result<Vec<StorageKey>, BasicError> {
let prefix = StorageKeyPrefix::new::<F>();
let keys = self
.rpc
@@ -230,7 +239,7 @@ impl<'a, T: Config> StorageClient<'a, T> {
pub async fn iter<F: StorageEntry>(
&self,
hash: Option<T::Hash>,
) -> Result<KeyIter<'a, T, F>, Error> {
) -> Result<KeyIter<'a, T, F>, BasicError> {
let hash = if let Some(hash) = hash {
hash
} else {
@@ -262,7 +271,7 @@ pub struct KeyIter<'a, T: Config, F: StorageEntry> {
impl<'a, T: Config, F: StorageEntry> KeyIter<'a, T, F> {
/// Returns the next key value pair from a map.
pub async fn next(&mut self) -> Result<Option<(StorageKey, F::Value)>, Error> {
pub async fn next(&mut self) -> Result<Option<(StorageKey, F::Value)>, BasicError> {
loop {
if let Some((k, v)) = self.buffer.pop() {
return Ok(Some((k, Decode::decode(&mut &v.0[..])?)))
@@ -1,4 +1,4 @@
// Copyright 2019-2021 Parity Technologies (UK) Ltd.
// Copyright 2019-2022 Parity Technologies (UK) Ltd.
// This file is part of subxt.
//
// subxt is free software: you can redistribute it and/or modify
@@ -14,6 +14,17 @@
// You should have received a copy of the GNU General Public License
// along with subxt. If not, see <http://www.gnu.org/licenses/>.
use crate::{
error::BasicError,
events::{
EventsDecoder,
RawEvent,
},
rpc::Rpc,
Config,
Event,
Phase,
};
use jsonrpsee::core::{
client::Subscription,
DeserializeOwned,
@@ -28,18 +39,6 @@ use sp_core::{
use sp_runtime::traits::Header;
use std::collections::VecDeque;
use crate::{
error::Error,
events::{
EventsDecoder,
RawEvent,
},
rpc::Rpc,
Config,
Event,
Phase,
};
/// Event subscription simplifies filtering a storage change set stream for
/// events of interest.
pub struct EventSubscription<'a, T: Config> {
@@ -58,11 +57,13 @@ enum BlockReader<'a, T: Config> {
},
/// Mock event listener for unit tests
#[cfg(test)]
Mock(Box<dyn Iterator<Item = (T::Hash, Result<Vec<(Phase, RawEvent)>, Error>)>>),
Mock(Box<dyn Iterator<Item = (T::Hash, Result<Vec<(Phase, RawEvent)>, BasicError>)>>),
}
impl<'a, T: Config> BlockReader<'a, T> {
async fn next(&mut self) -> Option<(T::Hash, Result<Vec<(Phase, RawEvent)>, Error>)> {
async fn next(
&mut self,
) -> Option<(T::Hash, Result<Vec<(Phase, RawEvent)>, BasicError>)> {
match self {
BlockReader::Decoder {
subscription,
@@ -117,12 +118,12 @@ impl<'a, T: Config> EventSubscription<'a, T> {
}
/// Filters events by type.
pub fn filter_event<E: Event>(&mut self) {
self.event = Some((E::PALLET, E::EVENT));
pub fn filter_event<Ev: Event>(&mut self) {
self.event = Some((Ev::PALLET, Ev::EVENT));
}
/// Gets the next event.
pub async fn next(&mut self) -> Option<Result<RawEvent, Error>> {
pub async fn next(&mut self) -> Option<Result<RawEvent, BasicError>> {
loop {
if let Some(raw_event) = self.events.pop_front() {
return Some(Ok(raw_event))
@@ -259,24 +260,8 @@ where
#[cfg(test)]
mod tests {
use super::*;
use crate::DefaultConfig;
use sp_core::H256;
#[derive(Clone)]
struct MockConfig;
impl Config for MockConfig {
type Index = u32;
type BlockNumber = u32;
type Hash = sp_core::H256;
type Hashing = sp_runtime::traits::BlakeTwo256;
type AccountId = sp_runtime::AccountId32;
type Address = sp_runtime::MultiAddress<Self::AccountId, u32>;
type Header = sp_runtime::generic::Header<
Self::BlockNumber,
sp_runtime::traits::BlakeTwo256,
>;
type Signature = sp_runtime::MultiSignature;
type Extrinsic = sp_runtime::OpaqueExtrinsic;
}
fn named_event(event_name: &str) -> RawEvent {
RawEvent {
@@ -315,7 +300,7 @@ mod tests {
for block_filter in [None, Some(H256::from([1; 32]))] {
for extrinsic_filter in [None, Some(1)] {
for event_filter in [None, Some(("b", "b"))] {
let mut subscription: EventSubscription<MockConfig> =
let mut subscription: EventSubscription<DefaultConfig> =
EventSubscription {
block_reader: BlockReader::Mock(Box::new(
vec![
+73 -50
@@ -1,4 +1,4 @@
// Copyright 2019-2021 Parity Technologies (UK) Ltd.
// Copyright 2019-2022 Parity Technologies (UK) Ltd.
// This file is part of subxt.
//
// subxt is free software: you can redistribute it and/or modify
@@ -16,6 +16,8 @@
use std::task::Poll;
use crate::PhantomDataSendSync;
use codec::Decode;
use sp_core::storage::StorageKey;
use sp_runtime::traits::Hash;
pub use sp_runtime::traits::SignedExtension;
@@ -24,7 +26,9 @@ pub use sp_version::RuntimeVersion;
use crate::{
client::Client,
error::{
BasicError,
Error,
RuntimeError,
TransactionError,
},
rpc::SubstrateTransactionStatus,
@@ -32,6 +36,7 @@ use crate::{
Config,
Phase,
};
use derivative::Derivative;
use futures::{
Stream,
StreamExt,
@@ -43,20 +48,23 @@ use jsonrpsee::core::{
/// This struct represents a subscription to the progress of some transaction, and is
/// returned from [`crate::SubmittableExtrinsic::sign_and_submit_then_watch()`].
#[derive(Debug)]
pub struct TransactionProgress<'client, T: Config> {
#[derive(Derivative)]
#[derivative(Debug(bound = ""))]
pub struct TransactionProgress<'client, T: Config, E: Decode> {
sub: Option<RpcSubscription<SubstrateTransactionStatus<T::Hash, T::Hash>>>,
ext_hash: T::Hash,
client: &'client Client<T>,
_error: PhantomDataSendSync<E>,
}
// The above type is not `Unpin` by default unless the generic param `T` is,
// so we manually make it clear that Unpin is actually fine regardless of `T`
// (we don't care if this moves around in memory while it's "pinned").
impl<'client, T: Config> Unpin for TransactionProgress<'client, T> {}
impl<'client, T: Config, E: Decode> Unpin for TransactionProgress<'client, T, E> {}
impl<'client, T: Config> TransactionProgress<'client, T> {
pub(crate) fn new(
impl<'client, T: Config, E: Decode> TransactionProgress<'client, T, E> {
/// Instantiate a new [`TransactionProgress`] from a custom subscription.
pub fn new(
sub: RpcSubscription<SubstrateTransactionStatus<T::Hash, T::Hash>>,
client: &'client Client<T>,
ext_hash: T::Hash,
@@ -65,6 +73,7 @@ impl<'client, T: Config> TransactionProgress<'client, T> {
sub: Some(sub),
client,
ext_hash,
_error: PhantomDataSendSync::new(),
}
}
@@ -73,7 +82,7 @@ impl<'client, T: Config> TransactionProgress<'client, T> {
/// avoid importing that trait if you don't otherwise need it.
pub async fn next_item(
&mut self,
) -> Option<Result<TransactionStatus<'client, T>, Error>> {
) -> Option<Result<TransactionStatus<'client, T, E>, BasicError>> {
self.next().await
}
@@ -90,7 +99,7 @@ impl<'client, T: Config> TransactionProgress<'client, T> {
/// level [`TransactionProgress::next_item()`] API if you'd like to handle these statuses yourself.
pub async fn wait_for_in_block(
mut self,
) -> Result<TransactionInBlock<'client, T>, Error> {
) -> Result<TransactionInBlock<'client, T, E>, BasicError> {
while let Some(status) = self.next_item().await {
match status? {
// Finalized or otherwise in a block! Return.
@@ -120,7 +129,7 @@ impl<'client, T: Config> TransactionProgress<'client, T> {
/// level [`TransactionProgress::next_item()`] API if you'd like to handle these statuses yourself.
pub async fn wait_for_finalized(
mut self,
) -> Result<TransactionInBlock<'client, T>, Error> {
) -> Result<TransactionInBlock<'client, T, E>, BasicError> {
while let Some(status) = self.next_item().await {
match status? {
// Finalized! Return.
@@ -147,14 +156,16 @@ impl<'client, T: Config> TransactionProgress<'client, T> {
/// may well indicate with some probability that the transaction will not make it into a block,
/// there is no guarantee that this is true. Thus, we prefer to "play it safe" here. Use the lower
/// level [`TransactionProgress::next_item()`] API if you'd like to handle these statuses yourself.
pub async fn wait_for_finalized_success(self) -> Result<TransactionEvents<T>, Error> {
pub async fn wait_for_finalized_success(
self,
) -> Result<TransactionEvents<T>, Error<E>> {
let evs = self.wait_for_finalized().await?.wait_for_success().await?;
Ok(evs)
}
}
impl<'client, T: Config> Stream for TransactionProgress<'client, T> {
type Item = Result<TransactionStatus<'client, T>, Error>;
impl<'client, T: Config, E: Decode> Stream for TransactionProgress<'client, T, E> {
type Item = Result<TransactionStatus<'client, T, E>, BasicError>;
fn poll_next(
mut self: std::pin::Pin<&mut Self>,
@@ -175,11 +186,11 @@ impl<'client, T: Config> Stream for TransactionProgress<'client, T> {
TransactionStatus::Broadcast(peers)
}
SubstrateTransactionStatus::InBlock(hash) => {
TransactionStatus::InBlock(TransactionInBlock {
block_hash: hash,
ext_hash: self.ext_hash,
client: self.client,
})
TransactionStatus::InBlock(TransactionInBlock::new(
hash,
self.ext_hash,
self.client,
))
}
SubstrateTransactionStatus::Retracted(hash) => {
TransactionStatus::Retracted(hash)
@@ -204,11 +215,11 @@ impl<'client, T: Config> Stream for TransactionProgress<'client, T> {
}
SubstrateTransactionStatus::Finalized(hash) => {
self.sub = None;
TransactionStatus::Finalized(TransactionInBlock {
block_hash: hash,
ext_hash: self.ext_hash,
client: self.client,
})
TransactionStatus::Finalized(TransactionInBlock::new(
hash,
self.ext_hash,
self.client,
))
}
}
})
@@ -261,8 +272,9 @@ impl<'client, T: Config> Stream for TransactionProgress<'client, T> {
/// finalized. The `FinalityTimeout` event will be emitted when the block did not reach finality
/// within 512 blocks. This either indicates that finality is not available for your chain,
/// or that finality gadget is lagging behind.
#[derive(Debug)]
pub enum TransactionStatus<'client, T: Config> {
#[derive(Derivative)]
#[derivative(Debug(bound = ""))]
pub enum TransactionStatus<'client, T: Config, E: Decode> {
/// The transaction is part of the "future" queue.
Future,
/// The transaction is part of the "ready" queue.
@@ -270,7 +282,7 @@ pub enum TransactionStatus<'client, T: Config> {
/// The transaction has been broadcast to the given peers.
Broadcast(Vec<String>),
/// The transaction has been included in a block with given hash.
InBlock(TransactionInBlock<'client, T>),
InBlock(TransactionInBlock<'client, T, E>),
/// The block this transaction was included in has been retracted,
/// probably because it did not make it onto the blocks which were
/// finalized.
@@ -279,7 +291,7 @@ pub enum TransactionStatus<'client, T: Config> {
/// blocks, and so the subscription has ended.
FinalityTimeout(T::Hash),
/// The transaction has been finalized by a finality-gadget, e.g GRANDPA.
Finalized(TransactionInBlock<'client, T>),
Finalized(TransactionInBlock<'client, T, E>),
/// The transaction has been replaced in the pool by another transaction
/// that provides the same tags. (e.g. same (sender, nonce)).
Usurped(T::Hash),
@@ -289,10 +301,10 @@ pub enum TransactionStatus<'client, T: Config> {
Invalid,
}
impl<'client, T: Config> TransactionStatus<'client, T> {
impl<'client, T: Config, E: Decode> TransactionStatus<'client, T, E> {
/// A convenience method to return the `Finalized` details. Returns
/// [`None`] if the enum variant is not [`TransactionStatus::Finalized`].
pub fn as_finalized(&self) -> Option<&TransactionInBlock<'client, T>> {
pub fn as_finalized(&self) -> Option<&TransactionInBlock<'client, T, E>> {
match self {
Self::Finalized(val) => Some(val),
_ => None,
@@ -301,7 +313,7 @@ impl<'client, T: Config> TransactionStatus<'client, T> {
/// A convenience method to return the `InBlock` details. Returns
/// [`None`] if the enum variant is not [`TransactionStatus::InBlock`].
pub fn as_in_block(&self) -> Option<&TransactionInBlock<'client, T>> {
pub fn as_in_block(&self) -> Option<&TransactionInBlock<'client, T, E>> {
match self {
Self::InBlock(val) => Some(val),
_ => None,
@@ -310,14 +322,29 @@ impl<'client, T: Config> TransactionStatus<'client, T> {
}
/// This struct represents a transaction that has made it into a block.
#[derive(Debug)]
pub struct TransactionInBlock<'client, T: Config> {
#[derive(Derivative)]
#[derivative(Debug(bound = ""))]
pub struct TransactionInBlock<'client, T: Config, E: Decode> {
block_hash: T::Hash,
ext_hash: T::Hash,
client: &'client Client<T>,
_error: PhantomDataSendSync<E>,
}
impl<'client, T: Config> TransactionInBlock<'client, T> {
impl<'client, T: Config, E: Decode> TransactionInBlock<'client, T, E> {
pub(crate) fn new(
block_hash: T::Hash,
ext_hash: T::Hash,
client: &'client Client<T>,
) -> Self {
Self {
block_hash,
ext_hash,
client,
_error: PhantomDataSendSync::new(),
}
}
/// Return the hash of the block that the transaction has made it into.
pub fn block_hash(&self) -> T::Hash {
self.block_hash
@@ -341,19 +368,14 @@ impl<'client, T: Config> TransactionInBlock<'client, T> {
///
/// **Note:** This has to download block details from the node and decode events
/// from them.
pub async fn wait_for_success(&self) -> Result<TransactionEvents<T>, Error> {
pub async fn wait_for_success(&self) -> Result<TransactionEvents<T>, Error<E>> {
let events = self.fetch_events().await?;
// Try to find any errors; return the first one we encounter.
for ev in events.as_slice() {
if &ev.pallet == "System" && &ev.variant == "ExtrinsicFailed" {
use codec::Decode;
let dispatch_error = sp_runtime::DispatchError::decode(&mut &*ev.data)?;
let runtime_error = crate::RuntimeError::from_dispatch(
self.client.metadata(),
dispatch_error,
)?;
return Err(runtime_error.into())
let dispatch_error = E::decode(&mut &*ev.data)?;
return Err(Error::Runtime(RuntimeError(dispatch_error)))
}
}
@@ -366,13 +388,13 @@ impl<'client, T: Config> TransactionInBlock<'client, T> {
///
/// **Note:** This has to download block details from the node and decode events
/// from them.
pub async fn fetch_events(&self) -> Result<TransactionEvents<T>, Error> {
pub async fn fetch_events(&self) -> Result<TransactionEvents<T>, BasicError> {
let block = self
.client
.rpc()
.block(Some(self.block_hash))
.await?
.ok_or(Error::Transaction(TransactionError::BlockHashNotFound))?;
.ok_or(BasicError::Transaction(TransactionError::BlockHashNotFound))?;
let extrinsic_idx = block.block.extrinsics
.iter()
@@ -382,7 +404,7 @@ impl<'client, T: Config> TransactionInBlock<'client, T> {
})
// If we successfully obtain the block hash we think contains our
// extrinsic, the extrinsic should be in there somewhere.
.ok_or(Error::Transaction(TransactionError::BlockHashNotFound))?;
.ok_or(BasicError::Transaction(TransactionError::BlockHashNotFound))?;
let raw_events = self
.client
@@ -416,7 +438,8 @@ impl<'client, T: Config> TransactionInBlock<'client, T> {
/// This represents the events related to our transaction.
/// We can iterate over the events, or look for a specific one.
#[derive(Debug)]
#[derive(Derivative)]
#[derivative(Debug(bound = ""))]
pub struct TransactionEvents<T: Config> {
block_hash: T::Hash,
ext_hash: T::Hash,
@@ -441,10 +464,10 @@ impl<T: Config> TransactionEvents<T> {
/// Find all of the events matching the event type provided as a generic parameter. This
/// will return an error if a matching event is found but cannot be properly decoded.
pub fn find_events<E: crate::Event>(&self) -> Result<Vec<E>, Error> {
pub fn find_events<Ev: crate::Event>(&self) -> Result<Vec<Ev>, BasicError> {
self.events
.iter()
.filter_map(|e| e.as_event::<E>().map_err(Into::into).transpose())
.filter_map(|e| e.as_event::<Ev>().map_err(Into::into).transpose())
.collect()
}
@@ -453,18 +476,18 @@ impl<T: Config> TransactionEvents<T> {
///
/// Use [`TransactionEvents::find_events`], or iterate over [`TransactionEvents`] yourself
/// if you'd like to handle multiple events of the same type.
pub fn find_first_event<E: crate::Event>(&self) -> Result<Option<E>, Error> {
pub fn find_first_event<Ev: crate::Event>(&self) -> Result<Option<Ev>, BasicError> {
self.events
.iter()
.filter_map(|e| e.as_event::<E>().transpose())
.filter_map(|e| e.as_event::<Ev>().transpose())
.next()
.transpose()
.map_err(Into::into)
}
/// Find an event. Returns true if it was found.
pub fn has_event<E: crate::Event>(&self) -> Result<bool, Error> {
Ok(self.find_first_event::<E>()?.is_some())
pub fn has_event<Ev: crate::Event>(&self) -> Result<bool, BasicError> {
Ok(self.find_first_event::<Ev>()?.is_some())
}
}
@@ -1,4 +1,4 @@
// Copyright 2019-2021 Parity Technologies (UK) Ltd.
// Copyright 2019-2022 Parity Technologies (UK) Ltd.
// This file is part of subxt.
//
// subxt is free software: you can redistribute it and/or modify
@@ -122,3 +122,12 @@ async fn test_iter() {
}
assert_eq!(i, 13);
}
#[async_std::test]
async fn fetch_system_info() {
let node_process = test_node_process().await;
let client = node_process.client();
assert_eq!(client.rpc().system_chain().await.unwrap(), "Development");
assert_eq!(client.rpc().system_name().await.unwrap(), "Substrate Node");
assert!(!client.rpc().system_version().await.unwrap().is_empty());
}
@@ -1,4 +1,4 @@
// Copyright 2019-2021 Parity Technologies (UK) Ltd.
// Copyright 2019-2022 Parity Technologies (UK) Ltd.
// This file is part of subxt.
//
// subxt is free software: you can redistribute it and/or modify
File diff suppressed because one or more lines are too long
@@ -1,4 +1,4 @@
// Copyright 2019-2021 Parity Technologies (UK) Ltd.
// Copyright 2019-2022 Parity Technologies (UK) Ltd.
// This file is part of subxt.
//
// subxt is free software: you can redistribute it and/or modify
@@ -19,8 +19,9 @@ use crate::{
balances,
runtime_types,
system,
DefaultConfig,
DispatchError,
},
pair_signer,
test_context,
};
use codec::Decode;
@@ -30,20 +31,16 @@ use sp_core::{
};
use sp_keyring::AccountKeyring;
use subxt::{
extrinsic::{
PairSigner,
Signer,
},
DefaultConfig,
Error,
EventSubscription,
PalletError,
RuntimeError,
Signer,
};
#[async_std::test]
async fn tx_basic_transfer() -> Result<(), subxt::Error> {
let alice = PairSigner::<DefaultConfig, _>::new(AccountKeyring::Alice.pair());
let bob = PairSigner::<DefaultConfig, _>::new(AccountKeyring::Bob.pair());
async fn tx_basic_transfer() -> Result<(), subxt::Error<DispatchError>> {
let alice = pair_signer(AccountKeyring::Alice.pair());
let bob = pair_signer(AccountKeyring::Bob.pair());
let bob_address = bob.account_id().clone().into();
let cxt = test_context().await;
let api = &cxt.api;
@@ -113,8 +110,8 @@ async fn storage_total_issuance() {
}
#[async_std::test]
async fn storage_balance_lock() -> Result<(), subxt::Error> {
let bob = PairSigner::<DefaultConfig, _>::new(AccountKeyring::Bob.pair());
async fn storage_balance_lock() -> Result<(), subxt::Error<DispatchError>> {
let bob = pair_signer(AccountKeyring::Bob.pair());
let charlie = AccountKeyring::Charlie.to_account_id();
let cxt = test_context().await;
@@ -155,9 +152,9 @@ async fn storage_balance_lock() -> Result<(), subxt::Error> {
#[async_std::test]
async fn transfer_error() {
env_logger::try_init().ok();
let alice = PairSigner::<DefaultConfig, _>::new(AccountKeyring::Alice.pair());
let alice = pair_signer(AccountKeyring::Alice.pair());
let alice_addr = alice.account_id().clone().into();
let hans = PairSigner::<DefaultConfig, _>::new(Pair::generate().0);
let hans = pair_signer(Pair::generate().0);
let hans_address = hans.account_id().clone().into();
let cxt = test_context().await;
@@ -183,13 +180,10 @@ async fn transfer_error() {
         .wait_for_finalized_success()
         .await;
-    if let Err(Error::Runtime(RuntimeError::Module(error))) = res {
-        let error2 = PalletError {
-            pallet: "Balances".into(),
-            error: "InsufficientBalance".into(),
-            description: vec!["Balance too low to send value".to_string()],
-        };
-        assert_eq!(error, error2);
+    if let Err(Error::Runtime(err)) = res {
+        let details = err.inner().details().unwrap();
+        assert_eq!(details.pallet, "Balances");
+        assert_eq!(details.error, "InsufficientBalance");
     } else {
         panic!("expected a runtime module error");
     }
@@ -198,7 +192,7 @@ async fn transfer_error() {
 #[async_std::test]
 async fn transfer_subscription() {
     env_logger::try_init().ok();
-    let alice = PairSigner::<DefaultConfig, _>::new(AccountKeyring::Alice.pair());
+    let alice = pair_signer(AccountKeyring::Alice.pair());
     let bob = AccountKeyring::Bob.to_account_id();
     let bob_addr = bob.clone().into();
     let cxt = test_context().await;
@@ -230,7 +224,7 @@ async fn transfer_subscription() {
 #[async_std::test]
 async fn transfer_implicit_subscription() {
     env_logger::try_init().ok();
-    let alice = PairSigner::<DefaultConfig, _>::new(AccountKeyring::Alice.pair());
+    let alice = pair_signer(AccountKeyring::Alice.pair());
     let bob = AccountKeyring::Bob.to_account_id();
     let bob_addr = bob.clone().into();
     let cxt = test_context().await;
@@ -267,4 +261,12 @@ async fn constant_existential_deposit() {
     let constant_metadata = balances_metadata.constant("ExistentialDeposit").unwrap();
     let existential_deposit = u128::decode(&mut &constant_metadata.value[..]).unwrap();
     assert_eq!(existential_deposit, 100_000_000_000_000);
+    assert_eq!(
+        existential_deposit,
+        cxt.api
+            .constants()
+            .balances()
+            .existential_deposit()
+            .unwrap()
+    );
 }
@@ -1,4 +1,4 @@
-// Copyright 2019-2021 Parity Technologies (UK) Ltd.
+// Copyright 2019-2022 Parity Technologies (UK) Ltd.
 // This file is part of subxt.
 //
 // subxt is free software: you can redistribute it and/or modify
@@ -24,9 +24,11 @@ use crate::{
             storage,
         },
         system,
-        DefaultConfig,
+        DefaultAccountData,
+        DispatchError,
     },
     test_context,
+    NodeRuntimeSignedExtra,
     TestContext,
 };
 use sp_core::sr25519::Pair;
@@ -34,6 +36,7 @@ use sp_runtime::MultiAddress;
 use subxt::{
     Client,
     Config,
+    DefaultConfig,
     Error,
     PairSigner,
     TransactionProgress,
@@ -41,7 +44,7 @@ use subxt::{
 struct ContractsTestContext {
     cxt: TestContext,
-    signer: PairSigner<DefaultConfig, Pair>,
+    signer: PairSigner<DefaultConfig, NodeRuntimeSignedExtra, Pair>,
 }
 type Hash = <DefaultConfig as Config>::Hash;
@@ -59,11 +62,15 @@ impl ContractsTestContext {
         self.cxt.client()
     }
-    fn contracts_tx(&self) -> TransactionApi<DefaultConfig> {
+    fn contracts_tx(
+        &self,
+    ) -> TransactionApi<DefaultConfig, NodeRuntimeSignedExtra, DefaultAccountData> {
         self.cxt.api.tx().contracts()
     }
-    async fn instantiate_with_code(&self) -> Result<(Hash, AccountId), Error> {
+    async fn instantiate_with_code(
+        &self,
+    ) -> Result<(Hash, AccountId), Error<DispatchError>> {
         log::info!("instantiate_with_code:");
         const CONTRACT: &str = r#"
             (module
@@ -114,7 +121,7 @@ impl ContractsTestContext {
         code_hash: Hash,
         data: Vec<u8>,
         salt: Vec<u8>,
-    ) -> Result<AccountId, Error> {
+    ) -> Result<AccountId, Error<DispatchError>> {
         // call instantiate extrinsic
         let result = self
             .contracts_tx()
@@ -143,7 +150,8 @@ impl ContractsTestContext {
         &self,
         contract: AccountId,
         input_data: Vec<u8>,
-    ) -> Result<TransactionProgress<'_, DefaultConfig>, Error> {
+    ) -> Result<TransactionProgress<'_, DefaultConfig, DispatchError>, Error<DispatchError>>
+    {
         log::info!("call: {:?}", contract);
         let result = self
             .contracts_tx()
@@ -1,4 +1,4 @@
-// Copyright 2019-2021 Parity Technologies (UK) Ltd.
+// Copyright 2019-2022 Parity Technologies (UK) Ltd.
 // This file is part of subxt.
 //
 // subxt is free software: you can redistribute it and/or modify
@@ -1,4 +1,4 @@
-// Copyright 2019-2021 Parity Technologies (UK) Ltd.
+// Copyright 2019-2022 Parity Technologies (UK) Ltd.
 // This file is part of subxt.
 //
 // subxt is free software: you can redistribute it and/or modify
@@ -21,8 +21,9 @@ use crate::{
             ValidatorPrefs,
         },
         staking,
-        DefaultConfig,
+        DispatchError,
     },
+    pair_signer,
     test_context,
 };
use assert_matches::assert_matches;
@@ -32,12 +33,8 @@ use sp_core::{
 };
 use sp_keyring::AccountKeyring;
 use subxt::{
-    extrinsic::{
-        PairSigner,
-        Signer,
-    },
     Error,
     RuntimeError,
+    Signer,
 };
/// Helper function to generate a crypto pair from seed
@@ -55,7 +52,7 @@ fn default_validator_prefs() -> ValidatorPrefs {
 #[async_std::test]
 async fn validate_with_controller_account() {
-    let alice = PairSigner::<DefaultConfig, _>::new(AccountKeyring::Alice.pair());
+    let alice = pair_signer(AccountKeyring::Alice.pair());
     let cxt = test_context().await;
     cxt.api
         .tx()
@@ -70,8 +67,8 @@ async fn validate_with_controller_account() {
 }
 #[async_std::test]
-async fn validate_not_possible_for_stash_account() -> Result<(), Error> {
-    let alice_stash = PairSigner::<DefaultConfig, _>::new(get_from_seed("Alice//stash"));
+async fn validate_not_possible_for_stash_account() -> Result<(), Error<DispatchError>> {
+    let alice_stash = pair_signer(get_from_seed("Alice//stash"));
     let cxt = test_context().await;
     let announce_validator = cxt
         .api
@@ -82,17 +79,18 @@ async fn validate_not_possible_for_stash_account() -> Result<(), Error> {
         .await?
         .wait_for_finalized_success()
         .await;
-    assert_matches!(announce_validator, Err(Error::Runtime(RuntimeError::Module(module_err))) => {
-        assert_eq!(module_err.pallet, "Staking");
-        assert_eq!(module_err.error, "NotController");
+    assert_matches!(announce_validator, Err(Error::Runtime(err)) => {
+        let details = err.inner().details().unwrap();
+        assert_eq!(details.pallet, "Staking");
+        assert_eq!(details.error, "NotController");
     });
     Ok(())
 }
 #[async_std::test]
 async fn nominate_with_controller_account() {
-    let alice = PairSigner::<DefaultConfig, _>::new(AccountKeyring::Alice.pair());
-    let bob = PairSigner::<DefaultConfig, _>::new(AccountKeyring::Bob.pair());
+    let alice = pair_signer(AccountKeyring::Alice.pair());
+    let bob = pair_signer(AccountKeyring::Bob.pair());
     let cxt = test_context().await;
cxt.api
@@ -108,10 +106,9 @@ async fn nominate_with_controller_account() {
 }
 #[async_std::test]
-async fn nominate_not_possible_for_stash_account() -> Result<(), Error> {
-    let alice_stash =
-        PairSigner::<DefaultConfig, sr25519::Pair>::new(get_from_seed("Alice//stash"));
-    let bob = PairSigner::<DefaultConfig, _>::new(AccountKeyring::Bob.pair());
+async fn nominate_not_possible_for_stash_account() -> Result<(), Error<DispatchError>> {
+    let alice_stash = pair_signer(get_from_seed("Alice//stash"));
+    let bob = pair_signer(AccountKeyring::Bob.pair());
     let cxt = test_context().await;
     let nomination = cxt
@@ -124,20 +121,19 @@ async fn nominate_not_possible_for_stash_account() -> Result<(), Error> {
         .wait_for_finalized_success()
         .await;
-    assert_matches!(nomination, Err(Error::Runtime(RuntimeError::Module(module_err))) => {
-        assert_eq!(module_err.pallet, "Staking");
-        assert_eq!(module_err.error, "NotController");
+    assert_matches!(nomination, Err(Error::Runtime(err)) => {
+        let details = err.inner().details().unwrap();
+        assert_eq!(details.pallet, "Staking");
+        assert_eq!(details.error, "NotController");
     });
     Ok(())
 }
 #[async_std::test]
-async fn chill_works_for_controller_only() -> Result<(), Error> {
-    let alice_stash =
-        PairSigner::<DefaultConfig, sr25519::Pair>::new(get_from_seed("Alice//stash"));
-    let bob_stash =
-        PairSigner::<DefaultConfig, sr25519::Pair>::new(get_from_seed("Bob//stash"));
-    let alice = PairSigner::<DefaultConfig, _>::new(AccountKeyring::Alice.pair());
+async fn chill_works_for_controller_only() -> Result<(), Error<DispatchError>> {
+    let alice_stash = pair_signer(get_from_seed("Alice//stash"));
+    let bob_stash = pair_signer(get_from_seed("Bob//stash"));
+    let alice = pair_signer(AccountKeyring::Alice.pair());
     let cxt = test_context().await;
     // this will fail the second time, which is why this is one test, not two
@@ -169,9 +165,10 @@ async fn chill_works_for_controller_only() -> Result<(), Error> {
         .wait_for_finalized_success()
         .await;
-    assert_matches!(chill, Err(Error::Runtime(RuntimeError::Module(module_err))) => {
-        assert_eq!(module_err.pallet, "Staking");
-        assert_eq!(module_err.error, "NotController");
+    assert_matches!(chill, Err(Error::Runtime(err)) => {
+        let details = err.inner().details().unwrap();
+        assert_eq!(details.pallet, "Staking");
+        assert_eq!(details.error, "NotController");
     });
     let is_chilled = cxt
@@ -190,8 +187,8 @@ async fn chill_works_for_controller_only() -> Result<(), Error> {
 }
 #[async_std::test]
-async fn tx_bond() -> Result<(), Error> {
-    let alice = PairSigner::<DefaultConfig, _>::new(AccountKeyring::Alice.pair());
+async fn tx_bond() -> Result<(), Error<DispatchError>> {
+    let alice = pair_signer(AccountKeyring::Alice.pair());
     let cxt = test_context().await;
     let bond = cxt
@@ -224,16 +221,16 @@ async fn tx_bond() -> Result<(), Error> {
         .wait_for_finalized_success()
         .await;
-    assert_matches!(bond_again, Err(Error::Runtime(RuntimeError::Module(module_err))) => {
-        assert_eq!(module_err.pallet, "Staking");
-        assert_eq!(module_err.error, "AlreadyBonded");
+    assert_matches!(bond_again, Err(Error::Runtime(err)) => {
+        let details = err.inner().details().unwrap();
+        assert_eq!(details.pallet, "Staking");
+        assert_eq!(details.error, "AlreadyBonded");
     });
     Ok(())
 }
 #[async_std::test]
-async fn storage_history_depth() -> Result<(), Error> {
+async fn storage_history_depth() -> Result<(), Error<DispatchError>> {
     let cxt = test_context().await;
     let history_depth = cxt.api.storage().staking().history_depth(None).await?;
     assert_eq!(history_depth, 84);
@@ -241,7 +238,7 @@ async fn storage_history_depth() -> Result<(), Error> {
 }
 #[async_std::test]
-async fn storage_current_era() -> Result<(), Error> {
+async fn storage_current_era() -> Result<(), Error<DispatchError>> {
     let cxt = test_context().await;
     let _current_era = cxt
         .api
@@ -254,7 +251,7 @@ async fn storage_current_era() -> Result<(), Error> {
 }
 #[async_std::test]
-async fn storage_era_reward_points() -> Result<(), Error> {
+async fn storage_era_reward_points() -> Result<(), Error<DispatchError>> {
     let cxt = test_context().await;
     let current_era_result = cxt
         .api
@@ -1,4 +1,4 @@
-// Copyright 2019-2021 Parity Technologies (UK) Ltd.
+// Copyright 2019-2022 Parity Technologies (UK) Ltd.
 // This file is part of subxt.
 //
 // subxt is free software: you can redistribute it and/or modify
@@ -18,19 +18,19 @@ use crate::{
     node_runtime::{
         runtime_types,
         sudo,
-        DefaultConfig,
+        DispatchError,
     },
+    pair_signer,
     test_context,
 };
 use sp_keyring::AccountKeyring;
-use subxt::extrinsic::PairSigner;
 type Call = runtime_types::node_runtime::Call;
 type BalancesCall = runtime_types::pallet_balances::pallet::Call;
 #[async_std::test]
-async fn test_sudo() -> Result<(), subxt::Error> {
-    let alice = PairSigner::<DefaultConfig, _>::new(AccountKeyring::Alice.pair());
+async fn test_sudo() -> Result<(), subxt::Error<DispatchError>> {
+    let alice = pair_signer(AccountKeyring::Alice.pair());
     let bob = AccountKeyring::Bob.to_account_id().into();
     let cxt = test_context().await;
@@ -55,8 +55,8 @@ async fn test_sudo() -> Result<(), subxt::Error> {
 }
 #[async_std::test]
-async fn test_sudo_unchecked_weight() -> Result<(), subxt::Error> {
-    let alice = PairSigner::<DefaultConfig, _>::new(AccountKeyring::Alice.pair());
+async fn test_sudo_unchecked_weight() -> Result<(), subxt::Error<DispatchError>> {
+    let alice = pair_signer(AccountKeyring::Alice.pair());
     let bob = AccountKeyring::Bob.to_account_id().into();
     let cxt = test_context().await;
@@ -1,4 +1,4 @@
-// Copyright 2019-2021 Parity Technologies (UK) Ltd.
+// Copyright 2019-2022 Parity Technologies (UK) Ltd.
 // This file is part of subxt.
 //
 // subxt is free software: you can redistribute it and/or modify
@@ -17,20 +17,18 @@
 use crate::{
     node_runtime::{
         system,
-        DefaultConfig,
+        DispatchError,
     },
+    pair_signer,
     test_context,
 };
 use assert_matches::assert_matches;
 use sp_keyring::AccountKeyring;
-use subxt::extrinsic::{
-    PairSigner,
-    Signer,
-};
+use subxt::Signer;
 #[async_std::test]
-async fn storage_account() -> Result<(), subxt::Error> {
-    let alice = PairSigner::<DefaultConfig, _>::new(AccountKeyring::Alice.pair());
+async fn storage_account() -> Result<(), subxt::Error<DispatchError>> {
+    let alice = pair_signer(AccountKeyring::Alice.pair());
     let cxt = test_context().await;
     let account_info = cxt
@@ -45,8 +43,8 @@ async fn storage_account() -> Result<(), subxt::Error> {
 }
 #[async_std::test]
-async fn tx_remark_with_event() -> Result<(), subxt::Error> {
-    let alice = PairSigner::<DefaultConfig, _>::new(AccountKeyring::Alice.pair());
+async fn tx_remark_with_event() -> Result<(), subxt::Error<DispatchError>> {
+    let alice = pair_signer(AccountKeyring::Alice.pair());
     let cxt = test_context().await;
     let found_event = cxt
@@ -1,4 +1,4 @@
-// Copyright 2019-2021 Parity Technologies (UK) Ltd.
+// Copyright 2019-2022 Parity Technologies (UK) Ltd.
 // This file is part of subxt.
 //
 // subxt is free software: you can redistribute it and/or modify
@@ -1,4 +1,4 @@
-// Copyright 2019-2021 Parity Technologies (UK) Ltd.
+// Copyright 2019-2022 Parity Technologies (UK) Ltd.
 // This file is part of subxt.
 //
 // subxt is free software: you can redistribute it and/or modify
@@ -1,4 +1,4 @@
-// Copyright 2019-2021 Parity Technologies (UK) Ltd.
+// Copyright 2019-2022 Parity Technologies (UK) Ltd.
 // This file is part of subxt.
 //
 // subxt is free software: you can redistribute it and/or modify
@@ -15,19 +15,26 @@
 // along with subxt. If not, see <http://www.gnu.org/licenses/>.
 pub use crate::{
-    node_runtime::{
-        self,
-        DefaultConfig,
-    },
+    node_runtime,
     TestNodeProcess,
 };
+use sp_core::sr25519::Pair;
 use sp_keyring::AccountKeyring;
-use subxt::Client;
+use subxt::{
+    extrinsic::ChargeAssetTxPayment,
+    Client,
+    DefaultConfig,
+    DefaultExtraWithTxPayment,
+    PairSigner,
+};
 /// substrate node should be installed on the $PATH
 const SUBSTRATE_NODE_PATH: &str = "substrate";
+pub type NodeRuntimeSignedExtra =
+    DefaultExtraWithTxPayment<DefaultConfig, ChargeAssetTxPayment<DefaultConfig>>;
 pub async fn test_node_process_with(
     key: AccountKeyring,
 ) -> TestNodeProcess<DefaultConfig> {
@@ -53,7 +60,7 @@ pub async fn test_node_process() -> TestNodeProcess<DefaultConfig> {
 pub struct TestContext {
     pub node_proc: TestNodeProcess<DefaultConfig>,
-    pub api: node_runtime::RuntimeApi<DefaultConfig>,
+    pub api: node_runtime::RuntimeApi<DefaultConfig, NodeRuntimeSignedExtra>,
 }
 impl TestContext {
@@ -68,3 +75,9 @@ pub async fn test_context() -> TestContext {
     let api = node_proc.client().clone().to_runtime_api();
     TestContext { node_proc, api }
 }
+pub fn pair_signer(
+    pair: Pair,
+) -> PairSigner<DefaultConfig, NodeRuntimeSignedExtra, Pair> {
+    PairSigner::new(pair)
+}
@@ -1,4 +1,4 @@
-// Copyright 2019-2021 Parity Technologies (UK) Ltd.
+// Copyright 2019-2022 Parity Technologies (UK) Ltd.
 // This file is part of subxt.
 //
 // subxt is free software: you can redistribute it and/or modify
@@ -1,4 +1,4 @@
-// Copyright 2019-2021 Parity Technologies (UK) Ltd.
+// Copyright 2019-2022 Parity Technologies (UK) Ltd.
 // This file is part of subxt.
 //
 // subxt is free software: you can redistribute it and/or modify
@@ -1,16 +1,16 @@
 [package]
 name = "test-runtime"
-version = "0.1.0"
+version = "0.16.0"
 edition = "2021"
 [dependencies]
-subxt = { path = ".." }
-sp-runtime = { package = "sp-runtime", git = "https://github.com/paritytech/substrate/", branch = "master" }
+subxt = { path = "../subxt" }
+sp-runtime = "4.0.0"
 codec = { package = "parity-scale-codec", version = "2", default-features = false, features = ["derive", "full", "bit-vec"] }
 [build-dependencies]
-subxt = { path = ".." }
-sp-core = { package = "sp-core", git = "https://github.com/paritytech/substrate/", branch = "master" }
-jsonrpsee = { version = "0.7.0", features = ["http-client"] }
+subxt = { path = "../subxt", version = "0.16.0" }
+sp-core = "4.0.0"
 async-std = { version = "1.9.0", features = ["attributes", "tokio1"] }
 which = "4.2.2"
+jsonrpsee = { version = "0.8", features = ["http-client"] }
@@ -1,4 +1,4 @@
-// Copyright 2019-2021 Parity Technologies (UK) Ltd.
+// Copyright 2019-2022 Parity Technologies (UK) Ltd.
 // This file is part of subxt.
 //
 // subxt is free software: you can redistribute it and/or modify
@@ -101,7 +101,7 @@ async fn run() {
         r#"
             #[subxt::subxt(
                 runtime_metadata_path = "{}",
-                generated_type_derives = "Debug, Eq, PartialEq"
+                generated_type_derives = "Eq, PartialEq"
             )]
             pub mod node_runtime {{
                 #[subxt(substitute_type = "sp_arithmetic::per_things::Perbill")]
@@ -1,4 +1,4 @@
-// Copyright 2019-2021 Parity Technologies (UK) Ltd.
+// Copyright 2019-2022 Parity Technologies (UK) Ltd.
 // This file is part of subxt.
 //
 // subxt is free software: you can redistribute it and/or modify