Merge remote-tracking branch 'origin/master' into na-jsonrpsee-macros

Niklas
2022-02-10 17:52:35 +01:00
20 changed files with 424 additions and 95 deletions
+16
@@ -6,6 +6,22 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
## [Unreleased]
## [0.17.0] - 2022-02-04
### Added
- introduce jsonrpsee client abstraction + kill HTTP support. ([#341](https://github.com/paritytech/subxt/pull/341))
- Get event context on EventSubscription ([#423](https://github.com/paritytech/subxt/pull/423))
### Changed
- Add more tests for events.rs/decode_and_consume_type ([#430](https://github.com/paritytech/subxt/pull/430))
- Update substrate dependencies ([#429](https://github.com/paritytech/subxt/pull/429))
- export RuntimeError struct ([#427](https://github.com/paritytech/subxt/pull/427))
- remove unused PalletError struct ([#425](https://github.com/paritytech/subxt/pull/425))
- Move Subxt crate into a subfolder ([#424](https://github.com/paritytech/subxt/pull/424))
- Add release checklist ([#418](https://github.com/paritytech/subxt/pull/418))
## [0.16.0] - 2022-02-01
*Note*: This is a significant release which introduces support for V14 metadata and macro-based codegen, as well as making many breaking changes to the API.
+31 -21
@@ -26,12 +26,12 @@ We also assume that ongoing work done is being merged directly to the `master` b
If there are minor issues with the documentation, they can be fixed in the release branch.
4. Bump the crate version in `Cargo.toml` to whatever was decided in step 2 for `subxt-codegen`, `subxt-macro`, `subxt` and `subxt-cli`.
4. Bump the crate version in `Cargo.toml` to whatever was decided in step 2 for `subxt-cli`, `subxt-codegen`, `subxt-examples`, `subxt-macro`, `subxt`, `test-runtime`.
5. Update `CHANGELOG.md` to reflect the difference between this release and the last. If you're unsure of
what to add, check with the Tools team. See the `CHANGELOG.md` file for details of the format it follows.
Any [closed PRs](https://github.com/paritytech/subxt/pulls?q=is%3Apr+is%3Aclosed) between the last release and
Any [closed PRs](https://github.com/paritytech/subxt/pulls?q=is%3Apr+sort%3Aupdated-desc+is%3Aclosed) between the last release and
this release branch should be noted.
6. Commit any of the above changes to the release branch and open a PR in GitHub with a base of `master`.
@@ -40,28 +40,38 @@ We also assume that ongoing work done is being merged directly to the `master` b
8. Now, we're ready to publish the release to crates.io.
Checkout `master`, ensuring we're looking at the latest merge (`git pull`).
1. Checkout `master`, ensuring we're looking at the latest merge (`git pull`).
The crates in this repository need publishing in a specific order, since they depend on each other.
Additionally, `subxt-macro` has a circular dev dependency on `subxt`, so we use `cargo hack` to remove
dev dependencies (and `--allow-dirty` to ignore the git changes as a result) to publish it.
```
git checkout master && git pull
```
So, first install `cargo hack` with `cargo install cargo-hack`. Next, you can run something like the following
command to publish each crate in the required order (allowing a little time in between each to let `crates.io`
catch up with what we've published).
2. Perform a dry-run publish to ensure the crates can be correctly published.
```
(cd codegen && cargo publish) && \
sleep 10 && \
(cd macro && cargo hack publish --no-dev-deps --allow-dirty) && \
sleep 10 && \
(cd subxt && cargo publish) && \
sleep 10 && \
(cd cli && cargo publish);
```
The crates in this repository need publishing in a specific order, since they depend on each other.
If you run into any issues regarding crates not being able to find suitable versions of other `subxt-*` crates,
you may just need to wait a little longer and then run the remaining portion of that command.
```
(cd codegen && cargo publish --dry-run) && \
(cd macro && cargo publish --dry-run) && \
(cd subxt && cargo publish --dry-run) && \
(cd cli && cargo publish --dry-run);
```
3. If the dry-run was successful, run the following command to publish each crate in the required order (allowing
a little time in between each to let crates.io catch up with what we've published).
```
(cd codegen && cargo publish) && \
sleep 10 && \
(cd macro && cargo publish) && \
sleep 10 && \
(cd subxt && cargo publish) && \
sleep 10 && \
(cd cli && cargo publish);
```
If you run into any issues regarding crates not being able to find suitable versions of other `subxt-*` crates,
you may just need to wait a little longer and then run the remaining portion of that command.
9. If the release was successful, tag the commit that we released in the `master` branch with the
version that we just released, for example:
@@ -73,4 +83,4 @@ We also assume that ongoing work done is being merged directly to the `master` b
Once this is pushed, go along to [the releases page on GitHub](https://github.com/paritytech/subxt/releases)
and draft a new release which points to the tag you just pushed to `master` above. Copy the changelog comments
for the current release into the release description.
for the current release into the release description.
+2 -2
@@ -1,6 +1,6 @@
[package]
name = "subxt-cli"
version = "0.16.0"
version = "0.17.0"
authors = ["Parity Technologies <admin@parity.io>"]
edition = "2021"
@@ -16,7 +16,7 @@ path = "src/main.rs"
[dependencies]
# perform subxt codegen
subxt-codegen = { version = "0.16.0", path = "../codegen" }
subxt-codegen = { version = "0.17.0", path = "../codegen" }
# parse command line args
structopt = "0.3.25"
# make the request to a substrate node to get the metadata
+2 -2
@@ -1,6 +1,6 @@
[package]
name = "subxt-codegen"
version = "0.16.0"
version = "0.17.0"
authors = ["Parity Technologies <admin@parity.io>"]
edition = "2021"
@@ -15,7 +15,7 @@ async-trait = "0.1.49"
codec = { package = "parity-scale-codec", version = "2", default-features = false, features = ["derive", "full", "bit-vec"] }
darling = "0.13.0"
frame-metadata = "14.0"
heck = "0.3.2"
heck = "0.4.0"
proc-macro2 = "1.0.24"
proc-macro-crate = "0.1.5"
proc-macro-error = "1.0.4"
+3 -3
@@ -23,8 +23,8 @@ use frame_metadata::{
PalletMetadata,
};
use heck::{
CamelCase as _,
SnakeCase as _,
ToSnakeCase as _,
ToUpperCamelCase as _,
};
use proc_macro2::TokenStream as TokenStream2;
use proc_macro_error::abort_call_site;
@@ -43,7 +43,7 @@ pub fn generate_calls(
let struct_defs = super::generate_structs_from_variants(
type_gen,
call.ty.id(),
|name| name.to_camel_case().into(),
|name| name.to_upper_camel_case().into(),
"Call",
);
let (call_structs, call_fns): (Vec<_>, Vec<_>) = struct_defs
+1 -1
@@ -16,7 +16,7 @@
use crate::types::TypeGenerator;
use frame_metadata::PalletConstantMetadata;
use heck::SnakeCase as _;
use heck::ToSnakeCase as _;
use proc_macro2::TokenStream as TokenStream2;
use quote::{
format_ident,
+2 -2
@@ -54,7 +54,7 @@ use frame_metadata::{
RuntimeMetadataPrefixed,
StorageEntryType,
};
use heck::SnakeCase as _;
use heck::ToSnakeCase as _;
use proc_macro2::TokenStream as TokenStream2;
use proc_macro_error::abort_call_site;
use quote::{
@@ -462,7 +462,7 @@ where
type_gen,
);
CompositeDef::struct_def(
var.name(),
struct_name.as_ref(),
Default::default(),
fields,
Some(parse_quote!(pub)),
+1 -1
@@ -23,7 +23,7 @@ use frame_metadata::{
StorageEntryType,
StorageHasher,
};
use heck::SnakeCase as _;
use heck::ToSnakeCase as _;
use proc_macro2::TokenStream as TokenStream2;
use proc_macro_error::abort_call_site;
use quote::{
+2 -2
@@ -1,6 +1,6 @@
[package]
name = "subxt-examples"
version = "0.16.0"
version = "0.17.0"
authors = ["Parity Technologies <admin@parity.io>"]
edition = "2021"
publish = false
@@ -14,7 +14,7 @@ description = "Subxt example usage"
[dev-dependencies]
subxt = { path = "../subxt" }
async-std = { version = "1.9.0", features = ["attributes", "tokio1"] }
sp-keyring = "4.0.0"
sp-keyring = "5.0.0"
env_logger = "0.9.0"
futures = "0.3.13"
codec = { package = "parity-scale-codec", version = "2", default-features = false, features = ["derive", "full", "bit-vec"] }
+5 -5
@@ -1,6 +1,6 @@
[package]
name = "subxt-macro"
version = "0.16.0"
version = "0.17.0"
authors = ["Parity Technologies <admin@parity.io>"]
edition = "2021"
autotests = false
@@ -19,7 +19,7 @@ async-trait = "0.1.49"
codec = { package = "parity-scale-codec", version = "2", default-features = false, features = ["derive", "full"] }
darling = "0.13.0"
frame-metadata = "14.0"
heck = "0.3.2"
heck = "0.4.0"
proc-macro2 = "1.0.24"
proc-macro-crate = "0.1.5"
proc-macro-error = "1.0.4"
@@ -27,10 +27,10 @@ quote = "1.0.8"
syn = "1.0.58"
scale-info = "1.0.0"
subxt-codegen = { path = "../codegen", version = "0.16.0" }
subxt-codegen = { path = "../codegen", version = "0.17.0" }
[dev-dependencies]
pretty_assertions = "1.0.0"
subxt = { path = "../subxt", version = "0.16.0" }
subxt = { path = "../subxt" }
trybuild = "1.0.38"
sp-keyring = "4.0.0"
sp-keyring = "5.0.0"
+5 -5
@@ -1,6 +1,6 @@
[package]
name = "subxt"
version = "0.16.0"
version = "0.17.0"
authors = ["Parity Technologies <admin@parity.io>"]
edition = "2021"
@@ -27,10 +27,10 @@ serde = { version = "1.0.124", features = ["derive"] }
serde_json = "1.0.64"
thiserror = "1.0.24"
subxt-macro = { version = "0.16.0", path = "../macro" }
subxt-macro = { version = "0.17.0", path = "../macro" }
sp-core = { version = "4.0.0", default-features = false }
sp-runtime = { version = "4.0.0", default-features = false }
sp-core = { version = "5.0.0", default-features = false }
sp-runtime = "5.0.0"
sp-version = "4.0.0"
frame-metadata = "14.0.0"
@@ -45,4 +45,4 @@ tempdir = "0.3.7"
wabt = "0.10.0"
which = "4.0.2"
test-runtime = { path = "../test-runtime" }
sp-keyring = "4.0.0"
sp-keyring = "5.0.0"
+1 -1
@@ -90,7 +90,7 @@ impl ClientBuilder {
client
} else {
let url = self.url.as_deref().unwrap_or("ws://127.0.0.1:9944");
crate::rpc::build_ws_client(url).await?
crate::rpc::ws_client(url).await?
};
let rpc = Rpc::new(client);
let (metadata_bytes, genesis_hash, runtime_version, properties) = future::join4(
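As an aside on the fallback above: when no URL is configured, the client dials `ws://127.0.0.1:9944`. A minimal sketch of supplying a URL explicitly; the `set_url` setter name and the `build` signature are assumptions inferred from the `self.url` field in this hunk, not confirmed by the diff:
```rust
// Sketch only: `set_url` and `build` are assumed from the `self.url`
// fallback shown in the hunk above.
use subxt::{Client, ClientBuilder, DefaultConfig};

async fn connect() -> Result<Client<DefaultConfig>, Box<dyn std::error::Error>> {
    let client = ClientBuilder::new()
        // Same endpoint the builder falls back to when no URL is set.
        .set_url("ws://127.0.0.1:9944")
        .build()
        .await?;
    Ok(client)
}
```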
-12
@@ -155,18 +155,6 @@ impl<E> RuntimeError<E> {
}
}
/// Module error.
#[derive(Clone, Debug, Eq, thiserror::Error, PartialEq)]
#[error("{error} from {pallet}")]
pub struct PalletError {
/// The module where the error originated.
pub pallet: String,
/// The actual error code.
pub error: String,
/// The error description.
pub description: Vec<String>,
}
/// Transaction error.
#[derive(Clone, Debug, Eq, thiserror::Error, PartialEq)]
pub enum TransactionError {
+188
@@ -373,10 +373,17 @@ pub enum EventsDecodingError {
mod tests {
use super::*;
use crate::{
error::GenericError::{
Codec,
EventsDecoding,
Other,
},
events::EventsDecodingError::UnsupportedPrimitive,
Config,
DefaultConfig,
Phase,
};
use assert_matches::assert_matches;
use codec::Encode;
use frame_metadata::{
v14::{
@@ -643,4 +650,185 @@ mod tests {
bitvec::bitvec![Msb0, u64; 0, 1, 1, 0, 1, 0, 1, 0, 0],
);
}
#[test]
fn decode_primitive() {
decode_and_consume_type_consumes_all_bytes(false);
decode_and_consume_type_consumes_all_bytes(true);
let dummy_data = vec![0u8];
let dummy_cursor = &mut &*dummy_data;
let (id, reg) = singleton_type_registry::<char>();
let res = decode_and_consume_type(id.id(), &reg, dummy_cursor);
assert_matches!(
res,
Err(EventsDecoding(UnsupportedPrimitive(TypeDefPrimitive::Char)))
);
decode_and_consume_type_consumes_all_bytes("str".to_string());
decode_and_consume_type_consumes_all_bytes(1u8);
decode_and_consume_type_consumes_all_bytes(1i8);
decode_and_consume_type_consumes_all_bytes(1u16);
decode_and_consume_type_consumes_all_bytes(1i16);
decode_and_consume_type_consumes_all_bytes(1u32);
decode_and_consume_type_consumes_all_bytes(1i32);
decode_and_consume_type_consumes_all_bytes(1u64);
decode_and_consume_type_consumes_all_bytes(1i64);
decode_and_consume_type_consumes_all_bytes(1u128);
decode_and_consume_type_consumes_all_bytes(1i128);
}
#[test]
fn decode_tuple() {
decode_and_consume_type_consumes_all_bytes(());
decode_and_consume_type_consumes_all_bytes((true,));
decode_and_consume_type_consumes_all_bytes((true, "str"));
// Incomplete bytes for decoding
let dummy_data = false.encode();
let dummy_cursor = &mut &*dummy_data;
let (id, reg) = singleton_type_registry::<(bool, &'static str)>();
let res = decode_and_consume_type(id.id(), &reg, dummy_cursor);
assert_matches!(res, Err(Codec(_)));
// Incomplete bytes for decoding, with invalid char type
let dummy_data = (false, "str", 0u8).encode();
let dummy_cursor = &mut &*dummy_data;
let (id, reg) = singleton_type_registry::<(bool, &'static str, char)>();
let res = decode_and_consume_type(id.id(), &reg, dummy_cursor);
assert_matches!(
res,
Err(EventsDecoding(UnsupportedPrimitive(TypeDefPrimitive::Char)))
);
// The last byte (0x0 u8) should not be consumed
assert_eq!(dummy_cursor.len(), 1);
}
#[test]
fn decode_array_and_seq() {
decode_and_consume_type_consumes_all_bytes([0]);
decode_and_consume_type_consumes_all_bytes([1, 2, 3, 4, 5]);
decode_and_consume_type_consumes_all_bytes([0; 500]);
decode_and_consume_type_consumes_all_bytes(["str", "abc", "cde"]);
decode_and_consume_type_consumes_all_bytes(vec![0]);
decode_and_consume_type_consumes_all_bytes(vec![1, 2, 3, 4, 5]);
decode_and_consume_type_consumes_all_bytes(vec!["str", "abc", "cde"]);
}
#[test]
fn decode_variant() {
#[derive(Clone, Encode, TypeInfo)]
enum EnumVar {
A,
B((&'static str, u8)),
C { named: i16 },
}
const INVALID_TYPE_ID: u32 = 1024;
decode_and_consume_type_consumes_all_bytes(EnumVar::A);
decode_and_consume_type_consumes_all_bytes(EnumVar::B(("str", 1)));
decode_and_consume_type_consumes_all_bytes(EnumVar::C { named: 1 });
// Invalid variant index
let dummy_data = 3u8.encode();
let dummy_cursor = &mut &*dummy_data;
let (id, reg) = singleton_type_registry::<EnumVar>();
let res = decode_and_consume_type(id.id(), &reg, dummy_cursor);
assert_matches!(res, Err(Other(_)));
// Valid index, incomplete data
let dummy_data = 2u8.encode();
let dummy_cursor = &mut &*dummy_data;
let res = decode_and_consume_type(id.id(), &reg, dummy_cursor);
assert_matches!(res, Err(Codec(_)));
let res = decode_and_consume_type(INVALID_TYPE_ID, &reg, dummy_cursor);
assert_matches!(res, Err(crate::error::GenericError::Metadata(_)));
}
#[test]
fn decode_composite() {
#[derive(Clone, Encode, TypeInfo)]
struct Composite {}
decode_and_consume_type_consumes_all_bytes(Composite {});
#[derive(Clone, Encode, TypeInfo)]
struct CompositeV2 {
id: u32,
name: String,
}
decode_and_consume_type_consumes_all_bytes(CompositeV2 {
id: 10,
name: "str".to_string(),
});
#[derive(Clone, Encode, TypeInfo)]
struct CompositeV3<T> {
id: u32,
extra: T,
}
decode_and_consume_type_consumes_all_bytes(CompositeV3 {
id: 10,
extra: vec![0, 1, 2],
});
decode_and_consume_type_consumes_all_bytes(CompositeV3 {
id: 10,
extra: bitvec::bitvec![Lsb0, u8; 0, 1, 1, 0, 1],
});
decode_and_consume_type_consumes_all_bytes(CompositeV3 {
id: 10,
extra: ("str", 1),
});
decode_and_consume_type_consumes_all_bytes(CompositeV3 {
id: 10,
extra: CompositeV2 {
id: 2,
name: "str".to_string(),
},
});
#[derive(Clone, Encode, TypeInfo)]
struct CompositeV4(u32, bool);
decode_and_consume_type_consumes_all_bytes(CompositeV4(1, true));
#[derive(Clone, Encode, TypeInfo)]
struct CompositeV5(u32);
decode_and_consume_type_consumes_all_bytes(CompositeV5(1));
}
#[test]
fn decode_compact() {
#[derive(Clone, Encode, TypeInfo)]
enum Compact {
A(#[codec(compact)] u32),
}
decode_and_consume_type_consumes_all_bytes(Compact::A(1));
#[derive(Clone, Encode, TypeInfo)]
struct CompactV2(#[codec(compact)] u32);
decode_and_consume_type_consumes_all_bytes(CompactV2(1));
#[derive(Clone, Encode, TypeInfo)]
struct CompactV3 {
#[codec(compact)]
val: u32,
}
decode_and_consume_type_consumes_all_bytes(CompactV3 { val: 1 });
#[derive(Clone, Encode, TypeInfo)]
struct CompactV4<T> {
#[codec(compact)]
val: T,
}
decode_and_consume_type_consumes_all_bytes(CompactV4 { val: 0u8 });
decode_and_consume_type_consumes_all_bytes(CompactV4 { val: 1u16 });
}
}
+2 -1
@@ -81,7 +81,8 @@ pub use crate::{
error::{
BasicError,
Error,
PalletError,
GenericError,
RuntimeError,
TransactionError,
},
events::{
+5 -5
@@ -203,6 +203,10 @@ pub trait SubxtRpcApi<Hash, Header, Xt: Serialize> {
#[method(name = "author_hasKey")]
async fn has_key(&self, public_key: Bytes, key_type: String) -> RpcResult<bool>;
/// Create and submit an extrinsic and return corresponding Hash if successful
#[method(name = "author_submitExtrinsic")]
async fn submit_extrinsic(&self, extrinsic: Bytes) -> RpcResult<Hash>;
/// Subscribe to System Events that are imported into blocks.
///
/// *WARNING* these may not be included in the finalized chain, use
@@ -225,10 +229,6 @@ pub trait SubxtRpcApi<Hash, Header, Xt: Serialize> {
item = SubstrateTransactionStatus<Hash, Hash>
)]
fn watch_extrinsic<X: Encode>(&self, xt: Bytes) -> RpcResult<()>;
/// Create and submit an extrinsic and return corresponding Hash if successful
#[method(name = "author_submitExtrinsic")]
async fn submit_extrinsic(&self, extrinsic: Bytes) -> RpcResult<Hash>;
}
/// A number type that can be serialized both as a number or a string that encodes a number in a
@@ -400,7 +400,7 @@ impl<T: Config> Rpc<T> {
}
/// Build WS RPC client from URL
pub async fn build_ws_client(url: &str) -> Result<RpcClient, RpcError> {
pub async fn ws_client(url: &str) -> Result<RpcClient, RpcError> {
let (sender, receiver) = ws_transport(url).await?;
Ok(RpcClientBuilder::default()
.max_notifs_per_subscription(4096)
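The rename from `build_ws_client` to `ws_client` above is exercised by the `test-runtime` build script later in this commit. A condensed sketch of that usage; the `ClientT` import path is an assumption (it depends on the jsonrpsee version in use):
```rust
// Mirrors the build.rs call further down in this commit; only the trait
// import is assumed — the `request("state_getMetadata", None)` call itself
// appears verbatim below.
use jsonrpsee::core::client::ClientT;
use subxt::rpc;

async fn fetch_metadata(port: u16) -> Result<String, Box<dyn std::error::Error>> {
    let client = rpc::ws_client(&format!("ws://localhost:{}", port)).await?;
    let metadata: String = client.request("state_getMetadata", None).await?;
    Ok(metadata)
}
```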
+143 -20
@@ -42,6 +42,16 @@ use sp_core::{
use sp_runtime::traits::Header;
use std::collections::VecDeque;
/// Raw bytes for an Event, including the block hash where it occurred and its
/// corresponding event index.
#[derive(Debug)]
#[cfg_attr(test, derive(PartialEq, Clone))]
pub struct EventContext<Hash> {
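/// The block hash where the event occurred.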
pub block_hash: Hash,
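/// The index of the event in the block.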
pub event_idx: usize,
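/// The raw event itself.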
pub event: RawEvent,
}
/// Event subscription simplifies filtering a storage change set stream for
/// events of interest.
pub struct EventSubscription<'a, T: Config> {
@@ -49,7 +59,7 @@ pub struct EventSubscription<'a, T: Config> {
block: Option<T::Hash>,
extrinsic: Option<usize>,
event: Option<(&'static str, &'static str)>,
events: VecDeque<RawEvent>,
events: VecDeque<EventContext<T::Hash>>,
finished: bool,
}
@@ -60,13 +70,19 @@ enum BlockReader<'a, T: Config> {
},
/// Mock event listener for unit tests
#[cfg(test)]
Mock(Box<dyn Iterator<Item = (T::Hash, Result<Vec<(Phase, RawEvent)>, BasicError>)>>),
Mock(
Box<
dyn Iterator<
Item = (T::Hash, Result<Vec<(Phase, usize, RawEvent)>, BasicError>),
>,
>,
),
}
impl<'a, T: Config> BlockReader<'a, T> {
async fn next(
&mut self,
) -> Option<(T::Hash, Result<Vec<(Phase, RawEvent)>, BasicError>)> {
) -> Option<(T::Hash, Result<Vec<(Phase, usize, RawEvent)>, BasicError>)> {
match self {
BlockReader::Decoder {
subscription,
@@ -81,7 +97,13 @@ impl<'a, T: Config> BlockReader<'a, T> {
})
.collect();
let flattened_events = events.map(|x| x.into_iter().flatten().collect());
let flattened_events = events.map(|x| {
x.into_iter()
.flatten()
.enumerate()
.map(|(event_idx, (phase, raw))| (phase, event_idx, raw))
.collect()
});
Some((change_set.block, flattened_events))
}
#[cfg(test)]
@@ -127,6 +149,15 @@ impl<'a, T: Config> EventSubscription<'a, T> {
/// Gets the next event.
pub async fn next(&mut self) -> Option<Result<RawEvent, BasicError>> {
self.next_context()
.await
.map(|res| res.map(|ctx| ctx.event))
}
/// Gets the next event with the associated block hash and its corresponding
/// event index.
pub async fn next_context(
&mut self,
) -> Option<Result<EventContext<T::Hash>, BasicError>> {
loop {
if let Some(raw_event) = self.events.pop_front() {
return Some(Ok(raw_event))
@@ -147,7 +178,7 @@ impl<'a, T: Config> EventSubscription<'a, T> {
match events {
Err(err) => return Some(Err(err)),
Ok(raw_events) => {
for (phase, raw) in raw_events {
for (phase, event_idx, raw) in raw_events {
if let Some(ext_index) = self.extrinsic {
if !matches!(phase, Phase::ApplyExtrinsic(i) if i as usize == ext_index)
{
@@ -159,7 +190,11 @@ impl<'a, T: Config> EventSubscription<'a, T> {
continue
}
}
self.events.push_back(raw);
self.events.push_back(EventContext {
block_hash: received_hash,
event_idx,
event: raw,
});
}
}
}
@@ -282,7 +317,7 @@ mod tests {
#[async_std::test]
/// test that filters work correctly, and are independent of each other
async fn test_filters() {
let mut events = vec![];
let mut events: Vec<(H256, Phase, usize, RawEvent)> = vec![];
// create all events
for block_hash in [H256::from([0; 32]), H256::from([1; 32])] {
for phase in [
@@ -291,14 +326,24 @@ mod tests {
Phase::ApplyExtrinsic(1),
Phase::Finalization,
] {
for event in [named_event("a"), named_event("b")] {
events.push((block_hash, phase.clone(), event))
}
[named_event("a"), named_event("b")]
.iter()
.enumerate()
.for_each(|(idx, event)| {
events.push((
block_hash,
phase.clone(),
// The event index
idx,
event.clone(),
))
});
}
}
// set variant index so we can uniquely identify the event
events.iter_mut().enumerate().for_each(|(idx, event)| {
event.2.variant_index = idx as u8;
event.3.variant_index = idx as u8;
});
let half_len = events.len() / 2;
@@ -315,8 +360,8 @@ mod tests {
Ok(events
.iter()
.take(half_len)
.map(|(_, phase, event)| {
(phase.clone(), event.clone())
.map(|(_, phase, idx, event)| {
(phase.clone(), *idx, event.clone())
})
.collect()),
),
@@ -325,8 +370,8 @@ mod tests {
Ok(events
.iter()
.skip(half_len)
.map(|(_, phase, event)| {
(phase.clone(), event.clone())
.map(|(_, phase, idx, event)| {
(phase.clone(), *idx, event.clone())
})
.collect()),
),
@@ -339,21 +384,24 @@ mod tests {
events: Default::default(),
finished: false,
};
let mut expected_events = events.clone();
let mut expected_events: Vec<(H256, Phase, usize, RawEvent)> =
events.clone();
if let Some(hash) = block_filter {
expected_events.retain(|(h, _, _)| h == &hash);
expected_events.retain(|(h, _, _, _)| h == &hash);
}
if let Some(idx) = extrinsic_filter {
expected_events.retain(|(_, phase, _)| matches!(phase, Phase::ApplyExtrinsic(i) if *i as usize == idx));
expected_events.retain(|(_, phase, _, _)| matches!(phase, Phase::ApplyExtrinsic(i) if *i as usize == idx));
}
if let Some(name) = event_filter {
expected_events.retain(|(_, _, event)| event.pallet == name.0);
expected_events.retain(|(_, _, _, event)| event.pallet == name.0);
}
for expected_event in expected_events {
assert_eq!(
subscription.next().await.unwrap().unwrap(),
expected_event.2
expected_event.3
);
}
assert!(subscription.next().await.is_none());
@@ -361,4 +409,79 @@ mod tests {
}
}
}
#[async_std::test]
async fn test_context() {
let mut events = vec![];
// create all events
for block_hash in [H256::from([0; 32]), H256::from([1; 32])] {
for phase in [
Phase::Initialization,
Phase::ApplyExtrinsic(0),
Phase::ApplyExtrinsic(1),
Phase::Finalization,
] {
[named_event("a"), named_event("b")]
.iter()
.enumerate()
.for_each(|(idx, event)| {
events.push((
phase.clone(),
EventContext {
block_hash,
event_idx: idx,
event: event.clone(),
},
));
});
}
}
// set variant index so we can uniquely identify the event
events.iter_mut().enumerate().for_each(|(idx, (_, ctx))| {
ctx.event.variant_index = idx as u8;
});
let half_len = events.len() / 2;
let mut subscription: EventSubscription<DefaultConfig> = EventSubscription {
block_reader: BlockReader::Mock(Box::new(
vec![
(
events[0].1.block_hash,
Ok(events
.iter()
.take(half_len)
.map(|(phase, ctx)| {
(phase.clone(), ctx.event_idx, ctx.event.clone())
})
.collect()),
),
(
events[half_len].1.block_hash,
Ok(events
.iter()
.skip(half_len)
.map(|(phase, ctx)| {
(phase.clone(), ctx.event_idx, ctx.event.clone())
})
.collect()),
),
]
.into_iter(),
)),
block: None,
extrinsic: None,
event: None,
events: Default::default(),
finished: false,
};
let expected_events = events.clone();
for exp in expected_events {
assert_eq!(subscription.next_context().await.unwrap().unwrap(), exp.1);
}
assert!(subscription.next().await.is_none());
}
}
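The net effect of the changes above is a context-aware companion to `next()`. A short sketch of consuming it, using only names that appear in this diff (how the subscription is constructed is unchanged and elided here):
```rust
// Sketch: `next_context` yields an `EventContext` pairing each RawEvent with
// the block hash it came from and its in-block index. `DefaultConfig`,
// `BasicError`, `pallet` and `variant_index` all appear in the hunks above.
async fn log_events(
    mut sub: EventSubscription<'_, DefaultConfig>,
) -> Result<(), BasicError> {
    while let Some(ctx) = sub.next_context().await {
        let ctx = ctx?;
        println!(
            "pallet {} (variant {}) at event index {} in block {:?}",
            ctx.event.pallet, ctx.event.variant_index, ctx.event_idx, ctx.block_hash
        );
    }
    Ok(())
}
```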
+4 -4
@@ -1,15 +1,15 @@
[package]
name = "test-runtime"
version = "0.16.0"
version = "0.17.0"
edition = "2021"
[dependencies]
subxt = { path = "../subxt" }
sp-runtime = "4.0.0"
sp-runtime = "5.0.0"
codec = { package = "parity-scale-codec", version = "2", default-features = false, features = ["derive", "full", "bit-vec"] }
[build-dependencies]
subxt = { path = "../subxt", version = "0.16.0" }
sp-core = "4.0.0"
subxt = { path = "../subxt" }
sp-core = "5.0.0"
async-std = { version = "1.9.0", features = ["attributes", "tokio1"] }
which = "4.2.2"
+2 -2
@@ -3,8 +3,8 @@
The logic for this crate exists mainly in the `build.rs` file.
At compile time, this crate will:
- Spin up a local `substrate` binary (set the `SUBSTRATE_NODE_PATH` env var to point to a custom binary, otehrwise it'll look for `substrate` on your PATH).
- Spin up a local `substrate` binary (set the `SUBSTRATE_NODE_PATH` env var to point to a custom binary, otherwise it'll look for `substrate` on your PATH).
- Obtain metadata from this node.
- Export the metadata and a `node_runtime` module which has been annotated using the `subxt` proc macro and is based off the above metadata.
The reason for doing this is that our integration tests (which also spin up a Substrate node) can then use the generated `subxt` types from the exact node being tested against, so that we don't have to worry about metadata getting out of sync with the binary under test.
The reason for doing this is that our integration tests (which also spin up a Substrate node) can then use the generated `subxt` types from the exact node being tested against, so that we don't have to worry about metadata getting out of sync with the binary under test.
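A tiny sketch of the binary-resolution rule the README describes (an assumption about the behaviour, not an excerpt from `build.rs`):
```rust
// Assumption: the env var takes precedence, otherwise `substrate` is looked
// up on PATH, as the README above describes.
fn main() {
    let substrate_bin = std::env::var("SUBSTRATE_NODE_PATH")
        .unwrap_or_else(|_| "substrate".to_string());
    println!("spawning node binary: {}", substrate_bin);
}
```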
+9 -6
@@ -58,6 +58,10 @@ async fn run() {
.spawn();
let mut cmd = match cmd {
Ok(cmd) => KillOnDrop(cmd),
Err(ref e) if e.kind() == std::io::ErrorKind::NotFound => {
panic!("A substrate binary should be installed on your path for testing purposes. \
See https://github.com/paritytech/subxt/tree/master#integration-testing")
}
Err(e) => {
panic!("Cannot spawn substrate command '{}': {}", substrate_bin, e)
}
@@ -75,11 +79,10 @@ async fn run() {
// It might take a while for the substrate node to spin up the RPC server.
// Thus, the connection might get rejected a few times.
let res =
match rpc::build_ws_client(&format!("ws://localhost:{}", port)).await {
Ok(c) => c.request("state_getMetadata", None).await,
Err(e) => Err(e),
};
let res = match rpc::ws_client(&format!("ws://localhost:{}", port)).await {
Ok(c) => c.request("state_getMetadata", None).await,
Err(e) => Err(e),
};
match res {
Ok(res) => {
@@ -165,7 +168,7 @@ fn next_open_port() -> Option<u16> {
}
}
/// If the substrate process isn't explicilty killed on drop,
/// If the substrate process isn't explicitly killed on drop,
/// it seems that panics that occur while the command is running
/// will leave it running and block the build step from ever finishing.
/// Wrapping it in this prevents this from happening.