Use scale-encode and scale-decode to encode and decode based on metadata (#842)

* WIP EncodeAsType and DecodeAsType

* remove silly cli experiment code

* Get things finally compiling with EncodeAsType and DecodeAsType

* update codegen test and WrapperKeepOpaque proper impl (in case it shows up in codegen)

* fix tests

* accommodate scale-value changes

* starting to migrate to EncodeAsType/DecodeAsType

* static event decoding and tx encoding to use DecodeAsFields/EncodeAsFields
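The commits above migrate subxt from plain SCALE `Encode`/`Decode` to metadata-aware encoding. The real `EncodeAsType`/`DecodeAsType` traits from the `scale-encode`/`scale-decode` crates resolve a type id against a `PortableRegistry` taken from the chain metadata; the following is only a minimal, self-contained sketch of that idea, with a toy `TypeInfo` enum standing in for the registry (all names here are illustrative, not subxt's actual API):

```rust
// Toy stand-in for the type information that chain metadata provides.
#[derive(Clone, Copy)]
enum TypeInfo {
    U32,
    U64,
}

// Sketch of a metadata-aware encoding trait: the caller says which type
// the metadata expects, and the value encodes itself to match (or fails).
trait EncodeAsType {
    fn encode_as_type(&self, ty: TypeInfo, out: &mut Vec<u8>) -> Result<(), String>;
}

impl EncodeAsType for u32 {
    fn encode_as_type(&self, ty: TypeInfo, out: &mut Vec<u8>) -> Result<(), String> {
        match ty {
            // SCALE encodes fixed-width integers little-endian.
            TypeInfo::U32 => out.extend_from_slice(&self.to_le_bytes()),
            // A u32 value can also fill a u64 slot by widening first.
            TypeInfo::U64 => out.extend_from_slice(&(*self as u64).to_le_bytes()),
        }
        Ok(())
    }
}

fn main() {
    let mut out = Vec::new();
    42u32.encode_as_type(TypeInfo::U32, &mut out).unwrap();
    assert_eq!(out, vec![42u8, 0, 0, 0]);

    let mut out64 = Vec::new();
    42u32.encode_as_type(TypeInfo::U64, &mut out64).unwrap();
    assert_eq!(out64.len(), 8);
    println!("u32 slot: {:?}", out);
}
```

The point of the pattern: the same Rust value can be encoded differently depending on what the on-chain metadata says the target type is, which is what lets static codegen and dynamic values share one code path.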

* some tidy up and add decode(skip) attrs where needed

* fix root event decoding

* #[codec(skip)] will do, and combine map_key stuff into storage_address since it's all specific to that

* fmt and clippy

* update Cargo.lock

* remove patched scale-encode

* bump scale-encode to 0.1 and remove unused dep in testing crate

* update deps and use released scale-decode

* update scale-value to latest to remove git branch

* Apply suggestions from code review

Co-authored-by: Alexandru Vasile <60601340+lexnv@users.noreply.github.com>

* remove sorting in derives/attr generation; spit them out in order given

* re-add derive sorting; it's a hashmap

* StaticTxPayload and DynamicTxPayload rolled into single Payload struct

* StaticStorageAddress and DynamicStorageAddress into single Address struct

* Fix storage address byte retrieval

* StaticConstantAddress and DynamicConstantAddress => Address
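The three commits above collapse parallel static/dynamic type pairs into single structs. A hypothetical sketch of the pattern (names and fields are illustrative, not subxt's actual definitions): the only real difference between the two variants is whether a validation hash, checked against the node's metadata, is present, so an `Option` field can carry it:

```rust
// Illustrative merge of a StaticX/DynamicX pair into one type.
#[derive(Debug)]
struct Address {
    pallet_name: String,
    entry_name: String,
    // Some(..) for codegen'd (static) addresses, None for dynamic ones.
    validation_hash: Option<[u8; 32]>,
}

impl Address {
    // Generated code knows the expected hash at compile time.
    fn new_static(pallet: &str, entry: &str, hash: [u8; 32]) -> Self {
        Address {
            pallet_name: pallet.into(),
            entry_name: entry.into(),
            validation_hash: Some(hash),
        }
    }

    // Built at runtime from strings; there is nothing to validate against.
    fn new_dynamic(pallet: &str, entry: &str) -> Self {
        Address {
            pallet_name: pallet.into(),
            entry_name: entry.into(),
            validation_hash: None,
        }
    }

    fn requires_validation(&self) -> bool {
        self.validation_hash.is_some()
    }
}

fn main() {
    let s = Address::new_static("System", "Account", [0u8; 32]);
    let d = Address::new_dynamic("System", "Account");
    assert!(s.requires_validation());
    assert!(!d.requires_validation());
    println!("{:?}", s);
}
```

One struct instead of two means downstream APIs (fetching storage, submitting transactions) take a single address type rather than being generic over a static/dynamic trait pair.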

* Simplify storage codegen to fix test

* Add comments

* Alias to RuntimeEvent rather than making another, and prep for substituting call type

* remove unnecessary clone

* Fix docs and failing UI test

* root_bytes -> to_root_bytes

* document error case in StorageClient::address_bytes()

---------

Co-authored-by: Alexandru Vasile <60601340+lexnv@users.noreply.github.com>
Author: James Wilson
Date:   2023-03-21 15:31:13 +00:00 (committed by GitHub)
Parent: c9527abaa8
Commit: c63ff6ec6d

50 changed files with 9965 additions and 6262 deletions
+5 -13
@@ -21,30 +21,22 @@
 //! Fetching storage keys
 //!
 //! ```no_run
+//! # #[tokio::main]
+//! # async fn main() {
 //! use subxt::{ PolkadotConfig, OnlineClient, storage::StorageKey };
 //!
 //! #[subxt::subxt(runtime_metadata_path = "../artifacts/polkadot_metadata.scale")]
 //! pub mod polkadot {}
 //!
-//! # #[tokio::main]
-//! # async fn main() {
 //! let api = OnlineClient::<PolkadotConfig>::new().await.unwrap();
 //!
-//! let key = polkadot::storage()
-//! .xcm_pallet()
-//! .version_notifiers_root()
-//! .to_bytes();
-//!
-//! // Fetch up to 10 keys.
-//! let keys = api
+//! let genesis_hash = api
 //! .rpc()
-//! .storage_keys_paged(&key, 10, None, None)
+//! .genesis_hash()
 //! .await
 //! .unwrap();
 //!
-//! for key in keys.iter() {
-//! println!("Key: 0x{}", hex::encode(&key));
-//! }
+//! println!("{genesis_hash}");
 //! # }
 //! ```
+6 -14
@@ -9,33 +9,25 @@
 //!
 //! # Example
 //!
-//! Fetching storage keys
+//! Fetching the chain genesis hash.
 //!
 //! ```no_run
+//! # #[tokio::main]
+//! # async fn main() {
 //! use subxt::{ PolkadotConfig, OnlineClient, storage::StorageKey };
 //!
 //! #[subxt::subxt(runtime_metadata_path = "../artifacts/polkadot_metadata.scale")]
 //! pub mod polkadot {}
 //!
-//! # #[tokio::main]
-//! # async fn main() {
 //! let api = OnlineClient::<PolkadotConfig>::new().await.unwrap();
 //!
-//! let key = polkadot::storage()
-//! .xcm_pallet()
-//! .version_notifiers_root()
-//! .to_bytes();
-//!
-//! // Fetch up to 10 keys.
-//! let keys = api
+//! let genesis_hash = api
 //! .rpc()
-//! .storage_keys_paged(&key, 10, None, None)
+//! .genesis_hash()
 //! .await
 //! .unwrap();
 //!
-//! for key in keys.iter() {
-//! println!("Key: 0x{}", hex::encode(&key));
-//! }
+//! println!("{genesis_hash}");
 //! # }
 //! ```