Mirror of https://github.com/pezkuwichain/pezkuwi-subxt.git (synced 2026-05-01 07:47:57 +00:00, commit 29c0c6a4a8)
567 lines
17 KiB
Rust
// This file is part of Substrate.

// Copyright (C) 2018-2022 Parity Technologies (UK) Ltd.
// SPDX-License-Identifier: GPL-3.0-or-later WITH Classpath-exception-2.0

// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.

// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.

// You should have received a copy of the GNU General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.

//! Service integration test utils.

use futures::{task::Poll, Future, TryFutureExt as _};
use log::{debug, info};
use parking_lot::Mutex;
use sc_client_api::{Backend, CallExecutor};
use sc_network::{
	config::{NetworkConfiguration, TransportConfig},
	multiaddr, Multiaddr,
};
use sc_service::{
	client::Client,
	config::{BasePath, DatabaseSource, KeystoreConfig},
	ChainSpecExtension, Configuration, Error, GenericChainSpec, KeepBlocks, Role, RuntimeGenesis,
	SpawnTaskHandle, TaskManager,
};
use sc_transaction_pool_api::TransactionPool;
use sp_api::BlockId;
use sp_blockchain::HeaderBackend;
use sp_runtime::traits::Block as BlockT;
use std::{iter, net::Ipv4Addr, pin::Pin, sync::Arc, task::Context, time::Duration};
use tempfile::TempDir;
use tokio::{runtime::Runtime, time};

#[cfg(test)]
mod client;

/// Maximum duration of single wait call.
const MAX_WAIT_TIME: Duration = Duration::from_secs(60 * 3);

/// A network of test nodes (authorities and full nodes), driven by a shared tokio runtime.
struct TestNet<G, E, F, U> {
	runtime: Runtime,
	authority_nodes: Vec<(usize, F, U, Multiaddr)>,
	full_nodes: Vec<(usize, F, U, Multiaddr)>,
	chain_spec: GenericChainSpec<G, E>,
	base_port: u16,
	nodes: usize,
}

impl<G, E, F, U> Drop for TestNet<G, E, F, U> {
	fn drop(&mut self) {
		// Drop the nodes before dropping the runtime, as the runtime otherwise waits for all
		// futures to finish and we would run into a deadlock.
		self.full_nodes.drain(..);
		self.authority_nodes.drain(..);
	}
}

pub trait TestNetNode:
	Clone + Future<Output = Result<(), sc_service::Error>> + Send + 'static
{
	type Block: BlockT;
	type Backend: Backend<Self::Block>;
	type Executor: CallExecutor<Self::Block> + Send + Sync;
	type RuntimeApi: Send + Sync;
	type TransactionPool: TransactionPool<Block = Self::Block>;

	fn client(&self) -> Arc<Client<Self::Backend, Self::Executor, Self::Block, Self::RuntimeApi>>;
	fn transaction_pool(&self) -> Arc<Self::TransactionPool>;
	fn network(
		&self,
	) -> Arc<sc_network::NetworkService<Self::Block, <Self::Block as BlockT>::Hash>>;
	fn spawn_handle(&self) -> SpawnTaskHandle;
}

pub struct TestNetComponents<TBl: BlockT, TBackend, TExec, TRtApi, TExPool> {
	task_manager: Arc<Mutex<TaskManager>>,
	client: Arc<Client<TBackend, TExec, TBl, TRtApi>>,
	transaction_pool: Arc<TExPool>,
	network: Arc<sc_network::NetworkService<TBl, <TBl as BlockT>::Hash>>,
}

impl<TBl: BlockT, TBackend, TExec, TRtApi, TExPool>
	TestNetComponents<TBl, TBackend, TExec, TRtApi, TExPool>
{
	pub fn new(
		task_manager: TaskManager,
		client: Arc<Client<TBackend, TExec, TBl, TRtApi>>,
		network: Arc<sc_network::NetworkService<TBl, <TBl as BlockT>::Hash>>,
		transaction_pool: Arc<TExPool>,
	) -> Self {
		Self { client, transaction_pool, network, task_manager: Arc::new(Mutex::new(task_manager)) }
	}
}

impl<TBl: BlockT, TBackend, TExec, TRtApi, TExPool> Clone
	for TestNetComponents<TBl, TBackend, TExec, TRtApi, TExPool>
{
	fn clone(&self) -> Self {
		Self {
			task_manager: self.task_manager.clone(),
			client: self.client.clone(),
			transaction_pool: self.transaction_pool.clone(),
			network: self.network.clone(),
		}
	}
}

impl<TBl: BlockT, TBackend, TExec, TRtApi, TExPool> Future
	for TestNetComponents<TBl, TBackend, TExec, TRtApi, TExPool>
{
	type Output = Result<(), sc_service::Error>;

	fn poll(self: Pin<&mut Self>, cx: &mut Context) -> Poll<Self::Output> {
		Pin::new(&mut self.task_manager.lock().future()).poll(cx)
	}
}

impl<TBl, TBackend, TExec, TRtApi, TExPool> TestNetNode
	for TestNetComponents<TBl, TBackend, TExec, TRtApi, TExPool>
where
	TBl: BlockT,
	TBackend: sc_client_api::Backend<TBl> + Send + Sync + 'static,
	TExec: CallExecutor<TBl> + Send + Sync + 'static,
	TRtApi: Send + Sync + 'static,
	TExPool: TransactionPool<Block = TBl> + Send + Sync + 'static,
{
	type Block = TBl;
	type Backend = TBackend;
	type Executor = TExec;
	type RuntimeApi = TRtApi;
	type TransactionPool = TExPool;

	fn client(&self) -> Arc<Client<Self::Backend, Self::Executor, Self::Block, Self::RuntimeApi>> {
		self.client.clone()
	}
	fn transaction_pool(&self) -> Arc<Self::TransactionPool> {
		self.transaction_pool.clone()
	}
	fn network(
		&self,
	) -> Arc<sc_network::NetworkService<Self::Block, <Self::Block as BlockT>::Hash>> {
		self.network.clone()
	}
	fn spawn_handle(&self) -> SpawnTaskHandle {
		self.task_manager.lock().spawn_handle()
	}
}

impl<G, E, F, U> TestNet<G, E, F, U>
where
	F: Clone + Send + 'static,
	U: Clone + Send + 'static,
{
	/// Poll every full node every 100ms until `full_predicate` holds for all of them,
	/// panicking if that takes longer than `MAX_WAIT_TIME`.
	pub fn run_until_all_full<FP>(&mut self, full_predicate: FP)
	where
		FP: Send + Fn(usize, &F) -> bool + 'static,
	{
		let full_nodes = self.full_nodes.clone();
		let future = async move {
			let mut interval = time::interval(Duration::from_millis(100));
			loop {
				interval.tick().await;

				if full_nodes
					.iter()
					.all(|&(ref id, ref service, _, _)| full_predicate(*id, service))
				{
					break
				}
			}
		};

		if self
			.runtime
			.block_on(async move { time::timeout(MAX_WAIT_TIME, future).await })
			.is_err()
		{
			panic!("Waited for too long");
		}
	}
}

/// Build the `Configuration` for test node `index`, listening on 127.0.0.1 at
/// `base_port + index` and storing its keystore and database under `root`.
fn node_config<
	G: RuntimeGenesis + 'static,
	E: ChainSpecExtension + Clone + 'static + Send + Sync,
>(
	index: usize,
	spec: &GenericChainSpec<G, E>,
	role: Role,
	tokio_handle: tokio::runtime::Handle,
	key_seed: Option<String>,
	base_port: u16,
	root: &TempDir,
) -> Configuration {
	let root = root.path().join(format!("node-{}", index));

	let mut network_config = NetworkConfiguration::new(
		format!("Node {}", index),
		"network/test/0.1",
		Default::default(),
		None,
	);

	network_config.allow_non_globals_in_dht = true;

	network_config.listen_addresses.push(
		iter::once(multiaddr::Protocol::Ip4(Ipv4Addr::new(127, 0, 0, 1)))
			.chain(iter::once(multiaddr::Protocol::Tcp(base_port + index as u16)))
			.collect(),
	);

	network_config.transport =
		TransportConfig::Normal { enable_mdns: false, allow_private_ipv4: true };

	Configuration {
		impl_name: String::from("network-test-impl"),
		impl_version: String::from("0.1"),
		role,
		tokio_handle,
		transaction_pool: Default::default(),
		network: network_config,
		keystore_remote: Default::default(),
		keystore: KeystoreConfig::Path { path: root.join("key"), password: None },
		database: DatabaseSource::RocksDb { path: root.join("db"), cache_size: 128 },
		state_cache_size: 16777216,
		state_cache_child_ratio: None,
		state_pruning: Default::default(),
		keep_blocks: KeepBlocks::All,
		chain_spec: Box::new((*spec).clone()),
		wasm_method: sc_service::config::WasmExecutionMethod::Interpreted,
		wasm_runtime_overrides: Default::default(),
		execution_strategies: Default::default(),
		rpc_http: None,
		rpc_ipc: None,
		rpc_ws: None,
		rpc_ws_max_connections: None,
		rpc_cors: None,
		rpc_methods: Default::default(),
		rpc_max_payload: None,
		rpc_max_request_size: None,
		rpc_max_response_size: None,
		rpc_id_provider: None,
		rpc_max_subs_per_conn: None,
		ws_max_out_buffer_capacity: None,
		prometheus_config: None,
		telemetry_endpoints: None,
		default_heap_pages: None,
		offchain_worker: Default::default(),
		force_authoring: false,
		disable_grandpa: false,
		dev_key_seed: key_seed,
		tracing_targets: None,
		tracing_receiver: Default::default(),
		max_runtime_instances: 8,
		announce_block: true,
		base_path: Some(BasePath::new(root)),
		informant_output_format: Default::default(),
		runtime_cache_size: 2,
	}
}

impl<G, E, F, U> TestNet<G, E, F, U>
where
	F: TestNetNode,
	E: ChainSpecExtension + Clone + 'static + Send + Sync,
	G: RuntimeGenesis + 'static,
{
	fn new(
		temp: &TempDir,
		spec: GenericChainSpec<G, E>,
		full: impl Iterator<Item = impl FnOnce(Configuration) -> Result<(F, U), Error>>,
		authorities: impl Iterator<Item = (String, impl FnOnce(Configuration) -> Result<(F, U), Error>)>,
		base_port: u16,
	) -> TestNet<G, E, F, U> {
		sp_tracing::try_init_simple();
		fdlimit::raise_fd_limit();
		let runtime = Runtime::new().expect("Error creating tokio runtime");
		let mut net = TestNet {
			runtime,
			authority_nodes: Default::default(),
			full_nodes: Default::default(),
			chain_spec: spec,
			base_port,
			nodes: 0,
		};
		net.insert_nodes(temp, full, authorities);
		net
	}

	fn insert_nodes(
		&mut self,
		temp: &TempDir,
		full: impl Iterator<Item = impl FnOnce(Configuration) -> Result<(F, U), Error>>,
		authorities: impl Iterator<Item = (String, impl FnOnce(Configuration) -> Result<(F, U), Error>)>,
	) {
		let handle = self.runtime.handle().clone();

		for (key, authority) in authorities {
			let node_config = node_config(
				self.nodes,
				&self.chain_spec,
				Role::Authority,
				handle.clone(),
				Some(key),
				self.base_port,
				temp,
			);
			let addr = node_config.network.listen_addresses.first().unwrap().clone();
			let (service, user_data) =
				authority(node_config).expect("Error creating test node service");

			handle.spawn(service.clone().map_err(|_| ()));
			let addr =
				addr.with(multiaddr::Protocol::P2p((*service.network().local_peer_id()).into()));
			self.authority_nodes.push((self.nodes, service, user_data, addr));
			self.nodes += 1;
		}

		for full in full {
			let node_config = node_config(
				self.nodes,
				&self.chain_spec,
				Role::Full,
				handle.clone(),
				None,
				self.base_port,
				temp,
			);
			let addr = node_config.network.listen_addresses.first().unwrap().clone();
			let (service, user_data) = full(node_config).expect("Error creating test node service");

			handle.spawn(service.clone().map_err(|_| ()));
			let addr =
				addr.with(multiaddr::Protocol::P2p((*service.network().local_peer_id()).into()));
			self.full_nodes.push((self.nodes, service, user_data, addr));
			self.nodes += 1;
		}
	}
}

fn tempdir_with_prefix(prefix: &str) -> TempDir {
	tempfile::Builder::new()
		.prefix(prefix)
		.tempdir()
		.expect("Error creating test dir")
}

/// Run a connectivity test: spawn `NUM_FULL_NODES` full nodes and verify that each node
/// connects to all others, first in a star topology and then in a linked (chain) topology.
pub fn connectivity<G, E, Fb, F>(spec: GenericChainSpec<G, E>, full_builder: Fb)
where
	E: ChainSpecExtension + Clone + 'static + Send + Sync,
	G: RuntimeGenesis + 'static,
	Fb: Fn(Configuration) -> Result<F, Error>,
	F: TestNetNode,
{
	const NUM_FULL_NODES: usize = 5;

	let expected_full_connections = NUM_FULL_NODES - 1;

	{
		let temp = tempdir_with_prefix("substrate-connectivity-test");
		{
			let mut network = TestNet::new(
				&temp,
				spec.clone(),
				(0..NUM_FULL_NODES).map(|_| |cfg| full_builder(cfg).map(|s| (s, ()))),
				// Note: this iterator is empty but we can't just use `iter::empty()`, otherwise
				// the type of the closure cannot be inferred.
				(0..0).map(|_| (String::new(), { |cfg| full_builder(cfg).map(|s| (s, ())) })),
				30400,
			);
			info!("Checking star topology");
			let first_address = network.full_nodes[0].3.clone();
			for (_, service, _, _) in network.full_nodes.iter().skip(1) {
				service
					.network()
					.add_reserved_peer(first_address.to_string())
					.expect("Error adding reserved peer");
			}

			network.run_until_all_full(move |_index, service| {
				let connected = service.network().num_connected();
				debug!("Got {}/{} full connections...", connected, expected_full_connections);
				connected == expected_full_connections
			});
		};

		temp.close().expect("Error removing temp dir");
	}
	{
		let temp = tempdir_with_prefix("substrate-connectivity-test");
		{
			let mut network = TestNet::new(
				&temp,
				spec,
				(0..NUM_FULL_NODES).map(|_| |cfg| full_builder(cfg).map(|s| (s, ()))),
				// Note: this iterator is empty but we can't just use `iter::empty()`, otherwise
				// the type of the closure cannot be inferred.
				(0..0).map(|_| (String::new(), { |cfg| full_builder(cfg).map(|s| (s, ())) })),
				30400,
			);
			info!("Checking linked topology");
			let mut address = network.full_nodes[0].3.clone();
			for i in 0..NUM_FULL_NODES {
				if i != 0 {
					if let Some((_, service, _, node_id)) = network.full_nodes.get(i) {
						service
							.network()
							.add_reserved_peer(address.to_string())
							.expect("Error adding reserved peer");
						address = node_id.clone();
					}
				}
			}

			network.run_until_all_full(move |_index, service| {
				let connected = service.network().num_connected();
				debug!("Got {}/{} full connections...", connected, expected_full_connections);
				connected == expected_full_connections
			});
		}
		temp.close().expect("Error removing temp dir");
	}
}

/// Run a sync test: the first node produces `NUM_BLOCKS` blocks, all other nodes sync to it,
/// and an extrinsic submitted on the first node must then propagate to every transaction pool.
pub fn sync<G, E, Fb, F, B, ExF, U>(
	spec: GenericChainSpec<G, E>,
	full_builder: Fb,
	mut make_block_and_import: B,
	mut extrinsic_factory: ExF,
) where
	Fb: Fn(Configuration) -> Result<(F, U), Error>,
	F: TestNetNode,
	B: FnMut(&F, &mut U),
	ExF: FnMut(&F, &U) -> <F::Block as BlockT>::Extrinsic,
	U: Clone + Send + 'static,
	E: ChainSpecExtension + Clone + 'static + Send + Sync,
	G: RuntimeGenesis + 'static,
{
	const NUM_FULL_NODES: usize = 10;
	const NUM_BLOCKS: usize = 512;
	let temp = tempdir_with_prefix("substrate-sync-test");
	let mut network = TestNet::new(
		&temp,
		spec,
		(0..NUM_FULL_NODES).map(|_| |cfg| full_builder(cfg)),
		// Note: this iterator is empty but we can't just use `iter::empty()`, otherwise
		// the type of the closure cannot be inferred.
		(0..0).map(|_| (String::new(), { |cfg| full_builder(cfg) })),
		30500,
	);
	info!("Checking block sync");
	let first_address = {
		let &mut (_, ref first_service, ref mut first_user_data, _) = &mut network.full_nodes[0];
		for i in 0..NUM_BLOCKS {
			if i % 128 == 0 {
				info!("Generating #{}", i + 1);
			}

			make_block_and_import(first_service, first_user_data);
		}
		let info = network.full_nodes[0].1.client().info();
		network.full_nodes[0]
			.1
			.network()
			.new_best_block_imported(info.best_hash, info.best_number);
		network.full_nodes[0].3.clone()
	};

	info!("Running sync");
	for (_, service, _, _) in network.full_nodes.iter().skip(1) {
		service
			.network()
			.add_reserved_peer(first_address.to_string())
			.expect("Error adding reserved peer");
	}

	network.run_until_all_full(|_index, service| {
		service.client().info().best_number == (NUM_BLOCKS as u32).into()
	});

	info!("Checking extrinsic propagation");
	let first_service = network.full_nodes[0].1.clone();
	let first_user_data = &network.full_nodes[0].2;
	let best_block = BlockId::number(first_service.client().info().best_number);
	let extrinsic = extrinsic_factory(&first_service, first_user_data);
	let source = sc_transaction_pool_api::TransactionSource::External;

	futures::executor::block_on(first_service.transaction_pool().submit_one(
		&best_block,
		source,
		extrinsic,
	))
	.expect("failed to submit extrinsic");

	network.run_until_all_full(|_index, service| service.transaction_pool().ready().count() == 1);
}

/// Run a consensus test: start the authorities plus half of the full nodes, wait until half of
/// `NUM_BLOCKS` is finalized, then add the remaining full nodes and wait for full finalization.
pub fn consensus<G, E, Fb, F>(
	spec: GenericChainSpec<G, E>,
	full_builder: Fb,
	authorities: impl IntoIterator<Item = String>,
) where
	Fb: Fn(Configuration) -> Result<F, Error>,
	F: TestNetNode,
	E: ChainSpecExtension + Clone + 'static + Send + Sync,
	G: RuntimeGenesis + 'static,
{
	const NUM_FULL_NODES: usize = 10;
	const NUM_BLOCKS: usize = 10; // 10 * 2 sec block production time = ~20 seconds
	let temp = tempdir_with_prefix("substrate-consensus-test");
	let mut network = TestNet::new(
		&temp,
		spec,
		(0..NUM_FULL_NODES / 2).map(|_| |cfg| full_builder(cfg).map(|s| (s, ()))),
		authorities
			.into_iter()
			.map(|key| (key, { |cfg| full_builder(cfg).map(|s| (s, ())) })),
		30600,
	);

	info!("Checking consensus");
	let first_address = network.authority_nodes[0].3.clone();
	for (_, service, _, _) in network.full_nodes.iter() {
		service
			.network()
			.add_reserved_peer(first_address.to_string())
			.expect("Error adding reserved peer");
	}
	for (_, service, _, _) in network.authority_nodes.iter().skip(1) {
		service
			.network()
			.add_reserved_peer(first_address.to_string())
			.expect("Error adding reserved peer");
	}
	network.run_until_all_full(|_index, service| {
		service.client().info().finalized_number >= (NUM_BLOCKS as u32 / 2).into()
	});

	info!("Adding more peers");
	network.insert_nodes(
		&temp,
		(0..NUM_FULL_NODES / 2).map(|_| |cfg| full_builder(cfg).map(|s| (s, ()))),
		// Note: this iterator is empty but we can't just use `iter::empty()`, otherwise
		// the type of the closure cannot be inferred.
		(0..0).map(|_| (String::new(), { |cfg| full_builder(cfg).map(|s| (s, ())) })),
	);
	for (_, service, _, _) in network.full_nodes.iter() {
		service
			.network()
			.add_reserved_peer(first_address.to_string())
			.expect("Error adding reserved peer");
	}

	network.run_until_all_full(|_index, service| {
		service.client().info().finalized_number >= (NUM_BLOCKS as u32).into()
	});
}