jsonrpsee integration (#8783)

* Add tokio

* No need to map CallError to CallError

* jsonrpsee proc macros (#9673)

* port error types to `JsonRpseeError`

* migrate chain module to proc macro api

* make it compile with proc macros

* update branch

* update branch

* update to jsonrpsee master

* port system rpc

* port state rpc

* port childstate & offchain

* frame system rpc

* frame transaction payment

* bring back CORS hack to work with polkadot UI

* port babe rpc

* port manual seal rpc

* port frame mmr rpc

* port frame contracts rpc

* port finality grandpa rpc

* port sync state rpc

* resolve a few TODO + no jsonrpc deps

* Update bin/node/rpc-client/src/main.rs

* Update bin/node/rpc-client/src/main.rs

* Update bin/node/rpc-client/src/main.rs

* Update bin/node/rpc-client/src/main.rs

* Port over system_ rpc tests

* Make it compile

* Use prost 0.8

* Use prost 0.8

* Make it compile

* Ignore more failing tests

* Comment out WIP tests

* fix nit in frame system api

* Update lockfile

* No more juggling tokio versions

* No more wait_for_stop ?

* Remove browser-testing

* Arguments must be arrays

* Use same argument names

* Resolve todo: no wait_for_stop for WS server
Add todo: is parse_rpc_result used?
Cleanup imports

* fmt

* log

* One test passes

* update jsonrpsee

* update jsonrpsee

* cleanup rpc-servers crate

* jsonrpsee: add host and origin filtering (#9787)

* add access control in the jsonrpsee servers

* use master

* fix nits

* rpc runtime_version safe

* fix nits

* fix grumbles

* remove unused files

* resolve some todos

* jsonrpsee more cleanup (#9803)

* more cleanup

* resolve TODOs

* fix some unwraps

* remove type hints

* update jsonrpsee

* downgrade zeroize

* pin jsonrpsee rev

* remove unwrap nit

* Comment out more tests that aren't ported

* Comment out more tests

* Fix tests after merge

* Subscription test

* Invalid nonce test

* Pending exts

* WIP removeExtrinsic test

* Test remove_extrinsic

* Make state test: should_return_storage work

* Uncomment/fix the other non-subscription related state tests

* test: author_insertKey

* test: author_rotateKeys

* Get rest of state tests passing

* asyncify a little more

* Add todo to note #msg change

* Crashing test for has_session_keys

* Fix error conversion to avoid stack overflows
Port author_hasSessionKeys test
fmt

* test author_hasKey

* Add two missing tests
Add a check on the return type
Add todos for James's concerns

* RPC tests for state, author and system (#9859)

* Fix test runner

* Impl Default for SubscriptionTaskExecutor

* Keep the minimal amount of code needed to compile tests

* Re-instate `RpcSession` (for now)

* cleanup

* Port over RPC tests

* offchain rpc tests

* Address todos

* fmt

Co-authored-by: James Wilson <james@jsdw.me>

* fix drop in state test

* update jsonrpsee

* fix ignored system test

* fix chain tests

* remove some boilerplate

* Port BEEFY RPC (#9883)

* Merge master

* Port beefy RPC (ty @niklas!)

* trivial changes left over from merge

* Remove unused code

* Update jsonrpsee

* fix build

* make tests compile again

* beefy update jsonrpsee

* fix: respect rpc methods policy

* update cargo.lock

* update jsonrpsee

* update jsonrpsee

* downgrade error logs

* update jsonrpsee

* Fix typo

* remove unused file

* Better name

* Port Babe RPC tests

* Put docs back

* Resolve todo

* Port tests for System RPCs

* Resolve todo

* fix build

* Updated jsonrpsee to current master

* fix: port finality grandpa rpc tests

* Move .into() outside of the match

* more review grumbles

* jsonrpsee: add `rpc handlers` back (#10245)

* add back RpcHandlers

* cargo fmt

* fix docs

* fix grumble: remove needless alloc

* resolve TODO

* fmt

* Fix typo

* grumble: Use constants based on BASE_ERROR

* grumble: DRY whitelisted listening addresses
grumble: s/JSONRPC/JSON-RPC/

* cleanup

* grumbles: Making readers aware of the possibility of gaps

* review grumbles

* grumbles

* remove notes from niklasad1

* Update `jsonrpsee`

* fix: jsonrpsee features

* jsonrpsee: fallback to random port in case the specified port failed (#10304)

* jsonrpsee: fall back to a random port

* better comment

* Update client/rpc-servers/src/lib.rs

Co-authored-by: Maciej Hirsz <1096222+maciejhirsz@users.noreply.github.com>

* Update client/rpc-servers/src/lib.rs

Co-authored-by: Maciej Hirsz <1096222+maciejhirsz@users.noreply.github.com>

* address grumbles

* cargo fmt

* addrs already slice

Co-authored-by: Maciej Hirsz <1096222+maciejhirsz@users.noreply.github.com>

* Update jsonrpsee to 092081a0a2b8904c6ebd2cd99e16c7bc13ffc3ae

* lockfile

* update jsonrpsee

* fix warning

* Don't fetch jsonrpsee from crates

* make tests compile again

* fix rpc tests

* remove unused deps

* update tokio

* fix rpc tests again

* fix: test runner

`HttpServerBuilder::builder` fails unless it's called within a tokio runtime

* cargo fmt

* grumbles: fix subscription aliases

* make clippy happy

* update remaining subscriptions alias

* cleanup

* cleanup

* fix chain subscription: less boilerplate (#10285)

* fix chain subscription: less boilerplate

* fix bad merge

* cargo fmt

* Switch to jsonrpsee 0.5

* fix build

* add missing features

* fix nit: remove needless Box::pin

* Integrate jsonrpsee metrics (#10395)

* draft metrics impl

* Use latest api

* Add missing file

* Http server metrics

* cleanup

* bump jsonrpsee

* Remove `ServerMetrics` and use a single middleware for both connection counting (aka sessions) and call metrics.

* fix build

* remove needless Arc::clone

* Update to jsonrpsee 0.6

* lolz

* fix metrics

* Revert "lolz"

This reverts commit eed6c6a56e78d8e307b4950f4c52a1c3a2322ba1.

* fix: in-memory rpc support subscriptions

* commit Cargo.lock

* Update tests to 0.7

* fix TODOs

* ws server: generate subscriptionIDs as Strings

Some libraries seem to expect subscription IDs to be strings; let's not break
this in this PR.
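For context, the subscription ID is what clients echo back when unsubscribing, so its type is visible on the wire. With string IDs a subscription notification looks roughly like this (illustrative values, not output of this PR's code):

```json
{
  "jsonrpc": "2.0",
  "method": "chain_newHead",
  "params": {
    "subscription": "E23j9Qe7x5ZQfN2p",
    "result": { "number": "0x1" }
  }
}
```

A numeric-ID server would instead send `"subscription": 42`, which is what some client libraries failed to handle.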

* Increase timeout

* Port over tests

* cleanup

* Using error codes from the spec

* fix clippy

* cargo fmt

* update jsonrpsee

* fix nits

* fix: rpc_query

* enable custom subid gen through spawn_tasks

* remove unused deps

* unify tokio deps

* Revert "enable custom subid gen through spawn_tasks"

This reverts commit 5c5eb70328fe39d154fdb55c56e637b4548cf470.

* fix bad merge of `test-utils`

* fix more nits

* downgrade wasm-instrument to 0.1.0

* [jsonrpsee]: enable custom RPC subscription ID generation (#10731)

* enable custom subid gen through spawn_tasks

* fix nits

* Update client/service/src/builder.rs

Co-authored-by: David <dvdplm@gmail.com>

* add Poc; needs jsonrpsee pr

* update jsonrpsee

* add re-exports

* add docs

Co-authored-by: David <dvdplm@gmail.com>

* cargo fmt

* fmt

* port RPC-API dev

* Remove unused file

* fix nit: remove async trait

* fix doc links

* fix merge nit: remove jsonrpc deps

* kill namespace on rpc apis

* companion for jsonrpsee v0.10 (#11158)

* companion for jsonrpsee v0.10

* update versions v0.10.0

* add some fixes

* spelling

* fix spaces

Co-authored-by: Niklas Adolfsson <niklasadolfsson1@gmail.com>

* send error before subs are closed

* fix unsubscribe method names: chain

* fix tests

* jsonrpsee server: print bound local address

* grumbles: kill SubscriptionTaskExecutor

* Update client/sync-state-rpc/src/lib.rs

Co-authored-by: Bastian Köcher <bkchr@users.noreply.github.com>

* Update client/rpc/src/chain/chain_full.rs

Co-authored-by: Bastian Köcher <bkchr@users.noreply.github.com>

* Update client/rpc/src/chain/chain_full.rs

Co-authored-by: Bastian Köcher <bkchr@users.noreply.github.com>

* sync-state-rpc: kill anyhow

* no more anyhow

* remove todo

* jsonrpsee: fix bad params in subscriptions (#11251)

* update jsonrpsee

* fix error responses

* revert error codes

* dont do weird stuff in drop impl

* rpc servers: remove needless clone

* Remove silly constants

* chore: update jsonrpsee v0.12

* commit Cargo.lock

* deps: downgrade git2

* feat: CLI flag max subscriptions per connection

* metrics: use old logging format

* fix: read WS address from substrate output (#11379)

Co-authored-by: Niklas Adolfsson <niklasadolfsson1@gmail.com>
Co-authored-by: James Wilson <james@jsdw.me>
Co-authored-by: Maciej Hirsz <hello@maciej.codes>
Co-authored-by: Maciej Hirsz <1096222+maciejhirsz@users.noreply.github.com>
Co-authored-by: Bastian Köcher <bkchr@users.noreply.github.com>
Author: David
Date: 2022-05-10 10:52:19 +02:00 (committed by GitHub)
Parent: e45d53552d
Commit: 29c0c6a4a8
93 changed files with 3813 additions and 5094 deletions
+68 -73
@@ -18,31 +18,24 @@
 //! Substrate system API.

-use self::error::Result;
-use futures::{channel::oneshot, FutureExt};
-use sc_rpc_api::{DenyUnsafe, Receiver};
+#[cfg(test)]
+mod tests;
+
+use futures::channel::oneshot;
+use jsonrpsee::{
+	core::{async_trait, error::Error as JsonRpseeError, JsonValue, RpcResult},
+	types::error::{CallError, ErrorCode, ErrorObject},
+};
+use sc_rpc_api::DenyUnsafe;
 use sc_tracing::logging;
 use sc_utils::mpsc::TracingUnboundedSender;
 use sp_runtime::traits::{self, Header as HeaderT};

-pub use self::{
-	gen_client::Client as SystemClient,
-	helpers::{Health, NodeRole, PeerInfo, SyncState, SystemInfo},
-};
+use self::error::Result;
+
+pub use self::helpers::{Health, NodeRole, PeerInfo, SyncState, SystemInfo};
 pub use sc_rpc_api::system::*;

-#[cfg(test)]
-mod tests;
-
-/// Early exit for RPCs that require `--rpc-methods=Unsafe` to be enabled
-macro_rules! bail_if_unsafe {
-	($value: expr) => {
-		if let Err(err) = $value.check_if_safe() {
-			return async move { Err(err.into()) }.boxed()
-		}
-	};
-}
-
 /// System API implementation
 pub struct System<B: traits::Block> {
 	info: SystemInfo,
@@ -62,7 +55,7 @@ pub enum Request<B: traits::Block> {
 	/// Must return information about the peers we are connected to.
 	Peers(oneshot::Sender<Vec<PeerInfo<B::Hash, <B::Header as HeaderT>::Number>>>),
 	/// Must return the state of the network.
-	NetworkState(oneshot::Sender<rpc::Value>),
+	NetworkState(oneshot::Sender<serde_json::Value>),
 	/// Must return any potential parse error.
 	NetworkAddReservedPeer(String, oneshot::Sender<Result<()>>),
 	/// Must return any potential parse error.
@@ -89,121 +82,123 @@ impl<B: traits::Block> System<B> {
 	}
 }

-impl<B: traits::Block> SystemApi<B::Hash, <B::Header as HeaderT>::Number> for System<B> {
-	fn system_name(&self) -> Result<String> {
+#[async_trait]
+impl<B: traits::Block> SystemApiServer<B::Hash, <B::Header as HeaderT>::Number> for System<B> {
+	fn system_name(&self) -> RpcResult<String> {
 		Ok(self.info.impl_name.clone())
 	}

-	fn system_version(&self) -> Result<String> {
+	fn system_version(&self) -> RpcResult<String> {
 		Ok(self.info.impl_version.clone())
 	}

-	fn system_chain(&self) -> Result<String> {
+	fn system_chain(&self) -> RpcResult<String> {
 		Ok(self.info.chain_name.clone())
 	}

-	fn system_type(&self) -> Result<sc_chain_spec::ChainType> {
+	fn system_type(&self) -> RpcResult<sc_chain_spec::ChainType> {
 		Ok(self.info.chain_type.clone())
 	}

-	fn system_properties(&self) -> Result<sc_chain_spec::Properties> {
+	fn system_properties(&self) -> RpcResult<sc_chain_spec::Properties> {
 		Ok(self.info.properties.clone())
 	}

-	fn system_health(&self) -> Receiver<Health> {
+	async fn system_health(&self) -> RpcResult<Health> {
 		let (tx, rx) = oneshot::channel();
 		let _ = self.send_back.unbounded_send(Request::Health(tx));
-		Receiver(rx)
+		rx.await.map_err(|e| JsonRpseeError::to_call_error(e))
 	}

-	fn system_local_peer_id(&self) -> Receiver<String> {
+	async fn system_local_peer_id(&self) -> RpcResult<String> {
 		let (tx, rx) = oneshot::channel();
 		let _ = self.send_back.unbounded_send(Request::LocalPeerId(tx));
-		Receiver(rx)
+		rx.await.map_err(|e| JsonRpseeError::to_call_error(e))
 	}

-	fn system_local_listen_addresses(&self) -> Receiver<Vec<String>> {
+	async fn system_local_listen_addresses(&self) -> RpcResult<Vec<String>> {
 		let (tx, rx) = oneshot::channel();
 		let _ = self.send_back.unbounded_send(Request::LocalListenAddresses(tx));
-		Receiver(rx)
+		rx.await.map_err(|e| JsonRpseeError::to_call_error(e))
 	}

-	fn system_peers(
+	async fn system_peers(
 		&self,
-	) -> rpc::BoxFuture<rpc::Result<Vec<PeerInfo<B::Hash, <B::Header as HeaderT>::Number>>>> {
-		bail_if_unsafe!(self.deny_unsafe);
+	) -> RpcResult<Vec<PeerInfo<B::Hash, <B::Header as HeaderT>::Number>>> {
+		self.deny_unsafe.check_if_safe()?;
 		let (tx, rx) = oneshot::channel();
 		let _ = self.send_back.unbounded_send(Request::Peers(tx));
-		async move { rx.await.map_err(|_| rpc::Error::internal_error()) }.boxed()
+		rx.await.map_err(|e| JsonRpseeError::to_call_error(e))
 	}

-	fn system_network_state(&self) -> rpc::BoxFuture<rpc::Result<rpc::Value>> {
-		bail_if_unsafe!(self.deny_unsafe);
+	async fn system_network_state(&self) -> RpcResult<JsonValue> {
+		self.deny_unsafe.check_if_safe()?;
 		let (tx, rx) = oneshot::channel();
 		let _ = self.send_back.unbounded_send(Request::NetworkState(tx));
-		async move { rx.await.map_err(|_| rpc::Error::internal_error()) }.boxed()
+		rx.await.map_err(|e| JsonRpseeError::to_call_error(e))
 	}

-	fn system_add_reserved_peer(&self, peer: String) -> rpc::BoxFuture<rpc::Result<()>> {
-		bail_if_unsafe!(self.deny_unsafe);
+	async fn system_add_reserved_peer(&self, peer: String) -> RpcResult<()> {
+		self.deny_unsafe.check_if_safe()?;
 		let (tx, rx) = oneshot::channel();
 		let _ = self.send_back.unbounded_send(Request::NetworkAddReservedPeer(peer, tx));
-		async move {
-			match rx.await {
-				Ok(Ok(())) => Ok(()),
-				Ok(Err(e)) => Err(rpc::Error::from(e)),
-				Err(_) => Err(rpc::Error::internal_error()),
-			}
+		match rx.await {
+			Ok(Ok(())) => Ok(()),
+			Ok(Err(e)) => Err(JsonRpseeError::from(e)),
+			Err(e) => Err(JsonRpseeError::to_call_error(e)),
 		}
-		.boxed()
 	}

-	fn system_remove_reserved_peer(&self, peer: String) -> rpc::BoxFuture<rpc::Result<()>> {
-		bail_if_unsafe!(self.deny_unsafe);
+	async fn system_remove_reserved_peer(&self, peer: String) -> RpcResult<()> {
+		self.deny_unsafe.check_if_safe()?;
 		let (tx, rx) = oneshot::channel();
 		let _ = self.send_back.unbounded_send(Request::NetworkRemoveReservedPeer(peer, tx));
-		async move {
-			match rx.await {
-				Ok(Ok(())) => Ok(()),
-				Ok(Err(e)) => Err(rpc::Error::from(e)),
-				Err(_) => Err(rpc::Error::internal_error()),
-			}
+		match rx.await {
+			Ok(Ok(())) => Ok(()),
+			Ok(Err(e)) => Err(JsonRpseeError::from(e)),
+			Err(e) => Err(JsonRpseeError::to_call_error(e)),
 		}
-		.boxed()
 	}

-	fn system_reserved_peers(&self) -> Receiver<Vec<String>> {
+	async fn system_reserved_peers(&self) -> RpcResult<Vec<String>> {
 		let (tx, rx) = oneshot::channel();
 		let _ = self.send_back.unbounded_send(Request::NetworkReservedPeers(tx));
-		Receiver(rx)
+		rx.await.map_err(|e| JsonRpseeError::to_call_error(e))
 	}

-	fn system_node_roles(&self) -> Receiver<Vec<NodeRole>> {
+	async fn system_node_roles(&self) -> RpcResult<Vec<NodeRole>> {
 		let (tx, rx) = oneshot::channel();
 		let _ = self.send_back.unbounded_send(Request::NodeRoles(tx));
-		Receiver(rx)
+		rx.await.map_err(|e| JsonRpseeError::to_call_error(e))
 	}

-	fn system_sync_state(&self) -> Receiver<SyncState<<B::Header as HeaderT>::Number>> {
+	async fn system_sync_state(&self) -> RpcResult<SyncState<<B::Header as HeaderT>::Number>> {
 		let (tx, rx) = oneshot::channel();
 		let _ = self.send_back.unbounded_send(Request::SyncState(tx));
-		Receiver(rx)
+		rx.await.map_err(|e| JsonRpseeError::to_call_error(e))
 	}

-	fn system_add_log_filter(&self, directives: String) -> rpc::Result<()> {
+	fn system_add_log_filter(&self, directives: String) -> RpcResult<()> {
 		self.deny_unsafe.check_if_safe()?;
 		logging::add_directives(&directives);
-		logging::reload_filter().map_err(|_e| rpc::Error::internal_error())
+		logging::reload_filter().map_err(|e| {
+			JsonRpseeError::Call(CallError::Custom(ErrorObject::owned(
+				ErrorCode::InternalError.code(),
+				e,
+				None::<()>,
+			)))
+		})
 	}

-	fn system_reset_log_filter(&self) -> rpc::Result<()> {
+	fn system_reset_log_filter(&self) -> RpcResult<()> {
 		self.deny_unsafe.check_if_safe()?;
-		logging::reset_log_filter().map_err(|_e| rpc::Error::internal_error())
+		logging::reset_log_filter().map_err(|e| {
+			JsonRpseeError::Call(CallError::Custom(ErrorObject::owned(
+				ErrorCode::InternalError.code(),
+				e,
+				None::<()>,
+			)))
+		})
 	}
 }
+163 -117
@@ -16,12 +16,18 @@
 // You should have received a copy of the GNU General Public License
 // along with this program. If not, see <https://www.gnu.org/licenses/>.

-use super::*;
+use super::{helpers::SyncState, *};
 use assert_matches::assert_matches;
-use futures::{executor, prelude::*};
+use futures::prelude::*;
+use jsonrpsee::{
+	core::Error as RpcError,
+	types::{error::CallError, EmptyParams},
+	RpcModule,
+};
 use sc_network::{self, config::Role, PeerId};
+use sc_rpc_api::system::helpers::PeerInfo;
 use sc_utils::mpsc::tracing_unbounded;
 use sp_core::H256;
 use std::{
 	env,
 	io::{BufRead, BufReader, Write},
@@ -43,7 +49,7 @@ impl Default for Status {
 	}
 }

-fn api<T: Into<Option<Status>>>(sync: T) -> System<Block> {
+fn api<T: Into<Option<Status>>>(sync: T) -> RpcModule<System<Block>> {
 	let status = sync.into().unwrap_or_default();
 	let should_have_peers = !status.is_dev;
 	let (tx, rx) = tracing_unbounded("rpc_system_tests");
@@ -136,98 +142,122 @@ fn api<T: Into<Option<Status>>>(sync: T) -> RpcModule<System<Block>> {
 		tx,
 		sc_rpc_api::DenyUnsafe::No,
 	)
+	.into_rpc()
 }

-fn wait_receiver<T>(rx: Receiver<T>) -> T {
-	futures::executor::block_on(rx).unwrap()
-}
-
-#[test]
-fn system_name_works() {
-	assert_eq!(api(None).system_name().unwrap(), "testclient".to_owned());
-}
-
-#[test]
-fn system_version_works() {
-	assert_eq!(api(None).system_version().unwrap(), "0.2.0".to_owned());
-}
-
-#[test]
-fn system_chain_works() {
-	assert_eq!(api(None).system_chain().unwrap(), "testchain".to_owned());
-}
-
-#[test]
-fn system_properties_works() {
-	assert_eq!(api(None).system_properties().unwrap(), serde_json::map::Map::new());
-}
-
-#[test]
-fn system_type_works() {
-	assert_eq!(api(None).system_type().unwrap(), Default::default());
-}
-
-#[test]
-fn system_health() {
-	assert_matches!(
-		wait_receiver(api(None).system_health()),
-		Health { peers: 0, is_syncing: false, should_have_peers: true }
-	);
-
-	assert_matches!(
-		wait_receiver(
-			api(Status { peer_id: PeerId::random(), peers: 5, is_syncing: true, is_dev: true })
-				.system_health()
-		),
-		Health { peers: 5, is_syncing: true, should_have_peers: false }
-	);
-
-	assert_eq!(
-		wait_receiver(
-			api(Status { peer_id: PeerId::random(), peers: 5, is_syncing: false, is_dev: false })
-				.system_health()
-		),
-		Health { peers: 5, is_syncing: false, should_have_peers: true }
-	);
-
-	assert_eq!(
-		wait_receiver(
-			api(Status { peer_id: PeerId::random(), peers: 0, is_syncing: false, is_dev: true })
-				.system_health()
-		),
-		Health { peers: 0, is_syncing: false, should_have_peers: false }
-	);
-}
-
-#[test]
-fn system_local_peer_id_works() {
-	assert_eq!(
-		wait_receiver(api(None).system_local_peer_id()),
-		"QmSk5HQbn6LhUwDiNMseVUjuRYhEtYj4aUZ6WfWoGURpdV".to_owned(),
-	);
-}
-
-#[test]
-fn system_local_listen_addresses_works() {
-	assert_eq!(
-		wait_receiver(api(None).system_local_listen_addresses()),
-		vec![
-			"/ip4/198.51.100.19/tcp/30333/p2p/QmSk5HQbn6LhUwDiNMseVUjuRYhEtYj4aUZ6WfWoGURpdV"
-				.to_string(),
-			"/ip4/127.0.0.1/tcp/30334/ws/p2p/QmSk5HQbn6LhUwDiNMseVUjuRYhEtYj4aUZ6WfWoGURpdV"
-				.to_string(),
-		]
-	);
-}
-
-#[test]
-fn system_peers() {
+#[tokio::test]
+async fn system_name_works() {
+	assert_eq!(
+		api(None).call::<_, String>("system_name", EmptyParams::new()).await.unwrap(),
+		"testclient".to_string(),
+	);
+}
+
+#[tokio::test]
+async fn system_version_works() {
+	assert_eq!(
+		api(None).call::<_, String>("system_version", EmptyParams::new()).await.unwrap(),
+		"0.2.0".to_string(),
+	);
+}
+
+#[tokio::test]
+async fn system_chain_works() {
+	assert_eq!(
+		api(None).call::<_, String>("system_chain", EmptyParams::new()).await.unwrap(),
+		"testchain".to_string(),
+	);
+}
+
+#[tokio::test]
+async fn system_properties_works() {
+	type Map = serde_json::map::Map<String, serde_json::Value>;
+	assert_eq!(
+		api(None).call::<_, Map>("system_properties", EmptyParams::new()).await.unwrap(),
+		Map::new()
+	);
+}
+
+#[tokio::test]
+async fn system_type_works() {
+	assert_eq!(
+		api(None)
+			.call::<_, String>("system_chainType", EmptyParams::new())
+			.await
+			.unwrap(),
+		"Live".to_owned(),
+	);
+}
+
+#[tokio::test]
+async fn system_health() {
+	assert_eq!(
+		api(None).call::<_, Health>("system_health", EmptyParams::new()).await.unwrap(),
+		Health { peers: 0, is_syncing: false, should_have_peers: true },
+	);
+
+	assert_eq!(
+		api(Status { peer_id: PeerId::random(), peers: 5, is_syncing: true, is_dev: true })
+			.call::<_, Health>("system_health", EmptyParams::new())
+			.await
+			.unwrap(),
+		Health { peers: 5, is_syncing: true, should_have_peers: false },
+	);
+
+	assert_eq!(
+		api(Status { peer_id: PeerId::random(), peers: 5, is_syncing: false, is_dev: false })
+			.call::<_, Health>("system_health", EmptyParams::new())
+			.await
+			.unwrap(),
+		Health { peers: 5, is_syncing: false, should_have_peers: true },
+	);
+
+	assert_eq!(
+		api(Status { peer_id: PeerId::random(), peers: 0, is_syncing: false, is_dev: true })
+			.call::<_, Health>("system_health", EmptyParams::new())
+			.await
+			.unwrap(),
+		Health { peers: 0, is_syncing: false, should_have_peers: false },
+	);
+}
+
+#[tokio::test]
+async fn system_local_peer_id_works() {
+	assert_eq!(
+		api(None)
+			.call::<_, String>("system_localPeerId", EmptyParams::new())
+			.await
+			.unwrap(),
+		"QmSk5HQbn6LhUwDiNMseVUjuRYhEtYj4aUZ6WfWoGURpdV".to_owned()
+	);
+}
+
+#[tokio::test]
+async fn system_local_listen_addresses_works() {
+	assert_eq!(
+		api(None)
+			.call::<_, Vec<String>>("system_localListenAddresses", EmptyParams::new())
+			.await
+			.unwrap(),
+		vec![
+			"/ip4/198.51.100.19/tcp/30333/p2p/QmSk5HQbn6LhUwDiNMseVUjuRYhEtYj4aUZ6WfWoGURpdV",
+			"/ip4/127.0.0.1/tcp/30334/ws/p2p/QmSk5HQbn6LhUwDiNMseVUjuRYhEtYj4aUZ6WfWoGURpdV"
+		]
+	);
+}
+
+#[tokio::test]
+async fn system_peers() {
 	let peer_id = PeerId::random();
-	let req = api(Status { peer_id, peers: 1, is_syncing: false, is_dev: true }).system_peers();
-	let res = executor::block_on(req).unwrap();
+	let peer_info: Vec<PeerInfo<H256, u64>> =
+		api(Status { peer_id, peers: 1, is_syncing: false, is_dev: true })
+			.call("system_peers", EmptyParams::new())
+			.await
+			.unwrap();
 	assert_eq!(
-		res,
+		peer_info,
 		vec![PeerInfo {
 			peer_id: peer_id.to_base58(),
 			roles: "FULL".into(),
@@ -237,14 +267,16 @@ fn system_peers() {
 	);
 }

-#[test]
-fn system_network_state() {
-	let req = api(None).system_network_state();
-	let res = executor::block_on(req).unwrap();
+#[tokio::test]
+async fn system_network_state() {
+	use sc_network::network_state::NetworkState;
+	let network_state: NetworkState = api(None)
+		.call("system_unstable_networkState", EmptyParams::new())
+		.await
+		.unwrap();
 	assert_eq!(
-		serde_json::from_value::<sc_network::network_state::NetworkState>(res).unwrap(),
-		sc_network::network_state::NetworkState {
+		network_state,
+		NetworkState {
 			peer_id: String::new(),
 			listened_addresses: Default::default(),
 			external_addresses: Default::default(),
@@ -255,51 +287,60 @@ fn system_network_state() {
 	);
 }

-#[test]
-fn system_node_roles() {
-	assert_eq!(wait_receiver(api(None).system_node_roles()), vec![NodeRole::Authority]);
+#[tokio::test]
+async fn system_node_roles() {
+	let node_roles: Vec<NodeRole> =
+		api(None).call("system_nodeRoles", EmptyParams::new()).await.unwrap();
+	assert_eq!(node_roles, vec![NodeRole::Authority]);
 }

-#[test]
-fn system_sync_state() {
+#[tokio::test]
+async fn system_sync_state() {
+	let sync_state: SyncState<i32> =
+		api(None).call("system_syncState", EmptyParams::new()).await.unwrap();
 	assert_eq!(
-		wait_receiver(api(None).system_sync_state()),
+		sync_state,
 		SyncState { starting_block: 1, current_block: 2, highest_block: Some(3) }
 	);
 }

-#[test]
-fn system_network_add_reserved() {
+#[tokio::test]
+async fn system_network_add_reserved() {
 	let good_peer_id =
-		"/ip4/198.51.100.19/tcp/30333/p2p/QmSk5HQbn6LhUwDiNMseVUjuRYhEtYj4aUZ6WfWoGURpdV";
-	let bad_peer_id = "/ip4/198.51.100.19/tcp/30333";
-
-	let good_fut = api(None).system_add_reserved_peer(good_peer_id.into());
-	let bad_fut = api(None).system_add_reserved_peer(bad_peer_id.into());
-	assert_eq!(executor::block_on(good_fut), Ok(()));
-	assert!(executor::block_on(bad_fut).is_err());
-}
-
-#[test]
-fn system_network_remove_reserved() {
-	let good_peer_id = "QmSk5HQbn6LhUwDiNMseVUjuRYhEtYj4aUZ6WfWoGURpdV";
-	let bad_peer_id =
-		"/ip4/198.51.100.19/tcp/30333/p2p/QmSk5HQbn6LhUwDiNMseVUjuRYhEtYj4aUZ6WfWoGURpdV";
-
-	let good_fut = api(None).system_remove_reserved_peer(good_peer_id.into());
-	let bad_fut = api(None).system_remove_reserved_peer(bad_peer_id.into());
-	assert_eq!(executor::block_on(good_fut), Ok(()));
-	assert!(executor::block_on(bad_fut).is_err());
-}
-
-#[test]
-fn system_network_reserved_peers() {
-	assert_eq!(
-		wait_receiver(api(None).system_reserved_peers()),
-		vec!["QmSk5HQbn6LhUwDiNMseVUjuRYhEtYj4aUZ6WfWoGURpdV".to_string()]
+		["/ip4/198.51.100.19/tcp/30333/p2p/QmSk5HQbn6LhUwDiNMseVUjuRYhEtYj4aUZ6WfWoGURpdV"];
+	let _good: () = api(None)
+		.call("system_addReservedPeer", good_peer_id)
+		.await
+		.expect("good peer id works");
+
+	let bad_peer_id = ["/ip4/198.51.100.19/tcp/30333"];
+	assert_matches!(
+		api(None).call::<_, ()>("system_addReservedPeer", bad_peer_id).await,
+		Err(RpcError::Call(CallError::Custom(err))) if err.message().contains("Peer id is missing from the address")
+	);
+}
+
+#[tokio::test]
+async fn system_network_remove_reserved() {
+	let _good_peer: () = api(None)
+		.call("system_removeReservedPeer", ["QmSk5HQbn6LhUwDiNMseVUjuRYhEtYj4aUZ6WfWoGURpdV"])
+		.await
+		.expect("call with good peer id works");
+
+	let bad_peer_id =
+		["/ip4/198.51.100.19/tcp/30333/p2p/QmSk5HQbn6LhUwDiNMseVUjuRYhEtYj4aUZ6WfWoGURpdV"];
+	assert_matches!(
+		api(None).call::<_, String>("system_removeReservedPeer", bad_peer_id).await,
+		Err(RpcError::Call(CallError::Custom(err))) if err.message().contains("base-58 decode error: provided string contained invalid character '/' at byte 0")
+	);
+}
+
+#[tokio::test]
+async fn system_network_reserved_peers() {
+	let reserved_peers: Vec<String> =
+		api(None).call("system_reservedPeers", EmptyParams::new()).await.unwrap();
+	assert_eq!(reserved_peers, vec!["QmSk5HQbn6LhUwDiNMseVUjuRYhEtYj4aUZ6WfWoGURpdV".to_string()],);
 }

 #[test]
 fn test_add_reset_log_filter() {
 	const EXPECTED_BEFORE_ADD: &'static str = "EXPECTED_BEFORE_ADD";
@@ -315,15 +356,20 @@ fn test_add_reset_log_filter() {
 	for line in std::io::stdin().lock().lines() {
 		let line = line.expect("Failed to read bytes");
 		if line.contains("add_reload") {
-			api(None)
-				.system_add_log_filter("test_after_add".into())
-				.expect("`system_add_log_filter` failed");
+			let filter = "test_after_add";
+			let fut =
+				async move { api(None).call::<_, ()>("system_addLogFilter", [filter]).await };
+			futures::executor::block_on(fut).expect("`system_addLogFilter` failed");
 		} else if line.contains("add_trace") {
-			api(None)
-				.system_add_log_filter("test_before_add=trace".into())
-				.expect("`system_add_log_filter` failed");
+			let filter = "test_before_add=trace";
+			let fut =
+				async move { api(None).call::<_, ()>("system_addLogFilter", [filter]).await };
+			futures::executor::block_on(fut).expect("`system_addLogFilter (trace)` failed");
 		} else if line.contains("reset") {
-			api(None).system_reset_log_filter().expect("`system_reset_log_filter` failed");
+			let fut = async move {
+				api(None).call::<_, ()>("system_resetLogFilter", EmptyParams::new()).await
+			};
+			futures::executor::block_on(fut).expect("`system_resetLogFilter` failed");
 		} else if line.contains("exit") {
 			return
 		}