Async keystore + Authority-Discovery async/await (#7000)

* Asyncify sign_with

* Asyncify generate/get keys

* Complete BareCryptoStore asyncification

* Cleanup

* Rebase

* Add Proxy

* Inject keystore proxy into extensions

* Implement some methods

* Await on send

* Cleanup

* Send result over the oneshot channel sender

* Process one future at a time

* Fix cargo stuff

* Asyncify sr25519_vrf_sign

* Cherry-pick and fix changes

* Introduce SyncCryptoStore

* SQUASH ME WITH THE first commit

* Implement into SyncCryptoStore

* Implement BareCryptoStore for KeystoreProxyAdapter

* authority-discovery

* AURA

* BABE

* finality-grandpa

* offchain-workers

* benchmarking-cli

* sp_io

* test-utils

* application-crypto

* Extensions and RPC

* Client Service

* bin

* Update cargo.lock

* Implement BareCryptoStore on proxy directly

* Simplify proxy setup

* Fix authority-discovery

* Pass async keystore to authority-discovery

* Fix tests

* Use async keystore in authority-discovery

* Rename BareCryptoStore to CryptoStore

* WIP

* Remove mutable borrow in CryptoStore trait

* Implement Keystore with backends

* Remove Proxy implementation

* Fix service builder and keystore user-crates

* Fix tests

* Rework authority-discovery after refactoring

* futures::select!

* Fix multiple mut borrows in authority-discovery

* Merge fixes

* Require sync

* Restore Cargo.lock

* PR feedback - round 1

* Remove Keystore and use LocalKeystore directly

Also renamed KeystoreParams to KeystoreContainer

* Join

* Remove sync requirement

* Fix keystore tests

* Fix tests

* client/authority-discovery: Remove event stream dynamic dispatching

With authority-discovery moving from a poll-based future to an `async`
future, Rust has difficulties propagating the `Sync` trait through the
generated state machine.

Instead of using dynamic dispatching, use a trait parameter to specify
the DHT event stream.
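The mechanism at play can be sketched in a few lines of plain Rust. This is an illustrative stand-in, not the real crate's code: `Iterator` substitutes for the `Stream` trait, and `Worker`/`dht_event_rx` are names modeled loosely on the authority-discovery worker. The point is that a generic parameter lets `Send`/`Sync` auto traits of the concrete event stream flow through, whereas a boxed trait object only promises the bounds it spells out:

```rust
use std::pin::Pin;

// Dynamic dispatch: the trait object's bounds list `Send` only, so the
// compiler can no longer prove `Sync` for anything containing it.
type BoxedEvents = Pin<Box<dyn Iterator<Item = u8> + Send>>;

// Generic parameter: the concrete event-stream type stays visible, so its
// `Send`/`Sync` auto traits propagate through the containing type (and,
// analogously, through an async fn's generated state machine).
struct Worker<S> {
    dht_event_rx: S,
}

fn assert_sync<T: Sync>(_: &T) {}

fn main() {
    // A concrete iterator type that happens to be Sync:
    let worker = Worker { dht_event_rx: vec![1u8, 2, 3].into_iter() };
    assert_sync(&worker); // compiles: Worker<vec::IntoIter<u8>> is Sync

    let _boxed: BoxedEvents = Box::pin(vec![1u8].into_iter());
    // assert_sync(&_boxed); // would NOT compile: `dyn Iterator + Send`
    //                       // makes no `Sync` promise.
}
```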

* Make it compile

* Fix submit_transaction

* Fix block_on issue

* Use await in async context

* Fix manual seal keystore

* Fix authoring_blocks test

* fix aura authoring_blocks

* Try to fix tests for auth-discovery

* client/authority-discovery: Fix lookup_throttling test

* client/authority-discovery: Fix triggers_dht_get_query test

* Fix epoch_authorship_works

* client/authority-discovery: Remove timing assumption in unit test

* client/authority-discovery: Revert changes to termination test

* PR feedback

* Remove dead code and mark test code

* Fix test_sync

* Use the correct keyring type

* Return when from_service stream is closed

* Convert SyncCryptoStore to a trait

* Fix line width

* Fix line width - take 2

* Remove unused import

* Fix keystore instantiation

* PR feedback

* Remove KeystoreContainer

* Revert "Remove KeystoreContainer"

This reverts commit ea4a37c7d74f9772b93d974e05e4498af6192730.

* Take a ref of keystore

* Move keystore to dev-dependencies

* Address some PR feedback

* Missed one

* Pass keystore reference - take 2

* client/finality-grandpa: Use `Arc<dyn CryptoStore>` instead of SyncXXX

Instead of using `SyncCryptoStorePtr` within `client/finality-grandpa`,
which is a type alias for `Arc<dyn SyncCryptoStore>`, use `Arc<dyn
CryptoStore>`. Benefits are:

1. No additional mental overhead of a `SyncCryptoStorePtr`.

2. Ability for new code to use the asynchronous methods of `CryptoStore`
instead of the synchronous `SyncCryptoStore` methods within
`client/finality-grandpa` without the need for larger refactorings.

Note: This commit uses `Arc<dyn CryptoStore>` instead of
`CryptoStorePtr`, as I find the type signature more descriptive. This is
subjective and in no way required.
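The distinction is purely one of spelling, which a minimal stand-in makes concrete. The traits below are invented stand-ins with assumed shapes (the real `sp-keystore` traits carry many signing and key-generation methods); the sketch only shows that the alias and the explicit form name the same pointer type:

```rust
use std::sync::Arc;

// Hypothetical minimal stand-ins for the sp-keystore traits.
trait CryptoStore: Send + Sync {
    fn key_count(&self) -> usize;
}
trait SyncCryptoStore: CryptoStore {}

// The alias; the commit argues the explicit `Arc<dyn …>` form below is
// more descriptive for readers of new code.
type SyncCryptoStorePtr = Arc<dyn SyncCryptoStore>;

struct LocalKeystore {
    keys: usize,
}
impl CryptoStore for LocalKeystore {
    fn key_count(&self) -> usize {
        self.keys
    }
}
impl SyncCryptoStore for LocalKeystore {}

fn main() {
    // Same object, two spellings; the explicit form documents both the
    // dynamic dispatch and the trait being dispatched on.
    let explicit: Arc<dyn CryptoStore> = Arc::new(LocalKeystore { keys: 2 });
    let aliased: SyncCryptoStorePtr = Arc::new(LocalKeystore { keys: 2 });
    assert_eq!(explicit.key_count(), aliased.key_count());
}
```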

* Remove SyncCryptoStorePtr

* Remove KeystoreContainer & SyncCryptoStorePtr

* PR feedback

* *: Use CryptoStorePtr wherever possible

* *: Define SyncCryptoStore as a pure extension trait of CryptoStore

* Follow up to SyncCryptoStore extension trait

* Adjust docs for SyncCryptoStore as Ben suggested
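The "pure extension trait" shape can be sketched as follows. Everything here is an illustrative assumption, not the crate's actual code: the method names, the blanket impl, and the poll-once `block_on_ready` helper (the real keystore uses a proper executor's `block_on`, as the later commits about `async_std::task::block_on` indicate). The idea shown is that the sync trait adds no new required methods, only blocking wrappers with default bodies over the async base trait:

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Async base trait: methods return boxed futures, which is what
// #[async_trait] desugars to. Names are assumed stand-ins.
trait CryptoStore {
    fn sign<'a>(&'a self, msg: &'a [u8]) -> Pin<Box<dyn Future<Output = Vec<u8>> + 'a>>;
}

// Pure extension trait: no new required methods, only blocking wrappers
// with default bodies that drive the async methods to completion.
trait SyncCryptoStore: CryptoStore {
    fn sign_sync(&self, msg: &[u8]) -> Vec<u8> {
        block_on_ready(self.sign(msg))
    }
}
impl<T: CryptoStore + ?Sized> SyncCryptoStore for T {}

struct DummyKeystore;
impl CryptoStore for DummyKeystore {
    fn sign<'a>(&'a self, msg: &'a [u8]) -> Pin<Box<dyn Future<Output = Vec<u8>> + 'a>> {
        Box::pin(async move { msg.to_vec() }) // echo "signature" for the demo
    }
}

// Tiny poll-once driver for futures that are immediately ready; a real
// implementation would use an executor's block_on instead.
fn block_on_ready<F: Future>(fut: F) -> F::Output {
    fn raw_clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    fn raw_noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(raw_clone, raw_noop, raw_noop, raw_noop);
    let waker = unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) };
    let mut cx = Context::from_waker(&waker);
    let mut fut = Box::pin(fut);
    match fut.as_mut().poll(&mut cx) {
        Poll::Ready(out) => out,
        Poll::Pending => panic!("demo future was not immediately ready"),
    }
}

fn main() {
    let sig = DummyKeystore.sign_sync(b"payload");
    assert_eq!(sig, b"payload".to_vec());
}
```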

* Cleanup unnecessary requirements

* sp-keystore

* Use async_std::task::block_on in keystore

* Fix block_on std requirement

* Update primitives/keystore/src/lib.rs

Co-authored-by: Max Inden <mail@max-inden.de>

* Fix wasm build

* Remove unused var

* Fix wasm compilation - take 2

* Revert async-std in keystore

* Fix indent

* Fix version and copyright

* Cleanup feature = "std"

* Auth Discovery: Ignore if from_service is closed

* Max's suggestion

* Revert async-std usage for block_on

* Address PR feedback

* Fix example offchain worker build

* Address PR feedback

* Update Cargo.lock

* Move unused methods to test helper functions

* Restore accidentally deleted cargo.lock files

* Fix unused imports

Co-authored-by: Max Inden <mail@max-inden.de>
Co-authored-by: Shawn Tabrizi <shawntabrizi@gmail.com>
This commit is contained in:
Rakan Alhneiti
2020-10-08 22:56:35 +02:00
committed by GitHub
parent db8a0cafa9
commit 3aa4bfacfc
70 changed files with 2394 additions and 1762 deletions
+42 -2
@@ -682,6 +682,7 @@ dependencies = [
"sc-chain-spec",
"sc-keystore",
"sp-core",
"sp-keystore",
"structopt",
]
@@ -1541,6 +1542,7 @@ dependencies = [
"sc-service",
"sp-core",
"sp-externalities",
"sp-keystore",
"sp-runtime",
"sp-state-machine",
"structopt",
@@ -3736,6 +3738,7 @@ dependencies = [
"sp-inherents",
"sp-io",
"sp-keyring",
"sp-keystore",
"sp-runtime",
"sp-timestamp",
"sp-transaction-pool",
@@ -3776,6 +3779,7 @@ dependencies = [
"sp-core",
"sp-externalities",
"sp-io",
"sp-keystore",
"sp-runtime",
"sp-state-machine",
"sp-trie",
@@ -3836,6 +3840,7 @@ dependencies = [
"sp-blockchain",
"sp-consensus",
"sp-consensus-babe",
"sp-keystore",
"sp-runtime",
"sp-transaction-pool",
"substrate-frame-rpc-system",
@@ -4520,6 +4525,7 @@ dependencies = [
"serde",
"sp-core",
"sp-io",
"sp-keystore",
"sp-runtime",
"sp-std",
]
@@ -6251,6 +6257,7 @@ dependencies = [
"sp-authority-discovery",
"sp-blockchain",
"sp-core",
"sp-keystore",
"sp-runtime",
"sp-tracing",
"substrate-prometheus-endpoint",
@@ -6363,6 +6370,7 @@ dependencies = [
"sp-core",
"sp-io",
"sp-keyring",
"sp-keystore",
"sp-panic-handler",
"sp-runtime",
"sp-state-machine",
@@ -6403,6 +6411,7 @@ dependencies = [
"sp-externalities",
"sp-inherents",
"sp-keyring",
"sp-keystore",
"sp-runtime",
"sp-state-machine",
"sp-std",
@@ -6489,6 +6498,7 @@ dependencies = [
"sp-inherents",
"sp-io",
"sp-keyring",
"sp-keystore",
"sp-runtime",
"sp-timestamp",
"sp-tracing",
@@ -6541,6 +6551,7 @@ dependencies = [
"sp-inherents",
"sp-io",
"sp-keyring",
"sp-keystore",
"sp-runtime",
"sp-timestamp",
"sp-tracing",
@@ -6574,6 +6585,7 @@ dependencies = [
"sp-consensus-babe",
"sp-core",
"sp-keyring",
"sp-keystore",
"sp-runtime",
"substrate-test-runtime-client",
"tempfile",
@@ -6607,7 +6619,6 @@ dependencies = [
"sc-client-api",
"sc-consensus-babe",
"sc-consensus-epochs",
"sc-keystore",
"sc-transaction-pool",
"serde",
"sp-api",
@@ -6616,6 +6627,7 @@ dependencies = [
"sp-consensus-babe",
"sp-core",
"sp-inherents",
"sp-keystore",
"sp-runtime",
"sp-timestamp",
"sp-transaction-pool",
@@ -6807,6 +6819,7 @@ dependencies = [
"sp-finality-tracker",
"sp-inherents",
"sp-keyring",
"sp-keystore",
"sp-runtime",
"sp-state-machine",
"sp-tracing",
@@ -6868,7 +6881,10 @@ dependencies = [
name = "sc-keystore"
version = "2.0.0"
dependencies = [
"async-trait",
"derive_more",
"futures 0.3.5",
"futures-util",
"hex",
"merlin",
"parking_lot 0.10.2",
@@ -6876,6 +6892,7 @@ dependencies = [
"serde_json",
"sp-application-crypto",
"sp-core",
"sp-keystore",
"subtle 2.2.3",
"tempfile",
]
@@ -7084,6 +7101,7 @@ dependencies = [
"sp-chain-spec",
"sp-core",
"sp-io",
"sp-keystore",
"sp-offchain",
"sp-rpc",
"sp-runtime",
@@ -7200,6 +7218,7 @@ dependencies = [
"sp-finality-grandpa",
"sp-inherents",
"sp-io",
"sp-keystore",
"sp-runtime",
"sp-session",
"sp-state-machine",
@@ -7847,6 +7866,7 @@ dependencies = [
"sp-api",
"sp-application-crypto",
"sp-core",
"sp-keystore",
"sp-runtime",
"substrate-test-runtime-client",
]
@@ -7986,6 +8006,7 @@ dependencies = [
"sp-consensus-vrf",
"sp-core",
"sp-inherents",
"sp-keystore",
"sp-runtime",
"sp-std",
"sp-timestamp",
@@ -8029,7 +8050,6 @@ dependencies = [
"blake2-rfc",
"byteorder 1.3.4",
"criterion",
"derive_more",
"dyn-clonable",
"ed25519-dalek",
"futures 0.3.5",
@@ -8108,6 +8128,7 @@ dependencies = [
"sp-api",
"sp-application-crypto",
"sp-core",
"sp-keystore",
"sp-runtime",
"sp-std",
]
@@ -8144,6 +8165,7 @@ dependencies = [
"parking_lot 0.10.2",
"sp-core",
"sp-externalities",
"sp-keystore",
"sp-runtime-interface",
"sp-state-machine",
"sp-std",
@@ -8164,6 +8186,23 @@ dependencies = [
"strum",
]
[[package]]
name = "sp-keystore"
version = "0.8.0"
dependencies = [
"async-trait",
"derive_more",
"futures 0.3.5",
"merlin",
"parity-scale-codec",
"parking_lot 0.10.2",
"rand 0.7.3",
"rand_chacha 0.2.2",
"schnorrkel",
"sp-core",
"sp-externalities",
]
[[package]]
name = "sp-npos-elections"
version = "2.0.0"
@@ -8752,6 +8791,7 @@ dependencies = [
"sp-consensus",
"sp-core",
"sp-keyring",
"sp-keystore",
"sp-runtime",
"sp-state-machine",
]
+1
@@ -137,6 +137,7 @@ members = [
"primitives/finality-grandpa",
"primitives/inherents",
"primitives/keyring",
"primitives/keystore",
"primitives/offchain",
"primitives/panic-handler",
"primitives/npos-elections",
+11 -11
@@ -39,7 +39,7 @@ pub fn new_partial(config: &Configuration) -> Result<sc_service::PartialComponen
>, ServiceError> {
let inherent_data_providers = sp_inherents::InherentDataProviders::new();
let (client, backend, keystore, task_manager) =
let (client, backend, keystore_container, task_manager) =
sc_service::new_full_parts::<Block, RuntimeApi, Executor>(&config)?;
let client = Arc::new(client);
@@ -73,8 +73,8 @@ pub fn new_partial(config: &Configuration) -> Result<sc_service::PartialComponen
)?;
Ok(sc_service::PartialComponents {
client, backend, task_manager, import_queue, keystore, select_chain, transaction_pool,
inherent_data_providers,
client, backend, task_manager, import_queue, keystore_container,
select_chain, transaction_pool, inherent_data_providers,
other: (aura_block_import, grandpa_link),
})
}
@@ -82,8 +82,8 @@ pub fn new_partial(config: &Configuration) -> Result<sc_service::PartialComponen
/// Builds a new service for a full client.
pub fn new_full(config: Configuration) -> Result<TaskManager, ServiceError> {
let sc_service::PartialComponents {
client, backend, mut task_manager, import_queue, keystore, select_chain, transaction_pool,
inherent_data_providers,
client, backend, mut task_manager, import_queue, keystore_container,
select_chain, transaction_pool, inherent_data_providers,
other: (block_import, grandpa_link),
} = new_partial(&config)?;
@@ -134,11 +134,11 @@ pub fn new_full(config: Configuration) -> Result<TaskManager, ServiceError> {
sc_service::spawn_tasks(sc_service::SpawnTasksParams {
network: network.clone(),
client: client.clone(),
keystore: keystore.clone(),
keystore: keystore_container.sync_keystore(),
task_manager: &mut task_manager,
transaction_pool: transaction_pool.clone(),
telemetry_connection_sinks: telemetry_connection_sinks.clone(),
rpc_extensions_builder: rpc_extensions_builder,
rpc_extensions_builder,
on_demand: None,
remote_blockchain: None,
backend, network_status_sinks, system_rpc_tx, config,
@@ -163,7 +163,7 @@ pub fn new_full(config: Configuration) -> Result<TaskManager, ServiceError> {
network.clone(),
inherent_data_providers.clone(),
force_authoring,
keystore.clone(),
keystore_container.sync_keystore(),
can_author_with,
)?;
@@ -175,7 +175,7 @@ pub fn new_full(config: Configuration) -> Result<TaskManager, ServiceError> {
// if the node isn't actively participating in consensus then it doesn't
// need a keystore, regardless of which protocol we use below.
let keystore = if role.is_authority() {
Some(keystore as sp_core::traits::BareCryptoStorePtr)
Some(keystore_container.sync_keystore())
} else {
None
};
@@ -228,7 +228,7 @@ pub fn new_full(config: Configuration) -> Result<TaskManager, ServiceError> {
/// Builds a new service for a light client.
pub fn new_light(config: Configuration) -> Result<TaskManager, ServiceError> {
let (client, backend, keystore, mut task_manager, on_demand) =
let (client, backend, keystore_container, mut task_manager, on_demand) =
sc_service::new_light_parts::<Block, RuntimeApi, Executor>(&config)?;
let transaction_pool = Arc::new(sc_transaction_pool::BasicPool::new_light(
@@ -290,7 +290,7 @@ pub fn new_light(config: Configuration) -> Result<TaskManager, ServiceError> {
telemetry_connection_sinks: sc_service::TelemetryConnectionSinks::default(),
config,
client,
keystore,
keystore: keystore_container.sync_keystore(),
backend,
network,
network_status_sinks,
+1
@@ -54,6 +54,7 @@ sp-timestamp = { version = "2.0.0", default-features = false, path = "../../../p
sp-finality-tracker = { version = "2.0.0", default-features = false, path = "../../../primitives/finality-tracker" }
sp-inherents = { version = "2.0.0", path = "../../../primitives/inherents" }
sp-keyring = { version = "2.0.0", path = "../../../primitives/keyring" }
sp-keystore = { version = "0.8.0", path = "../../../primitives/keystore" }
sp-io = { version = "2.0.0", path = "../../../primitives/io" }
sp-consensus = { version = "0.8.0", path = "../../../primitives/consensus/common" }
sp-transaction-pool = { version = "2.0.0", path = "../../../primitives/transaction-pool" }
+39 -26
@@ -34,7 +34,6 @@ use sc_network::{Event, NetworkService};
use sp_runtime::traits::Block as BlockT;
use futures::prelude::*;
use sc_client_api::{ExecutorProvider, RemoteBackend};
use sp_core::traits::BareCryptoStorePtr;
use node_executor::Executor;
type FullClient = sc_service::TFullClient<Block, RuntimeApi, Executor>;
@@ -64,7 +63,7 @@ pub fn new_partial(config: &Configuration) -> Result<sc_service::PartialComponen
),
)
>, ServiceError> {
let (client, backend, keystore, task_manager) =
let (client, backend, keystore_container, task_manager) =
sc_service::new_full_parts::<Block, RuntimeApi, Executor>(&config)?;
let client = Arc::new(client);
@@ -122,7 +121,7 @@ pub fn new_partial(config: &Configuration) -> Result<sc_service::PartialComponen
let client = client.clone();
let pool = transaction_pool.clone();
let select_chain = select_chain.clone();
let keystore = keystore.clone();
let keystore = keystore_container.sync_keystore();
let rpc_extensions_builder = move |deny_unsafe, subscription_executor| {
let deps = node_rpc::FullDeps {
@@ -151,8 +150,8 @@ pub fn new_partial(config: &Configuration) -> Result<sc_service::PartialComponen
};
Ok(sc_service::PartialComponents {
client, backend, task_manager, keystore, select_chain, import_queue, transaction_pool,
inherent_data_providers,
client, backend, task_manager, keystore_container,
select_chain, import_queue, transaction_pool, inherent_data_providers,
other: (rpc_extensions_builder, import_setup, rpc_setup)
})
}
@@ -175,8 +174,8 @@ pub fn new_full_base(
)
) -> Result<NewFullBase, ServiceError> {
let sc_service::PartialComponents {
client, backend, mut task_manager, import_queue, keystore, select_chain, transaction_pool,
inherent_data_providers,
client, backend, mut task_manager, import_queue, keystore_container,
select_chain, transaction_pool, inherent_data_providers,
other: (rpc_extensions_builder, import_setup, rpc_setup),
} = new_partial(&config)?;
@@ -212,7 +211,7 @@ pub fn new_full_base(
config,
backend: backend.clone(),
client: client.clone(),
keystore: keystore.clone(),
keystore: keystore_container.sync_keystore(),
network: network.clone(),
rpc_extensions_builder: Box::new(rpc_extensions_builder),
transaction_pool: transaction_pool.clone(),
@@ -239,7 +238,7 @@ pub fn new_full_base(
sp_consensus::CanAuthorWithNativeVersion::new(client.executor().clone());
let babe_config = sc_consensus_babe::BabeParams {
keystore: keystore.clone(),
keystore: keystore_container.sync_keystore(),
client: client.clone(),
select_chain,
env: proposer,
@@ -261,7 +260,7 @@ pub fn new_full_base(
sc_service::config::Role::Authority { ref sentry_nodes } => (
sentry_nodes.clone(),
sc_authority_discovery::Role::Authority (
keystore.clone(),
keystore_container.keystore(),
),
),
sc_service::config::Role::Sentry {..} => (
@@ -275,23 +274,23 @@ pub fn new_full_base(
.filter_map(|e| async move { match e {
Event::Dht(e) => Some(e),
_ => None,
}}).boxed();
}});
let (authority_discovery_worker, _service) = sc_authority_discovery::new_worker_and_service(
client.clone(),
network.clone(),
sentries,
dht_event_stream,
Box::pin(dht_event_stream),
authority_discovery_role,
prometheus_registry.clone(),
);
task_manager.spawn_handle().spawn("authority-discovery-worker", authority_discovery_worker);
task_manager.spawn_handle().spawn("authority-discovery-worker", authority_discovery_worker.run());
}
// if the node isn't actively participating in consensus then it doesn't
// need a keystore, regardless of which protocol we use below.
let keystore = if role.is_authority() {
Some(keystore as BareCryptoStorePtr)
Some(keystore_container.sync_keystore())
} else {
None
};
@@ -358,7 +357,7 @@ pub fn new_light_base(config: Configuration) -> Result<(
Arc<NetworkService<Block, <Block as BlockT>::Hash>>,
Arc<sc_transaction_pool::LightPool<Block, LightClient, sc_network::config::OnDemand<Block>>>
), ServiceError> {
let (client, backend, keystore, mut task_manager, on_demand) =
let (client, backend, keystore_container, mut task_manager, on_demand) =
sc_service::new_light_parts::<Block, RuntimeApi, Executor>(&config)?;
let select_chain = sc_consensus::LongestChain::new(backend.clone());
@@ -440,7 +439,8 @@ pub fn new_light_base(config: Configuration) -> Result<(
rpc_extensions_builder: Box::new(sc_service::NoopRpcExtensionBuilder(rpc_extensions)),
client: client.clone(),
transaction_pool: transaction_pool.clone(),
config, keystore, backend, network_status_sinks, system_rpc_tx,
keystore: keystore_container.sync_keystore(),
config, backend, network_status_sinks, system_rpc_tx,
network: network.clone(),
telemetry_connection_sinks: sc_service::TelemetryConnectionSinks::default(),
task_manager: &mut task_manager,
@@ -458,7 +458,7 @@ pub fn new_light(config: Configuration) -> Result<TaskManager, ServiceError> {
#[cfg(test)]
mod tests {
use std::{sync::Arc, borrow::Cow, any::Any};
use std::{sync::Arc, borrow::Cow, any::Any, convert::TryInto};
use sc_consensus_babe::{CompatibleDigestItem, BabeIntermediate, INTERMEDIATE_KEY};
use sc_consensus_epochs::descendent_query;
use sp_consensus::{
@@ -469,7 +469,12 @@ mod tests {
use node_runtime::{BalancesCall, Call, UncheckedExtrinsic, Address};
use node_runtime::constants::{currency::CENTS, time::SLOT_DURATION};
use codec::Encode;
use sp_core::{crypto::Pair as CryptoPair, H256};
use sp_core::{
crypto::Pair as CryptoPair,
H256,
Public
};
use sp_keystore::{SyncCryptoStorePtr, SyncCryptoStore};
use sp_runtime::{
generic::{BlockId, Era, Digest, SignedPayload},
traits::{Block as BlockT, Header as HeaderT},
@@ -480,9 +485,10 @@ mod tests {
use sp_keyring::AccountKeyring;
use sc_service_test::TestNetNode;
use crate::service::{new_full_base, new_light_base, NewFullBase};
use sp_runtime::traits::IdentifyAccount;
use sp_runtime::{key_types::BABE, traits::IdentifyAccount, RuntimeAppPublic};
use sp_transaction_pool::{MaintainedTransactionPool, ChainEvent};
use sc_client_api::BlockBackend;
use sc_keystore::LocalKeystore;
type AccountPublic = <Signature as Verify>::Signer;
@@ -492,10 +498,10 @@ mod tests {
#[ignore]
fn test_sync() {
let keystore_path = tempfile::tempdir().expect("Creates keystore path");
let keystore = sc_keystore::Store::open(keystore_path.path(), None)
.expect("Creates keystore");
let alice = keystore.write().insert_ephemeral_from_seed::<sc_consensus_babe::AuthorityPair>("//Alice")
.expect("Creates authority pair");
let keystore: SyncCryptoStorePtr = Arc::new(LocalKeystore::open(keystore_path.path(), None)
.expect("Creates keystore"));
let alice: sp_consensus_babe::AuthorityId = SyncCryptoStore::sr25519_generate_new(&*keystore, BABE, Some("//Alice"))
.expect("Creates authority pair").into();
let chain_spec = crate::chain_spec::tests::integration_test_config_with_single_authority();
@@ -574,7 +580,7 @@ mod tests {
slot_num,
&parent_header,
&*service.client(),
&keystore,
keystore.clone(),
&babe_link,
) {
break babe_pre_digest;
@@ -600,9 +606,16 @@ mod tests {
// sign the pre-sealed hash of the block and then
// add it to a digest item.
let to_sign = pre_hash.encode();
let signature = alice.sign(&to_sign[..]);
let signature = SyncCryptoStore::sign_with(
&*keystore,
sp_consensus_babe::AuthorityId::ID,
&alice.to_public_crypto_pair(),
&to_sign,
).unwrap()
.try_into()
.unwrap();
let item = <DigestItem as CompatibleDigestItem>::babe_seal(
signature.into(),
signature,
);
slot_num += 1;
+1
@@ -17,6 +17,7 @@ node-primitives = { version = "2.0.0", path = "../primitives" }
node-runtime = { version = "2.0.0", path = "../runtime" }
sc-executor = { version = "0.8.0", path = "../../../client/executor" }
sp-core = { version = "2.0.0", path = "../../../primitives/core" }
sp-keystore = { version = "0.8.0", path = "../../../primitives/keystore" }
sp-io = { version = "2.0.0", path = "../../../primitives/io" }
sp-state-machine = { version = "0.8.0", path = "../../../primitives/state-machine" }
sp-trie = { version = "2.0.0", path = "../../../primitives/trie" }
@@ -15,18 +15,18 @@
// See the License for the specific language governing permissions and
// limitations under the License.
use std::sync::Arc;
use node_runtime::{
Executive, Indices, Runtime, UncheckedExtrinsic,
};
use sp_application_crypto::AppKey;
use sp_core::testing::KeyStore;
use sp_core::{
offchain::{
TransactionPoolExt,
testing::TestTransactionPoolExt,
},
traits::KeystoreExt,
};
use sp_keystore::{KeystoreExt, SyncCryptoStore, testing::KeyStore};
use frame_system::{
offchain::{
Signer,
@@ -72,10 +72,22 @@ fn should_submit_signed_transaction() {
t.register_extension(TransactionPoolExt::new(pool));
let keystore = KeyStore::new();
keystore.write().sr25519_generate_new(sr25519::AuthorityId::ID, Some(&format!("{}/hunter1", PHRASE))).unwrap();
keystore.write().sr25519_generate_new(sr25519::AuthorityId::ID, Some(&format!("{}/hunter2", PHRASE))).unwrap();
keystore.write().sr25519_generate_new(sr25519::AuthorityId::ID, Some(&format!("{}/hunter3", PHRASE))).unwrap();
t.register_extension(KeystoreExt(keystore));
SyncCryptoStore::sr25519_generate_new(
&keystore,
sr25519::AuthorityId::ID,
Some(&format!("{}/hunter1", PHRASE))
).unwrap();
SyncCryptoStore::sr25519_generate_new(
&keystore,
sr25519::AuthorityId::ID,
Some(&format!("{}/hunter2", PHRASE))
).unwrap();
SyncCryptoStore::sr25519_generate_new(
&keystore,
sr25519::AuthorityId::ID,
Some(&format!("{}/hunter3", PHRASE))
).unwrap();
t.register_extension(KeystoreExt(Arc::new(keystore)));
t.execute_with(|| {
let results = Signer::<Runtime, TestAuthorityId>::all_accounts()
@@ -97,9 +109,17 @@ fn should_submit_signed_twice_from_the_same_account() {
t.register_extension(TransactionPoolExt::new(pool));
let keystore = KeyStore::new();
keystore.write().sr25519_generate_new(sr25519::AuthorityId::ID, Some(&format!("{}/hunter1", PHRASE))).unwrap();
keystore.write().sr25519_generate_new(sr25519::AuthorityId::ID, Some(&format!("{}/hunter2", PHRASE))).unwrap();
t.register_extension(KeystoreExt(keystore));
SyncCryptoStore::sr25519_generate_new(
&keystore,
sr25519::AuthorityId::ID,
Some(&format!("{}/hunter1", PHRASE))
).unwrap();
SyncCryptoStore::sr25519_generate_new(
&keystore,
sr25519::AuthorityId::ID,
Some(&format!("{}/hunter2", PHRASE))
).unwrap();
t.register_extension(KeystoreExt(Arc::new(keystore)));
t.execute_with(|| {
let result = Signer::<Runtime, TestAuthorityId>::any_account()
@@ -141,9 +161,15 @@ fn should_submit_signed_twice_from_all_accounts() {
t.register_extension(TransactionPoolExt::new(pool));
let keystore = KeyStore::new();
keystore.write().sr25519_generate_new(sr25519::AuthorityId::ID, Some(&format!("{}/hunter1", PHRASE))).unwrap();
keystore.write().sr25519_generate_new(sr25519::AuthorityId::ID, Some(&format!("{}/hunter2", PHRASE))).unwrap();
t.register_extension(KeystoreExt(keystore));
keystore.sr25519_generate_new(
sr25519::AuthorityId::ID,
Some(&format!("{}/hunter1", PHRASE))
).unwrap();
keystore.sr25519_generate_new(
sr25519::AuthorityId::ID,
Some(&format!("{}/hunter2", PHRASE))
).unwrap();
t.register_extension(KeystoreExt(Arc::new(keystore)));
t.execute_with(|| {
let results = Signer::<Runtime, TestAuthorityId>::all_accounts()
@@ -200,8 +226,11 @@ fn submitted_transaction_should_be_valid() {
t.register_extension(TransactionPoolExt::new(pool));
let keystore = KeyStore::new();
keystore.write().sr25519_generate_new(sr25519::AuthorityId::ID, Some(&format!("{}/hunter1", PHRASE))).unwrap();
t.register_extension(KeystoreExt(keystore));
SyncCryptoStore::sr25519_generate_new(
&keystore,
sr25519::AuthorityId::ID, Some(&format!("{}/hunter1", PHRASE))
).unwrap();
t.register_extension(KeystoreExt(Arc::new(keystore)));
t.execute_with(|| {
let results = Signer::<Runtime, TestAuthorityId>::all_accounts()
+1
@@ -28,6 +28,7 @@ sc-rpc = { version = "2.0.0", path = "../../../client/rpc" }
sp-api = { version = "2.0.0", path = "../../../primitives/api" }
sp-block-builder = { version = "2.0.0", path = "../../../primitives/block-builder" }
sp-blockchain = { version = "2.0.0", path = "../../../primitives/blockchain" }
sp-keystore = { version = "0.8.0", path = "../../../primitives/keystore" }
sp-consensus = { version = "0.8.0", path = "../../../primitives/consensus/common" }
sp-consensus-babe = { version = "0.8.0", path = "../../../primitives/consensus/babe" }
sp-runtime = { version = "2.0.0", path = "../../../primitives/runtime" }
+2 -2
@@ -32,6 +32,7 @@
use std::sync::Arc;
use sp_keystore::SyncCryptoStorePtr;
use node_primitives::{Block, BlockNumber, AccountId, Index, Balance, Hash};
use sc_consensus_babe::{Config, Epoch};
use sc_consensus_babe_rpc::BabeRpcHandler;
@@ -40,7 +41,6 @@ use sc_finality_grandpa::{
SharedVoterState, SharedAuthoritySet, FinalityProofProvider, GrandpaJustificationStream
};
use sc_finality_grandpa_rpc::GrandpaRpcHandler;
use sc_keystore::KeyStorePtr;
pub use sc_rpc_api::DenyUnsafe;
use sp_api::ProvideRuntimeApi;
use sp_block_builder::BlockBuilder;
@@ -69,7 +69,7 @@ pub struct BabeDeps {
/// BABE pending epoch changes.
pub shared_epoch_changes: SharedEpochChanges<Block, Epoch>,
/// The keystore that manages the keys of the node.
pub keystore: KeyStorePtr,
pub keystore: SyncCryptoStorePtr,
}
/// Extra dependencies for GRANDPA
@@ -18,5 +18,6 @@ sc-keystore = { version = "2.0.0", path = "../../../client/keystore" }
sc-chain-spec = { version = "2.0.0", path = "../../../client/chain-spec" }
node-cli = { version = "2.0.0", path = "../../node/cli" }
sp-core = { version = "2.0.0", path = "../../../primitives/core" }
sp-keystore = { version = "0.8.0", path = "../../../primitives/keystore" }
rand = "0.7.2"
structopt = "0.3.8"
@@ -16,15 +16,19 @@
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
use std::{fs, path::{Path, PathBuf}};
use std::{fs, path::{Path, PathBuf}, sync::Arc};
use ansi_term::Style;
use rand::{Rng, distributions::Alphanumeric, rngs::OsRng};
use structopt::StructOpt;
use sc_keystore::{Store as Keystore};
use sc_keystore::LocalKeystore;
use node_cli::chain_spec::{self, AccountId};
use sp_core::{sr25519, crypto::{Public, Ss58Codec}, traits::BareCryptoStore};
use sp_core::{
sr25519,
crypto::{Public, Ss58Codec},
};
use sp_keystore::{SyncCryptoStorePtr, SyncCryptoStore};
/// A utility to easily create a testnet chain spec definition with a given set
/// of authorities and endowed accounts and/or generate random accounts.
@@ -139,16 +143,17 @@ fn generate_authority_keys_and_store(
keystore_path: &Path,
) -> Result<(), String> {
for (n, seed) in seeds.into_iter().enumerate() {
let keystore = Keystore::open(
let keystore: SyncCryptoStorePtr = Arc::new(LocalKeystore::open(
keystore_path.join(format!("auth-{}", n)),
None,
).map_err(|err| err.to_string())?;
).map_err(|err| err.to_string())?);
let (_, _, grandpa, babe, im_online, authority_discovery) =
chain_spec::authority_keys_from_seed(seed);
let insert_key = |key_type, public| {
keystore.write().insert_unknown(
SyncCryptoStore::insert_unknown(
&*keystore,
key_type,
&format!("//{}", seed),
public,
+1
@@ -32,6 +32,7 @@ parking_lot = "0.10.0"
lazy_static = "1.4.0"
sp-database = { version = "2.0.0", path = "../../primitives/database" }
sp-core = { version = "2.0.0", default-features = false, path = "../../primitives/core" }
sp-keystore = { version = "0.8.0", default-features = false, path = "../../primitives/keystore" }
sp-std = { version = "2.0.0", default-features = false, path = "../../primitives/std" }
sp-version = { version = "2.0.0", default-features = false, path = "../../primitives/version" }
sp-api = { version = "2.0.0", path = "../../primitives/api" }
@@ -25,8 +25,8 @@ use codec::Decode;
use sp_core::{
ExecutionContext,
offchain::{self, OffchainExt, TransactionPoolExt},
traits::{BareCryptoStorePtr, KeystoreExt},
};
use sp_keystore::{KeystoreExt, SyncCryptoStorePtr};
use sp_runtime::{
generic::BlockId,
traits,
@@ -81,7 +81,7 @@ impl ExtensionsFactory for () {
/// for each call, based on required `Capabilities`.
pub struct ExecutionExtensions<Block: traits::Block> {
strategies: ExecutionStrategies,
keystore: Option<BareCryptoStorePtr>,
keystore: Option<SyncCryptoStorePtr>,
// FIXME: these two are only RwLock because of https://github.com/paritytech/substrate/issues/4587
// remove when fixed.
// To break retain cycle between `Client` and `TransactionPool` we require this
@@ -107,11 +107,16 @@ impl<Block: traits::Block> ExecutionExtensions<Block> {
/// Create new `ExecutionExtensions` given a `keystore` and `ExecutionStrategies`.
pub fn new(
strategies: ExecutionStrategies,
keystore: Option<BareCryptoStorePtr>,
keystore: Option<SyncCryptoStorePtr>,
) -> Self {
let transaction_pool = RwLock::new(None);
let extensions_factory = Box::new(());
Self { strategies, keystore, extensions_factory: RwLock::new(extensions_factory), transaction_pool }
Self {
strategies,
keystore,
extensions_factory: RwLock::new(extensions_factory),
transaction_pool,
}
}
/// Get a reference to the execution strategies.
@@ -161,7 +166,7 @@ impl<Block: traits::Block> ExecutionExtensions<Block> {
let mut extensions = self.extensions_factory.read().extensions_for(capabilities);
if capabilities.has(offchain::Capability::Keystore) {
if let Some(keystore) = self.keystore.as_ref() {
if let Some(ref keystore) = self.keystore {
extensions.register(KeystoreExt(keystore.clone()));
}
}
@@ -35,6 +35,7 @@ serde_json = "1.0.41"
sp-authority-discovery = { version = "2.0.0", path = "../../primitives/authority-discovery" }
sp-blockchain = { version = "2.0.0", path = "../../primitives/blockchain" }
sp-core = { version = "2.0.0", path = "../../primitives/core" }
sp-keystore = { version = "0.8.0", path = "../../primitives/keystore" }
sp-runtime = { version = "2.0.0", path = "../../primitives/runtime" }
sp-api = { version = "2.0.0", path = "../../primitives/api" }
@@ -15,7 +15,7 @@
// along with Substrate. If not, see <http://www.gnu.org/licenses/>.
#![warn(missing_docs)]
#![recursion_limit = "1024"]
//! Substrate authority discovery.
//!
//! This crate enables Substrate authorities to discover and directly connect to
@@ -26,7 +26,6 @@
pub use crate::{service::Service, worker::{NetworkProvider, Worker, Role}};
use std::pin::Pin;
use std::sync::Arc;
use futures::channel::{mpsc, oneshot};
@@ -45,19 +44,20 @@ mod tests;
mod worker;
/// Create a new authority discovery [`Worker`] and [`Service`].
pub fn new_worker_and_service<Client, Network, Block>(
pub fn new_worker_and_service<Client, Network, Block, DhtEventStream>(
client: Arc<Client>,
network: Arc<Network>,
sentry_nodes: Vec<MultiaddrWithPeerId>,
dht_event_rx: Pin<Box<dyn Stream<Item = DhtEvent> + Send>>,
dht_event_rx: DhtEventStream,
role: Role,
prometheus_registry: Option<prometheus_endpoint::Registry>,
) -> (Worker<Client, Network, Block, DhtEventStream>, Service)
where
Block: BlockT + Unpin + 'static,
Network: NetworkProvider,
Client: ProvideRuntimeApi<Block> + Send + Sync + 'static + HeaderBackend<Block>,
<Client as ProvideRuntimeApi<Block>>::Api: AuthorityDiscoveryApi<Block, Error = sp_blockchain::Error>,
DhtEventStream: Stream<Item = DhtEvent> + Unpin,
{
let (to_worker, from_service) = mpsc::channel(0);
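`new_worker_and_service` wires the two halves together over `mpsc::channel(0)`: the `Service` sends a message that carries its own reply sender, and the `Worker` answers on it. A std-only sketch of this request/response-over-channels pattern, using threads in place of futures and a toy message type modelled loosely on `ServicetoWorkerMsg`:

```rust
use std::sync::mpsc;
use std::thread;

// Toy analogue of `ServicetoWorkerMsg`: a query plus a sender for the reply.
enum ServiceToWorkerMsg {
    GetAddresses(String, mpsc::Sender<Vec<String>>),
}

fn spawn_worker(rx: mpsc::Receiver<ServiceToWorkerMsg>) -> thread::JoinHandle<()> {
    thread::spawn(move || {
        for msg in rx {
            match msg {
                ServiceToWorkerMsg::GetAddresses(authority, reply) => {
                    // A real worker would consult its address cache here.
                    let _ = reply.send(vec![format!("/dns/{}/tcp/30333", authority)]);
                }
            }
        }
    })
}

fn demo() -> Vec<String> {
    let (to_worker, from_service) = mpsc::channel();
    let worker = spawn_worker(from_service);

    // The "service" side: embed a reply sender in the request.
    let (reply_tx, reply_rx) = mpsc::channel();
    to_worker
        .send(ServiceToWorkerMsg::GetAddresses("alice".to_string(), reply_tx))
        .unwrap();
    let addresses = reply_rx.recv().unwrap();

    drop(to_worker); // Closing the service side ends the worker loop.
    worker.join().unwrap();
    addresses
}

fn main() {
    assert_eq!(demo(), vec!["/dns/alice/tcp/30333".to_string()]);
}
```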
@@ -19,28 +19,29 @@
use crate::{new_worker_and_service, worker::{tests::{TestApi, TestNetwork}, Role}};
use std::sync::Arc;
use futures::prelude::*;
use futures::{channel::mpsc::channel, executor::LocalPool, task::LocalSpawn};
use libp2p::core::{multiaddr::{Multiaddr, Protocol}, PeerId};
use sp_authority_discovery::AuthorityId;
use sp_core::crypto::key_types;
use sp_keystore::{CryptoStore, testing::KeyStore};
#[test]
fn get_addresses_and_authority_id() {
let (_dht_event_tx, dht_event_rx) = channel(0);
let network: Arc<TestNetwork> = Arc::new(Default::default());
let mut pool = LocalPool::new();
let key_store = KeyStore::new();
let remote_authority_id: AuthorityId = pool.run_until(async {
	key_store
		.sr25519_generate_new(key_types::AUTHORITY_DISCOVERY, None)
		.await
		.unwrap()
		.into()
});
let remote_peer_id = PeerId::random();
let remote_addr = "/ip6/2001:db8:0:0:0:0:0:2/tcp/30333".parse::<Multiaddr>()
@@ -55,15 +56,13 @@ fn get_addresses_and_authority_id() {
test_api,
network.clone(),
vec![],
Box::pin(dht_event_rx),
Role::Authority(key_store.into()),
None,
);
worker.inject_addresses(remote_authority_id.clone(), vec![remote_addr.clone()]);
pool.spawner().spawn_local_obj(Box::pin(worker.run()).into()).unwrap();
pool.run_until(async {
assert_eq!(
@@ -19,13 +19,11 @@ use crate::{error::{Error, Result}, ServicetoWorkerMsg};
use std::collections::{HashMap, HashSet};
use std::convert::TryInto;
use std::marker::PhantomData;
use std::pin::Pin;
use std::sync::Arc;
use std::time::{Duration, Instant};
use futures::channel::mpsc;
use futures::{FutureExt, Stream, StreamExt, stream::Fuse};
use futures_timer::Delay;
use addr_cache::AddrCache;
@@ -47,7 +45,7 @@ use sc_network::{
};
use sp_authority_discovery::{AuthorityDiscoveryApi, AuthorityId, AuthoritySignature, AuthorityPair};
use sp_core::crypto::{key_types, Pair};
use sp_keystore::CryptoStore;
use sp_runtime::{traits::Block as BlockT, generic::BlockId};
use sp_api::ProvideRuntimeApi;
@@ -77,7 +75,7 @@ const MAX_IN_FLIGHT_LOOKUPS: usize = 8;
/// Role an authority discovery module can run as.
pub enum Role {
/// Actual authority as well as a reference to its key store.
Authority(Arc<dyn CryptoStore>),
/// Sentry node that guards an authority.
///
/// No reference to its key store needed, as sentry nodes don't have an identity to sign
@@ -115,7 +113,7 @@ pub enum Role {
/// When run as a sentry node, the [`Worker`] does not publish
/// any addresses to the DHT but still discovers validators and sentry nodes of
/// validators, i.e. only step 2 (Discovers other authorities) is executed.
pub struct Worker<Client, Network, Block, DhtEventStream>
where
Block: BlockT + 'static,
Network: NetworkProvider,
@@ -137,7 +135,7 @@ where
// - Some(vec![a, b, c, ...]): Valid addresses were specified.
sentry_nodes: Option<Vec<Multiaddr>>,
/// Channel we receive Dht events on.
dht_event_rx: DhtEventStream,
/// Interval to be proactive, publishing own addresses.
publish_interval: Interval,
@@ -161,14 +159,14 @@ where
phantom: PhantomData<Block>,
}
impl<Client, Network, Block, DhtEventStream> Worker<Client, Network, Block, DhtEventStream>
where
Block: BlockT + Unpin + 'static,
Network: NetworkProvider,
Client: ProvideRuntimeApi<Block> + Send + Sync + 'static + HeaderBackend<Block>,
<Client as ProvideRuntimeApi<Block>>::Api:
AuthorityDiscoveryApi<Block, Error = sp_blockchain::Error>,
DhtEventStream: Stream<Item = DhtEvent> + Unpin,
{
/// Return a new [`Worker`].
///
@@ -179,7 +177,7 @@ where
client: Arc<Client>,
network: Arc<Network>,
sentry_nodes: Vec<MultiaddrWithPeerId>,
dht_event_rx: DhtEventStream,
role: Role,
prometheus_registry: Option<prometheus_endpoint::Registry>,
) -> Self {
@@ -247,6 +245,72 @@ where
}
}
/// Start the worker
pub async fn run(mut self) {
loop {
self.start_new_lookups();
futures::select! {
// Process incoming events.
event = self.dht_event_rx.next().fuse() => {
if let Some(event) = event {
self.handle_dht_event(event).await;
} else {
// This point is reached if the network has shut down, at which point there is not
// much else to do than to shut down the authority discovery as well.
return;
}
},
// Handle messages from [`Service`]. Ignore if sender side is closed.
msg = self.from_service.select_next_some() => {
self.process_message_from_service(msg);
},
// Set peerset priority group to a new random set of addresses.
_ = self.priority_group_set_interval.next().fuse() => {
if let Err(e) = self.set_priority_group() {
error!(
target: LOG_TARGET,
"Failed to set priority group: {:?}", e,
);
}
},
// Publish own addresses.
_ = self.publish_interval.next().fuse() => {
if let Err(e) = self.publish_ext_addresses().await {
error!(
target: LOG_TARGET,
"Failed to publish external addresses: {:?}", e,
);
}
},
// Request addresses of authorities.
_ = self.query_interval.next().fuse() => {
if let Err(e) = self.refill_pending_lookups_queue().await {
error!(
target: LOG_TARGET,
"Failed to request addresses of authorities: {:?}", e,
);
}
},
}
}
}
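The `run` loop above multiplexes four sources (the DHT event stream, service messages, and two timers) with `futures::select!` and terminates when the event stream ends. The same shape can be sketched without an async runtime by merging the sources into one channel and dispatching on a tagged enum (a simplification: `select!` polls the sources directly rather than merging them; the names below are illustrative):

```rust
use std::sync::mpsc;
use std::thread;

// Tagged union of everything the worker loop reacts to.
enum WorkerInput {
    DhtEvent(String),
    ServiceMsg(String),
    PublishTick,
}

// Drain inputs until the channel closes, counting how each kind was handled.
fn run(rx: mpsc::Receiver<WorkerInput>) -> (usize, usize, usize) {
    let (mut events, mut msgs, mut ticks) = (0, 0, 0);
    for input in rx {
        match input {
            WorkerInput::DhtEvent(_) => events += 1,
            WorkerInput::ServiceMsg(_) => msgs += 1,
            WorkerInput::PublishTick => ticks += 1,
        }
    }
    // Channel closed: the event sources are gone, so the worker stops.
    (events, msgs, ticks)
}

fn demo() -> (usize, usize, usize) {
    let (tx, rx) = mpsc::channel();
    let worker = thread::spawn(move || run(rx));

    tx.send(WorkerInput::DhtEvent("value_found".into())).unwrap();
    tx.send(WorkerInput::PublishTick).unwrap();
    tx.send(WorkerInput::ServiceMsg("get_addresses".into())).unwrap();
    drop(tx); // Terminates the loop, like the DHT stream ending.

    worker.join().unwrap()
}

fn main() {
    assert_eq!(demo(), (1, 1, 1));
}
```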
fn process_message_from_service(&self, msg: ServicetoWorkerMsg) {
match msg {
ServicetoWorkerMsg::GetAddressesByAuthorityId(authority, sender) => {
let _ = sender.send(
self.addr_cache.get_addresses_by_authority_id(&authority).map(Clone::clone),
);
}
ServicetoWorkerMsg::GetAuthorityIdByPeerId(peer_id, sender) => {
let _ = sender.send(
self.addr_cache.get_authority_id_by_peer_id(&peer_id).map(Clone::clone),
);
}
}
}
fn addresses_to_publish(&self) -> impl ExactSizeIterator<Item = Multiaddr> {
match &self.sentry_nodes {
Some(addrs) => Either::Left(addrs.clone().into_iter()),
@@ -268,7 +332,7 @@ where
}
/// Publish either our own or if specified the public addresses of our sentry nodes.
async fn publish_ext_addresses(&mut self) -> Result<()> {
let key_store = match &self.role {
Role::Authority(key_store) => key_store,
// Only authority nodes can put addresses (their own or the ones of their sentry nodes)
@@ -291,18 +355,16 @@ where
.encode(&mut serialized_addresses)
.map_err(Error::EncodingProto)?;
let keys = Worker::<Client, Network, Block, DhtEventStream>::get_own_public_keys_within_authority_set(
	key_store.clone(),
	self.client.as_ref(),
).await?.into_iter().map(Into::into).collect::<Vec<_>>();
let signatures = key_store.sign_with_all(
	key_types::AUTHORITY_DISCOVERY,
	keys.clone(),
	serialized_addresses.as_slice(),
).await.map_err(|_| Error::Signing)?;
for (sign_result, key) in signatures.into_iter().zip(keys) {
let mut signed_addresses = vec![];
@@ -327,15 +389,14 @@ where
Ok(())
}
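`sign_with_all` returns one result per key, and the loop above pairs each result back up with its key via `zip`. A toy illustration of that pairing (the "signer" here is a stand-in that reverses the payload, not the real keystore API):

```rust
// Stand-in signer: "signs" by tagging the reversed payload with the key name.
fn sign(key: &str, payload: &str) -> Result<String, ()> {
    if key.is_empty() {
        return Err(());
    }
    Ok(format!("{}:{}", key, payload.chars().rev().collect::<String>()))
}

// Sign one payload with every key, then pair each result with its key,
// mirroring `signatures.into_iter().zip(keys)` above.
fn sign_with_all(keys: &[&str], payload: &str) -> Vec<(String, Result<String, ()>)> {
    keys.iter()
        .map(|k| sign(k, payload))
        .zip(keys.iter())
        .map(|(sig, k)| (k.to_string(), sig))
        .collect()
}

fn main() {
    let out = sign_with_all(&["alice", "bob"], "addr");
    assert_eq!(out.len(), 2);
    assert_eq!(out[0], ("alice".to_string(), Ok("alice:rdda".to_string())));
}
```

Per-key failures stay attached to their key, so the caller can skip a failed signature without discarding the rest, as the diff's `for (sign_result, key)` loop does.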
async fn refill_pending_lookups_queue(&mut self) -> Result<()> {
let id = BlockId::hash(self.client.info().best_hash);
let local_keys = match &self.role {
Role::Authority(key_store) => {
key_store.sr25519_public_keys(
	key_types::AUTHORITY_DISCOVERY
).await.into_iter().collect::<HashSet<_>>()
},
Role::Sentry => HashSet::new(),
};
@@ -387,78 +448,68 @@ where
}
/// Handle incoming Dht events.
async fn handle_dht_event(&mut self, event: DhtEvent) {
	match event {
		DhtEvent::ValueFound(v) => {
			if let Some(metrics) = &self.metrics {
				metrics.dht_event_received.with_label_values(&["value_found"]).inc();
			}

			if log_enabled!(log::Level::Debug) {
				let hashes = v.iter().map(|(hash, _value)| hash.clone());
				debug!(
					target: LOG_TARGET,
					"Value for hash '{:?}' found on Dht.", hashes,
				);
			}

			if let Err(e) = self.handle_dht_value_found_event(v) {
				if let Some(metrics) = &self.metrics {
					metrics.handle_value_found_event_failure.inc();
				}
				debug!(
					target: LOG_TARGET,
					"Failed to handle Dht value found event: {:?}", e,
				);
			}
		}
		DhtEvent::ValueNotFound(hash) => {
			if let Some(metrics) = &self.metrics {
				metrics.dht_event_received.with_label_values(&["value_not_found"]).inc();
			}

			if self.in_flight_lookups.remove(&hash).is_some() {
				debug!(
					target: LOG_TARGET,
					"Value for hash '{:?}' not found on Dht.", hash
				)
			} else {
				debug!(
					target: LOG_TARGET,
					"Received 'ValueNotFound' for unexpected hash '{:?}'.", hash
				)
			}
		},
		DhtEvent::ValuePut(hash) => {
			if let Some(metrics) = &self.metrics {
				metrics.dht_event_received.with_label_values(&["value_put"]).inc();
			}
			debug!(
				target: LOG_TARGET,
				"Successfully put hash '{:?}' on Dht.", hash,
			)
		},
		DhtEvent::ValuePutFailed(hash) => {
			if let Some(metrics) = &self.metrics {
				metrics.dht_event_received.with_label_values(&["value_put_failed"]).inc();
			}
			debug!(
				target: LOG_TARGET,
				"Failed to put hash '{:?}' on Dht.", hash
			)
		}
	}
}
@@ -541,7 +592,6 @@ where
);
}
}
Ok(())
}
@@ -551,12 +601,13 @@ where
// one for the upcoming session. In addition it could be participating in the current and (/ or)
// next authority set with two keys. The function does not return all of the local authority
// discovery public keys, but only the ones intersecting with the current or next authority set.
async fn get_own_public_keys_within_authority_set(
	key_store: Arc<dyn CryptoStore>,
client: &Client,
) -> Result<HashSet<AuthorityId>> {
let local_pub_keys = key_store
.sr25519_public_keys(key_types::AUTHORITY_DISCOVERY)
.await
.into_iter()
.collect::<HashSet<_>>();
@@ -609,86 +660,6 @@ where
}
}
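`get_own_public_keys_within_authority_set` keeps only the local discovery keys that also appear in the current or next authority set. Stripped of the keystore and runtime-API plumbing, that filtering is a plain set intersection, sketched here with strings in place of `AuthorityId`s:

```rust
use std::collections::HashSet;

// Keep only local discovery keys that intersect the on-chain authority set,
// mirroring the filtering in `get_own_public_keys_within_authority_set`.
fn keys_within_authority_set(
    local_pub_keys: &HashSet<String>,
    authorities: &HashSet<String>,
) -> HashSet<String> {
    local_pub_keys.intersection(authorities).cloned().collect()
}

fn demo() -> (usize, bool) {
    let local: HashSet<String> = ["current_key", "next_key", "stale_key"]
        .iter().map(|s| s.to_string()).collect();
    let authorities: HashSet<String> = ["current_key", "next_key", "someone_else"]
        .iter().map(|s| s.to_string()).collect();

    let own = keys_within_authority_set(&local, &authorities);
    // Two keys survive; the stale local key is filtered out.
    (own.len(), own.contains("stale_key"))
}

fn main() {
    assert_eq!(demo(), (2, false));
}
```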
impl<Client, Network, Block> Future for Worker<Client, Network, Block>
where
Block: BlockT + Unpin + 'static,
Network: NetworkProvider,
Client: ProvideRuntimeApi<Block> + Send + Sync + 'static + HeaderBackend<Block>,
<Client as ProvideRuntimeApi<Block>>::Api:
AuthorityDiscoveryApi<Block, Error = sp_blockchain::Error>,
{
type Output = ();
fn poll(mut self: Pin<&mut Self>, cx: &mut Context) -> Poll<Self::Output> {
// Process incoming events.
if let Poll::Ready(()) = self.handle_dht_events(cx) {
// `handle_dht_events` returns `Poll::Ready(())` when the Dht event stream terminated.
// Termination of the Dht event stream implies that the underlying network terminated,
// thus authority discovery should terminate as well.
return Poll::Ready(());
}
// Publish own addresses.
if let Poll::Ready(_) = self.publish_interval.poll_next_unpin(cx) {
// Register waker of underlying task for next interval.
while let Poll::Ready(_) = self.publish_interval.poll_next_unpin(cx) {}
if let Err(e) = self.publish_ext_addresses() {
error!(
target: LOG_TARGET,
"Failed to publish external addresses: {:?}", e,
);
}
}
// Request addresses of authorities, refilling the pending lookups queue.
if let Poll::Ready(_) = self.query_interval.poll_next_unpin(cx) {
// Register waker of underlying task for next interval.
while let Poll::Ready(_) = self.query_interval.poll_next_unpin(cx) {}
if let Err(e) = self.refill_pending_lookups_queue() {
error!(
target: LOG_TARGET,
"Failed to refill pending lookups queue: {:?}", e,
);
}
}
// Set peerset priority group to a new random set of addresses.
if let Poll::Ready(_) = self.priority_group_set_interval.poll_next_unpin(cx) {
// Register waker of underlying task for next interval.
while let Poll::Ready(_) = self.priority_group_set_interval.poll_next_unpin(cx) {}
if let Err(e) = self.set_priority_group() {
error!(
target: LOG_TARGET,
"Failed to set priority group: {:?}", e,
);
}
}
// Handle messages from [`Service`].
while let Poll::Ready(Some(msg)) = self.from_service.poll_next_unpin(cx) {
match msg {
ServicetoWorkerMsg::GetAddressesByAuthorityId(authority, sender) => {
let _ = sender.send(
self.addr_cache.get_addresses_by_authority_id(&authority).map(Clone::clone),
);
}
ServicetoWorkerMsg::GetAuthorityIdByPeerId(peer_id, sender) => {
let _ = sender.send(
self.addr_cache.get_authority_id_by_peer_id(&peer_id).map(Clone::clone),
);
}
}
}
self.start_new_lookups();
Poll::Pending
}
}
/// NetworkProvider provides [`Worker`] with all necessary hooks into the
/// underlying Substrate networking. Using this trait abstraction instead of [`NetworkService`]
/// directly is necessary to unit test [`Worker`].
@@ -824,7 +795,7 @@ impl Metrics {
// Helper functions for unit testing.
#[cfg(test)]
impl<Block, Client, Network, DhtEventStream> Worker<Client, Network, Block, DhtEventStream>
where
Block: BlockT + 'static,
Network: NetworkProvider,
@@ -18,18 +18,19 @@
use crate::worker::schema;
use std::{iter::FromIterator, sync::{Arc, Mutex}, task::Poll};
use futures::channel::mpsc::{self, channel};
use futures::executor::{block_on, LocalPool};
use futures::future::FutureExt;
use futures::sink::SinkExt;
use futures::task::LocalSpawn;
use futures::poll;
use libp2p::{kad, core::multiaddr, PeerId};
use prometheus_endpoint::prometheus::default_registry;
use sp_api::{ProvideRuntimeApi, ApiRef};
use sp_core::crypto::Public;
use sp_keystore::{testing::KeyStore, CryptoStore};
use sp_runtime::traits::{Zero, Block as BlockT, NumberFor};
use substrate_test_runtime_client::runtime::Block;
@@ -166,6 +167,16 @@ sp_api::mock_impl_runtime_apis! {
}
}
#[derive(Debug)]
pub enum TestNetworkEvent {
GetCalled(kad::record::Key),
PutCalled(kad::record::Key, Vec<u8>),
SetPriorityGroupCalled {
group_id: String,
peers: HashSet<Multiaddr>
},
}
pub struct TestNetwork {
peer_id: PeerId,
external_addresses: Vec<Multiaddr>,
@@ -174,10 +185,19 @@ pub struct TestNetwork {
pub put_value_call: Arc<Mutex<Vec<(kad::record::Key, Vec<u8>)>>>,
pub get_value_call: Arc<Mutex<Vec<kad::record::Key>>>,
pub set_priority_group_call: Arc<Mutex<Vec<(String, HashSet<Multiaddr>)>>>,
event_sender: mpsc::UnboundedSender<TestNetworkEvent>,
event_receiver: Option<mpsc::UnboundedReceiver<TestNetworkEvent>>,
}
impl TestNetwork {
fn get_event_receiver(&mut self) -> Option<mpsc::UnboundedReceiver<TestNetworkEvent>> {
self.event_receiver.take()
}
}
impl Default for TestNetwork {
fn default() -> Self {
let (tx, rx) = mpsc::unbounded();
TestNetwork {
peer_id: PeerId::random(),
external_addresses: vec![
@@ -187,6 +207,8 @@ impl Default for TestNetwork {
put_value_call: Default::default(),
get_value_call: Default::default(),
set_priority_group_call: Default::default(),
event_sender: tx,
event_receiver: Some(rx),
}
}
}
@@ -200,14 +222,20 @@ impl NetworkProvider for TestNetwork {
self.set_priority_group_call
.lock()
.unwrap()
.push((group_id.clone(), peers.clone()));
self.event_sender.clone().unbounded_send(TestNetworkEvent::SetPriorityGroupCalled {
group_id,
peers,
}).unwrap();
Ok(())
}
fn put_value(&self, key: kad::record::Key, value: Vec<u8>) {
self.put_value_call.lock().unwrap().push((key.clone(), value.clone()));
self.event_sender.clone().unbounded_send(TestNetworkEvent::PutCalled(key, value)).unwrap();
}
fn get_value(&self, key: &kad::record::Key) {
self.get_value_call.lock().unwrap().push(key.clone());
self.event_sender.clone().unbounded_send(TestNetworkEvent::GetCalled(key.clone())).unwrap();
}
}
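The `TestNetwork` above is a recording test double: every call is both pushed onto a shared `Vec` for later inspection and forwarded on a channel so a test can block until the call actually happens. A std-only sketch of that dual-recording pattern (toy types, not the diff's actual `NetworkProvider` trait):

```rust
use std::sync::{mpsc, Arc, Mutex};

// Minimal stand-in for `TestNetwork`: calls are recorded and also emitted
// as events, so tests can either inspect state or wait on the stream.
struct TestNetwork {
    put_value_call: Arc<Mutex<Vec<(String, Vec<u8>)>>>,
    event_sender: mpsc::Sender<(String, Vec<u8>)>,
}

impl TestNetwork {
    fn new() -> (Self, mpsc::Receiver<(String, Vec<u8>)>) {
        let (tx, rx) = mpsc::channel();
        let network = TestNetwork {
            put_value_call: Arc::new(Mutex::new(Vec::new())),
            event_sender: tx,
        };
        (network, rx)
    }

    fn put_value(&self, key: String, value: Vec<u8>) {
        // Record for later inspection...
        self.put_value_call.lock().unwrap().push((key.clone(), value.clone()));
        // ...and notify any test waiting on the event stream.
        self.event_sender.send((key, value)).unwrap();
    }
}

fn demo() -> ((String, Vec<u8>), usize) {
    let (network, events) = TestNetwork::new();
    network.put_value("hash".to_string(), vec![1, 2, 3]);

    let event = events.recv().unwrap();
    let recorded = network.put_value_call.lock().unwrap().len();
    (event, recorded)
}

fn main() {
    assert_eq!(demo(), (("hash".to_string(), vec![1, 2, 3]), 1));
}
```

The event channel matters once the worker runs as an opaque spawned future: the test can no longer poll it step by step, so it waits for an observable side effect instead.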
@@ -221,10 +249,10 @@ impl NetworkStateInfo for TestNetwork {
}
}
async fn build_dht_event(
addresses: Vec<Multiaddr>,
public_key: AuthorityId,
key_store: &KeyStore,
) -> (libp2p::kad::record::Key, Vec<u8>) {
let mut serialized_addresses = vec![];
schema::AuthorityAddresses {
@@ -233,12 +261,13 @@ fn build_dht_event(
.map_err(Error::EncodingProto)
.unwrap();
let signature = key_store
.sign_with(
key_types::AUTHORITY_DISCOVERY,
&public_key.clone().into(),
serialized_addresses.as_slice(),
)
.await
.map_err(|_| Error::Signing)
.unwrap();
@@ -258,7 +287,7 @@ fn build_dht_event(
#[test]
fn new_registers_metrics() {
let (_dht_event_tx, dht_event_rx) = mpsc::channel(1000);
let network: Arc<TestNetwork> = Arc::new(Default::default());
let key_store = KeyStore::new();
let test_api = Arc::new(TestApi {
@@ -273,8 +302,8 @@ fn new_registers_metrics() {
test_api,
network.clone(),
vec![],
Box::pin(dht_event_rx),
Role::Authority(key_store.into()),
Some(registry.clone()),
);
@@ -289,12 +318,11 @@ fn triggers_dht_get_query() {
// Generate authority keys
let authority_1_key_pair = AuthorityPair::from_seed_slice(&[1; 32]).unwrap();
let authority_2_key_pair = AuthorityPair::from_seed_slice(&[2; 32]).unwrap();
let authorities = vec![authority_1_key_pair.public(), authority_2_key_pair.public()];

let test_api = Arc::new(TestApi { authorities: authorities.clone() });
let network = Arc::new(TestNetwork::default());
let key_store = KeyStore::new();
let (_to_worker, from_service) = mpsc::channel(0);
@@ -303,26 +331,24 @@ fn triggers_dht_get_query() {
test_api,
network.clone(),
vec![],
Box::pin(dht_event_rx),
Role::Authority(key_store.into()),
None,
);
futures::executor::block_on(async {
	worker.refill_pending_lookups_queue().await.unwrap();
	worker.start_new_lookups();

	// Expect authority discovery to request new records from the dht.
	assert_eq!(network.get_value_call.lock().unwrap().len(), authorities.len());
})
}
#[test]
fn publish_discover_cycle() {
sp_tracing::try_init_simple();
let mut pool = LocalPool::new();
// Node A publishing its address.
let (_dht_event_tx, dht_event_rx) = channel(1000);
@@ -338,66 +364,66 @@ fn publish_discover_cycle() {
};
let key_store = KeyStore::new();
let _ = pool.spawner().spawn_local_obj(async move {
	let node_a_public = key_store
		.sr25519_generate_new(key_types::AUTHORITY_DISCOVERY, None)
		.await
		.unwrap();
	let test_api = Arc::new(TestApi {
		authorities: vec![node_a_public.into()],
	});

	let (_to_worker, from_service) = mpsc::channel(0);
	let mut worker = Worker::new(
		from_service,
		test_api,
		network.clone(),
		vec![],
		Box::pin(dht_event_rx),
		Role::Authority(key_store.into()),
		None,
	);

	worker.publish_ext_addresses().await.unwrap();

	// Expect authority discovery to put a new record onto the dht.
	assert_eq!(network.put_value_call.lock().unwrap().len(), 1);

	let dht_event = {
		let (key, value) = network.put_value_call.lock().unwrap().pop().unwrap();
		sc_network::DhtEvent::ValueFound(vec![(key, value)])
	};

	// Node B discovering node A's address.

	let (mut dht_event_tx, dht_event_rx) = channel(1000);
	let test_api = Arc::new(TestApi {
		// Make sure node B identifies node A as an authority.
		authorities: vec![node_a_public.into()],
	});
	let network: Arc<TestNetwork> = Arc::new(Default::default());
	let key_store = KeyStore::new();

	let (_to_worker, from_service) = mpsc::channel(0);
	let mut worker = Worker::new(
		from_service,
		test_api,
		network.clone(),
		vec![],
		Box::pin(dht_event_rx),
		Role::Authority(key_store.into()),
		None,
	);

	dht_event_tx.try_send(dht_event.clone()).unwrap();

	worker.refill_pending_lookups_queue().await.unwrap();
	worker.start_new_lookups();

	// Make authority discovery handle the event.
	worker.handle_dht_event(dht_event).await;

	worker.set_priority_group().unwrap();
// Expect authority discovery to set the priority set.
@@ -410,13 +436,12 @@ fn publish_discover_cycle() {
HashSet::from_iter(vec![node_a_multiaddr.clone()].into_iter())
)
);
}.boxed_local().into());
pool.run();
}
/// Don't terminate when sender side of service channel is dropped. Terminate when network event
/// stream terminates.
#[test]
fn terminate_when_event_stream_terminates() {
let (dht_event_tx, dht_event_rx) = channel(1000);
@@ -426,91 +451,76 @@ fn terminate_when_event_stream_terminates() {
authorities: vec![],
});
let (to_worker, from_service) = mpsc::channel(0);

let worker = Worker::new(
from_service,
test_api,
network.clone(),
vec![],
Box::pin(dht_event_rx),
Role::Authority(key_store.into()),
None,
).run();
futures::pin_mut!(worker);
block_on(async {
	assert_eq!(Poll::Pending, futures::poll!(&mut worker));

	// Drop sender side of service channel.
	drop(to_worker);
	assert_eq!(
		Poll::Pending, futures::poll!(&mut worker),
		"Expect the authority discovery module not to terminate once the \
		sender side of the service channel is closed.",
	);

	// Simulate termination of the network through dropping the sender side
	// of the dht event channel.
	drop(dht_event_tx);

	assert_eq!(
		Poll::Ready(()), futures::poll!(&mut worker),
		"Expect the authority discovery module to terminate once the \
		sending side of the dht event channel is closed.",
	);
});
}
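The test above encodes an asymmetry: a closed service channel must be ignored, while a closed DHT event channel terminates the worker. With std channels the analogous distinction is between `try_recv` merely reporting `Disconnected` and `recv` returning `Err` once all senders are gone (toy worker, threads in place of futures):

```rust
use std::sync::mpsc;
use std::thread;

// Run until the event channel closes; service messages are optional and
// losing their sender must not terminate the worker.
fn run_worker(
    events: mpsc::Receiver<&'static str>,
    service: mpsc::Receiver<&'static str>,
) -> usize {
    let mut handled = 0;
    loop {
        // A closed service channel just yields Err(Disconnected); ignore it.
        if let Ok(_msg) = service.try_recv() {
            handled += 1;
        }
        match events.recv() {
            Ok(_event) => handled += 1,
            // All event senders dropped: the network is gone, so terminate.
            Err(mpsc::RecvError) => return handled,
        }
    }
}

fn demo() -> usize {
    let (event_tx, event_rx) = mpsc::channel();
    let (service_tx, service_rx) = mpsc::channel::<&'static str>();
    let worker = thread::spawn(move || run_worker(event_rx, service_rx));

    drop(service_tx); // The worker must survive this...
    event_tx.send("value_put").unwrap();
    drop(event_tx); // ...but stop on this.

    worker.join().unwrap()
}

fn main() {
    assert_eq!(demo(), 1);
}
```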
#[test]
fn dont_stop_polling_dht_event_stream_after_bogus_event() {
	let remote_multiaddr = {
		let peer_id = PeerId::random();
		let address: Multiaddr = "/ip6/2001:db8:0:0:0:0:0:1/tcp/30333".parse().unwrap();

		address.with(multiaddr::Protocol::P2p(
			peer_id.into(),
		))
	};
	let remote_key_store = KeyStore::new();
	let remote_public_key: AuthorityId = block_on(
		remote_key_store.sr25519_generate_new(key_types::AUTHORITY_DISCOVERY, None),
	).unwrap().into();

	let (mut dht_event_tx, dht_event_rx) = channel(1);
	let (network, mut network_events) = {
		let mut n = TestNetwork::default();
		let r = n.get_event_receiver().unwrap();
		(Arc::new(n), r)
	};

	let key_store = KeyStore::new();
	let test_api = Arc::new(TestApi {
		authorities: vec![remote_public_key.clone()],
	});
	let mut pool = LocalPool::new();

	let (mut to_worker, from_service) = mpsc::channel(1);
	let mut worker = Worker::new(
		from_service,
		test_api,
		network.clone(),
		vec![],
		Box::pin(dht_event_rx),
		Role::Authority(Arc::new(key_store)),
		None,
	);

	// Spawn the authority discovery to make sure it is polled independently.
	//
	// As this is a local pool, only one future at a time will have the CPU and
	// can make progress until the future returns `Pending`.
	let _ = pool.spawner().spawn_local_obj(async move {
		// Refilling `pending_lookups` only happens every X minutes. Fast
		// forward by calling `refill_pending_lookups_queue` directly.
		worker.refill_pending_lookups_queue().await.unwrap();
		worker.run().await
	}.boxed_local().into());

	pool.run_until(async {
		// Assert worker to trigger a lookup for the one and only authority.
		assert!(matches!(
			network_events.next().await,
			Some(TestNetworkEvent::GetCalled(_))
		));

		// Send an event that should generate an error
		dht_event_tx.send(DhtEvent::ValueFound(Default::default())).await
			.expect("Channel has capacity of 1.");

		// Make previously triggered lookup succeed.
		let dht_event = {
			let (key, value) = build_dht_event(
				vec![remote_multiaddr.clone()],
				remote_public_key.clone(), &remote_key_store,
			).await;
			sc_network::DhtEvent::ValueFound(vec![(key, value)])
		};
		dht_event_tx.send(dht_event).await.expect("Channel has capacity of 1.");

		// Expect authority discovery to function normally, now knowing the
		// address for the remote node.
		let (sender, addresses) = futures::channel::oneshot::channel();
		to_worker.send(ServicetoWorkerMsg::GetAddressesByAuthorityId(
			remote_public_key,
			sender,
		)).await.expect("Channel has capacity of 1.");
		assert_eq!(Some(vec![remote_multiaddr]), addresses.await.unwrap());
	});
}
/// In the scenario of a validator publishing the address of its sentry node to
@@ -565,9 +573,8 @@ fn dont_stop_polling_when_error_is_returned() {
#[test]
fn never_add_own_address_to_priority_group() {
let validator_key_store = KeyStore::new();
let validator_public = block_on(validator_key_store
	.sr25519_generate_new(key_types::AUTHORITY_DISCOVERY, None))
.unwrap();
let sentry_network: Arc<TestNetwork> = Arc::new(Default::default());
@@ -589,11 +596,11 @@ fn never_add_own_address_to_priority_group() {
))
};
let dht_event = block_on(build_dht_event(
vec![sentry_multiaddr, random_multiaddr.clone()],
validator_public.into(),
&validator_key_store,
));
let (_dht_event_tx, dht_event_rx) = channel(1);
let sentry_test_api = Arc::new(TestApi {
@@ -607,12 +614,12 @@ fn never_add_own_address_to_priority_group() {
sentry_test_api,
sentry_network.clone(),
vec![],
Box::pin(dht_event_rx),
Role::Sentry,
None,
);
block_on(sentry_worker.refill_pending_lookups_queue()).unwrap();
sentry_worker.start_new_lookups();
sentry_worker.handle_dht_value_found_event(vec![dht_event]).unwrap();
@@ -636,9 +643,8 @@ fn never_add_own_address_to_priority_group() {
#[test]
fn limit_number_of_addresses_added_to_cache_per_authority() {
let remote_key_store = KeyStore::new();
let remote_public = remote_key_store
.write()
.sr25519_generate_new(key_types::AUTHORITY_DISCOVERY, None)
let remote_public = block_on(remote_key_store
.sr25519_generate_new(key_types::AUTHORITY_DISCOVERY, None))
.unwrap();
let addresses = (0..100).map(|_| {
@@ -649,11 +655,11 @@ fn limit_number_of_addresses_added_to_cache_per_authority() {
))
}).collect();
let dht_event = build_dht_event(
let dht_event = block_on(build_dht_event(
addresses,
remote_public.into(),
&remote_key_store,
);
));
let (_dht_event_tx, dht_event_rx) = channel(1);
@@ -663,12 +669,12 @@ fn limit_number_of_addresses_added_to_cache_per_authority() {
Arc::new(TestApi { authorities: vec![remote_public.into()] }),
Arc::new(TestNetwork::default()),
vec![],
dht_event_rx.boxed(),
Box::pin(dht_event_rx),
Role::Sentry,
None,
);
worker.refill_pending_lookups_queue().unwrap();
block_on(worker.refill_pending_lookups_queue()).unwrap();
worker.start_new_lookups();
worker.handle_dht_value_found_event(vec![dht_event]).unwrap();
@@ -681,9 +687,8 @@ fn limit_number_of_addresses_added_to_cache_per_authority() {
#[test]
fn do_not_cache_addresses_without_peer_id() {
let remote_key_store = KeyStore::new();
let remote_public = remote_key_store
.write()
.sr25519_generate_new(key_types::AUTHORITY_DISCOVERY, None)
let remote_public = block_on(remote_key_store
.sr25519_generate_new(key_types::AUTHORITY_DISCOVERY, None))
.unwrap();
let multiaddr_with_peer_id = {
@@ -695,14 +700,14 @@ fn do_not_cache_addresses_without_peer_id() {
let multiaddr_without_peer_id: Multiaddr = "/ip6/2001:db8:0:0:0:0:0:1/tcp/30333".parse().unwrap();
let dht_event = build_dht_event(
let dht_event = block_on(build_dht_event(
vec![
multiaddr_with_peer_id.clone(),
multiaddr_without_peer_id,
],
remote_public.into(),
&remote_key_store,
);
));
let (_dht_event_tx, dht_event_rx) = channel(1);
let local_test_api = Arc::new(TestApi {
@@ -718,12 +723,12 @@ fn do_not_cache_addresses_without_peer_id() {
local_test_api,
local_network.clone(),
vec![],
dht_event_rx.boxed(),
Role::Authority(local_key_store),
Box::pin(dht_event_rx),
Role::Authority(Arc::new(local_key_store)),
None,
);
local_worker.refill_pending_lookups_queue().unwrap();
block_on(local_worker.refill_pending_lookups_queue()).unwrap();
local_worker.start_new_lookups();
local_worker.handle_dht_value_found_event(vec![dht_event]).unwrap();
@@ -753,8 +758,8 @@ fn addresses_to_publish_adds_p2p() {
}),
network.clone(),
vec![],
dht_event_rx.boxed(),
Role::Authority(KeyStore::new()),
Box::pin(dht_event_rx),
Role::Authority(Arc::new(KeyStore::new())),
Some(prometheus_endpoint::Registry::new()),
);
@@ -788,8 +793,8 @@ fn addresses_to_publish_respects_existing_p2p_protocol() {
}),
network.clone(),
vec![],
dht_event_rx.boxed(),
Role::Authority(KeyStore::new()),
Box::pin(dht_event_rx),
Role::Authority(Arc::new(KeyStore::new())),
Some(prometheus_endpoint::Registry::new()),
);
@@ -811,10 +816,9 @@ fn lookup_throttling() {
};
let remote_key_store = KeyStore::new();
let remote_public_keys: Vec<AuthorityId> = (0..20).map(|_| {
remote_key_store
.write()
.sr25519_generate_new(key_types::AUTHORITY_DISCOVERY, None)
.unwrap().into()
block_on(remote_key_store
.sr25519_generate_new(key_types::AUTHORITY_DISCOVERY, None))
.unwrap().into()
}).collect();
let remote_hash_to_key = remote_public_keys.iter()
.map(|k| (hash_authority_id(k.as_ref()), k.clone()))
@@ -823,7 +827,9 @@ fn lookup_throttling() {
let (mut dht_event_tx, dht_event_rx) = channel(1);
let (_to_worker, from_service) = mpsc::channel(0);
let network = Arc::new(TestNetwork::default());
let mut network = TestNetwork::default();
let mut receiver = network.get_event_receiver().unwrap();
let network = Arc::new(network);
let mut worker = Worker::new(
from_service,
Arc::new(TestApi { authorities: remote_public_keys.clone() }),
@@ -831,50 +837,62 @@ fn lookup_throttling() {
vec![],
dht_event_rx.boxed(),
Role::Sentry,
None,
Some(default_registry().clone()),
);
futures::executor::block_on(futures::future::poll_fn(|cx| {
worker.refill_pending_lookups_queue().unwrap();
let mut pool = LocalPool::new();
let metrics = worker.metrics.clone().unwrap();
let _ = pool.spawner().spawn_local_obj(async move {
// Refilling `pending_lookups` only happens every X minutes. Fast
// forward by calling `refill_pending_lookups_queue` directly.
worker.refill_pending_lookups_queue().await.unwrap();
worker.run().await
}.boxed_local().into());
pool.run_until(async {
// Assert worker to trigger MAX_IN_FLIGHT_LOOKUPS lookups.
assert_eq!(Poll::Pending, worker.poll_unpin(cx));
assert_eq!(worker.pending_lookups.len(), remote_public_keys.len() - MAX_IN_FLIGHT_LOOKUPS);
assert_eq!(worker.in_flight_lookups.len(), MAX_IN_FLIGHT_LOOKUPS);
for _ in 0..MAX_IN_FLIGHT_LOOKUPS {
assert!(matches!(receiver.next().await, Some(TestNetworkEvent::GetCalled(_))));
}
assert_eq!(
metrics.requests_pending.get(),
(remote_public_keys.len() - MAX_IN_FLIGHT_LOOKUPS) as u64
);
assert_eq!(network.get_value_call.lock().unwrap().len(), MAX_IN_FLIGHT_LOOKUPS);
// Make first lookup succeed.
let remote_hash = network.get_value_call.lock().unwrap().pop().unwrap();
let remote_key: AuthorityId = remote_hash_to_key.get(&remote_hash).unwrap().clone();
let dht_event = {
let (key, value) = build_dht_event(vec![remote_multiaddr.clone()], remote_key, &remote_key_store);
let (key, value) = build_dht_event(
vec![remote_multiaddr.clone()],
remote_key,
&remote_key_store
).await;
sc_network::DhtEvent::ValueFound(vec![(key, value)])
};
dht_event_tx.try_send(dht_event).expect("Channel has capacity of 1.");
dht_event_tx.send(dht_event).await.expect("Channel has capacity of 1.");
// Assert worker to trigger another lookup.
assert_eq!(Poll::Pending, worker.poll_unpin(cx));
assert_eq!(worker.pending_lookups.len(), remote_public_keys.len() - MAX_IN_FLIGHT_LOOKUPS - 1);
assert_eq!(worker.in_flight_lookups.len(), MAX_IN_FLIGHT_LOOKUPS);
assert!(matches!(receiver.next().await, Some(TestNetworkEvent::GetCalled(_))));
assert_eq!(
metrics.requests_pending.get(),
(remote_public_keys.len() - MAX_IN_FLIGHT_LOOKUPS - 1) as u64
);
assert_eq!(network.get_value_call.lock().unwrap().len(), MAX_IN_FLIGHT_LOOKUPS);
// Make second one fail.
let remote_hash = network.get_value_call.lock().unwrap().pop().unwrap();
let dht_event = sc_network::DhtEvent::ValueNotFound(remote_hash);
dht_event_tx.try_send(dht_event).expect("Channel has capacity of 1.");
dht_event_tx.send(dht_event).await.expect("Channel has capacity of 1.");
// Assert worker to trigger another lookup.
assert_eq!(Poll::Pending, worker.poll_unpin(cx));
assert_eq!(worker.pending_lookups.len(), remote_public_keys.len() - MAX_IN_FLIGHT_LOOKUPS - 2);
assert_eq!(worker.in_flight_lookups.len(), MAX_IN_FLIGHT_LOOKUPS);
assert!(matches!(receiver.next().await, Some(TestNetworkEvent::GetCalled(_))));
assert_eq!(
metrics.requests_pending.get(),
(remote_public_keys.len() - MAX_IN_FLIGHT_LOOKUPS - 2) as u64
);
assert_eq!(network.get_value_call.lock().unwrap().len(), MAX_IN_FLIGHT_LOOKUPS);
worker.refill_pending_lookups_queue().unwrap();
// Assert worker to restock pending lookups and forget about in-flight lookups.
assert_eq!(worker.pending_lookups.len(), remote_public_keys.len());
assert_eq!(worker.in_flight_lookups.len(), 0);
Poll::Ready(())
}));
}.boxed_local());
}
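Note the switch in this test from `try_send` to `send(..).await` on the capacity-1 DHT event channel: the awaiting send applies backpressure (waits for a free slot) instead of failing when the single slot is occupied. A small std-only analogy (using `std::sync::mpsc::sync_channel` rather than `futures::channel::mpsc`, which the tests actually use):

```rust
use std::sync::mpsc::{sync_channel, TrySendError};

fn main() {
    // Bounded channel with capacity 1, analogous to `channel(1)` in the tests.
    let (tx, rx) = sync_channel(1);
    tx.send(1u32).unwrap(); // fits into the single slot
    // A second `try_send` fails instead of waiting; the blocking/awaiting
    // `send` would wait for the consumer to drain the slot instead.
    assert!(matches!(tx.try_send(2), Err(TrySendError::Full(2))));
    assert_eq!(rx.recv().unwrap(), 1);
}
```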
@@ -39,6 +39,7 @@ sp-runtime = { version = "2.0.0", path = "../../primitives/runtime" }
sp-utils = { version = "2.0.0", path = "../../primitives/utils" }
sp-version = { version = "2.0.0", path = "../../primitives/version" }
sp-core = { version = "2.0.0", path = "../../primitives/core" }
sp-keystore = { version = "0.8.0", path = "../../primitives/keystore" }
sc-service = { version = "0.8.0", default-features = false, path = "../service" }
sp-state-machine = { version = "0.8.0", path = "../../primitives/state-machine" }
sc-telemetry = { version = "2.0.0", path = "../telemetry" }
@@ -18,11 +18,13 @@
//! Implementation of the `insert` subcommand
use crate::{Error, KeystoreParams, CryptoSchemeFlag, SharedParams, utils, with_crypto_scheme};
use std::sync::Arc;
use structopt::StructOpt;
use sp_core::{crypto::KeyTypeId, traits::BareCryptoStore};
use sp_core::crypto::KeyTypeId;
use sp_keystore::{SyncCryptoStorePtr, SyncCryptoStore};
use std::convert::TryFrom;
use sc_service::config::KeystoreConfig;
use sc_keystore::Store as KeyStore;
use sc_keystore::LocalKeystore;
use sp_core::crypto::SecretString;
/// The `insert` command
@@ -68,8 +70,8 @@ impl InsertCmd {
self.crypto_scheme.scheme,
to_vec(&suri, password.clone())
)?;
let keystore = KeyStore::open(path, password)
.map_err(|e| format!("{}", e))?;
let keystore: SyncCryptoStorePtr = Arc::new(LocalKeystore::open(path, password)
.map_err(|e| format!("{}", e))?);
(keystore, public)
},
_ => unreachable!("keystore_config always returns path and password; qed")
@@ -80,8 +82,7 @@ impl InsertCmd {
Error::Other("Cannot convert argument to keytype: argument should be 4-character string".into())
})?;
keystore.write()
.insert_unknown(key_type, &suri, &public[..])
SyncCryptoStore::insert_unknown(&*keystore, key_type, &suri, &public[..])
.map_err(|e| Error::Other(format!("{:?}", e)))?;
Ok(())
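The hunk above replaces the old `keystore.write().insert_unknown(..)` (method call through an `RwLock` guard) with the fully qualified `SyncCryptoStore::insert_unknown(&*keystore, ..)`: the new trait takes `&self`, so the `Arc`-wrapped store is dereferenced once and no lock guard is needed. A sketch of that call pattern with a hypothetical stand-in trait (`SyncStore` below is not the real `SyncCryptoStore` API):

```rust
use std::sync::Arc;

// Hypothetical stand-in for SyncCryptoStore: object-safe and `&self`-based,
// so it works behind an Arc without an external RwLock. A real store would
// use interior mutability (e.g. a Mutex around its key map) internally.
trait SyncStore {
    fn insert_unknown(&self, key_type: [u8; 4], suri: &str) -> Result<(), String>;
}

struct MemStore;

impl SyncStore for MemStore {
    fn insert_unknown(&self, _key_type: [u8; 4], _suri: &str) -> Result<(), String> {
        Ok(())
    }
}

fn main() {
    let keystore: Arc<dyn SyncStore> = Arc::new(MemStore);
    // Fully qualified call, mirroring
    // `SyncCryptoStore::insert_unknown(&*keystore, key_type, &suri, &public[..])`.
    SyncStore::insert_unknown(&*keystore, *b"aura", "//Alice").unwrap();
}
```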
@@ -24,7 +24,6 @@ derive_more = "0.99.2"
futures = "0.3.4"
futures-timer = "3.0.1"
sp-inherents = { version = "2.0.0", path = "../../../primitives/inherents" }
sc-keystore = { version = "2.0.0", path = "../../keystore" }
log = "0.4.8"
parking_lot = "0.10.0"
sp-core = { version = "2.0.0", path = "../../../primitives/core" }
@@ -35,6 +34,7 @@ sc-consensus-slots = { version = "0.8.0", path = "../slots" }
sp-api = { version = "2.0.0", path = "../../../primitives/api" }
sp-runtime = { version = "2.0.0", path = "../../../primitives/runtime" }
sp-timestamp = { version = "2.0.0", path = "../../../primitives/timestamp" }
sp-keystore = { version = "0.8.0", path = "../../../primitives/keystore" }
sc-telemetry = { version = "2.0.0", path = "../../telemetry" }
prometheus-endpoint = { package = "substrate-prometheus-endpoint", path = "../../../utils/prometheus", version = "0.8.0"}
@@ -42,6 +42,7 @@ prometheus-endpoint = { package = "substrate-prometheus-endpoint", path = "../..
sp-keyring = { version = "2.0.0", path = "../../../primitives/keyring" }
sp-tracing = { version = "2.0.0", path = "../../../primitives/tracing" }
sc-executor = { version = "0.8.0", path = "../../executor" }
sc-keystore = { version = "2.0.0", path = "../../keystore" }
sc-network = { version = "0.8.0", path = "../../network" }
sc-network-test = { version = "0.8.0", path = "../../network/test" }
sc-service = { version = "0.8.0", default-features = false, path = "../../service" }
@@ -63,7 +63,8 @@ use sp_runtime::{
};
use sp_runtime::traits::{Block as BlockT, Header, DigestItemFor, Zero, Member};
use sp_api::ProvideRuntimeApi;
use sp_core::{traits::BareCryptoStore, crypto::Pair};
use sp_core::crypto::Pair;
use sp_keystore::{SyncCryptoStorePtr, SyncCryptoStore};
use sp_inherents::{InherentDataProviders, InherentData};
use sp_timestamp::{
TimestampInherentData, InherentType as TimestampInherent, InherentError as TIError
@@ -74,7 +75,6 @@ use sc_consensus_slots::{
CheckedHeader, SlotWorker, SlotInfo, SlotCompatible, StorageChanges, check_equivocation,
};
use sc_keystore::KeyStorePtr;
use sp_api::ApiExt;
pub use sp_consensus_aura::{
@@ -147,7 +147,7 @@ pub fn start_aura<B, C, SC, E, I, P, SO, CAW, Error>(
sync_oracle: SO,
inherent_data_providers: InherentDataProviders,
force_authoring: bool,
keystore: KeyStorePtr,
keystore: SyncCryptoStorePtr,
can_author_with: CAW,
) -> Result<impl Future<Output = ()>, sp_consensus::Error> where
B: BlockT,
@@ -192,7 +192,7 @@ struct AuraWorker<C, E, I, P, SO> {
client: Arc<C>,
block_import: Arc<Mutex<I>>,
env: E,
keystore: KeyStorePtr,
keystore: SyncCryptoStorePtr,
sync_oracle: SO,
force_authoring: bool,
_key_type: PhantomData<P>,
@@ -248,10 +248,14 @@ impl<B, C, E, I, P, Error, SO> sc_consensus_slots::SimpleSlotWorker<B> for AuraW
) -> Option<Self::Claim> {
let expected_author = slot_author::<P>(slot_number, epoch_data);
expected_author.and_then(|p| {
self.keystore.read()
.key_pair_by_type::<P>(&p, sp_application_crypto::key_types::AURA).ok()
}).and_then(|p| {
Some(p.public())
if SyncCryptoStore::has_keys(
&*self.keystore,
&[(p.to_raw_vec(), sp_application_crypto::key_types::AURA)],
) {
Some(p.clone())
} else {
None
}
})
}
@@ -282,15 +286,14 @@ impl<B, C, E, I, P, Error, SO> sc_consensus_slots::SimpleSlotWorker<B> for AuraW
// add it to a digest item.
let public_type_pair = public.to_public_crypto_pair();
let public = public.to_raw_vec();
let signature = keystore.read()
.sign_with(
<AuthorityId<P> as AppKey>::ID,
&public_type_pair,
header_hash.as_ref()
)
.map_err(|e| sp_consensus::Error::CannotSign(
public.clone(), e.to_string(),
))?;
let signature = SyncCryptoStore::sign_with(
&*keystore,
<AuthorityId<P> as AppKey>::ID,
&public_type_pair,
header_hash.as_ref()
).map_err(|e| sp_consensus::Error::CannotSign(
public.clone(), e.to_string(),
))?;
let signature = signature.clone().try_into()
.map_err(|_| sp_consensus::Error::InvalidSignature(
signature, public
@@ -884,6 +887,8 @@ mod tests {
use sc_block_builder::BlockBuilderProvider;
use sp_runtime::traits::Header as _;
use substrate_test_runtime_client::runtime::{Header, H256};
use sc_keystore::LocalKeystore;
use sp_application_crypto::key_types::AURA;
type Error = sp_blockchain::Error;
@@ -1011,9 +1016,11 @@ mod tests {
let client = peer.client().as_full().expect("full clients are created").clone();
let select_chain = peer.select_chain().expect("full client has a select chain");
let keystore_path = tempfile::tempdir().expect("Creates keystore path");
let keystore = sc_keystore::Store::open(keystore_path.path(), None).expect("Creates keystore.");
let keystore = Arc::new(LocalKeystore::open(keystore_path.path(), None)
.expect("Creates keystore."));
keystore.write().insert_ephemeral_from_seed::<AuthorityPair>(&key.to_seed())
SyncCryptoStore::sr25519_generate_new(&*keystore, AURA, Some(&key.to_seed()))
.expect("Creates authority key");
keystore_paths.push(keystore_path);
@@ -1080,11 +1087,11 @@ mod tests {
];
let keystore_path = tempfile::tempdir().expect("Creates keystore path");
let keystore = sc_keystore::Store::open(keystore_path.path(), None).expect("Creates keystore.");
let my_key = keystore.write()
.generate_by_type::<AuthorityPair>(AuthorityPair::ID)
let keystore = LocalKeystore::open(keystore_path.path(), None)
.expect("Creates keystore.");
let public = SyncCryptoStore::sr25519_generate_new(&keystore, AuthorityPair::ID, None)
.expect("Key should be created");
authorities.push(my_key.public());
authorities.push(public.into());
let net = Arc::new(Mutex::new(net));
@@ -1097,7 +1104,7 @@ mod tests {
client: client.clone(),
block_import: Arc::new(Mutex::new(client)),
env: environ,
keystore,
keystore: keystore.into(),
sync_oracle: DummyOracle.clone(),
force_authoring: false,
_key_type: PhantomData::<AuthorityPair>,
@@ -18,6 +18,7 @@ codec = { package = "parity-scale-codec", version = "1.3.4", features = ["derive
sp-consensus-babe = { version = "0.8.0", path = "../../../primitives/consensus/babe" }
sp-core = { version = "2.0.0", path = "../../../primitives/core" }
sp-application-crypto = { version = "2.0.0", path = "../../../primitives/application-crypto" }
sp-keystore = { version = "0.8.0", path = "../../../primitives/keystore" }
num-bigint = "0.2.3"
num-rational = "0.2.2"
num-traits = "0.2.8"
@@ -29,11 +29,12 @@ sp-api = { version = "2.0.0", path = "../../../../primitives/api" }
sp-consensus = { version = "0.8.0", path = "../../../../primitives/consensus/common" }
sp-core = { version = "2.0.0", path = "../../../../primitives/core" }
sp-application-crypto = { version = "2.0.0", path = "../../../../primitives/application-crypto" }
sc-keystore = { version = "2.0.0", path = "../../../keystore" }
sp-keystore = { version = "0.8.0", path = "../../../../primitives/keystore" }
[dev-dependencies]
sc-consensus = { version = "0.8.0", path = "../../../consensus/common" }
serde_json = "1.0.50"
sp-keyring = { version = "2.0.0", path = "../../../../primitives/keyring" }
sc-keystore = { version = "2.0.0", path = "../../../keystore" }
substrate-test-runtime-client = { version = "2.0.0", path = "../../../../test-utils/runtime/client" }
tempfile = "3.1.0"
@@ -34,10 +34,9 @@ use sp_consensus_babe::{
use serde::{Deserialize, Serialize};
use sp_core::{
crypto::Public,
traits::BareCryptoStore,
};
use sp_application_crypto::AppKey;
use sc_keystore::KeyStorePtr;
use sp_keystore::{SyncCryptoStorePtr, SyncCryptoStore};
use sc_rpc_api::DenyUnsafe;
use sp_api::{ProvideRuntimeApi, BlockId};
use sp_runtime::traits::{Block as BlockT, Header as _};
@@ -63,7 +62,7 @@ pub struct BabeRpcHandler<B: BlockT, C, SC> {
/// shared reference to EpochChanges
shared_epoch_changes: SharedEpochChanges<B, Epoch>,
/// shared reference to the Keystore
keystore: KeyStorePtr,
keystore: SyncCryptoStorePtr,
/// config (actually holds the slot duration)
babe_config: Config,
/// The SelectChain strategy
@@ -77,7 +76,7 @@ impl<B: BlockT, C, SC> BabeRpcHandler<B, C, SC> {
pub fn new(
client: Arc<C>,
shared_epoch_changes: SharedEpochChanges<B, Epoch>,
keystore: KeyStorePtr,
keystore: SyncCryptoStorePtr,
babe_config: Config,
select_chain: SC,
deny_unsafe: DenyUnsafe,
@@ -131,11 +130,10 @@ impl<B, C, SC> BabeApi for BabeRpcHandler<B, C, SC>
let mut claims: HashMap<AuthorityId, EpochAuthorship> = HashMap::new();
let keys = {
let ks = keystore.read();
epoch.authorities.iter()
.enumerate()
.filter_map(|(i, a)| {
if ks.has_keys(&[(a.0.to_raw_vec(), AuthorityId::ID)]) {
if SyncCryptoStore::has_keys(&*keystore, &[(a.0.to_raw_vec(), AuthorityId::ID)]) {
Some((a.0.clone(), i))
} else {
None
@@ -236,18 +234,23 @@ mod tests {
TestClientBuilder,
};
use sp_application_crypto::AppPair;
use sp_keyring::Ed25519Keyring;
use sc_keystore::Store;
use sp_keyring::Sr25519Keyring;
use sp_core::{crypto::key_types::BABE};
use sp_keystore::{SyncCryptoStorePtr, SyncCryptoStore};
use sc_keystore::LocalKeystore;
use std::sync::Arc;
use sc_consensus_babe::{Config, block_import, AuthorityPair};
use jsonrpc_core::IoHandler;
/// creates keystore backed by a temp file
fn create_temp_keystore<P: AppPair>(authority: Ed25519Keyring) -> (KeyStorePtr, tempfile::TempDir) {
fn create_temp_keystore<P: AppPair>(
authority: Sr25519Keyring,
) -> (SyncCryptoStorePtr, tempfile::TempDir) {
let keystore_path = tempfile::tempdir().expect("Creates keystore path");
let keystore = Store::open(keystore_path.path(), None).expect("Creates keystore");
keystore.write().insert_ephemeral_from_seed::<P>(&authority.to_seed())
let keystore = Arc::new(LocalKeystore::open(keystore_path.path(), None)
.expect("Creates keystore"));
SyncCryptoStore::sr25519_generate_new(&*keystore, BABE, Some(&authority.to_seed()))
.expect("Creates authority key");
(keystore, keystore_path)
@@ -267,7 +270,7 @@ mod tests {
).expect("can initialize block-import");
let epoch_changes = link.epoch_changes().clone();
let keystore = create_temp_keystore::<AuthorityPair>(Ed25519Keyring::Alice).0;
let keystore = create_temp_keystore::<AuthorityPair>(Sr25519Keyring::Alice).0;
BabeRpcHandler::new(
client.clone(),
@@ -28,13 +28,13 @@ use sp_consensus_babe::digests::{
PreDigest, PrimaryPreDigest, SecondaryPlainPreDigest, SecondaryVRFPreDigest,
};
use sp_consensus_vrf::schnorrkel::{VRFOutput, VRFProof};
use sp_core::{U256, blake2_256, crypto::Public, traits::BareCryptoStore};
use sp_core::{U256, blake2_256, crypto::Public};
use sp_keystore::{SyncCryptoStorePtr, SyncCryptoStore};
use codec::Encode;
use schnorrkel::{
keys::PublicKey,
vrf::VRFInOut,
};
use sc_keystore::KeyStorePtr;
use super::Epoch;
/// Calculates the primary selection threshold for a given authority, taking
@@ -131,7 +131,7 @@ fn claim_secondary_slot(
slot_number: SlotNumber,
epoch: &Epoch,
keys: &[(AuthorityId, usize)],
keystore: &KeyStorePtr,
keystore: &SyncCryptoStorePtr,
author_secondary_vrf: bool,
) -> Option<(PreDigest, AuthorityId)> {
let Epoch { authorities, randomness, epoch_index, .. } = epoch;
@@ -154,7 +154,8 @@ fn claim_secondary_slot(
slot_number,
*epoch_index,
);
let result = keystore.read().sr25519_vrf_sign(
let result = SyncCryptoStore::sr25519_vrf_sign(
&**keystore,
AuthorityId::ID,
authority_id.as_ref(),
transcript_data,
@@ -169,7 +170,7 @@ fn claim_secondary_slot(
} else {
None
}
} else if keystore.read().has_keys(&[(authority_id.to_raw_vec(), AuthorityId::ID)]) {
} else if SyncCryptoStore::has_keys(&**keystore, &[(authority_id.to_raw_vec(), AuthorityId::ID)]) {
Some(PreDigest::SecondaryPlain(SecondaryPlainPreDigest {
slot_number,
authority_index: *authority_index as u32,
@@ -194,7 +195,7 @@ fn claim_secondary_slot(
pub fn claim_slot(
slot_number: SlotNumber,
epoch: &Epoch,
keystore: &KeyStorePtr,
keystore: &SyncCryptoStorePtr,
) -> Option<(PreDigest, AuthorityId)> {
let authorities = epoch.authorities.iter()
.enumerate()
@@ -208,7 +209,7 @@ pub fn claim_slot(
pub fn claim_slot_using_keys(
slot_number: SlotNumber,
epoch: &Epoch,
keystore: &KeyStorePtr,
keystore: &SyncCryptoStorePtr,
keys: &[(AuthorityId, usize)],
) -> Option<(PreDigest, AuthorityId)> {
claim_primary_slot(slot_number, epoch, epoch.config.c, keystore, &keys)
@@ -220,7 +221,7 @@ pub fn claim_slot_using_keys(
slot_number,
&epoch,
keys,
keystore,
&keystore,
epoch.config.allowed_slots.is_secondary_vrf_slots_allowed(),
)
} else {
@@ -237,7 +238,7 @@ fn claim_primary_slot(
slot_number: SlotNumber,
epoch: &Epoch,
c: (u64, u64),
keystore: &KeyStorePtr,
keystore: &SyncCryptoStorePtr,
keys: &[(AuthorityId, usize)],
) -> Option<(PreDigest, AuthorityId)> {
let Epoch { authorities, randomness, epoch_index, .. } = epoch;
@@ -259,7 +260,8 @@ fn claim_primary_slot(
// be empty. Therefore, this division in `calculate_threshold` is safe.
let threshold = super::authorship::calculate_primary_threshold(c, authorities, *authority_index);
let result = keystore.read().sr25519_vrf_sign(
let result = SyncCryptoStore::sr25519_vrf_sign(
&**keystore,
AuthorityId::ID,
authority_id.as_ref(),
transcript_data,
@@ -289,13 +291,16 @@ fn claim_primary_slot(
#[cfg(test)]
mod tests {
use super::*;
use std::sync::Arc;
use sp_core::{sr25519::Pair, crypto::Pair as _};
use sp_consensus_babe::{AuthorityId, BabeEpochConfiguration, AllowedSlots};
use sc_keystore::LocalKeystore;
#[test]
fn claim_secondary_plain_slot_works() {
let keystore = sc_keystore::Store::new_in_memory();
let valid_public_key = keystore.write().sr25519_generate_new(
let keystore: SyncCryptoStorePtr = Arc::new(LocalKeystore::in_memory());
let valid_public_key = SyncCryptoStore::sr25519_generate_new(
&*keystore,
AuthorityId::ID,
Some(sp_core::crypto::DEV_PHRASE),
).unwrap();
@@ -82,14 +82,14 @@ use sp_consensus::{ImportResult, CanAuthorWith};
use sp_consensus::import_queue::{
BoxJustificationImport, BoxFinalityProofImport,
};
use sp_core::{crypto::Public, traits::BareCryptoStore};
use sp_core::crypto::Public;
use sp_application_crypto::AppKey;
use sp_keystore::{SyncCryptoStorePtr, SyncCryptoStore};
use sp_runtime::{
generic::{BlockId, OpaqueDigestItemId}, Justification,
traits::{Block as BlockT, Header, DigestItemFor, Zero},
};
use sp_api::{ProvideRuntimeApi, NumberFor};
use sc_keystore::KeyStorePtr;
use parking_lot::Mutex;
use sp_inherents::{InherentDataProviders, InherentData};
use sc_telemetry::{telemetry, CONSENSUS_TRACE, CONSENSUS_DEBUG};
@@ -328,7 +328,7 @@ impl std::ops::Deref for Config {
/// Parameters for BABE.
pub struct BabeParams<B: BlockT, C, E, I, SO, SC, CAW> {
/// The keystore that manages the keys of the node.
pub keystore: KeyStorePtr,
pub keystore: SyncCryptoStorePtr,
/// The client to use
pub client: Arc<C>,
@@ -468,7 +468,7 @@ struct BabeSlotWorker<B: BlockT, C, E, I, SO> {
env: E,
sync_oracle: SO,
force_authoring: bool,
keystore: KeyStorePtr,
keystore: SyncCryptoStorePtr,
epoch_changes: SharedEpochChanges<B, Epoch>,
slot_notification_sinks: SlotNotificationSinks<B>,
config: Config,
@@ -597,15 +597,15 @@ impl<B, C, E, I, Error, SO> sc_consensus_slots::SimpleSlotWorker<B> for BabeSlot
// add it to a digest item.
let public_type_pair = public.clone().into();
let public = public.to_raw_vec();
let signature = keystore.read()
.sign_with(
<AuthorityId as AppKey>::ID,
&public_type_pair,
header_hash.as_ref()
)
.map_err(|e| sp_consensus::Error::CannotSign(
public.clone(), e.to_string(),
))?;
let signature = SyncCryptoStore::sign_with(
&*keystore,
<AuthorityId as AppKey>::ID,
&public_type_pair,
header_hash.as_ref()
)
.map_err(|e| sp_consensus::Error::CannotSign(
public.clone(), e.to_string(),
))?;
let signature: AuthoritySignature = signature.clone().try_into()
.map_err(|_| sp_consensus::Error::InvalidSignature(
signature, public
@@ -1492,7 +1492,7 @@ pub mod test_helpers {
slot_number: u64,
parent: &B::Header,
client: &C,
keystore: &KeyStorePtr,
keystore: SyncCryptoStorePtr,
link: &BabeLink<B>,
) -> Option<PreDigest> where
B: BlockT,
@@ -1514,7 +1514,7 @@ pub mod test_helpers {
authorship::claim_slot(
slot_number,
&epoch,
keystore,
&keystore,
).map(|(digest, _)| digest)
}
}
@@ -21,7 +21,11 @@
#![allow(deprecated)]
use super::*;
use authorship::claim_slot;
use sp_core::{crypto::Pair, vrf::make_transcript as transcript_from_data};
use sp_core::crypto::Pair;
use sp_keystore::{
SyncCryptoStore,
vrf::make_transcript as transcript_from_data,
};
use sp_consensus_babe::{
AuthorityPair,
SlotNumber,
@@ -46,6 +50,8 @@ use rand_chacha::{
rand_core::SeedableRng,
ChaChaRng,
};
use sc_keystore::LocalKeystore;
use sp_application_crypto::key_types::BABE;
type Item = DigestItem<Hash>;
@@ -384,8 +390,9 @@ fn run_one_test(
let select_chain = peer.select_chain().expect("Full client has select_chain");
let keystore_path = tempfile::tempdir().expect("Creates keystore path");
let keystore = sc_keystore::Store::open(keystore_path.path(), None).expect("Creates keystore");
keystore.write().insert_ephemeral_from_seed::<AuthorityPair>(seed).expect("Generates authority key");
let keystore: SyncCryptoStorePtr = Arc::new(LocalKeystore::open(keystore_path.path(), None)
.expect("Creates keystore"));
SyncCryptoStore::sr25519_generate_new(&*keystore, BABE, Some(seed)).expect("Generates authority key");
keystore_paths.push(keystore_path);
let mut got_own = false;
@@ -432,7 +439,6 @@ fn run_one_test(
can_author_with: sp_consensus::AlwaysCanAuthor,
}).expect("Starts babe"));
}
futures::executor::block_on(future::select(
futures::future::poll_fn(move |cx| {
let mut net = net.lock();
@@ -516,14 +522,15 @@ fn sig_is_not_pre_digest() {
fn can_author_block() {
sp_tracing::try_init_simple();
let keystore_path = tempfile::tempdir().expect("Creates keystore path");
let keystore = sc_keystore::Store::open(keystore_path.path(), None).expect("Creates keystore");
let pair = keystore.write().insert_ephemeral_from_seed::<AuthorityPair>("//Alice")
let keystore: SyncCryptoStorePtr = Arc::new(LocalKeystore::open(keystore_path.path(), None)
.expect("Creates keystore"));
let public = SyncCryptoStore::sr25519_generate_new(&*keystore, BABE, Some("//Alice"))
.expect("Generates authority pair");
let mut i = 0;
let epoch = Epoch {
start_slot: 0,
authorities: vec![(pair.public(), 1)],
authorities: vec![(public.into(), 1)],
randomness: [0; 32],
epoch_index: 1,
duration: 100,
@@ -823,13 +830,14 @@ fn verify_slots_are_strictly_increasing() {
fn babe_transcript_generation_match() {
sp_tracing::try_init_simple();
let keystore_path = tempfile::tempdir().expect("Creates keystore path");
let keystore = sc_keystore::Store::open(keystore_path.path(), None).expect("Creates keystore");
let pair = keystore.write().insert_ephemeral_from_seed::<AuthorityPair>("//Alice")
let keystore: SyncCryptoStorePtr = Arc::new(LocalKeystore::open(keystore_path.path(), None)
.expect("Creates keystore"));
let public = SyncCryptoStore::sr25519_generate_new(&*keystore, BABE, Some("//Alice"))
.expect("Generates authority pair");
let epoch = Epoch {
start_slot: 0,
authorities: vec![(pair.public(), 1)],
authorities: vec![(public.into(), 1)],
randomness: [0; 32],
epoch_index: 1,
duration: 100,
@@ -27,7 +27,6 @@ sc-client-api = { path = "../../api", version = "2.0.0" }
sc-consensus-babe = { path = "../../consensus/babe", version = "0.8.0" }
sc-consensus-epochs = { path = "../../consensus/epochs", version = "0.8.0" }
sp-consensus-babe = { path = "../../../primitives/consensus/babe", version = "0.8.0" }
sc-keystore = { path = "../../keystore", version = "2.0.0" }
sc-transaction-pool = { path = "../../transaction-pool", version = "2.0.0" }
sp-blockchain = { path = "../../../primitives/blockchain", version = "2.0.0" }
@@ -35,6 +34,7 @@ sp-consensus = { package = "sp-consensus", path = "../../../primitives/consensus
sp-inherents = { path = "../../../primitives/inherents", version = "2.0.0" }
sp-runtime = { path = "../../../primitives/runtime", version = "2.0.0" }
sp-core = { path = "../../../primitives/core", version = "2.0.0" }
sp-keystore = { path = "../../../primitives/keystore", version = "0.8.0" }
sp-api = { path = "../../../primitives/api", version = "2.0.0" }
sp-transaction-pool = { path = "../../../primitives/transaction-pool", version = "2.0.0" }
sp-timestamp = { path = "../../../primitives/timestamp", version = "2.0.0" }
@@ -33,12 +33,12 @@ use sc_consensus_babe::{
register_babe_inherent_data_provider, INTERMEDIATE_KEY,
};
use sc_consensus_epochs::{SharedEpochChanges, descendent_query};
use sc_keystore::KeyStorePtr;
use sp_api::{ProvideRuntimeApi, TransactionFor};
use sp_blockchain::{HeaderBackend, HeaderMetadata};
use sp_consensus::BlockImportParams;
use sp_consensus_babe::{BabeApi, inherents::BabeInherentData};
use sp_keystore::SyncCryptoStorePtr;
use sp_inherents::{InherentDataProviders, InherentData, ProvideInherentData, InherentIdentifier};
use sp_runtime::{
traits::{DigestItemFor, DigestFor, Block as BlockT, Header as _},
@@ -50,7 +50,7 @@ use sp_timestamp::{InherentType, InherentError, INHERENT_IDENTIFIER};
/// Intended for use with BABE runtimes.
pub struct BabeConsensusDataProvider<B: BlockT, C> {
/// shared reference to keystore
keystore: KeyStorePtr,
keystore: SyncCryptoStorePtr,
/// Shared reference to the client.
client: Arc<C>,
@@ -70,7 +70,7 @@ impl<B, C> BabeConsensusDataProvider<B, C>
{
pub fn new(
client: Arc<C>,
keystore: KeyStorePtr,
keystore: SyncCryptoStorePtr,
provider: &InherentDataProviders,
epoch_changes: SharedEpochChanges<B, Epoch>,
) -> Result<Self, Error> {
@@ -194,4 +194,4 @@ impl ProvideInherentData for SlotTimestampProvider {
fn error_to_string(&self, error: &[u8]) -> Option<String> {
InherentError::try_from(&INHERENT_IDENTIFIER, error).map(|e| format!("{:?}", e))
}
}
}
@@ -30,6 +30,7 @@ sp-utils = { version = "2.0.0", path = "../../primitives/utils" }
sp-consensus = { version = "0.8.0", path = "../../primitives/consensus/common" }
sc-consensus = { version = "0.8.0", path = "../../client/consensus/common" }
sp-core = { version = "2.0.0", path = "../../primitives/core" }
sp-keystore = { version = "0.8.0", path = "../../primitives/keystore" }
sp-api = { version = "2.0.0", path = "../../primitives/api" }
sc-telemetry = { version = "2.0.0", path = "../telemetry" }
sc-keystore = { version = "2.0.0", path = "../keystore" }
@@ -35,7 +35,7 @@ use parking_lot::Mutex;
use prometheus_endpoint::Registry;
use std::{pin::Pin, sync::Arc, task::{Context, Poll}};
use sp_core::traits::BareCryptoStorePtr;
use sp_keystore::SyncCryptoStorePtr;
use finality_grandpa::Message::{Prevote, Precommit, PrimaryPropose};
use finality_grandpa::{voter, voter_set::VoterSet};
use sc_network::{NetworkService, ReputationChange};
@@ -107,7 +107,7 @@ mod benefit {
/// A type that ties together our local authority id and a keystore where it is
/// available for signing.
pub struct LocalIdKeystore((AuthorityId, BareCryptoStorePtr));
pub struct LocalIdKeystore((AuthorityId, SyncCryptoStorePtr));
impl LocalIdKeystore {
/// Returns a reference to our local authority id.
@@ -116,19 +116,13 @@ impl LocalIdKeystore {
}
/// Returns a reference to the keystore.
fn keystore(&self) -> &BareCryptoStorePtr {
&(self.0).1
fn keystore(&self) -> SyncCryptoStorePtr {
(self.0).1.clone()
}
}
impl AsRef<BareCryptoStorePtr> for LocalIdKeystore {
fn as_ref(&self) -> &BareCryptoStorePtr {
self.keystore()
}
}
impl From<(AuthorityId, BareCryptoStorePtr)> for LocalIdKeystore {
fn from(inner: (AuthorityId, BareCryptoStorePtr)) -> LocalIdKeystore {
impl From<(AuthorityId, SyncCryptoStorePtr)> for LocalIdKeystore {
fn from(inner: (AuthorityId, SyncCryptoStorePtr)) -> LocalIdKeystore {
LocalIdKeystore(inner)
}
}
@@ -696,7 +690,7 @@ impl<Block: BlockT> Sink<Message<Block>> for OutgoingMessages<Block>
if let Some(ref keystore) = self.keystore {
let target_hash = *(msg.target().0);
let signed = sp_finality_grandpa::sign_message(
keystore.as_ref(),
keystore.keystore(),
msg,
keystore.local_id().clone(),
self.round,
@@ -76,8 +76,8 @@ use sp_inherents::InherentDataProviders;
use sp_consensus::{SelectChain, BlockImport};
use sp_core::{
crypto::Public,
traits::BareCryptoStorePtr,
};
use sp_keystore::{SyncCryptoStorePtr, SyncCryptoStore};
use sp_application_crypto::AppKey;
use sp_utils::mpsc::{tracing_unbounded, TracingUnboundedReceiver};
use sc_telemetry::{telemetry, CONSENSUS_INFO, CONSENSUS_DEBUG};
@@ -272,7 +272,7 @@ pub struct Config {
/// Some local identifier of the voter.
pub name: Option<String>,
/// The keystore that manages the keys of this node.
pub keystore: Option<BareCryptoStorePtr>,
pub keystore: Option<SyncCryptoStorePtr>,
}
impl Config {
@@ -609,7 +609,7 @@ fn global_communication<BE, Block: BlockT, C, N>(
voters: &Arc<VoterSet<AuthorityId>>,
client: Arc<C>,
network: &NetworkBridge<Block, N>,
keystore: Option<&BareCryptoStorePtr>,
keystore: Option<&SyncCryptoStorePtr>,
metrics: Option<until_imported::Metrics>,
) -> (
impl Stream<
@@ -1125,14 +1125,13 @@ pub fn setup_disabled_grandpa<Block: BlockT, Client, N>(
/// Returns the authority id of the node if it is in the current voter set, or `None`.
fn is_voter(
voters: &Arc<VoterSet<AuthorityId>>,
keystore: Option<&BareCryptoStorePtr>,
keystore: Option<&SyncCryptoStorePtr>,
) -> Option<AuthorityId> {
match keystore {
Some(keystore) => voters
.iter()
.find(|(p, _)| {
keystore.read()
.has_keys(&[(p.to_raw_vec(), AuthorityId::ID)])
SyncCryptoStore::has_keys(&**keystore, &[(p.to_raw_vec(), AuthorityId::ID)])
})
.map(|(p, _)| p.clone()),
None => None,
@@ -1142,14 +1141,14 @@ fn is_voter(
/// Returns the authority id of this node, if available.
fn authority_id<'a, I>(
authorities: &mut I,
keystore: Option<&BareCryptoStorePtr>,
keystore: Option<&SyncCryptoStorePtr>,
) -> Option<AuthorityId> where
I: Iterator<Item = &'a AuthorityId>,
{
match keystore {
Some(keystore) => {
authorities
.find(|p| keystore.read().has_keys(&[(p.to_raw_vec(), AuthorityId::ID)]))
.find(|p| SyncCryptoStore::has_keys(&**keystore, &[(p.to_raw_vec(), AuthorityId::ID)]))
.cloned()
},
None => None,
@@ -26,7 +26,7 @@ use finality_grandpa::{
BlockNumberOps, Error as GrandpaError, voter, voter_set::VoterSet
};
use log::{debug, info, warn};
use sp_core::traits::BareCryptoStorePtr;
use sp_keystore::SyncCryptoStorePtr;
use sp_consensus::SelectChain;
use sc_client_api::backend::Backend;
use sp_utils::mpsc::TracingUnboundedReceiver;
@@ -216,7 +216,7 @@ struct ObserverWork<B: BlockT, BE, Client, N: NetworkT<B>> {
client: Arc<Client>,
network: NetworkBridge<B, N>,
persistent_data: PersistentData<B>,
keystore: Option<BareCryptoStorePtr>,
keystore: Option<SyncCryptoStorePtr>,
voter_commands_rx: TracingUnboundedReceiver<VoterCommand<B::Hash, NumberFor<B>>>,
justification_sender: Option<GrandpaJustificationSender<B>>,
_phantom: PhantomData<BE>,
@@ -234,7 +234,7 @@ where
client: Arc<Client>,
network: NetworkBridge<B, Network>,
persistent_data: PersistentData<B>,
keystore: Option<BareCryptoStorePtr>,
keystore: Option<SyncCryptoStorePtr>,
voter_commands_rx: TracingUnboundedReceiver<VoterCommand<B::Hash, NumberFor<B>>>,
justification_sender: Option<GrandpaJustificationSender<B>>,
) -> Self {
@@ -26,7 +26,7 @@ use sc_network_test::{
TestClient, TestNetFactory, FullPeerConfig,
};
use sc_network::config::{ProtocolConfig, BoxFinalityProofRequestBuilder};
use parking_lot::Mutex;
use parking_lot::{RwLock, Mutex};
use futures_timer::Delay;
use tokio::runtime::{Runtime, Handle};
use sp_keyring::Ed25519Keyring;
@@ -43,6 +43,7 @@ use parity_scale_codec::Decode;
use sp_runtime::traits::{Block as BlockT, Header as HeaderT, HashFor};
use sp_runtime::generic::{BlockId, DigestItem};
use sp_core::{H256, crypto::Public};
use sp_keystore::{SyncCryptoStorePtr, SyncCryptoStore};
use sp_finality_grandpa::{GRANDPA_ENGINE_ID, AuthorityList, EquivocationProof, GrandpaApi, OpaqueKeyOwnershipProof};
use sp_state_machine::{InMemoryBackend, prove_read, read_proof_check};
@@ -53,6 +54,8 @@ use finality_proof::{
use consensus_changes::ConsensusChanges;
use sc_block_builder::BlockBuilderProvider;
use sc_consensus::LongestChain;
use sc_keystore::LocalKeystore;
use sp_application_crypto::key_types::GRANDPA;
type TestLinkHalf =
LinkHalf<Block, PeersFullClient, LongestChain<substrate_test_runtime_client::Backend, Block>>;
@@ -285,10 +288,11 @@ fn make_ids(keys: &[Ed25519Keyring]) -> AuthorityList {
keys.iter().map(|key| key.clone().public().into()).map(|id| (id, 1)).collect()
}
fn create_keystore(authority: Ed25519Keyring) -> (BareCryptoStorePtr, tempfile::TempDir) {
fn create_keystore(authority: Ed25519Keyring) -> (SyncCryptoStorePtr, tempfile::TempDir) {
let keystore_path = tempfile::tempdir().expect("Creates keystore path");
let keystore = sc_keystore::Store::open(keystore_path.path(), None).expect("Creates keystore");
keystore.write().insert_ephemeral_from_seed::<AuthorityPair>(&authority.to_seed())
let keystore = Arc::new(LocalKeystore::open(keystore_path.path(), None)
.expect("Creates keystore"));
SyncCryptoStore::ed25519_generate_new(&*keystore, GRANDPA, Some(&authority.to_seed()))
.expect("Creates authority key");
(keystore, keystore_path)
@@ -1053,7 +1057,7 @@ fn voter_persists_its_votes() {
voter_rx: TracingUnboundedReceiver<()>,
net: Arc<Mutex<GrandpaTestNet>>,
client: PeersClient,
keystore: BareCryptoStorePtr,
keystore: SyncCryptoStorePtr,
}
impl Future for ResettableVoter {
@@ -1533,7 +1537,7 @@ type TestEnvironment<N, VR> = Environment<
fn test_environment<N, VR>(
link: &TestLinkHalf,
keystore: Option<BareCryptoStorePtr>,
keystore: Option<SyncCryptoStorePtr>,
network_service: N,
voting_rule: VR,
) -> TestEnvironment<N, VR>
@@ -15,9 +15,13 @@ targets = ["x86_64-unknown-linux-gnu"]
[dependencies]
async-trait = "0.1.30"
derive_more = "0.99.2"
sp-core = { version = "2.0.0", path = "../../primitives/core" }
futures = "0.3.4"
futures-util = "0.3.4"
sp-application-crypto = { version = "2.0.0", path = "../../primitives/application-crypto" }
sp-core = { version = "2.0.0", path = "../../primitives/core" }
sp-keystore = { version = "0.8.0", path = "../../primitives/keystore" }
hex = "0.4.0"
merlin = { version = "2.0", default-features = false }
parking_lot = "0.10.0"
@@ -17,19 +17,13 @@
//! Keystore (and session key management) for ed25519 based chains like Polkadot.
#![warn(missing_docs)]
use std::{collections::{HashMap, HashSet}, path::PathBuf, fs::{self, File}, io::{self, Write}, sync::Arc};
use sp_core::{
crypto::{IsWrappedBy, CryptoTypePublicPair, KeyTypeId, Pair as PairT, ExposeSecret, SecretString, Public},
traits::{BareCryptoStore, Error as TraitError},
sr25519::{Public as Sr25519Public, Pair as Sr25519Pair},
vrf::{VRFTranscriptData, VRFSignature, make_transcript},
Encode,
};
use sp_application_crypto::{AppKey, AppPublic, AppPair, ed25519, sr25519, ecdsa};
use parking_lot::RwLock;
use std::io;
use sp_core::crypto::KeyTypeId;
use sp_keystore::Error as TraitError;
/// Keystore pointer
pub type KeyStorePtr = Arc<RwLock<Store>>;
/// Local keystore implementation
mod local;
pub use local::LocalKeystore;
/// Keystore error.
#[derive(Debug, derive_more::Display, derive_more::From)]
@@ -86,507 +80,3 @@ impl std::error::Error for Error {
}
}
/// Key store.
///
/// Stores key pairs in a file system store + short lived key pairs in memory.
///
/// Every pair that is being generated by a `seed`, will be placed in memory.
pub struct Store {
path: Option<PathBuf>,
/// Map over `(KeyTypeId, Raw public key)` -> `Key phrase/seed`
additional: HashMap<(KeyTypeId, Vec<u8>), String>,
password: Option<SecretString>,
}
impl Store {
/// Open the store at the given path.
///
/// Optionally takes a password that will be used to encrypt/decrypt the keys.
pub fn open<T: Into<PathBuf>>(path: T, password: Option<SecretString>) -> Result<KeyStorePtr> {
let path = path.into();
fs::create_dir_all(&path)?;
let instance = Self { path: Some(path), additional: HashMap::new(), password };
Ok(Arc::new(RwLock::new(instance)))
}
/// Create a new in-memory store.
pub fn new_in_memory() -> KeyStorePtr {
Arc::new(RwLock::new(Self {
path: None,
additional: HashMap::new(),
password: None
}))
}
/// Get the key phrase for the given public key and key type from the in-memory store.
fn get_additional_pair(
&self,
public: &[u8],
key_type: KeyTypeId,
) -> Option<&String> {
let key = (key_type, public.to_vec());
self.additional.get(&key)
}
/// Insert the given public/private key pair with the given key type.
///
/// Does not place it into the file system store.
fn insert_ephemeral_pair<Pair: PairT>(&mut self, pair: &Pair, seed: &str, key_type: KeyTypeId) {
let key = (key_type, pair.public().to_raw_vec());
self.additional.insert(key, seed.into());
}
/// Insert a new key with anonymous crypto.
///
/// Places it into the file system store.
fn insert_unknown(&self, key_type: KeyTypeId, suri: &str, public: &[u8]) -> Result<()> {
if let Some(path) = self.key_file_path(public, key_type) {
let mut file = File::create(path).map_err(Error::Io)?;
serde_json::to_writer(&file, &suri).map_err(Error::Json)?;
file.flush().map_err(Error::Io)?;
}
Ok(())
}
/// Insert a new key.
///
/// Places it into the file system store.
pub fn insert_by_type<Pair: PairT>(&self, key_type: KeyTypeId, suri: &str) -> Result<Pair> {
let pair = Pair::from_string(
suri,
self.password()
).map_err(|_| Error::InvalidSeed)?;
self.insert_unknown(key_type, suri, pair.public().as_slice())
.map_err(|_| Error::Unavailable)?;
Ok(pair)
}
/// Insert a new key.
///
/// Places it into the file system store.
pub fn insert<Pair: AppPair>(&self, suri: &str) -> Result<Pair> {
self.insert_by_type::<Pair::Generic>(Pair::ID, suri).map(Into::into)
}
/// Generate a new key.
///
/// Places it into the file system store.
pub fn generate_by_type<Pair: PairT>(&self, key_type: KeyTypeId) -> Result<Pair> {
let (pair, phrase, _) = Pair::generate_with_phrase(self.password());
if let Some(path) = self.key_file_path(pair.public().as_slice(), key_type) {
let mut file = File::create(path)?;
serde_json::to_writer(&file, &phrase)?;
file.flush()?;
}
Ok(pair)
}
/// Generate a new key.
///
/// Places it into the file system store.
pub fn generate<Pair: AppPair>(&self) -> Result<Pair> {
self.generate_by_type::<Pair::Generic>(Pair::ID).map(Into::into)
}
/// Create a new key from seed.
///
/// Does not place it into the file system store.
pub fn insert_ephemeral_from_seed_by_type<Pair: PairT>(
&mut self,
seed: &str,
key_type: KeyTypeId,
) -> Result<Pair> {
let pair = Pair::from_string(seed, None).map_err(|_| Error::InvalidSeed)?;
self.insert_ephemeral_pair(&pair, seed, key_type);
Ok(pair)
}
/// Create a new key from seed.
///
/// Does not place it into the file system store.
pub fn insert_ephemeral_from_seed<Pair: AppPair>(&mut self, seed: &str) -> Result<Pair> {
self.insert_ephemeral_from_seed_by_type::<Pair::Generic>(seed, Pair::ID).map(Into::into)
}
/// Get the key phrase for a given public key and key type.
fn key_phrase_by_type(&self, public: &[u8], key_type: KeyTypeId) -> Result<String> {
if let Some(phrase) = self.get_additional_pair(public, key_type) {
return Ok(phrase.clone())
}
let path = self.key_file_path(public, key_type).ok_or_else(|| Error::Unavailable)?;
let file = File::open(path)?;
serde_json::from_reader(&file).map_err(Into::into)
}
/// Get a key pair for the given public key and key type.
pub fn key_pair_by_type<Pair: PairT>(&self,
public: &Pair::Public,
key_type: KeyTypeId,
) -> Result<Pair> {
let phrase = self.key_phrase_by_type(public.as_slice(), key_type)?;
let pair = Pair::from_string(
&phrase,
self.password(),
).map_err(|_| Error::InvalidPhrase)?;
if &pair.public() == public {
Ok(pair)
} else {
Err(Error::InvalidPassword)
}
}
/// Get a key pair for the given public key.
pub fn key_pair<Pair: AppPair>(&self, public: &<Pair as AppKey>::Public) -> Result<Pair> {
self.key_pair_by_type::<Pair::Generic>(IsWrappedBy::from_ref(public), Pair::ID).map(Into::into)
}
/// Get public keys of all stored keys that match the key type.
///
/// This will just use the type of the public key (a list of which to be returned) in order
/// to determine the key type. Unless you use a specialized application-type public key, then
/// this only give you keys registered under generic cryptography, and will not return keys
/// registered under the application type.
pub fn public_keys<Public: AppPublic>(&self) -> Result<Vec<Public>> {
self.raw_public_keys(Public::ID)
.map(|v| {
v.into_iter()
.map(|k| Public::from_slice(k.as_slice()))
.collect()
})
}
/// Returns the file path for the given public key and key type.
fn key_file_path(&self, public: &[u8], key_type: KeyTypeId) -> Option<PathBuf> {
let mut buf = self.path.as_ref()?.clone();
let key_type = hex::encode(key_type.0);
let key = hex::encode(public);
buf.push(key_type + key.as_str());
Some(buf)
}
/// Returns a list of raw public keys filtered by `KeyTypeId`
fn raw_public_keys(&self, id: KeyTypeId) -> Result<Vec<Vec<u8>>> {
let mut public_keys: Vec<Vec<u8>> = self.additional.keys()
.into_iter()
.filter_map(|k| if k.0 == id { Some(k.1.clone()) } else { None })
.collect();
if let Some(path) = &self.path {
for entry in fs::read_dir(&path)? {
let entry = entry?;
let path = entry.path();
// skip directories and non-unicode file names (hex is unicode)
if let Some(name) = path.file_name().and_then(|n| n.to_str()) {
match hex::decode(name) {
Ok(ref hex) if hex.len() > 4 => {
if &hex[0..4] != &id.0 {
continue;
}
let public = hex[4..].to_vec();
public_keys.push(public);
}
_ => continue,
}
}
}
}
Ok(public_keys)
}
}
impl BareCryptoStore for Store {
fn keys(
&self,
id: KeyTypeId
) -> std::result::Result<Vec<CryptoTypePublicPair>, TraitError> {
let raw_keys = self.raw_public_keys(id)?;
Ok(raw_keys.into_iter()
.fold(Vec::new(), |mut v, k| {
v.push(CryptoTypePublicPair(sr25519::CRYPTO_ID, k.clone()));
v.push(CryptoTypePublicPair(ed25519::CRYPTO_ID, k.clone()));
v.push(CryptoTypePublicPair(ecdsa::CRYPTO_ID, k));
v
}))
}
fn supported_keys(
&self,
id: KeyTypeId,
keys: Vec<CryptoTypePublicPair>
) -> std::result::Result<Vec<CryptoTypePublicPair>, TraitError> {
let all_keys = self.keys(id)?.into_iter().collect::<HashSet<_>>();
Ok(keys.into_iter()
.filter(|key| all_keys.contains(key))
.collect::<Vec<_>>())
}
fn sign_with(
&self,
id: KeyTypeId,
key: &CryptoTypePublicPair,
msg: &[u8],
) -> std::result::Result<Vec<u8>, TraitError> {
match key.0 {
ed25519::CRYPTO_ID => {
let pub_key = ed25519::Public::from_slice(key.1.as_slice());
let key_pair: ed25519::Pair = self
.key_pair_by_type::<ed25519::Pair>(&pub_key, id)
.map_err(|e| TraitError::from(e))?;
Ok(key_pair.sign(msg).encode())
}
sr25519::CRYPTO_ID => {
let pub_key = sr25519::Public::from_slice(key.1.as_slice());
let key_pair: sr25519::Pair = self
.key_pair_by_type::<sr25519::Pair>(&pub_key, id)
.map_err(|e| TraitError::from(e))?;
Ok(key_pair.sign(msg).encode())
},
ecdsa::CRYPTO_ID => {
let pub_key = ecdsa::Public::from_slice(key.1.as_slice());
let key_pair: ecdsa::Pair = self
.key_pair_by_type::<ecdsa::Pair>(&pub_key, id)
.map_err(|e| TraitError::from(e))?;
Ok(key_pair.sign(msg).encode())
}
_ => Err(TraitError::KeyNotSupported(id))
}
}
fn sr25519_public_keys(&self, key_type: KeyTypeId) -> Vec<sr25519::Public> {
self.raw_public_keys(key_type)
.map(|v| {
v.into_iter()
.map(|k| sr25519::Public::from_slice(k.as_slice()))
.collect()
})
.unwrap_or_default()
}
fn sr25519_generate_new(
&mut self,
id: KeyTypeId,
seed: Option<&str>,
) -> std::result::Result<sr25519::Public, TraitError> {
let pair = match seed {
Some(seed) => self.insert_ephemeral_from_seed_by_type::<sr25519::Pair>(seed, id),
None => self.generate_by_type::<sr25519::Pair>(id),
}.map_err(|e| -> TraitError { e.into() })?;
Ok(pair.public())
}
fn ed25519_public_keys(&self, key_type: KeyTypeId) -> Vec<ed25519::Public> {
self.raw_public_keys(key_type)
.map(|v| {
v.into_iter()
.map(|k| ed25519::Public::from_slice(k.as_slice()))
.collect()
})
.unwrap_or_default()
}
fn ed25519_generate_new(
&mut self,
id: KeyTypeId,
seed: Option<&str>,
) -> std::result::Result<ed25519::Public, TraitError> {
let pair = match seed {
Some(seed) => self.insert_ephemeral_from_seed_by_type::<ed25519::Pair>(seed, id),
None => self.generate_by_type::<ed25519::Pair>(id),
}.map_err(|e| -> TraitError { e.into() })?;
Ok(pair.public())
}
fn ecdsa_public_keys(&self, key_type: KeyTypeId) -> Vec<ecdsa::Public> {
self.raw_public_keys(key_type)
.map(|v| {
v.into_iter()
.map(|k| ecdsa::Public::from_slice(k.as_slice()))
.collect()
})
.unwrap_or_default()
}
fn ecdsa_generate_new(
&mut self,
id: KeyTypeId,
seed: Option<&str>,
) -> std::result::Result<ecdsa::Public, TraitError> {
let pair = match seed {
Some(seed) => self.insert_ephemeral_from_seed_by_type::<ecdsa::Pair>(seed, id),
None => self.generate_by_type::<ecdsa::Pair>(id),
}.map_err(|e| -> TraitError { e.into() })?;
Ok(pair.public())
}
fn insert_unknown(&mut self, key_type: KeyTypeId, suri: &str, public: &[u8])
-> std::result::Result<(), ()>
{
Store::insert_unknown(self, key_type, suri, public).map_err(|_| ())
}
fn password(&self) -> Option<&str> {
self.password.as_ref()
.map(|p| p.expose_secret())
.map(|p| p.as_str())
}
fn has_keys(&self, public_keys: &[(Vec<u8>, KeyTypeId)]) -> bool {
public_keys.iter().all(|(p, t)| self.key_phrase_by_type(&p, *t).is_ok())
}
fn sr25519_vrf_sign(
&self,
key_type: KeyTypeId,
public: &Sr25519Public,
transcript_data: VRFTranscriptData,
) -> std::result::Result<VRFSignature, TraitError> {
let transcript = make_transcript(transcript_data);
let pair = self.key_pair_by_type::<Sr25519Pair>(public, key_type)
.map_err(|e| TraitError::PairNotFound(e.to_string()))?;
let (inout, proof, _) = pair.as_ref().vrf_sign(transcript);
Ok(VRFSignature {
output: inout.to_output(),
proof,
})
}
}
#[cfg(test)]
mod tests {
use super::*;
use tempfile::TempDir;
use sp_core::{testing::SR25519, crypto::Ss58Codec};
use std::str::FromStr;
#[test]
fn basic_store() {
let temp_dir = TempDir::new().unwrap();
let store = Store::open(temp_dir.path(), None).unwrap();
assert!(store.read().public_keys::<ed25519::AppPublic>().unwrap().is_empty());
let key: ed25519::AppPair = store.write().generate().unwrap();
let key2: ed25519::AppPair = store.read().key_pair(&key.public()).unwrap();
assert_eq!(key.public(), key2.public());
assert_eq!(store.read().public_keys::<ed25519::AppPublic>().unwrap()[0], key.public());
}
#[test]
fn test_insert_ephemeral_from_seed() {
let temp_dir = TempDir::new().unwrap();
let store = Store::open(temp_dir.path(), None).unwrap();
let pair: ed25519::AppPair = store
.write()
.insert_ephemeral_from_seed("0x3d97c819d68f9bafa7d6e79cb991eebcd77d966c5334c0b94d9e1fa7ad0869dc")
.unwrap();
assert_eq!(
"5DKUrgFqCPV8iAXx9sjy1nyBygQCeiUYRFWurZGhnrn3HJCA",
pair.public().to_ss58check()
);
drop(store);
let store = Store::open(temp_dir.path(), None).unwrap();
// Keys generated from seed should not be persisted!
assert!(store.read().key_pair::<ed25519::AppPair>(&pair.public()).is_err());
}
#[test]
fn password_being_used() {
let password = String::from("password");
let temp_dir = TempDir::new().unwrap();
let store = Store::open(
temp_dir.path(),
Some(FromStr::from_str(password.as_str()).unwrap()),
).unwrap();
let pair: ed25519::AppPair = store.write().generate().unwrap();
assert_eq!(
pair.public(),
store.read().key_pair::<ed25519::AppPair>(&pair.public()).unwrap().public(),
);
// Without the password the key should not be retrievable
let store = Store::open(temp_dir.path(), None).unwrap();
assert!(store.read().key_pair::<ed25519::AppPair>(&pair.public()).is_err());
let store = Store::open(
temp_dir.path(),
Some(FromStr::from_str(password.as_str()).unwrap()),
).unwrap();
assert_eq!(
pair.public(),
store.read().key_pair::<ed25519::AppPair>(&pair.public()).unwrap().public(),
);
}
#[test]
fn public_keys_are_returned() {
let temp_dir = TempDir::new().unwrap();
let store = Store::open(temp_dir.path(), None).unwrap();
let mut public_keys = Vec::new();
for i in 0..10 {
public_keys.push(store.write().generate::<ed25519::AppPair>().unwrap().public());
public_keys.push(store.write().insert_ephemeral_from_seed::<ed25519::AppPair>(
&format!("0x3d97c819d68f9bafa7d6e79cb991eebcd7{}d966c5334c0b94d9e1fa7ad0869dc", i),
).unwrap().public());
}
// Generate a key of a different type
store.write().generate::<sr25519::AppPair>().unwrap();
public_keys.sort();
let mut store_pubs = store.read().public_keys::<ed25519::AppPublic>().unwrap();
store_pubs.sort();
assert_eq!(public_keys, store_pubs);
}
#[test]
fn store_unknown_and_extract_it() {
let temp_dir = TempDir::new().unwrap();
let store = Store::open(temp_dir.path(), None).unwrap();
let secret_uri = "//Alice";
let key_pair = sr25519::AppPair::from_string(secret_uri, None).expect("Generates key pair");
store.write().insert_unknown(
SR25519,
secret_uri,
key_pair.public().as_ref(),
).expect("Inserts unknown key");
let store_key_pair = store.read().key_pair_by_type::<sr25519::AppPair>(
&key_pair.public(),
SR25519,
).expect("Gets key pair from keystore");
assert_eq!(key_pair.public(), store_key_pair.public());
}
#[test]
fn store_ignores_files_with_invalid_name() {
let temp_dir = TempDir::new().unwrap();
let store = Store::open(temp_dir.path(), None).unwrap();
let file_name = temp_dir.path().join(hex::encode(&SR25519.0[..2]));
fs::write(file_name, "test").expect("Invalid file is written");
assert!(
store.read().sr25519_public_keys(SR25519).is_empty(),
);
}
}
@@ -0,0 +1,647 @@
// This file is part of Substrate.
// Copyright (C) 2019-2020 Parity Technologies (UK) Ltd.
// SPDX-License-Identifier: Apache-2.0
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//
//! Local keystore implementation
use std::{
collections::{HashMap, HashSet},
fs::{self, File},
io::Write,
path::PathBuf,
sync::Arc,
};
use async_trait::async_trait;
use parking_lot::RwLock;
use sp_core::{
crypto::{CryptoTypePublicPair, KeyTypeId, Pair as PairT, ExposeSecret, SecretString, Public},
sr25519::{Public as Sr25519Public, Pair as Sr25519Pair},
Encode,
};
use sp_keystore::{
CryptoStore,
SyncCryptoStorePtr,
Error as TraitError,
SyncCryptoStore,
vrf::{VRFTranscriptData, VRFSignature, make_transcript},
};
use sp_application_crypto::{ed25519, sr25519, ecdsa};
use crate::{Result, Error};
/// A local keystore that is either in-memory or filesystem-backed.
pub struct LocalKeystore(RwLock<KeystoreInner>);
impl LocalKeystore {
/// Create a local keystore from filesystem.
pub fn open<T: Into<PathBuf>>(path: T, password: Option<SecretString>) -> Result<Self> {
let inner = KeystoreInner::open(path, password)?;
Ok(Self(RwLock::new(inner)))
}
/// Create a local keystore in memory.
pub fn in_memory() -> Self {
let inner = KeystoreInner::new_in_memory();
Self(RwLock::new(inner))
}
}
#[async_trait]
impl CryptoStore for LocalKeystore {
async fn keys(&self, id: KeyTypeId) -> std::result::Result<Vec<CryptoTypePublicPair>, TraitError> {
SyncCryptoStore::keys(self, id)
}
async fn sr25519_public_keys(&self, id: KeyTypeId) -> Vec<sr25519::Public> {
SyncCryptoStore::sr25519_public_keys(self, id)
}
async fn sr25519_generate_new(
&self,
id: KeyTypeId,
seed: Option<&str>,
) -> std::result::Result<sr25519::Public, TraitError> {
SyncCryptoStore::sr25519_generate_new(self, id, seed)
}
async fn ed25519_public_keys(&self, id: KeyTypeId) -> Vec<ed25519::Public> {
SyncCryptoStore::ed25519_public_keys(self, id)
}
async fn ed25519_generate_new(
&self,
id: KeyTypeId,
seed: Option<&str>,
) -> std::result::Result<ed25519::Public, TraitError> {
SyncCryptoStore::ed25519_generate_new(self, id, seed)
}
async fn ecdsa_public_keys(&self, id: KeyTypeId) -> Vec<ecdsa::Public> {
SyncCryptoStore::ecdsa_public_keys(self, id)
}
async fn ecdsa_generate_new(
&self,
id: KeyTypeId,
seed: Option<&str>,
) -> std::result::Result<ecdsa::Public, TraitError> {
SyncCryptoStore::ecdsa_generate_new(self, id, seed)
}
async fn insert_unknown(&self, id: KeyTypeId, suri: &str, public: &[u8]) -> std::result::Result<(), ()> {
SyncCryptoStore::insert_unknown(self, id, suri, public)
}
async fn has_keys(&self, public_keys: &[(Vec<u8>, KeyTypeId)]) -> bool {
SyncCryptoStore::has_keys(self, public_keys)
}
async fn supported_keys(
&self,
id: KeyTypeId,
keys: Vec<CryptoTypePublicPair>,
) -> std::result::Result<Vec<CryptoTypePublicPair>, TraitError> {
SyncCryptoStore::supported_keys(self, id, keys)
}
async fn sign_with(
&self,
id: KeyTypeId,
key: &CryptoTypePublicPair,
msg: &[u8],
) -> std::result::Result<Vec<u8>, TraitError> {
SyncCryptoStore::sign_with(self, id, key, msg)
}
async fn sr25519_vrf_sign(
&self,
key_type: KeyTypeId,
public: &sr25519::Public,
transcript_data: VRFTranscriptData,
) -> std::result::Result<VRFSignature, TraitError> {
SyncCryptoStore::sr25519_vrf_sign(self, key_type, public, transcript_data)
}
}
impl SyncCryptoStore for LocalKeystore {
fn keys(
&self,
id: KeyTypeId
) -> std::result::Result<Vec<CryptoTypePublicPair>, TraitError> {
let raw_keys = self.0.read().raw_public_keys(id)?;
Ok(raw_keys.into_iter()
.fold(Vec::new(), |mut v, k| {
v.push(CryptoTypePublicPair(sr25519::CRYPTO_ID, k.clone()));
v.push(CryptoTypePublicPair(ed25519::CRYPTO_ID, k.clone()));
v.push(CryptoTypePublicPair(ecdsa::CRYPTO_ID, k));
v
}))
}
fn supported_keys(
&self,
id: KeyTypeId,
keys: Vec<CryptoTypePublicPair>
) -> std::result::Result<Vec<CryptoTypePublicPair>, TraitError> {
let all_keys = SyncCryptoStore::keys(self, id)?
.into_iter()
.collect::<HashSet<_>>();
Ok(keys.into_iter()
.filter(|key| all_keys.contains(key))
.collect::<Vec<_>>())
}
fn sign_with(
&self,
id: KeyTypeId,
key: &CryptoTypePublicPair,
msg: &[u8],
) -> std::result::Result<Vec<u8>, TraitError> {
match key.0 {
ed25519::CRYPTO_ID => {
let pub_key = ed25519::Public::from_slice(key.1.as_slice());
let key_pair: ed25519::Pair = self.0.read()
.key_pair_by_type::<ed25519::Pair>(&pub_key, id)
.map_err(|e| TraitError::from(e))?;
Ok(key_pair.sign(msg).encode())
}
sr25519::CRYPTO_ID => {
let pub_key = sr25519::Public::from_slice(key.1.as_slice());
let key_pair: sr25519::Pair = self.0.read()
.key_pair_by_type::<sr25519::Pair>(&pub_key, id)
.map_err(|e| TraitError::from(e))?;
Ok(key_pair.sign(msg).encode())
},
ecdsa::CRYPTO_ID => {
let pub_key = ecdsa::Public::from_slice(key.1.as_slice());
let key_pair: ecdsa::Pair = self.0.read()
.key_pair_by_type::<ecdsa::Pair>(&pub_key, id)
.map_err(|e| TraitError::from(e))?;
Ok(key_pair.sign(msg).encode())
}
_ => Err(TraitError::KeyNotSupported(id))
}
}
fn sr25519_public_keys(&self, key_type: KeyTypeId) -> Vec<sr25519::Public> {
self.0.read().raw_public_keys(key_type)
.map(|v| {
v.into_iter()
.map(|k| sr25519::Public::from_slice(k.as_slice()))
.collect()
})
.unwrap_or_default()
}
fn sr25519_generate_new(
&self,
id: KeyTypeId,
seed: Option<&str>,
) -> std::result::Result<sr25519::Public, TraitError> {
let pair = match seed {
Some(seed) => self.0.write().insert_ephemeral_from_seed_by_type::<sr25519::Pair>(seed, id),
None => self.0.write().generate_by_type::<sr25519::Pair>(id),
}.map_err(|e| -> TraitError { e.into() })?;
Ok(pair.public())
}
fn ed25519_public_keys(&self, key_type: KeyTypeId) -> Vec<ed25519::Public> {
self.0.read().raw_public_keys(key_type)
.map(|v| {
v.into_iter()
.map(|k| ed25519::Public::from_slice(k.as_slice()))
.collect()
})
.unwrap_or_default()
}
fn ed25519_generate_new(
&self,
id: KeyTypeId,
seed: Option<&str>,
) -> std::result::Result<ed25519::Public, TraitError> {
let pair = match seed {
Some(seed) => self.0.write().insert_ephemeral_from_seed_by_type::<ed25519::Pair>(seed, id),
None => self.0.write().generate_by_type::<ed25519::Pair>(id),
}.map_err(|e| -> TraitError { e.into() })?;
Ok(pair.public())
}
fn ecdsa_public_keys(&self, key_type: KeyTypeId) -> Vec<ecdsa::Public> {
self.0.read().raw_public_keys(key_type)
.map(|v| {
v.into_iter()
.map(|k| ecdsa::Public::from_slice(k.as_slice()))
.collect()
})
.unwrap_or_default()
}
fn ecdsa_generate_new(
&self,
id: KeyTypeId,
seed: Option<&str>,
) -> std::result::Result<ecdsa::Public, TraitError> {
let pair = match seed {
Some(seed) => self.0.write().insert_ephemeral_from_seed_by_type::<ecdsa::Pair>(seed, id),
None => self.0.write().generate_by_type::<ecdsa::Pair>(id),
}.map_err(|e| -> TraitError { e.into() })?;
Ok(pair.public())
}
fn insert_unknown(&self, key_type: KeyTypeId, suri: &str, public: &[u8])
-> std::result::Result<(), ()>
{
self.0.write().insert_unknown(key_type, suri, public).map_err(|_| ())
}
fn has_keys(&self, public_keys: &[(Vec<u8>, KeyTypeId)]) -> bool {
public_keys.iter().all(|(p, t)| self.0.read().key_phrase_by_type(&p, *t).is_ok())
}
fn sr25519_vrf_sign(
&self,
key_type: KeyTypeId,
public: &Sr25519Public,
transcript_data: VRFTranscriptData,
) -> std::result::Result<VRFSignature, TraitError> {
let transcript = make_transcript(transcript_data);
let pair = self.0.read().key_pair_by_type::<Sr25519Pair>(public, key_type)
.map_err(|e| TraitError::PairNotFound(e.to_string()))?;
let (inout, proof, _) = pair.as_ref().vrf_sign(transcript);
Ok(VRFSignature {
output: inout.to_output(),
proof,
})
}
}
impl Into<SyncCryptoStorePtr> for LocalKeystore {
fn into(self) -> SyncCryptoStorePtr {
Arc::new(self)
}
}
impl Into<Arc<dyn CryptoStore>> for LocalKeystore {
fn into(self) -> Arc<dyn CryptoStore> {
Arc::new(self)
}
}
/// A local key store.
///
/// Stores key pairs in a file system store + short lived key pairs in memory.
///
/// Every pair that is being generated by a `seed`, will be placed in memory.
struct KeystoreInner {
path: Option<PathBuf>,
/// Map over `(KeyTypeId, Raw public key)` -> `Key phrase/seed`
additional: HashMap<(KeyTypeId, Vec<u8>), String>,
password: Option<SecretString>,
}
impl KeystoreInner {
/// Open the store at the given path.
///
/// Optionally takes a password that will be used to encrypt/decrypt the keys.
pub fn open<T: Into<PathBuf>>(path: T, password: Option<SecretString>) -> Result<Self> {
let path = path.into();
fs::create_dir_all(&path)?;
let instance = Self { path: Some(path), additional: HashMap::new(), password };
Ok(instance)
}
/// Get the password for this store.
fn password(&self) -> Option<&str> {
self.password.as_ref()
.map(|p| p.expose_secret())
.map(|p| p.as_str())
}
/// Create a new in-memory store.
pub fn new_in_memory() -> Self {
Self {
path: None,
additional: HashMap::new(),
password: None
}
}
/// Get the key phrase for the given public key and key type from the in-memory store.
fn get_additional_pair(
&self,
public: &[u8],
key_type: KeyTypeId,
) -> Option<&String> {
let key = (key_type, public.to_vec());
self.additional.get(&key)
}
/// Insert the given public/private key pair with the given key type.
///
/// Does not place it into the file system store.
fn insert_ephemeral_pair<Pair: PairT>(&mut self, pair: &Pair, seed: &str, key_type: KeyTypeId) {
let key = (key_type, pair.public().to_raw_vec());
self.additional.insert(key, seed.into());
}
/// Insert a new key with anonymous crypto.
///
/// Places it into the file system store.
pub fn insert_unknown(&self, key_type: KeyTypeId, suri: &str, public: &[u8]) -> Result<()> {
if let Some(path) = self.key_file_path(public, key_type) {
let mut file = File::create(path).map_err(Error::Io)?;
serde_json::to_writer(&file, &suri).map_err(Error::Json)?;
file.flush().map_err(Error::Io)?;
}
Ok(())
}
/// Generate a new key.
///
/// Places it into the file system store.
pub fn generate_by_type<Pair: PairT>(&self, key_type: KeyTypeId) -> Result<Pair> {
let (pair, phrase, _) = Pair::generate_with_phrase(self.password());
if let Some(path) = self.key_file_path(pair.public().as_slice(), key_type) {
let mut file = File::create(path)?;
serde_json::to_writer(&file, &phrase)?;
file.flush()?;
}
Ok(pair)
}
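`generate_by_type` persists the generated mnemonic phrase as a JSON string in the key file. A minimal sketch of that write/read round trip, assuming a phrase with no characters that need JSON escaping (the real code uses `serde_json`; the helper names and file name here are hypothetical):

```rust
use std::fs::{self, File};
use std::io::{Read, Write};

// A JSON string with no escapes is just the phrase in double quotes.
fn write_phrase(path: &str, phrase: &str) -> std::io::Result<()> {
    let mut f = File::create(path)?;
    write!(f, "\"{}\"", phrase)?;
    f.flush()
}

// Inverse: read the file back and strip the surrounding quotes.
fn read_phrase(path: &str) -> std::io::Result<String> {
    let mut s = String::new();
    File::open(path)?.read_to_string(&mut s)?;
    Ok(s.trim_matches('"').to_string())
}

fn main() -> std::io::Result<()> {
    let path = std::env::temp_dir().join("kf_73723235").display().to_string();
    write_phrase(&path, "bottom drive obey lake curtain smoke basket hold")?;
    assert_eq!(read_phrase(&path)?, "bottom drive obey lake curtain smoke basket hold");
    fs::remove_file(&path)
}
```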
/// Create a new key from seed.
///
/// Does not place it into the file system store.
pub fn insert_ephemeral_from_seed_by_type<Pair: PairT>(
&mut self,
seed: &str,
key_type: KeyTypeId,
) -> Result<Pair> {
let pair = Pair::from_string(seed, None).map_err(|_| Error::InvalidSeed)?;
self.insert_ephemeral_pair(&pair, seed, key_type);
Ok(pair)
}
/// Get the key phrase for a given public key and key type.
fn key_phrase_by_type(&self, public: &[u8], key_type: KeyTypeId) -> Result<String> {
if let Some(phrase) = self.get_additional_pair(public, key_type) {
return Ok(phrase.clone())
}
let path = self.key_file_path(public, key_type).ok_or_else(|| Error::Unavailable)?;
let file = File::open(path)?;
serde_json::from_reader(&file).map_err(Into::into)
}
/// Get a key pair for the given public key and key type.
pub fn key_pair_by_type<Pair: PairT>(&self,
public: &Pair::Public,
key_type: KeyTypeId,
) -> Result<Pair> {
let phrase = self.key_phrase_by_type(public.as_slice(), key_type)?;
let pair = Pair::from_string(
&phrase,
self.password(),
).map_err(|_| Error::InvalidPhrase)?;
if &pair.public() == public {
Ok(pair)
} else {
Err(Error::InvalidPassword)
}
}
/// Returns the file path for the given public key and key type.
fn key_file_path(&self, public: &[u8], key_type: KeyTypeId) -> Option<PathBuf> {
let mut buf = self.path.as_ref()?.clone();
let key_type = hex::encode(key_type.0);
let key = hex::encode(public);
buf.push(key_type + key.as_str());
Some(buf)
}
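`key_file_path` derives the on-disk file name by concatenating the hex of the 4-byte key type id with the hex of the raw public key, and `raw_public_keys` later inverts that. A standalone sketch of the naming scheme (the helpers are toys; the hex encoder is inlined so no `hex` crate is needed):

```rust
fn to_hex(bytes: &[u8]) -> String {
    bytes.iter().map(|b| format!("{:02x}", b)).collect()
}

/// File name used for a key: hex(key_type) ++ hex(public).
fn key_file_name(key_type: &[u8; 4], public: &[u8]) -> String {
    format!("{}{}", to_hex(key_type), to_hex(public))
}

/// Inverse used when listing keys: accept only names whose decoded
/// prefix matches the key type id, and return the public-key suffix.
fn public_from_file_name(key_type: &[u8; 4], name: &str) -> Option<Vec<u8>> {
    // 8 hex characters encode the 4-byte key type id.
    let (prefix, rest) = name.split_at(8.min(name.len()));
    if prefix != to_hex(key_type) || rest.len() % 2 != 0 {
        return None;
    }
    (0..rest.len()).step_by(2)
        .map(|i| u8::from_str_radix(&rest[i..i + 2], 16).ok())
        .collect()
}

fn main() {
    let name = key_file_name(b"sr25", &[0xde, 0xad]);
    assert_eq!(name, "73723235dead");
    assert_eq!(public_from_file_name(b"sr25", &name), Some(vec![0xde, 0xad]));
    // A different key type id does not match the prefix.
    assert_eq!(public_from_file_name(b"ed25", &name), None);
}
```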
/// Returns a list of raw public keys filtered by `KeyTypeId`
fn raw_public_keys(&self, id: KeyTypeId) -> Result<Vec<Vec<u8>>> {
let mut public_keys: Vec<Vec<u8>> = self.additional.keys()
.into_iter()
.filter_map(|k| if k.0 == id { Some(k.1.clone()) } else { None })
.collect();
if let Some(path) = &self.path {
for entry in fs::read_dir(&path)? {
let entry = entry?;
let path = entry.path();
// skip directories and non-unicode file names (hex is unicode)
if let Some(name) = path.file_name().and_then(|n| n.to_str()) {
match hex::decode(name) {
Ok(ref hex) if hex.len() > 4 => {
if &hex[0..4] != &id.0 {
continue;
}
let public = hex[4..].to_vec();
public_keys.push(public);
}
_ => continue,
}
}
}
}
Ok(public_keys)
}
}
#[cfg(test)]
mod tests {
use super::*;
use tempfile::TempDir;
use sp_core::{
Pair,
crypto::{IsWrappedBy, Ss58Codec},
testing::SR25519,
};
use sp_application_crypto::{ed25519, sr25519, AppPublic, AppKey, AppPair};
use std::{
fs,
str::FromStr,
};
/// Generate a new key.
///
/// Places it into the file system store.
fn generate<Pair: AppPair>(store: &KeystoreInner) -> Result<Pair> {
store.generate_by_type::<Pair::Generic>(Pair::ID).map(Into::into)
}
/// Create a new key from seed.
///
/// Does not place it into the file system store.
fn insert_ephemeral_from_seed<Pair: AppPair>(store: &mut KeystoreInner, seed: &str) -> Result<Pair> {
store.insert_ephemeral_from_seed_by_type::<Pair::Generic>(seed, Pair::ID).map(Into::into)
}
/// Get public keys of all stored keys that match the key type.
///
/// This uses the type of the public key (a list of which is to be returned) to determine
/// the key type. Unless you use a specialized application-type public key, this will only
/// give you keys registered under generic cryptography, and will not return keys
/// registered under the application type.
fn public_keys<Public: AppPublic>(store: &KeystoreInner) -> Result<Vec<Public>> {
store.raw_public_keys(Public::ID)
.map(|v| {
v.into_iter()
.map(|k| Public::from_slice(k.as_slice()))
.collect()
})
}
/// Get a key pair for the given public key.
fn key_pair<Pair: AppPair>(store: &KeystoreInner, public: &<Pair as AppKey>::Public) -> Result<Pair> {
store.key_pair_by_type::<Pair::Generic>(IsWrappedBy::from_ref(public), Pair::ID).map(Into::into)
}
#[test]
fn basic_store() {
let temp_dir = TempDir::new().unwrap();
let store = KeystoreInner::open(temp_dir.path(), None).unwrap();
assert!(public_keys::<ed25519::AppPublic>(&store).unwrap().is_empty());
let key: ed25519::AppPair = generate(&store).unwrap();
let key2: ed25519::AppPair = key_pair(&store, &key.public()).unwrap();
assert_eq!(key.public(), key2.public());
assert_eq!(public_keys::<ed25519::AppPublic>(&store).unwrap()[0], key.public());
}
#[test]
fn test_insert_ephemeral_from_seed() {
let temp_dir = TempDir::new().unwrap();
let mut store = KeystoreInner::open(temp_dir.path(), None).unwrap();
let pair: ed25519::AppPair = insert_ephemeral_from_seed(
&mut store,
"0x3d97c819d68f9bafa7d6e79cb991eebcd77d966c5334c0b94d9e1fa7ad0869dc"
).unwrap();
assert_eq!(
"5DKUrgFqCPV8iAXx9sjy1nyBygQCeiUYRFWurZGhnrn3HJCA",
pair.public().to_ss58check()
);
drop(store);
let store = KeystoreInner::open(temp_dir.path(), None).unwrap();
// Keys generated from seed should not be persisted!
assert!(key_pair::<ed25519::AppPair>(&store, &pair.public()).is_err());
}
#[test]
fn password_being_used() {
let password = String::from("password");
let temp_dir = TempDir::new().unwrap();
let store = KeystoreInner::open(
temp_dir.path(),
Some(FromStr::from_str(password.as_str()).unwrap()),
).unwrap();
let pair: ed25519::AppPair = generate(&store).unwrap();
assert_eq!(
pair.public(),
key_pair::<ed25519::AppPair>(&store, &pair.public()).unwrap().public(),
);
// Without the password the key should not be retrievable
let store = KeystoreInner::open(temp_dir.path(), None).unwrap();
assert!(key_pair::<ed25519::AppPair>(&store, &pair.public()).is_err());
let store = KeystoreInner::open(
temp_dir.path(),
Some(FromStr::from_str(password.as_str()).unwrap()),
).unwrap();
assert_eq!(
pair.public(),
key_pair::<ed25519::AppPair>(&store, &pair.public()).unwrap().public(),
);
}
#[test]
fn public_keys_are_returned() {
let temp_dir = TempDir::new().unwrap();
let mut store = KeystoreInner::open(temp_dir.path(), None).unwrap();
let mut keys = Vec::new();
for i in 0..10 {
keys.push(generate::<ed25519::AppPair>(&store).unwrap().public());
keys.push(insert_ephemeral_from_seed::<ed25519::AppPair>(
&mut store,
&format!("0x3d97c819d68f9bafa7d6e79cb991eebcd7{}d966c5334c0b94d9e1fa7ad0869dc", i),
).unwrap().public());
}
// Generate a key of a different type
generate::<sr25519::AppPair>(&store).unwrap();
keys.sort();
let mut store_pubs = public_keys::<ed25519::AppPublic>(&store).unwrap();
store_pubs.sort();
assert_eq!(keys, store_pubs);
}
#[test]
fn store_unknown_and_extract_it() {
let temp_dir = TempDir::new().unwrap();
let store = KeystoreInner::open(temp_dir.path(), None).unwrap();
let secret_uri = "//Alice";
let key_pair = sr25519::AppPair::from_string(secret_uri, None).expect("Generates key pair");
store.insert_unknown(
SR25519,
secret_uri,
key_pair.public().as_ref(),
).expect("Inserts unknown key");
let store_key_pair = store.key_pair_by_type::<sr25519::AppPair>(
&key_pair.public(),
SR25519,
).expect("Gets key pair from keystore");
assert_eq!(key_pair.public(), store_key_pair.public());
}
#[test]
fn store_ignores_files_with_invalid_name() {
let temp_dir = TempDir::new().unwrap();
let store = LocalKeystore::open(temp_dir.path(), None).unwrap();
let file_name = temp_dir.path().join(hex::encode(&SR25519.0[..2]));
fs::write(file_name, "test").expect("Invalid file is written");
assert!(
SyncCryptoStore::sr25519_public_keys(&store, SR25519).is_empty(),
);
}
}
@@ -21,6 +21,7 @@ futures = { version = "0.3.1", features = ["compat"] }
jsonrpc-pubsub = "15.0.0"
log = "0.4.8"
sp-core = { version = "2.0.0", path = "../../primitives/core" }
sp-keystore = { version = "0.8.0", path = "../../primitives/keystore" }
rpc = { package = "jsonrpc-core", version = "15.0.0" }
sp-version = { version = "2.0.0", path = "../../primitives/version" }
serde_json = "1.0.41"
@@ -35,7 +35,8 @@ use futures::future::{ready, FutureExt, TryFutureExt};
use sc_rpc_api::DenyUnsafe;
use jsonrpc_pubsub::{typed::Subscriber, SubscriptionId, manager::SubscriptionManager};
use codec::{Encode, Decode};
-use sp_core::{Bytes, traits::BareCryptoStorePtr};
+use sp_core::Bytes;
+use sp_keystore::{SyncCryptoStorePtr, SyncCryptoStore};
use sp_api::ProvideRuntimeApi;
use sp_runtime::generic;
use sp_transaction_pool::{
@@ -57,7 +58,7 @@ pub struct Author<P, Client> {
/// Subscriptions manager
subscriptions: SubscriptionManager,
/// The key store.
-keystore: BareCryptoStorePtr,
+keystore: SyncCryptoStorePtr,
/// Whether to deny unsafe calls
deny_unsafe: DenyUnsafe,
}
@@ -68,7 +69,7 @@ impl<P, Client> Author<P, Client> {
client: Arc<Client>,
pool: Arc<P>,
subscriptions: SubscriptionManager,
-keystore: BareCryptoStorePtr,
+keystore: SyncCryptoStorePtr,
deny_unsafe: DenyUnsafe,
) -> Self {
Author {
@@ -105,8 +106,7 @@ impl<P, Client> AuthorApi<TxHash<P>, BlockHash<P>> for Author<P, Client>
self.deny_unsafe.check_if_safe()?;
let key_type = key_type.as_str().try_into().map_err(|_| Error::BadKeyType)?;
-let mut keystore = self.keystore.write();
-keystore.insert_unknown(key_type, &suri, &public[..])
+SyncCryptoStore::insert_unknown(&*self.keystore, key_type, &suri, &public[..])
.map_err(|_| Error::KeyStoreUnavailable)?;
Ok(())
}
@@ -131,14 +131,14 @@ impl<P, Client> AuthorApi<TxHash<P>, BlockHash<P>> for Author<P, Client>
).map_err(|e| Error::Client(Box::new(e)))?
.ok_or_else(|| Error::InvalidSessionKeys)?;
-Ok(self.keystore.read().has_keys(&keys))
+Ok(SyncCryptoStore::has_keys(&*self.keystore, &keys))
}
fn has_key(&self, public_key: Bytes, key_type: String) -> Result<bool> {
self.deny_unsafe.check_if_safe()?;
let key_type = key_type.as_str().try_into().map_err(|_| Error::BadKeyType)?;
-Ok(self.keystore.read().has_keys(&[(public_key.to_vec(), key_type)]))
+Ok(SyncCryptoStore::has_keys(&*self.keystore, &[(public_key.to_vec(), key_type)]))
}
fn submit_extrinsic(&self, ext: Bytes) -> FutureResult<TxHash<P>> {
@@ -22,10 +22,11 @@ use std::{mem, sync::Arc};
use assert_matches::assert_matches;
use codec::Encode;
use sp_core::{
-H256, blake2_256, hexdisplay::HexDisplay, testing::{ED25519, SR25519, KeyStore},
-traits::BareCryptoStorePtr, ed25519, sr25519,
+ed25519, sr25519,
+H256, blake2_256, hexdisplay::HexDisplay, testing::{ED25519, SR25519},
crypto::{CryptoTypePublicPair, Pair, Public},
};
+use sp_keystore::testing::KeyStore;
use rpc::futures::Stream as _;
use substrate_test_runtime_client::{
self, AccountKeyring, runtime::{Extrinsic, Transfer, SessionKeys, Block},
@@ -51,13 +52,13 @@ type FullTransactionPool = BasicPool<
struct TestSetup {
pub client: Arc<Client<Backend>>,
-pub keystore: BareCryptoStorePtr,
+pub keystore: Arc<KeyStore>,
pub pool: Arc<FullTransactionPool>,
}
impl Default for TestSetup {
fn default() -> Self {
-let keystore = KeyStore::new();
+let keystore = Arc::new(KeyStore::new());
let client_builder = substrate_test_runtime_client::TestClientBuilder::new();
let client = Arc::new(client_builder.set_keystore(keystore.clone()).build());
@@ -235,7 +236,7 @@ fn should_insert_key() {
key_pair.public().0.to_vec().into(),
).expect("Insert key");
-let public_keys = setup.keystore.read().keys(ED25519).unwrap();
+let public_keys = SyncCryptoStore::keys(&*setup.keystore, ED25519).unwrap();
assert!(public_keys.contains(&CryptoTypePublicPair(ed25519::CRYPTO_ID, key_pair.public().to_raw_vec())));
}
@@ -250,8 +251,8 @@ fn should_rotate_keys() {
let session_keys = SessionKeys::decode(&mut &new_public_keys[..])
.expect("SessionKeys decode successfully");
-let ed25519_public_keys = setup.keystore.read().keys(ED25519).unwrap();
-let sr25519_public_keys = setup.keystore.read().keys(SR25519).unwrap();
+let ed25519_public_keys = SyncCryptoStore::keys(&*setup.keystore, ED25519).unwrap();
+let sr25519_public_keys = SyncCryptoStore::keys(&*setup.keystore, SR25519).unwrap();
assert!(ed25519_public_keys.contains(&CryptoTypePublicPair(ed25519::CRYPTO_ID, session_keys.ed25519.to_raw_vec())));
assert!(sr25519_public_keys.contains(&CryptoTypePublicPair(sr25519::CRYPTO_ID, session_keys.sr25519.to_raw_vec())));
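Across these hunks the migration pattern is identical: the keystore is now a plain `Arc` of a trait object, so the old `self.keystore.read().method(..)` lock-and-call becomes a fully qualified `SyncCryptoStore::method(&*self.keystore, ..)`, where `&*arc` reborrows the inner trait object. A self-contained toy illustration of that calling convention (the trait and struct names are stand-ins, not the real `sp_keystore` API):

```rust
use std::sync::Arc;

trait SyncCryptoStoreLike {
    fn has_keys(&self, keys: &[Vec<u8>]) -> bool;
}

struct MemStore { known: Vec<Vec<u8>> }

impl SyncCryptoStoreLike for MemStore {
    fn has_keys(&self, keys: &[Vec<u8>]) -> bool {
        keys.iter().all(|k| self.known.contains(k))
    }
}

fn main() {
    let store: Arc<dyn SyncCryptoStoreLike> =
        Arc::new(MemStore { known: vec![vec![1]] });
    // `&*store` dereferences the Arc to the trait object and reborrows it,
    // so the call needs no `RwLock` and no `.read()`/`.write()` at all.
    assert!(SyncCryptoStoreLike::has_keys(&*store, &[vec![1]]));
    assert!(!SyncCryptoStoreLike::has_keys(&*store, &[vec![1], vec![2]]));
}
```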
@@ -50,6 +50,7 @@ sp-utils = { version = "2.0.0", path = "../../primitives/utils" }
sp-version = { version = "2.0.0", path = "../../primitives/version" }
sp-blockchain = { version = "2.0.0", path = "../../primitives/blockchain" }
sp-core = { version = "2.0.0", path = "../../primitives/core" }
sp-keystore = { version = "0.8.0", path = "../../primitives/keystore" }
sp-session = { version = "2.0.0", path = "../../primitives/session" }
sp-state-machine = { version = "0.8.0", path = "../../primitives/state-machine" }
sp-application-crypto = { version = "2.0.0", path = "../../primitives/application-crypto" }
@@ -33,13 +33,16 @@ use sp_consensus::{
block_validation::{BlockAnnounceValidator, DefaultBlockAnnounceValidator, Chain},
import_queue::ImportQueue,
};
-use futures::{FutureExt, StreamExt, future::ready, channel::oneshot};
use jsonrpc_pubsub::manager::SubscriptionManager;
-use sc_keystore::Store as Keystore;
+use futures::{
+FutureExt, StreamExt,
+future::ready,
+channel::oneshot,
+};
+use sc_keystore::LocalKeystore;
use log::{info, warn};
use sc_network::config::{Role, FinalityProofProvider, OnDemand, BoxFinalityProofRequestBuilder};
use sc_network::NetworkService;
use parking_lot::RwLock;
use sp_runtime::generic::BlockId;
use sp_runtime::traits::{
Block as BlockT, SaturatedConversion, HashFor, Zero, BlockIdTo,
@@ -52,7 +55,11 @@ use sc_telemetry::{telemetry, SUBSTRATE_INFO};
use sp_transaction_pool::MaintainedTransactionPool;
use prometheus_endpoint::Registry;
use sc_client_db::{Backend, DatabaseSettings};
-use sp_core::traits::{CodeExecutor, SpawnNamed};
+use sp_core::traits::{
+CodeExecutor,
+SpawnNamed,
+};
+use sp_keystore::{CryptoStore, SyncCryptoStorePtr};
use sp_runtime::BuildStorage;
use sc_client_api::{
BlockBackend, BlockchainEvents,
@@ -169,14 +176,14 @@ pub type TLightCallExecutor<TBl, TExecDisp> = sc_light::GenesisCallExecutor<
type TFullParts<TBl, TRtApi, TExecDisp> = (
TFullClient<TBl, TRtApi, TExecDisp>,
Arc<TFullBackend<TBl>>,
-Arc<RwLock<sc_keystore::Store>>,
+KeystoreContainer,
TaskManager,
);
type TLightParts<TBl, TRtApi, TExecDisp> = (
Arc<TLightClient<TBl, TRtApi, TExecDisp>>,
Arc<TLightBackend<TBl>>,
-Arc<RwLock<sc_keystore::Store>>,
+KeystoreContainer,
TaskManager,
Arc<OnDemand<TBl>>,
);
@@ -198,6 +205,41 @@ pub type TLightClientWithBackend<TBl, TRtApi, TExecDisp, TBackend> = Client<
TRtApi,
>;
/// Construct and hold different layers of Keystore wrappers
pub struct KeystoreContainer {
keystore: Arc<dyn CryptoStore>,
sync_keystore: SyncCryptoStorePtr,
}
impl KeystoreContainer {
/// Construct KeystoreContainer
pub fn new(config: &KeystoreConfig) -> Result<Self, Error> {
let keystore = Arc::new(match config {
KeystoreConfig::Path { path, password } => LocalKeystore::open(
path.clone(),
password.clone(),
)?,
KeystoreConfig::InMemory => LocalKeystore::in_memory(),
});
let sync_keystore = keystore.clone() as SyncCryptoStorePtr;
Ok(Self {
keystore,
sync_keystore,
})
}
/// Returns an adapter to the asynchronous keystore that implements `CryptoStore`
pub fn keystore(&self) -> Arc<dyn CryptoStore> {
self.keystore.clone()
}
/// Returns the synchronous keystore wrapper.
pub fn sync_keystore(&self) -> SyncCryptoStorePtr {
self.sync_keystore.clone()
}
}
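`KeystoreContainer` hands out two views of the same `LocalKeystore`: the async `CryptoStore` trait object and the sync pointer, both cloned from one `Arc`. The shape of that pattern, reduced to plain `std` (toy trait and type names, not the real `sc-service` API):

```rust
use std::sync::Arc;

trait AsyncView { fn id(&self) -> u32; }
trait SyncView { fn id(&self) -> u32; }

struct Local(u32);
impl AsyncView for Local { fn id(&self) -> u32 { self.0 } }
impl SyncView for Local { fn id(&self) -> u32 { self.0 } }

struct Container {
    keystore: Arc<Local>,
}

impl Container {
    fn new(seed: u32) -> Self {
        Self { keystore: Arc::new(Local(seed)) }
    }
    // Both accessors clone the same Arc, so every caller shares one
    // underlying store; only the trait-object view differs.
    fn keystore(&self) -> Arc<dyn AsyncView> { self.keystore.clone() }
    fn sync_keystore(&self) -> Arc<dyn SyncView> { self.keystore.clone() }
}

fn main() {
    let c = Container::new(7);
    assert_eq!(c.keystore().id(), 7);
    assert_eq!(c.sync_keystore().id(), 7);
    // Still one instance: the strong count covers both handed-out handles.
    let (a, s) = (c.keystore(), c.sync_keystore());
    assert_eq!(Arc::strong_count(&c.keystore), 3);
    drop((a, s));
}
```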
/// Creates a new full client for the given config.
pub fn new_full_client<TBl, TRtApi, TExecDisp>(
config: &Configuration,
@@ -215,13 +257,7 @@ pub fn new_full_parts<TBl, TRtApi, TExecDisp>(
TBl: BlockT,
TExecDisp: NativeExecutionDispatch + 'static,
{
-let keystore = match &config.keystore {
-KeystoreConfig::Path { path, password } => Keystore::open(
-path.clone(),
-password.clone()
-)?,
-KeystoreConfig::InMemory => Keystore::new_in_memory(),
-};
+let keystore_container = KeystoreContainer::new(&config.keystore)?;
let task_manager = {
let registry = config.prometheus_config.as_ref().map(|cfg| &cfg.registry);
@@ -254,7 +290,7 @@ pub fn new_full_parts<TBl, TRtApi, TExecDisp>(
let extensions = sc_client_api::execution_extensions::ExecutionExtensions::new(
config.execution_strategies.clone(),
-Some(keystore.clone()),
+Some(keystore_container.sync_keystore()),
);
new_client(
@@ -273,7 +309,12 @@ pub fn new_full_parts<TBl, TRtApi, TExecDisp>(
)?
};
-Ok((client, backend, keystore, task_manager))
+Ok((
+client,
+backend,
+keystore_container,
+task_manager,
+))
}
/// Create the initial parts of a light node.
@@ -283,20 +324,12 @@ pub fn new_light_parts<TBl, TRtApi, TExecDisp>(
TBl: BlockT,
TExecDisp: NativeExecutionDispatch + 'static,
{
+let keystore_container = KeystoreContainer::new(&config.keystore)?;
let task_manager = {
let registry = config.prometheus_config.as_ref().map(|cfg| &cfg.registry);
TaskManager::new(config.task_executor.clone(), registry)?
};
-let keystore = match &config.keystore {
-KeystoreConfig::Path { path, password } => Keystore::open(
-path.clone(),
-password.clone()
-)?,
-KeystoreConfig::InMemory => Keystore::new_in_memory(),
-};
let executor = NativeExecutor::<TExecDisp>::new(
config.wasm_method,
config.default_heap_pages,
@@ -331,7 +364,7 @@ pub fn new_light_parts<TBl, TRtApi, TExecDisp>(
config.prometheus_config.as_ref().map(|config| config.registry.clone()),
)?);
-Ok((client, backend, keystore, task_manager, on_demand))
+Ok((client, backend, keystore_container, task_manager, on_demand))
}
/// Create an instance of db-backed client.
@@ -390,7 +423,7 @@ pub struct SpawnTasksParams<'a, TBl: BlockT, TCl, TExPool, TRpc, Backend> {
/// A task manager returned by `new_full_parts`/`new_light_parts`.
pub task_manager: &'a mut TaskManager,
/// A shared keystore returned by `new_full_parts`/`new_light_parts`.
-pub keystore: Arc<RwLock<Keystore>>,
+pub keystore: SyncCryptoStorePtr,
/// An optional, shared data fetcher for light clients.
pub on_demand: Option<Arc<OnDemand<TBl>>>,
/// A shared transaction pool.
@@ -673,7 +706,7 @@ fn gen_handler<TBl, TBackend, TExPool, TRpc, TCl>(
spawn_handle: SpawnTaskHandle,
client: Arc<TCl>,
transaction_pool: Arc<TExPool>,
-keystore: Arc<RwLock<Keystore>>,
+keystore: SyncCryptoStorePtr,
on_demand: Option<Arc<OnDemand<TBl>>>,
remote_blockchain: Option<Arc<dyn RemoteBlockchain<TBl>>>,
rpc_extensions_builder: &(dyn RpcExtensionBuilder<Output = TRpc> + Send),
@@ -32,6 +32,8 @@ use sp_core::{
storage::{well_known_keys, ChildInfo, PrefixedStorageKey, StorageData, StorageKey},
ChangesTrieConfiguration, ExecutionContext, NativeOrEncoded,
};
#[cfg(feature="test-helpers")]
use sp_keystore::SyncCryptoStorePtr;
use sc_telemetry::{telemetry, SUBSTRATE_INFO};
use sp_runtime::{
Justification, BuildStorage,
@@ -147,7 +149,7 @@ impl<H> PrePostHeader<H> {
pub fn new_in_mem<E, Block, S, RA>(
executor: E,
genesis_storage: &S,
-keystore: Option<sp_core::traits::BareCryptoStorePtr>,
+keystore: Option<SyncCryptoStorePtr>,
prometheus_registry: Option<Registry>,
spawn_handle: Box<dyn SpawnNamed>,
config: ClientConfig,
@@ -188,7 +190,7 @@ pub fn new_with_backend<B, E, Block, S, RA>(
backend: Arc<B>,
executor: E,
build_genesis_storage: &S,
-keystore: Option<sp_core::traits::BareCryptoStorePtr>,
+keystore: Option<SyncCryptoStorePtr>,
spawn_handle: Box<dyn SpawnNamed>,
prometheus_registry: Option<Registry>,
config: ClientConfig,
@@ -53,9 +53,9 @@ use sp_utils::{status_sinks, mpsc::{tracing_unbounded, TracingUnboundedReceiver,
pub use self::error::Error;
pub use self::builder::{
new_full_client, new_client, new_full_parts, new_light_parts,
-spawn_tasks, build_network, BuildNetworkParams, NetworkStarter, build_offchain_workers,
-SpawnTasksParams, TFullClient, TLightClient, TFullBackend, TLightBackend,
-TLightBackendWithHash, TLightClientWithBackend,
+spawn_tasks, build_network, build_offchain_workers,
+BuildNetworkParams, KeystoreContainer, NetworkStarter, SpawnTasksParams, TFullClient, TLightClient,
+TFullBackend, TLightBackend, TLightBackendWithHash, TLightClientWithBackend,
TFullCallExecutor, TLightCallExecutor, RpcExtensionBuilder, NoopRpcExtensionBuilder,
};
pub use config::{
@@ -81,7 +81,6 @@ pub use task_manager::SpawnTaskHandle;
pub use task_manager::TaskManager;
pub use sp_consensus::import_queue::ImportQueue;
use sc_client_api::BlockchainEvents;
-pub use sc_keystore::KeyStorePtr as KeyStore;
const DEFAULT_PROTOCOL_ID: &str = "sup";
@@ -181,8 +180,8 @@ pub struct PartialComponents<Client, Backend, SelectChain, ImportQueue, Transact
pub backend: Arc<Backend>,
/// The chain task manager.
pub task_manager: TaskManager,
-/// A shared keystore instance.
-pub keystore: KeyStore,
+/// A keystore container instance.
+pub keystore_container: KeystoreContainer,
/// A chain selection algorithm instance.
pub select_chain: SelectChain,
/// An import queue.
@@ -1737,7 +1737,7 @@ fn cleans_up_closed_notification_sinks_on_block_import() {
_,
substrate_test_runtime_client::runtime::Block,
_,
-substrate_test_runtime_client::runtime::RuntimeApi
+substrate_test_runtime_client::runtime::RuntimeApi,
>(
substrate_test_runtime_client::new_native_executor(),
&substrate_test_runtime_client::GenesisParameters::default().genesis_storage(),
@@ -1855,4 +1855,4 @@ fn reorg_triggers_a_notification_even_for_sources_that_should_not_trigger_notifi
// We should have a tree route of the re-org
let tree_route = notification.tree_route.unwrap();
assert_eq!(tree_route.enacted()[0].hash, b1.hash());
}
}
@@ -18,6 +18,7 @@ frame-support = { version = "2.0.0", default-features = false, path = "../suppor
frame-system = { version = "2.0.0", default-features = false, path = "../system" }
serde = { version = "1.0.101", optional = true }
sp-core = { version = "2.0.0", default-features = false, path = "../../primitives/core" }
sp-keystore = { version = "0.8.0", path = "../../primitives/keystore", optional = true }
sp-io = { version = "2.0.0", default-features = false, path = "../../primitives/io" }
sp-runtime = { version = "2.0.0", default-features = false, path = "../../primitives/runtime" }
sp-std = { version = "2.0.0", default-features = false, path = "../../primitives/std" }
@@ -33,6 +34,7 @@ std = [
"lite-json/std",
"sp-core/std",
"sp-io/std",
"sp-keystore",
"sp-runtime/std",
"sp-std/std",
]
@@ -16,7 +16,7 @@
// limitations under the License.
use crate::*;
use std::sync::Arc;
use codec::{Encode, Decode};
use frame_support::{
assert_ok, impl_outer_origin, parameter_types,
@@ -26,8 +26,11 @@ use sp_core::{
H256,
offchain::{OffchainExt, TransactionPoolExt, testing},
sr25519::Signature,
};
+use sp_keystore::{
+{KeystoreExt, SyncCryptoStore},
+testing::KeyStore,
-traits::KeystoreExt,
+};
use sp_runtime::{
Perbill, RuntimeAppPublic,
@@ -208,7 +211,8 @@ fn should_submit_signed_transaction_on_chain() {
let (offchain, offchain_state) = testing::TestOffchainExt::new();
let (pool, pool_state) = testing::TestTransactionPoolExt::new();
let keystore = KeyStore::new();
-keystore.write().sr25519_generate_new(
+SyncCryptoStore::sr25519_generate_new(
+&keystore,
crate::crypto::Public::ID,
Some(&format!("{}/hunter1", PHRASE))
).unwrap();
@@ -217,7 +221,7 @@ fn should_submit_signed_transaction_on_chain() {
let mut t = sp_io::TestExternalities::default();
t.register_extension(OffchainExt::new(offchain));
t.register_extension(TransactionPoolExt::new(pool));
-t.register_extension(KeystoreExt(keystore));
+t.register_extension(KeystoreExt(Arc::new(keystore)));
price_oracle_response(&mut offchain_state.write());
@@ -241,24 +245,24 @@ fn should_submit_unsigned_transaction_on_chain_for_any_account() {
let keystore = KeyStore::new();
-keystore.write().sr25519_generate_new(
+SyncCryptoStore::sr25519_generate_new(
+&keystore,
crate::crypto::Public::ID,
Some(&format!("{}/hunter1", PHRASE))
).unwrap();
-let mut t = sp_io::TestExternalities::default();
-t.register_extension(OffchainExt::new(offchain));
-t.register_extension(TransactionPoolExt::new(pool));
-t.register_extension(KeystoreExt(keystore.clone()));
-price_oracle_response(&mut offchain_state.write());
-let public_key = keystore.read()
-.sr25519_public_keys(crate::crypto::Public::ID)
+let public_key = SyncCryptoStore::sr25519_public_keys(&keystore, crate::crypto::Public::ID)
.get(0)
.unwrap()
.clone();
+let mut t = sp_io::TestExternalities::default();
+t.register_extension(OffchainExt::new(offchain));
+t.register_extension(TransactionPoolExt::new(pool));
+t.register_extension(KeystoreExt(Arc::new(keystore)));
+price_oracle_response(&mut offchain_state.write());
let price_payload = PricePayload {
block_number: 1,
price: 15523,
@@ -294,24 +298,24 @@ fn should_submit_unsigned_transaction_on_chain_for_all_accounts() {
let keystore = KeyStore::new();
-keystore.write().sr25519_generate_new(
+SyncCryptoStore::sr25519_generate_new(
+&keystore,
crate::crypto::Public::ID,
Some(&format!("{}/hunter1", PHRASE))
).unwrap();
-let mut t = sp_io::TestExternalities::default();
-t.register_extension(OffchainExt::new(offchain));
-t.register_extension(TransactionPoolExt::new(pool));
-t.register_extension(KeystoreExt(keystore.clone()));
-price_oracle_response(&mut offchain_state.write());
-let public_key = keystore.read()
-.sr25519_public_keys(crate::crypto::Public::ID)
+let public_key = SyncCryptoStore::sr25519_public_keys(&keystore, crate::crypto::Public::ID)
.get(0)
.unwrap()
.clone();
+let mut t = sp_io::TestExternalities::default();
+t.register_extension(OffchainExt::new(offchain));
+t.register_extension(TransactionPoolExt::new(pool));
+t.register_extension(KeystoreExt(Arc::new(keystore)));
+price_oracle_response(&mut offchain_state.write());
let price_payload = PricePayload {
block_number: 1,
price: 15523,
@@ -349,7 +353,7 @@ fn should_submit_raw_unsigned_transaction_on_chain() {
let mut t = sp_io::TestExternalities::default();
t.register_extension(OffchainExt::new(offchain));
t.register_extension(TransactionPoolExt::new(pool));
-t.register_extension(KeystoreExt(keystore));
+t.register_extension(KeystoreExt(Arc::new(keystore)));
price_oracle_response(&mut offchain_state.write());
@@ -14,6 +14,7 @@ targets = ["x86_64-unknown-linux-gnu"]
[dependencies]
sp-core = { version = "2.0.0", default-features = false, path = "../../core" }
sp-keystore = { version = "0.8.0", path = "../../keystore" }
substrate-test-runtime-client = { version = "2.0.0", path = "../../../test-utils/runtime/client" }
sp-runtime = { version = "2.0.0", path = "../../runtime" }
sp-api = { version = "2.0.0", path = "../../api" }
@@ -15,11 +15,15 @@
// along with Substrate. If not, see <http://www.gnu.org/licenses/>.
//! Integration tests for ecdsa
use std::sync::Arc;
use sp_runtime::generic::BlockId;
use sp_core::{
crypto::Pair,
-testing::{KeyStore, ECDSA},
+testing::ECDSA,
};
+use sp_keystore::{
+SyncCryptoStore,
+testing::KeyStore,
+};
use substrate_test_runtime_client::{
TestClientBuilder, DefaultTestClientBuilderExt, TestClientBuilderExt,
@@ -30,13 +34,13 @@ use sp_application_crypto::ecdsa::{AppPair, AppPublic};
#[test]
fn ecdsa_works_in_runtime() {
-let keystore = KeyStore::new();
+let keystore = Arc::new(KeyStore::new());
let test_client = TestClientBuilder::new().set_keystore(keystore.clone()).build();
let (signature, public) = test_client.runtime_api()
.test_ecdsa_crypto(&BlockId::Number(0))
.expect("Tests `ecdsa` crypto.");
-let supported_keys = keystore.read().keys(ECDSA).unwrap();
+let supported_keys = SyncCryptoStore::keys(&*keystore, ECDSA).unwrap();
assert!(supported_keys.contains(&public.clone().into()));
assert!(AppPair::verify(&signature, "ecdsa", &AppPublic::from(public)));
}
@@ -17,10 +17,15 @@
//! Integration tests for ed25519
use std::sync::Arc;
use sp_runtime::generic::BlockId;
use sp_core::{
crypto::Pair,
-testing::{KeyStore, ED25519},
+testing::ED25519,
};
+use sp_keystore::{
+SyncCryptoStore,
+testing::KeyStore,
+};
use substrate_test_runtime_client::{
TestClientBuilder, DefaultTestClientBuilderExt, TestClientBuilderExt,
@@ -31,13 +36,13 @@ use sp_application_crypto::ed25519::{AppPair, AppPublic};
#[test]
fn ed25519_works_in_runtime() {
-let keystore = KeyStore::new();
+let keystore = Arc::new(KeyStore::new());
let test_client = TestClientBuilder::new().set_keystore(keystore.clone()).build();
let (signature, public) = test_client.runtime_api()
.test_ed25519_crypto(&BlockId::Number(0))
.expect("Tests `ed25519` crypto.");
-let supported_keys = keystore.read().keys(ED25519).unwrap();
+let supported_keys = SyncCryptoStore::keys(&*keystore, ED25519).unwrap();
assert!(supported_keys.contains(&public.clone().into()));
assert!(AppPair::verify(&signature, "ed25519", &AppPublic::from(public)));
}
@@ -17,11 +17,15 @@
//! Integration tests for sr25519
use std::sync::Arc;
use sp_runtime::generic::BlockId;
use sp_core::{
crypto::Pair,
-testing::{KeyStore, SR25519},
+testing::SR25519,
};
+use sp_keystore::{
+SyncCryptoStore,
+testing::KeyStore,
+};
use substrate_test_runtime_client::{
TestClientBuilder, DefaultTestClientBuilderExt, TestClientBuilderExt,
@@ -32,13 +36,13 @@ use sp_application_crypto::sr25519::{AppPair, AppPublic};
#[test]
fn sr25519_works_in_runtime() {
-let keystore = KeyStore::new();
+let keystore = Arc::new(KeyStore::new());
let test_client = TestClientBuilder::new().set_keystore(keystore.clone()).build();
let (signature, public) = test_client.runtime_api()
.test_sr25519_crypto(&BlockId::Number(0))
.expect("Tests `sr25519` crypto.");
-let supported_keys = keystore.read().keys(SR25519).unwrap();
+let supported_keys = SyncCryptoStore::keys(&*keystore, SR25519).unwrap();
assert!(supported_keys.contains(&public.clone().into()));
assert!(AppPair::verify(&signature, "sr25519", &AppPublic::from(public)));
}
@@ -23,6 +23,7 @@ sp-consensus-slots = { version = "0.8.0", default-features = false, path = "../s
sp-consensus-vrf = { version = "0.8.0", path = "../vrf", default-features = false }
sp-core = { version = "2.0.0", default-features = false, path = "../../core" }
sp-inherents = { version = "2.0.0", default-features = false, path = "../../inherents" }
sp-keystore = { version = "0.8.0", default-features = false, path = "../../keystore", optional = true }
sp-runtime = { version = "2.0.0", default-features = false, path = "../../runtime" }
sp-timestamp = { version = "2.0.0", default-features = false, path = "../../timestamp" }
@@ -39,6 +40,7 @@ std = [
"sp-consensus-vrf/std",
"sp-core/std",
"sp-inherents/std",
"sp-keystore",
"sp-runtime/std",
"sp-timestamp/std",
]
@@ -30,7 +30,7 @@ pub use sp_consensus_vrf::schnorrkel::{
use codec::{Decode, Encode};
#[cfg(feature = "std")]
-use sp_core::vrf::{VRFTranscriptData, VRFTranscriptValue};
+use sp_keystore::vrf::{VRFTranscriptData, VRFTranscriptValue};
use sp_runtime::{traits::Header, ConsensusEngineId, RuntimeDebug};
use sp_std::vec::Vec;
@@ -115,7 +115,7 @@ pub fn make_transcript_data(
items: vec![
("slot number", VRFTranscriptValue::U64(slot_number)),
("current epoch", VRFTranscriptValue::U64(epoch)),
-("chain randomness", VRFTranscriptValue::Bytes(&randomness[..])),
+("chain randomness", VRFTranscriptValue::Bytes(randomness.to_vec())),
]
}
}
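The switch from `Bytes(&randomness[..])` to `Bytes(randomness.to_vec())` is what lets transcript data cross into the async keystore: an owned `Vec<u8>` carries no lifetime parameter, so the value satisfies `'static` and can be moved into a future or another task. A pure-std toy demonstrating the bound (the names are stand-ins, not the real `sp_keystore::vrf` types):

```rust
// Owned variant: no lifetime parameter, so a value of this enum is
// `'static` and can be handed to an async keystore task.
enum TranscriptValue {
    U64(u64),
    Bytes(Vec<u8>),
}

fn send_to_keystore(values: Vec<(&'static str, TranscriptValue)>) -> usize {
    // Stand-in for handing the transcript to an async signer: the
    // `'static` bound below is what a borrowed `Bytes(&[u8])` variant
    // holding a short-lived slice could not meet.
    fn requires_static<T: 'static>(t: T) -> T { t }
    requires_static(values).len()
}

fn main() {
    let randomness = [9u8; 32];
    let items = vec![
        ("slot number", TranscriptValue::U64(1)),
        // `.to_vec()` copies out of the local array, so `items` is owned.
        ("chain randomness", TranscriptValue::Bytes(randomness.to_vec())),
    ];
    assert_eq!(send_to_keystore(items), 2);
}
```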
@@ -13,7 +13,6 @@ documentation = "https://docs.rs/sp-core"
targets = ["x86_64-unknown-linux-gnu"]
[dependencies]
derive_more = "0.99.2"
sp-std = { version = "2.0.0", default-features = false, path = "../std" }
codec = { package = "parity-scale-codec", version = "1.3.1", default-features = false, features = ["derive"] }
log = { version = "0.4.8", default-features = false }
@@ -72,8 +72,6 @@ mod changes_trie;
#[cfg(feature = "std")]
pub mod traits;
pub mod testing;
#[cfg(feature = "std")]
pub mod vrf;
pub use self::hash::{H160, H256, H512, convert_hash};
pub use self::uint::{U256, U512};
@@ -18,15 +18,6 @@
//! Types that should only be used for testing!
use crate::crypto::KeyTypeId;
#[cfg(feature = "std")]
use crate::{
crypto::{Pair, Public, CryptoTypePublicPair},
ed25519, sr25519, ecdsa,
traits::Error,
vrf::{VRFTranscriptData, VRFSignature, make_transcript},
};
#[cfg(feature = "std")]
use std::collections::HashSet;
/// Key type for generic Ed25519 key.
pub const ED25519: KeyTypeId = KeyTypeId(*b"ed25");
@@ -35,230 +26,6 @@ pub const SR25519: KeyTypeId = KeyTypeId(*b"sr25");
/// Key type for generic ECDSA key.
pub const ECDSA: KeyTypeId = KeyTypeId(*b"ecds");
/// A keystore implementation usable in tests.
#[cfg(feature = "std")]
#[derive(Default)]
pub struct KeyStore {
/// `KeyTypeId` maps to public keys and public keys map to private keys.
keys: std::collections::HashMap<KeyTypeId, std::collections::HashMap<Vec<u8>, String>>,
}
#[cfg(feature = "std")]
impl KeyStore {
/// Creates a new instance of `Self`.
pub fn new() -> crate::traits::BareCryptoStorePtr {
std::sync::Arc::new(parking_lot::RwLock::new(Self::default()))
}
fn sr25519_key_pair(&self, id: KeyTypeId, pub_key: &sr25519::Public) -> Option<sr25519::Pair> {
self.keys.get(&id)
.and_then(|inner|
inner.get(pub_key.as_slice())
.map(|s| sr25519::Pair::from_string(s, None).expect("`sr25519` seed slice is valid"))
)
}
fn ed25519_key_pair(&self, id: KeyTypeId, pub_key: &ed25519::Public) -> Option<ed25519::Pair> {
self.keys.get(&id)
.and_then(|inner|
inner.get(pub_key.as_slice())
.map(|s| ed25519::Pair::from_string(s, None).expect("`ed25519` seed slice is valid"))
)
}
fn ecdsa_key_pair(&self, id: KeyTypeId, pub_key: &ecdsa::Public) -> Option<ecdsa::Pair> {
self.keys.get(&id)
.and_then(|inner|
inner.get(pub_key.as_slice())
.map(|s| ecdsa::Pair::from_string(s, None).expect("`ecdsa` seed slice is valid"))
)
}
}
#[cfg(feature = "std")]
impl crate::traits::BareCryptoStore for KeyStore {
fn keys(&self, id: KeyTypeId) -> Result<Vec<CryptoTypePublicPair>, Error> {
self.keys
.get(&id)
.map(|map| {
Ok(map.keys()
.fold(Vec::new(), |mut v, k| {
v.push(CryptoTypePublicPair(sr25519::CRYPTO_ID, k.clone()));
v.push(CryptoTypePublicPair(ed25519::CRYPTO_ID, k.clone()));
v.push(CryptoTypePublicPair(ecdsa::CRYPTO_ID, k.clone()));
v
}))
})
.unwrap_or_else(|| Ok(vec![]))
}
fn sr25519_public_keys(&self, id: KeyTypeId) -> Vec<sr25519::Public> {
self.keys.get(&id)
.map(|keys|
keys.values()
.map(|s| sr25519::Pair::from_string(s, None).expect("`sr25519` seed slice is valid"))
.map(|p| p.public())
.collect()
)
.unwrap_or_default()
}
fn sr25519_generate_new(
&mut self,
id: KeyTypeId,
seed: Option<&str>,
) -> Result<sr25519::Public, Error> {
match seed {
Some(seed) => {
let pair = sr25519::Pair::from_string(seed, None)
.map_err(|_| Error::ValidationError("Generates an `sr25519` pair.".to_owned()))?;
self.keys.entry(id).or_default().insert(pair.public().to_raw_vec(), seed.into());
Ok(pair.public())
},
None => {
let (pair, phrase, _) = sr25519::Pair::generate_with_phrase(None);
self.keys.entry(id).or_default().insert(pair.public().to_raw_vec(), phrase);
Ok(pair.public())
}
}
}
fn ed25519_public_keys(&self, id: KeyTypeId) -> Vec<ed25519::Public> {
self.keys.get(&id)
.map(|keys|
keys.values()
.map(|s| ed25519::Pair::from_string(s, None).expect("`ed25519` seed slice is valid"))
.map(|p| p.public())
.collect()
)
.unwrap_or_default()
}
fn ed25519_generate_new(
&mut self,
id: KeyTypeId,
seed: Option<&str>,
) -> Result<ed25519::Public, Error> {
match seed {
Some(seed) => {
let pair = ed25519::Pair::from_string(seed, None)
.map_err(|_| Error::ValidationError("Generates an `ed25519` pair.".to_owned()))?;
self.keys.entry(id).or_default().insert(pair.public().to_raw_vec(), seed.into());
Ok(pair.public())
},
None => {
let (pair, phrase, _) = ed25519::Pair::generate_with_phrase(None);
self.keys.entry(id).or_default().insert(pair.public().to_raw_vec(), phrase);
Ok(pair.public())
}
}
}
fn ecdsa_public_keys(&self, id: KeyTypeId) -> Vec<ecdsa::Public> {
self.keys.get(&id)
.map(|keys|
keys.values()
.map(|s| ecdsa::Pair::from_string(s, None).expect("`ecdsa` seed slice is valid"))
.map(|p| p.public())
.collect()
)
.unwrap_or_default()
}
fn ecdsa_generate_new(
&mut self,
id: KeyTypeId,
seed: Option<&str>,
) -> Result<ecdsa::Public, Error> {
match seed {
Some(seed) => {
let pair = ecdsa::Pair::from_string(seed, None)
.map_err(|_| Error::ValidationError("Generates an `ecdsa` pair.".to_owned()))?;
self.keys.entry(id).or_default().insert(pair.public().to_raw_vec(), seed.into());
Ok(pair.public())
},
None => {
let (pair, phrase, _) = ecdsa::Pair::generate_with_phrase(None);
self.keys.entry(id).or_default().insert(pair.public().to_raw_vec(), phrase);
Ok(pair.public())
}
}
}
fn insert_unknown(&mut self, id: KeyTypeId, suri: &str, public: &[u8]) -> Result<(), ()> {
self.keys.entry(id).or_default().insert(public.to_owned(), suri.to_string());
Ok(())
}
fn password(&self) -> Option<&str> {
None
}
fn has_keys(&self, public_keys: &[(Vec<u8>, KeyTypeId)]) -> bool {
public_keys.iter().all(|(k, t)| self.keys.get(&t).and_then(|s| s.get(k)).is_some())
}
fn supported_keys(
&self,
id: KeyTypeId,
keys: Vec<CryptoTypePublicPair>,
) -> std::result::Result<Vec<CryptoTypePublicPair>, Error> {
let provided_keys = keys.into_iter().collect::<HashSet<_>>();
let all_keys = self.keys(id)?.into_iter().collect::<HashSet<_>>();
Ok(provided_keys.intersection(&all_keys).cloned().collect())
}
fn sign_with(
&self,
id: KeyTypeId,
key: &CryptoTypePublicPair,
msg: &[u8],
) -> Result<Vec<u8>, Error> {
use codec::Encode;
match key.0 {
ed25519::CRYPTO_ID => {
let key_pair: ed25519::Pair = self
.ed25519_key_pair(id, &ed25519::Public::from_slice(key.1.as_slice()))
.ok_or_else(|| Error::PairNotFound("ed25519".to_owned()))?;
return Ok(key_pair.sign(msg).encode());
}
sr25519::CRYPTO_ID => {
let key_pair: sr25519::Pair = self
.sr25519_key_pair(id, &sr25519::Public::from_slice(key.1.as_slice()))
.ok_or_else(|| Error::PairNotFound("sr25519".to_owned()))?;
return Ok(key_pair.sign(msg).encode());
}
ecdsa::CRYPTO_ID => {
let key_pair: ecdsa::Pair = self
.ecdsa_key_pair(id, &ecdsa::Public::from_slice(key.1.as_slice()))
.ok_or_else(|| Error::PairNotFound("ecdsa".to_owned()))?;
return Ok(key_pair.sign(msg).encode());
}
_ => Err(Error::KeyNotSupported(id))
}
}
fn sr25519_vrf_sign(
&self,
key_type: KeyTypeId,
public: &sr25519::Public,
transcript_data: VRFTranscriptData,
) -> Result<VRFSignature, Error> {
let transcript = make_transcript(transcript_data);
let pair = self.sr25519_key_pair(key_type, public)
.ok_or_else(|| Error::PairNotFound("Not found".to_owned()))?;
let (inout, proof, _) = pair.as_ref().vrf_sign(transcript);
Ok(VRFSignature {
output: inout.to_output(),
proof,
})
}
}
/// Macro for exporting functions from wasm with the expected signature for using it with the
/// wasm executor. This is useful for tests where you need to call a function in wasm.
///
@@ -385,80 +152,3 @@ impl crate::traits::SpawnNamed for TaskExecutor {
self.0.spawn_ok(future);
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::sr25519;
use crate::testing::{ED25519, SR25519};
use crate::vrf::VRFTranscriptValue;
#[test]
fn store_key_and_extract() {
let store = KeyStore::new();
let public = store.write()
.ed25519_generate_new(ED25519, None)
.expect("Generates key");
let public_keys = store.read().keys(ED25519).unwrap();
assert!(public_keys.contains(&public.into()));
}
#[test]
fn store_unknown_and_extract_it() {
let store = KeyStore::new();
let secret_uri = "//Alice";
let key_pair = sr25519::Pair::from_string(secret_uri, None).expect("Generates key pair");
store.write().insert_unknown(
SR25519,
secret_uri,
key_pair.public().as_ref(),
).expect("Inserts unknown key");
let public_keys = store.read().keys(SR25519).unwrap();
assert!(public_keys.contains(&key_pair.public().into()));
}
#[test]
fn vrf_sign() {
let store = KeyStore::new();
let secret_uri = "//Alice";
let key_pair = sr25519::Pair::from_string(secret_uri, None).expect("Generates key pair");
let transcript_data = VRFTranscriptData {
label: b"Test",
items: vec![
("one", VRFTranscriptValue::U64(1)),
("two", VRFTranscriptValue::U64(2)),
("three", VRFTranscriptValue::Bytes("test".as_bytes())),
]
};
let result = store.read().sr25519_vrf_sign(
SR25519,
&key_pair.public(),
transcript_data.clone(),
);
assert!(result.is_err());
store.write().insert_unknown(
SR25519,
secret_uri,
key_pair.public().as_ref(),
).expect("Inserts unknown key");
let result = store.read().sr25519_vrf_sign(
SR25519,
&key_pair.public(),
transcript_data,
);
assert!(result.is_ok());
}
}
@@ -17,192 +17,14 @@
//! Shareable Substrate traits.
use crate::{
crypto::{KeyTypeId, CryptoTypePublicPair},
vrf::{VRFTranscriptData, VRFSignature},
ed25519, sr25519, ecdsa,
};
use std::{
borrow::Cow,
fmt::{Debug, Display},
panic::UnwindSafe,
sync::Arc,
};
pub use sp_externalities::{Externalities, ExternalitiesExt};
/// BareCryptoStore error
#[derive(Debug, derive_more::Display)]
pub enum Error {
/// Public key type is not supported
#[display(fmt="Key not supported: {:?}", _0)]
KeyNotSupported(KeyTypeId),
/// Pair not found for public key and KeyTypeId
#[display(fmt="Pair was not found: {}", _0)]
PairNotFound(String),
/// Validation error
#[display(fmt="Validation error: {}", _0)]
ValidationError(String),
/// Keystore unavailable
#[display(fmt="Keystore unavailable")]
Unavailable,
/// Programming errors
#[display(fmt="An unknown keystore error occurred: {}", _0)]
Other(String)
}
/// Something that generates, stores and provides access to keys.
pub trait BareCryptoStore: Send + Sync {
/// Returns all sr25519 public keys for the given key type.
fn sr25519_public_keys(&self, id: KeyTypeId) -> Vec<sr25519::Public>;
/// Generate a new sr25519 key pair for the given key type and an optional seed.
///
/// If the given seed is `Some(_)`, the key pair will only be stored in memory.
///
/// Returns the public key of the generated key pair.
fn sr25519_generate_new(
&mut self,
id: KeyTypeId,
seed: Option<&str>,
) -> Result<sr25519::Public, Error>;
/// Returns all ed25519 public keys for the given key type.
fn ed25519_public_keys(&self, id: KeyTypeId) -> Vec<ed25519::Public>;
/// Generate a new ed25519 key pair for the given key type and an optional seed.
///
/// If the given seed is `Some(_)`, the key pair will only be stored in memory.
///
/// Returns the public key of the generated key pair.
fn ed25519_generate_new(
&mut self,
id: KeyTypeId,
seed: Option<&str>,
) -> Result<ed25519::Public, Error>;
/// Returns all ecdsa public keys for the given key type.
fn ecdsa_public_keys(&self, id: KeyTypeId) -> Vec<ecdsa::Public>;
/// Generate a new ecdsa key pair for the given key type and an optional seed.
///
/// If the given seed is `Some(_)`, the key pair will only be stored in memory.
///
/// Returns the public key of the generated key pair.
fn ecdsa_generate_new(
&mut self,
id: KeyTypeId,
seed: Option<&str>,
) -> Result<ecdsa::Public, Error>;
/// Insert a new key. This doesn't require any knowledge of the crypto; but a public key must be
/// manually provided.
///
/// Places it into the file system store.
///
/// `Err` if there's some sort of weird filesystem error, but should generally be `Ok`.
fn insert_unknown(&mut self, _key_type: KeyTypeId, _suri: &str, _public: &[u8]) -> Result<(), ()>;
/// Get the password for this store.
fn password(&self) -> Option<&str>;
/// Find intersection between provided keys and supported keys
///
/// Provided a list of (CryptoTypeId,[u8]) pairs, this would return
/// a filtered set of public keys which are supported by the keystore.
fn supported_keys(
&self,
id: KeyTypeId,
keys: Vec<CryptoTypePublicPair>
) -> Result<Vec<CryptoTypePublicPair>, Error>;
/// List all supported keys
///
/// Returns a set of public keys the signer supports.
fn keys(&self, id: KeyTypeId) -> Result<Vec<CryptoTypePublicPair>, Error>;
/// Checks if the private keys for the given public key and key type combinations exist.
///
/// Returns `true` iff all private keys could be found.
fn has_keys(&self, public_keys: &[(Vec<u8>, KeyTypeId)]) -> bool;
/// Sign with key
///
/// Signs a message with the private key that matches
/// the public key passed.
///
/// Returns the SCALE encoded signature if key is found & supported,
/// an error otherwise.
fn sign_with(
&self,
id: KeyTypeId,
key: &CryptoTypePublicPair,
msg: &[u8],
) -> Result<Vec<u8>, Error>;
/// Sign with any key
///
/// Given a list of public keys, find the first supported key and
/// sign the provided message with that key.
///
/// Returns a tuple of the used key and the SCALE encoded signature.
fn sign_with_any(
&self,
id: KeyTypeId,
keys: Vec<CryptoTypePublicPair>,
msg: &[u8]
) -> Result<(CryptoTypePublicPair, Vec<u8>), Error> {
if keys.len() == 1 {
return self.sign_with(id, &keys[0], msg).map(|s| (keys[0].clone(), s));
} else {
for k in self.supported_keys(id, keys)? {
if let Ok(sign) = self.sign_with(id, &k, msg) {
return Ok((k, sign));
}
}
}
Err(Error::KeyNotSupported(id))
}
/// Sign with all keys
///
/// Provided a list of public keys, sign a message with
/// each key given that the key is supported.
///
/// Returns a list of `Result`s each representing the SCALE encoded
/// signature of each key or an `Error` for non-supported keys.
fn sign_with_all(
&self,
id: KeyTypeId,
keys: Vec<CryptoTypePublicPair>,
msg: &[u8],
) -> Result<Vec<Result<Vec<u8>, Error>>, ()>{
Ok(keys.iter().map(|k| self.sign_with(id, k, msg)).collect())
}
/// Generate VRF signature for given transcript data.
///
/// Receives KeyTypeId and Public key to be able to map
/// them to a private key that exists in the keystore which
/// is, in turn, used for signing the provided transcript.
///
/// Returns a result containing the signature data.
/// Namely, VRFOutput and VRFProof which are returned
/// inside the `VRFSignature` container struct.
///
/// This function will return an error in the cases where
/// the public key and key type provided do not match a private
/// key in the keystore. Or, in the context of remote signing
/// an error could be a network one.
fn sr25519_vrf_sign(
&self,
key_type: KeyTypeId,
public: &sr25519::Public,
transcript_data: VRFTranscriptData,
) -> Result<VRFSignature, Error>;
}
/// A pointer to the key store.
pub type BareCryptoStorePtr = Arc<parking_lot::RwLock<dyn BareCryptoStore>>;
sp_externalities::decl_extension! {
/// The keystore extension to register/retrieve from the externalities.
pub struct KeystoreExt(BareCryptoStorePtr);
}
/// Code execution engine.
pub trait CodeExecutor: Sized + Send + Sync + CallInWasm + Clone + 'static {
/// Externalities error type.
@@ -15,26 +15,28 @@ targets = ["x86_64-unknown-linux-gnu"]
[dependencies]
sp-application-crypto = { version = "2.0.0", default-features = false, path = "../application-crypto" }
codec = { package = "parity-scale-codec", version = "1.3.1", default-features = false, features = ["derive"] }
grandpa = { package = "finality-grandpa", version = "0.12.3", default-features = false, features = ["derive-codec"] }
log = { version = "0.4.8", optional = true }
serde = { version = "1.0.101", optional = true, features = ["derive"] }
sp-api = { version = "2.0.0", default-features = false, path = "../api" }
sp-application-crypto = { version = "2.0.0", default-features = false, path = "../application-crypto" }
sp-core = { version = "2.0.0", default-features = false, path = "../core" }
sp-keystore = { version = "0.8.0", default-features = false, path = "../keystore", optional = true }
sp-runtime = { version = "2.0.0", default-features = false, path = "../runtime" }
sp-std = { version = "2.0.0", default-features = false, path = "../std" }
[features]
default = ["std"]
std = [
"sp-application-crypto/std",
"codec/std",
"grandpa/std",
"log",
"serde",
"codec/std",
"grandpa/std",
"sp-api/std",
"sp-application-crypto/std",
"sp-core/std",
"sp-keystore",
"sp-runtime/std",
"sp-std/std",
]
@@ -30,7 +30,7 @@ use sp_runtime::{ConsensusEngineId, RuntimeDebug, traits::NumberFor};
use sp_std::borrow::Cow;
use sp_std::vec::Vec;
#[cfg(feature = "std")]
use sp_core::traits::BareCryptoStorePtr;
use sp_keystore::{SyncCryptoStorePtr, SyncCryptoStore};
#[cfg(feature = "std")]
use log::debug;
@@ -372,7 +372,7 @@ where
/// Localizes the message to the given set and round and signs the payload.
#[cfg(feature = "std")]
pub fn sign_message<H, N>(
keystore: &BareCryptoStorePtr,
keystore: SyncCryptoStorePtr,
message: grandpa::Message<H, N>,
public: AuthorityId,
round: RoundNumber,
@@ -387,11 +387,12 @@ where
use sp_std::convert::TryInto;
let encoded = localized_payload(round, set_id, &message);
let signature = keystore.read()
.sign_with(AuthorityId::ID, &public.to_public_crypto_pair(), &encoded[..])
.ok()?
.try_into()
.ok()?;
let signature = SyncCryptoStore::sign_with(
&*keystore,
AuthorityId::ID,
&public.to_public_crypto_pair(),
&encoded[..],
).ok()?.try_into().ok()?;
Some(grandpa::SignedMessage {
message,
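`SyncCryptoStore::sign_with(&*keystore, …)` above uses fully qualified call syntax because `CryptoStore` and `SyncCryptoStore` both declare a `sign_with` method; plain method syntax on the keystore would be ambiguous once both traits are in scope. A toy reproduction of the pattern (trait and type names are illustrative, not the real API):

```rust
use std::sync::Arc;

// Stand-ins for CryptoStore / SyncCryptoStore, which share method names.
trait AsyncStore { fn sign(&self, msg: &[u8]) -> Vec<u8>; }
trait SyncStore: AsyncStore { fn sign(&self, msg: &[u8]) -> Vec<u8>; }

struct Local;

impl AsyncStore for Local {
    fn sign(&self, msg: &[u8]) -> Vec<u8> { msg.to_vec() }
}

impl SyncStore for Local {
    fn sign(&self, msg: &[u8]) -> Vec<u8> {
        let mut v = msg.to_vec();
        v.reverse(); // different behaviour, to show which impl gets picked
        v
    }
}

fn main() {
    let keystore: Arc<Local> = Arc::new(Local);
    // `keystore.sign(b"ab")` would not compile: two candidate methods.
    // Fully qualified syntax names the trait; `&*keystore` derefs the Arc.
    let sig = SyncStore::sign(&*keystore, b"ab");
    assert_eq!(sig, b"ba".to_vec());
}
```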
@@ -18,6 +18,7 @@ targets = ["x86_64-unknown-linux-gnu"]
codec = { package = "parity-scale-codec", version = "1.3.1", default-features = false }
hash-db = { version = "0.15.2", default-features = false }
sp-core = { version = "2.0.0", default-features = false, path = "../core" }
sp-keystore = { version = "0.8.0", default-features = false, optional = true, path = "../keystore" }
sp-std = { version = "2.0.0", default-features = false, path = "../std" }
libsecp256k1 = { version = "0.3.4", optional = true }
sp-state-machine = { version = "0.8.0", optional = true, path = "../../primitives/state-machine" }
@@ -36,6 +37,7 @@ tracing-core = { version = "0.1.17", default-features = false}
default = ["std"]
std = [
"sp-core/std",
"sp-keystore",
"codec/std",
"sp-std/std",
"hash-db/std",
@@ -38,11 +38,13 @@ use tracing;
#[cfg(feature = "std")]
use sp_core::{
crypto::Pair,
traits::{KeystoreExt, CallInWasmExt, TaskExecutorExt},
traits::{CallInWasmExt, TaskExecutorExt},
offchain::{OffchainExt, TransactionPoolExt},
hexdisplay::HexDisplay,
storage::ChildInfo,
};
#[cfg(feature = "std")]
use sp_keystore::{KeystoreExt, SyncCryptoStore};
use sp_core::{
OpaquePeerId, crypto::KeyTypeId, ed25519, sr25519, ecdsa, H256, LogLevel,
@@ -417,10 +419,9 @@ pub trait Misc {
pub trait Crypto {
/// Returns all `ed25519` public keys for the given key id from the keystore.
fn ed25519_public_keys(&mut self, id: KeyTypeId) -> Vec<ed25519::Public> {
self.extension::<KeystoreExt>()
.expect("No `keystore` associated for the current context!")
.read()
.ed25519_public_keys(id)
let keystore = &***self.extension::<KeystoreExt>()
.expect("No `keystore` associated for the current context!");
SyncCryptoStore::ed25519_public_keys(keystore, id)
}
/// Generate an `ed25519` key for the given key type using an optional `seed` and
@@ -431,10 +432,9 @@ pub trait Crypto {
/// Returns the public key.
fn ed25519_generate(&mut self, id: KeyTypeId, seed: Option<Vec<u8>>) -> ed25519::Public {
let seed = seed.as_ref().map(|s| std::str::from_utf8(&s).expect("Seed is valid utf8!"));
self.extension::<KeystoreExt>()
.expect("No `keystore` associated for the current context!")
.write()
.ed25519_generate_new(id, seed)
let keystore = &***self.extension::<KeystoreExt>()
.expect("No `keystore` associated for the current context!");
SyncCryptoStore::ed25519_generate_new(keystore, id, seed)
.expect("`ed25519_generate` failed")
}
@@ -448,10 +448,9 @@ pub trait Crypto {
pub_key: &ed25519::Public,
msg: &[u8],
) -> Option<ed25519::Signature> {
self.extension::<KeystoreExt>()
.expect("No `keystore` associated for the current context!")
.read()
.sign_with(id, &pub_key.into(), msg)
let keystore = &***self.extension::<KeystoreExt>()
.expect("No `keystore` associated for the current context!");
SyncCryptoStore::sign_with(keystore, id, &pub_key.into(), msg)
.map(|sig| ed25519::Signature::from_slice(sig.as_slice()))
.ok()
}
@@ -547,10 +546,9 @@ pub trait Crypto {
/// Returns all `sr25519` public keys for the given key id from the keystore.
fn sr25519_public_keys(&mut self, id: KeyTypeId) -> Vec<sr25519::Public> {
self.extension::<KeystoreExt>()
.expect("No `keystore` associated for the current context!")
.read()
.sr25519_public_keys(id)
let keystore = &***self.extension::<KeystoreExt>()
.expect("No `keystore` associated for the current context!");
SyncCryptoStore::sr25519_public_keys(keystore, id)
}
/// Generate an `sr25519` key for the given key type using an optional seed and
@@ -561,10 +559,9 @@ pub trait Crypto {
/// Returns the public key.
fn sr25519_generate(&mut self, id: KeyTypeId, seed: Option<Vec<u8>>) -> sr25519::Public {
let seed = seed.as_ref().map(|s| std::str::from_utf8(&s).expect("Seed is valid utf8!"));
self.extension::<KeystoreExt>()
.expect("No `keystore` associated for the current context!")
.write()
.sr25519_generate_new(id, seed)
let keystore = &***self.extension::<KeystoreExt>()
.expect("No `keystore` associated for the current context!");
SyncCryptoStore::sr25519_generate_new(keystore, id, seed)
.expect("`sr25519_generate` failed")
}
@@ -578,10 +575,9 @@ pub trait Crypto {
pub_key: &sr25519::Public,
msg: &[u8],
) -> Option<sr25519::Signature> {
self.extension::<KeystoreExt>()
.expect("No `keystore` associated for the current context!")
.read()
.sign_with(id, &pub_key.into(), msg)
let keystore = &***self.extension::<KeystoreExt>()
.expect("No `keystore` associated for the current context!");
SyncCryptoStore::sign_with(keystore, id, &pub_key.into(), msg)
.map(|sig| sr25519::Signature::from_slice(sig.as_slice()))
.ok()
}
@@ -596,10 +592,9 @@ pub trait Crypto {
/// Returns all `ecdsa` public keys for the given key id from the keystore.
fn ecdsa_public_keys(&mut self, id: KeyTypeId) -> Vec<ecdsa::Public> {
self.extension::<KeystoreExt>()
.expect("No `keystore` associated for the current context!")
.read()
.ecdsa_public_keys(id)
let keystore = &***self.extension::<KeystoreExt>()
.expect("No `keystore` associated for the current context!");
SyncCryptoStore::ecdsa_public_keys(keystore, id)
}
/// Generate an `ecdsa` key for the given key type using an optional `seed` and
@@ -610,10 +605,9 @@ pub trait Crypto {
/// Returns the public key.
fn ecdsa_generate(&mut self, id: KeyTypeId, seed: Option<Vec<u8>>) -> ecdsa::Public {
let seed = seed.as_ref().map(|s| std::str::from_utf8(&s).expect("Seed is valid utf8!"));
self.extension::<KeystoreExt>()
.expect("No `keystore` associated for the current context!")
.write()
.ecdsa_generate_new(id, seed)
let keystore = &***self.extension::<KeystoreExt>()
.expect("No `keystore` associated for the current context!");
SyncCryptoStore::ecdsa_generate_new(keystore, id, seed)
.expect("`ecdsa_generate` failed")
}
@@ -627,10 +621,9 @@ pub trait Crypto {
pub_key: &ecdsa::Public,
msg: &[u8],
) -> Option<ecdsa::Signature> {
self.extension::<KeystoreExt>()
.expect("No `keystore` associated for the current context!")
.read()
.sign_with(id, &pub_key.into(), msg)
let keystore = &***self.extension::<KeystoreExt>()
.expect("No `keystore` associated for the current context!");
SyncCryptoStore::sign_with(keystore, id, &pub_key.into(), msg)
.map(|sig| ecdsa::Signature::from_slice(sig.as_slice()))
.ok()
}
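The recurring `&***self.extension::<KeystoreExt>()` in the hunks above peels three layers: the reference returned by `extension()`, the `KeystoreExt` newtype's `Deref` to `SyncCryptoStorePtr` (an `Arc`), and the `Arc` itself, yielding the `&dyn SyncCryptoStore` the fully qualified `SyncCryptoStore::…` calls expect. A reduced std-only model of that deref chain (names are illustrative):

```rust
use std::ops::Deref;
use std::sync::Arc;

trait Store { fn name(&self) -> &'static str; }

struct Mem;
impl Store for Mem { fn name(&self) -> &'static str { "mem" } }

// Newtype like KeystoreExt around Arc<dyn SyncCryptoStore>.
struct Ext(Arc<dyn Store>);
impl Deref for Ext {
    type Target = Arc<dyn Store>;
    fn deref(&self) -> &Self::Target { &self.0 }
}

fn use_store(s: &dyn Store) -> &'static str { s.name() }

fn main() {
    let ext: &Ext = &Ext(Arc::new(Mem));
    // &***ext: &Ext -> Ext -> Arc<dyn Store> -> dyn Store, re-borrowed.
    assert_eq!(use_store(&***ext), "mem");
}
```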
@@ -0,0 +1,29 @@
[package]
name = "sp-keystore"
version = "0.8.0"
authors = ["Parity Technologies <admin@parity.io>"]
edition = "2018"
license = "Apache-2.0"
homepage = "https://substrate.dev"
repository = "https://github.com/paritytech/substrate/"
description = "Keystore primitives."
documentation = "https://docs.rs/sp-keystore"
[package.metadata.docs.rs]
targets = ["x86_64-unknown-linux-gnu"]
[dependencies]
async-trait = "0.1.30"
derive_more = "0.99.2"
codec = { package = "parity-scale-codec", version = "1.3.1", default-features = false, features = ["derive"] }
futures = { version = "0.3.1" }
schnorrkel = { version = "0.9.1", features = ["preaudit_deprecated", "u64_backend"], default-features = false }
merlin = { version = "2.0", default-features = false }
parking_lot = { version = "0.10.0", default-features = false }
sp-core = { version = "2.0.0", path = "../core" }
sp-externalities = { version = "0.8.0", path = "../externalities", default-features = false }
[dev-dependencies]
rand = "0.7.2"
rand_chacha = "0.2.2"
@@ -0,0 +1,365 @@
// This file is part of Substrate.
// Copyright (C) 2020 Parity Technologies (UK) Ltd.
// SPDX-License-Identifier: Apache-2.0
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//! Keystore traits
pub mod testing;
pub mod vrf;
use std::sync::Arc;
use async_trait::async_trait;
use futures::{executor::block_on, future::join_all};
use sp_core::{
crypto::{KeyTypeId, CryptoTypePublicPair},
ed25519, sr25519, ecdsa,
};
use crate::vrf::{VRFTranscriptData, VRFSignature};
/// CryptoStore error
#[derive(Debug, derive_more::Display)]
pub enum Error {
/// Public key type is not supported
#[display(fmt="Key not supported: {:?}", _0)]
KeyNotSupported(KeyTypeId),
/// Pair not found for public key and KeyTypeId
#[display(fmt="Pair was not found: {}", _0)]
PairNotFound(String),
/// Validation error
#[display(fmt="Validation error: {}", _0)]
ValidationError(String),
/// Keystore unavailable
#[display(fmt="Keystore unavailable")]
Unavailable,
/// Programming errors
#[display(fmt="An unknown keystore error occurred: {}", _0)]
Other(String)
}
/// Something that generates, stores and provides access to keys.
#[async_trait]
pub trait CryptoStore: Send + Sync {
/// Returns all sr25519 public keys for the given key type.
async fn sr25519_public_keys(&self, id: KeyTypeId) -> Vec<sr25519::Public>;
/// Generate a new sr25519 key pair for the given key type and an optional seed.
///
/// If the given seed is `Some(_)`, the key pair will only be stored in memory.
///
/// Returns the public key of the generated key pair.
async fn sr25519_generate_new(
&self,
id: KeyTypeId,
seed: Option<&str>,
) -> Result<sr25519::Public, Error>;
/// Returns all ed25519 public keys for the given key type.
async fn ed25519_public_keys(&self, id: KeyTypeId) -> Vec<ed25519::Public>;
/// Generate a new ed25519 key pair for the given key type and an optional seed.
///
/// If the given seed is `Some(_)`, the key pair will only be stored in memory.
///
/// Returns the public key of the generated key pair.
async fn ed25519_generate_new(
&self,
id: KeyTypeId,
seed: Option<&str>,
) -> Result<ed25519::Public, Error>;
/// Returns all ecdsa public keys for the given key type.
async fn ecdsa_public_keys(&self, id: KeyTypeId) -> Vec<ecdsa::Public>;
/// Generate a new ecdsa key pair for the given key type and an optional seed.
///
/// If the given seed is `Some(_)`, the key pair will only be stored in memory.
///
/// Returns the public key of the generated key pair.
async fn ecdsa_generate_new(
&self,
id: KeyTypeId,
seed: Option<&str>,
) -> Result<ecdsa::Public, Error>;
/// Insert a new key. This doesn't require any knowledge of the crypto; but a public key must be
/// manually provided.
///
/// Places it into the file system store.
///
/// `Err` if there's some sort of weird filesystem error, but should generally be `Ok`.
async fn insert_unknown(
&self,
_key_type: KeyTypeId,
_suri: &str,
_public: &[u8]
) -> Result<(), ()>;
/// Find intersection between provided keys and supported keys
///
/// Provided a list of (CryptoTypeId,[u8]) pairs, this would return
/// a filtered set of public keys which are supported by the keystore.
async fn supported_keys(
&self,
id: KeyTypeId,
keys: Vec<CryptoTypePublicPair>
) -> Result<Vec<CryptoTypePublicPair>, Error>;
/// List all supported keys
///
/// Returns a set of public keys the signer supports.
async fn keys(&self, id: KeyTypeId) -> Result<Vec<CryptoTypePublicPair>, Error>;
/// Checks if the private keys for the given public key and key type combinations exist.
///
/// Returns `true` iff all private keys could be found.
async fn has_keys(&self, public_keys: &[(Vec<u8>, KeyTypeId)]) -> bool;
/// Sign with key
///
/// Signs a message with the private key that matches
/// the public key passed.
///
/// Returns the SCALE encoded signature if key is found & supported,
/// an error otherwise.
async fn sign_with(
&self,
id: KeyTypeId,
key: &CryptoTypePublicPair,
msg: &[u8],
) -> Result<Vec<u8>, Error>;
/// Sign with any key
///
/// Given a list of public keys, find the first supported key and
/// sign the provided message with that key.
///
/// Returns a tuple of the used key and the SCALE encoded signature.
async fn sign_with_any(
&self,
id: KeyTypeId,
keys: Vec<CryptoTypePublicPair>,
msg: &[u8]
) -> Result<(CryptoTypePublicPair, Vec<u8>), Error> {
if keys.len() == 1 {
return self.sign_with(id, &keys[0], msg).await.map(|s| (keys[0].clone(), s));
} else {
for k in self.supported_keys(id, keys).await? {
if let Ok(sign) = self.sign_with(id, &k, msg).await {
return Ok((k, sign));
}
}
}
Err(Error::KeyNotSupported(id))
}
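The default `sign_with_any` body above is a first-match search: a single key is tried directly, otherwise the candidates are narrowed through `supported_keys` and the first one that signs successfully wins. The same control flow, stripped of async and of real crypto (every type here is a toy, not the sp-keystore API):

```rust
#[derive(Clone, PartialEq, Debug)]
struct Key(u8);

struct Store { supported: Vec<Key> }

impl Store {
    fn sign_with(&self, key: &Key, msg: &[u8]) -> Result<Vec<u8>, ()> {
        if self.supported.contains(key) { Ok(msg.to_vec()) } else { Err(()) }
    }

    // Mirrors sign_with_any: one key is used directly; otherwise the
    // first supported key that signs successfully is returned.
    fn sign_with_any(&self, keys: Vec<Key>, msg: &[u8]) -> Result<(Key, Vec<u8>), ()> {
        if keys.len() == 1 {
            return self.sign_with(&keys[0], msg).map(|s| (keys[0].clone(), s));
        }
        for k in keys.into_iter().filter(|k| self.supported.contains(k)) {
            if let Ok(sig) = self.sign_with(&k, msg) {
                return Ok((k, sig));
            }
        }
        Err(()) // no supported key: Error::KeyNotSupported in the real trait
    }
}

fn main() {
    let store = Store { supported: vec![Key(2), Key(3)] };
    // Key(1) is skipped as unsupported; Key(2) is the first match.
    let (key, sig) = store.sign_with_any(vec![Key(1), Key(2)], b"m").unwrap();
    assert_eq!(key, Key(2));
    assert_eq!(sig, b"m".to_vec());
}
```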
/// Sign with all keys
///
/// Provided a list of public keys, sign a message with
/// each key given that the key is supported.
///
/// Returns a list of `Result`s each representing the SCALE encoded
/// signature of each key or an `Error` for non-supported keys.
async fn sign_with_all(
&self,
id: KeyTypeId,
keys: Vec<CryptoTypePublicPair>,
msg: &[u8],
) -> Result<Vec<Result<Vec<u8>, Error>>, ()> {
let futs = keys.iter()
.map(|k| self.sign_with(id, k, msg));
Ok(join_all(futs).await)
}
/// Generate VRF signature for given transcript data.
///
/// Receives KeyTypeId and Public key to be able to map
/// them to a private key that exists in the keystore which
/// is, in turn, used for signing the provided transcript.
///
/// Returns a result containing the signature data.
/// Namely, VRFOutput and VRFProof which are returned
/// inside the `VRFSignature` container struct.
///
/// This function will return an error in the cases where
/// the public key and key type provided do not match a private
/// key in the keystore. Or, in the context of remote signing
/// an error could be a network one.
async fn sr25519_vrf_sign(
&self,
key_type: KeyTypeId,
public: &sr25519::Public,
transcript_data: VRFTranscriptData,
) -> Result<VRFSignature, Error>;
}
/// Sync version of the CryptoStore
///
/// Some parts of Substrate still rely on a sync version of the `CryptoStore`.
/// To make the transition easier this trait wraps any async `CryptoStore` and
/// exposes a `sync` interface using `block_on`. Usage of this is deprecated and it
/// will be removed as soon as the internal usage has transitioned successfully.
/// If you are starting out building something new **do not use this**,
/// instead, use [`CryptoStore`].
pub trait SyncCryptoStore: CryptoStore + Send + Sync {
/// Returns all sr25519 public keys for the given key type.
fn sr25519_public_keys(&self, id: KeyTypeId) -> Vec<sr25519::Public>;
/// Generate a new sr25519 key pair for the given key type and an optional seed.
///
/// If the given seed is `Some(_)`, the key pair will only be stored in memory.
///
/// Returns the public key of the generated key pair.
fn sr25519_generate_new(
&self,
id: KeyTypeId,
seed: Option<&str>,
) -> Result<sr25519::Public, Error>;
/// Returns all ed25519 public keys for the given key type.
fn ed25519_public_keys(&self, id: KeyTypeId) -> Vec<ed25519::Public>;
/// Generate a new ed25519 key pair for the given key type and an optional seed.
///
/// If the given seed is `Some(_)`, the key pair will only be stored in memory.
///
/// Returns the public key of the generated key pair.
fn ed25519_generate_new(
&self,
id: KeyTypeId,
seed: Option<&str>,
) -> Result<ed25519::Public, Error>;
/// Returns all ecdsa public keys for the given key type.
fn ecdsa_public_keys(&self, id: KeyTypeId) -> Vec<ecdsa::Public>;
/// Generate a new ecdsa key pair for the given key type and an optional seed.
///
/// If the given seed is `Some(_)`, the key pair will only be stored in memory.
///
/// Returns the public key of the generated key pair.
fn ecdsa_generate_new(
&self,
id: KeyTypeId,
seed: Option<&str>,
) -> Result<ecdsa::Public, Error>;
/// Insert a new key. This doesn't require any knowledge of the crypto;
/// but a public key must be manually provided.
///
/// Places it into the file system store.
///
/// Returns `Err` if there is a filesystem error; should generally be `Ok`.
fn insert_unknown(&self, key_type: KeyTypeId, suri: &str, public: &[u8]) -> Result<(), ()>;
/// Find the intersection between the provided keys and the keys supported by the keystore.
///
/// Given a list of `(CryptoTypeId, Vec<u8>)` pairs (i.e. `CryptoTypePublicPair`s),
/// returns the subset of public keys that the keystore supports.
fn supported_keys(
&self,
id: KeyTypeId,
keys: Vec<CryptoTypePublicPair>
) -> Result<Vec<CryptoTypePublicPair>, Error>;
/// List all supported keys
///
/// Returns a set of public keys the signer supports.
fn keys(&self, id: KeyTypeId) -> Result<Vec<CryptoTypePublicPair>, Error> {
block_on(CryptoStore::keys(self, id))
}
/// Checks if the private keys for the given public key and key type combinations exist.
///
/// Returns `true` iff all private keys could be found.
fn has_keys(&self, public_keys: &[(Vec<u8>, KeyTypeId)]) -> bool;
/// Sign with key
///
/// Signs a message with the private key that matches
/// the public key passed.
///
/// Returns the SCALE encoded signature if the key is found and supported,
/// an error otherwise.
fn sign_with(
&self,
id: KeyTypeId,
key: &CryptoTypePublicPair,
msg: &[u8],
) -> Result<Vec<u8>, Error>;
/// Sign with any key
///
/// Given a list of public keys, find the first supported key and
/// sign the provided message with that key.
///
/// Returns a tuple of the used key and the SCALE encoded signature.
fn sign_with_any(
&self,
id: KeyTypeId,
keys: Vec<CryptoTypePublicPair>,
msg: &[u8]
) -> Result<(CryptoTypePublicPair, Vec<u8>), Error> {
if keys.len() == 1 {
return SyncCryptoStore::sign_with(self, id, &keys[0], msg).map(|s| (keys[0].clone(), s));
} else {
for k in SyncCryptoStore::supported_keys(self, id, keys)? {
if let Ok(sign) = SyncCryptoStore::sign_with(self, id, &k, msg) {
return Ok((k, sign));
}
}
}
Err(Error::KeyNotSupported(id))
}
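The `sign_with_any` default above tries keys in order and returns the first successful signature. The shape of that fallback can be sketched with hypothetical stand-in types (plain `u32` key ids and a closure in place of the keystore's signer — these names are not part of the real API):

```rust
// Stand-ins for the keystore types (hypothetical, for illustration only):
// keys are plain u32 ids and a closure plays the role of the signer.
fn sign_with_any(
    keys: &[u32],
    sign: impl Fn(u32) -> Result<Vec<u8>, String>,
) -> Result<(u32, Vec<u8>), String> {
    // Try each supported key in order; the first successful signature wins.
    for &k in keys {
        if let Ok(sig) = sign(k) {
            return Ok((k, sig));
        }
    }
    Err("no supported key".to_string())
}

fn main() {
    // Key 1 is "unsupported"; key 2 signs successfully.
    let signer = |k: u32| if k == 1 { Err("unsupported".to_string()) } else { Ok(vec![k as u8]) };
    assert_eq!(sign_with_any(&[1, 2, 3], signer), Ok((2, vec![2])));
}
```

Note that, as in the trait's default body, an `Err` from one key does not abort the search; only exhausting the list produces the final `KeyNotSupported`-style error.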
/// Sign with all keys
///
/// Given a list of public keys, sign a message with
/// each supported key.
///
/// Returns a list of `Result`s, each holding either the SCALE encoded
/// signature for a key or an `Error` for unsupported keys.
fn sign_with_all(
&self,
id: KeyTypeId,
keys: Vec<CryptoTypePublicPair>,
msg: &[u8],
) -> Result<Vec<Result<Vec<u8>, Error>>, ()> {
Ok(keys.iter().map(|k| SyncCryptoStore::sign_with(self, id, k, msg)).collect())
}
/// Generate VRF signature for given transcript data.
///
/// Receives KeyTypeId and Public key to be able to map
/// them to a private key that exists in the keystore which
/// is, in turn, used for signing the provided transcript.
///
/// Returns a result containing the signature data.
/// Namely, VRFOutput and VRFProof which are returned
/// inside the `VRFSignature` container struct.
///
/// This function will return an error in the cases where
/// the public key and key type provided do not match a private
/// key in the keystore. Or, in the context of remote signing
/// an error could be a network one.
fn sr25519_vrf_sign(
&self,
key_type: KeyTypeId,
public: &sr25519::Public,
transcript_data: VRFTranscriptData,
) -> Result<VRFSignature, Error>;
}
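The sync trait bridges into the async `CryptoStore` via `block_on` (as in the `keys` default method). The real code presumably relies on an existing executor such as `futures::executor::block_on`; purely as an illustration of what that bridging does, here is a minimal std-only `block_on` that parks the calling thread until the future resolves:

```rust
use std::future::Future;
use std::sync::{Arc, Condvar, Mutex};
use std::task::{Context, Poll, Wake, Waker};

// A waker that unparks the blocked thread when the future makes progress.
struct ThreadWaker(Arc<(Mutex<bool>, Condvar)>);

impl Wake for ThreadWaker {
    fn wake(self: Arc<Self>) {
        let (lock, cvar) = &*self.0;
        *lock.lock().unwrap() = true;
        cvar.notify_one();
    }
}

// Drive a future to completion on the current thread.
fn block_on<F: Future>(fut: F) -> F::Output {
    let mut fut = Box::pin(fut);
    let signal = Arc::new((Mutex::new(false), Condvar::new()));
    let waker = Waker::from(Arc::new(ThreadWaker(signal.clone())));
    let mut cx = Context::from_waker(&waker);
    loop {
        if let Poll::Ready(out) = fut.as_mut().poll(&mut cx) {
            return out;
        }
        // Sleep until the waker fires, then poll again.
        let (lock, cvar) = &*signal;
        let mut woken = lock.lock().unwrap();
        while !*woken {
            woken = cvar.wait(woken).unwrap();
        }
        *woken = false;
    }
}

fn main() {
    assert_eq!(block_on(async { 21 * 2 }), 42);
}
```

This is also why the doc comment calls the sync interface transitional: every sync call blocks a thread for the duration of an async operation.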
/// A pointer to a keystore.
pub type SyncCryptoStorePtr = Arc<dyn SyncCryptoStore>;
sp_externalities::decl_extension! {
/// The keystore extension to register/retrieve from the externalities.
pub struct KeystoreExt(SyncCryptoStorePtr);
}
@@ -0,0 +1,415 @@
// This file is part of Substrate.
// Copyright (C) 2019-2020 Parity Technologies (UK) Ltd.
// SPDX-License-Identifier: Apache-2.0
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//! Types that should only be used for testing!
use sp_core::crypto::KeyTypeId;
use sp_core::{
crypto::{Pair, Public, CryptoTypePublicPair},
ed25519, sr25519, ecdsa,
};
use crate::{
{CryptoStore, SyncCryptoStorePtr, Error, SyncCryptoStore},
vrf::{VRFTranscriptData, VRFSignature, make_transcript},
};
use std::{collections::{HashMap, HashSet}, sync::Arc};
use parking_lot::RwLock;
use async_trait::async_trait;
/// A keystore implementation usable in tests.
#[derive(Default)]
pub struct KeyStore {
/// `KeyTypeId` maps to public keys and public keys map to private keys.
keys: Arc<RwLock<HashMap<KeyTypeId, HashMap<Vec<u8>, String>>>>,
}
impl KeyStore {
/// Creates a new instance of `Self`.
pub fn new() -> Self {
Self::default()
}
fn sr25519_key_pair(&self, id: KeyTypeId, pub_key: &sr25519::Public) -> Option<sr25519::Pair> {
self.keys.read().get(&id)
.and_then(|inner|
inner.get(pub_key.as_slice())
.map(|s| sr25519::Pair::from_string(s, None).expect("`sr25519` seed slice is valid"))
)
}
fn ed25519_key_pair(&self, id: KeyTypeId, pub_key: &ed25519::Public) -> Option<ed25519::Pair> {
self.keys.read().get(&id)
.and_then(|inner|
inner.get(pub_key.as_slice())
.map(|s| ed25519::Pair::from_string(s, None).expect("`ed25519` seed slice is valid"))
)
}
fn ecdsa_key_pair(&self, id: KeyTypeId, pub_key: &ecdsa::Public) -> Option<ecdsa::Pair> {
self.keys.read().get(&id)
.and_then(|inner|
inner.get(pub_key.as_slice())
.map(|s| ecdsa::Pair::from_string(s, None).expect("`ecdsa` seed slice is valid"))
)
}
}
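The `*_key_pair` helpers above all perform the same two-level lookup over `keys: HashMap<KeyTypeId, HashMap<Vec<u8>, String>>`. A std-only sketch of that pattern (with `u32` standing in for `KeyTypeId`; not the real types):

```rust
use std::collections::HashMap;

// Sketch of the two-level lookup behind the `*_key_pair` helpers:
// key type id -> (raw public key bytes -> seed string).
type Store = HashMap<u32, HashMap<Vec<u8>, String>>;

fn seed_for(store: &Store, key_type: u32, public: &[u8]) -> Option<String> {
    // `and_then` chains the two map lookups, yielding None if either misses.
    store.get(&key_type).and_then(|inner| inner.get(public).cloned())
}

fn main() {
    let mut store = Store::new();
    store.entry(42).or_default().insert(vec![1, 2, 3], "//Alice".to_string());
    assert_eq!(seed_for(&store, 42, &[1, 2, 3]), Some("//Alice".to_string()));
    assert_eq!(seed_for(&store, 42, &[9]), None);
}
```

The inner `get(public)` works on a `&[u8]` thanks to `Vec<u8>: Borrow<[u8]>`, which is why the helpers can pass `pub_key.as_slice()` directly.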
#[async_trait]
impl CryptoStore for KeyStore {
async fn keys(&self, id: KeyTypeId) -> Result<Vec<CryptoTypePublicPair>, Error> {
SyncCryptoStore::keys(self, id)
}
async fn sr25519_public_keys(&self, id: KeyTypeId) -> Vec<sr25519::Public> {
SyncCryptoStore::sr25519_public_keys(self, id)
}
async fn sr25519_generate_new(
&self,
id: KeyTypeId,
seed: Option<&str>,
) -> Result<sr25519::Public, Error> {
SyncCryptoStore::sr25519_generate_new(self, id, seed)
}
async fn ed25519_public_keys(&self, id: KeyTypeId) -> Vec<ed25519::Public> {
SyncCryptoStore::ed25519_public_keys(self, id)
}
async fn ed25519_generate_new(
&self,
id: KeyTypeId,
seed: Option<&str>,
) -> Result<ed25519::Public, Error> {
SyncCryptoStore::ed25519_generate_new(self, id, seed)
}
async fn ecdsa_public_keys(&self, id: KeyTypeId) -> Vec<ecdsa::Public> {
SyncCryptoStore::ecdsa_public_keys(self, id)
}
async fn ecdsa_generate_new(
&self,
id: KeyTypeId,
seed: Option<&str>,
) -> Result<ecdsa::Public, Error> {
SyncCryptoStore::ecdsa_generate_new(self, id, seed)
}
async fn insert_unknown(&self, id: KeyTypeId, suri: &str, public: &[u8]) -> Result<(), ()> {
SyncCryptoStore::insert_unknown(self, id, suri, public)
}
async fn has_keys(&self, public_keys: &[(Vec<u8>, KeyTypeId)]) -> bool {
SyncCryptoStore::has_keys(self, public_keys)
}
async fn supported_keys(
&self,
id: KeyTypeId,
keys: Vec<CryptoTypePublicPair>,
) -> std::result::Result<Vec<CryptoTypePublicPair>, Error> {
SyncCryptoStore::supported_keys(self, id, keys)
}
async fn sign_with(
&self,
id: KeyTypeId,
key: &CryptoTypePublicPair,
msg: &[u8],
) -> Result<Vec<u8>, Error> {
SyncCryptoStore::sign_with(self, id, key, msg)
}
async fn sr25519_vrf_sign(
&self,
key_type: KeyTypeId,
public: &sr25519::Public,
transcript_data: VRFTranscriptData,
) -> Result<VRFSignature, Error> {
SyncCryptoStore::sr25519_vrf_sign(self, key_type, public, transcript_data)
}
}
impl SyncCryptoStore for KeyStore {
fn keys(&self, id: KeyTypeId) -> Result<Vec<CryptoTypePublicPair>, Error> {
self.keys.read()
.get(&id)
.map(|map| {
Ok(map.keys()
.fold(Vec::new(), |mut v, k| {
v.push(CryptoTypePublicPair(sr25519::CRYPTO_ID, k.clone()));
v.push(CryptoTypePublicPair(ed25519::CRYPTO_ID, k.clone()));
v.push(CryptoTypePublicPair(ecdsa::CRYPTO_ID, k.clone()));
v
}))
})
.unwrap_or_else(|| Ok(vec![]))
}
fn sr25519_public_keys(&self, id: KeyTypeId) -> Vec<sr25519::Public> {
self.keys.read().get(&id)
.map(|keys|
keys.values()
.map(|s| sr25519::Pair::from_string(s, None).expect("`sr25519` seed slice is valid"))
.map(|p| p.public())
.collect()
)
.unwrap_or_default()
}
fn sr25519_generate_new(
&self,
id: KeyTypeId,
seed: Option<&str>,
) -> Result<sr25519::Public, Error> {
match seed {
Some(seed) => {
let pair = sr25519::Pair::from_string(seed, None)
.map_err(|_| Error::ValidationError("Generates an `sr25519` pair.".to_owned()))?;
self.keys.write().entry(id).or_default().insert(pair.public().to_raw_vec(), seed.into());
Ok(pair.public())
},
None => {
let (pair, phrase, _) = sr25519::Pair::generate_with_phrase(None);
self.keys.write().entry(id).or_default().insert(pair.public().to_raw_vec(), phrase);
Ok(pair.public())
}
}
}
fn ed25519_public_keys(&self, id: KeyTypeId) -> Vec<ed25519::Public> {
self.keys.read().get(&id)
.map(|keys|
keys.values()
.map(|s| ed25519::Pair::from_string(s, None).expect("`ed25519` seed slice is valid"))
.map(|p| p.public())
.collect()
)
.unwrap_or_default()
}
fn ed25519_generate_new(
&self,
id: KeyTypeId,
seed: Option<&str>,
) -> Result<ed25519::Public, Error> {
match seed {
Some(seed) => {
let pair = ed25519::Pair::from_string(seed, None)
.map_err(|_| Error::ValidationError("Generates an `ed25519` pair.".to_owned()))?;
self.keys.write().entry(id).or_default().insert(pair.public().to_raw_vec(), seed.into());
Ok(pair.public())
},
None => {
let (pair, phrase, _) = ed25519::Pair::generate_with_phrase(None);
self.keys.write().entry(id).or_default().insert(pair.public().to_raw_vec(), phrase);
Ok(pair.public())
}
}
}
fn ecdsa_public_keys(&self, id: KeyTypeId) -> Vec<ecdsa::Public> {
self.keys.read().get(&id)
.map(|keys|
keys.values()
.map(|s| ecdsa::Pair::from_string(s, None).expect("`ecdsa` seed slice is valid"))
.map(|p| p.public())
.collect()
)
.unwrap_or_default()
}
fn ecdsa_generate_new(
&self,
id: KeyTypeId,
seed: Option<&str>,
) -> Result<ecdsa::Public, Error> {
match seed {
Some(seed) => {
let pair = ecdsa::Pair::from_string(seed, None)
.map_err(|_| Error::ValidationError("Generates an `ecdsa` pair.".to_owned()))?;
self.keys.write().entry(id).or_default().insert(pair.public().to_raw_vec(), seed.into());
Ok(pair.public())
},
None => {
let (pair, phrase, _) = ecdsa::Pair::generate_with_phrase(None);
self.keys.write().entry(id).or_default().insert(pair.public().to_raw_vec(), phrase);
Ok(pair.public())
}
}
}
fn insert_unknown(&self, id: KeyTypeId, suri: &str, public: &[u8]) -> Result<(), ()> {
self.keys.write().entry(id).or_default().insert(public.to_owned(), suri.to_string());
Ok(())
}
fn has_keys(&self, public_keys: &[(Vec<u8>, KeyTypeId)]) -> bool {
public_keys.iter().all(|(k, t)| self.keys.read().get(&t).and_then(|s| s.get(k)).is_some())
}
fn supported_keys(
&self,
id: KeyTypeId,
keys: Vec<CryptoTypePublicPair>,
) -> std::result::Result<Vec<CryptoTypePublicPair>, Error> {
let provided_keys = keys.into_iter().collect::<HashSet<_>>();
let all_keys = SyncCryptoStore::keys(self, id)?.into_iter().collect::<HashSet<_>>();
Ok(provided_keys.intersection(&all_keys).cloned().collect())
}
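`supported_keys` filters the caller's keys by collecting both the provided and the known keys into `HashSet`s and taking their intersection. The same trick in isolation (using `u32` in place of `CryptoTypePublicPair`):

```rust
use std::collections::HashSet;

// The set-intersection trick used by `supported_keys`: collect both sides
// into HashSets and keep only the keys present in each.
fn intersect(provided: Vec<u32>, known: Vec<u32>) -> Vec<u32> {
    let provided: HashSet<u32> = provided.into_iter().collect();
    let known: HashSet<u32> = known.into_iter().collect();
    let mut out: Vec<u32> = provided.intersection(&known).cloned().collect();
    out.sort(); // HashSet iteration order is unspecified
    out
}

fn main() {
    assert_eq!(intersect(vec![3, 1, 2], vec![2, 3, 4]), vec![2, 3]);
}
```

One consequence worth noting: because the result comes out of a `HashSet`, the keystore implementation above does not preserve the caller's key ordering.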
fn sign_with(
&self,
id: KeyTypeId,
key: &CryptoTypePublicPair,
msg: &[u8],
) -> Result<Vec<u8>, Error> {
use codec::Encode;
match key.0 {
ed25519::CRYPTO_ID => {
let key_pair: ed25519::Pair = self
.ed25519_key_pair(id, &ed25519::Public::from_slice(key.1.as_slice()))
.ok_or_else(|| Error::PairNotFound("ed25519".to_owned()))?;
return Ok(key_pair.sign(msg).encode());
}
sr25519::CRYPTO_ID => {
let key_pair: sr25519::Pair = self
.sr25519_key_pair(id, &sr25519::Public::from_slice(key.1.as_slice()))
.ok_or_else(|| Error::PairNotFound("sr25519".to_owned()))?;
return Ok(key_pair.sign(msg).encode());
}
ecdsa::CRYPTO_ID => {
let key_pair: ecdsa::Pair = self
.ecdsa_key_pair(id, &ecdsa::Public::from_slice(key.1.as_slice()))
.ok_or_else(|| Error::PairNotFound("ecdsa".to_owned()))?;
return Ok(key_pair.sign(msg).encode());
}
_ => Err(Error::KeyNotSupported(id))
}
}
fn sr25519_vrf_sign(
&self,
key_type: KeyTypeId,
public: &sr25519::Public,
transcript_data: VRFTranscriptData,
) -> Result<VRFSignature, Error> {
let transcript = make_transcript(transcript_data);
let pair = self.sr25519_key_pair(key_type, public)
.ok_or_else(|| Error::PairNotFound("Not found".to_owned()))?;
let (inout, proof, _) = pair.as_ref().vrf_sign(transcript);
Ok(VRFSignature {
output: inout.to_output(),
proof,
})
}
}
impl Into<SyncCryptoStorePtr> for KeyStore {
fn into(self) -> SyncCryptoStorePtr {
Arc::new(self)
}
}
impl Into<Arc<dyn CryptoStore>> for KeyStore {
fn into(self) -> Arc<dyn CryptoStore> {
Arc::new(self)
}
}
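The two `Into` impls above let a concrete `KeyStore` be handed out as either trait-object pointer (`SyncCryptoStorePtr` is just `Arc<dyn SyncCryptoStore>`). A self-contained sketch of the same conversion pattern, with a hypothetical `Signer` trait standing in for the keystore traits:

```rust
use std::sync::Arc;

trait Signer {
    fn name(&self) -> &'static str;
}

struct TestStore;

impl Signer for TestStore {
    fn name(&self) -> &'static str {
        "test"
    }
}

// Mirrors `impl Into<SyncCryptoStorePtr> for KeyStore`: hand the concrete
// store out behind a trait-object pointer. Implementing `From` on the
// target type gives `Into` for free and is the idiomatic direction.
impl From<TestStore> for Arc<dyn Signer> {
    fn from(store: TestStore) -> Self {
        Arc::new(store)
    }
}

fn main() {
    let ptr: Arc<dyn Signer> = TestStore.into();
    assert_eq!(ptr.name(), "test");
}
```

Writing `impl From<KeyStore> for SyncCryptoStorePtr` instead of `impl Into<...> for KeyStore` would be the more idiomatic spelling, since `Into` is derived automatically from `From`.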
#[cfg(test)]
mod tests {
use super::*;
use sp_core::{sr25519, testing::{ED25519, SR25519}};
use crate::{SyncCryptoStore, vrf::VRFTranscriptValue};
#[test]
fn store_key_and_extract() {
let store = KeyStore::new();
let public = SyncCryptoStore::ed25519_generate_new(&store, ED25519, None)
.expect("Generates key");
let public_keys = SyncCryptoStore::keys(&store, ED25519).unwrap();
assert!(public_keys.contains(&public.into()));
}
#[test]
fn store_unknown_and_extract_it() {
let store = KeyStore::new();
let secret_uri = "//Alice";
let key_pair = sr25519::Pair::from_string(secret_uri, None).expect("Generates key pair");
SyncCryptoStore::insert_unknown(
&store,
SR25519,
secret_uri,
key_pair.public().as_ref(),
).expect("Inserts unknown key");
let public_keys = SyncCryptoStore::keys(&store, SR25519).unwrap();
assert!(public_keys.contains(&key_pair.public().into()));
}
#[test]
fn vrf_sign() {
let store = KeyStore::new();
let secret_uri = "//Alice";
let key_pair = sr25519::Pair::from_string(secret_uri, None).expect("Generates key pair");
let transcript_data = VRFTranscriptData {
label: b"Test",
items: vec![
("one", VRFTranscriptValue::U64(1)),
("two", VRFTranscriptValue::U64(2)),
("three", VRFTranscriptValue::Bytes("test".as_bytes().to_vec())),
]
};
let result = SyncCryptoStore::sr25519_vrf_sign(
&store,
SR25519,
&key_pair.public(),
transcript_data.clone(),
);
assert!(result.is_err());
SyncCryptoStore::insert_unknown(
&store,
SR25519,
secret_uri,
key_pair.public().as_ref(),
).expect("Inserts unknown key");
let result = SyncCryptoStore::sr25519_vrf_sign(
&store,
SR25519,
&key_pair.public(),
transcript_data,
);
assert!(result.is_ok());
}
}
@@ -23,19 +23,19 @@ use schnorrkel::vrf::{VRFOutput, VRFProof};
/// An enum whose variants represent possible
/// accepted values to construct the VRF transcript
#[derive(Clone, Encode)]
-pub enum VRFTranscriptValue<'a> {
+pub enum VRFTranscriptValue {
/// Value is an array of bytes
-Bytes(&'a [u8]),
+Bytes(Vec<u8>),
/// Value is a u64 integer
U64(u64),
}
/// VRF Transcript data
#[derive(Clone, Encode)]
-pub struct VRFTranscriptData<'a> {
+pub struct VRFTranscriptData {
/// The transcript's label
pub label: &'static [u8],
/// Additional data to be registered into the transcript
-pub items: Vec<(&'static str, VRFTranscriptValue<'a>)>,
+pub items: Vec<(&'static str, VRFTranscriptValue)>,
}
/// VRF signature data
pub struct VRFSignature {
@@ -84,7 +84,7 @@ mod tests {
label: b"My label",
items: vec![
("one", VRFTranscriptValue::U64(1)),
-("two", VRFTranscriptValue::Bytes("test".as_bytes())),
+("two", VRFTranscriptValue::Bytes("test".as_bytes().to_vec())),
],
});
let test = |t: Transcript| -> [u8; 16] {
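The hunks above drop the `'a` lifetime from `VRFTranscriptValue` and `VRFTranscriptData`, switching `Bytes(&'a [u8])` to owned `Bytes(Vec<u8>)`. Presumably this is what lets transcript data satisfy the `'static` bounds of the async keystore: an owned value can be moved into another task or thread, while a borrowed one stays tied to its source buffer. A std-only sketch of the difference (the enum here is a simplified stand-in, not the real type):

```rust
// With owned bytes (`Vec<u8>`) the value is 'static and can be moved into
// another thread or an async task; a borrowed `&'a [u8]` variant would tie
// it to the lifetime of its source buffer and fail the 'static bound below.
#[derive(Clone, Debug, PartialEq)]
enum TranscriptValue {
    Bytes(Vec<u8>),
    U64(u64),
}

fn send_across_thread(v: TranscriptValue) -> TranscriptValue {
    // `thread::spawn` requires its closure (and captures) to be 'static.
    std::thread::spawn(move || v).join().unwrap()
}

fn main() {
    let v = TranscriptValue::Bytes(b"test".to_vec());
    assert_eq!(send_across_thread(v.clone()), v);
    assert_eq!(send_across_thread(TranscriptValue::U64(7)), TranscriptValue::U64(7));
}
```

The cost is a copy at each construction site (`"test".as_bytes().to_vec()`), visible in the updated test hunk above.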
@@ -28,6 +28,7 @@ sc-service = { version = "0.8.0", default-features = false, features = ["test-he
sp-blockchain = { version = "2.0.0", path = "../../primitives/blockchain" }
sp-consensus = { version = "0.8.0", path = "../../primitives/consensus/common" }
sp-core = { version = "2.0.0", path = "../../primitives/core" }
+sp-keystore = { version = "0.8.0", path = "../../primitives/keystore" }
sp-keyring = { version = "2.0.0", path = "../../primitives/keyring" }
sp-runtime = { version = "2.0.0", path = "../../primitives/runtime" }
sp-state-machine = { version = "0.8.0", path = "../../primitives/state-machine" }
@@ -33,7 +33,7 @@ pub use sp_keyring::{
ed25519::Keyring as Ed25519Keyring,
sr25519::Keyring as Sr25519Keyring,
};
-pub use sp_core::traits::BareCryptoStorePtr;
+pub use sp_keystore::{SyncCryptoStorePtr, SyncCryptoStore};
pub use sp_runtime::{Storage, StorageChild};
pub use sp_state_machine::ExecutionStrategy;
pub use sc_service::{RpcHandlers, RpcSession, client};
@@ -76,7 +76,7 @@ pub struct TestClientBuilder<Block: BlockT, Executor, Backend, G: GenesisInit> {
child_storage_extension: HashMap<Vec<u8>, StorageChild>,
backend: Arc<Backend>,
_executor: std::marker::PhantomData<Executor>,
-keystore: Option<BareCryptoStorePtr>,
+keystore: Option<SyncCryptoStorePtr>,
fork_blocks: ForkBlocks<Block>,
bad_blocks: BadBlocks<Block>,
}
@@ -118,7 +118,7 @@ impl<Block: BlockT, Executor, Backend, G: GenesisInit> TestClientBuilder<Block,
}
/// Set the keystore that should be used by the externalities.
-pub fn set_keystore(mut self, keystore: BareCryptoStorePtr) -> Self {
+pub fn set_keystore(mut self, keystore: SyncCryptoStorePtr) -> Self {
self.keystore = Some(keystore);
self
}
@@ -216,7 +216,7 @@ impl<Block: BlockT, Executor, Backend, G: GenesisInit> TestClientBuilder<Block,
self.bad_blocks,
ExecutionExtensions::new(
self.execution_strategies,
-self.keystore.clone(),
+self.keystore,
),
None,
ClientConfig::default(),
@@ -20,6 +20,7 @@ sc-cli = { version = "0.8.0", path = "../../../client/cli" }
sc-client-db = { version = "0.8.0", path = "../../../client/db" }
sc-executor = { version = "0.8.0", path = "../../../client/executor" }
sp-externalities = { version = "0.8.0", path = "../../../primitives/externalities" }
+sp-keystore = { version = "0.8.0", path = "../../../primitives/keystore" }
sp-runtime = { version = "2.0.0", path = "../../../primitives/runtime" }
sp-state-machine = { version = "0.8.0", path = "../../../primitives/state-machine" }
structopt = "0.3.8"
@@ -15,6 +15,7 @@
// See the License for the specific language governing permissions and
// limitations under the License.
+use std::sync::Arc;
use crate::BenchmarkCmd;
use codec::{Decode, Encode};
use frame_benchmarking::{Analysis, BenchmarkBatch, BenchmarkSelector};
@@ -25,10 +26,10 @@ use sp_state_machine::StateMachine;
use sp_externalities::Extensions;
use sc_service::{Configuration, NativeExecutionDispatch};
use sp_runtime::traits::{Block as BlockT, Header as HeaderT, NumberFor};
-use sp_core::{
+use sp_core::offchain::{OffchainExt, testing::TestOffchainExt};
+use sp_keystore::{
+SyncCryptoStorePtr, KeystoreExt,
testing::KeyStore,
-traits::KeystoreExt,
-offchain::{OffchainExt, testing::TestOffchainExt},
};
use std::fmt::Debug;
@@ -65,7 +66,7 @@ impl BenchmarkCmd {
);
let mut extensions = Extensions::default();
-extensions.register(KeystoreExt(KeyStore::new()));
+extensions.register(KeystoreExt(Arc::new(KeyStore::new()) as SyncCryptoStorePtr));
let (offchain, _) = TestOffchainExt::new();
extensions.register(OffchainExt::new(offchain));