* Run RustFmt as part of the CI
* Format repo
* Run RustFmt before the default Travis build step
Apparently if you override `script` you also need to make
sure to `build` and `test` the code yourself.
* Format repo
* Update dependencies
Upgrades Substrate-based dependencies from v2.0.0 -> v2.0.0-alpha.1
and uses `jsonrpsee`'s new feature flags. The actual code hasn't
been updated yet, though, so this won't compile.
* Use `RawClient`s from `jsonrpsee`
* Update to use jsonrpsee's new API
* Hook up Ethereum Bridge Runtime, Relay, and Node Runtime
* Bump `parity-crypto` from v0.4 to v0.6
Fixes an error when trying to compile tests. This was caused by
`parity-crypto` v0.4's use of `parity-secp256k1` over `secp256k1`.
Using the Parity fork meant multiple versions of the same underlying
C library were being pulled in. `parity-crypto` v0.6 moved away from
this and relies only on `secp256k1`, thus fixing the issue.
* Copy node-template over from Substrate repo
Got the template at rev=6e6d06c33911
* Use dependencies from crates.io + stop renaming on import
* Remove template pallet
* Stop using crates.io dependencies
Instead they're going to be pinned at v2.0.0-alpha.2
at commit `2afecf81ee19b8a6edb364b419190ea47c4a4a31`
until something stable comes along.
* Remove LICENSE
* Change references of `node-template` to `bridge-node`
* Remove README
* Fix some missed node-template references
* Add WASM toolchain to CI
* Be more specific about nightly version to use
* Maybe don't tie to a specific nightly
* Use composite accounts
* Update to use lazy reaping
* Only use Development chain config
* Initial commit. CLI which parses RPC urls.
* Establish ws connections and make simple RPC requests.
* Complete bridge setup.
* Process subscription events.
* Ctrl-C handler.
* Write a bare-bones README and copy in design doc.
* Modularize code a little bit.
* Communicate with each chain in a separate task.
* Parse headers from RPC subscription notifications.
* Send (fake) extrinsics across bridge channels.
And now it's deadlocked.
* Fix deadlock.
* Clarify in README that this is not-in-progress.
* Move everything into a single folder
* Move Substrate relay into appropriate folder
* Get the Substrate Relay node compiling
* Update Cargo.lock
* Use new composite accounts from Substrate
* Remove specification document
It has been moved to the wiki on the GitHub repo.
* Update author + remove comments
* Use latest master for jsonrpsee
Required renaming some stuff (e.g. `Client` -> `RawClient`)
Co-authored-by: Jim Posen <jim.posen@gmail.com>
commit 265365920836bb1d286c9b48b1902a2de278fdd9
Author: Hernando Castano <castano.ha@gmail.com>
Date: Wed Jan 29 19:51:15 2020 -0500
Move hc-jp-bridge repo to different folder
commit 8271991e95320baba70bd1cb9c4234d0ffd5b638
Merge: 57d0811 304cbc5
Author: Hernando Castano <castano.ha@gmail.com>
Date: Wed Jan 29 19:36:41 2020 -0500
Merge branch 'hc-jp-bridge-module' of hc-jp-bridge-module
commit 304cbc5f02d003ffa5404c1c01e461e5b8539888
Author: Hernando Castano <HCastano@users.noreply.github.com>
Date: Wed Jan 29 00:38:27 2020 -0500
Update bridge pallet to work with the (almost) latest master (#4672)
* Update decl_error usage
* WIP: Update error handling to use DispatchResult
* Get module compiling with new error handling
* Make tests compile again
Main change was updating the usage of InMemoryBackend
* Move `sp-state-machine` into dev-dependencies
* Bump dependencies to v2.0.0
* Remove some stray comments
* Apply code review suggestion
commit 510cd6d96372688517496efa61773ea2839f8474
Author: Hernando Castano <HCastano@users.noreply.github.com>
Date: Tue Dec 17 12:52:51 2019 -0500
Move Bridge Pallet into FRAME (#4373)
* Move `bridge` crate into `frame` folder
* Make `bridge` pallet compile after `the-big-reorg`
commit ab54e838ef75e6a3f68fd0944bf22598c10c552f
Author: Hernando Castano <castano.ha@gmail.com>
Date: Mon Nov 11 21:56:40 2019 +0100
Use new StorageProof type from #3834
commit 8fc8911fd1b4acc2274c6863fb3dba91b30c90af
Author: Hernando Castano <HCastano@users.noreply.github.com>
Date: Tue Nov 5 00:50:34 2019 +0100
Verify Ancestry between Headers (#3963)
* Create module for checking ancestry proofs
* Use Vec of Headers instead of a HashMap
* Move the ancestry verification into the lib.rs file
* Change the proof format to exclude `child` and `ancestor` headers
* Add a testing function for building header chains
* Rename AncestorNotFound error to InvalidAncestryProof
* Use ancestor hash instead of header when verifying ancestry
* Clean up some stuff missed in the merge
commit dbe85738b68358b790cf927b34a804b965a88f96
Author: Hernando Castano <HCastano@users.noreply.github.com>
Date: Fri Nov 1 15:41:58 2019 +0100
Check given Grandpa validator set against set found in storage (#3915)
* Make StorageProofChecker happy
* Update some tests
* Check given validator set against set found in storage
* Use Finality Grandpa's Authority Id and Weight
* Add better error handling
* Use error type from decl_error! macro
commit 31b09216603d3e9c21144ce8c0b6bf59307a4f97
Author: Hernando Castano <HCastano@users.noreply.github.com>
Date: Wed Oct 23 14:55:37 2019 +0200
Make tests work after the changes introduced in #3793 (#3874)
* Make tests work after the changes introduced in #3793
* Remove unnecessary import
commit bce6d804aa86504599ff912387295c58f846cbf3
Author: Jim Posen <jim.posen@gmail.com>
Date: Thu Oct 10 12:18:58 2019 +0200
Logic for checking Substrate proofs from within runtime module. (#3783)
commit a7013e94b6c772c1d45a7cacbb445f73f6554fca
Author: Hernando Castano <castano.ha@gmail.com>
Date: Fri Oct 4 15:21:00 2019 +0300
Allow tracking of multiple bridges
commit 3cf648242d631e32bd553a67df54bf5a48912839
Author: Hernando Castano <castano.ha@gmail.com>
Date: Tue Oct 1 14:55:04 2019 +0200
Add BridgeId => Bridge mapping
commit 001c74c45072213e01857d0a2454379b447c5a76
Author: Hernando Castano <castano.ha@gmail.com>
Date: Tue Oct 1 11:10:19 2019 +0200
Get the mock runtime for tests set up
commit 38443a1e8b424ed2f148eb95121d009f730e3b5a
Author: Hernando Castano <castano.ha@gmail.com>
Date: Fri Sep 27 14:52:53 2019 +0200
Clean up some warnings
commit bdc3b01401e89c7111f8bf71f84c50750d25089f
Author: Hernando Castano <castano.ha@gmail.com>
Date: Thu Sep 26 16:41:01 2019 +0200
Add more skeleton code
commit 26995efbf4bac2842eb2822322f7ad3c3e88feb8
Author: Hernando Castano <castano.ha@gmail.com>
Date: Wed Sep 25 15:16:57 2019 +0200
Create `bridge` module skeleton
The Cumulus test-parachain node and test runtime were still using relay
chain consensus and 12s block times. With async backing around the corner
on the major chains, we should switch our tests too.
This is also needed to properly test the changes coming to collators in #3168.
### Changes Overview
- Followed the [migration
guide](https://wiki.polkadot.network/docs/maintain-guides-async-backing)
for async backing for the cumulus-test-runtime
- Adjusted the cumulus-test-service to use the correct import-queue,
lookahead collator, etc.
- The block validation function now uses the Aura Ext Executor so that
the seal of the block is validated
- The previous point requires that we seal the block before calling into
`validate_block`; I introduced a helper function for that (a toy sketch
follows this list)
- Test client adjusted to provide a slot to the relay chain proof and
the Aura pre-digest
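To make the "seal before validating" requirement concrete, here is a self-contained toy sketch. None of these types or functions are the actual Cumulus or test-service APIs; they only illustrate why a helper that appends the seal is needed before the validation step runs, now that the seal itself is checked.

```rust
// Toy stand-ins for a parachain block and its digest items; purely illustrative.
#[derive(Clone, Debug, PartialEq)]
enum DigestItem {
    AuraPreDigest { slot: u64 },
    Seal(Vec<u8>),
}

#[derive(Clone, Debug)]
struct Block {
    digests: Vec<DigestItem>,
}

// Hypothetical helper mirroring "seal the block before validating it":
// append the author's seal (e.g. a signature) as the final digest item.
fn seal_block(mut block: Block, signature: Vec<u8>) -> Block {
    block.digests.push(DigestItem::Seal(signature));
    block
}

// Stand-in for `validate_block` with seal checking enabled: an unsealed block fails.
fn validate_block(block: &Block) -> Result<(), &'static str> {
    match block.digests.last() {
        Some(DigestItem::Seal(_)) => Ok(()),
        _ => Err("block is missing its seal"),
    }
}

fn main() {
    let unsealed = Block { digests: vec![DigestItem::AuraPreDigest { slot: 7 }] };
    assert!(validate_block(&unsealed).is_err());

    // Seal first, then validate: this is the ordering the helper enforces.
    let sealed = seal_block(unsealed, vec![0u8; 64]);
    assert!(validate_block(&sealed).is_ok());
}
```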
Added a check for whether forklift already exists in the CI image, as the
forklift binary is now bundled with `ci-unified`.
This is a temporary check for the transition period.
This PR brings the fix
https://github.com/paritytech/substrate/pull/13396 to polkadot-sdk.
In the past, due to an insufficient inbound slot count on Polkadot &
Kusama, this fix led to a low peer count. The situation has since
improved, after the default ratio between `--in-peers` &
`--out-peers` was changed.
Nevertheless, it's expected that the reported total peer count with this
fix is going to be lower than without it. This should be seen as the
correct number of working connections being reported, as opposed to also
counting already-closed connections, and not as a lower number of working
connections with peers.
This PR also removes the peer eviction mechanism, as closed substream
detection is a more granular way of detecting peers that stopped syncing
with us.
The burn-in has already been performed as part of testing these changes
in https://github.com/paritytech/polkadot-sdk/pull/3426.
---------
Co-authored-by: Aaro Altonen <a.altonen@hotmail.com>
This is a tiny PR to increase the time a peer remains banned.
A peer is banned when its reputation drops below a threshold.
Every second, the peer's reputation is exponentially decayed towards
zero.
For the previous setup:
- decaying to zero from (i32::MAX or i32::MIN) would take 948 seconds
(15 minutes 48 seconds)
- from i32::MIN to escaping the banned threshold would take 10 seconds
This means reputation was being decayed a bit too aggressively, and
misbehaving peers could misbehave again after just 10 seconds.
Another side effect of this is that we have encountered multiple
warnings caused by a few misbehaving peers.
In the new setup:
- decaying to zero from (i32::MAX or i32::MIN) would take 3544 seconds
(59 minutes)
- from i32::MIN to escaping the banned threshold would take ~69 seconds
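For intuition, here is a minimal, self-contained simulation of this arithmetic. The 2% per-second decay step and the ban threshold of 82% of `i32::MIN` are assumptions chosen to reproduce the previous setup's ~10 second figure, and the slower 0.3% rate is illustrative only; neither is taken from the actual `sc-network` constants.

```rust
// Simulate per-second exponential decay of a peer's reputation towards zero and
// count how long a fully banned peer (reputation == i32::MIN) stays below the
// ban threshold. Constants are illustrative assumptions, not sc-network values.
fn seconds_to_escape_ban(decay_per_mille: i64) -> u32 {
    // Assumed ban threshold: 82% of i32::MIN.
    let banned_threshold: i64 = 82 * (i32::MIN as i64) / 100;
    let mut reputation: i64 = i32::MIN as i64;
    let mut seconds = 0;
    while reputation < banned_threshold {
        // Each second, move towards zero by the given fraction (at least by 1).
        let step = (reputation.abs() * decay_per_mille / 1000).max(1);
        reputation += step;
        seconds += 1;
    }
    seconds
}

fn main() {
    // A 2% per-second decay escapes the ban threshold after roughly 10 seconds,
    // matching the "previous setup" figure quoted above.
    println!("2% decay: banned for ~{}s", seconds_to_escape_ban(20));
    // A slower ~0.3% per-second decay stretches this to roughly a minute,
    // in the same ballpark as the ~69 seconds of the new setup.
    println!("0.3% decay: banned for ~{}s", seconds_to_escape_ban(3));
}
```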
This is a follow-up of:
- https://github.com/paritytech/polkadot-sdk/pull/4000.
### Testing Done
- Created a misbehaving client with
[subp2p-explorer](https://github.com/lexnv/subp2p-explorer); the client
is banned for approximately 69 seconds until it is allowed to connect again.
cc @paritytech/networking
---------
Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>
# Description
- What does this PR do?
1. Upgrades `trie-db`'s version to the latest release. This release
includes, among others, an implementation of `DoubleEndedIterator` for
the `TrieDB` struct, allowing iteration both backwards and forwards
over the leaves of a trie.
2. Upgrades `trie-bench` to `0.39.0` for compatibility.
3. Upgrades `criterion` to `0.5.1` for compatibility.
- Why are these changes needed?
Besides keeping up with the upgrade of `trie-db`, this specifically adds
the ability to iterate backwards over the leaves of a trie with
`sp-trie`. In a project we're currently working on, this comes in very
handy for verifying a Merkle proof that is the response to a challenge. The
challenge is a random hash that (most likely) will not be an existing
leaf in the trie, so the challenged user has to provide a Merkle proof
of the previous and next existing leaves in the trie that surround the
random challenged hash.
Without double-ended iterators, we're forced to iterate until we
find the first existing leaf, like so:
```rust
// ************* VERIFIER (RUNTIME) *************

// Verify proof. This generates a partial trie based on the proof and
// checks that the root hash matches the `expected_root`.
let (memdb, root) = proof.to_memory_db(Some(&root)).unwrap();
let trie = TrieDBBuilder::<LayoutV1<RefHasher>>::new(&memdb, &root).build();

// Print all leaf node keys and values.
println!("\nPrinting leaf nodes of partial tree...");
for key in trie.key_iter().unwrap() {
    if key.is_ok() {
        println!("Leaf node key: {:?}", key.clone().unwrap());
        let val = trie.get(&key.unwrap());
        if val.is_ok() {
            println!("Leaf node value: {:?}", val.unwrap());
        } else {
            println!("Leaf node value: None");
        }
    }
}
println!("RECONSTRUCTED TRIE {:#?}", trie);

// Create an iterator over the leaf nodes.
let mut iter = trie.iter().unwrap();

// First element with a value should be the previous existing leaf to the challenged hash.
let mut prev_key = None;
for element in &mut iter {
    if element.is_ok() {
        let (key, _) = element.unwrap();
        prev_key = Some(key);
        break;
    }
}
assert!(prev_key.is_some());

// Since hashes are `Vec<u8>` ordered in big-endian, we can compare them directly.
assert!(prev_key.unwrap() <= challenge_hash.to_vec());

// The next element should exist (meaning there is no other existing leaf between the
// previous and next leaf) and it should be greater than the challenged hash.
let next_key = iter.next().unwrap().unwrap().0;
assert!(next_key >= challenge_hash.to_vec());
```
With double-ended iterators, we can avoid that, like this:
```rust
// ************* VERIFIER (RUNTIME) *************

// Verify proof. This generates a partial trie based on the proof and
// checks that the root hash matches the `expected_root`.
let (memdb, root) = proof.to_memory_db(Some(&root)).unwrap();
let trie = TrieDBBuilder::<LayoutV1<RefHasher>>::new(&memdb, &root).build();

// Print all leaf node keys and values.
println!("\nPrinting leaf nodes of partial tree...");
for key in trie.key_iter().unwrap() {
    if key.is_ok() {
        println!("Leaf node key: {:?}", key.clone().unwrap());
        let val = trie.get(&key.unwrap());
        if val.is_ok() {
            println!("Leaf node value: {:?}", val.unwrap());
        } else {
            println!("Leaf node value: None");
        }
    }
}
// println!("RECONSTRUCTED TRIE {:#?}", trie);
println!("\nChallenged key: {:?}", challenge_hash);

// Create an iterator over the leaf nodes.
let mut double_ended_iter = trie.into_double_ended_iter().unwrap();

// First element with a value should be the previous existing leaf to the challenged hash.
double_ended_iter.seek(&challenge_hash.to_vec()).unwrap();
let next_key = double_ended_iter.next_back().unwrap().unwrap().0;
let prev_key = double_ended_iter.next_back().unwrap().unwrap().0;

// Since hashes are `Vec<u8>` ordered in big-endian, we can compare them directly.
println!("Prev key: {:?}", prev_key);
assert!(prev_key <= challenge_hash.to_vec());
println!("Next key: {:?}", next_key);
assert!(next_key >= challenge_hash.to_vec());
```
- How were these changes implemented and what do they affect?
All that is needed for this functionality to be exposed is changing the
`trie-db` version number in all applicable `Cargo.toml`s and
re-exporting some additional structs from `trie-db` in `sp-trie`.
---------
Co-authored-by: Bastian Köcher <git@kchr.de>
Update the Contracts API to use `WeightMeter` rather than taking a mutable
weight or returning a tuple with the weight consumed, as this simplifies
the code and makes it easier to reason about.
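As a rough illustration of the API shape, here is a sketch under assumptions rather than the actual `pallet-contracts` code: `do_host_call` and its cost are made up, and only the general `WeightMeter` pattern (charging a shared meter instead of mutating a `Weight` or returning the consumed amount) is taken from the PR description.

```rust
use frame_support::weights::{Weight, WeightMeter};

// Illustrative host call: it charges its (made-up) cost against the caller's
// meter up front and fails if the remaining budget is insufficient.
fn do_host_call(meter: &mut WeightMeter) -> Result<(), &'static str> {
    meter
        .try_consume(Weight::from_parts(10_000, 0))
        .map_err(|_| "out of weight")?;
    // ... do the actual work here ...
    Ok(())
}

fn caller() {
    // The caller owns the budget; callees only draw from it.
    let mut meter = WeightMeter::with_limit(Weight::from_parts(1_000_000, 0));
    let _ = do_host_call(&mut meter);
    // The total consumed so far is tracked in one place instead of being
    // threaded back through return values.
    let _spent = meter.consumed();
}
```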
---------
Co-authored-by: Alexander Theißen <alex.theissen@me.com>