Get rid of public `ChainSync::..._requests()` functions and return all
requests as actions.
---------
Co-authored-by: Sebastian Kunert <skunert49@gmail.com>
All `ChainSync` actions that `SyncingEngine` should perform are unified
under one `ChainSyncAction`. Processing of these actions is moved into a
single place after `select!` in `SyncingEngine::run`, instead of the
multiple places where `ChainSync` methods used to be called.
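A minimal self-contained sketch of the pattern (type and variant names are illustrative, not the exact polkadot-sdk API): the sync state machine only queues actions, and the engine drains and executes them in one place.
```rust
/// Illustrative action type; the real `ChainSyncAction` covers block and
/// justification requests, peer disconnects, block imports, etc.
#[derive(Debug)]
enum ChainSyncAction {
    SendBlockRequest { peer_id: u64, request: String },
    DropPeer(u64),
    ImportBlocks(Vec<Vec<u8>>),
}

#[derive(Default)]
struct ChainSync {
    pending_actions: Vec<ChainSyncAction>,
}

impl ChainSync {
    /// State transitions only queue actions; no side effects happen here.
    fn on_block_response(&mut self, peer_id: u64, blocks: Vec<Vec<u8>>) {
        self.pending_actions.push(ChainSyncAction::ImportBlocks(blocks));
        self.pending_actions.push(ChainSyncAction::SendBlockRequest {
            peer_id,
            request: "next block range".into(),
        });
    }

    fn take_actions(&mut self) -> Vec<ChainSyncAction> {
        std::mem::take(&mut self.pending_actions)
    }
}

fn main() {
    let mut sync = ChainSync::default();
    sync.on_block_response(7, vec![vec![0u8; 32]]);

    // The single processing point, analogous to the code after `select!`
    // in `SyncingEngine::run`.
    for action in sync.take_actions() {
        match action {
            ChainSyncAction::SendBlockRequest { peer_id, request } => {
                println!("send {request:?} to peer {peer_id}")
            }
            ChainSyncAction::DropPeer(peer_id) => println!("drop peer {peer_id}"),
            ChainSyncAction::ImportBlocks(blocks) => {
                println!("import {} block(s)", blocks.len())
            }
        }
    }
}
```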
The `BlockBuilderProvider` was a trait that was defined in
`sc-block-builder`. The trait was implemented for `Client`. This
basically meant that you needed to import `sc-block-builder` anyway to
have access to the block builder, so this trait was not providing any
real value. This pull request removes said trait. Instead of the
trait it introduces a builder for creating a `BlockBuilder`. The builder
currently has the quite fabulous name `BlockBuilderBuilder` (I'm open to
any better name 😅). The rest of the pull request is about
replacing the old trait with the new builder.
# Downstream code changes
If you used `new_block` or `new_block_at` before, you now need to switch
over to the new `BlockBuilderBuilder` pattern:
```rust
// `new` requires a type that implements `CallApiAt`.
let mut block_builder = BlockBuilderBuilder::new(client)
// Then you need to specify the hash of the parent block the block will be built on top of
.on_parent_block(at)
// The block builder also needs the block number of the parent block.
// Here it is fetched from the given `client` using the `HeaderBackend`
// However, there also exists `with_parent_block_number` for directly passing the number
.fetch_parent_block_number(client)
.unwrap()
// Enable proof recording if required. This call is optional.
.enable_proof_recording()
// Pass the digests. This call is optional.
.with_inherent_digests(digests)
.build()
.expect("Creates new block builder");
```
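If the parent block number is already known, the `HeaderBackend` lookup can be skipped by passing the number directly via `with_parent_block_number` (same builder as above; `number` here stands for the already-known parent number):
```rust
let block_builder = BlockBuilderBuilder::new(client)
    .on_parent_block(at)
    // Pass the parent block number directly instead of fetching it from the backend.
    .with_parent_block_number(number)
    .build()
    .expect("Creates new block builder");
```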
---------
Co-authored-by: Sebastian Kunert <skunert49@gmail.com>
Co-authored-by: command-bot <>
This PR moves syncing-related code from `sc-network-common` to
`sc-network-sync`.
Unfortunately, some parts are tightly integrated with networking, so
they were left in `sc-network-common` for now:
1. `SyncMode` in `common/src/sync.rs` (used in `NetworkConfiguration`).
2. `BlockAnnouncesHandshake`, `BlockRequest`, `BlockResponse`, etc. in
`common/src/sync/message.rs` (used in `src/protocol.rs` and
`src/protocol/message.rs`).
More substantial refactoring is needed to decouple syncing and
networking completely, including getting rid of the hardcoded sync
protocol.
## Release notes
Move syncing-related code from `sc-network-common` to `sc-network-sync`.
Delete `ChainSync` trait as it's never used (the only implementation is
accessed directly from `SyncingEngine` and exposes a lot of public
methods that are not part of the trait). Some new trait(s) for syncing
will likely be introduced as part of Sync 2.0 refactoring to represent
syncing strategies.
The change adds a test to show the failure scenario that caused #1812 to
be rolled back (more context:
https://github.com/paritytech/polkadot-sdk/issues/493#issuecomment-1772009924)
Summary of the scenario:
1. The node has finished downloading up to block 1000 from the peers, on
the canonical chain.
2. Peers are undergoing a re-org around this time. One of the peers has
switched to a non-canonical chain and announces block 1001 from that chain.
3. The node downloads block 1001 from that peer and tries to import it,
which fails (as we don't have the parent block 1000 from the other chain).
---------
Co-authored-by: Dmitry Markin <dmitry@markin.tech>
When retrieving the ready blocks, verify that the parent of the first
ready block is on chain. If the parent is not on chain, we are
downloading from a fork. In this case, keep downloading until we have a
parent on chain (common ancestor).
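A hypothetical sketch of that check (the types and names here are illustrative, not the actual implementation):
```rust
#[derive(Clone)]
struct ReadyBlock {
    hash: u64,
    parent_hash: u64,
}

/// Drain the ready queue only if its first block attaches to our chain.
/// Otherwise we are downloading from a fork and must keep downloading
/// until a common ancestor is on chain.
fn take_ready_blocks(
    queue: &mut Vec<ReadyBlock>,
    is_on_chain: impl Fn(u64) -> bool,
) -> Vec<ReadyBlock> {
    match queue.first() {
        Some(first) if is_on_chain(first.parent_hash) => queue.drain(..).collect(),
        // Parent not on chain yet: keep the blocks queued and continue
        // downloading ancestors of the fork.
        _ => Vec::new(),
    }
}
```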
Resolves https://github.com/paritytech/polkadot-sdk/issues/493.
---------
Co-authored-by: Aaro Altonen <48052676+altonen@users.noreply.github.com>
See #1345, <https://github.com/paritytech/substrate/pull/14207>.
This adds all the necessary mixnet components, and puts them together in
the "kitchen-sink" node/runtime. The components added are:
- A pallet (`frame/mixnet`). This is responsible for determining the
current mixnet session and phase, and the mixnodes to use in each
session. It provides a function that validators can call to register a
mixnode for the next session. The logic of this pallet is very similar
to that of the `im-online` pallet.
- A service (`client/mixnet`). This implements the core mixnet logic,
building on the `mixnet` crate. The service communicates with other
nodes using notifications sent over the "mixnet" protocol.
- An RPC interface. This currently only supports sending transactions
over the mixnet.
---------
Co-authored-by: David Emett <dave@sp4m.net>
Co-authored-by: Javier Viola <javier@parity.io>
Submit the outstanding PRs from the old repos (these were already
reviewed and approved before the repo reorg, but not yet submitted):
Main PR: https://github.com/paritytech/substrate/pull/14014
Companion PRs: https://github.com/paritytech/polkadot/pull/7134,
https://github.com/paritytech/cumulus/pull/2489
The changes in the PR:
1. ChainSync currently calls into the block request handler directly.
Instead, move the block request handler behind a trait. This allows new
protocols to be plugged into ChainSync.
2. BuildNetworkParams is changed so that custom relay protocol
implementations can be (optionally) passed in during network creation
time. If a custom protocol is not specified, it defaults to the existing
block handler.
3. BlockServer and BlockDownloader traits are introduced for the
protocol implementation. The existing block handler has been changed to
implement these traits (see the sketch after this list).
4. Other changes:
- [x] Make TxHash serializable. This is needed for exchanging the
serialized hash in the relay protocol messages.
- [x] Clean up types no longer used (OpaqueBlockRequest,
OpaqueBlockResponse).
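A simplified, hypothetical shape of the abstraction described in items 1-3 (the real traits are async and richer; the names below are illustrative):
```rust
// Illustrative request/response types; the real ones carry ranges,
// attributes, directions, etc.
struct BlockRequest {
    from: u64,
    count: u32,
}

struct BlockResponse {
    blocks: Vec<Vec<u8>>,
}

/// The abstraction ChainSync talks to instead of calling the block
/// request handler directly.
trait BlockDownloader {
    fn download_blocks(&self, peer_id: u64, request: BlockRequest) -> Result<BlockResponse, String>;
}

/// Default implementation backed by the existing block handler.
struct DefaultBlockDownloader;

impl BlockDownloader for DefaultBlockDownloader {
    fn download_blocks(&self, _peer_id: u64, request: BlockRequest) -> Result<BlockResponse, String> {
        Ok(BlockResponse {
            blocks: vec![vec![0u8; 32]; request.count as usize],
        })
    }
}

// ChainSync holds a trait object, so a custom relay protocol passed in
// through BuildNetworkParams at network creation time plugs in transparently.
struct ChainSync {
    downloader: Box<dyn BlockDownloader>,
}
```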
---------
Co-authored-by: Dmitry Markin <dmitry@markin.tech>
Co-authored-by: command-bot <>
* substrate: peer_store: log error on disconnecting because of reputation
Disconnecting and banning a peer because of negative reputation is
usually indicative of one of two things:
1. We've got a bug that forces disconnects.
2. We've got malicious peers that try to attack us.
In both cases I don't think we should hide this behind a trace log;
we should log errors so that these events are easy to notice and
debug/mitigate.
Signed-off-by: Alexandru Gheorghe <alexandru.gheorghe@parity.io>
* Move from error to warn
Signed-off-by: Alexandru Gheorghe <alexandru.gheorghe@parity.io>
---------
Signed-off-by: Alexandru Gheorghe <alexandru.gheorghe@parity.io>
* Make peer evictions less aggressive
The original implementation of peer eviction prioritized aliveness over
connection stability which made the peer count unstable for some users.
As this may cause discomfort or infrastructure alerts if stability is
tracked, adjust the eviction to be less aggressive by only evicting
peers when the node has fully stalled. This means the node may keep some
inactive peers that won't send any block announcements. These peers are
removed once the local node receives at least one block announcement from
any of its peers, as the inactivity of a substream is detected when a
notification is sent over it.
If the node doesn't send or receive any block announcements for 30 seconds,
it's considered stalled and evicts all of its peers, allowing
`ProtocolController` to accept and establish connections from new
peers.
* Update client/network/sync/src/engine.rs
Co-authored-by: Dmitry Markin <dmitry@markin.tech>
* Track last sent and received notifications simultaneously (sketched below)
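A hypothetical sketch of that stall detection (names and structure are made up for illustration):
```rust
use std::time::{Duration, Instant};

/// The eviction threshold described above.
const STALL_THRESHOLD: Duration = Duration::from_secs(30);

struct StallDetector {
    last_notification_sent: Instant,
    last_notification_received: Instant,
}

impl StallDetector {
    fn new() -> Self {
        let now = Instant::now();
        Self { last_notification_sent: now, last_notification_received: now }
    }

    fn on_notification_sent(&mut self) {
        self.last_notification_sent = Instant::now();
    }

    fn on_notification_received(&mut self) {
        self.last_notification_received = Instant::now();
    }

    /// The node is considered stalled only when it has neither sent nor
    /// received any block announcement within the threshold; only then
    /// are all peers evicted so `ProtocolController` can establish fresh
    /// connections.
    fn is_stalled(&self) -> bool {
        self.last_notification_sent.elapsed() > STALL_THRESHOLD
            && self.last_notification_received.elapsed() > STALL_THRESHOLD
    }
}
```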
---------
Co-authored-by: Dmitry Markin <dmitry@markin.tech>
Co-authored-by: parity-processbot <>
* update libp2p to 0.52.0
* proto name now must implement `AsRef<str>`
* update libp2p version everywhere
* ToSwarm, FromBehaviour, ToBehaviour
also LocalProtocolsChange and RemoteProtocolsChange
* new NetworkBehaviour invariants
* replace `Vec<u8>` with `StreamProtocol` (see the sketch at the end of this list)
* rename ConnectionHandlerEvent::Custom to NotifyBehaviour
* remove DialError & ListenError variants
also fix pending_events
* use connection_limits::Behaviour
See https://github.com/libp2p/rust-libp2p/pull/3885
* impl `From<void::Void>` for `BehaviourOut`
also use `Behaviour::with_codec`
* KademliaHandler no longer public
* fix StreamProtocol construction
* update libp2p-identity to 0.2.0
* remove non-existing methods from PollParameters
rename ConnectionHandlerUpgrErr to StreamUpgradeError
* `P2p` now contains `PeerId`, not `Multihash`
* use multihash-codetable crate
* update Cargo.lock
* reformat text
* comment out tests for now
* remove `.into()` from P2p
* confirm observed addr manually
See https://github.com/libp2p/rust-libp2p/blob/master/protocols/identify/CHANGELOG.md#0430
* remove SwarmEvent::Banned
since we're not using `ban_peer_id`, this can be safely removed.
we may want to introduce `libp2p::allow_block_list` module in the future.
* fix imports
* replace `libp2p` with smaller deps in network-gossip
* bring back tests
* finish rewriting tests
* uncomment handler tests
* Revert "uncomment handler tests"
This reverts commit 720a06815887f4e10767c62b58864a7ec3a48e50.
* add a fixme
* update Cargo.lock
* remove extra From
* make void uninhabited
* fix discovery test
* use autonat protocols
confirming external addresses manually is unsafe in open networks
* fix SyncNotificationsClogged variant
* only set server mode manually in tests
doubt that we need to set it on node since we're adding public addresses
* address @dmitry-markin comments
* remove autonat
* removed unused var
* fix EOL
* update smallvec and sha2
in attempt to compile polkadot
* bump k256
in attempt to build cumulus
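For example, protocol names that used to be plain `Vec<u8>` now have to be constructed as `StreamProtocol` values (a hedged sketch; the protocol names are made up):
```rust
use libp2p::StreamProtocol;

/// Static names can use `new`, which panics if the name doesn't start
/// with a forward slash.
fn static_protocol() -> StreamProtocol {
    StreamProtocol::new("/example/sync/1")
}

/// Names computed at runtime (e.g. derived from a genesis hash) must go
/// through the fallible owned constructor.
fn runtime_protocol(genesis_hash_hex: &str) -> Option<StreamProtocol> {
    StreamProtocol::try_from_owned(format!("/{genesis_hash_hex}/sync/1")).ok()
}
```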
---------
Co-authored-by: parity-processbot <>
* Accept only `--in-peers` many inbound full nodes in `SyncingEngine`
Due to full and light nodes being stored in the same set, it's possible
that `SyncingEngine` accepts more than `--in-peers` inbound full
nodes, which leaves some of its outbound slots unoccupied.
`ProtocolController` still tries to occupy these slots by opening
outbound substreams. As these substreams are accepted by the remote peer,
the connection is relayed to `SyncingEngine` which rejects the node
because it's already full. This in turn results in the substream being
inactive and the peer getting evicted.
Fixing this properly would require relocating the light peer slot
allocation away from `ProtocolController`, or alternatively moving the
entire substream validation there, both of which are epic refactorings and
not necessarily in line with other goals. As a temporary measure, verify
in `SyncingEngine` that it doesn't accept more than the specified number
of inbound full peers (see the sketch after this list).
* Fix tests
* Apply review comments
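A hypothetical sketch of that temporary guard (field and method names are made up):
```rust
/// Tracks inbound full peers against the `--in-peers` limit. Light peers
/// are accounted for elsewhere and don't occupy these slots.
struct InboundFullPeers {
    max_in_peers: usize,
    num_in_peers: usize,
}

impl InboundFullPeers {
    /// Called when a remote opens an inbound substream as a full node.
    fn try_accept(&mut self) -> bool {
        if self.num_in_peers < self.max_in_peers {
            self.num_in_peers += 1;
            true
        } else {
            // All inbound full slots are occupied: reject the substream so
            // the outbound slots stay available for `ProtocolController`.
            false
        }
    }

    fn on_disconnect(&mut self) {
        self.num_in_peers = self.num_in_peers.saturating_sub(1);
    }
}
```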
* frame-benchmarking-cli: Remove native dispatch requirement
No need for this, we can just use the `WasmExecutor` directly (see the sketch after this list).
* Fixes
* Pass benchmarking host functions
* Ensure we can pass custom host functions
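A hedged sketch of the resulting pattern, assuming `sc-executor`'s builder API (the exact host-function tuple is illustrative):
```rust
use sc_executor::WasmExecutor;

// Host functions are supplied as a type parameter, so the benchmarking
// host functions (or any custom ones) can be passed alongside the default
// substrate set.
type HostFunctions = (
    sp_io::SubstrateHostFunctions,
    frame_benchmarking::benchmarking::HostFunctions,
);

fn build_executor() -> WasmExecutor<HostFunctions> {
    WasmExecutor::builder().build()
}
```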
This improves the reporting of invalid boot nodes. First, each boot node will
only be reported as invalid once, not every time we try to connect to it.
Second, the node will only report addresses that we added at startup, not
addresses of the boot node that were learned from other nodes.
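A hypothetical sketch of that reporting logic (names are made up):
```rust
use std::collections::HashSet;

/// Report a boot node as invalid at most once, and only for addresses
/// that were part of the node's configuration at startup.
struct BootNodeReporter {
    configured: HashSet<String>,
    reported: HashSet<String>,
}

impl BootNodeReporter {
    fn maybe_report_invalid(&mut self, addr: &str) {
        // `insert` returns false if the address was already reported.
        if self.configured.contains(addr) && self.reported.insert(addr.to_owned()) {
            eprintln!("boot node {addr} is invalid (e.g. on a different chain)");
        }
    }
}
```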
Closes: https://github.com/paritytech/substrate/issues/13584
Closes: https://github.com/paritytech/polkadot/issues/7385