Integrate litep2p into Polkadot SDK (#2944)

[litep2p](https://github.com/altonen/litep2p) is a libp2p-compatible P2P
networking library. It supports all of the features of `rust-libp2p`
that are currently being utilized by Polkadot SDK.

Compared to `rust-libp2p`, `litep2p` has a quite different architecture,
which is why the new `litep2p` network backend can reuse only a little
of the existing code in `sc-network`. The design has mainly been
influenced by how we'd wish to structure our networking-related code in
Polkadot SDK: independent higher-level protocols communicating directly
with the network over links that support bidirectional backpressure. A
good example is the `NotificationHandle`/`RequestResponseHandle`
abstractions, which allow, e.g., `SyncingEngine` to communicate directly
with peers to announce/request blocks.

I've tried running `polkadot --network-backend litep2p` with a few
different peer configurations and there is a noticeable reduction in
networking CPU usage. Under high load (`--out-peers 200`), networking
CPU usage drops from ~110% to ~30% (80 percentage points), and under
normal load (`--out-peers 40`) from ~55% to ~18% (37 percentage points).

These should not be taken as final numbers because:

a) there is still low-hanging optimization fruit, such as enabling
[receive window
auto-tuning](https://github.com/libp2p/rust-yamux/pull/176), integrating
`Peerset` more closely with `litep2p`, and improving the memory usage of
the WebSocket transport
b) fixing bugs/instabilities that incorrectly cause `litep2p` to do less
work will increase the networking CPU usage
c) verification in a more diverse set of tests/conditions is needed

Nevertheless, these numbers should give an early estimate for CPU usage
of the new networking backend.

This PR consists of three separate changes:
* introduce a generic `PeerId` (wrapper around `Multihash`) so that we
don't have to use `NetworkService::PeerId` in every part of the code
that uses a `PeerId`
* introduce `NetworkBackend` trait, implement it for the libp2p network
stack and make Polkadot SDK generic over `NetworkBackend`
* implement `NetworkBackend` for litep2p

The new library should be considered experimental, which is why
`rust-libp2p` will remain the default option for the time being. This
PR currently depends on the master branch of `litep2p`, but I'll cut a
new release for the library once all review comments have been
addressed.

---------

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>
Co-authored-by: Dmitry Markin <dmitry@markin.tech>
Co-authored-by: Alexandru Vasile <60601340+lexnv@users.noreply.github.com>
Co-authored-by: Alexandru Vasile <alexandru.vasile@parity.io>
Author: Aaro Altonen
Committed: 2024-04-08 19:44:13 +03:00 (via GitHub)
Parent: 9543d31474
Commit: 80616f6d03
181 changed files with 11055 additions and 1862 deletions
@@ -26,9 +26,8 @@ use crate::{
 };
 use sc_network::{
 	config::ProtocolId,
-	request_responses::{
-		IncomingRequest, OutgoingResponse, ProtocolConfig as RequestResponseConfig,
-	},
+	request_responses::{IncomingRequest, OutgoingResponse},
+	NetworkBackend,
 };
 use sp_runtime::traits::Block as BlockT;
@@ -39,22 +38,26 @@ const MAX_RESPONSE_SIZE: u64 = 16 * 1024 * 1024;
 /// Incoming warp requests bounded queue size.
 const MAX_WARP_REQUEST_QUEUE: usize = 20;

-/// Generates a [`RequestResponseConfig`] for the grandpa warp sync request protocol, refusing
+/// Generates a `RequestResponseProtocolConfig` for the grandpa warp sync request protocol, refusing
 /// incoming requests.
-pub fn generate_request_response_config<Hash: AsRef<[u8]>>(
+pub fn generate_request_response_config<
+	Hash: AsRef<[u8]>,
+	B: BlockT,
+	N: NetworkBackend<B, <B as BlockT>::Hash>,
+>(
 	protocol_id: ProtocolId,
 	genesis_hash: Hash,
 	fork_id: Option<&str>,
-) -> RequestResponseConfig {
-	RequestResponseConfig {
-		name: generate_protocol_name(genesis_hash, fork_id).into(),
-		fallback_names: std::iter::once(generate_legacy_protocol_name(protocol_id).into())
-			.collect(),
-		max_request_size: 32,
-		max_response_size: MAX_RESPONSE_SIZE,
-		request_timeout: Duration::from_secs(10),
-		inbound_queue: None,
-	}
+	inbound_queue: async_channel::Sender<IncomingRequest>,
+) -> N::RequestResponseProtocolConfig {
+	N::request_response_config(
+		generate_protocol_name(genesis_hash, fork_id).into(),
+		std::iter::once(generate_legacy_protocol_name(protocol_id).into()).collect(),
+		32,
+		MAX_RESPONSE_SIZE,
+		Duration::from_secs(10),
+		Some(inbound_queue),
+	)
 }

 /// Generate the grandpa warp sync protocol name from the genesis hash and fork id.
@@ -80,17 +83,20 @@ pub struct RequestHandler<TBlock: BlockT> {
 impl<TBlock: BlockT> RequestHandler<TBlock> {
 	/// Create a new [`RequestHandler`].
-	pub fn new<Hash: AsRef<[u8]>>(
+	pub fn new<Hash: AsRef<[u8]>, N: NetworkBackend<TBlock, <TBlock as BlockT>::Hash>>(
 		protocol_id: ProtocolId,
 		genesis_hash: Hash,
 		fork_id: Option<&str>,
 		backend: Arc<dyn WarpSyncProvider<TBlock>>,
-	) -> (Self, RequestResponseConfig) {
+	) -> (Self, N::RequestResponseProtocolConfig) {
 		let (tx, request_receiver) = async_channel::bounded(MAX_WARP_REQUEST_QUEUE);

-		let mut request_response_config =
-			generate_request_response_config(protocol_id, genesis_hash, fork_id);
-		request_response_config.inbound_queue = Some(tx);
+		let request_response_config = generate_request_response_config::<_, TBlock, N>(
+			protocol_id,
+			genesis_hash,
+			fork_id,
+			tx,
+		);

 		(Self { backend, request_receiver }, request_response_config)
 	}