mirror of
https://github.com/pezkuwichain/pezkuwi-subxt.git
synced 2026-05-08 06:38:01 +00:00
fd5f9292f5
Closes #2160

First part of [Extrinsic Horizon](https://github.com/paritytech/polkadot-sdk/issues/2415)

Introduces a new trait `TransactionExtension` to replace `SignedExtension`. Introduces the idea of transactions which obey the runtime's extensions and have the corresponding Extension data (né Extra data) yet do not have hard-coded signatures.

Deprecates the terminology of "Unsigned" when used for transactions/extrinsics, owing to there now being "proper" unsigned transactions which obey the extension framework and "old-style" unsigned transactions which do not. Instead we have __*General*__ for the former and __*Bare*__ for the latter. (Ultimately, the latter will be phased out as a type of transaction, and Bare will only be used for Inherents.)

Types of extrinsic are now therefore:

- Bare (no hardcoded signature, no Extra data; used to be known as "Unsigned"):
  - Bare transactions (deprecated): gossiped, validated with `ValidateUnsigned` (deprecated) and the `_bare_compat` bits of `TransactionExtension` (deprecated).
  - Inherents: not gossiped, validated with `ProvideInherent`.
- Extended (Extra data): gossiped, validated via `TransactionExtension`.
  - Signed transactions (with a hardcoded signature).
  - General transactions (without a hardcoded signature).

`TransactionExtension` differs from `SignedExtension` because:

- A signature on the underlying transaction may validly not be present.
- It may alter the origin during validation.
- `pre_dispatch` is renamed to `prepare` and need not contain the checks present in `validate`.
- `validate` and `prepare` are passed an `Origin` rather than an `AccountId`.
- `validate` may pass arbitrary information into `prepare` via a new user-specifiable type `Val`.
- `AdditionalSigned`/`additional_signed` is renamed to `Implicit`/`implicit`. It is encoded *for the entire transaction* and passed in to each extension as a new argument to `validate`. This facilitates the ability of extensions to act as underlying crypto.
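The mechanics in the list above can be illustrated with a deliberately simplified, self-contained model. This is not the real polkadot-sdk API — the trait, `Origin` enum, and `CheckNonceToy` extension below are toy stand-ins — but it shows the key change: `validate` receives an origin (which may carry no signer at all) and hands its working to `prepare` via a user-chosen `Val` type, so `prepare` need not repeat the checks.

```rust
// Toy model of the validate -> prepare flow; names are illustrative only.

#[derive(Debug, Clone, PartialEq)]
enum Origin {
    Signed(u64), // an account id
    None,        // no hard-coded signature (a "General" transaction)
}

trait TransactionExtensionToy {
    /// Data computed in `validate` and handed to `prepare`.
    type Val;

    /// Non-mutating checks; may also alter or refine the origin.
    fn validate(&self, origin: Origin) -> Result<(Self::Val, Origin), &'static str>;

    /// On-chain, mutating preparation; must NOT repeat `validate`'s checks.
    fn prepare(&self, val: Self::Val, origin: &Origin) -> Result<(), &'static str>;
}

/// A nonce-check-style extension: `validate` checks the nonce once and
/// forwards it; `prepare` only consumes the already-validated value.
struct CheckNonceToy {
    expected_nonce: u32,
    declared_nonce: u32,
}

impl TransactionExtensionToy for CheckNonceToy {
    type Val = u32; // the validated nonce, forwarded to `prepare`

    fn validate(&self, origin: Origin) -> Result<(u32, Origin), &'static str> {
        match origin {
            Origin::Signed(_) if self.declared_nonce == self.expected_nonce =>
                Ok((self.declared_nonce, origin)),
            Origin::Signed(_) => Err("stale nonce"),
            // A transaction may validly carry no signed origin.
            Origin::None => Ok((self.declared_nonce, origin)),
        }
    }

    fn prepare(&self, val: u32, _origin: &Origin) -> Result<(), &'static str> {
        // A real extension would mutate storage here (e.g. bump the nonce),
        // reusing `val` instead of re-running the lookup done in `validate`.
        let _next = val + 1;
        Ok(())
    }
}

fn main() {
    let ext = CheckNonceToy { expected_nonce: 7, declared_nonce: 7 };
    let (val, origin) = ext.validate(Origin::Signed(42)).unwrap();
    assert!(ext.prepare(val, &origin).is_ok());
    println!("ok");
}
```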
There is a new `DispatchTransaction` trait which contains only default function impls and is impl'ed for any `TransactionExtension` impler. It provides several utility functions which reduce some of the tedium of using `TransactionExtension` (indeed, none of its regular functions should now need to be called directly).

Three transaction version discriminators ("versions") are now permissible:

- `0b00000100`: Bare (used to be called "Unsigned"): contains no Signature or Extra (extension data). After bare transactions are no longer supported, this will strictly identify Inherents only.
- `0b10000100`: Old-school "Signed" transaction: contains Signature and Extra (extension data).
- `0b01000100`: New-school "General" transaction: contains Extra (extension data), but no Signature.

For the new-school General transaction, it becomes trivial for authors to publish extensions to the mechanism for authorizing an Origin, e.g. through new kinds of key-signing schemes, ZK proofs, pallet state, mutations over pre-authenticated origins, or any combination of the above.

## Code Migration

### NOW: Getting it to build

Wrap your `SignedExtension`s in `AsTransactionExtension`. This should be accompanied by renaming your aggregate type in line with the new terminology. E.g.

Before:

```rust
/// The SignedExtension to the basic transaction logic.
pub type SignedExtra = (
	/* snip */
	MySpecialSignedExtension,
);
/// Unchecked extrinsic type as expected by this runtime.
pub type UncheckedExtrinsic =
	generic::UncheckedExtrinsic<Address, RuntimeCall, Signature, SignedExtra>;
```

After:

```rust
/// The extension to the basic transaction logic.
pub type TxExtension = (
	/* snip */
	AsTransactionExtension<MySpecialSignedExtension>,
);
/// Unchecked extrinsic type as expected by this runtime.
pub type UncheckedExtrinsic =
	generic::UncheckedExtrinsic<Address, RuntimeCall, Signature, TxExtension>;
```

You'll also need to alter any transaction-building logic to add a `.into()` to make the conversion happen. E.g.

Before:

```rust
fn construct_extrinsic(
	/* snip */
) -> UncheckedExtrinsic {
	let extra: SignedExtra = (
		/* snip */
		MySpecialSignedExtension::new(/* snip */),
	);
	let payload = SignedPayload::new(call.clone(), extra.clone()).unwrap();
	let signature = payload.using_encoded(|e| sender.sign(e));
	UncheckedExtrinsic::new_signed(
		/* snip */
		Signature::Sr25519(signature),
		extra,
	)
}
```

After:

```rust
fn construct_extrinsic(
	/* snip */
) -> UncheckedExtrinsic {
	let tx_ext: TxExtension = (
		/* snip */
		MySpecialSignedExtension::new(/* snip */).into(),
	);
	let payload = SignedPayload::new(call.clone(), tx_ext.clone()).unwrap();
	let signature = payload.using_encoded(|e| sender.sign(e));
	UncheckedExtrinsic::new_signed(
		/* snip */
		Signature::Sr25519(signature),
		tx_ext,
	)
}
```

### SOON: Migrating to `TransactionExtension`

Most `SignedExtension`s can be trivially converted to become a `TransactionExtension`. There are a few things to know.

- Instead of a single trait like `SignedExtension`, you should now implement two traits individually: `TransactionExtensionBase` and `TransactionExtension`.
- Weights are now a thing and must be provided via the new function `fn weight`.

#### `TransactionExtensionBase`

This trait takes care of anything which is not dependent on types specific to your runtime, most notably `Call`.

- `AdditionalSigned`/`additional_signed` is renamed to `Implicit`/`implicit`.
- Weight must be returned by implementing the `weight` function. If your extension is associated with a pallet, you'll probably want to do this via the pallet's existing benchmarking infrastructure.

#### `TransactionExtension`

Generally:

- `pre_dispatch` is now `prepare` and you *should not re-execute the `validate` functionality in there*!
- You don't get an account ID any more; you get an origin instead. If you need to presume an account ID, then you can use the trait function `AsSystemOriginSigner::as_system_origin_signer`.
- You get an additional ticket, similar to `Pre`, called `Val`. This defines data which is passed from `validate` into `prepare`. This is important: since you should not be duplicating logic from `validate` in `prepare`, you need a way of passing your working from the former into the latter. This is it.
- This trait takes two type parameters: `Call` and `Context`. `Call` is the runtime call type which used to be an associated type; you can just move it to become a type parameter for your trait impl. `Context` is not currently used and you can safely implement over it as an unbounded type.
- There's no `AccountId` associated type any more. Just remove it.

Regarding `validate`:

- You get three new parameters in `validate`; all can be ignored when migrating from `SignedExtension`.
- `validate` returns a tuple on success; the second item in the tuple is the new ticket type `Self::Val` which gets passed in to `prepare`. If you use any information extracted during `validate` (off-chain and on-chain, non-mutating) in `prepare` (on-chain, mutating), then you can pass it through with this. For the tuple's last item, just return the `origin` argument.

Regarding `prepare`:

- This is renamed from `pre_dispatch`, but there is one change: FUNCTIONALITY TO VALIDATE THE TRANSACTION NEED NOT BE DUPLICATED FROM `validate`!! (This is different to `SignedExtension`, which was required to run the same checks in `pre_dispatch` as in `validate`.)

Regarding `post_dispatch`:

- Since there are no unsigned transactions handled by `TransactionExtension`, `Pre` is always defined, so the first parameter is `Self::Pre` rather than `Option<Self::Pre>`.
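The three version discriminators listed earlier can be sketched as a small classifier over the extrinsic's leading byte. The bit masks and names below are illustrative assumptions (the top two bits discriminating the transaction type, the remaining low bits carrying the format version), not the actual polkadot-sdk constants:

```rust
// Hypothetical masks: high two bits = transaction type, low bits = version.
const SIGNED_BIT: u8 = 0b1000_0000;
const GENERAL_BIT: u8 = 0b0100_0000;
const VERSION_MASK: u8 = 0b0011_1111;

#[derive(Debug, PartialEq)]
enum ExtrinsicKind {
    Bare,    // no signature, no extension data
    Signed,  // hard-coded signature + extension data
    General, // extension data only
}

/// Split the leading byte into (transaction type, format version).
fn classify(version_byte: u8) -> (ExtrinsicKind, u8) {
    let kind = match version_byte & (SIGNED_BIT | GENERAL_BIT) {
        0 => ExtrinsicKind::Bare,
        b if b == SIGNED_BIT => ExtrinsicKind::Signed,
        b if b == GENERAL_BIT => ExtrinsicKind::General,
        _ => panic!("invalid discriminator"),
    };
    (kind, version_byte & VERSION_MASK)
}

fn main() {
    assert_eq!(classify(0b0000_0100), (ExtrinsicKind::Bare, 4));
    assert_eq!(classify(0b1000_0100), (ExtrinsicKind::Signed, 4));
    assert_eq!(classify(0b0100_0100), (ExtrinsicKind::General, 4));
    println!("ok");
}
```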
If you make use of `SignedExtension::validate_unsigned` or `SignedExtension::pre_dispatch_unsigned`, then:

- Just use the regular versions of these functions instead.
- Have your logic execute in the case that the `origin` is `None`.
- Ensure your transaction creation logic creates a General transaction rather than a Bare transaction; this means having to include all `TransactionExtension`s' data.
- `ValidateUnsigned` can still be used (for now) if you need to be able to construct transactions which contain none of the extension data. However, these will be phased out in stage 2 of the Transactions Horizon, so you should consider moving to an extension-centric design.

## TODO

- [x] Introduce `CheckSignature` impl of `TransactionExtension` to ensure it's possible to have crypto be done wholly in a `TransactionExtension`.
- [x] Deprecate `SignedExtension` and move all uses in the codebase to `TransactionExtension`.
  - [x] `ChargeTransactionPayment`
  - [x] `DummyExtension`
  - [x] `ChargeAssetTxPayment` (asset-tx-payment)
  - [x] `ChargeAssetTxPayment` (asset-conversion-tx-payment)
  - [x] `CheckWeight`
  - [x] `CheckTxVersion`
  - [x] `CheckSpecVersion`
  - [x] `CheckNonce`
  - [x] `CheckNonZeroSender`
  - [x] `CheckMortality`
  - [x] `CheckGenesis`
  - [x] `CheckOnlySudoAccount`
  - [x] `WatchDummy`
  - [x] `PrevalidateAttests`
  - [x] `GenericSignedExtension`
  - [x] `SignedExtension` (chain-polkadot-bulletin)
  - [x] `RefundSignedExtensionAdapter`
- [x] Implement `fn weight` across the board.
- [ ] Go through all pre-existing extensions which assume an account signer and explicitly handle the possibility of another kind of origin.
  - [x] `CheckNonce` should probably succeed in the case of a non-account origin.
  - [x] `CheckNonZeroSender` should succeed in the case of a non-account origin.
  - [x] `ChargeTransactionPayment` and family should fail in the case of a non-account origin.
- [x] Fix any broken tests.
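The `validate_unsigned` migration above — running the formerly-unsigned logic when the `origin` is `None` — can be sketched as follows. This is a toy model under stated assumptions: the `OriginToy` enum and the heartbeat-style check are hypothetical, not real polkadot-sdk types.

```rust
// Toy origin: `None` marks a General transaction carrying no signer.
#[derive(Debug, PartialEq)]
enum OriginToy {
    Signed(u64),
    None,
}

/// Formerly split across `validate` and `validate_unsigned`; now a single
/// `validate` whose `None`-origin branch holds the old unsigned logic.
fn validate(origin: &OriginToy, heartbeat_ok: bool) -> Result<&'static str, &'static str> {
    match origin {
        // Signed path: the usual per-account checks would go here.
        OriginToy::Signed(_) => Ok("signed path"),
        // Unsigned path: the old `validate_unsigned` logic lives here.
        OriginToy::None if heartbeat_ok => Ok("general path"),
        OriginToy::None => Err("bad heartbeat"),
    }
}

fn main() {
    assert_eq!(validate(&OriginToy::Signed(1), false), Ok("signed path"));
    assert_eq!(validate(&OriginToy::None, true), Ok("general path"));
    assert!(validate(&OriginToy::None, false).is_err());
    println!("ok");
}
```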
---------

Signed-off-by: georgepisaltu <george.pisaltu@parity.io>
Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>
Signed-off-by: dependabot[bot] <support@github.com>
Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
Signed-off-by: Alexandru Gheorghe <alexandru.gheorghe@parity.io>
Signed-off-by: Andrei Sandu <andrei-mihail@parity.io>
Co-authored-by: Nikhil Gupta <17176722+gupnik@users.noreply.github.com>
Co-authored-by: georgepisaltu <52418509+georgepisaltu@users.noreply.github.com>
Co-authored-by: Chevdor <chevdor@users.noreply.github.com>
Co-authored-by: Bastian Köcher <git@kchr.de>
Co-authored-by: Maciej <maciej.zyszkiewicz@parity.io>
Co-authored-by: Javier Viola <javier@parity.io>
Co-authored-by: Marcin S. <marcin@realemail.net>
Co-authored-by: Tsvetomir Dimitrov <tsvetomir@parity.io>
Co-authored-by: Javier Bullrich <javier@bullrich.dev>
Co-authored-by: Koute <koute@users.noreply.github.com>
Co-authored-by: Adrian Catangiu <adrian@parity.io>
Co-authored-by: Vladimir Istyufeev <vladimir@parity.io>
Co-authored-by: Ross Bulat <ross@parity.io>
Co-authored-by: Gonçalo Pestana <g6pestana@gmail.com>
Co-authored-by: Liam Aharon <liam.aharon@hotmail.com>
Co-authored-by: Svyatoslav Nikolsky <svyatonik@gmail.com>
Co-authored-by: André Silva <123550+andresilva@users.noreply.github.com>
Co-authored-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
Co-authored-by: s0me0ne-unkn0wn <48632512+s0me0ne-unkn0wn@users.noreply.github.com>
Co-authored-by: ordian <write@reusable.software>
Co-authored-by: Sebastian Kunert <skunert49@gmail.com>
Co-authored-by: Aaro Altonen <48052676+altonen@users.noreply.github.com>
Co-authored-by: Dmitry Markin <dmitry@markin.tech>
Co-authored-by: Alexandru Vasile <60601340+lexnv@users.noreply.github.com>
Co-authored-by: Alexander Samusev <41779041+alvicsam@users.noreply.github.com>
Co-authored-by: Julian Eager <eagr@tutanota.com>
Co-authored-by: Michal Kucharczyk <1728078+michalkucharczyk@users.noreply.github.com>
Co-authored-by: Davide Galassi <davxy@datawok.net>
Co-authored-by: Dónal Murray <donal.murray@parity.io>
Co-authored-by: yjh <yjh465402634@gmail.com>
Co-authored-by: Tom Mi <tommi@niemi.lol>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Will | Paradox | ParaNodes.io <79228812+paradox-tt@users.noreply.github.com>
Co-authored-by: Bastian Köcher <info@kchr.de>
Co-authored-by: Joshy Orndorff <JoshOrndorff@users.noreply.github.com>
Co-authored-by: Joshy Orndorff <git-user-email.h0ly5@simplelogin.com>
Co-authored-by: PG Herveou <pgherveou@gmail.com>
Co-authored-by: Alexander Theißen <alex.theissen@me.com>
Co-authored-by: Kian Paimani <5588131+kianenigma@users.noreply.github.com>
Co-authored-by: Juan Girini <juangirini@gmail.com>
Co-authored-by: bader y <ibnbassem@gmail.com>
Co-authored-by: James Wilson <james@jsdw.me>
Co-authored-by: joe petrowski <25483142+joepetrowski@users.noreply.github.com>
Co-authored-by: asynchronous rob <rphmeier@gmail.com>
Co-authored-by: Parth <desaiparth08@gmail.com>
Co-authored-by: Andrew Jones <ascjones@gmail.com>
Co-authored-by: Jonathan Udd <jonathan@dwellir.com>
Co-authored-by: Serban Iorga <serban@parity.io>
Co-authored-by: Egor_P <egor@parity.io>
Co-authored-by: Branislav Kontur <bkontur@gmail.com>
Co-authored-by: Evgeny Snitko <evgeny@parity.io>
Co-authored-by: Just van Stam <vstam1@users.noreply.github.com>
Co-authored-by: Francisco Aguirre <franciscoaguirreperez@gmail.com>
Co-authored-by: gupnik <nikhilgupta.iitk@gmail.com>
Co-authored-by: dzmitry-lahoda <dzmitry@lahoda.pro>
Co-authored-by: zhiqiangxu <652732310@qq.com>
Co-authored-by: Nazar Mokrynskyi <nazar@mokrynskyi.com>
Co-authored-by: Anwesh <anweshknayak@gmail.com>
Co-authored-by: cheme <emericchevalier.pro@gmail.com>
Co-authored-by: Sam Johnson <sam@durosoft.com>
Co-authored-by: kianenigma <kian@parity.io>
Co-authored-by: Jegor Sidorenko <5252494+jsidorenko@users.noreply.github.com>
Co-authored-by: Muharem <ismailov.m.h@gmail.com>
Co-authored-by: joepetrowski <joe@parity.io>
Co-authored-by: Alexandru Gheorghe <49718502+alexggh@users.noreply.github.com>
Co-authored-by: Gabriel Facco de Arruda <arrudagates@gmail.com>
Co-authored-by: Squirrel <gilescope@gmail.com>
Co-authored-by: Andrei Sandu <54316454+sandreim@users.noreply.github.com>
Co-authored-by: georgepisaltu <george.pisaltu@parity.io>
Co-authored-by: command-bot <>
653 lines
18 KiB
Rust
// This file is part of Substrate.

// Copyright (C) Parity Technologies (UK) Ltd.
// SPDX-License-Identifier: GPL-3.0-or-later WITH Classpath-exception-2.0

// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.

// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.

// You should have received a copy of the GNU General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.

use crate::LOG_TARGET;
use libp2p::PeerId;
use log::trace;
use sc_network_common::sync::message;
use sp_runtime::traits::{Block as BlockT, NumberFor, One};
use std::{
	cmp,
	collections::{BTreeMap, HashMap},
	ops::Range,
};

/// Block data with origin.
#[derive(Debug, Clone, PartialEq, Eq)]
pub struct BlockData<B: BlockT> {
	/// The block message from the wire.
	pub block: message::BlockData<B>,
	/// The peer we received this from.
	pub origin: Option<PeerId>,
}

#[derive(Debug)]
enum BlockRangeState<B: BlockT> {
	Downloading { len: NumberFor<B>, downloading: u32 },
	Complete(Vec<BlockData<B>>),
	Queued { len: NumberFor<B> },
}

impl<B: BlockT> BlockRangeState<B> {
	pub fn len(&self) -> NumberFor<B> {
		match *self {
			Self::Downloading { len, .. } => len,
			Self::Complete(ref blocks) => (blocks.len() as u32).into(),
			Self::Queued { len } => len,
		}
	}
}

/// A collection of blocks being downloaded.
#[derive(Default)]
pub struct BlockCollection<B: BlockT> {
	/// Downloaded blocks.
	blocks: BTreeMap<NumberFor<B>, BlockRangeState<B>>,
	peer_requests: HashMap<PeerId, NumberFor<B>>,
	/// Block ranges downloaded and queued for import.
	/// Maps start_hash => (start_num, end_num).
	queued_blocks: HashMap<B::Hash, (NumberFor<B>, NumberFor<B>)>,
}

impl<B: BlockT> BlockCollection<B> {
	/// Create a new instance.
	pub fn new() -> Self {
		Self {
			blocks: BTreeMap::new(),
			peer_requests: HashMap::new(),
			queued_blocks: HashMap::new(),
		}
	}

	/// Clear everything.
	pub fn clear(&mut self) {
		self.blocks.clear();
		self.peer_requests.clear();
	}

	/// Insert a set of blocks into the collection.
	pub fn insert(&mut self, start: NumberFor<B>, blocks: Vec<message::BlockData<B>>, who: PeerId) {
		if blocks.is_empty() {
			return
		}

		match self.blocks.get(&start) {
			Some(&BlockRangeState::Downloading { .. }) => {
				trace!(target: LOG_TARGET, "Inserting block data still marked as being downloaded: {}", start);
			},
			Some(BlockRangeState::Complete(existing)) if existing.len() >= blocks.len() => {
				trace!(target: LOG_TARGET, "Ignored block data already downloaded: {}", start);
				return
			},
			_ => (),
		}

		self.blocks.insert(
			start,
			BlockRangeState::Complete(
				blocks.into_iter().map(|b| BlockData { origin: Some(who), block: b }).collect(),
			),
		);
	}

	/// Returns a range of blocks that need to be downloaded. The returned range is marked as
	/// being downloaded.
	pub fn needed_blocks(
		&mut self,
		who: PeerId,
		count: u32,
		peer_best: NumberFor<B>,
		common: NumberFor<B>,
		max_parallel: u32,
		max_ahead: u32,
	) -> Option<Range<NumberFor<B>>> {
		if peer_best <= common {
			// Bail out early
			return None
		}
		// First block number that we need to download
		let first_different = common + <NumberFor<B>>::one();
		let count: NumberFor<B> = count.into();
		let (mut range, downloading) = {
			// Iterate through the ranges in `self.blocks` looking for a range to download
			let mut downloading_iter = self.blocks.iter().peekable();
			let mut prev: Option<(&NumberFor<B>, &BlockRangeState<B>)> = None;
			loop {
				let next = downloading_iter.next();
				break match (prev, next) {
					// If we are already downloading this range, request it from `max_parallel`
					// peers (`max_parallel = 5` by default).
					// Do not request an already downloading range from peers with a common
					// number above the range start.
					(Some((start, &BlockRangeState::Downloading { ref len, downloading })), _)
						if downloading < max_parallel && *start >= first_different =>
						(*start..*start + *len, downloading),
					// If there is a gap between requested ranges, download this gap unless the
					// peer has a common number above the gap start.
					(Some((start, r)), Some((next_start, _)))
						if *start + r.len() < *next_start &&
							*start + r.len() >= first_different =>
						(*start + r.len()..cmp::min(*next_start, *start + r.len() + count), 0),
					// Download `count` blocks after the last requested range unless the peer
					// has a common number above this new range.
					(Some((start, r)), None) if *start + r.len() >= first_different =>
						(*start + r.len()..*start + r.len() + count, 0),
					// If there are no ranges currently requested, download `count` blocks
					// after the `common` number.
					(None, None) => (first_different..first_different + count, 0),
					// If the first range starts above `common + 1`, download the gap at the
					// start.
					(None, Some((start, _))) if *start > first_different =>
						(first_different..cmp::min(first_different + count, *start), 0),
					// Move on to the next range pair.
					_ => {
						prev = next;
						continue
					},
				}
			}
		};
		// Crop to the peer's best number.
		if range.start > peer_best {
			trace!(target: LOG_TARGET, "Out of range for peer {} ({} vs {})", who, range.start, peer_best);
			return None
		}
		range.end = cmp::min(peer_best + One::one(), range.end);

		if self
			.blocks
			.iter()
			.next()
			.map_or(false, |(n, _)| range.start > *n + max_ahead.into())
		{
			trace!(target: LOG_TARGET, "Too far ahead for peer {} ({})", who, range.start);
			return None
		}

		self.peer_requests.insert(who, range.start);
		self.blocks.insert(
			range.start,
			BlockRangeState::Downloading {
				len: range.end - range.start,
				downloading: downloading + 1,
			},
		);
		if range.end <= range.start {
			panic!(
				"Empty range {:?}, count={}, peer_best={}, common={}, blocks={:?}",
				range, count, peer_best, common, self.blocks
			);
		}
		Some(range)
	}

	/// Get a valid chain of blocks ordered in descending order and ready for importing into
	/// the blockchain.
	/// `from` is the maximum block number for the start of the range that we are interested in.
	/// The function will return an empty Vec if the first ready block is higher than `from`.
	/// For each returned block hash, `clear_queued` must be called at some later stage.
	pub fn ready_blocks(&mut self, from: NumberFor<B>) -> Vec<BlockData<B>> {
		let mut ready = Vec::new();

		let mut prev = from;
		for (&start, range_data) in &mut self.blocks {
			if start > prev {
				break
			}
			let len = match range_data {
				BlockRangeState::Complete(blocks) => {
					let len = (blocks.len() as u32).into();
					prev = start + len;
					if let Some(BlockData { block, .. }) = blocks.first() {
						self.queued_blocks
							.insert(block.hash, (start, start + (blocks.len() as u32).into()));
					}
					// Remove all elements from `blocks` and add them to `ready`
					ready.append(blocks);
					len
				},
				BlockRangeState::Queued { .. } => continue,
				_ => break,
			};
			*range_data = BlockRangeState::Queued { len };
		}
		trace!(target: LOG_TARGET, "{} blocks ready for import", ready.len());
		ready
	}

	pub fn clear_queued(&mut self, hash: &B::Hash) {
		if let Some((from, to)) = self.queued_blocks.remove(hash) {
			let mut block_num = from;
			while block_num < to {
				self.blocks.remove(&block_num);
				block_num += One::one();
			}
			trace!(target: LOG_TARGET, "Cleared blocks from {:?} to {:?}", from, to);
		}
	}

	pub fn clear_peer_download(&mut self, who: &PeerId) {
		if let Some(start) = self.peer_requests.remove(who) {
			let remove = match self.blocks.get_mut(&start) {
				Some(&mut BlockRangeState::Downloading { ref mut downloading, .. })
					if *downloading > 1 =>
				{
					*downloading -= 1;
					false
				},
				Some(&mut BlockRangeState::Downloading { .. }) => true,
				_ => false,
			};
			if remove {
				self.blocks.remove(&start);
			}
		}
	}
}

#[cfg(test)]
mod test {
	use super::{BlockCollection, BlockData, BlockRangeState};
	use libp2p::PeerId;
	use sc_network_common::sync::message;
	use sp_core::H256;
	use sp_runtime::{
		generic::UncheckedExtrinsic,
		testing::{Block as RawBlock, MockCallU64},
	};

	type Block = RawBlock<UncheckedExtrinsic<u64, MockCallU64, (), ()>>;

	fn is_empty(bc: &BlockCollection<Block>) -> bool {
		bc.blocks.is_empty() && bc.peer_requests.is_empty()
	}

	fn generate_blocks(n: usize) -> Vec<message::BlockData<Block>> {
		(0..n)
			.map(|_| message::generic::BlockData {
				hash: H256::random(),
				header: None,
				body: None,
				indexed_body: None,
				message_queue: None,
				receipt: None,
				justification: None,
				justifications: None,
			})
			.collect()
	}

	#[test]
	fn create_clear() {
		let mut bc = BlockCollection::new();
		assert!(is_empty(&bc));
		bc.insert(1, generate_blocks(100), PeerId::random());
		assert!(!is_empty(&bc));
		bc.clear();
		assert!(is_empty(&bc));
	}

	#[test]
	fn insert_blocks() {
		let mut bc = BlockCollection::new();
		assert!(is_empty(&bc));
		let peer0 = PeerId::random();
		let peer1 = PeerId::random();
		let peer2 = PeerId::random();

		let blocks = generate_blocks(150);
		assert_eq!(bc.needed_blocks(peer0, 40, 150, 0, 1, 200), Some(1..41));
		assert_eq!(bc.needed_blocks(peer1, 40, 150, 0, 1, 200), Some(41..81));
		assert_eq!(bc.needed_blocks(peer2, 40, 150, 0, 1, 200), Some(81..121));

		bc.clear_peer_download(&peer1);
		bc.insert(41, blocks[41..81].to_vec(), peer1);
		assert_eq!(bc.ready_blocks(1), vec![]);
		assert_eq!(bc.needed_blocks(peer1, 40, 150, 0, 1, 200), Some(121..151));
		bc.clear_peer_download(&peer0);
		bc.insert(1, blocks[1..11].to_vec(), peer0);

		assert_eq!(bc.needed_blocks(peer0, 40, 150, 0, 1, 200), Some(11..41));
		assert_eq!(
			bc.ready_blocks(1),
			blocks[1..11]
				.iter()
				.map(|b| BlockData { block: b.clone(), origin: Some(peer0) })
				.collect::<Vec<_>>()
		);

		bc.clear_peer_download(&peer0);
		bc.insert(11, blocks[11..41].to_vec(), peer0);

		let ready = bc.ready_blocks(12);
		assert_eq!(
			ready[..30],
			blocks[11..41]
				.iter()
				.map(|b| BlockData { block: b.clone(), origin: Some(peer0) })
				.collect::<Vec<_>>()[..]
		);
		assert_eq!(
			ready[30..],
			blocks[41..81]
				.iter()
				.map(|b| BlockData { block: b.clone(), origin: Some(peer1) })
				.collect::<Vec<_>>()[..]
		);

		bc.clear_peer_download(&peer2);
		assert_eq!(bc.needed_blocks(peer2, 40, 150, 80, 1, 200), Some(81..121));
		bc.clear_peer_download(&peer2);
		bc.insert(81, blocks[81..121].to_vec(), peer2);
		bc.clear_peer_download(&peer1);
		bc.insert(121, blocks[121..150].to_vec(), peer1);

		assert_eq!(bc.ready_blocks(80), vec![]);
		let ready = bc.ready_blocks(81);
		assert_eq!(
			ready[..40],
			blocks[81..121]
				.iter()
				.map(|b| BlockData { block: b.clone(), origin: Some(peer2) })
				.collect::<Vec<_>>()[..]
		);
		assert_eq!(
			ready[40..],
			blocks[121..150]
				.iter()
				.map(|b| BlockData { block: b.clone(), origin: Some(peer1) })
				.collect::<Vec<_>>()[..]
		);
	}

	#[test]
	fn large_gap() {
		let mut bc: BlockCollection<Block> = BlockCollection::new();
		bc.blocks.insert(100, BlockRangeState::Downloading { len: 128, downloading: 1 });
		let blocks = generate_blocks(10)
			.into_iter()
			.map(|b| BlockData { block: b, origin: None })
			.collect();
		bc.blocks.insert(114305, BlockRangeState::Complete(blocks));

		let peer0 = PeerId::random();
		assert_eq!(bc.needed_blocks(peer0, 128, 10000, 0, 1, 200), Some(1..100));
		assert_eq!(bc.needed_blocks(peer0, 128, 10000, 0, 1, 200), None); // too far ahead
		assert_eq!(
			bc.needed_blocks(peer0, 128, 10000, 0, 1, 200000),
			Some(100 + 128..100 + 128 + 128)
		);
	}

	#[test]
	fn no_duplicate_requests_on_fork() {
		let mut bc = BlockCollection::new();
		assert!(is_empty(&bc));
		let peer = PeerId::random();

		let blocks = generate_blocks(10);

		// count = 5, peer_best = 50, common = 39, max_parallel = 0, max_ahead = 200
		assert_eq!(bc.needed_blocks(peer, 5, 50, 39, 0, 200), Some(40..45));

		// Got a response to the request for `40..45`.
		bc.clear_peer_download(&peer);
		bc.insert(40, blocks[..5].to_vec(), peer);

		// Our "node" started on a fork, with its current best = 47, which is > common.
		let ready = bc.ready_blocks(48);
		assert_eq!(
			ready,
			blocks[..5]
				.iter()
				.map(|b| BlockData { block: b.clone(), origin: Some(peer) })
				.collect::<Vec<_>>()
		);

		assert_eq!(bc.needed_blocks(peer, 5, 50, 39, 0, 200), Some(45..50));
	}

	#[test]
	fn clear_queued_subsequent_ranges() {
		let mut bc = BlockCollection::new();
		assert!(is_empty(&bc));
		let peer = PeerId::random();

		let blocks = generate_blocks(10);

		// Request 2 ranges.
		assert_eq!(bc.needed_blocks(peer, 5, 50, 39, 0, 200), Some(40..45));
		assert_eq!(bc.needed_blocks(peer, 5, 50, 39, 0, 200), Some(45..50));

		// Got a response to the request for `40..50`.
		bc.clear_peer_download(&peer);
		bc.insert(40, blocks.to_vec(), peer);

		// Request any blocks starting from 1000 or lower.
		let ready = bc.ready_blocks(1000);
		assert_eq!(
			ready,
			blocks
				.iter()
				.map(|b| BlockData { block: b.clone(), origin: Some(peer) })
				.collect::<Vec<_>>()
		);

		bc.clear_queued(&blocks[0].hash);
		assert!(bc.blocks.is_empty());
		assert!(bc.queued_blocks.is_empty());
	}

	#[test]
	fn downloaded_range_is_requested_from_max_parallel_peers() {
		let mut bc = BlockCollection::new();
		assert!(is_empty(&bc));

		let count = 5;
		// Identical ranges are requested from 2 peers.
		let max_parallel = 2;
		let max_ahead = 200;

		let peer1 = PeerId::random();
		let peer2 = PeerId::random();
		let peer3 = PeerId::random();

		// Common for all peers.
		let best = 100;
		let common = 10;

		assert_eq!(
			bc.needed_blocks(peer1, count, best, common, max_parallel, max_ahead),
			Some(11..16)
		);
		assert_eq!(
			bc.needed_blocks(peer2, count, best, common, max_parallel, max_ahead),
			Some(11..16)
		);
		assert_eq!(
			bc.needed_blocks(peer3, count, best, common, max_parallel, max_ahead),
			Some(16..21)
		);
	}

	#[test]
	fn downloaded_range_not_requested_from_peers_with_higher_common_number() {
		// A peer connects with a common number falling behind our best number
		// (either a fork or lagging behind).
		// We request a range from this peer starting at its common number + 1.
		// Even though we have fewer than `max_parallel` downloads, we do not request
		// this range from peers with a common number above the start of this range.

		let mut bc = BlockCollection::new();
		assert!(is_empty(&bc));

		let count = 5;
		let max_parallel = 2;
		let max_ahead = 200;

		let peer1 = PeerId::random();
		let peer1_best = 20;
		let peer1_common = 10;

		// `peer2` has its first different block above the start of the range downloaded
		// from `peer1`.
		let peer2 = PeerId::random();
		let peer2_best = 20;
		let peer2_common = 11; // first_different = 12

		assert_eq!(
			bc.needed_blocks(peer1, count, peer1_best, peer1_common, max_parallel, max_ahead),
			Some(11..16),
		);
		assert_eq!(
			bc.needed_blocks(peer2, count, peer2_best, peer2_common, max_parallel, max_ahead),
			Some(16..21),
		);
	}

	#[test]
	fn gap_above_common_number_requested() {
		let mut bc = BlockCollection::new();
		assert!(is_empty(&bc));

		let count = 5;
		let best = 30;
		// We need at least 3 ranges requested to have a gap, so to minimize the number of
		// peers, set `max_parallel = 1`.
		let max_parallel = 1;
		let max_ahead = 200;

		let peer1 = PeerId::random();
		let peer2 = PeerId::random();
		let peer3 = PeerId::random();

		let common = 10;
		assert_eq!(
			bc.needed_blocks(peer1, count, best, common, max_parallel, max_ahead),
			Some(11..16),
		);
		assert_eq!(
			bc.needed_blocks(peer2, count, best, common, max_parallel, max_ahead),
			Some(16..21),
		);
		assert_eq!(
			bc.needed_blocks(peer3, count, best, common, max_parallel, max_ahead),
			Some(21..26),
		);

		// For some reason there is now a gap at 16..21. We just disconnect `peer2`, but it
		// might also happen that 16..21 was received first and got imported if our best is
		// actually >= 15.
		bc.clear_peer_download(&peer2);

		// Some peer connects with a common number below the gap. The gap is requested from it.
		assert_eq!(
			bc.needed_blocks(peer2, count, best, common, max_parallel, max_ahead),
			Some(16..21),
		);
	}

	#[test]
	fn gap_below_common_number_not_requested() {
		let mut bc = BlockCollection::new();
		assert!(is_empty(&bc));

		let count = 5;
		let best = 30;
		// We need at least 3 ranges requested to have a gap, so to minimize the number of
		// peers, set `max_parallel = 1`.
		let max_parallel = 1;
		let max_ahead = 200;

		let peer1 = PeerId::random();
		let peer2 = PeerId::random();
		let peer3 = PeerId::random();

		let common = 10;
		assert_eq!(
			bc.needed_blocks(peer1, count, best, common, max_parallel, max_ahead),
			Some(11..16),
		);
		assert_eq!(
			bc.needed_blocks(peer2, count, best, common, max_parallel, max_ahead),
			Some(16..21),
		);
		assert_eq!(
			bc.needed_blocks(peer3, count, best, common, max_parallel, max_ahead),
			Some(21..26),
		);

		// For some reason there is now a gap at 16..21. We just disconnect `peer2`, but it
		// might also happen that 16..21 was received first and got imported if our best is
		// actually >= 15.
		bc.clear_peer_download(&peer2);

		// Some peer connects with a common number above the gap. The gap is not requested
		// from it.
		let common = 23;
		assert_eq!(
			bc.needed_blocks(peer2, count, best, common, max_parallel, max_ahead),
			Some(26..31), // not 16..21
		);
	}

	#[test]
	fn range_at_the_end_above_common_number_requested() {
		let mut bc = BlockCollection::new();
		assert!(is_empty(&bc));

		let count = 5;
		let best = 30;
		let max_parallel = 1;
		let max_ahead = 200;

		let peer1 = PeerId::random();
		let peer2 = PeerId::random();

		let common = 10;
		assert_eq!(
			bc.needed_blocks(peer1, count, best, common, max_parallel, max_ahead),
			Some(11..16),
		);
		assert_eq!(
			bc.needed_blocks(peer2, count, best, common, max_parallel, max_ahead),
			Some(16..21),
		);
	}

	#[test]
	fn range_at_the_end_below_common_number_not_requested() {
		let mut bc = BlockCollection::new();
		assert!(is_empty(&bc));

		let count = 5;
		let best = 30;
		let max_parallel = 1;
		let max_ahead = 200;

		let peer1 = PeerId::random();
		let peer2 = PeerId::random();

		let common = 10;
		assert_eq!(
			bc.needed_blocks(peer1, count, best, common, max_parallel, max_ahead),
			Some(11..16),
		);

		let common = 20;
		assert_eq!(
			bc.needed_blocks(peer2, count, best, common, max_parallel, max_ahead),
			Some(21..26), // not 16..21
		);
	}
}