Parathreads Feature Branch (#6969)

* First baby steps

* Split scheduler into several modules

* Towards a more modular approach for scheduling

* move free_cores; IntoIterator -> BTreeMap

* Move clear()

* Move more functions out of scheduler

* Change weight composition

* More abstraction

* Further refactor

* clippy

* fmt

* fix test-runtime

* Add parathreads pallet to construct_runtime!

* Make all runtimes use (Parachains, Parathreads) scheduling

* Delete commented out code

* Remove parathreads scheduler from westend, rococo, and kusama

* fix rococo, westend, and kusama config

* Revert "fix rococo, westend, and kusama config"

This reverts commit 59e4de380d5c7d17eaaba5e2c2b81405de3465e3.

* Revert "Remove parathreads scheduler from westend, rococo, and kusama"

This reverts commit 4c44255296083ac5670560790ed77104917890a4.

* Remove CoreIndex from free_cores

* Remove unnecessary struct for parathreads

* parathreads provider take 1

* Comment out parathread tests

* Pop into lookahead

* fmt

* Fill lookahead with two entries for parachains

* fmt

* Current stage

* Towards AB parathreads

* no AB use

* Make tests typecheck

* quick hack to set scheduling lookahead to 1

* Fix scheduler tests

* fix paras_inherent tests

* misc

* Update more of a test

* cfg(test)

* some cleanup

* Undo paras_inherent changes

* Adjust paras inherent tests

* Undo changes to v2 primitives

* Undo v2 mod changes to tests

* minor

* Remove parathreads assigner and pallet

* minor

* minor

* more cleanup

* fmt

* minor

* minor

* minor

* Remove on_new_session from assignment provider

* Make adder collator integration test pass

* disable failing unit tests

* minor

* minor

* re-enable one unit test

* minor

* handle retries, add concluded para to pop interface

* comment out unused code

* Remove core_para from interface

* Remove first claimqueue element on clear if None instead of removing all Nones

* Move claimqueue get out of loop

* Use VecDeque instead of Vec in ClaimQueue

* Make occupied() AB ready(?)

* handle freed disputed in clear_and_fill_claimqueue

* clear_and_fill_claimqueue returns scheduled Vec

* Rename and minor refactor

* return position of assignment taken from claimqueue

* minor

* Fix session boundary parachains number change + extended test

* Fix runtimes

* Fix polkadot runtime

* Remove polkadot pallet from benchmarks

* fix test runtime

* Add storage migration

* Minor refactor

* Minor

* migration typechecks

* Add migration to runtimes

* Towards modular scheduling II (#6568)

* Add post migration check

* pebkac

* Disable migrations but mine

* Revert "Disable migrations but mine"

This reverts commit 4fa5c5a370c199944a7e0926f50b08626bfbad4c.

* Move scheduler migration

* Revert "Move scheduler migration"

This reverts commit a16b1659a907950bae048a9f7010f2aa76e02b6d.

* Fix migration

* cleanup

* Don't lose retries value anymore

* comment out test function

* Remove retries value from Assignment again

* minor

* Make collator for parathreads optional

* data type refactor

* update scheduler tests

* Change test function cfg

* comment out test function

* Try cfg(test) only

* fix cfg flags

* Add get_max_retries function to provider interface (#7047)

* Fix merge commit

* pebkac

* fix merge

* update cargo.lock

* fix merge

* fix merge

* Use btreemap instead of vec, fix scheduler calls.

* Use imported `ScheduledCore`

* Remove unused import in inclusion tests

* Use keys() instead of mapping over a BTreeMap

* Fix migrations for parachains scheduler

* Use BlockNumberFor<T> everywhere in scheduler

* Add on demand assignment provider pallet (#7110)

* Address some PR comments

* minor

* more cleanup

* find_map and timeout availability fixes

* Change default scheduling_lookahead to 1

* Add on demand assignment provider pallet

* Move test-runtime to new assignment provider

* Run cargo format on scheduler tests

* minor

* Mutate cores in single loop

* timeout predicate simplification

* claimqueue desired size fix

* Replace expect by ok_or

* More improvements

* Fix push back order and next_up_on_timeout

* minor

* session change docs

* Add pre_new_session call to handle pre-session updates

* Remove sc_network dependency and PeerId from unnecessary data structures

* Remove unnecessary peer_ids

* Add OnDemandOrdering proxy (#7156)

* Add OnDemandBidding proxy

* Fix names

* OnDemandAssigner for rococo only

* Check PeerId in collator protocol before fetching collation

* On occupied, remove non occupied cores from the claimqueue front and refill

* Add missing docs

* Comment out unused field

* fix ScheduledCore in tests

* Fix the fix

* pebkac

* fmt

* Fix occupied dropping

* Remove double import

* ScheduledCore fixes

* Readd sc-network dep

* pebkac

* OpaquePeerId -> PeerId in can_collate interface

* Cargo.lock update for interface change

* Remove checks not needed anymore?

* Drop occupied core on session change if it would time out after the new session

* Add on demand assignment provider pallet

* Move test-runtime to new assignment provider

* Run cargo format on scheduler tests

* Add OnDemandOrdering proxy (#7156)

* Add OnDemandBidding proxy

* Fix names

* OnDemandAssigner for rococo only

* Remove unneeded config values

* Update comments

* Use and_then for queue position

* Return the max size of the spot queue on error

* Add comments to add_parathread_entry

* Add module comments

* Add log for when can_collate fails

* Change assigner queue type to `Assignment`

* Update assignment provider tests

* More logs

* Remove unused keyring import

* disable can_collate

* comment out can_collate

* can_collate first checks whether the allowed set is empty

* Move can_collate call to collation advertisement

* Fix backing test

* map to loop

* Remove obsolete check

* Move invalid collation test from backing to collator-protocol

* fix unused imports

* fix test

* fix Debug derivation

* Increase time limit on zombienet predicates

* Increase zombienet timeout

* Minor

* Address some PR comments

* Address PR comments

* Comment out failing assert due to on-demand assigner missing

* remove collator_restrictions info from backing

* Move can_collate to ActiveParas

* minor

* minor

* Update weight information for on demand config

* Add ttl to parasentry

* Fix tests missing parasentry ttl

* Adjust scheduler tests to use ttl default values

* Use match instead of if let for ttl drop

* Use RuntimeDebug trait for `ParasEntry` fields

* Add comments to on demand assignment pallet

* Fix spot traffic calculation

* Revert runtimedebug changes to primitives

* Remove runtimedebug derivation from `ParasEntry`

* Mention affinity in pallet level docs

* Use RuntimeDebug trait for ParasEntry child types

* Remove collator restrictions

* Fix primitive versioning and other merge issues

* Fix tests post merge

* Fix node side tests

* Edit parascheduler migration for clarity

* Move parascheduler migration up to next release

* Remove vestiges from merge

* Fix tests

* Refactor ttl handling

* Remove unused things from scheduler tests

* Move on demand assigner to own directory

* Update documentation

* Remove unused sc-network dependency in primitives

Was used for collator restrictions

* Remove unused import

* Reenable scheduler test

* Remove unused storage value

* Enable timeout predicate test and fix fn

It turns out that the compiler issue has been fixed, so we can now
use `impl Trait` in the manner used here.
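The pattern the re-enabled test exercises can be sketched as follows. This is an illustrative stand-alone predicate with made-up parameters, not the actual scheduler function: a free function returning a closure through `impl Trait` in return position, which older compilers rejected here.

```rust
// Sketch only: a predicate that reports whether a core pending availability
// has timed out, returned as a closure via `impl Trait`.
pub fn availability_timeout_predicate(period: u32) -> impl Fn(u32, u32) -> bool {
    // A core times out once it has been pending availability for a full period.
    move |pending_since, now| now.saturating_sub(pending_since) >= period
}
```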

* Remove unused imports

* Add benchmarking entry for perbill in config

* Correct typo

* Address review comments

* Log out errors when calculating spot traffic.

* Change parascheduler's log target name

* Update scheduler_common documentation

* Use mutate for affinity fns, add tests

* Add another on demand affinity test

* Unify parathreads and parachains in HostConfig (take 2) (#7452)

* Unify parathreads and parachains in HostConfig

* Fixed missed occurrences

* Remove commented out lines

* `HostConfiguration v7`

* Fix version check

* Add `MigrateToV7` to `Unreleased`

* fmt

* fmt

* Fix compilation errors after the rebase

* Update runtime/parachains/src/scheduler/tests.rs

Co-authored-by: Anton Vilhelm Ásgeirsson <antonva@users.noreply.github.com>

* Update runtime/parachains/src/scheduler/tests.rs

Co-authored-by: Anton Vilhelm Ásgeirsson <antonva@users.noreply.github.com>

* fmt

* Fix migration test

* Fix tests

* Remove unneeded assert from tests

* parathread_cores -> on_demand_cores; parathread_retries -> on_demand_retries

* Fix a compilation error in tests

* Remove unused `use`

* update colander image version

---------

Co-authored-by: alexgparity <alex.gremm@parity.io>
Co-authored-by: Anton Vilhelm Ásgeirsson <antonva@users.noreply.github.com>
Co-authored-by: Javier Viola <javier@parity.io>

* Fix branch after merge with master

* Refactor out duplicate checks into a helper fn

* Fix tests post merge

* Rename add_parathread_assignment, add test

* Update docs

* Remove unused on_finalize function

* Add weight info to on demand pallet

* Update runtime/parachains/src/configuration.rs

Co-authored-by: Tsvetomir Dimitrov <tsvetomir@parity.io>

* Update runtime/parachains/src/scheduler_common/mod.rs

Co-authored-by: Tsvetomir Dimitrov <tsvetomir@parity.io>

* Update runtime/parachains/src/assigner_on_demand/mod.rs

Co-authored-by: Tsvetomir Dimitrov <tsvetomir@parity.io>

* Add benchmarking to on demand pallet

* Make place_order test check for success

* Add on demand benchmarks

* Add local test weights to rococo runtime

* Modify TTL drop behaviour to not skip claims

The previous behaviour would let a new claim from the assignment provider
jump ahead of existing entries in the claimqueue, assuming the lookahead is larger than 1.
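A minimal model of the fixed behaviour, with hypothetical names and simplified types: expired claims are dropped, and replacement claims pulled from the assignment provider are pushed only to the back of the queue, so they cannot jump ahead of claims that were already waiting.

```rust
use std::collections::VecDeque;

// Toy claimqueue entry; the real `ParasEntry` carries an `Assignment` and more.
#[derive(Clone, Debug, PartialEq)]
pub struct ClaimEntry {
    pub para_id: u32,
    pub ttl: u32,
}

pub fn drop_expired_claims(
    queue: &mut VecDeque<ClaimEntry>,
    now: u32,
    mut pop_assignment: impl FnMut() -> Option<ClaimEntry>,
) {
    let before = queue.len();
    // Drop every claim whose TTL has passed, keeping the relative
    // order of the surviving claims intact.
    queue.retain(|entry| entry.ttl >= now);
    // Refill only at the back, one replacement per dropped claim.
    for _ in 0..before - queue.len() {
        if let Some(fresh) = pop_assignment() {
            queue.push_back(fresh);
        }
    }
}
```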

* Refactor ttl test to test claimqueue order

* Disable place_order ext. when no on_demand cores

* Use default genesis config for benchmark tests

* Refactor config builder param

* Move lifecycle test from scheduler to on demand

* Remove unneeded lifecycle test

The Paras module, via the parachain assignment provider, doesn't provide
new assignments if a parachain loses its lease. The on demand
assignment provider doesn't provide an assignment for a para that is
not a parathread.
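The two behaviours above can be illustrated with a toy model; all names here are hypothetical stand-ins for the real pallet interfaces. The bulk (lease-based) provider yields assignments only while a para holds a lease, and the on-demand provider yields only orders that were actually placed.

```rust
use std::collections::VecDeque;

// Lease-based ("bulk") provider: no lease, no assignment.
pub struct BulkProvider {
    pub leased: Vec<u32>, // ParaIds currently holding a lease
}

impl BulkProvider {
    pub fn pop_assignment_for_core(&self, para_id: u32) -> Option<u32> {
        self.leased.contains(&para_id).then_some(para_id)
    }
}

// On-demand provider: only paras that placed an order come out of the queue.
pub struct OnDemandProvider {
    pub order_queue: VecDeque<u32>,
}

impl OnDemandProvider {
    pub fn pop_assignment_for_core(&mut self) -> Option<u32> {
        self.order_queue.pop_front()
    }
}
```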

* Re enable validator shuffle test

* More realistic weights for place_order

* Remove redundant import

* Fix backwards compatibility (hopefully)

* ".git/.scripts/commands/bench/bench.sh" --subcommand=runtime --runtime=rococo --target_dir=polkadot --pallet=runtime_parachains::assigner_on_demand

* Fix tests.

* Fix off-by-one.

* Re enable claimqueue fills test

* Re enable schedule_rotates_groups test

* Fix fill_claimqueue_fills test

* Re enable next_up_on_timeout test, move fn

* Do not pop from assignment provider when retrying

* Fix tests missing collator in scheduledcore

* Add comment about timeout predicate.

* Rename parasentry retries to availability timeouts

* Re enable schedule_schedules... test

* Refactor prune retried test to new scheduler

* Have all scheduler tests use genesis_cfg fn

* Update docs

* Update copyright notices on new files

* Rename is_parachain_core to is_bulk_core

* Remove erroneous TODO

* Simplify import

* ".git/.scripts/commands/bench/bench.sh" --subcommand=runtime --runtime=rococo --target_dir=polkadot --pallet=runtime_parachains::configuration

* Revert AdvertiseCollation order shuffle

* Refactor place_order into keepalive and allowdeath

* Revert rename of hrmp max inbound channels

"parachain" encompasses both on-demand and slot auction / bulk parachains.

* Restore availability_timeout_predicate function

* Clean up leftover comments

* Update runtime/parachains/src/scheduler/tests.rs

Co-authored-by: Tsvetomir Dimitrov <tsvetomir@parity.io>

* ".git/.scripts/commands/bench/bench.sh" --subcommand=runtime --runtime=westend --target_dir=polkadot --pallet=runtime_parachains::configuration

---------

Co-authored-by: alexgparity <alex.gremm@parity.io>
Co-authored-by: alexgparity <115470171+alexgparity@users.noreply.github.com>
Co-authored-by: Tsvetomir Dimitrov <tsvetomir@parity.io>
Co-authored-by: Javier Viola <javier@parity.io>
Co-authored-by: eskimor <eskimor@no-such-url.com>
Co-authored-by: command-bot <>

* On Demand - update weights and small nits (#7605)

* Remove collator restriction test in inclusion

On demand parachains won't have collator restrictions implemented in
this way but will instead use a preferred collator registered to a
`ParaId` in `paras_registrar`.
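The preferred-collator lookup hinted at above could look roughly like this; the struct, field, and method names are all hypothetical, and the real `paras_registrar` interface may differ.

```rust
use std::collections::BTreeMap;

// Hypothetical registrar mapping a ParaId to a preferred collator key.
pub struct Registrar {
    pub preferred: BTreeMap<u32, [u8; 32]>,
}

impl Registrar {
    // If no preferred collator is registered, any collator may author
    // for the para; otherwise only the registered one is accepted.
    pub fn collator_allowed(&self, para_id: u32, collator: &[u8; 32]) -> bool {
        self.preferred.get(&para_id).map_or(true, |c| c == collator)
    }
}
```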

* Remove redundant config guard for test fns

* Update weights

* Update WeightInfo for on_demand assigner

* Unify assignment provider parameters into one call (#7606)

* Combine assignmentprovider params into one fn call

* Move scheduler_common to a module under scheduler

* Fix ttl handling in benchmark builder

* Run cargo format

* Remove obsolete test.

* Small improvement.

* Use same migration pattern as config module

* Remove old TODO

* Change log target name for assigner on demand

* Fix migration

* Fix clippy warnings

* Add HostConfiguration storage migration to V8

* Add `MigrateToV8` to unreleased migrations for all runtimes

* Fix storage version check for config v8

* Set `StorageVersion` to 8 in `MigrateToV8`

* Remove dups.

* Update primitives/src/v5/mod.rs

Co-authored-by: Bastian Köcher <git@kchr.de>

---------

Co-authored-by: alexgparity <alex.gremm@parity.io>
Co-authored-by: alexgparity <115470171+alexgparity@users.noreply.github.com>
Co-authored-by: antonva <anton.asgeirsson@parity.io>
Co-authored-by: Tsvetomir Dimitrov <tsvetomir@parity.io>
Co-authored-by: Anton Vilhelm Ásgeirsson <antonva@users.noreply.github.com>
Co-authored-by: Javier Viola <javier@parity.io>
Co-authored-by: eskimor <eskimor@no-such-url.com>
Co-authored-by: Bastian Köcher <git@kchr.de>
This commit is contained in:
eskimor, 2023-08-17 14:52:23 +02:00, committed by GitHub
parent 26b5f259a3, commit eaf057c5ed
53 changed files with 4207 additions and 1884 deletions
+6 -36
@@ -48,7 +48,7 @@ use polkadot_node_subsystem_util::{
request_validators, Validator,
};
use polkadot_primitives::{
BackedCandidate, CandidateCommitments, CandidateHash, CandidateReceipt, CollatorId,
BackedCandidate, CandidateCommitments, CandidateHash, CandidateReceipt,
CommittedCandidateReceipt, CoreIndex, CoreState, Hash, Id as ParaId, PvfExecTimeoutKind,
SigningContext, ValidatorId, ValidatorIndex, ValidatorSignature, ValidityAttestation,
};
@@ -354,7 +354,7 @@ async fn handle_active_leaves_update<Context>(
let group_index = group_rotation_info.group_for_core(core_index, n_cores);
if let Some(g) = validator_groups.get(group_index.0 as usize) {
if validator.as_ref().map_or(false, |v| g.contains(&v.index())) {
assignment = Some((scheduled.para_id, scheduled.collator));
assignment = Some(scheduled.para_id);
}
groups.insert(scheduled.para_id, g.clone());
}
@@ -363,15 +363,15 @@ async fn handle_active_leaves_update<Context>(
let table_context = TableContext { groups, validators, validator };
let (assignment, required_collator) = match assignment {
let assignment = match assignment {
None => {
assignments_span.add_string_tag("assigned", "false");
(None, None)
None
},
Some((assignment, required_collator)) => {
Some(assignment) => {
assignments_span.add_string_tag("assigned", "true");
assignments_span.add_para_id(assignment);
(Some(assignment), required_collator)
Some(assignment)
},
};
@@ -381,7 +381,6 @@ async fn handle_active_leaves_update<Context>(
let job = CandidateBackingJob {
parent,
assignment,
required_collator,
issued_statements: HashSet::new(),
awaiting_validation: HashSet::new(),
fallbacks: HashMap::new(),
@@ -412,8 +411,6 @@ struct CandidateBackingJob<Context> {
parent: Hash,
/// The `ParaId` assigned to this validator
assignment: Option<ParaId>,
/// The collator required to author the candidate, if any.
required_collator: Option<CollatorId>,
/// Spans for all candidates that are not yet backable.
unbacked_candidates: HashMap<CandidateHash, jaeger::Span>,
/// We issued `Seconded`, `Valid` or `Invalid` statements on about these candidates.
@@ -913,21 +910,6 @@ impl<Context> CandidateBackingJob<Context> {
candidate: &CandidateReceipt,
pov: Arc<PoV>,
) -> Result<(), Error> {
// Check that candidate is collated by the right collator.
if self
.required_collator
.as_ref()
.map_or(false, |c| c != &candidate.descriptor().collator)
{
// Break cycle - bounded as there is only one candidate to
// second per block.
ctx.send_unbounded_message(CollatorProtocolMessage::Invalid(
self.parent,
candidate.clone(),
));
return Ok(())
}
let candidate_hash = candidate.hash();
let mut span = self.get_unbacked_validation_child(
root_span,
@@ -1171,8 +1153,6 @@ impl<Context> CandidateBackingJob<Context> {
return Ok(())
}
let descriptor = attesting.candidate.descriptor().clone();
gum::debug!(
target: LOG_TARGET,
candidate_hash = ?candidate_hash,
@@ -1180,16 +1160,6 @@ impl<Context> CandidateBackingJob<Context> {
"Kicking off validation",
);
// Check that candidate is collated by the right collator.
if self.required_collator.as_ref().map_or(false, |c| c != &descriptor.collator) {
// If not, we've got the statement in the table but we will
// not issue validation work for it.
//
// Act as though we've issued a statement.
self.issued_statements.insert(candidate_hash);
return Ok(())
}
let bg_sender = ctx.sender().clone();
let pov = PoVData::FetchFromValidator {
from_validator: attesting.from_validator,
+3 -117
@@ -31,8 +31,8 @@ use polkadot_node_subsystem::{
};
use polkadot_node_subsystem_test_helpers as test_helpers;
use polkadot_primitives::{
CandidateDescriptor, CollatorId, GroupRotationInfo, HeadData, PersistedValidationData,
PvfExecTimeoutKind, ScheduledCore,
CandidateDescriptor, GroupRotationInfo, HeadData, PersistedValidationData, PvfExecTimeoutKind,
ScheduledCore,
};
use sp_application_crypto::AppCrypto;
use sp_keyring::Sr25519Keyring;
@@ -98,14 +98,10 @@ impl Default for TestState {
let group_rotation_info =
GroupRotationInfo { session_start_block: 0, group_rotation_frequency: 100, now: 1 };
let thread_collator: CollatorId = Sr25519Keyring::Two.public().into();
let availability_cores = vec![
CoreState::Scheduled(ScheduledCore { para_id: chain_a, collator: None }),
CoreState::Scheduled(ScheduledCore { para_id: chain_b, collator: None }),
CoreState::Scheduled(ScheduledCore {
para_id: thread_a,
collator: Some(thread_collator.clone()),
}),
CoreState::Scheduled(ScheduledCore { para_id: thread_a, collator: None }),
];
let mut head_data = HashMap::new();
@@ -1186,116 +1182,6 @@ fn backing_works_after_failed_validation() {
});
}
// Test that a `CandidateBackingMessage::Second` issues validation work
// and in case validation is successful issues a `StatementDistributionMessage`.
#[test]
fn backing_doesnt_second_wrong_collator() {
let mut test_state = TestState::default();
test_state.availability_cores[0] = CoreState::Scheduled(ScheduledCore {
para_id: ParaId::from(1),
collator: Some(Sr25519Keyring::Bob.public().into()),
});
test_harness(test_state.keystore.clone(), |mut virtual_overseer| async move {
test_startup(&mut virtual_overseer, &test_state).await;
let pov = PoV { block_data: BlockData(vec![42, 43, 44]) };
let expected_head_data = test_state.head_data.get(&test_state.chain_ids[0]).unwrap();
let pov_hash = pov.hash();
let candidate = TestCandidateBuilder {
para_id: test_state.chain_ids[0],
relay_parent: test_state.relay_parent,
pov_hash,
head_data: expected_head_data.clone(),
erasure_root: make_erasure_root(&test_state, pov.clone()),
}
.build();
let second = CandidateBackingMessage::Second(
test_state.relay_parent,
candidate.to_plain(),
pov.clone(),
);
virtual_overseer.send(FromOrchestra::Communication { msg: second }).await;
assert_matches!(
virtual_overseer.recv().await,
AllMessages::CollatorProtocol(
CollatorProtocolMessage::Invalid(parent, c)
) if parent == test_state.relay_parent && c == candidate.to_plain() => {
}
);
virtual_overseer
.send(FromOrchestra::Signal(OverseerSignal::ActiveLeaves(
ActiveLeavesUpdate::stop_work(test_state.relay_parent),
)))
.await;
virtual_overseer
});
}
#[test]
fn validation_work_ignores_wrong_collator() {
let mut test_state = TestState::default();
test_state.availability_cores[0] = CoreState::Scheduled(ScheduledCore {
para_id: ParaId::from(1),
collator: Some(Sr25519Keyring::Bob.public().into()),
});
test_harness(test_state.keystore.clone(), |mut virtual_overseer| async move {
test_startup(&mut virtual_overseer, &test_state).await;
let pov = PoV { block_data: BlockData(vec![1, 2, 3]) };
let pov_hash = pov.hash();
let expected_head_data = test_state.head_data.get(&test_state.chain_ids[0]).unwrap();
let candidate_a = TestCandidateBuilder {
para_id: test_state.chain_ids[0],
relay_parent: test_state.relay_parent,
pov_hash,
head_data: expected_head_data.clone(),
erasure_root: make_erasure_root(&test_state, pov.clone()),
}
.build();
let public2 = Keystore::sr25519_generate_new(
&*test_state.keystore,
ValidatorId::ID,
Some(&test_state.validators[2].to_seed()),
)
.expect("Insert key into keystore");
let seconding = SignedFullStatement::sign(
&test_state.keystore,
Statement::Seconded(candidate_a.clone()),
&test_state.signing_context,
ValidatorIndex(2),
&public2.into(),
)
.ok()
.flatten()
.expect("should be signed");
let statement =
CandidateBackingMessage::Statement(test_state.relay_parent, seconding.clone());
virtual_overseer.send(FromOrchestra::Communication { msg: statement }).await;
// The statement will be ignored because it has the wrong collator.
virtual_overseer
.send(FromOrchestra::Signal(OverseerSignal::ActiveLeaves(
ActiveLeavesUpdate::stop_work(test_state.relay_parent),
)))
.await;
virtual_overseer
});
}
#[test]
fn candidate_backing_reorders_votes() {
use sp_core::Encode;
@@ -921,6 +921,7 @@ async fn process_incoming_peer_message<Context>(
.span_per_relay_parent
.get(&relay_parent)
.map(|s| s.child("advertise-collation"));
if !state.view.contains(&relay_parent) {
gum::debug!(
target: LOG_TARGET,
+1 -4
@@ -211,8 +211,7 @@ fn default_parachains_host_configuration(
max_pov_size: MAX_POV_SIZE,
max_head_data_size: 32 * 1024,
group_rotation_frequency: 20,
chain_availability_period: 4,
thread_availability_period: 4,
paras_availability_period: 4,
max_upward_queue_count: 8,
max_upward_queue_size: 1024 * 1024,
max_downward_message_size: 1024 * 1024,
@@ -223,10 +222,8 @@ fn default_parachains_host_configuration(
hrmp_channel_max_capacity: 8,
hrmp_channel_max_total_size: 8 * 1024,
hrmp_max_parachain_inbound_channels: 4,
hrmp_max_parathread_inbound_channels: 4,
hrmp_channel_max_message_size: 1024 * 1024,
hrmp_max_parachain_outbound_channels: 4,
hrmp_max_parathread_outbound_channels: 4,
hrmp_max_message_num_per_candidate: 5,
dispute_period: 6,
no_show_slots: 2,
+1 -2
@@ -175,8 +175,7 @@ fn polkadot_testnet_genesis(
max_pov_size: MAX_POV_SIZE,
max_head_data_size: 32 * 1024,
group_rotation_frequency: 20,
chain_availability_period: 4,
thread_availability_period: 4,
paras_availability_period: 4,
no_show_slots: 10,
minimum_validation_upgrade_delay: 5,
..Default::default()
+2 -1
@@ -56,7 +56,8 @@ pub use v5::{
UpgradeRestriction, UpwardMessage, ValidDisputeStatementKind, ValidationCode,
ValidationCodeHash, ValidatorId, ValidatorIndex, ValidatorSignature, ValidityAttestation,
ValidityError, ASSIGNMENT_KEY_TYPE_ID, LOWEST_PUBLIC_ID, MAX_CODE_SIZE, MAX_HEAD_DATA_SIZE,
MAX_POV_SIZE, PARACHAINS_INHERENT_IDENTIFIER, PARACHAIN_KEY_TYPE_ID,
MAX_POV_SIZE, ON_DEMAND_DEFAULT_QUEUE_MAX_SIZE, PARACHAINS_INHERENT_IDENTIFIER,
PARACHAIN_KEY_TYPE_ID,
};
#[cfg(feature = "std")]
+62 -13
@@ -385,6 +385,11 @@ pub const MAX_HEAD_DATA_SIZE: u32 = 1 * 1024 * 1024;
// NOTE: This value is used in the runtime so be careful when changing it.
pub const MAX_POV_SIZE: u32 = 5 * 1024 * 1024;
/// Default queue size we use for the on-demand order book.
///
/// Can be adjusted in configuration.
pub const ON_DEMAND_DEFAULT_QUEUE_MAX_SIZE: u32 = 10_000;
/// The public key of a keypair used by a validator for determining assignments
/// to approve included parachain candidates.
mod assignment_app {
@@ -809,28 +814,70 @@ impl TypeIndex for GroupIndex {
}
/// A claim on authoring the next block for a given parathread.
#[derive(Clone, Encode, Decode, TypeInfo, RuntimeDebug)]
#[cfg_attr(feature = "std", derive(PartialEq))]
pub struct ParathreadClaim(pub Id, pub CollatorId);
#[derive(Clone, Encode, Decode, TypeInfo, PartialEq, RuntimeDebug)]
pub struct ParathreadClaim(pub Id, pub Option<CollatorId>);
/// An entry tracking a claim to ensure it does not pass the maximum number of retries.
#[derive(Clone, Encode, Decode, TypeInfo, RuntimeDebug)]
#[cfg_attr(feature = "std", derive(PartialEq))]
#[derive(Clone, Encode, Decode, TypeInfo, PartialEq, RuntimeDebug)]
pub struct ParathreadEntry {
/// The claim.
pub claim: ParathreadClaim,
/// Number of retries.
/// Number of retries
pub retries: u32,
}
/// An assignment for a parachain scheduled to be backed and included in a relay chain block.
#[derive(Clone, Encode, Decode, PartialEq, TypeInfo, RuntimeDebug)]
pub struct Assignment {
/// Assignment's ParaId
pub para_id: Id,
}
impl Assignment {
/// Create a new `Assignment`.
pub fn new(para_id: Id) -> Self {
Self { para_id }
}
}
/// An entry tracking a para.
#[derive(Clone, Encode, Decode, TypeInfo, PartialEq, RuntimeDebug)]
pub struct ParasEntry<N = BlockNumber> {
/// The `Assignment`
pub assignment: Assignment,
/// The number of times the entry has timed out in availability.
pub availability_timeouts: u32,
/// The block height where this entry becomes invalid.
pub ttl: N,
}
impl<N> ParasEntry<N> {
/// Return `Id` from the underlying `Assignment`.
pub fn para_id(&self) -> Id {
self.assignment.para_id
}
/// Create a new `ParasEntry`.
pub fn new(assignment: Assignment, now: N) -> Self {
ParasEntry { assignment, availability_timeouts: 0, ttl: now }
}
}
/// What is occupying a specific availability core.
#[derive(Clone, Encode, Decode, TypeInfo, RuntimeDebug)]
#[cfg_attr(feature = "std", derive(PartialEq))]
pub enum CoreOccupied {
/// A parathread.
Parathread(ParathreadEntry),
/// A parachain.
Parachain,
pub enum CoreOccupied<N> {
/// The core is not occupied.
Free,
/// A paras.
Paras(ParasEntry<N>),
}
impl<N> CoreOccupied<N> {
/// Is core free?
pub fn is_free(&self) -> bool {
matches!(self, Self::Free)
}
}
/// A helper data-type for tracking validator-group rotations.
@@ -962,7 +1009,9 @@ impl<H, N> OccupiedCore<H, N> {
pub struct ScheduledCore {
/// The ID of a para scheduled.
pub para_id: Id,
/// The collator required to author the block, if any.
/// DEPRECATED: see: https://github.com/paritytech/polkadot/issues/7575
///
/// Will be removed in a future version.
pub collator: Option<CollatorId>,
}
@@ -992,7 +1041,7 @@ impl<N> CoreState<N> {
pub fn para_id(&self) -> Option<Id> {
match self {
Self::Occupied(ref core) => Some(core.para_id()),
Self::Scheduled(ScheduledCore { para_id, .. }) => Some(*para_id),
Self::Scheduled(core) => Some(core.para_id),
Self::Free => None,
}
}
+9 -1
@@ -39,6 +39,7 @@ use scale_info::TypeInfo;
use sp_std::{cmp::Ordering, collections::btree_map::BTreeMap, prelude::*};
use runtime_parachains::{
assigner_parachains as parachains_assigner_parachains,
configuration as parachains_configuration, disputes as parachains_disputes,
disputes::slashing as parachains_slashing,
dmp as parachains_dmp, hrmp as parachains_hrmp, inclusion as parachains_inclusion,
@@ -1166,7 +1167,11 @@ impl parachains_paras_inherent::Config for Runtime {
type WeightInfo = weights::runtime_parachains_paras_inherent::WeightInfo<Runtime>;
}
impl parachains_scheduler::Config for Runtime {}
impl parachains_scheduler::Config for Runtime {
type AssignmentProvider = ParaAssignmentProvider;
}
impl parachains_assigner_parachains::Config for Runtime {}
impl parachains_initializer::Config for Runtime {
type Randomness = pallet_babe::RandomnessFromOneEpochAgo<Runtime>;
@@ -1470,6 +1475,7 @@ construct_runtime! {
ParaSessionInfo: parachains_session_info::{Pallet, Storage} = 61,
ParasDisputes: parachains_disputes::{Pallet, Call, Storage, Event<T>} = 62,
ParasSlashing: parachains_slashing::{Pallet, Call, Storage, ValidateUnsigned} = 63,
ParaAssignmentProvider: parachains_assigner_parachains::{Pallet, Storage} = 64,
// Parachain Onboarding Pallets. Start indices at 70 to leave room.
Registrar: paras_registrar::{Pallet, Call, Storage, Event<T>} = 70,
@@ -1538,6 +1544,8 @@ pub mod migrations {
>,
pallet_im_online::migration::v1::Migration<Runtime>,
parachains_configuration::migration::v7::MigrateToV7<Runtime>,
parachains_scheduler::migration::v1::MigrateToV1<Runtime>,
parachains_configuration::migration::v8::MigrateToV8<Runtime>,
);
}
@@ -17,27 +17,25 @@
//! Autogenerated weights for `runtime_parachains::configuration`
//!
//! THIS FILE WAS AUTO-GENERATED USING THE SUBSTRATE BENCHMARK CLI VERSION 4.0.0-dev
//! DATE: 2023-06-19, STEPS: `50`, REPEAT: `20`, LOW RANGE: `[]`, HIGH RANGE: `[]`
//! DATE: 2023-08-11, STEPS: `50`, REPEAT: `20`, LOW RANGE: `[]`, HIGH RANGE: `[]`
//! WORST CASE MAP SIZE: `1000000`
//! HOSTNAME: `runner-e8ezs4ez-project-163-concurrent-0`, CPU: `Intel(R) Xeon(R) CPU @ 2.60GHz`
//! EXECUTION: Some(Wasm), WASM-EXECUTION: Compiled, CHAIN: Some("kusama-dev"), DB CACHE: 1024
//! HOSTNAME: `runner-fljshgub-project-163-concurrent-0`, CPU: `Intel(R) Xeon(R) CPU @ 2.60GHz`
//! WASM-EXECUTION: `Compiled`, CHAIN: `Some("kusama-dev")`, DB CACHE: 1024
// Executed Command:
// ./target/production/polkadot
// target/production/polkadot
// benchmark
// pallet
// --chain=kusama-dev
// --steps=50
// --repeat=20
// --no-storage-info
// --no-median-slopes
// --no-min-squares
// --pallet=runtime_parachains::configuration
// --extrinsic=*
// --execution=wasm
// --wasm-execution=compiled
// --heap-pages=4096
// --json-file=/builds/parity/mirrors/polkadot/.git/.artifacts/bench.json
// --pallet=runtime_parachains::configuration
// --chain=kusama-dev
// --header=./file_header.txt
// --output=./runtime/kusama/src/weights/runtime_parachains_configuration.rs
// --output=./runtime/kusama/src/weights/
#![cfg_attr(rustfmt, rustfmt_skip)]
#![allow(unused_parens)]
@@ -50,56 +48,56 @@ use core::marker::PhantomData;
/// Weight functions for `runtime_parachains::configuration`.
pub struct WeightInfo<T>(PhantomData<T>);
impl<T: frame_system::Config> runtime_parachains::configuration::WeightInfo for WeightInfo<T> {
/// Storage: Configuration PendingConfigs (r:1 w:1)
/// Proof Skipped: Configuration PendingConfigs (max_values: Some(1), max_size: None, mode: Measured)
/// Storage: Configuration BypassConsistencyCheck (r:1 w:0)
/// Proof Skipped: Configuration BypassConsistencyCheck (max_values: Some(1), max_size: None, mode: Measured)
/// Storage: ParasShared CurrentSessionIndex (r:1 w:0)
/// Proof Skipped: ParasShared CurrentSessionIndex (max_values: Some(1), max_size: None, mode: Measured)
/// Storage: `Configuration::PendingConfigs` (r:1 w:1)
/// Proof: `Configuration::PendingConfigs` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)
/// Storage: `Configuration::BypassConsistencyCheck` (r:1 w:0)
/// Proof: `Configuration::BypassConsistencyCheck` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)
/// Storage: `ParasShared::CurrentSessionIndex` (r:1 w:0)
/// Proof: `ParasShared::CurrentSessionIndex` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)
fn set_config_with_block_number() -> Weight {
// Proof Size summary in bytes:
// Measured: `127`
// Estimated: `1612`
// Minimum execution time: 9_448_000 picoseconds.
Weight::from_parts(9_847_000, 0)
// Minimum execution time: 9_186_000 picoseconds.
Weight::from_parts(9_567_000, 0)
.saturating_add(Weight::from_parts(0, 1612))
.saturating_add(T::DbWeight::get().reads(3))
.saturating_add(T::DbWeight::get().writes(1))
}
/// Storage: `Configuration::PendingConfigs` (r:1 w:1)
/// Proof: `Configuration::PendingConfigs` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)
/// Storage: `Configuration::BypassConsistencyCheck` (r:1 w:0)
/// Proof: `Configuration::BypassConsistencyCheck` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)
/// Storage: `ParasShared::CurrentSessionIndex` (r:1 w:0)
/// Proof: `ParasShared::CurrentSessionIndex` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)
fn set_config_with_u32() -> Weight {
// Proof Size summary in bytes:
// Measured: `127`
// Estimated: `1612`
// Minimum execution time: 9_388_000 picoseconds.
Weight::from_parts(9_723_000, 0)
.saturating_add(Weight::from_parts(0, 1612))
.saturating_add(T::DbWeight::get().reads(3))
.saturating_add(T::DbWeight::get().writes(1))
}
/// Storage: `Configuration::PendingConfigs` (r:1 w:1)
/// Proof: `Configuration::PendingConfigs` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)
/// Storage: `Configuration::BypassConsistencyCheck` (r:1 w:0)
/// Proof: `Configuration::BypassConsistencyCheck` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)
/// Storage: `ParasShared::CurrentSessionIndex` (r:1 w:0)
/// Proof: `ParasShared::CurrentSessionIndex` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)
fn set_config_with_option_u32() -> Weight {
// Proof Size summary in bytes:
// Measured: `127`
// Estimated: `1612`
// Minimum execution time: 9_264_000 picoseconds.
Weight::from_parts(9_477_000, 0)
.saturating_add(Weight::from_parts(0, 1612))
.saturating_add(T::DbWeight::get().reads(3))
.saturating_add(T::DbWeight::get().writes(1))
}
/// Storage: `Benchmark::Override` (r:0 w:0)
/// Proof: `Benchmark::Override` (`max_values`: None, `max_size`: None, mode: `Measured`)
fn set_hrmp_open_request_ttl() -> Weight {
// Proof Size summary in bytes:
// Measured: `0`
@@ -108,34 +106,50 @@ impl<T: frame_system::Config> runtime_parachains::configuration::WeightInfo for
Weight::from_parts(2_000_000_000_000, 0)
.saturating_add(Weight::from_parts(0, 0))
}
/// Storage: `Configuration::PendingConfigs` (r:1 w:1)
/// Proof: `Configuration::PendingConfigs` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)
/// Storage: `Configuration::BypassConsistencyCheck` (r:1 w:0)
/// Proof: `Configuration::BypassConsistencyCheck` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)
/// Storage: `ParasShared::CurrentSessionIndex` (r:1 w:0)
/// Proof: `ParasShared::CurrentSessionIndex` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)
fn set_config_with_balance() -> Weight {
// Proof Size summary in bytes:
// Measured: `127`
// Estimated: `1612`
// Minimum execution time: 9_282_000 picoseconds.
Weight::from_parts(9_641_000, 0)
.saturating_add(Weight::from_parts(0, 1612))
.saturating_add(T::DbWeight::get().reads(3))
.saturating_add(T::DbWeight::get().writes(1))
}
/// Storage: `Configuration::PendingConfigs` (r:1 w:1)
/// Proof: `Configuration::PendingConfigs` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)
/// Storage: `Configuration::BypassConsistencyCheck` (r:1 w:0)
/// Proof: `Configuration::BypassConsistencyCheck` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)
/// Storage: `ParasShared::CurrentSessionIndex` (r:1 w:0)
/// Proof: `ParasShared::CurrentSessionIndex` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)
fn set_config_with_executor_params() -> Weight {
// Proof Size summary in bytes:
// Measured: `127`
// Estimated: `1612`
// Minimum execution time: 9_937_000 picoseconds.
Weight::from_parts(10_445_000, 0)
.saturating_add(Weight::from_parts(0, 1612))
.saturating_add(T::DbWeight::get().reads(3))
.saturating_add(T::DbWeight::get().writes(1))
}
/// Storage: `Configuration::PendingConfigs` (r:1 w:1)
/// Proof: `Configuration::PendingConfigs` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)
/// Storage: `Configuration::BypassConsistencyCheck` (r:1 w:0)
/// Proof: `Configuration::BypassConsistencyCheck` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)
/// Storage: `ParasShared::CurrentSessionIndex` (r:1 w:0)
/// Proof: `ParasShared::CurrentSessionIndex` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)
fn set_config_with_perbill() -> Weight {
// Proof Size summary in bytes:
// Measured: `127`
// Estimated: `1612`
// Minimum execution time: 9_106_000 picoseconds.
Weight::from_parts(9_645_000, 0)
.saturating_add(Weight::from_parts(0, 1612))
.saturating_add(T::DbWeight::get().reads(3))
.saturating_add(T::DbWeight::get().writes(1))
@@ -0,0 +1,111 @@
// Copyright (C) Parity Technologies (UK) Ltd.
// This file is part of Polkadot.
// Polkadot is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// Polkadot is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with Polkadot. If not, see <http://www.gnu.org/licenses/>.
//! The Polkadot multiplexing assignment provider.
//! Provides blockspace assignments for both bulk and on demand parachains.
use frame_system::pallet_prelude::BlockNumberFor;
use primitives::{v5::Assignment, CoreIndex, Id as ParaId};
use crate::{
configuration, paras,
scheduler::common::{AssignmentProvider, AssignmentProviderConfig},
};
pub use pallet::*;
#[frame_support::pallet]
pub mod pallet {
use super::*;
#[pallet::pallet]
#[pallet::without_storage_info]
pub struct Pallet<T>(_);
#[pallet::config]
pub trait Config: frame_system::Config + configuration::Config + paras::Config {
type ParachainsAssignmentProvider: AssignmentProvider<BlockNumberFor<Self>>;
type OnDemandAssignmentProvider: AssignmentProvider<BlockNumberFor<Self>>;
}
}
// Aliases to make the impl more readable.
type ParachainAssigner<T> = <T as Config>::ParachainsAssignmentProvider;
type OnDemandAssigner<T> = <T as Config>::OnDemandAssignmentProvider;
impl<T: Config> Pallet<T> {
// Helper fn for the AssignmentProvider implementation.
// Assumes that the first allocation of cores is to bulk parachains.
// This function will return false if there are no cores assigned to the bulk parachain
// assigner.
fn is_bulk_core(core_idx: &CoreIndex) -> bool {
let parachain_cores =
<ParachainAssigner<T> as AssignmentProvider<BlockNumberFor<T>>>::session_core_count();
(0..parachain_cores).contains(&core_idx.0)
}
}
impl<T: Config> AssignmentProvider<BlockNumberFor<T>> for Pallet<T> {
fn session_core_count() -> u32 {
let parachain_cores =
<ParachainAssigner<T> as AssignmentProvider<BlockNumberFor<T>>>::session_core_count();
let on_demand_cores =
<OnDemandAssigner<T> as AssignmentProvider<BlockNumberFor<T>>>::session_core_count();
parachain_cores.saturating_add(on_demand_cores)
}
/// Pops an `Assignment` from a specified `CoreIndex`
fn pop_assignment_for_core(
core_idx: CoreIndex,
concluded_para: Option<ParaId>,
) -> Option<Assignment> {
if Pallet::<T>::is_bulk_core(&core_idx) {
<ParachainAssigner<T> as AssignmentProvider<BlockNumberFor<T>>>::pop_assignment_for_core(
core_idx,
concluded_para,
)
} else {
<OnDemandAssigner<T> as AssignmentProvider<BlockNumberFor<T>>>::pop_assignment_for_core(
core_idx,
concluded_para,
)
}
}
fn push_assignment_for_core(core_idx: CoreIndex, assignment: Assignment) {
if Pallet::<T>::is_bulk_core(&core_idx) {
<ParachainAssigner<T> as AssignmentProvider<BlockNumberFor<T>>>::push_assignment_for_core(
core_idx, assignment,
)
} else {
<OnDemandAssigner<T> as AssignmentProvider<BlockNumberFor<T>>>::push_assignment_for_core(
core_idx, assignment,
)
}
}
fn get_provider_config(core_idx: CoreIndex) -> AssignmentProviderConfig<BlockNumberFor<T>> {
if Pallet::<T>::is_bulk_core(&core_idx) {
<ParachainAssigner<T> as AssignmentProvider<BlockNumberFor<T>>>::get_provider_config(
core_idx,
)
} else {
<OnDemandAssigner<T> as AssignmentProvider<BlockNumberFor<T>>>::get_provider_config(
core_idx,
)
}
}
}
@@ -0,0 +1,109 @@
// Copyright (C) Parity Technologies (UK) Ltd.
// This file is part of Polkadot.
// Polkadot is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// Polkadot is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with Polkadot. If not, see <http://www.gnu.org/licenses/>.
//! On demand assigner pallet benchmarking.
#![cfg(feature = "runtime-benchmarks")]
use super::{Pallet, *};
use crate::{
configuration::{HostConfiguration, Pallet as ConfigurationPallet},
paras::{Pallet as ParasPallet, ParaGenesisArgs, ParaKind, ParachainsCache},
shared::Pallet as ParasShared,
};
use frame_benchmarking::v2::*;
use frame_system::RawOrigin;
use sp_runtime::traits::Bounded;
use primitives::{
HeadData, Id as ParaId, SessionIndex, ValidationCode, ON_DEMAND_DEFAULT_QUEUE_MAX_SIZE,
};
// Constants for the benchmarking
const SESSION_INDEX: SessionIndex = 1;
// Initialize a parathread for benchmarking.
pub fn init_parathread<T>(para_id: ParaId)
where
T: Config + crate::paras::Config + crate::shared::Config,
{
ParasShared::<T>::set_session_index(SESSION_INDEX);
let mut config = HostConfiguration::default();
config.on_demand_cores = 1;
ConfigurationPallet::<T>::force_set_active_config(config);
let mut parachains = ParachainsCache::new();
ParasPallet::<T>::initialize_para_now(
&mut parachains,
para_id,
&ParaGenesisArgs {
para_kind: ParaKind::Parathread,
genesis_head: HeadData(vec![1, 2, 3, 4]),
validation_code: ValidationCode(vec![1, 2, 3, 4]),
},
);
}
#[benchmarks]
mod benchmarks {
/// We want to fill the queue to the maximum, so exactly one more item fits.
const MAX_FILL_BENCH: u32 = ON_DEMAND_DEFAULT_QUEUE_MAX_SIZE.saturating_sub(1);
use super::*;
#[benchmark]
fn place_order_keep_alive(s: Linear<1, MAX_FILL_BENCH>) {
// Setup
let caller = whitelisted_caller();
let para_id = ParaId::from(111u32);
init_parathread::<T>(para_id);
T::Currency::make_free_balance_be(&caller, BalanceOf::<T>::max_value());
let assignment = Assignment::new(para_id);
for _ in 0..s {
Pallet::<T>::add_on_demand_assignment(assignment.clone(), QueuePushDirection::Back)
.unwrap();
}
#[extrinsic_call]
_(RawOrigin::Signed(caller.into()), BalanceOf::<T>::max_value(), para_id)
}
#[benchmark]
fn place_order_allow_death(s: Linear<1, MAX_FILL_BENCH>) {
// Setup
let caller = whitelisted_caller();
let para_id = ParaId::from(111u32);
init_parathread::<T>(para_id);
T::Currency::make_free_balance_be(&caller, BalanceOf::<T>::max_value());
let assignment = Assignment::new(para_id);
for _ in 0..s {
Pallet::<T>::add_on_demand_assignment(assignment.clone(), QueuePushDirection::Back)
.unwrap();
}
#[extrinsic_call]
_(RawOrigin::Signed(caller.into()), BalanceOf::<T>::max_value(), para_id)
}
impl_benchmark_test_suite!(
Pallet,
crate::mock::new_test_ext(
crate::assigner_on_demand::mock_helpers::GenesisConfigBuilder::default().build()
),
crate::mock::Test
);
}
@@ -0,0 +1,86 @@
// Copyright (C) Parity Technologies (UK) Ltd.
// This file is part of Polkadot.
// Polkadot is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// Polkadot is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with Polkadot. If not, see <http://www.gnu.org/licenses/>.
//! Helper functions for tests, also used in runtime-benchmarks.
#![cfg(test)]
use super::*;
use crate::{
mock::MockGenesisConfig,
paras::{ParaGenesisArgs, ParaKind},
};
use primitives::{Balance, HeadData, ValidationCode};
pub fn default_genesis_config() -> MockGenesisConfig {
MockGenesisConfig {
configuration: crate::configuration::GenesisConfig {
config: crate::configuration::HostConfiguration { ..Default::default() },
},
..Default::default()
}
}
#[derive(Debug)]
pub struct GenesisConfigBuilder {
pub on_demand_cores: u32,
pub on_demand_base_fee: Balance,
pub on_demand_fee_variability: Perbill,
pub on_demand_max_queue_size: u32,
pub on_demand_target_queue_utilization: Perbill,
pub onboarded_on_demand_chains: Vec<ParaId>,
}
impl Default for GenesisConfigBuilder {
fn default() -> Self {
Self {
on_demand_cores: 10,
on_demand_base_fee: 10_000,
on_demand_fee_variability: Perbill::from_percent(1),
on_demand_max_queue_size: 100,
on_demand_target_queue_utilization: Perbill::from_percent(25),
onboarded_on_demand_chains: vec![],
}
}
}
impl GenesisConfigBuilder {
pub(super) fn build(self) -> MockGenesisConfig {
let mut genesis = default_genesis_config();
let config = &mut genesis.configuration.config;
config.on_demand_cores = self.on_demand_cores;
config.on_demand_base_fee = self.on_demand_base_fee;
config.on_demand_fee_variability = self.on_demand_fee_variability;
config.on_demand_queue_max_size = self.on_demand_max_queue_size;
config.on_demand_target_queue_utilization = self.on_demand_target_queue_utilization;
let paras = &mut genesis.paras.paras;
for para_id in self.onboarded_on_demand_chains {
paras.push((
para_id,
ParaGenesisArgs {
genesis_head: HeadData::from(vec![0u8]),
validation_code: ValidationCode::from(vec![0u8]),
para_kind: ParaKind::Parathread,
},
))
}
genesis
}
}
@@ -0,0 +1,614 @@
// Copyright (C) Parity Technologies (UK) Ltd.
// This file is part of Polkadot.
// Polkadot is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// Polkadot is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with Polkadot. If not, see <http://www.gnu.org/licenses/>.
//! The parachain on demand assignment module.
//!
//! Implements a mechanism for taking in orders for pay as you go (PAYG) or on demand
//! parachain (previously parathreads) assignments. This module is not handled by the
//! initializer but is instead instantiated in the `construct_runtime` macro.
//!
//! The module currently limits parallel execution of blocks from the same `ParaId` via
//! a core affinity mechanism. As long as there exists an affinity for a `CoreIndex` for
//! a specific `ParaId`, orders for blockspace for that `ParaId` will only be assigned to
//! that `CoreIndex`. This affinity mechanism can be removed if it can be shown that parallel
//! execution is valid.
mod benchmarking;
mod mock_helpers;
#[cfg(test)]
mod tests;
use crate::{
configuration, paras,
scheduler::common::{AssignmentProvider, AssignmentProviderConfig},
};
use frame_support::{
pallet_prelude::*,
traits::{
Currency,
ExistenceRequirement::{self, AllowDeath, KeepAlive},
WithdrawReasons,
},
};
use frame_system::pallet_prelude::*;
use primitives::{v5::Assignment, CoreIndex, Id as ParaId};
use sp_runtime::{
traits::{One, SaturatedConversion},
FixedPointNumber, FixedPointOperand, FixedU128, Perbill, Saturating,
};
use sp_std::{collections::vec_deque::VecDeque, prelude::*};
const LOG_TARGET: &str = "runtime::parachains::assigner-on-demand";
pub use pallet::*;
pub trait WeightInfo {
fn place_order_allow_death(s: u32) -> Weight;
fn place_order_keep_alive(s: u32) -> Weight;
}
/// A weight info that is only suitable for testing.
pub struct TestWeightInfo;
impl WeightInfo for TestWeightInfo {
fn place_order_allow_death(_: u32) -> Weight {
Weight::MAX
}
fn place_order_keep_alive(_: u32) -> Weight {
Weight::MAX
}
}
/// Keeps track of how many assignments a scheduler currently has at a specific `CoreIndex` for a
/// specific `ParaId`.
#[derive(Encode, Decode, Default, Clone, Copy, TypeInfo)]
#[cfg_attr(test, derive(PartialEq, Debug))]
pub struct CoreAffinityCount {
core_idx: CoreIndex,
count: u32,
}
/// An indicator as to which end of the `OnDemandQueue` an assignment will be placed.
pub enum QueuePushDirection {
Back,
Front,
}
/// Shorthand for the Balance type the runtime is using.
type BalanceOf<T> =
<<T as Config>::Currency as Currency<<T as frame_system::Config>::AccountId>>::Balance;
/// Errors that can happen during spot traffic calculation.
#[derive(PartialEq)]
#[cfg_attr(feature = "std", derive(Debug))]
pub enum SpotTrafficCalculationErr {
/// The order queue capacity is at 0.
QueueCapacityIsZero,
/// The queue size is larger than the queue capacity.
QueueSizeLargerThanCapacity,
/// Arithmetic error during division, either division by 0 or over/underflow.
Division,
}
#[frame_support::pallet]
pub mod pallet {
use super::*;
#[pallet::pallet]
#[pallet::without_storage_info]
pub struct Pallet<T>(_);
#[pallet::config]
pub trait Config: frame_system::Config + configuration::Config + paras::Config {
/// The runtime's definition of an event.
type RuntimeEvent: From<Event<Self>> + IsType<<Self as frame_system::Config>::RuntimeEvent>;
/// The runtime's definition of a Currency.
type Currency: Currency<Self::AccountId>;
/// Something that provides the weight of this pallet.
type WeightInfo: WeightInfo;
/// The default value for the spot traffic multiplier.
#[pallet::constant]
type TrafficDefaultValue: Get<FixedU128>;
}
/// Creates an empty spot traffic value if one isn't present in storage already.
#[pallet::type_value]
pub fn SpotTrafficOnEmpty<T: Config>() -> FixedU128 {
T::TrafficDefaultValue::get()
}
/// Creates an empty on demand queue if one isn't present in storage already.
#[pallet::type_value]
pub fn OnDemandQueueOnEmpty<T: Config>() -> VecDeque<Assignment> {
VecDeque::new()
}
/// Keeps track of the multiplier used to calculate the current spot price for the on demand
/// assigner.
#[pallet::storage]
pub(super) type SpotTraffic<T: Config> =
StorageValue<_, FixedU128, ValueQuery, SpotTrafficOnEmpty<T>>;
/// The order storage entry. Uses a VecDeque to be able to push to the front of the
/// queue from the scheduler on session boundaries.
#[pallet::storage]
pub type OnDemandQueue<T: Config> =
StorageValue<_, VecDeque<Assignment>, ValueQuery, OnDemandQueueOnEmpty<T>>;
/// Maps a `ParaId` to `CoreIndex` and keeps track of how many assignments the scheduler has in
/// its lookahead. Keeping track of this affinity prevents parallel execution of the same
/// `ParaId` on two or more `CoreIndex`es.
#[pallet::storage]
pub(super) type ParaIdAffinity<T: Config> =
StorageMap<_, Twox256, ParaId, CoreAffinityCount, OptionQuery>;
#[pallet::event]
#[pallet::generate_deposit(pub(super) fn deposit_event)]
pub enum Event<T: Config> {
/// An order was placed at some spot price amount.
OnDemandOrderPlaced { para_id: ParaId, spot_price: BalanceOf<T> },
/// The value of the spot traffic multiplier changed.
SpotTrafficSet { traffic: FixedU128 },
}
#[pallet::error]
pub enum Error<T> {
/// The `ParaId` supplied to the `place_order` call is not a valid `ParaThread`, making the
/// call invalid.
InvalidParaId,
/// The order queue is full, `place_order` will not continue.
QueueFull,
/// The current spot price is higher than the max amount specified in the `place_order`
/// call, making it invalid.
SpotPriceHigherThanMaxAmount,
/// There are no on demand cores available. `place_order` will not add anything to the
/// queue.
NoOnDemandCores,
}
#[pallet::hooks]
impl<T: Config> Hooks<BlockNumberFor<T>> for Pallet<T> {
fn on_initialize(_now: BlockNumberFor<T>) -> Weight {
let config = <configuration::Pallet<T>>::config();
// Calculate spot price multiplier and store it.
let old_traffic = SpotTraffic::<T>::get();
match Self::calculate_spot_traffic(
old_traffic,
config.on_demand_queue_max_size,
Self::queue_size(),
config.on_demand_target_queue_utilization,
config.on_demand_fee_variability,
) {
Ok(new_traffic) => {
// Only update storage on change
if new_traffic != old_traffic {
SpotTraffic::<T>::set(new_traffic);
Pallet::<T>::deposit_event(Event::<T>::SpotTrafficSet {
traffic: new_traffic,
});
return T::DbWeight::get().reads_writes(2, 1)
}
},
Err(SpotTrafficCalculationErr::QueueCapacityIsZero) => {
log::debug!(
target: LOG_TARGET,
"Error calculating spot traffic: The order queue capacity is at 0."
);
},
Err(SpotTrafficCalculationErr::QueueSizeLargerThanCapacity) => {
log::debug!(
target: LOG_TARGET,
"Error calculating spot traffic: The queue size is larger than the queue capacity."
);
},
Err(SpotTrafficCalculationErr::Division) => {
log::debug!(
target: LOG_TARGET,
"Error calculating spot traffic: Arithmetic error during division, either division by 0 or over/underflow."
);
},
};
T::DbWeight::get().reads_writes(2, 0)
}
}
#[pallet::call]
impl<T: Config> Pallet<T> {
/// Create a single on demand core order.
/// Will use the spot price for the current block and will reap the account if needed.
///
/// Parameters:
/// - `origin`: The sender of the call, funds will be withdrawn from this account.
/// - `max_amount`: The maximum balance to withdraw from the origin to place an order.
/// - `para_id`: A `ParaId` the origin wants to provide blockspace for.
///
/// Errors:
/// - `InsufficientBalance`: from the Currency implementation
/// - `InvalidParaId`
/// - `QueueFull`
/// - `SpotPriceHigherThanMaxAmount`
/// - `NoOnDemandCores`
///
/// Events:
/// - `SpotOrderPlaced`
#[pallet::call_index(0)]
#[pallet::weight(<T as Config>::WeightInfo::place_order_allow_death(OnDemandQueue::<T>::get().len() as u32))]
pub fn place_order_allow_death(
origin: OriginFor<T>,
max_amount: BalanceOf<T>,
para_id: ParaId,
) -> DispatchResult {
let sender = ensure_signed(origin)?;
Pallet::<T>::do_place_order(sender, max_amount, para_id, AllowDeath)
}
/// Same as the [`place_order_allow_death`] call, but with a check that placing the order
/// will not reap the account.
///
/// Parameters:
/// - `origin`: The sender of the call, funds will be withdrawn from this account.
/// - `max_amount`: The maximum balance to withdraw from the origin to place an order.
/// - `para_id`: A `ParaId` the origin wants to provide blockspace for.
///
/// Errors:
/// - `InsufficientBalance`: from the Currency implementation
/// - `InvalidParaId`
/// - `QueueFull`
/// - `SpotPriceHigherThanMaxAmount`
/// - `NoOnDemandCores`
///
/// Events:
/// - `SpotOrderPlaced`
#[pallet::call_index(1)]
#[pallet::weight(<T as Config>::WeightInfo::place_order_keep_alive(OnDemandQueue::<T>::get().len() as u32))]
pub fn place_order_keep_alive(
origin: OriginFor<T>,
max_amount: BalanceOf<T>,
para_id: ParaId,
) -> DispatchResult {
let sender = ensure_signed(origin)?;
Pallet::<T>::do_place_order(sender, max_amount, para_id, KeepAlive)
}
}
}
impl<T: Config> Pallet<T>
where
BalanceOf<T>: FixedPointOperand,
{
/// Helper function for `place_order_*` calls. Used to differentiate between placing orders
/// with a keep alive check or to allow the account to be reaped.
///
/// Parameters:
/// - `sender`: The sender of the call, funds will be withdrawn from this account.
/// - `max_amount`: The maximum balance to withdraw from the origin to place an order.
/// - `para_id`: A `ParaId` the origin wants to provide blockspace for.
/// - `existence_requirement`: Whether or not to ensure that the account will not be reaped.
///
/// Errors:
/// - `InsufficientBalance`: from the Currency implementation
/// - `InvalidParaId`
/// - `QueueFull`
/// - `SpotPriceHigherThanMaxAmount`
/// - `NoOnDemandCores`
///
/// Events:
/// - `SpotOrderPlaced`
fn do_place_order(
sender: <T as frame_system::Config>::AccountId,
max_amount: BalanceOf<T>,
para_id: ParaId,
existence_requirement: ExistenceRequirement,
) -> DispatchResult {
let config = <configuration::Pallet<T>>::config();
// Are there any schedulable cores in this session
ensure!(config.on_demand_cores > 0, Error::<T>::NoOnDemandCores);
// Traffic always falls back to 1.0
let traffic = SpotTraffic::<T>::get();
// Calculate spot price
let spot_price: BalanceOf<T> =
traffic.saturating_mul_int(config.on_demand_base_fee.saturated_into::<BalanceOf<T>>());
// Is the current price higher than `max_amount`
ensure!(spot_price.le(&max_amount), Error::<T>::SpotPriceHigherThanMaxAmount);
// Charge the sending account the spot price
T::Currency::withdraw(&sender, spot_price, WithdrawReasons::FEE, existence_requirement)?;
let assignment = Assignment::new(para_id);
let res = Pallet::<T>::add_on_demand_assignment(assignment, QueuePushDirection::Back);
match res {
Ok(_) => {
Pallet::<T>::deposit_event(Event::<T>::OnDemandOrderPlaced { para_id, spot_price });
return Ok(())
},
Err(err) => return Err(err),
}
}
/// The spot price multiplier. This is based on the transaction fee calculations defined in:
/// https://research.web3.foundation/Polkadot/overview/token-economics#setting-transaction-fees
///
/// Parameters:
/// - `traffic`: The previously calculated multiplier, can never go below 1.0.
/// - `queue_capacity`: The max size of the order book.
/// - `queue_size`: How many orders are currently in the order book.
/// - `target_queue_utilisation`: How much of the `queue_capacity` should ideally be occupied,
///   expressed as a `Perbill`.
/// - `variability`: A variability factor, i.e. how quickly the spot price adjusts. This number
///   can be chosen as p/(k*(1-s)), where p is the desired ratio increase in spot price over k
///   blocks and s is the `target_queue_utilisation`. A concrete example: v =
///   0.05/(20*(1-0.25)) = 0.0033.
///
/// Returns:
/// - A `FixedU128` in the range of `Config::TrafficDefaultValue` - `FixedU128::MAX` on
/// success.
///
/// Errors:
/// - `SpotTrafficCalculationErr::QueueCapacityIsZero`
/// - `SpotTrafficCalculationErr::QueueSizeLargerThanCapacity`
/// - `SpotTrafficCalculationErr::Division`
pub(crate) fn calculate_spot_traffic(
traffic: FixedU128,
queue_capacity: u32,
queue_size: u32,
target_queue_utilisation: Perbill,
variability: Perbill,
) -> Result<FixedU128, SpotTrafficCalculationErr> {
// Return early if queue has no capacity.
if queue_capacity == 0 {
return Err(SpotTrafficCalculationErr::QueueCapacityIsZero)
}
// Return early if queue size is greater than capacity.
if queue_size > queue_capacity {
return Err(SpotTrafficCalculationErr::QueueSizeLargerThanCapacity)
}
// (queue_size / queue_capacity) - target_queue_utilisation
let queue_util_ratio = FixedU128::from_rational(queue_size.into(), queue_capacity.into());
let positive = queue_util_ratio >= target_queue_utilisation.into();
let queue_util_diff = queue_util_ratio.max(target_queue_utilisation.into()) -
queue_util_ratio.min(target_queue_utilisation.into());
// variability * queue_util_diff
let var_times_qud = queue_util_diff.saturating_mul(variability.into());
// variability^2 * queue_util_diff^2
let var_times_qud_pow = var_times_qud.saturating_mul(var_times_qud);
// (variability^2 * queue_util_diff^2)/2
let div_by_two = match var_times_qud_pow.const_checked_div(2.into()) {
Some(dbt) => dbt,
None => return Err(SpotTrafficCalculationErr::Division),
};
// traffic * (1 + queue_util_diff + div_by_two)
if positive {
let new_traffic = queue_util_diff
.saturating_add(div_by_two)
.saturating_add(One::one())
.saturating_mul(traffic);
Ok(new_traffic.max(<T as Config>::TrafficDefaultValue::get()))
} else {
let new_traffic = queue_util_diff.saturating_sub(div_by_two).saturating_mul(traffic);
Ok(new_traffic.max(<T as Config>::TrafficDefaultValue::get()))
}
}
/// Adds an assignment to the on demand queue.
///
/// Parameters:
/// - `assignment`: The on demand assignment to add to the queue.
/// - `location`: Whether to push this entry to the back or the front of the queue. Pushing an
/// entry to the front of the queue is only used when the scheduler wants to push back an
/// entry it has already popped.
/// Returns:
/// - The unit type on success.
///
/// Errors:
/// - `InvalidParaId`
/// - `QueueFull`
pub fn add_on_demand_assignment(
assignment: Assignment,
location: QueuePushDirection,
) -> Result<(), DispatchError> {
// Only parathreads are valid `ParaId`s for on demand assignments.
ensure!(<paras::Pallet<T>>::is_parathread(assignment.para_id), Error::<T>::InvalidParaId);
let config = <configuration::Pallet<T>>::config();
OnDemandQueue::<T>::try_mutate(|queue| {
// Abort transaction if queue is too large
ensure!(Self::queue_size() < config.on_demand_queue_max_size, Error::<T>::QueueFull);
match location {
QueuePushDirection::Back => queue.push_back(assignment),
QueuePushDirection::Front => queue.push_front(assignment),
};
Ok(())
})
}
/// Get the size of the on demand queue.
///
/// Returns:
/// - The size of the on demand queue.
fn queue_size() -> u32 {
let config = <configuration::Pallet<T>>::config();
match OnDemandQueue::<T>::get().len().try_into() {
Ok(size) => return size,
Err(_) => {
log::debug!(
target: LOG_TARGET,
"Failed to fetch the on demand queue size, returning the max size."
);
return config.on_demand_queue_max_size
},
}
}
/// Getter for the order queue.
pub fn get_queue() -> VecDeque<Assignment> {
OnDemandQueue::<T>::get()
}
/// Getter for the affinity tracker.
pub fn get_affinity_map(para_id: ParaId) -> Option<CoreAffinityCount> {
ParaIdAffinity::<T>::get(para_id)
}
/// Decreases the affinity of a `ParaId` to a specified `CoreIndex`.
/// Subtracts from the count of the `CoreAffinityCount` if an entry is found and the core_idx
/// matches. When the count reaches 0, the entry is removed.
/// A non-existent entry is a no-op.
fn decrease_affinity(para_id: ParaId, core_idx: CoreIndex) {
ParaIdAffinity::<T>::mutate(para_id, |maybe_affinity| {
if let Some(affinity) = maybe_affinity {
if affinity.core_idx == core_idx {
let new_count = affinity.count.saturating_sub(1);
if new_count > 0 {
*maybe_affinity = Some(CoreAffinityCount { core_idx, count: new_count });
} else {
*maybe_affinity = None;
}
}
}
});
}
/// Increases the affinity of a `ParaId` to a specified `CoreIndex`.
/// Adds to the count of the `CoreAffinityCount` if an entry is found and the core_idx matches.
/// A non-existent entry will be initialized with a count of 1 and uses the supplied
/// `CoreIndex`.
fn increase_affinity(para_id: ParaId, core_idx: CoreIndex) {
ParaIdAffinity::<T>::mutate(para_id, |maybe_affinity| match maybe_affinity {
Some(affinity) =>
if affinity.core_idx == core_idx {
*maybe_affinity = Some(CoreAffinityCount {
core_idx,
count: affinity.count.saturating_add(1),
});
},
None => {
*maybe_affinity = Some(CoreAffinityCount { core_idx, count: 1 });
},
})
}
}
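The affinity bookkeeping above can be modeled with a plain `HashMap` outside the runtime. This is a hedged sketch of the `increase_affinity`/`decrease_affinity` semantics (bump only on a matching core, drop the entry at zero); the free functions `increase` and `decrease` are illustrative names, not the pallet's storage API.

```rust
use std::collections::HashMap;

// A para's affinity: how many of its queue entries are tied to one core.
struct CoreAffinityCount {
    core_idx: u32,
    count: u32,
}

fn increase(map: &mut HashMap<u32, CoreAffinityCount>, para: u32, core: u32) {
    let entry = map.entry(para).or_insert(CoreAffinityCount { core_idx: core, count: 0 });
    // Only bump the count when the core matches the recorded affinity;
    // a fresh entry always matches and ends up with a count of 1.
    if entry.core_idx == core {
        entry.count = entry.count.saturating_add(1);
    }
}

fn decrease(map: &mut HashMap<u32, CoreAffinityCount>, para: u32, core: u32) {
    let remove = match map.get_mut(&para) {
        Some(a) if a.core_idx == core => {
            a.count = a.count.saturating_sub(1);
            a.count == 0
        },
        // Mismatched core or non-existent entry: a no-op, as in the pallet.
        _ => false,
    };
    // Drop the entry entirely once its count reaches zero.
    if remove {
        map.remove(&para);
    }
}

fn main() {
    let mut m: HashMap<u32, CoreAffinityCount> = HashMap::new();
    increase(&mut m, 111, 0);
    increase(&mut m, 111, 0);
    increase(&mut m, 111, 1); // mismatched core: ignored
    assert_eq!(m[&111].count, 2);
    decrease(&mut m, 111, 1); // mismatched core: also ignored
    decrease(&mut m, 111, 0);
    decrease(&mut m, 111, 0);
    assert!(m.get(&111).is_none()); // entry cleared at zero
}
```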
impl<T: Config> AssignmentProvider<BlockNumberFor<T>> for Pallet<T> {
fn session_core_count() -> u32 {
let config = <configuration::Pallet<T>>::config();
config.on_demand_cores
}
/// Take the next queued entry that is available for a given core index.
/// Invalidates and removes orders with a `para_id` that is not `ParaLifecycle::Parathread`,
/// but only within the `0..P` slice of the order queue, where `P` is the position of the
/// entry that is removed from the queue.
///
/// Parameters:
/// - `core_idx`: The core index
/// - `previous_para`: The `ParaId` that was previously processed on the requested core. Is
/// `None` if nothing was processed on the core.
fn pop_assignment_for_core(
core_idx: CoreIndex,
previous_para: Option<ParaId>,
) -> Option<Assignment> {
// Only decrease the affinity of the previous para if it exists.
// A nonexistent `ParaId` indicates that the scheduler has not processed any
// `ParaId` this session.
if let Some(previous_para_id) = previous_para {
Pallet::<T>::decrease_affinity(previous_para_id, core_idx)
}
let mut queue: VecDeque<Assignment> = OnDemandQueue::<T>::get();
let mut invalidated_para_id_indexes: Vec<usize> = vec![];
// Get the position of the next `ParaId`. Select either a valid `ParaId` that has an
// affinity to the same `CoreIndex` as the scheduler asks for or a valid `ParaId` with no
// affinity at all.
let pos = queue.iter().enumerate().position(|(index, assignment)| {
if <paras::Pallet<T>>::is_parathread(assignment.para_id) {
match ParaIdAffinity::<T>::get(&assignment.para_id) {
Some(affinity) => return affinity.core_idx == core_idx,
None => return true,
}
}
// Record no longer valid para_ids.
invalidated_para_id_indexes.push(index);
return false
});
// Collect the popped value.
let popped = pos.and_then(|p: usize| {
if let Some(assignment) = queue.remove(p) {
Pallet::<T>::increase_affinity(assignment.para_id, core_idx);
return Some(assignment)
};
None
});
// Only remove the invalid indexes *after* using the index.
// Removed in reverse order so that the indexes don't shift.
invalidated_para_id_indexes.iter().rev().for_each(|idx| {
queue.remove(*idx);
});
// Write changes to storage.
OnDemandQueue::<T>::set(queue);
popped
}
/// Push an assignment back to the queue.
/// Typically used on session boundaries.
/// Parameters:
/// - `core_idx`: The core index
/// - `assignment`: The on demand assignment.
fn push_assignment_for_core(core_idx: CoreIndex, assignment: Assignment) {
Pallet::<T>::decrease_affinity(assignment.para_id, core_idx);
// Push backs from the scheduler skip to the front of the queue;
// any error (e.g. a full queue) is intentionally ignored here.
let _ = Pallet::<T>::add_on_demand_assignment(assignment, QueuePushDirection::Front);
}
fn get_provider_config(_core_idx: CoreIndex) -> AssignmentProviderConfig<BlockNumberFor<T>> {
let config = <configuration::Pallet<T>>::config();
AssignmentProviderConfig {
availability_period: config.paras_availability_period,
max_availability_timeouts: config.on_demand_retries,
ttl: config.on_demand_ttl,
}
}
}
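The scan-then-cleanup logic of `pop_assignment_for_core` can be sketched on a bare `VecDeque`. This is a hedged model, not the pallet's API: `pop_for_core` and its closure parameters stand in for the `is_parathread` and `ParaIdAffinity` lookups, and paras are plain `u32`s.

```rust
use std::collections::VecDeque;

// Find the first entry whose para is still a parathread and either has
// affinity to the requested core or no affinity at all; stale entries
// seen before that point are stripped from the queue afterwards.
fn pop_for_core(
    queue: &mut VecDeque<u32>,
    is_parathread: impl Fn(u32) -> bool,
    affinity_core: impl Fn(u32) -> Option<u32>,
    core: u32,
) -> Option<u32> {
    let mut stale: Vec<usize> = vec![];
    let pos = queue.iter().enumerate().position(|(index, &para)| {
        if is_parathread(para) {
            // Match either an existing affinity to this core or no affinity.
            return affinity_core(para).map_or(true, |c| c == core)
        }
        // Record indexes of entries whose para is no longer valid.
        stale.push(index);
        false
    });
    let popped = pos.and_then(|p| queue.remove(p));
    // Remove stale indexes in reverse order so earlier positions stay valid.
    for index in stale.into_iter().rev() {
        queue.remove(index);
    }
    popped
}

fn main() {
    // Paras 7 and 8 have been offboarded; only 9 is still a parathread.
    let mut queue = VecDeque::from(vec![7u32, 8, 9]);
    let popped = pop_for_core(&mut queue, |p| p == 9, |_| None, 0);
    assert_eq!(popped, Some(9));
    // The stale entries ahead of the popped position were stripped too.
    assert!(queue.is_empty());
}
```

Removing the recorded indexes in reverse mirrors the pallet's comment: deleting from the back never shifts the positions of indexes still pending removal.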
@@ -0,0 +1,558 @@
// Copyright (C) Parity Technologies (UK) Ltd.
// This file is part of Polkadot.
// Polkadot is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// Polkadot is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with Polkadot. If not, see <http://www.gnu.org/licenses/>.
use super::*;
use crate::{
assigner_on_demand::{mock_helpers::GenesisConfigBuilder, Error},
initializer::SessionChangeNotification,
mock::{
new_test_ext, Balances, OnDemandAssigner, Paras, ParasShared, RuntimeOrigin, Scheduler,
System, Test,
},
paras::{ParaGenesisArgs, ParaKind},
};
use frame_support::{assert_noop, assert_ok, error::BadOrigin};
use pallet_balances::Error as BalancesError;
use primitives::{
v5::{Assignment, ValidationCode},
BlockNumber, SessionIndex,
};
use sp_std::collections::btree_map::BTreeMap;
fn schedule_blank_para(id: ParaId, parakind: ParaKind) {
let validation_code: ValidationCode = vec![1, 2, 3].into();
assert_ok!(Paras::schedule_para_initialize(
id,
ParaGenesisArgs {
genesis_head: Vec::new().into(),
validation_code: validation_code.clone(),
para_kind: parakind,
}
));
assert_ok!(Paras::add_trusted_validation_code(RuntimeOrigin::root(), validation_code));
}
fn run_to_block(
to: BlockNumber,
new_session: impl Fn(BlockNumber) -> Option<SessionChangeNotification<BlockNumber>>,
) {
while System::block_number() < to {
let b = System::block_number();
Scheduler::initializer_finalize();
Paras::initializer_finalize(b);
if let Some(notification) = new_session(b + 1) {
let mut notification_with_session_index = notification;
// We will make every session change trigger an action queue. Normally this may require
// 2 or more session changes.
if notification_with_session_index.session_index == SessionIndex::default() {
notification_with_session_index.session_index = ParasShared::scheduled_session();
}
Paras::initializer_on_new_session(&notification_with_session_index);
Scheduler::initializer_on_new_session(&notification_with_session_index);
}
System::on_finalize(b);
System::on_initialize(b + 1);
System::set_block_number(b + 1);
Paras::initializer_initialize(b + 1);
Scheduler::initializer_initialize(b + 1);
// In the real runtime this is expected to be called by the `InclusionInherent` pallet.
Scheduler::update_claimqueue(BTreeMap::new(), b + 1);
}
}
#[test]
fn spot_traffic_capacity_zero_returns_none() {
match OnDemandAssigner::calculate_spot_traffic(
FixedU128::from(u128::MAX),
0u32,
u32::MAX,
Perbill::from_percent(100),
Perbill::from_percent(1),
) {
Ok(_) => panic!("Error"),
Err(e) => assert_eq!(e, SpotTrafficCalculationErr::QueueCapacityIsZero),
};
}
#[test]
fn spot_traffic_queue_size_larger_than_capacity_returns_none() {
match OnDemandAssigner::calculate_spot_traffic(
FixedU128::from(u128::MAX),
1u32,
2u32,
Perbill::from_percent(100),
Perbill::from_percent(1),
) {
Ok(_) => panic!("Error"),
Err(e) => assert_eq!(e, SpotTrafficCalculationErr::QueueSizeLargerThanCapacity),
}
}
#[test]
fn spot_traffic_calculation_identity() {
match OnDemandAssigner::calculate_spot_traffic(
FixedU128::from_u32(1),
1000,
100,
Perbill::from_percent(10),
Perbill::from_percent(3),
) {
Ok(res) => {
assert_eq!(res, FixedU128::from_u32(1))
},
_ => panic!("Error"),
}
}
#[test]
fn spot_traffic_calculation_u32_max() {
match OnDemandAssigner::calculate_spot_traffic(
FixedU128::from_u32(1),
u32::MAX,
u32::MAX,
Perbill::from_percent(100),
Perbill::from_percent(3),
) {
Ok(res) => {
assert_eq!(res, FixedU128::from_u32(1))
},
_ => panic!("Error"),
};
}
#[test]
fn spot_traffic_calculation_u32_traffic_max() {
match OnDemandAssigner::calculate_spot_traffic(
FixedU128::from(u128::MAX),
u32::MAX,
u32::MAX,
Perbill::from_percent(1),
Perbill::from_percent(1),
) {
Ok(res) => assert_eq!(res, FixedU128::from(u128::MAX)),
_ => panic!("Error"),
};
}
#[test]
fn sustained_target_increases_spot_traffic() {
let mut traffic = FixedU128::from_u32(1u32);
for _ in 0..50 {
traffic = OnDemandAssigner::calculate_spot_traffic(
traffic,
100,
12,
Perbill::from_percent(10),
Perbill::from_percent(100),
)
.unwrap()
}
assert_eq!(traffic, FixedU128::from_inner(2_718_103_312_071_174_015u128))
}
#[test]
fn spot_traffic_can_decrease() {
let traffic = FixedU128::from_u32(100u32);
match OnDemandAssigner::calculate_spot_traffic(
traffic,
100u32,
0u32,
Perbill::from_percent(100),
Perbill::from_percent(100),
) {
Ok(new_traffic) =>
assert_eq!(new_traffic, FixedU128::from_inner(50_000_000_000_000_000_000u128)),
_ => panic!("Error"),
}
}
#[test]
fn spot_traffic_decreases_over_time() {
let mut traffic = FixedU128::from_u32(100u32);
for _ in 0..5 {
traffic = OnDemandAssigner::calculate_spot_traffic(
traffic,
100u32,
0u32,
Perbill::from_percent(100),
Perbill::from_percent(100),
)
.unwrap();
}
assert_eq!(traffic, FixedU128::from_inner(3_125_000_000_000_000_000u128))
}
#[test]
fn place_order_works() {
let alice = 1u64;
let amt = 10_000_000u128;
let para_id = ParaId::from(111);
new_test_ext(GenesisConfigBuilder::default().build()).execute_with(|| {
// Initialize the parathread and wait for it to be ready.
schedule_blank_para(para_id, ParaKind::Parathread);
assert!(!Paras::is_parathread(para_id));
run_to_block(100, |n| if n == 100 { Some(Default::default()) } else { None });
assert!(Paras::is_parathread(para_id));
// Does not work unsigned
assert_noop!(
OnDemandAssigner::place_order_allow_death(RuntimeOrigin::none(), amt, para_id),
BadOrigin
);
// Does not work with max_amount lower than fee
let low_max_amt = 1u128;
assert_noop!(
OnDemandAssigner::place_order_allow_death(
RuntimeOrigin::signed(alice),
low_max_amt,
para_id,
),
Error::<Test>::SpotPriceHigherThanMaxAmount,
);
// Does not work with insufficient balance
assert_noop!(
OnDemandAssigner::place_order_allow_death(RuntimeOrigin::signed(alice), amt, para_id),
BalancesError::<Test, _>::InsufficientBalance
);
// Works
Balances::make_free_balance_be(&alice, amt);
run_to_block(101, |n| if n == 101 { Some(Default::default()) } else { None });
assert_ok!(OnDemandAssigner::place_order_allow_death(
RuntimeOrigin::signed(alice),
amt,
para_id
));
});
}
#[test]
fn place_order_keep_alive_keeps_alive() {
let alice = 1u64;
let amt = 1u128; // The same as crate::mock's EXISTENTIAL_DEPOSIT
let max_amt = 10_000_000u128;
let para_id = ParaId::from(111);
new_test_ext(GenesisConfigBuilder::default().build()).execute_with(|| {
// Initialize the parathread and wait for it to be ready.
schedule_blank_para(para_id, ParaKind::Parathread);
Balances::make_free_balance_be(&alice, amt);
assert!(!Paras::is_parathread(para_id));
run_to_block(100, |n| if n == 100 { Some(Default::default()) } else { None });
assert!(Paras::is_parathread(para_id));
assert_noop!(
OnDemandAssigner::place_order_keep_alive(
RuntimeOrigin::signed(alice),
max_amt,
para_id
),
BalancesError::<Test, _>::InsufficientBalance
);
});
}
#[test]
fn add_on_demand_assignment_works() {
let para_a = ParaId::from(111);
let assignment = Assignment::new(para_a);
let mut genesis = GenesisConfigBuilder::default();
genesis.on_demand_max_queue_size = 1;
new_test_ext(genesis.build()).execute_with(|| {
// Initialize the parathread and wait for it to be ready.
schedule_blank_para(para_a, ParaKind::Parathread);
// `para_a` is not onboarded as a parathread yet.
assert_noop!(
OnDemandAssigner::add_on_demand_assignment(
assignment.clone(),
QueuePushDirection::Back
),
Error::<Test>::InvalidParaId
);
assert!(!Paras::is_parathread(para_a));
run_to_block(100, |n| if n == 100 { Some(Default::default()) } else { None });
assert!(Paras::is_parathread(para_a));
// `para_a` is now onboarded as a valid parathread.
assert_ok!(OnDemandAssigner::add_on_demand_assignment(
assignment.clone(),
QueuePushDirection::Back
));
// Max queue size is 1, queue should be full.
assert_noop!(
OnDemandAssigner::add_on_demand_assignment(assignment, QueuePushDirection::Back),
Error::<Test>::QueueFull
);
});
}
#[test]
fn spotqueue_push_directions() {
new_test_ext(GenesisConfigBuilder::default().build()).execute_with(|| {
let para_a = ParaId::from(111);
let para_b = ParaId::from(222);
let para_c = ParaId::from(333);
schedule_blank_para(para_a, ParaKind::Parathread);
schedule_blank_para(para_b, ParaKind::Parathread);
schedule_blank_para(para_c, ParaKind::Parathread);
run_to_block(11, |n| if n == 11 { Some(Default::default()) } else { None });
let assignment_a = Assignment { para_id: para_a };
let assignment_b = Assignment { para_id: para_b };
let assignment_c = Assignment { para_id: para_c };
assert_ok!(OnDemandAssigner::add_on_demand_assignment(
assignment_a.clone(),
QueuePushDirection::Front
));
assert_ok!(OnDemandAssigner::add_on_demand_assignment(
assignment_b.clone(),
QueuePushDirection::Front
));
assert_ok!(OnDemandAssigner::add_on_demand_assignment(
assignment_c.clone(),
QueuePushDirection::Back
));
assert_eq!(OnDemandAssigner::queue_size(), 3);
assert_eq!(
OnDemandAssigner::get_queue(),
VecDeque::from(vec![assignment_b, assignment_a, assignment_c])
)
});
}
#[test]
fn affinity_changes_work() {
new_test_ext(GenesisConfigBuilder::default().build()).execute_with(|| {
let para_a = ParaId::from(111);
schedule_blank_para(para_a, ParaKind::Parathread);
run_to_block(11, |n| if n == 11 { Some(Default::default()) } else { None });
let assignment_a = Assignment { para_id: para_a };
// There should be no affinity before starting.
assert!(OnDemandAssigner::get_affinity_map(para_a).is_none());
// Add enough assignments to the order queue.
for _ in 0..10 {
OnDemandAssigner::add_on_demand_assignment(
assignment_a.clone(),
QueuePushDirection::Front,
)
.expect("Invalid paraid or queue full");
}
// There should be no affinity before the scheduler pops.
assert!(OnDemandAssigner::get_affinity_map(para_a).is_none());
OnDemandAssigner::pop_assignment_for_core(CoreIndex(0), None);
// Affinity count is 1 after popping.
assert_eq!(OnDemandAssigner::get_affinity_map(para_a).unwrap().count, 1);
OnDemandAssigner::pop_assignment_for_core(CoreIndex(0), Some(para_a));
// Affinity count is 1 after popping with a previous para.
assert_eq!(OnDemandAssigner::get_affinity_map(para_a).unwrap().count, 1);
assert_eq!(OnDemandAssigner::queue_size(), 8);
for _ in 0..3 {
OnDemandAssigner::pop_assignment_for_core(CoreIndex(0), None);
}
// Affinity count is 4 after popping 3 times without a previous para.
assert_eq!(OnDemandAssigner::get_affinity_map(para_a).unwrap().count, 4);
assert_eq!(OnDemandAssigner::queue_size(), 5);
for _ in 0..5 {
OnDemandAssigner::pop_assignment_for_core(CoreIndex(0), Some(para_a));
}
// Affinity count should still be 4 but queue should be empty.
assert_eq!(OnDemandAssigner::get_affinity_map(para_a).unwrap().count, 4);
assert_eq!(OnDemandAssigner::queue_size(), 0);
// Pop 4 times and get to exactly 0 (None) affinity.
for _ in 0..4 {
OnDemandAssigner::pop_assignment_for_core(CoreIndex(0), Some(para_a));
}
assert!(OnDemandAssigner::get_affinity_map(para_a).is_none());
// Decreasing affinity beyond 0 should still be None.
OnDemandAssigner::pop_assignment_for_core(CoreIndex(0), Some(para_a));
assert!(OnDemandAssigner::get_affinity_map(para_a).is_none());
});
}
#[test]
fn affinity_prohibits_parallel_scheduling() {
new_test_ext(GenesisConfigBuilder::default().build()).execute_with(|| {
let para_a = ParaId::from(111);
let para_b = ParaId::from(222);
schedule_blank_para(para_a, ParaKind::Parathread);
schedule_blank_para(para_b, ParaKind::Parathread);
run_to_block(11, |n| if n == 11 { Some(Default::default()) } else { None });
let assignment_a = Assignment { para_id: para_a };
let assignment_b = Assignment { para_id: para_b };
// There should be no affinity before starting.
assert!(OnDemandAssigner::get_affinity_map(para_a).is_none());
assert!(OnDemandAssigner::get_affinity_map(para_b).is_none());
// Add 2 assignments for para_a for every para_b.
OnDemandAssigner::add_on_demand_assignment(assignment_a.clone(), QueuePushDirection::Back)
.expect("Invalid paraid or queue full");
OnDemandAssigner::add_on_demand_assignment(assignment_a.clone(), QueuePushDirection::Back)
.expect("Invalid paraid or queue full");
OnDemandAssigner::add_on_demand_assignment(assignment_b.clone(), QueuePushDirection::Back)
.expect("Invalid paraid or queue full");
assert_eq!(OnDemandAssigner::queue_size(), 3);
// Approximate having 1 core.
for _ in 0..3 {
OnDemandAssigner::pop_assignment_for_core(CoreIndex(0), None);
}
// Affinity on one core is meaningless.
assert_eq!(OnDemandAssigner::get_affinity_map(para_a).unwrap().count, 2);
assert_eq!(OnDemandAssigner::get_affinity_map(para_b).unwrap().count, 1);
assert_eq!(
OnDemandAssigner::get_affinity_map(para_a).unwrap().core_idx,
OnDemandAssigner::get_affinity_map(para_b).unwrap().core_idx
);
// Clear affinity
OnDemandAssigner::pop_assignment_for_core(CoreIndex(0), Some(para_a));
OnDemandAssigner::pop_assignment_for_core(CoreIndex(0), Some(para_a));
OnDemandAssigner::pop_assignment_for_core(CoreIndex(0), Some(para_b));
// Add 2 assignments for para_a for every para_b.
OnDemandAssigner::add_on_demand_assignment(assignment_a.clone(), QueuePushDirection::Back)
.expect("Invalid paraid or queue full");
OnDemandAssigner::add_on_demand_assignment(assignment_a.clone(), QueuePushDirection::Back)
.expect("Invalid paraid or queue full");
OnDemandAssigner::add_on_demand_assignment(assignment_b.clone(), QueuePushDirection::Back)
.expect("Invalid paraid or queue full");
// Approximate having 2 cores.
for _ in 0..3 {
OnDemandAssigner::pop_assignment_for_core(CoreIndex(0), None);
OnDemandAssigner::pop_assignment_for_core(CoreIndex(1), None);
}
// Affinity should be the same as before, but on different cores.
assert_eq!(OnDemandAssigner::get_affinity_map(para_a).unwrap().count, 2);
assert_eq!(OnDemandAssigner::get_affinity_map(para_b).unwrap().count, 1);
assert_eq!(OnDemandAssigner::get_affinity_map(para_a).unwrap().core_idx, CoreIndex(0));
assert_eq!(OnDemandAssigner::get_affinity_map(para_b).unwrap().core_idx, CoreIndex(1));
});
}
#[test]
fn cannot_place_order_when_no_on_demand_cores() {
let mut genesis = GenesisConfigBuilder::default();
genesis.on_demand_cores = 0;
let para_id = ParaId::from(10);
let alice = 1u64;
let amt = 10_000_000u128;
new_test_ext(genesis.build()).execute_with(|| {
schedule_blank_para(para_id, ParaKind::Parathread);
Balances::make_free_balance_be(&alice, amt);
assert!(!Paras::is_parathread(para_id));
run_to_block(10, |n| if n == 10 { Some(Default::default()) } else { None });
assert!(Paras::is_parathread(para_id));
assert_noop!(
OnDemandAssigner::place_order_allow_death(RuntimeOrigin::signed(alice), amt, para_id),
Error::<Test>::NoOnDemandCores
);
});
}
#[test]
fn on_demand_orders_cannot_be_popped_if_lifecycle_changes() {
let para_id = ParaId::from(10);
let assignment = Assignment { para_id };
new_test_ext(GenesisConfigBuilder::default().build()).execute_with(|| {
// Register the para_id as a parathread
schedule_blank_para(para_id, ParaKind::Parathread);
assert!(!Paras::is_parathread(para_id));
run_to_block(10, |n| if n == 10 { Some(Default::default()) } else { None });
assert!(Paras::is_parathread(para_id));
// Add two assignments for a para_id with a valid lifecycle.
assert_ok!(OnDemandAssigner::add_on_demand_assignment(
assignment.clone(),
QueuePushDirection::Back
));
assert_ok!(OnDemandAssigner::add_on_demand_assignment(
assignment.clone(),
QueuePushDirection::Back
));
// First pop is fine
assert!(OnDemandAssigner::pop_assignment_for_core(CoreIndex(0), None) == Some(assignment));
// Deregister para
assert_ok!(Paras::schedule_para_cleanup(para_id));
// Run to new session and verify that para_id is no longer a valid parathread.
assert!(Paras::is_parathread(para_id));
run_to_block(20, |n| if n == 20 { Some(Default::default()) } else { None });
assert!(!Paras::is_parathread(para_id));
// Second pop should be None.
assert!(OnDemandAssigner::pop_assignment_for_core(CoreIndex(0), Some(para_id)) == None);
});
}
@@ -0,0 +1,70 @@
// Copyright (C) Parity Technologies (UK) Ltd.
// This file is part of Polkadot.
// Polkadot is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// Polkadot is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with Polkadot. If not, see <http://www.gnu.org/licenses/>.
//! The bulk (parachain slot auction) blockspace assignment provider.
//! This provider is tightly coupled with the configuration and paras modules.
use crate::{
configuration, paras,
scheduler::common::{AssignmentProvider, AssignmentProviderConfig},
};
use frame_system::pallet_prelude::BlockNumberFor;
pub use pallet::*;
use primitives::{v5::Assignment, CoreIndex, Id as ParaId};
#[frame_support::pallet]
pub mod pallet {
use super::*;
#[pallet::pallet]
#[pallet::without_storage_info]
pub struct Pallet<T>(_);
#[pallet::config]
pub trait Config: frame_system::Config + configuration::Config + paras::Config {}
}
impl<T: Config> AssignmentProvider<BlockNumberFor<T>> for Pallet<T> {
fn session_core_count() -> u32 {
<paras::Pallet<T>>::parachains().len() as u32
}
fn pop_assignment_for_core(
core_idx: CoreIndex,
_concluded_para: Option<ParaId>,
) -> Option<Assignment> {
<paras::Pallet<T>>::parachains()
.get(core_idx.0 as usize)
.copied()
.map(|para_id| Assignment::new(para_id))
}
/// Bulk assignment has no need to push the assignment back on a session change,
/// this is a no-op in the case of a bulk assignment slot.
fn push_assignment_for_core(_: CoreIndex, _: Assignment) {}
fn get_provider_config(_core_idx: CoreIndex) -> AssignmentProviderConfig<BlockNumberFor<T>> {
let config = <configuration::Pallet<T>>::config();
AssignmentProviderConfig {
availability_period: config.paras_availability_period,
// The next assignment already goes to the same [`ParaId`], no timeout tracking needed.
max_availability_timeouts: 0,
// The next assignment already goes to the same [`ParaId`], this can be any number
// that's high enough to clear the time it takes to clear backing/availability.
ttl: BlockNumberFor::<T>::from(10u32),
}
}
}
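Unlike the on demand provider, bulk assignment is a pure lookup: core `i` always maps to the `i`-th lease-holding parachain. A hedged sketch with illustrative names (plain `u32` paras, a slice in place of `paras::parachains()`):

```rust
// Bulk (slot auction) assignment: each core is permanently paired with the
// parachain at the same index in the sorted list of lease holders.
fn bulk_assignment(parachains: &[u32], core_idx: usize) -> Option<u32> {
    parachains.get(core_idx).copied()
}

fn main() {
    let chains = [1000u32, 2000, 3000];
    assert_eq!(bulk_assignment(&chains, 1), Some(2000));
    // A core index beyond the parachain count yields no assignment.
    assert_eq!(bulk_assignment(&chains, 5), None);
}
```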
@@ -17,20 +17,22 @@
use crate::{
configuration, inclusion, initializer, paras,
paras::ParaKind,
paras_inherent::{self},
scheduler, session_info, shared,
paras_inherent,
scheduler::{self, common::AssignmentProviderConfig},
session_info, shared,
};
use bitvec::{order::Lsb0 as BitOrderLsb0, vec::BitVec};
use frame_support::pallet_prelude::*;
use frame_system::pallet_prelude::*;
use primitives::{
collator_signature_payload, AvailabilityBitfield, BackedCandidate, CandidateCommitments,
CandidateDescriptor, CandidateHash, CollatorId, CollatorSignature, CommittedCandidateReceipt,
CompactStatement, CoreIndex, CoreOccupied, DisputeStatement, DisputeStatementSet, GroupIndex,
HeadData, Id as ParaId, IndexedVec, InherentData as ParachainsInherentData,
InvalidDisputeStatementKind, PersistedValidationData, SessionIndex, SigningContext,
UncheckedSigned, ValidDisputeStatementKind, ValidationCode, ValidatorId, ValidatorIndex,
ValidityAttestation,
collator_signature_payload,
v5::{Assignment, ParasEntry},
AvailabilityBitfield, BackedCandidate, CandidateCommitments, CandidateDescriptor,
CandidateHash, CollatorId, CollatorSignature, CommittedCandidateReceipt, CompactStatement,
CoreIndex, CoreOccupied, DisputeStatement, DisputeStatementSet, GroupIndex, HeadData,
Id as ParaId, IndexedVec, InherentData as ParachainsInherentData, InvalidDisputeStatementKind,
PersistedValidationData, SessionIndex, SigningContext, UncheckedSigned,
ValidDisputeStatementKind, ValidationCode, ValidatorId, ValidatorIndex, ValidityAttestation,
};
use sp_core::{sr25519, H256};
use sp_runtime::{
@@ -689,13 +691,22 @@ impl<T: paras_inherent::Config> BenchBuilder<T> {
);
assert_eq!(inclusion::PendingAvailability::<T>::iter().count(), used_cores as usize,);
// Mark all the used cores as occupied. We expect that their are
// Mark all the used cores as occupied. We expect that there are
// `backed_and_concluding_cores` that are pending availability and that there are
// `used_cores - backed_and_concluding_cores ` which are about to be disputed.
scheduler::AvailabilityCores::<T>::set(vec![
Some(CoreOccupied::Parachain);
used_cores as usize
]);
let now = <frame_system::Pallet<T>>::block_number() + One::one();
let cores = (0..used_cores)
.into_iter()
.map(|i| {
let AssignmentProviderConfig { ttl, .. } =
scheduler::Pallet::<T>::assignment_provider_config(CoreIndex(i));
CoreOccupied::Paras(ParasEntry::new(
Assignment::new(ParaId::from(i as u32)),
now + ttl,
))
})
.collect();
scheduler::AvailabilityCores::<T>::set(cores);
Bench::<T> {
data: ParachainsInherentData {
@@ -25,9 +25,9 @@ use parity_scale_codec::{Decode, Encode};
use polkadot_parachain::primitives::{MAX_HORIZONTAL_MESSAGE_NUM, MAX_UPWARD_MESSAGE_NUM};
use primitives::{
vstaging::AsyncBackingParams, Balance, ExecutorParams, SessionIndex, MAX_CODE_SIZE,
MAX_HEAD_DATA_SIZE, MAX_POV_SIZE,
MAX_HEAD_DATA_SIZE, MAX_POV_SIZE, ON_DEMAND_DEFAULT_QUEUE_MAX_SIZE,
};
use sp_runtime::traits::Zero;
use sp_runtime::{traits::Zero, Perbill};
use sp_std::prelude::*;
#[cfg(test)]
@@ -42,7 +42,7 @@ pub use pallet::*;
const LOG_TARGET: &str = "runtime::configuration";
/// All configuration of the runtime with respect to parachains and parathreads.
/// All configuration of the runtime with respect to paras.
#[derive(
Clone,
Encode,
@@ -113,10 +113,9 @@ pub struct HostConfiguration<BlockNumber> {
/// been completed.
///
/// Note, there are situations in which `expected_at` is in the past. For example, if
/// [`chain_availability_period`] or [`thread_availability_period`] is less than the delay set
/// by this field or if PVF pre-check took more time than the delay. In such cases, the upgrade
/// is further at the earliest possible time determined by
/// [`minimum_validation_upgrade_delay`].
/// [`paras_availability_period`] is less than the delay set by
/// this field or if PVF pre-check took more time than the delay. In such cases, the upgrade is
/// further at the earliest possible time determined by [`minimum_validation_upgrade_delay`].
///
/// The rationale for this delay has to do with relay-chain reversions. In case there is an
/// invalid candidate produced with the new version of the code, then the relay-chain can
@@ -143,8 +142,6 @@ pub struct HostConfiguration<BlockNumber> {
pub max_downward_message_size: u32,
/// The maximum number of outbound HRMP channels a parachain is allowed to open.
pub hrmp_max_parachain_outbound_channels: u32,
/// The maximum number of outbound HRMP channels a parathread is allowed to open.
pub hrmp_max_parathread_outbound_channels: u32,
/// The deposit that the sender should provide for opening an HRMP channel.
pub hrmp_sender_deposit: Balance,
/// The deposit that the recipient should provide for accepting opening an HRMP channel.
@@ -155,8 +152,6 @@ pub struct HostConfiguration<BlockNumber> {
pub hrmp_channel_max_total_size: u32,
/// The maximum number of inbound HRMP channels a parachain is allowed to accept.
pub hrmp_max_parachain_inbound_channels: u32,
/// The maximum number of inbound HRMP channels a parathread is allowed to accept.
pub hrmp_max_parathread_inbound_channels: u32,
/// The maximum size of a message that could ever be put into an HRMP channel.
///
/// This parameter affects the upper bound of size of `CandidateCommitments`.
@@ -171,26 +166,34 @@ pub struct HostConfiguration<BlockNumber> {
/// How long to keep code on-chain, in blocks. This should be sufficiently long that disputes
/// have concluded.
pub code_retention_period: BlockNumber,
/// The amount of execution cores to dedicate to parathread execution.
pub parathread_cores: u32,
/// The number of retries that a parathread author has to submit their block.
pub parathread_retries: u32,
/// The amount of execution cores to dedicate to on demand execution.
pub on_demand_cores: u32,
/// The number of retries that an on demand author has to submit their block.
pub on_demand_retries: u32,
/// The maximum queue size of the pay as you go module.
pub on_demand_queue_max_size: u32,
/// The target utilization of the spot price queue in percentages.
pub on_demand_target_queue_utilization: Perbill,
/// How quickly the fee rises in reaction to increased utilization.
/// The lower the number the slower the increase.
pub on_demand_fee_variability: Perbill,
/// The minimum amount needed to claim a slot in the spot pricing queue.
pub on_demand_base_fee: Balance,
/// The number of blocks an on demand claim stays in the scheduler's claimqueue before getting
/// cleared. This number should be reasonably higher than the number of blocks in the async
/// backing lookahead.
pub on_demand_ttl: BlockNumber,
/// How often parachain groups should be rotated across parachains.
///
/// Must be non-zero.
pub group_rotation_frequency: BlockNumber,
/// The availability period, in blocks, for parachains. This is the amount of blocks
/// The availability period, in blocks. This is the amount of blocks
/// after inclusion that validators have to make the block available and signal its
/// availability to the chain.
///
/// Must be at least 1.
pub chain_availability_period: BlockNumber,
/// The availability period, in blocks, for parathreads. Same as the
/// `chain_availability_period`, but a differing timeout due to differing requirements.
///
/// Must be at least 1.
pub thread_availability_period: BlockNumber,
/// The amount of blocks ahead to schedule parachains and parathreads.
pub paras_availability_period: BlockNumber,
/// The amount of blocks ahead to schedule paras.
pub scheduling_lookahead: u32,
/// The maximum number of validators to have per core.
///
@@ -237,8 +240,7 @@ pub struct HostConfiguration<BlockNumber> {
/// To prevent that, we introduce the minimum number of blocks after which the upgrade can be
/// scheduled. This number is controlled by this field.
///
/// This value should be greater than [`chain_availability_period`] and
/// [`thread_availability_period`].
/// This value should be greater than [`paras_availability_period`].
pub minimum_validation_upgrade_delay: BlockNumber,
}
@@ -250,8 +252,7 @@ impl<BlockNumber: Default + From<u32>> Default for HostConfiguration<BlockNumber
allowed_ancestry_len: 0,
},
group_rotation_frequency: 1u32.into(),
chain_availability_period: 1u32.into(),
thread_availability_period: 1u32.into(),
paras_availability_period: 1u32.into(),
no_show_slots: 1u32.into(),
validation_upgrade_cooldown: Default::default(),
validation_upgrade_delay: 2u32.into(),
@@ -259,9 +260,9 @@ impl<BlockNumber: Default + From<u32>> Default for HostConfiguration<BlockNumber
max_code_size: Default::default(),
max_pov_size: Default::default(),
max_head_data_size: Default::default(),
parathread_cores: Default::default(),
parathread_retries: Default::default(),
scheduling_lookahead: Default::default(),
on_demand_cores: Default::default(),
on_demand_retries: Default::default(),
scheduling_lookahead: 1,
max_validators_per_core: Default::default(),
max_validators: None,
dispute_period: 6,
@@ -280,14 +281,17 @@ impl<BlockNumber: Default + From<u32>> Default for HostConfiguration<BlockNumber
hrmp_channel_max_capacity: Default::default(),
hrmp_channel_max_total_size: Default::default(),
hrmp_max_parachain_inbound_channels: Default::default(),
hrmp_max_parathread_inbound_channels: Default::default(),
hrmp_channel_max_message_size: Default::default(),
hrmp_max_parachain_outbound_channels: Default::default(),
hrmp_max_parathread_outbound_channels: Default::default(),
hrmp_max_message_num_per_candidate: Default::default(),
pvf_voting_ttl: 2u32.into(),
minimum_validation_upgrade_delay: 2.into(),
executor_params: Default::default(),
on_demand_queue_max_size: ON_DEMAND_DEFAULT_QUEUE_MAX_SIZE,
on_demand_base_fee: 10_000_000u128,
on_demand_fee_variability: Perbill::from_percent(3),
on_demand_target_queue_utilization: Perbill::from_percent(25),
on_demand_ttl: 5u32.into(),
}
}
}
@@ -297,10 +301,8 @@ impl<BlockNumber: Default + From<u32>> Default for HostConfiguration<BlockNumber
pub enum InconsistentError<BlockNumber> {
/// `group_rotation_frequency` is set to zero.
ZeroGroupRotationFrequency,
/// `chain_availability_period` is set to zero.
ZeroChainAvailabilityPeriod,
/// `thread_availability_period` is set to zero.
ZeroThreadAvailabilityPeriod,
/// `paras_availability_period` is set to zero.
ZeroParasAvailabilityPeriod,
/// `no_show_slots` is set to zero.
ZeroNoShowSlots,
/// `max_code_size` exceeds the hard limit of `MAX_CODE_SIZE`.
@@ -309,15 +311,10 @@ pub enum InconsistentError<BlockNumber> {
MaxHeadDataSizeExceedHardLimit { max_head_data_size: u32 },
/// `max_pov_size` exceeds the hard limit of `MAX_POV_SIZE`.
MaxPovSizeExceedHardLimit { max_pov_size: u32 },
/// `minimum_validation_upgrade_delay` is less than `chain_availability_period`.
/// `minimum_validation_upgrade_delay` is less than `paras_availability_period`.
MinimumValidationUpgradeDelayLessThanChainAvailabilityPeriod {
minimum_validation_upgrade_delay: BlockNumber,
chain_availability_period: BlockNumber,
},
/// `minimum_validation_upgrade_delay` is less than `thread_availability_period`.
MinimumValidationUpgradeDelayLessThanThreadAvailabilityPeriod {
minimum_validation_upgrade_delay: BlockNumber,
thread_availability_period: BlockNumber,
paras_availability_period: BlockNumber,
},
/// `validation_upgrade_delay` is less than or equal 1.
ValidationUpgradeDelayIsTooLow { validation_upgrade_delay: BlockNumber },
@@ -349,12 +346,8 @@ where
return Err(ZeroGroupRotationFrequency)
}
if self.chain_availability_period.is_zero() {
return Err(ZeroChainAvailabilityPeriod)
}
if self.thread_availability_period.is_zero() {
return Err(ZeroThreadAvailabilityPeriod)
if self.paras_availability_period.is_zero() {
return Err(ZeroParasAvailabilityPeriod)
}
if self.no_show_slots.is_zero() {
@@ -375,15 +368,10 @@ where
return Err(MaxPovSizeExceedHardLimit { max_pov_size: self.max_pov_size })
}
if self.minimum_validation_upgrade_delay <= self.chain_availability_period {
if self.minimum_validation_upgrade_delay <= self.paras_availability_period {
return Err(MinimumValidationUpgradeDelayLessThanChainAvailabilityPeriod {
minimum_validation_upgrade_delay: self.minimum_validation_upgrade_delay.clone(),
chain_availability_period: self.chain_availability_period.clone(),
})
} else if self.minimum_validation_upgrade_delay <= self.thread_availability_period {
return Err(MinimumValidationUpgradeDelayLessThanThreadAvailabilityPeriod {
minimum_validation_upgrade_delay: self.minimum_validation_upgrade_delay.clone(),
thread_availability_period: self.thread_availability_period.clone(),
paras_availability_period: self.paras_availability_period.clone(),
})
}
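The consistency check above requires the upgrade delay to strictly exceed the (now unified) availability period. A minimal standalone sketch of that invariant, using a hypothetical two-field stand-in for the pallet's `HostConfiguration`:

```rust
// Simplified model of the `paras_availability_period` consistency checks.
// `HostConfiguration` here is a toy struct, not the pallet type.
#[derive(Debug, PartialEq)]
enum InconsistentError {
    ZeroParasAvailabilityPeriod,
    MinimumValidationUpgradeDelayTooLow,
}

struct HostConfiguration {
    paras_availability_period: u32,
    minimum_validation_upgrade_delay: u32,
}

impl HostConfiguration {
    fn check_consistency(&self) -> Result<(), InconsistentError> {
        // The availability period must be at least 1.
        if self.paras_availability_period == 0 {
            return Err(InconsistentError::ZeroParasAvailabilityPeriod);
        }
        // The upgrade delay must be strictly greater than the availability period.
        if self.minimum_validation_upgrade_delay <= self.paras_availability_period {
            return Err(InconsistentError::MinimumValidationUpgradeDelayTooLow);
        }
        Ok(())
    }
}

fn main() {
    let ok = HostConfiguration {
        paras_availability_period: 10,
        minimum_validation_upgrade_delay: 11,
    };
    assert!(ok.check_consistency().is_ok());

    // Equal values are rejected: the comparison is `<=`, not `<`.
    let bad = HostConfiguration {
        paras_availability_period: 10,
        minimum_validation_upgrade_delay: 10,
    };
    assert_eq!(
        bad.check_consistency(),
        Err(InconsistentError::MinimumValidationUpgradeDelayTooLow)
    );
}
```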
@@ -442,6 +430,7 @@ pub trait WeightInfo {
fn set_config_with_balance() -> Weight;
fn set_hrmp_open_request_ttl() -> Weight;
fn set_config_with_executor_params() -> Weight;
fn set_config_with_perbill() -> Weight;
}
pub struct TestWeightInfo;
@@ -464,6 +453,9 @@ impl WeightInfo for TestWeightInfo {
fn set_config_with_executor_params() -> Weight {
Weight::MAX
}
fn set_config_with_perbill() -> Weight {
Weight::MAX
}
}
#[frame_support::pallet]
@@ -481,7 +473,8 @@ pub mod pallet {
/// + <https://github.com/paritytech/polkadot/pull/6934>
/// v5-v6: <https://github.com/paritytech/polkadot/pull/6271> (remove UMP dispatch queue)
/// v6-v7: <https://github.com/paritytech/polkadot/pull/7396>
const STORAGE_VERSION: StorageVersion = StorageVersion::new(7);
/// v7-v8: <https://github.com/paritytech/polkadot/pull/6969>
const STORAGE_VERSION: StorageVersion = StorageVersion::new(8);
#[pallet::pallet]
#[pallet::storage_version(STORAGE_VERSION)]
@@ -626,29 +619,29 @@ pub mod pallet {
})
}
/// Set the number of parathread execution cores.
/// Set the number of on demand execution cores.
#[pallet::call_index(6)]
#[pallet::weight((
T::WeightInfo::set_config_with_u32(),
DispatchClass::Operational,
))]
pub fn set_parathread_cores(origin: OriginFor<T>, new: u32) -> DispatchResult {
pub fn set_on_demand_cores(origin: OriginFor<T>, new: u32) -> DispatchResult {
ensure_root(origin)?;
Self::schedule_config_update(|config| {
config.parathread_cores = new;
config.on_demand_cores = new;
})
}
/// Set the number of retries for a particular parathread.
/// Set the number of retries for a particular on-demand claim.
#[pallet::call_index(7)]
#[pallet::weight((
T::WeightInfo::set_config_with_u32(),
DispatchClass::Operational,
))]
pub fn set_parathread_retries(origin: OriginFor<T>, new: u32) -> DispatchResult {
pub fn set_on_demand_retries(origin: OriginFor<T>, new: u32) -> DispatchResult {
ensure_root(origin)?;
Self::schedule_config_update(|config| {
config.parathread_retries = new;
config.on_demand_retries = new;
})
}
@@ -668,35 +661,19 @@ pub mod pallet {
})
}
/// Set the availability period for parachains.
/// Set the availability period for paras.
#[pallet::call_index(9)]
#[pallet::weight((
T::WeightInfo::set_config_with_block_number(),
DispatchClass::Operational,
))]
pub fn set_chain_availability_period(
pub fn set_paras_availability_period(
origin: OriginFor<T>,
new: BlockNumberFor<T>,
) -> DispatchResult {
ensure_root(origin)?;
Self::schedule_config_update(|config| {
config.chain_availability_period = new;
})
}
/// Set the availability period for parathreads.
#[pallet::call_index(10)]
#[pallet::weight((
T::WeightInfo::set_config_with_block_number(),
DispatchClass::Operational,
))]
pub fn set_thread_availability_period(
origin: OriginFor<T>,
new: BlockNumberFor<T>,
) -> DispatchResult {
ensure_root(origin)?;
Self::schedule_config_update(|config| {
config.thread_availability_period = new;
config.paras_availability_period = new;
})
}
@@ -989,22 +966,6 @@ pub mod pallet {
})
}
/// Sets the maximum number of inbound HRMP channels a parathread is allowed to accept.
#[pallet::call_index(35)]
#[pallet::weight((
T::WeightInfo::set_config_with_u32(),
DispatchClass::Operational,
))]
pub fn set_hrmp_max_parathread_inbound_channels(
origin: OriginFor<T>,
new: u32,
) -> DispatchResult {
ensure_root(origin)?;
Self::schedule_config_update(|config| {
config.hrmp_max_parathread_inbound_channels = new;
})
}
/// Sets the maximum size of a message that could ever be put into an HRMP channel.
#[pallet::call_index(36)]
#[pallet::weight((
@@ -1034,22 +995,6 @@ pub mod pallet {
})
}
/// Sets the maximum number of outbound HRMP channels a parathread is allowed to open.
#[pallet::call_index(38)]
#[pallet::weight((
T::WeightInfo::set_config_with_u32(),
DispatchClass::Operational,
))]
pub fn set_hrmp_max_parathread_outbound_channels(
origin: OriginFor<T>,
new: u32,
) -> DispatchResult {
ensure_root(origin)?;
Self::schedule_config_update(|config| {
config.hrmp_max_parathread_outbound_channels = new;
})
}
/// Sets the maximum number of outbound HRMP messages can be sent by a candidate.
#[pallet::call_index(39)]
#[pallet::weight((
@@ -1139,6 +1084,72 @@ pub mod pallet {
config.executor_params = new;
})
}
/// Set the on demand (parathreads) base fee.
#[pallet::call_index(47)]
#[pallet::weight((
T::WeightInfo::set_config_with_balance(),
DispatchClass::Operational,
))]
pub fn set_on_demand_base_fee(origin: OriginFor<T>, new: Balance) -> DispatchResult {
ensure_root(origin)?;
Self::schedule_config_update(|config| {
config.on_demand_base_fee = new;
})
}
/// Set the on demand (parathreads) fee variability.
#[pallet::call_index(48)]
#[pallet::weight((
T::WeightInfo::set_config_with_perbill(),
DispatchClass::Operational,
))]
pub fn set_on_demand_fee_variability(origin: OriginFor<T>, new: Perbill) -> DispatchResult {
ensure_root(origin)?;
Self::schedule_config_update(|config| {
config.on_demand_fee_variability = new;
})
}
/// Set the on demand (parathreads) queue max size.
#[pallet::call_index(49)]
#[pallet::weight((
T::WeightInfo::set_config_with_option_u32(),
DispatchClass::Operational,
))]
pub fn set_on_demand_queue_max_size(origin: OriginFor<T>, new: u32) -> DispatchResult {
ensure_root(origin)?;
Self::schedule_config_update(|config| {
config.on_demand_queue_max_size = new;
})
}
/// Set the on demand (parathreads) target queue utilization.
#[pallet::call_index(50)]
#[pallet::weight((
T::WeightInfo::set_config_with_perbill(),
DispatchClass::Operational,
))]
pub fn set_on_demand_target_queue_utilization(
origin: OriginFor<T>,
new: Perbill,
) -> DispatchResult {
ensure_root(origin)?;
Self::schedule_config_update(|config| {
config.on_demand_target_queue_utilization = new;
})
}
/// Set the on demand (parathreads) TTL in the claim queue.
#[pallet::call_index(51)]
#[pallet::weight((
T::WeightInfo::set_config_with_block_number(),
DispatchClass::Operational
))]
pub fn set_on_demand_ttl(origin: OriginFor<T>, new: BlockNumberFor<T>) -> DispatchResult {
ensure_root(origin)?;
Self::schedule_config_update(|config| {
config.on_demand_ttl = new;
})
}
}
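All of the setters above funnel through `schedule_config_update`, which queues the change to take effect at a later session rather than mutating the active config in place. A simplified standalone model of that queueing behaviour (toy types and an illustrative two-session delay, not the pallet implementation):

```rust
// Toy model of session-delayed config updates: changes land in a pending
// queue and only become active once their target session is reached.
#[derive(Clone, Debug, PartialEq)]
struct Config {
    on_demand_cores: u32,
}

struct ConfigPallet {
    active: Config,
    // (session index at which the config activates, config)
    pending: Vec<(u32, Config)>,
}

impl ConfigPallet {
    // Mirrors the shape of `schedule_config_update`: clone the most recently
    // scheduled config (or the active one), mutate it, and queue it ahead.
    fn schedule_config_update(&mut self, current_session: u32, updater: impl FnOnce(&mut Config)) {
        let mut base = self
            .pending
            .last()
            .map(|(_, c)| c.clone())
            .unwrap_or_else(|| self.active.clone());
        updater(&mut base);
        self.pending.push((current_session + 2, base));
    }

    // At each session boundary, promote any pending config whose session arrived.
    fn on_new_session(&mut self, session: u32) {
        while matches!(self.pending.first(), Some((s, _)) if *s <= session) {
            self.active = self.pending.remove(0).1;
        }
    }
}

fn main() {
    let mut pallet = ConfigPallet { active: Config { on_demand_cores: 0 }, pending: Vec::new() };
    pallet.schedule_config_update(1, |c| c.on_demand_cores = 4);

    pallet.on_new_session(2);
    assert_eq!(pallet.active.on_demand_cores, 0); // not active yet

    pallet.on_new_session(3);
    assert_eq!(pallet.active.on_demand_cores, 4); // applied at the target session
}
```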
#[pallet::hooks]
@@ -47,6 +47,8 @@ benchmarks! {
ExecutorParam::PvfExecTimeout(PvfExecTimeoutKind::Approval, 12_000),
][..]))
set_config_with_perbill {}: set_on_demand_fee_variability(RawOrigin::Root, Perbill::from_percent(100))
impl_benchmark_test_suite!(
Pallet,
crate::mock::new_test_ext(Default::default()),
@@ -18,3 +18,4 @@
pub mod v6;
pub mod v7;
pub mod v8;
@@ -23,13 +23,106 @@ use frame_support::{
weights::Weight,
};
use frame_system::pallet_prelude::BlockNumberFor;
use primitives::SessionIndex;
use primitives::{vstaging::AsyncBackingParams, Balance, ExecutorParams, SessionIndex};
use sp_std::vec::Vec;
use frame_support::traits::OnRuntimeUpgrade;
use super::v6::V6HostConfiguration;
type V7HostConfiguration<BlockNumber> = configuration::HostConfiguration<BlockNumber>;
#[derive(parity_scale_codec::Encode, parity_scale_codec::Decode, Debug, Clone)]
pub struct V7HostConfiguration<BlockNumber> {
pub max_code_size: u32,
pub max_head_data_size: u32,
pub max_upward_queue_count: u32,
pub max_upward_queue_size: u32,
pub max_upward_message_size: u32,
pub max_upward_message_num_per_candidate: u32,
pub hrmp_max_message_num_per_candidate: u32,
pub validation_upgrade_cooldown: BlockNumber,
pub validation_upgrade_delay: BlockNumber,
pub async_backing_params: AsyncBackingParams,
pub max_pov_size: u32,
pub max_downward_message_size: u32,
pub hrmp_max_parachain_outbound_channels: u32,
pub hrmp_max_parathread_outbound_channels: u32,
pub hrmp_sender_deposit: Balance,
pub hrmp_recipient_deposit: Balance,
pub hrmp_channel_max_capacity: u32,
pub hrmp_channel_max_total_size: u32,
pub hrmp_max_parachain_inbound_channels: u32,
pub hrmp_max_parathread_inbound_channels: u32,
pub hrmp_channel_max_message_size: u32,
pub executor_params: ExecutorParams,
pub code_retention_period: BlockNumber,
pub parathread_cores: u32,
pub parathread_retries: u32,
pub group_rotation_frequency: BlockNumber,
pub chain_availability_period: BlockNumber,
pub thread_availability_period: BlockNumber,
pub scheduling_lookahead: u32,
pub max_validators_per_core: Option<u32>,
pub max_validators: Option<u32>,
pub dispute_period: SessionIndex,
pub dispute_post_conclusion_acceptance_period: BlockNumber,
pub no_show_slots: u32,
pub n_delay_tranches: u32,
pub zeroth_delay_tranche_width: u32,
pub needed_approvals: u32,
pub relay_vrf_modulo_samples: u32,
pub pvf_voting_ttl: SessionIndex,
pub minimum_validation_upgrade_delay: BlockNumber,
}
impl<BlockNumber: Default + From<u32>> Default for V7HostConfiguration<BlockNumber> {
fn default() -> Self {
Self {
async_backing_params: AsyncBackingParams {
max_candidate_depth: 0,
allowed_ancestry_len: 0,
},
group_rotation_frequency: 1u32.into(),
chain_availability_period: 1u32.into(),
thread_availability_period: 1u32.into(),
no_show_slots: 1u32.into(),
validation_upgrade_cooldown: Default::default(),
validation_upgrade_delay: 2u32.into(),
code_retention_period: Default::default(),
max_code_size: Default::default(),
max_pov_size: Default::default(),
max_head_data_size: Default::default(),
parathread_cores: Default::default(),
parathread_retries: Default::default(),
scheduling_lookahead: Default::default(),
max_validators_per_core: Default::default(),
max_validators: None,
dispute_period: 6,
dispute_post_conclusion_acceptance_period: 100.into(),
n_delay_tranches: Default::default(),
zeroth_delay_tranche_width: Default::default(),
needed_approvals: Default::default(),
relay_vrf_modulo_samples: Default::default(),
max_upward_queue_count: Default::default(),
max_upward_queue_size: Default::default(),
max_downward_message_size: Default::default(),
max_upward_message_size: Default::default(),
max_upward_message_num_per_candidate: Default::default(),
hrmp_sender_deposit: Default::default(),
hrmp_recipient_deposit: Default::default(),
hrmp_channel_max_capacity: Default::default(),
hrmp_channel_max_total_size: Default::default(),
hrmp_max_parachain_inbound_channels: Default::default(),
hrmp_max_parathread_inbound_channels: Default::default(),
hrmp_channel_max_message_size: Default::default(),
hrmp_max_parachain_outbound_channels: Default::default(),
hrmp_max_parathread_outbound_channels: Default::default(),
hrmp_max_message_num_per_candidate: Default::default(),
pvf_voting_ttl: 2u32.into(),
minimum_validation_upgrade_delay: 2.into(),
executor_params: Default::default(),
}
}
}
mod v6 {
use super::*;
@@ -0,0 +1,319 @@
// Copyright (C) Parity Technologies (UK) Ltd.
// This file is part of Polkadot.
// Polkadot is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// Polkadot is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with Polkadot. If not, see <http://www.gnu.org/licenses/>.
//! A module that is responsible for migration of storage.
use crate::configuration::{self, Config, Pallet};
use frame_support::{
pallet_prelude::*,
traits::{Defensive, StorageVersion},
weights::Weight,
};
use frame_system::pallet_prelude::BlockNumberFor;
use primitives::SessionIndex;
use sp_runtime::Perbill;
use sp_std::vec::Vec;
use frame_support::traits::OnRuntimeUpgrade;
use super::v7::V7HostConfiguration;
type V8HostConfiguration<BlockNumber> = configuration::HostConfiguration<BlockNumber>;
mod v7 {
use super::*;
#[frame_support::storage_alias]
pub(crate) type ActiveConfig<T: Config> =
StorageValue<Pallet<T>, V7HostConfiguration<BlockNumberFor<T>>, OptionQuery>;
#[frame_support::storage_alias]
pub(crate) type PendingConfigs<T: Config> = StorageValue<
Pallet<T>,
Vec<(SessionIndex, V7HostConfiguration<BlockNumberFor<T>>)>,
OptionQuery,
>;
}
mod v8 {
use super::*;
#[frame_support::storage_alias]
pub(crate) type ActiveConfig<T: Config> =
StorageValue<Pallet<T>, V8HostConfiguration<BlockNumberFor<T>>, OptionQuery>;
#[frame_support::storage_alias]
pub(crate) type PendingConfigs<T: Config> = StorageValue<
Pallet<T>,
Vec<(SessionIndex, V8HostConfiguration<BlockNumberFor<T>>)>,
OptionQuery,
>;
}
pub struct MigrateToV8<T>(sp_std::marker::PhantomData<T>);
impl<T: Config> OnRuntimeUpgrade for MigrateToV8<T> {
#[cfg(feature = "try-runtime")]
fn pre_upgrade() -> Result<Vec<u8>, sp_runtime::TryRuntimeError> {
log::trace!(target: crate::configuration::LOG_TARGET, "Running pre_upgrade() for HostConfiguration MigrateToV8");
Ok(Vec::new())
}
fn on_runtime_upgrade() -> Weight {
log::info!(target: configuration::LOG_TARGET, "HostConfiguration MigrateToV8 started");
if StorageVersion::get::<Pallet<T>>() == 7 {
let weight_consumed = migrate_to_v8::<T>();
log::info!(target: configuration::LOG_TARGET, "HostConfiguration MigrateToV8 executed successfully");
StorageVersion::new(8).put::<Pallet<T>>();
weight_consumed
} else {
log::warn!(target: configuration::LOG_TARGET, "HostConfiguration MigrateToV8 should be removed.");
T::DbWeight::get().reads(1)
}
}
#[cfg(feature = "try-runtime")]
fn post_upgrade(_state: Vec<u8>) -> Result<(), sp_runtime::TryRuntimeError> {
log::trace!(target: crate::configuration::LOG_TARGET, "Running post_upgrade() for HostConfiguration MigrateToV8");
ensure!(
StorageVersion::get::<Pallet<T>>() >= 8,
"Storage version should be >= 8 after the migration"
);
Ok(())
}
}
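The `on_runtime_upgrade` hook above gates the migration behind the on-chain storage version, so a second run after the version bump is a cheap no-op. A minimal standalone model of that gate (plain integers instead of `StorageVersion`, purely illustrative):

```rust
// Toy model of a version-gated migration: run only when the stored version
// matches the expected pre-migration version, then bump it.
struct Chain {
    storage_version: u16,
    migrated: bool,
}

fn migrate_to_v8(chain: &mut Chain) -> &'static str {
    if chain.storage_version == 7 {
        chain.migrated = true; // stands in for the real data translation
        chain.storage_version = 8;
        "migrated"
    } else {
        // Wrong version: log-and-skip, consuming only the version read.
        "skipped"
    }
}

fn main() {
    let mut chain = Chain { storage_version: 7, migrated: false };
    assert_eq!(migrate_to_v8(&mut chain), "migrated");
    assert_eq!(chain.storage_version, 8);

    // A second run finds version 8 and does nothing.
    assert_eq!(migrate_to_v8(&mut chain), "skipped");
    assert_eq!(chain.storage_version, 8);
}
```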
fn migrate_to_v8<T: Config>() -> Weight {
// Unusual formatting is justified:
// - make it easier to verify that fields assign what they are supposed to assign.
// - this code is transient and will be removed after all migrations are done.
// - this code is important enough to optimize for legibility sacrificing consistency.
#[rustfmt::skip]
let translate =
|pre: V7HostConfiguration<BlockNumberFor<T>>| ->
V8HostConfiguration<BlockNumberFor<T>>
{
V8HostConfiguration {
max_code_size : pre.max_code_size,
max_head_data_size : pre.max_head_data_size,
max_upward_queue_count : pre.max_upward_queue_count,
max_upward_queue_size : pre.max_upward_queue_size,
max_upward_message_size : pre.max_upward_message_size,
max_upward_message_num_per_candidate : pre.max_upward_message_num_per_candidate,
hrmp_max_message_num_per_candidate : pre.hrmp_max_message_num_per_candidate,
validation_upgrade_cooldown : pre.validation_upgrade_cooldown,
validation_upgrade_delay : pre.validation_upgrade_delay,
max_pov_size : pre.max_pov_size,
max_downward_message_size : pre.max_downward_message_size,
hrmp_sender_deposit : pre.hrmp_sender_deposit,
hrmp_recipient_deposit : pre.hrmp_recipient_deposit,
hrmp_channel_max_capacity : pre.hrmp_channel_max_capacity,
hrmp_channel_max_total_size : pre.hrmp_channel_max_total_size,
hrmp_max_parachain_inbound_channels : pre.hrmp_max_parachain_inbound_channels,
hrmp_max_parachain_outbound_channels : pre.hrmp_max_parachain_outbound_channels,
hrmp_channel_max_message_size : pre.hrmp_channel_max_message_size,
code_retention_period : pre.code_retention_period,
on_demand_cores : pre.parathread_cores,
on_demand_retries : pre.parathread_retries,
group_rotation_frequency : pre.group_rotation_frequency,
paras_availability_period : pre.chain_availability_period,
scheduling_lookahead : pre.scheduling_lookahead,
max_validators_per_core : pre.max_validators_per_core,
max_validators : pre.max_validators,
dispute_period : pre.dispute_period,
dispute_post_conclusion_acceptance_period: pre.dispute_post_conclusion_acceptance_period,
no_show_slots : pre.no_show_slots,
n_delay_tranches : pre.n_delay_tranches,
zeroth_delay_tranche_width : pre.zeroth_delay_tranche_width,
needed_approvals : pre.needed_approvals,
relay_vrf_modulo_samples : pre.relay_vrf_modulo_samples,
pvf_voting_ttl : pre.pvf_voting_ttl,
minimum_validation_upgrade_delay : pre.minimum_validation_upgrade_delay,
async_backing_params : pre.async_backing_params,
executor_params : pre.executor_params,
on_demand_queue_max_size : 10_000u32,
on_demand_base_fee : 10_000_000u128,
on_demand_fee_variability : Perbill::from_percent(3),
on_demand_target_queue_utilization : Perbill::from_percent(25),
on_demand_ttl : 5u32.into(),
}
};
let v7 = v7::ActiveConfig::<T>::get()
.defensive_proof("Could not decode old config")
.unwrap_or_default();
let v8 = translate(v7);
v8::ActiveConfig::<T>::set(Some(v8));
// Allowed to be empty.
let pending_v7 = v7::PendingConfigs::<T>::get().unwrap_or_default();
let mut pending_v8 = Vec::new();
for (session, v7) in pending_v7.into_iter() {
let v8 = translate(v7);
pending_v8.push((session, v8));
}
v8::PendingConfigs::<T>::set(Some(pending_v8.clone()));
let num_configs = (pending_v8.len() + 1) as u64;
T::DbWeight::get().reads_writes(num_configs, num_configs)
}
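The weight returned above charges one read and one write per translated config: the active config plus each pending entry. A quick standalone check of that arithmetic (the per-operation costs are arbitrary placeholders, not real `DbWeight` values):

```rust
// num_configs = pending configs + 1 (the active config itself).
fn migration_weight(pending: usize, read_cost: u64, write_cost: u64) -> u64 {
    let num_configs = (pending + 1) as u64;
    num_configs * read_cost + num_configs * write_cost
}

fn main() {
    // Two pending configs -> three reads and three writes.
    assert_eq!(migration_weight(2, 10, 100), 3 * 10 + 3 * 100);
    // No pending configs -> still one read/write for the active config.
    assert_eq!(migration_weight(0, 10, 100), 110);
}
```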
#[cfg(test)]
mod tests {
use super::*;
use crate::mock::{new_test_ext, Test};
#[test]
fn v8_deserialized_from_actual_data() {
// Example how to get new `raw_config`:
// We'll obtain the raw_config at a specified block
// Steps:
// 1. Go to Polkadot.js -> Developer -> Chain state -> Storage: https://polkadot.js.org/apps/#/chainstate
// 2. Set these parameters:
// 2.1. selected state query: configuration; activeConfig():
// PolkadotRuntimeParachainsConfigurationHostConfiguration
// 2.2. blockhash to query at:
// 0xf89d3ab5312c5f70d396dc59612f0aa65806c798346f9db4b35278baed2e0e53 (the hash of
// the block)
// 2.3. Note the value of encoded storage key ->
// 0x06de3d8a54d27e44a9d5ce189618f22db4b49d95320d9021994c850f25b8e385 for the
// referenced block.
// 2.4. You'll also need the decoded values to update the test.
// 3. Go to Polkadot.js -> Developer -> Chain state -> Raw storage
// 3.1 Enter the encoded storage key and you get the raw config.
// This exceeds the maximum line width, but that's fine, since this is not code and
// doesn't need to be read; leaving it as one line also makes it easy to copy.
let raw_config =
hex_literal::hex!["
0000300000800000080000000000100000c8000005000000050000000200000002000000000000000000000000005000000010000400000000000000000000000000000000000000000000000000000000000000000000000800000000200000040000000000100000b004000000000000000000001027000080b2e60e80c3c9018096980000000000000000000000000005000000140000000400000001000000010100000000060000006400000002000000190000000000000002000000020000000200000005000000"
];
let v8 =
V8HostConfiguration::<primitives::BlockNumber>::decode(&mut &raw_config[..]).unwrap();
// We check only a sample of the values here. If we missed any fields or messed up data
// types that would skew all the fields coming after.
assert_eq!(v8.max_code_size, 3_145_728);
assert_eq!(v8.validation_upgrade_cooldown, 2);
assert_eq!(v8.max_pov_size, 5_242_880);
assert_eq!(v8.hrmp_channel_max_message_size, 1_048_576);
assert_eq!(v8.n_delay_tranches, 25);
assert_eq!(v8.minimum_validation_upgrade_delay, 5);
assert_eq!(v8.group_rotation_frequency, 20);
assert_eq!(v8.on_demand_cores, 0);
assert_eq!(v8.on_demand_base_fee, 10_000_000);
}
#[test]
fn test_migrate_to_v8() {
// Host configuration has lots of fields. However, this migration only renames a few
// fields and merges the two availability periods. The most important parts to check are
// a couple of the last fields. We also pick extra fields to check arbitrarily, e.g.
// depending on their position (i.e. the middle) and also their type.
//
// We specify only the picked fields and the rest should be provided by the `Default`
// implementation. That implementation is copied over between the two types and should work
// fine.
let v7 = V7HostConfiguration::<primitives::BlockNumber> {
needed_approvals: 69,
thread_availability_period: 55,
hrmp_recipient_deposit: 1337,
max_pov_size: 1111,
chain_availability_period: 33,
minimum_validation_upgrade_delay: 20,
..Default::default()
};
let mut pending_configs = Vec::new();
pending_configs.push((100, v7.clone()));
pending_configs.push((300, v7.clone()));
new_test_ext(Default::default()).execute_with(|| {
// Implant the v7 version in the state.
v7::ActiveConfig::<Test>::set(Some(v7));
v7::PendingConfigs::<Test>::set(Some(pending_configs));
migrate_to_v8::<Test>();
let v8 = v8::ActiveConfig::<Test>::get().unwrap();
let mut configs_to_check = v8::PendingConfigs::<Test>::get().unwrap();
configs_to_check.push((0, v8.clone()));
for (_, v7) in configs_to_check {
#[rustfmt::skip]
{
assert_eq!(v7.max_code_size , v8.max_code_size);
assert_eq!(v7.max_head_data_size , v8.max_head_data_size);
assert_eq!(v7.max_upward_queue_count , v8.max_upward_queue_count);
assert_eq!(v7.max_upward_queue_size , v8.max_upward_queue_size);
assert_eq!(v7.max_upward_message_size , v8.max_upward_message_size);
assert_eq!(v7.max_upward_message_num_per_candidate , v8.max_upward_message_num_per_candidate);
assert_eq!(v7.hrmp_max_message_num_per_candidate , v8.hrmp_max_message_num_per_candidate);
assert_eq!(v7.validation_upgrade_cooldown , v8.validation_upgrade_cooldown);
assert_eq!(v7.validation_upgrade_delay , v8.validation_upgrade_delay);
assert_eq!(v7.max_pov_size , v8.max_pov_size);
assert_eq!(v7.max_downward_message_size , v8.max_downward_message_size);
assert_eq!(v7.hrmp_max_parachain_outbound_channels , v8.hrmp_max_parachain_outbound_channels);
assert_eq!(v7.hrmp_sender_deposit , v8.hrmp_sender_deposit);
assert_eq!(v7.hrmp_recipient_deposit , v8.hrmp_recipient_deposit);
assert_eq!(v7.hrmp_channel_max_capacity , v8.hrmp_channel_max_capacity);
assert_eq!(v7.hrmp_channel_max_total_size , v8.hrmp_channel_max_total_size);
assert_eq!(v7.hrmp_max_parachain_inbound_channels , v8.hrmp_max_parachain_inbound_channels);
assert_eq!(v7.hrmp_channel_max_message_size , v8.hrmp_channel_max_message_size);
assert_eq!(v7.code_retention_period , v8.code_retention_period);
assert_eq!(v7.on_demand_cores , v8.on_demand_cores);
assert_eq!(v7.on_demand_retries , v8.on_demand_retries);
assert_eq!(v7.group_rotation_frequency , v8.group_rotation_frequency);
assert_eq!(v7.paras_availability_period , v8.paras_availability_period);
assert_eq!(v7.scheduling_lookahead , v8.scheduling_lookahead);
assert_eq!(v7.max_validators_per_core , v8.max_validators_per_core);
assert_eq!(v7.max_validators , v8.max_validators);
assert_eq!(v7.dispute_period , v8.dispute_period);
assert_eq!(v7.no_show_slots , v8.no_show_slots);
assert_eq!(v7.n_delay_tranches , v8.n_delay_tranches);
assert_eq!(v7.zeroth_delay_tranche_width , v8.zeroth_delay_tranche_width);
assert_eq!(v7.needed_approvals , v8.needed_approvals);
assert_eq!(v7.relay_vrf_modulo_samples , v8.relay_vrf_modulo_samples);
assert_eq!(v7.pvf_voting_ttl , v8.pvf_voting_ttl);
assert_eq!(v7.minimum_validation_upgrade_delay , v8.minimum_validation_upgrade_delay);
assert_eq!(v7.async_backing_params.allowed_ancestry_len, v8.async_backing_params.allowed_ancestry_len);
assert_eq!(v7.async_backing_params.max_candidate_depth , v8.async_backing_params.max_candidate_depth);
assert_eq!(v7.executor_params , v8.executor_params);
}; // ; makes this a statement. `rustfmt::skip` cannot be put on an expression.
}
});
}
// Test that the migration doesn't panic in case there are no pending configuration
// upgrades in the pallet's storage.
#[test]
fn test_migrate_to_v8_no_pending() {
let v7 = V7HostConfiguration::<primitives::BlockNumber>::default();
new_test_ext(Default::default()).execute_with(|| {
// Implant the v7 version in the state.
v7::ActiveConfig::<Test>::set(Some(v7));
// Ensure there are no pending configs.
v7::PendingConfigs::<Test>::set(None);
// Shouldn't fail.
migrate_to_v8::<Test>();
});
}
}
@@ -216,11 +216,7 @@ fn invariants() {
);
assert_err!(
Configuration::set_chain_availability_period(RuntimeOrigin::root(), 0),
Error::<Test>::InvalidNewValue
);
assert_err!(
Configuration::set_thread_availability_period(RuntimeOrigin::root(), 0),
Configuration::set_paras_availability_period(RuntimeOrigin::root(), 0),
Error::<Test>::InvalidNewValue
);
assert_err!(
@@ -229,17 +225,12 @@ fn invariants() {
);
ActiveConfig::<Test>::put(HostConfiguration {
chain_availability_period: 10,
thread_availability_period: 8,
paras_availability_period: 10,
minimum_validation_upgrade_delay: 11,
..Default::default()
});
assert_err!(
Configuration::set_chain_availability_period(RuntimeOrigin::root(), 12),
Error::<Test>::InvalidNewValue
);
assert_err!(
Configuration::set_thread_availability_period(RuntimeOrigin::root(), 12),
Configuration::set_paras_availability_period(RuntimeOrigin::root(), 12),
Error::<Test>::InvalidNewValue
);
assert_err!(
@@ -291,11 +282,10 @@ fn setting_pending_config_members() {
max_code_size: 100_000,
max_pov_size: 1024,
max_head_data_size: 1_000,
parathread_cores: 2,
parathread_retries: 5,
on_demand_cores: 2,
on_demand_retries: 5,
group_rotation_frequency: 20,
chain_availability_period: 10,
thread_availability_period: 8,
paras_availability_period: 10,
scheduling_lookahead: 3,
max_validators_per_core: None,
max_validators: None,
@@ -316,14 +306,17 @@ fn setting_pending_config_members() {
hrmp_channel_max_capacity: 3921,
hrmp_channel_max_total_size: 7687,
hrmp_max_parachain_inbound_channels: 37,
hrmp_max_parathread_inbound_channels: 19,
hrmp_channel_max_message_size: 8192,
hrmp_max_parachain_outbound_channels: 10,
hrmp_max_parathread_outbound_channels: 20,
hrmp_max_message_num_per_candidate: 20,
pvf_voting_ttl: 3,
minimum_validation_upgrade_delay: 20,
executor_params: Default::default(),
on_demand_queue_max_size: 10_000u32,
on_demand_base_fee: 10_000_000u128,
on_demand_fee_variability: Perbill::from_percent(3),
on_demand_target_queue_utilization: Perbill::from_percent(25),
on_demand_ttl: 5u32,
};
Configuration::set_validation_upgrade_cooldown(
@@ -345,9 +338,9 @@ fn setting_pending_config_members() {
Configuration::set_max_pov_size(RuntimeOrigin::root(), new_config.max_pov_size).unwrap();
Configuration::set_max_head_data_size(RuntimeOrigin::root(), new_config.max_head_data_size)
.unwrap();
Configuration::set_parathread_cores(RuntimeOrigin::root(), new_config.parathread_cores)
Configuration::set_on_demand_cores(RuntimeOrigin::root(), new_config.on_demand_cores)
.unwrap();
Configuration::set_parathread_retries(RuntimeOrigin::root(), new_config.parathread_retries)
Configuration::set_on_demand_retries(RuntimeOrigin::root(), new_config.on_demand_retries)
.unwrap();
Configuration::set_group_rotation_frequency(
RuntimeOrigin::root(),
@@ -361,14 +354,9 @@ fn setting_pending_config_members() {
new_config.minimum_validation_upgrade_delay,
)
.unwrap();
Configuration::set_chain_availability_period(
Configuration::set_paras_availability_period(
RuntimeOrigin::root(),
new_config.chain_availability_period,
)
.unwrap();
Configuration::set_thread_availability_period(
RuntimeOrigin::root(),
new_config.thread_availability_period,
new_config.paras_availability_period,
)
.unwrap();
Configuration::set_scheduling_lookahead(
@@ -462,11 +450,6 @@ fn setting_pending_config_members() {
new_config.hrmp_max_parachain_inbound_channels,
)
.unwrap();
Configuration::set_hrmp_max_parathread_inbound_channels(
RuntimeOrigin::root(),
new_config.hrmp_max_parathread_inbound_channels,
)
.unwrap();
Configuration::set_hrmp_channel_max_message_size(
RuntimeOrigin::root(),
new_config.hrmp_channel_max_message_size,
@@ -477,11 +460,6 @@ fn setting_pending_config_members() {
new_config.hrmp_max_parachain_outbound_channels,
)
.unwrap();
Configuration::set_hrmp_max_parathread_outbound_channels(
RuntimeOrigin::root(),
new_config.hrmp_max_parathread_outbound_channels,
)
.unwrap();
Configuration::set_hrmp_max_message_num_per_candidate(
RuntimeOrigin::root(),
new_config.hrmp_max_message_num_per_candidate,
@@ -94,8 +94,9 @@ impl fmt::Debug for ProcessedDownwardMessagesAcceptanceErr {
fn fmt(&self, fmt: &mut fmt::Formatter) -> fmt::Result {
use ProcessedDownwardMessagesAcceptanceErr::*;
match *self {
AdvancementRule =>
write!(fmt, "DMQ is not empty, but processed_downward_messages is 0",),
AdvancementRule => {
write!(fmt, "DMQ is not empty, but processed_downward_messages is 0",)
},
Underflow { processed_downward_messages, dmq_length } => write!(
fmt,
"processed_downward_messages = {}, but dmq_length is only {}",
@@ -1184,11 +1184,7 @@ impl<T: Config> Pallet<T> {
let egress_cnt = HrmpEgressChannelsIndex::<T>::decode_len(&origin).unwrap_or(0) as u32;
let open_req_cnt = HrmpOpenChannelRequestCount::<T>::get(&origin);
let channel_num_limit = if <paras::Pallet<T>>::is_parathread(origin) {
config.hrmp_max_parathread_outbound_channels
} else {
config.hrmp_max_parachain_outbound_channels
};
let channel_num_limit = config.hrmp_max_parachain_outbound_channels;
ensure!(
egress_cnt + open_req_cnt < channel_num_limit,
Error::<T>::OpenHrmpChannelLimitExceeded,
@@ -1254,11 +1250,7 @@ impl<T: Config> Pallet<T> {
// check if by accepting this open channel request, this parachain would exceed the
// number of inbound channels.
let config = <configuration::Pallet<T>>::config();
let channel_num_limit = if <paras::Pallet<T>>::is_parathread(origin) {
config.hrmp_max_parathread_inbound_channels
} else {
config.hrmp_max_parachain_inbound_channels
};
let channel_num_limit = config.hrmp_max_parachain_inbound_channels;
let ingress_cnt = HrmpIngressChannelsIndex::<T>::decode_len(&origin).unwrap_or(0) as u32;
let accepted_cnt = HrmpAcceptedChannelRequestCount::<T>::get(&origin);
ensure!(
@@ -69,10 +69,8 @@ pub(crate) fn run_to_block(to: BlockNumber, new_session: Option<Vec<BlockNumber>
pub(super) struct GenesisConfigBuilder {
hrmp_channel_max_capacity: u32,
hrmp_channel_max_message_size: u32,
hrmp_max_parathread_outbound_channels: u32,
hrmp_max_parachain_outbound_channels: u32,
hrmp_max_parathread_inbound_channels: u32,
hrmp_max_parachain_inbound_channels: u32,
hrmp_max_paras_outbound_channels: u32,
hrmp_max_paras_inbound_channels: u32,
hrmp_max_message_num_per_candidate: u32,
hrmp_channel_max_total_size: u32,
hrmp_sender_deposit: Balance,
@@ -84,10 +82,8 @@ impl Default for GenesisConfigBuilder {
Self {
hrmp_channel_max_capacity: 2,
hrmp_channel_max_message_size: 8,
hrmp_max_parathread_outbound_channels: 1,
hrmp_max_parachain_outbound_channels: 2,
hrmp_max_parathread_inbound_channels: 1,
hrmp_max_parachain_inbound_channels: 2,
hrmp_max_paras_outbound_channels: 2,
hrmp_max_paras_inbound_channels: 2,
hrmp_max_message_num_per_candidate: 2,
hrmp_channel_max_total_size: 16,
hrmp_sender_deposit: 100,
@@ -102,10 +98,8 @@ impl GenesisConfigBuilder {
let config = &mut genesis.configuration.config;
config.hrmp_channel_max_capacity = self.hrmp_channel_max_capacity;
config.hrmp_channel_max_message_size = self.hrmp_channel_max_message_size;
config.hrmp_max_parathread_outbound_channels = self.hrmp_max_parathread_outbound_channels;
config.hrmp_max_parachain_outbound_channels = self.hrmp_max_parachain_outbound_channels;
config.hrmp_max_parathread_inbound_channels = self.hrmp_max_parathread_inbound_channels;
config.hrmp_max_parachain_inbound_channels = self.hrmp_max_parachain_inbound_channels;
config.hrmp_max_parachain_outbound_channels = self.hrmp_max_paras_outbound_channels;
config.hrmp_max_parachain_inbound_channels = self.hrmp_max_paras_inbound_channels;
config.hrmp_max_message_num_per_candidate = self.hrmp_max_message_num_per_candidate;
config.hrmp_channel_max_total_size = self.hrmp_channel_max_total_size;
config.hrmp_sender_deposit = self.hrmp_sender_deposit;
@@ -23,7 +23,7 @@
use crate::{
configuration::{self, HostConfiguration},
disputes, dmp, hrmp, paras,
scheduler::CoreAssignment,
scheduler::common::CoreAssignment,
shared,
};
use bitvec::{order::Lsb0 as BitOrderLsb0, vec::BitVec};
@@ -178,7 +178,7 @@ pub trait RewardValidators {
#[derive(Encode, Decode, PartialEq, TypeInfo)]
#[cfg_attr(test, derive(Debug))]
pub(crate) struct ProcessedCandidates<H = Hash> {
pub(crate) core_indices: Vec<CoreIndex>,
pub(crate) core_indices: Vec<(CoreIndex, ParaId)>,
pub(crate) candidate_receipt_with_backing_validator_indices:
Vec<(CandidateReceipt<H>, Vec<(ValidatorIndex, ValidityAttestation)>)>,
}
@@ -322,8 +322,6 @@ pub mod pallet {
UnscheduledCandidate,
/// Candidate scheduled despite pending candidate already existing for the para.
CandidateScheduledBeforeParaFree,
/// Candidate included with the wrong collator.
WrongCollator,
/// Scheduled cores out of order.
ScheduledOutOfOrder,
/// Head data exceeds the configured maximum.
@@ -599,7 +597,7 @@ impl<T: Config> Pallet<T> {
pub(crate) fn process_candidates<GV>(
parent_storage_root: T::Hash,
candidates: Vec<BackedCandidate<T::Hash>>,
scheduled: Vec<CoreAssignment>,
scheduled: Vec<CoreAssignment<BlockNumberFor<T>>>,
group_validators: GV,
) -> Result<ProcessedCandidates<T::Hash>, DispatchError>
where
@@ -630,15 +628,16 @@ impl<T: Config> Pallet<T> {
let mut core_indices_and_backers = Vec::with_capacity(candidates.len());
let mut last_core = None;
let mut check_assignment_in_order = |assignment: &CoreAssignment| -> DispatchResult {
ensure!(
last_core.map_or(true, |core| assignment.core > core),
Error::<T>::ScheduledOutOfOrder,
);
let mut check_assignment_in_order =
|assignment: &CoreAssignment<BlockNumberFor<T>>| -> DispatchResult {
ensure!(
last_core.map_or(true, |core| assignment.core > core),
Error::<T>::ScheduledOutOfOrder,
);
last_core = Some(assignment.core);
Ok(())
};
last_core = Some(assignment.core);
Ok(())
};
let signing_context =
SigningContext { parent_hash, session_index: shared::Pallet::<T>::session_index() };
@@ -680,17 +679,10 @@ impl<T: Config> Pallet<T> {
let para_id = backed_candidate.descriptor().para_id;
let mut backers = bitvec::bitvec![u8, BitOrderLsb0; 0; validators.len()];
for (i, assignment) in scheduled[skip..].iter().enumerate() {
check_assignment_in_order(assignment)?;
if para_id == assignment.para_id {
if let Some(required_collator) = assignment.required_collator() {
ensure!(
required_collator == &backed_candidate.descriptor().collator,
Error::<T>::WrongCollator,
);
}
for (i, core_assignment) in scheduled[skip..].iter().enumerate() {
check_assignment_in_order(core_assignment)?;
if para_id == core_assignment.paras_entry.para_id() {
ensure!(
<PendingAvailability<T>>::get(&para_id).is_none() &&
<PendingAvailabilityCommitments<T>>::get(&para_id).is_none(),
@@ -700,7 +692,7 @@ impl<T: Config> Pallet<T> {
// account for already skipped, and then skip this one.
skip = i + skip + 1;
let group_vals = group_validators(assignment.group_idx)
let group_vals = group_validators(core_assignment.group_idx)
.ok_or_else(|| Error::<T>::InvalidGroupIndex)?;
// check the signatures in the backing and that it is a majority.
@@ -752,9 +744,9 @@ impl<T: Config> Pallet<T> {
}
core_indices_and_backers.push((
assignment.core,
(core_assignment.core, core_assignment.paras_entry.para_id()),
backers,
assignment.group_idx,
core_assignment.group_idx,
));
continue 'next_backed_candidate
}
@@ -788,7 +780,7 @@ impl<T: Config> Pallet<T> {
Self::deposit_event(Event::<T>::CandidateBacked(
candidate.candidate.to_plain(),
candidate.candidate.commitments.head_data.clone(),
core,
core.0,
group,
));
@@ -800,7 +792,7 @@ impl<T: Config> Pallet<T> {
<PendingAvailability<T>>::insert(
&para_id,
CandidatePendingAvailability {
core,
core: core.0,
hash: candidate_hash,
descriptor,
availability_votes,
@@ -24,7 +24,6 @@ use crate::{
},
paras::{ParaGenesisArgs, ParaKind},
paras_inherent::DisputedBitfield,
scheduler::AssignmentKind,
};
use primitives::{SignedAvailabilityBitfields, UncheckedSignedAvailabilityBitfields};
@@ -33,6 +32,7 @@ use frame_support::assert_noop;
use keyring::Sr25519Keyring;
use parity_scale_codec::DecodeAll;
use primitives::{
v5::{Assignment, ParasEntry},
BlockNumber, CandidateCommitments, CandidateDescriptor, CollatorId,
CompactStatement as Statement, Hash, SignedAvailabilityBitfield, SignedStatement,
ValidationCode, ValidatorId, ValidityAttestation, PARACHAIN_KEY_TYPE_ID,
@@ -44,7 +44,7 @@ use test_helpers::{dummy_collator, dummy_collator_signature, dummy_validation_co
fn default_config() -> HostConfiguration<BlockNumber> {
let mut config = HostConfiguration::default();
config.parathread_cores = 1;
config.on_demand_cores = 1;
config.max_code_size = 0b100000;
config.max_head_data_size = 0b100000;
config
@@ -201,7 +201,7 @@ pub(crate) fn run_to_block(
}
pub(crate) fn expected_bits() -> usize {
Paras::parachains().len() + Configuration::config().parathread_cores as usize
Paras::parachains().len() + Configuration::config().on_demand_cores as usize
}
fn default_bitfield() -> AvailabilityBitfield {
@@ -877,26 +877,23 @@ fn candidate_checks() {
.map(|m| m.into_iter().map(ValidatorIndex).collect::<Vec<_>>())
};
let entry_ttl = 10_000;
let thread_collator: CollatorId = Sr25519Keyring::Two.public().into();
let chain_a_assignment = CoreAssignment {
core: CoreIndex::from(0),
para_id: chain_a,
kind: AssignmentKind::Parachain,
paras_entry: ParasEntry::new(Assignment::new(chain_a), entry_ttl),
group_idx: GroupIndex::from(0),
};
let chain_b_assignment = CoreAssignment {
core: CoreIndex::from(1),
para_id: chain_b,
kind: AssignmentKind::Parachain,
paras_entry: ParasEntry::new(Assignment::new(chain_b), entry_ttl),
group_idx: GroupIndex::from(1),
};
let thread_a_assignment = CoreAssignment {
core: CoreIndex::from(2),
para_id: thread_a,
kind: AssignmentKind::Parathread(thread_collator.clone(), 0),
paras_entry: ParasEntry::new(Assignment::new(thread_a), entry_ttl),
group_idx: GroupIndex::from(2),
};
@@ -1056,45 +1053,6 @@ fn candidate_checks() {
);
}
// candidate has wrong collator.
{
let mut candidate = TestCandidateBuilder {
para_id: thread_a,
relay_parent: System::parent_hash(),
pov_hash: Hash::repeat_byte(1),
persisted_validation_data_hash: make_vdata_hash(thread_a).unwrap(),
hrmp_watermark: RELAY_PARENT_NUM,
..Default::default()
}
.build();
assert!(CollatorId::from(Sr25519Keyring::One.public()) != thread_collator);
collator_sign_candidate(Sr25519Keyring::One, &mut candidate);
let backed = back_candidate(
candidate,
&validators,
group_validators(GroupIndex::from(2)).unwrap().as_ref(),
&keystore,
&signing_context,
BackingKind::Threshold,
);
assert_noop!(
ParaInclusion::process_candidates(
Default::default(),
vec![backed],
vec![
chain_a_assignment.clone(),
chain_b_assignment.clone(),
thread_a_assignment.clone(),
],
&group_validators,
),
Error::<Test>::WrongCollator,
);
}
// candidate not well-signed by collator.
{
let mut candidate = TestCandidateBuilder {
@@ -1424,26 +1382,23 @@ fn backing_works() {
.map(|vs| vs.into_iter().map(ValidatorIndex).collect::<Vec<_>>())
};
let thread_collator: CollatorId = Sr25519Keyring::Two.public().into();
let entry_ttl = 10_000;
let chain_a_assignment = CoreAssignment {
core: CoreIndex::from(0),
para_id: chain_a,
kind: AssignmentKind::Parachain,
paras_entry: ParasEntry::new(Assignment::new(chain_a), entry_ttl),
group_idx: GroupIndex::from(0),
};
let chain_b_assignment = CoreAssignment {
core: CoreIndex::from(1),
para_id: chain_b,
kind: AssignmentKind::Parachain,
paras_entry: ParasEntry::new(Assignment::new(chain_b), entry_ttl),
group_idx: GroupIndex::from(1),
};
let thread_a_assignment = CoreAssignment {
core: CoreIndex::from(2),
para_id: thread_a,
kind: AssignmentKind::Parathread(thread_collator.clone(), 0),
paras_entry: ParasEntry::new(Assignment::new(thread_a), entry_ttl),
group_idx: GroupIndex::from(2),
};
@@ -1507,7 +1462,7 @@ fn backing_works() {
BackingKind::Threshold,
);
let backed_candidates = vec![backed_a, backed_b, backed_c];
let backed_candidates = vec![backed_a.clone(), backed_b.clone(), backed_c];
let get_backing_group_idx = {
// the order defines the group implicitly for this test case
let backed_candidates_with_groups = backed_candidates
@@ -1544,7 +1499,11 @@ fn backing_works() {
assert_eq!(
occupied_cores,
vec![CoreIndex::from(0), CoreIndex::from(1), CoreIndex::from(2)]
vec![
(CoreIndex::from(0), chain_a),
(CoreIndex::from(1), chain_b),
(CoreIndex::from(2), thread_a)
]
);
// Transform the votes into the setup we expect
@@ -1702,10 +1661,11 @@ fn can_include_candidate_with_ok_code_upgrade() {
.map(|vs| vs.into_iter().map(ValidatorIndex).collect::<Vec<_>>())
};
let entry_ttl = 10_000;
let chain_a_assignment = CoreAssignment {
core: CoreIndex::from(0),
para_id: chain_a,
kind: AssignmentKind::Parachain,
paras_entry: ParasEntry::new(Assignment::new(chain_a), entry_ttl),
group_idx: GroupIndex::from(0),
};
@@ -1739,7 +1699,7 @@ fn can_include_candidate_with_ok_code_upgrade() {
)
.expect("candidates scheduled, in order, and backed");
assert_eq!(occupied_cores, vec![CoreIndex::from(0)]);
assert_eq!(occupied_cores, vec![(CoreIndex::from(0), chain_a)]);
let backers = {
let num_backers = minimum_backing_votes(group_validators(GroupIndex(0)).unwrap().len());
@@ -1958,8 +1918,11 @@ fn para_upgrade_delay_scheduled_from_inclusion() {
let chain_a_assignment = CoreAssignment {
core: CoreIndex::from(0),
para_id: chain_a,
kind: AssignmentKind::Parachain,
paras_entry: ParasEntry {
assignment: Assignment { para_id: chain_a },
availability_timeouts: 0,
ttl: 5,
},
group_idx: GroupIndex::from(0),
};
@@ -1993,7 +1956,7 @@ fn para_upgrade_delay_scheduled_from_inclusion() {
)
.expect("candidates scheduled, in order, and backed");
assert_eq!(occupied_cores, vec![CoreIndex::from(0)]);
assert_eq!(occupied_cores, vec![(CoreIndex::from(0), chain_a)]);
// Run a couple of blocks before the inclusion.
run_to_block(7, |_| None);
@@ -240,6 +240,9 @@ impl<T: Config> Pallet<T> {
buf
};
// inform about upcoming new session
scheduler::Pallet::<T>::pre_new_session();
let configuration::SessionChangeOutcome { prev_config, new_config } =
configuration::Pallet::<T>::initializer_on_new_session(&session_index);
let new_config = new_config.unwrap_or_else(|| prev_config.clone());
@@ -23,6 +23,9 @@
#![cfg_attr(feature = "runtime-benchmarks", recursion_limit = "256")]
#![cfg_attr(not(feature = "std"), no_std)]
pub mod assigner;
pub mod assigner_on_demand;
pub mod assigner_parachains;
pub mod configuration;
pub mod disputes;
pub mod dmp;
@@ -17,7 +17,7 @@
//! Mocks for all the traits.
use crate::{
configuration, disputes, dmp, hrmp,
assigner, assigner_on_demand, assigner_parachains, configuration, disputes, dmp, hrmp,
inclusion::{self, AggregateMessageOrigin, UmpQueueId},
initializer, origin, paras,
paras::ParaKind,
@@ -43,7 +43,7 @@ use sp_io::TestExternalities;
use sp_runtime::{
traits::{AccountIdConversion, BlakeTwo256, IdentityLookup},
transaction_validity::TransactionPriority,
BuildStorage, Perbill, Permill,
BuildStorage, FixedU128, Perbill, Permill,
};
use std::{cell::RefCell, collections::HashMap};
@@ -62,6 +62,9 @@ frame_support::construct_runtime!(
ParaInclusion: inclusion,
ParaInherent: paras_inherent,
Scheduler: scheduler,
Assigner: assigner,
OnDemandAssigner: assigner_on_demand,
ParachainsAssigner: assigner_parachains,
Initializer: initializer,
Dmp: dmp,
Hrmp: hrmp,
@@ -281,7 +284,9 @@ impl crate::disputes::SlashingHandler<BlockNumber> for Test {
fn initializer_on_new_session(_: SessionIndex) {}
}
impl crate::scheduler::Config for Test {}
impl crate::scheduler::Config for Test {
type AssignmentProvider = Assigner;
}
pub struct TestMessageQueueWeight;
impl pallet_message_queue::WeightInfo for TestMessageQueueWeight {
@@ -334,6 +339,24 @@ impl pallet_message_queue::Config for Test {
type ServiceWeight = MessageQueueServiceWeight;
}
impl assigner::Config for Test {
type ParachainsAssignmentProvider = ParachainsAssigner;
type OnDemandAssignmentProvider = OnDemandAssigner;
}
impl assigner_parachains::Config for Test {}
parameter_types! {
pub const OnDemandTrafficDefaultValue: FixedU128 = FixedU128::from_u32(1);
}
impl assigner_on_demand::Config for Test {
type RuntimeEvent = RuntimeEvent;
type Currency = Balances;
type TrafficDefaultValue = OnDemandTrafficDefaultValue;
type WeightInfo = crate::assigner_on_demand::TestWeightInfo;
}
impl crate::inclusion::Config for Test {
type WeightInfo = ();
type RuntimeEvent = RuntimeEvent;
@@ -746,8 +746,7 @@ fn full_parachain_cleanup_storage() {
minimum_validation_upgrade_delay: 2,
// Those are not relevant to this test. However, HostConfiguration is still a
// subject for the consistency check.
chain_availability_period: 1,
thread_availability_period: 1,
paras_availability_period: 1,
..Default::default()
},
},
@@ -28,7 +28,8 @@ use crate::{
inclusion::CandidateCheckContext,
initializer,
metrics::METRICS,
scheduler::{self, CoreAssignment, FreedReason},
scheduler,
scheduler::common::{CoreAssignment, FreedReason},
shared, ParaId,
};
use bitvec::prelude::BitVec;
@@ -518,7 +519,7 @@ impl<T: Config> Pallet<T> {
.map(|(_session, candidate)| candidate)
.collect::<BTreeSet<CandidateHash>>();
let mut freed_disputed: Vec<_> =
let freed_disputed: BTreeMap<CoreIndex, FreedReason> =
<inclusion::Pallet<T>>::collect_disputed(&current_concluded_invalid_disputes)
.into_iter()
.map(|core| (core, FreedReason::Concluded))
@@ -528,16 +529,10 @@ impl<T: Config> Pallet<T> {
// a core index that was freed due to a dispute.
//
// I.e. 010100 would indicate, the candidates on Core 1 and 3 would be disputed.
let disputed_bitfield = create_disputed_bitfield(
expected_bits,
freed_disputed.iter().map(|(core_index, _)| core_index),
);
let disputed_bitfield = create_disputed_bitfield(expected_bits, freed_disputed.keys());
if !freed_disputed.is_empty() {
// unstable sort is fine, because core indices are unique
// i.e. the same candidate can't occupy 2 cores at once.
freed_disputed.sort_unstable_by_key(|pair| pair.0); // sort by core index
<scheduler::Pallet<T>>::free_cores(freed_disputed.clone());
<scheduler::Pallet<T>>::update_claimqueue(freed_disputed.clone(), now);
}
let bitfields = sanitize_bitfields::<T>(
@@ -569,10 +564,7 @@ impl<T: Config> Pallet<T> {
let freed = collect_all_freed_cores::<T, _>(freed_concluded.iter().cloned());
<scheduler::Pallet<T>>::clear();
<scheduler::Pallet<T>>::schedule(freed, now);
let scheduled = <scheduler::Pallet<T>>::scheduled();
let scheduled = <scheduler::Pallet<T>>::update_claimqueue(freed, now);
let relay_parent_number = now - One::one();
let parent_storage_root = *parent_header.state_root();
@@ -614,7 +606,7 @@ impl<T: Config> Pallet<T> {
<scheduler::Pallet<T>>::group_validators,
)?;
// Note which of the scheduled cores were actually occupied by a backed candidate.
<scheduler::Pallet<T>>::occupied(&occupied);
<scheduler::Pallet<T>>::occupied(occupied.into_iter().map(|e| (e.0, e.1)).collect());
set_scrapable_on_chain_backings::<T>(
current_session,
@@ -908,7 +900,7 @@ fn sanitize_backed_candidates<
relay_parent: T::Hash,
mut backed_candidates: Vec<BackedCandidate<T::Hash>>,
mut candidate_has_concluded_invalid_dispute_or_is_invalid: F,
scheduled: &[CoreAssignment],
scheduled: &[CoreAssignment<BlockNumberFor<T>>],
) -> Vec<BackedCandidate<T::Hash>> {
// Remove any candidates that were concluded invalid.
// This does not assume sorting.
@@ -918,7 +910,7 @@ fn sanitize_backed_candidates<
let scheduled_paras_to_core_idx = scheduled
.into_iter()
.map(|core_assignment| (core_assignment.para_id, core_assignment.core))
.map(|core_assignment| (core_assignment.paras_entry.para_id(), core_assignment.core))
.collect::<BTreeMap<ParaId, CoreIndex>>();
// Assure the backed candidate's `ParaId`'s core is free.
@@ -72,7 +72,10 @@ mod enter {
// freed via becoming fully available, the backed candidates will not be filtered out in
// `create_inherent` and will not cause `enter` to early.
fn include_backed_candidates() {
new_test_ext(MockGenesisConfig::default()).execute_with(|| {
let config = MockGenesisConfig::default();
assert!(config.configuration.config.scheduling_lookahead > 0);
new_test_ext(config).execute_with(|| {
let dispute_statements = BTreeMap::new();
let mut backed_and_concluding = BTreeMap::new();
@@ -106,7 +109,7 @@ mod enter {
.unwrap();
// The current schedule is empty prior to calling `create_inherent_enter`.
assert_eq!(<scheduler::Pallet<Test>>::scheduled(), vec![]);
assert!(<scheduler::Pallet<Test>>::claimqueue_is_empty());
// Nothing is filtered out (including the backed candidates.)
assert_eq!(
@@ -253,7 +256,7 @@ mod enter {
.unwrap();
// The current schedule is empty prior to calling `create_inherent_enter`.
assert_eq!(<scheduler::Pallet<Test>>::scheduled(), vec![]);
assert!(<scheduler::Pallet<Test>>::claimqueue_is_empty());
let multi_dispute_inherent_data =
Pallet::<Test>::create_inherent_inner(&inherent_data.clone()).unwrap();
@@ -322,7 +325,7 @@ mod enter {
.unwrap();
// The current schedule is empty prior to calling `create_inherent_enter`.
assert_eq!(<scheduler::Pallet<Test>>::scheduled(), vec![]);
assert!(<scheduler::Pallet<Test>>::claimqueue_is_empty());
let limit_inherent_data =
Pallet::<Test>::create_inherent_inner(&inherent_data.clone()).unwrap();
@@ -391,7 +394,7 @@ mod enter {
.unwrap();
// The current schedule is empty prior to calling `create_inherent_enter`.
assert_eq!(<scheduler::Pallet<Test>>::scheduled(), vec![]);
assert!(<scheduler::Pallet<Test>>::claimqueue_is_empty());
// Nothing is filtered out (including the backed candidates.)
let limit_inherent_data =
@@ -475,7 +478,7 @@ mod enter {
.unwrap();
// The current schedule is empty prior to calling `create_inherent_enter`.
assert_eq!(<scheduler::Pallet<Test>>::scheduled(), vec![]);
assert!(<scheduler::Pallet<Test>>::claimqueue_is_empty());
// Nothing is filtered out (including the backed candidates.)
let limit_inherent_data =
@@ -601,7 +604,10 @@ mod enter {
#[test]
// Ensure that when a block is over weight due to disputes and bitfields, we filter.
fn limit_candidates_over_weight_1() {
new_test_ext(MockGenesisConfig::default()).execute_with(|| {
let config = MockGenesisConfig::default();
assert!(config.configuration.config.scheduling_lookahead > 0);
new_test_ext(config).execute_with(|| {
// Create the inherent data for this block
let mut dispute_statements = BTreeMap::new();
// Control the number of statements per dispute to ensure we have enough space
@@ -953,7 +959,10 @@ mod sanitizers {
use crate::mock::Test;
use keyring::Sr25519Keyring;
use primitives::PARACHAIN_KEY_TYPE_ID;
use primitives::{
v5::{Assignment, ParasEntry},
PARACHAIN_KEY_TYPE_ID,
};
use sc_keystore::LocalKeystore;
use sp_keystore::{Keystore, KeystorePtr};
use std::sync::Arc;
@@ -1225,19 +1234,22 @@ mod sanitizers {
let has_concluded_invalid =
|_idx: usize, _backed_candidate: &BackedCandidate| -> bool { false };
let entry_ttl = 10_000;
let scheduled = (0_usize..2)
.into_iter()
.map(|idx| {
let core_idx = CoreIndex::from(idx as u32);
let ca = CoreAssignment {
kind: scheduler::AssignmentKind::Parachain,
paras_entry: ParasEntry::new(
Assignment::new(ParaId::from(1_u32 + idx as u32)),
entry_ttl,
),
group_idx: GroupIndex::from(idx as u32),
para_id: ParaId::from(1_u32 + idx as u32),
core: CoreIndex::from(idx as u32),
core: core_idx,
};
ca
})
.collect::<Vec<_>>();
let scheduled = &scheduled[..];
let group_validators = |group_index: GroupIndex| {
match group_index {
@@ -1282,14 +1294,14 @@ mod sanitizers {
relay_parent,
backed_candidates.clone(),
has_concluded_invalid,
scheduled
&scheduled
),
backed_candidates
);
// nothing is scheduled, so no paraids match, thus all backed candidates are skipped
{
let scheduled = &[][..];
let scheduled = &Vec::new();
assert!(sanitize_backed_candidates::<Test, _>(
relay_parent,
backed_candidates.clone(),
@@ -1306,7 +1318,7 @@ mod sanitizers {
relay_parent,
backed_candidates.clone(),
has_concluded_invalid,
scheduled
&scheduled
)
.is_empty());
}
@@ -1330,7 +1342,7 @@ mod sanitizers {
relay_parent,
backed_candidates.clone(),
has_concluded_invalid,
scheduled
&scheduled
)
.len(),
backed_candidates.len() / 2
@@ -27,8 +27,8 @@ use primitives::{
CoreIndex, CoreOccupied, CoreState, DisputeState, ExecutorParams, GroupIndex,
GroupRotationInfo, Hash, Id as ParaId, InboundDownwardMessage, InboundHrmpMessage,
OccupiedCore, OccupiedCoreAssumption, PersistedValidationData, PvfCheckStatement,
ScheduledCore, ScrapedOnChainVotes, SessionIndex, SessionInfo, ValidationCode,
ValidationCodeHash, ValidatorId, ValidatorIndex, ValidatorSignature,
ScrapedOnChainVotes, SessionIndex, SessionInfo, ValidationCode, ValidationCodeHash,
ValidatorId, ValidatorIndex, ValidatorSignature,
};
use sp_runtime::traits::One;
use sp_std::{collections::btree_map::BTreeMap, prelude::*};
@@ -52,13 +52,8 @@ pub fn validator_groups<T: initializer::Config>(
/// Implementation for the `availability_cores` function of the runtime API.
pub fn availability_cores<T: initializer::Config>() -> Vec<CoreState<T::Hash, BlockNumberFor<T>>> {
let cores = <scheduler::Pallet<T>>::availability_cores();
let parachains = <paras::Pallet<T>>::parachains();
let config = <configuration::Pallet<T>>::config();
let now = <frame_system::Pallet<T>>::block_number() + One::one();
<scheduler::Pallet<T>>::clear();
<scheduler::Pallet<T>>::schedule(Vec::new(), now);
let rotation_info = <scheduler::Pallet<T>>::group_rotation_info(now);
let time_out_at = |backed_in_number, availability_period| {
@@ -102,73 +97,39 @@ pub fn availability_cores<T: initializer::Config>() -> Vec<CoreState<T::Hash, Bl
.into_iter()
.enumerate()
.map(|(i, core)| match core {
Some(occupied) => CoreState::Occupied(match occupied {
CoreOccupied::Parachain => {
let para_id = parachains[i];
let pending_availability =
<inclusion::Pallet<T>>::pending_availability(para_id)
.expect("Occupied core always has pending availability; qed");
CoreOccupied::Paras(entry) => {
let pending_availability =
<inclusion::Pallet<T>>::pending_availability(entry.para_id())
.expect("Occupied core always has pending availability; qed");
let backed_in_number = *pending_availability.backed_in_number();
OccupiedCore {
next_up_on_available: <scheduler::Pallet<T>>::next_up_on_available(
CoreIndex(i as u32),
),
occupied_since: backed_in_number,
time_out_at: time_out_at(
backed_in_number,
config.chain_availability_period,
),
next_up_on_time_out: <scheduler::Pallet<T>>::next_up_on_time_out(
CoreIndex(i as u32),
),
availability: pending_availability.availability_votes().clone(),
group_responsible: group_responsible_for(
backed_in_number,
pending_availability.core_occupied(),
),
candidate_hash: pending_availability.candidate_hash(),
candidate_descriptor: pending_availability.candidate_descriptor().clone(),
}
},
CoreOccupied::Parathread(p) => {
let para_id = p.claim.0;
let pending_availability =
<inclusion::Pallet<T>>::pending_availability(para_id)
.expect("Occupied core always has pending availability; qed");
let backed_in_number = *pending_availability.backed_in_number();
OccupiedCore {
next_up_on_available: <scheduler::Pallet<T>>::next_up_on_available(
CoreIndex(i as u32),
),
occupied_since: backed_in_number,
time_out_at: time_out_at(
backed_in_number,
config.thread_availability_period,
),
next_up_on_time_out: <scheduler::Pallet<T>>::next_up_on_time_out(
CoreIndex(i as u32),
),
availability: pending_availability.availability_votes().clone(),
group_responsible: group_responsible_for(
backed_in_number,
pending_availability.core_occupied(),
),
candidate_hash: pending_availability.candidate_hash(),
candidate_descriptor: pending_availability.candidate_descriptor().clone(),
}
},
}),
None => CoreState::Free,
let backed_in_number = *pending_availability.backed_in_number();
CoreState::Occupied(OccupiedCore {
next_up_on_available: <scheduler::Pallet<T>>::next_up_on_available(CoreIndex(
i as u32,
)),
occupied_since: backed_in_number,
time_out_at: time_out_at(backed_in_number, config.paras_availability_period),
next_up_on_time_out: <scheduler::Pallet<T>>::next_up_on_time_out(CoreIndex(
i as u32,
)),
availability: pending_availability.availability_votes().clone(),
group_responsible: group_responsible_for(
backed_in_number,
pending_availability.core_occupied(),
),
candidate_hash: pending_availability.candidate_hash(),
candidate_descriptor: pending_availability.candidate_descriptor().clone(),
})
},
CoreOccupied::Free => CoreState::Free,
})
.collect();
// This will overwrite only `Free` cores if the scheduler module is working as intended.
for scheduled in <scheduler::Pallet<T>>::scheduled() {
core_states[scheduled.core.0 as usize] = CoreState::Scheduled(ScheduledCore {
para_id: scheduled.para_id,
collator: scheduled.required_collator().map(|c| c.clone()),
for scheduled in <scheduler::Pallet<T>>::scheduled_claimqueue(now) {
core_states[scheduled.core.0 as usize] = CoreState::Scheduled(primitives::ScheduledCore {
para_id: scheduled.paras_entry.para_id(),
collator: None,
});
}
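The loop above overlays scheduled claim-queue entries onto the per-core state vector, relying on the invariant that the scheduler only leaves `Free` cores for it to overwrite. A minimal, self-contained sketch of that overlay step (the `CoreState` enum and para ids here are simplified stand-ins for the real `primitives` types):

```rust
/// Simplified stand-in for `primitives::CoreState`.
#[derive(Debug, Clone, PartialEq)]
enum CoreState {
    Free,
    /// Para id with a candidate pending availability on this core.
    Occupied(u32),
    /// Para id scheduled next on this core.
    Scheduled(u32),
}

/// Overlay scheduled assignments onto the core states, mirroring the loop
/// over `scheduled_claimqueue` in `availability_cores`. The write is
/// unconditional; if the scheduler is working as intended, only `Free`
/// cores ever appear in the scheduled set.
fn overlay_scheduled(core_states: &mut [CoreState], scheduled: &[(usize, u32)]) {
    for &(core_idx, para_id) in scheduled {
        core_states[core_idx] = CoreState::Scheduled(para_id);
    }
}

fn main() {
    let mut cores = vec![CoreState::Occupied(7), CoreState::Free, CoreState::Free];
    overlay_scheduled(&mut cores, &[(1, 100), (2, 200)]);
    assert_eq!(cores[0], CoreState::Occupied(7));
    assert_eq!(cores[1], CoreState::Scheduled(100));
    assert_eq!(cores[2], CoreState::Scheduled(200));
}
```

Note that after this refactor the `collator` field of `ScheduledCore` is always `None`: assignments no longer pin a core to a specific collator.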
@@ -0,0 +1,98 @@
// Copyright (C) Parity Technologies (UK) Ltd.
// This file is part of Polkadot.
// Polkadot is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// Polkadot is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with Polkadot. If not, see <http://www.gnu.org/licenses/>.
//! Common traits and types used by the scheduler and assignment providers.
use frame_support::pallet_prelude::*;
use primitives::{
v5::{Assignment, ParasEntry},
CoreIndex, GroupIndex, Id as ParaId,
};
use scale_info::TypeInfo;
use sp_std::prelude::*;
// Only used to link to configuration documentation.
#[allow(unused)]
use crate::configuration::HostConfiguration;
/// Reasons a core might be freed
#[derive(Clone, Copy)]
pub enum FreedReason {
/// The core's work concluded and the parablock assigned to it is considered available.
Concluded,
/// The core's work timed out.
TimedOut,
}
/// A set of variables required by the scheduler in order to operate.
pub struct AssignmentProviderConfig<BlockNumber> {
/// The availability period specified by the implementation.
/// See [`HostConfiguration::paras_availability_period`] for more information.
pub availability_period: BlockNumber,
/// How many times a collation can time out on availability.
/// Zero timeouts still means that a collation can be provided as per the slot auction
/// assignment provider.
pub max_availability_timeouts: u32,
/// How long the collator has to provide a collation to the backing group before being dropped.
pub ttl: BlockNumber,
}
pub trait AssignmentProvider<BlockNumber> {
/// How many cores are allocated to this provider.
fn session_core_count() -> u32;
/// Pops an [`Assignment`] from the provider for a specified [`CoreIndex`].
/// The `concluded_para` field makes the caller report back to the provider
/// which [`ParaId`] it processed last on the supplied [`CoreIndex`].
fn pop_assignment_for_core(
core_idx: CoreIndex,
concluded_para: Option<ParaId>,
) -> Option<Assignment>;
/// Push back an already popped assignment. Intended for provider implementations
/// that need to be able to keep track of assignments over session boundaries,
/// such as the on demand assignment provider.
fn push_assignment_for_core(core_idx: CoreIndex, assignment: Assignment);
/// Returns a set of variables needed by the scheduler
fn get_provider_config(core_idx: CoreIndex) -> AssignmentProviderConfig<BlockNumber>;
}
/// How a core is mapped to a backing group and a `ParaId`
#[derive(Clone, Encode, Decode, PartialEq, TypeInfo)]
#[cfg_attr(feature = "std", derive(Debug))]
pub struct CoreAssignment<BlockNumber> {
/// The core that is assigned.
pub core: CoreIndex,
/// The para id and accompanying information needed to collate and back a parablock.
pub paras_entry: ParasEntry<BlockNumber>,
/// The index of the validator group assigned to the core.
pub group_idx: GroupIndex,
}
impl<BlockNumber> CoreAssignment<BlockNumber> {
/// Returns the [`ParaId`] of the assignment.
pub fn para_id(&self) -> ParaId {
self.paras_entry.para_id()
}
/// Returns the inner [`ParasEntry`] of the assignment.
pub fn to_paras_entry(self) -> ParasEntry<BlockNumber> {
self.paras_entry
}
}
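To illustrate the `AssignmentProvider` contract above, here is a toy FIFO provider, loosely in the spirit of the on-demand assigner. All types are simplified local stand-ins (plain `u32` para ids instead of the real `primitives::v5` definitions), and the core index parameter is dropped; this is a sketch of the pop/push discipline, not the actual pallet implementation:

```rust
use std::collections::VecDeque;

/// Simplified stand-in for `primitives::v5::Assignment`.
#[derive(Clone, Debug, PartialEq)]
struct Assignment {
    para_id: u32,
}

/// Toy FIFO provider: `pop` hands out the next queued assignment and
/// `push` returns one that could not be processed (e.g. it was popped
/// just before a session boundary and must be retried).
struct FifoProvider {
    queue: VecDeque<Assignment>,
}

impl FifoProvider {
    fn new() -> Self {
        Self { queue: VecDeque::new() }
    }

    /// Enqueue a new order for a para.
    fn add_order(&mut self, para_id: u32) {
        self.queue.push_back(Assignment { para_id });
    }

    fn pop_assignment_for_core(&mut self, _concluded_para: Option<u32>) -> Option<Assignment> {
        // A real provider would use `concluded_para` to learn which para
        // finished on this core (e.g. to settle retries); the toy version
        // ignores it.
        self.queue.pop_front()
    }

    fn push_assignment_for_core(&mut self, assignment: Assignment) {
        // Pushed-back assignments go to the front so they are retried first.
        self.queue.push_front(assignment);
    }
}

fn main() {
    let mut p = FifoProvider::new();
    p.add_order(1);
    p.add_order(2);
    let a = p.pop_assignment_for_core(None).unwrap();
    assert_eq!(a.para_id, 1);
    // Push it back: it must come out first again.
    p.push_assignment_for_core(a);
    assert_eq!(p.pop_assignment_for_core(None).unwrap().para_id, 1);
    assert_eq!(p.pop_assignment_for_core(None).unwrap().para_id, 2);
    assert!(p.pop_assignment_for_core(None).is_none());
}
```

The push-to-front behavior is what lets a provider keep assignments alive across session boundaries without losing queue position, which is the motivation `push_assignment_for_core` gives in its doc comment.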
@@ -0,0 +1,170 @@
// Copyright (C) Parity Technologies (UK) Ltd.
// This file is part of Polkadot.
// Polkadot is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// Polkadot is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with Polkadot. If not, see <http://www.gnu.org/licenses/>.
//! A module responsible for migrating the pallet's storage.
use super::*;
use frame_support::{
pallet_prelude::ValueQuery, storage_alias, traits::OnRuntimeUpgrade, weights::Weight,
};
use primitives::vstaging::Assignment;
mod v0 {
use super::*;
use primitives::CollatorId;
#[storage_alias]
pub(super) type Scheduled<T: Config> = StorageValue<Pallet<T>, Vec<CoreAssignment>, ValueQuery>;
#[derive(Encode, Decode)]
pub struct QueuedParathread {
claim: primitives::ParathreadEntry,
core_offset: u32,
}
#[derive(Encode, Decode, Default)]
pub struct ParathreadClaimQueue {
queue: Vec<QueuedParathread>,
next_core_offset: u32,
}
// Only here to facilitate the migration.
impl ParathreadClaimQueue {
pub fn len(self) -> usize {
self.queue.len()
}
}
#[storage_alias]
pub(super) type ParathreadQueue<T: Config> =
StorageValue<Pallet<T>, ParathreadClaimQueue, ValueQuery>;
#[storage_alias]
pub(super) type ParathreadClaimIndex<T: Config> =
StorageValue<Pallet<T>, Vec<ParaId>, ValueQuery>;
/// The assignment type.
#[derive(Clone, Encode, Decode, TypeInfo, RuntimeDebug)]
#[cfg_attr(feature = "std", derive(PartialEq))]
pub enum AssignmentKind {
/// A parachain.
Parachain,
/// A parathread.
Parathread(CollatorId, u32),
}
/// How a free core is scheduled to be assigned.
#[derive(Clone, Encode, Decode, TypeInfo, RuntimeDebug)]
#[cfg_attr(feature = "std", derive(PartialEq))]
pub struct CoreAssignment {
/// The core that is assigned.
pub core: CoreIndex,
/// The unique ID of the para that is assigned to the core.
pub para_id: ParaId,
/// The kind of the assignment.
pub kind: AssignmentKind,
/// The index of the validator group assigned to the core.
pub group_idx: GroupIndex,
}
}
pub mod v1 {
use super::*;
use crate::scheduler;
use frame_support::traits::StorageVersion;
pub struct MigrateToV1<T>(sp_std::marker::PhantomData<T>);
impl<T: Config> OnRuntimeUpgrade for MigrateToV1<T> {
fn on_runtime_upgrade() -> Weight {
if StorageVersion::get::<Pallet<T>>() == 0 {
let weight_consumed = migrate_to_v1::<T>();
log::info!(target: scheduler::LOG_TARGET, "Migrating para scheduler storage to v1");
StorageVersion::new(1).put::<Pallet<T>>();
weight_consumed
} else {
log::warn!(target: scheduler::LOG_TARGET, "Para scheduler storage is already at v1; this migration should be removed.");
T::DbWeight::get().reads(1)
}
}
#[cfg(feature = "try-runtime")]
fn pre_upgrade() -> Result<Vec<u8>, sp_runtime::DispatchError> {
log::trace!(
target: crate::scheduler::LOG_TARGET,
"Scheduled before migration: {}",
v0::Scheduled::<T>::get().len()
);
ensure!(
StorageVersion::get::<Pallet<T>>() == 0,
"Storage version should be `0` before the migration",
);
let bytes = u32::to_be_bytes(v0::Scheduled::<T>::get().len() as u32);
Ok(bytes.to_vec())
}
#[cfg(feature = "try-runtime")]
fn post_upgrade(state: Vec<u8>) -> Result<(), sp_runtime::DispatchError> {
log::trace!(target: crate::scheduler::LOG_TARGET, "Running post_upgrade()");
ensure!(
StorageVersion::get::<Pallet<T>>() == 1,
"Storage version should be `1` after the migration"
);
ensure!(
v0::Scheduled::<T>::get().len() == 0,
"Scheduled should be empty after the migration"
);
let sched_len = u32::from_be_bytes(state.try_into().expect("`pre_upgrade` provides exactly 4 bytes; qed"));
ensure!(
Pallet::<T>::claimqueue_len() as u32 == sched_len,
"Scheduled completely moved to ClaimQueue after migration"
);
Ok(())
}
}
}
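The try-runtime hooks above round-trip the old `Scheduled` length through an opaque byte vector; the encoding is just a big-endian `u32`. A self-contained sketch of that round-trip (helper names are illustrative, not from the pallet):

```rust
// Mirrors pre_upgrade: encode the old queue length as 4 big-endian bytes.
fn encode_len(len: u32) -> Vec<u8> {
    len.to_be_bytes().to_vec()
}

// Mirrors post_upgrade: recover the length, failing gracefully on a bad payload.
fn decode_len(state: Vec<u8>) -> Option<u32> {
    let bytes: [u8; 4] = state.try_into().ok()?;
    Some(u32::from_be_bytes(bytes))
}

fn main() {
    assert_eq!(decode_len(encode_len(42)), Some(42));
    assert_eq!(decode_len(vec![1, 2, 3]), None); // wrong length is rejected
}
```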
pub fn migrate_to_v1<T: crate::scheduler::Config>() -> Weight {
let mut weight: Weight = Weight::zero();
let pq = v0::ParathreadQueue::<T>::take();
let pq_len = pq.len() as u64;
let pci = v0::ParathreadClaimIndex::<T>::take();
let pci_len = pci.len() as u64;
let now = <frame_system::Pallet<T>>::block_number();
let scheduled = v0::Scheduled::<T>::take();
let sched_len = scheduled.len() as u64;
for core_assignment in scheduled {
let core_idx = core_assignment.core;
let assignment = Assignment::new(core_assignment.para_id);
let pe = ParasEntry::new(assignment, now);
Pallet::<T>::add_to_claimqueue(core_idx, pe);
}
// 2x: each entry incurs one read/write for the old `Scheduled` storage and one for the new `ClaimQueue`.
weight = weight.saturating_add(T::DbWeight::get().reads_writes(2 * sched_len, 2 * sched_len));
weight = weight.saturating_add(T::DbWeight::get().reads_writes(pq_len, pq_len));
weight = weight.saturating_add(T::DbWeight::get().reads_writes(pci_len, pci_len));
weight
}
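The core of `migrate_to_v1` is a take-and-requeue pattern: the old flat `Scheduled` vector is drained into a per-core claim queue. A self-contained sketch of that shape, with simplified stand-in types (the real `ParasEntry` carries an `Assignment` and a block number; `add_to_claimqueue` is modeled as a plain map insert):

```rust
use std::collections::{BTreeMap, VecDeque};

#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Debug)]
struct CoreIndex(u32);
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
struct ParaId(u32);

// Old (v0) layout: a flat list of core assignments.
struct OldAssignment {
    core: CoreIndex,
    para_id: ParaId,
}

// New (v1) layout: one claim queue per core (fields are illustrative).
#[derive(Clone, Debug, PartialEq)]
struct ParasEntry {
    para_id: ParaId,
    retries: u32,
}

fn migrate(scheduled: Vec<OldAssignment>) -> BTreeMap<CoreIndex, VecDeque<ParasEntry>> {
    let mut claim_queue: BTreeMap<CoreIndex, VecDeque<ParasEntry>> = BTreeMap::new();
    for old in scheduled {
        // Every old entry becomes a fresh ParasEntry at the back of its core's queue.
        claim_queue
            .entry(old.core)
            .or_default()
            .push_back(ParasEntry { para_id: old.para_id, retries: 0 });
    }
    claim_queue
}

fn main() {
    let old = vec![
        OldAssignment { core: CoreIndex(0), para_id: ParaId(1000) },
        OldAssignment { core: CoreIndex(1), para_id: ParaId(2000) },
    ];
    let cq = migrate(old);
    // The post_upgrade invariant: total claim-queue length equals the old Scheduled length.
    let total: usize = cq.values().map(|q| q.len()).sum();
    assert_eq!(total, 2);
}
```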
@@ -62,7 +62,7 @@ fn run_to_block(
fn default_config() -> HostConfiguration<BlockNumber> {
HostConfiguration {
parathread_cores: 1,
on_demand_cores: 1,
dispute_period: 2,
needed_approvals: 3,
..Default::default()
@@ -27,6 +27,7 @@ use runtime_common::{
};
use runtime_parachains::{
assigner_parachains as parachains_assigner_parachains,
configuration as parachains_configuration, disputes as parachains_disputes,
disputes::slashing as parachains_slashing,
dmp as parachains_dmp, hrmp as parachains_hrmp, inclusion as parachains_inclusion,
@@ -1183,7 +1184,11 @@ impl parachains_paras_inherent::Config for Runtime {
type WeightInfo = weights::runtime_parachains_paras_inherent::WeightInfo<Runtime>;
}
impl parachains_scheduler::Config for Runtime {}
impl parachains_scheduler::Config for Runtime {
type AssignmentProvider = ParaAssignmentProvider;
}
impl parachains_assigner_parachains::Config for Runtime {}
impl parachains_initializer::Config for Runtime {
type Randomness = pallet_babe::RandomnessFromOneEpochAgo<Runtime>;
@@ -1434,6 +1439,7 @@ construct_runtime! {
ParaSessionInfo: parachains_session_info::{Pallet, Storage} = 61,
ParasDisputes: parachains_disputes::{Pallet, Call, Storage, Event<T>} = 62,
ParasSlashing: parachains_slashing::{Pallet, Call, Storage, ValidateUnsigned} = 63,
ParaAssignmentProvider: parachains_assigner_parachains::{Pallet} = 64,
// Parachain Onboarding Pallets. Start indices at 70 to leave room.
Registrar: paras_registrar::{Pallet, Call, Storage, Event<T>} = 70,
@@ -1494,6 +1500,8 @@ pub mod migrations {
pub type Unreleased = (
pallet_im_online::migration::v1::Migration<Runtime>,
parachains_configuration::migration::v7::MigrateToV7<Runtime>,
parachains_scheduler::migration::v1::MigrateToV1<Runtime>,
parachains_configuration::migration::v8::MigrateToV8<Runtime>,
);
}
@@ -17,27 +17,25 @@
//! Autogenerated weights for `runtime_parachains::configuration`
//!
//! THIS FILE WAS AUTO-GENERATED USING THE SUBSTRATE BENCHMARK CLI VERSION 4.0.0-dev
//! DATE: 2023-06-19, STEPS: `50`, REPEAT: `20`, LOW RANGE: `[]`, HIGH RANGE: `[]`
//! DATE: 2023-08-11, STEPS: `50`, REPEAT: `20`, LOW RANGE: `[]`, HIGH RANGE: `[]`
//! WORST CASE MAP SIZE: `1000000`
//! HOSTNAME: `runner-e8ezs4ez-project-163-concurrent-0`, CPU: `Intel(R) Xeon(R) CPU @ 2.60GHz`
//! EXECUTION: Some(Wasm), WASM-EXECUTION: Compiled, CHAIN: Some("polkadot-dev"), DB CACHE: 1024
//! HOSTNAME: `runner-fljshgub-project-163-concurrent-0`, CPU: `Intel(R) Xeon(R) CPU @ 2.60GHz`
//! WASM-EXECUTION: `Compiled`, CHAIN: `Some("polkadot-dev")`, DB CACHE: 1024
// Executed Command:
// ./target/production/polkadot
// target/production/polkadot
// benchmark
// pallet
// --chain=polkadot-dev
// --steps=50
// --repeat=20
// --no-storage-info
// --no-median-slopes
// --no-min-squares
// --pallet=runtime_parachains::configuration
// --extrinsic=*
// --execution=wasm
// --wasm-execution=compiled
// --heap-pages=4096
// --json-file=/builds/parity/mirrors/polkadot/.git/.artifacts/bench.json
// --pallet=runtime_parachains::configuration
// --chain=polkadot-dev
// --header=./file_header.txt
// --output=./runtime/polkadot/src/weights/runtime_parachains_configuration.rs
// --output=./runtime/polkadot/src/weights/
#![cfg_attr(rustfmt, rustfmt_skip)]
#![allow(unused_parens)]
@@ -50,62 +48,56 @@ use core::marker::PhantomData;
/// Weight functions for `runtime_parachains::configuration`.
pub struct WeightInfo<T>(PhantomData<T>);
impl<T: frame_system::Config> runtime_parachains::configuration::WeightInfo for WeightInfo<T> {
/// Storage: Configuration PendingConfigs (r:1 w:1)
/// Proof Skipped: Configuration PendingConfigs (max_values: Some(1), max_size: None, mode: Measured)
/// Storage: Configuration ActiveConfig (r:1 w:0)
/// Proof Skipped: Configuration ActiveConfig (max_values: Some(1), max_size: None, mode: Measured)
/// Storage: Configuration BypassConsistencyCheck (r:1 w:0)
/// Proof Skipped: Configuration BypassConsistencyCheck (max_values: Some(1), max_size: None, mode: Measured)
/// Storage: ParasShared CurrentSessionIndex (r:1 w:0)
/// Proof Skipped: ParasShared CurrentSessionIndex (max_values: Some(1), max_size: None, mode: Measured)
/// Storage: `Configuration::PendingConfigs` (r:1 w:1)
/// Proof: `Configuration::PendingConfigs` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)
/// Storage: `Configuration::BypassConsistencyCheck` (r:1 w:0)
/// Proof: `Configuration::BypassConsistencyCheck` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)
/// Storage: `ParasShared::CurrentSessionIndex` (r:1 w:0)
/// Proof: `ParasShared::CurrentSessionIndex` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)
fn set_config_with_block_number() -> Weight {
// Proof Size summary in bytes:
// Measured: `443`
// Estimated: `1928`
// Minimum execution time: 13_403_000 picoseconds.
Weight::from_parts(13_933_000, 0)
.saturating_add(Weight::from_parts(0, 1928))
.saturating_add(T::DbWeight::get().reads(4))
// Measured: `127`
// Estimated: `1612`
// Minimum execution time: 9_330_000 picoseconds.
Weight::from_parts(9_663_000, 0)
.saturating_add(Weight::from_parts(0, 1612))
.saturating_add(T::DbWeight::get().reads(3))
.saturating_add(T::DbWeight::get().writes(1))
}
/// Storage: Configuration PendingConfigs (r:1 w:1)
/// Proof Skipped: Configuration PendingConfigs (max_values: Some(1), max_size: None, mode: Measured)
/// Storage: Configuration ActiveConfig (r:1 w:0)
/// Proof Skipped: Configuration ActiveConfig (max_values: Some(1), max_size: None, mode: Measured)
/// Storage: Configuration BypassConsistencyCheck (r:1 w:0)
/// Proof Skipped: Configuration BypassConsistencyCheck (max_values: Some(1), max_size: None, mode: Measured)
/// Storage: ParasShared CurrentSessionIndex (r:1 w:0)
/// Proof Skipped: ParasShared CurrentSessionIndex (max_values: Some(1), max_size: None, mode: Measured)
/// Storage: `Configuration::PendingConfigs` (r:1 w:1)
/// Proof: `Configuration::PendingConfigs` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)
/// Storage: `Configuration::BypassConsistencyCheck` (r:1 w:0)
/// Proof: `Configuration::BypassConsistencyCheck` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)
/// Storage: `ParasShared::CurrentSessionIndex` (r:1 w:0)
/// Proof: `ParasShared::CurrentSessionIndex` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)
fn set_config_with_u32() -> Weight {
// Proof Size summary in bytes:
// Measured: `443`
// Estimated: `1928`
// Minimum execution time: 13_210_000 picoseconds.
Weight::from_parts(13_674_000, 0)
.saturating_add(Weight::from_parts(0, 1928))
.saturating_add(T::DbWeight::get().reads(4))
// Measured: `127`
// Estimated: `1612`
// Minimum execution time: 9_155_000 picoseconds.
Weight::from_parts(9_554_000, 0)
.saturating_add(Weight::from_parts(0, 1612))
.saturating_add(T::DbWeight::get().reads(3))
.saturating_add(T::DbWeight::get().writes(1))
}
/// Storage: Configuration PendingConfigs (r:1 w:1)
/// Proof Skipped: Configuration PendingConfigs (max_values: Some(1), max_size: None, mode: Measured)
/// Storage: Configuration ActiveConfig (r:1 w:0)
/// Proof Skipped: Configuration ActiveConfig (max_values: Some(1), max_size: None, mode: Measured)
/// Storage: Configuration BypassConsistencyCheck (r:1 w:0)
/// Proof Skipped: Configuration BypassConsistencyCheck (max_values: Some(1), max_size: None, mode: Measured)
/// Storage: ParasShared CurrentSessionIndex (r:1 w:0)
/// Proof Skipped: ParasShared CurrentSessionIndex (max_values: Some(1), max_size: None, mode: Measured)
/// Storage: `Configuration::PendingConfigs` (r:1 w:1)
/// Proof: `Configuration::PendingConfigs` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)
/// Storage: `Configuration::BypassConsistencyCheck` (r:1 w:0)
/// Proof: `Configuration::BypassConsistencyCheck` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)
/// Storage: `ParasShared::CurrentSessionIndex` (r:1 w:0)
/// Proof: `ParasShared::CurrentSessionIndex` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)
fn set_config_with_option_u32() -> Weight {
// Proof Size summary in bytes:
// Measured: `443`
// Estimated: `1928`
// Minimum execution time: 13_351_000 picoseconds.
Weight::from_parts(13_666_000, 0)
.saturating_add(Weight::from_parts(0, 1928))
.saturating_add(T::DbWeight::get().reads(4))
// Measured: `127`
// Estimated: `1612`
// Minimum execution time: 9_299_000 picoseconds.
Weight::from_parts(9_663_000, 0)
.saturating_add(Weight::from_parts(0, 1612))
.saturating_add(T::DbWeight::get().reads(3))
.saturating_add(T::DbWeight::get().writes(1))
}
/// Storage: Benchmark Override (r:0 w:0)
/// Proof Skipped: Benchmark Override (max_values: None, max_size: None, mode: Measured)
/// Storage: `Benchmark::Override` (r:0 w:0)
/// Proof: `Benchmark::Override` (`max_values`: None, `max_size`: None, mode: `Measured`)
fn set_hrmp_open_request_ttl() -> Weight {
// Proof Size summary in bytes:
// Measured: `0`
@@ -114,40 +106,52 @@ impl<T: frame_system::Config> runtime_parachains::configuration::WeightInfo for
Weight::from_parts(2_000_000_000_000, 0)
.saturating_add(Weight::from_parts(0, 0))
}
/// Storage: Configuration PendingConfigs (r:1 w:1)
/// Proof Skipped: Configuration PendingConfigs (max_values: Some(1), max_size: None, mode: Measured)
/// Storage: Configuration ActiveConfig (r:1 w:0)
/// Proof Skipped: Configuration ActiveConfig (max_values: Some(1), max_size: None, mode: Measured)
/// Storage: Configuration BypassConsistencyCheck (r:1 w:0)
/// Proof Skipped: Configuration BypassConsistencyCheck (max_values: Some(1), max_size: None, mode: Measured)
/// Storage: ParasShared CurrentSessionIndex (r:1 w:0)
/// Proof Skipped: ParasShared CurrentSessionIndex (max_values: Some(1), max_size: None, mode: Measured)
/// Storage: `Configuration::PendingConfigs` (r:1 w:1)
/// Proof: `Configuration::PendingConfigs` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)
/// Storage: `Configuration::BypassConsistencyCheck` (r:1 w:0)
/// Proof: `Configuration::BypassConsistencyCheck` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)
/// Storage: `ParasShared::CurrentSessionIndex` (r:1 w:0)
/// Proof: `ParasShared::CurrentSessionIndex` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)
fn set_config_with_balance() -> Weight {
// Proof Size summary in bytes:
// Measured: `443`
// Estimated: `1928`
// Minimum execution time: 13_299_000 picoseconds.
Weight::from_parts(13_892_000, 0)
.saturating_add(Weight::from_parts(0, 1928))
.saturating_add(T::DbWeight::get().reads(4))
// Measured: `127`
// Estimated: `1612`
// Minimum execution time: 9_130_000 picoseconds.
Weight::from_parts(9_554_000, 0)
.saturating_add(Weight::from_parts(0, 1612))
.saturating_add(T::DbWeight::get().reads(3))
.saturating_add(T::DbWeight::get().writes(1))
}
/// Storage: Configuration PendingConfigs (r:1 w:1)
/// Proof Skipped: Configuration PendingConfigs (max_values: Some(1), max_size: None, mode: Measured)
/// Storage: Configuration ActiveConfig (r:1 w:0)
/// Proof Skipped: Configuration ActiveConfig (max_values: Some(1), max_size: None, mode: Measured)
/// Storage: Configuration BypassConsistencyCheck (r:1 w:0)
/// Proof Skipped: Configuration BypassConsistencyCheck (max_values: Some(1), max_size: None, mode: Measured)
/// Storage: ParasShared CurrentSessionIndex (r:1 w:0)
/// Proof Skipped: ParasShared CurrentSessionIndex (max_values: Some(1), max_size: None, mode: Measured)
/// Storage: `Configuration::PendingConfigs` (r:1 w:1)
/// Proof: `Configuration::PendingConfigs` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)
/// Storage: `Configuration::BypassConsistencyCheck` (r:1 w:0)
/// Proof: `Configuration::BypassConsistencyCheck` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)
/// Storage: `ParasShared::CurrentSessionIndex` (r:1 w:0)
/// Proof: `ParasShared::CurrentSessionIndex` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)
fn set_config_with_executor_params() -> Weight {
// Proof Size summary in bytes:
// Measured: `443`
// Estimated: `1928`
// Minimum execution time: 14_002_000 picoseconds.
Weight::from_parts(14_673_000, 0)
.saturating_add(Weight::from_parts(0, 1928))
.saturating_add(T::DbWeight::get().reads(4))
// Measured: `127`
// Estimated: `1612`
// Minimum execution time: 10_177_000 picoseconds.
Weight::from_parts(10_632_000, 0)
.saturating_add(Weight::from_parts(0, 1612))
.saturating_add(T::DbWeight::get().reads(3))
.saturating_add(T::DbWeight::get().writes(1))
}
/// Storage: `Configuration::PendingConfigs` (r:1 w:1)
/// Proof: `Configuration::PendingConfigs` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)
/// Storage: `Configuration::BypassConsistencyCheck` (r:1 w:0)
/// Proof: `Configuration::BypassConsistencyCheck` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)
/// Storage: `ParasShared::CurrentSessionIndex` (r:1 w:0)
/// Proof: `ParasShared::CurrentSessionIndex` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)
fn set_config_with_perbill() -> Weight {
// Proof Size summary in bytes:
// Measured: `127`
// Estimated: `1612`
// Minimum execution time: 9_136_000 picoseconds.
Weight::from_parts(9_487_000, 0)
.saturating_add(Weight::from_parts(0, 1612))
.saturating_add(T::DbWeight::get().reads(3))
.saturating_add(T::DbWeight::get().writes(1))
}
}
@@ -38,6 +38,8 @@ use scale_info::TypeInfo;
use sp_std::{cmp::Ordering, collections::btree_map::BTreeMap, prelude::*};
use runtime_parachains::{
assigner as parachains_assigner, assigner_on_demand as parachains_assigner_on_demand,
assigner_parachains as parachains_assigner_parachains,
configuration as parachains_configuration, disputes as parachains_disputes,
disputes::slashing as parachains_slashing,
dmp as parachains_dmp, hrmp as parachains_hrmp, inclusion as parachains_inclusion,
@@ -77,7 +79,7 @@ use sp_runtime::{
Extrinsic as ExtrinsicT, Keccak256, OpaqueKeys, SaturatedConversion, Verify,
},
transaction_validity::{TransactionPriority, TransactionSource, TransactionValidity},
ApplyExtrinsicResult, KeyTypeId, Perbill, Percent, Permill,
ApplyExtrinsicResult, FixedU128, KeyTypeId, Perbill, Percent, Permill,
};
use sp_staking::SessionIndex;
#[cfg(any(feature = "std", test))]
@@ -879,6 +881,7 @@ pub enum ProxyType {
CancelProxy,
Auction,
Society,
OnDemandOrdering,
}
impl Default for ProxyType {
fn default() -> Self {
@@ -965,6 +968,7 @@ impl InstanceFilter<RuntimeCall> for ProxyType {
RuntimeCall::Slots { .. }
),
ProxyType::Society => matches!(c, RuntimeCall::Society(..)),
ProxyType::OnDemandOrdering => matches!(c, RuntimeCall::OnDemandAssignmentProvider(..)),
}
}
fn is_superset(&self, o: &Self) -> bool {
@@ -1095,7 +1099,27 @@ impl parachains_paras_inherent::Config for Runtime {
type WeightInfo = weights::runtime_parachains_paras_inherent::WeightInfo<Runtime>;
}
impl parachains_scheduler::Config for Runtime {}
impl parachains_scheduler::Config for Runtime {
type AssignmentProvider = ParaAssignmentProvider;
}
parameter_types! {
pub const OnDemandTrafficDefaultValue: FixedU128 = FixedU128::from_u32(1);
}
impl parachains_assigner_on_demand::Config for Runtime {
type RuntimeEvent = RuntimeEvent;
type Currency = Balances;
type TrafficDefaultValue = OnDemandTrafficDefaultValue;
type WeightInfo = weights::runtime_parachains_assigner_on_demand::WeightInfo<Runtime>;
}
impl parachains_assigner_parachains::Config for Runtime {}
impl parachains_assigner::Config for Runtime {
type OnDemandAssignmentProvider = OnDemandAssignmentProvider;
type ParachainsAssignmentProvider = ParachainsAssignmentProvider;
}
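The generic assigner configured above composes the two concrete providers. One plausible routing rule (an assumption for illustration, not lifted from this diff) is that cores below the number of lease-holding parachains are served by the parachains provider and the remaining cores by the on-demand provider:

```rust
// Which backing provider serves a given core, assuming fixed parachain cores
// come first and on-demand cores follow (illustrative routing rule only).
#[derive(Debug, PartialEq)]
enum ProviderKind {
    Parachains,
    OnDemand,
}

fn provider_for_core(core_idx: u32, num_parachain_cores: u32) -> ProviderKind {
    if core_idx < num_parachain_cores {
        ProviderKind::Parachains
    } else {
        ProviderKind::OnDemand
    }
}

fn main() {
    // With two lease-holding parachains, cores 0 and 1 are fixed; core 2 is on-demand.
    assert_eq!(provider_for_core(0, 2), ProviderKind::Parachains);
    assert_eq!(provider_for_core(2, 2), ProviderKind::OnDemand);
}
```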
impl parachains_initializer::Config for Runtime {
type Randomness = pallet_babe::RandomnessFromOneEpochAgo<Runtime>;
@@ -1451,6 +1475,9 @@ construct_runtime! {
ParasDisputes: parachains_disputes::{Pallet, Call, Storage, Event<T>} = 62,
ParasSlashing: parachains_slashing::{Pallet, Call, Storage, ValidateUnsigned} = 63,
MessageQueue: pallet_message_queue::{Pallet, Call, Storage, Event<T>} = 64,
ParaAssignmentProvider: parachains_assigner::{Pallet, Storage} = 65,
OnDemandAssignmentProvider: parachains_assigner_on_demand::{Pallet, Call, Storage, Event<T>} = 66,
ParachainsAssignmentProvider: parachains_assigner_parachains::{Pallet} = 67,
// Parachain Onboarding Pallets. Start indices at 70 to leave room.
Registrar: paras_registrar::{Pallet, Call, Storage, Event<T>, Config<T>} = 70,
@@ -1524,6 +1551,8 @@ pub mod migrations {
pallet_im_online::migration::v1::Migration<Runtime>,
parachains_configuration::migration::v7::MigrateToV7<Runtime>,
assigned_slots::migration::v1::VersionCheckedMigrateToV1<Runtime>,
parachains_scheduler::migration::v1::MigrateToV1<Runtime>,
parachains_configuration::migration::v8::MigrateToV8<Runtime>,
);
}
@@ -1583,6 +1612,7 @@ mod benches {
[runtime_parachains::initializer, Initializer]
[runtime_parachains::paras_inherent, ParaInherent]
[runtime_parachains::paras, Paras]
[runtime_parachains::assigner_on_demand, OnDemandAssignmentProvider]
// Substrate
[pallet_balances, Balances]
[pallet_balances, NisCounterpartBalances]
@@ -48,6 +48,7 @@ pub mod runtime_common_claims;
pub mod runtime_common_crowdloan;
pub mod runtime_common_paras_registrar;
pub mod runtime_common_slots;
pub mod runtime_parachains_assigner_on_demand;
pub mod runtime_parachains_configuration;
pub mod runtime_parachains_disputes;
pub mod runtime_parachains_hrmp;
@@ -0,0 +1,91 @@
// Copyright (C) Parity Technologies (UK) Ltd.
// This file is part of Polkadot.
// Polkadot is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// Polkadot is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with Polkadot. If not, see <http://www.gnu.org/licenses/>.
//! Autogenerated weights for `runtime_parachains::assigner_on_demand`
//!
//! THIS FILE WAS AUTO-GENERATED USING THE SUBSTRATE BENCHMARK CLI VERSION 4.0.0-dev
//! DATE: 2023-08-11, STEPS: `50`, REPEAT: `20`, LOW RANGE: `[]`, HIGH RANGE: `[]`
//! WORST CASE MAP SIZE: `1000000`
//! HOSTNAME: `runner-fljshgub-project-163-concurrent-0`, CPU: `Intel(R) Xeon(R) CPU @ 2.60GHz`
//! WASM-EXECUTION: `Compiled`, CHAIN: `Some("rococo-dev")`, DB CACHE: 1024
// Executed Command:
// target/production/polkadot
// benchmark
// pallet
// --steps=50
// --repeat=20
// --extrinsic=*
// --wasm-execution=compiled
// --heap-pages=4096
// --json-file=/builds/parity/mirrors/polkadot/.git/.artifacts/bench.json
// --pallet=runtime_parachains::assigner_on_demand
// --chain=rococo-dev
// --header=./file_header.txt
// --output=./runtime/rococo/src/weights/
#![cfg_attr(rustfmt, rustfmt_skip)]
#![allow(unused_parens)]
#![allow(unused_imports)]
#![allow(missing_docs)]
use frame_support::{traits::Get, weights::Weight};
use core::marker::PhantomData;
/// Weight functions for `runtime_parachains::assigner_on_demand`.
pub struct WeightInfo<T>(PhantomData<T>);
impl<T: frame_system::Config> runtime_parachains::assigner_on_demand::WeightInfo for WeightInfo<T> {
/// Storage: `OnDemandAssignmentProvider::SpotTraffic` (r:1 w:0)
/// Proof: `OnDemandAssignmentProvider::SpotTraffic` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)
/// Storage: `Paras::ParaLifecycles` (r:1 w:0)
/// Proof: `Paras::ParaLifecycles` (`max_values`: None, `max_size`: None, mode: `Measured`)
/// Storage: `OnDemandAssignmentProvider::OnDemandQueue` (r:1 w:1)
/// Proof: `OnDemandAssignmentProvider::OnDemandQueue` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)
/// The range of component `s` is `[1, 9999]`.
fn place_order_keep_alive(s: u32, ) -> Weight {
// Proof Size summary in bytes:
// Measured: `297 + s * (4 ±0)`
// Estimated: `3762 + s * (4 ±0)`
// Minimum execution time: 33_522_000 picoseconds.
Weight::from_parts(35_436_835, 0)
.saturating_add(Weight::from_parts(0, 3762))
// Standard Error: 129
.saturating_add(Weight::from_parts(14_041, 0).saturating_mul(s.into()))
.saturating_add(T::DbWeight::get().reads(3))
.saturating_add(T::DbWeight::get().writes(1))
.saturating_add(Weight::from_parts(0, 4).saturating_mul(s.into()))
}
/// Storage: `OnDemandAssignmentProvider::SpotTraffic` (r:1 w:0)
/// Proof: `OnDemandAssignmentProvider::SpotTraffic` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)
/// Storage: `Paras::ParaLifecycles` (r:1 w:0)
/// Proof: `Paras::ParaLifecycles` (`max_values`: None, `max_size`: None, mode: `Measured`)
/// Storage: `OnDemandAssignmentProvider::OnDemandQueue` (r:1 w:1)
/// Proof: `OnDemandAssignmentProvider::OnDemandQueue` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)
/// The range of component `s` is `[1, 9999]`.
fn place_order_allow_death(s: u32, ) -> Weight {
// Proof Size summary in bytes:
// Measured: `297 + s * (4 ±0)`
// Estimated: `3762 + s * (4 ±0)`
// Minimum execution time: 33_488_000 picoseconds.
Weight::from_parts(34_848_934, 0)
.saturating_add(Weight::from_parts(0, 3762))
// Standard Error: 143
.saturating_add(Weight::from_parts(14_215, 0).saturating_mul(s.into()))
.saturating_add(T::DbWeight::get().reads(3))
.saturating_add(T::DbWeight::get().writes(1))
.saturating_add(Weight::from_parts(0, 4).saturating_mul(s.into()))
}
}
@@ -17,24 +17,25 @@
//! Autogenerated weights for `runtime_parachains::configuration`
//!
//! THIS FILE WAS AUTO-GENERATED USING THE SUBSTRATE BENCHMARK CLI VERSION 4.0.0-dev
//! DATE: 2023-05-26, STEPS: `50`, REPEAT: `20`, LOW RANGE: `[]`, HIGH RANGE: `[]`
//! DATE: 2023-08-11, STEPS: `50`, REPEAT: `20`, LOW RANGE: `[]`, HIGH RANGE: `[]`
//! WORST CASE MAP SIZE: `1000000`
//! HOSTNAME: `bm5`, CPU: `Intel(R) Core(TM) i7-7700K CPU @ 4.20GHz`
//! EXECUTION: Some(Wasm), WASM-EXECUTION: Compiled, CHAIN: Some("rococo-dev"), DB CACHE: 1024
//! HOSTNAME: `runner-fljshgub-project-163-concurrent-0`, CPU: `Intel(R) Xeon(R) CPU @ 2.60GHz`
//! WASM-EXECUTION: `Compiled`, CHAIN: `Some("rococo-dev")`, DB CACHE: 1024
// Executed Command:
// ./target/production/polkadot
// target/production/polkadot
// benchmark
// pallet
// --chain=rococo-dev
// --steps=50
// --repeat=20
// --pallet=runtime_parachains::configuration
// --extrinsic=*
// --execution=wasm
// --wasm-execution=compiled
// --heap-pages=4096
// --json-file=/builds/parity/mirrors/polkadot/.git/.artifacts/bench.json
// --pallet=runtime_parachains::configuration
// --chain=rococo-dev
// --header=./file_header.txt
// --output=./runtime/rococo/src/weights/runtime_parachains_configuration.rs
// --output=./runtime/rococo/src/weights/
#![cfg_attr(rustfmt, rustfmt_skip)]
#![allow(unused_parens)]
@@ -47,63 +48,56 @@ use core::marker::PhantomData;
/// Weight functions for `runtime_parachains::configuration`.
pub struct WeightInfo<T>(PhantomData<T>);
impl<T: frame_system::Config> runtime_parachains::configuration::WeightInfo for WeightInfo<T> {
/// Storage: Configuration PendingConfigs (r:1 w:1)
/// Proof Skipped: Configuration PendingConfigs (max_values: Some(1), max_size: None, mode: Measured)
/// Storage: Configuration ActiveConfig (r:1 w:0)
/// Proof Skipped: Configuration ActiveConfig (max_values: Some(1), max_size: None, mode: Measured)
/// Storage: Configuration BypassConsistencyCheck (r:1 w:0)
/// Proof Skipped: Configuration BypassConsistencyCheck (max_values: Some(1), max_size: None, mode: Measured)
/// Storage: ParasShared CurrentSessionIndex (r:1 w:0)
/// Proof Skipped: ParasShared CurrentSessionIndex (max_values: Some(1), max_size: None, mode: Measured)
/// Storage: `Configuration::PendingConfigs` (r:1 w:1)
/// Proof: `Configuration::PendingConfigs` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)
/// Storage: `Configuration::BypassConsistencyCheck` (r:1 w:0)
/// Proof: `Configuration::BypassConsistencyCheck` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)
/// Storage: `ParasShared::CurrentSessionIndex` (r:1 w:0)
/// Proof: `ParasShared::CurrentSessionIndex` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)
fn set_config_with_block_number() -> Weight {
// Proof Size summary in bytes:
// Measured: `414`
// Estimated: `1899`
// Minimum execution time: 13_097_000 picoseconds.
Weight::from_parts(13_667_000, 0)
.saturating_add(Weight::from_parts(0, 1899))
.saturating_add(T::DbWeight::get().reads(4))
// Measured: `127`
// Estimated: `1612`
// Minimum execution time: 9_051_000 picoseconds.
Weight::from_parts(9_496_000, 0)
.saturating_add(Weight::from_parts(0, 1612))
.saturating_add(T::DbWeight::get().reads(3))
.saturating_add(T::DbWeight::get().writes(1))
}
/// Storage: Configuration PendingConfigs (r:1 w:1)
/// Proof Skipped: Configuration PendingConfigs (max_values: Some(1), max_size: None, mode: Measured)
/// Storage: Configuration ActiveConfig (r:1 w:0)
/// Proof Skipped: Configuration ActiveConfig (max_values: Some(1), max_size: None, mode: Measured)
/// Storage: Configuration BypassConsistencyCheck (r:1 w:0)
/// Proof Skipped: Configuration BypassConsistencyCheck (max_values: Some(1), max_size: None, mode: Measured)
/// Storage: ParasShared CurrentSessionIndex (r:1 w:0)
/// Proof Skipped: ParasShared CurrentSessionIndex (max_values: Some(1), max_size: None, mode: Measured)
/// Storage: `Configuration::PendingConfigs` (r:1 w:1)
/// Proof: `Configuration::PendingConfigs` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)
/// Storage: `Configuration::BypassConsistencyCheck` (r:1 w:0)
/// Proof: `Configuration::BypassConsistencyCheck` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)
/// Storage: `ParasShared::CurrentSessionIndex` (r:1 w:0)
/// Proof: `ParasShared::CurrentSessionIndex` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)
fn set_config_with_u32() -> Weight {
// Proof Size summary in bytes:
// Measured: `414`
// Estimated: `1899`
// Minimum execution time: 13_199_000 picoseconds.
Weight::from_parts(13_400_000, 0)
.saturating_add(Weight::from_parts(0, 1899))
.saturating_add(T::DbWeight::get().reads(4))
// Measured: `127`
// Estimated: `1612`
// Minimum execution time: 9_104_000 picoseconds.
Weight::from_parts(9_403_000, 0)
.saturating_add(Weight::from_parts(0, 1612))
.saturating_add(T::DbWeight::get().reads(3))
.saturating_add(T::DbWeight::get().writes(1))
}
/// Storage: Configuration PendingConfigs (r:1 w:1)
/// Proof Skipped: Configuration PendingConfigs (max_values: Some(1), max_size: None, mode: Measured)
/// Storage: Configuration ActiveConfig (r:1 w:0)
/// Proof Skipped: Configuration ActiveConfig (max_values: Some(1), max_size: None, mode: Measured)
/// Storage: Configuration BypassConsistencyCheck (r:1 w:0)
/// Proof Skipped: Configuration BypassConsistencyCheck (max_values: Some(1), max_size: None, mode: Measured)
/// Storage: ParasShared CurrentSessionIndex (r:1 w:0)
/// Proof Skipped: ParasShared CurrentSessionIndex (max_values: Some(1), max_size: None, mode: Measured)
/// Storage: `Configuration::PendingConfigs` (r:1 w:1)
/// Proof: `Configuration::PendingConfigs` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)
/// Storage: `Configuration::BypassConsistencyCheck` (r:1 w:0)
/// Proof: `Configuration::BypassConsistencyCheck` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)
/// Storage: `ParasShared::CurrentSessionIndex` (r:1 w:0)
/// Proof: `ParasShared::CurrentSessionIndex` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)
fn set_config_with_option_u32() -> Weight {
// Proof Size summary in bytes:
// Measured: `127`
// Estimated: `1612`
// Minimum execution time: 9_112_000 picoseconds.
Weight::from_parts(9_495_000, 0)
.saturating_add(Weight::from_parts(0, 1612))
.saturating_add(T::DbWeight::get().reads(3))
.saturating_add(T::DbWeight::get().writes(1))
}
/// Storage: `Benchmark::Override` (r:0 w:0)
/// Proof: `Benchmark::Override` (`max_values`: None, `max_size`: None, mode: `Measured`)
fn set_hrmp_open_request_ttl() -> Weight {
// Proof Size summary in bytes:
// Measured: `0`
@@ -112,40 +106,52 @@ impl<T: frame_system::Config> runtime_parachains::configuration::WeightInfo for
Weight::from_parts(2_000_000_000_000, 0)
.saturating_add(Weight::from_parts(0, 0))
}
/// Storage: `Configuration::PendingConfigs` (r:1 w:1)
/// Proof: `Configuration::PendingConfigs` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)
/// Storage: `Configuration::BypassConsistencyCheck` (r:1 w:0)
/// Proof: `Configuration::BypassConsistencyCheck` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)
/// Storage: `ParasShared::CurrentSessionIndex` (r:1 w:0)
/// Proof: `ParasShared::CurrentSessionIndex` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)
fn set_config_with_balance() -> Weight {
// Proof Size summary in bytes:
// Measured: `127`
// Estimated: `1612`
// Minimum execution time: 9_011_000 picoseconds.
Weight::from_parts(9_460_000, 0)
.saturating_add(Weight::from_parts(0, 1612))
.saturating_add(T::DbWeight::get().reads(3))
.saturating_add(T::DbWeight::get().writes(1))
}
/// Storage: `Configuration::PendingConfigs` (r:1 w:1)
/// Proof: `Configuration::PendingConfigs` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)
/// Storage: `Configuration::BypassConsistencyCheck` (r:1 w:0)
/// Proof: `Configuration::BypassConsistencyCheck` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)
/// Storage: `ParasShared::CurrentSessionIndex` (r:1 w:0)
/// Proof: `ParasShared::CurrentSessionIndex` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)
fn set_config_with_executor_params() -> Weight {
// Proof Size summary in bytes:
// Measured: `127`
// Estimated: `1612`
// Minimum execution time: 9_940_000 picoseconds.
Weight::from_parts(10_288_000, 0)
.saturating_add(Weight::from_parts(0, 1612))
.saturating_add(T::DbWeight::get().reads(3))
.saturating_add(T::DbWeight::get().writes(1))
}
/// Storage: `Configuration::PendingConfigs` (r:1 w:1)
/// Proof: `Configuration::PendingConfigs` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)
/// Storage: `Configuration::BypassConsistencyCheck` (r:1 w:0)
/// Proof: `Configuration::BypassConsistencyCheck` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)
/// Storage: `ParasShared::CurrentSessionIndex` (r:1 w:0)
/// Proof: `ParasShared::CurrentSessionIndex` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)
fn set_config_with_perbill() -> Weight {
// Proof Size summary in bytes:
// Measured: `127`
// Estimated: `1612`
// Minimum execution time: 9_192_000 picoseconds.
Weight::from_parts(9_595_000, 0)
.saturating_add(Weight::from_parts(0, 1612))
.saturating_add(T::DbWeight::get().reads(3))
.saturating_add(T::DbWeight::get().writes(1))
}
}
@@ -25,6 +25,7 @@ use parity_scale_codec::Encode;
use sp_std::{collections::btree_map::BTreeMap, prelude::*};
use polkadot_runtime_parachains::{
assigner_parachains as parachains_assigner_parachains,
configuration as parachains_configuration, disputes as parachains_disputes,
disputes::slashing as parachains_slashing, dmp as parachains_dmp, hrmp as parachains_hrmp,
inclusion as parachains_inclusion, initializer as parachains_initializer,
@@ -555,7 +556,11 @@ impl parachains_hrmp::Config for Runtime {
type WeightInfo = parachains_hrmp::TestWeightInfo;
}
impl parachains_assigner_parachains::Config for Runtime {}
impl parachains_scheduler::Config for Runtime {
type AssignmentProvider = ParaAssignmentProvider;
}
impl paras_sudo_wrapper::Config for Runtime {}
@@ -697,6 +702,7 @@ construct_runtime! {
Xcm: pallet_xcm::{Pallet, Call, Event<T>, Origin},
ParasDisputes: parachains_disputes::{Pallet, Storage, Event<T>},
ParasSlashing: parachains_slashing::{Pallet, Call, Storage, ValidateUnsigned},
ParaAssignmentProvider: parachains_assigner_parachains::{Pallet},
Sudo: pallet_sudo::{Pallet, Call, Storage, Config<T>, Event<T>},
@@ -52,6 +52,7 @@ use runtime_common::{
BlockHashCount, BlockLength, CurrencyToVote, SlowAdjustingFeeUpdate, U256ToBalance,
};
use runtime_parachains::{
assigner_parachains as parachains_assigner_parachains,
configuration as parachains_configuration, disputes as parachains_disputes,
disputes::slashing as parachains_slashing,
dmp as parachains_dmp, hrmp as parachains_hrmp, inclusion as parachains_inclusion,
@@ -994,7 +995,11 @@ impl parachains_paras_inherent::Config for Runtime {
type WeightInfo = weights::runtime_parachains_paras_inherent::WeightInfo<Runtime>;
}
impl parachains_scheduler::Config for Runtime {
type AssignmentProvider = ParaAssignmentProvider;
}
impl parachains_assigner_parachains::Config for Runtime {}
impl parachains_initializer::Config for Runtime {
type Randomness = pallet_babe::RandomnessFromOneEpochAgo<Runtime>;
@@ -1221,6 +1226,7 @@ construct_runtime! {
ParaSessionInfo: parachains_session_info::{Pallet, Storage} = 52,
ParasDisputes: parachains_disputes::{Pallet, Call, Storage, Event<T>} = 53,
ParasSlashing: parachains_slashing::{Pallet, Call, Storage, ValidateUnsigned} = 54,
ParaAssignmentProvider: parachains_assigner_parachains::{Pallet, Storage} = 55,
// Parachain Onboarding Pallets. Start indices at 60 to leave room.
Registrar: paras_registrar::{Pallet, Call, Storage, Event<T>, Config<T>} = 60,
@@ -1283,6 +1289,8 @@ pub mod migrations {
pallet_im_online::migration::v1::Migration<Runtime>,
parachains_configuration::migration::v7::MigrateToV7<Runtime>,
assigned_slots::migration::v1::VersionCheckedMigrateToV1<Runtime>,
parachains_scheduler::migration::v1::MigrateToV1<Runtime>,
parachains_configuration::migration::v8::MigrateToV8<Runtime>,
);
}
@@ -17,27 +17,25 @@
//! Autogenerated weights for `runtime_parachains::configuration`
//!
//! THIS FILE WAS AUTO-GENERATED USING THE SUBSTRATE BENCHMARK CLI VERSION 4.0.0-dev
//! DATE: 2023-08-11, STEPS: `50`, REPEAT: `20`, LOW RANGE: `[]`, HIGH RANGE: `[]`
//! WORST CASE MAP SIZE: `1000000`
//! HOSTNAME: `runner-fljshgub-project-163-concurrent-0`, CPU: `Intel(R) Xeon(R) CPU @ 2.60GHz`
//! WASM-EXECUTION: `Compiled`, CHAIN: `Some("westend-dev")`, DB CACHE: 1024
// Executed Command:
// target/production/polkadot
// benchmark
// pallet
// --steps=50
// --repeat=20
// --no-storage-info
// --no-median-slopes
// --no-min-squares
// --extrinsic=*
// --wasm-execution=compiled
// --heap-pages=4096
// --json-file=/builds/parity/mirrors/polkadot/.git/.artifacts/bench.json
// --pallet=runtime_parachains::configuration
// --chain=westend-dev
// --header=./file_header.txt
// --output=./runtime/westend/src/weights/
#![cfg_attr(rustfmt, rustfmt_skip)]
#![allow(unused_parens)]
@@ -50,56 +48,56 @@ use core::marker::PhantomData;
/// Weight functions for `runtime_parachains::configuration`.
pub struct WeightInfo<T>(PhantomData<T>);
impl<T: frame_system::Config> runtime_parachains::configuration::WeightInfo for WeightInfo<T> {
/// Storage: `Configuration::PendingConfigs` (r:1 w:1)
/// Proof: `Configuration::PendingConfigs` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)
/// Storage: `Configuration::BypassConsistencyCheck` (r:1 w:0)
/// Proof: `Configuration::BypassConsistencyCheck` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)
/// Storage: `ParasShared::CurrentSessionIndex` (r:1 w:0)
/// Proof: `ParasShared::CurrentSessionIndex` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)
fn set_config_with_block_number() -> Weight {
// Proof Size summary in bytes:
// Measured: `127`
// Estimated: `1612`
// Minimum execution time: 9_616_000 picoseconds.
Weight::from_parts(9_961_000, 0)
.saturating_add(Weight::from_parts(0, 1612))
.saturating_add(T::DbWeight::get().reads(3))
.saturating_add(T::DbWeight::get().writes(1))
}
/// Storage: `Configuration::PendingConfigs` (r:1 w:1)
/// Proof: `Configuration::PendingConfigs` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)
/// Storage: `Configuration::BypassConsistencyCheck` (r:1 w:0)
/// Proof: `Configuration::BypassConsistencyCheck` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)
/// Storage: `ParasShared::CurrentSessionIndex` (r:1 w:0)
/// Proof: `ParasShared::CurrentSessionIndex` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)
fn set_config_with_u32() -> Weight {
// Proof Size summary in bytes:
// Measured: `127`
// Estimated: `1612`
// Minimum execution time: 9_587_000 picoseconds.
Weight::from_parts(9_964_000, 0)
.saturating_add(Weight::from_parts(0, 1612))
.saturating_add(T::DbWeight::get().reads(3))
.saturating_add(T::DbWeight::get().writes(1))
}
/// Storage: `Configuration::PendingConfigs` (r:1 w:1)
/// Proof: `Configuration::PendingConfigs` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)
/// Storage: `Configuration::BypassConsistencyCheck` (r:1 w:0)
/// Proof: `Configuration::BypassConsistencyCheck` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)
/// Storage: `ParasShared::CurrentSessionIndex` (r:1 w:0)
/// Proof: `ParasShared::CurrentSessionIndex` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)
fn set_config_with_option_u32() -> Weight {
// Proof Size summary in bytes:
// Measured: `127`
// Estimated: `1612`
// Minimum execution time: 9_650_000 picoseconds.
Weight::from_parts(9_960_000, 0)
.saturating_add(Weight::from_parts(0, 1612))
.saturating_add(T::DbWeight::get().reads(3))
.saturating_add(T::DbWeight::get().writes(1))
}
/// Storage: `Benchmark::Override` (r:0 w:0)
/// Proof: `Benchmark::Override` (`max_values`: None, `max_size`: None, mode: `Measured`)
fn set_hrmp_open_request_ttl() -> Weight {
// Proof Size summary in bytes:
// Measured: `0`
@@ -108,34 +106,50 @@ impl<T: frame_system::Config> runtime_parachains::configuration::WeightInfo for
Weight::from_parts(2_000_000_000_000, 0)
.saturating_add(Weight::from_parts(0, 0))
}
/// Storage: `Configuration::PendingConfigs` (r:1 w:1)
/// Proof: `Configuration::PendingConfigs` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)
/// Storage: `Configuration::BypassConsistencyCheck` (r:1 w:0)
/// Proof: `Configuration::BypassConsistencyCheck` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)
/// Storage: `ParasShared::CurrentSessionIndex` (r:1 w:0)
/// Proof: `ParasShared::CurrentSessionIndex` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)
fn set_config_with_balance() -> Weight {
// Proof Size summary in bytes:
// Measured: `127`
// Estimated: `1612`
// Minimum execution time: 9_545_000 picoseconds.
Weight::from_parts(9_845_000, 0)
.saturating_add(Weight::from_parts(0, 1612))
.saturating_add(T::DbWeight::get().reads(3))
.saturating_add(T::DbWeight::get().writes(1))
}
/// Storage: `Configuration::PendingConfigs` (r:1 w:1)
/// Proof: `Configuration::PendingConfigs` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)
/// Storage: `Configuration::BypassConsistencyCheck` (r:1 w:0)
/// Proof: `Configuration::BypassConsistencyCheck` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)
/// Storage: `ParasShared::CurrentSessionIndex` (r:1 w:0)
/// Proof: `ParasShared::CurrentSessionIndex` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)
fn set_config_with_executor_params() -> Weight {
// Proof Size summary in bytes:
// Measured: `127`
// Estimated: `1612`
// Minimum execution time: 10_258_000 picoseconds.
Weight::from_parts(10_607_000, 0)
.saturating_add(Weight::from_parts(0, 1612))
.saturating_add(T::DbWeight::get().reads(3))
.saturating_add(T::DbWeight::get().writes(1))
}
/// Storage: `Configuration::PendingConfigs` (r:1 w:1)
/// Proof: `Configuration::PendingConfigs` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)
/// Storage: `Configuration::BypassConsistencyCheck` (r:1 w:0)
/// Proof: `Configuration::BypassConsistencyCheck` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)
/// Storage: `ParasShared::CurrentSessionIndex` (r:1 w:0)
/// Proof: `ParasShared::CurrentSessionIndex` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)
fn set_config_with_perbill() -> Weight {
// Proof Size summary in bytes:
// Measured: `127`
// Estimated: `1612`
// Minimum execution time: 9_502_000 picoseconds.
Weight::from_parts(9_902_000, 0)
.saturating_add(Weight::from_parts(0, 1612))
.saturating_add(T::DbWeight::get().reads(3))
.saturating_add(T::DbWeight::get().writes(1))
}
}
@@ -23,7 +23,7 @@ zombienet-tests-parachains-smoke-test:
- export DEBUG=zombie,zombie::network-node
- export ZOMBIENET_INTEGRATION_TEST_IMAGE=${PARACHAINS_IMAGE_NAME}:${PARACHAINS_IMAGE_TAG}
- export MALUS_IMAGE=${MALUS_IMAGE_NAME}:${MALUS_IMAGE_TAG}
- export COL_IMAGE="docker.io/paritypr/colander:7292" # The collator image is fixed
script:
- /home/nonroot/zombie-net/scripts/ci/run-test-env-manager.sh
--github-remote-dir="${GH_DIR}"
@@ -9,7 +9,7 @@ honest-validator-2: reports node_roles is 4
malus-validator-0: reports node_roles is 4
# Parachains should be making progress even if we have up to 1/3 malicious validators.
honest-validator-0: parachain 2000 block height is at least 2 within 240 seconds
honest-validator-1: parachain 2001 block height is at least 2 within 180 seconds
honest-validator-2: parachain 2002 block height is at least 2 within 180 seconds
@@ -0,0 +1,32 @@
[settings]
timeout = 1000
[relaychain]
default_image = "{{ZOMBIENET_INTEGRATION_TEST_IMAGE}}"
chain = "rococo-local"
command = "polkadot"
[[relaychain.nodes]]
name = "alice"
args = [ "--alice", "-lruntime=debug,parachain=trace" ]
[[relaychain.nodes]]
name = "bob"
args = [ "--bob", "-lruntime=debug,parachain=trace" ]
[[parachains]]
id = 100
add_to_genesis = false
register_para = true
onboard_as_parachain = false
[parachains.collator]
name = "collator01"
image = "{{COL_IMAGE}}"
command = "adder-collator"
args = [ "-lruntime=debug,parachain=trace" ]
[types.Header]
number = "u64"
parent_hash = "Hash"
post_state = "Hash"
@@ -3,4 +3,4 @@ Network: ./0001-parachains-smoke-test.toml
Creds: config
alice: parachain 100 is registered within 225 seconds
alice: parachain 100 block height is at least 10 within 400 seconds
@@ -3,6 +3,6 @@ Network: ./0002-parachains-upgrade-smoke-test.toml
Creds: config
alice: parachain 100 is registered within 225 seconds
alice: parachain 100 block height is at least 10 within 460 seconds
alice: parachain 100 perform dummy upgrade within 200 seconds
alice: parachain 100 block height is at least 14 within 200 seconds