pezkuwi-subxt/cumulus/pallets/dmp-queue/src/lib.rs
Oliver Tale-Yazdi e1c033ebe1 Use Message Queue as DMP and XCMP dispatch queue (#1246)
(imported from https://github.com/paritytech/cumulus/pull/2157)

## Changes

This MR refactors the XCMP, Parachains System and DMP pallets to use
the [MessageQueue](https://github.com/paritytech/substrate/pull/12485)
for delayed execution of incoming messages. The DMP pallet is entirely
replaced by the MQ and thereby removed. This allows for PoV-bounded
execution and resolves a number of issues that stem from the current
work-around.

All System Parachains adopt this change.  
The most important changes are in `primitives/core/src/lib.rs`,
`parachains/common/src/process_xcm_message.rs`,
`pallets/parachain-system/src/lib.rs`, `pallets/xcmp-queue/src/lib.rs`
and the runtime configs.

### DMP Queue Pallet

The pallet got removed and its logic refactored into parachain-system.
Overweight message management can be done directly through the MQ
pallet.
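
For example, a stuck overweight DMP message can now be retried through the MQ
pallet's `execute_overweight` extrinsic. A minimal sketch with placeholder values
(the `RuntimeCall::MessageQueue` variant name depends on your `construct_runtime!`):

```rust
// Sketch with placeholder values: retry an overweight DMP message through the MQ pallet
// instead of the removed `DmpQueue::service_overweight` extrinsic.
let retry = RuntimeCall::MessageQueue(pallet_message_queue::Call::execute_overweight {
	message_origin: AggregateMessageOrigin::Parent,
	page: 0,         // page that holds the stuck message
	index: 0,        // index of the message within that page
	weight_limit: Weight::from_parts(10_000_000_000, 64 * 1024),
});
```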

Final undeployment migrations are provided by
`cumulus_pallet_dmp_queue::UndeployDmpQueue` and `DeleteDmpQueue` that
can be configured with an aux config trait like:

```rust
parameter_types! {
	pub const DmpQueuePalletName: &'static str = "DmpQueue"; // < CHANGE ME to the DMP queue pallet's name in your runtime.
	pub const RelayOrigin: AggregateMessageOrigin = AggregateMessageOrigin::Parent;
}

impl cumulus_pallet_dmp_queue::MigrationConfig for Runtime {
	type PalletName = DmpQueuePalletName;
	type DmpHandler = frame_support::traits::EnqueueWithOrigin<MessageQueue, RelayOrigin>;
	type DbWeight = <Runtime as frame_system::Config>::DbWeight;
}

// And add them to your Migrations tuple:
pub type Migrations = (
	...
	cumulus_pallet_dmp_queue::UndeployDmpQueue<Runtime>,
	cumulus_pallet_dmp_queue::DeleteDmpQueue<Runtime>,
);
```

### XCMP Queue pallet

Removed all dispatch queue functionality. Incoming XCMP messages are now
either handled immediately if they are Signals, or otherwise enqueued into
the MQ pallet.

New config items for the XCMP queue pallet:
```rust
/// The actual queue implementation that retains the messages for later processing.
type XcmpQueue: EnqueueMessage<ParaId>;

/// How an XCM over HRMP from a sibling parachain should be processed.
type XcmpProcessor: ProcessMessage<Origin = ParaId>;

/// The maximal number of suspended XCMP channels at the same time.
#[pallet::constant]
type MaxInboundSuspended: Get<u32>;
```

How to configure those:

```rust
// Use the MessageQueue pallet to store messages for later processing. The `TransformOrigin` is needed since
// the MQ pallet itself operates on `AggregateMessageOrigin` but we want to enqueue `ParaId`s.
type XcmpQueue = TransformOrigin<MessageQueue, AggregateMessageOrigin, ParaId, ParaIdToSibling>;

// Process XCMP messages from siblings. This is type-safe to only accept `ParaId`s. They will be dispatched
// with origin `Junction::Sibling(…)`.
type XcmpProcessor = ProcessFromSibling<
	ProcessXcmMessage<
		AggregateMessageOrigin,
		xcm_executor::XcmExecutor<xcm_config::XcmConfig>,
		RuntimeCall,
	>,
>;

// Not really important what to choose here. Just something larger than the maximal number of channels.
type MaxInboundSuspended = sp_core::ConstU32<1_000>;
```

The `InboundXcmpStatus` storage item was replaced by
`InboundXcmpSuspended` since it now only tracks inbound queue suspension
and no message indices anymore.

The pallet now only sends the most recent channel `Signals`, as all prior
ones are outdated anyway.
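
For reference, the replacement storage item looks roughly like this (a sketch; the
exact declaration in the pallet may differ slightly):

```rust
/// Sketch of the new storage item: it only tracks which sibling channels are
/// currently suspended, bounded by the `MaxInboundSuspended` config item shown above.
#[pallet::storage]
pub type InboundXcmpSuspended<T: Config> =
	StorageValue<_, BoundedBTreeSet<ParaId, T::MaxInboundSuspended>, ValueQuery>;
```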

### Parachain System pallet

For `DMP` messages, instead of forwarding them to the `DMP` pallet, it
now pushes them to the configured `DmpQueue`. The message processing that
was triggered in `set_validation_data` is now done by the MQ pallet in
`on_initialize`.

XCMP messages are still handed off to the `XcmpMessageHandler`
(XCMP-Queue pallet) - no change here.
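
Conceptually, the hand-off looks like the following sketch (illustrative only, not
the actual parachain-system code; `enqueue_downward_message` is a made-up helper
name, and `T` is assumed to implement the parachain-system `Config` shown below):

```rust
// Illustrative sketch: each inbound downward message is enqueued under the `Parent`
// origin and serviced later by the MQ pallet in `on_initialize`.
use cumulus_primitives_core::AggregateMessageOrigin;
use frame_support::{traits::EnqueueMessage, BoundedSlice};

fn enqueue_downward_message<T: cumulus_pallet_parachain_system::Config>(msg: &[u8]) {
	match BoundedSlice::try_from(msg) {
		Ok(bounded) => T::DmpQueue::enqueue_message(bounded, AggregateMessageOrigin::Parent),
		// Messages longer than `MaxMessageLen` need extra handling in the real pallet.
		Err(_) => { /* out of scope for this sketch */ },
	}
}
```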

New config items for the parachain system pallet:
```rust
/// Queues inbound downward messages for delayed processing. 
///
/// Analogous to the `XcmpQueue` of the XCMP queue pallet.
type DmpQueue: EnqueueMessage<AggregateMessageOrigin>;
``` 

How to configure:
```rust
/// Use the MQ pallet to store DMP messages for delayed processing.
type DmpQueue = MessageQueue;
``` 

## Message Flow

The flow of messages on the parachain side. Messages come in from the
left via the `Validation Data` and finally end up at the `Xcm Executor`
on the right.

![Untitled (1)](https://github.com/paritytech/cumulus/assets/10380170/6cf8b377-88c9-4aed-96df-baace266e04d)

## Further changes

- Bumped the default suspension, drop and resume thresholds in
`QueueConfigData::default()`.
- `XcmpQueue::{suspend_xcm_execution, resume_xcm_execution}` now error when
they would be a no-op.
- Properly validate the `QueueConfigData` before setting it (see the sketch
after this list).
- Marked weight files as auto-generated so they won't auto-expand in the
MR files view.
- Moved the `hypothetical` asserts to `frame_support` under the name
`experimental_hypothetically`.
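
A minimal sketch of what that validation could look like, assuming the threshold
fields are named `suspend_threshold`, `drop_threshold` and `resume_threshold` (the
exact checks in the pallet may differ):

```rust
impl QueueConfigData {
	/// Sketch of a sanity check run before a new config is stored: reject values
	/// where the thresholds are not ordered sensibly.
	fn validate(&self) -> Result<(), ()> {
		if self.resume_threshold < self.suspend_threshold &&
			self.suspend_threshold <= self.drop_threshold &&
			self.resume_threshold >= 1
		{
			Ok(())
		} else {
			Err(())
		}
	}
}
```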

Questions:
- [ ] What about the ugly `#[cfg(feature = "runtime-benchmarks")]` in
the runtimes? Not sure how best to fix it. Just having them like this makes
tests fail that rely on the real message processor when the feature is
enabled.
- [ ] Need a good weight for `MessageQueueServiceWeight`. The scheduler
already takes 80%, so I put it at 10%, but that is quite low (see the sketch
after this list).
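
For illustration, such a limit is typically expressed like this in a runtime
(sketch; `RuntimeBlockWeights` is assumed to be the runtime's block-weight
constant, and 10% is the value mentioned above):

```rust
parameter_types! {
	/// Sketch: allow the MessageQueue pallet to consume at most 10% of the
	/// maximal block weight per block for servicing enqueued messages.
	pub MessageQueueServiceWeight: Weight =
		Perbill::from_percent(10) * RuntimeBlockWeights::get().max_block;
}
```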

TODO:
- [x] Remove c&p code after
https://github.com/paritytech/polkadot/pull/6271
- [x] Use `HandleMessage` once it is public in Substrate
- [x] fix `runtime-benchmarks` feature
https://github.com/paritytech/polkadot/pull/6966
- [x] Benchmarks
- [x] Tests
- [ ] Migrate `InboundXcmpStatus` to `InboundXcmpSuspended`
- [x] Possibly cleanup Migrations (DMP+XCMP)
- [x] optional: create `TransformProcessMessageOrigin` in Substrate and
replace `ProcessFromSibling`
- [ ] Rerun weights on ref HW

---------

Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
Co-authored-by: Liam Aharon <liam.aharon@hotmail.com>
Co-authored-by: joe petrowski <25483142+joepetrowski@users.noreply.github.com>
Co-authored-by: Kian Paimani <5588131+kianenigma@users.noreply.github.com>
Co-authored-by: command-bot <>
2023-11-02 15:31:38 +01:00


// Copyright (C) Parity Technologies (UK) Ltd.
// SPDX-License-Identifier: Apache-2.0
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//! This pallet used to implement a message queue for downward messages from the relay-chain.
//!
//! It is now deprecated and has been refactored to simply drain any remaining messages into
//! something implementing `HandleMessage`. It proceeds through the states of
//! [`MigrationState`] one by one, in the order in which they are listed in the source code.
//! The pallet can be removed from the runtime once `Completed` has been emitted.
#![cfg_attr(not(feature = "std"), no_std)]
use migration::*;
pub use pallet::*;
mod benchmarking;
mod migration;
mod mock;
mod tests;
pub mod weights;
pub use weights::WeightInfo;
/// The maximal length of a DMP message.
pub type MaxDmpMessageLenOf<T> =
	<<T as Config>::DmpSink as frame_support::traits::HandleMessage>::MaxMessageLen;
#[frame_support::pallet]
pub mod pallet {
	use super::*;
	use frame_support::{pallet_prelude::*, traits::HandleMessage, weights::WeightMeter};
	use frame_system::pallet_prelude::*;
	use sp_io::hashing::twox_128;

	const STORAGE_VERSION: StorageVersion = StorageVersion::new(2);

	#[pallet::pallet]
	#[pallet::storage_version(STORAGE_VERSION)]
	pub struct Pallet<T>(_);

	#[pallet::config]
	pub trait Config: frame_system::Config {
		/// The overarching event type of the runtime.
		type RuntimeEvent: From<Event<Self>> + IsType<<Self as frame_system::Config>::RuntimeEvent>;

		/// The sink for all DMP messages that the lazy migration will use.
		type DmpSink: HandleMessage;

		/// Weight info for this pallet (only needed for the lazy migration).
		type WeightInfo: WeightInfo;
	}

	/// The migration state of this pallet.
	#[pallet::storage]
	pub type MigrationStatus<T> = StorageValue<_, MigrationState, ValueQuery>;
	/// The lazy-migration state of the pallet.
	#[derive(
		codec::Encode, codec::Decode, Debug, PartialEq, Eq, Clone, MaxEncodedLen, TypeInfo,
	)]
	pub enum MigrationState {
		/// Migration has not started yet.
		NotStarted,
		/// The export of pages started.
		StartedExport {
			/// The next page that should be exported.
			next_begin_used: PageCounter,
		},
		/// The page export completed.
		CompletedExport,
		/// The export of overweight messages started.
		StartedOverweightExport {
			/// The next overweight index that should be exported.
			next_overweight_index: u64,
		},
		/// The export of overweight messages completed.
		CompletedOverweightExport,
		/// The storage cleanup started.
		StartedCleanup { cursor: Option<BoundedVec<u8, ConstU32<1024>>> },
		/// The migration finished. The pallet can now be removed from the runtime.
		Completed,
	}

	impl Default for MigrationState {
		fn default() -> Self {
			Self::NotStarted
		}
	}
	#[pallet::event]
	#[pallet::generate_deposit(pub(super) fn deposit_event)]
	pub enum Event<T: Config> {
		/// The export of pages started.
		StartedExport,
		/// The export of a page completed.
		Exported { page: PageCounter },
		/// The export of a page failed.
		///
		/// This should never be emitted.
		ExportFailed { page: PageCounter },
		/// The export of pages completed.
		CompletedExport,
		/// The export of overweight messages started.
		StartedOverweightExport,
		/// The export of an overweight message completed.
		ExportedOverweight { index: OverweightIndex },
		/// The export of an overweight message failed.
		///
		/// This should never be emitted.
		ExportOverweightFailed { index: OverweightIndex },
		/// The export of overweight messages completed.
		CompletedOverweightExport,
		/// The cleanup of remaining pallet storage started.
		StartedCleanup,
		/// Some debris was cleaned up.
		CleanedSome { keys_removed: u32 },
		/// The cleanup of remaining pallet storage completed.
		Completed { error: bool },
	}
	#[pallet::call]
	impl<T: Config> Pallet<T> {}

	#[pallet::hooks]
	impl<T: Config> Hooks<BlockNumberFor<T>> for Pallet<T> {
		fn integrity_test() {
			let w = Self::on_idle_weight();
			assert!(w != Weight::zero());
			assert!(w.all_lte(T::BlockWeights::get().max_block));
		}

		fn on_idle(now: BlockNumberFor<T>, limit: Weight) -> Weight {
			let mut meter = WeightMeter::with_limit(limit);

			if meter.try_consume(Self::on_idle_weight()).is_err() {
				log::debug!(target: LOG, "Not enough weight for on_idle. {} < {}", limit, Self::on_idle_weight());
				return meter.consumed()
			}

			let state = MigrationStatus::<T>::get();
			let index = PageIndex::<T>::get();
			log::debug!(target: LOG, "on_idle: block={:?}, state={:?}, index={:?}", now, state, index);

			match state {
				MigrationState::NotStarted => {
					log::debug!(target: LOG, "Init export at page {}", index.begin_used);
					MigrationStatus::<T>::put(MigrationState::StartedExport {
						next_begin_used: index.begin_used,
					});
					Self::deposit_event(Event::StartedExport);
				},
				MigrationState::StartedExport { next_begin_used } => {
					log::debug!(target: LOG, "Exporting page {}", next_begin_used);
					if next_begin_used == index.end_used {
						MigrationStatus::<T>::put(MigrationState::CompletedExport);
						log::debug!(target: LOG, "CompletedExport");
						Self::deposit_event(Event::CompletedExport);
					} else {
						let res = migration::migrate_page::<T>(next_begin_used);
						MigrationStatus::<T>::put(MigrationState::StartedExport {
							next_begin_used: next_begin_used.saturating_add(1),
						});
						if let Ok(()) = res {
							log::debug!(target: LOG, "Exported page {}", next_begin_used);
							Self::deposit_event(Event::Exported { page: next_begin_used });
						} else {
							Self::deposit_event(Event::ExportFailed { page: next_begin_used });
						}
					}
				},
				MigrationState::CompletedExport => {
					log::debug!(target: LOG, "Init export overweight at index 0");
					MigrationStatus::<T>::put(MigrationState::StartedOverweightExport {
						next_overweight_index: 0,
					});
					Self::deposit_event(Event::StartedOverweightExport);
				},
				MigrationState::StartedOverweightExport { next_overweight_index } => {
					log::debug!(target: LOG, "Exporting overweight index {}", next_overweight_index);
					if next_overweight_index == index.overweight_count {
						MigrationStatus::<T>::put(MigrationState::CompletedOverweightExport);
						log::debug!(target: LOG, "CompletedOverweightExport");
						Self::deposit_event(Event::CompletedOverweightExport);
					} else {
						let res = migration::migrate_overweight::<T>(next_overweight_index);
						MigrationStatus::<T>::put(MigrationState::StartedOverweightExport {
							next_overweight_index: next_overweight_index.saturating_add(1),
						});
						if let Ok(()) = res {
							log::debug!(target: LOG, "Exported overweight index {next_overweight_index}");
							Self::deposit_event(Event::ExportedOverweight {
								index: next_overweight_index,
							});
						} else {
							Self::deposit_event(Event::ExportOverweightFailed {
								index: next_overweight_index,
							});
						}
					}
				},
				MigrationState::CompletedOverweightExport => {
					log::debug!(target: LOG, "Init cleanup");
					MigrationStatus::<T>::put(MigrationState::StartedCleanup { cursor: None });
					Self::deposit_event(Event::StartedCleanup);
				},
				MigrationState::StartedCleanup { cursor } => {
					log::debug!(target: LOG, "Cleaning up");
					let hashed_prefix =
						twox_128(<Pallet<T> as PalletInfoAccess>::name().as_bytes());
					let result = frame_support::storage::unhashed::clear_prefix(
						&hashed_prefix,
						Some(2), // Somehow it does nothing when set to 1, so we set it to 2.
						cursor.as_ref().map(|c| c.as_ref()),
					);
					Self::deposit_event(Event::CleanedSome { keys_removed: result.backend });

					// GOTCHA! We deleted *all* pallet storage, hence also our own
					// `MigrationState`. BUT we insert it back:
					if let Some(unbound_cursor) = result.maybe_cursor {
						if let Ok(cursor) = unbound_cursor.try_into() {
							log::debug!(target: LOG, "Next cursor: {:?}", &cursor);
							MigrationStatus::<T>::put(MigrationState::StartedCleanup {
								cursor: Some(cursor),
							});
						} else {
							MigrationStatus::<T>::put(MigrationState::Completed);
							log::error!(target: LOG, "Completed with error: could not bound cursor");
							Self::deposit_event(Event::Completed { error: true });
						}
					} else {
						MigrationStatus::<T>::put(MigrationState::Completed);
						log::debug!(target: LOG, "Completed");
						Self::deposit_event(Event::Completed { error: false });
					}
				},
				MigrationState::Completed => {
					log::debug!(target: LOG, "Idle; you can remove this pallet");
				},
			}

			meter.consumed()
		}
	}
	impl<T: Config> Pallet<T> {
		/// The worst-case weight of [`Self::on_idle`].
		pub fn on_idle_weight() -> Weight {
			<T as crate::Config>::WeightInfo::on_idle_good_msg()
				.max(<T as crate::Config>::WeightInfo::on_idle_large_msg())
		}
	}
}