Use Message Queue as DMP and XCMP dispatch queue (#1246)

(imported from https://github.com/paritytech/cumulus/pull/2157)

## Changes

This MR refactors the XCMP, Parachains System and DMP pallets to use
the [MessageQueue](https://github.com/paritytech/substrate/pull/12485)
for delayed execution of incoming messages. The DMP pallet is entirely
replaced by the MQ pallet and thereby removed. This allows for
PoV-bounded execution and resolves a number of issues that stem from the
current workaround.

All System Parachains adopt this change.  
The most important changes are in `primitives/core/src/lib.rs`,
`parachains/common/src/process_xcm_message.rs`,
`pallets/parachain-system/src/lib.rs`, `pallets/xcmp-queue/src/lib.rs`
and the runtime configs.

### DMP Queue Pallet

The pallet was removed and its logic refactored into the
parachain-system pallet. Overweight message management can now be done
directly through the MQ pallet.
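For illustration, an overweight DMP message that previously had to be retried through the DMP queue pallet can now be retried via the MQ pallet's `execute_overweight` extrinsic. A sketch of such a call; the page index, message index and weight limit below are placeholder values, not taken from this MR:

```rust
// Hypothetical retry of an overweight downward message through the
// MessageQueue pallet. The `(page, index)` coordinates identify the
// stored message and are placeholders here.
MessageQueue::execute_overweight(
	RuntimeOrigin::signed(who),
	AggregateMessageOrigin::Parent, // the queue the message sits in
	0,                              // page index within that queue
	0,                              // message index within the page
	Weight::from_parts(10_000_000_000, 64 * 1024), // weight limit to spend
)?;
```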

Final undeployment migrations are provided by
`cumulus_pallet_dmp_queue::UndeployDmpQueue` and `DeleteDmpQueue` that
can be configured with an aux config trait like:

```rust
parameter_types! {
	pub const DmpQueuePalletName: &'static str = "DmpQueue"; // < CHANGE ME
	pub const RelayOrigin: AggregateMessageOrigin = AggregateMessageOrigin::Parent;
}

impl cumulus_pallet_dmp_queue::MigrationConfig for Runtime {
	type PalletName = DmpQueuePalletName;
	type DmpHandler = frame_support::traits::EnqueueWithOrigin<MessageQueue, RelayOrigin>;
	type DbWeight = <Runtime as frame_system::Config>::DbWeight;
}

// And adding them to your Migrations tuple:
pub type Migrations = (
	// ...
	cumulus_pallet_dmp_queue::UndeployDmpQueue<Runtime>,
	cumulus_pallet_dmp_queue::DeleteDmpQueue<Runtime>,
);
```

### XCMP Queue pallet

Removed all dispatch queue functionality. Incoming XCMP messages are now
either handled immediately if they are Signals, or enqueued into the MQ
pallet otherwise.

New config items for the XCMP queue pallet:
```rust
/// The actual queue implementation that retains the messages for later processing.
type XcmpQueue: EnqueueMessage<ParaId>;

/// How an XCM over HRMP from a sibling parachain should be processed.
type XcmpProcessor: ProcessMessage<Origin = ParaId>;

/// The maximal number of suspended XCMP channels at the same time.
#[pallet::constant]
type MaxInboundSuspended: Get<u32>;
```

How to configure those:

```rust
// Use the MessageQueue pallet to store messages for later processing. The `TransformOrigin` is needed since
// the MQ pallet itself operates on `AggregateMessageOrigin` but we want to enqueue `ParaId`s.
type XcmpQueue = TransformOrigin<MessageQueue, AggregateMessageOrigin, ParaId, ParaIdToSibling>;

// Process XCMP messages from siblings. This is type-safe to only accept `ParaId`s. They will be dispatched
// with origin `Junction::Sibling(…)`.
type XcmpProcessor = ProcessFromSibling<
	ProcessXcmMessage<
		AggregateMessageOrigin,
		xcm_executor::XcmExecutor<xcm_config::XcmConfig>,
		RuntimeCall,
	>,
>;

// Not really important what to choose here. Just something larger than the maximal number of channels.
type MaxInboundSuspended = sp_core::ConstU32<1_000>;
```
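The `ParaIdToSibling` converter used in the `TransformOrigin` above maps each sibling's `ParaId` into the aggregate origin that the MQ pallet books messages under. A sketch of what such a converter can look like, assuming the `AggregateMessageOrigin::Sibling` variant shown elsewhere in this MR:

```rust
// Converts an incoming sibling `ParaId` into the aggregate message
// origin used by the MessageQueue pallet for bookkeeping.
pub struct ParaIdToSibling;
impl sp_runtime::traits::Convert<ParaId, AggregateMessageOrigin> for ParaIdToSibling {
	fn convert(para_id: ParaId) -> AggregateMessageOrigin {
		AggregateMessageOrigin::Sibling(para_id)
	}
}
```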

The `InboundXcmpStatus` storage item was replaced by
`InboundXcmpSuspended` since it now only tracks inbound queue suspension
and no message indices anymore.

The pallet now only sends the most recent channel `Signals`, as all
prior ones are outdated anyway.

### Parachain System pallet

Instead of forwarding `DMP` messages to the `DMP` pallet, the pallet now
pushes them into the configured `DmpQueue`. The message processing that
was triggered in `set_validation_data` is now done by the MQ pallet in
`on_initialize`.

XCMP messages are still handed off to the `XcmpMessageHandler`
(XCMP-Queue pallet) - no change here.

New config items for the parachain system pallet:
```rust
/// Queues inbound downward messages for delayed processing. 
///
/// Analogous to the `XcmpQueue` of the XCMP queue pallet.
type DmpQueue: EnqueueMessage<AggregateMessageOrigin>;
``` 

How to configure:
```rust
/// Use the MQ pallet to store DMP messages for delayed processing.
type DmpQueue = MessageQueue;
``` 
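Both queues ultimately enqueue into the MQ pallet, so the runtime also needs a `pallet_message_queue::Config` impl. A sketch of what that can look like; `MessageQueueServiceWeight`, the `xcm_config` module and the handler choices are placeholders for the runtime's own values, not prescribed by this MR:

```rust
// Sketch of the MessageQueue pallet config that the XCMP and DMP sides
// enqueue into. Handler types are illustrative assumptions.
impl pallet_message_queue::Config for Runtime {
	type RuntimeEvent = RuntimeEvent;
	// Dispatch enqueued XCMs with the XCM executor.
	type MessageProcessor = xcm_builder::ProcessXcmMessage<
		AggregateMessageOrigin,
		xcm_executor::XcmExecutor<xcm_config::XcmConfig>,
		RuntimeCall,
	>;
	type Size = u32;
	// Pause servicing of sibling queues while the XCMP queue is suspended.
	type QueuePausedQuery = (); // e.g. route to the XCMP queue pallet
	type QueueChangeHandler = (); // e.g. route to the XCMP queue pallet
	type HeapSize = sp_core::ConstU32<{ 64 * 1024 }>;
	type MaxStale = sp_core::ConstU32<8>;
	// Weight budget spent in `on_initialize`; see the open question below.
	type ServiceWeight = MessageQueueServiceWeight;
	type WeightInfo = ();
}
```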

## Message Flow

The flow of messages on the parachain side. Messages come in from the
left via the `Validation Data` and finally end up at the `Xcm Executor`
on the right.

![Untitled
(1)](https://github.com/paritytech/cumulus/assets/10380170/6cf8b377-88c9-4aed-96df-baace266e04d)

## Further changes

- Bumped the default suspension, drop and resume thresholds in
`QueueConfigData::default()`.
- `XcmpQueue::{suspend_xcm_execution, resume_xcm_execution}` now error
when they would be a no-op.
- Properly validate the `QueueConfigData` before setting it.
- Marked weight files as auto-generated so they won't auto-expand in the
MR files view.
- Moved the `hypothetical` asserts to `frame_support` under the name
`experimental_hypothetically`.

Questions:
- [ ] What about the ugly `#[cfg(feature = "runtime-benchmarks")]` in
the runtimes? Not sure how best to fix it. Having them like this makes
tests that rely on the real message processor fail when the feature is
enabled.
- [ ] Need a good weight for `MessageQueueServiceWeight`. The scheduler
already takes 80%, so I set it to 10%, but that is quite low.
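One way to express that 10% budget as a runtime constant; the parameter and `RuntimeBlockWeights` names follow common runtime conventions and are assumptions here, not fixed by this MR:

```rust
// Sketch: reserve 10% of the maximal block weight for servicing the
// message queue in `on_initialize`.
parameter_types! {
	pub MessageQueueServiceWeight: Weight =
		Perbill::from_percent(10) * RuntimeBlockWeights::get().max_block;
}
```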

TODO:
- [x] Remove c&p code after
https://github.com/paritytech/polkadot/pull/6271
- [x] Use `HandleMessage` once it is public in Substrate
- [x] fix `runtime-benchmarks` feature
https://github.com/paritytech/polkadot/pull/6966
- [x] Benchmarks
- [x] Tests
- [ ] Migrate `InboundXcmpStatus` to `InboundXcmpSuspended`
- [x] Possibly cleanup Migrations (DMP+XCMP)
- [x] optional: create `TransformProcessMessageOrigin` in Substrate and
replace `ProcessFromSibling`
- [ ] Rerun weights on ref HW

---------

Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
Co-authored-by: Liam Aharon <liam.aharon@hotmail.com>
Co-authored-by: joe petrowski <25483142+joepetrowski@users.noreply.github.com>
Co-authored-by: Kian Paimani <5588131+kianenigma@users.noreply.github.com>
Co-authored-by: command-bot <>
Commit e1c033ebe1 (parent 7df0417bcd) by Oliver Tale-Yazdi,
2023-11-02 15:31:38 +01:00, committed by GitHub.
277 changed files with 11604 additions and 4733 deletions.
```diff
@@ -195,7 +195,7 @@ use frame_support::{
 	pallet_prelude::*,
 	traits::{
 		DefensiveTruncateFrom, EnqueueMessage, ExecuteOverweightError, Footprint, ProcessMessage,
-		ProcessMessageError, QueuePausedQuery, ServiceQueues,
+		ProcessMessageError, QueueFootprint, QueuePausedQuery, ServiceQueues,
 	},
 	BoundedSlice, CloneNoBound, DefaultNoBound,
 };
@@ -423,14 +423,23 @@ impl<MessageOrigin> Default for BookState<MessageOrigin> {
 	}
 }
 
+impl<MessageOrigin> From<BookState<MessageOrigin>> for QueueFootprint {
+	fn from(book: BookState<MessageOrigin>) -> Self {
+		QueueFootprint {
+			pages: book.count,
+			storage: Footprint { count: book.message_count, size: book.size },
+		}
+	}
+}
+
 /// Handler code for when the items in a queue change.
 pub trait OnQueueChanged<Id> {
 	/// Note that the queue `id` now has `item_count` items in it, taking up `items_size` bytes.
-	fn on_queue_changed(id: Id, items_count: u64, items_size: u64);
+	fn on_queue_changed(id: Id, fp: QueueFootprint);
 }
 
 impl<Id> OnQueueChanged<Id> for () {
-	fn on_queue_changed(_: Id, _: u64, _: u64) {}
+	fn on_queue_changed(_: Id, _: QueueFootprint) {}
 }
 
 #[frame_support::pallet]
@@ -907,11 +916,7 @@ impl<T: Config> Pallet<T> {
 					T::WeightInfo::execute_overweight_page_updated()
 				};
 				BookStateFor::<T>::insert(&origin, &book_state);
-				T::QueueChangeHandler::on_queue_changed(
-					origin,
-					book_state.message_count,
-					book_state.size,
-				);
+				T::QueueChangeHandler::on_queue_changed(origin, book_state.into());
 				Ok(weight_counter.consumed().saturating_add(page_weight))
 			},
 		}
@@ -976,11 +981,7 @@ impl<T: Config> Pallet<T> {
 		book_state.message_count.saturating_reduce(page.remaining.into() as u64);
 		book_state.size.saturating_reduce(page.remaining_size.into() as u64);
 		BookStateFor::<T>::insert(origin, &book_state);
-		T::QueueChangeHandler::on_queue_changed(
-			origin.clone(),
-			book_state.message_count,
-			book_state.size,
-		);
+		T::QueueChangeHandler::on_queue_changed(origin.clone(), book_state.into());
 		Self::deposit_event(Event::PageReaped { origin: origin.clone(), index: page_index });
 
 		Ok(())
@@ -1035,11 +1036,7 @@ impl<T: Config> Pallet<T> {
 		}
 		BookStateFor::<T>::insert(&origin, &book_state);
 		if total_processed > 0 {
-			T::QueueChangeHandler::on_queue_changed(
-				origin,
-				book_state.message_count,
-				book_state.size,
-			);
+			T::QueueChangeHandler::on_queue_changed(origin, book_state.into());
 		}
 		(total_processed > 0, next_ready)
 	}
@@ -1482,7 +1479,7 @@ impl<T: Config> EnqueueMessage<MessageOriginOf<T>> for Pallet<T> {
 	) {
 		Self::do_enqueue_message(&origin, message);
 		let book_state = BookStateFor::<T>::get(&origin);
-		T::QueueChangeHandler::on_queue_changed(origin, book_state.message_count, book_state.size);
+		T::QueueChangeHandler::on_queue_changed(origin, book_state.into());
 	}
 
 	fn enqueue_messages<'a>(
@@ -1493,7 +1490,7 @@ impl<T: Config> EnqueueMessage<MessageOriginOf<T>> for Pallet<T> {
 			Self::do_enqueue_message(&origin, message);
 		}
 		let book_state = BookStateFor::<T>::get(&origin);
-		T::QueueChangeHandler::on_queue_changed(origin, book_state.message_count, book_state.size);
+		T::QueueChangeHandler::on_queue_changed(origin, book_state.into());
 	}
 
 	fn sweep_queue(origin: MessageOriginOf<T>) {
@@ -1508,8 +1505,7 @@ impl<T: Config> EnqueueMessage<MessageOriginOf<T>> for Pallet<T> {
 		BookStateFor::<T>::insert(&origin, &book_state);
 	}
 
-	fn footprint(origin: MessageOriginOf<T>) -> Footprint {
-		let book_state = BookStateFor::<T>::get(&origin);
-		Footprint { count: book_state.message_count, size: book_state.size }
+	fn footprint(origin: MessageOriginOf<T>) -> QueueFootprint {
+		BookStateFor::<T>::get(&origin).into()
 	}
 }
```
```diff
@@ -278,8 +278,8 @@ parameter_types! {
 /// Records all queue changes into [`QueueChanges`].
 pub struct RecordingQueueChangeHandler;
 impl OnQueueChanged<MessageOrigin> for RecordingQueueChangeHandler {
-	fn on_queue_changed(id: MessageOrigin, items_count: u64, items_size: u64) {
-		QueueChanges::mutate(|cs| cs.push((id, items_count, items_size)));
+	fn on_queue_changed(id: MessageOrigin, fp: QueueFootprint) {
+		QueueChanges::mutate(|cs| cs.push((id, fp.storage.count, fp.storage.size)));
 	}
 }
@@ -366,3 +366,7 @@ pub fn num_overweight_enqueued_events() -> u32 {
 		})
 		.count() as u32
 }
+
+pub fn fp(pages: u32, count: u64, size: u64) -> QueueFootprint {
+	QueueFootprint { storage: Footprint { count, size }, pages }
+}
```
```diff
@@ -49,6 +49,7 @@ fn enqueue_within_one_page_works() {
 		MessageQueue::enqueue_message(msg("c"), Here);
 		assert_eq!(MessageQueue::service_queues(2.into_weight()), 2.into_weight());
 		assert_eq!(MessagesProcessed::take(), vec![(b"a".to_vec(), Here), (b"b".to_vec(), Here)]);
+		assert_eq!(MessageQueue::footprint(Here).pages, 1);
 
 		assert_eq!(MessageQueue::service_queues(2.into_weight()), 1.into_weight());
 		assert_eq!(MessagesProcessed::take(), vec![(b"c".to_vec(), Here)]);
@@ -314,6 +315,7 @@ fn reap_page_permanent_overweight_works() {
 			MessageQueue::enqueue_message(msg("weight=2"), Here);
 		}
 		assert_eq!(Pages::<Test>::iter().count(), n);
+		assert_eq!(MessageQueue::footprint(Here).pages, n as u32);
 		assert_eq!(QueueChanges::take().len(), n);
 		// Mark all pages as stale since their message is permanently overweight.
 		MessageQueue::service_queues(1.into_weight());
@@ -339,6 +341,7 @@ fn reap_page_permanent_overweight_works() {
 			assert_noop!(MessageQueue::do_reap_page(&o, i), Error::<Test>::NotReapable);
 			assert!(QueueChanges::take().is_empty());
 		}
+		assert_eq!(MessageQueue::footprint(Here).pages, 3);
 	});
 }
@@ -1022,8 +1025,9 @@ fn footprint_works() {
 		BookStateFor::<Test>::insert(origin, book);
 
 		let info = MessageQueue::footprint(origin);
-		assert_eq!(info.count as usize, msgs);
-		assert_eq!(info.size, page.remaining_size as u64);
+		assert_eq!(info.storage.count as usize, msgs);
+		assert_eq!(info.storage.size, page.remaining_size as u64);
+		assert_eq!(info.pages, 1);
 
 		// Sweeping a queue never calls OnQueueChanged.
 		assert!(QueueChanges::take().is_empty());
@@ -1044,16 +1048,44 @@ fn footprint_invalid_works() {
 fn footprint_on_swept_works() {
 	use MessageOrigin::*;
 	build_and_execute::<Test>(|| {
-		let mut book = empty_book::<Test>();
-		book.message_count = 3;
-		book.size = 10;
-		BookStateFor::<Test>::insert(Here, &book);
-		knit(&Here);
+		build_ring::<Test>(&[Here]);
 
 		MessageQueue::sweep_queue(Here);
 		let fp = MessageQueue::footprint(Here);
-		assert_eq!(fp.count, 3);
-		assert_eq!(fp.size, 10);
+		assert_eq!((1, 1, 1), (fp.storage.count, fp.storage.size, fp.pages));
 	})
 }
 
+/// The number of reported pages takes overweight pages into account.
+#[test]
+fn footprint_num_pages_works() {
+	use MessageOrigin::*;
+	build_and_execute::<Test>(|| {
+		MessageQueue::enqueue_message(msg("weight=2"), Here);
+		MessageQueue::enqueue_message(msg("weight=3"), Here);
+
+		assert_eq!(MessageQueue::footprint(Here), fp(2, 2, 16));
+
+		// Mark the messages as overweight.
+		assert_eq!(MessageQueue::service_queues(1.into_weight()), 0.into_weight());
+		assert_eq!(System::events().len(), 2);
+		// Overweight does not change the footprint.
+		assert_eq!(MessageQueue::footprint(Here), fp(2, 2, 16));
+
+		// Now execute the second message.
+		assert_eq!(
+			<MessageQueue as ServiceQueues>::execute_overweight(3.into_weight(), (Here, 1, 0))
+				.unwrap(),
+			3.into_weight()
+		);
+		assert_eq!(MessageQueue::footprint(Here), fp(1, 1, 8));
+		// And the first one:
+		assert_eq!(
+			<MessageQueue as ServiceQueues>::execute_overweight(2.into_weight(), (Here, 0, 0))
+				.unwrap(),
+			2.into_weight()
+		);
+		assert_eq!(MessageQueue::footprint(Here), Default::default());
+	})
+}
@@ -1143,6 +1175,7 @@ fn permanently_overweight_book_unknits() {
 		assert_ring(&[]);
 		assert_eq!(MessagesProcessed::take().len(), 0);
 		assert_eq!(BookStateFor::<Test>::get(Here).message_count, 1);
+		assert_eq!(MessageQueue::footprint(Here).pages, 1);
 
 		// Now if we enqueue another message, it will become ready again.
 		MessageQueue::enqueue_messages([msg("weight=1")].into_iter(), Here);
 		assert_ring(&[Here]);
```