## Call Dispatch Module
The call dispatch module has a single internal (only callable by other runtime modules) entry point
for dispatching encoded calls (`pallet_bridge_dispatch::Module::dispatch`). Every dispatch
(successful or not) emits a corresponding module event. The module doesn't impose any requirements
on where the calls come from - they may arrive from the bridged chain over some message lane, or
they may be crafted locally. In this document, however, we'll mostly talk about the module in the
context of bridges.
Every message that is being dispatched has three main characteristics:
- `bridge` is the 4-byte identifier of the bridge where this message comes from. This may be the
  identifier of the bridged chain (like `b"rlto"` for messages coming from Rialto), or the
  identifier of the bridge itself (`b"rimi"` for the Rialto <> Millau bridge);
- `id` is the unique id of the message within the given bridge. For messages coming from the
  messages module, it may be worth using a tuple `(LaneId, MessageNonce)` to identify a message;
- `message` is the `bp_message_dispatch::MessagePayload` structure. The `call` field is set to the
  (potentially) encoded `Call` of this chain.
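Putting these together, the identifiers and payload handed to the dispatch module can be sketched with the following simplified structures. This is an illustrative stand-in, not the real `bp_message_dispatch` crate; field names and types are assumptions chosen to mirror the description above:

```rust
/// 4-byte bridge identifier, e.g. `b"rimi"` for a Rialto <> Millau bridge.
type BridgeId = [u8; 4];

/// For messages coming from the messages module, a `(LaneId, MessageNonce)`
/// tuple works well as the unique message id within the bridge.
type LaneId = [u8; 4];
type MessageNonce = u64;
type MessageId = (LaneId, MessageNonce);

/// Simplified stand-in for `bp_message_dispatch::MessagePayload`.
struct MessagePayload {
    /// Runtime spec version used when encoding `call` (checked at dispatch).
    spec_version: u32,
    /// Declared pre-dispatch weight of the call.
    weight: u64,
    /// The (potentially) SCALE-encoded `Call` of this chain.
    call: Vec<u8>,
}

fn main() {
    let bridge: BridgeId = *b"rimi";
    let id: MessageId = (*b"ln00", 42);
    let message = MessagePayload { spec_version: 1, weight: 1_000_000, call: vec![] };
    println!("bridge={:?} id={:?} spec_version={}", bridge, id, message.spec_version);
}
```

The 4-byte `BridgeId` keeps the identifier cheap to store and compare on-chain, while the `(LaneId, MessageNonce)` tuple stays unique per bridge because nonces are assigned per lane.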
The easiest way to understand what happens when a `Call` is being dispatched is to look at the
set of module events:
- `MessageRejected` event is emitted if a message has been rejected even before it has reached the
  module. Dispatch is then called just to reflect the fact that the message has been received, but
  we have failed to pre-process it (e.g. because we have failed to decode the `MessagePayload`
  structure from the proof);
- `MessageVersionSpecMismatch` event is emitted if the current runtime specification version
  differs from the version that has been used to encode the `Call`. The message payload has a
  `spec_version` field that is filled by the message submitter. If this value differs from the
  current runtime version, the dispatch mechanism refuses to dispatch the message. Without this
  check, we may decode a wrong `Call`, for example if method arguments were changed;
- `MessageCallDecodeFailed` event is emitted if we have failed to decode the `Call` from the
  payload. This may happen if the submitter has provided an incorrect value in the `call` field,
  or if the source chain storage has been corrupted. The `Call` is decoded after the
  `spec_version` check, so we'll never try to decode a `Call` from another runtime version;
- `MessageSignatureMismatch` event is emitted if the submitter has chosen to dispatch the message
  using a specified this-chain account (`bp_message_dispatch::CallOrigin::TargetAccount` origin),
  but has failed to prove that they own the private key for this account;
- `MessageCallRejected` event is emitted if the module has been deployed with some call filter and
  this filter has rejected the `Call`. In your bridge you may choose to reject all messages except
  e.g. balance transfer calls;
- `MessageWeightMismatch` event is emitted if the message submitter has specified an invalid
  `Call` dispatch weight in the `weight` field of the message payload. The value of this field is
  compared to the pre-dispatch weight of the decoded `Call`. If it is less than the actual
  pre-dispatch weight, the dispatch is rejected. Keep in mind that even if the post-dispatch
  weight turns out to be less than specified, the submitter still has to declare (and pay for)
  the maximal possible weight (that is, the pre-dispatch weight);
- `MessageDispatchPaymentFailed` event is emitted if the message submitter has chosen to pay the
  dispatch fee at the target chain, but has failed to do so;
- `MessageDispatched` event is emitted if the message has passed all checks and we have actually
  dispatched it. The dispatch may still fail, though - that's why we include the dispatch result
  in the event payload.
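The ordering of these checks can be summarised as a small decision function. The sketch below is illustrative only: the names mirror the events above, but the payload type, the `RUNTIME_SPEC_VERSION` constant, and the toy weight function are all assumptions; the real pallet logic (including the signature and call-filter checks, elided here) is more involved:

```rust
/// Simplified stand-in for the message payload; `call: None` models a
/// payload whose `Call` we failed to decode.
struct Payload {
    spec_version: u32,
    declared_weight: u64,
    call: Option<Vec<u8>>,
}

/// Hypothetical current runtime spec version.
const RUNTIME_SPEC_VERSION: u32 = 2;

/// Walk the pre-dispatch checks in order and return the name of the event
/// that would be emitted. Signature and call-filter checks are omitted.
fn pre_dispatch_event(payload: Option<Payload>) -> &'static str {
    // We failed to decode `MessagePayload` from the proof at all.
    let payload = match payload {
        Some(p) => p,
        None => return "MessageRejected",
    };
    // `spec_version` must match the current runtime *before* decoding the call,
    // so we never decode a `Call` from another runtime version.
    if payload.spec_version != RUNTIME_SPEC_VERSION {
        return "MessageVersionSpecMismatch";
    }
    // Decode the `Call` (modelled here as an `Option`).
    let call = match payload.call {
        Some(c) => c,
        None => return "MessageCallDecodeFailed",
    };
    // The declared weight must cover the actual pre-dispatch weight.
    let actual_pre_dispatch_weight = call.len() as u64; // toy weight function
    if payload.declared_weight < actual_pre_dispatch_weight {
        return "MessageWeightMismatch";
    }
    "MessageDispatched"
}

fn main() {
    let ok = Payload { spec_version: 2, declared_weight: 10, call: Some(vec![1, 2, 3]) };
    println!("{}", pre_dispatch_event(Some(ok))); // prints "MessageDispatched"
}
```

Note that `MessageDispatched` here only means the checks passed; as described above, the dispatch itself may still fail, which is why the real event carries the dispatch result.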
When we talk about the module in the context of bridges, these events help in the following cases:
- when the message submitter has access to the state of both chains and wants to monitor what has
  happened with their message. They can use the message id (obtained from the messages module
  events) to filter the call dispatch module events at the target chain and see what has actually
  happened with the message;
- when the message submitter only has access to the source chain state (for example, when the
  sender is a runtime module at the source chain). In this case, your bridge may have an
  additional mechanism to deliver dispatch proofs (storage proofs of module events) back to the
  source chain, thus allowing the submitter to see what has happened with their messages.
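The first monitoring case above, filtering target-chain events by message id, can be sketched as follows. The event enum is a simplified stand-in for the real module events (only two variants shown, with assumed field layouts); the point is that every event carries the message id, so a submitter can select the outcome of their own message:

```rust
type MessageId = ([u8; 4], u64);

/// Simplified stand-in for the call dispatch module events; each variant
/// carries the message id so that submitters can filter by it.
#[derive(Debug, PartialEq)]
enum DispatchEvent {
    MessageDispatched { id: MessageId, success: bool },
    MessageCallRejected { id: MessageId },
}

impl DispatchEvent {
    fn id(&self) -> MessageId {
        match self {
            DispatchEvent::MessageDispatched { id, .. } => *id,
            DispatchEvent::MessageCallRejected { id } => *id,
        }
    }
}

/// Find what happened to a particular message among the target chain events.
fn find_outcome(events: &[DispatchEvent], wanted: MessageId) -> Option<&DispatchEvent> {
    events.iter().find(|event| event.id() == wanted)
}

fn main() {
    let events = vec![
        DispatchEvent::MessageCallRejected { id: (*b"ln00", 1) },
        DispatchEvent::MessageDispatched { id: (*b"ln00", 2), success: true },
    ];
    println!("{:?}", find_outcome(&events, (*b"ln00", 2)));
}
```

For the second case, the same filtering would run on the source chain against events carried back in a dispatch proof, rather than against the target chain state directly.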