diff --git a/404.html b/404.html
index 24bc2c8..0118aaf 100644
--- a/404.html
+++ b/404.html
@@ -91,7 +91,7 @@
diff --git a/approved/0001-agile-coretime.html b/approved/0001-agile-coretime.html
index e831f69..507e8ea 100644
--- a/approved/0001-agile-coretime.html
+++ b/approved/0001-agile-coretime.html
@@ -90,7 +90,7 @@
diff --git a/approved/0005-coretime-interface.html b/approved/0005-coretime-interface.html
index dec4ef8..42d464a 100644
--- a/approved/0005-coretime-interface.html
+++ b/approved/0005-coretime-interface.html
@@ -90,7 +90,7 @@
diff --git a/approved/0007-system-collator-selection.html b/approved/0007-system-collator-selection.html
index 6525554..55e223c 100644
--- a/approved/0007-system-collator-selection.html
+++ b/approved/0007-system-collator-selection.html
@@ -90,7 +90,7 @@
diff --git a/approved/0008-parachain-bootnodes-dht.html b/approved/0008-parachain-bootnodes-dht.html
index 5f76aa8..d2ac426 100644
--- a/approved/0008-parachain-bootnodes-dht.html
+++ b/approved/0008-parachain-bootnodes-dht.html
@@ -90,7 +90,7 @@
diff --git a/approved/0009-improved-net-light-client-requests.html b/approved/0009-improved-net-light-client-requests.html
index dbf1b22..2fa31fa 100644
--- a/approved/0009-improved-net-light-client-requests.html
+++ b/approved/0009-improved-net-light-client-requests.html
@@ -90,7 +90,7 @@
diff --git a/approved/0010-burn-coretime-revenue.html b/approved/0010-burn-coretime-revenue.html
index cc030e8..112243c 100644
--- a/approved/0010-burn-coretime-revenue.html
+++ b/approved/0010-burn-coretime-revenue.html
@@ -90,7 +90,7 @@
diff --git a/approved/0012-process-for-adding-new-collectives.html b/approved/0012-process-for-adding-new-collectives.html
index dfb022b..354acb9 100644
--- a/approved/0012-process-for-adding-new-collectives.html
+++ b/approved/0012-process-for-adding-new-collectives.html
@@ -90,7 +90,7 @@
diff --git a/approved/0013-prepare-blockbuilder-and-core-runtime-apis-for-mbms.html b/approved/0013-prepare-blockbuilder-and-core-runtime-apis-for-mbms.html
index 53b1ea2..5b5c68a 100644
--- a/approved/0013-prepare-blockbuilder-and-core-runtime-apis-for-mbms.html
+++ b/approved/0013-prepare-blockbuilder-and-core-runtime-apis-for-mbms.html
@@ -90,7 +90,7 @@
diff --git a/approved/0014-improve-locking-mechanism-for-parachains.html b/approved/0014-improve-locking-mechanism-for-parachains.html
index f49a077..6837132 100644
--- a/approved/0014-improve-locking-mechanism-for-parachains.html
+++ b/approved/0014-improve-locking-mechanism-for-parachains.html
@@ -90,7 +90,7 @@
diff --git a/approved/0022-adopt-encointer-runtime.html b/approved/0022-adopt-encointer-runtime.html
index 2f1288c..541cde9 100644
--- a/approved/0022-adopt-encointer-runtime.html
+++ b/approved/0022-adopt-encointer-runtime.html
@@ -90,7 +90,7 @@
diff --git a/approved/0026-sassafras-consensus.html b/approved/0026-sassafras-consensus.html
index c6550e0..9960016 100644
--- a/approved/0026-sassafras-consensus.html
+++ b/approved/0026-sassafras-consensus.html
@@ -90,7 +90,7 @@
diff --git a/approved/0032-minimal-relay.html b/approved/0032-minimal-relay.html
index ddffea9..c1f8c7c 100644
--- a/approved/0032-minimal-relay.html
+++ b/approved/0032-minimal-relay.html
@@ -90,7 +90,7 @@
diff --git a/approved/0042-extrinsics-state-version.html b/approved/0042-extrinsics-state-version.html
index 1894821..07f1fd8 100644
--- a/approved/0042-extrinsics-state-version.html
+++ b/approved/0042-extrinsics-state-version.html
@@ -90,7 +90,7 @@
diff --git a/approved/0043-storage-proof-size-hostfunction.html b/approved/0043-storage-proof-size-hostfunction.html
index f18b0f9..6f031da 100644
--- a/approved/0043-storage-proof-size-hostfunction.html
+++ b/approved/0043-storage-proof-size-hostfunction.html
@@ -90,7 +90,7 @@
diff --git a/approved/0045-nft-deposits-asset-hub.html b/approved/0045-nft-deposits-asset-hub.html
index ede2166..187c75f 100644
--- a/approved/0045-nft-deposits-asset-hub.html
+++ b/approved/0045-nft-deposits-asset-hub.html
@@ -90,7 +90,7 @@
diff --git a/approved/0047-assignment-of-availability-chunks.html b/approved/0047-assignment-of-availability-chunks.html
index 524cee3..6a4b185 100644
--- a/approved/0047-assignment-of-availability-chunks.html
+++ b/approved/0047-assignment-of-availability-chunks.html
@@ -90,7 +90,7 @@
diff --git a/approved/0048-session-keys-runtime-api.html b/approved/0048-session-keys-runtime-api.html
index 84312b5..0a8b470 100644
--- a/approved/0048-session-keys-runtime-api.html
+++ b/approved/0048-session-keys-runtime-api.html
@@ -90,7 +90,7 @@
diff --git a/approved/0050-fellowship-salaries.html b/approved/0050-fellowship-salaries.html
index 9d31e66..614136e 100644
--- a/approved/0050-fellowship-salaries.html
+++ b/approved/0050-fellowship-salaries.html
@@ -90,7 +90,7 @@
diff --git a/approved/0056-one-transaction-per-notification.html b/approved/0056-one-transaction-per-notification.html
index da536e4..0cf9fef 100644
--- a/approved/0056-one-transaction-per-notification.html
+++ b/approved/0056-one-transaction-per-notification.html
@@ -90,7 +90,7 @@
diff --git a/approved/0059-nodes-capabilities-discovery.html b/approved/0059-nodes-capabilities-discovery.html
index 7fca8e0..8a24805 100644
--- a/approved/0059-nodes-capabilities-discovery.html
+++ b/approved/0059-nodes-capabilities-discovery.html
@@ -90,7 +90,7 @@
diff --git a/approved/0078-merkleized-metadata.html b/approved/0078-merkleized-metadata.html
index 7ad1034..fd7a291 100644
--- a/approved/0078-merkleized-metadata.html
+++ b/approved/0078-merkleized-metadata.html
@@ -90,7 +90,7 @@
diff --git a/approved/0084-general-transaction-extrinsic-format.html b/approved/0084-general-transaction-extrinsic-format.html
index be714c4..608c42e 100644
--- a/approved/0084-general-transaction-extrinsic-format.html
+++ b/approved/0084-general-transaction-extrinsic-format.html
@@ -90,7 +90,7 @@
diff --git a/approved/0091-dht-record-creation-time.html b/approved/0091-dht-record-creation-time.html
index 6b8b464..9ae0dca 100644
--- a/approved/0091-dht-record-creation-time.html
+++ b/approved/0091-dht-record-creation-time.html
@@ -90,7 +90,7 @@
diff --git a/approved/0097-unbonding_queue.html b/approved/0097-unbonding_queue.html
index 59406e6..0d31068 100644
--- a/approved/0097-unbonding_queue.html
+++ b/approved/0097-unbonding_queue.html
@@ -90,7 +90,7 @@
diff --git a/approved/0099-transaction-extension-version.html b/approved/0099-transaction-extension-version.html
index 40dc2ff..3e6fdad 100644
--- a/approved/0099-transaction-extension-version.html
+++ b/approved/0099-transaction-extension-version.html
@@ -90,7 +90,7 @@
diff --git a/approved/0100-xcm-multi-type-asset-transfer.html b/approved/0100-xcm-multi-type-asset-transfer.html
index 443a6a0..a3ce218 100644
--- a/approved/0100-xcm-multi-type-asset-transfer.html
+++ b/approved/0100-xcm-multi-type-asset-transfer.html
@@ -90,7 +90,7 @@
diff --git a/approved/0101-xcm-transact-remove-max-weight-param.html b/approved/0101-xcm-transact-remove-max-weight-param.html
index d0b000c..24ddad5 100644
--- a/approved/0101-xcm-transact-remove-max-weight-param.html
+++ b/approved/0101-xcm-transact-remove-max-weight-param.html
@@ -90,7 +90,7 @@
diff --git a/approved/0103-introduce-core-index-commitment.html b/approved/0103-introduce-core-index-commitment.html
index 2939256..74a5205 100644
--- a/approved/0103-introduce-core-index-commitment.html
+++ b/approved/0103-introduce-core-index-commitment.html
@@ -90,7 +90,7 @@
diff --git a/approved/0105-xcm-improved-fee-mechanism.html b/approved/0105-xcm-improved-fee-mechanism.html
index 5e1b665..d19053f 100644
--- a/approved/0105-xcm-improved-fee-mechanism.html
+++ b/approved/0105-xcm-improved-fee-mechanism.html
@@ -90,7 +90,7 @@
diff --git a/approved/0107-xcm-execution-hints.html b/approved/0107-xcm-execution-hints.html
index 5b8099e..186c362 100644
--- a/approved/0107-xcm-execution-hints.html
+++ b/approved/0107-xcm-execution-hints.html
@@ -90,7 +90,7 @@
diff --git a/approved/0108-xcm-remove-testnet-ids.html b/approved/0108-xcm-remove-testnet-ids.html
index 0ae9dc7..032e1b4 100644
--- a/approved/0108-xcm-remove-testnet-ids.html
+++ b/approved/0108-xcm-remove-testnet-ids.html
@@ -90,7 +90,7 @@
diff --git a/approved/0122-alias-origin-on-asset-transfers.html b/approved/0122-alias-origin-on-asset-transfers.html
index aa97dfb..2212725 100644
--- a/approved/0122-alias-origin-on-asset-transfers.html
+++ b/approved/0122-alias-origin-on-asset-transfers.html
@@ -90,7 +90,7 @@
@@ -313,7 +313,7 @@ Following the same logic, the existing DepositReserveAsset, I
-
@@ -327,7 +327,7 @@ Following the same logic, the existing DepositReserveAsset, I
-
diff --git a/proposed/0125-xcm-asset-metadata.html b/approved/0125-xcm-asset-metadata.html
similarity index 83%
rename from proposed/0125-xcm-asset-metadata.html
rename to approved/0125-xcm-asset-metadata.html
index 4048478..6e14d6f 100644
--- a/proposed/0125-xcm-asset-metadata.html
+++ b/approved/0125-xcm-asset-metadata.html
@@ -90,7 +90,7 @@
@@ -174,7 +174,7 @@
-

(source)

+

(source)

Table of Contents

  • RFC-0125: XCM Asset Metadata
@@ -402,11 +402,11 @@ This RFC proposes to use the Undefined variant of a collection iden
diff --git a/index.html b/index.html
index 3311607..837d8f7 100644
--- a/index.html
+++ b/index.html
@@ -90,7 +90,7 @@
diff --git a/introduction.html b/introduction.html
index 3311607..837d8f7 100644
--- a/introduction.html
+++ b/introduction.html
@@ -90,7 +90,7 @@
diff --git a/print.html b/print.html
index cdbf521..f161c64 100644
--- a/print.html
+++ b/print.html
@@ -91,7 +91,7 @@
@@ -571,229 +571,6 @@ mod pallet_proxy_replica {
-

(source)

-

Table of Contents

- -

RFC-0125: XCM Asset Metadata

-
- - - -
Start Date22 Oct 2024
DescriptionXCM Asset Metadata definition and a way of communicating it via XCM
AuthorsDaniel Shiposha
-
-

Summary

-

This RFC proposes a metadata format for XCM-identifiable assets (i.e., for fungible/non-fungible collections and non-fungible tokens) and a set of instructions to communicate it across chains.

-

Motivation

-

Currently, there is no way to communicate metadata of an asset (or an asset instance) via XCM.

-

The ability to query and modify the metadata is useful for two kinds of entities:

- -

Besides metadata modification, the ability to read metadata is also valuable. On-chain logic can interpret NFT metadata, i.e., the metadata may carry not only media content but also serve a utility function within a consensus system. Currently, using NFT metadata in this way is possible only within a single consensus system. This RFC proposes making it possible between different systems via XCM, so that different chains can fetch and analyze asset metadata from other chains.

-

Stakeholders

-

Runtime users, Runtime devs, Cross-chain dApps, Wallets.

-

Explanation

-

The Asset Metadata is information bound to an asset class (fungible or NFT collection) or an asset instance (an NFT). -The Asset Metadata could be represented differently on different chains (or in other consensus entities). -However, to communicate metadata between consensus entities via XCM, we need a general format so that any consensus entity can make sense of such information.

-

We can name this format "XCM Asset Metadata".

-

This RFC proposes:

-
    -
  1. -

    Using key-value pairs as XCM Asset Metadata since it is a general concept useful for both structured and unstructured data. Both key and value can be raw bytes with interpretation up to the communicating entities.

    -

The XCM Asset Metadata should be represented as a map, SCALE-encoded equivalently to a BTreeMap<Vec<u8>, Vec<u8>>.

    -

    As such, the XCM Asset Metadata types are defined as follows:

    -
    #![allow(unused)]
    -fn main() {
    -type MetadataKey = Vec<u8>;
    -type MetadataValue = Vec<u8>;
    -type MetadataMap = BTreeMap<MetadataKey, MetadataValue>;
    -}
    -
  2. -
  3. -

    Communicating only the demanded part of the metadata, not the whole metadata.

    -
      -
    • -

A consensus entity should be able to query the values of the keys it is interested in to read the metadata. -We need a set-like type to specify the keys to read, SCALE-encoded equivalently to a BTreeSet<Vec<u8>>. -Let's define that type as follows:

      -
      #![allow(unused)]
      -fn main() {
      -type MetadataKeySet = BTreeSet<MetadataKey>;
      -}
      -
    • -
    • -

      A consensus entity should be able to write the values for specified keys.

      -
    • -
    -
  4. -
  5. -

    New XCM instructions to communicate the metadata.

    -
  6. -
-

Note: the maximum lengths of MetadataKey, MetadataValue, MetadataMap, and MetadataKeySet are implementation-defined.
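To make the key-value format concrete, here is a minimal self-contained sketch using Rust's standard ordered collections (the bound `MAX_KEY_LEN` is a hypothetical example of an implementation-defined limit, not something this RFC specifies):

```rust
use std::collections::{BTreeMap, BTreeSet};

type MetadataKey = Vec<u8>;
type MetadataValue = Vec<u8>;
type MetadataMap = BTreeMap<MetadataKey, MetadataValue>;
type MetadataKeySet = BTreeSet<MetadataKey>;

// Hypothetical implementation-defined limit on the key length.
const MAX_KEY_LEN: usize = 64;

// Insert a pair only if the key respects the (hypothetical) length limit.
fn insert_checked(map: &mut MetadataMap, key: MetadataKey, value: MetadataValue) -> bool {
    if key.len() > MAX_KEY_LEN {
        return false;
    }
    map.insert(key, value);
    true
}

fn main() {
    let mut map = MetadataMap::new();
    assert!(insert_checked(&mut map, b"name".to_vec(), b"My Collection".to_vec()));

    // A key set selecting which entries a remote chain wants to read.
    let keys: MetadataKeySet = [b"name".to_vec()].into_iter().collect();
    assert!(keys.iter().all(|k| map.contains_key(k)));
}
```

Both keys and values are opaque byte strings here; any interpretation (UTF-8, JSON, SCALE-encoded structs) is up to the communicating entities, as the RFC states.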

-

New instructions

-

ReportMetadata

-

The ReportMetadata is a new instruction to query metadata information. -It can be used to query the metadata key list or the values of specific keys.

-

This instruction allows querying the metadata of:

- -

If an asset (or an asset instance) for which the query is made doesn't exist, Response::Null should be reported via the existing QueryResponse instruction.

-

The ReportMetadata can be used without origin (i.e., following the ClearOrigin instruction) since it only reads state.

-

Safety: The reporter origin should be trusted to hold the true metadata. If the reserve-based model is considered, the asset's reserve location must be viewed as the only source of truth about the metadata.

-

The use case for this instruction is when the metadata information of a foreign asset (or asset instance) is used in the logic of a consensus entity that requested it.

-
#![allow(unused)]
-fn main() {
-/// An instruction to query metadata of an asset or an asset instance.
-ReportMetadata {
-    /// The ID of an asset (a collection, fungible or nonfungible).
-    asset_id: AssetId,
-
-    /// The ID of an asset instance.
-    ///
-    /// If the value is `Undefined`, the metadata of the collection is reported.
-    instance: AssetInstance,
-
-    /// See `MetadataQueryKind` below.
-    query_kind: MetadataQueryKind,
-
-    /// The usual field for Report<something> XCM instructions.
-    ///
-    /// Information regarding the query response.
-    /// The `QueryResponseInfo` type is already defined in the XCM spec.
-    response_info: QueryResponseInfo,
-}
-}
-

Where the MetadataQueryKind is:

-
#![allow(unused)]
-fn main() {
-enum MetadataQueryKind {
-    /// Query metadata key set.
-    KeySet,
-
-    /// Query values of the specified keys.
-    Values(MetadataKeySet),
-}
-}
-

The ReportMetadata works in conjunction with the existing QueryResponse instruction. The Response type should be modified accordingly: we need to add a new AssetMetadata variant to it.

-
#![allow(unused)]
-fn main() {
-/// The struct used in the existing `QueryResponse` instruction.
-pub enum Response {
-    // ... snip, existing variants ...
-
-    /// The metadata info.
-    AssetMetadata {
-        /// The ID of an asset (a collection, fungible or nonfungible).
-        asset_id: AssetId,
-
-        /// The ID of an asset instance.
-        ///
-        /// If the value is `Undefined`, the reported metadata is related to the collection, not a token.
-        instance: AssetInstance,
-
-        /// See `MetadataResponseData` below.
-        data: MetadataResponseData,
-    }
-}
-
-pub enum MetadataResponseData {
-    /// The metadata key list to be reported
-    /// in response to the `KeySet` metadata query kind.
-    KeySet(MetadataKeySet),
-
-    /// The values of the keys that were specified in the
-    /// `Values` variant of the metadata query kind.
-    Values(MetadataMap),
-}
-}
-
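To illustrate the query flow, the following is a simplified, self-contained sketch of how a responding chain might serve a MetadataQueryKind against its stored metadata. The types are local stand-ins mirroring the definitions above, not the actual XCM implementation:

```rust
use std::collections::{BTreeMap, BTreeSet};

type MetadataKey = Vec<u8>;
type MetadataMap = BTreeMap<MetadataKey, Vec<u8>>;
type MetadataKeySet = BTreeSet<MetadataKey>;

enum MetadataQueryKind {
    /// Query the metadata key set.
    KeySet,
    /// Query the values of the specified keys.
    Values(MetadataKeySet),
}

#[derive(Debug, PartialEq)]
enum MetadataResponseData {
    KeySet(MetadataKeySet),
    Values(MetadataMap),
}

// Serve a metadata query against the locally stored map: either report all
// keys, or report the values of only the requested keys.
fn answer_query(stored: &MetadataMap, kind: &MetadataQueryKind) -> MetadataResponseData {
    match kind {
        MetadataQueryKind::KeySet => {
            MetadataResponseData::KeySet(stored.keys().cloned().collect())
        }
        MetadataQueryKind::Values(keys) => MetadataResponseData::Values(
            stored
                .iter()
                .filter(|(k, _)| keys.contains(*k))
                .map(|(k, v)| (k.clone(), v.clone()))
                .collect(),
        ),
    }
}

fn main() {
    let stored: MetadataMap =
        [(b"name".to_vec(), b"Duck".to_vec())].into_iter().collect();
    let resp = answer_query(&stored, &MetadataQueryKind::KeySet);
    assert_eq!(
        resp,
        MetadataResponseData::KeySet([b"name".to_vec()].into_iter().collect())
    );
}
```

Note how only the demanded part of the metadata crosses the chain boundary: a `Values` query returns entries solely for the keys that were asked for.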

ModifyMetadata

-

The ModifyMetadata is a new instruction to request a remote chain to modify the values of the specified keys.

-

This instruction can be used to update the metadata of a collection (fungible or nonfungible) or of an NFT.

-

The remote chain handles the modification request and may reject it based on its internal rules. -The request can only be executed or rejected in its entirety. It must not be executed partially.

-

To execute the ModifyMetadata, an origin is required so that the handling logic can authorize the metadata modification request from a known source. Since this instruction requires an origin, the assets used to cover the execution fees must be transferred in a way that preserves the origin. For instance, one can use the approach described in RFC #122 if the handling chain has configured its aliasing rules accordingly.

-

An example use case for this instruction is asking the asset's reserve location to modify the metadata, so that the original asset's metadata is updated according to the reserve location's rules.

-
#![allow(unused)]
-fn main() {
-ModifyMetadata {
-    /// The ID of an asset (a collection, fungible or nonfungible).
-    asset_id: AssetId,
-
-    /// The ID of an asset instance.
-    ///
-    /// If the value is `Undefined`, the modification request targets the collection, not a token.
-    instance: AssetInstance,
-
-    /// The map contains the keys mapped to the requested new values.
-    modification: MetadataMap,
-}
-}
-
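The all-or-nothing requirement can be sketched as follows: validate every requested key first, and only then apply the writes, so that a rejection leaves the stored metadata untouched. The whitelist rule here is a hypothetical example of a chain's internal policy, and the types are simplified stand-ins:

```rust
use std::collections::BTreeMap;

type MetadataMap = BTreeMap<Vec<u8>, Vec<u8>>;

// Hypothetical internal rule: this chain only allows modifying these keys.
fn key_allowed(key: &[u8]) -> bool {
    key == b"name" || key == b"description"
}

// Apply a modification request atomically: either every pair is written,
// or the stored map is left exactly as it was.
fn modify_metadata(stored: &mut MetadataMap, modification: MetadataMap) -> Result<(), ()> {
    // Validate first, without touching the stored state.
    if !modification.keys().all(|k| key_allowed(k)) {
        return Err(());
    }
    // Only now perform the writes.
    stored.extend(modification);
    Ok(())
}

fn main() {
    let mut stored = MetadataMap::new();
    let bad: MetadataMap =
        [(b"owner".to_vec(), b"Alice".to_vec())].into_iter().collect();
    // The request is rejected in its entirety: no partial writes remain.
    assert!(modify_metadata(&mut stored, bad).is_err());
    assert!(stored.is_empty());
}
```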

Repurposing AssetInstance::Undefined

-

As the new instructions show, this RFC reframes the purpose of the Undefined variant of the AssetInstance enum. -This RFC proposes to use the Undefined variant of a collection identified by an AssetId as a synonym of the collection itself. I.e., an asset Asset { id: <AssetId>, fun: NonFungible(AssetInstance::Undefined) } is considered an NFT representing the collection itself.

-

As a singleton non-fungible instance is barely distinguishable from its collection, this convention shouldn't cause any problems.

-

Thus, the AssetInstance docs must be updated accordingly in the implementations.
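The convention can be captured in a tiny helper; the enum below is a simplified stand-in for XCM's AssetInstance, reduced to the variants needed for the illustration:

```rust
// Simplified stand-in for XCM's `AssetInstance`.
#[derive(Debug, PartialEq)]
enum AssetInstance {
    /// Under this RFC's convention: refers to the collection itself.
    Undefined,
    /// A concrete token inside the collection.
    Index(u128),
}

// True when the instance denotes the collection rather than a token in it.
fn targets_collection(instance: &AssetInstance) -> bool {
    matches!(instance, AssetInstance::Undefined)
}

fn main() {
    assert!(targets_collection(&AssetInstance::Undefined));
    assert!(!targets_collection(&AssetInstance::Index(42)));
}
```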

-

Drawbacks

-

Regarding ergonomics, no drawbacks were noticed.

-

As for the user experience, this RFC could enable new cross-chain use cases involving asset collections and NFTs, indicating a positive impact.

-

There are no security concerns except for the ReportMetadata instruction, which implies that the source of the information must be trusted.

-

In terms of performance and privacy, there will be no changes.

-

Testing, Security, and Privacy

-

The implementations must honor the contract for the new instructions. Namely, if the instance field has the value of AssetInstance::Undefined, the metadata must relate to the asset collection and not to a non-fungible token inside it.

-

Performance, Ergonomics, and Compatibility

-

Performance

-

No significant impact.

-

Ergonomics

-

Introducing a standard metadata format and a way of communicating it is a valuable addition to the XCM format that potentially increases cross-chain interoperability without the need to form ad-hoc chain-to-chain integrations via Transact.

-

Compatibility

-

This RFC proposes new functionality, so there are no compatibility issues.

-

Prior Art and References

-

RFC: XCM Asset Metadata

- -

The original RFC draft contained additional metadata instructions. Though they could be useful, they fall outside the core logic, so this version of the RFC omits them to keep the metadata discussion focused on the essentials. Nonetheless, there is hope that metadata approval instructions might be useful in the future, so they are mentioned here.

-

You can read about the details in the original draft.

(source)

Table of Contents

-

Motivation

+

Motivation

In Substrate, runtime APIs facilitate off-chain clients in reading the state of the consensus system. However, different chains may expose different APIs for a similar query or use varying data types, such as performing custom transformations on raw data or differing AccountId types. This diversity also extends to the client side, which may require custom computations over runtime APIs in various use cases. Therefore, tools and UI developers often access storage directly and reimplement custom computations to convert data into user-friendly representations, leading to duplicated code between Rust runtime logic and UI JS/TS logic. This duplication increases workload and the potential for bugs.

Therefore, a system is needed to serve as an intermediary layer between concrete chain runtime implementations and tools/UIs, to provide a unified interface for cross-chain queries.

-

Stakeholders

+

Stakeholders

-

Explanation

+

Explanation

The overall query pattern of XCQ consists of three components:

Errors

-

Drawbacks

+

Drawbacks

Performance issues

-

Testing, Security, and Privacy

+

Testing, Security, and Privacy

-

Performance, Ergonomics, and Compatibility

-

Performance

+

Performance, Ergonomics, and Compatibility

+

Performance

It's a new functionality, which doesn't modify the existing implementations.

-

Ergonomics

+

Ergonomics

The proposal facilitates wallet and dApp developers. Developers no longer need to examine every concrete implementation to support conceptually similar operations across different chains. Additionally, they gain a more modular development experience by encapsulating custom computations over the exposed APIs in PolkaVM programs.

-

Compatibility

+

Compatibility

The proposal defines new APIs and doesn't break compatibility with existing interfaces.

-

Prior Art and References

+

Prior Art and References

There are several discussions related to the proposal, including:

-

Drawbacks

+

Drawbacks

The primary drawback is a reliance on governance for continued treasury funding of infrastructure costs for Invulnerable collators.

-

Testing, Security, and Privacy

+

Testing, Security, and Privacy

The vast majority of cases can be covered by unit testing. Integration tests should ensure that the Collator Selection UpdateOrigin, which has permission to modify the Invulnerables and desired number of Candidates, can handle updates over XCM from the system's governance location.

-

Performance, Ergonomics, and Compatibility

+

Performance, Ergonomics, and Compatibility

This proposal has very little impact on most users of Polkadot, and should improve the performance of system chains by reducing the number of missed blocks.

-

Performance

+

Performance

As chains have strict PoV size limits, care must be taken in the PoV impact of the session manager. Appropriate benchmarking and tests should ensure that conservative limits are placed on the number of Invulnerables and Candidates.

-

Ergonomics

+

Ergonomics

The primary group affected is Candidate collators, who, after implementation of this RFC, will need to compete in a bond-based election rather than a race to claim a Candidate spot.

-

Compatibility

+

Compatibility

This RFC is compatible with the existing implementation and can be handled via upgrades and migration.

-

Prior Art and References

+

Prior Art and References

Written Discussions

Unresolved Questions

None at this time.

- +

There may exist in the future system chains for which this model of collator selection is not appropriate. These chains should be evaluated on a case-by-case basis.

(source)

@@ -2171,10 +1948,10 @@ appropriate. These chains should be evaluated on a case-by-case basis.

AuthorsPierre Krieger -

Summary

+

Summary

The full nodes of the Polkadot peer-to-peer network maintain a distributed hash table (DHT), which is currently used for full nodes discovery and validators discovery purposes.

This RFC proposes to extend this DHT to be used to discover full nodes of the parachains of Polkadot.

-

Motivation

+

Motivation

The maintenance of bootnodes has long been an annoyance for everyone.

When a bootnode is newly-deployed or removed, every chain specification must be updated in order to take the update into account. This has led to various non-optimal solutions, such as pulling chain specifications from GitHub repositories. When it comes to RPC nodes, UX developers often have trouble finding up-to-date addresses of parachain RPC nodes. With the ongoing migration from RPC nodes to light clients, similar problems would happen with chain specifications as well.

@@ -2183,9 +1960,9 @@ When it comes to RPC nodes, UX developers often have trouble finding up-to-date

Because the list of bootnodes in chain specifications is so annoying to modify, the consequence is that the number of bootnodes is rather low (typically between 2 and 15). In order to better resist downtimes and DoS attacks, a better solution would be to use every node of a certain chain as potential bootnode, rather than special-casing some specific nodes.

While this RFC doesn't solve these problems for relay chains, it aims to solve them for parachains by storing the list of all the full nodes of a parachain on the relay chain DHT.

Assuming that this RFC is implemented, and that light clients are used, deploying a parachain wouldn't require more work than registering it onto the relay chain and starting the collators. There wouldn't be any need for special infrastructure nodes anymore.

-

Stakeholders

+

Stakeholders

This RFC has been opened on my own initiative because I think that this is a good technical solution to a usability problem that many people are encountering and that they don't realize can be solved.

-

Explanation

+

Explanation

The content of this RFC only applies to parachains and parachain nodes that are "Substrate-compatible". It is in no way mandatory for parachains to comply with this RFC.

Note that "Substrate-compatible" is very loosely defined as "implements the same mechanisms and networking protocols as Substrate". The author of this RFC believes that "Substrate-compatible" should be very precisely specified, but there is controversy on this topic.

While a lot of this RFC concerns the implementation of parachain nodes, it makes use of the resources of the Polkadot chain, and as such it is important to describe them in the Polkadot specification.

@@ -2222,10 +1999,10 @@ message Response {

The maximum size of a response is set to an arbitrary 16kiB. The responding side should make sure to conform to this limit. Given that fork_id is typically very small and that the only variable-length field is addrs, this is easily achieved by limiting the number of addresses.

Implementers should be aware that addrs might be very large, and are encouraged to limit the number of addrs to an implementation-defined value.
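One simple way to honor the 16 kiB budget is to keep appending addresses until the next one would overflow the limit. The sketch below is illustrative only (not the actual Substrate code), with a hypothetical fixed overhead for the non-address fields:

```rust
// Maximum response size from the RFC: 16 kiB.
const MAX_RESPONSE_SIZE: usize = 16 * 1024;

// Keep only as many encoded addresses as fit within the byte budget,
// given a fixed overhead for the rest of the response (peer_id, fork_id, ...).
fn limit_addrs(addrs: Vec<Vec<u8>>, fixed_overhead: usize) -> Vec<Vec<u8>> {
    let mut used = fixed_overhead;
    let mut kept = Vec::new();
    for addr in addrs {
        if used + addr.len() > MAX_RESPONSE_SIZE {
            break;
        }
        used += addr.len();
        kept.push(addr);
    }
    kept
}

fn main() {
    // 100 addresses of 200 bytes each cannot all fit: only a prefix is kept.
    let addrs = vec![vec![0u8; 200]; 100];
    let kept = limit_addrs(addrs, 64);
    assert!(kept.len() < 100);
    assert!(64 + kept.iter().map(|a| a.len()).sum::<usize>() <= MAX_RESPONSE_SIZE);
}
```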

-

Drawbacks

+

Drawbacks

The peer_id and addrs fields are in theory not strictly needed, as the PeerId and addresses could always be equal to the PeerId and addresses of the node being registered as the provider and serving the response. However, the Cumulus implementation currently uses two different networking stacks, one for the parachain and one for the relay chain, using two separate PeerIds and addresses, and as such the PeerId and addresses of the other networking stack must be indicated. Asking them to use only one networking stack wouldn't be feasible in a realistic time frame.

The values of the genesis_hash and fork_id fields cannot be verified by the requester and are expected to be unused at the moment. Instead, a client that desires connecting to a parachain is expected to obtain the genesis hash and fork ID of the parachain from the parachain chain specification. These fields are included in the networking protocol nonetheless in case an acceptable solution is found in the future, and in order to allow use cases such as discovering parachains in a not-strictly-trusted way.

-

Testing, Security, and Privacy

+

Testing, Security, and Privacy

Because not all nodes want to be used as bootnodes, implementers are encouraged to provide a way to disable this mechanism. However, it is very much encouraged to leave this mechanism on by default for all parachain nodes.

This mechanism doesn't add or remove any security by itself, as it relies on existing mechanisms. However, if the principle of chain specification bootnodes is entirely replaced with the mechanism described in this RFC (which is the objective), then it becomes important whether the mechanism in this RFC can be abused in order to make a parachain unreachable.

@@ -2234,22 +2011,22 @@ Furthermore, when a large number of providers (here, a provider is a bootnode) a

For this reason, an attacker can abuse this mechanism by randomly generating libp2p PeerIds until they find the 20 entries closest to the key representing the target parachain. They are then in control of the parachain bootnodes. Because the key changes periodically and isn't predictable, and assuming that the Polkadot DHT is sufficiently large, it is not realistic for an attack like this to be maintained in the long term.

Furthermore, parachain clients are expected to cache a list of known good nodes on their disk. If the mechanism described in this RFC went down, it would only prevent new nodes from accessing the parachain, while clients that have connected before would not be affected.

-

Performance, Ergonomics, and Compatibility

-

Performance

+

Performance, Ergonomics, and Compatibility

+

Performance

The DHT mechanism generally has a low overhead, especially given that publishing providers is done only every 24 hours.

Doing a Kademlia iterative query then sending a provider record shouldn't take more than around 50 kiB in total of bandwidth for the parachain bootnode.

Assuming 1000 parachain full nodes, the 20 Polkadot full nodes corresponding to a specific parachain will each receive a sudden spike of a few megabytes of networking traffic when the key rotates. Again, this is relatively negligible. If this becomes a problem, one can add a random delay before a parachain full node registers itself to be the provider of the key corresponding to BabeApi_next_epoch.

Maybe the biggest uncertainty is the traffic that the 20 Polkadot full nodes will receive from light clients that want to know the bootnodes of a parachain. Light clients are generally encouraged to cache the peers that they use between restarts, so they should only query these 20 Polkadot full nodes at their first initialization. If this ever becomes a problem, this value of 20 is an arbitrary constant that can be increased for more redundancy.

-

Ergonomics

+

Ergonomics

Irrelevant.

-

Compatibility

+

Compatibility

Irrelevant.

-

Prior Art and References

+

Prior Art and References

None.

Unresolved Questions

While it fundamentally doesn't change much to this RFC, using BabeApi_currentEpoch and BabeApi_nextEpoch might be inappropriate. I'm not familiar enough with good practices within the runtime to have an opinion here. Should it be an entirely new pallet?

- +

It is possible that in the future a client could connect to a parachain without having to rely on a trusted parachain specification.

(source)

Table of Contents

@@ -2282,9 +2059,9 @@ If this every becomes a problem, this value of 20 is an arbitrary constant that AuthorsPierre Krieger -

Summary

+

Summary

Improve the networking messages that query storage items from the remote, in order to reduce the bandwidth usage and number of round trips of light clients.

-

Motivation

+

Motivation

Clients on the Polkadot peer-to-peer network can be divided into two categories: full nodes and light clients. So-called full nodes are nodes that store the content of the chain locally on their disk, while light clients are nodes that don't. In order to access for example the balance of an account, a full node can do a disk read, while a light client needs to send a network message to a full node and wait for the full node to reply with the desired value. This reply is in the form of a Merkle proof, which makes it possible for the light client to verify the exactness of the value.

Unfortunately, this network protocol is suffering from some issues:

Once Polkadot and Kusama will have transitioned to state_version = 1, which modifies the format of the trie entries, it will be possible to generate Merkle proofs that contain only the hashes of values in the storage. Thanks to this, it is already possible to prove the existence of a key without sending its entire value (only its hash), or to prove that a value has changed or not between two blocks (by sending just their hashes). Thus, the only reason why aforementioned issues exist is because the existing networking messages don't give the possibility for the querier to query this. This is what this proposal aims at fixing.

Stakeholders

This is the continuation of https://github.com/w3f/PPPs/pull/10, which itself is the continuation of https://github.com/w3f/PPPs/pull/5.

Explanation

The protobuf schema of the networking protocol can be found here: https://github.com/paritytech/substrate/blob/5b6519a7ff4a2d3cc424d78bc4830688f3b184c0/client/network/light/src/schema/light.v1.proto

The proposal is to modify this protocol in this way:

An alternative could have been to specify the child_trie_info for e
 Also note that child tries aren't considered as descendants of the main trie when it comes to the includeDescendants flag. In other words, if the request concerns the main trie, no content coming from child tries is ever sent back.

This protocol keeps the same maximum response size limit as currently exists (16 MiB). It is not possible for the querier to know in advance whether its query will lead to a reply that exceeds the maximum size. If the reply is too large, the replier should send back only a limited number (but at least one) of requested items in the proof. The querier should then send additional requests for the rest of the items. A response containing none of the requested items is invalid.
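The truncate-and-retry flow described above can be sketched as follows. All names and the per-reply item limit are illustrative: the real protocol truncates by response size (16 MiB), not by item count.

```rust
// Stand-in for the 16 MiB response size limit (counted in items here).
const MAX_RESPONSE_ITEMS: usize = 2;

// Toy server: must return at least one requested item, but may truncate.
fn serve(keys: &[String]) -> Vec<(String, u64)> {
    keys.iter()
        .take(MAX_RESPONSE_ITEMS)
        .map(|k| (k.clone(), k.len() as u64)) // dummy values
        .collect()
}

// Toy querier: keeps re-requesting the items missing from truncated replies.
fn query_all(mut pending: Vec<String>) -> Vec<(String, u64)> {
    let mut results = Vec::new();
    while !pending.is_empty() {
        let reply = serve(&pending);
        // A response containing none of the requested items is invalid.
        assert!(!reply.is_empty());
        pending.drain(..reply.len());
        results.extend(reply);
    }
    results
}

fn main() {
    let keys: Vec<String> = (0..5).map(|i| format!("key{i}")).collect();
    let got = query_all(keys);
    assert_eq!(got.len(), 5); // all items retrieved over several round trips
}
```

With 5 requested keys and a 2-item limit, the querier needs three round trips.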

The server is allowed to silently discard some keys of the request if it judges that the number of requested keys is too high. This is in line with the fact that the server might truncate the response.

Drawbacks

This proposal doesn't handle one specific situation: what if a proof containing a single specific item would exceed the response size limit? For example, if the response size limit were 1 MiB, querying the runtime code (which is typically 1.0 to 1.5 MiB) would be impossible, as no proof smaller than 1 MiB can be generated for it. The response size limit is currently 16 MiB, meaning that no single storage item may exceed 16 MiB.

Unfortunately, because it's impossible to verify a Merkle proof before having received it entirely, parsing the proof in a streaming way is also not possible.

A way to solve this issue would be to Merkle-ize large storage items, so that a proof could include only a portion of a large storage item. Since this would require a change to the trie format, it is not realistically feasible in a short time frame.

Testing, Security, and Privacy

The main security consideration concerns the size of replies and the resources necessary to generate them. It is for example easily possible to ask for all keys and values of the chain, which would take a very long time to generate. Since responses to this networking protocol have a maximum size, the replier should truncate proofs that would lead to the response being too large. Note that it is already possible to send a query that would lead to a very large reply with the existing network protocol. The only thing that this proposal changes is that it would make it less complicated to perform such an attack.

Implementers of the replier side should be careful to detect early on when a reply would exceed the maximum reply size, rather than unconditionally generating a reply, as this could consume a very large amount of CPU, disk I/O, and memory. Existing implementations might currently be accidentally protected from such an attack thanks to the fact that requests have a maximum size, and thus that the list of keys in the query is bounded. After this proposal, this accidental protection would no longer exist.

Malicious server nodes might truncate Merkle proofs even when they don't strictly need to, and it is not possible for the client to (easily) detect this situation. However, malicious server nodes can already do undesirable things such as throttle down their upload bandwidth or simply not respond. There is no need to handle unnecessarily truncated Merkle proofs any differently than a server simply not answering the request.

Performance, Ergonomics, and Compatibility

Performance

It is unclear to the author of the RFC what the performance implications are. Servers are supposed to have limits to the amount of resources they use to respond to requests, and as such the worst that can happen is that light client requests become a bit slower than they currently are.

Ergonomics

Irrelevant.

Compatibility

The prior networking protocol is maintained for now. The older version of this protocol could be removed in the distant future.

Prior Art and References

None. This RFC is a clean-up of an existing mechanism.

Unresolved Questions

None


The current networking protocol could be deprecated in the distant future. Additionally, the current "state requests" protocol (used for warp syncing) could also be deprecated in favor of this one.



Authors: Jonas Gehrlein

Summary


The Polkadot UC will generate revenue from the sale of available Coretime. The question then arises: how should we handle these revenues? Broadly, there are two reasonable paths – burning the revenue and thereby removing it from total issuance, or diverting it to the Treasury. This Request for Comment (RFC) presents arguments favoring burning as the preferred mechanism for handling revenues from Coretime sales.

Motivation

How to handle the revenue accrued from Coretime sales is an important economic question that influences the value of DOT and should be properly discussed before deciding on either option. Now is the best time to start this discussion.

Stakeholders

Polkadot DOT token holders.

Explanation

This RFC discusses the potential benefits of burning the revenue accrued from Coretime sales instead of diverting it to the Treasury. The arguments for doing so follow.

It's in the interest of the Polkadot community to have a consistent and predictable Treasury income, because volatility in the inflow can be damaging, especially in situations when it is insufficient. As such, this RFC operates under the presumption of a steady and sustainable Treasury income flow, which is crucial for the Polkadot community's stability. The assurance of a predictable Treasury income, as outlined in a prior discussion here, or through other equally effective measures, serves as a baseline assumption for this argument.

Consequently, we need not concern ourselves with this particular issue here. This naturally begs the question - why should we introduce additional volatility to the Treasury by aligning it with the variable Coretime sales? It's worth noting that Coretime revenues often exhibit an inverse relationship with periods when Treasury spending should ideally be ramped up. During periods of low Coretime utilization (indicated by lower revenue), Treasury should spend more on projects and endeavours to increase the demand for Coretime. This pattern underscores that Coretime sales, by their very nature, are an inconsistent and unpredictable source of funding for the Treasury. Given the importance of maintaining a steady and predictable inflow, it's unnecessary to rely on another volatile mechanism. Some might argue that we could have both: a steady inflow (from inflation) and some added bonus from Coretime sales, but burning the revenue would offer further benefits as described below.

Authors: Joe Petrowski

Summary


Since the introduction of the Collectives parachain, many groups have expressed interest in forming new -- or migrating existing groups into -- on-chain collectives. While adding a new collective is relatively simple from a technical standpoint, the Fellowship will need to merge new pallets into the Collectives parachain for each new collective. This RFC proposes a means for the network to ratify a new collective, thus instructing the Fellowship to instate it in the runtime.

Motivation

Many groups have expressed interest in representing collectives on-chain. Some of these include:

  • Parachain technical fellowship (new)
Any group should have a path to having its collective accepted on-chain as part of the protocol. Acceptance would instruct the Fellowship to include the new collective with a given initial configuration into the runtime. However, the network, not the Fellowship, should ultimately decide which collectives are in the interest of the network.

Stakeholders

    • Polkadot stakeholders who would like to organize on-chain.
    • Technical Fellowship, in its role of maintaining system runtimes.
Explanation

    The group that wishes to operate an on-chain collective should publish the following information:

• Charter, including the collective's mandate and how it benefits Polkadot. This would be similar to the Fellowship Manifesto.

The Fellowship would help them identify the pallet indices associated with a given collective, and whether or not the Fellowship member agrees with removal.

      Collective removal may also come with other governance calls, for example voiding any scheduled Treasury spends that would fund the given collective.

Drawbacks

      Passing a Root origin referendum is slow. However, given the network's investment (in terms of code maintenance and salaries) in a new collective, this is an appropriate step.

Testing, Security, and Privacy

      No impacts.

Performance, Ergonomics, and Compatibility

Generally, all new collectives will be in the Collectives parachain. Thus, performance impacts should strictly be limited to this parachain and not affect others. As the majority of logic for collectives is generalized and reusable, we expect most collectives to be instances of similar subsets of modules. That is, new collectives should generally be compatible with UIs and other services that provide collective-related functionality, with few modifications needed to support new ones.

Prior Art and References

      The launch of the Technical Fellowship, see the initial forum post.

      Unresolved Questions


Authors: Oliver Tale-Yazdi

      Summary


Introduces breaking changes to the Core runtime API by letting Core::initialize_block return an enum. The version of Core is bumped from 4 to 5.

Motivation

The main feature that motivates this RFC is Multi-Block-Migrations (MBMs); these make it possible to split a migration over multiple blocks.
Further, it would be nice not to hinder the possibility of implementing a new hook, poll, that runs at the beginning of the block when there are no MBMs and has access to AllPalletsWithSystem. This hook can then be used to replace the use of on_initialize and on_finalize for non-deadline-critical logic.
In a similar fashion, it should not hinder the future addition of a System::PostInherents callback that always runs after all inherents were applied.

Stakeholders

      • Substrate Maintainers: They have to implement this, including tests, audit and maintenance burden.
      • Polkadot Parachain Teams: They have to adapt to the breaking changes but then eventually have multi-block migrations available.
Explanation

      Core::initialize_block

      This runtime API function is changed from returning () to ExtrinsicInclusionMode:

fn initialize_block(header: &<Block as BlockT>::Header) -> ExtrinsicInclusionMode;
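A sketch of the returned enum and how a block-builder might react to it. OnlyInherents is the variant named by this RFC; the name of the default variant (AllExtrinsics) and the toy types below are assumptions made for illustration.

```rust
// Sketch of the enum returned by initialize_block.
#[derive(Clone, Copy, PartialEq, Debug)]
enum ExtrinsicInclusionMode {
    AllExtrinsics, // normal operation (name assumed here)
    OnlyInherents, // lock-down mode, e.g. while an MBM is ongoing
}

// Toy extrinsic type for demonstration purposes.
#[derive(PartialEq, Debug)]
enum Extrinsic {
    Inherent(u32),
    Transaction(u32),
}

// The block-builder skips user transactions when the runtime asks for
// lock-down mode, so nothing can call into un-migrated storage.
fn build_block(mode: ExtrinsicInclusionMode, pool: Vec<Extrinsic>) -> Vec<Extrinsic> {
    pool.into_iter()
        .filter(|xt| {
            mode == ExtrinsicInclusionMode::AllExtrinsics
                || matches!(xt, Extrinsic::Inherent(_))
        })
        .collect()
}

fn main() {
    let pool = vec![
        Extrinsic::Inherent(0),
        Extrinsic::Transaction(1),
        Extrinsic::Inherent(2),
    ];
    let block = build_block(ExtrinsicInclusionMode::OnlyInherents, pool);
    assert_eq!(block, vec![Extrinsic::Inherent(0), Extrinsic::Inherent(2)]);
}
```

This also shows how the new logic can be tested: check that the block-builder drops transactions exactly when OnlyInherents is returned.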
    • 1. Multi-Block-Migrations: The runtime is being put into lock-down mode for the duration of the migration process by returning OnlyInherents from initialize_block. This ensures that no user provided transaction can interfere with the migration process. It is absolutely necessary to ensure this, otherwise a transaction could call into un-migrated storage and violate storage invariants.

      2. poll is possible by using apply_extrinsic as entry-point and not hindered by this approach. It would not be possible to use a pallet inherent like System::last_inherent to achieve this for two reasons: First is that pallets do not have access to AllPalletsWithSystem which is required to invoke the poll hook on all pallets. Second is that the runtime does currently not enforce an order of inherents.

      3. System::PostInherents can be done in the same manner as poll.

Drawbacks

      The previous drawback of cementing the order of inherents has been addressed and removed by redesigning the approach. No further drawbacks have been identified thus far.

Testing, Security, and Privacy

      The new logic of initialize_block can be tested by checking that the block-builder will skip transactions when OnlyInherents is returned.

      Security: n/a

      Privacy: n/a

Performance, Ergonomics, and Compatibility

Performance

      The performance overhead is minimal in the sense that no clutter was added after fulfilling the requirements. The only performance difference is that initialize_block also returns an enum that needs to be passed through the WASM boundary. This should be negligible.

Ergonomics

      The new interface allows for more extensible runtime logic. In the future, this will be utilized for multi-block-migrations which should be a huge ergonomic advantage for parachain developers.

Compatibility

      The advice here is OPTIONAL and outside of the RFC. To not degrade user experience, it is recommended to ensure that an updated node can still import historic blocks.

Prior Art and References

      The RFC is currently being implemented in polkadot-sdk#1781 (formerly substrate#14275). Related issues and merge requests:

transactions => renamed to ExtrinsicInclusionMode

        Is post_inherents more consistent instead of last_inherent? Then we should change it.
        => renamed to last_inherent


        The long-term future here is to move the block building logic into the runtime. Currently there is a tight dance between the block author and the runtime; the author has to call into different runtime functions in quick succession and exact order. Any misstep causes the block to be invalid.
        This can be unified and simplified by moving both parts into the runtime.



Authors: Bryan Chen

        Summary


        This RFC proposes a set of changes to the parachain lock mechanism. The goal is to allow a parachain manager to self-service the parachain without root track governance action.

This is achieved by removing the existing lock conditions and only locking a parachain when:

• A parachain manager explicitly locks the parachain
• OR a parachain block is produced successfully
Motivation

The manager of a parachain has permission to manage the parachain when the parachain is unlocked. Parachains are by default locked when onboarded to a slot. This requires that the parachain wasm/genesis be valid; otherwise a root track governance action on the relay chain is required to update the parachain.

        The current reliance on root track governance actions for managing parachains can be time-consuming and burdensome. This RFC aims to address this technical difficulty by allowing parachain managers to take self-service actions, rather than relying on general public voting.

        The key scenarios this RFC seeks to improve are:


• A parachain SHOULD be locked when it has successfully produced its first block.
      • A parachain manager MUST be able to perform lease swap without having a running parachain.
Stakeholders

      • Parachain teams
      • Parachain users
Explanation

      Status quo

A parachain can either be locked or unlocked [3]. With the parachain locked, the parachain manager does not have any privileges. With the parachain unlocked, the parachain manager can perform the following actions with the paras_registrar pallet:


• The parachain never produced a block, including from expired leases.
• The parachain manager never explicitly locked the parachain.
Drawbacks

Parachain locks are designed in such a way as to ensure the decentralization of parachains. If parachains are not locked when they should be, it could introduce centralization risk for new parachains.

For example, one possible scenario is that a collective may decide to launch a parachain fully decentralized. However, if the parachain is unable to produce a block, the parachain manager will be able to replace the wasm and genesis without the consent of the collective.

This risk is considered tolerable, as it requires the wasm/genesis to be invalid in the first place. It is not yet practically possible to develop a parachain without any centralization risk.

Another case is that a parachain team may decide to use a crowdloan to help secure a slot lease. Previously, creating a crowdloan would lock the parachain. This means crowdloan participants knew exactly the genesis of the parachain for the crowdloan they were participating in. However, this actually provides little assurance to crowdloan participants. For example, if the genesis block is determined before a crowdloan is started, it is not possible to have an on-chain mechanism to enforce reward distributions for crowdloan participants. They always have to rely on the parachain team to fulfill the promise after the parachain is live.

      Existing operational parachains will not be impacted.

Testing, Security, and Privacy

      The implementation of this RFC will be tested on testnets (Rococo and Westend) first.

An audit may be required to ensure the implementation does not introduce unwanted side effects.

There are no privacy-related concerns.

Performance

      This RFC should not introduce any performance impact.

Ergonomics

This RFC should improve the developer experience for new and existing parachain teams.

Compatibility

This RFC is fully compatible with existing interfaces.

Prior Art and References

      • Parachain Slot Extension Story: https://github.com/paritytech/polkadot/issues/4758
      • Allow parachain to renew lease without actually run another parachain: https://github.com/paritytech/polkadot/issues/6685

      Unresolved Questions

      None at this stage.


This RFC is only intended to be a short-term solution. Slots will be removed in the future, and the lock mechanism is likely to be replaced with a more generalized parachain management & recovery system. Therefore, long-term impacts of this RFC are not considered.

      1

https://github.com/paritytech/cumulus/issues/377

Authors: @brenzi for Encointer Association, 8000 Zurich, Switzerland
Summary

Encointer has been a system chain on Kusama since January 2022 and has been developed and maintained by the Encointer Association. This RFC proposes to treat Encointer like any other system chain and include it in the fellowship repo with this PR.

Motivation

      Encointer does not seek to be in control of its runtime repository. As a decentralized system, the fellowship has a more suitable structure to maintain a system chain runtime repo than the Encointer association does.

      Also, Encointer aims to update its runtime in batches with other system chains in order to have consistency for interoperability across system chains.

Stakeholders

      • Fellowship: Will continue to take upon them the review and auditing work for the Encointer runtime, but the process is streamlined with other system chains and therefore less time-consuming compared to the separate repo and CI process we currently have.
      • Kusama Network: Tokenholders can easily see the changes of all system chains in one place.
• Encointer Association: Further decentralization of Encointer Network necessities, like devops.
      • Encointer devs: Being able to work directly in the Fellowship runtimes repo to streamline and synergize with other developers.
Explanation

      Our PR has all details about our runtime and how we would move it into the fellowship repo.

      Noteworthy: All Encointer-specific pallets will still be located in encointer's repo for the time being: https://github.com/encointer/pallets

It will still be the duty of the Encointer team to keep its runtime up to date and provide adequate test fixtures. Frequent dependency bumps with Polkadot releases would be beneficial for interoperability and could be streamlined with other system chains, but that will not be a duty of the fellowship. Whenever possible, all system chains could be upgraded jointly (including Encointer) with a batch referendum.

      @@ -2811,17 +2588,17 @@ This can be unified and simplified by moving both parts into the runtime.

• Encointer will publish all its crates on crates.io
    • Encointer does not carry out external auditing of its runtime nor pallets. It would be beneficial but not a requirement from our side if Encointer could join the auditing process of other system chains.
Drawbacks

Unlike all other system chains, development and maintenance of the Encointer Network is mainly financed by the KSM Treasury, and possibly the DOT Treasury in the future. Encointer is dedicated to maintaining its network and runtime code for as long as possible, but there is a dependency on funding which is not in the hands of the fellowship. The only risk in the context of funding, however, is that the Encointer runtime will see less frequent updates if there is less funding.

Testing, Security, and Privacy

    No changes to the existing system are proposed. Only changes to how maintenance is organized.

Performance, Ergonomics, and Compatibility

    No changes

Prior Art and References

    Existing Encointer runtime repo

    Unresolved Questions

    None identified


    More info on Encointer: encointer.org



Authors: Joe Petrowski, Gavin Wood

    Summary


    The Relay Chain contains most of the core logic for the Polkadot network. While this was necessary prior to the launch of parachains and development of XCM, most of this logic can exist in parachains. This is a proposal to migrate several subsystems into system parachains.

Motivation

Polkadot's scaling approach allows many distinct state machines (known generally as parachains) to operate with common guarantees about the validity and security of their state transitions. Polkadot provides these common guarantees by executing the state transitions on a strict subset (a backing group) of the validator set.

    By minimising state transition logic on the Relay Chain by migrating it into "system chains" -- a set of parachains that, with the Relay Chain, make up the Polkadot protocol -- the Polkadot Ubiquitous Computer can maximise its primary offering: secure blockspace.

Stakeholders

    • Parachains that interact with affected logic on the Relay Chain;
    • Core protocol and XCM format developers;
    • Tooling, block explorer, and UI developers.
Explanation

    The following pallets and subsystems are good candidates to migrate from the Relay Chain:

    • Identity
It would be sensible to rehearse a migration.

      Staking is the subsystem most constrained by PoV limits. Ensuring that elections, payouts, session changes, offences/slashes, etc. work in a parachain on Kusama -- with its larger validator set -- will give confidence to the chain's robustness on Polkadot.

Drawbacks

These subsystems will have fewer resources on a core than they do on the Relay Chain. Staking in particular may require some optimizations to deal with these constraints.

Testing, Security, and Privacy

Standard audit/review requirements apply. More powerful multi-chain integration test tools would be useful in development.

Performance, Ergonomics, and Compatibility

      Describe the impact of the proposal on the exposed functionality of Polkadot.

Performance

      This is an optimization. The removal of public/user transactions on the Relay Chain ensures that its primary resources are allocated to system performance.

Ergonomics

      This proposal alters very little for coretime users (e.g. parachain developers). Application developers will need to interact with multiple chains, making ergonomic light client tools particularly important for application development.

Existing parachains that interact with these subsystems will need to configure their runtimes to recognize the new locations in the network.

Compatibility

      Implementing this proposal will require some changes to pallet APIs and/or a pub-sub protocol. Application developers will need to interact with multiple chains in the network.

Prior Art and References

      • Transactionless Relay-chain
      • Moving Staking off the Relay Chain

        There remain some implementation questions, like how to use balances for both Staking and Governance. See, for example, Moving Staking off the Relay Chain.


        Ideally the Relay Chain becomes transactionless, such that not even balances are represented there. With Staking and Governance off the Relay Chain, this is not an unreasonable next step.

        With Identity on Polkadot, Kusama may opt to drop its People Chain.

Authors: Vedhavyas Singareddi

        Summary


At the moment, we have the state_version field on RuntimeVersion that derives which state version is used for the storage. We have a use case where we want the extrinsics root to be derived using StateVersion::V1. Without defining a new field under RuntimeVersion, we would like to propose adding system_version, which can be used to derive both the storage and extrinsic state version.

Motivation

Since the extrinsic state version is always StateVersion::V0, deriving the extrinsic root requires the full extrinsic data. This is problematic when we need to verify the extrinsics root and the extrinsics are large. This problem is further explored in https://github.com/polkadot-fellows/RFCs/issues/19

One of the main challenges here is that some extrinsics could be big enough that this cannot be included in the consensus block due to the block's weight restriction. If the extrinsic root is derived using StateVersion::V1, then we do not need to pass the full extrinsic data but rather, at maximum, 32 bytes of extrinsic data.
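The difference between the two state versions for extrinsic-root material can be sketched as follows. This is a toy model: the real trie uses 32-byte Blake2 hashes (an 8-byte stand-in hash is used here), and the 32-byte cutoff applies to trie values.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Stand-in hash; the real implementation uses Blake2 with 32-byte output.
fn h(data: &[u8]) -> u64 {
    let mut s = DefaultHasher::new();
    data.hash(&mut s);
    s.finish()
}

// Under StateVersion::V0 a trie leaf holds the full value.
fn leaf_v0(extrinsic: &[u8]) -> Vec<u8> {
    extrinsic.to_vec()
}

// Under StateVersion::V1, values longer than 32 bytes are replaced by
// their hash, so a proof needs at most the hash, not the full data.
fn leaf_v1(extrinsic: &[u8]) -> Vec<u8> {
    if extrinsic.len() > 32 {
        h(extrinsic).to_le_bytes().to_vec()
    } else {
        extrinsic.to_vec()
    }
}

fn main() {
    let big_extrinsic = vec![7u8; 1_000_000]; // ~1 MB extrinsic
    assert_eq!(leaf_v0(&big_extrinsic).len(), 1_000_000); // full data needed
    assert!(leaf_v1(&big_extrinsic).len() <= 32); // at most a hash
}
```

Verifying the extrinsics root against a large extrinsic thus needs only its hash under V1.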

Stakeholders

        • Technical Fellowship, in its role of maintaining system runtimes.
Explanation

In order to use a project-specific StateVersion for extrinsic roots, we proposed an implementation that introduced a parameter to frame_system::Config, but that unfortunately did not feel correct. Instead, runtimes declare the version on RuntimeVersion directly:

pub const VERSION: RuntimeVersion = RuntimeVersion {
    system_version: 1,
};

Drawbacks

There should be no drawbacks, as it would replace state_version with the same behavior, but documentation should be updated so that chains know which system_version to use.

Testing, Security, and Privacy

This should not have any impact on security or privacy.

Performance, Ergonomics, and Compatibility

These changes should be compatible with existing chains if they use their state_version value for system_version.

Performance

I do not believe there is any performance hit with this change.

Ergonomics

This does not break any exposed APIs.

Compatibility

This change should not break any compatibility.

Prior Art and References

We proposed introducing a similar change by introducing a parameter to frame_system::Config, but did not feel that was the correct way of introducing this change.

Unresolved Questions

I do not have any specific questions about this change at the moment.


IMO, this change is pretty self-contained and there won't be any future work necessary.




Authors: Sebastian Kunert

Summary


This RFC proposes a new host function for parachains, storage_proof_size. It shall provide the size of the currently recorded storage proof to the runtime. Runtime authors can use the proof size to improve block utilization by retroactively reclaiming unused storage weight.
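The reclaim idea can be sketched as follows. All names here are illustrative; storage_proof_size is modeled as a method on a toy block-builder rather than the actual host function.

```rust
// Toy model of retroactive proof-size weight reclaim.
struct BlockBuilder {
    proof_size_used: u64, // what the proof recorder has actually accumulated
    weight_charged: u64,  // proof-size weight charged to the block so far
}

impl BlockBuilder {
    // Stand-in for the proposed host function: size of the recorded proof.
    fn storage_proof_size(&self) -> u64 {
        self.proof_size_used
    }

    // Charge the benchmarked (worst-case) proof size, execute, then keep
    // only the measured growth; the surplus is reclaimed for later
    // extrinsics. (Real logic would also handle benchmark underestimates.)
    fn apply_extrinsic(&mut self, benchmarked: u64, actual_growth: u64) {
        let before = self.storage_proof_size();
        self.proof_size_used += actual_growth; // execution records trie nodes
        let measured = self.storage_proof_size() - before;
        self.weight_charged += measured.min(benchmarked);
    }
}

fn main() {
    let mut b = BlockBuilder { proof_size_used: 0, weight_charged: 0 };
    b.apply_extrinsic(10_000, 1_200); // benchmark assumed the worst case
    assert_eq!(b.weight_charged, 1_200); // 8_800 bytes of weight reclaimed
}
```

The reclaimed weight lets the block-builder fit more extrinsics than the worst-case benchmarks alone would allow.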

Motivation

The number of extrinsics that are included in a parachain block is limited by two constraints: execution time and proof size. FRAME weights cover both concepts, and block-builders use them to decide how many extrinsics to include in a block. However, these weights are calculated ahead of time by benchmarking on a machine with reference hardware. The execution-time properties of the state-trie and its storage items are unknown at benchmarking time. Therefore, we make some assumptions about the state-trie:

Transact Over Bridge

Drawbacks

+

Drawbacks

In terms of ergonomics and user experience, this support for combining an asset transfer with a subsequent action (like Transact) is a net positive.

In terms of performance and privacy, this is neutral, with no changes.

In terms of security, the feature by itself is also neutral, because it allows preserve_origin: false usage for operating with no extra trust assumptions. When they want to support preserving the origin, chains need to configure secure origin-aliasing filters. The one suggested in this RFC should be the right choice for the majority of chains, but each chain will ultimately choose depending on its business model and logic (e.g. a chain that does not plan to integrate with Asset Hub). It is up to the individual chains to configure accordingly.

Testing, Security, and Privacy

Barriers should now allow AliasOrigin, DescendOrigin or ClearOrigin.
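To make the barrier change concrete, here is a minimal, hypothetical sketch, not the actual polkadot-sdk Barrier API: a check that accepts any of the three origin-handling instructions where previously only ClearOrigin would pass. The enum and function names are simplified stand-ins introduced for illustration.

```rust
use std::collections::BTreeSet;

// Simplified stand-ins for XCM instructions; the real types live in the XCM crate.
#[derive(Debug, Clone, PartialEq, Eq, PartialOrd, Ord)]
enum Instruction {
    ClearOrigin,
    DescendOrigin,
    AliasOrigin,
    Transact,
    DepositAsset,
}

// A barrier-style check: the origin-mutating instruction in the program's
// prelude must be one of the allowed variants.
fn origin_instruction_allowed(instr: &Instruction) -> bool {
    let allowed: BTreeSet<Instruction> = [
        Instruction::ClearOrigin,
        Instruction::DescendOrigin,
        Instruction::AliasOrigin,
    ]
    .into();
    allowed.contains(instr)
}

fn main() {
    assert!(origin_instruction_allowed(&Instruction::AliasOrigin));
    assert!(!origin_instruction_allowed(&Instruction::Transact));
}
```

In a real barrier, this check would run while scanning the leading instructions of an incoming program; the point is only that AliasOrigin and DescendOrigin become acceptable alternatives to ClearOrigin.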

Normally, XCM program builders should audit their programs and eliminate assumptions of "no origin" on the remote side of this instruction. In this case, the InitiateAssetsTransfer instruction has not been released yet; it will be part of XCMv5, and we can make this change part of the same XCMv5 release so that there is no possibility of someone in the wild having built XCM programs that rely on those wrong assumptions.

The working assumption going forward is that the origin on the remote side can either be cleared or it can be the local origin's reanchored location. This assumption is in line with the current behavior of remote XCM programs sent over using pallet_xcm::send.

The existing DepositReserveAsset, InitiateReserveWithdraw and InitiateTeleport cross-chain asset transfer instructions will not attempt to do origin aliasing and will always clear the origin, same as before, for compatibility reasons.

Performance, Ergonomics, and Compatibility

Performance

No impact.

Ergonomics

Improves ergonomics by allowing the local origin to operate on the remote chain even when the XCM program includes an asset transfer.

Compatibility

At the executor-level this change is backwards and forwards compatible. Both types of programs can be executed on new and old versions of XCM with no changes in behavior.

The new version of the InitiateAssetsTransfer instruction acts the same as before when used with preserve_origin: false.

To use the new capabilities, the XCM builder has to verify that the involved chains have the required origin-aliasing filters configured, and use a new version of the Barriers that accepts AliasOrigin as an allowed alternative to ClearOrigin.

For compatibility reasons, this RFC proposes this mechanism be added as an enhancement to the as-yet-unreleased InitiateAssetsTransfer instruction, thus eliminating the possibility of XCM logic breakages in the wild. Following the same logic, the existing DepositReserveAsset, InitiateReserveWithdraw and InitiateTeleport cross-chain asset transfer instructions will not attempt to do origin aliasing and will always clear the origin, same as before.

Any one of the DepositReserveAsset, InitiateReserveWithdraw and InitiateTeleport instructions can be replaced with an InitiateAssetsTransfer instruction, with or without origin aliasing, thus providing a clean and clear upgrade path for opting in to this new feature.

Prior Art and References

Unresolved Questions

None


RFC-0125: XCM Asset Metadata

Start Date: 22 Oct 2024
Description: XCM Asset Metadata definition and a way of communicating it via XCM
Authors: Daniel Shiposha

Summary

This RFC proposes a metadata format for XCM-identifiable assets (i.e., for fungible/non-fungible collections and non-fungible tokens) and a set of instructions to communicate it across chains.

Motivation

Currently, there is no way to communicate the metadata of an asset (or an asset instance) via XCM.

The ability to query and modify the metadata is useful for two kinds of entities:

Besides metadata modification, the ability to read it is also valuable. On-chain logic can interpret the NFT metadata, i.e., the metadata could have not only a media meaning but also a utility function within a consensus system. Currently, such a way of using NFT metadata is possible only within one consensus system. This RFC proposes making it possible between different systems via XCM, so different chains can fetch and analyze asset metadata from other chains.

Stakeholders

Runtime users, Runtime devs, Cross-chain dApps, Wallets.

Explanation

The Asset Metadata is information bound to an asset class (a fungible asset or an NFT collection) or an asset instance (an NFT). The Asset Metadata could be represented differently on different chains (or in other consensus entities). However, to communicate metadata between consensus entities via XCM, we need a general format so that any consensus entity can make sense of such information.

We can name this format "XCM Asset Metadata".

This RFC proposes:

1. Using key-value pairs as XCM Asset Metadata, since it is a general concept useful for both structured and unstructured data. Both the key and the value can be raw bytes, with interpretation up to the communicating entities.

   The XCM Asset Metadata should be represented as a map, a SCALE-encoded equivalent of BTreeMap<Vec<u8>, Vec<u8>>.

   As such, the XCM Asset Metadata types are defined as follows:

       type MetadataKey = Vec<u8>;
       type MetadataValue = Vec<u8>;
       type MetadataMap = BTreeMap<MetadataKey, MetadataValue>;

2. Communicating only the demanded part of the metadata, not the whole metadata.

   - A consensus entity should be able to query the values of interested keys to read the metadata. We need a set-like type to specify the keys to read, a SCALE-encoded equivalent of BTreeSet<Vec<u8>>. Let's define that type as follows:

         type MetadataKeySet = BTreeSet<MetadataKey>;

   - A consensus entity should be able to write the values for specified keys.

3. New XCM instructions to communicate the metadata.

Note: the maximum lengths of MetadataKey, MetadataValue, MetadataMap, and MetadataKeySet are implementation-defined.
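These type definitions can be exercised as a self-contained sketch using only the Rust standard library. The select_values helper below is illustrative, not part of the RFC; it shows how a key set could pick out the demanded subset of a stored metadata map.

```rust
use std::collections::{BTreeMap, BTreeSet};

type MetadataKey = Vec<u8>;
type MetadataValue = Vec<u8>;
type MetadataMap = BTreeMap<MetadataKey, MetadataValue>;
type MetadataKeySet = BTreeSet<MetadataKey>;

// Illustrative helper: pick only the demanded keys out of the stored metadata.
fn select_values(stored: &MetadataMap, keys: &MetadataKeySet) -> MetadataMap {
    stored
        .iter()
        .filter(|(k, _)| keys.contains(*k))
        .map(|(k, v)| (k.clone(), v.clone()))
        .collect()
}

fn main() {
    let mut stored = MetadataMap::new();
    stored.insert(b"name".to_vec(), b"My NFT".to_vec());
    stored.insert(b"uri".to_vec(), b"ipfs://...".to_vec());

    // Demand only the "name" key.
    let mut wanted = MetadataKeySet::new();
    wanted.insert(b"name".to_vec());

    let reported = select_values(&stored, &wanted);
    assert_eq!(reported.len(), 1);
    assert_eq!(reported.get(&b"name".to_vec()), Some(&b"My NFT".to_vec()));
}
```

On-chain, the same maps would additionally be SCALE-encoded for transport and bounded in length, per the implementation-defined limits noted above.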

New instructions

ReportMetadata

The ReportMetadata is a new instruction to query metadata information. It can be used to query the metadata key list or to query the values of interested keys.

This instruction allows querying the metadata of a collection (fungible or nonfungible) or of an NFT.

If an asset (or an asset instance) for which the query is made doesn't exist, Response::Null should be reported via the existing QueryResponse instruction.

The ReportMetadata can be used without an origin (i.e., following the ClearOrigin instruction) since it only reads state.

Safety: The reporter origin should be trusted to hold the true metadata. If the reserve-based model is considered, the asset's reserve location must be viewed as the only source of truth about the metadata.

The use case for this instruction is when the metadata of a foreign asset (or asset instance) is used in the logic of the consensus entity that requested it.

    /// An instruction to query metadata of an asset or an asset instance.
    ReportMetadata {
        /// The ID of an asset (a collection, fungible or nonfungible).
        asset_id: AssetId,

        /// The ID of an asset instance.
        ///
        /// If the value is `Undefined`, the metadata of the collection is reported.
        instance: AssetInstance,

        /// See `MetadataQueryKind` below.
        query_kind: MetadataQueryKind,

        /// The usual field for Report<something> XCM instructions.
        ///
        /// Information regarding the query response.
        /// The `QueryResponseInfo` type is already defined in the XCM spec.
        response_info: QueryResponseInfo,
    }

Where the MetadataQueryKind is:

    enum MetadataQueryKind {
        /// Query metadata key set.
        KeySet,

        /// Query values of the specified keys.
        Values(MetadataKeySet),
    }

The ReportMetadata works in conjunction with the existing QueryResponse instruction. The Response type should be modified accordingly: we need to add a new AssetMetadata variant to it.

    /// The struct used in the existing `QueryResponse` instruction.
    pub enum Response {
        // ... snip, existing variants ...

        /// The metadata info.
        AssetMetadata {
            /// The ID of an asset (a collection, fungible or nonfungible).
            asset_id: AssetId,

            /// The ID of an asset instance.
            ///
            /// If the value is `Undefined`, the reported metadata is related to the collection, not a token.
            instance: AssetInstance,

            /// See `MetadataResponseData` below.
            data: MetadataResponseData,
        }
    }

    pub enum MetadataResponseData {
        /// The metadata key list to be reported
        /// in response to the `KeySet` metadata query kind.
        KeySet(MetadataKeySet),

        /// The values of the keys that were specified in the
        /// `Values` variant of the metadata query kind.
        Values(MetadataMap),
    }
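The dispatch between the two query kinds can be sketched as follows. This is a hypothetical handler over std types, reusing the type aliases defined earlier; answer_query is an illustrative name, not part of the RFC or any executor code.

```rust
use std::collections::{BTreeMap, BTreeSet};

type MetadataKey = Vec<u8>;
type MetadataValue = Vec<u8>;
type MetadataMap = BTreeMap<MetadataKey, MetadataValue>;
type MetadataKeySet = BTreeSet<MetadataKey>;

enum MetadataQueryKind {
    KeySet,
    Values(MetadataKeySet),
}

enum MetadataResponseData {
    KeySet(MetadataKeySet),
    Values(MetadataMap),
}

// Hypothetical handler: answer a metadata query from the locally stored map.
fn answer_query(stored: &MetadataMap, query: MetadataQueryKind) -> MetadataResponseData {
    match query {
        // Report which keys exist, without their values.
        MetadataQueryKind::KeySet => {
            MetadataResponseData::KeySet(stored.keys().cloned().collect())
        }
        // Report only the values of the demanded keys.
        MetadataQueryKind::Values(keys) => MetadataResponseData::Values(
            stored
                .iter()
                .filter(|(k, _)| keys.contains(*k))
                .map(|(k, v)| (k.clone(), v.clone()))
                .collect(),
        ),
    }
}

fn main() {
    let mut stored = MetadataMap::new();
    stored.insert(b"name".to_vec(), b"Collection".to_vec());

    match answer_query(&stored, MetadataQueryKind::KeySet) {
        MetadataResponseData::KeySet(keys) => assert!(keys.contains(&b"name".to_vec())),
        _ => panic!("expected KeySet response"),
    }
}
```

The resulting MetadataResponseData would then be wrapped in the AssetMetadata variant of Response and delivered via QueryResponse.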

ModifyMetadata

The ModifyMetadata is a new instruction to request a remote chain to modify the values of the specified keys.

This instruction can be used to update the metadata of a collection (fungible or nonfungible) or of an NFT.

The remote chain handles the modification request and may reject it based on its internal rules. The request can only be executed or rejected in its entirety; it must not be executed partially.

To execute the ModifyMetadata, an origin is required so that the handling logic can authorize the metadata modification request from a known source. Since this instruction requires an origin, the assets used to cover the execution fees must be transferred in a way that preserves the origin. For instance, one can use the approach described in RFC #122 if the handling chain has configured its aliasing rules accordingly.

The example use case for this instruction is to ask the asset's reserve location to modify the metadata, so that the original asset's metadata is updated according to the reserve location's rules.

    ModifyMetadata {
        /// The ID of an asset (a collection, fungible or nonfungible).
        asset_id: AssetId,

        /// The ID of an asset instance.
        ///
        /// If the value is `Undefined`, the modification request targets the collection, not a token.
        instance: AssetInstance,

        /// The map contains the keys mapped to the requested new values.
        modification: MetadataMap,
    }
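The all-or-nothing requirement can be sketched like this: validate every requested key first, and only write when the whole request is acceptable. The "mut:" prefix rule and the function names are illustrative assumptions, not part of the RFC.

```rust
use std::collections::BTreeMap;

type MetadataKey = Vec<u8>;
type MetadataValue = Vec<u8>;
type MetadataMap = BTreeMap<MetadataKey, MetadataValue>;

// Illustrative rule: this chain only allows modifying keys with the "mut:" prefix.
fn key_is_mutable(key: &MetadataKey) -> bool {
    key.starts_with(b"mut:")
}

// Apply the modification atomically: reject the whole request if any key fails
// validation, mutating `stored` only when every key is acceptable.
fn apply_modification(stored: &mut MetadataMap, modification: MetadataMap) -> Result<(), ()> {
    if !modification.keys().all(key_is_mutable) {
        return Err(()); // rejected in its entirety, nothing was written
    }
    for (k, v) in modification {
        stored.insert(k, v);
    }
    Ok(())
}

fn main() {
    let mut stored = MetadataMap::new();

    // A request touching a non-mutable key is rejected without partial writes.
    let mut bad = MetadataMap::new();
    bad.insert(b"frozen".to_vec(), b"x".to_vec());
    assert!(apply_modification(&mut stored, bad).is_err());
    assert!(stored.is_empty());

    // A fully valid request is applied as a whole.
    let mut good = MetadataMap::new();
    good.insert(b"mut:name".to_vec(), b"New name".to_vec());
    assert!(apply_modification(&mut stored, good).is_ok());
    assert_eq!(stored.len(), 1);
}
```

A real runtime would also check the origin against its authorization rules before applying anything, per the origin requirement described above.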

Repurposing AssetInstance::Undefined

As the new instructions show, this RFC reframes the purpose of the Undefined variant of the AssetInstance enum. This RFC proposes to use the Undefined variant of a collection identified by an AssetId as a synonym for the collection itself. I.e., an asset Asset { id: <AssetId>, fun: NonFungible(AssetInstance::Undefined) } is considered an NFT representing the collection itself.

As a singleton non-fungible instance is barely distinguishable from its collection, this convention shouldn't cause any problems.

Thus, the AssetInstance docs must be updated accordingly in the implementations.
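The convention can be captured in a single match. The enum below is a simplified stand-in; the real AssetInstance in the XCM spec has more variants.

```rust
// Simplified stand-in for XCM's AssetInstance (the real enum has more variants).
#[derive(Debug, PartialEq)]
enum AssetInstance {
    Undefined,
    Index(u128),
}

// Under this RFC's convention, `Undefined` addresses the collection itself,
// while any other variant addresses an individual token in the collection.
fn targets_collection(instance: &AssetInstance) -> bool {
    matches!(instance, AssetInstance::Undefined)
}

fn main() {
    assert!(targets_collection(&AssetInstance::Undefined));
    assert!(!targets_collection(&AssetInstance::Index(42)));
}
```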

Drawbacks

Regarding ergonomics, no drawbacks were noticed.

As for the user experience, this could open up new cross-chain use cases involving asset collections and NFTs, indicating a positive impact.

There are no security concerns except for the ReportMetadata instruction, which implies that the source of the information must be trusted.

In terms of performance and privacy, there will be no changes.

Testing, Security, and Privacy

The implementations must honor the contract for the new instructions. Namely, if the instance field has the value AssetInstance::Undefined, the metadata must relate to the asset collection and not to a non-fungible token inside it.

Performance, Ergonomics, and Compatibility

Performance

No significant impact.

Ergonomics

Introducing a standard metadata format and a way of communicating it is a valuable addition to the XCM format that potentially increases cross-chain interoperability without the need for ad-hoc chain-to-chain integrations via Transact.

Compatibility

This RFC proposes new functionality, so there are no compatibility issues.

Prior Art and References

RFC: XCM Asset Metadata

The original RFC draft contained additional metadata instructions. Though they could be useful, they are clearly outside the basic logic, so this RFC version omits them to keep the metadata discussion focused on the core concepts. Nonetheless, metadata approval instructions might prove useful in the future, so they are mentioned here.

You can read about the details in the original draft.
