
Future Possibilities

After this RFC, we can in a future version remove the allocator from the host's source code altogether, by removing support for all the deprecated host functions. This would remove the possibility of synchronizing older blocks, which is probably controversial and requires some preparation that is out of scope of this RFC.

(source)

RFC-0046: Metadata for Offline Signers

Start Date: 2023-10-31
Description: Include merkleized metadata hash in extrinsic signature for trust-less metadata verification.
Authors: Alzymologist Oy, Zondax AG, Parity Technologies

Summary


To interact with chains in the Polkadot ecosystem, it is required to know how transactions are encoded and how to read state. For this, Substrate, the framework used by most chains in the Polkadot ecosystem, exposes metadata about the runtime. UIs, wallets, signers, etc. can use this metadata to interact with the chains. This makes the metadata a crucial piece of the transaction encoding, as users rely on the interacting software to encode transactions in the correct format. It gets even more important when the user signs the transaction on an offline signer, as the latter by its nature cannot get access to the metadata without relying on the online wallet to provide it.

The idea is therefore that the metadata is chunked and these chunks are put into a Merkle tree. The root hash of this Merkle tree represents the metadata. Digested together with a small amount of extra information about the metadata, it allows offline signers and wallets to decode transactions by obtaining proofs for the individual chunks of the metadata. This digest is also included in the signed data of the transaction (but not sent as part of the transaction). The runtime then includes its known metadata root hash and extra-metadata constants when verifying the transaction. If the digest known by the runtime differs from the one the offline signer used, it very likely means that the online wallet provided invalid data, and the verification of the transaction fails, thus making metadata substitution impossible. The user depends on the offline wallet showing them the correct decoded transaction before signing, and with the merkleized metadata they can be sure that it was the correct transaction, or the runtime will reject it.

Thus, we propose the following:

Add a metadata digest value to the signed data to supply the signing party with proof of correct extrinsic interpretation. This would ensure that hardware wallets always use the correct metadata to decode the information shown to the user.

The digest value is generated once before release and is well-known and deterministic. The digest mechanism is designed to be modular and flexible. It also supports partial metadata transfer as needed by the signing party's extrinsic decoding mechanism. This accounts for signing devices' potentially limited communication bandwidth and/or memory capacity.


Motivation


Background


While all blockchain systems support (at least in some sense) offline signing used in air-gapped wallets and lightweight embedded devices, only a few simultaneously allow complex upgradeable logic and full message decoding on the cold offline signer side; Substrate is one of these heartening few, and therefore we should build on this feature to greatly improve transaction security, and thus, in general, network resilience.


As a starting point, it is important to recognise that prudence and due care are naturally required. As we build further reliance on this feature, we should be very careful to make sure it works correctly every time, so as not to create a false sense of security.


In order to enable decoding that is small and optimized for chain storage transactions, a metadata entity is used, which is not at all small in itself (on the order of half a megabyte for most networks). This is a dynamic chunk of data which completely describes chain interfaces and properties, and which could be made into a portable SCALE-encoded string for any given network version and passed along to an off-chain device to familiarize it with the latest network updates. Of course, compromising this metadata anywhere along the path could result in differences between what the user sees and what they sign, thus it is essential that we protect it.


Therefore, we have two problems to be solved:

1. Metadata is large and takes a long time to be passed into a cold storage device whose memory is insufficient for storing it; metadata SHOULD be shortened and its transmission SHOULD be optimized.
2. Metadata authenticity SHOULD be ensured.

As of now, there is no working solution for (1), as the whole metadata has to be passed to the device. On top of this, the solution for (2) heavily relies on a trusted party managing keys and ensuring the metadata is indeed authentic, creating poorly decentralized points of potential failure.


Solution requirements


Include metadata digest into signature


A cryptographically strong digest of the metadata MAY be included in the signable blob. There SHALL NOT be any storage overhead for this blob, nor computational overhead; thus, the digest MUST be a constant within a given runtime, deterministically defined by the metadata.


Reduce metadata size


Metadata should be stripped of parts that are not necessary to parse a signable extrinsic, then separated into a finite set of self-descriptive chunks. Thus, a subset of chunks necessary for decoding and rendering a signable extrinsic could be sent, possibly in small portions (ultimately, one at a time), to the cold device together with a proof.


Stakeholders


All chain teams are stakeholders, as implementing this feature would require timely effort on their side and would impact compatibility with older tools.


This feature is essential for all offline signer tools; many regular signing tools might make use of it as well. In general, this RFC greatly improves the security of any network implementing it, as many governing keys are used with offline signers.


Implementing this RFC would remove the requirement to maintain metadata portals manually, as the task of metadata verification would effectively be moved to the consensus mechanism of the chain.


The idea for this RFC was brought up by runtime implementors and was extensively discussed with offline wallet implementors. It was designed in such a way that it can work easily with the existing offline wallet solutions in the Polkadot ecosystem.


Explanation


The FRAME metadata provides a lot of information about a FRAME-based runtime. It contains information about the pallets, the calls per pallet, the storage entries per pallet, the runtime APIs, and type information about most of the types used in the runtime. For decoding transactions on an offline wallet, we mainly require the type information; most of the other information in the FRAME metadata is not required for decoding, so we can remove it. We have therefore come up with a custom representation of the metadata and a way this custom metadata is chunked, ensuring that we only need to send to the offline wallet the chunks required for decoding a particular transaction.


A reference implementation of this process is provided in the metadata-shortener crate.


Definitions


Metadata descriptor


Values for:

1. u8 metadata shortening protocol version (as an encoded enum variant),
2. the TypeId (u32) of the outermost Call enum, as described in https://docs.rs/frame-metadata/latest/frame_metadata/v15/struct.ExtrinsicMetadata.html#structfield.call_ty,
3. a vector of SignedExtension structs as defined in Metadata V15 (a vector of named (String) pairs of TypeId (u32)),
4. the spec_version String (see the explanation below) as found in the RuntimeVersion at generation of the metadata. While this information can also be found in the metadata, it is hidden in a big blob of data; to avoid transferring that big blob, we add these values here directly,
5. the spec_name String as found in the RuntimeVersion,
6. the u16 ss58 prefix,
7. the u8 decimals value, or 0u8 if no units are defined,
8. the tokenSymbol String defined on chain to identify the name of the currency (available, for example, through the system.properties() RPC call), or an empty String if no base units are defined
enum MetadataDescriptor {
  V0,
  V1(MetadataDescriptorV1),
}

struct MetadataDescriptorV1 {
  call_ty: u32, // TypeId
  signed_extensions: Vec<SignedExtensionMetadata>,
  spec_version: String,
  spec_name: String,
  ss58_prefix: u16,
  decimals: u8,
  token_symbol: String,
}

struct SignedExtensionMetadata {
  identifier: String,
  ty: u32, // TypeId
  additional_signed: u32, // TypeId
}

These constitute the metadata descriptor. This is the minimal information that, together with the (shortened) types registry, is sufficient to decode any signable transaction.
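For illustration, a V1 descriptor might be constructed like this (call_ty and spec_version here are placeholder values, not real ones; the ss58 prefix 0, the 10 decimals, and the DOT symbol are Polkadot's actual chain properties):

let descriptor = MetadataDescriptor::V1(MetadataDescriptorV1 {
  call_ty: 91, // hypothetical TypeId of the outermost Call enum
  signed_extensions: vec![], // normally filled from Metadata V15
  spec_version: "1002000".to_string(), // placeholder value
  spec_name: "polkadot".to_string(),
  ss58_prefix: 0,
  decimals: 10,
  token_symbol: "DOT".to_string(),
});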


Note on spec_version: this type is described in the metadata; it is indeed u32 in Polkadot, Kusama, and most other networks, and thus could be defined in a typesafe manner, but theoretically it could be anything, as long as it is serializable. We saw a couple of networks using u16 there in the past. Not that it's all that important, but if our protocol were the only place enforcing a particular type, our protocol would become the limitation; it would be unwise to enforce one here without first enforcing it upstream.


The version is not exactly an algebraic number: it has a comparison operation, but no other algebra is defined; the primitive type with these properties is really a String, not an integer. We've argued a lot about this internally, but with a naive JS-style solution we have all the flexibility we might want and no real downsides (from the point of view of our protocol, it is just an opaque constant, a key for database lookup).


Merkle tree


A Complete Binary Merkle Tree (CBMT) is proposed as the digest structure.


Every node of the proposed tree has a 32-byte value.


A terminal node of the tree is called a leaf; its value is an input to the digest.

The top node of the tree is called the root.

The values of all non-leaf nodes are computed through a non-commutative merge procedure over their child nodes.


In a CBMT, all layers must be fully populated, except for the last one, which must be filled completely from the left.


Nodes are numbered top-down and left-to-right, starting with 0 at the top of the tree.

Example tree (nodes numbered 0 through 8):

        0
     /     \
    1       2
   / \     / \
  3   4   5   6
 / \
7   8

Nodes 4, 5, 6, 7, 8 are leaves.
Node 0 is the root.

General flow

1. The metadata is converted into a lean modular form (a vector of chunks).
2. A Merkle tree is constructed from the metadata chunks.
3. The root of the tree is merged with the hash of the MetadataDescriptor.
4. The resulting value is a constant to be included in additionalSigned to prove that the metadata seen by the cold device is genuine.

Metadata modularization


The structure of types in the shortened metadata exactly matches the structure of types in scale-info at the MetadataV15 state, except that the doc field is always empty. This type system provides sufficient information for decoding with SCALE. For those not familiar with MetadataV15, a brief excerpt of the types structure is presented below; however, it is important that the protocol implemented in this RFC MUST agree directly with the scale-info definitions, to ensure a single source of truth and to resolve possible divergences towards the more adoptable and universal solution.

struct Chunk {
  id: u32,
  type: Type,
}

struct Type {
  path: Path, // vector of strings
  type_params: Vec<TypeParams>,
  type_def: TypeDef, // enum of various types
  doc: Vec<String>,
}

struct TypeParams {
  name: String,
  ty: Option<Type>,
}

enum TypeDef {
    Composite(TypeDefComposite),
    Variant(TypeDefVariant),
    Sequence(TypeDefSequence),
    Array(TypeDefArray),
    Tuple(TypeDefTuple),
    Primitive(TypeDefPrimitive),
    Compact(TypeDefCompact),
    BitSequence(TypeDefBitSequence),
}

struct TypeDefComposite {
    fields: Vec<Field>,
}

pub struct Field {
    name: Option<String>,
    ty: Type,
    type_name: Option<String>,
    docs: Vec<String>,
}

struct TypeDefVariant {
    variants: Vec<Variant>,
}

struct Variant {
    name: String,
    fields: Vec<Field>,
    index: u8,
    docs: Vec<String>,
}

struct TypeDefSequence {
    type_param: Type,
}

struct TypeDefArray {
    len: u32,
    type_param: Type,
}

struct TypeDefTuple {
    fields: Vec<Type>,
}

enum TypeDefPrimitive {
    bool,
    char,
    String,
    u8,
    u16,
    u32,
    u64,
    u128,
    u256,
    i8,
    i16,
    i32,
    i64,
    i128,
    i256,
}

struct TypeDefCompact {
    type_param: Type,
}

struct TypeDefBitSequence {
    pub bit_store_type: Type,
    pub bit_order_type: Type,
}

Modularization happens thus:

1. The types registry is stripped of docs fields.
2. Type records are separated into chunks, with enum variants becoming individual chunks differing by variant index; each chunk consists of an id (the same as in the full metadata registry) and a SCALE-encoded Type description (reduced to a 1-variant enum for enum variants). Enums with 0 variants are treated as regular types.
3. Chunks are sorted by id in ascending order; chunks with the same id are sorted by enum variant index in ascending order.
types_registry = metadataV15.types
modularized_registry = EmptyVector<Chunk>
for (id, type) in types_registry.iterate_enumerate {
  type.doc = empty_vector
  if (type is ReduceableEnum) { // false for 0-variant enums
    for variant in type.variants.iterate {
      variant_type = Type {
        path: type.path,
        type_params: empty_vector,
        type_def: TypeDef::Variant(variants: [variant]),
        docs: empty_vector,
      }
      modularized_registry.push(id, variant_type)
    }
  } else {
    modularized_registry.push(id, type)
  }
}

modularized_registry.sort(|a, b| {
    if a.id == b.id { // only possible for variants
      a.variant_index > b.variant_index
    } else { a.id > b.id }
  }
)
-

Chunks are SCALE-encoded and digested with the blake3 algorithm into 32-byte CBMT leaf values.


Merging protocol


The blake3 transformation of the concatenated child nodes (blake3(left + right)) is used as the merge procedure; since the concatenation order matters, the merge is non-commutative.


Complete Binary Merkle Tree construction protocol

1. Leaves are numbered in ascending order; each leaf index is associated with the corresponding chunk.
2. A merge is performed using the leaf with the highest index as the right child and the leaf with the second-highest index as the left child; the result is pushed to the end of the nodes queue and both leaves are discarded.
3. Step (2) is repeated until no leaves, or just one leaf, remain; in the latter case, the last leaf is pushed to the front of the nodes queue.
4. The right node and then the left node are popped from the front of the nodes queue and merged; the result is pushed to the end of the queue.
5. Step (4) is repeated until only one node remains; this is the tree root.
queue = empty_queue

while leaves.length > 1 {
  right = leaves.pop_last
  left = leaves.pop_last
  queue.push_back(merge(left, right))
}

if leaves.length == 1 {
  queue.push_front(leaves.last)
}

while queue.len() > 1 {
  right = queue.pop_front
  left = queue.pop_front
  queue.push_back(merge(left, right))
}

return queue.pop

Resulting tree for metadata consisting of 5 leaves (numbered from 0 to 4):

       root
     /     \
    *       *
   / \     / \
  *   0   1   2
 / \
3   4
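A minimal Rust sketch of this construction, using the blake3 crate and a double-ended queue as the nodes queue (an illustration of the algorithm above, not the reference implementation):

use std::collections::VecDeque;

type Node = [u8; 32];

// Non-commutative merge: blake3(left + right).
fn merge(left: &Node, right: &Node) -> Node {
    let mut hasher = blake3::Hasher::new();
    hasher.update(left);
    hasher.update(right);
    *hasher.finalize().as_bytes()
}

// Computes the CBMT root from leaf values sorted by ascending index.
fn cbmt_root(mut leaves: Vec<Node>) -> Option<Node> {
    let mut queue: VecDeque<Node> = VecDeque::new();
    // Merge the two highest-index leaves until at most one leaf remains.
    while leaves.len() > 1 {
        let right = leaves.pop()?;
        let left = leaves.pop()?;
        queue.push_back(merge(&left, &right));
    }
    // A single remaining leaf is pushed to the front of the queue.
    if let Some(last) = leaves.pop() {
        queue.push_front(last);
    }
    // Pop right, then left, from the front; push the merge to the back.
    while queue.len() > 1 {
        let right = queue.pop_front()?;
        let left = queue.pop_front()?;
        queue.push_back(merge(&left, &right));
    }
    queue.pop_front()
}

For the 5-leaf example above, this yields merge(merge(merge(3, 4), 0), merge(1, 2)) as the root.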

Digest

1. A blake3 hash is computed for each chunk of the modular short metadata registry.
2. The Complete Binary Merkle Tree is constructed as described above.
3. The root hash of this tree (left) is merged with the blake3 hash of the metadata descriptor (right); this is the metadata digest.

A sketch of this end-to-end computation is shown below.
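Assuming the merge and cbmt_root helpers from the earlier sketch, and chunks that are already SCALE-encoded:

// Sketch: full digest from SCALE-encoded chunks and a SCALE-encoded descriptor.
fn metadata_digest(chunks: &[Vec<u8>], descriptor: &[u8]) -> Option<[u8; 32]> {
    // 1. Hash every chunk into a 32-byte leaf.
    let leaves: Vec<[u8; 32]> = chunks
        .iter()
        .map(|c| *blake3::hash(c).as_bytes())
        .collect();
    // 2. Build the Complete Binary Merkle Tree.
    let root = cbmt_root(leaves)?;
    // 3. Merge the tree root (left) with the descriptor hash (right).
    let descriptor_hash = *blake3::hash(descriptor).as_bytes();
    Some(merge(&root, &descriptor_hash))
}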

The version number and the corresponding resulting metadata digest MUST be included in the Signed Extensions as specified in the Chain Verification section below.


Chain verification


The root of the metadata computed by the cold device MAY be included in the Signed Extensions; if it is included, the transaction will pass as valid iff the hash of the metadata as seen by the cold storage device is identical to the consensus hash of the metadata, ensuring a fair signing protocol.


The Signed Extension representing the metadata digest is a single byte representing both the digest value inclusion and the shortening protocol version; this MUST be included in the Signed Extensions set. Depending on its value, a digest value is included as additionalSigned in the signature computation according to the following specification:

| signed extension value | digest value   | comment                            |
| ---------------------- | -------------- | ---------------------------------- |
| 0x00                   | none           | digest is not included             |
| 0x01                   | 32-byte digest | this represents protocol version 1 |
| 0x02 - 0xFF            | reserved       | reserved for future use            |
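As an illustration of how the byte and the digest interact (the type and method names here are hypothetical, not the actual pallet API):

enum MetadataHashMode {
    Disabled,            // extension value 0x00
    EnabledV1([u8; 32]), // extension value 0x01, 32-byte digest
}

impl MetadataHashMode {
    // The single byte included in the transaction itself.
    fn extension_value(&self) -> u8 {
        match self {
            MetadataHashMode::Disabled => 0x00,
            MetadataHashMode::EnabledV1(_) => 0x01,
        }
    }

    // Bytes mixed into the signed payload (additionalSigned) but never
    // transferred as part of the transaction.
    fn additional_signed(&self) -> Vec<u8> {
        match self {
            MetadataHashMode::Disabled => Vec::new(),
            MetadataHashMode::EnabledV1(digest) => digest.to_vec(),
        }
    }
}

If the digest the signer used differs from the runtime's own constant, the signature check over additionalSigned fails and the transaction is rejected.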

Drawbacks


Increased transaction size


There is a 1-byte increase in transaction size due to the signed extension value. The digest itself is not included in the transferred transaction, only in the signing process.


Transition overhead


Some slightly out-of-spec systems might experience breaking changes as new content is added to the signed extensions; tools that delay their transition instead of preparing ahead of time would break for the duration of the delay. It is important to note that there is no real overhead in processing time or complexity, as the metadata checking mechanism is voluntary.


Testing, Security, and Privacy


All implementations MUST follow the RFC to generate the metadata hash. This includes which hash function to use and how to construct the metadata types tree, so all implementations follow the same security criteria. As chains will calculate the metadata hash at compile time, the build process generally needs to be trusted; however, this is already a solved problem in the Polkadot ecosystem using reproducible builds, so anyone can rebuild a chain runtime to ensure that a proposal actually contains the changes as advertised.

Implementations can also easily be tested against each other by taking some metadata and ensuring that they all arrive at the same metadata hash.

Privacy of users should also not be impacted. This assumes that wallets will generate the metadata hash locally and will not leak to third-party services any information about which chunks a user sends to their offline wallet. Beyond that, there is no leak of private information, as getting the raw metadata from the chain is an operation done by almost everyone.


To make it possible to recall the shortener protocol in case of vulnerability issues, a version byte is included.


Performance, Ergonomics, and Compatibility


Performance


This is a negligibly short pessimization at build time on the chain side. Cold wallet performance would mostly improve, as the metadata validity mechanism that used to take most of the effort in cold wallet support becomes trivial.


There should be no measurable performance impact on Polkadot or any other chain using this feature. The metadata root hash is calculated at compile time, and at runtime it is only optionally used when checking the signature of a transaction. This means that no performance-heavy operations are done at runtime.


Ergonomics


The proposal alters the way a transaction is built, signed, and verified, so it imposes some required changes on any developer who wants to construct transactions for Polkadot or any chain using this feature. As a developer can pass 0 to disable the verification of the metadata root hash, the feature can easily be ignored.


Prior Art and References


This project was developed upon a Polkadot Treasury grant; relevant development links are located in the metadata-offline-project repository.


There doesn't seem to be a similar solution to what this RFC proposes right now. However, there are other solutions to the problem of trusted signing. Cosmos, for example, has a standardized way of transforming a transaction into a textual representation, which is included in the signed data. This basically achieves the same as what this RFC proposes, but it requires that, for every transaction applied in a block, every node in the network generate this textual representation to ensure the transaction signature is valid. Furthermore, since rendering such a textual representation is potentially non-trivial (with the use of non-printed glyphs, direction control, escape characters, etc.), the inclusion of an arbitrary textual representation increases the attack surface and can result in a subtly false sense of security.


Unresolved Questions


Changes to the code of all cold signers to implement this mechanism SHOULD be made when this is enabled; non-cold signers may perform the extra metadata check for better security. Ultimately, signing anything without decoding it with verifiable metadata should become discouraged in all situations where a decision-making mechanism is involved (that is, outside of fully automated blind signers like trade bots or staking reward payout tools).

(source)


Stakeholders

Explanation

I created the stateful multisig pallet during my studies at the Polkadot Blockchain Academy, under supervision from @shawntabrizi and @ank4n. After that, I enhanced it to be fully functional, and it is now draft PR#3300 in polkadot-sdk. I'll list all the details and design decisions in the following sections. Note that the PR does not map 1:1 to the current RFC, as the RFC is a more polished version of the PR, updated based on feedback and discussions.

Let's start with a sequence diagram to illustrate the main operations of the Stateful Multisig.

multisig operations

  • If the threshold is lower than the number of approvers, the proposal is still valid.
  • If the threshold is higher than the number of approvers, we catch it during proposal execution and return an error.

Drawbacks

Testing, Security, and Privacy

Standard audit/review requirements apply.

Performance, Ergonomics, and Compatibility

Performance

Here is a back-of-the-envelope calculation to show that the stateful multisig is more efficient than the stateless multisig, given its smaller footprint on blocks.

A quick review of the extrinsics for both, as they affect the block size:

Stateless Multisig: ... We have the following extrinsics:

| Stateless | N^2 | Nil |
| Stateful  | N   | N   |

    So even though the stateful multisig has a larger state size, it's still more efficient in terms of block size and total footprint on the blockchain.

Ergonomics

    The Stateful Multisig will have better ergonomics for managing multisig accounts for both developers and end-users.

    Compatibility

    This RFC is compatible with the existing implementation and can be handled via upgrades and migration. It's not intended to replace the existing multisig pallet.

Prior Art and References

    multisig pallet in polkadot-sdk

Unresolved Questions

Solution Requirements

The maximum length of identity raw data values should be increased from the current 32 bytes/chars limit to at least a 59 bytes/chars limit (or a 66 bytes/chars limit) to support IPFS CIDs that are either CIDv0 or CIDv1.

The maximum length of "additional" field values should be increased by the same amount. A short sanity check of the proposed limits follows.
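The arithmetic behind the proposed limits, with the CID lengths stated as assumptions from the standard encodings rather than taken from the RFC itself:

fn main() {
    // A CIDv0 is a 46-character base58 string starting with "Qm".
    let cid_v0 = "QmYwAPJzv5CZsnA625s3Xf2nemtYgPpHdWEz79ojWnPbdG"; // sample CID
    assert_eq!(cid_v0.len(), 46);

    // A CIDv1 in its common base32 form is 59 characters, which motivates
    // the 59-byte minimum; both CID versions then fit.
    let cid_v1_len = 59;
    assert!(cid_v0.len() <= cid_v1_len);

    // Prefixing a CIDv1 with the 7-character "ipfs://" scheme gives 66
    // characters, matching the alternative 66-byte limit.
    assert_eq!("ipfs://".len() + cid_v1_len, 66);
}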

Stakeholders

Explanation

If a user tries to set an on-chain identity by creating an extrinsic using Polkadot.js with identity > setIdentity(info), and they try to provide an email address or a website domain name that is longer than 32 characters, or try to use an IPFS CID (which may be associated with a file such as a document or image), then they will encounter this error:

    createType(Call):: Call: failed decoding identity.setIdentity:: Struct: failed on args: {...}:: Data.Raw values are limited to a maximum length of 32 bytes
     

Increasing the maximum length of identity raw data values from the current 32 bytes/chars limit to at least a 59 bytes/chars limit (or a 66 bytes/chars limit) would overcome these errors and support IPFS CIDs that are either CIDv0 or CIDv1, satisfying the solution requirements.

Increasing the maximum length of "additional" field values by the same amount would also overcome these errors; it should be increased from the current 32 bytes/chars limit (FieldLimit).

Drawbacks

Performance

If Polkadot on-chain identities are able to store raw data values longer than the current maximum of 32 bytes, then each identity may want to use the maximum amount of "additional" custom fields or more, which would impact storage and performance on the network.

Testing, Security, and Privacy

Implementations would need to be tested for adherence by checking that IPFS CIDs that are either CIDv0 or CIDv1 are supported.

No effect on security or privacy beyond what already exists has been identified.

    No implementation pitfalls have been identified.

Performance, Ergonomics, and Compatibility

Performance

It would be an optimization, since the associated interfaces exposed to developers and end-users could start being used.

To minimize additional overhead, the proposal suggests at least a 59 bytes/chars limit (or a 66 bytes/chars limit), since that would at least provide support for IPFS CIDs that are either CIDv0 or CIDv1, satisfying the solution requirements.

Ergonomics

It alters the interfaces exposed to developers and end-users, since they will now be able to provide IPFS CIDs (either CIDv0 or CIDv1) as the values of Polkadot on-chain identity raw data input fields. Optionally, a 66 bytes/chars limit could be established to optimise for end-user usage patterns, since it may be more intuitive to them, as it would allow users to provide a link (e.g. URI protocol + CID) rather than just the CID.

    Compatibility

    Updates to Polkadot.js Apps, API and its documentation and those referring to it may be required.

Prior Art and References

    Prior Art

    No prior articles.


Unresolved Questions

Why can't we increase the maximum length of Polkadot on-chain identity raw data values and the "additional" field limit from 32 bytes/chars to an even longer maximum than proposed, such as 128 bytes/chars?


    Relates to RFC entitled "Increase maximum length of identity PGP fingerprint values from 20 bytes".

    (source)


Authors: Luke Schoen

Summary

    This proposes to increase the maximum length of PGP Fingerprint values from a 20 bytes/chars limit to a 40 bytes/chars limit.

Motivation

Background

    Pretty Good Privacy (PGP) Fingerprints are shorter versions of their corresponding Public Key that may be printed on a business card.

    They may be used by someone to validate the correct corresponding Public Key.

    It should be possible to add PGP Fingerprints to Polkadot on-chain identities.

  • Discourages users from setting an on-chain identity by creating an extrinsic using Polkadot.js with identity > setIdentity(info), since if they try to provide their 40-character-long PGP Fingerprint or GPG Fingerprint, which is longer than the maximum length of 20 bytes/chars, they will encounter an error.
  • Discourages users from using on-chain Web3 registrars to judge on-chain identity fields, where the shortest value they are able to generate for a "pgpFingerprint" is not less than or equal to the maximum length of 20 bytes.

Solution Requirements

The maximum length of identity PGP Fingerprint values should be increased from the current 20 bytes/chars limit to at least a 40 bytes/chars limit to support PGP Fingerprints and GPG Fingerprints.

Stakeholders

Explanation

If a user tries to set an on-chain identity by creating an extrinsic using Polkadot.js with identity > setIdentity(info), and they try to provide their 40-character-long PGP Fingerprint or GPG Fingerprint, which is longer than the maximum length of 20 bytes/chars [u8;20], then they will encounter this error:

    createType(Call):: Call: failed decoding identity.setIdentity:: Struct: failed on args: {...}:: Struct: failed on pgpFingerprint: Option<[u8;20]>:: Expected input with 20 bytes (160 bits), found 40 bytes
     

Increasing the maximum length of identity PGP Fingerprint values from the current 20 bytes/chars limit to at least a 40 bytes/chars limit would overcome these errors and support PGP Fingerprints and GPG Fingerprints, satisfying the solution requirements.

Drawbacks

    No drawbacks have been identified.

Testing, Security, and Privacy

Implementations would be tested for adherence by checking that 40 bytes/chars PGP Fingerprints are supported.

No effect on security or privacy beyond what already exists has been identified.

    No implementation pitfalls have been identified.

Performance, Ergonomics, and Compatibility

Performance

It would be an optimization, since the associated interfaces exposed to developers and end-users could start being used.

To minimize additional overhead, the proposal suggests a 40 bytes/chars limit, since that would at least provide support for PGP Fingerprints, satisfying the solution requirements.

Ergonomics

    No potential ergonomic optimizations have been identified.

    Compatibility

    Updates to Polkadot.js Apps, API and its documentation and those referring to it may be required.

Prior Art and References

    No prior articles or references.

Unresolved Questions

    No further questions at this stage.


    Relates to RFC entitled "Increase maximum length of identity raw data values from 32 bytes".

    (source)


Authors: George Pisaltu

Summary

    This RFC proposes the expansion of the extrinsic type identifier from the current most significant bit of the first byte of the encoded extrinsic to the two most significant bits of the first byte of the encoded extrinsic.

Motivation

In order to support transactions that use different authorization schemes than currently exist, a new extrinsic type must be introduced alongside the current signed and unsigned types. Currently, an encoded extrinsic's first byte indicates the type of extrinsic using the most significant bit (0 for unsigned, 1 for signed), and the 7 following bits indicate the extrinsic format version, which has been equal to 4 for a long time.

    By taking one bit from the extrinsic format version encoding, we can support 2 additional extrinsic types while also having a minimal impact on our capability to extend and change the extrinsic format in the future.

Stakeholders

    Runtime users.

Explanation

The extrinsic type and version are currently encoded in a single leading byte. This RFC aims to change the interpretation of this byte regarding the bits reserved for the extrinsic type and the version. In the following explanation, bits represented using T make up the extrinsic type and bits represented using V make up the extrinsic version.

    Currently, the bit allocation within the leading encoded byte is 0bTVVV_VVVV. In practice in the Polkadot ecosystem, the leading byte would be 0bT000_0100 as the version has been equal to 4 for a long time.

    This RFC proposes for the bit allocation to change to 0bTTVV_VVVV. As a result, the extrinsic type bit representation would change as follows:

| 11 | reserved |
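A sketch of how the leading byte would be parsed under the proposed layout (illustrative code, not from the polkadot-sdk):

// Split the leading byte 0bTTVV_VVVV into (type, version).
fn parse_leading_byte(b: u8) -> (u8, u8) {
    let ty = (b & 0b1100_0000) >> 6; // two most significant bits
    let version = b & 0b0011_1111;   // remaining six bits
    (ty, version)
}

// Today's signed version-4 extrinsic leads with 0bT000_0100 where T = 1;
// reinterpreted under the proposal, this is type 0b10, version 4:
// assert_eq!(parse_leading_byte(0b1000_0100), (0b10, 4));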

Drawbacks

    This change would reduce the maximum possible transaction version from the current 127 to 63. In order to bypass the new, lower limit, the extrinsic format would have to change again.

Testing, Security, and Privacy

    There is no impact on testing, security or privacy.

Performance, Ergonomics, and Compatibility

    This change would allow Polkadot to support new types of transactions, with the specific "general" transaction type in mind at the time of writing this proposal.

Performance

    There is no performance impact.

Ergonomics

    The impact to developers and end-users is minimal as it would just be a bitmask update on their part for parsing the extrinsic type along with the version.

    Compatibility

This change breaks backwards compatibility because any transaction that is neither signed nor unsigned, but a new transaction type, would be interpreted as having a future extrinsic format version.

Prior Art and References

The design was originally proposed in the TransactionExtension PR, which is also the motivation behind this effort.

Unresolved Questions

    None.


    Following this change, the "general" transaction type will be introduced as part of the Extrinsic Horizon effort, which will shape future work.

    (source)


Authors: Gavin Wood

Summary

    This proposes a periodic, sale-based method for assigning Polkadot Coretime, the analogue of "block space" within the Polkadot Network. The method takes into account the need for long-term capital expenditure planning for teams building on Polkadot, yet also provides a means to allow Polkadot to capture long-term value in the resource which it sells. It supports the possibility of building rich and dynamic secondary markets to optimize resource allocation and largely avoids the need for parameterization.

Motivation

    Present System

    The Polkadot Ubiquitous Computer, or just Polkadot UC, represents the public service provided by the Polkadot Network. It is a trust-free, WebAssembly-based, multicore, internet-native omnipresent virtual machine which is highly resilient to interference and corruption.

The present system of allocating the limited resources of the Polkadot Ubiquitous Computer is through a process known as parachain slot auctions. This is a parachain-centric paradigm whereby a single core is long-term allocated to a single parachain, which itself implies a Substrate/Cumulus-based chain secured and connected via the Relay-chain. Slot auctions are on-chain candle auctions which proceed for several days and result in the core being assigned to the parachain for six months at a time, up to 24 months in advance. Practically speaking, we only see two-year periods being bid upon and leased.

  • The solution SHOULD avoid creating additional dependencies on functionality which the Relay-chain need not strictly provide for the delivery of the Polkadot UC.
  • Furthermore, the design SHOULD be implementable and deployable in a timely fashion; three months from the acceptance of this RFC should not be unreasonable.

Stakeholders

    Primary stakeholder sets are:

    Socialization:

The essentials of this proposal were presented at Polkadot Decoded 2023 Copenhagen on the Main Stage. A small amount of socialization at the Parachain Summit preceded it, and some substantial discussion followed it. The Parity Ecosystem team is currently soliciting views from ecosystem teams who would be key stakeholders.

Explanation

    Overview

    Upon implementation of this proposal, the parachain-centric slot auctions and associated crowdloans cease. Instead, Coretime on the Polkadot UC is sold by the Polkadot System in two separate formats: Bulk Coretime and Instantaneous Coretime.

    When a Polkadot Core is utilized, we say it is dedicated to a Task rather than a "parachain". The Task to which a Core is dedicated may change at every Relay-chain block and while one predominant type of Task is to secure a Cumulus-based blockchain (i.e. a parachain), other types of Tasks are envisioned.

  • Governance upgrade proposal(s).
  • Monitoring of the upgrade process.

Performance, Ergonomics and Compatibility

    No specific considerations.

    Parachains already deployed into the Polkadot UC must have a clear plan of action to migrate to an agile Coretime market.

    While this proposal does not introduce documentable features per se, adequate documentation must be provided to potential purchasers of Polkadot Coretime. This SHOULD include any alterations to the Polkadot-SDK software collection.

Testing, Security and Privacy

    Regular testing through unit tests, integration tests, manual testnet tests, zombie-net tests and fuzzing SHOULD be conducted.

    A regular security review SHOULD be conducted prior to deployment through a review by the Web3 Foundation economic research group.

    Any final implementation MUST pass a professional external security audit.

    The proposal introduces no new privacy concerns.


    RFC-3 proposes a means of implementing the high-level allocations within the Relay-chain.

    RFC-5 proposes the API for interacting with Relay-chain.

    Additional work should specify the interface for the instantaneous market revenue so that the Coretime-chain can ensure Bulk Coretime placed in the instantaneous market is properly compensated.

  • The percentage of cores to be sold as Bulk Coretime.
  • The fate of revenue collected.

Prior Art and References

Robert Habermeier initially wrote on the subject of a blockspace-centric Polkadot in the article Polkadot Blockspace over Blockchains. While not going into details, the article served as an early reframing piece for moving beyond one-slot-per-chain models and building out secondary market infrastructure for resource allocation.

    (source)


Authors: Gavin Wood, Robert Habermeier

Summary

    In the Agile Coretime model of the Polkadot Ubiquitous Computer, as proposed in RFC-1 and RFC-3, it is necessary for the allocating parachain (envisioned to be one or more pallets on a specialised Brokerage System Chain) to communicate the core assignments to the Relay-chain, which is responsible for ensuring those assignments are properly enacted.

    This is a proposal for the interface which will exist around the Relay-chain in order to communicate this information and instructions.

Motivation

    The background motivation for this interface is splitting out coretime allocation functions and secondary markets from the Relay-chain onto System parachains. A well-understood and general interface is necessary for ensuring the Relay-chain receives coretime allocation instructions from one or more System chains without introducing dependencies on the implementation details of either side.

    Requirements

Stakeholders

    Primary stakeholder sets are:

    Socialization:

The content of this RFC was discussed in the Polkadot Fellows channel.

Explanation

    The interface has two sections: The messages which the Relay-chain is able to receive from the allocating parachain (the UMP message types), and messages which the Relay-chain is able to send to the allocating parachain (the DMP message types). These messages are expected to be able to be implemented in a well-known pallet and called with the XCM Transact instruction.

    Future work may include these messages being introduced into the XCM standard.

    UMP Message Types


    Realistic Limits of the Usage

For request_revenue_info, a successful request should be possible if when is no less than the Relay-chain block number on arrival of the message, less 100,000.

For assign_core, a successful request should be possible if begin is no less than the Relay-chain block number on arrival of the message, plus 10, and workload contains no more than 100 items. A sketch of these bounds as code follows.
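Expressed as simple predicates (illustrative only; the names and types are assumptions, not the actual runtime API):

// Bounds from the text above, with `now` being the Relay-chain block
// number on arrival of the message.
fn revenue_request_acceptable(when: u32, now: u32) -> bool {
    when >= now.saturating_sub(100_000)
}

fn assign_core_acceptable(begin: u32, now: u32, workload_len: usize) -> bool {
    begin >= now + 10 && workload_len <= 100
}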

Performance, Ergonomics and Compatibility

    No specific considerations.

Testing, Security and Privacy

    Standard Polkadot testing and security auditing applies.

    The proposal introduces no new privacy concerns.


    RFC-1 proposes a means of determining allocation of Coretime using this interface.

    RFC-3 proposes a means of implementing the high-level allocations within the Relay-chain.

    Drawbacks, Alternatives and Unknowns

    None at present.

Prior Art and References

    None.

    (source)


Authors: Joe Petrowski

Summary

As core functionality moves from the Relay Chain into system chains, reliance on the liveness of these chains for the use of the network increases. It is not economically scalable, nor necessary from a game-theoretic perspective, to pay collators large rewards. This RFC proposes a mechanism -- part technical and part social -- for ensuring reliable collator sets that are resilient to attempts to stop any subsystem of the Polkadot protocol.

Motivation

In order to guarantee access to Polkadot's system, the collators on its system chains must propose blocks (provide liveness) and allow all transactions to eventually be included. That is, some collators may censor transactions, but there must exist one collator in the set who will include a ... to censor any subset of transactions.

  • Collators selected by governance SHOULD have a reasonable expectation that the Treasury will reimburse their operating costs.

Stakeholders

Explanation

This protocol builds on the existing Collator Selection pallet and its notion of Invulnerables. Invulnerables are collators (identified by their AccountIds) who ... approximately:

  • of which 15 are Invulnerable, and
  • five are elected by bond.

Drawbacks

    The primary drawback is a reliance on governance for continued treasury funding of infrastructure costs for Invulnerable collators.

Testing, Security, and Privacy

The vast majority of cases can be covered by unit testing. Integration tests should ensure that the Collator Selection UpdateOrigin, which has permission to modify the Invulnerables and the desired number of Candidates, can handle updates over XCM from the system's governance location.

Performance, Ergonomics, and Compatibility

    This proposal has very little impact on most users of Polkadot, and should improve the performance of system chains by reducing the number of missed blocks.

Performance

    As chains have strict PoV size limits, care must be taken in the PoV impact of the session manager. Appropriate benchmarking and tests should ensure that conservative limits are placed on the number of Invulnerables and Candidates.

Ergonomics

    The primary group affected is Candidate collators, who, after implementation of this RFC, will need to compete in a bond-based election rather than a race to claim a Candidate spot.

    Compatibility

    This RFC is compatible with the existing implementation and can be handled via upgrades and migration.

Prior Art and References

    Written Discussions

Unresolved Questions

    None at this time.


    There may exist in the future system chains for which this model of collator selection is not appropriate. These chains should be evaluated on a case-by-case basis.

    (source)


Authors: Pierre Krieger

Summary

The full nodes of the Polkadot peer-to-peer network maintain a distributed hash table (DHT), which is currently used for full node discovery and validator discovery purposes.

    This RFC proposes to extend this DHT to be used to discover full nodes of the parachains of Polkadot.

Motivation

    The maintenance of bootnodes has long been an annoyance for everyone.

When a bootnode is newly deployed or removed, every chain specification must be updated in order to take the change into account. This has led to various non-optimal solutions, such as pulling chain specifications from GitHub repositories. When it comes to RPC nodes, UX developers often have trouble finding up-to-date addresses of parachain RPC nodes. With the ongoing migration from RPC nodes to light clients, similar problems would happen with chain specifications as well.


Because the list of bootnodes in chain specifications is so annoying to modify, the consequence is that the number of bootnodes is rather low (typically between 2 and 15). In order to better resist downtime and DoS attacks, a better solution would be to use every node of a certain chain as a potential bootnode, rather than special-casing some specific nodes.

While this RFC doesn't solve these problems for relay chains, it aims at solving them for parachains by storing the list of all the full nodes of a parachain on the relay chain DHT.

    Assuming that this RFC is implemented, and that light clients are used, deploying a parachain wouldn't require more work than registering it onto the relay chain and starting the collators. There wouldn't be any need for special infrastructure nodes anymore.

Stakeholders

    This RFC has been opened on my own initiative because I think that this is a good technical solution to a usability problem that many people are encountering and that they don't realize can be solved.

Explanation

The content of this RFC only applies to parachains and parachain nodes that are "Substrate-compatible". It is in no way mandatory for parachains to comply with this RFC.

    Note that "Substrate-compatible" is very loosely defined as "implements the same mechanisms and networking protocols as Substrate". The author of this RFC believes that "Substrate-compatible" should be very precisely specified, but there is controversy on this topic.

    While a lot of this RFC concerns the implementation of parachain nodes, it makes use of the resources of the Polkadot chain, and as such it is important to describe them in the Polkadot specification.


The maximum size of a response is set to an arbitrary 16 kiB. The responding side should make sure to conform to this limit. Given that fork_id is typically very small and that the only variable-length field is addrs, this is easily achieved by limiting the number of addresses.

    Implementers should be aware that addrs might be very large, and are encouraged to limit the number of addrs to an implementation-defined value.
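A sketch of one way a node might enforce both limits when building a response (illustrative; the encoding function and the address cap are stand-ins, not part of the specified protocol):

const MAX_RESPONSE_SIZE: usize = 16 * 1024; // 16 kiB
const MAX_ADDRS: usize = 16; // hypothetical implementation-defined cap

// Drop addresses from the end until the encoded response fits the cap.
fn trim_addrs(mut addrs: Vec<Vec<u8>>, encoded_len: impl Fn(&[Vec<u8>]) -> usize) -> Vec<Vec<u8>> {
    addrs.truncate(MAX_ADDRS);
    while !addrs.is_empty() && encoded_len(&addrs) > MAX_RESPONSE_SIZE {
        addrs.pop();
    }
    addrs
}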

Drawbacks

The peer_id and addrs fields are in theory not strictly needed, as the PeerId and addresses could always be equal to the PeerId and addresses of the node being registered as the provider and serving the response. However, the Cumulus implementation currently uses two different networking stacks, one for the parachain and one for the relay chain, with two separate PeerIds and addresses, and as such the PeerId and addresses of the other networking stack must be indicated. Asking them to use only one networking stack wouldn't be feasible in a realistic time frame.

    The values of the genesis_hash and fork_id fields cannot be verified by the requester and are expected to be unused at the moment. Instead, a client that desires connecting to a parachain is expected to obtain the genesis hash and fork ID of the parachain from the parachain chain specification. These fields are included in the networking protocol nonetheless in case an acceptable solution is found in the future, and in order to allow use cases such as discovering parachains in a not-strictly-trusted way.

Testing, Security, and Privacy

Because not all nodes want to be used as bootnodes, implementers are encouraged to provide a way to disable this mechanism. However, implementers are also very much encouraged to leave this mechanism enabled by default for all parachain nodes.

This mechanism doesn't add or remove any security by itself, as it relies on existing mechanisms. However, if the principle of chain specification bootnodes is entirely replaced with the mechanism described in this RFC (which is the objective), then it becomes important to consider whether the mechanism in this RFC can be abused in order to make a parachain unreachable.

    @@ -2702,22 +2333,22 @@ Furthermore, when a large number of providers (here, a provider is a bootnode) a

    For this reason, an attacker can abuse this mechanism by randomly generating libp2p PeerIds until they find the 20 entries closest to the key representing the target parachain. They are then in control of the parachain bootnodes. Because the key changes periodically and isn't predictable, and assuming that the Polkadot DHT is sufficiently large, it is not realistic for an attack like this to be maintained in the long term.
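To make the attack surface concrete: Kademlia stores a provider record on the nodes whose hashed identifiers are closest, in XOR distance, to the record key. The following rough sketch of that selection uses 32-byte arrays as stand-ins for the real 256-bit hashed identifiers; it is not the actual Substrate code.

```rust
/// Kademlia replication factor referenced by this RFC.
const K: usize = 20;

/// XOR distance between two 256-bit identifiers, compared lexicographically.
fn xor_distance(a: &[u8; 32], b: &[u8; 32]) -> [u8; 32] {
    let mut d = [0u8; 32];
    for i in 0..32 {
        d[i] = a[i] ^ b[i];
    }
    d
}

/// The K peers whose hashed PeerIds are closest to `key` end up holding the
/// provider records -- exactly the set an attacker tries to occupy.
fn closest_peers(key: &[u8; 32], mut peer_hashes: Vec<[u8; 32]>) -> Vec<[u8; 32]> {
    peer_hashes.sort_by_key(|p| xor_distance(key, p));
    peer_hashes.truncate(K);
    peer_hashes
}
```

A grinding attacker is simply searching for 20 PeerIds whose hashes sort to the front of this ordering; because the key rotates unpredictably, the search must be redone against the whole DHT every period.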

Furthermore, parachain clients are expected to cache a list of known good nodes on their disk. If the mechanism described in this RFC were to go down, it would only prevent new nodes from accessing the parachain, while clients that had connected before would not be affected.

Performance, Ergonomics, and Compatibility

Performance

    The DHT mechanism generally has a low overhead, especially given that publishing providers is done only every 24 hours.

Doing a Kademlia iterative query and then sending a provider record shouldn't take more than around 50 kiB of bandwidth in total for the parachain bootnode.

Assuming 1000 parachain full nodes, the 20 Polkadot full nodes corresponding to a specific parachain will each receive a sudden spike of a few megabytes of networking traffic when the key rotates (on the order of 1000 provider records of a few kilobytes each). Again, this is relatively negligible. If this becomes a problem, one can add a random delay before a parachain full node registers itself as the provider of the key corresponding to BabeApi_next_epoch.
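A minimal sketch of that mitigation, assuming the `rand` crate; `register_as_provider` is a hypothetical callback standing in for the actual DHT publish call:

```rust
use rand::Rng;
use std::{thread, time::Duration};

/// Spread provider registrations over a window after the key rotates,
/// instead of having every parachain full node publish at once.
fn register_with_jitter(register_as_provider: impl FnOnce()) {
    // Illustrative window; a real value would be tuned to the epoch length.
    let max_delay_secs: u64 = 600;
    let delay = Duration::from_secs(rand::thread_rng().gen_range(0..max_delay_secs));
    thread::sleep(delay);
    register_as_provider();
}
```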

Maybe the biggest uncertainty is the traffic that the 20 Polkadot full nodes will receive from light clients that want to know the bootnodes of a parachain. Light clients are generally encouraged to cache the peers that they use between restarts, so they should only query these 20 Polkadot full nodes at their first initialization. If this ever becomes a problem, this value of 20 is an arbitrary constant that can be increased for more redundancy.

Ergonomics

    Irrelevant.

    Compatibility

    Irrelevant.

Prior Art and References

    None.

Unresolved Questions

While it fundamentally doesn't change much about this RFC, using BabeApi_currentEpoch and BabeApi_nextEpoch might be inappropriate. I'm not familiar enough with good practices within the runtime to have an opinion here. Should it be an entirely new pallet?


    It is possible that in the future a client could connect to a parachain without having to rely on a trusted parachain specification.


@@ -2738,13 +2369,13 @@ If this ever becomes a problem, this value of 20 is an arbitrary constant that

Authors: Jonas Gehrlein

Summary

The Polkadot UC will generate revenue from the sale of available Coretime. The question then arises: how should we handle these revenues? Broadly, there are two reasonable paths: burning the revenue and thereby removing it from total issuance, or diverting it to the Treasury. This Request for Comment (RFC) presents arguments favoring burning as the preferred mechanism for handling revenues from Coretime sales.

Motivation

How to handle the revenue accrued from Coretime sales is an important economic question that influences the value of DOT and should be properly discussed before deciding on either option. Now is the best time to start this discussion.

Stakeholders

    Polkadot DOT token holders.

Explanation

This RFC discusses the potential benefits of burning the revenue accrued from Coretime sales instead of diverting it to the Treasury. The arguments in favor are as follows.

    It's in the interest of the Polkadot community to have a consistent and predictable Treasury income, because volatility in the inflow can be damaging, especially in situations when it is insufficient. As such, this RFC operates under the presumption of a steady and sustainable Treasury income flow, which is crucial for the Polkadot community's stability. The assurance of a predictable Treasury income, as outlined in a prior discussion here, or through other equally effective measures, serves as a baseline assumption for this argument.

Consequently, we need not concern ourselves with this particular issue here. This naturally raises the question: why should we introduce additional volatility to the Treasury by aligning it with the variable Coretime sales? It's worth noting that Coretime revenues often exhibit an inverse relationship with periods when Treasury spending should ideally be ramped up. During periods of low Coretime utilization (indicated by lower revenue), the Treasury should spend more on projects and endeavours to increase the demand for Coretime. This pattern underscores that Coretime sales, by their very nature, are an inconsistent and unpredictable source of funding for the Treasury. Given the importance of maintaining a steady and predictable inflow, it's unnecessary to rely on another volatile mechanism. Some might argue that we could have both: a steady inflow (from inflation) and some added bonus from Coretime sales, but burning the revenue would offer further benefits, as described below.

@@ -2787,13 +2418,13 @@ If this ever becomes a problem, this value of 20 is an arbitrary constant that

Authors: Joe Petrowski

Summary

    Since the introduction of the Collectives parachain, many groups have expressed interest in forming new -- or migrating existing groups into -- on-chain collectives. While adding a new collective is relatively simple from a technical standpoint, the Fellowship will need to merge new pallets into the Collectives parachain for each new collective. This RFC proposes a means for the network to ratify a new collective, thus instructing the Fellowship to instate it in the runtime.

Motivation

    Many groups have expressed interest in representing collectives on-chain. Some of these include:

Drawbacks

Unlike all other system chains, development and maintenance of the Encointer Network are mainly financed by the KSM Treasury, and possibly the DOT Treasury in the future. Encointer is dedicated to maintaining its network and runtime code for as long as possible, but there is a dependency on funding which is not in the hands of the Fellowship. The only risk in the context of funding, however, is that the Encointer runtime will see less frequent updates if there's less funding.

Testing, Security, and Privacy

    No changes to the existing system are proposed. Only changes to how maintenance is organized.

Performance, Ergonomics, and Compatibility

No changes.

Prior Art and References

    Existing Encointer runtime repo

Unresolved Questions

None identified.


    More info on Encointer: encointer.org


    @@ -3207,11 +2838,11 @@ This can be unified and simplified by moving both parts into the runtime.

Authors: Joe Petrowski, Gavin Wood

Summary

    The Relay Chain contains most of the core logic for the Polkadot network. While this was necessary prior to the launch of parachains and development of XCM, most of this logic can exist in parachains. This is a proposal to migrate several subsystems into system parachains.

Motivation

Polkadot's scaling approach allows many distinct state machines (known generally as parachains) to operate with common guarantees about the validity and security of their state transitions. Polkadot provides these common guarantees by executing the state transitions on a strict subset (a backing

@@ -3223,13 +2854,13 @@

blockspace) to the network.

By minimising state transition logic on the Relay Chain and migrating it into "system chains" -- a set of parachains that, with the Relay Chain, make up the Polkadot protocol -- the Polkadot Ubiquitous Computer can maximise its primary offering: secure blockspace.

Stakeholders

Explanation

    The following pallets and subsystems are good candidates to migrate from the Relay Chain: