diff --git a/404.html b/404.html
index 2b25caa..326edce 100644
--- a/404.html
+++ b/404.html
@@ -91,7 +91,7 @@
diff --git a/approved/0001-agile-coretime.html b/approved/0001-agile-coretime.html
index c76d278..bb1123b 100644
--- a/approved/0001-agile-coretime.html
+++ b/approved/0001-agile-coretime.html
@@ -90,7 +90,7 @@
diff --git a/approved/0005-coretime-interface.html b/approved/0005-coretime-interface.html
index 9c1e020..a5b8b5c 100644
--- a/approved/0005-coretime-interface.html
+++ b/approved/0005-coretime-interface.html
@@ -90,7 +90,7 @@
diff --git a/approved/0007-system-collator-selection.html b/approved/0007-system-collator-selection.html
index 27a24a0..b31be51 100644
--- a/approved/0007-system-collator-selection.html
+++ b/approved/0007-system-collator-selection.html
@@ -90,7 +90,7 @@
diff --git a/approved/0008-parachain-bootnodes-dht.html b/approved/0008-parachain-bootnodes-dht.html
index c929f4c..155c19e 100644
--- a/approved/0008-parachain-bootnodes-dht.html
+++ b/approved/0008-parachain-bootnodes-dht.html
@@ -90,7 +90,7 @@
diff --git a/approved/0009-improved-net-light-client-requests.html b/approved/0009-improved-net-light-client-requests.html
index e0b4a14..689a471 100644
--- a/approved/0009-improved-net-light-client-requests.html
+++ b/approved/0009-improved-net-light-client-requests.html
@@ -90,7 +90,7 @@
diff --git a/approved/0010-burn-coretime-revenue.html b/approved/0010-burn-coretime-revenue.html
index 318ceb9..471ebca 100644
--- a/approved/0010-burn-coretime-revenue.html
+++ b/approved/0010-burn-coretime-revenue.html
@@ -90,7 +90,7 @@
diff --git a/approved/0012-process-for-adding-new-collectives.html b/approved/0012-process-for-adding-new-collectives.html
index c7f403e..66ee892 100644
--- a/approved/0012-process-for-adding-new-collectives.html
+++ b/approved/0012-process-for-adding-new-collectives.html
@@ -90,7 +90,7 @@
diff --git a/approved/0013-prepare-blockbuilder-and-core-runtime-apis-for-mbms.html b/approved/0013-prepare-blockbuilder-and-core-runtime-apis-for-mbms.html
index 0fd6ecb..2d774ef 100644
--- a/approved/0013-prepare-blockbuilder-and-core-runtime-apis-for-mbms.html
+++ b/approved/0013-prepare-blockbuilder-and-core-runtime-apis-for-mbms.html
@@ -90,7 +90,7 @@
diff --git a/approved/0014-improve-locking-mechanism-for-parachains.html b/approved/0014-improve-locking-mechanism-for-parachains.html
index 1f1d56a..a154881 100644
--- a/approved/0014-improve-locking-mechanism-for-parachains.html
+++ b/approved/0014-improve-locking-mechanism-for-parachains.html
@@ -90,7 +90,7 @@
diff --git a/approved/0022-adopt-encointer-runtime.html b/approved/0022-adopt-encointer-runtime.html
index 9674708..a8655ce 100644
--- a/approved/0022-adopt-encointer-runtime.html
+++ b/approved/0022-adopt-encointer-runtime.html
@@ -90,7 +90,7 @@
diff --git a/approved/0026-sassafras-consensus.html b/approved/0026-sassafras-consensus.html
index 90705bc..d092bb7 100644
--- a/approved/0026-sassafras-consensus.html
+++ b/approved/0026-sassafras-consensus.html
@@ -90,7 +90,7 @@
diff --git a/approved/0032-minimal-relay.html b/approved/0032-minimal-relay.html
index 09beda1..5fc3f67 100644
--- a/approved/0032-minimal-relay.html
+++ b/approved/0032-minimal-relay.html
@@ -90,7 +90,7 @@
diff --git a/approved/0042-extrinsics-state-version.html b/approved/0042-extrinsics-state-version.html
index f7bfb66..bc3bfa2 100644
--- a/approved/0042-extrinsics-state-version.html
+++ b/approved/0042-extrinsics-state-version.html
@@ -90,7 +90,7 @@
diff --git a/approved/0043-storage-proof-size-hostfunction.html b/approved/0043-storage-proof-size-hostfunction.html
index 618aec8..4a6ab9a 100644
--- a/approved/0043-storage-proof-size-hostfunction.html
+++ b/approved/0043-storage-proof-size-hostfunction.html
@@ -90,7 +90,7 @@
diff --git a/approved/0045-nft-deposits-asset-hub.html b/approved/0045-nft-deposits-asset-hub.html
index 7e43a61..2e43ec0 100644
--- a/approved/0045-nft-deposits-asset-hub.html
+++ b/approved/0045-nft-deposits-asset-hub.html
@@ -90,7 +90,7 @@
diff --git a/approved/0047-assignment-of-availability-chunks.html b/approved/0047-assignment-of-availability-chunks.html
index c378479..cfdae6a 100644
--- a/approved/0047-assignment-of-availability-chunks.html
+++ b/approved/0047-assignment-of-availability-chunks.html
@@ -90,7 +90,7 @@
diff --git a/approved/0048-session-keys-runtime-api.html b/approved/0048-session-keys-runtime-api.html
index 48a7816..63079d5 100644
--- a/approved/0048-session-keys-runtime-api.html
+++ b/approved/0048-session-keys-runtime-api.html
@@ -90,7 +90,7 @@
diff --git a/approved/0050-fellowship-salaries.html b/approved/0050-fellowship-salaries.html
index 06d63d7..8541463 100644
--- a/approved/0050-fellowship-salaries.html
+++ b/approved/0050-fellowship-salaries.html
@@ -90,7 +90,7 @@
diff --git a/approved/0056-one-transaction-per-notification.html b/approved/0056-one-transaction-per-notification.html
index 10be692..8e60476 100644
--- a/approved/0056-one-transaction-per-notification.html
+++ b/approved/0056-one-transaction-per-notification.html
@@ -90,7 +90,7 @@
diff --git a/approved/0059-nodes-capabilities-discovery.html b/approved/0059-nodes-capabilities-discovery.html
index ca39319..2bf86b6 100644
--- a/approved/0059-nodes-capabilities-discovery.html
+++ b/approved/0059-nodes-capabilities-discovery.html
@@ -90,7 +90,7 @@
diff --git a/approved/0078-merkleized-metadata.html b/approved/0078-merkleized-metadata.html
index 920e9b8..4c32808 100644
--- a/approved/0078-merkleized-metadata.html
+++ b/approved/0078-merkleized-metadata.html
@@ -90,7 +90,7 @@
diff --git a/approved/0084-general-transaction-extrinsic-format.html b/approved/0084-general-transaction-extrinsic-format.html
index 61dfb5c..1001f94 100644
--- a/approved/0084-general-transaction-extrinsic-format.html
+++ b/approved/0084-general-transaction-extrinsic-format.html
@@ -90,7 +90,7 @@
diff --git a/approved/0091-dht-record-creation-time.html b/approved/0091-dht-record-creation-time.html
index e0d688d..621ef20 100644
--- a/approved/0091-dht-record-creation-time.html
+++ b/approved/0091-dht-record-creation-time.html
@@ -90,7 +90,7 @@
diff --git a/approved/0097-unbonding_queue.html b/approved/0097-unbonding_queue.html
index 986f015..13601a4 100644
--- a/approved/0097-unbonding_queue.html
+++ b/approved/0097-unbonding_queue.html
@@ -90,7 +90,7 @@
diff --git a/approved/0099-transaction-extension-version.html b/approved/0099-transaction-extension-version.html
index af059cb..689d306 100644
--- a/approved/0099-transaction-extension-version.html
+++ b/approved/0099-transaction-extension-version.html
@@ -90,7 +90,7 @@
diff --git a/approved/0100-xcm-multi-type-asset-transfer.html b/approved/0100-xcm-multi-type-asset-transfer.html
index 2befc62..58fe8bb 100644
--- a/approved/0100-xcm-multi-type-asset-transfer.html
+++ b/approved/0100-xcm-multi-type-asset-transfer.html
@@ -90,7 +90,7 @@
diff --git a/approved/0101-xcm-transact-remove-max-weight-param.html b/approved/0101-xcm-transact-remove-max-weight-param.html
index 14459a5..c4f3538 100644
--- a/approved/0101-xcm-transact-remove-max-weight-param.html
+++ b/approved/0101-xcm-transact-remove-max-weight-param.html
@@ -90,7 +90,7 @@
diff --git a/approved/0103-introduce-core-index-commitment.html b/approved/0103-introduce-core-index-commitment.html
index ba21ee3..df7ee6e 100644
--- a/approved/0103-introduce-core-index-commitment.html
+++ b/approved/0103-introduce-core-index-commitment.html
@@ -90,7 +90,7 @@
diff --git a/approved/0105-xcm-improved-fee-mechanism.html b/approved/0105-xcm-improved-fee-mechanism.html
index f024365..bcf6643 100644
--- a/approved/0105-xcm-improved-fee-mechanism.html
+++ b/approved/0105-xcm-improved-fee-mechanism.html
@@ -90,7 +90,7 @@
diff --git a/approved/0107-xcm-execution-hints.html b/approved/0107-xcm-execution-hints.html
index 96ab1f2..84b0716 100644
--- a/approved/0107-xcm-execution-hints.html
+++ b/approved/0107-xcm-execution-hints.html
@@ -90,7 +90,7 @@
diff --git a/approved/0108-xcm-remove-testnet-ids.html b/approved/0108-xcm-remove-testnet-ids.html
index d80d1bb..3587d11 100644
--- a/approved/0108-xcm-remove-testnet-ids.html
+++ b/approved/0108-xcm-remove-testnet-ids.html
@@ -90,7 +90,7 @@
diff --git a/approved/0122-alias-origin-on-asset-transfers.html b/approved/0122-alias-origin-on-asset-transfers.html
index ee87c36..c3a9e0b 100644
--- a/approved/0122-alias-origin-on-asset-transfers.html
+++ b/approved/0122-alias-origin-on-asset-transfers.html
@@ -90,7 +90,7 @@
diff --git a/approved/0123-pending-code-as-storage-location-for-runtime-upgrades.html b/approved/0123-pending-code-as-storage-location-for-runtime-upgrades.html
index 2aefcdf..7e92e04 100644
--- a/approved/0123-pending-code-as-storage-location-for-runtime-upgrades.html
+++ b/approved/0123-pending-code-as-storage-location-for-runtime-upgrades.html
@@ -90,7 +90,7 @@
diff --git a/approved/0125-xcm-asset-metadata.html b/approved/0125-xcm-asset-metadata.html
index 83d8b05..485b7c2 100644
--- a/approved/0125-xcm-asset-metadata.html
+++ b/approved/0125-xcm-asset-metadata.html
@@ -90,7 +90,7 @@
diff --git a/approved/0126-introduce-pvq.html b/approved/0126-introduce-pvq.html
index 37813f5..43e3c12 100644
--- a/approved/0126-introduce-pvq.html
+++ b/approved/0126-introduce-pvq.html
@@ -90,7 +90,7 @@
diff --git a/approved/0135-compressed-blob-prefixes.html b/approved/0135-compressed-blob-prefixes.html
index ae8c53b..68adb33 100644
--- a/approved/0135-compressed-blob-prefixes.html
+++ b/approved/0135-compressed-blob-prefixes.html
@@ -90,7 +90,7 @@
diff --git a/approved/0139-faster-erasure-coding.html b/approved/0139-faster-erasure-coding.html
index 7dba9a0..59ecf8e 100644
--- a/approved/0139-faster-erasure-coding.html
+++ b/approved/0139-faster-erasure-coding.html
@@ -90,7 +90,7 @@
diff --git a/approved/0149-rfc-1-renewal-adjustment.html b/approved/0149-rfc-1-renewal-adjustment.html
index 19ae1f9..864501b 100644
--- a/approved/0149-rfc-1-renewal-adjustment.html
+++ b/approved/0149-rfc-1-renewal-adjustment.html
@@ -90,7 +90,7 @@
diff --git a/index.html b/index.html
index dad8a22..bcb2556 100644
--- a/index.html
+++ b/index.html
@@ -90,7 +90,7 @@
diff --git a/introduction.html b/introduction.html
index dad8a22..bcb2556 100644
--- a/introduction.html
+++ b/introduction.html
@@ -90,7 +90,7 @@
diff --git a/print.html b/print.html
index 2a97607..12c3ebd 100644
--- a/print.html
+++ b/print.html
@@ -91,7 +91,7 @@
@@ -696,182 +696,6 @@ To mitigate this, we propose preventing the market from closing at the ope

Prior Art and References

This RFC builds extensively on the available ideas put forward in RFC-1.

Additionally, I want to express a special thanks to Samuel Haefner, Shahar Dobzinski, and Alistair Stewart for fruitful discussions and helping me structure my thoughts.

-

(source)

-

Table of Contents

- -

RFC-0117: The Unbrick Collective

-
- - - -
Start Date: 22 August 2024
Description: The Unbrick Collective aims to help teams rescue a para once it stops producing blocks
Authors: Bryan Chen, Pablo Dorado
-
-

Summary

-

A followup of RFC-0014. This RFC proposes adding a new collective to the Polkadot Collectives Chain, the Unbrick Collective, as well as improvements to the mechanisms that allow teams operating paras that have stopped producing blocks to be assisted, in order to restore block production on these paras.

-

Motivation

-

Since the initial launch of Polkadot parachains, there have been many incidents causing parachains to stop producing new blocks (thereby becoming bricked), and many occurrences that required Polkadot governance to update the parachain head state/wasm. The causes range from incorrectly registering the initial head state, inability to use the sudo key, bad runtime migrations, and bad weight configurations, to bugs in the development of the Polkadot SDK.

-

Currently, when the para is not unlocked in the paras registrar1, the Root origin is required to perform such actions. Invoking this origin involves the governance process, which can be very resource-expensive for the teams, and the long voting and enactment times could also result in significant damage to the parachain and its users.

-

Finally, other instances of governance that might enact a call using the Root origin (like the Polkadot Fellowship) are, due to the nature of their mission, not fit to carry out these kinds of tasks.

-

Consequently, the idea of an Unbrick Collective that can assist para teams when their paras brick, and provide further protection against future halts, is reasonable.

-

Stakeholders

- -

Explanation

-

The Collective

-

The Unbrick Collective is defined as an unranked collective of members, not paid by the Polkadot Treasury. Its main goal is to serve as a point of contact and assistance for enacting the actions needed to unbrick a para. Such actions are:

- -

In order to ensure these changes are safe for the network, actions enacted by the Unbrick Collective must be whitelisted via mechanisms similar to those followed by collectives like the Polkadot Fellowship. This will prevent unintended, unreviewed changes to other paras.

-

Also, teams may opt in to delegate the handling of their para in the registry to the Collective. This allows the Collective to perform similar actions using the paras registrar, providing a shorter path to unbrick a para.

-

Initially, the Unbrick Collective has powers similar to a parachain's own sudo, but permits more decentralized control. In the future, Polkadot shall provide functionality like SPREE or JAM that exceeds sudo permissions, so the Unbrick Collective cannot modify those state roots or code.

-

The Unbrick Process

-
flowchart TD
    A[Start]

    A -- Bricked --> C[Request para unlock via Root]
    C -- Approved --> Y
    C -- Rejected --> A

    D[unbrick call proposal on WhitelistedUnbrickCaller]
    E[whitelist call proposal on the Unbrick governance]
    E -- call whitelisted --> F[unbrick call enacted]
    D -- unbrick called --> F
    F --> Y

    A -- Not bricked --> O[Opt-in to the Collective]
    O -- Bricked --> D
    O -- Bricked --> E

    Y[update PVF / head state] -- Unbricked --> Z[End]
-
-

Initially, a para team has two paths to handle a potential unbrick of their para in case it stops producing blocks.

-
    -
  1. Opt in to the Unbrick Collective: This is done by delegating the handling of the para in the paras registrar to an origin related to the Collective. This doesn't require unlocking the para. This way, the Collective is enabled to perform changes in the paras module once the Unbrick Process has been carried out.
  2. Request a Para Unlock: In case the para hasn't delegated its handling in the paras registrar, it will still be possible for the para team to submit a proposal to unlock the para, which can be assisted by the Collective. However, this involves submitting a proposal to the Root governance origin.
-

Belonging to the Collective

-

The collective will initially be created without members (no seeding). Additional governance proposals will set up the seed members.

-

The origins able to modify the members of the collective are:

- -

The members are responsible for verifying the technical details of the unbrick requests (i.e. the hash of the new PVF being set). Therefore, they must have the technical capacity to perform such tasks.

-

Suggested requirements to become a member are the following:

- -

Drawbacks

-

The ability to modify the head state and/or the PVF of a para implies the possibility of performing arbitrary modifications to it (e.g. taking control of the native parachain token or any bridged assets in the para).

-

This could introduce a new attack vector, and therefore such great power needs to be handled carefully.

-

Testing, Security, and Privacy

-

The implementation of this RFC will be tested on testnets (Rococo and Westend) first.

-

An audit will be required to ensure the implementation doesn't introduce unwanted side effects.

-

There are no privacy related concerns.

-

Performance, Ergonomics, and Compatibility

-

Performance

-

This RFC should not introduce any performance impact.

-

Ergonomics

-

This RFC should improve the experience for new and existing parachain teams, lowering the barrier to unbrick a stalled para.

-

Compatibility

-

This RFC is fully compatible with existing interfaces.

-

Prior Art and References

- -

Unresolved Questions

- - -
1 -

The paras registrar refers to a pallet in the Relay, responsible for gathering the registration info of the paras, their locked/unlocked state, and the manager info.

-
-

(source)

Table of Contents

-

Motivation

+

Motivation

The heap allocation of the runtime is currently controlled by the host using a memory allocator on the host side.

The API of many host functions contains buffer allocations. For example, when calling ext_hashing_twox_256_version_1, the host allocates a 32-byte buffer using the host allocator, and returns a pointer to this buffer to the runtime. The runtime later has to call ext_allocator_free_version_1 on this pointer to free the buffer.

Even though no benchmark has been done, it is pretty obvious that this design is very inefficient. To continue with the example of ext_hashing_twox_256_version_1, it would be more efficient to instead write the output hash to a buffer allocated by the runtime on its stack and passed by pointer to the function. Allocating a buffer on the stack in the worst-case scenario consists of simply decreasing a number; in the best-case scenario, it is free. Doing so would save many VM memory reads and writes by the allocator, and would save a function call to ext_allocator_free_version_1.

Furthermore, the existence of the host-side allocator has become questionable over time. It is implemented in a very naive way, and for determinism and backwards compatibility reasons, it needs to be implemented exactly identically in every client implementation. Runtimes make substantial use of heap memory allocations, and each allocation needs to go through the runtime <-> host boundary twice (once for allocating and once for freeing). Moving the allocator to the runtime side would be a good idea, although it would increase the runtime size. But before the host-side allocator can be deprecated, all the host functions that use it must be updated to avoid using it.
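The difference between the two calling conventions can be illustrated with a minimal, self-contained Rust sketch. The function names and the stand-in "hash" below are ours for illustration only; the real host interface is defined in WebAssembly terms and uses an actual hashing primitive.

```rust
// Old style: the callee allocates the output buffer on the heap and hands
// ownership back; the caller must later free it (analogous to the runtime
// having to call ext_allocator_free_version_1).
fn hash_host_allocated(data: &[u8]) -> Box<[u8; 32]> {
    let mut out = [0u8; 32];
    for (i, b) in data.iter().enumerate() {
        out[i % 32] ^= *b; // dummy "hash" for illustration only
    }
    Box::new(out) // heap allocation the caller is responsible for dropping
}

// New style: the caller provides the output buffer, e.g. on its own stack,
// so no allocator round-trip is needed at all.
fn hash_caller_allocated(data: &[u8], out: &mut [u8; 32]) {
    *out = [0u8; 32];
    for (i, b) in data.iter().enumerate() {
        out[i % 32] ^= *b; // same dummy "hash"
    }
}

fn main() {
    let boxed = hash_host_allocated(b"hello");
    let mut buf = [0u8; 32]; // stack allocation: essentially free
    hash_caller_allocated(b"hello", &mut buf);
    assert_eq!(*boxed, buf); // both conventions yield the same digest
}
```

The second convention saves the heap allocation, the later free, and the associated boundary crossings.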

-

Stakeholders

+

Stakeholders

No attempt was made to convince stakeholders.

-

Explanation

+

Explanation

New host functions

This section contains a list of new host functions to introduce and amendments to the existing ones.

(func $ext_storage_read_version_2
@@ -1135,7 +959,7 @@ The ext_crypto_ed25519_public_key_version_1 function writes the pub
 
  • ext_allocator_free_version_1
  • ext_offchain_network_state_version_1
  • -

    Unresolved Questions

    +

    Unresolved Questions

    The changes in this RFC would need to be benchmarked. That involves implementing the RFC and measuring the speed difference.

    It is expected that most host functions are faster or equal in speed to their deprecated counterparts, with the following exceptions:

@@ -1176,7 +1000,7 @@ The ext_crypto_ed25519_public_key_version_1 function writes the pub Authors: Jonas Gehrlein -

      Summary

      +

      Summary

      This RFC proposes burning 80% of transaction fees accrued on Polkadot’s Relay Chain and, more significantly, on all its system parachains. The remaining 20% would continue to incentivize Validators (on the Relay Chain) and Collators (on system parachains) for including transactions. The 80:20 split is motivated by preserving the incentives for Validators, which are crucial for the security of the network, while establishing a consistent fee policy across the Relay Chain and all system parachains.

      • @@ -1187,7 +1011,7 @@ The ext_crypto_ed25519_public_key_version_1 function writes the pub

This proposal extends the system's deflationary direction and enables direct value capture for DOT holders from an overall increase in activity on the network.

      -

      Motivation

      +

      Motivation

      Historically, transaction fees on both the Relay Chain and the system parachains (with a few exceptions) have been relatively low. This is by design—Polkadot is built to scale and offer low-cost transactions. While this principle remains unchanged, growing network activity could still result in a meaningful accumulation of fees over time.

Implementing this RFC ensures that potentially increasing activity, manifesting in more fees, is captured for all token holders. It further aligns how the network handles fees (such as those from transactions or for coretime usage). The arguments in support of this are close to those outlined in RFC-0010. Specifically, burning transaction fees has the following benefits:

      Compensation for Coretime Usage

      @@ -1195,7 +1019,7 @@ The ext_crypto_ed25519_public_key_version_1 function writes the pub

      Value Accrual and Deflationary Pressure

By burning the transaction fees, the system effectively reduces the token supply and thereby increases the scarcity of the native token. This deflationary pressure can increase the token's long-term value and ensures that the captured value is translated equally to all existing token holders.

      This proposal requires only minimal code changes, making it inexpensive to implement, yet it introduces a consistent policy for handling transaction fees across the network. Crucially, it positions Polkadot for a future where fee burning could serve as a counterweight to an otherwise inflationary token model, ensuring that value generated by network usage is returned to all DOT holders.
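As a back-of-the-envelope illustration of the 80:20 policy, the following sketch splits an accrued fee amount into a burned share and the share kept as block-producer reward. The function name and the plain-integer math are our own illustrative choices, not code from any pallet.

```rust
/// Illustrative split of an accrued fee into a burned portion (80%) and a
/// validator/collator reward (20%). Names and rounding behaviour are a
/// sketch, not the runtime's actual arithmetic.
fn split_fees(fee: u128) -> (u128, u128) {
    let burned = fee * 80 / 100; // 80% removed from the token supply
    let reward = fee - burned;   // remaining 20% rewards the block producer
    (burned, reward)
}

fn main() {
    assert_eq!(split_fees(1_000), (800, 200));
    // In this sketch, any rounding remainder stays with the reward side.
    assert_eq!(split_fees(99), (79, 20));
}
```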

      -

      Stakeholders

      +

      Stakeholders

      • All DOT Token Holders: Benefit from reduced supply and direct value capture as network usage increases.

@@ -1258,9 +1082,9 @@ The ext_crypto_ed25519_public_key_version_1 function writes the pub Authors: polka.dom (polkadotdom) -

        Summary

        +

        Summary

        This RFC proposes changes to pallet-conviction-voting that allow for simultaneous voting and delegation. For example, Alice could delegate to Bob, then later vote on a referendum while keeping their delegation to Bob intact. It is a strict subset of Leemo's RFC 35.

        -

        Motivation

        +

        Motivation

        Backdrop

        Under our current voting system, a voter can either vote or delegate. To vote, they must first ensure they have no delegate, and to delegate, they must first clear their current votes.

        The Issue

        @@ -1275,12 +1099,12 @@ The ext_crypto_ed25519_public_key_version_1 function writes the pub

        This RFC aims to solve the second and third issue and thus more accurately align governance to the true voter preferences.

        An Aside

        One may ask, could a voter not just undelegate, vote, then delegate again? Could this just be built into the user interface? Unfortunately, this does not work due to the need to clear their votes before redelegation. In practice the voter would undelegate, vote, wait until the referendum is closed, hope that there's no other referenda they would like to vote on, then redelegate. At best it's a temporally extended friction. At worst the voter goes unrepresented in voting for the duration of the vote clearing period.

        -

        Stakeholders

        +

        Stakeholders

        Runtime developers: If runtime developers are relying on the previous assumptions for their VotingHooks implementations, they will need to rethink their approach. In addition, a runtime migration is needed. Lastly, it is a serious change in governance that requires some consideration beyond the technical.

        App developers: Apps like Subsquare and Polkassembly would need to update their user interface logic. They will also need to handle the new error.

        Users: We will want users to be aware of the new functionality, though not required.

        Technical Writers: This change will require rewrites of documentation and tutorials.

        -

        Explanation

        +

        Explanation

        New Data & Runtime Logic

The Voting enum is first collapsed, as there is no longer a distinction between its variants. Then a (poll index -> retracted votes count) data item is added to the user's voting data stored in VotingFor. This keeps track of the per-poll balance that has been clawed back from the user by those delegating to them.

        The implementation must allow for the (poll index -> retracted votes) data to exist even if the user does not currently have a vote for that poll. A simple example that highlights the necessity is as follows: A delegator votes first, then the delegate does. If the delegator is not allowed to create the retracted votes data on the delegate, the tally count would be corrupted when the delegate votes.
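The bookkeeping described above can be sketched as follows. All names are hypothetical stand-ins rather than actual pallet-conviction-voting types; the sketch only shows why the retraction entry must be creatable before the delegate has voted on the poll.

```rust
use std::collections::BTreeMap;

type PollIndex = u32;
type Balance = u64;

/// Minimal stand-in for a delegate's voting record: total balance delegated
/// in, plus the per-poll balance clawed back by delegators who voted directly.
#[derive(Default)]
struct VotingRecord {
    delegated_in: Balance,
    retracted: BTreeMap<PollIndex, Balance>,
}

impl VotingRecord {
    /// Effective weight this account contributes when voting on `poll`:
    /// its own balance plus delegations, minus per-poll retractions.
    fn effective_weight(&self, own: Balance, poll: PollIndex) -> Balance {
        own + self.delegated_in - self.retracted.get(&poll).copied().unwrap_or(0)
    }

    /// A delegator votes directly on `poll`, retracting `amount` from us.
    /// Note this may create the entry before the delegate votes at all.
    fn retract(&mut self, poll: PollIndex, amount: Balance) {
        *self.retracted.entry(poll).or_default() += amount;
    }
}

fn main() {
    let mut bob = VotingRecord { delegated_in: 100, ..Default::default() };
    bob.retract(7, 40); // a delegator with 40 votes directly on poll 7 first
    assert_eq!(bob.effective_weight(10, 7), 70);  // 10 + 100 - 40
    assert_eq!(bob.effective_weight(10, 8), 110); // other polls unaffected
}
```

If the retraction entry could not be created ahead of the delegate's vote, the delegate's later vote on poll 7 would be tallied at full weight and double-count the delegator's balance.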

        @@ -1293,25 +1117,25 @@ The ext_crypto_ed25519_public_key_version_1 function writes the pub

        A user's locked balance will be the greater of the delegation lock and the voting lock.

        Migrations

        A runtime migration is necessary, though simple considering voting and delegation are currently separate.

        -

        Drawbacks

        +

        Drawbacks

        There are two potential drawbacks to this system -

        An unbounded rate of change of the voter preferences function

        If implemented, there will be no friction in delegating, undelegating, and voting. Therefore, there could be large and immediate shifts in the voter preferences function. In other voting systems we see bounds added to the rate of change (voting cycles, etc). That said, it is unclear whether this is desired or advantageous. Additionally, there are more easily parameterized and analytically tractable ways to handle this than what we currently have. See future directions.

        Lessened value in becoming a delegate

        If a delegate's voting power can be stripped from them at any point, then there is necessarily a reduction in their power within the system. This provides less incentive to become a delegate. But again, there are more customizable ways to handle this if it proves necessary.

        -

        Testing, Security, and Privacy

        +

        Testing, Security, and Privacy

        This change would mean a more complicated STF for voting, which would increase difficulty of hardening. Though sufficient unit testing should handle this with ease.

        -

        Performance, Ergonomics, and Compatibility

        -

        Performance

        +

        Performance, Ergonomics, and Compatibility

        +

        Performance

        The proposed changes would increase both the compute and storage requirements by about 2x for all voting functions. No change in complexity.

        -

        Ergonomics

        +

        Ergonomics

        Voting and delegation will both become more ergonomic for users, as there are no longer hard constraints affecting what you can do and when you can do it.

        -

        Compatibility

        +

        Compatibility

        Runtime developers will need to add the migration and ensure their hooks still work.

        App developers will need to update their user interfaces to accommodate the new functionality. They will need to handle the new error as well.

        -

        Prior Art and References

        +

        Prior Art and References

        A current WIP implementation can be found here. While WIP, it is STF complete & heavily commented for ease of understanding.

        -

        Unresolved Questions

        +

        Unresolved Questions

        None

        It is possible we would like to add a system parameter for the rate of change of the voting/delegation system. This could prevent wild swings in the voter preferences function and motivate/shield delegates by solidifying their positions over some amount of time. However, it's unclear that this would be valuable or even desirable.

@@ -1345,9 +1169,9 @@ The ext_crypto_ed25519_public_key_version_1 function writes the pub Authors: Andrew Berger - Syed Hosseini -

        Summary

        +

        Summary

This RFC is an amendment to RFC-0048. It proposes changing OpaqueKeysInner::create_ownership_proof and OpaqueKeys::ownership_proof_is_valid to invoke generation and validation procedures specific to each crypto type. This enables different crypto schemes to implement proof of possession that fits their security needs. In short, this RFC delegates the procedure of generating and validating proof of possession to the crypto schemes rather than dictating a uniform generation and verification.

        -

        Motivation

        +

        Motivation

        Following RFC-0048, all submitted keys accompany a signature of the account_id by the same key, proving that the submitter knows the private key corresponding to the submitted key. However, a scheme should mandate a context-specific approach for generating proof of possession and a different context for signing anything else to prevent rogue key attacks [3]. While this is critical for schemes with aggregatable public keys, the other (non-aggregatable) crypto schemes opt for backward compatibility and accept signatures not prepended with mandatory context.

        However, the current RFC does not allow using different API calls and procedures to generate proof of possession for different crypto schemes.

After this RFC, the procedure for generating and verifying proof of possession would be at the discretion of the crypto scheme itself, not deterministically tied to the way it signs other messages.

Stakeholders

        @@ -1356,7 +1180,7 @@ The ext_crypto_ed25519_public_key_version_1 function writes the pub
      • Polkadot node implementors
      • Validator operators
      -

      Explanation

      +

      Explanation

      The RFC does not change the structure introduced by RFC-0048. The proof is a sequence of signatures:

      #![allow(unused)]
       fn main() {
      @@ -1368,16 +1192,16 @@ The ext_crypto_ed25519_public_key_version_1 function writes the pub
       

      The prefix could alert signers if they are misled into signing false proof of possession statements. More importantly, a new crypto scheme could specify a different structure for its proof of possession.

      Because RFC-0048 has not been deployed, the version of the SessionKeys could still be set to 1 as requested by RFC-0048.
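The scheme-specific, context-prefixed proof of possession can be sketched with a toy scheme. The trait and the XOR-based "signature" below are purely illustrative stand-ins; a real scheme (sr25519, BLS, ...) would supply actual cryptography and could choose an entirely different proof structure.

```rust
/// Sketch of delegating proof-of-possession generation and verification to
/// the scheme itself, as this RFC proposes. Names are illustrative only.
trait ProofOfPossession {
    fn generate_proof_of_possession(&self, account_id: &[u8]) -> Vec<u8>;
    fn verify_proof_of_possession(&self, account_id: &[u8], proof: &[u8]) -> bool;
}

struct ToyScheme {
    secret: u8,
}

impl ToyScheme {
    // Context-prefixed message: a signer can recognise (and refuse to sign)
    // a proof-of-possession statement presented outside this context.
    fn pop_message(account_id: &[u8]) -> Vec<u8> {
        let mut m = b"POP_".to_vec();
        m.extend_from_slice(account_id);
        m
    }

    // Toy "signature": XOR with the secret. Not cryptography, just a stand-in.
    fn toy_sign(&self, msg: &[u8]) -> Vec<u8> {
        msg.iter().map(|b| b ^ self.secret).collect()
    }
}

impl ProofOfPossession for ToyScheme {
    fn generate_proof_of_possession(&self, account_id: &[u8]) -> Vec<u8> {
        self.toy_sign(&Self::pop_message(account_id))
    }
    fn verify_proof_of_possession(&self, account_id: &[u8], proof: &[u8]) -> bool {
        proof == self.toy_sign(&Self::pop_message(account_id))
    }
}

fn main() {
    let key = ToyScheme { secret: 0x5a };
    let proof = key.generate_proof_of_possession(b"alice");
    assert!(key.verify_proof_of_possession(b"alice", &proof));
    assert!(!key.verify_proof_of_possession(b"bob", &proof)); // wrong account
}
```

Because the trait owns both generation and verification, a future aggregatable scheme can replace the prefixed signature with whatever construction its rogue-key defence requires.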

      -

      Drawbacks

      +

      Drawbacks

Crypto schemes need to implement explicit generate_proof_of_possession and verify_proof_of_possession runtime APIs in addition to the old capabilities (sign, verify, etc.).

      -

      Testing, Security, and Privacy

      +

      Testing, Security, and Privacy

      The proof of possession for current crypto schemes is virtually identical to the one defined in RFC-0048. On the other hand, the changes proposed by this RFC allow the generation of secure proof of possession for BLS keys.

      -

      Performance, Ergonomics, and Compatibility

      -

      Performance

      +

      Performance, Ergonomics, and Compatibility

      +

      Performance

      The performance is the same as the one discussed in RFC-0048.

      -

      Ergonomics

      +

      Ergonomics

      Separating the generation of proof of possession from signing allows a crypto scheme more freedom to implement proof of possession that is fitted to its needs.

      -

      Compatibility

      +

      Compatibility

      The significant difference is that proof of possession suggested by RFC-0048 is signed:

      rust
       account_id
      @@ -1387,9 +1211,9 @@ account_id
       "POP_"|account_id
       

for the current crypto schemes. However, future crypto schemes such as BLS, which are not bound by backward compatibility, could produce more sophisticated proofs of possession.

Prior Art and References

      This is a minor amendment to RFC-0048.

Unresolved Questions

      None.

- [1] The Substrate implementation of proof-of-possession generation for all crypto schemes (current and experimental) can be found in Pull 6010.

Authors: Gavin Wood

Summary

      This proposes a periodic, sale-based method for assigning Polkadot Coretime, the analogue of "block space" within the Polkadot Network. The method takes into account the need for long-term capital expenditure planning for teams building on Polkadot, yet also provides a means to allow Polkadot to capture long-term value in the resource which it sells. It supports the possibility of building rich and dynamic secondary markets to optimize resource allocation and largely avoids the need for parameterization.

Motivation

      Present System

      The Polkadot Ubiquitous Computer, or just Polkadot UC, represents the public service provided by the Polkadot Network. It is a trust-free, WebAssembly-based, multicore, internet-native omnipresent virtual machine which is highly resilient to interference and corruption.

The present system of allocating the limited resources of the Polkadot Ubiquitous Computer is through a process known as parachain slot auctions. This is a parachain-centric paradigm whereby a single core is long-term allocated to a single parachain, which itself implies a Substrate/Cumulus-based chain secured and connected via the Relay-chain. Slot auctions are on-chain candle auctions which proceed for several days and result in the core being assigned to the parachain for six months at a time, up to 24 months in advance. Practically speaking, we only see two-year periods being bid upon and leased.

    • The solution SHOULD avoid creating additional dependencies on functionality which the Relay-chain need not strictly provide for the delivery of the Polkadot UC.
    • Furthermore, the design SHOULD be implementable and deployable in a timely fashion; three months from the acceptance of this RFC should not be unreasonable.

Stakeholders

      Primary stakeholder sets are:

      • Protocol researchers and developers, largely represented by the Polkadot Fellowship and Parity Technologies' Engineering division.

      Socialization:

The essentials of this proposal were presented at Polkadot Decoded 2023 Copenhagen on the Main Stage. A small amount of socialization at the Parachain Summit preceded it, and some substantial discussion followed it. The Parity Ecosystem team is currently soliciting views from ecosystem teams who would be key stakeholders.

Explanation

      Overview

      Upon implementation of this proposal, the parachain-centric slot auctions and associated crowdloans cease. Instead, Coretime on the Polkadot UC is sold by the Polkadot System in two separate formats: Bulk Coretime and Instantaneous Coretime.

      When a Polkadot Core is utilized, we say it is dedicated to a Task rather than a "parachain". The Task to which a Core is dedicated may change at every Relay-chain block and while one predominant type of Task is to secure a Cumulus-based blockchain (i.e. a parachain), other types of Tasks are envisioned.
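As a purely hypothetical illustration of the Task concept (the enum, its variants, and the para ids below are invented for exposition, not the actual Relay-chain types), a core's dedication over consecutive Relay-chain blocks could be modelled as:

```rust
// Hypothetical illustration only -- not the actual Relay-chain types. A core
// is dedicated to a Task, and the Task may change at every Relay-chain block.
#[derive(Debug, PartialEq)]
enum Task {
    Parachain(u32),    // secure a Cumulus-based chain with the given para id
    InstantaneousPool, // serve buyers of Instantaneous Coretime
    Idle,              // the core is unassigned for this block
}

// One hypothetical core across three consecutive Relay-chain blocks:
fn schedule() -> Vec<Task> {
    vec![Task::Parachain(2000), Task::InstantaneousPool, Task::Idle]
}

fn main() {
    let s = schedule();
    assert_eq!(s[0], Task::Parachain(2000));
    assert_ne!(s[1], s[2]); // the task changed between blocks
}
```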

    • Governance upgrade proposal(s).
    • Monitoring of the upgrade process.
Performance, Ergonomics and Compatibility

      No specific considerations.

      Parachains already deployed into the Polkadot UC must have a clear plan of action to migrate to an agile Coretime market.

      While this proposal does not introduce documentable features per se, adequate documentation must be provided to potential purchasers of Polkadot Coretime. This SHOULD include any alterations to the Polkadot-SDK software collection.

Testing, Security and Privacy

      Regular testing through unit tests, integration tests, manual testnet tests, zombie-net tests and fuzzing SHOULD be conducted.

      A regular security review SHOULD be conducted prior to deployment through a review by the Web3 Foundation economic research group.

      Any final implementation MUST pass a professional external security audit.

    • The percentage of cores to be sold as Bulk Coretime.
    • The fate of revenue collected.
Prior Art and References

Robert Habermeier initially wrote on the subject of a blockspace-centric Polkadot in the article Polkadot Blockspace over Blockchains. While not going into details, the article served as an early reframing piece for moving beyond one-slot-per-chain models and building out secondary market infrastructure for resource allocation.


Authors: Gavin Wood, Robert Habermeier

Summary

    In the Agile Coretime model of the Polkadot Ubiquitous Computer, as proposed in RFC-1 and RFC-3, it is necessary for the allocating parachain (envisioned to be one or more pallets on a specialised Brokerage System Chain) to communicate the core assignments to the Relay-chain, which is responsible for ensuring those assignments are properly enacted.

    This is a proposal for the interface which will exist around the Relay-chain in order to communicate this information and instructions.

Motivation

    The background motivation for this interface is splitting out coretime allocation functions and secondary markets from the Relay-chain onto System parachains. A well-understood and general interface is necessary for ensuring the Relay-chain receives coretime allocation instructions from one or more System chains without introducing dependencies on the implementation details of either side.

    Requirements

    • The interface MUST allow for the allocating chain to instruct changes to the number of cores which it is able to allocate.
    • The interface MUST allow for the allocating chain to be notified of changes to the number of cores which are able to be allocated by the allocating chain.
Stakeholders

    Primary stakeholder sets are:

    • Developers of the Relay-chain core-management logic.

    Socialization:

The content of this RFC was discussed in the Polkadot Fellows channel.

Explanation

    The interface has two sections: The messages which the Relay-chain is able to receive from the allocating parachain (the UMP message types), and messages which the Relay-chain is able to send to the allocating parachain (the DMP message types). These messages are expected to be able to be implemented in a well-known pallet and called with the XCM Transact instruction.

    Future work may include these messages being introduced into the XCM standard.

    UMP Message Types


    Realistic Limits of the Usage

    For request_revenue_info, a successful request should be possible if when is no less than the Relay-chain block number on arrival of the message less 100,000.

    For assign_core, a successful request should be possible if begin is no less than the Relay-chain block number on arrival of the message plus 10 and workload contains no more than 100 items.
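A minimal sketch of these two limits; all constant and function names are assumptions of this example rather than actual pallet code.

```rust
// Sketch of the stated limits (names hypothetical, not the actual pallet code).
const REVENUE_HISTORY: u32 = 100_000; // how far back request_revenue_info may reach
const ASSIGN_LEAD_TIME: u32 = 10;     // minimum lead time for assign_core
const MAX_WORKLOAD_ITEMS: usize = 100;

// `when` must be no less than the arrival block number less 100,000.
fn revenue_request_ok(now: u32, when: u32) -> bool {
    when >= now.saturating_sub(REVENUE_HISTORY)
}

// `begin` must be no less than the arrival block number plus 10, and the
// workload may contain at most 100 items.
fn assign_core_ok(now: u32, begin: u32, workload_len: usize) -> bool {
    begin >= now + ASSIGN_LEAD_TIME && workload_len <= MAX_WORKLOAD_ITEMS
}

fn main() {
    assert!(revenue_request_ok(200_000, 100_000));
    assert!(!revenue_request_ok(200_000, 99_999));
    assert!(assign_core_ok(1_000, 1_010, 100));
    assert!(!assign_core_ok(1_000, 1_009, 100));
    assert!(!assign_core_ok(1_000, 1_010, 101));
}
```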

Performance, Ergonomics and Compatibility

    No specific considerations.

Testing, Security and Privacy

    Standard Polkadot testing and security auditing applies.

    The proposal introduces no new privacy concerns.


    RFC-3 proposes a means of implementing the high-level allocations within the Relay-chain.

    Drawbacks, Alternatives and Unknowns

    None at present.

Prior Art and References

    None.


Authors: Joe Petrowski

Summary

As core functionality moves from the Relay Chain into system chains, so increases the reliance on the liveness of these chains for the use of the network. It is not economically scalable, nor necessary from a game-theoretic perspective, to pay collators large rewards. This RFC proposes a mechanism -- part technical and part social -- for ensuring reliable collator sets that are resilient to attempts to stop any subsystem of the Polkadot protocol.

Motivation

In order to guarantee access to Polkadot's system, the collators on its system chains must propose blocks (provide liveness) and allow all transactions to eventually be included. That is, some collators may censor transactions, but there must exist at least one collator in the set who will include a given transaction; it must not be possible for the set as a whole to censor any subset of transactions.

  • Collators selected by governance SHOULD have a reasonable expectation that the Treasury will reimburse their operating costs.
Stakeholders

    • Infrastructure providers (people who run validator/collator nodes)
    • Polkadot Treasury
Explanation

This protocol builds on the existing Collator Selection pallet and its notion of Invulnerables. Invulnerables are collators (identified by their AccountIds) who are included in the collator set every session without needing to compete for a slot. Under this proposal, each system chain's collator set would be, approximately:

• 20 collators in total,
• of which 15 are Invulnerable, and
• five are elected by bond.
Drawbacks

    The primary drawback is a reliance on governance for continued treasury funding of infrastructure costs for Invulnerable collators.

Testing, Security, and Privacy

The vast majority of cases can be covered by unit testing. Integration tests should ensure that the Collator Selection UpdateOrigin, which has permission to modify the Invulnerables and the desired number of Candidates, can handle updates over XCM from the system's governance location.

Performance, Ergonomics, and Compatibility

    This proposal has very little impact on most users of Polkadot, and should improve the performance of system chains by reducing the number of missed blocks.

Performance

    As chains have strict PoV size limits, care must be taken in the PoV impact of the session manager. Appropriate benchmarking and tests should ensure that conservative limits are placed on the number of Invulnerables and Candidates.

Ergonomics

    The primary group affected is Candidate collators, who, after implementation of this RFC, will need to compete in a bond-based election rather than a race to claim a Candidate spot.
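The bond-based selection described above can be sketched as follows; the function and its shape are hypothetical illustrations, not the actual Collator Selection pallet logic.

```rust
// Sketch (names hypothetical): the session's collator set is the Invulnerables
// plus the highest-bonded candidates filling the remaining desired slots.
fn select_collators(
    invulnerables: Vec<&'static str>,
    mut candidates: Vec<(&'static str, u128)>, // (account, bond)
    desired: usize,
) -> Vec<&'static str> {
    let mut set = invulnerables;
    candidates.sort_by(|a, b| b.1.cmp(&a.1)); // highest bond first
    for (who, _bond) in candidates {
        if set.len() >= desired {
            break;
        }
        set.push(who);
    }
    set
}

fn main() {
    let set = select_collators(
        vec!["inv1", "inv2", "inv3"],
        vec![("low", 10), ("high", 100), ("mid", 50)],
        5,
    );
    assert_eq!(set, vec!["inv1", "inv2", "inv3", "high", "mid"]);
}
```

The key contrast with the current race-to-claim approach is that a candidate's position depends only on its bond, not on who registered first.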

Compatibility

    This RFC is compatible with the existing implementation and can be handled via upgrades and migration.

Prior Art and References

    Written Discussions

    • GitHub: Collator Selection Roadmap

    • SR Labs Auditors
    • Current collators including Paranodes, Stake Plus, Turboflakes, Peter Mensik, SIK, and many more.
Unresolved Questions

    None at this time.

There may exist in the future system chains for which this model of collator selection is not appropriate. These chains should be evaluated on a case-by-case basis.

Authors: Pierre Krieger

Summary

    The full nodes of the Polkadot peer-to-peer network maintain a distributed hash table (DHT), which is currently used for full nodes discovery and validators discovery purposes.

    This RFC proposes to extend this DHT to be used to discover full nodes of the parachains of Polkadot.

Motivation

    The maintenance of bootnodes has long been an annoyance for everyone.

When a bootnode is newly-deployed or removed, every chain specification must be updated in order to take the change into account. This has led to various non-optimal solutions, such as pulling chain specifications from GitHub repositories. When it comes to RPC nodes, UX developers often have trouble finding up-to-date addresses of parachain RPC nodes. With the ongoing migration from RPC nodes to light clients, similar problems would happen with chain specifications as well.


    Because the list of bootnodes in chain specifications is so annoying to modify, the consequence is that the number of bootnodes is rather low (typically between 2 and 15). In order to better resist downtimes and DoS attacks, a better solution would be to use every node of a certain chain as potential bootnode, rather than special-casing some specific nodes.

While this RFC doesn't solve these problems for relay chains, it aims to solve them for parachains by storing the list of all the full nodes of a parachain on the relay chain DHT.

    Assuming that this RFC is implemented, and that light clients are used, deploying a parachain wouldn't require more work than registering it onto the relay chain and starting the collators. There wouldn't be any need for special infrastructure nodes anymore.

Stakeholders

    This RFC has been opened on my own initiative because I think that this is a good technical solution to a usability problem that many people are encountering and that they don't realize can be solved.

Explanation

The content of this RFC only applies to parachains and parachain nodes that are "Substrate-compatible". It is in no way mandatory for parachains to comply with this RFC.

    Note that "Substrate-compatible" is very loosely defined as "implements the same mechanisms and networking protocols as Substrate". The author of this RFC believes that "Substrate-compatible" should be very precisely specified, but there is controversy on this topic.

    While a lot of this RFC concerns the implementation of parachain nodes, it makes use of the resources of the Polkadot chain, and as such it is important to describe them in the Polkadot specification.


    The maximum size of a response is set to an arbitrary 16kiB. The responding side should make sure to conform to this limit. Given that fork_id is typically very small and that the only variable-length field is addrs, this is easily achieved by limiting the number of addresses.

    Implementers should be aware that addrs might be very large, and are encouraged to limit the number of addrs to an implementation-defined value.
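A sketch of the replier-side truncation, using simplified byte accounting; a real implementation would measure the actual protobuf-encoded length, and the names here are illustrative.

```rust
const MAX_RESPONSE_SIZE: usize = 16 * 1024; // the 16 kiB limit

// Simplified accounting: treat the response size as a fixed overhead (the
// peer_id, genesis_hash, and fork_id fields) plus the length of each address
// entry. Real code would measure the protobuf-encoded length instead.
fn truncate_addrs(fixed_overhead: usize, addrs: Vec<Vec<u8>>) -> Vec<Vec<u8>> {
    let mut out = Vec::new();
    let mut size = fixed_overhead;
    for a in addrs {
        if size + a.len() > MAX_RESPONSE_SIZE {
            break; // conform to the limit by dropping further addresses
        }
        size += a.len();
        out.push(a);
    }
    out
}

fn main() {
    // 200 addresses of 100 bytes each cannot all fit under 16 kiB.
    let addrs: Vec<Vec<u8>> = (0..200).map(|_| vec![0u8; 100]).collect();
    let kept = truncate_addrs(64, addrs);
    assert!(kept.len() < 200);
    assert!(64 + kept.iter().map(|a| a.len()).sum::<usize>() <= MAX_RESPONSE_SIZE);
}
```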

Drawbacks

The peer_id and addrs fields are in theory not strictly needed, as the PeerId and addresses could always be equal to the PeerId and addresses of the node being registered as the provider and serving the response. However, the Cumulus implementation currently uses two different networking stacks, one for the parachain and one for the relay chain, using two separate PeerIds and addresses, and as such the PeerId and addresses of the other networking stack must be indicated. Asking them to use only one networking stack wouldn't be feasible in a realistic time frame.

    The values of the genesis_hash and fork_id fields cannot be verified by the requester and are expected to be unused at the moment. Instead, a client that desires connecting to a parachain is expected to obtain the genesis hash and fork ID of the parachain from the parachain chain specification. These fields are included in the networking protocol nonetheless in case an acceptable solution is found in the future, and in order to allow use cases such as discovering parachains in a not-strictly-trusted way.

Testing, Security, and Privacy

    Because not all nodes want to be used as bootnodes, implementers are encouraged to provide a way to disable this mechanism. However, it is very much encouraged to leave this mechanism on by default for all parachain nodes.

    This mechanism doesn't add or remove any security by itself, as it relies on existing mechanisms. However, if the principle of chain specification bootnodes is entirely replaced with the mechanism described in this RFC (which is the objective), then it becomes important whether the mechanism in this RFC can be abused in order to make a parachain unreachable.

Furthermore, when a large number of providers (here, a provider is a bootnode) are registered for a given key, only the providers whose PeerIds are closest to that key are actually kept by the DHT.

    For this reason, an attacker can abuse this mechanism by randomly generating libp2p PeerIds until they find the 20 entries closest to the key representing the target parachain. They are then in control of the parachain bootnodes. Because the key changes periodically and isn't predictable, and assuming that the Polkadot DHT is sufficiently large, it is not realistic for an attack like this to be maintained in the long term.
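The notion of "closest" here is Kademlia's XOR distance. A toy sketch, with small integer IDs standing in for real PeerIds:

```rust
// Toy illustration of Kademlia closeness: the record holders for a key are
// the nodes whose IDs have the smallest XOR distance to it. Here IDs are
// plain u32s; real PeerIds are multihashes, and k would be 20.
fn closest(mut peers: Vec<u32>, key: u32, k: usize) -> Vec<u32> {
    peers.sort_by_key(|p| p ^ key); // XOR distance to the key
    peers.truncate(k);
    peers
}

fn main() {
    let peers = vec![0b0001, 0b0111, 0b1000, 0b1010, 0b1111];
    // For key 0b1011, the XOR-closest peers are 0b1010 (dist 1) and 0b1000 (dist 3).
    assert_eq!(closest(peers, 0b1011, 2), vec![0b1010, 0b1000]);
}
```

An attacker who can freely generate IDs can therefore grind until its IDs land closer to the key than everyone else's, which is exactly the attack described above; the periodic, unpredictable key rotation is what makes sustaining it impractical.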

    Furthermore, parachain clients are expected to cache a list of known good nodes on their disk. If the mechanism described in this RFC went down, it would only prevent new nodes from accessing the parachain, while clients that have connected before would not be affected.

Performance, Ergonomics, and Compatibility

Performance

    The DHT mechanism generally has a low overhead, especially given that publishing providers is done only every 24 hours.

    Doing a Kademlia iterative query then sending a provider record shouldn't take more than around 50 kiB in total of bandwidth for the parachain bootnode.

    Assuming 1000 parachain full nodes, the 20 Polkadot full nodes corresponding to a specific parachain will each receive a sudden spike of a few megabytes of networking traffic when the key rotates. Again, this is relatively negligible. If this becomes a problem, one can add a random delay before a parachain full node registers itself to be the provider of the key corresponding to BabeApi_next_epoch.

Maybe the biggest uncertainty is the traffic that the 20 Polkadot full nodes will receive from light clients that desire knowing the bootnodes of a parachain. Light clients are generally encouraged to cache the peers that they use between restarts, so they should only query these 20 Polkadot full nodes at their first initialization. If this ever becomes a problem, this value of 20 is an arbitrary constant that can be increased for more redundancy.

Ergonomics

    Irrelevant.

Compatibility

    Irrelevant.

Prior Art and References

    None.

Unresolved Questions

While it fundamentally doesn't change much in this RFC, using BabeApi_currentEpoch and BabeApi_nextEpoch might be inappropriate. I'm not familiar enough with good practices within the runtime to have an opinion here. Should it be an entirely new pallet?

    It is possible that in the future a client could connect to a parachain without having to rely on a trusted parachain specification.

Authors: Pierre Krieger

Summary

    Improve the networking messages that query storage items from the remote, in order to reduce the bandwidth usage and number of round trips of light clients.

Motivation

Clients on the Polkadot peer-to-peer network can be divided into two categories: full nodes and light clients. So-called full nodes are nodes that store the content of the chain locally on their disk, while light clients are nodes that don't. In order to access, for example, the balance of an account, a full node can do a disk read, while a light client needs to send a network message to a full node and wait for the full node to reply with the desired value. This reply is in the form of a Merkle proof, which makes it possible for the light client to verify the correctness of the value.

Unfortunately, this network protocol suffers from some issues:

Once Polkadot and Kusama have transitioned to state_version = 1, which modifies the format of the trie entries, it will be possible to generate Merkle proofs that contain only the hashes of values in the storage. Thanks to this, it is already possible to prove the existence of a key without sending its entire value (only its hash), or to prove that a value has changed or not between two blocks (by sending just their hashes). Thus, the only reason why the aforementioned issues exist is that the existing networking messages don't give the querier the possibility to query this. This is what this proposal aims at fixing.

Stakeholders

    This is the continuation of https://github.com/w3f/PPPs/pull/10, which itself is the continuation of https://github.com/w3f/PPPs/pull/5.

Explanation

    The protobuf schema of the networking protocol can be found here: https://github.com/paritytech/substrate/blob/5b6519a7ff4a2d3cc424d78bc4830688f3b184c0/client/network/light/src/schema/light.v1.proto

    The proposal is to modify this protocol in this way:

     Also note that child tries aren't considered as descendants of the main trie when it comes to the includeDescendants flag. In other words, if the request concerns the main trie, no content coming from child tries is ever sent back.

    This protocol keeps the same maximum response size limit as currently exists (16 MiB). It is not possible for the querier to know in advance whether its query will lead to a reply that exceeds the maximum size. If the reply is too large, the replier should send back only a limited number (but at least one) of requested items in the proof. The querier should then send additional requests for the rest of the items. A response containing none of the requested items is invalid.

    The server is allowed to silently discard some keys of the request if it judges that the number of requested keys is too high. This is in line with the fact that the server might truncate the response.
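The querier-side behaviour described above can be sketched as a retry loop; the toy server below is an assumption of this example, not part of the protocol.

```rust
// Sketch of the querier-side retry loop: keep re-requesting the keys that
// were not covered by the previous (possibly truncated) response.
fn fetch_all(
    mut pending: Vec<&'static str>,
    server: impl Fn(Vec<&'static str>) -> Vec<&'static str>,
) -> Result<Vec<&'static str>, &'static str> {
    let mut proved = Vec::new();
    while !pending.is_empty() {
        let got = server(pending.clone());
        if got.is_empty() {
            // A response containing none of the requested items is invalid.
            return Err("invalid empty response");
        }
        pending.retain(|k| !got.contains(k));
        proved.extend(got);
    }
    Ok(proved)
}

fn main() {
    // Toy server: proves at most two of the requested keys per response,
    // mimicking a truncated (but valid) reply.
    let server = |keys: Vec<&'static str>| keys.into_iter().take(2).collect::<Vec<_>>();
    let proved = fetch_all(vec!["a", "b", "c", "d", "e"], server).unwrap();
    assert_eq!(proved, vec!["a", "b", "c", "d", "e"]);
}
```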

Drawbacks

This proposal doesn't handle one specific situation: what if a proof containing a single specific item would exceed the response size limit? For example, if the response size limit were 1 MiB, querying the runtime code (which is typically 1.0 to 1.5 MiB) would be impossible, as it's impossible to generate a proof of less than 1 MiB. The response size limit is currently 16 MiB, meaning that no single storage item may exceed 16 MiB.

    Unfortunately, because it's impossible to verify a Merkle proof before having received it entirely, parsing the proof in a streaming way is also not possible.

    A way to solve this issue would be to Merkle-ize large storage items, so that a proof could include only a portion of a large storage item. Since this would require a change to the trie format, it is not realistically feasible in a short time frame.

Testing, Security, and Privacy

    The main security consideration concerns the size of replies and the resources necessary to generate them. It is for example easily possible to ask for all keys and values of the chain, which would take a very long time to generate. Since responses to this networking protocol have a maximum size, the replier should truncate proofs that would lead to the response being too large. Note that it is already possible to send a query that would lead to a very large reply with the existing network protocol. The only thing that this proposal changes is that it would make it less complicated to perform such an attack.

Implementers of the replier side should be careful to detect early on when a reply would exceed the maximum reply size, rather than unconditionally generate a reply, as this could consume a very large amount of CPU, disk I/O, and memory. Existing implementations might currently be accidentally protected from such an attack thanks to the fact that requests have a maximum size, and thus that the list of keys in the query was bounded. After this proposal, this accidental protection would no longer exist.

    Malicious server nodes might truncate Merkle proofs even when they don't strictly need to, and it is not possible for the client to (easily) detect this situation. However, malicious server nodes can already do undesirable things such as throttle down their upload bandwidth or simply not respond. There is no need to handle unnecessarily truncated Merkle proofs any differently than a server simply not answering the request.

Performance, Ergonomics, and Compatibility

Performance

    It is unclear to the author of the RFC what the performance implications are. Servers are supposed to have limits to the amount of resources they use to respond to requests, and as such the worst that can happen is that light client requests become a bit slower than they currently are.

Ergonomics

    Irrelevant.

Compatibility

The prior networking protocol is maintained for now. The older version of this protocol could be removed at some point in the future.

Prior Art and References

    None. This RFC is a clean-up of an existing mechanism.

Unresolved Questions

    None

The current networking protocol could eventually be deprecated. Additionally, the current "state requests" protocol (used for warp syncing) could also be deprecated in favor of this one.

Authors: Jonas Gehrlein

Summary

The Polkadot UC will generate revenue from the sale of available Coretime. The question then arises: how should we handle these revenues? Broadly, there are two reasonable paths – burning the revenue and thereby removing it from total issuance, or diverting it to the Treasury. This Request for Comment (RFC) presents arguments favoring burning as the preferred mechanism for handling revenues from Coretime sales.

Motivation

How to handle the revenue accrued from Coretime sales is an important economic question that influences the value of DOT and should be properly discussed before deciding on either option. Now is the best time to start this discussion.

Stakeholders

    Polkadot DOT token holders.

Explanation

This RFC discusses the potential benefits of burning the revenue accrued from Coretime sales instead of diverting it to the Treasury. The arguments in favor are as follows.

    It's in the interest of the Polkadot community to have a consistent and predictable Treasury income, because volatility in the inflow can be damaging, especially in situations when it is insufficient. As such, this RFC operates under the presumption of a steady and sustainable Treasury income flow, which is crucial for the Polkadot community's stability. The assurance of a predictable Treasury income, as outlined in a prior discussion here, or through other equally effective measures, serves as a baseline assumption for this argument.

Consequently, we need not concern ourselves with this particular issue here. This naturally raises the question: why should we introduce additional volatility to the Treasury by aligning it with the variable Coretime sales? It's worth noting that Coretime revenues often exhibit an inverse relationship with periods when Treasury spending should ideally be ramped up. During periods of low Coretime utilization (indicated by lower revenue), the Treasury should spend more on projects and endeavours to increase the demand for Coretime. This pattern underscores that Coretime sales, by their very nature, are an inconsistent and unpredictable source of funding for the Treasury. Given the importance of maintaining a steady and predictable inflow, it's unnecessary to rely on another volatile mechanism. Some might argue that we could have both: a steady inflow (from inflation) and some added bonus from Coretime sales, but burning the revenue would offer further benefits as described below.

Authors: Joe Petrowski

Summary

    Since the introduction of the Collectives parachain, many groups have expressed interest in forming new -- or migrating existing groups into -- on-chain collectives. While adding a new collective is relatively simple from a technical standpoint, the Fellowship will need to merge new pallets into the Collectives parachain for each new collective. This RFC proposes a means for the network to ratify a new collective, thus instructing the Fellowship to instate it in the runtime.

Motivation

    Many groups have expressed interest in representing collectives on-chain. Some of these include:

    • Parachain technical fellowship (new)
This RFC proposes a process by which a group could gain a path to having its collective accepted on-chain as part of the protocol. Acceptance would instruct the Fellowship to include the new collective, with a given initial configuration, in the runtime. However, the network, not the Fellowship, should ultimately decide which collectives are in the interest of the network.

Stakeholders

      • Polkadot stakeholders who would like to organize on-chain.
      • Technical Fellowship, in its role of maintaining system runtimes.
Explanation

      The group that wishes to operate an on-chain collective should publish the following information:

• Charter, including the collective's mandate and how it benefits Polkadot.

For removal, the Fellowship would help identify the pallet indices associated with a given collective; a removal referendum should be carried out whether or not the Fellowship members agree with the removal.

        Collective removal may also come with other governance calls, for example voiding any scheduled Treasury spends that would fund the given collective.

Drawbacks

        Passing a Root origin referendum is slow. However, given the network's investment (in terms of code maintenance and salaries) in a new collective, this is an appropriate step.

Testing, Security, and Privacy

        No impacts.

Performance, Ergonomics, and Compatibility

Generally, all new collectives will be in the Collectives parachain. Thus, performance impacts should strictly be limited to this parachain and not affect others. As the majority of logic for collectives is generalized and reusable, we expect most collectives to be instances of similar subsets of modules. That is, new collectives should generally be compatible with UIs and other services that provide collective-related functionality, with little modification needed to support new ones.

Prior Art and References

        The launch of the Technical Fellowship, see the initial forum post.

Unresolved Questions

        None at this time.


Authors: Oliver Tale-Yazdi

Summary

Introduces breaking changes to the Core runtime API by letting Core::initialize_block return an enum. The version of Core is bumped from 4 to 5.

Motivation

        The main feature that motivates this RFC are Multi-Block-Migrations (MBM); these make it possible to split a migration over multiple blocks.
Further, it would be nice not to hinder the possibility of implementing a new hook, poll, that runs at the beginning of the block when there are no MBMs and has access to AllPalletsWithSystem. This hook could then be used to replace the use of on_initialize and on_finalize for non-deadline-critical logic.
        In a similar fashion, it should not hinder the future addition of a System::PostInherents callback that always runs after all inherents were applied.

Stakeholders

        • Substrate Maintainers: They have to implement this, including tests, audit and maintenance burden.
        • Polkadot Parachain Teams: They have to adapt to the breaking changes but then eventually have multi-block migrations available.
Explanation

        Core::initialize_block

        This runtime API function is changed from returning () to ExtrinsicInclusionMode:

fn initialize_block(header: &<Block as BlockT>::Header) -> ExtrinsicInclusionMode;
      • 1. Multi-Block-Migrations: The runtime is being put into lock-down mode for the duration of the migration process by returning OnlyInherents from initialize_block. This ensures that no user provided transaction can interfere with the migration process. It is absolutely necessary to ensure this, otherwise a transaction could call into un-migrated storage and violate storage invariants.

        2. poll is possible by using apply_extrinsic as entry-point and not hindered by this approach. It would not be possible to use a pallet inherent like System::last_inherent to achieve this for two reasons: First is that pallets do not have access to AllPalletsWithSystem which is required to invoke the poll hook on all pallets. Second is that the runtime does currently not enforce an order of inherents.

        3. System::PostInherents can be done in the same manner as poll.
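The lock-down behavior described in point 1 can be sketched as follows. The variant names follow the modes discussed in this RFC (AllExtrinsics/OnlyInherents); the helper function is illustrative, not actual SDK code.

```rust
// Sketch of the new return type and how a block-builder might react to it.

#[derive(Debug, PartialEq)]
pub enum ExtrinsicInclusionMode {
    /// All extrinsics are allowed in this block.
    AllExtrinsics,
    /// Only inherents are allowed, e.g. while multi-block migrations run.
    OnlyInherents,
}

/// A block-builder consults the mode returned by `initialize_block`:
/// user transactions are only pulled from the pool in `AllExtrinsics` mode.
fn include_user_transactions(mode: &ExtrinsicInclusionMode) -> bool {
    matches!(mode, ExtrinsicInclusionMode::AllExtrinsics)
}

fn main() {
    assert!(include_user_transactions(&ExtrinsicInclusionMode::AllExtrinsics));
    assert!(!include_user_transactions(&ExtrinsicInclusionMode::OnlyInherents));
}
```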

Drawbacks

        The previous drawback of cementing the order of inherents has been addressed and removed by redesigning the approach. No further drawbacks have been identified thus far.

Testing, Security, and Privacy

        The new logic of initialize_block can be tested by checking that the block-builder will skip transactions when OnlyInherents is returned.

        Security: n/a

        Privacy: n/a

Performance, Ergonomics, and Compatibility

Performance

        The performance overhead is minimal in the sense that no clutter was added after fulfilling the requirements. The only performance difference is that initialize_block also returns an enum that needs to be passed through the WASM boundary. This should be negligible.

Ergonomics

        The new interface allows for more extensible runtime logic. In the future, this will be utilized for multi-block-migrations which should be a huge ergonomic advantage for parachain developers.

Compatibility

        The advice here is OPTIONAL and outside of the RFC. To not degrade user experience, it is recommended to ensure that an updated node can still import historic blocks.

Prior Art and References

        The RFC is currently being implemented in polkadot-sdk#1781 (formerly substrate#14275). Related issues and merge requests:

Unresolved Questions

Please suggest a better name for BlockExecutiveMode. We already tried: RuntimeExecutiveMode, ExtrinsicInclusionMode. The modes Normal and Minimal were previously called AllExtrinsics and OnlyInherents, so if you have naming preferences, please post them.

Authors: Bryan Chen

Summary

        This RFC proposes a set of changes to the parachain lock mechanism. The goal is to allow a parachain manager to self-service the parachain without root track governance action.

This is achieved by removing the existing lock conditions and only locking a parachain when:

• A parachain manager explicitly locks the parachain
• OR a parachain block is produced successfully
Motivation

The manager of a parachain has permission to manage the parachain when the parachain is unlocked. Parachains are locked by default when onboarded to a slot. This requires that the parachain wasm/genesis be valid; otherwise a root track governance action on the relaychain is required to update the parachain.

        The current reliance on root track governance actions for managing parachains can be time-consuming and burdensome. This RFC aims to address this technical difficulty by allowing parachain managers to take self-service actions, rather than relying on general public voting.

        The key scenarios this RFC seeks to improve are:


• A parachain SHOULD be locked when it has successfully produced its first block.
• A parachain manager MUST be able to perform a lease swap without having a running parachain.
Stakeholders

      • Parachain teams
      • Parachain users
Explanation

      Status quo

A parachain can either be locked or unlocked. With the parachain locked, the parachain manager does not have any privileges. With the parachain unlocked, the parachain manager can perform the following actions with the paras_registrar pallet:


• The parachain never produced a block, including from expired leases.
• The parachain manager never explicitly locked the parachain.
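The proposed rule can be sketched as a single predicate. The names here (LockState, is_locked) are illustrative, not the actual paras_registrar API: a parachain is locked iff the manager explicitly locked it or it has successfully produced a block.

```rust
// Minimal sketch of the proposed lock rule, under illustrative names.

struct LockState {
    explicitly_locked: bool,
    produced_block: bool,
}

fn is_locked(s: &LockState) -> bool {
    s.explicitly_locked || s.produced_block
}

fn main() {
    // Onboarded with invalid wasm/genesis and never produced a block: unlocked,
    // so the manager can self-service a fix without root governance.
    assert!(!is_locked(&LockState { explicitly_locked: false, produced_block: false }));
    // Producing the first block locks the parachain.
    assert!(is_locked(&LockState { explicitly_locked: false, produced_block: true }));
}
```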
Drawbacks

Parachain locks are designed in such a way as to ensure the decentralization of parachains. If parachains are not locked when they should be, it could introduce centralization risks for new parachains.

For example, one possible scenario is that a collective may decide to launch a parachain fully decentralized. However, if the parachain is unable to produce blocks, the parachain manager will be able to replace the wasm and genesis without the consent of the collective.

This risk is considered tolerable, as it requires the wasm/genesis to be invalid in the first place. It is currently not practically possible to develop a parachain without any centralization risk.

Another case is that a parachain team may decide to use a crowdloan to help secure a slot lease. Previously, creating a crowdloan would lock a parachain. This meant crowdloan participants knew exactly the genesis of the parachain for the crowdloan they were participating in. However, this actually provides little assurance to crowdloan participants. For example, if the genesis block is determined before a crowdloan is started, it is not possible to have an on-chain mechanism to enforce reward distributions for crowdloan participants. They always have to rely on the parachain team to fulfill the promise after the parachain is live.

      Existing operational parachains will not be impacted.

Testing, Security, and Privacy

      The implementation of this RFC will be tested on testnets (Rococo and Westend) first.

An audit may be required to ensure the implementation does not introduce unwanted side effects.

There are no privacy-related concerns.

Performance

      This RFC should not introduce any performance impact.

Ergonomics

This RFC should improve the developer experience for new and existing parachain teams.

Compatibility

This RFC is fully compatible with existing interfaces.

Prior Art and References

      • Parachain Slot Extension Story: https://github.com/paritytech/polkadot/issues/4758
      • Allow parachain to renew lease without actually run another parachain: https://github.com/paritytech/polkadot/issues/6685
      • Always treat parachain that never produced block for a significant amount of time as unlocked: https://github.com/paritytech/polkadot/issues/7539
Unresolved Questions

      None at this stage.

This RFC is only intended to be a short-term solution. Slots will be removed in the future, and the lock mechanism is likely to be replaced with a more generalized parachain management & recovery system. Therefore, the long-term impacts of this RFC are not considered.


Authors: @brenzi for Encointer Association, 8000 Zurich, Switzerland

Summary

Encointer has been a system chain on Kusama since January 2022 and has been developed and maintained by the Encointer Association. This RFC proposes to treat Encointer like any other system chain and include it in the fellowship repo with this PR.

Motivation

      Encointer does not seek to be in control of its runtime repository. As a decentralized system, the fellowship has a more suitable structure to maintain a system chain runtime repo than the Encointer association does.

      Also, Encointer aims to update its runtime in batches with other system chains in order to have consistency for interoperability across system chains.

Stakeholders

      • Fellowship: Will continue to take upon them the review and auditing work for the Encointer runtime, but the process is streamlined with other system chains and therefore less time-consuming compared to the separate repo and CI process we currently have.
      • Kusama Network: Tokenholders can easily see the changes of all system chains in one place.
      • Encointer Association: Further decentralization of the Encointer Network necessities like devops.
      • Encointer devs: Being able to work directly in the Fellowship runtimes repo to streamline and synergize with other developers.
Explanation

      Our PR has all details about our runtime and how we would move it into the fellowship repo.

      Noteworthy: All Encointer-specific pallets will still be located in encointer's repo for the time being: https://github.com/encointer/pallets

      It will still be the duty of the Encointer team to keep its runtime up to date and provide adequate test fixtures. Frequent dependency bumps with Polkadot releases would be beneficial for interoperability and could be streamlined with other system chains but that will not be a duty of fellowship. Whenever possible, all system chains could be upgraded jointly (including Encointer) with a batch referendum.


• Encointer will publish all its crates on crates.io
    • Encointer does not carry out external auditing of its runtime nor pallets. It would be beneficial but not a requirement from our side if Encointer could join the auditing process of other system chains.
Drawbacks

Unlike all other system chains, development and maintenance of the Encointer Network is mainly financed by the KSM Treasury, and possibly the DOT Treasury in the future. Encointer is dedicated to maintaining its network and runtime code for as long as possible, but there is a dependency on funding which is not in the hands of the fellowship. The only risk in the context of funding, however, is that the Encointer runtime will see less frequent updates if there's less funding.

Testing, Security, and Privacy

    No changes to the existing system are proposed. Only changes to how maintenance is organized.

Performance, Ergonomics, and Compatibility

    No changes

Prior Art and References

    Existing Encointer runtime repo

Unresolved Questions

    None identified

    More info on Encointer: encointer.org

Authors: Joe Petrowski, Gavin Wood

Summary

    The Relay Chain contains most of the core logic for the Polkadot network. While this was necessary prior to the launch of parachains and development of XCM, most of this logic can exist in parachains. This is a proposal to migrate several subsystems into system parachains.

Motivation

Polkadot's scaling approach allows many distinct state machines (known generally as parachains) to operate with common guarantees about the validity and security of their state transitions. Polkadot provides these common guarantees by executing the state transitions on a strict subset (a backing group) of validators […] (secure blockspace) to the network.

    By minimising state transition logic on the Relay Chain by migrating it into "system chains" -- a set of parachains that, with the Relay Chain, make up the Polkadot protocol -- the Polkadot Ubiquitous Computer can maximise its primary offering: secure blockspace.

Stakeholders

    • Parachains that interact with affected logic on the Relay Chain;
    • Core protocol and XCM format developers;
    • Tooling, block explorer, and UI developers.
Explanation

    The following pallets and subsystems are good candidates to migrate from the Relay Chain:

    • Identity
• […] sensible to rehearse a migration.

      Staking is the subsystem most constrained by PoV limits. Ensuring that elections, payouts, session changes, offences/slashes, etc. work in a parachain on Kusama -- with its larger validator set -- will give confidence to the chain's robustness on Polkadot.

Drawbacks

These subsystems will have fewer resources in cores than on the Relay Chain. Staking in particular may require some optimizations to deal with the constraints.

Testing, Security, and Privacy

Standard audit/review requirements apply. More powerful multi-chain integration test tools would be useful in development.

Performance, Ergonomics, and Compatibility

      Describe the impact of the proposal on the exposed functionality of Polkadot.

Performance

      This is an optimization. The removal of public/user transactions on the Relay Chain ensures that its primary resources are allocated to system performance.

Ergonomics

      This proposal alters very little for coretime users (e.g. parachain developers). Application developers will need to interact with multiple chains, making ergonomic light client tools particularly important for application development.

Existing parachains that interact with these subsystems will need to configure their runtimes to recognize the new locations in the network.

Compatibility

      Implementing this proposal will require some changes to pallet APIs and/or a pub-sub protocol. Application developers will need to interact with multiple chains in the network.

Prior Art and References

Unresolved Questions

      There remain some implementation questions, like how to use balances for both Staking and Governance. See, for example, Moving Staking off the Relay Chain.

With Staking and Governance off the Relay Chain, this is not an unreasonable next step.

Authors: Vedhavyas Singareddi

Summary

At the moment, we have the state_version field on RuntimeVersion that determines which state version is used for the storage. We have a use case where we want the extrinsics root to be derived using StateVersion::V1. Without defining a new field under RuntimeVersion, we would like to propose adding system_version, which can be used to derive both the storage and extrinsic state version.

Motivation

Since the extrinsic state version is always StateVersion::V0, deriving the extrinsic root requires the full extrinsic data. This is problematic when we need to verify the extrinsics root and the extrinsics are large. This problem is further explored in https://github.com/polkadot-fellows/RFCs/issues/19

One of the main challenges here is that some extrinsics could be big enough that this data cannot be included in the consensus block due to the block's weight restriction. If the extrinsic root is derived using StateVersion::V1, then we do not need to pass the full extrinsic data but rather, at maximum, 32 bytes of extrinsic data.
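The size argument above can be illustrated with a toy calculation. This is not SDK code; it only captures the rule that under V0 a proof of the extrinsics root needs the full extrinsic data, while under V1 values longer than 32 bytes contribute only their 32-byte hash.

```rust
// Illustrative proof-size comparison for one extrinsic, V0 vs V1.

fn proof_bytes_v0(extrinsic_len: usize) -> usize {
    // V0: the trie commits to the full value, so a proof carries every byte.
    extrinsic_len
}

fn proof_bytes_v1(extrinsic_len: usize) -> usize {
    // V1: values longer than 32 bytes are represented by their 32-byte hash.
    extrinsic_len.min(32)
}

fn main() {
    let big_extrinsic = 1_000_000; // a 1 MB extrinsic
    assert_eq!(proof_bytes_v0(big_extrinsic), 1_000_000);
    assert_eq!(proof_bytes_v1(big_extrinsic), 32);
}
```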

Stakeholders

      • Technical Fellowship, in its role of maintaining system runtimes.
Explanation

In order to use a project-specific StateVersion for extrinsic roots, we proposed an implementation that introduced a parameter to frame_system::Config, but that unfortunately did not feel correct. Instead, the runtime declares the new field directly in its version declaration:

pub const VERSION: RuntimeVersion = RuntimeVersion {
    // ...
    system_version: 1,
};

Drawbacks

There should be no drawbacks, as it would replace state_version with the same behavior, but documentation should be updated so that chains know which system_version to use.

Testing, Security, and Privacy

As far as I know, this should not have any impact on security or privacy.

Performance, Ergonomics, and Compatibility

These changes are compatible with existing chains if they use their state_version value for system_version.

Performance

    I do not believe there is any performance hit with this change.

Ergonomics

This does not break any exposed APIs.

Compatibility

    This change should not break any compatibility.

Prior Art and References

We proposed introducing a similar change via a parameter to frame_system::Config but did not feel that it was the correct way of introducing this change.

Unresolved Questions

    I do not have any specific questions about this change at the moment.

    IMO, this change is pretty self-contained and there won't be any future work necessary.


Authors: Sebastian Kunert

Summary

    This RFC proposes a new host function for parachains, storage_proof_size. It shall provide the size of the currently recorded storage proof to the runtime. Runtime authors can use the proof size to improve block utilization by retroactively reclaiming unused storage weight.
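The reclaim idea can be sketched as follows. Only the host function name storage_proof_size comes from this RFC; the helper and numbers below are illustrative, not actual runtime code.

```rust
// Sketch of retroactive proof-size weight reclaim: after applying an
// extrinsic, compare its benchmarked proof-size weight with the actual
// proof growth (as it would be reported via storage_proof_size) and
// credit the difference back to the block.

fn reclaimable(benchmarked_proof_size: u64, measured_growth: u64) -> u64 {
    // Benchmarks are worst-case, so the measured growth is normally smaller;
    // saturate to zero in case an extrinsic exceeds its benchmark.
    benchmarked_proof_size.saturating_sub(measured_growth)
}

fn main() {
    // Benchmark assumed 4096 proof bytes, the recorder measured only 1024:
    // 3072 bytes of proof-size weight can be handed back to the block.
    assert_eq!(reclaimable(4096, 1024), 3072);
    // Never reclaim a negative amount.
    assert_eq!(reclaimable(1024, 4096), 0);
}
```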

Motivation

    The number of extrinsics that are included in a parachain block is limited by two constraints: execution time and proof size. FRAME weights cover both concepts, and block-builders use them to decide how many extrinsics to include in a block. However, these weights are calculated ahead of time by benchmarking on a machine with reference hardware. The execution-time properties of the state-trie and its storage items are unknown at benchmarking time. Therefore, we make some assumptions about the state-trie:

Transact Over Bridge

Drawbacks

    In terms of ergonomics and user experience, this support for combining an asset transfer with a subsequent action (like Transact) is a net positive.

    In terms of performance, and privacy, this is neutral with no changes.

    In terms of security, the feature by itself is also neutral because it allows preserve_origin: false usage for operating with no extra trust assumptions. When wanting to support preserving origin, chains need to configure secure origin aliasing filters. The one suggested in this RFC should be the right choice for the majority of chains, but each chain will ultimately choose depending on their business model and logic (e.g. chain does not plan to integrate with Asset Hub). It is up to the individual chains to configure accordingly.

Testing, Security, and Privacy

    Barriers should now allow AliasOrigin, DescendOrigin or ClearOrigin.

Normally, XCM program builders should audit their programs and eliminate assumptions of "no origin" on the remote side of this instruction. In this case, InitiateAssetsTransfer has not been released yet; it will be part of XCMv5, and we can make this change part of the same XCMv5 so that there isn't even the possibility of someone in the wild having built XCM programs using this instruction on those wrong assumptions.

    The working assumption going forward is that the origin on the remote side can either be cleared or it can be the local origin's reanchored location. This assumption is in line with the current behavior of remote XCM programs sent over using pallet_xcm::send.

The existing DepositReserveAsset, InitiateReserveWithdraw and InitiateTeleport cross-chain asset transfer instructions will not attempt to do origin aliasing and will always clear the origin, same as before, for compatibility reasons.
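The relaxed barrier rule above can be sketched with a simplified instruction set. This is not actual XCM executor code; it only shows that the origin-affecting instruction checked by the barrier may now be AliasOrigin as an alternative to ClearOrigin (DescendOrigin handling omitted for brevity).

```rust
// Simplified sketch of the relaxed barrier check.

#[derive(Debug, PartialEq)]
enum Instruction {
    ClearOrigin,
    AliasOrigin,
    Transact,
}

/// Old barriers only accepted ClearOrigin at this position in the program;
/// the new rule also accepts AliasOrigin.
fn barrier_accepts(instr: &Instruction) -> bool {
    matches!(instr, Instruction::ClearOrigin | Instruction::AliasOrigin)
}

fn main() {
    assert!(barrier_accepts(&Instruction::ClearOrigin));
    assert!(barrier_accepts(&Instruction::AliasOrigin)); // newly allowed
    assert!(!barrier_accepts(&Instruction::Transact));
}
```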

Performance, Ergonomics, and Compatibility

Performance

    No impact.

Ergonomics

    Improves ergonomics by allowing the local origin to operate on the remote chain even when the XCM program includes an asset transfer.

Compatibility

    At the executor-level this change is backwards and forwards compatible. Both types of programs can be executed on new and old versions of XCM with no changes in behavior.

    New version of the InitiateAssetsTransfer instruction acts same as before when used with preserve_origin: false.

    For using the new capabilities, the XCM builder has to verify that the involved chains have the required origin-aliasing filters configured and use some new version of Barriers aware of AliasOrigin as an allowed alternative to ClearOrigin.

    For compatibility reasons, this RFC proposes this mechanism be added as an enhancement to the yet unreleased InitiateAssetsTransfer instruction, thus eliminating possibilities of XCM logic breakages in the wild. Following the same logic, the existing DepositReserveAsset, InitiateReserveWithdraw and InitiateTeleport cross chain asset transfer instructions will not attempt to do origin aliasing and will always clear the origin same as before for compatibility reasons.

Any one of the DepositReserveAsset, InitiateReserveWithdraw and InitiateTeleport instructions can be replaced with an InitiateAssetsTransfer instruction, with or without origin aliasing, thus providing a clean and clear upgrade path for opting in to this new feature.

Prior Art and References

Unresolved Questions

    None

Authors: Bastian Köcher

Summary

    The code of a runtime is stored in its own state, and when performing a runtime upgrade, this code is replaced. The new runtime can contain runtime migrations that adapt the state to the state layout as defined by the runtime code. This runtime migration is executed when building the first block with the new runtime code. Anything that interacts with the runtime state uses the state layout as defined by the runtime code. So, when trying to load something from the state in the block that applied the runtime upgrade, it will use the new state layout but will decode the data from the non-migrated state. In the worst case, the data is incorrectly decoded, which may lead to crashes or halting of the chain.

    This RFC proposes to store the new runtime code under a different storage key when applying a runtime upgrade. This way, all the off-chain logic can still load the old runtime code under the default storage key and decode the state correctly. The block producer is then required to use this new runtime code to build the next block. While building the next block, the runtime is executing the migrations and moves the new runtime code to the default runtime code location. So, the runtime code found under the default location is always the correct one to decode the state from which the runtime code was loaded.

Motivation

    While the issue of having undecodable state only exists for the one block in which the runtime upgrade was applied, it still impacts anything that reads state data, like block explorers, UIs, nodes, etc. For block explorers, the issue mainly results in indexing invalid data and UIs may show invalid data to the user. For nodes, reading incorrect data may lead to a performance degradation of the network. There are also ways to prevent certain decoding issues from happening, but it requires that developers are aware of this issue and also requires introducing extra code, which could introduce further bugs down the line.

    So, this RFC tries to solve these issues by fixing the underlying problem of having temporary undecodable state.

Stakeholders

Explanation

The runtime code is stored under the special key :code in the state. Nodes and other tooling read the runtime code under this storage key when they want to interact with the runtime, e.g. for building/importing blocks or getting the metadata to read the state. To update the runtime code, the runtime overwrites the value at :code, and from the next block on, the new runtime will be loaded.

This RFC proposes to first store the new runtime code under :pending_code in the state for one block. When the next block is being built, the block builder first needs to check if :pending_code is set, and if so, it needs to load the runtime from this storage key. While building the block, the runtime will move :pending_code to :code to have the runtime code at the default location. Nodes importing the block will also need to load :pending_code, if it exists, to ensure that the correct runtime code is used. By doing it this way, the runtime code found at :code in the state of a block will always be able to decode the state.

Furthermore, this RFC proposes to introduce system_version: 3. The system_version was introduced in RFC42. Version 3 would enable the usage of :pending_code when applying a runtime code upgrade. This way, the feature can be introduced first and enabled later, when the majority of nodes have upgraded.
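The lookup order described above can be sketched as follows. The storage keys (:code, :pending_code) come from this RFC; the state representation and function are illustrative, not node code.

```rust
use std::collections::HashMap;

// Sketch: block production/import for the block right after a runtime
// upgrade must prefer :pending_code over :code.

fn code_for_next_block<'a>(state: &'a HashMap<&str, Vec<u8>>) -> Option<&'a Vec<u8>> {
    state.get(":pending_code").or_else(|| state.get(":code"))
}

fn main() {
    let mut state = HashMap::new();
    state.insert(":code", b"old runtime".to_vec());
    // A runtime upgrade stores the new code under :pending_code first...
    state.insert(":pending_code", b"new runtime".to_vec());
    assert_eq!(code_for_next_block(&state).unwrap(), b"new runtime");
    // ...and while building that block the runtime moves it to :code,
    // so :code always matches the state layout from then on.
    let pending = state.remove(":pending_code").unwrap();
    state.insert(":code", pending);
    assert_eq!(code_for_next_block(&state).unwrap(), b"new runtime");
}
```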

Drawbacks

Because the first block built with the new runtime code will move the runtime code from :pending_code to :code, the runtime code will need to be loaded. This means the runtime code will appear in the proof of validity of a parachain for the first block built with the new runtime code. Generally this is not a problem, as the runtime code is also loaded by the parachain when setting the new runtime code.

There is still the possibility of having state that is not migrated, even when following the proposal as presented by this RFC. The issue is that if the amount of data to be migrated is too big, not all of it can be migrated in one block, because either it takes more time than is assigned to a block, or parachains, for example, have a fixed budget for their proof of validity. To solve this issue there already exist multi-block migrations that can chunk the migration across multiple blocks. Consensus-critical data needs to be migrated in the first block to ensure that block production etc. can continue. For the other data being migrated by multi-block migrations, the migrations could, for example, expose to the outside which keys are being migrated and should not be indexed until the migration is finished.

Testing, Security, and Privacy

Testing should be straightforward, and most of the existing testing should already be good enough, extended with some checks that :pending_code is moved to :code.

Performance, Ergonomics, and Compatibility

Performance

The performance should not be impacted, besides requiring the runtime code to be loaded in the first block built with the new runtime code.

Ergonomics

    It only alters the way blocks are produced and imported after applying a runtime upgrade. This means that only nodes need to be adapted to the changes of this RFC.

Compatibility

The change will require that nodes are upgraded before the runtime starts using this feature; otherwise they will fail to import the block built by :pending_code. For Polkadot/Kusama this means that parachain nodes also need to be running with a relay chain node version that supports this new feature. Otherwise the parachains will stop producing/finalizing blocks, as they can not sync the relay chain any more.

Prior Art and References

    The issue initially reported a bug that led to this RFC. It also discusses multiple solutions for the problem.

Unresolved Questions

    None

Stakeholders

Explanation

    The core idea of PVQ is to have a unified interface that meets the aforementioned requirements.

On the runtime side, an extension-based system is introduced to serve as a standardization layer across different chains. Each extension specification defines a set of cohesive APIs. The PvqError enum includes variants such as:

  • ExceedsMaxMessageSize
  • Transport
Drawbacks

    Performance issues

Testing, Security, and Privacy

Performance, Ergonomics, and Compatibility

Performance

    As a newly introduced feature, PVQ operates independently and does not impact or degrade the performance of existing runtime implementations.

Ergonomics

    From the perspective of off-chain tooling, this proposal streamlines development by unifying multiple chain-specific RuntimeAPIs under a single consistent interface. This significantly benefits wallet and dApp developers by eliminating the need to handle individual implementations for similar operations across different chains. The proposal also enhances development flexibility by allowing custom computations to be modularly encapsulated as PolkaVM programs that interact with the exposed APIs.

Compatibility

    For RuntimeAPI integration, the proposal defines new APIs, which do not break compatibility with existing interfaces. For XCM Integration, the proposal does not modify the existing XCM message format, which is backwards compatible.

Prior Art and References

    There are several discussions related to the proposal, including:

Unresolved Questions

PVQ does not conflict with them, and it can take advantage of these Pallet View …

Authors: s0me0ne-unkn0wn (13WGadgNgqSjiGQvfhimw9pX26mvGdYQ6XgrjPANSEDRoGMt)

Summary

    This RFC proposes a change that makes it possible to identify types of compressed blobs stored on-chain, as well as used off-chain, without the need for decompression.

Motivation

Currently, a compressed blob does not give any idea of what's inside, because the only thing that can be inside, according to the spec, is Wasm. In reality, other blob types are already being used, and more are to come. Apart from being error-prone by itself, the current approach does not allow the blob to be properly routed through the execution paths before its decompression, which will result in suboptimal implementations when more blob types are used. Thus, it is necessary to introduce a mechanism for identifying the blob type without decompressing it.

    This proposal is intended to support future work enabling Polkadot to execute PolkaVM and, more generally, other-than-Wasm parachain runtimes, and allow developers to introduce arbitrary compression methods seamlessly in the future.

Stakeholders

    Node developers are the main stakeholders for this proposal. It also creates a foundation on which parachain runtime developers will build.

Explanation

    Overview

The current approach to compressing binary blobs involves using zstd compression, and the resulting compressed blob is prefixed with a unique 64-bit magic value specified in that subsection. The same procedure is used to compress both Wasm code blobs and proofs-of-validity. Currently, given only a compressed blob, it is impossible to tell without decompression whether it contains a Wasm blob or a PoV. That doesn't cause problems in the current protocol, as Wasm blobs and PoV blobs take completely different execution paths in the code.

    The changes proposed below are intended to define the means for distinguishing compressed blob types in a backward-compatible and future-proof way.

  • Conservatively, wait until no more PVFs prefixed with CBLOB_ZSTD_LEGACY remain on-chain. That may take quite some time. Alternatively, create a migration that alters prefixes of existing blobs;
• Removing the CBLOB_ZSTD_LEGACY prefix will only be possible after all the nodes in all the networks cease using it, which is a long process; additional incentives should be offered to the community to make people upgrade.
Drawbacks

    Currently, the only requirement for a compressed blob prefix is not to coincide with Wasm magic bytes (as stated in code comments). Changes proposed here increase prefix collision risk, given that arbitrary data may be compressed in the future. However, it must be taken into account that:

Testing, Security, and Privacy

    As the change increases granularity, it will positively affect both testing possibilities and security, allowing developers to check what's inside a given compressed blob precisely. Testing the change itself is trivial. Privacy is not affected by this change.

Performance, Ergonomics, and Compatibility

Performance

    The current implementation's performance is not affected by this change. Future implementations allowing for the execution of other-than-Wasm parachain runtimes will benefit from this change performance-wise.

Ergonomics

End-user ergonomics are not affected. Developer ergonomics will benefit from this change, as it enables exact checks and less guessing.

Compatibility

    The change is designed to be backward-compatible.

Prior Art and References

    SDK PR#6704 (WIP) introduces a mechanism similar to that described in this proposal and proves the necessity of such a change.

Unresolved Questions

    None

    This proposal creates a foundation for two future work directions:

Authors: ordian

Summary

    This RFC proposes changes to the erasure coding algorithm and the method for computing the erasure root on Polkadot to improve performance of both processes.

Motivation

The Data Availability (DA) Layer in Polkadot provides a foundation for shared security, enabling Approval Checkers and Collators to download Proofs-of-Validity (PoV) for security and liveness purposes, respectively.

The proposed change is orthogonal to RFC-47 and can be used in conjunction with … collator nodes), we propose bundling another performance-enhancing breaking change that addresses the CPU bottleneck in the erasure coding process, but using a separate node feature (NodeFeatures part of HostConfiguration) for its activation.

Stakeholders

Explanation

    We propose two specific changes:

1. … faster deployment for most parachains but would add complexity.

    2. Activate RFC-47 via Configuration::set_node_feature runtime change.
    3. Activate the new erasure coding scheme using another Configuration::set_node_feature runtime change.
Drawbacks

    Bundling this breaking change with RFC-47 might reset progress in updating collators. However, the omni node initiative should help mitigate this issue.

Testing, Security, and Privacy

    Testing is needed to ensure binary compatibility across implementations in multiple languages.

    Performance and Compatibility

Performance

    According to benchmarks:

Compatibility

    This requires a breaking change that can be coordinated following the same approach as in RFC-47.

Prior Art and References

    JAM already utilizes the same optimizations described in the Graypaper.

Unresolved Questions

    None.

    Future improvements could include:


Authors: eskimor

Summary

This RFC proposes an amendment to RFC-1 Agile Coretime: renewal prices will no longer be adjusted only by a configurable renewal bump, but also raised to the lower end of the current sale price, if that turns out higher.

    An implementation can be found here.

Motivation

In RFC-1, we strived for perfect predictability on renewal prices, but what we expected unfortunately got proven in practice: perfect predictability allows for core hoarding and cheap market manipulation, with the effect that both on … extend to elastic scaling and, in practice, even existing teams wanting to keep their core, because they forgot to renew in the interlude.

    In a nutshell the current situation is severely hindering teams from deploying on Polkadot: We are essentially in a Denial of Service situation.

Stakeholders

The stakeholders are existing teams that already have a core and new teams wanting to join the ecosystem.

Explanation

This RFC proposes to fix this situation by limiting renewal price predictability to reasonable levels, introducing a weak coupling to the current market price: we ensure that the price for renewals is at least as high … ensures that any additional attack will be expensive: 10 cores results in … to 100% capacity, to have some leeway for governance in case of unforeseen attacks/weaknesses.

Drawbacks

We are dropping almost perfect predictability on renewal prices in favor of predictability within reasonable bounds. The introduction of a minimum price will also result in huge relative price adjustments for existing tenants, because prices were so unreasonably low on Kusama. In practice this should not be an issue for any real project.

Testing, Security, and Privacy

This RFC proposes a single-line code change. A test has been added to make sure it works as expected.

… tenants. Having them exposed at least with this 10x reduction seems a sensible valuation.

    There are no privacy concerns.

Performance, Ergonomics, and Compatibility

    The proposed changes are backwards compatible. No interfaces are changed. Performance is not affected. Ergonomics should be greatly improved especially for new entrants, as cores will be available for sale again. A configured minimum price also ensures that the starting price of the Dutch auction stays reasonably high, deterring sniping all the cores at the beginning of a sale.

Prior Art and References

    This RFC is altering RFC-1 and taking ideas from RFC-17, mainly the introduction of a minimum price.

This RFC should solve the immediate problems we are seeing in production right … a few cores not for sale should be enough to mitigate such a situation.

Authors: Pierre Krieger

Summary

    Update the runtime-host interface to no longer make use of a host-side allocator.

Motivation

    The heap allocation of the runtime is currently controlled by the host using a memory allocator on the host side.

The API of many host functions consists of allocating a buffer. For example, when calling ext_hashing_twox_256_version_1, the host allocates a 32-byte buffer using the host allocator and returns a pointer to this buffer to the runtime. The runtime later has to call ext_allocator_free_version_1 on this pointer in order to free the buffer.

Even though no benchmark has been done, it is pretty obvious that this design is very inefficient. To continue with the example of ext_hashing_twox_256_version_1, it would be more efficient to instead write the output hash to a buffer that was allocated by the runtime on its stack and passed by pointer to the function. Allocating a buffer on the stack in the worst case simply consists of decreasing a number, and in the best case is free. Doing so would save many Wasm memory reads and writes by the allocator, and would save a function call to ext_allocator_free_version_1.

    Furthermore, the existence of the host-side allocator has become questionable over time. It is implemented in a very naive way, and for determinism and backwards compatibility reasons it needs to be implemented exactly identically in every client implementation. Runtimes make substantial use of heap memory allocations, and each allocation needs to go twice through the runtime <-> host boundary (once for allocating and once for freeing). Moving the allocator to the runtime side, while it would increase the size of the runtime, would be a good idea. But before the host-side allocator can be deprecated, all the host functions that make use of it need to be updated to not use it.

Stakeholders

    No attempt was made at convincing stakeholders.

Explanation

    New host functions

    This section contains a list of new host functions to introduce.

(func $ext_storage_read_version_2 …)

The following other host functions are similarly considered deprecated:

  • ext_allocator_free_version_1
  • ext_offchain_network_state_version_1
Drawbacks

    This RFC might be difficult to implement in Substrate due to the internal code design. It is not clear to the author of this RFC how difficult it would be.

    Prior Art

The API of these new functions was heavily inspired by the APIs used in the C programming language.

Unresolved Questions

    The changes in this RFC would need to be benchmarked. This involves implementing the RFC and measuring the speed difference.

    It is expected that most host functions are faster or equal speed to their deprecated counterparts, with the following exceptions:

This would remove the possibility to synchronize older blocks, which is probably …

License: MIT

Summary

This RFC proposes a dynamic pricing model for the sale of Bulk Coretime on the Polkadot UC. The proposed model updates the regular price of cores for each sale period, taking into account the number of cores sold in the previous sale, as well as a limit of cores and a target number of cores sold. It ensures a minimum price and limits price growth to a maximum price increase factor, while also giving governance control over the steepness of the price change curve. It allows governance to address challenges arising from changing market conditions and should offer predictable and controlled price adjustments.

      Accompanying visualizations are provided at [1].

Motivation

RFC-1 proposes periodic Bulk Coretime Sales as a mechanism to sell continuous regions of blockspace (suggested to be 4 weeks in length). A number of Blockspace Regions (compare RFC-1 & RFC-3) are provided for sale to the Broker-Chain each period and shall be sold in a way that provides value-capture for the Polkadot network. The exact pricing mechanism is out of scope for RFC-1 and shall be provided by this RFC.

      A dynamic pricing model is needed. A limited number of Regions are offered for sale each period. The model needs to find the price for a period based on supply and demand of the previous period.

      The model shall give Coretime consumers predictability about upcoming price developments and confidence that Polkadot governance can adapt the pricing model to changing market conditions.

    • The solution SHOULD provide a maximum factor of price increase should the limit of Regions sold per period be reached.
    • The solution should allow governance to control the steepness of the price function
Stakeholders

      The primary stakeholders of this RFC are:

• Protocol researchers and developers
      • Polkadot parachains teams
      • Brokers involved in the trade of Bulk Coretime
Explanation

      Overview

      The dynamic pricing model sets the new price based on supply and demand in the previous period. The model is a function of the number of Regions sold, piecewise-defined by two power functions.

SCALE_DOWN = 1
SCALE_UP = 1
OLD_PRICE = 1000
Drawbacks

    None at present.

Prior Art and References

This pricing model is based on the requirements from the basic linear solution proposed in RFC-1, which is a simple dynamic pricing model used only as a proof of concept. The present model adds additional considerations to make it more adaptable to real conditions.

    Future Possibilities

    This RFC, if accepted, shall be implemented in conjunction with RFC-1.

Authors: Gabriel Facco de Arruda

Summary

    This RFC proposes changes that enable the use of absolute locations in AccountId derivations, which allows protocols built using XCM to have static account derivations in any runtime, regardless of its position in the family hierarchy.

Motivation

    These changes would allow protocol builders to leverage absolute locations to maintain the exact same derived account address across all networks in the ecosystem, thus enhancing user experience.

    One such protocol, that is the original motivation for this proposal, is InvArch's Saturn Multisig, which gives users a unifying multisig and DAO experience across all XCM connected chains.

Stakeholders

Explanation

    This proposal aims to make it possible to derive accounts for absolute locations, enabling protocols that require the ability to maintain the same derived account in any runtime. This is done by deriving accounts from the hash of described absolute locations, which are static across different destinations.

    The same location can be represented in relative form and absolute form like so:


    DescribeFamily

    The DescribeFamily location descriptor is part of the HashedDescription MultiLocation hashing system and exists to describe locations in an easy format for encoding and hashing, so that an AccountId can be derived from this MultiLocation.

    This implementation contains a match statement that does not match against absolute locations, so changes to it involve matching against absolute locations and providing appropriate descriptions for hashing.

Drawbacks

    No drawbacks have been identified with this proposal.

Testing, Security, and Privacy

    Tests can be done using simple unit tests, as this is not a change to XCM itself but rather to types defined in xcm-builder.

    Security considerations should be taken with the implementation to make sure no unwanted behavior is introduced.

    This proposal does not introduce any privacy considerations.

Performance, Ergonomics, and Compatibility

Performance

    Depending on the final implementation, this proposal should not introduce much overhead to performance.

Ergonomics

    The ergonomics of this proposal depend on the final implementation details.

Compatibility

Backwards compatibility should remain unchanged, although that depends on the final implementation.

Prior Art and References

• DescribeFamily type: https://github.com/paritytech/polkadot-sdk/blob/master/polkadot/xcm/xcm-builder/src/location_conversion.rs#L122
    • WithComputedOrigin type: https://github.com/paritytech/polkadot-sdk/blob/master/polkadot/xcm/xcm-builder/src/barriers.rs#L153
Unresolved Questions

Implementation details and the overall code are still up for discussion.

Authors: ChaosDAO

Summary

    This RFC proposes to make modifications to voting power delegations as part of the Conviction Voting pallet. The changes being proposed include:

    1. Allow a Delegator to vote independently of their Delegate if they so desire.
2. …
    3. Make a change so that when a delegate votes abstain their delegated votes also vote abstain.
    4. Allow a Delegator to delegate/ undelegate their votes for all tracks with a single call.
Motivation

    It has become clear since the launch of OpenGov that there are a few common tropes which pop up time and time again:

    1. The frequency of referenda is often too high for network participants to have sufficient time to review, comprehend, and ultimately vote on each individual referendum. This means that these network participants end up being inactive in on-chain governance.
2. …
    3. Delegating votes for all tracks currently requires long batched calls which result in high fees for the Delegator - resulting in a reluctance from many to delegate their votes.

    We believe (based on feedback from token holders with a larger stake in the network) that if there were some changes made to delegation mechanics, these larger stake holders would be more likely to delegate their voting power to active network participants – thus greatly increasing the support turnout.

Stakeholders

    The primary stakeholders of this RFC are:

    • The Polkadot Technical Fellowship who will have to research and implement the technical aspects of this RFC
    • DOT token holders in general
Explanation

    This RFC proposes to make 4 changes to the convictionVoting pallet logic in order to improve the user experience of those delegating their voting power to another account.

1. …

      Allow a Delegator to delegate/ undelegate their votes for all tracks with a single call - in order to delegate votes across all tracks, a user must batch 15 calls - resulting in high costs for delegation. A single call for delegate_all/ undelegate_all would reduce the complexity and therefore costs of delegations considerably for prospective Delegators.

Drawbacks

    We do not foresee any drawbacks by implementing these changes. If anything we believe that this should help to increase overall voter turnout (via the means of delegation) which we see as a net positive.

Testing, Security, and Privacy

    We feel that the Polkadot Technical Fellowship would be the most competent collective to identify the testing requirements for the ideas presented in this RFC.

Performance, Ergonomics, and Compatibility

Performance

    This change may add extra chain storage requirements on Polkadot, especially with respect to nested delegations.

    Ergonomics & Compatibility

The change to add nested delegations may affect governance interfaces such as Nova Wallet, which will have to update their indexers to support nested delegations. It may also affect the Polkadot Delegation Dashboard, as well as Polkassembly and SubSquare.

    We want to highlight the importance for ecosystem builders to create a mechanism for indexers and wallets to be able to understand that changes have occurred such as increasing the pallet version, etc.

Prior Art and References

    N/A

Unresolved Questions

    N/A

    Additionally we would like to re-open the conversation about the potential for there to be free delegations. This was discussed by Dr Gavin Wood at Sub0 2022 and we feel like this would go a great way towards increasing the amount of network participants that are delegating: https://youtu.be/hSoSA6laK3Q?t=526

Authors: Sergej Sakac

Summary

    This RFC proposes a new model for a sustainable on-demand parachain registration, involving a smaller initial deposit and periodic rent payments. The new model considers that on-demand chains may be unregistered and later re-registered. The proposed solution also ensures a quick startup for on-demand chains on Polkadot in such cases.

Motivation

    With the support of on-demand parachains on Polkadot, there is a need to explore a new, more cost-effective model for registering validation code. In the current model, the parachain manager is responsible for reserving a unique ParaId and covering the cost of storing the validation code of the parachain. These costs can escalate, particularly if the validation code is large. We need a better, sustainable model for registering on-demand parachains on Polkadot to help smaller teams deploy more easily.

    This RFC suggests a new payment model to create a more financially viable approach to on-demand parachain registration. In this model, a lower initial deposit is required, followed by recurring payments upon parachain registration.

    This new model will coexist with the existing one-time deposit payment model, offering teams seeking to deploy on-demand parachains on Polkadot a more cost-effective alternative.

  • The solution MUST allow anyone to pay the rent.
  • The solution MUST prevent the removal of validation code if it could still be required for disputes or approval checking.
Stakeholders

    • Future Polkadot on-demand Parachains
Explanation

This RFC proposes a set of changes that will enable the new rent-based approach to registering and storing validation code on-chain. The new model, compared to the current one, will require periodic rent payments. The parachain won't be pruned automatically if the rent is not paid, but by permitting anyone to prune the parachain and rewarding the caller, there will be an incentive for the removal of the validation code.

    On-demand parachains should still be able to utilize the current one-time payment model. However, given the size of the deposit required, it's highly likely that most on-demand parachains will opt for the new rent-based model.

pub(super) type CheckedCodeHash<T: Config> =
    StorageMap<_, Twox64Concat, ParaId, ValidationCodeHash>;
}

To enable parachain re-registration, we should introduce a new extrinsic in the paras-registrar pallet that allows this. The logic of this extrinsic will be the same as regular registration, with the distinction that it can be called by anyone, and the required deposit will be smaller since it only has to cover the storage of the validation code.

Drawbacks

    This RFC does not alter the process of reserving a ParaId, and therefore, it does not propose reducing it, even though such a reduction could be beneficial.

This RFC doesn't delve into the specifics of the configuration values for parachain registration but rather focuses on the mechanism; still, configuring those values carelessly could lead to problems.

    Since the validation code hash and head data are not removed when the parachain is pruned but only when the deregister extrinsic is called, the T::DataDepositPerByte must be set to a higher value to create a strong enough incentive for removing it from the state.

Testing, Security, and Privacy

    The implementation of this RFC will be tested on Rococo first.

    Proper research should be conducted on setting the configuration values of the new system since these values can have great impact on the network.

    An audit is required to ensure the implementation's correctness.

    The proposal introduces no new privacy concerns.

Performance, Ergonomics, and Compatibility

Performance

    This RFC should not introduce any performance impact.

Ergonomics

    This RFC does not affect the current parachains, nor the parachains that intend to use the one-time payment model for parachain registration.

Compatibility

    This RFC does not break compatibility.

Prior Art and References

    Prior discussion on this topic: https://github.com/paritytech/polkadot-sdk/issues/1796

Unresolved Questions

    None at this time.

As noted in this GitHub issue, we want to raise the per-byte cost of on-chain data storage. However, a substantial increase in this cost would make it highly impractical for on-demand parachains to register on Polkadot. This RFC offers an alternative solution for on-demand parachains, ensuring that …

Authors: Pierre Krieger

Summary

    Rather than enforce a limit to the total memory consumption on the client side by loading the value at :heappages, enforce that limit on the runtime side.

Motivation

    From the early days of Substrate up until recently, the runtime was present in two forms: the wasm runtime (wasm bytecode passed through an interpreter) and the native runtime (native code directly run by the client).

Since the wasm runtime has a lower amount of available memory (4 GiB maximum) compared to the native runtime, and in order to ensure that the wasm and native runtimes always produce the same outcome, it was necessary to clamp the amount of memory available to both runtimes to the same value.

In order to achieve this, a special storage key (a "well-known" key) :heappages was introduced and represents the number of "wasm pages" (one page equals 64 KiB) of memory that are available to the memory allocator of the runtimes. If this storage key is absent, it defaults to 2048, which is 128 MiB.
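The arithmetic above can be sketched directly. The constant and function names below are inventions of this sketch for illustration, not identifiers from any client implementation:

```rust
// Illustrative only: how a `:heappages` value maps to an allocator budget.
// WASM_PAGE_SIZE and DEFAULT_HEAP_PAGES mirror the numbers described above;
// the names themselves are assumptions of this sketch.
const WASM_PAGE_SIZE: u64 = 64 * 1024; // one wasm page = 64 KiB
const DEFAULT_HEAP_PAGES: u64 = 2048; // used when `:heappages` is absent

/// Returns the allocator budget in bytes for a given `:heappages` value.
fn heap_budget_bytes(heap_pages: Option<u64>) -> u64 {
    heap_pages.unwrap_or(DEFAULT_HEAP_PAGES) * WASM_PAGE_SIZE
}

fn main() {
    // Default: 2048 pages * 64 KiB = 128 MiB.
    assert_eq!(heap_budget_bytes(None), 128 * 1024 * 1024);
    // An explicit stored value overrides the default.
    assert_eq!(heap_budget_bytes(Some(4096)), 256 * 1024 * 1024);
}
```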

The native runtime has since been removed, but the concept of "heap pages" still exists. This RFC proposes a simplification to the design of Polkadot by removing the concept of "heap pages" as currently known, and proposes alternative ways to achieve the goal of limiting the amount of memory available.

Stakeholders

    Client implementers and low-level runtime developers.

Explanation

    This RFC proposes the following changes to the client:

Each parachain can choose the option it prefers, but the author of this RFC strongly suggests either option C or B.

Drawbacks

    In case of path A, there is one situation where the behaviour pre-RFC is not equivalent to the one post-RFC: when a host function that performs an allocation (for example ext_storage_get) is called, without this RFC this allocation might fail due to reaching the maximum heap pages, while after this RFC this will always succeed. This is most likely not a problem, as storage values aren't supposed to be larger than a few megabytes at the very maximum.

In the unfortunate event where the runtime runs out of memory, path B would make it more difficult to relax the memory limit, as we would need to re-upload the entire Wasm, compared to updating only :heappages in path A or before this RFC. In the case where the runtime runs out of memory only in the specific event where the Wasm runtime is modified, this could brick the chain. However, this situation is no different from the thousands of other ways that a bug in the runtime can brick a chain, and there is no reason to be particularly worried about this one.

Testing, Security, and Privacy

    This RFC would reduce the chance of a consensus issue between clients. The :heappages are a rather obscure feature, and it is not clear what happens in some corner cases such as the value being too large (error? clamp?) or malformed. This RFC would completely erase these questions.

Performance, Ergonomics, and Compatibility

Performance

In case of path A, it is unclear how performance would be affected. Path A consists of moving client-side operations to the runtime without changing these operations, and as such performance differences are expected to be minimal. Overall, we're talking about one addition/subtraction per malloc and per free, so this is more than likely completely negligible.

    In case of path B and C, the performance gain would be a net positive, as this RFC strictly removes things.

Ergonomics

    This RFC would isolate the client and runtime more from each other, making it a bit easier to reason about the client or the runtime in isolation.

Compatibility

    Not a breaking change. The runtime-side changes can be applied immediately (without even having to wait for changes in the client), then as soon as the runtime is updated, the client can be updated without any transition period. One can even consider updating the client before the runtime, as it corresponds to path C.

Prior Art and References

    None.

Unresolved Questions

    None.

    This RFC follows the same path as https://github.com/polkadot-fellows/RFCs/pull/4 by scoping everything related to memory allocations to the runtime.

Author: Adam Clay Steeber

Summary

This RFC proposes adding a trivial governance track on Kusama to facilitate X (formerly known as Twitter) posts on the @kusamanetwork account. The technical aspect of implementing this in the runtime is very inconsequential and straightforward, though it might get more technical if the Fellowship wants to regulate this track with a non-existent permission set. If this is implemented it would need to be followed up with:

  • the establishment of specifications for proposing X posts via this track, and
  • the development of tools/processes to ensure that the content contained in referenda enacted in this track would be automatically posted on X.
Motivation

The overall motivation for this RFC is to decentralize the management of the Kusama brand/communication channel to KSM holders. This is necessary in my opinion primarily because of the inactivity of the account in recent history, with posts spanning weeks or months apart. I am currently unaware of who/what entity manages the Kusama X account, but if they are affiliated with Parity or W3F this proposed solution could also offload some of the legal ramifications of making (or not making) … and the community becomes totally autonomous in the management of Kusama's X posts … that could be offloaded to openGov, provided this proof-of-concept is successful.

    Finally, this RFC is the epitome of experimentation that Kusama is ideal for. This proposal may spark newfound excitement for Kusama and help us realize Kusama's potential for pushing boundaries and trying new unconventional ideas.

Stakeholders

    This idea has not been formalized by any individual (or group of) KSM holder(s). To my knowledge the socialization of this idea is contained entirely in my recent X post here, but it is possible that an idea like this one has been discussed in other places. It appears to me that the ecosystem would welcome a change like this which is why I am taking action to formalize the discussion.

Explanation

    The implementation of this idea can be broken down into 3 primary phases:

    Phase 1 - Track configurations

First, we begin with this RFC to ensure all feedback can be discussed and implemented in the proposal. After the Fellowship and the community come to a reasonable … to implement them. Here's what would be needed:

  • a UI to allow layman users to propose referenda on this track
  • After everything is complete, we can update the Kusama wiki to include documentation on the X post specifications and include links to the tools/UI.

Drawbacks

    The main drawback to this change is that it requires a lot of off-chain coordination. It's easy enough to include the track on Kusama but it's a totally different challenge to make it function as intended. The tools need to be built and the auth tokens need to be managed. It would certainly add an administrative burden to whoever manages the X account since they would either need to run the tools themselves or manage auth tokens.

If that happens, we risk getting Kusama banned on X!

    agency to manage posts. It wouldn't be decentralized but it would probably be more effective in terms of creating good content.

    Finally, this solution is merely pseudo-decentralization since the X account manager would still have ultimate control of the account. It's decentralized insofar as the auth tokens are given to people actually running the tools; a house of cards is required to facilitate X posts via this track. Not ideal.

Testing, Security, and Privacy

    There's major precedent for configuring tracks on openGov given the amount of power tracks have, so it shouldn't be hard to come up with a sound configuration. That's why I recommend restricting permissions of this track to remarks and batches of remarks, or something equally inconsequential.
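The suggested restriction to remarks and batches of remarks can be illustrated with a toy call filter. `Call` and `track_allows` below are hypothetical stand-ins for the runtime's call enum and filter logic, not actual Kusama code:

```rust
// Toy model of a track that only permits remarks, or batches containing
// nothing but remarks. The enum variants here are illustrative; a real
// runtime's call type is far larger.
enum Call {
    Remark(Vec<u8>),
    Batch(Vec<Call>),
    Transfer { to: u8, amount: u128 },
}

/// Allow only remarks, or batches whose every inner call is itself allowed.
fn track_allows(call: &Call) -> bool {
    match call {
        Call::Remark(_) => true,
        Call::Batch(inner) => inner.iter().all(track_allows),
        _ => false,
    }
}

fn main() {
    assert!(track_allows(&Call::Remark(b"post".to_vec())));
    assert!(track_allows(&Call::Batch(vec![Call::Remark(vec![])])));
    // Anything consequential (e.g. a transfer) is rejected, even inside a batch.
    assert!(!track_allows(&Call::Batch(vec![Call::Transfer { to: 0, amount: 1 }])));
}
```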

Building the tools for this implementation is straightforward, and they could be audited by Fellowship members, and the community at large, on GitHub.

    The largest security concern would be the management of Kusama's X account's auth tokens. We would need to ensure that they aren't compromised.

Performance, Ergonomics, and Compatibility

Performance

    If a track on Kusama promises users that compliant referenda enacted therein would be posted on Kusama's X account, users would expect that track to perform as promised. If the house of cards tumbles down and a compliant referendum doesn't actually get anything posted, users might think that Kusama is broken or unreliable. This could be damaging to Kusama's image and cause people to question the soundness of other features on Kusama.

    As mentioned in the drawbacks, the performance of this feature would depend on off-chain coordinations. We can reduce the administrative burden of these coordinations by funding third parties with the Treasury to deal with it, but then we're relying on trusting these parties.

Ergonomics

    By adding a new track to Kusama, governance platforms like Polkassembly or Nova Wallet would need to include it on their applications. This shouldn't be too much of a burden or overhead since they've already built the infrastructure for other openGov tracks.

Compatibility

    This change wouldn't break any compatibility as far as I know.

    References

    One reference to a similar feature requiring on-chain/off-chain coordination would be the Kappa-Sigma-Mu Society. Nothing on-chain necessarily enforces the rules or facilitates bids, challenges, defenses, etc. However, the Society has managed to maintain itself with integrity to its rules. So I don't think this is totally out of Kusama's scope. But it will require some off-chain effort to maintain.

Unresolved Questions

    and much more...

Stakeholders

Explanation

I created the stateful multisig pallet during my studies at the Polkadot Blockchain Academy under supervision from @shawntabrizi and @ank4n. After that, I enhanced it to be fully functional; this is draft PR#3300 in polkadot-sdk. I'll list all the details and design decisions in the following sections. Note that the PR does not correspond one-to-one to the current RFC, as the RFC is a more polished version of the PR, updated based on feedback and discussions.

    Let's start with a sequence diagram to illustrate the main operations of the Stateful Multisig.

    multisig operations

pub type PendingProposals<T: Config> = StorageDoubleMap<
• In case the threshold is lower than the number of approvers, the proposal is still valid.
• In case the threshold is higher than the number of approvers, this is caught during proposal execution and an error is returned.
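The two rules above can be sketched as an execution-time check. `Proposal`, `try_execute`, and the error name are hypothetical illustrations, not the pallet's actual API:

```rust
// Sketch of the execution-time threshold check described above.
#[derive(Debug, PartialEq)]
enum ExecuteError {
    ThresholdNotMet,
}

struct Proposal {
    threshold: u32,
    approvals: u32,
}

fn try_execute(p: &Proposal) -> Result<(), ExecuteError> {
    // More approvers than the threshold is fine: the proposal stays valid.
    // Fewer approvers than the threshold is only caught here, at execution.
    if p.approvals >= p.threshold {
        Ok(())
    } else {
        Err(ExecuteError::ThresholdNotMet)
    }
}

fn main() {
    // Threshold lower than approver count: still valid.
    assert!(try_execute(&Proposal { threshold: 2, approvals: 3 }).is_ok());
    // Threshold higher than approver count: errors at execution.
    assert!(try_execute(&Proposal { threshold: 3, approvals: 2 }).is_err());
}
```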
Drawbacks

Testing, Security, and Privacy

    Standard audit/review requirements apply.

Performance, Ergonomics, and Compatibility

Performance

A back-of-the-envelope calculation shows that the stateful multisig is more efficient than the stateless multisig, given its smaller footprint on blocks.

A quick review of the extrinsics for both, as they affect the block size:

Stateless Multisig: …

We have the following extrinsics:

| Stateless | N^2 | Nil |
| Stateful | N | N |

    So even though the stateful multisig has a larger state size, it's still more efficient in terms of block size and total footprint on the blockchain.
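The table above can be modeled with a short sketch. The "entries" unit and the function names are assumptions of this illustration, not benchmarked figures from either pallet:

```rust
// Back-of-the-envelope model of the footprint comparison above, counting
// account entries that must appear in blocks for an N-signer multisig.
fn stateless_block_entries(n: u64) -> u64 {
    // Each of the N approval extrinsics must re-submit the full signer list,
    // so the block footprint grows quadratically.
    n * n
}

fn stateful_block_entries(n: u64) -> u64 {
    // Signers are stored once in state; each approval only references the
    // stored multisig account, so the block footprint grows linearly.
    n
}

fn main() {
    let n = 10;
    assert_eq!(stateless_block_entries(n), 100);
    assert_eq!(stateful_block_entries(n), 10);
    // The stateful variant additionally keeps N entries in state (the stored
    // signer set), matching the N in the table's state-size column.
}
```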

Ergonomics

    The Stateful Multisig will have better ergonomics for managing multisig accounts for both developers and end-users.

Compatibility

    This RFC is compatible with the existing implementation and can be handled via upgrades and migration. It's not intended to replace the existing multisig pallet.

Prior Art and References

    multisig pallet in polkadot-sdk

Unresolved Questions

Implement call filters. This will allow multisig accounts to only accept certain …

Authors: Luke Schoen

Summary

    This proposes to increase the maximum length of PGP Fingerprint values from a 20 bytes/chars limit to a 40 bytes/chars limit.

Motivation

    Background

    Pretty Good Privacy (PGP) Fingerprints are shorter versions of their corresponding Public Key that may be printed on a business card.

    They may be used by someone to validate the correct corresponding Public Key.


    Solution Requirements

The maximum length of identity PGP Fingerprint values should be increased from the current 20 bytes/chars limit to at least a 40 bytes/chars limit to support PGP Fingerprints and GPG Fingerprints.

Stakeholders

Explanation

If a user tries to set an on-chain identity by creating an extrinsic using Polkadot.js with identity > setIdentity(info), and they provide their 40-character PGP Fingerprint or GPG Fingerprint, which is longer than the maximum length of 20 bytes/chars [u8;20], then they will encounter this error:

    createType(Call):: Call: failed decoding identity.setIdentity:: Struct: failed on args: {...}:: Struct: failed on pgpFingerprint: Option<[u8;20]>:: Expected input with 20 bytes (160 bits), found 40 bytes
     

    Increasing maximum length of identity PGP Fingerprint values from the current 20 bytes/chars limit to at least a 40 bytes/chars limit would overcome these errors and support PGP Fingerprints and GPG Fingerprints, satisfying the solution requirements.
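A minimal sketch of the widened field, assuming the fingerprint is submitted as its 40 ASCII characters (as in the error message above). `IdentityInfo` and `set_fingerprint` here are stand-ins for illustration, not the identity pallet's actual types:

```rust
// Sketch of the proposed widening: the fingerprint field holds 40 bytes
// instead of 20, so a 40-character input no longer fails to decode.
struct IdentityInfo {
    pgp_fingerprint: Option<[u8; 40]>, // was Option<[u8; 20]>
}

fn set_fingerprint(chars: &str) -> Result<IdentityInfo, ()> {
    // Accept exactly 40 bytes of input; anything else is rejected,
    // mirroring the fixed-size decoding in the error above.
    let bytes: [u8; 40] = chars.as_bytes().try_into().map_err(|_| ())?;
    Ok(IdentityInfo { pgp_fingerprint: Some(bytes) })
}

fn main() {
    // A 40-character fingerprint now decodes instead of failing.
    let fp = "0123456789ABCDEF0123456789ABCDEF01234567";
    assert!(set_fingerprint(fp).is_ok());
    // An input of a different length is rejected by this sketch.
    assert!(set_fingerprint("0123456789ABCDEF0123").is_err());
}
```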

Drawbacks

    No drawbacks have been identified.

Testing, Security, and Privacy

Implementations would be tested for adherence by checking that 40 bytes/chars PGP Fingerprints are supported.

No effect on security or privacy beyond what already exists has been identified.

    No implementation pitfalls have been identified.

Performance, Ergonomics, and Compatibility

Performance

It would be an optimization, since the associated interfaces exposed to developers and end-users could start being used.

    To minimize additional overhead the proposal suggests a 40 bytes/chars limit since that would at least provide support for PGP Fingerprints, satisfying the solution requirements.

Ergonomics

    No potential ergonomic optimizations have been identified.

Compatibility

    Updates to Polkadot.js Apps, API and its documentation and those referring to it may be required.

Prior Art and References

    No prior articles or references.

Unresolved Questions

    No further questions at this stage.

    Relates to RFC entitled "Increase maximum length of identity raw data values from 32 bytes".

Authors: Luke Schoen

Summary

    This proposes to require a slashable deposit in the broker pallet when initially purchasing or renewing Bulk Coretime or Instantaneous Coretime cores.

    Additionally, it proposes to record a reputational status based on the behavior of the purchaser, as it relates to their use of Kusama Coretime cores that they purchase, and to possibly reserve a proportion of the cores for prospective purchasers that have an on-chain identity.

Motivation

    Background

There are sales of Kusama Coretime cores that are scheduled to occur later this month by Coretime Marketplace Lastic.xyz initially in limited quantities, and potentially also by RegionX in future that is subject to their Polkadot referendum #582. This poses a risk in that some Kusama Coretime core purchasers may buy Kusama Coretime cores when they have no intention of actually placing a workload on them or leasing them out, which would prevent those that wish to purchase and actually use Kusama Coretime cores from being able to use any cores at all.

    Problem


Reputation. To disincentivise certain behaviours, a reputational status indicator could be used to record the historic behaviour of the purchaser and whether on-chain judgement has determined they have adequately rectified that behaviour, as it relates to their usage of Kusama Coretime cores that they purchase.
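The deposit-and-reputation idea could be modeled as follows. Every name and number in this sketch is hypothetical, not part of the broker pallet:

```rust
// Toy model of the proposal: a deposit held at purchase is slashed, and
// reputation lowered, if the core ran no workload and was not leased out.
struct Purchaser {
    deposit: u128,
    reputation: i32, // lowered after misuse, restorable by on-chain judgement
}

/// Returns the slashed amount (zero if the core was actually used).
fn slash_if_idle(p: &mut Purchaser, core_was_used: bool) -> u128 {
    if core_was_used {
        return 0;
    }
    let slashed = p.deposit;
    p.deposit = 0;
    p.reputation -= 1;
    slashed
}

fn main() {
    let mut p = Purchaser { deposit: 100, reputation: 0 };
    // A used core: nothing is slashed, reputation untouched.
    assert_eq!(slash_if_idle(&mut p, true), 0);
    // An idle core: the full deposit is slashed and reputation drops.
    assert_eq!(slash_if_idle(&mut p, false), 100);
    assert_eq!(p.reputation, -1);
}
```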

Stakeholders

Drawbacks

Performance

The slashable deposit, if set too high, may result in an economic impact where fewer Kusama Coretime cores are purchased.

Testing, Security, and Privacy

    Lack of a slashable deposit in the Broker pallet is a security concern, since it exposes Kusama Coretime sales to potential abuse.

    Reserving a proportion of Kusama Coretime sales cores for those with on-chain identities should not be to the exclusion of accounts that wish to remain anonymous or cause cores to be wasted unnecessarily. As such, if cores that are reserved for on-chain identities remain unsold then they should be released to anonymous accounts that are on a waiting list.

    No implementation pitfalls have been identified.

Performance, Ergonomics, and Compatibility

Performance

It should improve performance by reducing the potential for state bloat: requiring a slashable deposit, and putting purchasers' reputations at risk, lowers the likelihood of undesirable Kusama Coretime sales activity from purchasers who would waste or misuse cores.

    The solution proposes to minimize the risk of some Kusama Coretime cores not even being used or leased to perform any tasks at all.

    It will be important to monitor and manage the slashable deposits, purchaser reputations, and utilization of the proportion of cores that are reserved for accounts with an on-chain identity.

Ergonomics

The mechanism for setting a slashable deposit amount should avoid undue complexity for users.

Compatibility

    Updates to Polkadot.js Apps, API and its documentation and those referring to it may be required.

Prior Art and References

    Prior Art

    No prior articles.

Unresolved Questions

    None

    None

Authors: Aurora Poppyseed, Philip Lucsok

Summary

    This RFC proposes the addition of a secondary market feature to either the broker pallet or as a separate pallet maintained by Lastic, enabling users to list and purchase regions. This includes creating, purchasing, and removing listings, as well as emitting relevant events and handling associated errors.

Motivation

    Currently, the broker pallet lacks functionality for a secondary market, which limits users' ability to freely trade regions. This RFC aims to introduce a secure and straightforward mechanism for users to list regions they own for sale and allow other users to purchase these regions.

    While integrating this functionality directly into the broker pallet is one option, another viable approach is to implement it as a separate pallet maintained by Lastic. This separate pallet would have access to the broker pallet and add minimal functionality necessary to support the secondary market.

    Adding smart contracts to the Coretime chain could also address this need; however, this process is expected to be lengthy and complex. We cannot afford to wait for this extended timeline to enable basic secondary market functionality. By proposing either integration into the broker pallet or the creation of a dedicated pallet, we can quickly enhance the flexibility and utility of the broker pallet, making it more user-friendly and valuable.
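A minimal sketch of the listing lifecycle (create, purchase, remove) under the separate-pallet approach. All types and methods here are hypothetical stand-ins for what a real pallet would declare as storage items and extrinsics:

```rust
use std::collections::BTreeMap;

// Toy model of the proposed secondary market: list a region, purchase it,
// or remove the listing. Real pallet logic would also move balances,
// transfer the region, and emit events.
type RegionId = u32;
type AccountId = u8;

struct Listing {
    seller: AccountId,
    price: u128,
}

#[derive(Default)]
struct Market {
    listings: BTreeMap<RegionId, Listing>,
}

impl Market {
    fn list(&mut self, region: RegionId, seller: AccountId, price: u128) {
        self.listings.insert(region, Listing { seller, price });
    }
    /// Returns the (seller, price) to settle, or None when the region is
    /// not listed (the "no such listing" error case).
    fn purchase(&mut self, region: RegionId) -> Option<(AccountId, u128)> {
        self.listings.remove(&region).map(|l| (l.seller, l.price))
    }
    fn unlist(&mut self, region: RegionId) -> bool {
        self.listings.remove(&region).is_some()
    }
}

fn main() {
    let mut m = Market::default();
    m.list(7, 1, 500);
    assert_eq!(m.purchase(7), Some((1, 500)));
    assert_eq!(m.purchase(7), None); // already sold
    assert!(!m.unlist(7)); // nothing left to remove
}
```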

Stakeholders

    Primary stakeholders include:

Explanation

    This RFC introduces the following key features:

Drawbacks

The main drawback of adding the additional complexity directly to the broker pallet is the potential increase in maintenance overhead. Therefore, we propose adding the additional functionality as a separate pallet on the Coretime chain. To take the pressure off implementing these features, the implementation, along with unit tests, would be taken care of by Lastic (Aurora Makovac, Philip Lucsok).

    There are potential risks of security vulnerabilities in the new market functionalities, such as unauthorized region transfers or incorrect balance adjustments. Therefore, extensive security measures would have to be implemented.

Testing, Security, and Privacy

    Testing

Drawbacks

    There are several drawbacks to consider:

Testing, Security, and Privacy

    Testing