RFCs detailing proposed changes to the technical implementation of the Polkadot network.
None
It is possible we would like to add a system parameter for the rate of change of the voting/delegation system. This could prevent wild swings in the voter preferences function and motivate/shield delegates by solidifying their positions over some amount of time. However, it's unclear that this would be valuable or even desirable.
Table of Contents

| Start Date | 25th of August 2025 |
| Description | Multi-Slot AURA for System Parachains |
| Authors | bhargavbh, burdges, AlistairStewart |
This RFC proposes a modification to the AURA round-robin block production mechanism for system parachains (e.g. Polkadot Hub). The proposed change increases the number of consecutive block production slots assigned to each collator from the current single-slot allocation to a configurable value, initially set at four. This modification aims to enhance censorship resistance by mitigating data-withholding attacks.
The Polkadot Relay Chain guarantees the safety of parachain blocks, but it does not provide explicit guarantees for liveness or censorship resistance. With the planned migration of core Relay Chain functionalities—such as Balances, Staking, and Governance—to the Polkadot Hub system parachain in early November 2025, it becomes critical to establish a mechanism for achieving censorship resistance for these parachains without compromising throughput. For example, if governance functionality is migrated to Polkadot-Hub, malicious collators could systematically censor aye votes for a Relay Chain runtime upgrade, potentially altering the referendum's outcome. This demonstrates that censorship attacks on a system parachain can have a direct and undesirable impact on the security of the Relay Chain. This proposal addresses such censorship vulnerabilities by modifying the AURA block production mechanism utilized by system parachain collators, with minimal honesty assumptions on the collators.
This analysis of censorship resistance for AURA-based parachains operates under the following assumptions:
Collator Honesty: The model assumes the presence of at least one honest collator. We intentionally chose the most relaxed security assumption as collators are not slashable (unlike validators). Note that all system parachains use AURA via the Aura-Ext pallet.
Backer Honesty: The backer assigned to a block candidate is assumed to be honest. This is a reasonable assumption given 2/3rd honesty on the relay chain and that backers are assigned randomly by ELVES. Additionally, we assume that backers are responsible for disbursing the withheld block to the victim collators. Pre-PVFs can help improve the resilience of backers against DoS attacks: the pre-PVF lets backers check slot ownership, so backers can filter out spamming collators at this stage. However, pre-PVFs have not yet been implemented. The stronger assumption that the backer disburses the block is only needed for efficiency and is not essential for censorship resistance itself (i.e. the collator can always reconstruct from the availability layer).
Availability Layer: We also assume that the availability layer is robust and a collator can fetch the latest parablock (header and body) directly from the availability layer (or the backer) in a reasonable time, i.e., <6s from backer and <18s from availability layer provided by ELVES.
Scope: We focus mainly on honest collators' ability to produce and get their blocks backed, rather than censorship at the transaction level. Ideally, we want to achieve the property that honest collators eventually get their blocks backed even if there is a slight delay (and provide a provable bound on this delay).
The current AURA mechanism, which assigns a single block production slot per collator, is vulnerable to data-withholding attacks. A malicious collator can strategically produce a block and then selectively withhold it from subsequent collators. This can prevent honest collators from building their blocks in a timely manner, effectively censoring their block production.
Consider 3 collators A, B and C assigned to consecutive slots by the AURA mechanism. If A and C conspire to censor collator B, i.e., to prevent B's block from getting backed, they can execute the following attack: A produces block $b_A$ and submits it to the backers but selectively withholds $b_A$ from B. Then C builds on top of $b_A$ and gets its block in before B can recover $b_A$ from the availability layer and build on top of it.
This proposal modifies the AURA round-robin mechanism to assign $x$ consecutive slots to each collator. The specific value of $x$ is contingent upon the asynchronous backing parameters of the system parachain and will be derived using a generic formula provided in this document. The collator selected by AURA will be responsible for producing $x$ consecutive blocks. This modification will require corresponding adjustments to the AURA authorship checks within the PVF (Parachain Validation Function). For the current configuration of Polkadot Hub, $x=4$.
The number of consecutive slots to be assigned to ensure AURA's censorship resistance depends on Async Backing Parameters like unincluded_segment_length. We now describe our approach for deriving $x$ based on the parameters of async backing and other variables such as block production time and availability-layer latency. The relevant values can then be plugged in to obtain $x$ for any system parachain.
Clearly, the number of consecutive slots ($x$) in the round-robin is lower-bounded by the time required to reconstruct the previous block from the availability layer ($b$) plus the block building time ($a$). Hence, we need to set $x$ such that $x\geq a+b$. But with async backing, a malicious collator can repeatedly withhold its block and front-run the honest collator just in time, once for each block of the unincluded segment. Hence, $x\geq (a+b)\cdot m$ is sufficient, where $m$ is the max allowed candidate depth (unincluded segment allowed).
Independently, there is a check on the relay chain which filters out parablocks anchoring to very old relay_parents in verify_backed_candidates. Any parablock which is anchored to a relay parent older than the oldest element in allowed_relay_parents gets rejected. Hence, the malicious collator cannot front-run and censor the subsequent collator after this delay, as the parablock is no longer valid. The update of allowed_relay_parents occurs in process_inherent_data, where the buffer length of AllowedRelayParents is set by the scheduler parameter lookahead (set to 3 by default). Therefore, the async_backing delay (asyncdelay) tolerated by the relay chain backers is $3 \cdot 6s = 18s$. Hence, the number of consecutive slots is the minimum of the above two values:
$$x \geq \min\left((a+b)\cdot m,\ a + b + \mathit{asyncdelay}\right)$$
where $m$ is the max_candidate_depth (or unincluded segment as seen from the collator's perspective).
Assuming the previous block data can be fetched from backers, we comfortably have $a+b \leq 6s$, i.e. block building plus reconstruction time is < 6s. With the current asyncdelay of 18s, it suffices to set $x$ to 4. If the max_candidate_depth ($m$) for Polkadot Hub is set to $m\leq3$, this reduces (improves) $x$ from 4 to $m$. Note that a channel would have to be provided for collators to fetch blocks from backers as the preferred option and only recover from the availability layer as the fail-safe option.
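As a sanity check, the bound can be evaluated numerically. The sketch below is illustrative only (not Polkadot-SDK code); the split of $a+b$ into 2s + 4s is an assumed example, while the 18s asyncdelay and 6s slot time come from the text.

```rust
// Illustrative evaluation of x >= min((a + b) * m, a + b + asyncdelay),
// converted into 6-second slots. Not Polkadot-SDK code; `a` and `b` are
// assumed example values, the other constants come from the RFC text.
fn consecutive_slots(a_secs: u64, b_secs: u64, m: u64, async_delay_secs: u64, slot_secs: u64) -> u64 {
    let bound_secs = core::cmp::min((a_secs + b_secs) * m, a_secs + b_secs + async_delay_secs);
    // Round up to a whole number of slots.
    (bound_secs + slot_secs - 1) / slot_secs
}

fn main() {
    // Polkadot Hub example: a + b = 6s (building + reconstruction), asyncdelay
    // = 3 * 6s = 18s, so the second term binds: x = 24s / 6s = 4 slots.
    assert_eq!(consecutive_slots(2, 4, 24, 18, 6), 4);
    // With max_candidate_depth m = 3, the first term binds instead: x = 3.
    assert_eq!(consecutive_slots(2, 4, 3, 18, 6), 3);
    println!("ok");
}
```

The second case illustrates the remark above: lowering $m$ to 3 shrinks $x$ from 4 to $m$.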
The proposed changes are security critical and mitigate censorship attacks on core functionality like balances, staking and governance on Polkadot Hub. This approach is compatible with Slot-Based collation and the currently deployed FixedVelocityConsensusHook. Further analysis is needed to integrate with custom ConsensusHooks that leverage Elastic Scaling.
Multi-slot collation, however, is vulnerable to liveness attacks: adversarial collators can stall liveness by not showing up, although they then also lose out on block production rewards. The number of missed blocks due to collators skipping slots is the same as in the current implementation; only the distribution of missed slots changes (they are chunked together instead of being evenly distributed). Secondly, when the ratio of adversarial (censoring) collators $\alpha$ is high (close to 1), the ratio of uncensored blocks to all blocks produced drops to $(1-\alpha)/(x\alpha)$. For more practical lower values of $\alpha<1/4$, the ratio of uncensored to all blocks is almost 1.
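The ratio claim can be checked numerically. The helper below is a sketch; the cap at 1.0 is our own assumption for the small-$\alpha$ regime, where the high-$\alpha$ formula exceeds 1.

```rust
// Sketch of the censorship-ratio claim: with adversarial fraction `alpha` and
// `x` consecutive slots per collator, the RFC gives the uncensored-to-total
// block ratio as (1 - alpha) / (x * alpha) for large alpha. The cap at 1.0 is
// an added assumption for small alpha, where the expression exceeds 1.
fn uncensored_ratio(alpha: f64, x: f64) -> f64 {
    ((1.0 - alpha) / (x * alpha)).min(1.0)
}

fn main() {
    // alpha close to 1: almost every honest block is censored.
    assert!(uncensored_ratio(0.9, 4.0) < 0.03);
    // alpha < 1/4: essentially all honest blocks get through.
    assert!(uncensored_ratio(0.2, 4.0) > 0.99);
    println!("ok");
}
```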
The latency for backing of blocks is affected as follows:
Effective multi-slot collation requires that collators be able to prioritize transactions that have been targeted for censorship. The implementation should incorporate a framework for priority transactions (e.g., governance votes, election extrinsics) to ensure that such transactions are included in the uncensored blocks.
This RFC is related to RFC-7, which details the selection mechanism for System Parachain Collators. In general, a more robust collator selection mechanism that reduces the proportion of malicious actors would directly benefit the effectiveness of the ideas presented in this RFC.
A resilient mechanism is needed for prioritising transactions in block production for collators that are actively targeted for censorship. There are two potential approaches:
Table of Contents
pUSD (Polkadot USD) is a new overcollateralised debt token: a stablecoin deployed on Asset Hub and backed purely by DOT. The implementation follows the Honzon protocol pioneered by Acala. In addition, this RFC introduces an opt-in pUSD Savings module that lets holders lock pUSD to earn interest funded from stability fees.
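The core CDP arithmetic of a Honzon-style design can be sketched as follows; the 150% liquidation threshold, the prices, and all names are illustrative assumptions, not parameters or APIs from this RFC.

```rust
// Minimal sketch of overcollateralised-CDP accounting in a Honzon-style
// stablecoin. The 150% liquidation threshold, prices, and function names are
// illustrative assumptions, not the actual pallet API or RFC parameters.
const LIQUIDATION_RATIO: f64 = 1.5;

fn collateral_ratio(dot_locked: f64, dot_price_usd: f64, pusd_debt: f64) -> f64 {
    dot_locked * dot_price_usd / pusd_debt
}

fn is_liquidatable(dot_locked: f64, dot_price_usd: f64, pusd_debt: f64) -> bool {
    collateral_ratio(dot_locked, dot_price_usd, pusd_debt) < LIQUIDATION_RATIO
}

fn main() {
    // Locking 100 DOT at $5 against 200 pUSD of debt: 250% collateralised.
    assert_eq!(collateral_ratio(100.0, 5.0, 200.0), 2.5);
    assert!(!is_liquidatable(100.0, 5.0, 200.0));
    // If DOT drops to $2.50, the ratio falls to 125% and the CDP is at risk.
    assert!(is_liquidatable(100.0, 2.5, 200.0));
    println!("ok");
}
```

Overcollateralisation means the position stays solvent through moderate DOT price drops; liquidation kicks in only once the ratio falls below the threshold.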
"Polkadot Hub should have a native DOT backed stable coin because people need it and otherwise we will haemorrhage benefits, liquidity and/or security." - Gav
This proposal introduces necessary computational overhead to Asset Hub for CDP management, liquidation monitoring, and Savings accounting. The impact is minimized through:
The implementation follows the Honzon protocol pioneered by Acala for their aUSD stablecoin system. Key references include:
Introduce new host functions allowing runtimes to generate BLS12-381 keys, signatures, and proofs of possession.
BLS implementation and initial host functions implementation are authored by Seyed Hosseini and co-authored by Davide Galassi.
This RFC respects the runtime-side memory allocation strategy that will be introduced by RFC-145.
New functions are required to equip BEEFY with BLS signatures, which are essential for the accountable light client protocol.
Runtime developers, who will be able to use the new signature types.
This RFC proposes introducing new host functions as follows.
This RFC proposes to change the duration of the Confirmation Period for the Big Tipper and Small Tipper tracks in Polkadot OpenGov:
Big Tipper: 1 Hour -> 1 Day
Currently, these are the durations of treasury tracks in Polkadot OpenGov. Confirmation periods for the Spender tracks were adjusted based on RFC20 and its related conversation.
| Track Description | Confirmation Period Duration |
|---|---|
| Treasurer | 7 Days |
| Authors | Gavin Wood |
This proposes a periodic, sale-based method for assigning Polkadot Coretime, the analogue of "block space" within the Polkadot Network. The method takes into account the need for long-term capital expenditure planning for teams building on Polkadot, yet also provides a means to allow Polkadot to capture long-term value in the resource which it sells. It supports the possibility of building rich and dynamic secondary markets to optimize resource allocation and largely avoids the need for parameterization.
The Polkadot Ubiquitous Computer, or just Polkadot UC, represents the public service provided by the Polkadot Network. It is a trust-free, WebAssembly-based, multicore, internet-native omnipresent virtual machine which is highly resilient to interference and corruption.
The present system of allocating the limited resources of the Polkadot Ubiquitous Computer is through a process known as parachain slot auctions. This is a parachain-centric paradigm whereby a single core is long-term allocated to a single parachain which itself implies a Substrate/Cumulus-based chain secured and connected via the Relay-chain. Slot auctions are on-chain candle auctions which proceed for several days and result in the core being assigned to the parachain for six months at a time up to 24 months in advance. Practically speaking, we only see two year periods being bid upon and leased.
Furthermore, the design SHOULD be implementable and deployable in a timely fashion; three months from the acceptance of this RFC should not be unreasonable.
Primary stakeholder sets are:
No specific considerations.
Parachains already deployed into the Polkadot UC must have a clear plan of action to migrate to an agile Coretime market.
While this proposal does not introduce documentable features per se, adequate documentation must be provided to potential purchasers of Polkadot Coretime. This SHOULD include any alterations to the Polkadot-SDK software collection.
Robert Habermeier initially wrote on the subject of a blockspace-centric Polkadot in the article Polkadot Blockspace over Blockchains. While not going into details, the article served as an early reframing piece for moving beyond one-slot-per-chain models and building out secondary market infrastructure for resource allocation.
Table of Contents
In the Agile Coretime model of the Polkadot Ubiquitous Computer, as proposed in RFC-1 and RFC-3, it is necessary for the allocating parachain (envisioned to be one or more pallets on a specialised Brokerage System Chain) to communicate the core assignments to the Relay-chain, which is responsible for ensuring those assignments are properly enacted.
This is a proposal for the interface which will exist around the Relay-chain in order to communicate this information and instructions.
The background motivation for this interface is splitting out coretime allocation functions and secondary markets from the Relay-chain onto System parachains. A well-understood and general interface is necessary for ensuring the Relay-chain receives coretime allocation instructions from one or more System chains without introducing dependencies on the implementation details of either side.
Primary stakeholder sets are:
For request_revenue_info, a successful request should be possible if when is no less than the Relay-chain block number on arrival of the message less 100,000.
For assign_core, a successful request should be possible if begin is no less than the Relay-chain block number on arrival of the message plus 10 and workload contains no more than 100 items.
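The two quantitative constraints above can be expressed directly. The function and constant names below are illustrative; only the numeric bounds (100,000 blocks of history, a 10-block lead time, 100 workload items) come from the text.

```rust
// Sketch of the validity bounds stated above for the coretime interface.
// Names are illustrative; the numeric bounds come from the RFC text.
const REVENUE_HISTORY_BLOCKS: u32 = 100_000;
const ASSIGN_CORE_LEAD_BLOCKS: u32 = 10;
const MAX_WORKLOAD_ITEMS: usize = 100;

// `when` must not be older than 100,000 blocks before the message arrives.
fn revenue_request_valid(when: u32, arrival_block: u32) -> bool {
    when >= arrival_block.saturating_sub(REVENUE_HISTORY_BLOCKS)
}

// `begin` must be at least 10 blocks ahead, with a bounded workload.
fn assign_core_valid(begin: u32, arrival_block: u32, workload_len: usize) -> bool {
    begin >= arrival_block + ASSIGN_CORE_LEAD_BLOCKS && workload_len <= MAX_WORKLOAD_ITEMS
}

fn main() {
    assert!(revenue_request_valid(950_000, 1_000_000));
    assert!(!revenue_request_valid(800_000, 1_000_000));
    assert!(assign_core_valid(1_000_010, 1_000_000, 100));
    assert!(!assign_core_valid(1_000_009, 1_000_000, 100));
    assert!(!assign_core_valid(1_000_010, 1_000_000, 101));
    println!("ok");
}
```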
No specific considerations.
Standard Polkadot testing and security auditing applies.
RFC-3 proposes a means of implementing the high-level allocations within the Relay-chain.
None at present.
None.
Table of Contents
As core functionality moves from the Relay Chain into system chains, so increases the reliance on the liveness of these chains for the use of the network. It is not economically scalable, nor necessary from a game-theoretic perspective, to pay collators large rewards. This RFC proposes a mechanism -- part technical and part social -- for ensuring reliable collator sets that are resilient to attempts to stop any subsystem of the Polkadot protocol.
In order to guarantee access to Polkadot's system, the collators on its system chains must propose blocks (provide liveness) and allow all transactions to eventually be included. That is, some collators may censor transactions, but there must exist one collator in the set who will include a given transaction […] to censor any subset of transactions.
The vast majority of cases can be covered by unit testing. Integration test should ensure that the
Collator Selection UpdateOrigin, which has permission to modify the Invulnerables and desired
number of Candidates, can handle updates over XCM from the system's governance location.
This proposal has very little impact on most users of Polkadot, and should improve the performance of system chains by reducing the number of missed blocks.
This RFC is compatible with the existing implementation and can be handled via upgrades and migration.
The full nodes of the Polkadot peer-to-peer network maintain a distributed hash table (DHT), which is currently used for full node discovery and validator discovery purposes.
This RFC proposes to extend this DHT to be used to discover full nodes of the parachains of Polkadot.
The maintenance of bootnodes has long been an annoyance for everyone.
When a bootnode is newly-deployed or removed, every chain specification must be updated in order to take the update into account. This has led to various non-optimal solutions, such as pulling chain specifications from GitHub repositories. When it comes to RPC nodes, UX developers often have trouble finding up-to-date addresses of parachain RPC nodes. With the ongoing migration from RPC nodes to light clients, similar problems would happen with chain specifications as well.
Because the list of bootnodes in chain specifications is so annoying to modify, the consequence is that the number of bootnodes is rather low (typically between 2 and 15). In order to better resist downtimes and DoS attacks, a better solution would be to use every node of a certain chain as potential bootnode, rather than special-casing some specific nodes.
While this RFC doesn't solve these problems for relay chains, it aims to solve them for parachains by storing the list of all the full nodes of a parachain on the relay chain DHT.
Assuming that this RFC is implemented, and that light clients are used, deploying a parachain wouldn't require more work than registering it onto the relay chain and starting the collators. There wouldn't be any need for special infrastructure nodes anymore.
This RFC has been opened on my own initiative because I think that this is a good technical solution to a usability problem that many people are encountering and that they don't realize can be solved.
The content of this RFC only applies for parachains and parachain nodes that are "Substrate-compatible". It is in no way mandatory for parachains to comply with this RFC.
For this reason, an attacker can abuse this mechanism by randomly generating libp2p PeerIds until they find the 20 entries closest to the key representing the target parachain. They are then in control of the parachain bootnodes.
Because the key changes periodically and isn't predictable, and assuming that the Polkadot DHT is sufficiently large, it is not realistic for an attack like this to be maintained in the long term.
Furthermore, parachain clients are expected to cache a list of known good nodes on their disk. If the mechanism described in this RFC went down, it would only prevent new nodes from accessing the parachain, while clients that have connected before would not be affected.
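For context, provider records in Kademlia are stored on the nodes whose IDs are closest to the key under the XOR metric, which is exactly what the PeerId-grinding attack above exploits. A toy illustration, using 4-byte IDs as stand-ins for real PeerIds:

```rust
// Toy illustration of Kademlia's XOR distance metric, which determines the
// ~20 nodes that store a provider record. 4-byte IDs stand in for PeerIds.
fn xor_distance(a: [u8; 4], b: [u8; 4]) -> u32 {
    u32::from_be_bytes(a) ^ u32::from_be_bytes(b)
}

fn main() {
    let key = [0xAB, 0xCD, 0x00, 0x00];
    // An attacker grinds random IDs until one lands very close to `key`...
    let ground = [0xAB, 0xCD, 0x00, 0x01];
    // ...so it outranks an honest node whose ID is unrelated to the key.
    let honest = [0x12, 0x34, 0x56, 0x78];
    assert!(xor_distance(key, ground) < xor_distance(key, honest));
    println!("ok");
}
```

Because the key is periodically rotated and unpredictable, grinding must be redone each period, which is what makes the attack hard to sustain.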
The DHT mechanism generally has a low overhead, especially given that publishing providers is done only every 24 hours.
Doing a Kademlia iterative query then sending a provider record shouldn't take more than around 50 kiB in total of bandwidth for the parachain bootnode.
Irrelevant.
Irrelevant.
None.
While it fundamentally doesn't change much to this RFC, using BabeApi_currentEpoch and BabeApi_nextEpoch might be inappropriate. I'm not familiar enough with good practices within the runtime to have an opinion here. Should it be an entirely new pallet?
Improve the networking messages that query storage items from the remote, in order to reduce the bandwidth usage and number of round trips of light clients.
Clients on the Polkadot peer-to-peer network can be divided into two categories: full nodes and light clients. So-called full nodes are nodes that store the content of the chain locally on their disk, while light clients are nodes that don't. In order to access for example the balance of an account, a full node can do a disk read, while a light client needs to send a network message to a full node and wait for the full node to reply with the desired value. This reply is in the form of a Merkle proof, which makes it possible for the light client to verify the exactness of the value.
Unfortunately, this network protocol is suffering from some issues:
Once Polkadot and Kusama have transitioned to state_version = 1, which modifies the format of the trie entries, it will be possible to generate Merkle proofs that contain only the hashes of values in the storage. Thanks to this, it is already possible to prove the existence of a key without sending its entire value (only its hash), or to prove that a value has or hasn't changed between two blocks (by sending just their hashes).
Thus, the only reason the aforementioned issues exist is that the existing networking messages don't give the querier the possibility to query this. This is what this proposal aims to fix.
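The hash-only comparison described above can be sketched as follows. This is an illustration only: the stand-in digest uses std's hasher, whereas the real trie uses Blake2-256, and the proof format is not the real one.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Stand-in for the trie's value hash (the real trie uses Blake2-256).
fn value_hash(value: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    value.hash(&mut h);
    h.finish()
}

/// With state_version = 1, a proof can carry only a value's hash, so a light
/// client can check whether a storage value changed between two blocks by
/// comparing the two hashes, without downloading either value.
pub fn value_changed(hash_at_block_a: u64, hash_at_block_b: u64) -> bool {
    hash_at_block_a != hash_at_block_b
}
```

The point of the sketch is the bandwidth saving: the client compares two fixed-size digests instead of two full values.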
This is the continuation of https://github.com/w3f/PPPs/pull/10, which itself is the continuation of https://github.com/w3f/PPPs/pull/5.
The protobuf schema of the networking protocol can be found here: https://github.com/paritytech/substrate/blob/5b6519a7ff4a2d3cc424d78bc4830688f3b184c0/client/network/light/src/schema/light.v1.proto
The main security consideration concerns the size of replies and the resources necessary to generate them. It is for example easily possible to ask for all keys and values of the chain, which would take a very long time to generate. Since responses to this networking protocol have a maximum size, the replier should truncate proofs that would lead to the response being too large. Note that it is already possible to send a query that would lead to a very large reply with the existing network protocol. The only thing that this proposal changes is that it would make it less complicated to perform such an attack.
Implementers of the replier side should be careful to detect early on when a reply would exceed the maximum reply size, rather than unconditionally generating a reply, as this could consume a very large amount of CPU, disk I/O, and memory. Existing implementations might currently be accidentally protected from such an attack thanks to the fact that requests have a maximum size, and thus that the list of keys in the query was bounded. After this proposal, this accidental protection would no longer exist.
Malicious server nodes might truncate Merkle proofs even when they don't strictly need to, and it is not possible for the client to (easily) detect this situation. However, malicious server nodes can already do undesirable things such as throttle down their upload bandwidth or simply not respond. There is no need to handle unnecessarily truncated Merkle proofs any differently than a server simply not answering the request.
It is unclear to the author of the RFC what the performance implications are. Servers are supposed to have limits to the amount of resources they use to respond to requests, and as such the worst that can happen is that light client requests become a bit slower than they currently are.
Irrelevant.
The prior networking protocol is maintained for now. The older version of this protocol could be removed in the distant future.
None. This RFC is a clean-up of an existing mechanism.
None
The Polkadot UC will generate revenue from the sale of available Coretime. The question then arises: how should we handle these revenues? Broadly, there are two reasonable paths – burning the revenue and thereby removing it from total issuance, or diverting it to the Treasury. This Request for Comment (RFC) presents arguments favoring burning as the preferred mechanism for handling revenues from Coretime sales.
How to handle the revenue accrued from Coretime sales is an important economic question that influences the value of DOT and should be properly discussed before deciding on either of the options. Now is the best time to start this discussion.
Polkadot DOT token holders.
This RFC discusses the potential benefits of burning the revenue accrued from Coretime sales instead of diverting it to the Treasury. The following arguments support this position.
Since the introduction of the Collectives parachain, many groups have expressed interest in forming new -- or migrating existing groups into -- on-chain collectives. While adding a new collective is relatively simple from a technical standpoint, the Fellowship will need to merge new pallets into the Collectives parachain for each new collective. This RFC proposes a means for the network to ratify a new collective, thus instructing the Fellowship to instate it in the runtime.
Many groups have expressed interest in representing collectives on-chain. Some of these include:
No impacts.
Generally, all new collectives will be in the Collectives parachain. Thus, performance impacts should strictly be limited to this parachain and not affect others. As the majority of logic for collectives is generalized and reusable, we expect most collectives to be instances of similar subsets of modules. That is, new collectives should generally be compatible with UIs and other services that provide collective-related functionality, with few modifications needed to support new ones.
The launch of the Technical Fellowship, see the initial forum post.
Introduces breaking changes to the Core runtime API by letting Core::initialize_block return an enum. The version of Core is bumped from 4 to 5.
The main feature that motivates this RFC is Multi-Block-Migrations (MBM); these make it possible to split a migration over multiple blocks.
Further, it would be nice not to hinder the possibility of implementing a new hook, poll, that runs at the beginning of the block when there are no MBMs and has access to AllPalletsWithSystem. This hook could then be used to replace the use of on_initialize and on_finalize for non-deadline-critical logic.
In a similar fashion, it should not hinder the future addition of a System::PostInherents callback that always runs after all inherents were applied.
The new logic of initialize_block can be tested by checking that the block-builder will skip transactions when OnlyInherents is returned.
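The block-builder behaviour described above can be sketched as follows. The RFC itself only names the OnlyInherents variant; the enum name and the other variant name here are assumptions for illustration, not the normative API.

```rust
/// Sketch of the enum returned by `Core::initialize_block`. Only
/// `OnlyInherents` is named in the text; the rest is assumed.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum ExtrinsicInclusionMode {
    /// Ordinary transactions may be included in this block.
    AllExtrinsics,
    /// Only inherents are allowed, e.g. while a multi-block migration runs.
    OnlyInherents,
}

/// Hypothetical block-builder decision: skip ordinary transactions whenever
/// `initialize_block` signalled `OnlyInherents`.
pub fn include_extrinsic(mode: ExtrinsicInclusionMode, is_inherent: bool) -> bool {
    match mode {
        ExtrinsicInclusionMode::AllExtrinsics => true,
        ExtrinsicInclusionMode::OnlyInherents => is_inherent,
    }
}
```

A test along the lines suggested in the text would assert that non-inherent extrinsics are rejected exactly when OnlyInherents is returned.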
Security: n/a
Privacy: n/a
The performance overhead is minimal in the sense that no clutter was added after fulfilling the
requirements. The only performance difference is that initialize_block also returns an enum that needs to be passed through the WASM boundary. This should be negligible.
The advice here is OPTIONAL and outside of the RFC. To not degrade user experience, it is recommended to ensure that an updated node can still import historic blocks.
The RFC is currently being implemented in polkadot-sdk#1781 (formerly substrate#14275). Related issues and merge requests:
This RFC proposes a set of changes to the parachain lock mechanism. The goal is to allow a parachain manager to self-service the parachain without root track governance action.
This is achieved by removing existing lock conditions and only locking a parachain when:
The manager of a parachain has permission to manage the parachain when the parachain is unlocked. Parachains are by default locked when onboarded to a slot. This requires that the parachain wasm/genesis be valid; otherwise, a root track governance action on the relaychain is required to update the parachain.
The current reliance on root track governance actions for managing parachains can be time-consuming and burdensome. This RFC aims to address this technical difficulty by allowing parachain managers to take self-service actions, rather than relying on general public voting.
The key scenarios this RFC seeks to improve are:
Only the relaychain Root origin or the parachain itself can unlock the lock.
This creates an issue: if the parachain is unable to produce blocks, the parachain manager is unable to do anything and has to rely on the relaychain Root origin to manage the parachain.
This RFC proposes to change the lock and unlock conditions.
A parachain can be locked only under the following conditions:
This RFC should improve the developer experience for new and existing parachain teams.
This RFC is fully compatible with existing interfaces.
This document proposes a restructuring of the bulk markets in Polkadot's coretime allocation system to improve efficiency and fairness. The proposal suggests splitting the BULK_PERIOD into three consecutive phases: MARKET_PERIOD, RENEWAL_PERIOD, and SETTLEMENT_PERIOD. This structure enables market-driven price discovery through a clearing-price Dutch auction, followed by renewal offers during the RENEWAL_PERIOD.
With all coretime consumers paying a unified price, we propose removing all liquidity restrictions on cores purchased either during the initial market phase or renewed during the renewal phase. This allows a meaningful SETTLEMENT_PERIOD, during which final agreements and deals between coretime consumers can be orchestrated on the social layer—complementing the agility this system seeks to establish.
In the new design, we obtain a uniform price, the clearing_price, which anchors new entrants and current tenants. To complement market-based price discovery, the design includes a dynamic reserve price adjustment mechanism based on actual core consumption. Together, these two components ensure robust price discovery while mitigating price collapse in cases of slight underutilization or collusive behavior.
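The clearing-price idea can be illustrated with a minimal uniform-price auction sketch. The exact rule used by the proposal is not reproduced here; this assumes a common convention (clearing at the highest rejected bid, floored by the reserve price) purely for illustration.

```rust
/// Illustrative clearing-price computation for a uniform-price auction over
/// `cores` identical items: discard bids below the reserve, sort descending,
/// and clear at the highest rejected bid (or the reserve when demand is
/// short). All winners and renewing tenants would then pay this one price.
pub fn clearing_price(mut bids: Vec<u64>, cores: usize, reserve: u64) -> u64 {
    bids.retain(|&b| b >= reserve);
    bids.sort_unstable_by(|a, b| b.cmp(a)); // highest bid first
    if bids.len() > cores {
        bids[cores] // first losing bid sets the price
    } else {
        reserve // undersubscribed: fall back to the reserve price
    }
}
```

The single clearing_price is what anchors both new entrants and renewals in the proposed design; the dynamic reserve adjustment would then move `reserve` between sales based on utilization.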
After exposing the initial system introduced in RFC-1 to real-world conditions, several weaknesses have become apparent. These lie especially in the fact that cores captured at very low prices are removed from the open market and can effectively be retained indefinitely, as renewal costs are minimal. The key issue here is the absence of price anchoring, which results in two divergent price paths: one for the initial purchase on the open market, and another fully deterministic one via the renewal bump mechanism.
This proposal addresses these issues by anchoring all prices to a value derived from the market, while still preserving necessary privileges for current coretime consumers. The goal is to produce robust results across varying demand conditions (low, high, or volatile).
In particular, this proposal introduces the following key changes:
The premise of this proposal is to offer a straightforward design that discovers the price of coretime within a period as a clearing_price. Long-term coretime holders still retain the privilege to keep their cores if they can pay the price discovered by the market (with some premium for that privilege). The proposed model aims to strike a balance between leveraging market forces for allocation while operating within defined bounds. In particular, prices are capped within a BULK_PERIOD, which gives some certainty about prices to existing teams. It must be noted, however, that under high demand, prices could increase exponentially between multiple market cycles. This is a necessary feature to ensure proper price discovery and efficient coretime allocation.
Ultimately, the framework proposed here seeks to adhere to all requirements originally stated in RFC-1.
Primary stakeholder sets are:
This RFC builds extensively on the available ideas put forward in RFC-1.
Additionally, I want to express a special thanks to Samuel Haefner, Shahar Dobzinski, and Alistair Stewart for fruitful discussions and helping me structure my thoughts.
Authors @brenzi for Encointer Association, 8000 Zurich, Switzerland
Summary
Encointer has been a system chain on Kusama since Jan 2022 and has been developed and maintained by the Encointer association. This RFC proposes to treat Encointer like any other system chain and include it in the fellowship repo with this PR.
Motivation
Encointer does not seek to be in control of its runtime repository. As a decentralized system, the fellowship has a more suitable structure to maintain a system chain runtime repo than the Encointer association does.
Also, Encointer aims to update its runtime in batches with other system chains in order to have consistency for interoperability across system chains.
Stakeholders
- Fellowship: Will continue to take on the review and auditing work for the Encointer runtime, but the process is streamlined with other system chains and therefore less time-consuming compared to the separate repo and CI process we currently have.
- Kusama Network: Tokenholders can easily see the changes of all system chains in one place.
Unlike all other system chains, development and maintenance of the Encointer Network is mainly financed by the KSM Treasury and possibly the DOT Treasury in the future. Encointer is dedicated to maintaining its network and runtime code for as long as possible, but there is a dependency on funding which is not in the hands of the fellowship. The only risk in the context of funding, however, is that the Encointer runtime will see less frequent updates if there's less funding.
Testing, Security, and Privacy
No changes to the existing system are proposed. Only changes to how maintenance is organized.
Performance, Ergonomics, and Compatibility
No changes
Prior Art and References
Existing Encointer runtime repo
Unresolved Questions
None identified
Authors Joe Petrowski, Gavin Wood
Summary
The Relay Chain contains most of the core logic for the Polkadot network. While this was necessary
prior to the launch of parachains and development of XCM, most of this logic can exist in
parachains. This is a proposal to migrate several subsystems into system parachains.
Motivation
Polkadot's scaling approach allows many distinct state machines (known generally as parachains) to
operate with common guarantees about the validity and security of their state transitions. Polkadot
provides these common guarantees by executing the state transitions on a strict subset (a backing
blockspace) to the network.
By minimising state transition logic on the Relay Chain by migrating it into "system chains" -- a
set of parachains that, with the Relay Chain, make up the Polkadot protocol -- the Polkadot
Ubiquitous Computer can maximise its primary offering: secure blockspace.
Stakeholders
- Parachains that interact with affected logic on the Relay Chain;
- Core protocol and XCM format developers;
Testing, Security, and Privacy
Standard audit/review requirements apply. More powerful multi-chain integration test tools would be
useful in development.
Performance, Ergonomics, and Compatibility
Describe the impact of the proposal on the exposed functionality of Polkadot.
Performance
This is an optimization. The removal of public/user transactions on the Relay Chain ensures that its
runtimes to recognize the new locations in the network.
Compatibility
Implementing this proposal will require some changes to pallet APIs and/or a pub-sub protocol.
Application developers will need to interact with multiple chains in the network.
Prior Art and References
- Transactionless Relay-chain
- Moving Staking off the Relay Chain
With Staking and Governance off the Relay Chain, this is not an unreasonable next step.
Authors Vedhavyas Singareddi
Summary
At the moment, we have the state_version field on RuntimeVersion that derives which state version is used for the
Storage.
We have a use case where we want the extrinsics root to be derived using StateVersion::V1. Without defining a new field
under RuntimeVersion,
we would like to propose adding system_version, which can be used to derive both the storage and extrinsic state version.
Motivation
Since the extrinsic state version is always StateVersion::V0, deriving the extrinsic root requires the full extrinsic data.
This would be problematic when we need to verify the extrinsics root if the extrinsics are big. This problem is
further explored in https://github.com/polkadot-fellows/RFCs/issues/19
One of the main challenges here is that some extrinsics could be big enough that they cannot be included in the Consensus block due to the Block's weight restriction.
If the extrinsic root is derived using StateVersion::V1, then we do not need to pass the full extrinsic data but rather, at maximum, 32 bytes of extrinsic data.
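The reason only 32 bytes are needed can be sketched as follows: with StateVersion::V1, values longer than 32 bytes are referenced in the trie node by their hash rather than inlined. The digest below is a std-hasher stand-in (the real trie uses Blake2-256), so this illustrates only the size property.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Stand-in 32-byte digest built from std's hasher. Illustration only:
/// the real V1 trie uses Blake2-256.
fn digest32(data: &[u8]) -> [u8; 32] {
    let mut out = [0u8; 32];
    for i in 0..4u64 {
        let mut h = DefaultHasher::new();
        i.hash(&mut h);
        data.hash(&mut h);
        out[(i as usize) * 8..(i as usize) * 8 + 8].copy_from_slice(&h.finish().to_le_bytes());
    }
    out
}

/// With StateVersion::V1 semantics, a value longer than 32 bytes contributes
/// only its 32-byte hash to a proof of the extrinsics root, so the proof
/// cost per extrinsic is bounded regardless of extrinsic size.
pub fn proof_contribution(extrinsic: &[u8]) -> Vec<u8> {
    if extrinsic.len() > 32 {
        digest32(extrinsic).to_vec()
    } else {
        extrinsic.to_vec()
    }
}
```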
Stakeholders
- Technical Fellowship, in its role of maintaining system runtimes.
so that chains know which system_version to use.
Testing, Security, and Privacy
As far as I know, this should not have any impact on security or privacy.
Performance, Ergonomics, and Compatibility
These changes should be compatible with existing chains if they use the state_version value for system_version.
Performance
I do not believe there is any performance hit with this change.
This does not break any exposed APIs.
Compatibility
This change should not break any compatibility.
Prior Art and References
We proposed introducing a similar change by introducing a
parameter to frame_system::Config but did not feel that
is the correct way of introducing this change.
Authors Sebastian Kunert
Summary
This RFC proposes a new host function for parachains, storage_proof_size. It shall provide the size of the currently recorded storage proof to the runtime. Runtime authors can use the proof size to improve block utilization by retroactively reclaiming unused storage weight.
Motivation
The number of extrinsics that are included in a parachain block is limited by two constraints: execution time and proof size. FRAME weights cover both concepts, and block-builders use them to decide how many extrinsics to include in a block. However, these weights are calculated ahead of time by benchmarking on a machine with reference hardware. The execution-time properties of the state-trie and its storage items are unknown at benchmarking time. Therefore, we make some assumptions about the state-trie:
- Trie Depth: We assume a trie depth to account for intermediary nodes.
These pessimistic assumptions lead to an overestimation of storage weight, negatively impacting block utilization on parachains.
In addition, the current model does not account for multiple accesses to the same storage items. While these repetitive accesses will not increase the storage-proof size, the runtime-side weight monitoring will account for them multiple times. Since the proof size is completely opaque to the runtime, we cannot implement retroactive storage-weight correction.
A solution must provide a way for the runtime to track the exact storage-proof size consumed on a per-extrinsic basis.
Stakeholders
- Parachain Teams: They MUST include this host function in their runtime and node.
- Light-client Implementors: They SHOULD include this host function in their runtime and node.
fn ext_storage_proof_size_version_1() -> u64;
}
The host function MUST return an unsigned 64-bit integer value representing the current proof size. In block-execution and block-import contexts, this function MUST return the current size of the proof. To achieve this, parachain node implementors need to enable proof recording for block imports. In other contexts, this function MUST return 18446744073709551615 (u64::MAX), which represents disabled proof recording.
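A typical runtime-side usage pattern is to sample the recorded size before and after an extrinsic and charge only the difference. The following is a sketch with a mocked host function; the constant u64::MAX sentinel is from the text, while the helper names are illustrative, not the real sp_io API.

```rust
/// Sentinel returned by the host function when proof recording is disabled
/// (per the RFC: 18446744073709551615, i.e. u64::MAX).
pub const PROOF_RECORDING_DISABLED: u64 = u64::MAX;

/// Per-extrinsic proof-size accounting: read the recorded proof size before
/// and after applying an extrinsic and charge only the delta. Returns `None`
/// when recording is disabled in the current context.
pub fn proof_size_delta(before: u64, after: u64) -> Option<u64> {
    if before == PROOF_RECORDING_DISABLED || after == PROOF_RECORDING_DISABLED {
        return None; // e.g. offchain context: no proof is being recorded
    }
    Some(after.saturating_sub(before))
}
```

The delta can then be compared against the benchmarked (pessimistic) proof-size weight, and the unused portion reclaimed.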
Performance, Ergonomics, and Compatibility
Performance
Parachain nodes need to enable proof recording during block import to correctly implement the proposed host function. Benchmarking conducted with balance transfers has shown a performance reduction of around 0.6% when proof recording is enabled.
Ergonomics
The host function proposed in this RFC allows parachain runtime developers to keep track of the proof size. Typical usage patterns would be to keep track of the overall proof size or the difference between subsequent calls to the host function.
Compatibility
Parachain teams will need to include this host function to upgrade.
Prior Art and References
- Pull Request including proposed host function: PoV Reclaim (Clawback) Node Side.
- Issue with discussion: [FRAME core] Clawback PoV Weights For Dispatchables
Authors Aurora Poppyseed, Just_Luuuu, Viki Val, Joe Petrowski
Summary
This RFC proposes changing the current deposit requirements on the Polkadot and Kusama Asset Hub for
creating an NFT collection, minting an individual NFT, and lowering its corresponding metadata and
attribute deposits. The objective is to lower the barrier to entry for NFT creators, fostering a
more inclusive and vibrant ecosystem while maintaining network integrity and preventing spam.
Motivation
The current deposit of 10 DOT for collection creation (along with 0.01 DOT for item deposit and 0.2
DOT for metadata and attribute deposits) on the Polkadot Asset Hub and 0.1 KSM on Kusama Asset Hub
presents a significant financial barrier for many NFT creators. By lowering the deposit
- Deposits SHOULD be derived from
deposit function, adjusted by the corresponding pricing mechanism.
Stakeholders
- NFT Creators: Primary beneficiaries of the proposed change, particularly those who found the
current deposit requirements prohibitive.
Polkadot Asset Hub and 191 on Kusama Asset Hub with a relatively low volume.
Security concerns
As noted above, state bloat is a security concern. In the case of abuse, governance could adapt by
increasing deposit rates and/or using forceDestroy on collections agreed to be spam.
Performance, Ergonomics, and Compatibility
Performance
The primary performance consideration stems from the potential for state bloat due to increased
activity from lower deposit requirements. It's vital to monitor and manage this to avoid any
Polkadot and Kusama networks.
Authors Alin Dima
Summary
Propose a way of permuting the availability chunk indices assigned to validators, in the context of
recovering available data from systematic chunks, with the
purpose of fairly distributing network bandwidth usage.
Motivation
Currently, the ValidatorIndex is always identical to the ChunkIndex. Since the validator array is only shuffled once
per session, naively using the ValidatorIndex as the ChunkIndex would pose an unreasonable stress on the first N/3
validators during an entire session, when favouring availability recovery from systematic chunks.
validators during an entire session, when favouring availability recovery from systematic chunks.
systematic availability chunks to different validators, based on the relay chain block and core.
The main purpose is to ensure fair distribution of network bandwidth usage for availability recovery in general and in
particular for systematic chunk holders.
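The balancing idea can be sketched as a rotation of the validator-to-chunk mapping per relay-chain block and core. The actual permutation chosen by the RFC may differ; this only demonstrates why varying the assignment spreads the systematic chunks (the first N/3 indices) across different validators over time.

```rust
/// Illustrative chunk assignment: rotate by relay-chain block number and
/// core index so that, for a fixed validator set of size `n_validators`,
/// the systematic chunk indices land on different validators each block.
/// For any fixed (block, core) this is a bijection on validator indices.
pub fn chunk_index(
    validator_index: u32,
    block_number: u32,
    core_index: u32,
    n_validators: u32,
) -> u32 {
    (validator_index + block_number + core_index) % n_validators
}
```

Because the shift depends on the block and core, no fixed subset of validators serves the systematic chunks for a whole session.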
Stakeholders
Relay chain node core developers.
Explanation
Systematic erasure codes
Testing, Security, and Privacy
Extensive testing will be conducted - both automated and manual.
This proposal doesn't affect security or privacy.
Performance, Ergonomics, and Compatibility
Performance
This is a necessary data availability optimisation, as reed-solomon erasure coding has proven to be a top consumer of
CPU time in polkadot as we scale up the parachain block size and number of availability cores.
halved and total POV recovery time decrease by 80% for large POVs. See more
This is a breaking change. See upgrade path section above.
All validators and collators need to have upgraded their node versions before the feature will be enabled via a
governance call.
Prior Art and References
See comments on the tracking issue and the
in-progress PR
Unresolved Questions
Authors Bastian Köcher
Summary
This RFC proposes changes to the SessionKeys::generate_session_keys runtime api interface. This runtime api is used by validator operators to
generate new session keys on a node. The public session keys are then registered manually on chain by the validator operator.
Before this RFC, it was not possible for the on chain logic to ensure that the account setting the public session keys is also in
possession of the private session keys. To solve this, the RFC proposes to pass the account id used for the registration on chain to generate_session_keys. Further, this RFC proposes to change the return value of the generate_session_keys
function also to not only return the public session keys, but also the proof of ownership for the private session keys. The
validator operator will then need to send the public session keys and the proof together when registering new session keys on chain.
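The changed flow can be sketched as below. All names and the "signature" are mock stand-ins: the real scheme uses asymmetric cryptography (the node signs the owner account with the private session keys, and the chain verifies against the public keys), whereas this mock sets public key equal to private seed purely so the check is runnable.

```rust
/// Sketch of the new return value: public keys plus a proof of ownership.
pub struct SessionKeysWithProof {
    pub public_keys: Vec<u8>,
    pub proof: Vec<u8>,
}

/// Mock "signature" over the owner account id. A real implementation would
/// sign with each private session key; XOR is used here only for illustration.
fn mock_sign(key: &[u8], owner: &[u8]) -> Vec<u8> {
    key.iter().zip(owner.iter().cycle()).map(|(k, o)| k ^ o).collect()
}

/// Node side: the owner account id is now an input, and the proof binds
/// the generated keys to that account.
pub fn generate_session_keys(owner: &[u8], private_seed: &[u8], public_key: &[u8]) -> SessionKeysWithProof {
    SessionKeysWithProof {
        public_keys: public_key.to_vec(),
        proof: mock_sign(private_seed, owner),
    }
}

/// On-chain side: reject session-key registrations whose proof does not
/// match the submitting account. (In this mock, public key == private seed.)
pub fn verify_proof(public_key: &[u8], owner: &[u8], proof: &[u8]) -> bool {
    mock_sign(public_key, owner).as_slice() == proof
}
```

The important property is visible even in the mock: a proof generated for one account fails verification when submitted by a different account.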
Motivation
When submitting the new public session keys to the on chain logic, there is no verification of possession of the private session keys.
This means that users can basically register any kind of public session keys on chain. While the on chain logic ensures that there are
no duplicate keys, someone could try to prevent others from registering new session keys by setting them first. While this wouldn't bring
the "attacker" any kind of advantage, more like disadvantages (potenti
e.g. changing its session key in the event of a private session key leak.
After this RFC this kind of attack would not be possible anymore, because the on chain logic can verify that the sending account
is in ownership of the private session keys.
Stakeholders
- Polkadot runtime implementors
- Polkadot node implementors
This will require updating some high level docs and making users familiar with the changes.
Testing, Security, and Privacy
Testing of the new changes only requires passing an appropriate owner for the current testing context.
The changes to the proof generation and verification got audited to ensure they are correct.
Performance, Ergonomics, and Compatibility
Performance
The session key generation is an offchain process and thus, doesn't influence the performance of the
chain. Verifying the proof is done on chain as part of the transaction logic for setting the session keys.
a runtime is enacted that contains these changes, otherwise they will fail to generate the session keys.
The RPC that exists around this runtime api needs to be updated to support passing the account id
and for returning the ownership proof alongside the public session keys.
UIs would need to be updated to support the new RPC and the changed on chain logic.
Prior Art and References
None.
Unresolved Questions
None.
Authors Joe Petrowski, Gavin Wood
Summary
The Fellowship Manifesto states that members should receive a monthly allowance on par with gross
income in OECD countries. This RFC proposes concrete amounts.
Motivation
One motivation for the Technical Fellowship is to provide an incentive mechanism that can induct and
retain technical talent for the continued progress of the network.
In order for members to uphold their commitment to the network, they should receive support to
on par with a full-time job. Providing a livable wage to those making such contributions makes it pragmatic to work full-time on Polkadot.
Note: Goals of the Fellowship, expectations for each Dan, and conditions for promotion and demotion
are all explained in the Manifesto. This RFC is only to propose concrete values for allowances.
Stakeholders
- Fellowship members
- Polkadot Treasury
to acquire them. However, the asset of choice can be changed in the future.
Testing, Security, and Privacy
N/A.
Performance, Ergonomics, and Compatibility
Performance
N/A
Ergonomics
N/A
Compatibility
N/A
Prior Art and References
- The Polkadot Fellowship
Manifesto
Authors Pierre Krieger
Summary
When two peers connect to each other, they open (amongst other things) a so-called "notifications protocol" substream dedicated to gossiping transactions to each other.
Each notification on this substream currently consists of a SCALE-encoded Vec<Transaction>, where Transaction is defined in the runtime.
This RFC proposes to modify the format of the notification to become (Compact(1), Transaction). This maintains backwards compatibility, as this new format decodes as a Vec of length equal to 1.
Motivation
There exist three motivations behind this change:
It makes the implementation significantly more straightforward by not having to repeat code related to back-pressure. See explanations below.
Stakeholders
Low-level developers.
Explanation
To give an example, if you send one notification with three transactions, the bytes that are sent on the wire are:
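The concrete byte listing is elided in this rendering. As a sketch of both formats, assuming single-byte SCALE compact lengths (valid for lengths below 64) and placeholder one-byte transaction encodings:

```rust
/// SCALE compact encoding of a small integer n (n < 64) is one byte: n << 2.
fn compact_len(n: u8) -> u8 {
    assert!(n < 64);
    n << 2
}

/// Old format: one notification is a SCALE-encoded `Vec<Transaction>`, i.e.
/// compact(len) followed by the concatenated transaction encodings.
pub fn encode_old(txs: &[Vec<u8>]) -> Vec<u8> {
    let mut out = vec![compact_len(txs.len() as u8)];
    for tx in txs {
        out.extend_from_slice(tx);
    }
    out
}

/// New format: always compact(1) followed by a single transaction, which
/// still decodes as a `Vec` of length 1 on an unmodified receiver.
pub fn encode_new(tx: &[u8]) -> Vec<u8> {
    let mut out = vec![compact_len(1)];
    out.extend_from_slice(tx);
    out
}
```

So three transactions in one old-format notification start with the byte 0x0c (compact 3), while every new-format notification starts with 0x04 (compact 1).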
This is equivalent to forcing the Vec<Transaction> to always have a length of 1.
An alternative could be to introduce a new version of the transactions notifications protocol that sends one Transaction per notification, but this is significantly more complicated to implement and can always be done later in case the Compact(1) is bothersome.
Testing, Security, and Privacy
Irrelevant.
Performance, Ergonomics, and Compatibility
Performance
Irrelevant.
Ergonomics
Irrelevant.
Compatibility
The change is backwards compatible if done in two steps: modify the sender to always send one transaction per notification, then, after a while, modify the receiver to enforce the new format.
Prior Art and References
Irrelevant.
Unresolved Questions
None.
Authors Pierre Krieger
Summary
This RFC proposes to make the mechanism of RFC #8 more generic by introducing the concept of "capabilities".
Implementations can implement certain "capabilities", such as serving old block headers or being a parachain bootnode.
The discovery mechanism of RFC #8 is extended to be able to discover nodes of specific capabilities.
Motivation
The Polkadot peer-to-peer network is made of nodes. Not all these nodes are equal. Some nodes store only the headers of recent blocks, some nodes store all the block headers and bodies since the genesis, some nodes store the storage of all blocks since the genesis, and so on.
It is currently not possible to know ahead of time (without connecting to it and asking) which nodes have which data available, and it is not easily possible to build a list of nodes that have a specific piece of data available.
If you want to download for example the header of block 500, you have to connect to a randomly-chosen node, ask it for block 500, and if it says that it doesn't have the block, disconnect and try another randomly-chosen node.
In certain situations such as downloading the storage of old blocks, nodes that have the information are relatively rare, and finding through trial and error a node that has the data can take a long time.
This RFC attempts to solve this problem by giving the possibility to build a list of nodes that are capable of serving specific data.
Stakeholders
Low-level client developers.
People interested in accessing the archive of the chain.
Explanation
Furthermore, when a large number of providers are registered, only the providers closest to the key are kept, up to a certain implementation-defined limit.
For this reason, an attacker can abuse this mechanism by randomly generating libp2p PeerIds until they find the 20 entries closest to the key representing the target capability. They are then in control of the list of nodes with that capability. While doing this is not directly harmful, it could lead to eclipse attacks.
Because the key changes periodically and isn't predictable, and assuming that the Polkadot DHT is sufficiently large, it is not realistic for an attack like this to be maintained in the long term.
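The closest-provider selection that makes this grinding attack possible can be sketched as follows (illustrative only; the limit of 20 entries follows the text above, identifier sizes are arbitrary here):

```python
def kad_distance(a: bytes, b: bytes) -> int:
    """Kademlia XOR distance between two identifiers of equal length."""
    return int.from_bytes(a, "big") ^ int.from_bytes(b, "big")

def closest_providers(key: bytes, peer_ids: list, k: int = 20) -> list:
    """Keep only the k provider entries whose peer IDs are closest to the key."""
    return sorted(peer_ids, key=lambda p: kad_distance(key, p))[:k]

# An attacker who grinds random peer IDs until they own the k closest entries
# controls the provider list for that capability key -- until the key rotates.
key = b"\x00"
peers = [b"\x0f", b"\x01", b"\x05"]
assert closest_providers(key, peers, k=2) == [b"\x01", b"\x05"]
```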
Performance, Ergonomics, and Compatibility
Performance
The DHT mechanism generally has a low overhead, especially given that publishing providers is done only every 24 hours.
Doing a Kademlia iterative query then sending a provider record shouldn't take more than around 50 kiB in total of bandwidth for the parachain bootnode.
Irrelevant.
Compatibility
Irrelevant.
Prior Art and References
Unknown.
Unresolved Questions
While it fundamentally doesn't change much to this RFC, using BabeApi_currentEpoch and BabeApi_nextEpoch might be inappropriate. I'm not familiar enough with good practices within the runtime to have an opinion here. Should it be an entirely new pallet?
Authors Zondax AG, Parity Technologies
Summary
To interact with chains in the Polkadot ecosystem it is required to know how transactions are encoded and how to read state. For doing this, Polkadot-SDK, the framework used by most of the chains in the Polkadot ecosystem, exposes metadata about the runtime to the outside. UIs, wallets, and others can use this metadata to interact with these chains. This makes the metadata a crucial piece of the transaction encoding as users are relying on the interacting software to encode the transactions in the correct format.
It becomes even more important when the user signs the transaction on an offline wallet, as such a device by its nature cannot access the metadata without relying on the online wallet to provide it. The offline wallet therefore needs to trust an online party, rendering the security assumptions of offline devices moot.
This RFC proposes a way for offline wallets to leverage metadata within the constraints of such devices. The design idea is that the metadata is chunked and these chunks are put into a merkle tree. The root hash of this merkle tree represents the metadata. The offline wallets can use the root hash to decode transactions by getting proofs for the individual chunks of the metadata. This root hash is also included in the signed data of the transaction (but not sent as part of the transaction). The runtime then includes its known metadata root hash when verifying the transaction. If the metadata root hash known by the runtime differs from the one that the offline wallet used, it very likely means that the online wallet provided some fake data, and the verification of the transaction fails.
Users depend on offline wallets to correctly display decoded transactions before signing. With merkleized metadata, they can be assured of the transaction's legitimacy, as incorrect transactions will be rejected by the runtime.
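A minimal sketch of the chunking-and-proving idea follows, using SHA-256 and duplicate-last padding purely for illustration (the RFC specifies the actual hash function and tree construction):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()  # illustrative; the RFC fixes the real hash

def _next_level(level):
    """Hash pairs of nodes into the next tree level, padding by duplicating
    the last node when the count is odd (an assumption of this sketch)."""
    if len(level) % 2:
        level = level + [level[-1]]
    return [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)], level

def merkle_root(chunks):
    level = [h(c) for c in chunks]
    while len(level) > 1:
        level, _ = _next_level(level)
    return level[0]

def prove(chunks, index):
    """Collect (sibling_hash, sibling_is_left) pairs from one chunk to the root."""
    level = [h(c) for c in chunks]
    proof = []
    while len(level) > 1:
        nxt, padded = _next_level(level)
        sib = index ^ 1
        proof.append((padded[sib], sib < index))
        level, index = nxt, index // 2
    return proof

def verify(root, chunk, proof):
    """Recompute the root from one chunk plus its proof, as an offline wallet would."""
    acc = h(chunk)
    for sibling, is_left in proof:
        acc = h(sibling + acc) if is_left else h(acc + sibling)
    return acc == root

chunks = [b"chunk-%d" % i for i in range(5)]
root = merkle_root(chunks)
assert verify(root, chunks[3], prove(chunks, 3))
```

The offline wallet only needs the root hash plus proofs for the few chunks a given transaction touches, which is what makes the scheme fit small device memories.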
Motivation
Polkadot's innovative design (both relay chain and parachains) presents developers with the ability to upgrade their network as frequently as they need. These systems manage to keep integrations working after upgrades with the help of FRAME Metadata. This metadata, which is on the order of half a MiB for most Polkadot-SDK chains, completely describes chain interfaces and properties. Securing this metadata is key for users to be able to interact with a Polkadot-SDK chain in the expected way.
On the other hand, offline wallets provide a secure way for blockchain users to hold their own keys (some do a better job than others). These devices seldom get upgraded, usually account for one particular network, and have very small internal memories. Currently, there is no secure way in the Polkadot ecosystem for these offline devices to learn the latest metadata of the Polkadot-SDK chain they are interacting with. This results in a plethora of similar yet slightly different offline wallets for the different Polkadot-SDK chains, as well as the impediment of keeping these regularly updated, thus not fully leveraging Polkadot-SDK's unique forkless upgrade feature.
The two main reasons why this is not possible today are:
- Chunks handling mechanism SHOULD support chunks being sent in any order without memory utilization overhead;
- Unused enum variants MUST be stripped (this has great impact on transmitted metadata size; examples: era enum, enum with all calls for call batching).
Stakeholders
- Runtime implementors
- UI/wallet implementors
All implementations are required to strictly follow the RFC to generate the metadata hash. This includes which hash function to use and how to construct the metadata types tree. So, all implementations are following the same security criteria. As the chains will calculate the metadata hash at compile time, the build process needs to be trusted. However, this is already a solved problem in the Polkadot ecosystem by using reproducible builds. So, anyone can rebuild a chain runtime to ensure that a proposal is actually containing the changes as advertised.
Implementations can also be tested easily against each other by taking some metadata and ensuring that they all come to the same metadata hash.
Privacy of users should also not be impacted. This assumes that wallets will generate the metadata hash locally and don't leak any information to third party services about which chunks a user will send to their offline wallet. Besides that, there is no leak of private information as getting the raw metadata from the chain is an operation that is done by almost everyone.
Performance, Ergonomics, and Compatibility
Performance
There should be no measurable impact on performance to Polkadot or any other chain using this feature. The metadata root hash is calculated at compile time and at runtime it is optionally used when checking the signature of a transaction. This means that at runtime no performance heavy operations are done.
Ergonomics & Compatibility
The proposal alters the way a transaction is built, signed, and verified. So, this imposes some required changes to any kind of developer who wants to construct transactions for Polkadot or any chain using this feature. As the developer can pass 0 for disabling the verification of the metadata root hash, it can be easily ignored.
Prior Art and References
RFC 46 produced by the Alzymologist team is a previous work reference that goes in this direction as well.
On other ecosystems, there are other solutions to the problem of trusted signing. Cosmos, for example, has a standardized way of transforming a transaction into a textual representation, and this textual representation is included in the signed data. This achieves basically the same as what this RFC proposes, but it requires that, for every transaction applied in a block, every node in the network always generate this textual representation to ensure the transaction signature is valid.
Unresolved Questions
Authors George Pisaltu
Summary
This RFC proposes a change to the extrinsic format to incorporate a new transaction type, the "general" transaction.
Motivation
"General" transactions, a new type of transaction that this RFC aims to support, are transactions which obey the runtime's extensions and have according extension data yet do not have hard-coded signatures. They are first described in Extrinsic Horizon and supported in 3685. They enable users to authorize origins in new, more flexible ways (e.g. ZK proofs, mutations over pre-authenticated origins). As of now, all transactions are limited to the account signing model for origin authorization and any additional origin changes happen in extrinsic logic, which cannot leverage the validation process of extensions.
An example of a use case for such an extension would be sponsoring the transaction fee for some other user. A new extension would be put in place to verify that part of the initial payload was signed by the author under whom the extrinsic should run and to change the origin accordingly, while the payment for the whole transaction would be handled by a sponsor's account. A POC for this can be found in 3712.
The new "general" transaction type would coexist with both current transaction types for a while and, therefore, the current number of supported transaction types, capped at 2, is insufficient. A new extrinsic type must be introduced alongside the current signed and unsigned types. Currently, an encoded extrinsic's first byte indicates the type of extrinsic using the most significant bit - 0 for unsigned, 1 for signed - and the 7 following bits indicate the extrinsic format version, which has been equal to 4 for a long time.
By taking one bit from the extrinsic format version encoding, we can support 2 additional extrinsic types while also having a minimal impact on our capability to extend and change the extrinsic format in the future.
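The resulting first-byte layout can be sketched as below; the concrete bit pattern assigned to each new extrinsic type is an assumption for illustration, only the signed-v4 example reflects the current format:

```python
TYPE_MASK    = 0b1100_0000  # high 2 bits: extrinsic type (bare/signed/general/...)
VERSION_MASK = 0b0011_1111  # low 6 bits: extrinsic format version (max 63)

def parse_first_byte(b: int):
    """Split the leading extrinsic byte into (type_bits, format_version)."""
    return (b & TYPE_MASK) >> 6, b & VERSION_MASK

# A version-4 signed extrinsic today leads with 0x84 == 0b1000_0100.
assert parse_first_byte(0x84) == (0b10, 4)
```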
Stakeholders
- Runtime users
- Runtime devs
This change would reduce the maximum possible transaction version from the current 127 to 63. In order to bypass the new, lower limit, the extrinsic format would have to change again.
Testing, Security, and Privacy
There is no impact on testing, security or privacy.
Performance, Ergonomics, and Compatibility
This change would allow Polkadot to support new types of transactions, with the specific "general" transaction type in mind at the time of writing this proposal.
Performance
There is no performance impact.
The impact to developers and end-users is minimal as it would just be a bitmask update on their part for parsing the extrinsic type along with the version.
Compatibility
This change breaks backwards compatibility because any transaction that is neither signed nor unsigned, but a new transaction type, would be interpreted as having a future extrinsic format version.
Prior Art and References
The design was originally proposed in the TransactionExtension PR, which is also the motivation behind this effort.
Unresolved Questions
None.
Authors Alex Gheorghe (alexggh)
Summary
Extend the DHT authority discovery records with a signed creation time, so that nodes can determine which record is newer and always decide to prefer the newer records to the old ones.
Motivation
Currently, we use the Kademlia DHT to store records containing the p2p addresses of authority discovery keys. The problem is that if a node decides to change its PeerId/network key, it publishes a new record; however, because of the distributed and replicated nature of the DHT, there is no way to tell which record is newer, so both the old and the new PeerId live in the network until the old record expires (36h). This creates all sorts of problems and can leave the node that changed its address improperly connected for up to 36h.
After this RFC, nodes keep the newer record and propagate it to nodes that still store the old one, so all nodes converge to the new record much faster (on the order of minutes, not 36h).
Implementation of the RFC: https://github.com/paritytech/polkadot-sdk/pull/3786.
Current issue without this enhancement: https://github.com/paritytech/polkadot-sdk/issues/3673
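The record-preference rule can be sketched as follows (a simplified model; the field names and the tie-breaking rule for untimestamped records are assumptions of this sketch):

```python
def prefer(existing: dict, incoming: dict) -> dict:
    """Keep whichever signed authority record claims the later creation time.
    Records from the old protocol decode with creation_time == None and lose
    to timestamped ones (an assumed tie-breaking rule)."""
    if incoming.get("creation_time") is None:
        return existing
    if existing.get("creation_time") is None:
        return incoming
    return incoming if incoming["creation_time"] > existing["creation_time"] else existing

old = {"addresses": ["/dns/old-node"], "creation_time": None}
new = {"addresses": ["/dns/new-node"], "creation_time": 1_700_000_000}
assert prefer(old, new) is new
```

A node that receives a record older than the one it stores can reply with (or re-publish) the newer record, which is what drives convergence in minutes rather than 36h.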
Stakeholders
Polkadot node developers.
Explanation
This RFC heavily relies on the functionalities of the Kademlia DHT already in use by Polkadot.
Testing, Security, and Privacy
This RFC's implementation (https://github.com/paritytech/polkadot-sdk/pull/3786) has been tested on various local test networks and on Versi.
With regard to security, the creation time is wrapped inside SignedAuthorityRecord, so it is signed with the authority id key; there is no way for other malicious nodes to manipulate this field without the receiving node noticing.
Performance, Ergonomics, and Compatibility
Irrelevant.
Performance
Irrelevant.
Irrelevant.
Compatibility
The changes are backwards compatible with the existing protocol, so nodes running the old and the new protocol can coexist in the network. This is achieved by using protobuf for serializing and deserializing the records: new fields are ignored when deserializing with the older protocol, and, vice versa, when deserializing an old record with the new protocol, the new field will be None and the new code accepts such a record as valid.
Prior Art and References
The enhancements have been inspired by the algorithm specified here.
Unresolved Questions
N/A
Authors Jonas Gehrlein & Alistair Stewart
Summary
This RFC proposes a flexible unbonding mechanism for tokens that are locked from staking on the Relay Chain (DOT/KSM), aiming to enhance user convenience without compromising system security.
Locking tokens for staking ensures that Polkadot is able to slash tokens backing misbehaving validators. When changing the locking period, we still need to make sure that Polkadot can slash enough tokens to deter misbehaviour. This means that not all tokens can be unbonded immediately; however, we can still allow some tokens to be unbonded quickly.
The new mechanism leads to a significantly reduced unbonding time on average by queuing up new unbonding requests and scaling their unbonding duration relative to the size of the queue. New requests are executed within a minimum of 2 days, when the queue is comparatively empty, up to the conventional 28 days, if the sum of requests (in terms of stake) exceeds some threshold. In scenarios between these two bounds, the unbonding duration scales proportionately. The new mechanism will never be worse than the current fixed 28 days.
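Assuming a simple linear interpolation between the two bounds (the RFC defines the exact curve, threshold, and parameters), the scaling rule can be sketched as:

```python
MIN_DAYS, MAX_DAYS = 2, 28

def unbonding_days(queued_stake: float, threshold: float) -> float:
    """Unbonding duration as a function of the stake already queued for
    unbonding; linear interpolation is an assumption of this sketch."""
    fraction = min(queued_stake / threshold, 1.0)
    return MIN_DAYS + (MAX_DAYS - MIN_DAYS) * fraction

assert unbonding_days(0, 1_000) == 2          # empty queue: fast exit
assert unbonding_days(500, 1_000) == 15.0     # half-full queue
assert unbonding_days(5_000, 1_000) == 28     # above threshold: full 28 days
```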
In this document we also present an empirical analysis by retrospectively fitting the proposed mechanism to the historic unbonding timeline and show that the average unbonding duration would drastically reduce, while still being sensitive to large unbonding events. Additionally, we discuss implications for UI, UX, and conviction voting.
Note: Our proposition solely focuses on the locks imposed by staking. Other locks, such as governance, remain unchanged. Also, this mechanism should not be confused with the already existing FastUnstake feature, which lets users immediately unstake tokens that have not received rewards for 28 days or longer.
As an initial step to gauge its effectiveness and stability, it is recommended to implement and test this model on Kusama before considering its integration into Polkadot, with appropriate adjustments to the parameters. In the following, however, we limit our discussion to Polkadot.
Motivation
Polkadot has one of the longest unbonding periods among all Proof-of-Stake protocols, because security is the most important goal. Staking on Polkadot is still attractive compared to other protocols because of its above-average staking APY. However, the long unbonding period harms usability and deters potential participants who want to contribute to the security of the network.
The current length of the unbonding period imposes significant costs for any entity that even wants to perform basic tasks such as a reorganization / consolidation of their stashes, or updating their private key infrastructure. It also limits participation of users that have a large preference for liquidity.
The combination of long unbonding periods and high returns has led to the proliferation of liquid staking, where parachains or centralised exchanges offer users their staked tokens before the 28-day unbonding period is over, either in original DOT/KSM form or as derivative tokens. Liquid staking is harmless if few tokens are involved, but it could result in many validators being selected by a few entities if a large fraction of DOTs were involved. This may lead to centralization (see here for more discussion on threats of liquid staking) and an opportunity for attacks.
The new mechanism greatly increases the competitiveness of Polkadot, while maintaining sufficient security.
Stakeholders
- Every DOT/KSM token holder
Testing, Security, and Privacy
NA
Performance, Ergonomics, and Compatibility
NA
Performance
The authors cannot see any potential impact on performance.
Compatibility
The authors cannot see any potential impact on compatibility. This should be assessed by the technical fellows.
Prior Art and References
- Ethereum proposed a similar solution
- Alistair did some initial write-up
Summary
This RFC proposes a change to the extrinsic format to include a transaction extension version.
Motivation
The extrinsic format supports being extended with transaction extensions. These transaction extensions are runtime specific and can differ per chain. Each transaction extension can add data to the extrinsic itself or extend the signed payload.
This means that adding a transaction extension breaks the chain-specific extrinsic format. A recent example was the introduction of CheckMetadataHash to Polkadot and all its system chains.
As the extension added one byte to the extrinsic, it broke a lot of tooling. By introducing an extra version for the transaction extensions, it becomes possible to change these transaction extensions while remaining backwards compatible.
Based on the version of the transaction extensions, each chain runtime could decode the extrinsic correctly and also create the correct signed payload.
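One way to picture the benefit: a runtime (or tool) can keep a registry of its historical extension sets keyed by the extension version, so transactions built against an older set stay decodable. The extension names and layout below are illustrative, not the actual sets of any chain:

```python
# Illustrative registry: extension-set history keyed by extension version.
EXTENSION_SETS = {
    0: ["CheckNonce", "CheckMortality"],                       # hypothetical old set
    1: ["CheckNonce", "CheckMortality", "CheckMetadataHash"],  # after an addition
}

def extensions_for(ext_version: int) -> list:
    """Pick the extension set to decode a transaction (and build its signed
    payload) against; unknown versions are rejected."""
    if ext_version not in EXTENSION_SETS:
        raise ValueError("unknown transaction extension version %d" % ext_version)
    return EXTENSION_SETS[ext_version]

assert "CheckMetadataHash" not in extensions_for(0)
assert "CheckMetadataHash" in extensions_for(1)
```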
Stakeholders
- Runtime users
- Runtime devs
This adds one more byte to each signed transaction.
Testing, Security, and Privacy
There is no impact on testing, security or privacy.
Performance, Ergonomics, and Compatibility
This will ensure that changes to the transactions extensions can be done in a backwards compatible way.
Performance
There is no performance impact.
Compatibility
When introduced together with extrinsic format version 5 from RFC84, it can be implemented in a backwards compatible way. So, transactions can still be sent using the old extrinsic format and decoded by the runtime.
Prior Art and References
None.
Unresolved Questions
None.
Authors Adrian Catangiu
Summary
This RFC proposes a new instruction that provides a way to initiate, on remote chains, asset transfers which combine multiple transfer types (teleport, local-reserve, destination-reserve), using XCM alone.
The currently existing instructions are too opinionated and force each XCM asset transfer to a single transfer type. This results in the inability to combine different types of transfers in a single transfer, which leads to overall poor UX when trying to move assets across chains.
Motivation
XCM is the de-facto cross-chain messaging protocol within the Polkadot ecosystem, and cross-chain asset transfers are one of its main use cases. Unfortunately, in its current spec, it does not support initiating, on a remote chain, one or more transfers that combine assets with different transfer types.
For example, it allows a single XCM program execution to transfer multiple assets from
Kusama Asset Hub, over the bridge through Polkadot Asset Hub, with final destination ParaP on Polkadot. With current XCM, we are limited to doing multiple independent transfers for each individual hop in order to move both the "interesting" assets and the "supporting" assets (used to pay fees).
Stakeholders
- Runtime users
- Runtime devs
required execution fee payment, part of the instruction logic through the remote_fees: Option<AssetTransferFilter>
parameter, which will make sure the remote XCM starts with a single-asset-holding-loading-instruction,
immediately followed by a BuyExecution using said asset.
Performance, Ergonomics, and Compatibility
This brings no impact to the rest of the XCM spec. It is a new, independent instruction; there are no changes to existing instructions. It enhances the exposed functionality of Polkadot, allowing multi-chain transfers that are currently forced to happen in multiple programs per asset per "hop" to be done in a single XCM program.
A program where the new instruction is used to initiate multiple types of asset transfers, cannot be downgraded to older
XCM versions, because there is no equivalent capability there.
Such conversion attempts will explicitly fail.
Prior Art and References
None.
Unresolved Questions
None.
Authors Adrian Catangiu
Summary
The Transact XCM instruction currently forces the user to set a specific maximum weight allowed to the inner call and then pay for that much weight, regardless of how much the call actually needs in practice.
This RFC proposes improving the usability of Transact by removing that parameter and instead getting and charging the actual weight of the inner call from its dispatch info on the remote chain.
Motivation
The UX of using Transact is poor because of having to guess/estimate the require_weight_at_most weight used by the inner call on the target.
We've seen multiple Transact on-chain failures caused by guessing wrong values for this require_weight_at_most even though the rest of the XCM program would have worked.
In practice, this parameter only adds UX overhead with no real practical value. Use cases fall in one of two categories:
We've had multiple OpenGov root/whitelisted_caller proposals initiated by core-devs completely or partially fail
because of incorrect configuration of require_weight_at_most parameter. This is a strong indication that the
instruction is hard to use.
Stakeholders
- Runtime Users,
- Runtime Devs,
The security considerations around how much someone can execute for free are the same for both this new version and the old one. In both cases, an "attacker" can get the XCM decoding (including Transact inner calls) done for free by adding a large enough BuyExecution without actually having the funds available; in both cases, decoding is free but execution fails early on BuyExecution.
Performance, Ergonomics, and Compatibility
Performance
No performance change.
Ergonomics
Ergonomics are slightly improved by simplifying Transact API.
Compatibility
Compatible with previous XCM programs.
Prior Art and References
None.
Unresolved Questions
None.
Authors Andrei Sandu
Summary
Elastic scaling is not resilient against griefing attacks without a way for a PoV (Proof of Validity)
to commit to the particular core index it was intended for. This RFC proposes a way to include
core index information in the candidate commitments and the CandidateDescriptor data structure
in a backward compatible way. Additionally, it proposes the addition of a SessionIndex field in
the CandidateDescriptor to make dispute resolution more secure and robust.
Motivation
This RFC proposes a way to solve two different problems:
- For Elastic Scaling, it prevents anyone who has acquired a valid collation from DoSing the parachain
The dispute may concern a relay chain block not yet imported by a
validator. In this case, validators can safely assume the session index refers to the session
the candidate has appeared in, otherwise, the chain would have rejected the candidate.
Stakeholders
- Polkadot core developers.
- Cumulus node developers.
Any tooling that decodes UMP XCM messages needs an update to support or ignore the new UMP
messages, but they should be fine to decode the regular XCM messages that come before the
separator.
Prior Art and References
Forum discussion about a new CandidateReceipt format:
https://forum.polkadot.network/t/pre-rfc-discussion-candidate-receipt-format-v2/3738
Unresolved Questions
Authors Francisco Aguirre
Summary
XCM already handles execution fees in an effective and efficient manner using the BuyExecution instruction.
However, other types of fees are not handled as effectively -- for example, delivery fees.
Fees exist that can't be measured using Weight -- as execution fees can -- so a new method should be thought up for those cases.
This RFC proposes making the fee handling system simpler and more general, by doing the following:
- Adding a fees register
- Deprecating BuyExecution and adding a new instruction PayFees with new semantics to ultimately replace it.
Motivation
Execution fees are handled correctly by XCM right now.
However, the addition of extra fees, like those for message delivery, results in awkward ways of integrating them into the XCVM implementation.
This is because these types of fees are not included in the language.
The standard should have a way to correctly deal with these implementation-specific fees.
The new instruction moves the specified amount of fees from the holding register to a dedicated fees register that the XCVM can use in flexible ways depending on its implementation.
The XCVM implementation is free to use these fees to pay for execution fees, transport fees, or any other type of fee that might be necessary.
This moves the specifics of fees further away from the XCM standard, and more into the actual underlying XCVM implementation, which is a good thing.
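A toy model of the new register may help picture the flow; names and structure here are hypothetical, for illustration only:

```python
class XcvmState:
    """Toy XCVM state: a holding register plus the new dedicated fees register."""
    def __init__(self, holding: dict):
        self.holding = dict(holding)   # asset id -> amount
        self.fees = {}                 # asset id -> amount set aside for any fee

    def pay_fees(self, asset: str, amount: int) -> None:
        """PayFees: move `amount` of `asset` from holding into the fees register,
        from which the implementation may draw execution, delivery, or other fees."""
        if self.holding.get(asset, 0) < amount:
            raise ValueError("insufficient assets in holding")
        self.holding[asset] -= amount
        self.fees[asset] = self.fees.get(asset, 0) + amount

state = XcvmState({"DOT": 10})
state.pay_fees("DOT", 3)
assert state.holding["DOT"] == 7 and state.fees["DOT"] == 3
```

Unlike BuyExecution, nothing is returned to holding immediately; whatever remains in the fees register at the end is the leftover the RFC discusses.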
Stakeholders
- Runtime Users
- Runtime Devs
There needs to be an explicit change from BuyExecution to PayFees, most often accompanied by a reduction in the assets passed in.
Testing, Security, and Privacy
It might become a security concern if leftover fees are trapped, since a lot of them are expected.
Performance, Ergonomics, and Compatibility
Performance
There should be no performance downsides to this approach.
The fees register is a simplification that may actually result in better performance, in the case an implementation is doing a workaround to achieve what this RFC proposes.
This RFC can't just change the semantics of the BuyExecution instruction since that instruction accepts any funds, uses what it needs and returns the rest immediately.
The new proposed instruction, PayFees, doesn't return the leftover immediately, it keeps it in the fees register.
In practice, the deprecated BuyExecution needs to be slowly rolled out in favour of PayFees.
Prior Art and References
The closed RFC PR on the xcm-format repository, before XCM RFCs got moved to fellowship RFCs: https://github.com/polkadot-fellows/xcm-format/pull/53.
Unresolved Questions
None
Authors Francisco Aguirre
Summary
A previous XCM RFC (https://github.com/polkadot-fellows/xcm-format/pull/37) introduced a SetAssetClaimer instruction.
This idea of instructing the XCVM to change some implementation-specific behavior is useful.
In order to generalize this mechanism, this RFC introduces a new instruction SetHints
and makes the SetAssetClaimer be just one of many possible execution hints.
Motivation
There is a need to specify how certain implementation-specific things should behave: for instance, who can claim the assets, or what can be done with assets instead of trapping them. Other ideas for hints:
- AssetForFees: to signify to the executor which asset the user prefers to use for fees.
- LeftoverAssetsDestination: for depositing leftover assets to a destination instead of trapping them.
Stakeholders
- Runtime devs
- Wallets
Hints are specified on a per-message basis, so they have to be specified at the beginning of a message.
If they were to be specified at the end, hints like AssetClaimer would be useless if an error occurs beforehand and assets get trapped before ever reaching the hint.
The instruction takes a bounded vector of hints so as to not force barriers to allow an arbitrary number of SetHint instructions.
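A sketch of how an executor might process the instruction; the types, the bound, and the location encoding are illustrative stand-ins, not the spec:

```python
from dataclasses import dataclass

MAX_HINTS = 4  # illustrative bound; the real bound is the Hint enum's variant count

@dataclass
class SetAssetClaimer:
    location: str  # stand-in for an XCM Location

def set_hints(executor_state: dict, hints: list) -> None:
    """Apply a bounded vector of execution hints at the start of a message,
    before any instruction that could trap assets."""
    if len(hints) > MAX_HINTS:
        raise ValueError("too many hints")
    for hint in hints:
        if isinstance(hint, SetAssetClaimer):
            executor_state["asset_claimer"] = hint.location
        # further Hint variants would be matched here

state = {}
set_hints(state, [SetAssetClaimer("Parachain(1000)")])
assert state["asset_claimer"] == "Parachain(1000)"
```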
Performance, Ergonomics, and Compatibility
Performance
None.
Ergonomics
@@ -7089,7 +7176,7 @@ Also, this instruction would make it simpler to write XCM programs.
You only need to specify the hints you want in one single instruction at the top of your program.
Compatibility
None.
-Prior Art and References
+Prior Art and References
The previous RFC PR in the xcm-format repository before XCM RFCs moved to fellowship RFCs: https://github.com/polkadot-fellows/xcm-format/pull/59.
Unresolved Questions
None.
@@ -7126,13 +7213,13 @@ You only need to specify the hints you want in one single instruction at the top
Authors
-Summary
+Summary
This RFC aims to remove the NetworkIds of Westend and Rococo, arguing that testnets shouldn't go in the language.
-Motivation
+Motivation
We've already seen plans to phase out Rococo, and Paseo has appeared.
Instead of constantly changing the testnets included in the language, we should favor specifying them via their genesis hash,
using NetworkId::ByGenesis.
-Stakeholders
+Stakeholders
- Runtime devs
- Wallets
@@ -7144,14 +7231,14 @@ using NetworkId::ByGenesis.
This RFC will make it less convenient to specify a testnet, but not by a large amount.
Testing, Security, and Privacy
None.
-Performance, Ergonomics, and Compatibility
+Performance, Ergonomics, and Compatibility
Performance
None.
Ergonomics
It will very slightly reduce the ergonomics for testnet developers but improve the stability of the language.
Compatibility
NetworkId::Rococo and NetworkId::Westend can just use NetworkId::ByGenesis, as can other testnets.
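The migration for downstream code can be sketched as follows (the enum is a simplified stand-in for the real NetworkId, and the genesis hash used is a placeholder, not Westend's real one):

```rust
// Simplified NetworkId: testnets are identified by genesis hash rather
// than by dedicated named variants.
#[derive(Debug, Clone, PartialEq)]
enum NetworkId {
    Polkadot,
    Kusama,
    ByGenesis([u8; 32]),
}

// A chain that previously matched NetworkId::Westend now matches on the
// genesis hash it already knows.
fn is_my_testnet(network: &NetworkId, my_genesis: &[u8; 32]) -> bool {
    matches!(network, NetworkId::ByGenesis(h) if h == my_genesis)
}
```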
-Prior Art and References
+Prior Art and References
A previous attempt to add NetworkId::Paseo: https://github.com/polkadot-fellows/xcm-format/pull/58.
Unresolved Questions
None.
@@ -7194,11 +7281,11 @@ using NetworkId::ByGenesis.
Authors Adrian Catangiu
-Summary
+Summary
XCM programs generated by the InitiateAssetTransfer instruction shall have the option to carry over the original origin all the way to the final destination. They shall do so by internally making use of AliasOrigin or ClearOrigin depending on given parameters.
This allows asset transfers to retain their original origin even across multiple hops.
Ecosystem chains would have to change their trusted aliasing rules to effectively make use of this feature.
-Motivation
+Motivation
Currently, all XCM asset transfer instructions ultimately clear the origin in the remote XCM message by use of the ClearOrigin instruction. This is done for security considerations to ensure that subsequent (user-controlled) instructions cannot command the authority of the sending chain.
The problem with this approach is that it limits what can be achieved on remote chains through XCM. Most XCM operations require having an origin, and following any asset transfer the origin is lost, meaning not much can be done other than depositing the transferred assets to some local account or transferring them onward to another chain.
For example, we cannot transfer some funds for buying execution, then do a Transact (all in the same XCM message).
@@ -7206,7 +7293,7 @@ using NetworkId::ByGenesis.
Transact XCM programs today require a two-step process:
And we want to be able to do it using a single XCM program.
-Stakeholders
+Stakeholders
Runtime Users, Runtime Devs, wallets, cross-chain dApps.
Explanation
In the case of XCM programs going from source-chain directly to dest-chain without an intermediary hop, we can enable scenarios such as above by using the AliasOrigin instruction instead of the ClearOrigin instruction.
@@ -7267,7 +7354,7 @@ involved chains.
Normally, XCM program builders should audit their programs and eliminate assumptions of "no origin" on the remote side of this instruction. In this case, InitiateAssetsTransfer has not been released yet; it will be part of XCMv5, and we can make this change part of the same XCMv5 so that there isn't even the possibility of someone in the wild having built XCM programs using this instruction on those wrong assumptions.
The working assumption going forward is that the origin on the remote side can either be cleared or it can be the local origin's reanchored location. This assumption is in line with the current behavior of remote XCM programs sent over using pallet_xcm::send.
The existing DepositReserveAsset, InitiateReserveWithdraw and InitiateTeleport cross-chain asset transfer instructions will not attempt to do origin aliasing and will always clear the origin, same as before, for compatibility reasons.
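The parameter-driven choice described above can be sketched as follows (instruction and parameter names are simplified stand-ins; the real types live in the xcm crate):

```rust
// Simplified instruction set: only the origin-handling part is modelled.
#[derive(Debug, PartialEq)]
enum Instruction {
    ClearOrigin,
    AliasOrigin(String), // reanchored local origin, simplified to String
}

/// Build the origin-handling instruction for the remote XCM program of an
/// InitiateAssetsTransfer, depending on whether the sender opted in to
/// preserving its origin across the hop.
fn remote_origin_instruction(preserve_origin: bool, reanchored: &str) -> Instruction {
    if preserve_origin {
        Instruction::AliasOrigin(reanchored.to_string())
    } else {
        Instruction::ClearOrigin // previous behaviour, kept as the default
    }
}
```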
-Performance, Ergonomics, and Compatibility
+Performance, Ergonomics, and Compatibility
Performance
No impact.
Ergonomics
@@ -7279,7 +7366,7 @@ involved chains.
For compatibility reasons, this RFC proposes this mechanism be added as an enhancement to the yet unreleased InitiateAssetsTransfer instruction, thus eliminating possibilities of XCM logic breakages in the wild.
Following the same logic, the existing DepositReserveAsset, InitiateReserveWithdraw and InitiateTeleport cross chain asset transfer instructions will not attempt to do origin aliasing and will always clear the origin same as before for compatibility reasons.
Any one of the DepositReserveAsset, InitiateReserveWithdraw and InitiateTeleport instructions can be replaced with an InitiateAssetsTransfer instruction with or without origin aliasing, thus providing a clean and clear upgrade path for opting in to this new feature.
-Prior Art and References
+Prior Art and References
- RFC: InitiateAssetsTransfer for complex asset transfers
- RFC: Descend XCM origin instead of clearing it where possible
@@ -7317,13 +7404,13 @@ Following the same logic, the existing DepositReserveAsset, I
Authors Bastian Köcher
-Summary
+Summary
The code of a runtime is stored in its own state, and when performing a runtime upgrade, this code is replaced. The new runtime can contain runtime migrations that adapt the state to the state layout as defined by the runtime code. This runtime migration is executed when building the first block with the new runtime code. Anything that interacts with the runtime state uses the state layout as defined by the runtime code. So, when trying to load something from the state in the block that applied the runtime upgrade, it will use the new state layout but will decode the data from the non-migrated state. In the worst case, the data is incorrectly decoded, which may lead to crashes or halting of the chain.
This RFC proposes to store the new runtime code under a different storage key when applying a runtime upgrade. This way, all the off-chain logic can still load the old runtime code under the default storage key and decode the state correctly. The block producer is then required to use this new runtime code to build the next block. While building the next block, the runtime is executing the migrations and moves the new runtime code to the default runtime code location. So, the runtime code found under the default location is always the correct one to decode the state from which the runtime code was loaded.
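The proposed flow can be modelled with ordinary map storage (a toy sketch: the :code and :pending_code keys follow this RFC, but the function names and HashMap-backed storage are hypothetical):

```rust
use std::collections::HashMap;

// Toy on-chain storage keyed by well-known storage keys.
type Storage = HashMap<&'static str, Vec<u8>>;

/// Applying an upgrade stages the new code instead of overwriting :code,
/// so off-chain logic keeps decoding the state with the old runtime.
fn apply_runtime_upgrade(storage: &mut Storage, new_code: Vec<u8>) {
    storage.insert(":pending_code", new_code);
}

/// The block producer builds the next block with the staged code; during
/// that block, migrations run (elided) and the code is promoted to :code.
fn build_first_block_with_new_code(storage: &mut Storage) {
    if let Some(code) = storage.remove(":pending_code") {
        storage.insert(":code", code);
    }
}
```

After the promotion, :code always matches the state layout it produced.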
-Motivation
+Motivation
While the issue of having undecodable state only exists for the one block in which the runtime upgrade was applied, it still impacts anything that reads state data, like block explorers, UIs, nodes, etc. For block explorers, the issue mainly results in indexing invalid data and UIs may show invalid data to the user. For nodes, reading incorrect data may lead to a performance degradation of the network. There are also ways to prevent certain decoding issues from happening, but it requires that developers are aware of this issue and also requires introducing extra code, which could introduce further bugs down the line.
So, this RFC tries to solve these issues by fixing the underlying problem of having temporary undecodable state.
-Stakeholders
+Stakeholders
- Relay chain/Parachain node developers
- Relay chain/Parachain node operators
@@ -7337,7 +7424,7 @@ Furthermore, this RFC proposes to introduce system_version: 3. The
There is still the possibility of having state that is not migrated even when following the proposal as presented by this RFC. The issue is that if the amount of data to be migrated is too big, not all of it can be migrated in one block, because it either takes more time than is allotted to a block or, for parachains, exceeds the fixed budget of their proof of validity. To solve this issue there already exist multi-block migrations that can chunk the migration across multiple blocks. Consensus-critical data needs to be migrated in the first block to ensure that block production etc. can continue. For the other data handled by multi-block migrations, the migrations could, for example, expose which keys are being migrated and should not be indexed until the migration is finished.
Testing, Security, and Privacy
Testing should be straightforward and most of the existing testing should already be good enough, extended with some checks that :pending_code is moved to :code.
-Performance, Ergonomics, and Compatibility
+Performance, Ergonomics, and Compatibility
Performance
The performance should not be impacted besides requiring loading the runtime code in the first block being built with the new runtime code.
Ergonomics
@@ -7345,7 +7432,7 @@ There is still the possibility of having state that is not migrated even when fo
Compatibility
The change will require that the nodes are upgraded before the runtime starts using this feature. Otherwise they will fail to import the block built by :pending_code.
For Polkadot/Kusama this means that the parachain nodes also need to run a relay chain node version that supports this new feature. Otherwise the parachains will stop producing/finalizing blocks, as they cannot sync the relay chain any more.
-Prior Art and References
+Prior Art and References
The issue initially reported a bug that led to this RFC. It also discusses multiple solutions for the problem.
Unresolved Questions
None
@@ -7388,9 +7475,9 @@ For Polkadot/Kusama this means that also the parachain nodes need to be running
Authors Daniel Shiposha
-Summary
+Summary
This RFC proposes a metadata format for XCM-identifiable assets (i.e., for fungible/non-fungible collections and non-fungible tokens) and a set of instructions to communicate it across chains.
-Motivation
+Motivation
Currently, there is no way to communicate metadata of an asset (or an asset instance) via XCM.
The ability to query and modify the metadata is useful for two kinds of entities:
@@ -7410,7 +7497,7 @@ For Polkadot/Kusama this means that also the parachain nodes need to be running
Besides metadata modification, the ability to read it is also valuable. On-chain logic can interpret the NFT metadata, i.e., the metadata could have not only the media meaning but also a utility function within a consensus system. Currently, such a way of using NFT metadata is possible only within one consensus system. This RFC proposes making it possible between different systems via XCM so different chains can fetch and analyze the asset metadata from other chains.
-Stakeholders
+Stakeholders
Runtime users, Runtime devs, Cross-chain dApps, Wallets.
Explanation
The Asset Metadata is information bound to an asset class (fungible or NFT collection) or an asset instance (an NFT).
@@ -7564,14 +7651,14 @@ This RFC proposes to use the Undefined variant of a collection iden
In terms of performance and privacy, there will be no changes.
Testing, Security, and Privacy
The implementations must honor the contract for the new instructions. Namely, if the instance field has the value of AssetInstance::Undefined, the metadata must relate to the asset collection but not to a non-fungible token inside it.
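That contract can be stated as a minimal model (types simplified from the real AssetInstance enum):

```rust
// Simplified AssetInstance: Undefined addresses the collection itself,
// any other variant addresses a specific NFT inside it.
#[derive(Debug, Clone, PartialEq)]
enum AssetInstance {
    Undefined,
    Index(u128),
}

/// Per the contract above: metadata keyed by Undefined relates to the
/// asset collection, not to any non-fungible token inside it.
fn describes_collection(instance: &AssetInstance) -> bool {
    matches!(instance, AssetInstance::Undefined)
}
```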
-Performance, Ergonomics, and Compatibility
+Performance, Ergonomics, and Compatibility
Performance
No significant impact.
Ergonomics
Introducing a standard metadata format and a way of communicating it is a valuable addition to the XCM format that potentially increases cross-chain interoperability without the need to form ad-hoc chain-to-chain integrations via Transact.
Compatibility
This RFC proposes new functionality, so there are no compatibility issues.
-Prior Art and References
+Prior Art and References
Future Directions and Related Material
The original RFC draft contained additional metadata instructions. Though they could be useful, they're clearly outside the basic logic. So, this RFC version omits them to make the metadata discussion more focused on the core things. Nonetheless, there is hope that metadata approval instructions might be useful in the future, so they are mentioned here.
@@ -7618,9 +7705,9 @@ This RFC proposes to use the Undefined variant of a collection iden
Authors Bryan Chen, Jiyuan Zheng
-Summary
+Summary
This proposal introduces PVQ (PolkaVM Query), a unified query interface that bridges different chain runtime implementations and client tools/UIs. PVQ provides an extension-based system where runtime developers can expose chain-specific functionality through standardized interfaces, while allowing client-side developers to perform custom computations on the data through PolkaVM programs. By abstracting away concrete implementations across chains and supporting both off-chain and cross-chain scenarios, PVQ aims to reduce code duplication and development complexity while maintaining flexibility for custom use cases.
-Motivation
+Motivation
In Substrate, runtime APIs facilitate off-chain clients in reading the state of the consensus system.
However, the APIs defined and implemented by individual chains often fall short of meeting the diverse requirements of client-side developers.
For example, client-side developers may want some aggregated data from multiple pallets, or apply various custom transformations on the raw data.
@@ -7655,7 +7742,7 @@ As a result, client-side developers frequently resort to directly accessing stor
-Stakeholders
+Stakeholders
- Runtime Developers
- Tools/UI Developers
@@ -7969,7 +8056,7 @@ enum PvqError {
N/A
-Performance, Ergonomics, and Compatibility
+Performance, Ergonomics, and Compatibility
Performance
As a newly introduced feature, PVQ operates independently and does not impact or degrade the performance of existing runtime implementations.
Ergonomics
@@ -7978,7 +8065,7 @@ This significantly benefits wallet and dApp developers by eliminating the need t
Compatibility
For RuntimeAPI integration, the proposal defines new APIs, which do not break compatibility with existing interfaces.
For XCM Integration, the proposal does not modify the existing XCM message format, which is backwards compatible.
-Prior Art and References
+Prior Art and References
There are several discussions related to the proposal, including:
- Original discussion about having a mechanism to avoid code duplications between the runtime and front-ends/wallets. In the original design, the custom computations are compiled as a wasm function.
@@ -8027,12 +8114,12 @@ PVQ does not conflict with them, and it can take advantage of these Pallet View
Authors s0me0ne-unkn0wn (13WGadgNgqSjiGQvfhimw9pX26mvGdYQ6XgrjPANSEDRoGMt)
-Summary
+Summary
This RFC proposes a change that makes it possible to identify types of compressed blobs stored on-chain, as well as used off-chain, without the need for decompression.
-Motivation
+Motivation
Currently, a compressed blob does not give any idea of what's inside because the only thing that can be inside, according to the spec, is Wasm. In reality, other blob types are already being used, and more are to come. Apart from being error-prone in itself, the current approach does not allow properly routing the blob through the execution paths before its decompression, which will result in suboptimal implementations when more blob types are used. Thus, it is necessary to introduce a mechanism allowing identification of the blob type without decompressing it.
This proposal is intended to support future work enabling Polkadot to execute PolkaVM and, more generally, other-than-Wasm parachain runtimes, and allow developers to introduce arbitrary compression methods seamlessly in the future.
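One way such identification could work is a magic-byte prefix ahead of the compressed payload (the prefixes below are invented for illustration; this is an assumption of the example, not the RFC's final wire format):

```rust
/// Identify the blob type from a short prefix, without decompressing.
/// The "WASM"/"PVM\0" magics are hypothetical placeholders.
fn blob_type(blob: &[u8]) -> Option<&'static str> {
    if blob.starts_with(b"WASM") {
        Some("wasm")
    } else if blob.starts_with(b"PVM\0") {
        Some("polkavm")
    } else {
        None // legacy blob: type unknown until decompressed
    }
}
```

Routing code can then dispatch on the returned type before any decompression work is done.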
-Stakeholders
+Stakeholders
Node developers are the main stakeholders for this proposal. It also creates a foundation on which parachain runtime developers will build.
Explanation
Overview
@@ -8063,14 +8150,14 @@ PVQ does not conflict with them, and it can take advantage of these Pallet View
Testing, Security, and Privacy
As the change increases granularity, it will positively affect both testing possibilities and security, allowing developers to check what's inside a given compressed blob precisely. Testing the change itself is trivial. Privacy is not affected by this change.
-Performance, Ergonomics, and Compatibility
+Performance, Ergonomics, and Compatibility
Performance
The current implementation's performance is not affected by this change. Future implementations allowing for the execution of other-than-Wasm parachain runtimes will benefit from this change performance-wise.
Ergonomics
The end-user ergonomics is not affected. The ergonomics for developers will benefit from this change as it enables exact checks and less guessing.
Compatibility
The change is designed to be backward-compatible.
-Prior Art and References
+Prior Art and References
SDK PR#6704 (WIP) introduces a mechanism similar to that described in this proposal and proves the necessity of such a change.
Unresolved Questions
None
@@ -8114,9 +8201,9 @@ PVQ does not conflict with them, and it can take advantage of these Pallet View
Authors ordian
-Summary
+Summary
This RFC proposes changes to the erasure coding algorithm and the method for computing the erasure root on Polkadot to improve performance of both processes.
-Motivation
+Motivation
The Data Availability (DA) Layer in Polkadot provides a foundation for
shared security, enabling Approval Checkers and Collators to download
Proofs-of-Validity (PoV) for security and liveness purposes respectively.
@@ -8133,7 +8220,7 @@ The proposed change is orthogonal to RFC-47 and can be used in conjunction with
collator nodes), we propose bundling another performance-enhancing breaking
change that addresses the CPU bottleneck in the erasure coding process, but using
a separate node feature (NodeFeatures part of HostConfiguration) for its activation.
-Stakeholders
+Stakeholders
- Infrastructure providers (operators of validator/collator nodes)
will need to upgrade their client version in a timely manner
@@ -8185,7 +8272,7 @@ faster deployment for most parachains but would add complexity.
Compatibility
This requires a breaking change that can be coordinated following the same approach as in RFC-47.
-Prior Art and References
+Prior Art and References
JAM already utilizes the same optimizations described in the Graypaper.
Unresolved Questions
None.
@@ -8219,7 +8306,7 @@ faster deployment for most parachains but would add complexity.
Authors Jonas Gehrlein
-Summary
+Summary
This RFC proposes burning 80% of transaction fees accrued on Polkadot’s Relay Chain and, more significantly, on all its system parachains. The remaining 20% would continue to incentivize Validators (on the Relay Chain) and Collators (on system parachains) for including transactions. The 80:20 split is motivated by preserving the incentives for Validators, which are crucial for the security of the network, while establishing a consistent fee policy across the Relay Chain and all system parachains.
-
@@ -8230,7 +8317,7 @@ faster deployment for most parachains but would add complexity.
This proposal extends the system's deflationary direction and enables DOT holders to capture value directly from an overall increase in activity on the network.
-Motivation
+Motivation
Historically, transaction fees on both the Relay Chain and the system parachains (with a few exceptions) have been relatively low. This is by design—Polkadot is built to scale and offer low-cost transactions. While this principle remains unchanged, growing network activity could still result in a meaningful accumulation of fees over time.
Implementing this RFC ensures that potentially increasing activity, manifesting in more fees, is captured for all token holders. It further aligns how the network handles fees (such as those from transactions or for coretime usage). The arguments in support of this are close to those outlined in RFC0010. Specifically, burning transaction fees has the following benefits:
Compensation for Coretime Usage
@@ -8238,7 +8325,7 @@ faster deployment for most parachains but would add complexity.
Value Accrual and Deflationary Pressure
By burning the transaction fees, the system effectively reduces the token supply and thereby increases the scarcity of the native token. This deflationary pressure can increase the token's long-term value and ensures that the value captured is translated equally to all existing token holders.
This proposal requires only minimal code changes, making it inexpensive to implement, yet it introduces a consistent policy for handling transaction fees across the network. Crucially, it positions Polkadot for a future where fee burning could serve as a counterweight to an otherwise inflationary token model, ensuring that value generated by network usage is returned to all DOT holders.
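The proposed split is plain arithmetic (a sketch; the real runtime would route this through its fee-handling logic, and the function name here is illustrative):

```rust
/// Split a transaction fee per the proposed 80:20 policy.
/// Returns (burned, paid_to_validator_or_collator).
fn split_fee(fee: u128) -> (u128, u128) {
    let reward = fee * 20 / 100;
    let burned = fee - reward; // 80%, with any rounding dust also burned
    (burned, reward)
}
```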
-Stakeholders
+Stakeholders
-
All DOT Token Holders: Benefit from reduced supply and direct value capture as network usage increases.
@@ -8276,12 +8363,12 @@ faster deployment for most parachains but would add complexity.
Authors eskimor
-Summary
+Summary
This RFC proposes an amendment to RFC-1 Agile Coretime: Renewal prices will no
longer only be adjusted based on a configurable renewal bump, but also to the
lower end of the current sale - if that turns out higher.
An implementation can be found here.
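The amended rule reduces to taking a maximum (an integer-percent sketch; the runtime uses Perbill-style fixed-point math and different identifier names):

```rust
/// Amended renewal pricing: the old price still grows by the configured
/// renewal bump, but is additionally raised to the lower end of the
/// current sale's price if that turns out higher.
fn new_renewal_price(old_price: u128, renewal_bump_percent: u128, sale_end_price: u128) -> u128 {
    let bumped = old_price + old_price * renewal_bump_percent / 100;
    bumped.max(sale_end_price)
}
```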
-Motivation
+Motivation
In RFC-1, we strived for perfect predictability on renewal prices, but what we
expected unfortunately got proven in practice: Perfect predictability allows
for core hoarding and cheap market manipulation, with the effect that both on
@@ -8293,7 +8380,7 @@ extend to elastic scaling and in practice, even existing teams wanting to keep
their core, because they forgot to renew in the interlude.
In a nutshell, the current situation is severely hindering teams from deploying
on Polkadot: We are essentially in a Denial of Service situation.
-Stakeholders
+Stakeholders
Stakeholders are existing teams that already have a core and new teams wanting to join the ecosystem.
Explanation
This RFC proposes to fix this situation by limiting renewal price
@@ -8392,13 +8479,13 @@ tenants. Having them exposed at least with this 10x reduction seems a sensible
valuation.
There are no privacy concerns.
-Performance, Ergonomics, and Compatibility
+Performance, Ergonomics, and Compatibility
The proposed changes are backwards compatible. No interfaces are changed.
Performance is not affected. Ergonomics should be greatly improved especially
for new entrants, as cores will be available for sale again. A configured
minimum price also ensures that the starting price of the Dutch auction stays
reasonably high, deterring sniping all the cores at the beginning of a sale.
-Prior Art and References
+Prior Art and References
This RFC is altering RFC-1 and taking ideas from RFC-17, mainly the introduction of a minimum price.
Future Directions and Related Material
This RFC should solve the immediate problems we are seeing in production right
@@ -8458,13 +8545,13 @@ a few cores not for sale should be enough to mitigate such a situation.
Authors Jeff Burdges, Alistair Stewart
-Summary
+Summary
Availability (bitfield) votes gain a preferred_fork flag which expresses the validator's opinion upon relay chain equivocations and babe forks, while still sharing availability votes for all relay chain blocks. We make relay chain block production require a supermajority with preferred_fork set, so forks cannot advance if they split the honest validators, which creates an early soft consensus. We similarly defend ELVES from relay chain equivocation attacks and prevent redundant approvals across babe forks.
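The supermajority rule can be sketched as a simple tally (illustrative types; the real votes are availability bitfields, and the threshold convention here is an assumption):

```rust
// Toy availability vote: only the new flag is modelled.
struct AvailabilityVote {
    preferred_fork: bool,
    // bitfield of available candidates elided
}

/// A fork can only advance if a supermajority (strictly more than 2/3
/// of all validators) set preferred_fork for it, so forks that split
/// the honest validators stall.
fn fork_can_advance(votes_for_fork: &[AvailabilityVote], n_validators: usize) -> bool {
    let preferred = votes_for_fork.iter().filter(|v| v.preferred_fork).count();
    preferred * 3 > n_validators * 2
}
```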
-Motivation
+Motivation
We've always known relay chain equivocations break the ELVES threat model. We originally envisioned ELVES having fallback pathways, but doing fallbacks requires dangerously subtle debugging. We support more assignment schemes in ELVES this way too, including one novel post-quantum one, and very low CPU usage schemes.
We expect this early soft consensus creates back pressure that improves performance under babe forks.
Alistair: TODO?
-Stakeholders
+Stakeholders
We modify the availability votes and restrict relay chain blocks, fork choice, and ELVES start conditions, so mostly the parachain. See alternatives notes on the flag under sassafras chains like JAM.
Explanation
Availability voting
@@ -8490,12 +8577,12 @@ a few cores not for sale should be enough to mitigate such a situation.
Concerns: Drawbacks, Testing, Security, and Privacy
Adds subtle timing constraints, which could entrench existing performance obstacles. We might explore variations that ignore wall clock time.
We've always known relay chain equivocations break the ELVES threat model. We originally envisioned ELVES having fallback pathways, but these were complex and demanded unused code paths, which cannot realistically be debugged. Although complex, the early soft consensus scheme feels less complex overall. We know timing sucks to optimise in a distributed system, but at least doing so uses everyday code paths.
-Performance, Ergonomics, and Compatibility
+Performance, Ergonomics, and Compatibility
We expect early soft consensus to introduce back pressure that radically alters performance. We no longer run approvals checks upon all forks. As primary slots occur once every other slot in expectation, one might expect a 25% reduction in CPU load, but this depends upon diverse factors.
We apply back pressure by dropping some whole relay chain blocks though, so this shall increase the expected parachain blocktime somewhat, but how much depends upon future optimisation work.
Compatibility
Major upgrade
-Prior Art and References
+Prior Art and References
...
Unresolved Questions
We halt the chain when less than 2/3 of validators are online. We consider this reasonable since governance now runs on a parachain, ELVES would not secure, and nothing can be finalized anyways. We could perhaps add some "recovery mode" where the relay chain embeds entire system parachain blocks, but doing so might not warrant the effort required.
@@ -8557,16 +8644,16 @@ a few cores not for sale should be enough to mitigate such a situation.
Authors Jeff Burdges, ...
-Summary
+Summary
An off-chain approximation protocol should assign rewards based upon the approvals and availability work done by validators.
All validators track which approval votes they actually use, reporting the aggregate, after which an on-chain median computation gives a good approximation under byzantine assumptions. Approval checkers report aggregate information about which availability chunks they use too, but in availability we need a tit-for-tat game to enforce honesty, because approval committees could often bias results thanks to their small size.
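The on-chain median step can be sketched as follows (a minimal stand-in for the real aggregation; a byzantine minority of reporters cannot move the median far):

```rust
/// Median of the validators' reported usage counts for one peer.
/// Sorting and indexing gives the upper median for even lengths.
fn median(mut reports: Vec<u64>) -> u64 {
    reports.sort_unstable();
    reports[reports.len() / 2]
}
```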
-Motivation
+Motivation
We want all or most polkadot subsystems to be profitable for validators, because otherwise operators might profit from running modified code. In particular, almost all rewards in Kusama/Polkadot should come from work done securing parachains, primarily approval checking, but also backing, availability, and support of XCMP.
Among these tasks, our highest priorities must be approval checks, which ensure soundness, and sending availability chunks to approval checkers. We prove backers must be paid strictly less than approval checkers.
At present though, validators' rewards have relatively little relationship to validators' operating costs, in terms of bandwidth and CPU time. Worse, polkadot's scaling makes us particularly vulnerable to "no-shows" caused by validators skipping their approval checks.
We're particularly concerned about hardware specs' impact upon the number of parachain cores. We've requested relatively low-spec machines so far, only four physical CPU cores, although some run even lower specs like only two physical CPU cores. Alone, rewards cannot fix our low-spec validator problem, but rewards and outreach together should have far more impact than either alone.
In future, we'll further increase validator spec requirements, which directly improve polkadot's throughput, and which repeats this dynamic of purging under-spec nodes, except outreach becomes more important because de facto too many slow validators can "out-vote" the faster ones.
-Stakeholders
+Stakeholders
We alter the validators' rewards protocol, but with negligible impact upon rewards for honest validators who comply with hardware and bandwidth recommendations.
We shall still reward participation in relay chain consensus of course, which de facto means block production but not finality, but these current reward levels shall wind up greatly reduced. Any validators who manipulate block rewards now could lose rewards here, simply because of rewards being shifted from block production to availability, but this sounds desirable.
We've discussed roughly this rewards protocol in https://hackmd.io/@rgbPIkIdTwSICPuAq67Jbw/S1fHcvXSF and https://github.com/paritytech/polkadot-sdk/issues/1811 as well as related topics like https://github.com/paritytech/polkadot-sdk/issues/5122
@@ -8737,7 +8824,7 @@ At this point, we compute $\beta\prime_w = \sum_v \beta\prime_{w,v}$ on-chain fo
We discuss approvals being considered by the tit-for-tat in earlier drafts. An adversary who successfully manipulates the rewards median votes would've already violated polkadot's security assumptions though, which requires a hard fork and correcting the dot allocation. Incorrectly reported approval_usages remain interesting statistics though.
Adversarial validators could manipulate their availability votes though, even without being a supermajority. If they still download honestly, then this costs them more rewards than they earn. We do not prevent validators from preferentially obtaining their pieces from their friends though. We should analyze, or at least observe, the long-term consequences.
A priori, a whale nominator's validators could stiff other validators but then rotate their validators quickly enough that they never suffer being skipped back. We discuss several possible solutions, and their difficulties, under "Rob's nominator-wise skipping" in https://hackmd.io/@rgbPIkIdTwSICPuAq67Jbw/S1fHcvXSF but overall less seems like more here. Also frequent validator rotation could be penalized elsewhere.
-Performance, Ergonomics, and Compatibility
+Performance, Ergonomics, and Compatibility
We operate off-chain except for final rewards votes and median tallies. We expect lower overhead rewards protocols would lack information, thereby admitting easier cheating.
Initially, we designed the ELVES approval gadget to allow on-chain operation, in part for rewards computation, but doing so looks expensive. Also, on-chain rewards computation remains only an approximation too, but could even be biased more easily than our off-chain protocol presented here.
@@ -8745,7 +8832,7 @@ At this point, we compute $\beta\prime_w = \sum_v \beta\prime_{w,v}$ on-chain fo
We already teach validators about missed parachain blocks, but we'll teach approval checking more going forwards, because current efforts focus more upon backing.
JAM's block exports should not complicate availability rewards, but could impact some alternative schemes.
-Prior Art and References
+Prior Art and References
None
Unresolved Questions
Provide specific questions to discuss and address before the RFC is voted on by the Fellowship. This should include, for example, alternatives to aspects of the proposed design where the appropriate trade-off to make is unclear.
Authors Pierre Krieger
Summary
Update the runtime-host interface to no longer make use of a host-side allocator.
Motivation
The heap allocation of the runtime is currently controlled by the host using a memory allocator on the host side.
The API of many host functions consists in allocating a buffer. For example, when calling ext_hashing_twox_256_version_1, the host allocates a 32 bytes buffer using the host allocator, and returns a pointer to this buffer to the runtime. The runtime later has to call ext_allocator_free_version_1 on this pointer in order to free the buffer.
Even though no benchmark has been done, it is pretty obvious that this design is very inefficient. To continue with the example of ext_hashing_twox_256_version_1, it would be more efficient to instead write the output hash to a buffer that was allocated by the runtime on its stack and passed by pointer to the function. Allocating a buffer on the stack in the worst case scenario simply consists of decreasing a number, and in the best case scenario is free. Doing so would save many Wasm memory reads and writes by the allocator, and would save a function call to ext_allocator_free_version_1.
Furthermore, the existence of the host-side allocator has become questionable over time. It is implemented in a very naive way, and for determinism and backwards compatibility reasons it needs to be implemented exactly identically in every client implementation. Runtimes make substantial use of heap memory allocations, and each allocation needs to go twice through the runtime <-> host boundary (once for allocating and once for freeing). Moving the allocator to the runtime side, while it would increase the size of the runtime, would be a good idea. But before the host-side allocator can be deprecated, all the host functions that make use of it need to be updated to not use it.
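The two calling conventions can be contrasted with a small sketch. This is illustrative only: the function names and the stand-in "hash" below are assumptions, not the real twox host function or ABI.

```rust
// Sketch of the two ABI styles discussed above, using a stand-in "hash"
// function; names and signatures are illustrative, not the real host API.

// Old style: the callee allocates a buffer and returns ownership, which
// the caller must later release (mirroring ext_allocator_free_version_1).
fn hash_host_allocated(data: &[u8]) -> Box<[u8; 32]> {
    let mut out = Box::new([0u8; 32]);
    fake_hash(data, &mut out);
    out // caller is responsible for freeing this allocation
}

// New style: the caller provides the output buffer (e.g. on its stack),
// so no heap allocation or free call is needed.
fn hash_caller_allocated(data: &[u8], out: &mut [u8; 32]) {
    fake_hash(data, out);
}

// Trivial stand-in for a 256-bit hash; NOT a real twox implementation.
fn fake_hash(data: &[u8], out: &mut [u8; 32]) {
    for (i, b) in data.iter().enumerate() {
        out[i % 32] ^= b.wrapping_mul(31).wrapping_add(i as u8);
    }
}

fn main() {
    let boxed = hash_host_allocated(b"hello");
    let mut stack_buf = [0u8; 32];
    hash_caller_allocated(b"hello", &mut stack_buf);
    // Both styles produce the same output; only the allocation differs.
    assert_eq!(*boxed, stack_buf);
}
```

The second style avoids both the allocation round-trip and the later free call, which is the saving the RFC is after.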
Stakeholders
No attempt was made at convincing stakeholders.
Explanation
New host functions
@@ -9054,10 +9141,10 @@ This would remove the possibility to synchronize older blocks, which is probably
License MIT
Summary
This RFC proposes a dynamic pricing model for the sale of Bulk Coretime on the Polkadot UC. The proposed model updates the regular price of cores for each sale period, by taking into account the number of cores sold in the previous sale, as well as a limit of cores and a target number of cores sold. It ensures a minimum price and limits price growth to a maximum price increase factor, while also giving governance control over the steepness of the price change curve. It allows governance to address challenges arising from changing market conditions and should offer predictable and controlled price adjustments.
Accompanying visualizations are provided at [1].
Motivation
RFC-1 proposes periodic Bulk Coretime Sales as a mechanism to sell continuous regions of blockspace (suggested to be 4 weeks in length). A number of Blockspace Regions (compare RFC-1 & RFC-3) are provided for sale to the Broker-Chain each period and shall be sold in a way that provides value-capture for the Polkadot network. The exact pricing mechanism is out of scope for RFC-1 and shall be provided by this RFC.
A dynamic pricing model is needed. A limited number of Regions are offered for sale each period. The model needs to find the price for a period based on supply and demand of the previous period.
The model shall give Coretime consumers predictability about upcoming price developments and confidence that Polkadot governance can adapt the pricing model to changing market conditions.
@@ -9069,7 +9156,7 @@ This would remove the possibility to synchronize older blocks, which is probably
- The solution SHOULD provide a maximum factor of price increase should the limit of Regions sold per period be reached.
- The solution should allow governance to control the steepness of the price function.
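A price-update rule satisfying these requirements can be sketched as follows. This is only an illustration: the concrete curve proposed by this RFC differs, the linear interpolation and the function name are assumptions, and the steepness parameter is omitted for brevity.

```rust
/// Illustrative price update for a Bulk Coretime sale period; the
/// concrete curve in the RFC differs, this is only a sketch.
fn adjust_price(
    old_price: f64,
    sold: u32,       // cores sold in the previous sale
    target: u32,     // target number of cores sold
    limit: u32,      // maximum number of cores offered
    min_price: f64,  // price floor controlled by governance
    max_factor: f64, // maximum price increase factor per sale
) -> f64 {
    let factor = if sold >= target {
        // interpolate from 1.0 at the target up to max_factor at the limit
        let span = (limit - target).max(1) as f64;
        1.0 + (max_factor - 1.0) * (sold - target) as f64 / span
    } else {
        // below target, interpolate down toward 1.0 / max_factor
        let span = target.max(1) as f64;
        (1.0 / max_factor) + (1.0 - 1.0 / max_factor) * sold as f64 / span
    };
    (old_price * factor).max(min_price)
}

fn main() {
    // selling exactly the target leaves the price unchanged
    assert_eq!(adjust_price(1000.0, 5, 5, 10, 100.0, 2.0), 1000.0);
    // selling out raises the price by at most max_factor
    assert_eq!(adjust_price(1000.0, 10, 5, 10, 100.0, 2.0), 2000.0);
    // a sale with no demand still respects the price floor
    assert!(adjust_price(100.0, 0, 5, 10, 100.0, 2.0) >= 100.0);
}
```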
Stakeholders
The primary stakeholders of this RFC are:
- Protocol researchers and developers
None at present.
This pricing model is based on the requirements from the basic linear solution proposed in RFC-1, which is a simple dynamic pricing model and only used as a proof of concept. The present model adds additional considerations to make the model more adaptable under real conditions.
This RFC, if accepted, shall be implemented in conjunction with RFC-1.
This RFC proposes changes that enable the use of absolute locations in AccountId derivations, which allows protocols built using XCM to have static account derivations in any runtime, regardless of its position in the family hierarchy.
These changes would allow protocol builders to leverage absolute locations to maintain the exact same derived account address across all networks in the ecosystem, thus enhancing user experience.
One such protocol, that is the original motivation for this proposal, is InvArch's Saturn Multisig, which gives users a unifying multisig and DAO experience across all XCM connected chains.
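The effect of deriving accounts from absolute rather than relative locations can be illustrated with a toy derivation. The hashing below is a stand-in (std's default hasher, not the real blake2-based DescribeFamily scheme), and the location strings are hypothetical.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Stand-in for an AccountId derivation from an XCM location description;
// the real DescribeFamily/blake2 scheme differs, this only illustrates
// why the description must not depend on the observer's position.
fn derive_account(description: &str) -> u64 {
    let mut h = DefaultHasher::new();
    description.hash(&mut h);
    h.finish()
}

fn main() {
    // Relative descriptions differ per observing chain, so the derived
    // account differs too:
    let from_sibling = derive_account("../Parachain(2000)/Account(alice)");
    let from_relay = derive_account("Parachain(2000)/Account(alice)");
    assert_ne!(from_sibling, from_relay);

    // An absolute (universal) description is identical everywhere, so
    // every runtime derives the same account:
    let a = derive_account("GlobalConsensus(Polkadot)/Parachain(2000)/Account(alice)");
    let b = derive_account("GlobalConsensus(Polkadot)/Parachain(2000)/Account(alice)");
    assert_eq!(a, b);
}
```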
Tests can be done using simple unit tests, as this is not a change to XCM itself but rather to types defined in xcm-builder.
Security considerations should be taken with the implementation to make sure no unwanted behavior is introduced.
This proposal does not introduce any privacy considerations.
Depending on the final implementation, this proposal should not introduce much overhead to performance.
The ergonomics of this proposal depend on the final implementation details.
Backwards compatibility should remain unchanged, although that depends on the final implementation.
DescribeFamily type: https://github.com/paritytech/polkadot-sdk/blob/master/polkadot/xcm/xcm-builder/src/location_conversion.rs#L122
WithComputedOrigin type: https://github.com/paritytech/polkadot-sdk/blob/master/polkadot/xcm/xcm-builder/src/barriers.rs#L153
This RFC proposes to make modifications to voting power delegations as part of the Conviction Voting pallet. The changes being proposed include:
It has become clear since the launch of OpenGov that there are a few common tropes which pop up time and time again:
We believe (based on feedback from token holders with a larger stake in the network) that if there were some changes made to delegation mechanics, these larger stake holders would be more likely to delegate their voting power to active network participants – thus greatly increasing the support turnout.
The primary stakeholders of this RFC are:
We do not foresee any drawbacks by implementing these changes. If anything we believe that this should help to increase overall voter turnout (via the means of delegation) which we see as a net positive.
We feel that the Polkadot Technical Fellowship would be the most competent collective to identify the testing requirements for the ideas presented in this RFC.
This change may add extra chain storage requirements on Polkadot, especially with respect to nested delegations.
The change to add nested delegations may affect governance interfaces such as Nova Wallet who will have to apply changes to their indexers to support nested delegations. It may also affect the Polkadot Delegation Dashboard as well as Polkassembly & SubSquare.
We want to highlight the importance for ecosystem builders to create a mechanism for indexers and wallets to be able to understand that changes have occurred such as increasing the pallet version, etc.
N/A
N/A
This RFC proposes a new model for a sustainable on-demand parachain registration, involving a smaller initial deposit and periodic rent payments. The new model considers that on-demand chains may be unregistered and later re-registered. The proposed solution also ensures a quick startup for on-demand chains on Polkadot in such cases.
With the support of on-demand parachains on Polkadot, there is a need to explore a new, more cost-effective model for registering validation code. In the current model, the parachain manager is responsible for reserving a unique ParaId and covering the cost of storing the validation code of the parachain. These costs can escalate, particularly if the validation code is large. We need a better, sustainable model for registering on-demand parachains on Polkadot to help smaller teams deploy more easily.
This RFC suggests a new payment model to create a more financially viable approach to on-demand parachain registration. In this model, a lower initial deposit is required, followed by recurring payments upon parachain registration.
This new model will coexist with the existing one-time deposit payment model, offering teams seeking to deploy on-demand parachains on Polkadot a more cost-effective alternative.
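The cost difference between the two models can be sketched with simple accounting. The struct, field names, and numbers below are assumptions for illustration, not values the RFC would configure.

```rust
/// Illustrative rent accounting for on-demand registration; names and
/// numbers are assumptions, not the values the RFC would configure.
struct Registration {
    deposit: u128,        // smaller one-time deposit, held on registration
    rent_per_period: u128,
    periods_paid: u32,
}

impl Registration {
    fn new(deposit: u128, rent_per_period: u128) -> Self {
        Self { deposit, rent_per_period, periods_paid: 0 }
    }
    /// Pay rent for one more period; returns the amount charged.
    fn pay_rent(&mut self) -> u128 {
        self.periods_paid += 1;
        self.rent_per_period
    }
    /// Total cost after the periods paid so far under the rent model.
    fn total_cost(&self) -> u128 {
        self.deposit + self.rent_per_period * self.periods_paid as u128
    }
}

fn main() {
    // One-time model: a large upfront deposit covering the full
    // validation-code storage cost (hypothetical figure).
    let one_time_deposit: u128 = 1_000;

    // Rent model: smaller deposit plus recurring payments.
    let mut reg = Registration::new(100, 50);
    for _ in 0..4 {
        reg.pay_rent();
    }
    // After a few periods, the rent model has cost far less upfront.
    assert_eq!(reg.total_cost(), 300);
    assert!(reg.total_cost() < one_time_deposit);
}
```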
Proper research should be conducted on setting the configuration values of the new system, since these values can have a great impact on the network.
An audit is required to ensure the implementation's correctness.
The proposal introduces no new privacy concerns.
This RFC should not introduce any performance impact.
This RFC does not affect the current parachains, nor the parachains that intend to use the one-time payment model for parachain registration.
This RFC does not break compatibility.
Prior discussion on this topic: https://github.com/paritytech/polkadot-sdk/issues/1796
None at this time.
Rather than enforce a limit to the total memory consumption on the client side by loading the value at :heappages, enforce that limit on the runtime side.
From the early days of Substrate up until recently, the runtime was present in two forms: the wasm runtime (wasm bytecode passed through an interpreter) and the native runtime (native code directly run by the client).
Since the wasm runtime has a lower amount of available memory (4 GiB maximum) compared to the native runtime, and in order to ensure that the wasm and native runtimes always produce the same outcome, it was necessary to clamp the amount of memory available to both runtimes to the same value.
In order to achieve this, a special storage key (a "well-known" key) :heappages was introduced and represents the number of "wasm pages" (one page equals 64kiB) of memory that are available to the memory allocator of the runtimes. If this storage key is absent, it defaults to 2048, which is 128 MiB.
The native runtime has since been removed, but the concept of "heap pages" still exists. This RFC proposes a simplification to the design of Polkadot by removing the concept of "heap pages" as currently known, and proposes alternative ways to achieve the goal of limiting the amount of memory available.
Client implementers and low-level runtime developers.
This RFC proposes the following changes to the client:
This RFC would reduce the chance of a consensus issue between clients.
The :heappages are a rather obscure feature, and it is not clear what happens in some corner cases such as the value being too large (error? clamp?) or malformed. This RFC would completely erase these questions.
In case of path A, it is unclear how performance would be affected. Path A consists of moving client-side operations to the runtime without changing these operations, and as such performance differences are expected to be minimal. Overall, we're talking about one addition/subtraction per malloc and per free, so this is more than likely completely negligible.
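The per-malloc/per-free accounting path A relies on can be sketched as a counter around the allocator. The struct and method names are illustrative, not the actual runtime allocator API.

```rust
/// Sketch of runtime-side accounting as in path A: each malloc adds to a
/// counter, each free subtracts, and allocation fails once a configured
/// limit would be exceeded. Names are illustrative.
struct LimitedAllocator {
    limit: usize,     // e.g. heap_pages * 64 * 1024 bytes
    allocated: usize, // bytes currently handed out
}

impl LimitedAllocator {
    fn new(limit: usize) -> Self {
        Self { limit, allocated: 0 }
    }
    fn malloc(&mut self, size: usize) -> Result<(), &'static str> {
        if self.allocated + size > self.limit {
            return Err("out of memory");
        }
        self.allocated += size; // one addition per malloc
        Ok(())
    }
    fn free(&mut self, size: usize) {
        self.allocated -= size; // one subtraction per free
    }
}

fn main() {
    // 2048 pages of 64 KiB = 128 MiB, the current default
    let mut alloc = LimitedAllocator::new(2048 * 64 * 1024);
    assert!(alloc.malloc(100 * 1024 * 1024).is_ok());
    // a second 100 MiB request would exceed the 128 MiB limit
    assert!(alloc.malloc(100 * 1024 * 1024).is_err());
    alloc.free(100 * 1024 * 1024);
    assert!(alloc.malloc(100 * 1024 * 1024).is_ok());
}
```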
In case of path B and C, the performance gain would be a net positive, as this RFC strictly removes things.
This RFC would isolate the client and runtime more from each other, making it a bit easier to reason about the client or the runtime in isolation.
Not a breaking change. The runtime-side changes can be applied immediately (without even having to wait for changes in the client), then as soon as the runtime is updated, the client can be updated without any transition period. One can even consider updating the client before the runtime, as it corresponds to path C.
None.
None.
This RFC proposes adding a trivial governance track on Kusama to facilitate X (formerly known as Twitter) posts on the @kusamanetwork account. The technical aspect of implementing this in the runtime is very inconsequential and straight-forward, though it might get more technical if the Fellowship wants to regulate this track with a non-existent permission set. If this is implemented it would need to be followed up with:
The overall motivation for this RFC is to decentralize the management of the Kusama brand/communication channel to KSM holders. This is necessary in my opinion primarily because of the inactivity of the account in recent history, with posts spanning weeks or months apart. I am currently unaware of who/what entity manages the Kusama X account, but if they are affiliated with Parity or W3F this proposed solution could also offload some of the legal ramifications of making (or not making) … and the community becomes totally autonomous in the management of Kusama's X posts … that could be offloaded to openGov, provided this proof-of-concept is successful.
Finally, this RFC is the epitome of experimentation that Kusama is ideal for. This proposal may spark newfound excitement for Kusama and help us realize Kusama's potential for pushing boundaries and trying new unconventional ideas.
This idea has not been formalized by any individual (or group of) KSM holder(s). To my knowledge the socialization of this idea is contained entirely in my recent X post here, but it is possible that an idea like this one has been discussed in other places. It appears to me that the ecosystem would welcome a change like this which is why I am taking action to formalize the discussion.
…the auth tokens are given to people actually running the tools; a house of cards. That's why I recommend restricting permissions of this track to remarks and batches of remarks, or something equally inconsequential.
Building the tools for this implementation is really straight-forward and could be audited by Fellowship members, and the community at large, on GitHub.
The largest security concern would be the management of Kusama's X account's auth tokens. We would need to ensure that they aren't compromised.
If a track on Kusama promises users that compliant referenda enacted therein would be posted on Kusama's X account, users would expect that track to perform as promised. If the house of cards tumbles down and a compliant referendum doesn't actually get anything posted, users might think that Kusama is broken or unreliable. This … out of Kusama's scope. But it will require some off-chain effort to maintain.
The current size of the decision deposit on some tracks is too high for many proposers. As a result, those needing to use it have to find someone else willing to put up the deposit for them - and a number of legitimate attempts to use the root track have timed out. This track would provide a more affordable (though slower) route for these holders to use the root track.
There have been recent attempts to use the Kusama root track which have timed out with no decision deposit placed. Usually, these referenda have been related to parachain registration issues.
Propose to address this by adding a new referendum track [22] Referendum Deposit which can place the decision deposit on another referendum. This would require the following changes:
An alternative to this might be to reduce the decision deposit size on some of the more expensive tracks. However, part of the purpose of the high deposit - at least on the root track - is to prevent spamming the limited queue with junk referenda.
Will need additional test cases for the modified pallet and runtime. No security or privacy issues.
No significant performance impact.
Only changes related to adding the track. Existing functionality is unchanged.
No compatibility issues.
A pallet to facilitate enhanced multisig accounts. The main enhancement is that we store a multisig account in state with related info (signers, threshold, etc.). The module affords enhanced control over administrative operations such as adding/removing signers, changing the threshold, deleting the account, and canceling an existing proposal. Each signer can approve/reject a proposal while it still exists. The proposal is not intended for migrating or getting rid of the existing multisig; it allows both options to coexist.
For the rest of the RFC we use the following terms:
Stateful Multisig to refer to the proposed pallet.
Stateless Multisig to refer to the current multisig pallet in polkadot-sdk.
Entities in the Polkadot ecosystem need a way to manage their funds and other operations in a secure and efficient way. Multisig accounts are a common way to achieve this. Entities by definition change over time: members of the entity may change, threshold requirements may change, and the multisig account may need to be deleted. For even more enhanced hierarchical control, the multisig account may need to be controlled by other multisig accounts.
Current native solutions for multisig operations are less than optimal performance-wise (as we'll explain later in the RFC), and lack fine-grained control over the multisig account.
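The stored state and approval rule described above can be sketched as follows. The types, field names, and the rule that only current signers' approvals count are illustrative assumptions, not the pallet's actual design.

```rust
use std::collections::BTreeSet;

/// Minimal sketch of stored multisig state as described above; field
/// names and the approval rule are illustrative only.
struct StatefulMultisig {
    signers: BTreeSet<u32>, // account ids, simplified to u32
    threshold: u32,
}

struct Proposal {
    approvals: BTreeSet<u32>,
}

impl StatefulMultisig {
    fn approve(&self, proposal: &mut Proposal, signer: u32) -> Result<(), &'static str> {
        if !self.signers.contains(&signer) {
            return Err("not a signer");
        }
        proposal.approvals.insert(signer);
        Ok(())
    }
    fn is_executable(&self, proposal: &Proposal) -> bool {
        // only approvals from *current* signers count, so removing a
        // signer retroactively invalidates their approval
        proposal.approvals.iter().filter(|a| self.signers.contains(a)).count()
            >= self.threshold as usize
    }
}

fn main() {
    let mut ms = StatefulMultisig {
        signers: [1, 2, 3].into_iter().collect(),
        threshold: 2,
    };
    let mut p = Proposal { approvals: BTreeSet::new() };
    ms.approve(&mut p, 1).unwrap();
    assert!(!ms.is_executable(&p));
    ms.approve(&mut p, 2).unwrap();
    assert!(ms.is_executable(&p));
    // administrative change: removing signer 2 drops the proposal below
    // the threshold again
    ms.signers.remove(&2);
    assert!(!ms.is_executable(&p));
}
```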
DAOs can utilize multisig accounts to ensure that decisions are made collectively, and much more...
Standard audit/review requirements apply.
Doing a back-of-the-envelope calculation to prove that the stateful multisig is more efficient than the stateless multisig, given its smaller footprint on blocks.
Quick review over the extrinsics for both as it affects the block size:
The Stateful Multisig will have better ergonomics for managing multisig accounts for both developers and end-users.
This RFC is compatible with the existing implementation and can be handled via upgrades and migration. It's not intended to replace the existing multisig pallet.
multisig pallet in polkadot-sdk
This proposes to increase the maximum length of PGP Fingerprint values from a 20 bytes/chars limit to a 40 bytes/chars limit.
Pretty Good Privacy (PGP) Fingerprints are shorter versions of their corresponding Public Key that may be printed on a business card.
They may be used by someone to validate the correct corresponding Public Key.
The maximum length of identity PGP Fingerprint values should be increased from the current 20 bytes/chars limit to at least a 40 bytes/chars limit to support PGP Fingerprints and GPG Fingerprints.
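The size mismatch is easy to demonstrate: a PGP v4 fingerprint is 20 raw bytes, but its usual textual hex form is 40 characters, which a 20-byte/char field cannot hold. The fingerprint value below is hypothetical.

```rust
/// A hypothetical 20-byte PGP v4 fingerprint. Stored as raw bytes it
/// fits a 20-byte field, but its usual textual hex form needs 40 chars.
fn main() {
    let fingerprint_bytes: [u8; 20] = [
        0x0D, 0x69, 0xE1, 0x1F, 0x12, 0xBD, 0xBA, 0x07, 0x7B, 0x37,
        0x26, 0xAB, 0x4E, 0x1F, 0x79, 0x9A, 0xA4, 0xFF, 0x22, 0x79,
    ];
    let hex: String = fingerprint_bytes.iter().map(|b| format!("{:02X}", b)).collect();
    assert_eq!(fingerprint_bytes.len(), 20);
    assert_eq!(hex.len(), 40); // textual form exceeds a 20-char limit
}
```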
Implementations would be tested for adherence by checking that 40 bytes/chars PGP Fingerprints are supported.
No effect on security or privacy has been identified beyond what already exists.
No implementation pitfalls have been identified.
It would be an optimization, since the associated interfaces exposed to developers and end-users could start being used.
To minimize additional overhead the proposal suggests a 40 bytes/chars limit since that would at least provide support for PGP Fingerprints, satisfying the solution requirements.
No potential ergonomic optimizations have been identified.
Updates to Polkadot.js Apps, API and its documentation and those referring to it may be required.
No prior articles or references.
No further questions at this stage.
This proposes to require a slashable deposit in the broker pallet when initially purchasing or renewing Bulk Coretime or Instantaneous Coretime cores.
Additionally, it proposes to record a reputational status based on the behavior of the purchaser, as it relates to their use of Kusama Coretime cores that they purchase, and to possibly reserve a proportion of the cores for prospective purchasers that have an on-chain identity.
There are sales of Kusama Coretime cores scheduled to occur later this month by Coretime Marketplace Lastic.xyz, initially in limited quantities, and potentially also by RegionX in future, subject to their Polkadot referendum #582. This poses a risk: some purchasers may buy Kusama Coretime cores with no intention of actually placing a workload on them or leasing them out, which would prevent those that wish to purchase and actually use Kusama Coretime cores from being able to use any cores at all.
Reputation. To disincentivise certain behaviours, a reputational status indicator could be used to record the historic behavior of the purchaser and whether on-chain judgement has determined they have adequately rectified that behaviour, as it relates to their usage of Kusama Coretime cores that they purchase.
Lack of a slashable deposit in the Broker pallet is a security concern, since it exposes Kusama Coretime sales to potential abuse.
Reserving a proportion of Kusama Coretime sales cores for those with on-chain identities should not be to the exclusion of accounts that wish to remain anonymous or cause cores to be wasted unnecessarily. As such, if cores that are reserved for on-chain identities remain unsold then they should be released to anonymous accounts that are on a waiting list.
No implementation pitfalls have been identified.
It should improve performance, as it reduces the potential for state bloat: there is less risk of the undesirable Kusama Coretime sales activity that would be apparent with no requirement for a slashable deposit, or with no reputational risk to purchasers that waste or misuse Kusama Coretime cores.
The solution proposes to minimize the risk of some Kusama Coretime cores not even being used or leased to perform any tasks at all.
The mechanism for setting a slashable deposit amount should avoid undue complexity for users.
Updates to Polkadot.js Apps, API and its documentation and those referring to it may be required.
No prior articles.
This RFC proposes the addition of a secondary market feature to either the broker pallet or as a separate pallet maintained by Lastic, enabling users to list and purchase regions. This includes creating, purchasing, and removing listings, as well as emitting relevant events and handling associated errors.
Currently, the broker pallet lacks functionality for a secondary market, which limits users' ability to freely trade regions. This RFC aims to introduce a secure and straightforward mechanism for users to list regions they own for sale and allow other users to purchase these regions.
While integrating this functionality directly into the broker pallet is one option, another viable approach is to implement it as a separate pallet maintained by Lastic. This separate pallet would have access to the broker pallet and add minimal functionality necessary to support the secondary market.
Adding smart contracts to the Coretime chain could also address this need; however, this process is expected to be lengthy and complex. We cannot afford to wait for this extended timeline to enable basic secondary market functionality. By proposing either integration into the broker pallet or the creation of a dedicated pallet, we can quickly enhance the flexibility and utility of the broker pallet, making it more user-friendly and valuable.
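The listing/purchase/removal flow described above can be sketched as a minimal market. All types, error strings, and method names below are illustrative assumptions, not the pallet's actual extrinsics.

```rust
use std::collections::HashMap;

/// Sketch of the secondary-market flow described above: list a region,
/// purchase it, or remove the listing. Types and errors are illustrative.
type RegionId = u32;
type AccountId = u32;

struct Market {
    listings: HashMap<RegionId, (AccountId, u128)>, // seller, price
}

impl Market {
    fn create_listing(&mut self, region: RegionId, seller: AccountId, price: u128) -> Result<(), &'static str> {
        if self.listings.contains_key(&region) {
            return Err("already listed");
        }
        self.listings.insert(region, (seller, price));
        Ok(())
    }
    fn purchase(&mut self, region: RegionId, payment: u128) -> Result<AccountId, &'static str> {
        let (seller, price) = *self.listings.get(&region).ok_or("not listed")?;
        if payment < price {
            return Err("insufficient payment");
        }
        self.listings.remove(&region); // region ownership would transfer here
        Ok(seller)
    }
    fn remove_listing(&mut self, region: RegionId, caller: AccountId) -> Result<(), &'static str> {
        match self.listings.get(&region) {
            Some((seller, _)) if *seller == caller => {
                self.listings.remove(&region);
                Ok(())
            }
            Some(_) => Err("not the seller"),
            None => Err("not listed"),
        }
    }
}

fn main() {
    let mut m = Market { listings: HashMap::new() };
    m.create_listing(7, 1, 500).unwrap();
    assert!(m.purchase(7, 400).is_err()); // underpaying fails
    assert_eq!(m.purchase(7, 500).unwrap(), 1);
    assert!(m.remove_listing(7, 1).is_err()); // already sold
}
```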
Primary stakeholders include:
This RFC proposes the integration of smart contracts on the Coretime chain to enhance flexibility and enable complex decentralized applications, including secondary market functionalities.
Currently, the Coretime chain lacks the capability to support smart contracts, which limits the range of decentralized applications that can be developed and deployed. By enabling smart contracts, the Coretime chain can facilitate more sophisticated functionalities such as automated region trading, dynamic pricing mechanisms, and other decentralized applications that require programmable logic. This will enhance the utility of the Coretime chain, attract more developers, and create more opportunities for innovation.
Additionally, while there is a proposal (#885) to allow EVM-compatible contracts on Polkadot’s Asset Hub, the implementation of smart contracts directly on the Coretime chain will provide synchronous interactions and avoid the complexities of asynchronous operations via XCM.
Primary stakeholders include:
Change the upgrade process of a parachain runtime upgrade to become an off-chain process with regards to the relay chain. Upgrades are still contained in parachain blocks, but will no longer need to end up in relay chain blocks nor in relay chain state.
Having parachain runtime upgrades go through the relay chain has always been seen as a scalability concern. Due to optimizations in statement distribution and asynchronous backing it became less crucial and got … this we would hope for far more parachains to get registered, thousands, potentially even tens of thousands. With so many PVFs registered, updates are expected to become more frequent, and even attacks on service quality for other parachains would become a higher risk.
This RFC has no impact on privacy.
This proposal lightens the load on the relay chain and is thus in general beneficial for the performance of the network; this is achieved by the …
Off-chain runtime upgrades have been discussed before, the architecture described here is simpler though as it piggybacks on already existing features, namely:
The SetFeesMode instruction and the fees_mode register allow for the existence of JIT withdrawal.
JIT withdrawal complicates the fee mechanism and leads to bugs and unexpected behaviour.
The proposal is to remove said functionality.
Another effort to simplify fee handling in XCM.
The JIT withdrawal mechanism creates bugs such as not being able to get fees when all assets are put into holding and none left in the origin location. This is a confusing behavior, since there are funds for fees, just not where the XCVM wants them. The XCVM should have only one entrypoint to fee payment, the holding register. That way there is also less surface for bugs.
Implementations and benchmarking must change for most existing pallet calls that send XCMs to other locations.
Performance will be improved since unnecessary checks will be avoided.
The previous RFC PR on the xcm-format repo, before XCM RFCs were moved to fellowship RFCs: https://github.com/polkadot-fellows/xcm-format/pull/57.
None.
The instruction should be deprecated as soon as this RFC is approved.
This RFC proposes a solution to replicate an existing pure proxy from one chain to others. The aim is to address the current limitations where pure proxy accounts, which are keyless, cannot have their proxy relationships recreated on different chains. This leads to issues where funds or permissions transferred to the same keyless account address on chains other than its origin chain become inaccessible.
A pure proxy is a new account created by a primary account. The primary account is set as a proxy for the pure proxy account, managing it. Pure proxies are keyless and non-reproducible, meaning they lack a private key and have an address derived from a preimage determined by on-chain logic. More on pure proxies can be found here.
For the purpose of this document, we define a keyless account as a "pure account", the controlling account as a "proxy account", and the entire relationship as a "pure proxy".
The relationship between a pure account (e.g., account ID: pure1) and its proxy (e.g., account ID: alice) is stored on-chain (e.g., parachain A) and currently cannot be replicated to another chain (e.g., parachain B). Because the account pure1 is keyless and its proxy relationship with alice is not replicable from the parachain A to the parachain B, alice does not control the pure1 account on the parachain B.
Given that these mistakes are likely, it is necessary to provide a solution to either prevent them or enable access to a pure account on a target chain.
Runtime Users, Runtime Devs, wallets, cross-chain dApps.
One possible solution is to allow a proxy to create or replicate a pure proxy relationship for the same pure account on a target chain. For example, Alice, as the proxy of the pure1 pure account on parachain A, should be able to set a proxy for the same pure1 account on parachain B.
Each chain expressly authorizes another chain to replicate its pure proxies, accepting the inherent risk of that chain potentially being compromised. This authorization allows a malicious actor from the compromised chain to take control of any pure proxy account on the chain that granted the authorization. However, this is limited to pure proxies that originated from the compromised chain if they have a chain-specific seed within the preimage.
There is a security issue, not introduced by the proposed solution but worth mentioning. The same spawner can create the pure accounts on different chains controlled by the different accounts. This is possible because the current preimage version of the proxy pallet does not include any non-reproducible, chain-specific data, and elements like block numbers and extrinsic indexes can be reproduced with some effort. This issue could be addressed by adding a chain-specific seed into the preimages of pure accounts.
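The effect of mixing a chain-specific seed into the preimage can be sketched with a toy derivation. The hashing here is a stand-in (std's default hasher, not the pallet's blake2 over SCALE-encoded fields), and the seed values are hypothetical.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Stand-in derivation of a pure account id from its preimage; the real
// pallet uses blake2 over SCALE-encoded fields, this only shows the
// effect of mixing in a chain-specific seed.
fn pure_account(spawner: u32, block: u32, ext_index: u32, chain_seed: Option<u64>) -> u64 {
    let mut h = DefaultHasher::new();
    (spawner, block, ext_index).hash(&mut h);
    if let Some(seed) = chain_seed {
        seed.hash(&mut h);
    }
    h.finish()
}

fn main() {
    // Without a chain seed, reproducing the block number and extrinsic
    // index on another chain yields the same keyless address:
    assert_eq!(pure_account(1, 100, 2, None), pure_account(1, 100, 2, None));

    // With per-chain seeds, the same inputs give different addresses on
    // different chains, closing the reproduction issue described above:
    let on_chain_a = pure_account(1, 100, 2, Some(0xAAAA));
    let on_chain_b = pure_account(1, 100, 2, Some(0xBBBB));
    assert_ne!(on_chain_a, on_chain_b);
}
```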
The replication is facilitated by XCM, which adds some additional load to the communication channel. However, since the number of replications is not expected to be large, the impact is minimal.
The proposed solution does not alter any existing interfaces. It does require clients to obtain the witness data, which should not be an issue with the support of an indexer.
None.
None.
None.
This RFC proposes compressing the state response message during the state syncing process to reduce the amount of data transferred.
State syncing can require downloading several gigabytes of data, particularly for blockchains with large state sizes, such as Astar, which has a state size exceeding 5 GiB (https://github.com/AstarNetwork/Astar/issues/1110). This presents a significant challenge for nodes with slower network connections. Additionally, the current state sync implementation lacks a persistence feature (https://github.com/paritytech/polkadot-sdk/issues/4), meaning any network disruption forces the node to re-download the entire state, making the process even more difficult.
This RFC benefits all projects utilizing the Substrate framework, specifically in improving the efficiency of state syncing.
None identified.
The code changes required for this RFC are straightforward: compress the state response on the sender side and decompress it on the receiver side. Existing sync tests should ensure functionality remains intact.
This RFC optimizes network bandwidth usage during state syncing, particularly for blockchains with gigabyte-sized states, while introducing negligible CPU overhead for compression and decompression. For example, compressing the state response during a recent Polkadot warp sync (around height #22076653) reduces the data transferred from 530,310,121 bytes to 352,583,455 bytes, a 33% reduction, saving approximately 169 MiB of data.
Performance data is based on this patch, with logs available here.
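The sender/receiver change can be sketched as follows. This is an illustrative Python sketch using zlib from the standard library as a stand-in codec; the actual patch may use a different algorithm, and the function names here are hypothetical, not the Substrate implementation.

```python
import zlib

def compress_state_response(payload: bytes, level: int = 6) -> bytes:
    # Sender side: compress the encoded state response before putting
    # it on the wire.
    return zlib.compress(payload, level)

def decompress_state_response(data: bytes) -> bytes:
    # Receiver side: decompress before decoding and importing the state.
    return zlib.decompress(data)

# Round-trip sanity check on redundant, trie-like data; real state
# responses compress well because keys share long common prefixes.
payload = b"\x00" * 1000 + b"storage_key:value;" * 200
wire = compress_state_response(payload)
assert decompress_state_response(wire) == payload
assert len(wire) < len(payload)
```

Because existing sync tests exercise the same request/response path, a round-trip property like the one asserted above is the main invariant the change must preserve.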
None.
No compatibility issues identified.
None.
None.
This RFC proposes a new host function, secp256r1_ecdsa_verify_prehashed, for verifying NIST-P256 signatures. The function takes as input the message hash, r and s components of the signature, and the x and y coordinates of the public key. By providing this function, runtime authors can leverage a more efficient verification mechanism for "secp256r1" elliptic curve signatures, reducing computational costs and improving overall performance.
The “secp256r1” elliptic curve is a NIST-standardized curve that uses the same computations as the “secp256k1” curve, just with different input parameters. The cost of combined attacks and the security assumptions are almost the same for both curves. Adding a host function would make signature verification on the “secp256r1” curve available to runtimes, with multi-faceted benefits. One important factor is that this curve is widely used and supported in many modern devices, such as Apple’s Secure Enclave, WebAuthn, and the Android Keystore, which demonstrates user adoption. Additionally, the introduction of this host function could enable valuable features in account abstraction, allowing more efficient and flexible management of accounts via transaction signing on mobile devices. Since most modern devices and applications rely on the “secp256r1” curve, the addition of this host function enables more efficient verification of device-native transaction signing mechanisms.
The changes do not directly affect protocol security; parachains are not forced to use the host function.
N/A
The host function proposed in this RFC allows parachain runtime developers to use a more efficient verification mechanism for "secp256r1" elliptic curve signatures.
Parachain teams will need to include this host function to upgrade.
A follow-up to RFC-0014. This RFC proposes adding a new collective to the Polkadot Collectives Chain, the Unbrick Collective, as well as improvements to the mechanisms that allow teams operating paras that have stopped producing blocks to be assisted in restoring block production on those paras.
Since the initial launch of Polkadot parachains, there have been many incidents causing parachains to stop producing new blocks (thereby becoming bricked) and many occurrences that required Polkadot governance to update the parachain head state/wasm. This can be due to many reasons, ranging […] damage to the parachain and users.
Polkadot Fellowship), due to the nature of their mission, are not fit to carry out these kinds of tasks. In consequence, the idea of an Unbrick Collective that can provide assistance to para teams when their chains brick, and further protection against future halts, is reasonable enough.
The implementation of this RFC will be tested on testnets (Rococo and Westend) first.
An audit will be required to ensure the implementation doesn't introduce unwanted side effects.
There are no privacy related concerns.
This RFC should not introduce any performance impact.
This RFC is fully compatible with existing interfaces.
In an attempt to mitigate risks derived from unwanted behaviours around long decision periods on referenda, this proposal describes how to finalize and decide the result of a poll via a mechanism similar to candle auctions.
The referenda protocol provides permissionless and efficient mechanisms that enable governance actors to decide the future of the blockchains in the Polkadot network. However, it poses a series of risks from a game-theoretic perspective. One of them is where an actor […] on a poll as early as possible. This proposal suggests using a candle-auction-like mechanism: the result is determined right after the confirm period finishes, thus decreasing the chances for actors to alter the result of a poll in the confirming state, and instead incentivizing them to cast their votes earlier, in the deciding state.
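As an illustration of the candle idea, the sketch below retroactively selects one block inside the confirm period from a random seed (e.g. a VRF output revealed only after the period ends) and decides the poll using the tally as of that block. All names and the seed-to-block mapping here are hypothetical, for exposition only, not the normative design.

```python
import hashlib

def candle_block(seed: bytes, confirm_start: int, confirm_end: int) -> int:
    # Map the revealed seed uniformly onto one block of the confirm period.
    span = confirm_end - confirm_start + 1
    offset = int.from_bytes(hashlib.sha256(seed).digest(), "big") % span
    return confirm_start + offset

def decide(tallies: dict, seed: bytes,
           confirm_start: int, confirm_end: int) -> bool:
    # The poll passes iff it was passing at the retroactively chosen block,
    # so votes cast late in the confirm period may land after the candle
    # has already "gone out" -- removing the incentive to vote last-minute.
    ayes, nays = tallies[candle_block(seed, confirm_start, confirm_end)]
    return ayes > nays

# (ayes, nays) snapshotted per block of a 4-block confirm period.
tallies = {100: (60, 40), 101: (60, 40), 102: (60, 40), 103: (45, 55)}
result = decide(tallies, b"vrf-output", 100, 103)
```

Because the seed is revealed only after the confirm period, no voter knows in advance which snapshot will count, which is what pushes rational actors to vote during the deciding state instead.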
An audit will be required to ensure the implementation doesn't introduce unwanted side effects.
There are no privacy related concerns.
The added steps imply a performance pessimization, necessary to implement the expected changes. An implementation MUST exit the Finalization period as early as possible to minimize this impact.
An acceptable upgrade strategy is defining a point in time (block number, poll index) from which to start applying the new mechanism, thus not affecting already ongoing referenda.
polkadot-runtime-common: Defines the mechanism of candle auctions.
This RFC proposes the definition of version 5 extrinsics along with changes to the specification and encoding from version 4.
RFC84 introduced the specification of General transactions, a new type of extrinsic besides the Signed and Unsigned variants available previously in version 4. Additionally, RFC99 introduced versioning of transaction extensions through an extra byte in the extrinsic encoding. Both of these changes require an extrinsic format version bump, as both the semantics around extensions as well as the actual encoding of extrinsics need to change to accommodate these new features.
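To illustrate why a version bump is needed at the encoding level, the sketch below packs a format version and an extrinsic-type discriminant into a single leading byte. The bit layout shown is hypothetical, chosen only to demonstrate the idea; consult the RFC text for the normative version 5 encoding.

```python
def encode_prefix(version: int, extrinsic_type: int) -> int:
    # Hypothetical packing: low 6 bits carry the format version, high
    # 2 bits discriminate the extrinsic kind (e.g. bare / signed /
    # general). Illustrative only, not the normative layout.
    assert 0 <= version < 64 and 0 <= extrinsic_type < 4
    return (extrinsic_type << 6) | version

def decode_prefix(byte: int):
    # Inverse of encode_prefix: split the byte back into its fields.
    return byte & 0b0011_1111, byte >> 6

version, kind = decode_prefix(encode_prefix(5, 2))
assert (version, kind) == (5, 2)
```

Tooling that decodes extrinsics must branch on the version field first, which is why exposing both version 4 and version 5 in metadata during the transition matters.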
There is no impact on testing, security or privacy.
This change makes authorization through signatures configurable by runtime devs in version 5 extrinsics, as opposed to version 4, where the signing payload algorithm and signatures were hardcoded. This moves the responsibility of ensuring proper authentication through […] whether the transaction is signed or unsigned, as there was only one method of authentication.
As long as extrinsic version 4 is still exposed in the metadata when version 5 is introduced, the changes will not break existing infrastructure. This should give tooling enough time to support version 5 before version 4 is removed in the future.
This is a result of the work in Extrinsic Horizon and RFC99.
The current election mechanism for permissionless collators on system chains was introduced in RFC-7. This RFC proposes a mechanism to facilitate replacements in the invulnerable sets of system chains by breaking down barriers that exist today.
Following RFC-7 and the introduction of the collator election mechanism, anyone can now collate on a system chain in the permissionless slots, but the invulnerable set has been a contentious issue among […] making the invulnerable set de facto immutable.
circle. The aim of this RFC is to provide a clear, reasonable, fair, and socially acceptable path for a permissionless collator with a proven track record to become an invulnerable, while preserving the stability of the invulnerable set of a system parachain. All election mechanisms as well as corner cases can be covered with unit tests.
The chain will have to run extrinsics to start and end elections periodically, but the impact in terms of weight and PoV size is negligible.
guaranteed, path towards becoming an invulnerable, at least for a period of time […] invulnerable set interaction with the collator set chosen at the session boundary. The current invulnerable set for each chain can be grandfathered in when upgrading the collator-selection pallet version.
This RFC builds on RFC-7, which introduced the election mechanism for system chain collators.
This RFC proposes a decentralized market mechanism for allocating Coretime on Polkadot, replacing the existing Dutch auction method (RFC17). The proposed model leverages convex preference interactions among agents, eliminating explicit bidding and centralized price determination. This ensures fairness, transparency, and decentralization.
The current auction-based model (RFC17) presents critical issues:
The decentralized convex-preference model addresses these issues by facilitating asynchronous, equitable, and transparent access prior to state coordination, and deterministic verifiability during and after protocol consensus.
The primary set of stakeholders are:
Formal verification of convergence routines and boundedness of the optimization space is RECOMMENDED for high-assurance deployments.
This leads to a more fluid, computation-bound system where efficiency stems from algorithmic design and verification speed, not from externally imposed timing constraints. Compatibility with existing Substrate pallets can be explored through modular implementation.
The system's performance depends on the availability of computational resources, not on arbitrary time windows or rounds. Price discovery and convergence are calculated as fast as the system can process the deterministic interaction rules. Pair-wise interactions can be batched and accumulated asynchronously. This enhances real-time responsiveness while removing artificial scheduling constraints.
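A purely illustrative toy of such a deterministic pair-wise interaction rule is sketched below: each agent holds a scalar preference (a reservation price) and some holdings, each pair agrees on a holdings-weighted average, and batched rounds of these local interactions converge to a single clearing price. The update rule and all names here are assumptions for exposition, not the RFC's specification.

```python
def pairwise_price(p_i, w_i, p_j, w_j):
    # Deterministic local rule (illustrative): a pair agrees on the
    # holdings-weighted average of their scalar preferences.
    return (p_i * w_i + p_j * w_j) / (w_i + w_j)

def converge(prefs, weights, rounds=50):
    # Batch the pair-wise interactions: each agent moves to the mean of
    # the prices it agreed with every counterparty. The update is a
    # positive stochastic linear map, so it contracts to consensus.
    prefs = list(prefs)
    for _ in range(rounds):
        nxt = []
        for i, (p_i, w_i) in enumerate(zip(prefs, weights)):
            agreed = [pairwise_price(p_i, w_i, prefs[j], weights[j])
                      for j in range(len(prefs)) if j != i]
            nxt.append(sum(agreed) / len(agreed))
        prefs = nxt
    return prefs

prices = converge([1.0, 2.0, 4.0], [10.0, 5.0, 1.0])
# All agents end at a common price strictly between the extreme preferences.
assert max(prices) - min(prices) < 1e-6
assert 1.0 < prices[0] < 4.0
```

The point of the toy is only that convergence is a property of the deterministic rule itself, so speed is bounded by computation rather than by externally imposed rounds, matching the claim above.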
Agents only need to express a simple scalar preference and their token/Coretime holdings, removing cognitive complexity. This lightweight interaction model improves usability, especially for smaller participants.
The mechanism is fully compatible with asynchronous execution architectures. Because it relies on deterministic local state transitions, it integrates seamlessly with Byzantine fault-tolerant consensus protocols and supports scalable, decentralized implementations.
Initial Forum Discussion (superseded): Invitation to Critically Evaluate Core Time Pricing Model Framework
RFC Draft Proposal Preliminary Forum Thread: RFC: Decentralized Convex-Preference Coretime Market for Polkadot Draft
Apply similar decentralized convex-preference principles to broader decentralized resource allocation challenges (e.g. JAM, energy/resource coordination, price stabilization).
| Start Date | 25th of August 2025 |
| Description | Multi-Slot AURA for System Parachains |
| Authors | bhargavbh, burdges, AlistairStewart |
This RFC proposes a modification to the AURA round-robin block production mechanism for system parachains (e.g. Polkadot Hub). The proposed change increases the number of consecutive block production slots assigned to each collator from the current single-slot allocation to a configurable value, initially set at four. This modification aims to enhance censorship resistance by mitigating data-withholding attacks.
The Polkadot Relay Chain guarantees the safety of parachain blocks, but it does not provide explicit guarantees for liveness or censorship resistance. With the planned migration of core Relay Chain functionalities—such as Balances, Staking, and Governance—to the Polkadot Hub system parachain in early November 2025, it becomes critical to establish a mechanism for achieving censorship resistance for these parachains without compromising throughput. For example, if governance functionality is migrated to Polkadot Hub, malicious collators could systematically censor aye votes for a Relay Chain runtime upgrade, potentially altering the referendum's outcome. This demonstrates that censorship attacks on a system parachain can have a direct and undesirable impact on the security of the Relay Chain. This proposal addresses such censorship vulnerabilities by modifying the AURA block production mechanism utilized by system parachain collators, with minimal honesty assumptions on the collators.
This analysis of censorship resistance for AURA-based parachains operates under the following assumptions:
Collator Honesty: The model assumes the presence of at least one honest collator. We intentionally chose the most relaxed security assumption, as collators are not slashable (unlike validators). Note that all system parachains use AURA via the Aura-Ext pallet.
Backer Honesty: The backer assigned to a block candidate is assumed to be honest. This is a reasonable assumption given 2/3 honesty on the relay chain and that backers are assigned randomly by ELVES. Additionally, we assume that backers are responsible for disbursing the withheld block to the victim collators. Pre-PVFs can definitely help in improving the resilience of backers against DoS attacks: essentially, the pre-PVF lets backers check slot ownership, so backers can filter out spamming collators at this stage. However, pre-PVFs have not yet been implemented. The stronger assumption on the backer disbursing the block is only needed for efficiency and is not essential for censorship resistance itself (i.e., the collator can always reconstruct from the availability layer).
Availability Layer: We also assume that the availability layer is robust and that a collator can fetch the latest parablock (header and body) directly from the availability layer (or the backer) in a reasonable time, i.e., <6s from a backer and <18s from the availability layer provided by ELVES.
Scope: We focus mainly on honest collators' ability to produce and get their blocks backed, rather than on censorship at the transaction level. Ideally, we want to achieve the property that honest collators eventually get their blocks backed, even if with a slight delay (and provide a provable bound on this delay).
The current AURA mechanism, which assigns a single block production slot per collator, is vulnerable to data-withholding attacks. A malicious collator can strategically produce a block and then selectively withhold it from subsequent collators. This can prevent honest collators from building their blocks in a timely manner, effectively censoring their block production.
Consider three collators A, B, and C assigned to consecutive slots by the AURA mechanism. If A and C conspire to censor collator B, i.e., to not allow B's block to get backed, they can execute the following attack: A produces block $b_A$ and submits it to the backers but selectively withholds $b_A$ from B. Then C builds on top of $b_A$ and gets its block in before B can recover $b_A$ from the availability layer and build on top of it.
This proposal modifies the AURA round-robin mechanism to assign $x$ consecutive slots to each collator. The specific value of $x$ is contingent upon the asynchronous backing parameters of the system parachain and will be derived using a generic formula provided in this document. The collator selected by AURA will be responsible for producing $x$ consecutive blocks. This modification will require corresponding adjustments to the AURA authorship checks within the PVF (Parachain Validation Function). For the current configuration of Polkadot Hub, $x=4$.
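The change to the round-robin can be illustrated with a minimal sketch of the slot-to-author mapping (the real logic lives in the AURA slot claim and its PVF-side check; names here are illustrative):

```python
def slot_author(slot: int, num_collators: int, x: int = 4) -> int:
    # With x consecutive slots per collator, slots are handed out in
    # chunks of x instead of one by one.
    return (slot // x) % num_collators

# Single-slot AURA (x = 1): the author rotates every slot.
assert [slot_author(s, 3, x=1) for s in range(6)] == [0, 1, 2, 0, 1, 2]
# Multi-slot AURA (x = 4): each collator keeps authorship for 4 slots.
assert [slot_author(s, 3, x=4) for s in range(12)] == [0]*4 + [1]*4 + [2]*4
```

The PVF-side authorship check would apply the same chunked mapping when validating who may claim a given slot.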
The number of consecutive slots to be assigned to ensure AURA's censorship resistance depends on async backing parameters like unincluded_segment_length. We now describe our approach for deriving $x$ based on the parameters of async backing and other variables like block production time and latency in the availability layer. The relevant values can then be plugged in to obtain $x$ for any system parachain.
Clearly, the number of consecutive slots ($x$) is lower bounded by the time required to reconstruct the previous block from the availability layer ($b$) in addition to the block building time ($a$), both measured in slots. Hence, we need to set $x$ such that $x\geq a+b$. But with async backing, a malicious collator can sequentially withhold its block and just-in-time front-run the honest collator for all blocks of the unincluded segment. Hence, $x\geq (a+b)\cdot m$ is sufficient, where $m$ is the maximum allowed candidate depth (the allowed unincluded segment length).
Independently, there is a check on the relay chain, in verify_backed_candidates, which filters out parablocks anchoring to very old relay parents. Any parablock anchored to a relay parent older than the oldest element in allowed_relay_parents gets rejected. Hence, the malicious collator cannot front-run and censor the subsequent collator after this delay, as the parablock is no longer valid. The update of allowed_relay_parents occurs in process_inherent_data, where the buffer length of AllowedRelayParents is set by the scheduler parameter lookahead (set to 3 by default). Therefore, the async backing delay (asyncdelay) tolerated by the relay chain backers is $3 \cdot 6s = 18s$. Hence, the number of consecutive slots is bounded by the minimum of the above two values:
$$x \geq \min\left((a+b)\cdot m,\; a + b + \mathit{asyncdelay}\right)$$
where $m$ is the max_candidate_depth (the unincluded segment length as seen from the collator's perspective).
Assuming the previous block data can be fetched from backers, we comfortably have $a+b \leq 6s$, i.e., block building plus reconstruction time is under 6s. Using the current asyncdelay of 18s, it suffices to set $x$ to 4. If the max_candidate_depth ($m$) for Polkadot Hub is set to $m\leq3$, this reduces (improves) $x$ from 4 to $m$. Note that a channel would have to be provided for collators to fetch blocks from backers as the preferred option, and only recover from the availability layer as the fail-safe option.
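The derivation above can be written down directly. The values below ($a+b \leq 6s$, asyncdelay of 18s, 6-second slots) come from the text; the function name is illustrative.

```python
import math

def consecutive_slots(a_plus_b: float, async_delay: float,
                      max_candidate_depth: int,
                      slot_secs: float = 6.0) -> int:
    # x must cover min((a+b)*m, a+b + asyncdelay) worth of time,
    # expressed here in seconds and converted back to whole slots.
    bound_secs = min(a_plus_b * max_candidate_depth,
                     a_plus_b + async_delay)
    return math.ceil(bound_secs / slot_secs)

# Current Polkadot Hub configuration: a+b <= 6s, asyncdelay = 18s, and a
# large unincluded segment, so the relay-chain bound dominates and x = 4.
assert consecutive_slots(6, 18, max_candidate_depth=10) == 4
# With max_candidate_depth m <= 3, the first bound dominates and x = m.
assert consecutive_slots(6, 18, max_candidate_depth=3) == 3
```

Plugging in another system parachain's async backing parameters into the same function yields its required $x$.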
The proposed changes are security critical and mitigate censorship attacks on core functionality like balances, staking, and governance on Polkadot Hub. This approach is compatible with slot-based collation and the currently deployed FixedVelocityConsensusHook. Further analysis is needed to integrate with custom ConsensusHooks that leverage Elastic Scaling.
Multi-slot collation, however, is vulnerable to liveness attacks: adversarial collators can fail to show up in order to stall liveness, but they then also lose out on block production rewards. The number of missed blocks due to collators skipping is the same as in the current implementation; only the distribution of missed slots changes (they are chunked together instead of being evenly distributed). Secondly, when the ratio of adversarial (censoring) collators $\alpha$ is high (close to 1), the ratio of uncensored blocks to all blocks produced drops to $(1-\alpha)/(x\alpha)$. For more practical lower values of $\alpha<1/4$, the ratio of uncensored to all blocks is almost 1.
The latency for backing of blocks is affected as follows:
Effective multi-slot collation requires that collators be able to prioritize transactions that have been targeted for censorship. The implementation should incorporate a framework for priority transactions (e.g., governance votes, election extrinsics) to ensure that such transactions are included in the uncensored blocks.
This RFC is related to RFC-7, which details the selection mechanism for system parachain collators. In general, a more robust collator selection mechanism that reduces the proportion of malicious actors would directly benefit the effectiveness of the ideas presented in this RFC.
A resilient mechanism is needed for prioritising transactions in block production for collators that are actively targeted for censorship. There are two potential approaches: