This approach is compatible with the Slot-Based collation and the currently depl[…]
  • One approach is to categorise which transactions or extrinsics are more likely to be censored and should be considered priority. This would allow an honest collator to maximise the utility of its consecutive block-production slots by prioritising them when building the uncensored block. While this is dependent on the specific parachain's functionality, a generic framework for tagging relevant transaction types would be beneficial for runtime engineers. However, if there exist transactions which are cheap and high-priority (e.g. a governance vote), this approach is not ideal, as it lets an adversary spam the collators with cheap high-priority transactions.
  • Alternatively, one could design a robust tipping mechanism where transactions actively being censored would have to pay a higher tip to get themselves included. Even if the adversary initiates a bidding war, since 100% of the tip is forwarded to the collator, it only increases the collator's revenue, further incentivising it to remain honest. A careful analysis of such an incentive mechanism is required; however, it is beyond the scope of this RFC.

    RFC-0155: pUSD (Polkadot USD over-collateralised debt token)

    Start Date: 2025-09-11
    Description: Polkadot native stablecoin on Asset Hub
    Authors: Bryan Chen

    Summary

    pUSD (Polkadot USD over-collateralised debt token) is a new stablecoin deployed on Asset Hub, over-collateralised and backed purely by DOT. The implementation follows the Honzon protocol pioneered by Acala. In addition, this RFC introduces an opt-in pUSD Savings module that lets holders lock pUSD to earn interest funded from stability fees.


    Motivation


    "Polkadot Hub should have a native DOT backed stable coin because people need it and otherwise we will haemorrhage benefits, liquidity and/or security." - Gav

    Primary use cases of pUSD:

    Stakeholders

    Explanation

    pUSD is implemented using the Honzon protocol stack used to power aUSD, adapted for DOT-only collateral on Asset Hub.

    Protocol Overview

    The Honzon protocol functions as a lending system where users can:

    1. Deposit collateral: Lock DOT as collateral in Collateralized Debt Positions (CDPs).
    2. Mint pUSD: Generate pUSD stablecoins against collateral value.
    3. Accrue interest: Pay interest over time via the debit exchange rate (stability fee).
    4. Maintain health: Keep CDPs above the liquidation ratio to avoid liquidation.
    5. Liquidation: Underwater CDPs are liquidated via DEX and/or auctions to keep the system solvent.
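The vault arithmetic implied by these steps can be sketched as follows. This is a hypothetical, simplified model (floating-point numbers and illustrative names such as `Cdp`, `owed_pusd`, `max_mintable`); the actual Honzon pallets use fixed-point arithmetic and different types.

```rust
// Hypothetical, simplified model of a Honzon-style CDP. The real pallets
// use fixed-point arithmetic and different type names.
struct Cdp {
    collateral_dot: f64, // locked DOT
    debit: f64,          // debt in "debit units"
}

impl Cdp {
    /// Current pUSD owed: debit units scaled by the debit exchange rate,
    /// which drifts upward over time to charge the stability fee.
    fn owed_pusd(&self, debit_exchange_rate: f64) -> f64 {
        self.debit * debit_exchange_rate
    }

    /// Collateral ratio = collateral value / debt value.
    fn collateral_ratio(&self, dot_price_usd: f64, debit_exchange_rate: f64) -> f64 {
        (self.collateral_dot * dot_price_usd) / self.owed_pusd(debit_exchange_rate)
    }

    /// A vault is safe while its ratio stays above the liquidation ratio.
    fn is_safe(&self, dot_price_usd: f64, debit_exchange_rate: f64, liquidation_ratio: f64) -> bool {
        self.collateral_ratio(dot_price_usd, debit_exchange_rate) > liquidation_ratio
    }

    /// Additional pUSD mintable while staying at the required collateral ratio.
    fn max_mintable(&self, dot_price_usd: f64, debit_exchange_rate: f64, required_ratio: f64) -> f64 {
        let max_debt = self.collateral_dot * dot_price_usd / required_ratio;
        (max_debt - self.owed_pusd(debit_exchange_rate)).max(0.0)
    }
}

fn main() {
    // 100 DOT at $5 = $500 collateral; 200 debit units at rate 1.1 = 220 pUSD owed.
    let vault = Cdp { collateral_dot: 100.0, debit: 200.0 };
    assert!(vault.is_safe(5.0, 1.1, 1.6)); // ratio ~2.27 > 1.6
    // At a required ratio of 2.0, debt may grow to $250, so ~30 more pUSD is mintable.
    assert!((vault.max_mintable(5.0, 1.1, 2.0) - 30.0).abs() < 1e-9);
    // If DOT drops to $3, the ratio falls to ~1.36 and the vault becomes unsafe.
    assert!(!vault.is_safe(3.0, 1.1, 1.6));
}
```

The same `owed_pusd` calculation governs redemption: the owner repays debit units at the prevailing debit exchange rate, which is how accrued stability fees are collected.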

    Oracle Infrastructure

    The pUSD system relies on robust oracle infrastructure to maintain accurate price feeds for DOT and ensure proper collateral valuations.

    Oracle source

    Price aggregation mechanism

    Issuance

    DOT holders can open a vault (CDP) to lock their DOT and borrow up to a protocol-defined percentage of its value as pUSD, subject to a required collateral ratio and debt ceilings.

    Redemption

    At any time, the vault owner can repay pUSD (principal plus accrued interest via the debit exchange rate) to unlock DOT, fully or partially.

    Liquidation

    When a vault's collateral ratio falls below the liquidation ratio, it becomes unsafe and is liquidated. The system employs a tiered liquidation approach to maximize efficiency and minimize market impact.

    Primary liquidation: DEX-first approach

    1. Instant settlement: Liquidation is executed immediately through available DEX liquidity on Asset Hub.
    2. Market-rate execution: DOT is sold at current market rates with minimal slippage, reducing the liquidation penalty for vault owners.
    3. Automated execution: Off-chain workers continuously monitor vault health and trigger DEX liquidations automatically when ratios fall below thresholds.
    4. Slippage protection: Maximum slippage limits prevent excessive losses during low-liquidity periods.

    Fallback liquidation: auction mechanism


    Liquidation process flow

    1. Health check: Off-chain workers monitor collateral ratios continuously.
    2. Trigger: When a ratio falls below the liquidation threshold, liquidation is queued.
    3. DEX attempt: The system attempts to sell the required collateral through the Asset Hub DEX.
    4. Auction fallback: If DEX liquidation fails or is insufficient, collateral enters the auction system.
    5. Settlement: Proceeds repay CDP debt plus penalties; excess collateral is returned to the owner.
    6. Bad debt handling: Any shortfalls become bad debt managed by CDP treasury mechanisms.
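The DEX-first flow with auction fallback might look like the following sketch. All names (`liquidate`, `LiquidationOutcome`) and the exact slippage rule are illustrative assumptions, not the actual pallet API.

```rust
// Illustrative sketch of DEX-first liquidation with auction fallback.
// Names and the slippage rule are assumptions, not the actual pallet API.
enum LiquidationOutcome {
    Dex { pusd_raised: f64 },
    Auction { collateral_dot: f64 },
}

/// Try to cover `debt_pusd` by selling collateral on the DEX; fall back to
/// an auction when the DEX quote breaches the slippage limit or cannot
/// cover the debt.
fn liquidate(
    collateral_dot: f64,
    debt_pusd: f64,
    oracle_price: f64, // pUSD per DOT according to the oracle
    dex_quote: f64,    // effective pUSD per DOT the DEX would pay
    max_slippage: f64, // e.g. 0.05 = 5%
) -> LiquidationOutcome {
    let slippage = (oracle_price - dex_quote) / oracle_price;
    if slippage <= max_slippage && collateral_dot * dex_quote >= debt_pusd {
        LiquidationOutcome::Dex { pusd_raised: collateral_dot * dex_quote }
    } else {
        LiquidationOutcome::Auction { collateral_dot }
    }
}

fn main() {
    // Oracle $5.00, DEX pays $4.90: 2% slippage and the debt is covered => DEX path.
    assert!(matches!(liquidate(100.0, 220.0, 5.0, 4.9, 0.05),
                     LiquidationOutcome::Dex { .. }));
    // DEX pays only $4.00: 20% slippage => auction fallback.
    assert!(matches!(liquidate(100.0, 220.0, 5.0, 4.0, 0.05),
                     LiquidationOutcome::Auction { .. }));
}
```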


    Incentives: pUSD Savings (opt-in)


    The protocol includes a Savings module that allows pUSD holders to lock tokens and earn interest paid from stability fees.
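One common way to implement such a module is index-based share accounting; this sketch is an assumption about the mechanics (consistent with the "savings index" referenced under Emergency Shutdown), not the specified design.

```rust
// Index-based savings accounting (an assumption about the mechanics,
// consistent with the "savings index" mentioned under Emergency Shutdown).
struct Savings {
    index: f64,        // starts at 1.0, grows as fees are distributed
    total_shares: f64, // sum of all depositors' shares
}

impl Savings {
    /// Deposits are stored as shares = pUSD / current index, so interest
    /// accrues to everyone without touching individual accounts.
    fn deposit(&mut self, pusd: f64) -> f64 {
        let shares = pusd / self.index;
        self.total_shares += shares;
        shares
    }

    /// Distribute stability-fee revenue pro rata by growing the index.
    fn distribute(&mut self, fees_pusd: f64) {
        self.index += fees_pusd / self.total_shares;
    }

    /// pUSD redeemable for a given number of shares at the current index.
    fn redeemable(&self, shares: f64) -> f64 {
        shares * self.index
    }
}

fn main() {
    let mut savings = Savings { index: 1.0, total_shares: 0.0 };
    let shares = savings.deposit(100.0); // 100 shares at index 1.0
    savings.distribute(10.0);            // index grows to 1.1
    assert!((savings.redeemable(shares) - 110.0).abs() < 1e-9);
}
```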


    Design goals

    Mechanics

    Parameter examples (TBD by governance)

    Governance

    A Financial Fellowship (within the broader Polkadot on-chain governance framework) will govern risk parameters and Treasury actions to ensure economic safety. The Fellowship can also perform emergency actions, such as freezing the oracle price feed if manipulation is detected.

    Governance-managed parameters include (non-exhaustive):

    Emergency Shutdown

    As a last resort, an emergency shutdown can be performed by the Fellowship to halt minting/liquidation and allow equitable settlement: lock oracle prices, cancel auctions, and let users settle pUSD against collateral at the locked rates. Savings deposits remain redeemable 1:1 for pUSD at the last savings index; interest accrual stops at shutdown.

    Drawbacks

    Testing, Security, and Privacy

    Testing requirements

    Security considerations

    Performance, Ergonomics, and Compatibility

    Performance

    This proposal introduces additional computational overhead on Asset Hub for CDP management, liquidation monitoring, and Savings accounting. The impact is minimized through:

    Ergonomics

    The proposal optimizes for several key usage patterns:

    Compatibility

    Prior Art and References

    The implementation follows the Honzon protocol pioneered by Acala for their aUSD stablecoin system. Key references include:

    Unresolved Questions

    Smart-Contract Liquidation Participation

    Future versions of the system will allow smart contracts to register as liquidation participants, enabling:

    Treasury Payment Transition

    In a later phase, staking rewards may be paid in pUSD instead of DOT inflation, requiring:


    Prior Art and References

    Robert Habermeier initially wrote on the subject of a blockspace-centric Polkadot in the article Polkadot: Blockspace over Blockchains. While not going into details, the article served as an early reframing piece for moving beyond one-slot-per-chain models and building out secondary market infrastructure for resource allocation.

    Authors: Gavin Wood, Robert Habermeier

    Summary

    In the Agile Coretime model of the Polkadot Ubiquitous Computer, as proposed in RFC-1 and RFC-3, it is necessary for the allocating parachain (envisioned to be one or more pallets on a specialised Brokerage System Chain) to communicate the core assignments to the Relay-chain, which is responsible for ensuring those assignments are properly enacted.

    This is a proposal for the interface which will exist around the Relay-chain in order to communicate this information and instructions.


    Motivation

    The background motivation for this interface is splitting out coretime allocation functions and secondary markets from the Relay-chain onto System parachains. A well-understood and general interface is necessary for ensuring the Relay-chain receives coretime allocation instructions from one or more System chains without introducing dependencies on the implementation details of either side.

    Requirements


    Stakeholders

    Primary stakeholder sets are:

    Socialization:

    The content of this RFC was discussed in the Polkadot Fellows channel.


    Explanation

    The interface has two sections: the messages which the Relay-chain is able to receive from the allocating parachain (the UMP message types), and the messages which the Relay-chain is able to send to the allocating parachain (the DMP message types). These messages are expected to be implemented in a well-known pallet and called with the XCM Transact instruction.

    Future work may include these messages being introduced into the XCM standard.

    UMP Message Types


    Realistic Limits of the Usage

    For request_revenue_info, a successful request should be possible if when is no less than the Relay-chain block number on arrival of the message less 100,000.

    For assign_core, a successful request should be possible if begin is no less than the Relay-chain block number on arrival of the message plus 10 and workload contains no more than 100 items.
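These limits can be expressed as simple validation predicates. The constants and function names below are illustrative, not the actual Relay-chain runtime code:

```rust
// Illustrative validation of the stated limits; constants and names are
// assumptions, not the actual Relay-chain runtime code.
const REVENUE_HISTORY_BLOCKS: u32 = 100_000;
const MIN_ASSIGN_LEAD: u32 = 10;
const MAX_WORKLOAD_ITEMS: usize = 100;

/// `request_revenue_info { when }` must not reach further back than
/// 100,000 blocks before the block in which the message arrives.
fn revenue_request_ok(when: u32, arrival_block: u32) -> bool {
    when >= arrival_block.saturating_sub(REVENUE_HISTORY_BLOCKS)
}

/// `assign_core { begin, workload }` must begin at least 10 blocks after
/// arrival and carry at most 100 workload items.
fn assign_core_ok(begin: u32, arrival_block: u32, workload_len: usize) -> bool {
    begin >= arrival_block + MIN_ASSIGN_LEAD && workload_len <= MAX_WORKLOAD_ITEMS
}

fn main() {
    assert!(revenue_request_ok(900_000, 1_000_000));
    assert!(!revenue_request_ok(899_999, 1_000_000));
    assert!(assign_core_ok(1_000_010, 1_000_000, 100));
    assert!(!assign_core_ok(1_000_009, 1_000_000, 1));   // begins too soon
    assert!(!assign_core_ok(1_000_010, 1_000_000, 101)); // too many items
}
```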


    Performance, Ergonomics and Compatibility

    No specific considerations.


    Testing, Security and Privacy

    Standard Polkadot testing and security auditing applies.

    The proposal introduces no new privacy concerns.


    RFC-1 proposes a means of determining allocation of Coretime using this interface.

    RFC-3 proposes a means of implementing the high-level allocations within the Relay-chain.

    Drawbacks, Alternatives and Unknowns

    None at present.


    Prior Art and References

    None.

    Authors: Joe Petrowski

    Summary

    As core functionality moves from the Relay Chain into system chains, so increases the reliance on the liveness of these chains for the use of the network. It is not economically scalable, nor necessary from a game-theoretic perspective, to pay collators large rewards. This RFC proposes a mechanism -- part technical and part social -- for ensuring reliable collator sets that are resilient to attempts to stop any subsystem of the Polkadot protocol.


    Motivation

    In order to guarantee access to Polkadot's system, the collators on its system chains must propose blocks (provide liveness) and allow all transactions to eventually be included. That is, some collators may censor transactions, but there must exist one collator in the set who will include a […]

    […] to censor any subset of transactions.

  • Collators selected by governance SHOULD have a reasonable expectation that the Treasury will reimburse their operating costs.

    Stakeholders


    Explanation

    This protocol builds on the existing Collator Selection pallet and its notion of Invulnerables. Invulnerables are collators (identified by their AccountIds) who […]

    […] approximately:

  • of which 15 are Invulnerable, and
  • five are elected by bond.

    Drawbacks

    The primary drawback is a reliance on governance for continued treasury funding of infrastructure costs for Invulnerable collators.


    Testing, Security, and Privacy

    The vast majority of cases can be covered by unit testing. Integration tests should ensure that the Collator Selection UpdateOrigin, which has permission to modify the Invulnerables and the desired number of Candidates, can handle updates over XCM from the system's governance location.


    Performance, Ergonomics, and Compatibility

    This proposal has very little impact on most users of Polkadot, and should improve the performance of system chains by reducing the number of missed blocks.


    Performance

    As chains have strict PoV size limits, care must be taken in the PoV impact of the session manager. Appropriate benchmarking and tests should ensure that conservative limits are placed on the number of Invulnerables and Candidates.


    Ergonomics

    The primary group affected is Candidate collators, who, after implementation of this RFC, will need to compete in a bond-based election rather than a race to claim a Candidate spot.


    Compatibility

    This RFC is compatible with the existing implementation and can be handled via upgrades and migration.


    Prior Art and References

    Written Discussions


    Unresolved Questions

    None at this time.


    There may exist in the future system chains for which this model of collator selection is not appropriate. These chains should be evaluated on a case-by-case basis.

    Authors: Pierre Krieger

    Summary

    The full nodes of the Polkadot peer-to-peer network maintain a distributed hash table (DHT), which is currently used for full nodes discovery and validators discovery purposes.

    This RFC proposes to extend this DHT to be used to discover full nodes of the parachains of Polkadot.


    Motivation

    The maintenance of bootnodes has long been an annoyance for everyone.

    When a bootnode is newly deployed or removed, every chain specification must be updated in order to take the update into account. This has led to various non-optimal solutions, such as pulling chain specifications from GitHub repositories. When it comes to RPC nodes, UX developers often have trouble finding up-to-date addresses of parachain RPC nodes. With the ongoing migration from RPC nodes to light clients, similar problems would happen with chain specifications as well.


    Because the list of bootnodes in chain specifications is so annoying to modify, the consequence is that the number of bootnodes is rather low (typically between 2 and 15). In order to better resist downtime and DoS attacks, a better solution would be to use every node of a certain chain as a potential bootnode, rather than special-casing some specific nodes.

    While this RFC doesn't solve these problems for relay chains, it aims to solve them for parachains by storing the list of all the full nodes of a parachain on the relay chain DHT.

    Assuming that this RFC is implemented, and that light clients are used, deploying a parachain wouldn't require more work than registering it onto the relay chain and starting the collators. There wouldn't be any need for special infrastructure nodes anymore.


    Stakeholders

    This RFC has been opened on my own initiative because I think that this is a good technical solution to a usability problem that many people are encountering and that they don't realize can be solved.


    Explanation

    The content of this RFC applies only to parachains and parachain nodes that are "Substrate-compatible". It is in no way mandatory for parachains to comply with this RFC.

    Note that "Substrate-compatible" is very loosely defined as "implements the same mechanisms and networking protocols as Substrate". The author of this RFC believes that "Substrate-compatible" should be very precisely specified, but there is controversy on this topic.

    While a lot of this RFC concerns the implementation of parachain nodes, it makes use of the resources of the Polkadot chain, and as such it is important to describe them in the Polkadot specification.

    message Response { … }

    The maximum size of a response is set to an arbitrary 16kiB. The responding side should make sure to conform to this limit. Given that fork_id is typically very small and that the only variable-length field is addrs, this is easily achieved by limiting the number of addresses.

    Implementers should be aware that addrs might be very large, and are encouraged to limit the number of addrs to an implementation-defined value.
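A responder can enforce the 16 kiB limit by trimming the address list until the encoded response fits. This sketch uses a stand-in for the real protobuf size calculation; the names and the per-address overhead are assumptions:

```rust
// Sketch of the "limit the number of addresses" advice: drop trailing
// addresses until the encoded response fits in 16 kiB. `encoded_len` is a
// stand-in for the real protobuf size calculation; the 2-byte per-address
// overhead is an assumption.
const MAX_RESPONSE_SIZE: usize = 16 * 1024;

fn encoded_len(peer_id: &[u8], addrs: &[Vec<u8>]) -> usize {
    peer_id.len() + addrs.iter().map(|a| a.len() + 2).sum::<usize>()
}

fn truncate_addrs(peer_id: &[u8], mut addrs: Vec<Vec<u8>>) -> Vec<Vec<u8>> {
    // Keep the most-preferred addresses first; drop from the tail.
    while !addrs.is_empty() && encoded_len(peer_id, &addrs) > MAX_RESPONSE_SIZE {
        addrs.pop();
    }
    addrs
}

fn main() {
    let peer_id = vec![0u8; 32];
    let addrs: Vec<Vec<u8>> = (0..500).map(|_| vec![0u8; 40]).collect();
    let kept = truncate_addrs(&peer_id, addrs);
    assert!(encoded_len(&peer_id, &kept) <= MAX_RESPONSE_SIZE);
    assert!(kept.len() < 500); // some addresses were dropped
}
```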


    Drawbacks

    The peer_id and addrs fields are in theory not strictly needed, as the PeerId and addresses could always be equal to the PeerId and addresses of the node being registered as the provider and serving the response. However, the Cumulus implementation currently uses two different networking stacks, one for the parachain and one for the relay chain, with two separate PeerIds and addresses, and as such the PeerId and addresses of the other networking stack must be indicated. Asking them to use only one networking stack wouldn't be feasible in a realistic time frame.

    The values of the genesis_hash and fork_id fields cannot be verified by the requester and are expected to be unused at the moment. Instead, a client that desires connecting to a parachain is expected to obtain the genesis hash and fork ID of the parachain from the parachain chain specification. These fields are included in the networking protocol nonetheless in case an acceptable solution is found in the future, and in order to allow use cases such as discovering parachains in a not-strictly-trusted way.


    Testing, Security, and Privacy

    Because not all nodes want to be used as bootnodes, implementers are encouraged to provide a way to disable this mechanism. However, it is very much encouraged to leave this mechanism on by default for all parachain nodes.

    This mechanism doesn't add or remove any security by itself, as it relies on existing mechanisms. However, if the principle of chain specification bootnodes is entirely replaced with the mechanism described in this RFC (which is the objective), then it becomes important whether the mechanism in this RFC can be abused in order to make a parachain unreachable.

    Furthermore, when a large number of providers (here, a provider is a bootnode) a[…]

    For this reason, an attacker can abuse this mechanism by randomly generating libp2p PeerIds until they find the 20 entries closest to the key representing the target parachain. They are then in control of the parachain bootnodes. Because the key changes periodically and isn't predictable, and assuming that the Polkadot DHT is sufficiently large, it is not realistic for an attack like this to be maintained in the long term.

    Furthermore, parachain clients are expected to cache a list of known good nodes on their disk. If the mechanism described in this RFC went down, it would only prevent new nodes from accessing the parachain, while clients that have connected before would not be affected.

    Performance, Ergonomics, and Compatibility

    Performance

    The DHT mechanism generally has a low overhead, especially given that publishing providers is done only every 24 hours.

    Doing a Kademlia iterative query then sending a provider record shouldn't take more than around 50 kiB in total of bandwidth for the parachain bootnode.

    Assuming 1000 parachain full nodes, the 20 Polkadot full nodes corresponding to a specific parachain will each receive a sudden spike of a few megabytes of networking traffic when the key rotates. Again, this is relatively negligible. If this becomes a problem, one can add a random delay before a parachain full node registers itself to be the provider of the key corresponding to BabeApi_next_epoch.

Maybe the biggest uncertainty is the traffic that the 20 Polkadot full nodes will receive from light clients that want to know the bootnodes of a parachain. Light clients are generally encouraged to cache the peers that they use between restarts, so they should only query these 20 Polkadot full nodes at their first initialization. If this ever becomes a problem, this value of 20 is an arbitrary constant that can be increased for more redundancy.

Ergonomics

    Irrelevant.

Compatibility

    Irrelevant.

Prior Art and References

    None.

Unresolved Questions

While it fundamentally doesn't change much in this RFC, using BabeApi_current_epoch and BabeApi_next_epoch might be inappropriate. I'm not familiar enough with good practices within the runtime to have an opinion here. Should it be an entirely new pallet?


    It is possible that in the future a client could connect to a parachain without having to rely on a trusted parachain specification.

    (source)

    Table of Contents

Authors: Pierre Krieger

Summary

    Improve the networking messages that query storage items from the remote, in order to reduce the bandwidth usage and number of round trips of light clients.

Motivation

    Clients on the Polkadot peer-to-peer network can be divided into two categories: full nodes and light clients. So-called full nodes are nodes that store the content of the chain locally on their disk, while light clients are nodes that don't. In order to access for example the balance of an account, a full node can do a disk read, while a light client needs to send a network message to a full node and wait for the full node to reply with the desired value. This reply is in the form of a Merkle proof, which makes it possible for the light client to verify the exactness of the value.
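The verification step above can be sketched with a toy Merkle-proof check (the standard library hasher and a binary hash chain stand in for the real blake2-based base-16 Patricia trie):

```rust
// Toy Merkle-proof verification: the light client receives a value plus the
// sibling hashes along the path, recomputes the root, and compares it with
// the trusted state root from a block header. DefaultHasher is a stand-in
// for the real trie hashing.
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

fn node_hash(parts: &[u64]) -> u64 {
    let mut s = DefaultHasher::new();
    parts.hash(&mut s);
    s.finish()
}

fn verify(value: u64, siblings: &[u64], trusted_root: u64) -> bool {
    let computed = siblings
        .iter()
        .fold(node_hash(&[value]), |acc, s| node_hash(&[acc, *s]));
    computed == trusted_root
}

fn main() {
    // The full node would normally compute the proof; here we build it directly.
    let balance = 1_000u64;
    let siblings = [11, 22, 33];
    let root = siblings
        .iter()
        .fold(node_hash(&[balance]), |acc, s| node_hash(&[acc, *s]));
    assert!(verify(balance, &siblings, root));
    assert!(!verify(balance + 1, &siblings, root)); // a tampered value fails
}
```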

    Unfortunately, this network protocol is suffering from some issues:

Once Polkadot and Kusama have transitioned to state_version = 1, which modifies the format of the trie entries, it will be possible to generate Merkle proofs that contain only the hashes of values in the storage. Thanks to this, it will be possible to prove the existence of a key without sending its entire value (only its hash), or to prove that a value has changed or not between two blocks (by sending just their hashes). Thus, the only reason why the aforementioned issues exist is that the existing networking messages don't give the querier the possibility to request this. This is what this proposal aims to fix.

Stakeholders

    This is the continuation of https://github.com/w3f/PPPs/pull/10, which itself is the continuation of https://github.com/w3f/PPPs/pull/5.

Explanation

    The protobuf schema of the networking protocol can be found here: https://github.com/paritytech/substrate/blob/5b6519a7ff4a2d3cc424d78bc4830688f3b184c0/client/network/light/src/schema/light.v1.proto

    The proposal is to modify this protocol in this way:

    @@ -11,6 +11,7 @@ message Request {
An alternative could have been to specify the child_trie_info for e[…]
     Also note that child tries aren't considered as descendants of the main trie when it comes to the includeDescendants flag. In other words, if the request concerns the main trie, no content coming from child tries is ever sent back.

    This protocol keeps the same maximum response size limit as currently exists (16 MiB). It is not possible for the querier to know in advance whether its query will lead to a reply that exceeds the maximum size. If the reply is too large, the replier should send back only a limited number (but at least one) of requested items in the proof. The querier should then send additional requests for the rest of the items. A response containing none of the requested items is invalid.

    The server is allowed to silently discard some keys of the request if it judges that the number of requested keys is too high. This is in line with the fact that the server might truncate the response.
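The querier-side retry logic implied above can be sketched as follows (an in-memory map stands in for the server, raw key/value pairs stand in for Merkle proofs, and all requested keys are assumed to exist):

```rust
use std::collections::{BTreeMap, BTreeSet};

// Sketch of the retry loop: the server may truncate a reply to respect the
// response size limit but must return at least one requested item; the
// querier keeps re-requesting whatever is still missing.
fn serve(db: &BTreeMap<String, String>, keys: &[String], max_items: usize) -> Vec<(String, String)> {
    keys.iter()
        .filter_map(|k| db.get(k).map(|v| (k.clone(), v.clone())))
        .take(max_items.max(1)) // a reply containing none of the items is invalid
        .collect()
}

fn query_all(db: &BTreeMap<String, String>, keys: &[String], max_items: usize) -> BTreeMap<String, String> {
    let mut pending: BTreeSet<String> = keys.iter().cloned().collect();
    let mut result = BTreeMap::new();
    let mut round_trips = 0;
    while !pending.is_empty() {
        let want: Vec<String> = pending.iter().cloned().collect();
        for (k, v) in serve(db, &want, max_items) {
            pending.remove(&k);
            result.insert(k, v);
        }
        round_trips += 1;
        assert!(round_trips <= keys.len(), "server must make progress");
    }
    result
}

fn main() {
    let db: BTreeMap<String, String> = (0..5)
        .map(|i| (format!("key{i}"), format!("value{i}")))
        .collect();
    let keys: Vec<String> = db.keys().cloned().collect();
    let got = query_all(&db, &keys, 2); // server truncates to 2 items per reply
    assert_eq!(got.len(), 5); // everything retrieved over several round trips
}
```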

Drawbacks

    This proposal doesn't handle one specific situation: what if a proof containing a single specific item would exceed the response size limit? For example, if the response size limit was 1 MiB, querying the runtime code (which is typically 1.0 to 1.5 MiB) would be impossible as it's impossible to generate a proof less than 1 MiB. The response size limit is currently 16 MiB, meaning that no single storage item must exceed 16 MiB.

    Unfortunately, because it's impossible to verify a Merkle proof before having received it entirely, parsing the proof in a streaming way is also not possible.

    A way to solve this issue would be to Merkle-ize large storage items, so that a proof could include only a portion of a large storage item. Since this would require a change to the trie format, it is not realistically feasible in a short time frame.

Testing, Security, and Privacy

    The main security consideration concerns the size of replies and the resources necessary to generate them. It is for example easily possible to ask for all keys and values of the chain, which would take a very long time to generate. Since responses to this networking protocol have a maximum size, the replier should truncate proofs that would lead to the response being too large. Note that it is already possible to send a query that would lead to a very large reply with the existing network protocol. The only thing that this proposal changes is that it would make it less complicated to perform such an attack.

Implementers of the replier side should be careful to detect early on when a reply would exceed the maximum reply size, rather than unconditionally generating a reply, as this could take a very large amount of CPU, disk I/O, and memory. Existing implementations might currently be accidentally protected from such an attack thanks to the fact that requests have a maximum size, and thus that the list of keys in the query was bounded. After this proposal, this accidental protection would no longer exist.

    Malicious server nodes might truncate Merkle proofs even when they don't strictly need to, and it is not possible for the client to (easily) detect this situation. However, malicious server nodes can already do undesirable things such as throttle down their upload bandwidth or simply not respond. There is no need to handle unnecessarily truncated Merkle proofs any differently than a server simply not answering the request.

Performance, Ergonomics, and Compatibility

Performance

    It is unclear to the author of the RFC what the performance implications are. Servers are supposed to have limits to the amount of resources they use to respond to requests, and as such the worst that can happen is that light client requests become a bit slower than they currently are.

Ergonomics

    Irrelevant.

Compatibility

The prior networking protocol is maintained for now. The older version of this protocol may eventually be removed.

Prior Art and References

    None. This RFC is a clean-up of an existing mechanism.

Unresolved Questions

    None


The current networking protocol could eventually be deprecated. Additionally, the current "state requests" protocol (used for warp syncing) could also be deprecated in favor of this one.

    (source)

    Table of Contents

Authors: Jonas Gehrlein

Summary

The Polkadot UC will generate revenue from the sale of available Coretime. The question then arises: how should we handle these revenues? Broadly, there are two reasonable paths: burning the revenue and thereby removing it from total issuance, or diverting it to the Treasury. This Request for Comment (RFC) presents arguments favoring burning as the preferred mechanism for handling revenues from Coretime sales.

Motivation

How to handle the revenue accrued from Coretime sales is an important economic question that influences the value of DOT and should be properly discussed before deciding on either of the options. Now is the best time to start this discussion.

Stakeholders

    Polkadot DOT token holders.

Explanation

This RFC discusses the potential benefits of burning the revenue accrued from Coretime sales instead of diverting it to the Treasury. The arguments in favor are as follows.

    It's in the interest of the Polkadot community to have a consistent and predictable Treasury income, because volatility in the inflow can be damaging, especially in situations when it is insufficient. As such, this RFC operates under the presumption of a steady and sustainable Treasury income flow, which is crucial for the Polkadot community's stability. The assurance of a predictable Treasury income, as outlined in a prior discussion here, or through other equally effective measures, serves as a baseline assumption for this argument.

    Consequently, we need not concern ourselves with this particular issue here. This naturally begs the question - why should we introduce additional volatility to the Treasury by aligning it with the variable Coretime sales? It's worth noting that Coretime revenues often exhibit an inverse relationship with periods when Treasury spending should ideally be ramped up. During periods of low Coretime utilization (indicated by lower revenue), Treasury should spend more on projects and endeavours to increase the demand for Coretime. This pattern underscores that Coretime sales, by their very nature, are an inconsistent and unpredictable source of funding for the Treasury. Given the importance of maintaining a steady and predictable inflow, it's unnecessary to rely on another volatile mechanism. Some might argue that we could have both: a steady inflow (from inflation) and some added bonus from Coretime sales, but burning the revenue would offer further benefits as described below.

Authors: Joe Petrowski

Summary

    Since the introduction of the Collectives parachain, many groups have expressed interest in forming new -- or migrating existing groups into -- on-chain collectives. While adding a new collective is relatively simple from a technical standpoint, the Fellowship will need to merge new pallets into the Collectives parachain for each new collective. This RFC proposes a means for the network to ratify a new collective, thus instructing the Fellowship to instate it in the runtime.

Motivation

    Many groups have expressed interest in representing collectives on-chain. Some of these include:

    The premise of this proposal is to offer a straightforward design that discovers the price of coretime within a period as a clearing_price. Long-term coretime holders still retain the privilege to keep their cores if they can pay the price discovered by the market (with some premium for that privilege). The proposed model aims to strike a balance between leveraging market forces for allocation while operating within defined bounds. In particular, prices are capped within a BULK_PERIOD, which gives some certainty about prices to existing teams. It must be noted, however, that under high demand, prices could increase exponentially between multiple market cycles. This is a necessary feature to ensure proper price discovery and efficient coretime allocation.

    Ultimately, the framework proposed here seeks to adhere to all requirements originally stated in RFC-1.
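The clearing-price idea above can be sketched as a uniform-price allocation (a simplification under assumed rules: the highest bids win the available cores, and every winner pays the same clearing price, namely the lowest accepted bid — this illustrates the concept only, not the RFC's exact auction rules):

```rust
// Sketch of uniform-price clearing for a fixed number of cores: sort bids,
// allocate to the highest ones, and let every winner pay the clearing_price,
// i.e. the lowest accepted bid.
fn clearing_price(mut bids: Vec<u64>, cores: usize) -> Option<u64> {
    if cores == 0 || bids.is_empty() {
        return None;
    }
    bids.sort_unstable_by(|a, b| b.cmp(a)); // highest first
    let winners = cores.min(bids.len());
    Some(bids[winners - 1])
}

fn main() {
    // Five bids competing for three cores: the winners bid 90, 70 and 50,
    // and all three pay the clearing price of 50.
    assert_eq!(clearing_price(vec![50, 90, 10, 70, 30], 3), Some(50));
    // Demand below supply: the lowest bid clears the market.
    assert_eq!(clearing_price(vec![40, 60], 3), Some(40));
}
```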

Stakeholders

    Primary stakeholder sets are:

    • Protocol researchers, developers, and the Polkadot Fellowship.
    • Polkadot Parachain teams both present and future, and their users.
    • Polkadot DOT token holders.
Explanation

    Overview

The BULK_PERIOD has been restructured into two primary segments: the MARKET_PERIOD and the RENEWAL_PERIOD, along with an auxiliary SETTLEMENT_PERIOD. The latter does not require any active participation from the coretime system chain except to simply execute transfers of ownership between market participants. A significant departure from the current design lies in the timing of renewals, which now occur after the market phase. This adjustment aims to harmonize renewal prices with their market counterparts, ensuring a more consistent and equitable pricing model.

    Market Period (14 days)

To mitigate this, we propose preventing the market from closing at the ope[…]

Prior Art and References

    This RFC builds extensively on the available ideas put forward in RFC-1.

    Additionally, I want to express a special thanks to Samuel Haefner, Shahar Dobzinski, and Alistair Stewart for fruitful discussions and helping me structure my thoughts.

    (source)

Authors: @brenzi for Encointer Association, 8000 Zurich, Switzerland

Summary

Encointer has been a system chain on Kusama since January 2022 and has been developed and maintained by the Encointer Association. This RFC proposes to treat Encointer like any other system chain and include it in the fellowship repo with this PR.

Motivation

    Encointer does not seek to be in control of its runtime repository. As a decentralized system, the fellowship has a more suitable structure to maintain a system chain runtime repo than the Encointer association does.

    Also, Encointer aims to update its runtime in batches with other system chains in order to have consistency for interoperability across system chains.

Stakeholders

    • Fellowship: Will continue to take upon them the review and auditing work for the Encointer runtime, but the process is streamlined with other system chains and therefore less time-consuming compared to the separate repo and CI process we currently have.
    • Kusama Network: Tokenholders can easily see the changes of all system chains in one place.
    • Encointer Association: Further decentralization of the Encointer Network necessities like devops.
    • Encointer devs: Being able to work directly in the Fellowship runtimes repo to streamline and synergize with other developers.
Explanation

    Our PR has all details about our runtime and how we would move it into the fellowship repo.

    Noteworthy: All Encointer-specific pallets will still be located in encointer's repo for the time being: https://github.com/encointer/pallets

    It will still be the duty of the Encointer team to keep its runtime up to date and provide adequate test fixtures. Frequent dependency bumps with Polkadot releases would be beneficial for interoperability and could be streamlined with other system chains but that will not be a duty of fellowship. Whenever possible, all system chains could be upgraded jointly (including Encointer) with a batch referendum.

• Encointer will publish all its crates on crates.io
  • Encointer does not carry out external auditing of its runtime nor pallets. It would be beneficial but not a requirement from our side if Encointer could join the auditing process of other system chains.
Drawbacks

Unlike all other system chains, development and maintenance of the Encointer Network is mainly financed by the KSM Treasury and possibly the DOT Treasury in the future. Encointer is dedicated to maintaining its network and runtime code for as long as possible, but there is a dependency on funding which is not in the hands of the fellowship. The only risk in the context of funding, however, is that the Encointer runtime will see less frequent updates if there's less funding.

Testing, Security, and Privacy

    No changes to the existing system are proposed. Only changes to how maintenance is organized.

Performance, Ergonomics, and Compatibility

    No changes

Prior Art and References

    Existing Encointer runtime repo

Unresolved Questions

    None identified


    More info on Encointer: encointer.org

    (source)

    Table of Contents

[…] other privacy-enhancing mechanisms to address this concern.

Authors: Joe Petrowski, Gavin Wood

Summary

    The Relay Chain contains most of the core logic for the Polkadot network. While this was necessary prior to the launch of parachains and development of XCM, most of this logic can exist in parachains. This is a proposal to migrate several subsystems into system parachains.

Motivation

Polkadot's scaling approach allows many distinct state machines (known generally as parachains) to operate with common guarantees about the validity and security of their state transitions. Polkadot provides these common guarantees by executing the state transitions on a strict subset (a backing […] blockspace) to the network.

    By minimising state transition logic on the Relay Chain by migrating it into "system chains" -- a set of parachains that, with the Relay Chain, make up the Polkadot protocol -- the Polkadot Ubiquitous Computer can maximise its primary offering: secure blockspace.

Stakeholders

    • Parachains that interact with affected logic on the Relay Chain;
    • Core protocol and XCM format developers;
    • Tooling, block explorer, and UI developers.
Explanation

    The following pallets and subsystems are good candidates to migrate from the Relay Chain:

    • Identity
• […] in its first version. Any other systems that use overlapping locks, most notably, need to recognise DOT held on both Asset Hub and the Staking parachain.

      There is more discussion about staking in a parachain in Moving Staking off the Relay Chain.

Governance

Migrating governance into a parachain will be less complicated than staking. Most of the primitives needed for the migration already exist. The Treasury supports spending assets on remote chains and collectives like the Polkadot Technical Fellowship already function in a parachain. That is, XCM […] sensible to rehearse a migration.

      Staking is the subsystem most constrained by PoV limits. Ensuring that elections, payouts, session changes, offences/slashes, etc. work in a parachain on Kusama -- with its larger validator set -- will give confidence to the chain's robustness on Polkadot.

Drawbacks

These subsystems will have fewer resources in cores than they do on the Relay Chain. Staking in particular may require some optimizations to deal with the constraints.

Testing, Security, and Privacy

Standard audit/review requirements apply. More powerful multi-chain integration test tools would be useful in development.

Performance, Ergonomics, and Compatibility

      Describe the impact of the proposal on the exposed functionality of Polkadot.

Performance

      This is an optimization. The removal of public/user transactions on the Relay Chain ensures that its primary resources are allocated to system performance.

Ergonomics

      This proposal alters very little for coretime users (e.g. parachain developers). Application developers will need to interact with multiple chains, making ergonomic light client tools particularly important for application development.

      For existing parachains that interact with these subsystems, they will need to configure their runtimes to recognize the new locations in the network.

Compatibility

      Implementing this proposal will require some changes to pallet APIs and/or a pub-sub protocol. Application developers will need to interact with multiple chains in the network.

Prior Art and References

Unresolved Questions

      There remain some implementation questions, like how to use balances for both Staking and Governance. See, for example, Moving Staking off the Relay Chain.


      Ideally the Relay Chain becomes transactionless, such that not even balances are represented there. With Staking and Governance off the Relay Chain, this is not an unreasonable next step.

      With Identity on Polkadot, Kusama may opt to drop its People Chain.

Authors: Vedhavyas Singareddi

Summary

At the moment, we have the state_version field on RuntimeVersion, which determines the state version used for the storage. We have a use case where we want the extrinsics root to be derived using StateVersion::V1. Rather than defining a new field under RuntimeVersion, we would like to propose replacing state_version with system_version, which can be used to derive both the storage and extrinsic state versions.

Motivation

Since the extrinsic state version is always StateVersion::V0, deriving the extrinsic root requires the full extrinsic data. This would be problematic when we need to verify the extrinsics root if the extrinsics are large. This problem is further explored in https://github.com/polkadot-fellows/RFCs/issues/19

One of the main challenges here is that some extrinsics could be big enough that they cannot be included in the consensus block due to the block's weight restriction. If the extrinsic root is derived using StateVersion::V1, then we do not need to pass the full extrinsic data but rather, at maximum, 32 bytes of extrinsic data.
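The size difference can be illustrated with a toy model (numbers are illustrative; under trie state version 1, values longer than 32 bytes are represented in the trie node by their 32-byte hash rather than inline):

```rust
// Toy comparison of proof material per extrinsic. Under StateVersion::V0 the
// trie node inlines the full value, so a proof of the extrinsics root must
// carry each full extrinsic; under StateVersion::V1 any value longer than
// 32 bytes is replaced by its 32-byte hash.
fn v0_proof_bytes(extrinsic_len: usize) -> usize {
    extrinsic_len
}

fn v1_proof_bytes(extrinsic_len: usize) -> usize {
    extrinsic_len.min(32)
}

fn main() {
    let big_extrinsic = 1_048_576; // a 1 MiB extrinsic
    assert_eq!(v0_proof_bytes(big_extrinsic), 1_048_576);
    assert_eq!(v1_proof_bytes(big_extrinsic), 32); // at most 32 bytes
}
```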

Stakeholders

      • Technical Fellowship, in its role of maintaining system runtimes.
Explanation

In order to use a project-specific StateVersion for extrinsic roots, we proposed an implementation that introduced a parameter to frame_system::Config, but that unfortunately did not feel correct. […]

pub const VERSION: RuntimeVersion = RuntimeVersion {
    …
    system_version: 1,
};

Drawbacks

There should be no drawbacks, as it would replace state_version with the same behavior, but documentation should be updated so that chains know which system_version to use.

Testing, Security, and Privacy

As far as I know, this should not have any impact on security or privacy.

Performance, Ergonomics, and Compatibility

These changes should be compatible with existing chains if they use their state_version value for system_version.

Performance

    I do not believe there is any performance hit with this change.

Ergonomics

This does not break any exposed APIs.

Compatibility

    This change should not break any compatibility.

Prior Art and References

    We proposed introducing a similar change by introducing a parameter to frame_system::Config but did not feel that is the correct way of introducing this change.

Unresolved Questions

    I do not have any specific questions about this change at the moment.


    IMO, this change is pretty self-contained and there won't be any future work necessary.

    (source)

    Table of Contents


Authors: Sebastian Kunert

Summary

    This RFC proposes a new host function for parachains, storage_proof_size. It shall provide the size of the currently recorded storage proof to the runtime. Runtime authors can use the proof size to improve block utilization by retroactively reclaiming unused storage weight.
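The reclaim idea can be sketched as follows (the struct and flow are illustrative, not the actual FRAME API; only the host function name storage_proof_size comes from the RFC):

```rust
// Sketch of retroactive weight reclaim: read the recorded proof size before
// and after executing an extrinsic, then refund the difference between the
// benchmarked (worst-case) proof-size weight and what was actually consumed.
// `ProofRecorder::storage_proof_size` models the proposed host function.
struct ProofRecorder {
    recorded_bytes: u64,
}

impl ProofRecorder {
    fn storage_proof_size(&self) -> u64 {
        self.recorded_bytes
    }
}

fn reclaim(before: u64, after: u64, benchmarked: u64) -> u64 {
    let actually_used = after.saturating_sub(before);
    benchmarked.saturating_sub(actually_used) // weight handed back to the block
}

fn main() {
    let recorder_before = ProofRecorder { recorded_bytes: 4_000 };
    let recorder_after = ProofRecorder { recorded_bytes: 5_500 };
    // Benchmarked for a 10_000-byte worst case, but only 1_500 bytes recorded:
    let refunded = reclaim(
        recorder_before.storage_proof_size(),
        recorder_after.storage_proof_size(),
        10_000,
    );
    assert_eq!(refunded, 8_500);
}
```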

Motivation

    The number of extrinsics that are included in a parachain block is limited by two constraints: execution time and proof size. FRAME weights cover both concepts, and block-builders use them to decide how many extrinsics to include in a block. However, these weights are calculated ahead of time by benchmarking on a machine with reference hardware. The execution-time properties of the state-trie and its storage items are unknown at benchmarking time. Therefore, we make some assumptions about the state-trie:

Transact Over Bridge

Drawbacks

    In terms of ergonomics and user experience, this support for combining an asset transfer with a subsequent action (like Transact) is a net positive.

    In terms of performance, and privacy, this is neutral with no changes.

    In terms of security, the feature by itself is also neutral because it allows preserve_origin: false usage for operating with no extra trust assumptions. When wanting to support preserving origin, chains need to configure secure origin aliasing filters. The one suggested in this RFC should be the right choice for the majority of chains, but each chain will ultimately choose depending on their business model and logic (e.g. chain does not plan to integrate with Asset Hub). It is up to the individual chains to configure accordingly.

Testing, Security, and Privacy

    Barriers should now allow AliasOrigin, DescendOrigin or ClearOrigin.

Normally, XCM program builders should audit their programs and eliminate assumptions of "no origin" on the remote side of this instruction. In this case, the InitiateAssetsTransfer instruction has not been released yet; it will be part of XCMv5, and we can make this change part of the same XCMv5 so that there isn't even the possibility of someone in the wild having built XCM programs using this instruction on those wrong assumptions.

    The working assumption going forward is that the origin on the remote side can either be cleared or it can be the local origin's reanchored location. This assumption is in line with the current behavior of remote XCM programs sent over using pallet_xcm::send.

    The existing DepositReserveAsset, InitiateReserveWithdraw and InitiateTeleport cross chain asset transfer instructions will not attempt to do origin aliasing and will always clear origin same as before for compatibility reasons.

Performance, Ergonomics, and Compatibility

Performance

    No impact.

Ergonomics

    Improves ergonomics by allowing the local origin to operate on the remote chain even when the XCM program includes an asset transfer.

Compatibility

    At the executor-level this change is backwards and forwards compatible. Both types of programs can be executed on new and old versions of XCM with no changes in behavior.

The new version of the InitiateAssetsTransfer instruction acts the same as before when used with preserve_origin: false.

    For using the new capabilities, the XCM builder has to verify that the involved chains have the required origin-aliasing filters configured and use some new version of Barriers aware of AliasOrigin as an allowed alternative to ClearOrigin.

    For compatibility reasons, this RFC proposes this mechanism be added as an enhancement to the yet unreleased InitiateAssetsTransfer instruction, thus eliminating possibilities of XCM logic breakages in the wild. Following the same logic, the existing DepositReserveAsset, InitiateReserveWithdraw and InitiateTeleport cross chain asset transfer instructions will not attempt to do origin aliasing and will always clear the origin same as before for compatibility reasons.

Any one of the DepositReserveAsset, InitiateReserveWithdraw and InitiateTeleport instructions can be replaced with an InitiateAssetsTransfer instruction with or without origin aliasing, thus providing a clean and clear upgrade path for opting in to this new feature.

Prior Art and References

Unresolved Questions

    None


    (source)

    Table of Contents

Stakeholders

    • Runtime Developers
    • Tools/UI Developers
Explanation

    The core idea of PVQ is to have a unified interface that meets the aforementioned requirements.

On the runtime side, an extension-based system is introduced to serve as a standardization layer across different chains. Each extension specification defines a set of cohesive APIs.

enum PvqError {
    …
    ExceedsMaxMessageSize,
    Transport,
    …
}
Drawbacks

    Performance issues

    • PVQ Program Size: The size of a complicated PVQ program may be too large to be suitable for efficient storage and transmission via XCMP/HRMP.
Testing, Security, and Privacy

    • Testing:

N/A

Performance, Ergonomics, and Compatibility

Performance

    As a newly introduced feature, PVQ operates independently and does not impact or degrade the performance of existing runtime implementations.

Ergonomics

    From the perspective of off-chain tooling, this proposal streamlines development by unifying multiple chain-specific RuntimeAPIs under a single consistent interface. This significantly benefits wallet and dApp developers by eliminating the need to handle individual implementations for similar operations across different chains. The proposal also enhances development flexibility by allowing custom computations to be modularly encapsulated as PolkaVM programs that interact with the exposed APIs.

Compatibility

    For RuntimeAPI integration, the proposal defines new APIs, which do not break compatibility with existing interfaces. For XCM Integration, the proposal does not modify the existing XCM message format, which is backwards compatible.

    Prior Art and References

    There are several discussions related to the proposal, including:

    • Original discussion about having a mechanism to avoid code duplications between the runtime and front-ends/wallets. In the original design, the custom computations are compiled as a wasm function.
    • View Functions aims to provide view-only functions at the pallet level, while the Facade Project aims to gather and return commonly wanted information at the runtime level. PVQ does not conflict with either: it can build on these Pallet View Functions / Runtime APIs while letting people write arbitrary PVQ programs to obtain custom or complex data that neither proposal expresses on its own.

    Unresolved Questions

    • The specific conversion between gas and weight has not been finalized and will likely require development of a suitable benchmarking methodology.
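To make the open question concrete, here is one possible shape a gas-to-weight conversion could take. This is purely illustrative: the RFC leaves the mapping unresolved, and the linear model and the REF_TIME_PER_GAS constant below are made up for demonstration, not benchmarked values.

```rust
// Simplified stand-in for Substrate's two-dimensional weight.
#[derive(Debug, Clone, Copy, PartialEq)]
struct Weight {
    ref_time: u64,   // reference execution time
    proof_size: u64, // bytes of proof consumed
}

// Hypothetical linear model: each unit of PolkaVM gas is assumed to cost a
// fixed amount of reference time and no proof size by itself. A real
// methodology would derive this factor from benchmarks.
const REF_TIME_PER_GAS: u64 = 1_000;

fn gas_to_weight(gas: u64) -> Weight {
    Weight {
        ref_time: gas.saturating_mul(REF_TIME_PER_GAS),
        proof_size: 0,
    }
}

fn weight_to_gas(weight: Weight) -> u64 {
    weight.ref_time / REF_TIME_PER_GAS
}

fn main() {
    let w = gas_to_weight(5);
    assert_eq!(w.ref_time, 5_000);
    // Under this linear model the round trip is lossless.
    assert_eq!(weight_to_gas(w), 5);
    println!("ok");
}
```

Whatever model is chosen, it must be conservative enough that a gas-metered PVQ program can never exceed the weight budget the runtime allotted to it.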

    Future Directions and Related Material

    Once PVQ and the aforementioned Facade Project are ready, there are opportunities to consolidate overlapping functionality between the two systems. For example, the metadata APIs could potentially be unified to provide a more cohesive interface for runtime information. This would help reduce duplication and improve maintainability while preserving the distinct benefits of each approach.

    (source)

    Table of Contents

    Authors: s0me0ne-unkn0wn (13WGadgNgqSjiGQvfhimw9pX26mvGdYQ6XgrjPANSEDRoGMt)

    Summary

    This RFC proposes a change that makes it possible to identify types of compressed blobs stored on-chain, as well as used off-chain, without the need for decompression.

    Motivation

    Currently, a compressed blob gives no indication of what's inside, because the only thing that can be inside, according to the spec, is Wasm. In reality, other blob types are already being used, and more are to come. Apart from being error-prone in itself, the current approach does not allow the blob to be properly routed through the execution paths before its decompression, which will result in suboptimal implementations when more blob types are used. Thus, it is necessary to introduce a mechanism for identifying the blob type without decompressing it.

    This proposal is intended to support future work enabling Polkadot to execute PolkaVM and, more generally, other-than-Wasm parachain runtimes, and allow developers to introduce arbitrary compression methods seamlessly in the future.

    Stakeholders

    Node developers are the main stakeholders for this proposal. It also creates a foundation on which parachain runtime developers will build.

    Explanation

    Overview

    The current approach to compressing binary blobs involves using zstd compression, and the resulting compressed blob is prefixed with a unique 64-bit magic value specified in the corresponding section of the specification. The same procedure is used to compress both Wasm code blobs and proofs-of-validity. Currently, given only a compressed blob, it is impossible to tell without decompression whether it contains a Wasm blob or a PoV. That doesn't cause problems in the current protocol, as Wasm blobs and PoV blobs take completely different execution paths in the code.

    The changes proposed below are intended to define the means for distinguishing compressed blob types in a backward-compatible and future-proof way.
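A minimal sketch of what such prefix-based routing could look like follows. The legacy constant is assumed here to be the 8-byte zstd magic currently used for compressed blobs; the three typed prefixes and the BlobType variant names are placeholders for illustration, not values or names any final specification would necessarily assign.

```rust
#[derive(Debug, PartialEq)]
enum BlobType {
    // Today's prefix: a Wasm PVF or a PoV, indistinguishable without decompression.
    LegacyZstd,
    WasmPvf,
    PolkaVmPvf,
    Pov,
    Unknown,
}

// Assumed current zstd magic prefix.
const CBLOB_ZSTD_LEGACY: [u8; 8] = [82, 188, 83, 118, 70, 219, 142, 5];
// Placeholder magics for typed prefixes; real values would be fixed by the spec.
const CBLOB_ZSTD_WASM_PVF: [u8; 8] = [82, 188, 83, 118, 70, 219, 142, 6];
const CBLOB_ZSTD_POLKAVM_PVF: [u8; 8] = [82, 188, 83, 118, 70, 219, 142, 7];
const CBLOB_ZSTD_POV: [u8; 8] = [82, 188, 83, 118, 70, 219, 142, 8];

// Identify the blob type by inspecting only its first 8 bytes,
// without decompressing the payload.
fn identify(blob: &[u8]) -> BlobType {
    let Some(prefix) = blob.get(..8) else {
        return BlobType::Unknown;
    };
    match <[u8; 8]>::try_from(prefix).unwrap() {
        CBLOB_ZSTD_LEGACY => BlobType::LegacyZstd,
        CBLOB_ZSTD_WASM_PVF => BlobType::WasmPvf,
        CBLOB_ZSTD_POLKAVM_PVF => BlobType::PolkaVmPvf,
        CBLOB_ZSTD_POV => BlobType::Pov,
        _ => BlobType::Unknown,
    }
}

fn main() {
    let mut blob = CBLOB_ZSTD_POV.to_vec();
    blob.extend_from_slice(b"...compressed payload...");
    assert_eq!(identify(&blob), BlobType::Pov);
    assert_eq!(identify(b"short"), BlobType::Unknown);
    println!("ok");
}
```

Since the legacy prefix keeps its current meaning, old nodes continue to work unchanged while upgraded nodes can route typed blobs before decompression.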

  • Conservatively, wait until no more PVFs prefixed with CBLOB_ZSTD_LEGACY remain on-chain. That may take quite some time. Alternatively, create a migration that alters the prefixes of existing blobs;
  • Removing the CBLOB_ZSTD_LEGACY prefix will only be possible after all the nodes in all the networks cease using it, which is a long process; additional incentives should be offered to the community to encourage upgrades.

    Drawbacks

    Currently, the only requirement for a compressed blob prefix is not to coincide with Wasm magic bytes (as stated in code comments). Changes proposed here increase prefix collision risk, given that arbitrary data may be compressed in the future. However, it must be taken into account that:

    • Collision probability per arbitrary blob is ≈5.4×10⁻²⁰ for a single random 64-bit prefix (current situation) and ≈2.17×10⁻¹⁹ for the proposed set of four 64-bit prefixes (proposed situation), which is still low enough;
    • The current de facto protocol uses the current compression implementation to compress PoVs, which are arbitrary binary data, so the collision risk already exists and is not introduced by changes proposed here.
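The figures in the first bullet follow directly from the prefix width: a uniformly random 8-byte blob head matches one specific 64-bit prefix with probability 2⁻⁶⁴, and any of four distinct prefixes with probability 4·2⁻⁶⁴. A quick check of the arithmetic:

```rust
fn main() {
    // Probability that an arbitrary 8-byte blob head equals one fixed
    // 64-bit prefix.
    let single = (2f64).powi(-64);
    // Probability of matching any of four distinct 64-bit prefixes.
    let four = 4.0 * single;

    // Agrees with the quoted ≈5.4×10⁻²⁰ and ≈2.17×10⁻¹⁹ to within 1%.
    assert!((single - 5.4e-20).abs() / single < 0.01);
    assert!((four - 2.17e-19).abs() / four < 0.01);
    println!("ok");
}
```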

    Testing, Security, and Privacy

    As the change increases granularity, it will positively affect both testing possibilities and security, allowing developers to check what's inside a given compressed blob precisely. Testing the change itself is trivial. Privacy is not affected by this change.

    Performance, Ergonomics, and Compatibility

    Performance

    The current implementation's performance is not affected by this change. Future implementations allowing for the execution of other-than-Wasm parachain runtimes will benefit from this change performance-wise.

    Ergonomics

    End-user ergonomics are not affected. Developer ergonomics will benefit from this change, as it enables exact checks and less guessing.

    Compatibility

    The change is designed to be backward-compatible.

    Prior Art and References

    SDK PR#6704 (WIP) introduces a mechanism similar to that described in this proposal and proves the necessity of such a change.

    Unresolved Questions

    None

    Future Directions and Related Material

    This proposal creates a foundation for two future work directions: