diff --git a/404.html b/404.html index 0de3900..23c558c 100644 --- a/404.html +++ b/404.html @@ -91,7 +91,7 @@ diff --git a/approved/0001-agile-coretime.html b/approved/0001-agile-coretime.html index 5b362ee..ba4f2c0 100644 --- a/approved/0001-agile-coretime.html +++ b/approved/0001-agile-coretime.html @@ -90,7 +90,7 @@ diff --git a/approved/0005-coretime-interface.html b/approved/0005-coretime-interface.html index d84a80a..fc87978 100644 --- a/approved/0005-coretime-interface.html +++ b/approved/0005-coretime-interface.html @@ -90,7 +90,7 @@ diff --git a/approved/0007-system-collator-selection.html b/approved/0007-system-collator-selection.html index 725ca18..b53faa5 100644 --- a/approved/0007-system-collator-selection.html +++ b/approved/0007-system-collator-selection.html @@ -90,7 +90,7 @@ diff --git a/approved/0008-parachain-bootnodes-dht.html b/approved/0008-parachain-bootnodes-dht.html index 32f1160..72222ce 100644 --- a/approved/0008-parachain-bootnodes-dht.html +++ b/approved/0008-parachain-bootnodes-dht.html @@ -90,7 +90,7 @@ diff --git a/approved/0012-process-for-adding-new-collectives.html b/approved/0012-process-for-adding-new-collectives.html index 096ea65..79a435d 100644 --- a/approved/0012-process-for-adding-new-collectives.html +++ b/approved/0012-process-for-adding-new-collectives.html @@ -90,7 +90,7 @@ diff --git a/approved/0014-improve-locking-mechanism-for-parachains.html b/approved/0014-improve-locking-mechanism-for-parachains.html index ab3eaa8..a6d7f1d 100644 --- a/approved/0014-improve-locking-mechanism-for-parachains.html +++ b/approved/0014-improve-locking-mechanism-for-parachains.html @@ -90,7 +90,7 @@ diff --git a/approved/0022-adopt-encointer-runtime.html b/approved/0022-adopt-encointer-runtime.html index 6e0feb8..992827c 100644 --- a/approved/0022-adopt-encointer-runtime.html +++ b/approved/0022-adopt-encointer-runtime.html @@ -90,7 +90,7 @@ diff --git a/approved/0032-minimal-relay.html b/approved/0032-minimal-relay.html index 0c1e8b5..57caa34 100644 --- a/approved/0032-minimal-relay.html +++ b/approved/0032-minimal-relay.html @@ -90,7 +90,7 @@ diff --git a/approved/0050-fellowship-salaries.html b/approved/0050-fellowship-salaries.html index 67ec696..e5843cb 100644 --- a/approved/0050-fellowship-salaries.html +++ b/approved/0050-fellowship-salaries.html @@ -90,7 +90,7 @@ diff --git a/approved/0056-one-transaction-per-notification.html b/approved/0056-one-transaction-per-notification.html index 0fdb03b..f76f055 100644 --- a/approved/0056-one-transaction-per-notification.html +++ b/approved/0056-one-transaction-per-notification.html @@ -90,7 +90,7 @@ @@ -271,7 +271,7 @@ This is equivalent to forcing the Vec<Transaction> to always - @@ -285,7 +285,7 @@ This is equivalent to forcing the Vec<Transaction> to always - diff --git a/index.html b/index.html index b61b196..e2da1d2 100644 --- a/index.html +++ b/index.html @@ -90,7 +90,7 @@ diff --git a/introduction.html b/introduction.html index b61b196..e2da1d2 100644 --- a/introduction.html +++ b/introduction.html @@ -90,7 +90,7 @@ diff --git a/new/0066-add-smartcontracts-to-assethub.html b/new/0066-add-smartcontracts-to-assethub.html new file mode 100644 index 0000000..d5cbd1e --- /dev/null +++ b/new/0066-add-smartcontracts-to-assethub.html @@ -0,0 +1,426 @@ + + + + + + + RFC-0066: Add EVM+ink! Contracts Pallets to Asset Hub for Polkadot - Polkadot Fellowship RFCs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + + + +
+
+

(source)

+

Table of Contents

+ +

RFC-0066: Add EVM+ink! Contracts Pallets to Asset Hub for Polkadot

+
+ + + +
Start Date14 January 2024
DescriptionA proposal to add EVM+ink! Contracts to Asset Hub for Polkadot to support Polkadot Rollups and larger numbers of EVM/Coreplay smart contract developers and their users on Polkadot Rollups and AssetHub for Polkadot.
AuthorsSourabh Niyogi
+
+

Summary

+

This RFC proposes to add the two dominant smart contract programming languages in the Polkadot ecosystem to AssetHub: EVM and ink!/Coreplay. The objective is to increase DOT revenue by making AssetHub accessible to (1) Polkadot Rollups; (2) EVM smart contract programmers; (3) Coreplay programmers, who will benefit from easier-to-use smart contract environments.
These changes in AssetHub are enabled by key Polkadot 2.0 technologies: PolkaVM, which supports Coreplay, and hyper data availability via the Blobs chain.

+

Motivation

+

EVM Contracts are pervasive in the Web3 blockchain ecosystem, +while Polkadot 2.0's Coreplay aims to surpass EVM Contracts in ease-of-use using PolkaVM's RISC architecture.

+

Asset Hub for Polkadot does not have smart contract capabilities, even though dominant stablecoin assets such as USDC and USDT originate there.
In addition, under the RFC #32 - Minimal Relay Chain architecture, DOT balances are planned to be shifted to Asset Hub, to support Polkadot 2.0's CoreJam map-reduce architecture.
In this 2.0 architecture, there is no room for synchronous contracts on the Polkadot relay chain; hosting them there would waste precious resources that should be dedicated to sync+async composability.
However, while Polkadot fellows have concluded that the Polkadot relay chain should not support synchronous smart contracts, this conclusion does not apply to AssetHub for Polkadot.

+

The following sections argue for the need for Smart Contracts on AssetHub.

+

Defi+NFT Applications need Smart Contracts on AssetHub

+

EVM Smart Contract chains within Polkadot and outside it are dominated by defi + NFT applications. While the assetConversion pallet (implementing Uniswap v1) is a start toward basic defi on AssetHub, many programmers may be surprised to find that the synchronous EVM smart contract capabilities available on other chains (e.g. Uniswap v2+v3) are not possible on AssetHub.

+

Indeed, this is true for many Polkadot parachains, with the exception of the top 2 Polkadot parachains (by marketcap circa early 2024: Moonbeam + Astar), which do include the EVM pallets.
This leads to a cumbersome EVM smart contract programming experience split between AssetHub and these two parachains, making the Polkadot ecosystem hard to work with for asset-related applications from defi to NFTs.

+

The ink! defi ecosystem remains nascent, having only Astar as a potential home, and empirically has almost no defi/NFT activity, even though e.g. Uniswap translations to ink! have been written.

+

It is hoped that deploying EVM and ink! contracts on AssetHub for Polkadot would support new applications for top assets (USDC and USDT) and spur many smart contract developers to build end-user applications with familiar synchronous programming constructs.

+

Rollups need Smart Contracts on AssetHub

+

Polkadot Data Availability technology is extremely promising but underutilized.
We envision a new class of customer, "Polkadot Rollups", that can utilize Polkadot DA far better than Ethereum and other technology platforms.
Unlike Ethereum's DA, which is capped at a fixed throughput even with EIP-4844, Polkadot's data availability scales linearly with the number of cores.
This means Polkadot can support a much larger number of rollups than Ethereum today, and even more as the number of cores in Polkadot grows. This performance difference has not been widely appreciated in the blockchain community.

+

Recently, a "Blobs" chain has been developed to expose Polkadot DA to rollups by senior Polkadot Fellows:

+ +

A rollup kit is mappable to widely used rollup platforms, such as OP Stack, Arbitrum Orbit or StarkNet Madara.
A Blobs chain, currently deployed on Kusama (paraID 3338), enables rollups to utilize functionality outside the Polkadot 1.0 parachain architecture by submitting transactions via a rollup kit abstraction. The Blobs chain write interface is simply blobs.submitBlob(namespaceId, blob), with a matching read interface.

+
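To make the interface shape concrete, below is a minimal Rust sketch mirroring the blobs.submitBlob(namespaceId, blob) call named above. The trait, the concrete types, and the read-side method name are illustrative assumptions, not the pallet's actual API.

```rust
/// Identifies which rollup is writing into the data availability layer.
type NamespaceId = u32;
/// An opaque batch of rollup transactions.
type Blob = Vec<u8>;

trait BlobsInterface {
    /// Write interface: persist `blob` under the rollup's namespace.
    fn submit_blob(&mut self, namespace_id: NamespaceId, blob: Blob);

    /// Matching read interface (name assumed): fetch the blobs a
    /// namespace wrote at a given block, so anyone monitoring the
    /// rollup can replay its history.
    fn blobs_at(&self, namespace_id: NamespaceId, block_number: u32) -> Vec<Blob>;
}
```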

However, simply sending blobs is not enough to power a rollup. End users need to interact with a "settlement layer", while rollups require proof systems for security.

+

Key functionality for optimistic rollups (e.g. OP Stack, Arbitrum Orbit) includes:

+
    +
  • enabling rollup users to deposit the L1 native token (DOT) into the rollup and to withdraw it back out. In an AssetHub context: +
      +
    • Deposits: send DOT to the rollup from AssetHub, by calling an EVM Contract function on AssetHub;
    • +
    • Withdrawal: withdraw DOT from the rollup by submitting an EVM transaction on the rollup. After a challenge period (e.g. 7 days on OP Stack), the user submits a transaction on AssetHub to claim their DOT, using a Merkle proof (see the sketch after this list).
    • +
    +
  • +
  • enabling interactive fraud proofs. While these have rarely been needed in practice, they are critical to rollup security. In an AssetHub context: +
      +
    • Anyone monitoring a rollup can access its recent history via the Blobs chain.
    • +
    • When detecting invalid state transitions, anyone can interact with the rollup and AssetHub's EVM to generate a fraud proof.
    • +
    +
  • +
+
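As referenced in the withdrawal item above, here is a minimal Rust sketch of the Merkle-proof check behind a withdrawal claim. It assumes SHA-256 over sorted pairs purely for illustration; production optimistic rollups such as OP Stack use keccak256 and Merkle-Patricia trie proofs against the rollup's posted state root, and every name here is hypothetical.

```rust
use sha2::{Digest, Sha256}; // sha2 = "0.10"

// Hash a sibling pair in sorted order, so the verifier needs no
// left/right position flags alongside the proof.
fn hash_pair(a: &[u8; 32], b: &[u8; 32]) -> [u8; 32] {
    let (lo, hi) = if a <= b { (a, b) } else { (b, a) };
    let mut hasher = Sha256::new();
    hasher.update(lo);
    hasher.update(hi);
    hasher.finalize().into()
}

/// Recompute the root from a withdrawal leaf and its sibling path, then
/// compare against the state root the rollup committed on AssetHub.
fn claim_is_valid(leaf: [u8; 32], proof: &[[u8; 32]], committed_root: [u8; 32]) -> bool {
    let computed = proof
        .iter()
        .fold(leaf, |acc, sibling| hash_pair(&acc, sibling));
    computed == committed_root
}
```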

Analogous functionality exists for ZK-rollup platforms (e.g. Polygon zkEVM, StarkNet Madara), with high potential for using the same Blobs+AssetHub chains.

+

While it is possible to translate these EVM Contract operations into FRAME pallets (e.g. an "opstack" pallet), we do not believe a pallet translation confers significant benefits.
Instead, we believe such a translation would require tracking regular updates from the rollup platform, which has proven difficult in practice.

+

ink! on AssetHub will lead to CorePlay Developers on AssetHub

+

While ink! WASM Smart Contracts have been a promising technology, their adoption amongst Polkadot parachains has been low (in practice just Astar to date), with far fewer developers than EVM.
This may be due to missing tooling, slow compile times, and/or ink!/Rust simply being harder to learn than Solidity, the dominant programming language of EVM chains.

+

Fortunately, ink! can compile to PolkaVM, a new RISC-based VM with the special capability of suspending and resuming register state, which supports long-running computations.
This holds a key new promise of making smart contract languages easier to use: instead of worrying about what can be done within the gas limits of a specific block or a specific transaction, developers can program Coreplay smart contracts much more easily (see here).

+

We believe AssetHub should support ink! as a precursor to supporting CorePlay's capabilities as soon as possible.
To the best of our knowledge, CorePlay's release timeline is unknown, but having ink! inside AssetHub would be a natural fit for Polkadot 2.0.

+

Stakeholders

+
    +
  • Asset Hub Users: Those who call any extrinsic on Asset Hub for Polkadot.
  • +
  • DOT Token Holders: Those who hold DOT on any chain in the Polkadot ecosystem.
  • +
  • AssetHub Smart Contract Developers: Those who utilize EVM Smart Contracts, ink! Contracts or Coreplay Contracts on AssetHub.
  • +
  • Ethereum Rollups: Rollups that use Ethereum as a settlement layer, secure themselves with interactive fraud proofs or ZK proofs, use Ethereum DA to record transactions, and have their users settle on Ethereum.
  • +
  • Polkadot Rollups: Rollups that use AssetHub as a settlement layer, secure themselves with interactive fraud proofs or ZK proofs on AssetHub, use Blobs to record rollup transactions, and have their users settle on AssetHub for Polkadot.
  • +
+

Explanation

+

Limit Smart Contract Weight allocation

+

AssetHub is a major component of the Polkadot 2.0 Minimal Relay Chain architecture. It is critical that smart contract developers not be able to clog AssetHub's blockspace at the expense of other mission-critical applications, such as Staking and Governance.

+

As such, it is proposed that at most 50% of the available weight in AssetHub for Polkadot blocks be allocated to the smart contract pallets (EVM, ink! and/or Coreplay). While AssetHub has seen limited usage to date, it is believed (see here) that imposing this limit on the smart contract pallets would limit the effect on non-smart-contract usage. An excessively small weight limit like 10% or 20% may limit the attractiveness of Polkadot as a platform for Polkadot rollups and EVM Contracts. An excessively large weight limit like 90% or 100% may threaten AssetHub usage.

+

In practice, this 50% weight limit would be used for 3 categories of smart contract usage:

+
    +
  • Defi/NFT applications: modern defi EVM Contracts would be expected to go beyond the capabilities of the assetConversion pallet to support many common ERC20/ERC721/ERC1155-centric applications.
  • +
  • Polkadot Rollup users: deposit and withdrawal operations would be expected to dominate here. Note that recording blocks would be done on the Blobs chain, while interactive fraud proofs would be extremely rare by comparison.
  • +
  • Coreplay smart contract usage, at a future time.
  • +
+

We expect the first category to dominate. If AssetHub smart contract usage increases so as to approach this 50% limit, the gas price will increase significantly. This would likely motivate EVM contract developers to migrate to an EVM contract parachain and/or rethink their application to work asynchronously within CoreJam, another major Polkadot 2.0 technology.

+

Model AssetHub Assets inside EVM Smart Contracts based on Astar

+

It is essential to make AssetHub assets interface well with EVM Smart Contracts. +Polkadot parachains Astar and Moonbeam have a mapping between assetIDs and "virtual" EVM Contracts.

+ +

We propose that AssetHub support a systematic mapping following Astar: (a) native relay DOT and KSM should be mapped to 0xFFfFfFffFFfffFFfFFfFFFFFffFFFffffFfFFFfF on AssetHub for Polkadot and AssetHub for Kusama respectively; (b) other AssetHub assets should map into an EVM address using a 0xFFFFFFFF prefix: https://docs.astar.network/docs/learn/interoperability/xcm/integration/tools#xc20-address

+
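A minimal Rust sketch of this mapping follows, assuming the XC20 convention used by Astar/Moonbeam of a 4-byte 0xFFFFFFFF prefix followed by the 16-byte big-endian u128 asset id; the exact byte layout should be checked against the linked Astar documentation.

```rust
// Map an AssetHub asset id into its "virtual" EVM contract address:
// 4 prefix bytes (0xFFFFFFFF) + 16 bytes of the u128 id = 20 bytes.
fn asset_id_to_evm_address(asset_id: u128) -> [u8; 20] {
    let mut addr = [0xFFu8; 20]; // first 4 bytes stay 0xFF (the prefix)
    addr[4..].copy_from_slice(&asset_id.to_be_bytes());
    addr
}

fn main() {
    // Asset id 1984 (USDT on AssetHub) as an illustrative input.
    let addr = asset_id_to_evm_address(1984);
    let hex: String = addr.iter().map(|b| format!("{b:02x}")).collect();
    println!("0x{hex}"); // 0xffffffff000000000000000000000000000007c0
}
```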

The above mapping has been made code-complete by Astar:

+ +

Polkadot parachains Astar and Moonbeam adopted two very different approaches to how end users interact with EVM Contracts.
+We propose that AssetHub for Polkadot adopt the Astar solution, mirroring it as closely as possible.

+

New DOT Revenue Sources

+

A substantial motivation in this proposal is to increase demand for DOT via two key chains:

+
    +
  • AssetHub - from defi/NFT users, Polkadot Rollup users and AssetHub Smart Contract Developers
  • +
  • Blobs - for Polkadot Rollups
  • +
+

New Revenue from AssetHub EVM Contracts

+

Enabling EVM Contracts on AssetHub will support DOT revenue from:

+
    +
  • defi/NFT users who use AssetHub directly
  • +
  • rollup operators who utilize Blobs chain
  • +
  • rollup users who buy DOT to utilize Polkadot Rollups
  • +
+

New Revenue for ink!/Coreplay Contracts

+

Enabling ink! contracts will pave the way to a new class of AssetHub Smart Contract Developers.
Given PolkaVM's demonstrated compile-time improvements and a RISC architecture enabling register snapshots, it is natural to utilize these new technical capabilities on a flagship system chain.
To the extent these capabilities are attractive to smart contract developers, this has the potential to bring in new DOT revenue from a system chain.

+

Drawbacks and Tradeoffs

+

Supporting EVM Contracts in AssetHub is seen by some as undercutting Polkadot's 1.0 parachain architecture, both special-purpose appchains and smart contract developer platform parachains.
We believe the lack of parachain growth in the last 12-18 months and the high potential of CorePlay motivate pursuing new options in system chains.

+

Maintaining EVM Contracts on AssetHub may be seen as difficult, and may require Substrate engineers to maintain the EVM pallets and manage the xcContracts.
We believe this cost will be relatively small, based on the proven deployments of Astar and Moonbeam.
The cost will be justified by the potential upside of new DOT revenue from defi/NFT applications on AssetHub and the potential for utilizing Polkadot DA for Polkadot rollups.

+

Testing, Security, and Privacy

+

Testing the mapping between assetIDs and EVM Contracts thoroughly will be critical.

+

Having a complete working OP Stack chain using AssetHub for Kusama (1000) and Blobs on Kusama (3338) would be highly desirable, but is unlikely to be required.

+

Performance, Ergonomics, and Compatibility

+

Performance

+

The weight limit of 50% is expected to be adequate to limit excess smart contract usage at this time.

+

Storage bloat is expected to be kept to a minimum with the nominal 0.01 DOT Existential Deposit.

+

Ergonomics

+

Note that the existential deposit is not 0 DOT but is being lowered from 0.1 DOT to 0.01 DOT.
Many developers routinely deploy their EVM contracts on many different EVM chains in parallel; this non-zero ED may pose problems for some of them.

+

The 0.01 DOT (worth $0.075 at the time of writing) is unlikely to pose a significant issue.

+

Compatibility

+

It is believed that the EVM pallet (as deployed on Moonbeam and Astar) is sufficiently compatible with Ethereum, and that the ED of 0.01 DOT poses negligible issues.

+

The messaging architecture for rollups is not compatible with Polkadot XCM.
It is not clear whether leading rollup platforms (OP Stack, Arbitrum Orbit, Polygon zkEVM) could be made compatible with XCM.

+

Unresolved Questions

+

It is highly desirable to know the throughput of Polkadot DA with popular rollup architectures OP Stack and Arbitrum Orbit.
+This would enable CEXs and EVM L2 builders to choose Polkadot over Ethereum.

+ +

If accepted, this RFC could pave the way for CorePlay on Asset Hub for Polkadot/Kusama, a major component of Polkadot 2.0's smart contract future.

+

The importance of precompiles should also be evaluated.

+ +
+ + +
+
+ + + +
+ + + + + + + + + + + + + + + + + + +
+ + diff --git a/print.html b/print.html index 2b1169d..31f6ed7 100644 --- a/print.html +++ b/print.html @@ -91,7 +91,7 @@ @@ -1902,6 +1902,204 @@ This is equivalent to forcing the Vec<Transaction> to always

None.

None. This is a simple isolated change.

(source)

Table of Contents

-

Stakeholders

+

Stakeholders

All chain teams are stakeholders, as implementing this feature would require timely effort on their side and would impact compatibility with older tools.

This feature is essential for all offline signer tools; many regular signing tools might make use of it. In general, this RFC greatly improves the security of any network implementing it, as many governing keys are used with offline signers.

Implementing this RFC would remove the requirement to maintain metadata portals manually, as the task of metadata verification would be effectively moved to the consensus mechanism of the chain.

-

Explanation

+

Explanation

A detailed description of the metadata shortening and digest process is provided in the metadata-shortener crate (see cargo doc --open and the examples). The algorithms of the process are presented below.

Definitions

Metadata structure

@@ -3622,24 +3820,24 @@ modularized_registry.sort(|a, b| {

A 1-byte increase in transaction size due to signed extension value. Digest is not included in transferred transaction, only in signing process.

Transition overhead

Some slightly out-of-spec systems might experience breaking changes as new content of signed extensions is added. It is important to note that there is no real overhead in processing time nor complexity, as the metadata checking mechanism is voluntary. The only drawbacks are expected for tools that do not implement MetadataV14 self-describing features.

-

Testing, Security, and Privacy

+

Testing, Security, and Privacy

The metadata shortening protocol should be extensively tested on all available examples of metadata before releasing changes to either metadata or shortener. Careful code review should be performed on shortener implementation code to ensure security. The main metadata tree would inevitably be constructed on runtime build which would also ensure correctness.

To be able to recall shortener protocol in case of vulnerability issues, a version byte is included.

-

Performance, Ergonomics, and Compatibility

-

Performance

+

Performance, Ergonomics, and Compatibility

+

Performance

This is a negligibly small pessimization of build time on the chain side. Cold wallet performance would mostly improve, as the metadata validity mechanism that was taking most of the effort in cold wallet support would become trivial.

-

Ergonomics

+

Ergonomics

The proposal was optimized for cold storage wallet usage, with minimal impact on all other parts of the ecosystem.

-

Compatibility

+

Compatibility

The proposal in this form is not compatible with older tools that do not implement proper MetadataV14 self-descriptive features; those would have to be upgraded to include the new signed extensions field.

Prior Art and References

This project was developed upon a Polkadot Treasury grant; relevant development links are located in metadata-offline-project repository.

-

Unresolved Questions

+

Unresolved Questions

  1. How would polkadot-js handle the transition?
  2. Where would non-rust tools like Ledger apps get shortened metadata content?
- +

Changes to code of all cold signers to implement this mechanism SHOULD be done when this is enabled; non-cold signers may perform extra metadata check for better security. Ultimately, signing anything without decoding it with verifiable metadata should become discouraged in all situations where a decision-making mechanism is involved (that is, outside of fully automated blind signers like trade bots or staking rewards payout tools).

(source)

Table of Contents

@@ -3682,11 +3880,11 @@ modularized_registry.sort(|a, b| { AuthorsAlin Dima -

Summary

+

Summary

Propose a way of permuting the availability chunk indices assigned to validators, in the context of recovering available data from systematic chunks, with the purpose of fairly distributing network bandwidth usage.

-

Motivation

+

Motivation

Currently, the ValidatorIndex is always identical to the ChunkIndex. Since the validator array is only shuffled once per session, naively using the ValidatorIndex as the ChunkIndex would pose an unreasonable stress on the first N/3 validators during an entire session, when favouring availability recovery from systematic chunks.

@@ -3694,9 +3892,9 @@ validators during an entire session, when favouring availability recovery from s systematic availability chunks to different validators, based on the relay chain block and core. The main purpose is to ensure fair distribution of network bandwidth usage for availability recovery in general and in particular for systematic chunk holders.

-

Stakeholders

+

Stakeholders

Relay chain node core developers.

-

Explanation

+

Explanation

Systematic erasure codes

An erasure coding algorithm is considered systematic if it preserves the original unencoded data as part of the resulting code. @@ -3860,28 +4058,28 @@ mitigate this problem and will likely be needed in the future for CoreJam and/or Related discussion about updating CandidateReceipt

  • It's a breaking change that requires all validators and collators to upgrade their node version at least once.
  • -

    Testing, Security, and Privacy

    +

    Testing, Security, and Privacy

    Extensive testing will be conducted - both automated and manual. This proposal doesn't affect security or privacy.

    -

    Performance, Ergonomics, and Compatibility

    -

    Performance

    +

    Performance, Ergonomics, and Compatibility

    +

    Performance

    This is a necessary data availability optimisation, as reed-solomon erasure coding has proven to be a top consumer of CPU time in polkadot as we scale up the parachain block size and number of availability cores.

    With this optimisation, preliminary performance results show that CPU time used for reed-solomon coding/decoding can be halved and total POV recovery time decrease by 80% for large POVs. See more here.

    -

    Ergonomics

    +

    Ergonomics

    Not applicable.

    -

    Compatibility

    +

    Compatibility

    This is a breaking change. See upgrade path section above. All validators and collators need to have upgraded their node versions before the feature will be enabled via a governance call.

    Prior Art and References

    See comments on the tracking issue and the in-progress PR

    -

    Unresolved Questions

    +

    Unresolved Questions

    Not applicable.

    - +

    This enables future optimisations for the performance of availability recovery, such as retrieving batched systematic chunks from backers/approval-checkers.

    Appendix A

    @@ -3963,20 +4161,20 @@ dispute scenarios.

    AuthorsPierre Krieger -

    Summary

    +

    Summary

    This RFC proposes to make the mechanism of RFC #8 more generic by introducing the concept of "capabilities".

    Implementations can implement certain "capabilities", such as serving old block headers or being a parachain bootnode.

    The discovery mechanism of RFC #8 is extended to be able to discover nodes of specific capabilities.

    -

    Motivation

    +

    Motivation

The Polkadot peer-to-peer network is made of nodes. Not all these nodes are equal. Some nodes store only the headers of recent blocks, some nodes store all the block headers and bodies since the genesis, some nodes store the storage of all blocks since the genesis, and so on.

    It is currently not possible to know ahead of time (without connecting to it and asking) which nodes have which data available, and it is not easily possible to build a list of nodes that have a specific piece of data available.

    If you want to download for example the header of block 500, you have to connect to a randomly-chosen node, ask it for block 500, and if it says that it doesn't have the block, disconnect and try another randomly-chosen node. In certain situations such as downloading the storage of old blocks, nodes that have the information are relatively rare, and finding through trial and error a node that has the data can take a long time.

    This RFC attempts to solve this problem by giving the possibility to build a list of nodes that are capable of serving specific data.

    -

    Stakeholders

    +

    Stakeholders

    Low-level client developers. People interested in accessing the archive of the chain.

    -

    Explanation

    +

    Explanation

    Reading RFC #8 first might help with comprehension, as this RFC is very similar.

    Please keep in mind while reading that everything below applies for both relay chains and parachains, except mentioned otherwise.

    Capabilities

    @@ -4013,28 +4211,28 @@ If blocks pruning is enabled and the chain is a relay chain, then Substrate unfo

    Implementations must not participate in the main DHT if they don't fulfill the capability yet. For example, a node that is still in the process of warp syncing must not participate in the main DHT. However, assuming that warp syncing doesn't last more than a few seconds, it is acceptable to ignore this requirement in order to avoid complicating implementations too much.

    Drawbacks

    None that I can see.

    -

    Testing, Security, and Privacy

    +

    Testing, Security, and Privacy

    The content of this section is basically the same as the one in RFC 8.

    This mechanism doesn't add or remove any security by itself, as it relies on existing mechanisms.

    Due to the way Kademlia works, it would become the responsibility of the 20 Polkadot nodes whose sha256(peer_id) is closest to the key (described in the explanations section) to store the list of nodes that have specific capabilities. Furthermore, when a large number of providers are registered, only the providers closest to the key are kept, up to a certain implementation-defined limit.

    For this reason, an attacker can abuse this mechanism by randomly generating libp2p PeerIds until they find the 20 entries closest to the key representing the target capability. They are then in control of the list of nodes with that capability. While doing this can in no way be actually harmful, it could lead to eclipse attacks.

    Because the key changes periodically and isn't predictable, and assuming that the Polkadot DHT is sufficiently large, it is not realistic for an attack like this to be maintained in the long term.

    -

    Performance, Ergonomics, and Compatibility

    -

    Performance

    +

    Performance, Ergonomics, and Compatibility

    +

    Performance

    The DHT mechanism generally has a low overhead, especially given that publishing providers is done only every 24 hours.

    Doing a Kademlia iterative query then sending a provider record shouldn't take more than around 50 kiB in total of bandwidth for the parachain bootnode.

    Assuming 1000 nodes with a specific capability, the 20 Polkadot full nodes corresponding to that capability will each receive a sudden spike of a few megabytes of networking traffic when the key rotates. Again, this is relatively negligible. If this becomes a problem, one can add a random delay before a node registers itself to be the provider of the key corresponding to BabeApi_next_epoch.

Maybe the biggest uncertainty is the traffic that the 20 Polkadot full nodes will receive from light clients that want to know the nodes with a capability. If this ever becomes a problem, this value of 20 is an arbitrary constant that can be increased for more redundancy.

    -

    Ergonomics

    +

    Ergonomics

    Irrelevant.

    -

    Compatibility

    +

    Compatibility

    Irrelevant.

    Prior Art and References

    Unknown.

    -

    Unresolved Questions

    +

    Unresolved Questions

    While it fundamentally doesn't change much to this RFC, using BabeApi_currentEpoch and BabeApi_nextEpoch might be inappropriate. I'm not familiar enough with good practices within the runtime to have an opinion here. Should it be an entirely new pallet?

    - +

    This RFC would make it possible to reliably discover archive nodes, which would make it possible to reliably send archive node requests, something that isn't currently possible. This could solve the problem of finding archive RPC node providers by migrating archive-related request to using the native peer-to-peer protocol rather than JSON-RPC.

    If we ever decide to break backwards compatibility, we could divide the "history" and "archive" capabilities in two, between nodes capable of serving older blocks and nodes capable of serving newer blocks. We could even add to the peer-to-peer network nodes that are only capable of serving older blocks (by reading from a database) but do not participate in the head of the chain, and that just exist for historical purposes.

    @@ -4074,19 +4272,19 @@ We could even add to the peer-to-peer network nodes that are only capable of ser AuthorsJiahao Ye -

    Summary

    +

    Summary

Currently, the substrate runtime uses a simple allocator defined on the host side. Every runtime MUST import these allocator functions for normal execution. This situation makes runtime code not versatile enough.

So this RFC proposes to define a new spec for the allocator part, to make the substrate runtime more generic.

    -

    Motivation

    +

    Motivation

Since this RFC defines a new way of allocating, we now regard the old one as the legacy allocator. As the allocator implementation details are defined by the substrate client, a parachain/parathread cannot customize the memory allocation algorithm; the new specification allows the runtime to customize memory allocation and export the allocator functions according to the specification for the client side to use. Another benefit is that some new host functions can be designed without allocating memory on the client, which may bring potential performance improvements. It would also help provide a unified and clean specification if the substrate runtime supports multiple targets (e.g. RISC-V). There is another potential benefit: many programming languages that support compilation to wasm may not be friendly to an external allocator, so this change helps other programming languages enter the substrate runtime ecosystem. The last and most important benefit is that, for offchain context execution, the runtime can fully support pure wasm. What this means is that all imported host functions could be left uncalled (as stub functions), so the various verification logic of the runtime can be converted into pure wasm, which makes it possible for the substrate runtime to run block verification in other environments (such as browsers and other non-substrate environments).

    -

    Stakeholders

    +

    Stakeholders

    No attempt was made at convincing stakeholders.

    -

    Explanation

    +

    Explanation

    Runtime side spec

This section contains a list of functions that should be exported by the substrate runtime.

    We define the spec as version 1, so the following dummy function v1 MUST be exported to hint @@ -4129,16 +4327,16 @@ allocator.

The allocator inside the runtime will make the code size bigger, but not noticeably so. The allocator inside the runtime may slow down (or speed up) the runtime, though again not noticeably.

We can ignore these drawbacks since they are not prominent. Execution efficiency is largely determined by the runtime developer; we cannot prevent poor efficiency if a developer writes inefficient code.

    -

    Testing, Security, and Privacy

    +

    Testing, Security, and Privacy

Keep the legacy allocator runtime test cases, and add a new feature to compile test cases for the v1 allocator spec, then update the test asserts.

Update the template runtime to enable the v1 spec. Once the dev network runs well, the spec can be considered implemented correctly.

    -

    Performance, Ergonomics, and Compatibility

    -

    Performance

    +

    Performance, Ergonomics, and Compatibility

    +

    Performance

As noted above, there is no obvious performance impact. polkadot-sdk could offer a best-practice allocator for all chains, and third parties could also provide customized ones. So performance could improve over time.

    -

    Ergonomics

    +

    Ergonomics

Only runtime developers are affected: they just need to import a new crate and enable a new feature. It may also be convenient for other wasm-target languages to implement.

    -

    Compatibility

    +

    Compatibility

It's 100% compatible. Only some runtime configs and executor configs need to be deprecated.

To support the new runtime spec, we MUST first upgrade the client binary to support the client part of the new spec.

We SHALL add an optional primitive crate to enable the version 1 spec and disable the legacy allocator via a cargo feature. @@ -4148,9 +4346,9 @@ For the first year, we SHALL disable the v1 by default, and enable it by default

  • Move the allocator inside of the runtime
  • Add new allocator design
  • -

    Unresolved Questions

    +

    Unresolved Questions

    None at this time.

    - +

The content discussed in RFC-0004 is basically orthogonal, but the two could still be considered together, and it is preferred that this RFC be implemented first.

This feature could make the substrate runtime easier to support in other languages and to integrate into other ecosystems.

    (source)

    @@ -4181,16 +4379,16 @@ For the first year, we SHALL disable the v1 by default, and enable it by default AuthorsPierre Krieger -

    Summary

    +

    Summary

    Update the runtime-host interface to no longer make use of a host-side allocator.

    -

    Motivation

    +

    Motivation

    The heap allocation of the runtime is currently controlled by the host using a memory allocator on the host side.

    The API of many host functions consists in allocating a buffer. For example, when calling ext_hashing_twox_256_version_1, the host allocates a 32 bytes buffer using the host allocator, and returns a pointer to this buffer to the runtime. The runtime later has to call ext_allocator_free_version_1 on this pointer in order to free the buffer.

    Even though no benchmark has been done, it is pretty obvious that this design is very inefficient. To continue with the example of ext_hashing_twox_256_version_1, it would be more efficient to instead write the output hash to a buffer that was allocated by the runtime on its stack and passed by pointer to the function. Allocating a buffer on the stack in the worst case scenario simply consists in decreasing a number, and in the best case scenario is free. Doing so would save many Wasm memory reads and writes by the allocator, and would save a function call to ext_allocator_free_version_1.
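To illustrate the calling pattern just described, here is a hedged Rust sketch with a hypothetical version-2 signature (the RFC's actual new signatures are listed further below): the runtime stack-allocates the output buffer and passes a pointer, so no host-side allocation, and no later ext_allocator_free_version_1, is needed.

```rust
extern "C" {
    // Hypothetical signature for illustration: the host writes the
    // 32-byte hash directly into the runtime-provided `out_ptr`.
    fn ext_hashing_twox_256_version_2(data_ptr: u32, data_len: u32, out_ptr: u32);
}

fn twox_256(data: &[u8]) -> [u8; 32] {
    // Allocated on the runtime's stack: decrementing a number in the
    // worst case, free in the best case.
    let mut out = [0u8; 32];
    unsafe {
        ext_hashing_twox_256_version_2(
            data.as_ptr() as u32, // wasm32 pointers fit in a u32
            data.len() as u32,
            out.as_mut_ptr() as u32,
        );
    }
    out
}
```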

    Furthermore, the existence of the host-side allocator has become questionable over time. It is implemented in a very naive way, and for determinism and backwards compatibility reasons it needs to be implemented exactly identically in every client implementation. Runtimes make substantial use of heap memory allocations, and each allocation needs to go twice through the runtime <-> host boundary (once for allocating and once for freeing). Moving the allocator to the runtime side, while it would increase the size of the runtime, would be a good idea. But before the host-side allocator can be deprecated, all the host functions that make use of it need to be updated to not use it.

    -

    Stakeholders

    +

    Stakeholders

    No attempt was made at convincing stakeholders.

    -

    Explanation

    +

    Explanation

    New host functions

    This section contains a list of new host functions to introduce.

    (func $ext_storage_read_version_2
    @@ -4397,7 +4595,7 @@ The following other host functions are similarly also considered deprecated:

    This RFC might be difficult to implement in Substrate due to the internal code design. It is not clear to the author of this RFC how difficult it would be.

    Prior Art

    The API of these new functions was heavily inspired by API used by the C programming language.

    -

    Unresolved Questions

    +

    Unresolved Questions

    The changes in this RFC would need to be benchmarked. This involves implementing the RFC and measuring the speed difference.

    It is expected that most host functions are faster or equal speed to their deprecated counterparts, with the following exceptions:

      @@ -4454,10 +4652,10 @@ This would remove the possibility to synchronize older blocks, which is probably LicenseMIT -

      Summary

      +

      Summary

This RFC proposes a dynamic pricing model for the sale of Bulk Coretime on the Polkadot UC. The proposed model updates the regular price of cores for each sale period, by taking into account the number of cores sold in the previous sale, as well as a limit of cores and a target number of cores sold. It ensures a minimum price and limits price growth to a maximum price increase factor, while also giving governance control over the steepness of the price change curve. It allows governance to address challenges arising from changing market conditions and should offer predictable and controlled price adjustments.

      Accompanying visualizations are provided at [1].

      -

      Motivation

      +

      Motivation

RFC-1 proposes periodic Bulk Coretime Sales as a mechanism to sell continuous regions of blockspace (suggested to be 4 weeks in length). A number of Blockspace Regions (compare RFC-1 & RFC-3) are provided for sale to the Broker-Chain each period and shall be sold in a way that provides value-capture for the Polkadot network. The exact pricing mechanism is out of scope for RFC-1 and shall be provided by this RFC.

      A dynamic pricing model is needed. A limited number of Regions are offered for sale each period. The model needs to find the price for a period based on supply and demand of the previous period.

      The model shall give Coretime consumers predictability about upcoming price developments and confidence that Polkadot governance can adapt the pricing model to changing market conditions.

      @@ -4469,7 +4667,7 @@ This would remove the possibility to synchronize older blocks, which is probably
    • The solution SHOULD provide a maximum factor of price increase should the limit of Regions sold per period be reached.
    • The solution should allow governance to control the steepness of the price function
    • -

      Stakeholders

      +

      Stakeholders

      The primary stakeholders of this RFC are:

  • Protocol researchers and developers
      • @@ -4477,7 +4675,7 @@ This would remove the possibility to synchronize older blocks, which is probably
      • Polkadot parachains teams
      • Brokers involved in the trade of Bulk Coretime
      -

      Explanation

      +

      Explanation

      Overview

      The dynamic pricing model sets the new price based on supply and demand in the previous period. The model is a function of the number of Regions sold, piecewise-defined by two power functions.
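A hedged Rust sketch of the shape just described: piecewise in the number of Regions sold, anchored at the old price when sales hit the target, bounded below by the minimum price and above by the maximum increase factor, with governance-tunable steepness exponents. The exact formula is given later in the RFC; this sketch only mirrors the stated properties, and all parameter names are illustrative.

```rust
fn new_price(
    sold: f64,       // Regions sold in the previous period
    target: f64,     // target number of Regions sold
    limit: f64,      // limit of Regions offered per period
    old_price: f64,  // price of the previous period
    min_price: f64,  // floor guaranteed by the model
    max_factor: f64, // max price increase factor when sold == limit
    k_down: f64,     // governance-set steepness below the target
    k_up: f64,       // governance-set steepness above the target
) -> f64 {
    let price = if sold <= target {
        // Below target: decay from old_price toward min_price.
        let u = sold / target; // in [0, 1]
        min_price + (old_price - min_price) * u.powf(k_down)
    } else {
        // Above target: grow from old_price toward max_factor * old_price.
        let u = (sold - target) / (limit - target); // in (0, 1]
        old_price * (1.0 + (max_factor - 1.0) * u.powf(k_up))
    };
    price.max(min_price)
}

fn main() {
    // Hitting the target exactly carries the old price over unchanged.
    let p = new_price(10.0, 10.0, 20.0, 1000.0, 100.0, 2.0, 2.0, 2.0);
    assert!((p - 1000.0).abs() < 1e-9);
}
```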

        @@ -4617,9 +4815,9 @@ OLD_PRICE = 1000 AuthorsPierre Krieger -

        Summary

        +

        Summary

        Improve the networking messages that query storage items from the remote, in order to reduce the bandwidth usage and number of round trips of light clients.

        -

        Motivation

        +

        Motivation

        Clients on the Polkadot peer-to-peer network can be divided into two categories: full nodes and light clients. So-called full nodes are nodes that store the content of the chain locally on their disk, while light clients are nodes that don't. In order to access for example the balance of an account, a full node can do a disk read, while a light client needs to send a network message to a full node and wait for the full node to reply with the desired value. This reply is in the form of a Merkle proof, which makes it possible for the light client to verify the exactness of the value.

        Unfortunately, this network protocol is suffering from some issues:

          @@ -4629,9 +4827,9 @@ OLD_PRICE = 1000

        Once Polkadot and Kusama will have transitioned to state_version = 1, which modifies the format of the trie entries, it will be possible to generate Merkle proofs that contain only the hashes of values in the storage. Thanks to this, it is already possible to prove the existence of a key without sending its entire value (only its hash), or to prove that a value has changed or not between two blocks (by sending just their hashes). Thus, the only reason why aforementioned issues exist is because the existing networking messages don't give the possibility for the querier to query this. This is what this proposal aims at fixing.

        -

        Stakeholders

        +

        Stakeholders

        This is the continuation of https://github.com/w3f/PPPs/pull/10, which itself is the continuation of https://github.com/w3f/PPPs/pull/5.

        -

        Explanation

        +

        Explanation

        The protobuf schema of the networking protocol can be found here: https://github.com/paritytech/substrate/blob/5b6519a7ff4a2d3cc424d78bc4830688f3b184c0/client/network/light/src/schema/light.v1.proto

        The proposal is to modify this protocol in this way:

        @@ -11,6 +11,7 @@ message Request {
        @@ -4693,22 +4891,22 @@ Also note that child tries aren't considered as descendants of the main trie whe
         

        This proposal doesn't handle one specific situation: what if a proof containing a single specific item would exceed the response size limit? For example, if the response size limit was 1 MiB, querying the runtime code (which is typically 1.0 to 1.5 MiB) would be impossible as it's impossible to generate a proof less than 1 MiB. The response size limit is currently 16 MiB, meaning that no single storage item must exceed 16 MiB.

        Unfortunately, because it's impossible to verify a Merkle proof before having received it entirely, parsing the proof in a streaming way is also not possible.

        A way to solve this issue would be to Merkle-ize large storage items, so that a proof could include only a portion of a large storage item. Since this would require a change to the trie format, it is not realistically feasible in a short time frame.

        -

        Testing, Security, and Privacy

        +

        Testing, Security, and Privacy

        The main security consideration concerns the size of replies and the resources necessary to generate them. It is for example easily possible to ask for all keys and values of the chain, which would take a very long time to generate. Since responses to this networking protocol have a maximum size, the replier should truncate proofs that would lead to the response being too large. Note that it is already possible to send a query that would lead to a very large reply with the existing network protocol. The only thing that this proposal changes is that it would make it less complicated to perform such an attack.

Implementers of the replier side should be careful to detect early on when a reply would exceed the maximum reply size, rather than unconditionally generating a reply, as this could consume a very large amount of CPU, disk I/O, and memory. Existing implementations might currently be accidentally protected from such an attack thanks to the fact that requests have a maximum size, and thus that the list of keys in the query is bounded. After this proposal, this accidental protection would no longer exist.
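As a loose illustration of that advice, the following sketch (all type and function names are hypothetical, not an existing API) aborts proof construction as soon as the running size would exceed the limit, instead of building the full reply first:

const MAX_RESPONSE_SIZE: usize = 16 * 1024 * 1024; // the current 16 MiB limit

struct ProofEntry {
    encoded: Vec<u8>, // one already-encoded trie node
}

fn build_truncated_proof(entries: impl Iterator<Item = ProofEntry>) -> (Vec<Vec<u8>>, bool) {
    let mut proof = Vec::new();
    let mut size = 0usize;
    for entry in entries {
        if size + entry.encoded.len() > MAX_RESPONSE_SIZE {
            // Truncate here: no further CPU, disk I/O, or memory is spent.
            return (proof, true);
        }
        size += entry.encoded.len();
        proof.push(entry.encoded);
    }
    (proof, false)
}

The boolean mirrors how a client would treat the reply: a truncated proof is handled the same way as a server that simply didn't answer the remainder of the request.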

        Malicious server nodes might truncate Merkle proofs even when they don't strictly need to, and it is not possible for the client to (easily) detect this situation. However, malicious server nodes can already do undesirable things such as throttle down their upload bandwidth or simply not respond. There is no need to handle unnecessarily truncated Merkle proofs any differently than a server simply not answering the request.

Performance, Ergonomics, and Compatibility

Performance

        It is unclear to the author of the RFC what the performance implications are. Servers are supposed to have limits to the amount of resources they use to respond to requests, and as such the worst that can happen is that light client requests become a bit slower than they currently are.

Ergonomics

        Irrelevant.

Compatibility

The prior networking protocol is maintained for now. The older version of this protocol could be removed in the distant future.

        Prior Art and References

        None. This RFC is a clean-up of an existing mechanism.

Unresolved Questions

        None

Future Directions and Related Material

        The current networking protocol could be deprecated in a long time. Additionally, the current "state requests" protocol (used for warp syncing) could also be deprecated in favor of this one.

        (source)

        Table of Contents

Authors: Jonas Gehrlein

Summary

The Polkadot UC will generate revenue from the sale of available Coretime. The question then arises: how should we handle these revenues? Broadly, there are two reasonable paths – burning the revenue, thereby removing it from total issuance, or diverting it to the Treasury. This Request for Comment (RFC) presents arguments favoring burning as the preferred mechanism for handling revenues from Coretime sales.

Motivation

How to handle the revenue accrued from Coretime sales is an important economic question that influences the value of DOT and should be properly discussed before deciding on either option. Now is the best time to start this discussion.

Stakeholders

        Polkadot DOT token holders.

Explanation

This RFC discusses the potential benefits of burning the revenue accrued from Coretime sales instead of diverting it to the Treasury. The arguments for this are as follows.

        It's in the interest of the Polkadot community to have a consistent and predictable Treasury income, because volatility in the inflow can be damaging, especially in situations when it is insufficient. As such, this RFC operates under the presumption of a steady and sustainable Treasury income flow, which is crucial for the Polkadot community's stability. The assurance of a predictable Treasury income, as outlined in a prior discussion here, or through other equally effective measures, serves as a baseline assumption for this argument.

Consequently, we need not concern ourselves with this particular issue here. This naturally raises the question: why should we introduce additional volatility to the Treasury by aligning it with the variable Coretime sales? It's worth noting that Coretime revenues often exhibit an inverse relationship with periods when Treasury spending should ideally be ramped up. During periods of low Coretime utilization (indicated by lower revenue), the Treasury should spend more on projects and endeavours to increase the demand for Coretime. This pattern underscores that Coretime sales, by their very nature, are an inconsistent and unpredictable source of funding for the Treasury. Given the importance of maintaining a steady and predictable inflow, it's unnecessary to rely on another volatile mechanism. Some might argue that we could have both: a steady inflow (from inflation) and some added bonus from Coretime sales, but burning the revenue would offer further benefits, as described below.

Authors: Joe Petrowski

Summary

The Assets pallet includes a notion of asset "sufficiency". Sufficient assets, when transferred to a non-existent account, will provide a sufficient reference that creates the account. That is, the asset is sufficient to justify an account's existence, even in lieu of the existential deposit of DOT.

        While convenient for sufficient assets, the vast majority of assets are not sufficient. This RFC proposes an opt-in means for users to create accounts from non-sufficient assets by swapping a portion of the first transfer to acquire the existential deposit of DOT.

Motivation

The network can make an asset "sufficient" via governance call. However, the network is still placing trust in the asset's administrator (which may be a third-party account or a protocol). The asset's administrator could mint the asset and create many accounts without paying an adequate … unlimited number of accounts.

• The system SHOULD allow users to hold and transact in any asset without first and separately acquiring DOT.
Stakeholders

      • Polkadot users
      • Wallet and UI/UX developers
Explanation

By using the Asset Conversion protocol, the system can convert any asset to DOT as long as there is a path from that asset to DOT. As such, we can rely on the economic security provided by the existential deposit of DOT by simply converting some amount of the asset being transferred to the … accounts or asset insufficiency.
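A rough sketch of the opt-in path described above (every name, the planck-denominated deposit value, and the flat price quote are illustrative assumptions; the real implementation would query the Asset Conversion pool and live in the Assets pallet):

const EXISTENTIAL_DEPOSIT: u128 = 10_000_000_000; // assumed ED in plancks

// Ceiling division: the asset amount needed to buy `dot` plancks at a pool
// price of price_num/price_den asset units per planck.
fn asset_needed_for(dot: u128, price_num: u128, price_den: u128) -> u128 {
    (dot * price_num + price_den - 1) / price_den
}

// Returns (asset credited to dest, DOT reserved as its existential deposit).
fn transfer_creating_account(
    amount: u128,
    dest_exists: bool,
    opt_in: bool,
    price_num: u128,
    price_den: u128,
) -> Result<(u128, u128), &'static str> {
    if dest_exists || !opt_in {
        return Ok((amount, 0)); // ordinary transfer, no swap performed
    }
    let swapped = asset_needed_for(EXISTENTIAL_DEPOSIT, price_num, price_den);
    let remainder = amount
        .checked_sub(swapped)
        .ok_or("transfer too small to fund the existential deposit")?;
    Ok((remainder, EXISTENTIAL_DEPOSIT))
}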

      Drawbacks

      This solution would automatically convert some amount of another asset to DOT when acquiring DOT was perhaps not the recipient's intent. However, this is opt-in.

Testing, Security, and Privacy

      An attacker that wanted to bloat state by sending worthless assets to many new accounts would need to put the DOT into an Asset Conversion pool with the asset (thereby making the asset not worthless with respect to DOT). This would provide the same cost and economic security as just sending the existential deposit of DOT to all the new accounts. This approach is no less secure than the DOT-only existential deposit system.

      This proposal introduces no privacy enhancements or reductions.

Performance, Ergonomics, and Compatibility

Performance

      The function to transfer assets will need to charge a larger weight at dispatch to account for the possibility of needing to perform a swap for DOT. It could return any unused weight.

      The implementation could also include witness data as to the destination account's existence so that the block builder can appropriately budget for the weight.

Ergonomics

      This proposal would benefit the ergonomics of the system for end users by allowing all assets to create destination accounts when needed.

Compatibility

      This change would require changes to the Assets pallet to add the new account creation path.

      Prior Art and References

      Discussions with:


    • SR Labs auditors, in particular Jakob Lell and Louis Merlin
• The monthly Asset Conversion ecosystem call, with particular inspiration from Jakub Gregus
Unresolved Questions

    None at this time.

Future Directions and Related Material

    Not applicable.

    (source)

    Table of Contents

Authors: Oliver Tale-Yazdi

Summary

    Introduces breaking changes to the BlockBuilder and Core runtime APIs.
    A new function BlockBuilder::last_inherent is introduced and the return value of Core::initialize_block is changed to an enum.
    The versions of both APIs are bumped; BlockBuilder to 7 and Core to 5.

Motivation

There are three main features that motivate this RFC:

    1. Multi-Block-Migrations: These make it possible to split a migration over multiple blocks.
2. …
    3. The runtime can tell the block author to not include any transactions in the block.
    4. The runtime can execute logic right after all pallet-provided inherents have been applied.
Stakeholders

    • Substrate Maintainers: They have to implement this, including tests, audit and maintenance burden.
    • Polkadot Parachain Teams: They also have to adapt to the breaking changes but then eventually have multi-block migrations available.
Explanation

    Core::initialize_block

    This runtime API function is changed from returning () to ExtrinsicInclusionMode:

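The enum definition is elided from this diff. Based on the names settled on in the Unresolved Questions section below (ExtrinsicInclusionMode, with modes Normal and Minimal), a plausible sketch is:

pub enum ExtrinsicInclusionMode {
    /// All extrinsics are allowed in this block (today's default behavior).
    Normal,
    /// Only inherents are allowed; the block author must not include transactions.
    Minimal,
}

The block author would inspect the mode returned by Core::initialize_block and, on Minimal, skip transactions and optional hooks.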
     

    3. System::PostInherents can be done in the same manner as poll.

    Drawbacks

As noted in the review comments: this cements some assumptions about the order of inherents into the BlockBuilder traits. It was criticized for being too rigid in its assumptions.

Testing, Security, and Privacy

    Compliance of a block author can be tested by adding specific code to the last_inherent hook and checking that it always executes. The new logic of initialize_block can be tested by checking that the block-builder will skip transactions and optional hooks when OnlyInherents is returned.

    Security: n/a

    Privacy: n/a

Performance, Ergonomics, and Compatibility

Performance

    The performance overhead is minimal in the sense that no clutter was added after fulfilling the requirements. A slight performance penalty is expected from invoking last_inherent once per block.

Ergonomics

    The new interface allows for more extensible runtime logic. In the future, this will be utilized for multi-block-migrations which should be a huge ergonomic advantage for parachain developers.

Compatibility

    The advice here is OPTIONAL and outside of the RFC. To not degrade user experience, it is recommended to ensure that an updated node can still import historic blocks.

    Prior Art and References

  • There is no module hook after inherents and before transactions
Unresolved Questions

Please suggest a better name for BlockExecutiveMode. We already tried: RuntimeExecutiveMode, ExtrinsicInclusionMode. The names of the modes Normal and Minimal were also called AllExtrinsics and OnlyInherents, so if you have naming preferences, please post them.
=> renamed to ExtrinsicInclusionMode

    Is post_inherents more consistent instead of last_inherent? Then we should change it.
    => renamed to last_inherent

Future Directions and Related Material

    The long-term future here is to move the block building logic into the runtime. Currently there is a tight dance between the block author and the runtime; the author has to call into different runtime functions in quick succession and exact order. Any misstep causes the built block to be invalid.
    This can be unified and simplified by moving both parts of the logic into the runtime.

    (source)

Authors: Jonas Gehrlein

Summary

    This document is a proposal for restructuring the bulk markets in the Polkadot UC's coretime allocation system to improve efficiency and fairness. The proposal suggests separating the BULK_PERIOD into MARKET_PERIOD and RENEWAL_PERIOD, allowing for a market-driven price discovery through a clearing price Dutch auction during the MARKET_PERIOD followed by renewal offers at the MARKET_PRICE during the RENEWAL_PERIOD. The new system ensures synchronicity between renewal and market prices, fairness among all current tenants, and efficient price discovery, while preserving price caps to provide security for current tenants. It seeks to start a discussion about the possibility of long-term leases.

Motivation

    While the initial RFC-1 has provided a robust framework for Coretime allocation within the Polkadot UC, this proposal builds upon its strengths and uses many provided building blocks to address some areas that could be further improved.

    In particular, this proposal introduces the following changes:

…

The premise of this proposal is to reduce complexity by introducing a common price (that develops relative to the capacity consumption of the Polkadot UC), while still allowing for market forces to add efficiency. Long-term lease owners still receive priority IF they can pay (close to) the market price. This prevents a situation where the renewal price significantly diverges from the market price, which would allow cores to be captured. While maximum price increase certainty might seem contradictory to efficient price discovery, the proposed model aims to balance these elements, utilizing market forces to determine the price and allocate cores effectively within certain bounds. It must be stated that potential price increases remain predictable (in the worst case) but could be higher than in the originally proposed design. The argument remains, however, that we need to allow market forces to affect all prices for efficient Coretime pricing and allocation.

Ultimately, the framework proposed here adheres to all requirements stated in RFC-1.

Stakeholders

    Primary stakeholder sets are:

    • Protocol researchers and developers, largely represented by the Polkadot Fellowship and Parity Technologies' Engineering division.
    • Polkadot Parachain teams both present and future, and their users.
    • Polkadot DOT token holders.
Explanation

    Bulk Markets

    The BULK_PERIOD has been restructured into two primary segments: the MARKET_PERIOD and RENEWAL_PERIOD, along with an auxiliary SETTLEMENT_PERIOD. This latter period doesn't necessitate any actions from the coretime system chain, but it facilitates a more efficient allocation of coretime in secondary markets. A significant departure from the original proposal lies in the timing of renewals, which now occur post-market phase. This adjustment aims to harmonize renewal prices with their market counterparts, ensuring a more consistent and equitable pricing model.
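As a toy illustration of the clearing-price Dutch auction used during the market period (this sketch and all its names are hypothetical, not the coretime chain's actual logic): the price descends step by step, bidders count as demand once the price falls to their valuation, and every winner pays the same final clearing price.

// Returns the common price paid by all winners, or None if even a price of
// zero cannot fill all `cores` (step must be non-zero).
fn clearing_price(valuations: &[u128], cores: usize, start_price: u128, step: u128) -> Option<u128> {
    let mut price = start_price;
    loop {
        // Demand at this price: bidders whose valuation meets or exceeds it.
        let demand = valuations.iter().filter(|v| **v >= price).count();
        if demand >= cores {
            return Some(price); // the clearing price: every winner pays this
        }
        price = price.checked_sub(step)?; // price exhausted: undersubscribed
    }
}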

    Market Period (14 days)


    Prior Art and References

    This RFC builds extensively on the available ideas put forward in RFC-1.

    Additionally, I want to express a special thanks to Samuel Haefner and Shahar Dobzinski for fruitful discussions and helping me structure my thoughts.

Unresolved Questions

The technical feasibility needs to be assessed.

    (source)

    Table of Contents

Authors: ChaosDAO

Summary

    This RFC proposes a change to the duration of the confirmation period for the treasurer track from 3 hours to at least 48 hours.

Motivation

    Track parameters for Polkadot OpenGov should be configured in a way that their "difficulty" increases relative to the power associated with their respective origin. When we look at the confirmation periods for treasury based tracks, we can see that this is clearly the case - with the one notable exception to the trend being the treasurer track:

…

    The confirmation period is one of the last lines of defence for the collective Polkadot stakeholders to react to a potentially bad referendum and vote NAY in order for its confirmation period to be aborted.

Since the power / privilege level of the treasurer track is greater than that of the big spender track – their confirmation periods should be either equal, or the treasurer track's should be higher (note: currently the big spender track has a longer confirmation period than even the root track).

Stakeholders

    The primary stakeholders of this RFC are:

    • DOT token holders – as this affects the protocol's treasury
    • Leemo - expressed interest to change this parameter
    • Paradox - expressed interest to change this parameter
Explanation

    This RFC proposes to change the duration of the confirmation period for the treasurer track. In order to achieve that, the confirm_period parameter for the treasurer track in runtime/polkadot/src/governance/tracks.rs must be changed.

    Currently it is set to confirm_period: 3 * HOURS

    It should be changed to confirm_period: 48 * HOURS as a minimum.
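For illustration, the change amounts to editing a single field of the treasurer entry in that file (a fragment only; the surrounding pallet_referenda::TrackInfo fields and the track id are elided here and stay unchanged):

pallet_referenda::TrackInfo {
    name: "treasurer",
    // ... all other parameters unchanged ...
    confirm_period: 48 * HOURS, // was: 3 * HOURS
    // ...
},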

    It may make sense for it to be changed to a value greater than 48 hours since the treasurer track has more power than the big spender track (48 hour confirmation period); however, the root track's confirmation period is 24 hours. 48 hours may be on the upper bounds of a trade-off between security and flexibility.

    Drawbacks

    The drawback of changing the treasurer track's confirmation period would be that the lifecycle of a referendum submitted on the treasurer track would ultimately be longer. However, the security of the protocol and its treasury should take priority here.

Testing, Security, and Privacy

    This change will enhance / improve the security of the protocol as it relates to its treasury. The confirmation period is one of the last lines of defence for the collective Polkadot stakeholders to react to a potentially bad referendum and vote NAY in order for its confirmation period to be aborted. It makes sense for the treasurer track's confirmation period duration to be either equal to, or higher than, the big spender track confirmation period.

Performance, Ergonomics, and Compatibility

Performance

    This is a simple change (code wise) which should not affect the performance of the Polkadot protocol, outside of increasing the duration of the confirmation period on the treasurer track.

    Ergonomics & Compatibility

    If the proposal alters exposed interfaces to developers or end-users, which types of usage patterns have been optimized for?


    Prior Art and References

    N/A

Unresolved Questions

    The proposed change to the confirmation period duration for the treasurer track is to set it to 48 hours. This is equal to the current confirmation period for the big spender track.

    Typically it seems that track parameters increase in difficulty (duration, etc.) based on the power level of their associated origin.

    The longest confirmation period is that of the big spender, at 48 hours. There may be value in discussing whether or not the treasurer track confirmation period should be longer than 48 hours – a discussion of the trade-offs between security vs flexibility/agility.

    As a side note, the root track confirmation period is 24 hours.

Future Directions and Related Material

    This RFC hopefully reminds the greater Polkadot community that it is possible to submit changes to the parameters of Polkadot OpenGov, and the greater protocol as a whole through the RFC process.

    (source)

    Table of Contents

Track Description | Confirmation Period Duration
Small Tipper | 10 Min
…

Authors: ChaosDAO

Summary

    This RFC proposes to make modifications to voting power delegations as part of the Conviction Voting pallet. The changes being proposed include:

    1. Allow a Delegator to vote independently of their Delegate if they so desire.
2. …
    3. Make a change so that when a delegate votes abstain their delegated votes also vote abstain.
    4. Allow a Delegator to delegate/ undelegate their votes for all tracks with a single call.
Motivation

    It has become clear since the launch of OpenGov that there are a few common tropes which pop up time and time again:

    1. The frequency of referenda is often too high for network participants to have sufficient time to review, comprehend, and ultimately vote on each individual referendum. This means that these network participants end up being inactive in on-chain governance.
2. …
    3. Delegating votes for all tracks currently requires long batched calls which result in high fees for the Delegator - resulting in a reluctance from many to delegate their votes.

    We believe (based on feedback from token holders with a larger stake in the network) that if there were some changes made to delegation mechanics, these larger stake holders would be more likely to delegate their voting power to active network participants – thus greatly increasing the support turnout.

Stakeholders

    The primary stakeholders of this RFC are:

    • The Polkadot Technical Fellowship who will have to research and implement the technical aspects of this RFC
    • DOT token holders in general
Explanation

    This RFC proposes to make 4 changes to the convictionVoting pallet logic in order to improve the user experience of those delegating their voting power to another account.

1. …

    Drawbacks

We do not foresee any drawbacks from implementing these changes. If anything, we believe that this should help to increase overall voter turnout (by means of delegation), which we see as a net positive.

Testing, Security, and Privacy

    We feel that the Polkadot Technical Fellowship would be the most competent collective to identify the testing requirements for the ideas presented in this RFC.

Performance, Ergonomics, and Compatibility

Performance

    This change may add extra chain storage requirements on Polkadot, especially with respect to nested delegations.

    Ergonomics & Compatibility

    The change to add nested delegations may affect governance interfaces such as Nova Wallet who will have to apply changes to their indexers to support nested delegations. It may also affect the Polkadot Delegation Dashboard as well as Polkassembly & SubSquare.

    We want to highlight the importance for ecosystem builders to create a mechanism for indexers and wallets to be able to understand that changes have occurred such as increasing the pallet version, etc.

    Prior Art and References

    N/A

Unresolved Questions

    N/A

Future Directions and Related Material

Additionally, we would like to re-open the conversation about the potential for free delegations. This was discussed by Dr Gavin Wood at Sub0 2022, and we feel this would go a long way towards increasing the number of network participants that are delegating: https://youtu.be/hSoSA6laK3Q?t=526

    Overall, we strongly feel that delegations are a great way to increase voter turnout, and the ideas presented in this RFC would hopefully help in that aspect.

    (source)

Authors: Vedhavyas Singareddi

Summary

At the moment, we have the state_version field on RuntimeVersion, which determines which state version is used for the storage. We have a use case where we want the extrinsics root to be derived using StateVersion::V1. Without defining a new field under RuntimeVersion, we would like to propose renaming state_version to system_version, which can then be used to derive both the storage and extrinsic state versions.

Motivation

Since the extrinsic state version is always StateVersion::V0, deriving the extrinsic root requires the full extrinsic data. This is problematic when we need to verify the extrinsics root and the extrinsics are large. This problem is further explored in https://github.com/polkadot-fellows/RFCs/issues/19

One of the main challenges here is that some extrinsics could be big enough that this … included in the Consensus block due to the Block's weight restriction. If the extrinsic root is derived using StateVersion::V1, then we do not need to pass the full extrinsic data but rather, at maximum, 32 bytes of extrinsic data.
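To illustrate why StateVersion::V1 helps (a sketch of the idea, not the actual trie code; blake2_256 is the 32-byte hash from sp_core::hashing): under state version 1, any trie value longer than 32 bytes is represented in its node by its hash, so a proof over the extrinsics trie needs at most 32 bytes per extrinsic instead of the full data.

use sp_core::hashing::blake2_256;

fn trie_value_v1(encoded_extrinsic: &[u8]) -> Vec<u8> {
    if encoded_extrinsic.len() <= 32 {
        encoded_extrinsic.to_vec() // short values stay inline, as in V0
    } else {
        blake2_256(encoded_extrinsic).to_vec() // only the 32-byte hash enters the node
    }
}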

Stakeholders

    • Technical Fellowship, in its role of maintaining system runtimes.
Explanation

In order to use a project-specific StateVersion for extrinsic roots, we proposed an implementation that introduced a parameter to frame_system::Config, but that unfortunately did not feel correct.
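The pub const VERSION: RuntimeVersion declaration that accompanied this text is elided from this diff; a plausible sketch of a runtime's version constant after the rename (all values are placeholders) is:

pub const VERSION: RuntimeVersion = RuntimeVersion {
    spec_name: create_runtime_str!("example-chain"),
    impl_name: create_runtime_str!("example-chain"),
    authoring_version: 1,
    spec_version: 100,
    impl_version: 1,
    apis: RUNTIME_API_VERSIONS,
    transaction_version: 1,
    // Previously `state_version: 1`; the renamed field now selects the state
    // version for both the storage root and the extrinsics root.
    system_version: 1,
};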

    Drawbacks

There should be no drawbacks, as it would replace state_version with the same behavior, but documentation should be updated so that chains know which system_version to use.

Testing, Security, and Privacy

As far as the author is aware, this should not have any impact on security or privacy.

Performance, Ergonomics, and Compatibility

These changes should be compatible for existing chains if they use their state_version value for system_version.

Performance

    I do not believe there is any performance hit with this change.

Ergonomics

This does not break any exposed APIs.

Compatibility

    This change should not break any compatibility.

    Prior Art and References

We proposed introducing a similar change via a parameter to frame_system::Config but did not feel that was the correct way to introduce this change.

Unresolved Questions

    I do not have any specific questions about this change at the moment.

Future Directions and Related Material

In my opinion, this change is pretty self-contained and no future work will be necessary.

    (source)

    Table of Contents

Authors: Sebastian Kunert

Summary

    This RFC proposes a new host function for parachains, storage_proof_size. It shall provide the size of the currently recorded storage proof to the runtime. Runtime authors can use the proof size to improve block utilization by retroactively reclaiming unused storage weight.

Motivation

    The number of extrinsics that are included in a parachain block is limited by two constraints: execution time and proof size. FRAME weights cover both concepts, and block-builders use them to decide how many extrinsics to include in a block. However, these weights are calculated ahead of time by benchmarking on a machine with reference hardware. The execution-time properties of the state-trie and its storage items are unknown at benchmarking time. Therefore, we make some assumptions about the state-trie:

    • Trie Depth: We assume a trie depth to account for intermediary nodes.

      These pessimistic assumptions lead to an overestimation of storage weight, negatively impacting block utilization on parachains.

      In addition, the current model does not account for multiple accesses to the same storage items. While these repetitive accesses will not increase storage-proof size, the runtime-side weight monitoring will account for them multiple times. Since the proof size is completely opaque to the runtime, we can not implement retroactive storage weight correction.

      A solution must provide a way for the runtime to track the exact storage-proof size consumed on a per-extrinsic basis.

Stakeholders

      • Parachain Teams: They MUST include this host function in their runtime and node.
      • Light-client Implementors: They SHOULD include this host function in their runtime and node.
Explanation

      This RFC proposes a new host function that exposes the storage-proof size to the runtime. As a result, runtimes can implement storage weight reclaiming mechanisms that improve block utilization.

      This RFC proposes the following host function signature:

fn ext_storage_proof_size_version_1() -> u64;

      The host function MUST return an unsigned 64-bit integer value representing the current proof size. In block-execution and block-import contexts, this function MUST return the current size of the proof. To achieve this, parachain node implementors need to enable proof recording for block imports. In other contexts, this function MUST return 18446744073709551615 (u64::MAX), which represents disabled proof recording.
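A sketch of how a runtime might consume this host function for weight reclaiming (the wrapper and helper names are assumptions for illustration; only the u64::MAX sentinel and the host function itself come from this RFC):

const PROOF_RECORDING_DISABLED: u64 = u64::MAX;

// Hypothetical wrapper around the host call ext_storage_proof_size_version_1.
fn storage_proof_size() -> u64 {
    unimplemented!("provided by the host in a real runtime")
}

// Measure the proof size actually consumed by `f`, e.g. one extrinsic, so the
// runtime can retroactively refund over-estimated storage weight.
fn consumed_proof_size<R>(f: impl FnOnce() -> R) -> (R, Option<u64>) {
    let before = storage_proof_size();
    let result = f();
    let after = storage_proof_size();
    if before == PROOF_RECORDING_DISABLED || after == PROOF_RECORDING_DISABLED {
        (result, None) // proof recording is disabled in this context
    } else {
        (result, Some(after.saturating_sub(before)))
    }
}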

Performance, Ergonomics, and Compatibility

Performance

      Parachain nodes need to enable proof recording during block import to correctly implement the proposed host function. Benchmarking conducted with balance transfers has shown a performance reduction of around 0.6% when proof recording is enabled.

Ergonomics

      The host function proposed in this RFC allows parachain runtime developers to keep track of the proof size. Typical usage patterns would be to keep track of the overall proof size or the difference between subsequent calls to the host function.

Compatibility

      Parachain teams will need to include this host function to upgrade.

      Prior Art and References

Authors: Bastian Köcher

Summary

When rotating/generating the SessionKeys of a node, the node calls into the runtime using the SessionKeys::generate_session_keys runtime api. This runtime api function needs to be changed to add an extra parameter owner and to change the return value to also include the proof of ownership. The owner should be the account id of the account setting the SessionKeys on chain, to allow the on-chain logic to verify the proof. The on-chain logic is then able to verify possession of the private keys of the SessionKeys using the proof.

Motivation

When a user sets new SessionKeys on chain, the chain currently cannot ensure that the user actually has control over the private keys of the SessionKeys. With this RFC applied, the chain is able to ensure that the user actually is in possession of the private keys.

Stakeholders

        • Polkadot runtime implementors
        • Polkadot node implementors
        • Validator operators
Explanation

        We are first going to explain the proof format being used:
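The Proof type itself is elided from this diff. As a loose sketch of the shape such a proof could take, under the assumption that possession is demonstrated by signing the owner's account id with each session key (all names here are illustrative, not the RFC's exact definition):

type Signature = [u8; 64]; // e.g. an sr25519 or ed25519 signature

struct Proof {
    // One signature per public key contained in the SessionKeys, each over
    // (at least) the owner account id passed to generate_session_keys.
    signatures: Vec<Signature>,
}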


        Drawbacks

Validator operators need to pass their account id when rotating their session keys in a node. This will require updating some high-level docs and making users familiar with the slightly changed ergonomics.

Testing, Security, and Privacy

Testing of the new changes is quite easy, as it only requires passing an appropriate owner for the current testing context. The changes to the proof generation and verification were audited to ensure they are correct.

Performance, Ergonomics, and Compatibility

Performance

This does not have any impact on overall performance; only setting the SessionKeys will require more weight.

Ergonomics

        If the proposal alters exposed interfaces to developers or end-users, which types of usage patterns have been optimized for?

Compatibility

Introduces a new version of the SessionKeys runtime api. Thus, nodes should be updated before a runtime containing these changes is enacted, otherwise they will fail to generate session keys.

        Prior Art and References

        None.

Unresolved Questions

        None.

Future Directions and Related Material

        Substrate implementation of the RFC.

        (source)

        Table of Contents

Authors: Pierre Krieger

Summary

        Rather than enforce a limit to the total memory consumption on the client side by loading the value at :heappages, enforce that limit on the runtime side.

Motivation

        From the early days of Substrate up until recently, the runtime was present in two forms: the wasm runtime (wasm bytecode passed through an interpreter) and the native runtime (native code directly run by the client).

Since the wasm runtime has a lower amount of available memory (4 GiB maximum) compared to the native runtime, and in order to ensure that the wasm and native runtimes always produce the same outcome, it was necessary to clamp the amount of memory available to both runtimes to the same value.

In order to achieve this, a special storage key (a "well-known" key) :heappages was introduced; it represents the number of "wasm pages" (one page equals 64 KiB) of memory available to the memory allocator of the runtime. If this storage key is absent, it defaults to 2048, which corresponds to 128 MiB.

The native runtime has since disappeared, but the concept of "heap pages" still exists. This RFC proposes a simplification to the design of Polkadot by removing the concept of "heap pages" as currently known, and proposes alternative ways to achieve the goal of limiting the amount of memory available.

Stakeholders

        Client implementers and low-level runtime developers.

Explanation

        This RFC proposes the following changes to the client:

        • The client no longer considers :heappages as special.
This is most likely not a problem, as storage values aren't supposed to be larger than a few megabytes at the very maximum.

In the unfortunate event where the runtime runs out of memory, path B would make it more difficult to relax the memory limit, as we would need to re-upload the entire Wasm, compared to updating only :heappages in path A or before this RFC. In the case where the runtime runs out of memory only in the specific event where the Wasm runtime is modified, this could brick the chain. However, this situation is no different from the thousands of other ways that a bug in the runtime can brick a chain, and there's no reason to be particularly worried about this situation.

Testing, Security, and Privacy

          This RFC would reduce the chance of a consensus issue between clients. The :heappages are a rather obscure feature, and it is not clear what happens in some corner cases such as the value being too large (error? clamp?) or malformed. This RFC would completely erase these questions.

Performance, Ergonomics, and Compatibility

Performance

In the case of path A, it is unclear how performance would be affected. Path A consists of moving client-side operations to the runtime without changing these operations, and as such performance differences are expected to be minimal. Overall, we're talking about one addition/subtraction per malloc and per free, so this is more than likely completely negligible.
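A self-contained sketch of that bookkeeping (a hypothetical allocator wrapper, not the actual runtime allocator): the runtime-side limit of path A costs exactly one addition per malloc and one subtraction per free.

struct LimitedAllocator {
    limit: usize,  // the configured memory limit
    in_use: usize, // bytes currently allocated
}

impl LimitedAllocator {
    fn malloc(&mut self, size: usize) -> Result<(), ()> {
        if self.in_use + size > self.limit {
            return Err(()); // fail the allocation, as when heap pages run out
        }
        self.in_use += size; // the one addition per malloc
        Ok(())
    }

    fn free(&mut self, size: usize) {
        self.in_use -= size; // the one subtraction per free
    }
}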

In the case of paths B and C, the performance impact would be a net positive, as this RFC strictly removes things.

Ergonomics

          This RFC would isolate the client and runtime more from each other, making it a bit easier to reason about the client or the runtime in isolation.

Compatibility

          Not a breaking change. The runtime-side changes can be applied immediately (without even having to wait for changes in the client), then as soon as the runtime is updated, the client can be updated without any transition period. One can even consider updating the client before the runtime, as it corresponds to path C.

          Prior Art and References

          None.

Unresolved Questions

          None.

Future Directions and Related Material

          This RFC follows the same path as https://github.com/polkadot-fellows/RFCs/pull/4 by scoping everything related to memory allocations to the runtime.

          diff --git a/proposed/000x-lowering-deposits-assethub.html b/proposed/000x-lowering-deposits-assethub.html index 52a90a9..9341ddf 100644 --- a/proposed/000x-lowering-deposits-assethub.html +++ b/proposed/000x-lowering-deposits-assethub.html @@ -90,7 +90,7 @@ @@ -387,7 +387,7 @@