diff --git a/404.html b/404.html index 0baf858..0edc79a 100644 --- a/404.html +++ b/404.html @@ -91,7 +91,7 @@ diff --git a/approved/0001-agile-coretime.html b/approved/0001-agile-coretime.html index f3799e4..73ff352 100644 --- a/approved/0001-agile-coretime.html +++ b/approved/0001-agile-coretime.html @@ -90,7 +90,7 @@ diff --git a/approved/0005-coretime-interface.html b/approved/0005-coretime-interface.html index 5c8a15a..41ea6dc 100644 --- a/approved/0005-coretime-interface.html +++ b/approved/0005-coretime-interface.html @@ -90,7 +90,7 @@ diff --git a/approved/0007-system-collator-selection.html b/approved/0007-system-collator-selection.html index 67086eb..eb4dc17 100644 --- a/approved/0007-system-collator-selection.html +++ b/approved/0007-system-collator-selection.html @@ -90,7 +90,7 @@ diff --git a/approved/0008-parachain-bootnodes-dht.html b/approved/0008-parachain-bootnodes-dht.html index 5dafe12..7a37f38 100644 --- a/approved/0008-parachain-bootnodes-dht.html +++ b/approved/0008-parachain-bootnodes-dht.html @@ -90,7 +90,7 @@ diff --git a/approved/0010-burn-coretime-revenue.html b/approved/0010-burn-coretime-revenue.html index a5b6641..c180b4f 100644 --- a/approved/0010-burn-coretime-revenue.html +++ b/approved/0010-burn-coretime-revenue.html @@ -90,7 +90,7 @@ diff --git a/approved/0012-process-for-adding-new-collectives.html b/approved/0012-process-for-adding-new-collectives.html index 7d26d13..1f677aa 100644 --- a/approved/0012-process-for-adding-new-collectives.html +++ b/approved/0012-process-for-adding-new-collectives.html @@ -90,7 +90,7 @@ diff --git a/approved/0013-prepare-blockbuilder-and-core-runtime-apis-for-mbms.html b/approved/0013-prepare-blockbuilder-and-core-runtime-apis-for-mbms.html index 9dcb875..2b4b403 100644 --- a/approved/0013-prepare-blockbuilder-and-core-runtime-apis-for-mbms.html +++ b/approved/0013-prepare-blockbuilder-and-core-runtime-apis-for-mbms.html @@ -90,7 +90,7 @@ diff --git a/approved/0014-improve-locking-mechanism-for-parachains.html b/approved/0014-improve-locking-mechanism-for-parachains.html index 354f2f5..db5095d 100644 --- a/approved/0014-improve-locking-mechanism-for-parachains.html +++ b/approved/0014-improve-locking-mechanism-for-parachains.html @@ -90,7 +90,7 @@ diff --git a/approved/0022-adopt-encointer-runtime.html b/approved/0022-adopt-encointer-runtime.html index 982ecf4..488ed33 100644 --- a/approved/0022-adopt-encointer-runtime.html +++ b/approved/0022-adopt-encointer-runtime.html @@ -90,7 +90,7 @@ diff --git a/approved/0032-minimal-relay.html b/approved/0032-minimal-relay.html index 3aecc6a..d7e75c1 100644 --- a/approved/0032-minimal-relay.html +++ b/approved/0032-minimal-relay.html @@ -90,7 +90,7 @@ diff --git a/approved/0042-extrinsics-state-version.html b/approved/0042-extrinsics-state-version.html index 2b5ddad..ea7e508 100644 --- a/approved/0042-extrinsics-state-version.html +++ b/approved/0042-extrinsics-state-version.html @@ -90,7 +90,7 @@ diff --git a/approved/0043-storage-proof-size-hostfunction.html b/approved/0043-storage-proof-size-hostfunction.html index 06c002b..ab122f5 100644 --- a/approved/0043-storage-proof-size-hostfunction.html +++ b/approved/0043-storage-proof-size-hostfunction.html @@ -90,7 +90,7 @@ diff --git a/approved/0045-nft-deposits-asset-hub.html b/approved/0045-nft-deposits-asset-hub.html index 9bf867b..131e2f0 100644 --- a/approved/0045-nft-deposits-asset-hub.html +++ b/approved/0045-nft-deposits-asset-hub.html @@ -90,7 +90,7 @@ diff --git a/approved/0047-assignment-of-availability-chunks.html 
b/approved/0047-assignment-of-availability-chunks.html index f093eda..761fba2 100644 --- a/approved/0047-assignment-of-availability-chunks.html +++ b/approved/0047-assignment-of-availability-chunks.html @@ -90,7 +90,7 @@ diff --git a/approved/0050-fellowship-salaries.html b/approved/0050-fellowship-salaries.html index 39fcdbb..5cc92ec 100644 --- a/approved/0050-fellowship-salaries.html +++ b/approved/0050-fellowship-salaries.html @@ -90,7 +90,7 @@ diff --git a/approved/0056-one-transaction-per-notification.html b/approved/0056-one-transaction-per-notification.html index 548cdd1..8574dba 100644 --- a/approved/0056-one-transaction-per-notification.html +++ b/approved/0056-one-transaction-per-notification.html @@ -90,7 +90,7 @@ diff --git a/approved/0059-nodes-capabilities-discovery.html b/approved/0059-nodes-capabilities-discovery.html index 901562b..74adaf3 100644 --- a/approved/0059-nodes-capabilities-discovery.html +++ b/approved/0059-nodes-capabilities-discovery.html @@ -90,7 +90,7 @@ diff --git a/approved/0078-merkleized-metadata.html b/approved/0078-merkleized-metadata.html index 3674fd2..2244236 100644 --- a/approved/0078-merkleized-metadata.html +++ b/approved/0078-merkleized-metadata.html @@ -90,7 +90,7 @@ diff --git a/index.html b/index.html index f6dc14c..83546fd 100644 --- a/index.html +++ b/index.html @@ -90,7 +90,7 @@ diff --git a/introduction.html b/introduction.html index f6dc14c..83546fd 100644 --- a/introduction.html +++ b/introduction.html @@ -90,7 +90,7 @@ diff --git a/print.html b/print.html index 380ea17..936f195 100644 --- a/print.html +++ b/print.html @@ -91,7 +91,7 @@ @@ -444,86 +444,6 @@ The following other host functions are similarly also considered deprecated:

Future Possibilities

After this RFC, we can remove the allocator from the host's source code altogether in a future version, by removing support for all the deprecated host functions. This would remove the possibility of synchronizing older blocks, which is probably controversial and requires some preparation that is out of scope of this RFC.

-

(source)

-

Table of Contents

- -

RFC-0073: Decision Deposit Referendum Track

-
- - - -
Start Date: 12 February 2024
Description: Add a referendum track which can place the decision deposit on any other track
Authors: JelliedOwl
-
-

Summary

-

The current size of the decision deposit on some tracks is too high for many proposers. As a result, those needing to use these tracks have to find someone else willing to put up the deposit for them - and a number of legitimate attempts to use the root track have timed out. This track would provide a more affordable (though slower) route for these holders to use the root track.

-

Motivation

-

There have been recent attempts to use the Kusama root track which have timed out with no decision deposit placed. Usually, these referenda have been related to parachain registration issues.

-

Explanation

-

This RFC proposes to address this by adding a new referendum track, [22] Referendum Deposit, which can place the decision deposit on another referendum (a sketch of what such a track definition looks like follows the parameter tables below). This would require the following changes:

- -

Referendum track parameters - Polkadot

- -

Referendum track parameters - Kusama

- -
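To make the shape of the proposed change concrete, here is an illustrative sketch of what a track definition of this kind looks like. The struct below only mirrors a subset of the fields of pallet_referenda's TrackInfo, and every value in it is a placeholder rather than the parameters proposed above.

// Illustrative sketch only: the shape of a referendum track definition.
// Field names mirror a subset of pallet_referenda::TrackInfo; this struct and every
// value below are placeholders, not the parameters proposed in the tables above.
struct TrackSketch {
    id: u16,
    name: &'static str,
    max_deciding: u32,
    decision_deposit: u128,
    prepare_period: u32,       // in blocks
    decision_period: u32,      // in blocks
    confirm_period: u32,       // in blocks
    min_enactment_period: u32, // in blocks
    // A real TrackInfo also carries min_approval / min_support curves, omitted here.
}

const REFERENDUM_DEPOSIT_TRACK: TrackSketch = TrackSketch {
    id: 22,
    name: "referendum_deposit",
    max_deciding: 5,             // placeholder
    decision_deposit: 5_000,     // placeholder; denominated in plancks in a real runtime
    prepare_period: 1_200,       // placeholder (~2 hours at 6s blocks)
    decision_period: 201_600,    // placeholder (~14 days)
    confirm_period: 14_400,      // placeholder (~1 day)
    min_enactment_period: 1_200, // placeholder
};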

Drawbacks

-

This track would provide a route to starting a root referendum with a much-reduced slashable deposit. This might be undesirable but, assuming the decision deposit cost for this track is still high enough, slashing would still act as a disincentive.

-

An alternative to this might be to reduce the decision deposit size on some of the more expensive tracks. However, part of the purpose of the high deposit - at least on the root track - is to prevent spamming the limited queue with junk referenda.

-

Testing, Security, and Privacy

-

Additional test cases will be needed for the modified pallet and runtime. No security or privacy issues.

-

Performance, Ergonomics, and Compatibility

-

Performance

-

No significant performance impact.

-

Ergonomics

-

Only changes related to adding the track. Existing functionality is unchanged.

-

Compatibility

-

No compatibility issues.

-

Prior Art and References

- -

Unresolved Questions

-

Feedback is sought on whether my proposed implementation is the best way to address the issue - including which calls the track should be allowed to make. Are the track parameters correct, or should we use something different? Alternatives would be welcome.

(source)

Table of Contents

-

Explanation

+

Explanation

I created the stateful multisig pallet during my studies at the Polkadot Blockchain Academy under the supervision of @shawntabrizi and @ank4n. I have since enhanced it to be fully functional; it is available as a draft PR#3300 in polkadot-sdk. I'll list all the details and design decisions in the following sections. Note that the PR does not match the current RFC exactly 1:1, as the RFC is a more polished version of the PR, updated based on feedback and discussions.

Let's start with a sequence diagram to illustrate the main operations of the Stateful Multisig.

multisig operations

@@ -1035,14 +955,14 @@ pub type PendingProposals<T: Config> = StorageDoubleMap<
  • If the threshold is lower than the number of approvers, the proposal is still valid.
  • If the threshold is higher than the number of approvers, we catch it during proposal execution and return an error (a minimal sketch follows).
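As a minimal sketch of the rule in the two bullets above (names and types are illustrative, not taken from the draft PR):

/// Hedged sketch of the execution-time check described above: a proposal with at least
/// `threshold` approvals may execute; otherwise execution errors out. Names are illustrative.
fn ensure_executable(approvals: u32, threshold: u32) -> Result<(), &'static str> {
    if approvals >= threshold {
        // Threshold lower than (or equal to) the number of approvers: still valid.
        Ok(())
    } else {
        // Threshold higher than the number of approvers: caught at execution time.
        Err("NotEnoughApprovals")
    }
}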
  • -

    Drawbacks

    +

    Drawbacks

    -

    Testing, Security, and Privacy

    +

    Testing, Security, and Privacy

    Standard audit/review requirements apply.

    -

    Performance, Ergonomics, and Compatibility

    -

    Performance

    +

    Performance, Ergonomics, and Compatibility

    +

    Performance

A back-of-the-envelope calculation to prove that the stateful multisig is more efficient than the stateless multisig, given its smaller footprint on blocks.

A quick review of the extrinsics for both, as they affect the block size:

Stateless Multisig:

@@ -1106,13 +1026,13 @@ We have the following extrinsics:

| Stateless | N^2 | Nil |
| Stateful | N | N |

    So even though the stateful multisig has a larger state size, it's still more efficient in terms of block size and total footprint on the blockchain.

    -

    Ergonomics

    +

    Ergonomics

    The Stateful Multisig will have better ergonomics for managing multisig accounts for both developers and end-users.

    -

    Compatibility

    +

    Compatibility

    This RFC is compatible with the existing implementation and can be handled via upgrades and migration. It's not intended to replace the existing multisig pallet.

    -

    Prior Art and References

    +

    Prior Art and References

    multisig pallet in polkadot-sdk

    -

    Unresolved Questions

    +

    Unresolved Questions

@@ -1171,9 +1091,9 @@ Implement call filters. This will allow multisig accounts to only accept certain

Authors: Luke Schoen

-

    Summary

    +

    Summary

    This proposes to increase the maximum length of identity raw data values from a 32 bytes/chars limit to either a 64 bytes/chars limit or a 128 bytes/chars limit.

    -

    Motivation

    +

    Motivation

    Background

    At the moment if you upload a file and pin it to IPFS storage you get an IPFS Content Identifier (CID), which is the unique ID of the file in the IPFS Network. That CID may be used to retrieve the file again via a standard IPFS interface and gateway from anywhere.

CIDs are reasonably short regardless of the size of the underlying content and are based on the cryptographic hash of the content.

    @@ -1202,28 +1122,28 @@ Implement call filters. This will allow multisig accounts to only accept certain -

    Explanation

    +

    Explanation

If a user tries to set an on-chain identity by creating an extrinsic using Polkadot.js with identity > setIdentity(info), and they provide an email address or a website domain name that is longer than 32 characters, or an IPFS CID (which may be associated with a file such as a document or image), then they will encounter this error:

    createType(Call):: Call: failed decoding identity.setIdentity:: Struct: failed on args: {...}:: Data.Raw values are limited to a maximum length of 32 bytes
     

    Increasing maximum length of identity raw data values from the current 32 bytes/chars limit to at least a 59 bytes/chars limit (or a 66 bytes/chars limit) would overcome these errors and support IPFS CIDs that are either CIDv0 or CIDv1, satisfying the solution requirements.

Increasing the maximum length of "additional" field values would also overcome these errors; the FieldLimit should be increased from the current 32 bytes/chars limit by the same amount.
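For orientation, the 32-byte cap lives in the bound of the Raw variant of pallet_identity's Data type; a hedged sketch of that shape, and of one proposed option (64 bytes), follows. Treat the exact enum layout as illustrative rather than authoritative.

use frame_support::{traits::ConstU32, BoundedVec};

// Sketch of the relevant type, trimmed of codec derives; shape is illustrative.
pub enum Data {
    None,
    // Today: raw values are bounded to 32 bytes.
    Raw(BoundedVec<u8, ConstU32<32>>),
    // One proposed option: Raw(BoundedVec<u8, ConstU32<64>>), enough for CIDv0/CIDv1 strings.
    BlakeTwo256([u8; 32]),
    Sha256([u8; 32]),
    Keccak256([u8; 32]),
    ShaThree256([u8; 32]),
}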

    -

    Drawbacks

    -

    Performance

    +

    Drawbacks

    +

    Performance

If Polkadot on-chain identities are able to store raw data values greater than the current maximum length of 32 bytes, then each identity may want to use the maximum amount (or more) of "additional" custom fields, which would impact storage and performance on the network.

    -

    Testing, Security, and Privacy

    +

    Testing, Security, and Privacy

Implementations would need to be tested for adherence by checking that IPFS CIDs that are either CIDv0 or CIDv1 are supported.

No effect on security or privacy beyond what already exists has been identified.

    No implementation pitfalls have been identified.

    -

    Performance, Ergonomics, and Compatibility

    -

    Performance

    +

    Performance, Ergonomics, and Compatibility

    +

    Performance

    It would be an optimization, since the associated exposed interfaces to developers and end-users could start being used.

    To minimize additional overhead the proposal suggests at least a 59 bytes/chars limit (or a 66 bytes/chars limit) since that would at least provide support for IPFS CIDs that are either CIDv0 or CIDv1, satisfying the solution requirements.

    -

    Ergonomics

    +

    Ergonomics

It alters exposed interfaces to developers and end-users, since they will now be able to provide IPFS CIDs that are either CIDv0 or CIDv1 as the values of Polkadot on-chain identity raw data input fields. Optionally, a 66 bytes/chars limit could be established to optimise for the usage patterns of end-users, since that may be more intuitive to them, as it would allow users to provide a link (e.g. URI protocol + CID) rather than just the CID.

    -

    Compatibility

    +

    Compatibility

    Updates to Polkadot.js Apps, API and its documentation and those referring to it may be required.

    -

    Prior Art and References

    +

    Prior Art and References

    Prior Art

    No prior articles.

    References

    @@ -1233,7 +1153,7 @@ Implement call filters. This will allow multisig accounts to only accept certain
  • 3
  • 4
  • -

    Unresolved Questions

    +

    Unresolved Questions

    Why can't we increase the maximum length of Polkadot on-chain identity raw data values and the "additional" field limit from 32 bytes/chars to an even longer maximum length than proposed of 128 bytes/chars?

    Relates to RFC entitled "Increase maximum length of identity PGP fingerprint values from 20 bytes".

@@ -1274,9 +1194,9 @@ Implement call filters. This will allow multisig accounts to only accept certain

Authors: Luke Schoen

-

    Summary

    +

    Summary

    This proposes to increase the maximum length of PGP Fingerprint values from a 20 bytes/chars limit to a 40 bytes/chars limit.

    -

    Motivation

    +

    Motivation

    Background

    Pretty Good Privacy (PGP) Fingerprints are shorter versions of their corresponding Public Key that may be printed on a business card.

    They may be used by someone to validate the correct corresponding Public Key.

    @@ -1303,28 +1223,28 @@ Implement call filters. This will allow multisig accounts to only accept certain -

    Explanation

    +

    Explanation

If a user tries to set an on-chain identity by creating an extrinsic using Polkadot.js with identity > setIdentity(info), and they provide their 40-character-long PGP Fingerprint or GPG Fingerprint, which is longer than the maximum length of 20 bytes/chars [u8;20], then they will encounter this error:

    createType(Call):: Call: failed decoding identity.setIdentity:: Struct: failed on args: {...}:: Struct: failed on pgpFingerprint: Option<[u8;20]>:: Expected input with 20 bytes (160 bits), found 40 bytes
     

    Increasing maximum length of identity PGP Fingerprint values from the current 20 bytes/chars limit to at least a 40 bytes/chars limit would overcome these errors and support PGP Fingerprints and GPG Fingerprints, satisfying the solution requirements.
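In code terms, the change is confined to the type of the fingerprint field on the identity info struct; a hedged, abbreviated sketch (other fields omitted, names merely illustrative of pallet_identity):

// Abbreviated sketch; only the fingerprint field is shown, everything else is omitted.
pub struct IdentityInfoSketch {
    // Today: a PGP/GPG fingerprint is stored as 20 raw bytes ([u8; 20]).
    pub pgp_fingerprint: Option<[u8; 20]>,
    // Proposed: Option<[u8; 40]>, so the 40-character printed fingerprint form fits as-is.
}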

    -

    Drawbacks

    +

    Drawbacks

    No drawbacks have been identified.

    -

    Testing, Security, and Privacy

    +

    Testing, Security, and Privacy

Implementations would be tested for adherence by checking that 40 bytes/chars PGP Fingerprints are supported.

No effect on security or privacy beyond what already exists has been identified.

    No implementation pitfalls have been identified.

    -

    Performance, Ergonomics, and Compatibility

    -

    Performance

    +

    Performance, Ergonomics, and Compatibility

    +

    Performance

    It would be an optimization, since the associated exposed interfaces to developers and end-users could start being used.

    To minimize additional overhead the proposal suggests a 40 bytes/chars limit since that would at least provide support for PGP Fingerprints, satisfying the solution requirements.

    -

    Ergonomics

    +

    Ergonomics

    No potential ergonomic optimizations have been identified.

    -

    Compatibility

    +

    Compatibility

    Updates to Polkadot.js Apps, API and its documentation and those referring to it may be required.

    -

    Prior Art and References

    +

    Prior Art and References

    No prior articles or references.

    -

    Unresolved Questions

    +

    Unresolved Questions

    No further questions at this stage.

    Relates to RFC entitled "Increase maximum length of identity raw data values from 32 bytes".

@@ -1359,14 +1279,14 @@ Implement call filters. This will allow multisig accounts to only accept certain

Authors: George Pisaltu

-

    Summary

    +

    Summary

    This RFC proposes the expansion of the extrinsic type identifier from the current most significant bit of the first byte of the encoded extrinsic to the two most significant bits of the first byte of the encoded extrinsic.

    -

    Motivation

    +

    Motivation

In order to support transactions that use different authorization schemes than currently exist, a new extrinsic type must be introduced alongside the current signed and unsigned types. Currently, an encoded extrinsic's first byte indicates the type of extrinsic using the most significant bit - 0 for unsigned, 1 for signed - and the 7 following bits indicate the extrinsic format version, which has been equal to 4 for a long time.

    By taking one bit from the extrinsic format version encoding, we can support 2 additional extrinsic types while also having a minimal impact on our capability to extend and change the extrinsic format in the future.

    Stakeholders

    Runtime users.

    -

    Explanation

    +

    Explanation

    An extrinsic is currently encoded as one byte to identify the extrinsic type and version. This RFC aims to change the interpretation of this byte regarding the reserved bits for the extrinsic type and version. In the following explanation, bits represented using T make up the extrinsic type and bits represented using V make up the extrinsic version.

    Currently, the bit allocation within the leading encoded byte is 0bTVVV_VVVV. In practice in the Polkadot ecosystem, the leading byte would be 0bT000_0100 as the version has been equal to 4 for a long time.

    This RFC proposes for the bit allocation to change to 0bTTVV_VVVV. As a result, the extrinsic type bit representation would change as follows:

@@ -1377,21 +1297,21 @@ Implement call filters. This will allow multisig accounts to only accept certain

11: reserved

-

    Drawbacks

    +

    Drawbacks

    This change would reduce the maximum possible transaction version from the current 127 to 63. In order to bypass the new, lower limit, the extrinsic format would have to change again.

    -

    Testing, Security, and Privacy

    +

    Testing, Security, and Privacy

    There is no impact on testing, security or privacy.

    -

    Performance, Ergonomics, and Compatibility

    +

    Performance, Ergonomics, and Compatibility

    This change would allow Polkadot to support new types of transactions, with the specific "general" transaction type in mind at the time of writing this proposal.

    -

    Performance

    +

    Performance

    There is no performance impact.

    -

    Ergonomics

    +

    Ergonomics

    The impact to developers and end-users is minimal as it would just be a bitmask update on their part for parsing the extrinsic type along with the version.
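A hedged sketch of that bitmask update under the proposed 0bTTVV_VVVV layout; the constants and function below are illustrative and not taken from any existing crate:

const TYPE_MASK: u8 = 0b1100_0000;    // two most significant bits: extrinsic type
const VERSION_MASK: u8 = 0b0011_1111; // six least significant bits: extrinsic format version

/// Splits the leading byte of an encoded extrinsic into (type bits, version).
fn split_leading_byte(b: u8) -> (u8, u8) {
    ((b & TYPE_MASK) >> 6, b & VERSION_MASK)
}

// Example: 0b1000_0100 splits into type bits 0b10 and version 4.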

    -

    Compatibility

    +

    Compatibility

This change breaks backwards compatibility because any transaction that is neither signed nor unsigned, but a new transaction type, would be interpreted as having a future extrinsic format version.

    -

    Prior Art and References

    +

    Prior Art and References

The original design was proposed in the TransactionExtension PR, which is also the motivation behind this effort.

    -

    Unresolved Questions

    +

    Unresolved Questions

    None.

    Following this change, the "general" transaction type will be introduced as part of the Extrinsic Horizon effort, which will shape future work.

@@ -1435,9 +1355,9 @@ Implement call filters. This will allow multisig accounts to only accept certain

Authors: Gavin Wood

-

    Summary

    +

    Summary

    This proposes a periodic, sale-based method for assigning Polkadot Coretime, the analogue of "block space" within the Polkadot Network. The method takes into account the need for long-term capital expenditure planning for teams building on Polkadot, yet also provides a means to allow Polkadot to capture long-term value in the resource which it sells. It supports the possibility of building rich and dynamic secondary markets to optimize resource allocation and largely avoids the need for parameterization.

    -

    Motivation

    +

    Motivation

    Present System

    The Polkadot Ubiquitous Computer, or just Polkadot UC, represents the public service provided by the Polkadot Network. It is a trust-free, WebAssembly-based, multicore, internet-native omnipresent virtual machine which is highly resilient to interference and corruption.

    The present system of allocating the limited resources of the Polkadot Ubiquitous Computer is through a process known as parachain slot auctions. This is a parachain-centric paradigm whereby a single core is long-term allocated to a single parachain which itself implies a Substrate/Cumulus-based chain secured and connected via the Relay-chain. Slot auctions are on-chain candle auctions which proceed for several days and result in the core being assigned to the parachain for six months at a time up to 24 months in advance. Practically speaking, we only see two year periods being bid upon and leased.

    @@ -1467,7 +1387,7 @@ Implement call filters. This will allow multisig accounts to only accept certain

    Socialization:

The essentials of this proposal were presented at Polkadot Decoded 2023 Copenhagen on the Main Stage. A small amount of socialization at the Parachain Summit preceded it and some substantial discussion followed it. The Parity Ecosystem team is currently soliciting views from ecosystem teams who would be key stakeholders.

    -

    Explanation

    +

    Explanation

    Overview

    Upon implementation of this proposal, the parachain-centric slot auctions and associated crowdloans cease. Instead, Coretime on the Polkadot UC is sold by the Polkadot System in two separate formats: Bulk Coretime and Instantaneous Coretime.

    When a Polkadot Core is utilized, we say it is dedicated to a Task rather than a "parachain". The Task to which a Core is dedicated may change at every Relay-chain block and while one predominant type of Task is to secure a Cumulus-based blockchain (i.e. a parachain), other types of Tasks are envisioned.

    @@ -1899,11 +1819,11 @@ InstaPoolHistory: (empty)
  • Governance upgrade proposal(s).
  • Monitoring of the upgrade process.
  • -

    Performance, Ergonomics and Compatibility

    +

    Performance, Ergonomics and Compatibility

    No specific considerations.

    Parachains already deployed into the Polkadot UC must have a clear plan of action to migrate to an agile Coretime market.

    While this proposal does not introduce documentable features per se, adequate documentation must be provided to potential purchasers of Polkadot Coretime. This SHOULD include any alterations to the Polkadot-SDK software collection.

    -

    Testing, Security and Privacy

    +

    Testing, Security and Privacy

    Regular testing through unit tests, integration tests, manual testnet tests, zombie-net tests and fuzzing SHOULD be conducted.

    A regular security review SHOULD be conducted prior to deployment through a review by the Web3 Foundation economic research group.

    Any final implementation MUST pass a professional external security audit.

    @@ -1924,7 +1844,7 @@ InstaPoolHistory: (empty)
  • The percentage of cores to be sold as Bulk Coretime.
  • The fate of revenue collected.
  • -

    Prior Art and References

    +

    Prior Art and References

Robert Habermeier initially wrote on the subject of a blockspace-centric Polkadot in the article Polkadot Blockspace over Blockchains. While not going into details, the article served as an early reframing piece for moving beyond one-slot-per-chain models and building out secondary market infrastructure for resource allocation.

    (source)

    Table of Contents

@@ -1957,10 +1877,10 @@ InstaPoolHistory: (empty)

Authors: Gavin Wood, Robert Habermeier

-

    Summary

    +

    Summary

    In the Agile Coretime model of the Polkadot Ubiquitous Computer, as proposed in RFC-1 and RFC-3, it is necessary for the allocating parachain (envisioned to be one or more pallets on a specialised Brokerage System Chain) to communicate the core assignments to the Relay-chain, which is responsible for ensuring those assignments are properly enacted.

    This is a proposal for the interface which will exist around the Relay-chain in order to communicate this information and instructions.

    -

    Motivation

    +

    Motivation

    The background motivation for this interface is splitting out coretime allocation functions and secondary markets from the Relay-chain onto System parachains. A well-understood and general interface is necessary for ensuring the Relay-chain receives coretime allocation instructions from one or more System chains without introducing dependencies on the implementation details of either side.

    Requirements

    Socialization:

The content of this RFC was discussed in the Polkadot Fellows channel.

    -

    Explanation

    +

    Explanation

The interface has two sections: the messages which the Relay-chain is able to receive from the allocating parachain (the UMP message types), and the messages which the Relay-chain is able to send to the allocating parachain (the DMP message types). These messages are expected to be implemented in a well-known pallet and called with the XCM Transact instruction.

    Future work may include these messages being introduced into the XCM standard.

    UMP Message Types

    @@ -2055,9 +1975,9 @@ assert_eq!(targets.iter().map(|x| x.1).sum(), 57600);

    Realistic Limits of the Usage

    For request_revenue_info, a successful request should be possible if when is no less than the Relay-chain block number on arrival of the message less 100,000.

    For assign_core, a successful request should be possible if begin is no less than the Relay-chain block number on arrival of the message plus 10 and workload contains no more than 100 items.
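A hedged sketch of these two acceptance bounds (the type alias and function names are illustrative):

type BlockNumber = u32; // illustrative alias

/// `when` must be no less than the current Relay-chain block number minus 100,000.
fn request_revenue_info_in_range(when: BlockNumber, now: BlockNumber) -> bool {
    when >= now.saturating_sub(100_000)
}

/// `begin` must be at least 10 blocks ahead and the workload at most 100 items long.
fn assign_core_in_range(begin: BlockNumber, workload_len: usize, now: BlockNumber) -> bool {
    begin >= now.saturating_add(10) && workload_len <= 100
}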

    -

    Performance, Ergonomics and Compatibility

    +

    Performance, Ergonomics and Compatibility

    No specific considerations.

    -

    Testing, Security and Privacy

    +

    Testing, Security and Privacy

    Standard Polkadot testing and security auditing applies.

    The proposal introduces no new privacy concerns.

    @@ -2065,7 +1985,7 @@ assert_eq!(targets.iter().map(|x| x.1).sum(), 57600);

    RFC-3 proposes a means of implementing the high-level allocations within the Relay-chain.

    Drawbacks, Alternatives and Unknowns

    None at present.

    -

    Prior Art and References

    +

    Prior Art and References

    None.

    (source)

    Table of Contents

@@ -2111,13 +2031,13 @@ assert_eq!(targets.iter().map(|x| x.1).sum(), 57600);

Authors: Joe Petrowski

-

    Summary

    +

    Summary

As core functionality moves from the Relay Chain into system chains, so increases the reliance on the liveness of these chains for the use of the network. It is not economically scalable, nor necessary from a game-theoretic perspective, to pay collators large rewards. This RFC proposes a mechanism -- part technical and part social -- for ensuring reliable collator sets that are resilient to attempts to stop any subsystem of the Polkadot protocol.

    -

    Motivation

    +

    Motivation

In order to guarantee access to Polkadot's system, the collators on its system chains must propose blocks (provide liveness) and allow all transactions to eventually be included. That is, some collators may censor transactions, but there must exist one collator in the set who will include a

@@ -2158,7 +2078,7 @@

reimburse their operating costs.

  • Infrastructure providers (people who run validator/collator nodes)
  • Polkadot Treasury
  • -

    Explanation

    +

    Explanation

This protocol builds on the existing Collator Selection pallet and its notion of Invulnerables. Invulnerables are collators (identified by their AccountIds) who

@@ -2194,27 +2114,27 @@

approximately:

  • of which 15 are Invulnerable, and
  • five are elected by bond.
  • -

    Drawbacks

    +

    Drawbacks

    The primary drawback is a reliance on governance for continued treasury funding of infrastructure costs for Invulnerable collators.

    -

    Testing, Security, and Privacy

    +

    Testing, Security, and Privacy

The vast majority of cases can be covered by unit testing. Integration tests should ensure that the Collator Selection UpdateOrigin, which has permission to modify the Invulnerables and the desired number of Candidates, can handle updates over XCM from the system's governance location.

    -

    Performance, Ergonomics, and Compatibility

    +

    Performance, Ergonomics, and Compatibility

    This proposal has very little impact on most users of Polkadot, and should improve the performance of system chains by reducing the number of missed blocks.

    -

    Performance

    +

    Performance

As chains have strict PoV size limits, care must be taken regarding the PoV impact of the session manager. Appropriate benchmarking and tests should ensure that conservative limits are placed on the number of Invulnerables and Candidates.

    -

    Ergonomics

    +

    Ergonomics

    The primary group affected is Candidate collators, who, after implementation of this RFC, will need to compete in a bond-based election rather than a race to claim a Candidate spot.

    -

    Compatibility

    +

    Compatibility

    This RFC is compatible with the existing implementation and can be handled via upgrades and migration.

    -

    Prior Art and References

    +

    Prior Art and References

    Written Discussions

    -

    Unresolved Questions

    +

    Unresolved Questions

    None at this time.

There may exist in the future system chains for which this model of collator selection is not

@@ -2270,10 +2190,10 @@

appropriate. These chains should be evaluated on a case-by-case basis.

Authors: Pierre Krieger

-

    Summary

    +

    Summary

    The full nodes of the Polkadot peer-to-peer network maintain a distributed hash table (DHT), which is currently used for full nodes discovery and validators discovery purposes.

    This RFC proposes to extend this DHT to be used to discover full nodes of the parachains of Polkadot.

    -

    Motivation

    +

    Motivation

    The maintenance of bootnodes has long been an annoyance for everyone.

When a bootnode is newly deployed or removed, every chain specification must be updated in order to take the update into account. This has led to various non-optimal solutions, such as pulling chain specifications from GitHub repositories. When it comes to RPC nodes, UX developers often have trouble finding up-to-date addresses of parachain RPC nodes. With the ongoing migration from RPC nodes to light clients, similar problems would happen with chain specifications as well.

    @@ -2284,7 +2204,7 @@ When it comes to RPC nodes, UX developers often have trouble finding up-to-date

    Assuming that this RFC is implemented, and that light clients are used, deploying a parachain wouldn't require more work than registering it onto the relay chain and starting the collators. There wouldn't be any need for special infrastructure nodes anymore.

    Stakeholders

    This RFC has been opened on my own initiative because I think that this is a good technical solution to a usability problem that many people are encountering and that they don't realize can be solved.

    -

    Explanation

    +

    Explanation

The content of this RFC only applies to parachains and parachain nodes that are "Substrate-compatible". It is in no way mandatory for parachains to comply with this RFC.

    Note that "Substrate-compatible" is very loosely defined as "implements the same mechanisms and networking protocols as Substrate". The author of this RFC believes that "Substrate-compatible" should be very precisely specified, but there is controversy on this topic.

    While a lot of this RFC concerns the implementation of parachain nodes, it makes use of the resources of the Polkadot chain, and as such it is important to describe them in the Polkadot specification.

    @@ -2321,10 +2241,10 @@ message Response {

    The maximum size of a response is set to an arbitrary 16kiB. The responding side should make sure to conform to this limit. Given that fork_id is typically very small and that the only variable-length field is addrs, this is easily achieved by limiting the number of addresses.

    Implementers should be aware that addrs might be very large, and are encouraged to limit the number of addrs to an implementation-defined value.
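A hedged sketch of one way a responder might apply that advice while also respecting the 16 kiB cap; the accounting is deliberately simplified (it ignores per-field protobuf overhead) and all names are illustrative:

const MAX_RESPONSE_SIZE: usize = 16 * 1024;

/// Keeps only as many addresses as fit in the remaining response budget.
fn limit_addrs(addrs: Vec<Vec<u8>>, fixed_fields_size: usize) -> Vec<Vec<u8>> {
    let mut budget = MAX_RESPONSE_SIZE.saturating_sub(fixed_fields_size);
    addrs
        .into_iter()
        .take_while(|addr| {
            if addr.len() <= budget {
                budget -= addr.len();
                true
            } else {
                false
            }
        })
        .collect()
}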

    -

    Drawbacks

    +

    Drawbacks

The peer_id and addrs fields are in theory not strictly needed, as the PeerId and addresses could always be equal to the PeerId and addresses of the node being registered as the provider and serving the response. However, the Cumulus implementation currently uses two different networking stacks, one for the parachain and one for the relay chain, using two separate PeerIds and addresses, and as such the PeerId and addresses of the other networking stack must be indicated. Asking them to use only one networking stack wouldn't be feasible in a realistic time frame.

    The values of the genesis_hash and fork_id fields cannot be verified by the requester and are expected to be unused at the moment. Instead, a client that desires connecting to a parachain is expected to obtain the genesis hash and fork ID of the parachain from the parachain chain specification. These fields are included in the networking protocol nonetheless in case an acceptable solution is found in the future, and in order to allow use cases such as discovering parachains in a not-strictly-trusted way.

    -

    Testing, Security, and Privacy

    +

    Testing, Security, and Privacy

    Because not all nodes want to be used as bootnodes, implementers are encouraged to provide a way to disable this mechanism. However, it is very much encouraged to leave this mechanism on by default for all parachain nodes.

    This mechanism doesn't add or remove any security by itself, as it relies on existing mechanisms. However, if the principle of chain specification bootnodes is entirely replaced with the mechanism described in this RFC (which is the objective), then it becomes important whether the mechanism in this RFC can be abused in order to make a parachain unreachable.

    @@ -2333,20 +2253,20 @@ Furthermore, when a large number of providers (here, a provider is a bootnode) a

    For this reason, an attacker can abuse this mechanism by randomly generating libp2p PeerIds until they find the 20 entries closest to the key representing the target parachain. They are then in control of the parachain bootnodes. Because the key changes periodically and isn't predictable, and assuming that the Polkadot DHT is sufficiently large, it is not realistic for an attack like this to be maintained in the long term.

    Furthermore, parachain clients are expected to cache a list of known good nodes on their disk. If the mechanism described in this RFC went down, it would only prevent new nodes from accessing the parachain, while clients that have connected before would not be affected.

    -

    Performance, Ergonomics, and Compatibility

    -

    Performance

    +

    Performance, Ergonomics, and Compatibility

    +

    Performance

    The DHT mechanism generally has a low overhead, especially given that publishing providers is done only every 24 hours.

    Doing a Kademlia iterative query then sending a provider record shouldn't take more than around 50 kiB in total of bandwidth for the parachain bootnode.

    Assuming 1000 parachain full nodes, the 20 Polkadot full nodes corresponding to a specific parachain will each receive a sudden spike of a few megabytes of networking traffic when the key rotates. Again, this is relatively negligible. If this becomes a problem, one can add a random delay before a parachain full node registers itself to be the provider of the key corresponding to BabeApi_next_epoch.

Maybe the biggest uncertainty is the traffic that the 20 Polkadot full nodes will receive from light clients that desire knowing the bootnodes of a parachain. Light clients are generally encouraged to cache the peers that they use between restarts, so they should only query these 20 Polkadot full nodes at their first initialization. If this ever becomes a problem, this value of 20 is an arbitrary constant that can be increased for more redundancy.

    -

    Ergonomics

    +

    Ergonomics

    Irrelevant.

    -

    Compatibility

    +

    Compatibility

    Irrelevant.

    -

    Prior Art and References

    +

    Prior Art and References

    None.

    -

    Unresolved Questions

    +

    Unresolved Questions

While it fundamentally doesn't change much in this RFC, using BabeApi_currentEpoch and BabeApi_nextEpoch might be inappropriate. I'm not familiar enough with good practices within the runtime to have an opinion here. Should it be an entirely new pallet?

    It is possible that in the future a client could connect to a parachain without having to rely on a trusted parachain specification.

    @@ -2369,13 +2289,13 @@ If this every becomes a problem, this value of 20 is an arbitrary constant that AuthorsJonas Gehrlein -

    Summary

    +

    Summary

The Polkadot UC will generate revenue from the sale of available Coretime. The question then arises: how should we handle these revenues? Broadly, there are two reasonable paths – burning the revenue and thereby removing it from total issuance, or diverting it to the Treasury. This Request for Comment (RFC) presents arguments favoring burning as the preferred mechanism for handling revenues from Coretime sales.

    -

    Motivation

    +

    Motivation

    How to handle the revenue accrued from Coretime sales is an important economic question that influences the value of DOT and should be properly discussed before deciding for either of the options. Now is the best time to start this discussion.

    Stakeholders

    Polkadot DOT token holders.

    -

    Explanation

    +

    Explanation

This RFC discusses the potential benefits of burning the revenue accrued from Coretime sales instead of diverting it to the Treasury. The arguments for this are as follows.

    It's in the interest of the Polkadot community to have a consistent and predictable Treasury income, because volatility in the inflow can be damaging, especially in situations when it is insufficient. As such, this RFC operates under the presumption of a steady and sustainable Treasury income flow, which is crucial for the Polkadot community's stability. The assurance of a predictable Treasury income, as outlined in a prior discussion here, or through other equally effective measures, serves as a baseline assumption for this argument.

    Consequently, we need not concern ourselves with this particular issue here. This naturally begs the question - why should we introduce additional volatility to the Treasury by aligning it with the variable Coretime sales? It's worth noting that Coretime revenues often exhibit an inverse relationship with periods when Treasury spending should ideally be ramped up. During periods of low Coretime utilization (indicated by lower revenue), Treasury should spend more on projects and endeavours to increase the demand for Coretime. This pattern underscores that Coretime sales, by their very nature, are an inconsistent and unpredictable source of funding for the Treasury. Given the importance of maintaining a steady and predictable inflow, it's unnecessary to rely on another volatile mechanism. Some might argue that we could have both: a steady inflow (from inflation) and some added bonus from Coretime sales, but burning the revenue would offer further benefits as described below.

    @@ -2418,13 +2338,13 @@ If this every becomes a problem, this value of 20 is an arbitrary constant that AuthorsJoe Petrowski -

    Summary

    +

    Summary

    Since the introduction of the Collectives parachain, many groups have expressed interest in forming new -- or migrating existing groups into -- on-chain collectives. While adding a new collective is relatively simple from a technical standpoint, the Fellowship will need to merge new pallets into the Collectives parachain for each new collective. This RFC proposes a means for the network to ratify a new collective, thus instructing the Fellowship to instate it in the runtime.

    -

    Motivation

    +

    Motivation

    Many groups have expressed interest in representing collectives on-chain. Some of these include:

    -

    Explanation

    +

    Explanation

    The group that wishes to operate an on-chain collective should publish the following information:

    -

    Explanation

    +

    Explanation

    Core::initialize_block

    This runtime API function is changed from returning () to ExtrinsicInclusionMode:

    fn initialize_block(header: &<Block as BlockT>::Header)
    @@ -2566,23 +2486,23 @@ multi-block migrations available.
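A hedged sketch of the changed interface described above; the enum and variant names follow the working names discussed in this RFC (see the Unresolved Questions) and may differ in the final implementation:

/// Working-name sketch of the value now returned by `initialize_block`.
pub enum ExtrinsicInclusionMode {
    /// All extrinsics are allowed in this block.
    AllExtrinsics,
    /// Only inherents are allowed, e.g. while a multi-block migration is in progress.
    OnlyInherents,
}

// Before: fn initialize_block(header: &<Block as BlockT>::Header);
// After:  fn initialize_block(header: &<Block as BlockT>::Header) -> ExtrinsicInclusionMode;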
     

1. Multi-Block-Migrations: The runtime is put into lock-down mode for the duration of the migration process by returning OnlyInherents from initialize_block. This ensures that no user-provided transaction can interfere with the migration process. It is absolutely necessary to ensure this, as otherwise a transaction could call into un-migrated storage and violate storage invariants.

2. poll is possible by using apply_extrinsic as an entry point and is not hindered by this approach. It would not be possible to use a pallet inherent like System::last_inherent to achieve this, for two reasons: first, pallets do not have access to AllPalletsWithSystem, which is required to invoke the poll hook on all pallets; second, the runtime does not currently enforce an order of inherents.

3. System::PostInherents can be done in the same manner as poll.

    -

    Drawbacks

    +

    Drawbacks

    The previous drawback of cementing the order of inherents has been addressed and removed by redesigning the approach. No further drawbacks have been identified thus far.

    -

    Testing, Security, and Privacy

    +

    Testing, Security, and Privacy

    The new logic of initialize_block can be tested by checking that the block-builder will skip transactions when OnlyInherents is returned.

    Security: n/a

    Privacy: n/a

    -

    Performance, Ergonomics, and Compatibility

    -

    Performance

    +

    Performance, Ergonomics, and Compatibility

    +

    Performance

    The performance overhead is minimal in the sense that no clutter was added after fulfilling the requirements. The only performance difference is that initialize_block also returns an enum that needs to be passed through the WASM boundary. This should be negligible.

    -

    Ergonomics

    +

    Ergonomics

    The new interface allows for more extensible runtime logic. In the future, this will be utilized for multi-block-migrations which should be a huge ergonomic advantage for parachain developers.

    -

    Compatibility

    +

    Compatibility

    The advice here is OPTIONAL and outside of the RFC. To not degrade user experience, it is recommended to ensure that an updated node can still import historic blocks.

    -

    Prior Art and References

    +

    Prior Art and References

    The RFC is currently being implemented in polkadot-sdk#1781 (formerly substrate#14275). Related issues and merge requests:

    -

    Unresolved Questions

    +

    Unresolved Questions

Please suggest a better name for BlockExecutiveMode. We already tried: RuntimeExecutiveMode, ExtrinsicInclusionMode. The names of the modes Normal and Minimal were also called AllExtrinsics and OnlyInherents, so if you have naming preferences, please post them.
    @@ -2636,14 +2556,14 @@ This can be unified and simplified by moving both parts into the runtime.

Authors: Bryan Chen

-

    Summary

    +

    Summary

    This RFC proposes a set of changes to the parachain lock mechanism. The goal is to allow a parachain manager to self-service the parachain without root track governance action.

This is achieved by removing existing lock conditions and only locking a parachain when:

    -

    Motivation

    +

    Motivation

The manager of a parachain has permission to manage the parachain when the parachain is unlocked. Parachains are by default locked when onboarded to a slot. This requires that the parachain wasm/genesis be valid; otherwise a root track governance action on the relaychain is required to update the parachain.

    The current reliance on root track governance actions for managing parachains can be time-consuming and burdensome. This RFC aims to address this technical difficulty by allowing parachain managers to take self-service actions, rather than relying on general public voting.

    The key scenarios this RFC seeks to improve are:

    @@ -2667,7 +2587,7 @@ This can be unified and simplified by moving both parts into the runtime.

  • Parachain teams
  • Parachain users
  • -

    Explanation

    +

    Explanation

    Status quo

A parachain can either be locked or unlocked3. With the parachain locked, the parachain manager does not have any privileges. With the parachain unlocked, the parachain manager can perform the following actions with the paras_registrar pallet:

    -

    Drawbacks

    +

    Drawbacks

Parachain locks are designed in such a way as to ensure the decentralization of parachains. If parachains are not locked when they should be, this could introduce centralization risks for new parachains.

For example, one possible scenario is that a collective may decide to launch a parachain fully decentralized. However, if the parachain is unable to produce blocks, the parachain manager will be able to replace the wasm and genesis without the consent of the collective.

This risk is considered tolerable as it requires the wasm/genesis to be invalid in the first place. It is not yet practically possible to develop a parachain without any centralization risk.

Another case is that a parachain team may decide to use a crowdloan to help secure a slot lease. Previously, creating a crowdloan would lock a parachain. This means crowdloan participants would know exactly the genesis of the parachain for the crowdloan they are participating in. However, this actually provides little assurance to crowdloan participants. For example, if the genesis block is determined before a crowdloan is started, it is not possible to have an on-chain mechanism to enforce reward distributions for crowdloan participants. They always have to rely on the parachain team to fulfill the promise after the parachain is live.

    Existing operational parachains will not be impacted.

    -

    Testing, Security, and Privacy

    +

    Testing, Security, and Privacy

    The implementation of this RFC will be tested on testnets (Rococo and Westend) first.

An audit may be required to ensure the implementation does not introduce unwanted side effects.

There are no privacy-related concerns.

    -

    Performance

    +

    Performance

    This RFC should not introduce any performance impact.

    -

    Ergonomics

    +

    Ergonomics

This RFC should improve the developer experience for new and existing parachain teams.

    -

    Compatibility

    +

    Compatibility

This RFC is fully compatible with existing interfaces.

    -

    Prior Art and References

    +

    Prior Art and References

    -

    Unresolved Questions

    +

    Unresolved Questions

    None at this stage.

This RFC is only intended to be a short-term solution. Slots will be removed in the future and the lock mechanism is likely to be replaced with a more generalized parachain management & recovery system. Therefore the long-term impacts of this RFC are not considered.

    @@ -2765,9 +2685,9 @@ This can be unified and simplified by moving both parts into the runtime.

Authors: @brenzi for Encointer Association, 8000 Zurich, Switzerland

-

    Summary

    +

    Summary

Encointer has been a system chain on Kusama since Jan 2022 and has been developed and maintained by the Encointer Association. This RFC proposes to treat Encointer like any other system chain and include it in the fellowship repo with this PR.

    -

    Motivation

    +

    Motivation

    Encointer does not seek to be in control of its runtime repository. As a decentralized system, the fellowship has a more suitable structure to maintain a system chain runtime repo than the Encointer association does.

    Also, Encointer aims to update its runtime in batches with other system chains in order to have consistency for interoperability across system chains.

    Stakeholders

    @@ -2777,7 +2697,7 @@ This can be unified and simplified by moving both parts into the runtime.

  • Encointer Association: Further decentralization of the Encointer Network necessities like devops.
  • Encointer devs: Being able to work directly in the Fellowship runtimes repo to streamline and synergize with other developers.
  • -

    Explanation

    +

    Explanation

    Our PR has all details about our runtime and how we would move it into the fellowship repo.

    Noteworthy: All Encointer-specific pallets will still be located in encointer's repo for the time being: https://github.com/encointer/pallets

It will still be the duty of the Encointer team to keep its runtime up to date and provide adequate test fixtures. Frequent dependency bumps with Polkadot releases would be beneficial for interoperability and could be streamlined with other system chains, but that will not be a duty of the fellowship. Whenever possible, all system chains could be upgraded jointly (including Encointer) with a batch referendum.

    @@ -2786,15 +2706,15 @@ This can be unified and simplified by moving both parts into the runtime.

  • Encointer will publish all its crates on crates.io
  • Encointer does not carry out external auditing of its runtime nor pallets. It would be beneficial but not a requirement from our side if Encointer could join the auditing process of other system chains.
  • -

    Drawbacks

    +

    Drawbacks

Unlike all other system chains, development and maintenance of the Encointer Network is mainly financed by the KSM Treasury and possibly the DOT Treasury in the future. Encointer is dedicated to maintaining its network and runtime code for as long as possible, but there is a dependency on funding which is not in the hands of the fellowship. The only risk in the context of funding, however, is that the Encointer runtime will see less frequent updates if there's less funding.

    -

    Testing, Security, and Privacy

    +

    Testing, Security, and Privacy

    No changes to the existing system are proposed. Only changes to how maintenance is organized.

    -

    Performance, Ergonomics, and Compatibility

    +

    Performance, Ergonomics, and Compatibility

    No changes

    -

    Prior Art and References

    +

    Prior Art and References

    Existing Encointer runtime repo

    -

    Unresolved Questions

    +

    Unresolved Questions

    None identified

    More info on Encointer: encointer.org

    @@ -2838,11 +2758,11 @@ This can be unified and simplified by moving both parts into the runtime.

Authors: Joe Petrowski, Gavin Wood

-

    Summary

    +

    Summary

    The Relay Chain contains most of the core logic for the Polkadot network. While this was necessary prior to the launch of parachains and development of XCM, most of this logic can exist in parachains. This is a proposal to migrate several subsystems into system parachains.

    -

    Motivation

    +

    Motivation

Polkadot's scaling approach allows many distinct state machines (known generally as parachains) to operate with common guarantees about the validity and security of their state transitions. Polkadot provides these common guarantees by executing the state transitions on a strict subset (a backing

@@ -2860,7 +2780,7 @@

Ubiquitous Computer can maximise its primary offering: secure blockspace.

  • Core protocol and XCM format developers;
  • Tooling, block explorer, and UI developers.
  • -

    Explanation

    +

    Explanation

    The following pallets and subsystems are good candidates to migrate from the Relay Chain:

    -

    Drawbacks

    +

    Drawbacks

There should be no drawbacks as it would replace state_version with the same behavior, but documentation should be updated so that chains know which system_version to use.

    -

    Testing, Security, and Privacy

    +

    Testing, Security, and Privacy

As far as I know, this should not have any impact on security or privacy.

    -

    Performance, Ergonomics, and Compatibility

    +

    Performance, Ergonomics, and Compatibility

These changes should be compatible for existing chains if they use the state_version value for system_version.

    -

    Performance

    +

    Performance

    I do not believe there is any performance hit with this change.

    -

    Ergonomics

    +

    Ergonomics

This does not break any exposed APIs.

    -

    Compatibility

    +

    Compatibility

    This change should not break any compatibility.

    -

    Prior Art and References

    +

    Prior Art and References

We proposed introducing a similar change by adding a parameter to frame_system::Config but did not feel that was the correct way of introducing this change.

    -

    Unresolved Questions

    +

    Unresolved Questions

    I do not have any specific questions about this change at the moment.

    IMO, this change is pretty self-contained and there won't be any future work necessary.

    @@ -3166,9 +3086,9 @@ is the correct way of introducing this change.

Authors: Sebastian Kunert

-

    Summary

    +

    Summary

    This RFC proposes a new host function for parachains, storage_proof_size. It shall provide the size of the currently recorded storage proof to the runtime. Runtime authors can use the proof size to improve block utilization by retroactively reclaiming unused storage weight.

    -

    Motivation

    +

    Motivation

    The number of extrinsics that are included in a parachain block is limited by two constraints: execution time and proof size. FRAME weights cover both concepts, and block-builders use them to decide how many extrinsics to include in a block. However, these weights are calculated ahead of time by benchmarking on a machine with reference hardware. The execution-time properties of the state-trie and its storage items are unknown at benchmarking time. Therefore, we make some assumptions about the state-trie:

    -

    Explanation

    +

    Explanation

    This RFC proposes a new host function that exposes the storage-proof size to the runtime. As a result, runtimes can implement storage weight reclaiming mechanisms that improve block utilization.

    This RFC proposes the following host function signature:

    #![allow(unused)]
    @@ -3190,14 +3110,14 @@ is the correct way of introducing this change.

fn ext_storage_proof_size_version_1() -> u64;

    The host function MUST return an unsigned 64-bit integer value representing the current proof size. In block-execution and block-import contexts, this function MUST return the current size of the proof. To achieve this, parachain node implementors need to enable proof recording for block imports. In other contexts, this function MUST return 18446744073709551615 (u64::MAX), which represents disabled proof recording.

    -

    Performance, Ergonomics, and Compatibility

    -

    Performance

    +

    Performance, Ergonomics, and Compatibility

    +

    Performance

    Parachain nodes need to enable proof recording during block import to correctly implement the proposed host function. Benchmarking conducted with balance transfers has shown a performance reduction of around 0.6% when proof recording is enabled.

    -

    Ergonomics

    +

    Ergonomics

    The host function proposed in this RFC allows parachain runtime developers to keep track of the proof size. Typical usage patterns would be to keep track of the overall proof size or the difference between subsequent calls to the host function.
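As an illustration (a hedged sketch: the host-function binding and the reclaiming logic shown are assumptions for this example, not taken from the Substrate implementation), a runtime-side helper could track the difference between two calls like this:

// Sentinel returned by the host function when proof recording is disabled.
const PROOF_RECORDING_DISABLED: u64 = u64::MAX;

// Stand-in for the proposed `ext_storage_proof_size_version_1` host function.
fn storage_proof_size() -> u64 {
    1_024
}

// Run a closure and report how many proof bytes it consumed, if known.
fn with_proof_tracking<R>(f: impl FnOnce() -> R) -> (R, Option<u64>) {
    let before = storage_proof_size();
    let result = f();
    if before == PROOF_RECORDING_DISABLED {
        // Proof recording disabled (e.g. non-import context): nothing to reclaim.
        return (result, None);
    }
    let consumed = storage_proof_size().saturating_sub(before);
    (result, Some(consumed))
}

fn main() {
    let (_result, consumed) = with_proof_tracking(|| { /* dispatch an extrinsic */ });
    println!("proof bytes consumed: {:?}", consumed);
}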

    -

    Compatibility

    +

    Compatibility

    Parachain teams will need to include this host function to upgrade.

    -

    Prior Art and References

    +

    Prior Art and References

    -

    Explanation

    +

    Explanation

This RFC proposes agreeing on salaries relative to a single level, the III Dan. As such, changes to the amount or asset used would only be on a single value, and all others would adjust relatively. A III Dan is someone whose contributions match the expectations of a full-time individual contributor.

@@ -3847,19 +3767,19 @@ other hand, more people will likely join the Fellowship in the coming years.

    Updates

    Updates to these levels, whether relative ratios, the asset used, or the amount, shall be done via RFC.

    -

    Drawbacks

    +

    Drawbacks

    By not using DOT for payment, the protocol relies on the stability of other assets and the ability to acquire them. However, the asset of choice can be changed in the future.

    -

    Testing, Security, and Privacy

    +

    Testing, Security, and Privacy

    N/A.

    -

    Performance, Ergonomics, and Compatibility

    -

    Performance

    +

    Performance, Ergonomics, and Compatibility

    +

    Performance

    N/A

    -

    Ergonomics

    +

    Ergonomics

    N/A

    -

    Compatibility

    +

    Compatibility

    N/A

    -

    Prior Art and References

    +

    Prior Art and References

    -

    Unresolved Questions

    +

    Unresolved Questions

    None at present.

    (source)

    Table of Contents

    @@ -3900,11 +3820,11 @@ States AuthorsPierre Krieger -

    Summary

    +

    Summary

    When two peers connect to each other, they open (amongst other things) a so-called "notifications protocol" substream dedicated to gossiping transactions to each other.

Each notification on this substream currently consists of a SCALE-encoded Vec<Transaction>, where Transaction is defined in the runtime.

    This RFC proposes to modify the format of the notification to become (Compact(1), Transaction). This maintains backwards compatibility, as this new format decodes as a Vec of length equal to 1.

    -

    Motivation

    +

    Motivation

There exist three motivations behind this change:

    Stakeholders

    Low-level developers.

    -

    Explanation

    +

    Explanation

    To give an example, if you send one notification with three transactions, the bytes that are sent on the wire are:

    concat(
         leb128(total-size-in-bytes-of-the-rest),
    @@ -3939,21 +3859,21 @@ A SCALE-compact encoded 1 is one byte of value 4. In o
     This is equivalent to forcing the Vec<Transaction> to always have a length of 1, and I expect the Substrate implementation to simply modify the sending side to add a for loop that sends one notification per item in the Vec.

    As explained in the motivation section, this allows extracting scale(transaction) items without having to know how to decode them.

    By "flattening" the two-steps hierarchy, an implementation only needs to back-pressure individual notifications rather than back-pressure notifications and transactions within notifications.

    -

    Drawbacks

    +

    Drawbacks

    This RFC chooses to maintain backwards compatibility at the cost of introducing a very small wart (the Compact(1)).

    An alternative could be to introduce a new version of the transactions notifications protocol that sends one Transaction per notification, but this is significantly more complicated to implement and can always be done later in case the Compact(1) is bothersome.

    -

    Testing, Security, and Privacy

    +

    Testing, Security, and Privacy

    Irrelevant.

    -

    Performance, Ergonomics, and Compatibility

    -

    Performance

    +

    Performance, Ergonomics, and Compatibility

    +

    Performance

    Irrelevant.

    -

    Ergonomics

    +

    Ergonomics

    Irrelevant.

    -

    Compatibility

    +

    Compatibility

    The change is backwards compatible if done in two steps: modify the sender to always send one transaction per notification, then, after a while, modify the receiver to enforce the new format.

    -

    Prior Art and References

    +

    Prior Art and References

    Irrelevant.

    -

    Unresolved Questions

    +

    Unresolved Questions

    None.

    None. This is a simple isolated change.

    @@ -3995,11 +3915,11 @@ This is equivalent to forcing the Vec<Transaction> to always AuthorsPierre Krieger -

    Summary

    +

    Summary

    This RFC proposes to make the mechanism of RFC #8 more generic by introducing the concept of "capabilities".

    Implementations can implement certain "capabilities", such as serving old block headers or being a parachain bootnode.

    The discovery mechanism of RFC #8 is extended to be able to discover nodes of specific capabilities.

    -

    Motivation

    +

    Motivation

    The Polkadot peer-to-peer network is made of nodes. Not all these nodes are equal. Some nodes store only the headers of recent blocks, some nodes store all the block headers and bodies since the genesis, some nodes store the storage of all blocks since the genesis, and so on.

    It is currently not possible to know ahead of time (without connecting to it and asking) which nodes have which data available, and it is not easily possible to build a list of nodes that have a specific piece of data available.

If you want to download for example the header of block 500, you have to connect to a randomly-chosen node, ask it for block 500, and if it says that it doesn't have the block, disconnect and try another randomly-chosen node.

@@ -4008,7 +3928,7 @@ In certain situations such as downloading the storage of old blocks, nodes that

    Stakeholders

    Low-level client developers. People interested in accessing the archive of the chain.

    -

    Explanation

    +

    Explanation

    Reading RFC #8 first might help with comprehension, as this RFC is very similar.

Please keep in mind while reading that everything below applies to both relay chains and parachains, unless mentioned otherwise.

    Capabilities

    @@ -4044,28 +3964,28 @@ If blocks pruning is enabled and the chain is a relay chain, then Substrate unfo

    Implementations that have the head of the chain provider capability do not register themselves as providers, but instead are the nodes that participate in the main DHT. In other words, they are the nodes that serve requests of the /<genesis_hash>/kad protocol.

    Any implementation that isn't a head of the chain provider (read: light clients) must not participate in the main DHT. This is already presently the case.

    Implementations must not participate in the main DHT if they don't fulfill the capability yet. For example, a node that is still in the process of warp syncing must not participate in the main DHT. However, assuming that warp syncing doesn't last more than a few seconds, it is acceptable to ignore this requirement in order to avoid complicating implementations too much.

    -

    Drawbacks

    +

    Drawbacks

    None that I can see.

    -

    Testing, Security, and Privacy

    +

    Testing, Security, and Privacy

    The content of this section is basically the same as the one in RFC 8.

    This mechanism doesn't add or remove any security by itself, as it relies on existing mechanisms.

    Due to the way Kademlia works, it would become the responsibility of the 20 Polkadot nodes whose sha256(peer_id) is closest to the key (described in the explanations section) to store the list of nodes that have specific capabilities. Furthermore, when a large number of providers are registered, only the providers closest to the key are kept, up to a certain implementation-defined limit.

For this reason, an attacker can abuse this mechanism by randomly generating libp2p PeerIds until they find the 20 entries closest to the key representing the target capability. They are then in control of the list of nodes with that capability. While doing this is not in itself directly harmful, it could lead to eclipse attacks.

    Because the key changes periodically and isn't predictable, and assuming that the Polkadot DHT is sufficiently large, it is not realistic for an attack like this to be maintained in the long term.
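For intuition, a hedged sketch of the closeness rule described above (assuming the sha2 crate; the peer IDs here are toy byte strings, not real libp2p PeerIds):

use sha2::{Digest, Sha256};

// XOR distance between two 32-byte hashes, as used by Kademlia.
fn xor_distance(a: &[u8; 32], b: &[u8; 32]) -> [u8; 32] {
    let mut out = [0u8; 32];
    for i in 0..32 {
        out[i] = a[i] ^ b[i];
    }
    out
}

fn main() {
    // Placeholder for the capability key that rotates every epoch.
    let key: [u8; 32] = Sha256::digest(b"example-capability-key").into();

    let peers: Vec<&[u8]> = vec![&b"peer-a"[..], &b"peer-b"[..], &b"peer-c"[..]];

    // The nodes whose sha256(peer_id) is closest to the key (by XOR distance)
    // are the ones responsible for storing the provider records.
    let mut ranked: Vec<(&[u8], [u8; 32])> =
        peers.iter().map(|p| (*p, Sha256::digest(*p).into())).collect();
    ranked.sort_by_key(|(_, h)| xor_distance(h, &key));

    for (peer, _) in &ranked {
        println!("{}", String::from_utf8_lossy(peer));
    }
}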

    -

    Performance, Ergonomics, and Compatibility

    -

    Performance

    +

    Performance, Ergonomics, and Compatibility

    +

    Performance

    The DHT mechanism generally has a low overhead, especially given that publishing providers is done only every 24 hours.

    Doing a Kademlia iterative query then sending a provider record shouldn't take more than around 50 kiB in total of bandwidth for the parachain bootnode.

    Assuming 1000 nodes with a specific capability, the 20 Polkadot full nodes corresponding to that capability will each receive a sudden spike of a few megabytes of networking traffic when the key rotates. Again, this is relatively negligible. If this becomes a problem, one can add a random delay before a node registers itself to be the provider of the key corresponding to BabeApi_next_epoch.

Maybe the biggest uncertainty is the traffic that the 20 Polkadot full nodes will receive from light clients that want to know the nodes with a capability. If this ever becomes a problem, the value of 20 is an arbitrary constant that can be increased for more redundancy.

    -

    Ergonomics

    +

    Ergonomics

    Irrelevant.

    -

    Compatibility

    +

    Compatibility

    Irrelevant.

    -

    Prior Art and References

    +

    Prior Art and References

    Unknown.

    -

    Unresolved Questions

    +

    Unresolved Questions

    While it fundamentally doesn't change much to this RFC, using BabeApi_currentEpoch and BabeApi_nextEpoch might be inappropriate. I'm not familiar enough with good practices within the runtime to have an opinion here. Should it be an entirely new pallet?

This RFC would make it possible to reliably discover archive nodes, which would make it possible to reliably send archive node requests, something that isn't currently possible. This could solve the problem of finding archive RPC node providers by migrating archive-related requests to the native peer-to-peer protocol rather than JSON-RPC.

    @@ -4116,12 +4036,12 @@ We could even add to the peer-to-peer network nodes that are only capable of ser AuthorsZondax AG, Parity Technologies -

    Summary

    +

    Summary

    To interact with chains in the Polkadot ecosystem it is required to know how transactions are encoded and how to read state. For doing this, Polkadot-SDK, the framework used by most of the chains in the Polkadot ecosystem, exposes metadata about the runtime to the outside. UIs, wallets, and others can use this metadata to interact with these chains. This makes the metadata a crucial piece of the transaction encoding as users are relying on the interacting software to encode the transactions in the correct format.

It gets even more important when the user signs the transaction in an offline wallet, as the device by its nature cannot get access to the metadata without relying on the online wallet to provide it. This means the offline wallet needs to trust an online party, rendering the security assumptions of the offline device moot.

    This RFC proposes a way for offline wallets to leverage metadata, within the constraints of these. The design idea is that the metadata is chunked and these chunks are put into a merkle tree. The root hash of this merkle tree represents the metadata. The offline wallets can use the root hash to decode transactions by getting proofs for the individual chunks of the metadata. This root hash is also included in the signed data of the transaction (but not sent as part of the transaction). The runtime is then including its known metadata root hash when verifying the transaction. If the metadata root hash known by the runtime differs from the one that the offline wallet used, it very likely means that the online wallet provided some fake data and the verification of the transaction fails.

    Users depend on offline wallets to correctly display decoded transactions before signing. With merkleized metadata, they can be assured of the transaction's legitimacy, as incorrect transactions will be rejected by the runtime.

    -

    Motivation

    +

    Motivation

Polkadot's innovative design (both relay chain and parachains) presents developers with the ability to upgrade their network as frequently as they need. These systems manage to keep integrations working after upgrades with the help of FRAME Metadata. This Metadata, which is in the order of half a MiB for most Polkadot-SDK chains, completely describes chain interfaces and properties. Securing this metadata is key for users to be able to interact with the Polkadot-SDK chain in the expected way.

On the other hand, offline wallets provide a secure way for blockchain users to hold their own keys (some do a better job than others). These devices seldom get upgraded, usually account for one particular network, and have very small internal memories. Currently in the Polkadot ecosystem there is no secure way for these offline devices to know the latest Metadata of the Polkadot-SDK chain they are interacting with. This results in a plethora of similar yet slightly different offline wallets for all the different Polkadot-SDK chains, as well as the impediment of keeping these regularly updated, thus not fully leveraging Polkadot-SDK's unique forkless upgrade feature.

    The two main reasons why this is not possible today are:

    @@ -4155,7 +4075,7 @@ We could even add to the peer-to-peer network nodes that are only capable of ser
  • Offline wallet implementors
  • The idea for this RFC was brought up by runtime implementors and was extensively discussed with offline wallet implementors. It was designed in such a way that it can work easily with the existing offline wallet solutions in the Polkadot ecosystem.

    -

    Explanation

    +

    Explanation

    The FRAME metadata provides a wide range of information about a FRAME based runtime. It contains information about the pallets, the calls per pallet, the storage entries per pallet, runtime APIs, and type information about most of the types that are used in the runtime. For decoding extrinsics on an offline wallet, what is mainly required is type information. Most of the other information in the FRAME metadata is actually not required for decoding extrinsics and thus it can be removed. Therefore, the following is a proposal on a custom representation of the metadata and how this custom metadata is chunked, ensuring that only the needed chunks required for decoding a particular extrinsic are sent to the offline wallet. The necessary information to transform the FRAME metadata type information into the type information presented in this RFC will be provided. However, not every single detail on how to convert from FRAME metadata into the RFC type information is described.

First, the MetadataDigest is introduced. After that, ExtrinsicMetadata is covered, and finally the actual format of the type information. Then pruning of unrelated type information is covered, and how to generate the TypeRefs. In the last step, the merkle tree calculation is explained.

    Metadata digest

    @@ -4422,21 +4342,21 @@ nodes: [[[2, 3], [4, 5]], [0, 1]] }

    For the runtime the metadata hash is generated at compile time. Wallets will have to generate the hash using the FRAME metadata.

The signing side should control whether it wants to add the metadata hash or omit it. To accomplish this, one extra byte is added to the extrinsic itself. If this byte is 0 the metadata hash is not required, and if the byte is 1 the metadata hash is added using V1 of the MetadataDigest. This leaves room for future versions of the MetadataDigest format. When the metadata hash should be included, it is only added to the data that is signed. This brings the advantage of not having to include 32 bytes in the extrinsic itself, because the runtime knows the metadata hash as well and can add it to the signed data if required. This is similar to the genesis hash, although that is not added conditionally to the signed data.
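A hedged sketch of that signing-side rule (types and byte layout are simplified and illustrative, not the exact Polkadot-SDK transaction-extension format):

enum MetadataHashMode {
    Disabled,            // the extra extrinsic byte is 0
    EnabledV1([u8; 32]), // the extra extrinsic byte is 1; hash per MetadataDigest V1
}

// Build the payload that actually gets signed. The mode byte travels in the
// extrinsic; the 32-byte hash is only ever appended to the signed data.
fn signed_payload(call_and_extensions: &[u8], mode: &MetadataHashMode) -> Vec<u8> {
    let mut payload = call_and_extensions.to_vec();
    match mode {
        MetadataHashMode::Disabled => payload.push(0),
        MetadataHashMode::EnabledV1(hash) => {
            payload.push(1);
            payload.extend_from_slice(hash);
        }
    }
    payload
}

fn main() {
    let call = [0xab, 0xcd];
    let with_hash = signed_payload(&call, &MetadataHashMode::EnabledV1([0u8; 32]));
    let without = signed_payload(&call, &MetadataHashMode::Disabled);
    assert_eq!(with_hash.len(), call.len() + 1 + 32);
    assert_eq!(without.len(), call.len() + 1);
}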

    -

    Drawbacks

    +

    Drawbacks

    The chunking may not be the optimal case for every kind of offline wallet.

    -

    Testing, Security, and Privacy

    +

    Testing, Security, and Privacy

    All implementations are required to strictly follow the RFC to generate the metadata hash. This includes which hash function to use and how to construct the metadata types tree. So, all implementations are following the same security criteria. As the chains will calculate the metadata hash at compile time, the build process needs to be trusted. However, this is already a solved problem in the Polkadot ecosystem by using reproducible builds. So, anyone can rebuild a chain runtime to ensure that a proposal is actually containing the changes as advertised.

    Implementations can also be tested easily against each other by taking some metadata and ensuring that they all come to the same metadata hash.

    Privacy of users should also not be impacted. This assumes that wallets will generate the metadata hash locally and don't leak any information to third party services about which chunks a user will send to their offline wallet. Besides that, there is no leak of private information as getting the raw metadata from the chain is an operation that is done by almost everyone.

    -

    Performance, Ergonomics, and Compatibility

    -

    Performance

    +

    Performance, Ergonomics, and Compatibility

    +

    Performance

    There should be no measurable impact on performance to Polkadot or any other chain using this feature. The metadata root hash is calculated at compile time and at runtime it is optionally used when checking the signature of a transaction. This means that at runtime no performance heavy operations are done.

    Ergonomics & Compatibility

    The proposal alters the way a transaction is built, signed, and verified. So, this imposes some required changes to any kind of developer who wants to construct transactions for Polkadot or any chain using this feature. As the developer can pass 0 for disabling the verification of the metadata root hash, it can be easily ignored.

    -

    Prior Art and References

    +

    Prior Art and References

    RFC 46 produced by the Alzymologist team is a previous work reference that goes in this direction as well.

In other ecosystems, there are other solutions to the problem of trusted signing. Cosmos, for example, has a standardized way of transforming a transaction into a textual representation, and this textual representation is included in the signed data. This basically achieves the same as what this RFC proposes, but it requires that, for every transaction applied in a block, every node in the network generate this textual representation to ensure the transaction signature is valid.

    -

    Unresolved Questions

    +

    Unresolved Questions

    None.

    -

    Explanation

    +

    Explanation

    Overview

    The dynamic pricing model sets the new price based on supply and demand in the previous period. The model is a function of the number of Regions sold, piecewise-defined by two power functions.
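As a purely illustrative sketch (the exponents, target, and scaling here are placeholders, not the parameters proposed by the RFC), a piecewise model of this shape can be written as:

// Placeholder piecewise price update: one power curve below the sales target,
// another above it. All constants are hypothetical.
fn next_price(old_price: f64, regions_sold: f64, target: f64, limit: f64) -> f64 {
    if regions_sold <= target {
        // Under-sold period: price decays towards a floor.
        let factor = (regions_sold / target).powf(2.0);
        old_price * factor.max(0.5)
    } else {
        // Over-sold period: price grows with demand above the target.
        let excess = (regions_sold - target) / (limit - target);
        old_price * (1.0 + excess.powf(2.0))
    }
}

fn main() {
    println!("{:.2}", next_price(10.0, 20.0, 50.0, 100.0)); // weak demand -> lower price
    println!("{:.2}", next_price(10.0, 90.0, 50.0, 100.0)); // strong demand -> higher price
}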

    -

    Explanation

    +

    Explanation

    We are first going to explain the proof format being used:

    #![allow(unused)]
     fn main() {
    @@ -6379,24 +6299,24 @@ actual exported function signature looks like:

    already gets the proof passed as Vec<u8>. This proof needs to be decoded to the actual Proof type as explained above. The proof and the SCALE encoded account_id of the sender are used to verify the ownership of the SessionKeys.
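A hedged sketch of that flow (the Proof layout, the decode step, and the signature check are all placeholders, not the audited implementation):

// Placeholder proof: one signature per public session key, each over the
// SCALE-encoded account id of the claimed owner.
struct Proof {
    signatures: Vec<Vec<u8>>,
}

// Stand-in for SCALE-decoding the `Vec<u8>` passed to the runtime API.
fn decode_proof(raw: &[u8]) -> Option<Proof> {
    if raw.is_empty() {
        None
    } else {
        Some(Proof { signatures: vec![raw.to_vec()] })
    }
}

// Stand-in for verifying a signature with the session key's crypto scheme.
fn verify_signature(_public_key: &[u8], signature: &[u8], _payload: &[u8]) -> bool {
    !signature.is_empty()
}

fn owns_session_keys(raw_proof: &[u8], encoded_account_id: &[u8], public_keys: &[Vec<u8>]) -> bool {
    let Some(proof) = decode_proof(raw_proof) else { return false };
    proof.signatures.len() == public_keys.len()
        && public_keys
            .iter()
            .zip(&proof.signatures)
            // Each session key must have signed the SCALE-encoded account id.
            .all(|(pk, sig)| verify_signature(pk, sig, encoded_account_id))
}

fn main() {
    let owner = vec![7u8; 32]; // placeholder SCALE-encoded account id
    let keys = vec![vec![1u8; 32]];
    assert!(owns_session_keys(&[0xaa], &owner, &keys));
}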

    -

    Drawbacks

    +

    Drawbacks

Validator operators need to pass their account id when rotating their session keys in a node. This will require updating some high-level docs and making users familiar with the slightly changed ergonomics.

    -

    Testing, Security, and Privacy

    +

    Testing, Security, and Privacy

Testing of the new changes is quite easy, as it only requires passing an appropriate owner for the current testing context. The changes to the proof generation and verification were audited to ensure they are correct.

    -

    Performance, Ergonomics, and Compatibility

    -

    Performance

    +

    Performance, Ergonomics, and Compatibility

    +

    Performance

This does not have any impact on the overall performance; only setting SessionKeys will require more weight.

    -

    Ergonomics

    +

    Ergonomics

    If the proposal alters exposed interfaces to developers or end-users, which types of usage patterns have been optimized for?

    -

    Compatibility

    +

    Compatibility

Introduces a new version of the SessionKeys runtime API. Thus, nodes should be updated before a runtime containing these changes is enacted; otherwise they will fail to generate session keys.

    -

    Prior Art and References

    +

    Prior Art and References

    None.

    -

    Unresolved Questions

    +

    Unresolved Questions

    None.

    Substrate implementation of the RFC.

    @@ -6431,16 +6351,16 @@ a runtime is enacted that contains these changes otherwise they will fail to gen AuthorsPierre Krieger -

    Summary

    +

    Summary

    Rather than enforce a limit to the total memory consumption on the client side by loading the value at :heappages, enforce that limit on the runtime side.

    -

    Motivation

    +

    Motivation

    From the early days of Substrate up until recently, the runtime was present in two forms: the wasm runtime (wasm bytecode passed through an interpreter) and the native runtime (native code directly run by the client).

Since the wasm runtime has a lower amount of available memory (4 GiB maximum) compared to the native runtime, and in order to ensure that the wasm and native runtimes always produce the same outcome, it was necessary to clamp the amount of memory available to both runtimes to the same value.

    In order to achieve this, a special storage key (a "well-known" key) :heappages was introduced and represents the number of "wasm pages" (one page equals 64kiB) of memory that are available to the memory allocator of the runtimes. If this storage key is absent, it defaults to 2048, which is 128 MiB.
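A quick check of that default (a trivial sketch, just the arithmetic stated above):

fn main() {
    let default_heap_pages: u64 = 2048;
    let wasm_page_size: u64 = 64 * 1024; // one wasm page = 64 KiB
    assert_eq!(default_heap_pages * wasm_page_size, 128 * 1024 * 1024); // 128 MiB
}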

The native runtime has since disappeared, but the concept of "heap pages" still exists. This RFC proposes a simplification to the design of Polkadot by removing the concept of "heap pages" as currently known, and proposes alternative ways to achieve the goal of limiting the amount of memory available.

    Stakeholders

    Client implementers and low-level runtime developers.

    -

    Explanation

    +

    Explanation

    This RFC proposes the following changes to the client:

    • The client no longer considers :heappages as special.
    • @@ -6466,25 +6386,25 @@ a runtime is enacted that contains these changes otherwise they will fail to gen

    Each parachain can choose the option that they prefer, but the author of this RFC strongly suggests either option C or B.

    -

    Drawbacks

    +

    Drawbacks

    In case of path A, there is one situation where the behaviour pre-RFC is not equivalent to the one post-RFC: when a host function that performs an allocation (for example ext_storage_get) is called, without this RFC this allocation might fail due to reaching the maximum heap pages, while after this RFC this will always succeed. This is most likely not a problem, as storage values aren't supposed to be larger than a few megabytes at the very maximum.

In the unfortunate event where the runtime runs out of memory, path B would make it more difficult to relax the memory limit, as we would need to re-upload the entire Wasm, compared to updating only :heappages in path A or before this RFC. In the case where the runtime runs out of memory only in the specific event where the Wasm runtime is modified, this could brick the chain. However, this situation is no different from the thousands of other ways that a bug in the runtime can brick a chain, and there's no reason to be particularly worried about this situation in particular.

    -

    Testing, Security, and Privacy

    +

    Testing, Security, and Privacy

    This RFC would reduce the chance of a consensus issue between clients. The :heappages are a rather obscure feature, and it is not clear what happens in some corner cases such as the value being too large (error? clamp?) or malformed. This RFC would completely erase these questions.

    -

    Performance, Ergonomics, and Compatibility

    -

    Performance

    +

    Performance, Ergonomics, and Compatibility

    +

    Performance

In the case of path A, it is unclear how performance would be affected. Path A consists of moving client-side operations to the runtime without changing these operations, and as such performance differences are expected to be minimal. Overall, we're talking about one addition/subtraction per malloc and per free, so this is more than likely completely negligible.

In the case of paths B and C, the performance gain would be a net positive, as this RFC strictly removes things.

    -

    Ergonomics

    +

    Ergonomics

    This RFC would isolate the client and runtime more from each other, making it a bit easier to reason about the client or the runtime in isolation.

    -

    Compatibility

    +

    Compatibility

    Not a breaking change. The runtime-side changes can be applied immediately (without even having to wait for changes in the client), then as soon as the runtime is updated, the client can be updated without any transition period. One can even consider updating the client before the runtime, as it corresponds to path C.

    -

    Prior Art and References

    +

    Prior Art and References

    None.

    -

    Unresolved Questions

    +

    Unresolved Questions

    None.

    This RFC follows the same path as https://github.com/polkadot-fellows/RFCs/pull/4 by scoping everything related to memory allocations to the runtime.

    @@ -6535,7 +6455,7 @@ The :heappages are a rather obscure feature, and it is not clear wh AuthorsSourabh Niyogi -

    Summary

    +

    Summary

This RFC proposes to add the two dominant smart contract programming languages in the Polkadot ecosystem to AssetHub: EVM + ink!/Coreplay. The objective is to increase DOT Revenue by making AssetHub accessible to (1) Polkadot Rollups;

@@ -6544,7 +6464,7 @@ EVM + ink!/Coreplay. The objective is to increase DOT Revenue by making AssetHu

These changes in AssetHub are enabled by key Polkadot 2.0 technologies: PolkaVM supporting Coreplay, and hyper data availability in Blobs Chain.

    -

    Motivation

    +

    Motivation

    EVM Contracts are pervasive in the Web3 blockchain ecosystem, while Polkadot 2.0's Coreplay aims to surpass EVM Contracts in ease-of-use using PolkaVM's RISC architecture.

Asset Hub for Polkadot does not have smart contract capabilities,

@@ -6612,7 +6532,7 @@ To the best of our knowledge, release times of this are

Explanation

+

    Explanation

    Limit Smart Contract Weight allocation

    AssetHub is a major component of the Polkadot 2.0 Minimal Relay Chain architecture. It is critical that smart contract developers not be able to clog AssetHub's blockspace for other mission critical applications, such as Staking and Governance.

As such, it is proposed that at most 50% of the available weight in AssetHub for Polkadot blocks be allocated to the smart contract pallets (EVM, ink! and/or Coreplay). While to date AssetHub has limited usage, it is believed (see here) that imposing this limit on the smart contract pallets would limit the effect on non-smart-contract usage. An excessively small weight limit like 10% or 20% may limit the attractiveness of Polkadot as a platform for Polkadot rollups and EVM Contracts. An excessively large weight like 90% or 100% may threaten AssetHub usage.

    @@ -6665,22 +6585,22 @@ We believe the lack of growth of parachains in the last 12-18 months, and the hi

    Maintaining EVM Contracts on AssetHub may be seen as difficult and may require Substrate engineers to maintain EVM Pallets and manage the xcContracts.
    We believe this cost will be relatively small based on the proven deployment of Astar and Moonbeam.
    The cost will be justified compared to the potential upside of new DOT revenue from defi/NFT applications on AssetHub and the potential for utilizing Polkadot DA for Polkadot rollups.

    -

    Testing, Security, and Privacy

    +

    Testing, Security, and Privacy

    Testing the mapping between assetIDs and EVM Contracts thoroughly will be critical.

    Having a complete working OP Stack chain using AssetHub for Kusama (1000) and Blobs on Kusama (3338) would be highly desirable, but is unlikely to be required.

    -

    Performance, Ergonomics, and Compatibility

    -

    Performance

    +

    Performance, Ergonomics, and Compatibility

    +

    Performance

    The weight limit of 50% is expected to be adequate to limit excess smart contract usage at this time.

Storage bloat is expected to be kept to a minimum with the nominal 0.01 DOT existential deposit.

    -

    Ergonomics

    +

    Ergonomics

Note that the existential deposit is not 0 DOT; it is being lowered from 0.1 DOT to 0.01 DOT. Many developers routinely deploy their EVM contracts on many different EVM chains in parallel, and this non-zero ED may pose problems for some of them.

The 0.01 DOT (worth $0.075 USD) is unlikely to pose a significant issue.

    -

    Compatibility

    +

    Compatibility

It is believed that the EVM pallet (as deployed on Moonbeam + Astar) is sufficiently compatible with Ethereum, and that the ED of 0.01 DOT poses negligible issues.

The messaging architecture for rollups is not compatible with Polkadot XCM.
It is not clear if leading rollup platforms (OP Stack, Arbitrum Orbit, Polygon zkEVM) could be made compatible with XCM.

    -

    Unresolved Questions

    +

    Unresolved Questions

    It is highly desirable to know the throughput of Polkadot DA with popular rollup architectures OP Stack and Arbitrum Orbit.
    This would enable CEXs and EVM L2 builders to choose Polkadot over Ethereum.

    @@ -6722,7 +6642,7 @@ This would enable CEXs and EVM L2 builders to choose Polkadot over Ethereum.

    AuthorAdam Clay Steeber -

    Summary

    +

    Summary

    This RFC proposes adding a trivial governance track on Kusama to facilitate X (formerly known as Twitter) posts on the @kusamanetwork account. The technical aspect of implementing this in the runtime is very inconsequential and straight-forward, though it might get more technical if the Fellowship wants to regulate this track with a non-existent permission set. If this is implemented it would need to be followed up with:

    @@ -6730,7 +6650,7 @@ with a non-existent permission set. If this is implemented it would need to be f
  • the establishment of specifications for proposing X posts via this track, and
  • the development of tools/processes to ensure that the content contained in referenda enacted in this track would be automatically posted on X.
  • -

    Motivation

    +

    Motivation

The overall motivation for this RFC is to decentralize the management of the Kusama brand/communication channel to KSM holders. This is necessary in my opinion primarily because of the inactivity of the account in recent history, with posts spanning weeks or months apart. I am currently unaware of who/what entity manages the Kusama X account, but if they are affiliated with Parity or W3F this proposed solution could also offload some of the legal ramifications of making (or not making)

@@ -6744,7 +6664,7 @@ for pushing boundaries and trying new unconventional ideas.

    This idea has not been formalized by any individual (or group of) KSM holder(s). To my knowledge the socialization of this idea is contained entirely in my recent X post here, but it is possible that an idea like this one has been discussed in other places. It appears to me that the ecosystem would welcome a change like this which is why I am taking action to formalize the discussion.

    -

    Explanation

    +

    Explanation

    The implementation of this idea can be broken down into 3 primary phases:

    Phase 1 - Track configurations

First, we begin with this RFC to ensure all feedback can be discussed and implemented in the proposal. After the Fellowship and the community come to a reasonable

@@ -6797,7 +6717,7 @@ to implement them. Here's what would be needed:

  • a UI to allow layman users to propose referenda on this track
  • After everything is complete, we can update the Kusama wiki to include documentation on the X post specifications and include links to the tools/UI.

    -

    Drawbacks

    +

    Drawbacks

    The main drawback to this change is that it requires a lot of off-chain coordination. It's easy enough to include the track on Kusama but it's a totally different challenge to make it function as intended. The tools need to be built and the auth tokens need to be managed. It would certainly add an administrative burden to whoever manages the X account since they would either need to run the tools themselves or manage auth tokens.

    @@ -6809,32 +6729,112 @@ If that happens, we risk getting Kusama banned on X!

    agency to manage posts. It wouldn't be decentralized but it would probably be more effective in terms of creating good content.

    Finally, this solution is merely pseudo-decentralization since the X account manager would still have ultimate control of the account. It's decentralized insofar as the auth tokens are given to people actually running the tools; a house of cards is required to facilitate X posts via this track. Not ideal.

    -

    Testing, Security, and Privacy

    +

    Testing, Security, and Privacy

    There's major precedent for configuring tracks on openGov given the amount of power tracks have, so it shouldn't be hard to come up with a sound configuration. That's why I recommend restricting permissions of this track to remarks and batches of remarks, or something equally inconsequential.

    Building the tools for this implementation is really straight-forward and could be audited by Fellowship members, and the community at large, on Github.

    The largest security concern would be the management of Kusama's X account's auth tokens. We would need to ensure that they aren't compromised.

    -

    Performance, Ergonomics, and Compatibility

    -

    Performance

    +

    Performance, Ergonomics, and Compatibility

    +

    Performance

    If a track on Kusama promises users that compliant referenda enacted therein would be posted on Kusama's X account, users would expect that track to perform as promised. If the house of cards tumbles down and a compliant referendum doesn't actually get anything posted, users might think that Kusama is broken or unreliable. This could be damaging to Kusama's image and cause people to question the soundness of other features on Kusama.

As mentioned in the drawbacks, the performance of this feature would depend on off-chain coordination. We can reduce the administrative burden of this coordination by funding third parties from the Treasury to deal with it, but then we're relying on trusting these parties.

    -

    Ergonomics

    +

    Ergonomics

    By adding a new track to Kusama, governance platforms like Polkassembly or Nova Wallet would need to include it on their applications. This shouldn't be too much of a burden or overhead since they've already built the infrastructure for other openGov tracks.

    -

    Compatibility

    +

    Compatibility

    This change wouldn't break any compatibility as far as I know.

    References

    One reference to a similar feature requiring on-chain/off-chain coordination would be the Kappa-Sigma-Mu Society. Nothing on-chain necessarily enforces the rules or facilitates bids, challenges, defenses, etc. However, the Society has managed to maintain itself with integrity to its rules. So I don't think this is totally out of Kusama's scope. But it will require some off-chain effort to maintain.

    -

    Unresolved Questions

    +

    Unresolved Questions

    • Who will develop the tools necessary to implement this feature? How do we select them?
    • How can this idea be better implemented with on-chain/substrate features?
    +

    (source)

    +

    Table of Contents

    + +

    RFC-0073: Decision Deposit Referendum Track

    +
    + + + +
    Start Date12 February 2024
    DescriptionAdd a referendum track which can place the decision deposit on any other track
    AuthorsJelliedOwl
    +
    +

    Summary

    +

    The current size of the decision deposit on some tracks is too high for many proposers. As a result, those needing to use it have to find someone else willing to put up the deposit for them - and a number of legitimate attempts to use the root track have timed out. This track would provide a more affordable (though slower) route for these holders to use the root track.

    +

    Motivation

    +

There have been recent attempts to use the Kusama root track which have timed out with no decision deposit placed. Usually, these referenda have been related to parachain registration issues.

    +

    Explanation

    +

I propose to address this by adding a new referendum track, [22] Referendum Deposit, which can place the decision deposit on another referendum. This would require the following changes:

    +
      +
• [Referenda Pallet] Modify the placeDecisionDeposit function to additionally allow it to be called by root, with a root call bypassing the requirement for a deposit payment (see the sketch after this list).
    • +
    • [Runtime] Add a new referendum track which can only call referenda->placeDecisionDeposit and the utility functions.
    • +
    +
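A minimal sketch of the intended dispatch behaviour (names and types are simplified and illustrative; this is not the actual pallet_referenda code):

#[derive(Debug, PartialEq)]
enum Origin {
    Root,
    Signed(u64), // simplified account id
}

// Sketch of placeDecisionDeposit after the change: a signed caller reserves
// the configured deposit, while Root places it without any payment.
fn place_decision_deposit(origin: Origin, configured_deposit: u64) -> Result<u64, &'static str> {
    match origin {
        Origin::Root => Ok(0),
        Origin::Signed(_who) => Ok(configured_deposit),
    }
}

fn main() {
    assert_eq!(place_decision_deposit(Origin::Root, 1_000), Ok(0));
    assert_eq!(place_decision_deposit(Origin::Signed(42), 1_000), Ok(1_000));
}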

    Referendum track parameters - Polkadot

    +
      +
    • Decision deposit: 1000 DOT
    • +
    • Decision period: 14 days
    • +
    • Confirmation period: 12 hours
    • +
• Enactment period: 2 hours
    • +
    • Approval & Support curves: As per the root track, timed to match the decision period
    • +
    • Maximum deciding: 10
    • +
    +

    Referendum track parameters - Kusama

    +
      +
    • Decision deposit: 33.333333 KSM
    • +
    • Decision period: 7 days
    • +
    • Confirmation period: 6 hours
    • +
    • Enactment period: 1 hour
    • +
    • Approval & Support curves: As per the root track, timed to match the decision period
    • +
    • Maximum deciding: 10
    • +
    +

    Drawbacks

    +

    This track would provide a route to starting a root referendum with a much-reduced slashable deposit. This might be undesirable but, assuming the decision deposit cost for this track is still high enough, slashing would still act as a disincentive.

    +

An alternative to this might be to reduce the decision deposit size of some of the more expensive tracks. However, part of the purpose of the high deposit - at least on the root track - is to prevent spamming the limited queue with junk referenda.

    +

    Testing, Security, and Privacy

    +

Will need additional test cases for the modified pallet and runtime. No security or privacy issues.

    +

    Performance, Ergonomics, and Compatibility

    +

    Performance

    +

    No significant performance impact.

    +

    Ergonomics

    +

    Only changes related to adding the track. Existing functionality is unchanged.

    +

    Compatibility

    +

    No compatibility issues.

    +

    Prior Art and References

    + +

    Unresolved Questions

    +

Feedback on whether my proposed implementation of this is the best way to address the issue - including which calls the track should be allowed to make. Are the track parameters correct, or should we use something different? Alternatives would be welcome.

    diff --git a/proposed/0004-remove-unnecessary-allocator-usage.html b/proposed/0004-remove-unnecessary-allocator-usage.html index f12da8e..b7a396b 100644 --- a/proposed/0004-remove-unnecessary-allocator-usage.html +++ b/proposed/0004-remove-unnecessary-allocator-usage.html @@ -90,7 +90,7 @@ @@ -447,7 +447,7 @@ This would remove the possibility to synchronize older blocks, which is probably - @@ -461,7 +461,7 @@ This would remove the possibility to synchronize older blocks, which is probably - diff --git a/proposed/0074-stateful-multisig-pallet.html b/proposed/0074-stateful-multisig-pallet.html index 558940e..703d732 100644 --- a/proposed/0074-stateful-multisig-pallet.html +++ b/proposed/0074-stateful-multisig-pallet.html @@ -90,7 +90,7 @@ @@ -780,7 +780,7 @@ Implement call filters. This will allow multisig accounts to only accept certain diff --git a/proposed/0073-referedum-deposit-track.html b/stale/0073-referedum-deposit-track.html similarity index 71% rename from proposed/0073-referedum-deposit-track.html rename to stale/0073-referedum-deposit-track.html index 76428df..df9afc0 100644 --- a/proposed/0073-referedum-deposit-track.html +++ b/stale/0073-referedum-deposit-track.html @@ -90,7 +90,7 @@ @@ -259,13 +259,10 @@ @@ -273,13 +270,10 @@