
Result

The function returns 0 on success. On error, -1 is returned, and the output buffer should be considered uninitialized.

-

(source)

-

Table of Contents

- -

RFC-0000: Pre-ELVES soft consensus

-
- - - -
Start Date: Date of initial proposal
Description: Provide and exploit a soft consensus before launching approval checks
Authors: Jeff Burdges, Alistair Stewart
-
-

Summary

-

Availability (bitfield) votes gain a preferred_fork flag which expresses the validator's opinion upon relay chain equivocations and babe forks, while still sharing availability votes for all relay chain blocks. We make relay chain block production require a supermajority with preferred_fork set, so forks cannot advance if they split the honest validators, which creates an early soft consensus. We similarly defend ELVES from relay chain equivocation attacks and prevent redundant approvals across babe forks.

-

Motivation

-

We've always known relay chain equivocations break the ELVES threat model. We originally envisioned ELVES having fallback pathways, but fallbacks require dangerous, subtle debugging. We support more assignment schemes in ELVES this way too, including one novel post-quantum scheme, and very low CPU usage schemes.

-

We expect this early soft consensus creates back pressure that improves performance under babe forks.

-

Alistair: TODO?

-

Stakeholders

-

We modify the availability votes and restrict relay chain block production, fork choice, and ELVES start conditions, so this mostly affects the parachains protocol. See the alternatives notes on the flag under sassafras chains like JAM.

-

Explanation

-

Availability voting

-

At present, availability votes have a bitfield representing the cores, a relay_parent, and a signature. We process these on-chain in several steps: we first validate the signatures, zero any bits for cores included/enacted between the relay_parent and our predecessor, sum the set bits for each core, and finally include/enact a core if its sum exceeds 2/3rds of the validators.

-

Availability votes gain a preferred_fork flag, which honest validators set for exactly one relay_parent on their availability votes in a block production slot. We say a validator prefers a fork given by chain head h if it provides an availability vote with relay_parent = h and preferred_fork set.

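As a concrete sketch, the extended vote might look roughly like this in Rust; the type aliases are placeholders standing in for the real primitives, not the actual polkadot types:

type Hash = [u8; 32];
type Signature = [u8; 64];

/// Sketch of an availability vote extended with the preferred_fork flag.
pub struct AvailabilityVote {
    /// One bit per availability core: set if we hold our chunk for the
    /// candidate occupying that core.
    pub bitfield: Vec<bool>,
    /// The relay chain block head these bits refer to.
    pub relay_parent: Hash,
    /// Set on at most one vote per slot; marks relay_parent as the fork
    /// this validator prefers in that slot.
    pub preferred_fork: bool,
    /// The validator's signature over the fields above.
    pub signature: Signature,
}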
-

Validators receive a minor equivocations slash if they claim to set preferred_fork for two different relay_parents in the same slot. In sassafras, this means preferred fork equivocations can only occur for relay chain equivocations, but under babe preferred fork equivocations could occur between primary and secondary blocks, or other primary blocks.

-

All validators still provide availability votes for all forks, because those non-preferred votes could still help enact candidates faster, but those non-preferred votes have preferred_fork zeroed.

-

Around this, validators could optionally provide an early availability vote that commits to their preferred fork, and then later provide a second availability vote stating the same preferred fork but with a fuller bitfield, provided doing so somehow helps relay chain block producers.

-

Fork choice

-

We require relay chain block producers build upon forks preferred by 2 f + 1 validators. In other words, a relay chain block with parent p must contain availability bitfield votes from 2 f + 1 validators with relay_parent = p and preferred_fork set. It follows our preferred fork votes override other fork choice priorities.

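A minimal sketch of the resulting parent check, reusing the AvailabilityVote sketch above and assuming votes have already been deduplicated to one per validator for this slot:

/// Returns true if a child block may build upon `parent`, i.e. if
/// 2f+1 validators prefer it. Sketch only.
fn parent_is_buildable(parent: Hash, votes: &[AvailabilityVote], num_validators: usize) -> bool {
    // Standard byzantine bound: n >= 3f + 1.
    let f = (num_validators - 1) / 3;
    let preferring = votes
        .iter()
        .filter(|v| v.relay_parent == parent && v.preferred_fork)
        .count();
    preferring >= 2 * f + 1
}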
-

A relay chain block producer could lack this 2 f + 1 threshold for a prospective parent block p, in which case they must build upon the parent of p instead. We know availability votes simply being slow would cause this sometimes, in which case adding slightly more delay could save the relay chain slot. Alternatively though, two distinct relay chain blocks in the same slot could each wind up preferred by f+1 validators, in which case we must abandon the slot entirely.

-

Elves

-

We only launch the approvals process aka (machine) elves for a relay chain block p once 2 f + 1 validators prefer that block, aka 2 f + 1 validators provide availability votes with relay_parent = p and preferred_fork set. We could optionally delay this further until we have some valid descendant of p.

-

Fast pruning

-

In fact, this new fork choice logic creates more short relay chain forks than exist currently: If the validators split their votes, then we create a new fork in a later slot. We no longer need to process every fork now though.

-

Instead, availability votes from honest validators must express the correct preferred fork, which requires validators to carefully time when they judge and announce their preference flags. In babe, we need primary slots to be preferred over secondary slots, so the validators need logic that delays sending availability votes for a secondary slot, giving the primary slot enough time. We also prefer the primary slot with the smallest VRF, so we need some delay even once we receive a primary.

-

We suggest roughly this approach:

-

First, download only relay chain block headers, from which we determine our tentative preferred fork.

-

Second, we download and import only our currently tentatively preferred fork. We download our availability chunks as soon as we import a currently tentatively preferred relay chain block. We've no particular target for availability chunks other than simply some delay timer. In babe, we add some extra delay here for secondary slots, like perhaps 2 seconds minus the actual execution time, so that a fast secondary slot cannot beat a primary slot.

-

We sometimes obtain an even more preferable header during import, chunk distribution, and delays for our first tentatively preferred fork. Also, the first could simply turn out invalid. In either case, we loop to repeat this second step on our new tentatively preferred fork. We repeat this process until an import succeeds and its timers run out, without receiving any more preferable header. Actual equivocations cannot be preferable over one another, so this loop terminates reasonably quickly.

-

Next, we broadcast our availability vote with its relay_parent set to our tentatively preferred fork, and with its preferred_fork set.

-

Finally, if 2 f + 1 other validators have a different preference from us, then we download and import their preferred relay chain block, fetch chunks for it, and provide availability votes with preferred_fork zero. It's possible this occurs before our own preference logic finishes, in which case we probably still send out our preference, if only for forensic evidence.

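The whole procedure might be sketched as the loop below; best_header, the import-and-fetch step, and the extra delay are hypothetical stand-ins for the real subsystems, passed in as closures so the sketch stays self-contained:

/// Sketch of the preference loop described above, for any header type H.
fn determine_preference<H: Clone + PartialEq>(
    // Returns the currently most preferable header: primary beats
    // secondary, and the smallest VRF wins among primaries.
    mut best_header: impl FnMut() -> H,
    // Imports the block and fetches our availability chunks; false if invalid.
    mut import_and_fetch_chunks: impl FnMut(&H) -> bool,
    // Extra padding, e.g. so a fast secondary slot cannot beat a primary.
    mut extra_delay: impl FnMut(&H) -> std::time::Duration,
) -> H {
    loop {
        let tentative = best_header();
        let valid = import_and_fetch_chunks(&tentative);
        std::thread::sleep(extra_delay(&tentative));
        // Loop if a more preferable header arrived meanwhile, or if this
        // one turned out invalid.
        if valid && best_header() == tentative {
            return tentative; // next: broadcast our vote with preferred_fork set
        }
    }
}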
-

Concerns: Drawbacks, Testing, Security, and Privacy

-

This adds subtle timing constraints, which could entrench existing performance obstacles. We might explore variations that ignore wall clock time.

-

We've always known relay chain equivocations break the ELVES threat model. We originally envisioned ELVES having fallback pathways, but these were complex and demanded unused code paths, which cannot realistically be debugged. Although complex, the early soft consensus scheme feels less complex overall. We know timing sucks to optimise in a distributed system, but at least doing so uses everyday code paths.

-

Performance, Ergonomics, and Compatibility

-

We expect early soft consensus introduces back pressure that radically alters performance. We no longer run approvals checks upon all forks. As primary slots occur once every other slot in expectation, one might expect a 25% reduction in CPU load, but this depends upon diverse factors.

-

We apply back pressure by dropping some whole relay chain blocks though, so this shall increase the expected parachain blocktime somewhat, but how much depends upon future optimisation work.

-

Compatibility

-

Major upgrade

-

Prior Art and References

-

...

-

Unresolved Questions

-

We halt the chain when less than 2/3 of validators are online. We consider this reasonable since governance now runs on a parachain, ELVES would not be secure, and nothing can be finalized anyways. We could perhaps add some "recovery mode" where the relay chain embeds entire system parachain blocks, but doing so might not warrant the effort required.

- -

Sassafras

-

Arguably, a sassafras RC like JAM could avoid the preferred_fork flag, by only releasing availability votes for at most one of the sassafras equivocations. We wanted availability for babe forks, but sassafras has only equivocations, so those blocks can simply be dropped.

-

In principle, a sassafras equivocation could still enter the valid chain, assuming 2/3rd of validators provide availability votes for the same equivocations. If JAM lacks the preferred_fork flag then enactment proceeds slower in this case, but this should almost never occur.

-

Threshold randomness

-

We think threshold randomness could reduce the tranche zero approval checker assignments by roughly 40%, meaning a fixed 15 vs the expected 25 in the elves paper (30 in production now).

-

We do know threshold VRF based schemes that address relay chain equivocations directly, by using the relay chain block hash as input. We have many more options with early soft consensus though. TODO In particular, we only know two post-quantum approaches to elves, and the bandwidth efficient one needs early soft consensus.

-

Mid-strength consensus

-

In this RFC, we only require that each relay chain block contain preference votes for its parent from 2/3rds of validators. We could enforce the opposite direction too: Around y>2 seconds after a validator V has seen preference votes for a chain head X from 2/3rds of validators, V begins rejecting any relay chain block that does not build upon X. This is tricky because the y>2 second delay must be long enough that most honest nodes learn both X and its preference votes. In this, we might treat preferred_fork votes as evidence for finality of the parent of the vote's relay_parent. This strengthens MEV defenses that assume some honest nodes.

-

Avoid wall clock time

-

We know parachains could base their slots upon relay chain slots, instead of wall clock time (RFC ToDo). After this happens, we could avoid or minimize wall clock timing in the relay chain too, so that relay chain slots could have a floating duration based upon workload.

-

Partial relay chain blocks

-

Above, we only discuss abandoning relay chain blocks which fail early soft consensus. We could alternatively treat them as partial blocks and build extension partial blocks that complete them, with elves probably using randomness from the final partial block.

-

(source)

-

Table of Contents

- -

RFC-0000: Validator Rewards

-
- - - -
Start Date: Date of initial proposal
Description: Rewards protocol for Polkadot validators
Authors: Jeff Burdges, ...
-
-

Summary

-

An off-chain approximation protocol should assign rewards based upon the approvals and availability work done by validators.

-

All validators track which approval votes they actually use, reporting the aggregate, after which an on-chain median computation gives a good approximation under byzantine assumptions. Approval checkers report aggregate information about which availability chunks they use too, but in availability we need a tit-for-tat game to enforce honesty, because approval committees could often bias results thanks to their small size.

-

Motivation

-

We want all or most polkadot subsystems to be profitable for validators, because otherwise operators might profit from running modified code. In particular, almost all rewards in Kusama/Polkadot should come from work done securing parachains, primarily approval checking, but also backing, availability, and support of XCMP.

-

Among these tasks, our highest priorities must be approval checks, which ensure soundness, and sending availability chunks to approval checkers. We argue below that backers must be paid strictly less than approval checkers.

-

At present though, validators' rewards have relatively little relationship to validators' operating costs, in terms of bandwidth and CPU time. Worse, polkadot's scaling makes us particularly vulnerable to "no-shows" caused by validators skipping their approval checks.

-

We're particularly concerned about hardware specs' impact upon the number of parachain cores. We've requested relatively low-spec machines so far, only four physical CPU cores, although some run even lower specs, like only two physical CPU cores. Alone, rewards cannot fix our low-spec validator problem, but rewards and outreach together should have far more impact than either alone.

-

In future, we'll further increase validator spec requirements, which directly improves polkadot's throughput, and which repeats this dynamic of purging under-spec nodes, except outreach becomes more important because de facto too many slow validators can "out vote" the faster ones.

-

Stakeholders

-

We alter the validators' rewards protocol, but with negligible impact upon rewards for honest validators who comply with hardware and bandwidth recommendations.

-

We shall still reward participation in relay chain consensus of course, which de facto means block production but not finality, but these current reward levels shall wind up greatly reduced. Any validators who manipulate block rewards now could lose rewards here, simply because rewards shift from block production to availability, but this sounds desirable.

-

We've discussed roughly this rewards protocol in https://hackmd.io/@rgbPIkIdTwSICPuAq67Jbw/S1fHcvXSF and https://github.com/paritytech/polkadot-sdk/issues/1811 as well as related topics like https://github.com/paritytech/polkadot-sdk/issues/5122

-

Logic

-

Categories

-

We alter the current rewards scheme by reducing to roughly these proportions of total rewards:

- -

We add roughly these proportions of total rewards covering parachain work:

- -

Observation

-

We track this data for each candidate during the approvals process:

-
/// Our subjective record of our availability transfers for this candidate.
pub struct CandidateRewards {
    /// Anyone who backed this parablock
    backers: [AuthorityId; NumBackers],
    /// Anyone whom we think no-showed, even only briefly.
    noshows: HashSet<AuthorityId>,
    /// Anyone who sent us chunks for this candidate
    downloaded_from: HashMap<AuthorityId, u16>,
    /// Anyone to whom we sent chunks for this candidate
    uploaded_to: HashMap<AuthorityId, u16>,
}
-
-

We no longer require this data during disputes.

- -

After we approve a relay chain block, we collect all its CandidateRewards into an ApprovalsTally, with one ApprovalTallyLine for each validator. In this, we compute approval_usages from the final run of the approvals loop, plus 0.8 for each backer.

-

As discussed below, we say a validator $u$ uses an approval vote by a validator $v$ on a candidate $c$ if the final approving run of the elves approval loop by $u$ counted the vote by $v$ towards approving the candidate $c$. We only count votes that actually get used.

-
/// Our subjective record of what we used from, and provided to, all other validators on the finalized chain
pub struct ApprovalsTally(Vec<ApprovalTallyLine>);

/// Our subjective record of what we used from, and provided to, one other validator on the finalized chain
pub struct ApprovalTallyLine {
    /// Approvals by this validator which our approvals gadget used in marking candidates approved.
    approval_usages: u32,
    /// How many times we think this validator no-showed, even only briefly.
    noshows: u32,
    /// Availability chunks we downloaded from this validator for our approval checks we used.
    used_downloads: u32,
    /// Availability chunks we uploaded to this validator whose approval checks we used.
    used_uploads: u32,
}
-
-

At finality, we sum these per-block ApprovalsTally records into one ApprovalsTally for the whole epoch so far. We can optionally sum them earlier at chain heads, but this requires mutability.

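A sketch of that summation, assuming both tallies keep one line per validator in the same validator order:

impl ApprovalsTally {
    /// Accumulate one relay-chain-block tally into the running epoch tally.
    fn accumulate(&mut self, block_tally: &ApprovalsTally) {
        for (epoch_line, block_line) in self.0.iter_mut().zip(block_tally.0.iter()) {
            epoch_line.approval_usages += block_line.approval_usages;
            epoch_line.noshows += block_line.noshows;
            epoch_line.used_downloads += block_line.used_downloads;
            epoch_line.used_uploads += block_line.used_uploads;
        }
    }
}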
-

Messages

-

After the epoch is finalized, we share the first three fields of each ApprovalTallyLine in its ApprovalsTally.

-
/// Our subjective record of what we used from some other validator on the finalized chain
pub struct ApprovalTallyMessageLine {
    /// Approvals by this validator which our approvals gadget used in marking candidates approved.
    approval_usages: u32,
    /// How many times we think this validator no-showed, even only briefly.
    noshows: u32,
    /// Availability chunks we downloaded from this validator for our approval checks we used.
    used_downloads: u32,
}

/// Our subjective record of what we used from all other validators on the finalized chain
pub struct ApprovalsTallyMessage(Vec<ApprovalTallyMessageLine>);
-
-

Actual ApprovalsTallyMessages sent over the wire must be signed of course, likely by the grandpa ed25519 key.

-

Rewards computation

-

We compute the approvals rewards for each validator by taking the median of the approval_usages fields for that validator across all validators' ApprovalsTallyMessages. We compute some noshows_percentiles for each validator similarly, but using a 2/3 percentile instead of the median.

-
let mut approval_usages_medians = Vec::new();
let mut noshows_percentiles = Vec::new();
for i in 0..num_validators {
    let mut v: Vec<u32> = approvals_tally_messages.iter().map(|atm| atm.0[i].approval_usages).collect();
    v.sort();
    approval_usages_medians.push(v[num_validators / 2]);
    let mut v: Vec<u32> = approvals_tally_messages.iter().map(|atm| atm.0[i].noshows).collect();
    v.sort();
    noshows_percentiles.push(v[num_validators / 3]);
}
-
-

Assuming more than 50% honesty, these medians tell us how many approval votes came from each validator.

-

We re-weight the used_downloads claimed by each reporter: the downloads a reporter credits to the i-th provider get scaled by the reporter's median approval usage times the expected f+1 chunks per reconstruction, divided by the total chunk downloads that reporter claimed, and then summed across reporters.

-
#[cfg(offchain)]
let mut my_missing_uploads: Vec<u64> =
    my_approvals_tally.0.iter().map(|l| l.used_uploads as u64).collect();
let mut reweighted_total_used_downloads = vec![0u64; num_validators];
for (v, atm) in approvals_tally_messages.iter().enumerate() {
    // Total chunk downloads claimed by reporter v across all providers.
    let d: u64 = atm.0.iter().map(|l| l.used_downloads as u64).sum();
    if d == 0 { continue; }
    for i in 0..num_validators {
        // beta'_{i,v} = (f+1) * alpha_v / (sum_u beta_{u,v}) * beta_{i,v}
        let from_i = (atm.0[i].used_downloads as u64)
            * ((f + 1) as u64) * (approval_usages_medians[v] as u64) / d;
        #[cfg(offchain)]
        if i == me {
            my_missing_uploads[v] = my_missing_uploads[v].saturating_sub(from_i);
        }
        reweighted_total_used_downloads[i] += from_i;
    }
}
-
-

We distribute rewards on-chain using approval_usages_medians and reweighted_total_used_downloads. Approval checkers could later change from whom they download chunks using my_missing_uploads.

-

We deduct a small amount of rewards using noshows_percentiles too, likely 1% of the rewards for an approval per no-show, but excuse some small number of noshows, à la noshows_percentiles[i].saturating_sub(MAX_NO_PENALTY_NOSHOWS).

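As a sketch, with rewards and REWARD_PER_APPROVAL as assumed bookkeeping (only MAX_NO_PENALTY_NOSHOWS is named in the text):

// Sketch: deduct 1% of an approval's reward per excess no-show.
let excess = noshows_percentiles[i].saturating_sub(MAX_NO_PENALTY_NOSHOWS) as u64;
let penalty = excess * REWARD_PER_APPROVAL / 100;
rewards[i] = rewards[i].saturating_sub(penalty);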
-

Strategies

-

In theory, validators could adopt whatever strategy they like to penalize validators who stiff them on availability redistribution rewards, except they should not stiff back, only choose other availability providers. We discuss one good strategy below, but initially this could go unimplemented.

-

Consensus

-

We avoid placing rewards logic on the relay chain for now, so we must either collect the signed ApprovalsTallyMessages and do the above computations somewhere sufficiently trusted, like a parachain, or via some distributed protocol with its own assumptions.

-

In-core

-

A dedicated rewards parachain could easily collect the ApprovalsTallyMessages and do the above computations. In this, we logically have two phases: first we build the on-chain Merkle tree M of ApprovalsTallyMessages, and second we process those into the rewards data.

-

Any in-core approach risks enough malicious collators biasing the rewards by censoring the ApprovalsTallyMessages of some validators during the first phase. After this first phase completes, our second phase proceeds deterministically.

-

As an option, each validator could handle this second phase itself by creating a single heavy transaction with n state accesses in this Merkle tree M, and this transaction sends the era points.

-

A remark for future developments...

-

JAM-like non/sub-parachain accumulation could mitigate the risk of the rewards parachain being captured.

-

JAM services all have either parachain accumulation or else non/sub-parachain accumulation.

- -

In our case, each ApprovalsTallyMessage would become a block for the first phase rewards service, so then the accumulation tracks an MMR of the rewards service block hashes, which becomes M from Option 1. At 1024 validators this requires 9 * 32 = 288 bytes for the MMR and 1024/8 = 128 bytes for a bitfield, so 416 bytes of relay chain state in total. Any validator could then add their ApprovalsTallyMessage in any order, but only one per relay chain block, so the submission timeframe should be long enough to prevent censorship.

-

Arguably after JAM, we should migrate critical functions to non/sub-parachain aka JAM services without mutable state, so this covers validator elections, DKGs, and rewards. Yet, non/sub-parachains cannot eliminate all censorship risks, so the near term benefits seem questionable.

-

Off-core

-

All validators could collect ApprovalsTallyMessages and independently compute rewards off-core. At that point, all validators have opinions about all other validators' rewards, but even among honest validators these opinions could differ if some lack some ApprovalsTallyMessages.

-

We'd have the same in-core computation problem if we perform statistics like medians upon these opinions. We could however take an optimistic approach where each validator computes medians like above, but then shares their hash of the final rewards list. If 2/3rds voted for the same hash, then we distribute rewards as above. If not, then we distribute no rewards until governance selects the correct hash.

-

We never validate in-core the signatures on ApprovalsTallyMessages or the computation, so this approach permits more direct cheating by a malicious 2/3rd majority, but if that occurs then we've broken our security assumptions anyways. It's somewhat likely these hashes do diverge during some network disruptions though, which increases our "drama" factor considerably, which may be unacceptable.

-

Explanation

-

Backing

-

Polkadot's efficiency creates subtle liveness concerns: Any time one node cannot perform one of its approval checks, Polkadot loses in expectation 3.25 approval checks, or 0.10833 parablocks. This makes back pressure essential.

-

We cannot throttle approval checks securely either, so reactive off-chain back pressure only makes sense during or before the backing phase. In other words, if nodes feel overworked themselves, or perhaps believe others to be, then they should drop backing checks, never approval checks. It follows that backing work must be rewarded less well and less reliably than approvals, as otherwise validators could benefit from behavior that harms the network.

-

We propose that one backing statement be rewarded at 80% of one approval statement, so backers earn only 80% of what approval checkers earn. We omit rewards for availability distribution, so backers spend more on bandwidth too. Approval checkers always fetch chunks first from backers though, so good backers earn roughly 7% there, meaning backing checks earn roughly 13% less than approval checks. We should lower this 80% if we ever increase availability redistribution rewards.

-

Although imperfect, we believe this simplifies implementation, and provides robustness against mistakes elsewhere, including governance mistakes, but incurs minimal risk. In principle, backers might not distribute systemic chunks, but approval checkers fetch systemic chunks from backers first anyways, so likely this yields negligible gains.

-

As always, we require that backers' rewards cover their operational costs plus some profit, but approval checks must be more profitable.

-

Approvals

-

In polkadot, all validators run the elves approval loop for each candidate, in which the validator listens to other approval checkers' assignments and approval statements/votes, with which it marks checkers no-show or done, and marks candidates approved. Also, this loop determines and announces validators' own approval checker assignments.

-

Any validator should always conclude whatever approval checks it begins, but our approval assignment loop ignores some approval checks, either because they were announced too soon or because an earlier no-show delivered its approval vote before the final approval. We say a validator $u$ uses an approval vote by a validator $v$ on a candidate $c$ if the approval assignments loop by $u$ counted the vote by $v$ towards approving the candidate $c$. We actually rerun the elves approval loop quite frequently, but only the final run that marks the candidate approved determines the useful approval votes.

-

We should not reward votes announced too soon, so by only counting the final run we unavoidably omit rewards for some honest no-show replacements too. We expect the 80%-ish discount for backing covers these losses, so approval checks remain more profitable than backing.

-

We propose a simple approximate solution based upon computing medians across validators for used votes.

-
1. In an epoch $e$, each validator $u$ counts the number $\alpha_{u,v}$ of votes they used from each validator $v$, including themselves. Any time a validator marks a candidate approved, they increment these counts appropriately.

2. After epoch $e$'s last block gets finalized, all validators of epoch $e$ submit an approvals tally message ApprovalsTallyMessage that reveals the number $\alpha_{u,v}$ of useful approvals they saw from each validator $v$ on candidates that became available in epoch $e$. We do not send $\alpha_{u,u}$, for tit-for-tat reasons discussed below, not for bias concerns. We record these approvals tally messages on-chain.

3. After some delay, we compute on-chain the median $\alpha_v := \textrm{median}\{ \alpha_{u,v} : u \}$ of used approval statements for each validator $v$.

As discussed in https://hackmd.io/@rgbPIkIdTwSICPuAq67Jbw/S1fHcvXSF we could compute these medians using the on-line algorithm if substrate had a nice priority queue.

-

We never achieve true consensus on approval checkers and their approval votes. Yet, our approval assignment loop gives a rough consensus, under our Byzantine assumption and some synchrony assumption. It then follows that misreporting by malicious validators should not appreciably alter the median $\alpha_v$ and hence rewards.

-

We never tally used approval assignments to candidate equivocations or other forks. Any validator should always conclude whatever approval checks it begins, even on other forks, but we expect relay chain equivocations should be vanishingly rare, and sassafras should make forks uncommon.

-

We account for noshows similarly, and deduct a much smaller amount of rewards, but require a 2/3 percentile level, not just a median.

-

Availability redistribution

-

As approval checkers could easily perform useless checks, we shall reward availability providers for the availability chunks they provide that resulted in useful approval checks. We enforce honesty using a tit-for-tat mechanism because chunk transfers are inherently subjective.

-

An approval checker reconstructs the full parachain block by downloading $f+1$ distinct chunks from other validators, where at most $f$ validators are byzantine, out of the $n \ge 3 f + 1$ total validators. In downloading chunks, validators prefer the $f+1$ systemic chunks over the non-systemic chunks, and prefer fetching from validators who already voted valid, like backing checkers. It follows that some validators should receive credit for more than one chunk per candidate.

-

We expect a validator $v$ has actually performed more approval checks $\omega_v$ than the median $\alpha_v$ for which they actually received credit. In fact, approval checkers even ignore some of their own approval checks, meaning $\alpha_{v,v} \le \omega_v$ too.

-

Alongside the approvals counts for epoch $e$, approval checker $v$ computes the counts $\beta_{u,v}$ of the number of chunks they downloaded from each availability provider $u$, excluding themselves, for which they perceive the approval check turned out useful, meaning their own approval counts in $\alpha_{v,v}$. Approval checkers publish $\beta_{u,v}$ alongside $\alpha_{u,v}$ in the approvals tally message ApprovalsTallyMessage. We originally proposed including the self availability usage $\beta_{v,v}$ here, but this should not matter, and excluding it simplifies the code.

-

Symmetrically, availability provider $u$ computes the counts $\gamma_{u,v}$ of the number of chunks they uploaded to each approval checker $v$, again including themselves, again for which they perceive the approval check turned out useful. Availability provider $u$ never reveals its $\gamma_{u,v}$ however.

-

At this point, $\alpha_v$, $\alpha_{v,v}$, and $\alpha_{u,v}$ all potentially differ. We established consensus upon $\alpha_v$ above however, with which we avoid approval checkers printing unearned availability provider rewards:

-

After receiving "all" pairs $(\alpha_{u,v},\beta_{u,v})$, validator $w$ re-weights the $\beta_{u,v}$ and their own $\gamma_{w,v}$:
$$
\begin{aligned}
\beta'_{w,v} &= \frac{(f+1)\,\alpha_v}{\sum_u \beta_{u,v}}\, \beta_{w,v} \\
\gamma'_{w,v} &= \frac{(f+1)\,\alpha_w}{\sum_v \gamma_{w,v}}\, \gamma_{w,v}
\end{aligned}
$$
At this point, we compute $\beta'_w = \sum_v \beta'_{w,v}$ on-chain for each $w$ and reward $w$ proportionally.

-

Tit-for-tat

-

We employ a tit-for-tat strategy to punish validators who lie about from whom they obtained availability chunks. We only alter validators' future choices of whom they obtain availability chunks from, and never punish by lying ourselves, so nothing here breaks polkadot, but not having roughly this strategy enables cheating.

-

An availability provider $w$ defines $\delta'_{w,v} := \gamma'_{w,v} - \beta'_{w,v}$ to be the re-weighted number of chunks by which $v$ stiffed $w$. Now $w$ increments their cumulative stiffing perception $\eta_{w,v}$ from $v$ by the value $\delta'_{w,v}$, so $\eta_{w,v} \mathrel{+}= \delta'_{w,v}$.

-

In future, any time $w$ seeks chunks in reconstruction, $w$ skips $v$ with probability proportional to $\eta_{w,v} / \sum_u \eta_{w,u}$, with each skip reducing $\eta_{w,v}$ by 1. We expect honest accidental availability stiffs have only small $\delta'_{w,v}$, so they clear out quickly, but intentional stiffing adds up more quickly.

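A sketch of that skip rule, with eta holding our cumulative stiffing perceptions $\eta_{w,\cdot}$ as floats and uniform_sample drawn uniformly from [0,1):

/// Sketch: decide whether to skip provider `v` this round, then decay.
fn should_skip(eta: &mut [f64], v: usize, uniform_sample: f64) -> bool {
    let total: f64 = eta.iter().sum();
    if total <= 0.0 {
        return false; // no grudges, never skip
    }
    let skip = uniform_sample < eta[v] / total;
    if skip {
        // Each skip works off one unit of the grudge against `v`.
        eta[v] = (eta[v] - 1.0).max(0.0);
    }
    skip
}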
-

We keep $\gamma_{w,v}$ and $\alpha_{u,u}$ secret so that approval checkers cannot really know others' stiffing perceptions, although $\alpha_{u,v}$ leaks some relevant information. We expect this secrecy keeps skips secret and thus prevents the tit-for-tat escalating beyond one round, which hopefully creates a desirable Nash equilibrium.

-

We favor systematic chunks to reduce reconstruction costs, so we face costs when skipping them. We could however fetch systematic chunks from availability providers as well as backers, or even other approval checkers, so this might not become problematic in practice.

-

Concerns: Drawbacks, Testing, Security, and Privacy

-

We do not pay backers individually for availability distribution per se. We could only do so by including this information in the availability bitfields, which complicates on-chain computation. Also, if one of the two backers does not distribute, then the availability core should remain occupied longer, meaning the lazy backer loses some rewards too. It's likely future protocol improvements change this, so we should monitor for lazy backers outside the rewards system.

-

We discussed approvals being considered by the tit-for-tat in earlier drafts. An adversary who successfully manipulates the rewards median votes would've already violated polkadot's security assumptions, which requires a hard fork and correcting the dot allocation. Incorrect approval_usages reports remain interesting statistics though.

-

Adversarial validators could manipulate their availability votes though, even without being a supermajority. If they still download honestly, then this costs them more rewards than they earn. We do not prevent validators from preferentially obtaining their pieces from their friends though. We should analyze, or at least observe, the long-term consequences.

-

A priori, a whale nominator's validators could stiff validators but then rotate their validators quickly enough that they never suffer being skipped back. We discuss several possible solutions, and their difficulties, under "Rob's nominator-wise skipping" in https://hackmd.io/@rgbPIkIdTwSICPuAq67Jbw/S1fHcvXSF but overall less seems like more here. Also, frequent validator rotation could be penalized elsewhere.

-

Performance, Ergonomics, and Compatibility

- -

We operate off-chain except for final rewards votes and median tallies. We expect lower overhead rewards protocols would lack information, thereby admitting easier cheating.

-

Initially, we designed the ELVES approval gadget to allow on-chain operation, in part for rewards computation, but doing so looks expensive. Also, on-chain rewards computation remains only an approximation too, and could even be biased more easily than the off-chain protocol presented here.

- -

We already teach validators about missed parachain blocks, but we'll teach approval checking more going forwards, because current efforts focus more upon backing.

- -

JAM's block exports should not complicate availability rewards, but could impact some alternative schemes.

-

Prior Art and References

-

None

-

Unresolved Questions

-

Provide specific questions to discuss and address before the RFC is voted on by the Fellowship. This should include, for example, alternatives to aspects of the proposed design where the appropriate trade-off to make is unclear.

- -

Synthetic parachain flag

-

Any rewards protocol could simply be "out voted" by too many slow validators: An increase in the number of parachain cores adds workload, but this creates no-shows if too few validators can handle the workload.

-

We could add a synthetic parachain flag, only settable by governance, which treats no-shows as positive approval votes for that parachain, but without adding rewards. We should never enable this for real parachains, only for synthetic ones like gluttons. We should not enable the synthetic parachain flag long-term even for gluttons, because validators could easily modify their code. Yet, synthetic approval checks might enable pushing hardware upgrades more aggressively over the short-term.

-

(source)

-

Table of Contents

- -

RFC-0004: Remove the host-side runtime memory allocator

-
- - - -
Start Date: 2023-07-04
Description: Update the runtime-host interface to no longer make use of a host-side allocator
Authors: Pierre Krieger
-
-

Summary

-

Update the runtime-host interface to no longer make use of a host-side allocator.

-

Motivation

-

The heap allocation of the runtime is currently controlled by the host using a memory allocator on the host side.

-

The API of many host functions consists of the host allocating a buffer. For example, when calling ext_hashing_twox_256_version_1, the host allocates a 32-byte buffer using the host allocator, and returns a pointer to this buffer to the runtime. The runtime later has to call ext_allocator_free_version_1 on this pointer in order to free the buffer.

-

Even though no benchmark has been done, it is pretty obvious that this design is very inefficient. To continue with the example of ext_hashing_twox_256_version_1, it would be more efficient to instead write the output hash to a buffer that was allocated by the runtime on its stack and passed by pointer to the function. Allocating a buffer on the stack in the worst case scenario simply consists of decrementing a number, and in the best case scenario is free. Doing so would save many Wasm memory reads and writes by the allocator, and would save a function call to ext_allocator_free_version_1.

-

Furthermore, the existence of the host-side allocator has become questionable over time. It is implemented in a very naive way, and for determinism and backwards compatibility reasons it needs to be implemented exactly identically in every client implementation. Runtimes make substantial use of heap memory allocations, and each allocation needs to go twice through the runtime <-> host boundary (once for allocating and once for freeing). Moving the allocator to the runtime side, while it would increase the size of the runtime, would be a good idea. But before the host-side allocator can be deprecated, all the host functions that make use of it need to be updated to not use it.

-

Stakeholders

-

No attempt was made at convincing stakeholders.

-

Explanation

-

New host functions

-

This section contains a list of new host functions to introduce.

-
(func $ext_storage_read_version_2
    (param $key i64) (param $value_out i64) (param $offset i32) (result i64))
(func $ext_default_child_storage_read_version_2
    (param $child_storage_key i64) (param $key i64) (param $value_out i64)
    (param $offset i32) (result i64))
-
-

The signature and behaviour of ext_storage_read_version_2 and ext_default_child_storage_read_version_2 are identical to their version 1 counterparts, but the return value has a different meaning. The new functions directly return the number of bytes that were written in the value_out buffer. If the entry doesn't exist, a value of -1 is returned. Given that the host must never write more bytes than the size of the buffer in value_out, and that the size of this buffer is expressed as a 32-bit number, a 64-bit value of -1 is not ambiguous.

-

The runtime execution stops with an error if value_out is outside of the range of the memory of the virtual machine, even if the size of the buffer is 0 or if the amount of data to write would be 0 bytes.

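A hypothetical runtime-side wrapper illustrating the new convention (sketch only, not the actual sp-io binding); the pack helper encodes the usual pointer-size representation, pointer in the low 32 bits and length in the high 32 bits:

// Raw host import, matching the WAT signature above.
extern "C" {
    fn ext_storage_read_version_2(key: i64, value_out: i64, offset: i32) -> i64;
}

/// Pack a buffer into a pointer-size: low 32 bits = pointer, high 32 bits = length.
fn pack(buf: &[u8]) -> i64 {
    (buf.as_ptr() as u32 as i64) | ((buf.len() as i64) << 32)
}

/// Read a storage value into `out`, returning the number of bytes written,
/// or None if the key is absent.
fn storage_read(key: &[u8], out: &mut [u8], offset: i32) -> Option<u32> {
    let n = unsafe { ext_storage_read_version_2(pack(key), pack(out), offset) };
    if n == -1 { None } else { Some(n as u32) }
}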
-
(func $ext_storage_next_key_version_2
    (param $key i64) (param $out i64) (result i32))
(func $ext_default_child_storage_next_key_version_2
    (param $child_storage_key i64) (param $key i64) (param $out i64) (result i32))
-
-

The behaviour of these functions is identical to their version 1 counterparts. Instead of allocating a buffer, writing the next key to it, and returning a pointer to it, the new version of these functions accepts an out parameter containing a pointer-size to the memory location where the host writes the output. The runtime execution stops with an error if out is outside of the range of the memory of the virtual machine, even if the function wouldn't write anything to out. These functions return the size, in bytes, of the next key, or 0 if there is no next key. If the size of the next key is larger than the buffer in out, the bytes of the key that fit the buffer are written to out and any extra byte that doesn't fit is discarded.

-

Some notes:

- -
(func $ext_hashing_keccak_256_version_2
    (param $data i64) (param $out i32))
(func $ext_hashing_keccak_512_version_2
    (param $data i64) (param $out i32))
(func $ext_hashing_sha2_256_version_2
    (param $data i64) (param $out i32))
(func $ext_hashing_blake2_128_version_2
    (param $data i64) (param $out i32))
(func $ext_hashing_blake2_256_version_2
    (param $data i64) (param $out i32))
(func $ext_hashing_twox_64_version_2
    (param $data i64) (param $out i32))
(func $ext_hashing_twox_128_version_2
    (param $data i64) (param $out i32))
(func $ext_hashing_twox_256_version_2
    (param $data i64) (param $out i32))
(func $ext_trie_blake2_256_root_version_3
    (param $data i64) (param $version i32) (param $out i32))
(func $ext_trie_blake2_256_ordered_root_version_3
    (param $data i64) (param $version i32) (param $out i32))
(func $ext_trie_keccak_256_root_version_3
    (param $data i64) (param $version i32) (param $out i32))
(func $ext_trie_keccak_256_ordered_root_version_3
    (param $data i64) (param $version i32) (param $out i32))
(func $ext_default_child_storage_root_version_3
    (param $child_storage_key i64) (param $out i32))
(func $ext_crypto_ed25519_generate_version_2
    (param $key_type_id i32) (param $seed i64) (param $out i32))
(func $ext_crypto_sr25519_generate_version_2
    (param $key_type_id i32) (param $seed i64) (param $out i32) (result i32))
(func $ext_crypto_ecdsa_generate_version_2
    (param $key_type_id i32) (param $seed i64) (param $out i32) (result i32))
-
-

The behaviour of these functions is identical to their version 1 or version 2 counterparts. Instead of allocating a buffer, writing the output to it, and returning a pointer to it, the new version of these functions accepts an out parameter containing the memory location where the host writes the output. The output is always of a size known at compilation time. The runtime execution stops with an error if out is outside of the range of the memory of the virtual machine.

-
(func $ext_default_child_storage_root_version_3
    (param $child_storage_key i64) (param $out i32))
(func $ext_storage_root_version_3
    (param $out i32))
-
-

The behaviour of these functions is identical to their version 1 and version 2 counterparts. Instead of allocating a buffer, writing the output to it, and returning a pointer to it, the new versions of these functions accept an out parameter containing the memory location where the host writes the output. The output is always of a size known at compilation time. The runtime execution stops with an error if out is outside of the range of the memory of the virtual machine.

-

I have taken the liberty to take the version 1 of these functions as a base rather than the version 2, as a PPP deprecating the version 2 of these functions has previously been accepted: https://github.com/w3f/PPPs/pull/6.

-
(func $ext_storage_clear_prefix_version_3
    (param $prefix i64) (param $limit i64) (param $removed_count_out i32)
    (result i32))
(func $ext_default_child_storage_clear_prefix_version_3
    (param $child_storage_key i64) (param $prefix i64)
    (param $limit i64) (param $removed_count_out i32) (result i32))
(func $ext_default_child_storage_kill_version_4
    (param $child_storage_key i64) (param $limit i64)
    (param $removed_count_out i32) (result i32))
-
-

The behaviour of these functions is identical to their version 2 and 3 counterparts. Instead of allocating a buffer, writing the output to it, and returning a pointer to it, the version 3 and 4 of these functions accept a removed_count_out parameter containing the memory location of an 8-byte buffer where the host writes the number of keys that were removed, in little endian. The runtime execution stops with an error if removed_count_out is outside of the range of the memory of the virtual machine. The functions return 1 to indicate that there are keys remaining, and 0 to indicate that all keys have been removed.

-

Note that there is an alternative proposal to add new host functions with the same names: https://github.com/w3f/PPPs/pull/7. This alternative doesn't conflict with this one except for the version number. One proposal or the other will have to use versions 4 and 5 rather than 3 and 4.

-
(func $ext_crypto_ed25519_sign_version_2
    (param $key_type_id i32) (param $key i32) (param $msg i64) (param $out i32) (result i32))
(func $ext_crypto_sr25519_sign_version_2
    (param $key_type_id i32) (param $key i32) (param $msg i64) (param $out i32) (result i32))
(func $ext_crypto_ecdsa_sign_version_2
    (param $key_type_id i32) (param $key i32) (param $msg i64) (param $out i32) (result i32))
(func $ext_crypto_ecdsa_sign_prehashed_version_2
    (param $key_type_id i32) (param $key i32) (param $msg i64) (param $out i32) (result i64))
-
-

The behaviour of these functions is identical to their version 1 counterparts. The new versions of these functions accept an out parameter containing the memory location where the host writes the signature. The runtime execution stops with an error if out is outside of the range of the memory of the virtual machine, even if the function wouldn't write anything to out. The signatures are always of a size known at compilation time. On success, these functions return 0. If the public key can't be found in the keystore, these functions return 1 and do not write anything to out.

-

Note that the return value is 0 on success and 1 on failure, while the previous version of these functions write 1 on success (as it represents a SCALE-encoded Some) and 0 on failure (as it represents a SCALE-encoded None). Returning 0 on success and non-zero on failure is consistent with common practices in the C programming language and is less surprising than the opposite.

-
(func $ext_crypto_secp256k1_ecdsa_recover_version_3
    (param $sig i32) (param $msg i32) (param $out i32) (result i64))
(func $ext_crypto_secp256k1_ecdsa_recover_compressed_version_3
    (param $sig i32) (param $msg i32) (param $out i32) (result i64))
-
-

The behaviour of these functions is identical to their version 2 counterparts. The new versions of these functions accept an out parameter containing the memory location where the host writes the signature. The runtime execution stops with an error if out is outside of the range of the memory of the virtual machine, even if the function wouldn't write anything to out. The signatures are always of a size known at compilation time. On success, these functions return 0. On failure, these functions return a non-zero value and do not write anything to out.

-

The non-zero value returned on failure is:

- -

These values are equal to the values returned on error by the version 2 (see https://spec.polkadot.network/chap-host-api#defn-ecdsa-verify-error), but incremented by 1 in order to reserve 0 for success.

-
(func $ext_crypto_ed25519_num_public_keys_version_1
    (param $key_type_id i32) (result i32))
(func $ext_crypto_ed25519_public_key_version_2
    (param $key_type_id i32) (param $key_index i32) (param $out i32))
(func $ext_crypto_sr25519_num_public_keys_version_1
    (param $key_type_id i32) (result i32))
(func $ext_crypto_sr25519_public_key_version_2
    (param $key_type_id i32) (param $key_index i32) (param $out i32))
(func $ext_crypto_ecdsa_num_public_keys_version_1
    (param $key_type_id i32) (result i32))
(func $ext_crypto_ecdsa_public_key_version_2
    (param $key_type_id i32) (param $key_index i32) (param $out i32))
-
-

These functions supersede the ext_crypto_ed25519_public_key_version_1, ext_crypto_sr25519_public_key_version_1, and ext_crypto_ecdsa_public_key_version_1 host functions.

-

Instead of calling ext_crypto_ed25519_public_key_version_1 in order to obtain the list of all keys at once, the runtime should instead call ext_crypto_ed25519_num_public_keys_version_1 in order to obtain the number of public keys available, then call ext_crypto_ed25519_public_key_version_2 repeatedly. The ext_crypto_ed25519_public_key_version_2 function writes the public key of the given key_index to the memory location designated by out. The key_index must be between 0 (included) and n (excluded), where n is the value returned by ext_crypto_ed25519_num_public_keys_version_1. Execution must trap if key_index is out of range.

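For illustration, a runtime could enumerate all ed25519 public keys of a key type roughly like this sketch (raw imports shown rather than any actual sp-io wrapper):

extern "C" {
    fn ext_crypto_ed25519_num_public_keys_version_1(key_type_id: i32) -> i32;
    fn ext_crypto_ed25519_public_key_version_2(key_type_id: i32, key_index: i32, out: i32);
}

/// Sketch: collect all ed25519 public keys (32 bytes each) for a key type.
fn all_ed25519_keys(key_type_id: i32) -> Vec<[u8; 32]> {
    let n = unsafe { ext_crypto_ed25519_num_public_keys_version_1(key_type_id) };
    let mut keys = Vec::with_capacity(n as usize);
    for i in 0..n {
        let mut key = [0u8; 32];
        // The host writes the key of index `i` into our fixed-size buffer.
        unsafe { ext_crypto_ed25519_public_key_version_2(key_type_id, i, key.as_mut_ptr() as i32) };
        keys.push(key);
    }
    keys
}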
-

The same explanations apply for the sr25519 and ecdsa counterparts.

-

Host implementers should be aware that the list of public keys (including their ordering) must not change while the runtime is running. This is most likely done by copying the list of all available keys either at the start of the execution or the first time the list is accessed.

-
(func $ext_offchain_http_request_start_version_2
  (param $method i64) (param $uri i64) (param $meta i64) (result i32))
-
-

The behaviour of this function is identical to its version 1 counterpart. Instead of allocating a buffer, writing the request identifier in it, and returning a pointer to it, the version 2 of this function simply returns the newly-assigned identifier to the HTTP request. On failure, this function returns -1. An identifier of -1 is invalid and is reserved to indicate failure.

-
(func $ext_offchain_http_request_write_body_version_2
  (param $request_id i32) (param $chunk i64) (param $deadline i64) (result i32))
(func $ext_offchain_http_response_read_body_version_2
  (param $request_id i32) (param $buffer i64) (param $deadline i64) (result i64))
-
-

The behaviour of these functions is identical to their version 1 counterpart. Instead of allocating a buffer, writing two bytes in it, and returning a pointer to it, the new version of these functions simply indicates what happened:

- -

These values are equal to the values returned on error by the version 1 (see https://spec.polkadot.network/chap-host-api#defn-http-error), but tweaked in order to reserve positive numbers for success.

-

When it comes to ext_offchain_http_response_read_body_version_2, the host implementers must not read too much data at once in order to not create ambiguity in the returned value. Given that the size of the buffer is always at most 4 GiB, this is not a problem.

-
(func $ext_offchain_http_response_wait_version_2
    (param $ids i64) (param $deadline i64) (param $out i32))
-
-

The behaviour of this function is identical to its version 1 counterpart. Instead of allocating a buffer, writing the output to it, and returning a pointer to it, the new version of this function accepts an out parameter containing the memory location where the host writes the output. The runtime execution stops with an error if out is outside of the range of the memory of the virtual machine.

-

The encoding of the response code is also modified compared to its version 1 counterpart and each response code now encodes to 4 little endian bytes as described below:

- -

The buffer passed to out must always have a size of 4 * n where n is the number of elements in the ids.

-
(func $ext_offchain_http_response_header_name_version_1
    (param $request_id i32) (param $header_index i32) (param $out i64) (result i64))
(func $ext_offchain_http_response_header_value_version_1
    (param $request_id i32) (param $header_index i32) (param $out i64) (result i64))
-
-

These functions supersede the ext_offchain_http_response_headers_version_1 host function.

-

Contrary to ext_offchain_http_response_headers_version_1, only one header indicated by header_index can be read at a time. Instead of calling ext_offchain_http_response_headers_version_1 once, the runtime should call ext_offchain_http_response_header_name_version_1 and ext_offchain_http_response_header_value_version_1 multiple times with an increasing header_index, until a value of -1 is returned.

-

These functions accept an out parameter containing a pointer-size to the memory location where the header name or value should be written. The runtime execution stops with an error if out is outside of the range of the memory of the virtual machine, even if the function wouldn't write anything to out.

-

These functions return the size, in bytes, of the header name or header value. If the request doesn't exist or is in an invalid state (as documented for ext_offchain_http_response_headers_version_1) or the header_index is out of range, a value of -1 is returned. Given that the host must never write more bytes than the size of the buffer in out, and that the size of this buffer is expressed as a 32-bit number, a 64-bit value of -1 is not ambiguous.

-

If the buffer in out is too small to fit the entire header name or value, only the bytes that fit are written and the rest are discarded.

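A sketch of the resulting iteration pattern, reusing the pack pointer-size helper from the storage sketch above; the fixed 256-byte buffers are an arbitrary illustrative choice:

extern "C" {
    fn ext_offchain_http_response_header_name_version_1(
        request_id: i32, header_index: i32, out: i64) -> i64;
    fn ext_offchain_http_response_header_value_version_1(
        request_id: i32, header_index: i32, out: i64) -> i64;
}

/// Sketch: read all headers of a response as (name, value) byte pairs,
/// truncating anything longer than 256 bytes.
fn read_headers(request_id: i32) -> Vec<(Vec<u8>, Vec<u8>)> {
    let mut headers = Vec::new();
    for index in 0.. {
        let name = [0u8; 256];
        let value = [0u8; 256];
        let name_len = unsafe {
            ext_offchain_http_response_header_name_version_1(request_id, index, pack(&name))
        };
        if name_len == -1 {
            break; // no more headers, or invalid request
        }
        let value_len = unsafe {
            ext_offchain_http_response_header_value_version_1(request_id, index, pack(&value))
        };
        // Only the bytes that fit the buffers were actually written.
        headers.push((
            name[..(name_len as usize).min(256)].to_vec(),
            value[..(value_len.max(0) as usize).min(256)].to_vec(),
        ));
    }
    headers
}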
-
(func $ext_offchain_submit_transaction_version_2
    (param $data i64) (result i32))
(func $ext_offchain_http_request_add_header_version_2
    (param $request_id i32) (param $name i64) (param $value i64) (result i32))
-
-

Instead of allocating a buffer, writing 1 or 0 in it, and returning a pointer to it, the version 2 of these functions return 0 or 1, where 0 indicates success and 1 indicates failure. The runtime must interpret any non-0 value as failure, but the client must always return 1 in case of failure.

-
(func $ext_offchain_local_storage_read_version_1
    (param $kind i32) (param $key i64) (param $value_out i64) (param $offset i32) (result i64))
-
-

This function supersedes the ext_offchain_local_storage_get_version_1 host function, and uses an API and logic similar to ext_storage_read_version_2.

-

It reads the offchain local storage key indicated by kind and key starting at the byte indicated by offset, and writes the value to the pointer-size indicated by value_out.

-

The function returns the number of bytes that were written in the value_out buffer. If the entry doesn't exist, a value of -1 is returned. Given that the host must never write more bytes than the size of the buffer in value_out, and that the size of this buffer is expressed as a 32-bit number, a 64-bit value of -1 is not ambiguous.

-

The runtime execution stops with an error if value_out is outside of the range of the memory of the virtual machine, even if the size of the buffer is 0 or if the amount of data to write would be 0 bytes.

-
(func $ext_offchain_network_peer_id_version_1
    (param $out i64))
-
-

This function writes the PeerId of the local node to the memory location indicated by out. A PeerId is always 38 bytes long. The runtime execution stops with an error if out is outside of the range of the memory of the virtual machine.

-
(func $ext_input_size_version_1
    (result i64))
(func $ext_input_read_version_1
    (param $offset i64) (param $out i64))
-
-

When a runtime function is called, the host uses the allocator to allocate memory within the runtime where to write some input data. These two new host functions provide an alternative way to access the input that doesn't make use of the allocator.

-

The ext_input_size_version_1 host function returns the size in bytes of the input data.

-

The ext_input_read_version_1 host function copies some data from the input data to the memory of the runtime. The offset parameter indicates the offset within the input data where to start copying, and must be less than or equal to the value returned by ext_input_size_version_1. The out parameter is a pointer-size containing the buffer where to write to. The runtime execution stops with an error if offset is strictly greater than the size of the input data, or if out is outside of the range of the memory of the virtual machine, even if the amount of data to copy would be 0 bytes.

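For example, a runtime entry point could copy its whole input with this sketch, again reusing the assumed pack helper from above:

extern "C" {
    fn ext_input_size_version_1() -> i64;
    fn ext_input_read_version_1(offset: i64, out: i64);
}

/// Sketch: copy the entire call input into runtime memory.
fn read_full_input() -> Vec<u8> {
    let size = unsafe { ext_input_size_version_1() } as usize;
    let input = vec![0u8; size];
    // Copy from offset 0 into a buffer exactly as large as the input.
    unsafe { ext_input_read_version_1(0, pack(&input)) };
    input
}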
-

Other changes

-

In addition to the new host functions, this RFC proposes two changes to the runtime-host interface:

- -

All the host functions that are being superseded by new host functions are now considered deprecated and should no longer be used. The following other host functions are similarly also considered deprecated:

- -

Drawbacks

-

This RFC might be difficult to implement in Substrate due to the internal code design. It is not clear to the author of this RFC how difficult it would be.

-

Prior Art

-

The API of these new functions was heavily inspired by APIs commonly used in the C programming language.

-

Unresolved Questions

-

The changes in this RFC would need to be benchmarked. This involves implementing the RFC and measuring the speed difference.

-

It is expected that most host functions are as fast as or faster than their deprecated counterparts, with the following exceptions:

- -

Future Possibilities

-

After this RFC, the allocator can be removed from the source code of the host altogether in a future version, by removing support for all the deprecated host functions. This would remove the possibility of synchronizing older blocks, which is probably controversial and requires some preparations that are out of scope of this RFC.

-

(source)

-

Table of Contents

- -

RFC-0006: Dynamic Pricing for Bulk Coretime Sales

-
- - - - -
Start DateJuly 09, 2023
DescriptionA dynamic pricing model to adapt the regular price for bulk coretime sales
AuthorsTommi Enenkel (Alice und Bob)
LicenseMIT
-
-

Summary

-

This RFC proposes a dynamic pricing model for the sale of Bulk Coretime on the Polkadot UC. The proposed model updates the regular price of cores for each sale period, by taking into account the number of cores sold in the previous sale, as well as a limit of cores and a target number of cores sold. It ensures a minimum price and limits price growth to a maximum price increase factor, while also giving governance control over the steepness of the price change curve. It allows governance to address challenges arising from changing market conditions and should offer predictable and controlled price adjustments.

-

Accompanying visualizations are provided at [1].

-

Motivation

-

RFC-1 proposes periodic Bulk Coretime Sales as a mechanism to sell continuous regions of blockspace (suggested to be 4 weeks in length). A number of Blockspace Regions (compare RFC-1 & RFC-3) are provided for sale to the Broker-Chain each period and shall be sold in a way that provides value-capture for the Polkadot network. The exact pricing mechanism is out of scope for RFC-1 and shall be provided by this RFC.

-

A dynamic pricing model is needed. A limited number of Regions are offered for sale each period. The model needs to find the price for a period based on supply and demand of the previous period.

-

The model shall give Coretime consumers predictability about upcoming price developments and confidence that Polkadot governance can adapt the pricing model to changing market conditions.

-

Requirements

-
    -
  1. The solution SHOULD provide a dynamic pricing model that increases price with growing demand and reduces price with shrinking demand.
  2. -
  3. The solution SHOULD have a slow rate of change for price if the number of Regions sold is close to a given sales target and increase the rate of change as the number of sales deviates from the target.
  4. -
  5. The solution SHOULD provide the possibility to always have a minimum price per Region.
  6. -
  7. The solution SHOULD provide a maximum factor of price increase should the limit of Regions sold per period be reached.
  8. -
  9. The solution SHOULD allow governance to control the steepness of the price function.
  10. -
-

Stakeholders

-

The primary stakeholders of this RFC are:

- -

Explanation

-

Overview

-

The dynamic pricing model sets the new price based on supply and demand in the previous period. The model is a function of the number of Regions sold, piecewise-defined by two power functions.

- -

The curve of the function forms a plateau around the target and then falls off to the left and rises up to the right. The shape of the plateau can be controlled via a scale factor for the left side and right side of the function respectively.

-

Parameters

-

From here on, we will also refer to Regions sold as 'cores' to stay congruent with RFC-1.

-
- - - - - - -
NameSuggested ValueDescriptionConstraints
BULK_LIMIT45The maximum number of cores being sold0 < BULK_LIMIT
BULK_TARGET30The target number of cores being sold0 < BULK_TARGET <= BULK_LIMIT
MIN_PRICE1The minimum price a core will always cost.0 < MIN_PRICE
MAX_PRICE_INCREASE_FACTOR2The maximum factor by which the price can change.1 < MAX_PRICE_INCREASE_FACTOR
SCALE_DOWN2The steepness of the left side of the function.0 < SCALE_DOWN
SCALE_UP2The steepness of the right side of the function.0 < SCALE_UP
-
-

Function

-
P(n) = \begin{cases} 
-    (P_{\text{old}} - P_{\text{min}}) \left(1 - \left(\frac{T - n}{T}\right)^d\right) + P_{\text{min}} & \text{if } n \leq T \\
-    ((F - 1) \cdot P_{\text{old}} \cdot \left(\frac{n - T}{L - T}\right)^u) + P_{\text{old}} & \text{if } n > T 
-\end{cases}
-
- -

Left side

-

The left side is a power function that describes an increasing concave downward curvature that approaches old_price. We realize this by using the form $y = a(1 - x^d)$, usually used as a downward sloping curve, but in our case flipped horizontally by letting the argument $x = \frac{T-n}{T}$ decrease with $n$, doubly inverting the curve.

-

This approach is chosen over a decaying exponential because it lets us better control the shape of the plateau, especially allowing us to get a straight line by setting SCALE_DOWN to $1$.

-

Right side

-

The right side is a power function of the form $y = a(x^u)$.

-

Pseudo-code

-
NEW_PRICE := IF CORES_SOLD <= BULK_TARGET THEN
-    (OLD_PRICE - MIN_PRICE) * (1 - ((BULK_TARGET - CORES_SOLD)^SCALE_DOWN / BULK_TARGET^SCALE_DOWN)) + MIN_PRICE
-ELSE
-    ((MAX_PRICE_INCREASE_FACTOR - 1) * OLD_PRICE * ((CORES_SOLD - BULK_TARGET)^SCALE_UP / (BULK_LIMIT - BULK_TARGET)^SCALE_UP)) + OLD_PRICE
-END IF
-
-
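For concreteness, the pseudo-code above can be written as a small Rust function. This is a readability sketch using f64; an on-chain implementation would use fixed-point arithmetic.

fn new_price(
    old_price: f64,
    cores_sold: u32,
    bulk_target: u32,
    bulk_limit: u32,
    min_price: f64,
    max_price_increase_factor: f64,
    scale_down: f64,
    scale_up: f64,
) -> f64 {
    if cores_sold <= bulk_target {
        // Left side: (P_old - P_min) * (1 - ((T - n)/T)^d) + P_min
        let x = (bulk_target - cores_sold) as f64 / bulk_target as f64;
        (old_price - min_price) * (1.0 - x.powf(scale_down)) + min_price
    } else {
        // Right side: (F - 1) * P_old * ((n - T)/(L - T))^u + P_old
        let x = (cores_sold - bulk_target) as f64 / (bulk_limit - bulk_target) as f64;
        (max_price_increase_factor - 1.0) * old_price * x.powf(scale_up) + old_price
    }
}

With the suggested parameter values and an old price of 1000: selling all 45 cores yields 2000 (the maximum doubling), selling exactly 30 leaves the price at 1000, and selling 0 lets it fall to MIN_PRICE = 1.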

Properties of the Curve

-

Minimum Price

-

We introduce MIN_PRICE to control the minimum price.

-

The left side of the function shall be allowed to come close to 0 if cores sold approaches 0. The rationale is that if there are actually 0 cores sold, the previous sale price was too high and the price needs to adapt quickly.

-

Price forms a plateau around the target

-

If the number of cores is close to BULK_TARGET, less extreme price changes might be sensible. This ensures that a drop in sold cores or an increase doesn’t lead to immediate price changes, but rather slowly adapts. Only if more extreme changes in the number of sold cores occur, does the price slope increase.

-

We introduce SCALE_DOWN and SCALE_UP to control for the steepness of the left and the right side of the function respectively.

-

Max price increase factor

-

We introduce MAX_PRICE_INCREASE_FACTOR as the factor that controls how much the price may increase from one period to another.

-

Introducing this variable gives governance an additional control lever and avoids the necessity for a future runtime upgrade.

-

Example Configurations

-

Baseline

-

This example proposes the baseline parameters. If not mentioned otherwise, other examples use these values.

-

The minimum price of a core is 1 DOT, the price can double every 4 weeks. Price change around BULK_TARGET is dampened slightly.

-
BULK_TARGET = 30
-BULK_LIMIT = 45
-MIN_PRICE = 1
-MAX_PRICE_INCREASE_FACTOR = 2
-SCALE_DOWN = 2
-SCALE_UP = 2
-OLD_PRICE = 1000
-
-

More aggressive pricing

-

We might want to have a more aggressive price growth, allowing the price to triple every 4 weeks and have a linear increase in price on the right side.

-
BULK_TARGET = 30
-BULK_LIMIT = 45
-MIN_PRICE = 1
-MAX_PRICE_INCREASE_FACTOR = 3
-SCALE_DOWN = 2
-SCALE_UP = 1
-OLD_PRICE = 1000
-
-

Conservative pricing to ensure quick corrections in an affluent market

-

If governance considers the risk that a sudden surge in DOT price might price chains out from bulk coretime markets, it can ensure the model quickly reacts to a quick drop in demand, by setting 0 < SCALE_DOWN < 1 and setting the max price increase factor more conservatively.

-
BULK_TARGET = 30
-BULK_LIMIT = 45
-MIN_PRICE = 1
-MAX_PRICE_INCREASE_FACTOR = 1.5
-SCALE_DOWN = 0.5
-SCALE_UP = 2
-OLD_PRICE = 1000
-
-

Linear pricing

-

By setting the scaling factors to 1 and potentially adapting the max price increase factor, we can achieve a linear function.

-
BULK_TARGET = 30
-BULK_LIMIT = 45
-MIN_PRICE = 1
-MAX_PRICE_INCREASE_FACTOR = 1.5
-SCALE_DOWN = 1
-SCALE_UP = 1
-OLD_PRICE = 1000
-
-

Drawbacks

-

None at present.

-

Prior Art and References

-

This pricing model is based on the requirements from the basic linear solution proposed in RFC-1, which is a simple dynamic pricing model only intended as a proof of concept. The present model adds additional considerations to make the model more adaptable under real conditions.

-

Future Possibilities

-

This RFC, if accepted, shall be implemented in conjunction with RFC-1.

-

References

- -

(source)

-

Table of Contents

- -

RFC-34: XCM Absolute Location Account Derivation

-
- - - -
Start Date05 October 2023
DescriptionXCM Absolute Location Account Derivation
AuthorsGabriel Facco de Arruda
-
-

Summary

-

This RFC proposes changes that enable the use of absolute locations in AccountId derivations, which allows protocols built using XCM to have static account derivations in any runtime, regardless of its position in the family hierarchy.

-

Motivation

-

These changes would allow protocol builders to leverage absolute locations to maintain the exact same derived account address across all networks in the ecosystem, thus enhancing user experience.

-

One such protocol, that is the original motivation for this proposal, is InvArch's Saturn Multisig, which gives users a unifying multisig and DAO experience across all XCM connected chains.

-

Stakeholders

- -

Explanation

-

This proposal aims to make it possible to derive accounts for absolute locations, enabling protocols that require the ability to maintain the same derived account in any runtime. This is done by deriving accounts from the hash of described absolute locations, which are static across different destinations.

-

The same location can be represented in relative form and absolute form like so:

-
#![allow(unused)]
-fn main() {
-// Relative location (from own perspective)
-{
-    parents: 0,
-    interior: Here
-}
-
-// Relative location (from perspective of parent)
-{
-    parents: 0,
-    interior: [Parachain(1000)]
-}
-
-// Relative location (from perspective of sibling)
-{
-    parents: 1,
-    interior: [Parachain(1000)]
-}
-
-// Absolute location
-[GlobalConsensus(Kusama), Parachain(1000)]
-}
-

Using DescribeFamily, the above relative locations would be described like so:

-
#![allow(unused)]
-fn main() {
-// Relative location (from own perspective)
-// Not possible.
-
-// Relative location (from perspective of parent)
-(b"ChildChain", Compact::<u32>::from(*index)).encode()
-
-// Relative location (from perspective of sibling)
-(b"SiblingChain", Compact::<u32>::from(*index)).encode()
-
-}
-

The proposed description for absolute location would follow the same pattern, like so:

-
#![allow(unused)]
-fn main() {
-(
-    b"GlobalConsensus",
-    network_id,
-    b"Parachain",
-    Compact::<u32>::from(para_id),
-    tail
-).encode()
-}
-

This proposal requires the modification of two XCM types defined in the xcm-builder crate: The WithComputedOrigin barrier and the DescribeFamily MultiLocation descriptor.

-

WithComputedOrigin

-

The WithComputedOrigin barrier serves as a wrapper around other barriers, consuming origin modification instructions and applying them to the message origin before passing to the inner barriers. One of the origin modifying instructions is UniversalOrigin, which serves the purpose of signaling that the origin should be a Universal Origin that represents the location as an absolute path prefixed by the GlobalConsensus junction.

-

In its current state the barrier transforms locations with the UniversalOrigin instruction into relative locations, so the proposed changes aim to make it return absolute locations instead.

-

DescribeFamily

-

The DescribeFamily location descriptor is part of the HashedDescription MultiLocation hashing system and exists to describe locations in an easy format for encoding and hashing, so that an AccountId can be derived from this MultiLocation.

-
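For context, HashedDescription derives the final AccountId by hashing the bytes produced by the descriptor, conceptually along these lines (a simplification of the xcm-builder implementation):

// Conceptual sketch: the account is the blake2-256 hash of the location
// description. Identical description bytes therefore yield identical accounts
// on every chain, which is what makes absolute-location derivations static.
fn derive_account_id(description: &[u8]) -> [u8; 32] {
    sp_io::hashing::blake2_256(description)
}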

This implementation contains a match statement that does not match against absolute locations, so changes to it involve matching against absolute locations and providing appropriate descriptions for hashing.

-

Drawbacks

-

No drawbacks have been identified with this proposal.

-

Testing, Security, and Privacy

-

Tests can be done using simple unit tests, as this is not a change to XCM itself but rather to types defined in xcm-builder.

-

Security considerations should be taken with the implementation to make sure no unwanted behavior is introduced.

-

This proposal does not introduce any privacy considerations.

-

Performance, Ergonomics, and Compatibility

-

Performance

-

Depending on the final implementation, this proposal should not introduce much overhead to performance.

-

Ergonomics

-

The ergonomics of this proposal depend on the final implementation details.

-

Compatibility

-

Backwards compatibility should remain unchanged, although that depends on the final implementation.

-

Prior Art and References

- -

Unresolved Questions

-

Implementation details and overall code is still up to discussion.

-

(source)

-

Table of Contents

- -

RFC-0035: Conviction Voting Delegation Modifications

-
- - - -
Start DateOctober 10, 2023
DescriptionConviction Voting Delegation Modifications
AuthorsChaosDAO
-
-

Summary

-

This RFC proposes to make modifications to voting power delegations as part of the Conviction Voting pallet. The changes being proposed include:

-
    -
  1. Allow a Delegator to vote independently of their Delegate if they so desire.
  2. -
  3. Allow nested delegations – for example Charlie delegates to Bob who delegates to Alice – when Alice votes then both Bob and Charlie vote alongside Alice (in the current implementation Charlie will not vote when Alice votes).
  4. -
  5. Make a change so that when a delegate votes abstain their delegated votes also vote abstain.
  6. -
  7. Allow a Delegator to delegate/undelegate their votes for all tracks with a single call.
  8. -
-

Motivation

-

It has become clear since the launch of OpenGov that there are a few common tropes which pop up time and time again:

-
    -
  1. The frequency of referenda is often too high for network participants to have sufficient time to review, comprehend, and ultimately vote on each individual referendum. This means that these network participants end up being inactive in on-chain governance.
  2. -
  3. There are active network participants who are reviewing every referendum and are providing feedback in an attempt to help make the network thrive – but oftentimes these participants do not control enough voting power to influence the network with their positive efforts.
  4. -
  5. Delegating votes for all tracks currently requires long batched calls which result in high fees for the Delegator - resulting in a reluctance from many to delegate their votes.
  6. -
-

We believe (based on feedback from token holders with a larger stake in the network) that if there were some changes made to delegation mechanics, these larger stake holders would be more likely to delegate their voting power to active network participants – thus greatly increasing the support turnout.

-

Stakeholders

-

The primary stakeholders of this RFC are:

- -

Explanation

-

This RFC proposes to make 4 changes to the convictionVoting pallet logic in order to improve the user experience of those delegating their voting power to another account.

-
    -
  1. -

    Allow a Delegator to vote independently of their Delegate if they so desire – this would empower network participants to more actively delegate their voting power to active voters, removing the tedious steps of having to undelegate across an entire track every time they do not agree with their delegate's voting direction for a particular referendum.

    -
  2. -
  3. -

    Allow nested delegations – for example Charlie delegates to Bob who delegates to Alice – when Alice votes then both Bob and Charlie vote alongside Alice (in the current runtime Charlie will not vote when Alice votes) – This would allow network participants who control multiple (possibly derived) accounts to be able to delegate all of their voting power to a single account under their control, which would in turn delegate to a more active voting participant. Then if the delegator wishes to vote independently of their delegate they can control all of their voting power from a single account, which again removes the pain point of having to issue multiple undelegate extrinsics in the event that they disagree with their delegate.

    -
  4. -
  5. -

    Have delegated votes follow their delegate's abstain votes – there are times where delegates may vote abstain on a particular referendum and adding this functionality will increase the support of a particular referendum. It has a secondary benefit of meaning that Validators who are delegating their voting power do not lose points in the 1KV program in the event that their delegate votes abstain (another pain point which may be preventing those network participants from delegating).

    -
  6. -
  7. -

    Allow a Delegator to delegate/undelegate their votes for all tracks with a single call - in order to delegate votes across all tracks, a user must batch 15 calls - resulting in high costs for delegation. A single call for delegate_all/undelegate_all would reduce the complexity and therefore costs of delegations considerably for prospective Delegators (see the sketch after this list).

    -
  8. -
-
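As referenced in point 4 above, here is a sketch of what such calls could look like. These signatures are hypothetical, modeled on the existing delegate/undelegate calls minus the per-track class parameter; they are not part of the current pallet_conviction_voting API.

/// Delegate the caller's voting power on every track in a single call
/// (hypothetical extrinsic).
pub fn delegate_all(
    origin: OriginFor<T>,
    to: AccountIdLookupOf<T>,
    conviction: Conviction,
    balance: BalanceOf<T>,
) -> DispatchResult { /* iterate over all tracks, delegating on each */ }

/// Remove the caller's delegations on every track in a single call
/// (hypothetical extrinsic).
pub fn undelegate_all(origin: OriginFor<T>) -> DispatchResult { /* ... */ }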

Drawbacks

-

We do not foresee any drawbacks by implementing these changes. If anything we believe that this should help to increase overall voter turnout (via the means of delegation) which we see as a net positive.

-

Testing, Security, and Privacy

-

We feel that the Polkadot Technical Fellowship would be the most competent collective to identify the testing requirements for the ideas presented in this RFC.

-

Performance, Ergonomics, and Compatibility

-

Performance

-

This change may add extra chain storage requirements on Polkadot, especially with respect to nested delegations.

-

Ergonomics & Compatibility

-

The change to add nested delegations may affect governance interfaces such as Nova Wallet, which will have to apply changes to their indexers to support nested delegations. It may also affect the Polkadot Delegation Dashboard as well as Polkassembly & SubSquare.

-

We want to highlight the importance of ecosystem builders providing a mechanism for indexers and wallets to understand that changes have occurred, such as increasing the pallet version.

-

Prior Art and References

-

N/A

-

Unresolved Questions

-

N/A

- -

Additionally we would like to re-open the conversation about the potential for there to be free delegations. This was discussed by Dr Gavin Wood at Sub0 2022 and we feel like this would go a great way towards increasing the amount of network participants that are delegating: https://youtu.be/hSoSA6laK3Q?t=526

-

Overall, we strongly feel that delegations are a great way to increase voter turnout, and the ideas presented in this RFC would hopefully help in that aspect.

-

(source)

-

Table of Contents

- -

RFC-0044: Rent based registration model

-
- - - -
Start Date6 November 2023
DescriptionA new rent based parachain registration model
AuthorsSergej Sakac
-
-

Summary

-

This RFC proposes a new model for a sustainable on-demand parachain registration, involving a smaller initial deposit and periodic rent payments. The new model considers that on-demand chains may be unregistered and later re-registered. The proposed solution also ensures a quick startup for on-demand chains on Polkadot in such cases.

-

Motivation

-

With the support of on-demand parachains on Polkadot, there is a need to explore a new, more cost-effective model for registering validation code. In the current model, the parachain manager is responsible for reserving a unique ParaId and covering the cost of storing the validation code of the parachain. These costs can escalate, particularly if the validation code is large. We need a better, sustainable model for registering on-demand parachains on Polkadot to help smaller teams deploy more easily.

-

This RFC suggests a new payment model to create a more financially viable approach to on-demand parachain registration. In this model, a lower initial deposit is required, followed by recurring payments upon parachain registration.

-

This new model will coexist with the existing one-time deposit payment model, offering teams seeking to deploy on-demand parachains on Polkadot a more cost-effective alternative.

-

Requirements

-
    -
  1. The solution SHOULD NOT affect the current model for registering validation code.
  2. -
  3. The solution SHOULD offer an easily configurable way for governance to adjust the initial deposit and recurring rent cost.
  4. -
  5. The solution SHOULD provide an incentive to prune validation code for which rent is not paid.
  6. -
  7. The solution SHOULD allow anyone to re-register validation code under the same ParaId without the need for redundant pre-checking if it was already verified before.
  8. -
  9. The solution MUST be compatible with the Agile Coretime model, as described in RFC#0001.
  10. -
  11. The solution MUST allow anyone to pay the rent.
  12. -
  13. The solution MUST prevent the removal of validation code if it could still be required for disputes or approval checking.
  14. -
-

Stakeholders

- -

Explanation

-

This RFC proposes a set of changes that will enable the new rent-based approach to registering and storing validation code on-chain. The new model, compared to the current one, will require periodic rent payments. The parachain won't be pruned automatically if the rent is not paid, but by permitting anyone to prune the parachain and rewarding the caller, there will be an incentive for the removal of the validation code.

-

On-demand parachains should still be able to utilize the current one-time payment model. However, given the size of the deposit required, it's highly likely that most on-demand parachains will opt for the new rent-based model.

-

Importantly, this solution doesn't require any storage migrations in the current system nor does it introduce any breaking changes. The following provides a detailed description of this solution.

-

Registering an on-demand parachain

-

In the current implementation of the registrar pallet, there are two constants that specify the necessary deposit for parachains to register and store their validation code:

-
#![allow(unused)]
-fn main() {
-trait Config {
-	// -- snip --
-
-	/// The deposit required for reserving a `ParaId`.
-	#[pallet::constant]
-	type ParaDeposit: Get<BalanceOf<Self>>;
-
-	/// The deposit to be paid per byte stored on chain.
-	#[pallet::constant]
-	type DataDepositPerByte: Get<BalanceOf<Self>>;
-}
-}
-

This RFC proposes the addition of three new constants that will determine the payment amount and the frequency of the recurring rent payment:

-
#![allow(unused)]
-fn main() {
-trait Config {
-	// -- snip --
-
-	/// Defines how frequently the rent needs to be paid.
-	///
-	/// The duration is set in sessions instead of block numbers.
-	#[pallet::constant]
-	type RentDuration: Get<SessionIndex>;
-
-	/// The initial deposit amount for registering validation code.
-	///
-	/// This is defined as a proportion of the deposit that would be required in the regular
-	/// model.
-	#[pallet::constant]
-	type RentalDepositProportion: Get<Perbill>;
-
-	/// The recurring rental cost defined as a proportion of the initial rental registration deposit.
-	#[pallet::constant]
-	type RentalRecurringProportion: Get<Perbill>;
-}
-}
-

Users will be able to reserve a ParaId and register their validation code for a proportion of the regular deposit required. However, they must also make additional rent payments at intervals of T::RentDuration.

-

For registering using the new rental system we will have to make modifications to the paras-registrar pallet. We should expose two new extrinsics for this:

-
#![allow(unused)]
-fn main() {
-mod pallet {
-	// -- snip --
-
-	pub fn register_rental(
-		origin: OriginFor<T>,
-		id: ParaId,
-		genesis_head: HeadData,
-		validation_code: ValidationCode,
-	) -> DispatchResult { /* ... */ }
-
-	pub fn pay_rent(origin: OriginFor<T>, id: ParaId) -> DispatchResult {
-		/* ... */ 
-	}
-}
-}
-

A call to register_rental will require the reservation of only a percentage of the deposit that would otherwise be required to register the validation code when using the regular model. As described in the On-demand para re-registration section below, we will also store the code hash of each parachain to enable faster re-registration after a parachain has been pruned. For this reason the total initial deposit amount is increased to account for that.

-
#![allow(unused)]
-fn main() {
-// The logic for calculating the initial deposit for parachain registered with the 
-// new rent-based model:
-
-let validation_code_deposit = per_byte_fee.saturating_mul((validation_code.0.len() as u32).into());
-
-let head_deposit = per_byte_fee.saturating_mul((genesis_head.0.len() as u32).into())
-let hash_deposit = per_byte_fee.saturating_mul(HASH_SIZE);
-
-let deposit = T::RentalDepositProportion::get().mul_ceil(validation_code_deposit)
-	.saturating_add(T::ParaDeposit::get())
-	.saturating_add(head_deposit)
-	.saturating_add(hash_deposit)
-}
-

Once the ParaId is reserved and the validation code is registered the rent must be periodically paid to ensure the on-demand parachain doesn't get removed from the state. The pay_rent extrinsic should be callable by anyone, removing the need for the parachain to depend on the parachain manager for rent payments.

-

On-demand parachain pruning

-

If the rent is not paid, anyone has the option to prune the on-demand parachain and claim a portion of the initial deposit reserved for storing the validation code. This type of 'light' pruning only removes the validation code, while the head data and validation code hash are retained. The validation code hash is stored to allow anyone to register it again as well as to enable quicker re-registration by skipping the pre-checking process.

-

The moment the rent is no longer paid, the parachain won't be able to purchase on-demand access, meaning no new blocks are allowed. This stage is called the "hibernation" stage, during which all the parachain-related data is still stored on-chain, but new blocks are not permitted. The reason for this is to ensure that the validation code is available in case it is needed in the dispute or approval checking subsystems. Waiting for one entire session will be enough to ensure it is safe to deregister the parachain.

-

This means that anyone can prune the parachain only once the "hibernation" stage is over, which lasts for an entire session after the moment that the rent is not paid.

-

The pruning described here is a light form of pruning, since it only removes the validation code. As with all parachains, the parachain or para manager can use the deregister extrinsic to remove all associated state.
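A sketch of the corresponding extrinsic (hypothetical name and shape; not part of the current paras-registrar pallet):

/// Prunes the validation code of a parachain whose rent has lapsed and whose
/// "hibernation" session has passed. Callable by anyone; the caller receives
/// a portion of the deposit withheld for storing the validation code, while
/// the head data and the validation code hash remain on-chain.
pub fn prune_rental(origin: OriginFor<T>, id: ParaId) -> DispatchResult { /* ... */ }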

-

Ensuring rent is paid

-

The paras pallet will be loosely coupled with the para-registrar pallet. This approach enables all the pallets tightly coupled with the paras pallet to have access to the rent status information.

-

Once the validation code is stored without having its rent paid, the assigner_on_demand pallet will ensure that an order for that parachain cannot be placed. This is easily achievable given that the assigner_on_demand pallet is tightly coupled with the paras pallet.
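One way to expose the rent status to other pallets is an interface along the following lines (trait and method names are illustrative, not part of the existing pallets):

/// Hypothetical interface through which other pallets query rent status.
pub trait RentStatus {
    /// Returns true if the parachain's rent is paid for the current period.
    fn is_rent_paid(para: ParaId) -> bool;
}

// In assigner_on_demand, before accepting an order (sketch):
// ensure!(T::RentStatus::is_rent_paid(para_id), Error::<T>::RentNotPaid);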

-

On-demand para re-registration

-

If the rent isn't paid on time, and the parachain gets pruned, the new model should provide a quick way to re-register the same validation code under the same ParaId. This can be achieved by skipping the pre-checking process, as the validation code hash will be stored on-chain, allowing us to easily verify that the uploaded code remains unchanged.

-
#![allow(unused)]
-fn main() {
-/// Stores the validation code hash for parachains that successfully completed the 
-/// pre-checking process.
-///
-/// This is stored to enable faster on-demand para re-registration in case its pvf has been earlier
-/// registered and checked.
-///
-/// NOTE: During a runtime upgrade where the pre-checking rules change this storage map should be
-/// cleared appropriately.
-#[pallet::storage]
-pub(super) type CheckedCodeHash<T: Config> =
-	StorageMap<_, Twox64Concat, ParaId, ValidationCodeHash>;
-}
-

To enable parachain re-registration, we should introduce a new extrinsic in the paras-registrar pallet that allows this. The logic of this extrinsic will be the same as regular registration, with the distinction that it can be called by anyone, and the required deposit will be smaller since it only has to cover the storage of the validation code.

-

Drawbacks

-

This RFC does not alter the process of reserving a ParaId, and therefore it does not propose reducing the ParaId reservation deposit, even though such a reduction could be beneficial.

-

This RFC focuses on the mechanism rather than on specific configuration values for parachain registration, but configuring those values carelessly could lead to problems.

-

Since the validation code hash and head data are not removed when the parachain is pruned but only when the deregister extrinsic is called, the T::DataDepositPerByte must be set to a higher value to create a strong enough incentive for removing it from the state.

-

Testing, Security, and Privacy

-

The implementation of this RFC will be tested on Rococo first.

-

Proper research should be conducted on setting the configuration values of the new system since these values can have great impact on the network.

-

An audit is required to ensure the implementation's correctness.

-

The proposal introduces no new privacy concerns.

-

Performance, Ergonomics, and Compatibility

-

Performance

-

This RFC should not introduce any performance impact.

-

Ergonomics

-

This RFC does not affect the current parachains, nor the parachains that intend to use the one-time payment model for parachain registration.

-

Compatibility

-

This RFC does not break compatibility.

-

Prior Art and References

-

Prior discussion on this topic: https://github.com/paritytech/polkadot-sdk/issues/1796

-

Unresolved Questions

-

None at this time.

- -

As noted in this GitHub issue, we want to raise the per-byte cost of on-chain data storage. However, a substantial increase in this cost would make it highly impractical for on-demand parachains to register on Polkadot. This RFC offers an alternative solution for on-demand parachains, ensuring that the per-byte cost increase doesn't overly burden the registration process.

-

(source)

-

Table of Contents

- -

RFC-0054: Remove the concept of "heap pages" from the client

-
- - - -
Start Date2023-11-24
DescriptionRemove the concept of heap pages from the client and move it to the runtime.
AuthorsPierre Krieger
-
-

Summary

-

Rather than enforce a limit to the total memory consumption on the client side by loading the value at :heappages, enforce that limit on the runtime side.

-

Motivation

-

From the early days of Substrate up until recently, the runtime was present in two forms: the wasm runtime (wasm bytecode passed through an interpreter) and the native runtime (native code directly run by the client).

-

Since the wasm runtime has a lower amount of available memory (4 GiB maximum) compared to the native runtime, and in order to ensure that the wasm and native runtimes always produce the same outcome, it was necessary to clamp the amount of memory available to both runtimes to the same value.

-

In order to achieve this, a special storage key (a "well-known" key) :heappages was introduced and represents the number of "wasm pages" (one page equals 64 KiB) of memory that are available to the memory allocator of the runtimes. If this storage key is absent, it defaults to 2048, which is 128 MiB.

-

The native runtime has since been removed, but the concept of "heap pages" still exists. This RFC proposes a simplification to the design of Polkadot by removing the concept of "heap pages" as it is currently known, and proposes alternative ways to achieve the goal of limiting the amount of memory available.

-

Stakeholders

-

Client implementers and low-level runtime developers.

-

Explanation

-

This RFC proposes the following changes to the client:

- -

With these changes, the memory available to the runtime is now only bounded by the available memory space (4 GiB), and optionally by the maximum amount of memory specified in the Wasm binary (see https://webassembly.github.io/spec/core/bikeshed/#memories%E2%91%A0). In Rust, the latter can be controlled during compilation with the flag -Clink-arg=--max-memory=....

-

Since the client-side change is strictly more tolerant than before, we can perform the change immediately after the runtime has been updated, and without having to worry about backwards compatibility.

-

This RFC proposes three alternative paths (different chains might choose to follow different paths):

- -

Each parachain can choose the option that they prefer, but the author of this RFC strongly suggests either option C or B.

-

Drawbacks

-

In case of path A, there is one situation where the behaviour pre-RFC is not equivalent to the one post-RFC: when a host function that performs an allocation (for example ext_storage_get) is called, without this RFC this allocation might fail due to reaching the maximum heap pages, while after this RFC it will always succeed. This is most likely not a problem, as storage values aren't supposed to be larger than a few megabytes at the very maximum.

-

In the unfortunate event where the runtime runs out of memory, path B would make it more difficult to relax the memory limit, as we would need to re-upload the entire Wasm, compared to updating only :heappages in path A or before this RFC. In the case where the runtime runs out of memory only in the specific event where the Wasm runtime is modified, this could brick the chain. However, this situation is no different from the thousands of other ways that a bug in the runtime can brick a chain, and there's no reason to be particularly worried about this situation in particular.

-

Testing, Security, and Privacy

-

This RFC would reduce the chance of a consensus issue between clients. The :heappages value is a rather obscure feature, and it is not clear what happens in some corner cases, such as the value being too large (error? clamp?) or malformed. This RFC would completely erase these questions.

-

Performance, Ergonomics, and Compatibility

-

Performance

-

In case of path A, it is unclear how performance would be affected. Path A consists of moving client-side operations to the runtime without changing these operations, and as such performance differences are expected to be minimal. Overall, we're talking about one addition/subtraction per malloc and per free, so this is more than likely completely negligible.
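To make that cost concrete, a runtime enforcing the limit itself (path A) could wrap its allocator with a remaining-budget counter: one subtraction per allocation and one addition per deallocation. This is a sketch; a production allocator would also account for alignment and bookkeeping overhead.

use core::alloc::{GlobalAlloc, Layout};
use core::sync::atomic::{AtomicUsize, Ordering};

/// Wraps an inner allocator and fails allocations once the budget
/// (e.g. derived from `:heappages`) is exhausted.
struct BudgetedAlloc<A>(A);

// 2048 pages * 64 KiB = 128 MiB, the current default.
static BUDGET: AtomicUsize = AtomicUsize::new(2048 * 64 * 1024);

unsafe impl<A: GlobalAlloc> GlobalAlloc for BudgetedAlloc<A> {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        // One subtraction per allocation.
        let prev = BUDGET.fetch_sub(layout.size(), Ordering::Relaxed);
        if prev < layout.size() {
            // Budget exhausted: undo the subtraction and report failure.
            BUDGET.fetch_add(layout.size(), Ordering::Relaxed);
            return core::ptr::null_mut();
        }
        self.0.alloc(layout)
    }
    unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
        // One addition per deallocation.
        BUDGET.fetch_add(layout.size(), Ordering::Relaxed);
        self.0.dealloc(ptr, layout)
    }
}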

-

In case of path B and C, the performance gain would be a net positive, as this RFC strictly removes things.

-

Ergonomics

-

This RFC would isolate the client and runtime more from each other, making it a bit easier to reason about the client or the runtime in isolation.

-

Compatibility

-

Not a breaking change. The runtime-side changes can be applied immediately (without even having to wait for changes in the client), then as soon as the runtime is updated, the client can be updated without any transition period. One can even consider updating the client before the runtime, as it corresponds to path C.

-

Prior Art and References

-

None.

-

Unresolved Questions

-

None.

- -

This RFC follows the same path as https://github.com/polkadot-fellows/RFCs/pull/4 by scoping everything related to memory allocations to the runtime.

-

(source)

-

Table of Contents

- -

RFC-0070: X Track for @kusamanetwork

-
- - - -
Start DateJanuary 29, 2024
DescriptionAdd a governance track to facilitate posts on the @kusamanetwork's X account
AuthorAdam Clay Steeber
-
-

Summary

-

This RFC proposes adding a trivial governance track on Kusama to facilitate X (formerly known as Twitter) posts on the @kusamanetwork account. The technical aspect of implementing this in the runtime is inconsequential and straightforward, though it might get more involved if the Fellowship wants to restrict this track to a permission set that does not yet exist. If this is implemented it would need to be followed up with:

-
    -
  1. the establishment of specifications for proposing X posts via this track, and
  2. -
  3. the development of tools/processes to ensure that the content contained in referenda enacted in this track would be automatically posted on X.
  4. -
-

Motivation

-

The overall motivation for this RFC is to decentralize the management of the Kusama brand/communication channel to KSM holders. This is necessary in my opinion primarily because of the inactivity of the account in recent history, with posts spanning weeks or months apart. I am currently unaware of who/what entity manages the Kusama X account, but if they are affiliated with Parity or W3F this proposed solution could also offload some of the legal ramifications of making (or not making) announcements to the public regarding Kusama. While centralized control of the X account would still be present, it could become totally moot if this RFC is implemented and the community becomes totally autonomous in the management of Kusama's X posts.

-

This solution does not cover every single communication front for Kusama, but it does cover one of the largest. It also establishes a precedent for other communication channels that could be offloaded to openGov, provided this proof-of-concept is successful.

-

Finally, this RFC is the epitome of experimentation that Kusama is ideal for. This proposal may spark newfound excitement for Kusama and help us realize Kusama's potential for pushing boundaries and trying new unconventional ideas.

-

Stakeholders

-

This idea has not been formalized by any individual (or group of) KSM holder(s). To my knowledge the socialization of this idea is contained entirely in my recent X post here, but it is possible that an idea like this one has been discussed in other places. It appears to me that the ecosystem would welcome a change like this, which is why I am taking action to formalize the discussion.

-

Explanation

-

The implementation of this idea can be broken down into 3 primary phases:

-

Phase 1 - Track configurations

-

First, we begin with this RFC to ensure all feedback can be discussed and implemented in the proposal. After the Fellowship and the community come to a reasonable agreement on the changes necessary to make this happen, the Fellowship can merge changes into Kusama's runtime to include this new track with appropriate track configurations. As a starting point, I recommend the following track configurations:

-
const APP_X_POST: Curve = Curve::make_linear(7, 28, percent(50), percent(100));
-const SUP_X_POST: Curve = Curve::make_reciprocal(?, ?, percent(?), percent(?), percent(?));
-
-// I don't know how to configure the make_reciprocal variables to get what I imagine for support,
-// but I recommend starting at 50% support and sharply decreasing such that 1% is sufficient quarterway
-// through the decision period and hitting 0% at the end of the decision period, or something like that.
-
-	(
-		69,
-		pallet_referenda::TrackInfo {
-			name: "x_post",
-			max_deciding: 50,
-			decision_deposit: 1 * UNIT,
-			prepare_period: 10 * MINUTES,
-			decision_period: 4 * DAYS,
-			confirm_period: 10 * MINUTES,
-			min_enactment_period: 1 * MINUTES,
-			min_approval: APP_X_POST,
-			min_support: SUP_X_POST,
-		},
-	),
-
-

I also recommend restricting permissions of this track to only submitting remarks or batches of remarks - that's all we'll need for its purpose. I'm not sure how easy that is to configure, but it is important since we don't want such an agile track to be able to make highly consequential calls.

-

Phase 2 - Establish Specs for X Post Track Referenda

-

It is important that we establish the specifications of referenda that will be submitted in this track to ensure that whatever automation tool is built can easily make posts once a referendum is enacted. As stated above, we really only need a system.remark (or batch of remarks) to indicate the contents of a proposed X post. The most straightforward way to do this is to require remarks to adhere to X's requirements for making posts via their API.

-

For example, if I wanted to propose a post that contained the text "Hello World!" I would propose a referendum in the X post track that contains the following call data: 0x0000607b2274657874223a202248656c6c6f20576f726c6421227d (i.e. system.remark('{"text": "Hello World!"}')).
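For reference, that call data is just the SCALE encoding of the remark: pallet index 0 (system), call index 0 (remark), a compact-encoded length, then the UTF-8 bytes. The sketch below rebuilds it by hand (the hex crate is assumed for the final check):

fn main() {
    let body = br#"{"text": "Hello World!"}"#; // 24 bytes of UTF-8 JSON
    let mut call = vec![0x00u8, 0x00]; // pallet 0 (system), call 0 (remark)
    // SCALE compact length: values below 64 encode as (len << 2), so 24 -> 0x60.
    call.push((body.len() as u8) << 2);
    call.extend_from_slice(body);
    assert_eq!(
        hex::encode(&call),
        "0000607b2274657874223a202248656c6c6f20576f726c6421227d",
    );
}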

-

At first, we could support text posts only to prove the concept. Later on we could expand this spec to add support for media, likes, retweets, replies, polls, and whatever other X features we want.

-

Phase 3 - Release, Tooling, & Documentation

-

Once we agree on track configurations and specs for referenda in this track, the Fellowship can move forward with merging these changes into Kusama's runtime and include them in its next release. We could also move forward with developing the necessary tools that would listen for enacted referenda to post automatically on X. This would require coordination with whoever controls the X account; they would either need to run the tools themselves or add a third party as an authorized user to run the tools to make posts on the account's behalf. This is a bottleneck for decentralization, but as long as the tools are run by the X account manager or by a trusted third party it should be fine. I'm open to more decentralized solutions, but those always come at a cost of complexity.

-

For the tools themselves, we could open a bounty on Kusama for developers/teams to bid on. We could also just ask the community to step up with a Treasury proposal to have anyone fund the build. Or, the Fellowship could make the release of these changes contingent on their endorsement of developers/teams to build these tools. Lots of options! For the record, my team and I could develop all the necessary tools, but just because I'm proposing these changes doesn't entitle me to funds to build the tools needed to implement them. Here's what would be needed:

- -

After everything is complete, we can update the Kusama wiki to include documentation on the X post specifications and include links to the tools/UI.

-

Drawbacks

-

The main drawback to this change is that it requires a lot of off-chain coordination. It's easy enough to include the track on Kusama but it's a totally different challenge to make it function as intended. The tools need to be built and the auth tokens need to be managed. It would certainly add an administrative burden to whoever manages the X account since they would either need to run the tools themselves or manage auth tokens.

-

This change also introduces on-going costs to the Treasury since it would need to compensate people to support the tools necessary to facilitate this idea. The ultimate question is whether these on-going costs would be worth the ability for KSM holders to make posts on Kusama's X account.

-

There's also the risk of misconfiguring the track to make referenda too easy to pass, potentially allowing a malicious actor to get content posted on X that violates X's ToS. If that happens, we risk getting Kusama banned on X!

-

This change might also be outside the scope of the Fellowship/openGov. Perhaps the best solution for the X account is to have the Treasury pay for a professional agency to manage posts. It wouldn't be decentralized but it would probably be more effective in terms of creating good content.

-

Finally, this solution is merely pseudo-decentralization since the X account manager would still have ultimate control of the account. It's decentralized insofar as the auth tokens are given to people actually running the tools; a house of cards is required to facilitate X posts via this track. Not ideal.

-

Testing, Security, and Privacy

-

There's major precedent for configuring tracks on openGov given the amount of power tracks have, so it shouldn't be hard to come up with a sound configuration. That's why I recommend restricting permissions of this track to remarks and batches of remarks, or something equally inconsequential.

-

Building the tools for this implementation is straightforward, and they could be audited by Fellowship members, and the community at large, on Github.

-

The largest security concern would be the management of Kusama's X account's auth tokens. We would need to ensure that they aren't compromised.

-

Performance, Ergonomics, and Compatibility

-

Performance

-

If a track on Kusama promises users that compliant referenda enacted therein would be posted on Kusama's X account, users would expect that track to perform as promised. If the house of cards tumbles down and a compliant referendum doesn't actually get anything posted, users might think that Kusama is broken or unreliable. This could be damaging to Kusama's image and cause people to question the soundness of other features on Kusama.

-

As mentioned in the drawbacks, the performance of this feature would depend on off-chain coordination. We can reduce the administrative burden of this coordination by funding third parties with the Treasury to deal with it, but then we're relying on trusting these parties.

-

Ergonomics

-

By adding a new track to Kusama, governance platforms like Polkassembly or Nova Wallet would need to include it on their applications. This shouldn't be too much of a burden or overhead since they've already built the infrastructure for other openGov tracks.

-

Compatibility

-

This change wouldn't break any compatibility as far as I know.

-

References

-

One reference to a similar feature requiring on-chain/off-chain coordination would be the Kappa-Sigma-Mu Society. Nothing on-chain necessarily enforces the rules or facilitates bids, challenges, defenses, etc. However, the Society has managed to maintain itself with integrity to its rules. So I don't think this is totally out of Kusama's scope. But it will require some off-chain effort to maintain.

-

Unresolved Questions

- -

(source)

-

Table of Contents

- -

RFC-0073: Decision Deposit Referendum Track

-
- - - -
Start Date12 February 2024
DescriptionAdd a referendum track which can place the decision deposit on any other track
AuthorsJelliedOwl
-
-

Summary

-

The current size of the decision deposit on some tracks is too high for many proposers. As a result, those needing to use it have to find someone else willing to put up the deposit for them - and a number of legitimate attempts to use the root track have timed out. This track would provide a more affordable (though slower) route for these holders to use the root track.

-

Motivation

-

There have been recent attempts to use the Kusama root track which have timed out with no decision deposit placed. Usually, these referenda have been related to parachain registration issues.

-

Explanation

-

I propose to address this by adding a new referendum track, [22] Referendum Deposit, which can place the decision deposit on another referendum. This would require the following changes:

- -

Referendum track parameters - Polkadot

- -

Referendum track parameters - Kusama

- -

Drawbacks

-

This track would provide a route to starting a root referendum with a much-reduced slashable deposit. This might be undesirable but, assuming the decision deposit cost for this track is still high enough, slashing would still act as a disincentive.

-

An alternative to this might be to reduce the decision deposit size on some of the more expensive tracks. However, part of the purpose of the high deposit - at least on the root track - is to prevent spamming the limited queue with junk referenda.

-

Testing, Security, and Privacy

-

This will need additional test cases for the modified pallet and runtime. No security or privacy issues.

-

Performance, Ergonomics, and Compatibility

-

Performance

-

No significant performance impact.

-

Ergonomics

-

Only changes related to adding the track. Existing functionality is unchanged.

-

Compatibility

-

No compatibility issues.

-

Prior Art and References

- -

Unresolved Questions

-

Feedback on whether my proposed implementation of this is the best way to address the issue - including which calls the track should be allowed to make. Are the track parameters correct, or should we use something different? Alternatives would be welcome.

-

(source)

-

Table of Contents

- -

RFC-0074: Stateful Multisig Pallet

-
- - - -
Start Date15 February 2024
DescriptionAdd Enhanced Multisig Pallet to System chains
AuthorsAbdelrahman Soliman (Boda)
-
-

Summary

-

A pallet to facilitate enhanced multisig accounts. The main enhancement is that we store a multisig account in the state with related info (signers, threshold,..etc). The pallet affords enhanced control over administrative operations such as adding/removing signers, changing the threshold, deleting the account, and canceling an existing proposal. Each signer can approve or reject a proposal while it still exists. This proposal is not intended to migrate or replace the existing multisig; it allows both options to coexist.

-

For the rest of the RFC we use the following terms:

- -

Motivation

-

Problem

-

Entities in the Polkadot ecosystem need to have a way to manage their funds and other operations in a secure and efficient way. Multisig accounts are a common way to achieve this. Entities by definition change over time, members of the entity may change, threshold requirements may change, and the multisig account may need to be deleted. For even more enhanced hierarchical control, the multisig account may need to be controlled by other multisig accounts.

-

Current native solutions for multisig operations are less optimal, performance-wise (as we'll explain later in the RFC), and lack fine-grained control over the multisig account.

-

Stateless Multisig

-

We refer to the current multisig pallet in polkadot-sdk as stateless because the multisig account is only derived and never stored in state. Deriving the account is deterministic, since it relies on the exact (sorted) set of signers and the threshold, but this offers no control over the multisig account and tightly couples it to that exact set of signers and threshold. This makes it hard for an organization to manage existing accounts and to change the threshold or add/remove signers.

-

We also believe that the stateless multisig is not efficient in terms of block footprint, as we'll show in the performance section.

-

Pure Proxy

-

A pure proxy can provide a stored, deterministic multisig account for a set of users, but it adds unneeded complexity as a workaround for the limitations of the current multisig pallet. It also lacks the same fine-grained control over the multisig account.

-

Other points mentioned by @tbaut

- -

Requirements

-

Basic requirements for the Stateful Multisig are:

- -

Use Cases

- -

and much more...

-

Stakeholders

- -

Explanation

-

I created the stateful multisig pallet during my studies at the Polkadot Blockchain Academy under supervision from @shawntabrizi and @ank4n. After that, I enhanced it to be fully functional, and this is a draft PR#3300 in polkadot-sdk. I'll list all the details and design decisions in the following sections. Note that the PR does not map 1:1 to the current RFC, as the RFC is a more polished version of the PR, updated based on feedback and discussions.

-

Let's start with a sequence diagram to illustrate the main operations of the Stateful Multisig.

-

multisig operations

-

Notes on the above diagram:

- -

State Transition Functions

-

Proposals store either the full call or only its hash, using the following enum:

-
#![allow(unused)]
-fn main() {
-enum CallOrHash<T: Config> {
-	Call(<T as Config>::RuntimeCall),
-	Hash(T::Hash),
-}
-}
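To make the stored state concrete, the multisig details could live in a storage map along these lines. The field and type names are illustrative and not necessarily the exact layout of PR#3300.

/// Illustrative on-chain record for a multisig account.
#[derive(Encode, Decode, MaxEncodedLen, TypeInfo)]
#[scale_info(skip_type_params(T))]
pub struct MultisigDetails<T: Config> {
	/// Accounts allowed to start, approve, and reject proposals.
	pub signers: BoundedBTreeSet<T::AccountId, T::MaxSignatories>,
	/// Number of approvals required before a proposal can be executed.
	pub threshold: u32,
	/// Creation deposit held from the creator (base + per-signer).
	pub deposit: BalanceOf<T>,
}

/// Multisig AccountId -> its stored details.
#[pallet::storage]
pub type Multisigs<T: Config> =
	StorageMap<_, Blake2_128Concat, T::AccountId, MultisigDetails<T>>;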
- -
#![allow(unused)]
-fn main() {
-		/// Creates a new multisig account and attach signers with a threshold to it.
-		///
-		/// The dispatch origin for this call must be _Signed_. It is expected to be a normal AccountId and not a
-		/// Multisig AccountId.
-		///
-		/// T::BaseCreationDeposit + T::PerSignerDeposit * signers.len() will be held from the caller's account.
-		///
-		/// # Arguments
-		///
-		/// - `signers`: Initial set of accounts to add to the multisig. These may be updated later via `add_signer`
-		/// and `remove_signer`.
-		/// - `threshold`: The threshold number of accounts required to approve an action. Must be greater than 0 and
-		/// less than or equal to the total number of signers.
-		///
-		/// # Errors
-		///
-		/// * `TooManySignatories` - The number of signatories exceeds the maximum allowed.
-		/// * `InvalidThreshold` - The threshold is greater than the total number of signers.
-		pub fn create_multisig(
-			origin: OriginFor<T>,
-			signers: BoundedBTreeSet<T::AccountId, T::MaxSignatories>,
-			threshold: u32,
-		) -> DispatchResult 
-}
- -
#![allow(unused)]
-fn main() {
-		/// Starts a new proposal for a dispatchable call for a multisig account.
-		/// The caller must be one of the signers of the multisig account.
-		/// T::ProposalDeposit will be held from the caller's account.
-		///
-		/// # Arguments
-		///
-		/// * `multisig_account` - The multisig account ID.
-		/// * `call_or_hash` - The enum having the call or the hash of the call to be approved and executed later.
-		///
-		/// # Errors
-		///
-		/// * `MultisigNotFound` - The multisig account does not exist.
-		/// * `UnAuthorizedSigner` - The caller is not a signer of the multisig account.
-		/// * `TooManySignatories` - The number of signatories exceeds the maximum allowed. (shouldn't really happen as it's the first approval)
-		pub fn start_proposal(
-			origin: OriginFor<T>,
-			multisig_account: T::AccountId,
-			call_or_hash: CallOrHash,
-		) -> DispatchResult
-}
- -
#![allow(unused)]
-fn main() {
-		/// Approves a proposal for a dispatchable call for a multisig account.
-		/// The caller must be one of the signers of the multisig account.
-		///
-		/// If a signer did approve -> reject -> approve, the proposal will be approved.
-		/// If a signer did approve -> reject, the proposal will be rejected.
-		///
-		/// # Arguments
-		///
-		/// * `multisig_account` - The multisig account ID.
-		/// * `call_or_hash` - The enum having the call or the hash of the call to be approved.
-		///
-		/// # Errors
-		///
-		/// * `MultisigNotFound` - The multisig account does not exist.
-		/// * `UnAuthorizedSigner` - The caller is not a signer of the multisig account.
-		/// * `TooManySignatories` - The number of signatories exceeds the maximum allowed.
-		/// This shouldn't really happen as it's an approval, not an addition of a new signer.
-		pub fn approve(
-			origin: OriginFor<T>,
-			multisig_account: T::AccountId,
-			call_or_hash: CallOrHash,
-		) -> DispatchResult
-}
- -
#![allow(unused)]
-fn main() {
-		/// Rejects a proposal for a multisig account.
-		/// The caller must be one of the signers of the multisig account.
-		///
-		/// Between approving and rejecting, last call wins.
-		/// If a signer did approve -> reject -> approve, the proposal will be approved.
-		/// If a signer did approve -> reject, the proposal will be rejected.
-		///
-		/// # Arguments
-		///
-		/// * `multisig_account` - The multisig account ID.
-		/// * `call_or_hash` - The enum having the call or the hash of the call to be rejected.
-		///
-		/// # Errors
-		///
-		/// * `MultisigNotFound` - The multisig account does not exist.
-		/// * `UnAuthorizedSigner` - The caller is not a signer of the multisig account.
-		/// * `SignerNotFound` - The caller has not approved the proposal.
-		#[pallet::call_index(3)]
-		#[pallet::weight(Weight::default())]
-		pub fn reject(
-			origin: OriginFor<T>,
-			multisig_account: T::AccountId,
-			call_or_hash: CallOrHash,
-		) -> DispatchResult
-}
- -
#![allow(unused)]
-fn main() {
-		/// Executes a proposal for a dispatchable call for a multisig account.
-		/// Proposal needs to be approved by enough signers (meeting or exceeding the multisig threshold) before it can be executed.
-		/// The caller must be one of the signers of the multisig account.
-		///
-		/// This function does an extra check to make sure that all approvers still exist in the multisig account.
-		/// That is to make sure that the multisig account is not compromised by removing an signer during an active proposal.
-		///
-		/// Once finished, the withheld deposit will be returned to the proposal creator.
-		///
-		/// # Arguments
-		///
-		/// * `multisig_account` - The multisig account ID.
-		/// * `call_or_hash` - We should have gotten the RuntimeCall (preimage) and stored it in the proposal by the time the extrinsic is called.
-		///
-		/// # Errors
-		///
-		/// * `MultisigNotFound` - The multisig account does not exist.
-		/// * `UnAuthorizedSigner` - The caller is not an signer of the multisig account.
-		/// * `NotEnoughApprovers` - approvers don't exceed the threshold.
-		/// * `ProposalNotFound` -  The proposal does not exist.
-		/// * `CallPreImageNotFound` -  The proposal doesn't have the preimage of the call in the state.
-		pub fn execute_proposal(
-			origin: OriginFor<T>,
-			multisig_account: T::AccountId,
-			call_or_hash: CallOrHash,
-		) -> DispatchResult
-}

/// Cancels an existing proposal for a multisig account.
/// The proposal needs enough rejections (meeting or exceeding the multisig threshold) before it can be canceled.
/// The caller must be one of the signers of the multisig account.
///
/// This function does an extra check to make sure that all rejectors still exist in the multisig account.
/// That is to make sure that the multisig account is not compromised by removing a signer during an active proposal.
///
/// Once finished, the withheld deposit will be returned to the proposal creator.
///
/// # Arguments
///
/// * `origin` - A signer of the multisig account who wants to cancel the proposal.
/// * `call_or_hash` - The call or hash of the call to be canceled.
///
/// # Errors
///
/// * `MultisigNotFound` - The multisig account does not exist.
/// * `ProposalNotFound` - The proposal does not exist.
pub fn cancel_proposal(
    origin: OriginFor<T>,
    multisig_account: T::AccountId,
    call_or_hash: CallOrHash,
) -> DispatchResult

/// Cancels an existing proposal for a multisig account, but only if the proposal has no approvers other than
/// the proposer.
///
/// This function needs to be called by the proposer of the proposal as the origin.
///
/// The withheld deposit will be returned to the proposal creator.
///
/// # Arguments
///
/// * `multisig_account` - The multisig account ID.
/// * `call_or_hash` - The hash of the call to be canceled.
///
/// # Errors
///
/// * `MultisigNotFound` - The multisig account does not exist.
/// * `ProposalNotFound` - The proposal does not exist.
pub fn cancel_own_proposal(
    origin: OriginFor<T>,
    multisig_account: T::AccountId,
    call_or_hash: CallOrHash,
) -> DispatchResult

/// Cleans up the proposals of a multisig account. This function iterates over at most a fixed limit of
/// proposals per extrinsic, to ensure we don't have unbounded iteration over the proposals.
///
/// The withheld deposits will be returned to the proposal creators.
///
/// # Arguments
///
/// * `multisig_account` - The multisig account ID.
///
/// # Errors
///
/// * `MultisigNotFound` - The multisig account does not exist.
/// * `ProposalNotFound` - The proposal does not exist.
pub fn cleanup_proposals(
    origin: OriginFor<T>,
    multisig_account: T::AccountId,
) -> DispatchResult

Note: The following functions need to be called from the multisig account itself. Deposits are reserved from the multisig account as well.

/// Adds a new signer to the multisig account.
/// This function needs to be called with the multisig account as the origin.
/// Otherwise it will fail with a MultisigNotFound error.
///
/// T::PerSignerDeposit will be held from the multisig account.
///
/// # Arguments
///
/// * `origin` - The multisig account that wants to add a new signer.
/// * `new_signer` - The AccountId of the new signer to be added.
/// * `new_threshold` - The new threshold for the multisig account after adding the new signer.
///
/// # Errors
///
/// * `MultisigNotFound` - The multisig account does not exist.
/// * `InvalidThreshold` - The threshold is greater than the total number of signers or is zero.
/// * `TooManySignatories` - The number of signatories exceeds the maximum allowed.
pub fn add_signer(
    origin: OriginFor<T>,
    new_signer: T::AccountId,
    new_threshold: u32,
) -> DispatchResult

/// Removes a signer from the multisig account.
/// This function needs to be called with the multisig account as the origin.
/// Otherwise it will fail with a MultisigNotFound error.
/// If only one signer exists and is removed, the multisig account and any pending proposals for this account will be deleted from the state.
///
/// # Arguments
///
/// * `origin` - The multisig account that wants to remove a signer.
/// * `signer_to_remove` - The AccountId of the signer to be removed.
/// * `new_threshold` - The new threshold for the multisig account after removing the signer. Accepts zero if
/// the signer is the only one left.
///
/// # Errors
///
/// * `MultisigNotFound` - The multisig account does not exist.
/// * `InvalidThreshold` - The new threshold is greater than the total number of signers or is zero.
/// * `UnAuthorizedSigner` - The caller is not a signer of the multisig account.
pub fn remove_signer(
    origin: OriginFor<T>,
    signer_to_remove: T::AccountId,
    new_threshold: u32,
) -> DispatchResult

/// Sets a new threshold for a multisig account.
/// This function needs to be called with the multisig account as the origin.
/// Otherwise it will fail with a MultisigNotFound error.
///
/// # Arguments
///
/// * `origin` - The multisig account that wants to set the new threshold.
/// * `new_threshold` - The new threshold to be set.
///
/// # Errors
///
/// * `MultisigNotFound` - The multisig account does not exist.
/// * `InvalidThreshold` - The new threshold is greater than the total number of signers or is zero.
pub fn set_threshold(origin: OriginFor<T>, new_threshold: u32) -> DispatchResult
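
Several of the calls above share the InvalidThreshold invariant. A minimal plain-Rust sketch of the check they all rely on (the helper name is hypothetical, not part of the pallet):

/// A threshold must be non-zero and must not exceed the current signer count.
/// (remove_signer additionally allows zero when removing the last signer.)
fn validate_threshold(new_threshold: u32, signer_count: u32) -> Result<(), &'static str> {
    if new_threshold == 0 || new_threshold > signer_count {
        return Err("InvalidThreshold");
    }
    Ok(())
}

fn main() {
    assert!(validate_threshold(2, 3).is_ok());
    assert!(validate_threshold(0, 3).is_err()); // zero threshold is rejected
    assert!(validate_threshold(4, 3).is_err()); // threshold above signer count is rejected
}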

/// Deletes a multisig account and all related proposals.
///
/// This function needs to be called with the multisig account as the origin.
/// Otherwise it will fail with a MultisigNotFound error.
///
/// # Arguments
///
/// * `origin` - The multisig account to be deleted.
///
/// # Errors
///
/// * `MultisigNotFound` - The multisig account does not exist.
pub fn delete_account(origin: OriginFor<T>) -> DispatchResult

Storage/State

#[pallet::storage]
pub type MultisigAccount<T: Config> = StorageMap<_, Twox64Concat, T::AccountId, MultisigAccountDetails<T>>;

/// The set of open multisig proposals. A proposal is uniquely identified by the multisig account and the call hash.
/// (maybe a nonce as well in the future)
#[pallet::storage]
pub type PendingProposals<T: Config> = StorageDoubleMap<
    _,
    Twox64Concat,
    T::AccountId, // Multisig Account
    Blake2_128Concat,
    T::Hash, // Call Hash
    MultisigProposal<T>,
>;

As for the values:

pub struct MultisigAccountDetails<T: Config> {
    /// The signers of the multisig account. This is a BoundedBTreeSet to ensure faster operations (add, remove),
    /// as well as lookups and faster set operations to ensure approvers is always a subset of signers
    /// (e.g. in case of removal of a signer during an active proposal).
    pub signers: BoundedBTreeSet<T::AccountId, T::MaxSignatories>,
    /// The threshold of approvers required for the multisig account to be able to execute a call.
    pub threshold: u32,
    pub deposit: BalanceOf<T>,
}

pub struct MultisigProposal<T: Config> {
    /// Proposal creator.
    pub creator: T::AccountId,
    pub creation_deposit: BalanceOf<T>,
    /// The extrinsic when the multisig operation was opened.
    pub when: Timepoint<BlockNumberFor<T>>,
    /// The approvers achieved so far, including the depositor.
    /// The approvers are stored in a BoundedBTreeSet to ensure faster lookup and operations (approve, reject).
    /// It's also bounded to ensure that the size doesn't go over the limit required by the runtime.
    pub approvers: BoundedBTreeSet<T::AccountId, T::MaxSignatories>,
    /// The rejectors of the proposal so far.
    /// The rejectors are stored in a BoundedBTreeSet to ensure faster lookup and operations (approve, reject).
    /// It's also bounded to ensure that the size doesn't go over the limit required by the runtime.
    pub rejectors: BoundedBTreeSet<T::AccountId, T::MaxSignatories>,
    /// The block number until which this multisig operation is valid. None means no expiry.
    pub expire_after: Option<BlockNumberFor<T>>,
}

For optimization we're using BoundedBTreeSet to allow for efficient lookups and removals, especially in the case of approvers: we need to be able to remove an approver from the list when they reject a previous approval (which we do lazily when execute_proposal is called).


There's an extra storage map for the deposits of the multisig accounts per added signer. This is to ensure that we can release the deposits when the multisig removes a signer, even if the per-signer deposit constant changes in a later runtime.

Considerations & Edge cases

Removing a signer from the multisig account during an active proposal

We need to ensure that the approvers are always a subset of the signers. This is partially why we're using BoundedBTreeSet for signers and approvers. Once execute_proposal is called, we ensure that the proposal is still valid and the approvers are still a subset of the current signers.
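
A minimal plain-Rust sketch of that subset check (std's BTreeSet stands in for BoundedBTreeSet; all names are illustrative):

use std::collections::BTreeSet;

/// Returns the approvals that still count: approvers who are still signers.
/// A hypothetical `execute_proposal` would compare this count against the threshold.
fn valid_approvals(signers: &BTreeSet<&str>, approvers: &BTreeSet<&str>) -> usize {
    approvers.intersection(signers).count()
}

fn main() {
    let signers: BTreeSet<_> = BTreeSet::from(["alice", "bob", "carol"]);
    // "dave" approved but was later removed as a signer, so his approval is discarded lazily.
    let approvers: BTreeSet<_> = BTreeSet::from(["alice", "dave"]);
    assert_eq!(valid_approvals(&signers, &approvers), 1);
}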

Multisig account deletion and cleaning up existing proposals

Once the last signer of a multisig account is removed, or the multisig approved the account deletion, we delete the multisig account from the state but keep the proposals until someone calls cleanup_proposals (possibly multiple times), which iterates over at most a fixed limit of proposals per extrinsic. This ensures we don't have unbounded iteration over the proposals. Users are already incentivized to call cleanup_proposals to get their deposits back.

Multisig account deletion and existing deposits

We currently just delete the account without checking for deposits (input on the preferred behaviour here is welcome).


Approving a proposal after the threshold is changed

We always use the latest threshold and don't store a separate threshold per proposal.

Drawbacks

Testing, Security, and Privacy

Standard audit/review requirements apply.

Performance, Ergonomics, and Compatibility

Performance

A back-of-the-envelope calculation shows that the stateful multisig is more efficient than the stateless multisig, given its smaller footprint on blocks.

A quick review of the extrinsics of both pallets, as they affect block size:

Stateless Multisig: Both as_multi and approve_as_multi have similar parameters:

origin: OriginFor<T>,
threshold: u16,
other_signatories: Vec<T::AccountId>,
maybe_timepoint: Option<Timepoint<BlockNumberFor<T>>>,
call_hash: [u8; 32],
max_weight: Weight,

Stateful Multisig: We have the following extrinsics:

pub fn start_proposal(
    origin: OriginFor<T>,
    multisig_account: T::AccountId,
    call_or_hash: CallOrHash,
)

pub fn approve(
    origin: OriginFor<T>,
    multisig_account: T::AccountId,
    call_or_hash: CallOrHash,
)

pub fn execute_proposal(
    origin: OriginFor<T>,
    multisig_account: T::AccountId,
    call_or_hash: CallOrHash,
)

The main takeaway is that we don't need to pass the threshold and the other signatories in the extrinsics, because the threshold and signatories are already in the state (stored only once).

Now for the calculations, let K be the number of multisig accounts and N the number of signers per account.

The table below estimates, assuming each of the K multisig accounts has one proposal that gets approved by 2N/3 signers and then executed, how much the total block and state sizes have increased by the end of the day.

Note: We're not calculating the cost of the proposal itself, as it is almost the same in both the stateful and stateless multisig, and it gets cleaned up from the state once the proposal is executed or canceled.

Stateless effect on block sizes = (2/3)KN^2 (each of the 2N/3 approving users needs to call approve_as_multi with all the other signatories (N) in the extrinsic body)

Stateful effect on block sizes = K*N (each user needs to call approve with only the multisig account in the extrinsic body)

Stateless effect on state sizes = Nil (the multisig account is not stored in the state)

Stateful effect on state sizes = K*N (each multisig account (K) is stored with all its signers (N) in the state)

| Pallet    | Block Size | State Size |
|-----------|:----------:|-----------:|
| Stateless | (2/3)KN^2  | Nil        |
| Stateful  | K*N        | K*N        |

Simplified table, removing K from the equation:

| Pallet    | Block Size | State Size |
|-----------|:----------:|-----------:|
| Stateless | N^2        | Nil        |
| Stateful  | N          | N          |

So even though the stateful multisig has a larger state size, it's still more efficient in terms of block size and total footprint on the blockchain.
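
To make the comparison concrete, a tiny runnable sketch of these formulas (the unit is one account-ID-sized entry; K and N are illustrative numbers, not measurements):

// K multisig accounts, N signers each, 2N/3 approvals per proposal.
fn main() {
    let (k, n): (u64, u64) = (1_000, 9);
    let stateless_block = 2 * k * n * n / 3; // (2/3)KN^2 account IDs in extrinsic bodies
    let stateful_block = k * n;              // K*N account IDs in extrinsic bodies
    let stateful_state = k * n;              // K*N account IDs kept in state
    println!("stateless block entries: {stateless_block}"); // 54000
    println!("stateful  block entries: {stateful_block}");  // 9000
    println!("stateful  state entries: {stateful_state}");  // 9000
}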

Ergonomics

The Stateful Multisig will have better ergonomics for managing multisig accounts, for both developers and end-users.

Compatibility

This RFC is compatible with the existing implementation and can be handled via upgrades and migration. It's not intended to replace the existing multisig pallet.

Prior Art and References

multisig pallet in polkadot-sdk

Unresolved Questions

(source)


RFC-0077: Increase maximum length of identity PGP fingerprint values from 20 bytes

| Start Date  | 20 Feb 2024 |
| Description | Increase the maximum length of identity PGP fingerprint values from 20 bytes |
| Authors     | Luke Schoen |

Summary

This proposes to increase the maximum length of PGP Fingerprint values from a 20 bytes/chars limit to a 40 bytes/chars limit.

Motivation

Background

Pretty Good Privacy (PGP) Fingerprints are shorter versions of their corresponding Public Key that may be printed on a business card.

They may be used by someone to validate the correct corresponding Public Key.

It should be possible to add PGP Fingerprints to Polkadot on-chain identities.

GNU Privacy Guard (GPG) is compliant with PGP, and the two acronyms are used interchangeably.

Problem

When setting a Polkadot on-chain identity, users may provide a PGP Fingerprint value in the "pgpFingerprint" field, which may be longer than 20 bytes/chars (e.g. PGP Fingerprints are 40 bytes/chars long); however, that field can only store a maximum of 20 bytes/chars of information.

Possible disadvantages of the current 20 bytes/chars limitation:

Solution Requirements

The maximum length of identity PGP Fingerprint values should be increased from the current 20 bytes/chars limit to at least a 40 bytes/chars limit, to support PGP Fingerprints and GPG Fingerprints.

Stakeholders

Explanation

If a user tries to set an on-chain identity by creating an extrinsic using Polkadot.js with identity > setIdentity(info), and they provide their 40 character long PGP Fingerprint or GPG Fingerprint, which is longer than the maximum length of 20 bytes/chars [u8;20], then they will encounter this error:

createType(Call):: Call: failed decoding identity.setIdentity:: Struct: failed on args: {...}:: Struct: failed on pgpFingerprint: Option<[u8;20]>:: Expected input with 20 bytes (160 bits), found 40 bytes

Increasing the maximum length of identity PGP Fingerprint values from the current 20 bytes/chars limit to at least a 40 bytes/chars limit would overcome this error and support PGP Fingerprints and GPG Fingerprints, satisfying the solution requirements.
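
A minimal runnable sketch of the widening, interpreting the pasted 40-character value as raw field contents exactly as the error message above does (the value shown is illustrative):

fn main() {
    // The 40-character form of a fingerprint does not fit the current
    // Option<[u8; 20]> field, but fits once it is widened to 40 bytes.
    let fingerprint = "0123456789ABCDEF0123456789ABCDEF01234567"; // 40 chars
    let mut widened = [0u8; 40]; // proposed Option<[u8; 40]> payload
    widened.copy_from_slice(fingerprint.as_bytes());
    assert_eq!(widened.len(), 40); // the old [u8; 20] field rejects this input
}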

Drawbacks

No drawbacks have been identified.

Testing, Security, and Privacy

Implementations would be tested for adherence by checking that 40 bytes/chars PGP Fingerprints are supported.

No effect on security or privacy has been identified beyond what already exists.

No implementation pitfalls have been identified.

Performance, Ergonomics, and Compatibility

Performance

It would be an optimization, since the associated exposed interfaces could start being used by developers and end-users.

To minimize additional overhead, the proposal suggests a 40 bytes/chars limit, since that would at least provide support for PGP Fingerprints, satisfying the solution requirements.

Ergonomics

No potential ergonomic optimizations have been identified.

Compatibility

Updates to Polkadot.js Apps, API and its documentation, and those referring to it, may be required.

Prior Art and References

No prior articles or references.

Unresolved Questions

No further questions at this stage.

Relates to the RFC entitled "Increase maximum length of identity raw data values from 32 bytes".

(source)


RFC-0088: Add slashable locked deposit, purchaser reputation, and reserved cores for on-chain identities to broker pallet

| Start Date  | 25 Apr 2024 |
| Description | Add slashable locked deposit, purchaser reputation, and reserved cores for on-chain identities to broker pallet |
| Authors     | Luke Schoen |

Summary

This proposes to require a slashable deposit in the broker pallet when initially purchasing or renewing Bulk Coretime or Instantaneous Coretime cores.

Additionally, it proposes to record a reputational status based on the behavior of the purchaser, as it relates to their use of the Kusama Coretime cores that they purchase, and to possibly reserve a proportion of the cores for prospective purchasers that have an on-chain identity.

Motivation

Background

There are sales of Kusama Coretime cores that are scheduled to occur later this month by Coretime Marketplace Lastic.xyz, initially in limited quantities, and potentially also by RegionX in future, subject to their Polkadot referendum #582. This poses a risk: some purchasers may buy Kusama Coretime cores with no intention of actually placing a workload on them or leasing them out, which would prevent those that wish to purchase and actually use Kusama Coretime cores from being able to use any cores at all.

Problem

The types of purchasers may include:

Chaotic repercussions could include the following:

Solution Requirements

1. On-chain identity. It may be possible to circumvent bots and scalpers to an extent by requiring a proportion of Kusama Coretime purchasers to have an on-chain identity. As such, a possible solution could be to allow the configuration of a threshold in the Broker pallet that reserves a proportion of the cores for accounts that have an on-chain identity, and that reverts to a waiting list of anonymous account purchasers if the reserved proportion of cores remains unsold.

2. Slashable deposit. A viable solution could be to require a slashable deposit to be locked prior to the purchase or renewal of a core, similar to how decision deposits are used in OpenGov to prevent spam. If you buy a Kusama Coretime core, you could be challenged by one or more collectives of fishermen to provide proof against certain criteria of how you used it; if you fail to provide adequate evidence in response to that scrutiny, then you would lose a proportion of that deposit and face restrictions on purchasing or renewing cores in future, which may also be configured on-chain.

3. Reputation. To disincentivise certain behaviours, a reputational status indicator could be used to record the historic behavior of the purchaser, and whether on-chain judgement has determined they have adequately rectified that behaviour, as it relates to their usage of the Kusama Coretime cores that they purchase.

Stakeholders

Drawbacks

Performance

The slashable deposit, if set too high, may have an economic impact, where fewer Kusama Coretime cores are purchased.

Testing, Security, and Privacy

The lack of a slashable deposit in the Broker pallet is a security concern, since it exposes Kusama Coretime sales to potential abuse.

Reserving a proportion of Kusama Coretime sales cores for those with on-chain identities should not be to the exclusion of accounts that wish to remain anonymous, nor cause cores to be wasted unnecessarily. As such, if cores that are reserved for on-chain identities remain unsold, then they should be released to anonymous accounts that are on a waiting list.

No implementation pitfalls have been identified.

Performance, Ergonomics, and Compatibility

Performance

It should improve performance, as it reduces the potential for state bloat: there is less risk of the undesirable Kusama Coretime sales activity that would be apparent with no requirement for a slashable deposit, or with no reputational risk to purchasers that waste or misuse Kusama Coretime cores.

The solution proposes to minimize the risk of some Kusama Coretime cores not being used or leased to perform any tasks at all.

It will be important to monitor and manage the slashable deposits, purchaser reputations, and utilization of the proportion of cores that are reserved for accounts with an on-chain identity.

Ergonomics

The mechanism for setting a slashable deposit amount should avoid undue complexity for users.

Compatibility

Updates to Polkadot.js Apps, API and its documentation, and those referring to it, may be required.

Prior Art and References

Prior Art

No prior articles.

Unresolved Questions

None

(source)


RFC-0001: Secondary Market for Regions

| Start Date  | 2024-06-09 |
| Description | Implement a secondary market for region listings and sales |
| Authors     | Aurora Poppyseed, Philip Lucsok |

Summary

This RFC proposes the addition of a secondary market feature to either the broker pallet or as a separate pallet maintained by Lastic, enabling users to list and purchase regions. This includes creating, purchasing, and removing listings, as well as emitting relevant events and handling associated errors.

Motivation

Currently, the broker pallet lacks functionality for a secondary market, which limits users' ability to freely trade regions. This RFC aims to introduce a secure and straightforward mechanism for users to list regions they own for sale and allow other users to purchase these regions.

While integrating this functionality directly into the broker pallet is one option, another viable approach is to implement it as a separate pallet maintained by Lastic. This separate pallet would have access to the broker pallet and add minimal functionality necessary to support the secondary market.

Adding smart contracts to the Coretime chain could also address this need; however, this process is expected to be lengthy and complex. We cannot afford to wait for this extended timeline to enable basic secondary market functionality. By proposing either integration into the broker pallet or the creation of a dedicated pallet, we can quickly enhance the flexibility and utility of the broker pallet, making it more user-friendly and valuable.

Stakeholders

Primary stakeholders include:

Explanation

This RFC introduces the following key features:

1. Storage Changes:

2. New Dispatchable Functions:

3. Events:

4. Error Handling:

5. Testing:

Drawbacks

The main drawback of adding this additional complexity directly to the broker pallet is the potential increase in maintenance overhead. Therefore, we propose adding the additional functionality as a separate pallet on the Coretime chain. To take the pressure off implementing these features, the implementation along with unit tests would be taken care of by Lastic (Aurora Makovac, Philip Lucsok).

There are potential risks of security vulnerabilities in the new market functionalities, such as unauthorized region transfers or incorrect balance adjustments. Therefore, extensive security measures would have to be implemented.

Testing, Security, and Privacy

Testing

Security

Privacy

Performance, Ergonomics, and Compatibility

Performance

Ergonomics

Compatibility

Prior Art and References

Unresolved Questions

(source)


RFC-0002: Smart Contracts on the Coretime Chain

| Start Date  | 2024-06-09 |
| Description | Implement smart contracts on the Coretime chain |
| Authors     | Aurora Poppyseed, Phil Lucksok |

Summary

This RFC proposes the integration of smart contracts on the Coretime chain to enhance flexibility and enable complex decentralized applications, including secondary market functionalities.

Motivation

Currently, the Coretime chain lacks the capability to support smart contracts, which limits the range of decentralized applications that can be developed and deployed. By enabling smart contracts, the Coretime chain can facilitate more sophisticated functionalities such as automated region trading, dynamic pricing mechanisms, and other decentralized applications that require programmable logic. This will enhance the utility of the Coretime chain, attract more developers, and create more opportunities for innovation.

Additionally, while there is a proposal (#885) to allow EVM-compatible contracts on Polkadot’s Asset Hub, the implementation of smart contracts directly on the Coretime chain will provide synchronous interactions and avoid the complexities of asynchronous operations via XCM.

Stakeholders

Primary stakeholders include:

Explanation

This RFC introduces the following key components:

1. Smart Contract Support:

2. Storage and Execution:

3. Integration with Existing Pallets:

4. Security and Auditing:

Drawbacks

There are several drawbacks to consider:

Testing, Security, and Privacy

Testing

Security

Privacy

Performance, Ergonomics, and Compatibility

Performance

Ergonomics

Compatibility

Prior Art and References

Unresolved Questions

By enabling smart contracts on the Coretime chain, we can significantly expand its capabilities and attract a wider range of developers and users, fostering innovation and growth in the ecosystem.


(source)


RFC-0000: Feature Name Here

| Start Date  | 13 July 2024 |
| Description | Implement off-chain parachain runtime upgrades |
| Authors     | eskimor |

Summary

Change the upgrade process of a parachain runtime upgrade to become an off-chain process with regards to the relay chain. Upgrades are still contained in parachain blocks, but will no longer need to end up in relay chain blocks nor in relay chain state.

Motivation

Having parachain runtime upgrades go through the relay chain has always been seen as a scalability concern. Due to optimizations in statement distribution and asynchronous backing it became less crucial and got de-prioritized; the original issue can be found here.

With the introduction of Agile Coretime and in general our efforts to further reduce the barrier to entry for Polkadot, the issue becomes more relevant again: We would like to reduce the required storage deposit for PVF registration, with the aim not only to make it cheaper to run a parachain (bulk + on-demand coretime), but also to reduce the amount of capital required for the deposit. With this we would hope for far more parachains to get registered: thousands, potentially even tens of thousands. With so many PVFs registered, updates are expected to become more frequent, and even attacks on the quality of service of other parachains would become a higher risk.

Stakeholders

Explanation

The issues with on-chain runtime upgrades are:

1. They are needlessly costly.

2. A single runtime upgrade more or less occupies an entire relay chain block, thus it might also affect other parachains, especially if their candidates are also not negligible (due to messages, for example) or they want to upgrade their runtime at the same time.

3. The signalling from the parachain to notify the relay chain of an upcoming runtime upgrade already contains the upgrade. Therefore the only way to rate limit upgrades is to drop an already distributed update megabytes in size, with the result that the parachain misses a block and, more importantly, will try again with the very next block until it finally succeeds. If we imagine reducing the capacity for runtime upgrades to, let's say, 1 every 100 relay chain blocks, this results in lots of wasted effort and lost blocks.

We discussed introducing a separate signalling step before submitting the actual runtime, but I think we should just go one step further and make upgrades fully off-chain. This also helps bring down deposit costs in a secure way, as we are actually reducing costs for the network.

Introduce a new UMP message type RequestCodeUpgrade

As part of elastic scaling we are already planning to increase the flexibility of UMP messages; we can now use this to our advantage and introduce another UMP message:

enum UMPSignal {
  // For elastic scaling
  OnCore(CoreIndex),
  // For off-chain upgrades
  RequestCodeUpgrade(Hash),
}

We could also make that new message a regular XCM, calling an extrinsic on the relay chain, but we will want to look into that message right after validation by the backers on the node side, making a straightforward semantic message more apt for the purpose.

Handle RequestCodeUpgrade on backers

We will introduce a new request/response protocol for both collators and validators, with the following request/response:

struct RequestBlob {
  blob_hash: Hash,
}

struct BlobResponse {
  blob: Vec<u8>
}

This protocol will be used by backers to request the PVF from collators under the following conditions:

1. They received a collation sending RequestCodeUpgrade.

2. They received a collation, but they don't yet have the code that was previously registered on the relay chain (e.g. disk pruned, new validator).

In case they received the collation via PoV distribution instead of from the collator itself, they will use the exact same message to fetch from the validator they got the PoV from.

Get the new code to all validators

Once the candidate issuing RequestCodeUpgrade got backed on chain, validators will start fetching the code from the backers as part of availability distribution.

To mitigate attack vectors we should make sure that serving requests for code can be treated as low priority. Thus I am suggesting the following scheme:

Validators will notice via a runtime API (TODO: Define) that a new code has been requested. The API will return the Hash and a counter, which starts at some configurable value, e.g. 10. The validators are now aware of the new hash and start fetching, but they don't have to wait for the fetch to succeed to sign their bitfield.

Then on each further candidate from that chain, that counter gets decremented. Validators which have not yet succeeded fetching will now try again. This game continues until the counter reaches 0. Now it is mandatory to have the code in order to sign a 1 in the bitfield.

PVF pre-checking will happen after the candidate which brought the counter to 0 has been successfully included, and can thus also assume that 2/3 of the validators have the code.
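
A minimal plain-Rust sketch of that countdown rule (names and the initial value are illustrative; the real logic would live in availability distribution):

/// Illustrative state for one pending code upgrade of a parachain.
struct PendingUpgrade {
    counter: u32, // starts at a configurable value, e.g. 10
    have_code: bool,
}

/// May this validator sign a 1 in the bitfield for a candidate of this chain,
/// assuming it already holds its chunk?
fn may_sign_bit(upgrade: &PendingUpgrade) -> bool {
    // While the counter is still running down, the chunk alone suffices;
    // once it hits 0, possession of the new code becomes mandatory.
    upgrade.counter > 0 || upgrade.have_code
}

fn main() {
    let mut upgrade = PendingUpgrade { counter: 2, have_code: false };
    assert!(may_sign_bit(&upgrade));
    upgrade.counter -= 1; // one more candidate from this chain seen
    upgrade.counter -= 1;
    assert!(!may_sign_bit(&upgrade)); // counter at 0, code still missing
    upgrade.have_code = true;
    assert!(may_sign_bit(&upgrade));
}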

This scheme serves two purposes:

1. Fetching can happen over a longer period of time with low priority. E.g. if we waited for the PVF at the very first availability distribution, this might actually affect the liveness of other chains on the same core: distributing megabytes of data to a thousand validators might take a bit. Thus this helps isolate parachains from each other.

2. By configuring the initial counter value we can affect how much an upgrade costs. E.g. forcing the parachain to produce 10 blocks means 10x the cost for issuing an update. If too-frequent upgrades ever become a problem for the system, we have a knob to make them more costly.

On-chain code upgrade process

First, when a candidate is backed we need to make the new hash available (together with a counter) via a runtime API, so validators in availability distribution can check for it and fetch it if changed (see previous section). For performance reasons, I think we should not do an additional call, but replace the existing one with one containing the new additional information (Option<(Hash, Counter)>).

Once the candidate gets included (counter 0), the hash is given to pre-checking, and only after pre-checking succeeded (and a full session passed) is it finally enacted, and the parachain can switch to the new code. (Same process as it used to be.)
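
A sketch of the shape of that extended runtime API data (names are hypothetical; the RFC explicitly leaves the API definition as a TODO):

struct CodeUpgradeInfo {
    current_code_hash: [u8; 32],
    pending: Option<([u8; 32], u32)>, // (new code hash, remaining counter)
}

fn main() {
    // A chain with an upgrade pending and 7 candidates left until the new
    // code becomes mandatory for availability votes.
    let info = CodeUpgradeInfo { current_code_hash: [0u8; 32], pending: Some(([1u8; 32], 7)) };
    let (_new_hash, counter) = info.pending.unwrap();
    assert_eq!(counter, 7);
    assert_eq!(info.current_code_hash.len(), 32);
}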

Handling new validators

Backers

If a backer receives a collation for a parachain and does not yet have the code as enacted on chain (see "On-chain code upgrade process"), it will use the above request/response protocol to fetch it from whomever it received the collation from.

Availability Distribution

Validators in availability distribution will be changed to only sign a 1 in the bitfield of a candidate if they not only have the chunk, but also the currently active PVF. They will fetch it from backers in case they don't have it yet.

How do other parties get hold of the PVF?

Two ways:

1. Discover collators via the relay chain DHT and request from them: the preferred way, as it puts less load on validators.

2. Request from validators, which will serve on a best-effort basis.

Pruning

We covered how validators get hold of new code, but when can they prune old code? In principle it is not an issue if some validators prune code, because:

1. We changed it so that a candidate is not deemed available if validators were not able to fetch the PVF.

2. Backers can always fetch the PVF from collators as part of collation fetching.

But the majority of validators should always keep the latest code of any parachain and only prune the previous one once the first candidate using the new code got finalized. This ensures that disputes will always be able to resolve.

Drawbacks

The major drawback of this solution is the same as for any solution that moves work off-chain: it adds complexity to the node. E.g. nodes needing the PVF need to store it separately, together with their own pruning strategy as well.

Testing, Security, and Privacy

Implementations adhering to this RFC will respond to PVF requests with the actual PVF, if they have it. Requesters will persist received PVFs on disk until they are replaced by a new one. Implementations must not be lazy here: if validators only fetched the PVF when needed, they could be prevented from participating in disputes.

Validators should treat incoming requests for PVFs with rather low priority in general, but should prefer fetches from other validators over requests from random peers.

Given that we are altering what set bits in the availability bitfields mean (not only chunk, but also PVF available), it is important to have enough validators upgraded before we allow collators to make use of the new runtime upgrade mechanism. Otherwise we would risk disputes not being able to succeed.

This RFC has no impact on privacy.

Performance, Ergonomics, and Compatibility

Performance

This proposal lightens the load on the relay chain and is thus in general beneficial for the performance of the network. This is achieved by the following:

1. Code upgrades are still propagated to all validators, but only once, not twice (first via statements, then via the containing relay chain block).

2. Code upgrades are only communicated to validators and other nodes which are interested, not to every full node as before.

3. Relay chain block space is preserved. Previously we could only do one runtime upgrade per relay chain block, occupying almost all of the blockspace.

4. Signalling an upgrade no longer contains the upgrade itself, hence if we need to push back on an upgrade for whatever reason, no network bandwidth and core time gets wasted because of it.

Ergonomics

End users are only affected by better performance and more stable block times. Parachains will need to implement the introduced request/response protocol and adapt to the new signalling mechanism via a UMP message, instead of sending the code upgrade directly.

For parachain operators we should emit events on an initiated runtime upgrade, and on each block report the current counter and how many blocks remain until the upgrade gets passed to pre-checking. This is especially important for on-demand chains or bulk users not occupying a full core. Furthermore, the behaviour of requiring multiple blocks to fully initiate a runtime upgrade needs to be well documented.

Compatibility

We will continue to support the old mechanism for code upgrades for a while, but will start to impose stricter limits over time as the number of registered parachains goes up. With those limits in place, parachains not migrating to the new scheme might have a harder time upgrading and will miss more blocks. I guess we can be lenient for a while still, so the upgrade path for parachains should be rather smooth.

In total, the protocol changes we need are:

For validators and collators:

1. A new request/response protocol for fetching PVF data from collators and validators.

2. A new UMP message type for signalling a runtime upgrade.

Only for validators:

1. A new runtime API for determining to-be-enacted code upgrades.

2. Different behaviour of bitfields (only sign a 1 bit if the validator has the chunk plus the "hot" PVF).

3. Altered behaviour in availability-distribution: fetch missing PVFs.

Prior Art and References

Off-chain runtime upgrades have been discussed before; the architecture described here is simpler though, as it piggybacks on already existing features, namely:

1. availability-distribution: no separate "I have code" messages anymore.

2. Existing pre-checking.

https://github.com/paritytech/polkadot-sdk/issues/971

Unresolved Questions

1. What about the initial runtime, shall we make that off-chain as well?

2. Good news: at least after the first upgrade, no code will be stored on chain any more. This means that we also have to redefine the storage deposit now. We no longer charge for chain storage, but validator disk storage, which should be cheaper. Solution to this: not only store the hash on chain, but also the size of the data. Then define a price per byte and charge that, but:

TODO: Fully resolve these questions and incorporate in RFC text.

Further Hardening

By no longer having code upgrades go through the relay chain, occupying a full relay chain block, the impact on other parachains is already greatly reduced, if we make distribution and PVF pre-checking low-priority processes on validators. The only thing attackers might be able to do is delay upgrades of other parachains.

This seems like a problem to be solved once we actually see it in the wild (and it can already be mitigated by adjusting the counter). The good thing is that we have all the ingredients to go further if need be. Signalling no longer actually includes the code, hence there is no need to reject the candidate: the parachain can make progress even if we choose not to immediately act on the request, and no relay chain resources are wasted either.

We could for example introduce another UMP signalling message, RequestCodeUpgradeWithPriority, which not just requests a code upgrade, but also offers some DOT to get ranked up in a queue.

Generalize this off-chain storage mechanism?

Making this storage mechanism more general purpose is worth thinking about. E.g. by resolving the above "fee" question, we might also be able to resolve the pruning question in a more generic way, and thus could indeed open this storage facility for other purposes as well. E.g. smart contracts, so the PoV would only need to reference contracts by hash, and the actual contract code is stored on validators and collators and thus no longer needs to be part of the PoV.

A possible avenue would be to change the response to:

enum BlobResponse {
  Blob(Vec<u8>),
  Blobs(MerkleTree),
}

With this, the hash specified in the request can also be a merkle root, and the responder will respond with the entire merkle tree (only hashes, no payload). Then the requester can traverse the leaf hashes and use the same request/response protocol to request any locally missing blobs in that tree.

One leaf would for example be the PVF; others could be smart contracts. With a properly specified format (e.g. which leaf is the PVF?), what we get here is that a parachain can not only update its PVF, but additional data, incrementally. E.g. adding another smart contract does not require resubmitting the entire PVF to validators: only the root hash on the relay chain gets updated, then validators fetch the merkle tree and only fetch any missing leaves. That additional data could be made available to the PVF via a to-be-added host function. The nice thing about this approach is that, while we can upgrade incrementally, lifetime is still tied to the PVF and we get all the same guarantees. Assuming the validators store blobs by hash, we even get disk sharing if multiple parachains use the same data (e.g. the same smart contracts).
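
A minimal sketch of the requester-side logic under this generalization (plain Rust with a toy in-memory blob store; all names are illustrative):

use std::collections::{HashMap, HashSet};

type Hash = [u8; 32];

/// Given the leaf hashes from a fetched merkle tree and the blobs already
/// held on disk, return the hashes that still need to be requested individually.
fn missing_leaves(leaves: &[Hash], local_store: &HashMap<Hash, Vec<u8>>) -> HashSet<Hash> {
    leaves.iter().copied().filter(|h| !local_store.contains_key(h)).collect()
}

fn main() {
    let pvf_hash = [1u8; 32];
    let contract_hash = [2u8; 32];
    let mut store = HashMap::new();
    store.insert(pvf_hash, b"old pvf blob".to_vec()); // already held locally
    // Only the new contract blob needs to be fetched.
    let todo = missing_leaves(&[pvf_hash, contract_hash], &store);
    assert_eq!(todo, HashSet::from([contract_hash]));
}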


(source)


RFC-0106: Remove XCM fees mode

| Start Date  | 23 July 2024 |
| Description | Remove the SetFeesMode instruction and fees_mode register from XCM |
| Authors     | Francisco Aguirre |

Summary

The SetFeesMode instruction and the fees_mode register allow for the existence of JIT withdrawal. JIT withdrawal complicates the fee mechanism and leads to bugs and unexpected behaviour. The proposal is to remove said functionality: another effort to simplify fee handling in XCM.

Motivation

The JIT withdrawal mechanism creates bugs such as not being able to pay fees when all assets are put into holding and none are left in the origin location. This is confusing behavior, since there are funds for fees, just not where the XCVM wants them. The XCVM should have only one entrypoint to fee payment: the holding register. That way there is also less surface for bugs.

Stakeholders

Explanation

The SetFeesMode instruction will be removed. The Fees Mode register will be removed.

Drawbacks

Users will have to make sure to put enough assets in WithdrawAsset, where previously some things might have been charged directly from their accounts. This leads to more predictable behaviour though, so it will only be a drawback for a minority of users.
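
For illustration, a sketch of the pattern programs follow once JIT withdrawal is gone: pre-fund the holding register explicitly and buy execution from it. Types follow staging-xcm's prelude; the amounts and the assumed XCM version are illustrative:

use xcm::latest::prelude::*;

fn example_program() -> Xcm<()> {
    // One up-front withdrawal covering fees and everything else the program needs.
    let fees: Asset = (Here, 100_000_000u128).into();
    let total: Asset = (Here, 1_000_000_000u128).into();
    Xcm(vec![
        // Pre-fund the holding register explicitly: once SetFeesMode and JIT
        // withdrawal are removed, holding is the only entrypoint to fee payment.
        WithdrawAsset(vec![total].into()),
        BuyExecution { fees, weight_limit: Unlimited },
        DepositAsset { assets: All.into(), beneficiary: Here.into() },
    ])
}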

Testing, Security, and Privacy

Implementations and benchmarking must change for most existing pallet calls that send XCMs to other locations.

Performance, Ergonomics, and Compatibility

Performance

Performance will be improved, since unnecessary checks will be avoided.

Ergonomics

JIT withdrawal was a way of side-stepping the regular flow of XCM programs. By removing it, the spec is simplified, but old use-cases now have to work with the originally intended behaviour, which may result in more implementation work.

Ergonomics for users will undoubtedly improve, since the system becomes more predictable.

Compatibility

Existing programs in the ecosystem will break. The instruction should be deprecated as soon as this RFC is approved (but still fully supported), then removed in a subsequent XCM version (probably deprecate in v5, remove in v6).

Prior Art and References

The previous RFC PR on the xcm-format repo, before XCM RFCs were moved to fellowship RFCs: https://github.com/polkadot-fellows/xcm-format/pull/57.

Unresolved Questions

None.

The new generic fees mechanism is related to this proposal and further motivates it, as the JIT withdrawal mechanism will become useless anyway.

(source)


RFC-0111: Pure Proxy Replication

| Start Date  | 12 Aug 2024 |
| Description | Replication of pure proxy account ownership to a remote chain |
| Authors     | @muharem @xlc |

Summary

This RFC proposes a solution to replicate an existing pure proxy from one chain to others. The aim is to address the current limitations where pure proxy accounts, which are keyless, cannot have their proxy relationships recreated on different chains. This leads to issues where funds or permissions transferred to the same keyless account address on chains other than its origin chain become inaccessible.

Motivation

A pure proxy is a new account created by a primary account. The primary account is set as a proxy for the pure proxy account, managing it. Pure proxies are keyless and non-reproducible, meaning they lack a private key and have an address derived from a preimage determined by on-chain logic. More on pure proxies can be found here.

For the purpose of this document, we define a keyless account as a "pure account", the controlling account as a "proxy account", and the entire relationship as a "pure proxy".

The relationship between a pure account (e.g., account ID: pure1) and its proxy (e.g., account ID: alice) is stored on-chain (e.g., on parachain A) and currently cannot be replicated to another chain (e.g., parachain B). Because the account pure1 is keyless and its proxy relationship with alice is not replicable from parachain A to parachain B, alice does not control the pure1 account on parachain B.

Although this behaviour is not promised, users and clients often mistakenly expect alice to control the same pure1 account on parachain B. As a result, assets transferred to the account or permissions granted to it are inaccessible. Several factors contribute to this misuse:

Given that these mistakes are likely, it is necessary to provide a solution that either prevents them or enables access to a pure account on a target chain.

Stakeholders

Runtime Users, Runtime Devs, wallets, cross-chain dApps.

Explanation

One possible solution is to allow a proxy to create or replicate a pure proxy relationship for the same pure account on a target chain. For example, Alice, as the proxy of the pure1 pure account on parachain A, should be able to set a proxy for the same pure1 account on parachain B.

To minimise security risks, parachain B should grant parachain A the least amount of permission necessary for the replication. First, parachain A claims to parachain B that the operation is commanded by the pure account, and thus by its proxy; second, it provides proof that the account is keyless.

The replication process will be facilitated by XCM, with the first claim made using the DescendOrigin instruction. The replication call on parachain A would require a signed origin by the pure account and construct an XCM program for parachain B, where it first descends the origin, resulting in the ParachainA/AccountId32(pure1) origin location on the receiving side.

To prove that the pure account is keyless, the client must provide the initial preimage used by the chain to derive the pure account. Parachain A verifies it and sends it to parachain B with the replication request.
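
To illustrate why the preimage proves keyless-ness, a toy sketch of preimage-based derivation (plain Rust; std's DefaultHasher stands in for the pallet's actual 256-bit hash, and the field set mirrors the Witness below):

use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Toy derivation: a pure account is just a hash of its creation preimage,
/// so anyone holding the preimage can re-check the address, but no private
/// key for it can exist.
fn derive_pure_account(spawner: &str, index: u16, block_number: u64, ext_index: u32) -> u64 {
    let mut h = DefaultHasher::new();
    (spawner, index, block_number, ext_index).hash(&mut h);
    h.finish()
}

fn main() {
    let pure1 = derive_pure_account("alice", 0, 1_000, 2);
    // The same witness always re-derives the same account id.
    assert_eq!(pure1, derive_pure_account("alice", 0, 1_000, 2));
}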

We can draft a pallet extension for the proxy pallet, which needs to be initialised on both sides to enable replication:

// Simplified version to illustrate the concept.
mod pallet_proxy_replica {
  /// The part of the pure account preimage that has to be provided by a client.
  struct Witness {
    /// Pure proxy spawner
    spawner: AccountId,
    /// Disambiguation index
    index: u16,
    /// The block height of when the pure account was created.
    block_number: BlockNumber,
    /// The extrinsic index.
    ext_index: u32,
    // Part of the preimage, but constant.
    // proxy_type: ProxyType::Any,
  }
  // ...

  /// The replication call to be initiated on the source chain.
  // Simplified version, the XCM part will be abstracted by the `Config` trait.
  fn replicate(origin: SignedOrigin, witness: Witness, proxy: xcm::Location) -> ... {
       let pure = ensure_signed(origin);
       ensure!(pure == proxy_pallet::derive_pure_account(witness), Error::NotPureAccount);
       let xcm = vec![
         DescendOrigin(who),
         Transact(
             // …
             origin_kind: OriginKind::Xcm,
             call: pallet_proxy_replica::create(witness, proxy).encode(),
         )
       ];
       xcmTransport::send(xcm)?;
  }
  // …

  /// The call initiated by the source chain on the receiving chain.
  // `Config::CreateOrigin` - generally open for whitelisted parachain IDs and
  // converts `Origin::Xcm(ParachainA/AccountId32(pure1))` to `AccountId(pure1)`.
  fn create(origin: Config::CreateOrigin, witness: Witness, proxy: xcm::Location) -> ... {
       let pure = T::CreateOrigin::ensure_origin(origin);
       ensure!(pure == proxy_pallet::derive_pure_account(witness), Error::NotPureAccount);
       proxy_pallet::create_pure_proxy(pure, proxy);
  }
}

Drawbacks

There are two disadvantages to this approach:

We could eliminate the first disadvantage by allowing only the spawner of the pure proxy to recreate the pure proxies, if they sign the transaction on a remote chain and supply the witness/preimage. Since the preimage of a pure account includes the account ID of the spawner, we can verify that the account signing the transaction is indeed the spawner of the given pure account. However, this approach would grant exclusive rights to the spawner over the pure account, which is not a property of pure proxies at present. This is why it's not an option for us.

As an alternative to requiring clients to provide witness data, we could label pure accounts on the source chain and trust that label on the receiving chain. However, this would require the receiving chain to place greater trust in the source chain. If the source chain is compromised, any type of account on the trusting chain could also be compromised.

A conceptually different solution would be to not implement replication of pure proxies and instead inform users that ownership of a pure proxy on one chain does not imply ownership of the same account on another chain. This solution seems complex, as it would require UIs and clients to adapt to this understanding. Moreover, mistakes would likely remain unavoidable.

Testing, Security, and Privacy

Each chain expressly authorizes another chain to replicate its pure proxies, accepting the inherent risk of that chain potentially being compromised. This authorization allows a malicious actor on the compromised chain to take control of any pure proxy account on the chain that granted the authorization. However, this is limited to pure proxies that originated from the compromised chain, if they have a chain-specific seed within the preimage.

There is a security issue, not introduced by the proposed solution but worth mentioning: the same spawner can create pure accounts on different chains controlled by different accounts. This is possible because the current preimage version of the proxy pallet does not include any non-reproducible, chain-specific data, and elements like block numbers and extrinsic indexes can be reproduced with some effort. This issue could be addressed by adding a chain-specific seed to the preimages of pure accounts.

Performance, Ergonomics, and Compatibility

Performance

The replication is facilitated by XCM, which adds some additional load to the communication channel. However, since the number of replications is not expected to be large, the impact is minimal.

Ergonomics

The proposed solution does not alter any existing interfaces. It does require clients to obtain the witness data, which should not be an issue with the support of an indexer.

Compatibility

None.

Prior Art and References

None.

Unresolved Questions

None.

(source)


RFC-0112: Compress the State Response Message in State Sync

| Start Date  | 14 August 2024 |
| Description | Compress the state response message to reduce the data transfer during state syncing |
| Authors     | Liu-Cheng Xu |

Summary

This RFC proposes compressing the state response message during the state syncing process to reduce the amount of data transferred.

Motivation

State syncing can require downloading several gigabytes of data, particularly for blockchains with large state sizes, such as Astar, which has a state size exceeding 5 GiB (https://github.com/AstarNetwork/Astar/issues/1110). This presents a significant challenge for nodes with slower network connections. Additionally, the current state sync implementation lacks a persistence feature (https://github.com/paritytech/polkadot-sdk/issues/4), meaning any network disruption forces the node to re-download the entire state, making the process even more difficult.

Stakeholders

This RFC benefits all projects utilizing the Substrate framework, specifically in improving the efficiency of state syncing.

Explanation

The largest portion of the state response message consists of either CompactProof or Vec<KeyValueStateEntry>, depending on whether a proof is requested (source). Compressing this payload on the sender side and decompressing it on the receiver side reduces the bytes on the wire, as the sketch below illustrates.
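
A minimal sketch of that sender/receiver pair (assuming the zstd crate as the codec; the RFC itself does not prescribe a specific compression algorithm, and the linked patch should be treated as authoritative):

// Sender side: compress the encoded state response payload.
fn compress_state_response(encoded: &[u8]) -> std::io::Result<Vec<u8>> {
    zstd::encode_all(encoded, 0) // 0 selects zstd's default compression level
}

// Receiver side: decompress back to the original encoded payload.
fn decompress_state_response(compressed: &[u8]) -> std::io::Result<Vec<u8>> {
    zstd::decode_all(compressed)
}

fn main() -> std::io::Result<()> {
    let payload = vec![0u8; 4096]; // stand-in for an encoded CompactProof
    let wire = compress_state_response(&payload)?;
    assert!(wire.len() < payload.len());
    assert_eq!(decompress_state_response(&wire)?, payload);
    Ok(())
}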

Drawbacks

None identified.

Testing, Security, and Privacy

The code changes required for this RFC are straightforward: compress the state response on the sender side and decompress it on the receiver side. Existing sync tests should ensure functionality remains intact.

Performance, Ergonomics, and Compatibility

Performance

This RFC optimizes network bandwidth usage during state syncing, particularly for blockchains with gigabyte-sized states, while introducing negligible CPU overhead for compression and decompression. For example, compressing the state response during a recent Polkadot warp sync (around height #22076653) reduces the data transferred from 530,310,121 bytes to 352,583,455 bytes, a 33% reduction, saving approximately 169 MiB of data.

Performance data is based on this patch, with logs available here.

Ergonomics

None.

Compatibility

No compatibility issues identified.

Prior Art and References

None.

Unresolved Questions

None.

(source)


RFC-0114: Introduce secp256r1_ecdsa_verify_prehashed Host Function to verify NIST-P256 elliptic curve signatures

| Start Date  | 16 August 2024 |
| Description | Host function to verify NIST-P256 elliptic curve signatures. |
| Authors     | Rodrigo Quelhas |

Summary

-

This RFC proposes a new host function, secp256r1_ecdsa_verify_prehashed, for verifying NIST-P256 signatures. The function takes as input the message hash, r and s components of the signature, and the x and y coordinates of the public key. By providing this function, runtime authors can leverage a more efficient verification mechanism for "secp256r1" elliptic curve signatures, reducing computational costs and improving overall performance.

-

Motivation

-

The “secp256r1” elliptic curve is standardized by NIST and uses the same underlying calculations as the “secp256k1” curve, only with different input parameters. The cost of combined attacks and the security conditions are almost the same for both curves. Adding a host function to verify “secp256r1” signatures in the runtime brings multi-faceted benefits. One important factor is that this curve is widely used and supported in many modern devices, such as Apple’s Secure Enclave, WebAuthn, and Android Keystore, which demonstrates user adoption. Additionally, the introduction of this host function could enable valuable features for account abstraction, allowing more efficient and flexible management of accounts signed for on mobile devices. Most modern devices and applications rely on the “secp256r1” elliptic curve, so this host function enables more efficient verification of device-native transaction signing mechanisms. For example:

-
1. Apple's Secure Enclave: a separate “Trusted Execution Environment” in Apple hardware which can sign arbitrary messages and can only be accessed via biometric identification.
2. WebAuthn: Web Authentication (WebAuthn) is a web standard published by the World Wide Web Consortium (W3C). It aims to standardize an interface for authenticating users to web-based applications and services using public-key cryptography, and is supported by almost all modern web browsers.
3. Android Keystore: an API that manages private keys and signing methods. The private keys are not exposed to the applications using Keystore for signing, and operations can be performed inside a “Trusted Execution Environment” on the microchip.
4. Passkeys: Passkeys utilize FIDO Alliance and W3C standards. They replace passwords with cryptographic key-pairs, which can also be used for elliptic curve cryptography.
-

Stakeholders

- -

Explanation

-

This RFC proposes a new host function for runtime authors to leverage a more efficient verification mechanism for "secp256r1" elliptic curve signatures.

-

Proposed host function signature:

-
fn ext_secp256r1_ecdsa_verify_prehashed_version_1(
    sig: &[u8; 64],
    msg: &[u8; 32],
    pub_key: &[u8; 64],
) -> bool;
-

The host function MUST return true if the signature is valid or false otherwise.
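For illustration, a host-side implementation could be sketched as follows (assuming the `p256` crate and its `verify_prehash` API; the actual crate choice and the FFI marshalling across the runtime <-> host boundary are implementation details not specified by this RFC):

use p256::ecdsa::{signature::hazmat::PrehashVerifier, Signature, VerifyingKey};

fn secp256r1_ecdsa_verify_prehashed(
    sig: &[u8; 64],
    msg: &[u8; 32],
    pub_key: &[u8; 64],
) -> bool {
    // Rebuild the SEC1 uncompressed point encoding: 0x04 || x || y.
    let mut sec1 = [0u8; 65];
    sec1[0] = 0x04;
    sec1[1..].copy_from_slice(pub_key);
    let Ok(key) = VerifyingKey::from_sec1_bytes(&sec1) else {
        return false;
    };
    // Parse the raw (r || s) signature.
    let Ok(signature) = Signature::from_slice(sig) else {
        return false;
    };
    // Verify against the already-hashed (prehashed) message.
    key.verify_prehash(msg, &signature).is_ok()
}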

-

Drawbacks

-

N/A

-

Testing, Security, and Privacy

-

Security

-

The changes do not directly affect protocol security; parachains are not forced to use the host function.

-

Performance, Ergonomics, and Compatibility

-

Performance

-

N/A

-

Ergonomics

-

The host function proposed in this RFC allows parachain runtime developers to use a more efficient verification mechanism for "secp256r1" elliptic curve signatures.

-

Compatibility

-

Host implementations will need to include this function before parachain runtimes can upgrade to use it.

-

Prior Art and References

- -

(source)

-

Table of Contents

- -

RFC-0117: The Unbrick Collective

-
- - - -
Start Date: 22 August 2024
Description: The Unbrick Collective aims to help teams rescuing a para once it stops producing blocks
Authors: Bryan Chen, Pablo Dorado
-
-

Summary

-

A follow-up to RFC-0014. This RFC proposes adding a new collective to the Polkadot Collectives Chain, the Unbrick Collective, as well as improvements to the mechanisms that allow teams operating paras that have stopped producing blocks to be assisted in restoring block production.

-

Motivation

-

Since the initial launch of Polkadot parachains, there have been many incidents causing parachains to stop producing new blocks (therefore, being bricked) and many occurrences that required Polkadot governance to update the parachain head state/wasm. These can be due to many reasons, ranging from incorrectly registering the initial head state, inability to use the sudo key, bad runtime migrations, and bad weight configurations, to bugs in the development of the Polkadot SDK.

-

Currently, when the para is not unlocked in the paras registrar1, the Root origin is required to perform such actions, involving the governance process to invoke this origin, which can be very resource-expensive for the teams. The long voting and enactment times could also result in significant damage to the parachain and its users.

-

Finally, other instances of governance that might enact a call using the Root origin (like the Polkadot Fellowship) are, due to the nature of their mission, not fit to carry out these kinds of tasks.

-

In consequence, the idea of an Unbrick Collective that can provide assistance to para teams when their paras brick, and further protection against future halts, is reasonable enough.

-

Stakeholders

- -

Explanation

-

The Collective

-

The Unbrick Collective is defined as an unranked collective of members, not paid by the Polkadot Treasury. Its main goal is to serve as a point of contact and assistance for enacting the actions needed to unbrick a para. Such actions are:

- -

In order to ensure these changes are safe enough for the network, actions enacted by the Unbrick Collective must be whitelisted via mechanisms similar to those followed by collectives like the Polkadot Fellowship. This will prevent unintended, un-reviewed changes to other paras from occurring.

-

Also, teams might opt in to delegate the handling of their para in the registry to the Collective. This allows the Collective to perform similar actions using the paras registrar, providing a shorter path to unbrick a para.

-

Initially, the Unbrick Collective has powers similar to a parachain's own sudo, but permits more decentralized control. In the future, Polkadot shall provide functionality like SPREE or JAM that exceeds sudo permissions, so the Unbrick Collective cannot modify those state roots or code.

-

The Unbrick Process

-
flowchart TD
    A[Start]

    A -- Bricked --> C[Request para unlock via Root]
    C -- Approved --> Y
    C -- Rejected --> A

    D[unbrick call proposal on WhitelistedUnbrickCaller]
    E[whitelist call proposal on the Unbrick governance]
    E -- call whitelisted --> F[unbrick call enacted]
    D -- unbrick called --> F
    F --> Y

    A -- Not bricked --> O[Opt-in to the Collective]
    O -- Bricked --> D
    O -- Bricked --> E

    Y[update PVF / head state] -- Unbricked --> Z[End]
-

Initially, a para team has two paths to handle a potential unbrick of their para in case it stops producing blocks.

-
1. Opt-in to the Unbrick Collective: This is done by delegating the handling of the para in the paras registrar to an origin related to the Collective. This doesn't require unlocking the para. This way, the Collective is enabled to perform changes in the paras module once the Unbrick Process proceeds.
2. Request a Para Unlock: In case the para hasn't delegated its handling in the paras registrar, it is still possible for the para team to submit a proposal to unlock the para, which can be assisted by the Collective. However, this involves submitting a proposal to the Root governance origin.
-

Belonging to the Collective

-

The collective will initially be created without members (no seeding). Additional governance proposals will be required to set up the seed members.

-

The origins able to modify the members of the collective are:

- -

The members are responsible for verifying the technical details of the unbrick requests (e.g. the hash of the new PVF being set). Therefore, they must have the technical capacity to perform such tasks.

-

Suggested requirements to become a member are the following:

- -

Drawbacks

-

The ability to modify the head state and/or the PVF of a para implies the possibility of performing arbitrary modifications to it (e.g. taking control of the native parachain token or any bridged assets on the para).

-

This could introduce a new attack vector, and therefore such great power needs to be handled carefully.

-

Testing, Security, and Privacy

-

The implementation of this RFC will be tested on testnets (Rococo and Westend) first.

-

An audit will be required to ensure the implementation doesn't introduce unwanted side effects.

-

There are no privacy related concerns.

-

Performance, Ergonomics, and Compatibility

-

Performance

-

This RFC should not introduce any performance impact.

-

Ergonomics

-

This RFC should improve the experience for new and existing parachain teams, lowering the barrier to unbrick a stalled para.

-

Compatibility

-

This RFC is fully compatible with existing interfaces.

-

Prior Art and References

- -

Unresolved Questions

- - -
1 -

The paras registrar refers to a pallet on the Relay Chain, responsible for gathering registration info of the paras, their locked/unlocked state, and the manager info.

-
- -

(source)

-

Table of Contents

- -

RFC-0120: Referenda Confirmation by Candle Mechanism

-
- - - -
Start Date: 22 March 2024
Description: Proposal to decide polls after confirm period via a mechanism similar to a candle auction
Authors: Pablo Dorado, Daniel Olano
-
-

Summary

-

In an attempt to mitigate risks derived from unwanted behaviours around long decision periods on referenda, this proposal describes how to finalize and decide the result of a poll via a mechanism similar to candle auctions.

-

Motivation

-

The referenda protocol provides permissionless and efficient mechanisms enabling governance actors to decide the future of the blockchains around the Polkadot network. However, it poses a series of risks from a game theory perspective. One of them is an actor using the public nature of the tally of a poll as a way of determining the best point in time to alter the poll in a meaningful way.

-

While this behaviour is expected under the current design of the referenda logic, given the recent extension of ongoing times (up to 1 month), the incentives for a bad actor to cause losses to a proposer (reflected as a wasted opportunity cost) increase, and thus this otherwise reasonable outcome becomes an attack vector, a potential risk to mitigate, especially when such an attack can compromise critical guarantees of the protocol (such as its upgradeability).

-

To mitigate this, the underlying referenda mechanisms should incentivize actors to cast their votes on a poll as early as possible. This proposal suggests using a candle auction whose cut-off is determined right after the confirm period finishes, thus decreasing the chances of actors altering the results of a poll in the confirming state, and instead incentivizing them to cast their votes earlier, in the deciding state.

-

Stakeholders

- -

Explanation

-

Currently, the process of a referendum/poll is defined as a sequence within an ongoing state (where accounts can vote), comprising a preparation period, a decision period, and a confirm period. If the poll is passing before the decision period ends, it is possible to push forward to the confirm period, and still go back in case the poll fails. Once the decision period ends, a failure of the poll in the confirm period will lead to the poll ultimately being rejected.

-
stateDiagram-v2
    sb: Submission
    pp: Preparation Period
    dp: Decision Period
    cp: Confirmation Period
    state dpd <<choice>>
    state ps <<choice>>
    cf: Approved
    rj: Rejected

    [*] --> sb
    sb --> pp
    pp --> dp: decision period starts
    dp --> cp: poll is passing
    dp --> ps: decision period ends
    ps --> cp: poll is passing
    cp --> dpd: poll fails
    dpd --> dp: decision period not deadlined
    ps --> rj: poll is failing
    dpd --> rj: decision period deadlined
    cp --> cf
    cf --> [*]
    rj --> [*]
-
-

This specification proposes three changes to implement this candle mechanism:

-
1. This mechanism MUST be enabled via a configuration parameter. Once enabled, the referenda system MAY record the next poll ID from which to start enabling this mechanism. This is to preserve backwards compatibility with currently ongoing polls.

2. A record of the poll status (whether it is passing or not) is stored once the decision period is finished.

3. A Finalization period is included as part of the ongoing state. From this point on, the poll MUST be immutable.

   This period begins the moment after the confirm period ends, and extends the decision for a couple of blocks, until the VRF seed used to determine the candle block can be considered "good enough", i.e. not known before the ongoing period (decision/confirmation) was over.

   Once that happens, a random block within the confirm period is chosen, and the decision to approve or reject the poll is based on the status immediately before the block where the candle was "lit off" (see the sketch after this list).
-
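A minimal sketch of the candle decision, assuming a simplified VRF seed revealed only after the confirmation period and a hypothetical status_at lookup over the per-block passing record:

// Sketch only: `status_at` is a hypothetical lookup of the recorded
// passing/failing status per block, and `seed` is assumed to come from a
// VRF output that could not be known before the ongoing period was over.
fn candle_decision(
    seed: [u8; 32],
    confirm_start: u32,
    confirm_end: u32,
    status_at: impl Fn(u32) -> bool,
) -> bool {
    let span = confirm_end - confirm_start + 1;
    // Derive the "lit-off" block from the seed (a real implementation must
    // handle modulo bias or document it as acceptable).
    let r = u32::from_le_bytes([seed[0], seed[1], seed[2], seed[3]]);
    let candle_block = confirm_start + (r % span);
    // Approve or reject based on the status immediately before that block.
    status_at(candle_block.saturating_sub(1))
}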

When enabled, the state diagram for the referenda system is the following:

-
stateDiagram-v2
    sb: Submission
    pp: Preparation Period
    dp: Decision Period
    cp: Confirmation Period
    cds: Finalization
    state dpd <<choice>>
    state ps <<choice>>
    state cd <<choice>>
    cf: Approved
    rj: Rejected

    [*] --> sb
    sb --> pp
    pp --> dp: decision period starts
    dp --> cp: poll is passing
    ps --> cp: poll is passing
    dp --> ps: decision period ends
    ps --> rj: poll is failing
    cp --> dpd: poll fails
    dpd --> cp: decision period over
    dpd --> dp: decision period not over
    cp --> cds: confirmation period ends
    cds --> cd: define moment when candle lit-off
    cd --> cf: poll passed
    cd --> rj: poll failed
    cf --> [*]
    rj --> [*]
-
-

Drawbacks

-

This approach doesn't include a mechanism to determine whether a change of poll status in the confirming period is due to a legitimate change of mind of the voters or to an exploitation of the aforementioned vulnerabilities (like a sniping attack), instead treating all such changes as potential attacks.

-

This is an issue that can be addressed by additional mechanisms and heuristics that help determine the probability that a change of poll status is the result of legitimate behaviour.

-

Testing, Security, and Privacy

-

The implementation of this RFC will be tested on testnets (Paseo and Westend) first. Furthermore, it should be enabled on a canary network (like Kusama) to ensure the behaviours it is trying to address are indeed avoided.

-

An audit will be required to ensure the implementation doesn't introduce unwanted side effects.

-

There are no privacy related concerns.

-

Performance, Ergonomics, and Compatibility

-

Performance

-

The added steps imply a performance pessimization, necessary to implement the expected changes. An implementation MUST exit the Finalization period as early as possible to minimize this impact.

-

Ergonomics

-

This proposal does not alter the interfaces already exposed to developers or end users. However, they must be aware of the additional overhead the new period might incur (depending on the implemented VRF).

-

Compatibility

-

This proposal does not break compatibility with existing interfaces or older versions, but it alters the previous implementation of the referendum processing algorithm.

-

An acceptable upgrade strategy is defining a point in time (block number, poll index) from which to start applying the new mechanism, thus not affecting already ongoing referenda.

-

Prior Art and References

- -

Unresolved Questions

- - -

A proposed implementation of this change can be seen on this Pull Request.

-

(source)

-

Table of Contents

- -

RFC-0124: Extrinsic version 5

-
- - - -
Start Date: 18 October 2024
Description: Definition and specification of version 5 extrinsics
Authors: George Pisaltu
-
-

Summary

-

This RFC proposes the definition of version 5 extrinsics along with changes to the specification and encoding from version 4.

-

Motivation

-

RFC84 introduced the specification of General transactions, a new type of extrinsic besides the Signed and Unsigned variants available previously in version 4. Additionally, RFC99 introduced versioning of transaction extensions through an extra byte in the extrinsic encoding. Both of these changes require an extrinsic format version bump, as both the semantics around extensions as well as the actual encoding of extrinsics need to change to accommodate these new features.

-

Stakeholders

- -

Explanation

-

Changes to extrinsic authorization

-

The introduction of General transactions allows the authorization of any and all origins through extensions. This means that, with the appropriate extension, General transactions can replicate the same behavior as present-day v4 Signed transactions. Specifically for Polkadot chains, an example implementation of such an extension is VerifySignature, introduced in the Transaction Extension PR3685. Other extensions can be inserted into the extension pipeline to authorize different custom origins. Therefore, a Signed extrinsic variant is redundant given a General one, strictly in terms of user functionality, and could eventually be deprecated and removed.

-

Encoding format for version 5

-

As with version 4, the encoded extrinsic v5 is a SCALE encoded vector of bytes (u8), therefore starting with the encoded length of the following bytes in compact format. The leading byte after the length determines the version and type of extrinsic, as specified by RFC84. For reasons mentioned above, this RFC removes the Signed variant for v5 extrinsics.

-

For Bare extrinsics, the following bytes will just be the encoded call and nothing else.

-

For General transactions, as stated in RFC99, an extension version byte must be added to the extrinsic format. This byte should allow runtimes to expose more than one set of extensions which can be used for a transaction. As far as the v5 extrinsic encoding is concerned, this extension byte should be encoded immediately after the leading encoding byte. The extension version byte should be included in payloads to be signed by all extensions configured by runtime devs, to ensure a user's extension version choice cannot be altered by third parties.

-

After the extension version byte, the extensions will be encoded next, followed by the call itself.

-

A quick visualization of the encoding:
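In field order, the encoding described above can be sketched as (illustrative layout only, not normative):

Bare extrinsic:      compact_length | leading byte (version + type) | call
General transaction: compact_length | leading byte (version + type) | extension version byte | extensions | call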

- -

Signatures on Polkadot in General transactions

-

In order to run a transaction with a signed origin in extrinsic version 5, a user must create the transaction with an instance of at least one extension responsible for authorizing Signed origins with a provided signature.

-

As stated before, PR3685 comes with a Transaction Extension which replicates the current Signed transactions in v5 extrinsics, namely VerifySignature. I will use this extension as an example of how to replicate current Signed transaction functionality in the new v5 extrinsic format, though the runtime logic is not constrained to this particular implementation.

-

This extension leverages the new inherited implication functionality introduced in TransactionExtension and creates a payload to be signed using the data of all extensions after itself in the extension pipeline. This extension can be configured to accept a MultiSignature, which makes it compatible with all signature types currently used in Polkadot.

-

In the context of using an extension such as VerifySignature to replicate current Signed transaction functionality, the steps to generate the payload to be signed would be:

-
1. The extension version byte, call, extension, and extension implicit should be encoded (by "extension" and its implicit we mean only the data associated with extensions that follow this one in the composite extension type);
2. The result of the encoding should then be hashed using the BLAKE2_256 hasher;
3. The hash should then be signed with the signature type specified in the extension definition.
-
// Step 1: encode the bytes
let encoded = (extension_version_byte, call, transaction_extension, transaction_extension_implicit).encode();
// Step 2: hash them
let payload = blake2_256(&encoded[..]);
// Step 3: sign the payload
let signature = keyring.sign(&payload[..]);
-

Summary of changes in version 5

-

In order to minimize the number of changes to the extrinsic format version and also to help all consumers downstream in the transition period between these extrinsic versions, we should:

- -

Drawbacks

-

The metadata will have to accommodate two distinct extrinsic format versions at a given point in time in order to provide the new functionality in a non-breaking way for users and tooling.

-

Although having to support multiple extrinsic versions in metadata involves extra work, the change is ultimately an improvement to metadata, and the extra functionality may be useful in other future scenarios.

-

Testing, Security, and Privacy

-

There is no impact on testing, security or privacy.

-

Performance, Ergonomics, and Compatibility

-

This change makes the authorization through signatures configurable by runtime devs in version 5 extrinsics, as opposed to version 4, where the signing payload algorithm and signatures were hardcoded. This moves the responsibility of ensuring proper authentication through TransactionExtension to the runtime devs, but a sensible default which closely resembles present-day behavior will be provided in VerifySignature.

-

Performance

-

There is no performance impact.

-

Ergonomics

-

Tooling will have to adapt to be able to tell which authorization scheme is used by a particular transaction by decoding the extension and checking which particular TransactionExtension in the pipeline is enabled to do the origin authorization. Previously, this was done by simply checking whether the transaction was signed or unsigned, as there was only one method of authentication.

-

Compatibility

-

As long as extrinsic version 4 is still exposed in the metadata when version 5 is introduced, the changes will not break existing infrastructure. This should give tooling enough time to support version 5, with version 4 removed in the future.

-

Prior Art and References

-

This is a result of the work in Extrinsic Horizon and RFC99.

-

Unresolved Questions

-

None.

- -

Following this change, extrinsic version 5 will be introduced as part of the Extrinsic Horizon effort, which will shape future work.

-

(source)

-

Table of Contents

- -

RFC-0138: Election mechanism for invulnerable collators on system chains

-
- - - -
Start Date: 28 January 2025
Description: Mechanism for electing invulnerable collators on system chains.
Authors: George Pisaltu
-
-

Summary

-

The current election mechanism for permissionless collators on system chains was introduced in RFC-7. This RFC proposes a mechanism to facilitate replacements in the invulnerable sets of system chains by breaking down barriers that exist today.

-

Motivation

-

Following RFC-7 and the introduction of the collator election mechanism, anyone can now collate on a system chain in the permissionless slots, but the invulnerable set has been a contentious issue among current collators on system chains, as the path towards an invulnerable slot is almost impossible to pursue. From a technical standpoint, nothing prevents a permissionless collator, or anyone for that matter, from submitting a referendum to remove one collator from the invulnerable set and add themselves in their place. However, as quickly becomes obvious, such a referendum would be very difficult to pass under normal circumstances.

-

The first reason this would be contentious is that there is no significant difference between collators with good performance. There is no reasonable way to keep track of arbitrary on-chain data which could clearly and consistently distinguish between one collator and another. Collators that perform well propose blocks when they are supposed to, and that is what is tracked on-chain. Any other metrics for performance are arbitrary as far as the runtime logic is concerned and should be reasoned upon by humans using public discussion and a referendum.

-

The second reason for this is the inherently social aspect of this action. Even just proposing the referendum would be perceived as an attack on a specific collator in the set, singling them out, when in reality the proposer likely just wants to be part of the set and doesn't necessarily care who is kicked. In order to consolidate their position, the other invulnerables will rally behind the one that was challenged and the bid to replace one invulnerable will probably fail.

-

Existing invulnerables have a vested interest in protecting any other invulnerable from such attacks so that they themselves would be protected if need be. The existing collator set has already demonstrated that they can work together and subvert the free market mechanism offered by the runtime when they agreed to not outbid each other on permissionless slots after the new collator selection mechanism was introduced.

-

The existing invulnerables on a given system chain are there for a reason; they have demonstrated reliability in the past and were rewarded by governance with invulnerable slots and a bounty to cover their expenses. This means they have a solid reputation and a strong say in governance over matters related to collation. The optics of a permissionless collator actively challenging an invulnerable, even when it's justified, combined with the support of other invulnerables, make the invulnerable set de facto immutable.

-

While there should be strong guarantees of stability for invulnerables, they should not be a closed circle. The aim of this RFC is to provide a clear, reasonable, fair, and socially acceptable path for a permissionless collator with a proven track record to become an invulnerable, while preserving the stability of the invulnerable set of a system parachain.

-

Stakeholders

- -

Explanation

-

Proposal

-

This RFC proposes a periodic, mandatory, round-robin, two-round election mechanism for invulnerables.

-

How it works

-

The election should be implemented on top of the current logic in the collator-selection pallet. In this mechanism, candidates would register for the first round of the next election by placing deposits.

-

When the period between elections passes, the first round of the election starts with every candidate that registered, excluding the incumbent, as an option on the ballot. Votes should be expressed using tokens, which should not be available for other transactions while the election is ongoing, in order to introduce some opportunity cost to voting. After a certain amount of time passes, the election closes and the candidate who wins the first round advances to the second and final round. The deposits held for voting in the first round must be released before the second round.

-

In the second round of the election, the winner of the first round has the chance to replace the invulnerable currently holding the slot. A referendum is submitted to replace the incumbent with the winner of the first round, turning the second round of the election into a conviction-voting compatible referendum. If the referendum fails, the incumbent keeps their slot.

-

The period between elections should be configurable at the collator-selection pallet level. A full election cycle ends when the pallet has held an election for every single invulnerable slot. To qualify for the ballot, candidates must have been collating for at least one period from a permissionless slot, or be the incumbent.

-

Motivations behind the particularities of this mechanism

- -

Corner cases

- -

Drawbacks

-

The first major drawback of this proposal is that it would put more responsibility on governance by having people vote regularly in order to maintain the invulnerable collator set on each chain. Today the collator-selection pallet employs a fire-and-forget system where the invulnerables are chosen once by governance vote. Although in theory governance can always intervene to elect new invulnerables, for the reasons stated in this RFC this is not the case in practice. Moving away from this system means more action is needed from governance to ensure the stability of the invulnerable collator sets on each system chain, which automatically increases the probability of errors. However, governance is the ultimate source of truth on-chain and there is a lot more at stake in the hands of governance than the invulnerable collator sets on system chains, so I think this risk is acceptable.

-

The second drawback of this proposal is the imperfect voting mechanism. Probably the simplest and fairest voting system for this scenario would have been First Past the Post, where all candidates participate in a single election round and the candidate with the most votes wins outright. However, the downside of such a system is the technical complexity of running such an election on-chain. This election mechanism would require a multiple choice referendum implementation in the collator-selection pallet or at the system level somewhere else (e.g. on the Collectives chain), which would be a mix between the conviction-voting and staking pallets and would possibly communicate with all system chains via XCM. While this voting system could be useful in other contexts as well, I don't think it's worth conditioning the invulnerable collator redesign on a separate implementation of the multiple choice voting system when the proposed Two-Round System achieves the objectives of this RFC.

-

Testing, Security, and Privacy

-

All election mechanisms as well as corner cases can be covered with unit tests.

-

Performance, Ergonomics, and Compatibility

-

Performance

-

The chain will have to run extrinsics to start and end elections periodically, but the impact in terms of weight and PoV size is negligible.

-

Ergonomics

-

The invulnerables will be the most affected group, as they will now have to compete in elections periodically to secure their spots. Permissionless candidates will now have a clear, though not guaranteed, path towards becoming an invulnerable, at least for a period of time.

-

Compatibility

-

Any changes to the election mechanism of invulnerables should be compatible with the current invulnerable set's interaction with the collator set chosen at the session boundary. The current invulnerable set for each chain can be grandfathered in when upgrading the collator-selection pallet version.

-

Prior Art and References

-

This RFC builds on RFC-7, which introduced the election mechanism for system chain collators.

-

Unresolved Questions

- - -

The main spinoff of this RFC might be a multiple choice poll implementation in a separate pallet to hold a First Past the Post election instead of the Two-Round System proposed, which would prompt a migration to the new voting system within the collator-selection pallet. Additionally, a more complex solution, where the voting for all system chains happens in a single place which then sends XCM responses with election results back to system chains, can be implemented in the next iteration of this RFC.

(source)

Table of Contents

-

Motivation

+

Motivation

The heap allocation of the runtime is currently controlled by the host using a memory allocator on the host side.

The API of many host functions contains buffer allocations. For example, when calling ext_hashing_twox_256_version_1, the host allocates a 32-byte buffer using the host allocator, and returns a pointer to this buffer to the runtime. The runtime later has to call ext_allocator_free_version_1 on this pointer to free the buffer.

Even though no benchmark has been done, it is pretty obvious that this design is very inefficient. To continue with the example of ext_hashing_twox_256_version_1, it would be more efficient to instead write the output hash to a buffer allocated by the runtime on its stack and passed by pointer to the function. Allocating a buffer on the stack, in the worst case, consists simply of decreasing a number; in the best case, it is free. Doing so would save many VM memory reads and writes by the allocator, and would save a function call to ext_allocator_free_version_1.
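As a sketch of the difference from the runtime's point of view (ext_hashing_twox_256_version_2 is a hypothetical name used purely for illustration, and real host function signatures pack pointers and lengths differently):

// Hypothetical Wasm import declarations, simplified for illustration.
extern "C" {
    fn ext_hashing_twox_256_version_1(data: *const u8, len: u32) -> *mut u8;
    fn ext_allocator_free_version_1(ptr: *mut u8);
    // Assumed variant taking a caller-provided output buffer.
    fn ext_hashing_twox_256_version_2(data: *const u8, len: u32, out: *mut u8);
}

// Today: the host allocates the 32-byte output and the runtime must free it.
fn twox_256_host_allocated(data: &[u8]) -> [u8; 32] {
    let mut out = [0u8; 32];
    unsafe {
        let ptr = ext_hashing_twox_256_version_1(data.as_ptr(), data.len() as u32);
        core::ptr::copy_nonoverlapping(ptr, out.as_mut_ptr(), 32);
        ext_allocator_free_version_1(ptr); // extra boundary crossing just to free
    }
    out
}

// Proposed style: the runtime passes a stack buffer; no host allocation, no free.
fn twox_256_caller_allocated(data: &[u8]) -> [u8; 32] {
    let mut out = [0u8; 32]; // stack allocation: at worst a pointer bump
    unsafe { ext_hashing_twox_256_version_2(data.as_ptr(), data.len() as u32, out.as_mut_ptr()) };
    out
}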

Furthermore, the existence of the host-side allocator has become questionable over time. It is implemented in a very naive way: every allocation is rounded up to the next power of two, and once a piece of memory is allocated it can only be reused for allocations which also round up to exactly the same size. So in theory it's possible to end up in a situation where we still technically have plenty of free memory, but our allocations fail because all of that memory is reserved for differently-sized buckets. That behavior is de facto hardcoded into the current protocol, and for determinism and backwards compatibility reasons it needs to be implemented exactly identically in every client implementation.

In addition to that, runtimes make substantial use of heap memory allocations, and each allocation needs to go through the runtime <-> host boundary twice (once for allocating and once for freeing). Moving the allocator to the runtime side would be a good idea, although it would increase the runtime size. But before the host-side allocator can be deprecated, all the host functions that use it must be updated to avoid using it.

-

Stakeholders

+

Stakeholders

Runtime developers, who will benefit from the improved performance and more deterministic behavior of the runtime code.

-

Explanation

+

Explanation

New definitions

New Definition I: Runtime Optional Positive Integer

The Runtime optional positive integer is a signed 64-bit value. Values in the range [0..2³²) represent the corresponding unsigned 32-bit values. The value -1 represents a non-existing (absent) value. All other values are invalid.
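For clarity, a sketch of this convention in Rust (illustrative only):

// Encode/decode helpers for the "runtime optional positive integer":
// -1 means absent; values in [0, 2^32) carry a u32; anything else is invalid.
fn encode_optional_u32(value: Option<u32>) -> i64 {
    match value {
        Some(v) => v as i64,
        None => -1,
    }
}

fn decode_optional_u32(raw: i64) -> Result<Option<u32>, ()> {
    match raw {
        -1 => Ok(None),
        v if (0..=u32::MAX as i64).contains(&v) => Ok(Some(v as u32)),
        _ => Err(()), // all other values are invalid
    }
}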

@@ -5012,7 +1041,7 @@ of this RFC.

-

Other changes

+

Other changes

Currently, all runtime entrypoints have the following identical Wasm function signatures:

(func $runtime_entrypoint (param $data i32) (param $len i32) (result i64))
 
@@ -5069,9 +1098,9 @@ of this RFC.

Authors: polka.dom (polkadotdom)

Summary

+

Summary

This RFC proposes changes to pallet-conviction-voting that allow for simultaneous voting and delegation. For example, Alice could delegate to Bob, then later vote on a referendum while keeping their delegation to Bob intact. It is a strict subset of Leemo's RFC 35.

-

Motivation

+

Motivation

Backdrop

Under our current voting system, a voter can either vote or delegate. To vote, they must first ensure they have no delegate, and to delegate, they must first clear their current votes.

The Issue

@@ -5086,12 +1115,12 @@ of this RFC.

This RFC aims to solve the second and third issues and thus more accurately align governance with the true voter preferences.

An Aside

One may ask, could a voter not just undelegate, vote, then delegate again? Could this not just be built into the user interface? Unfortunately, this does not work due to the need to clear votes before redelegation. In practice the voter would undelegate, vote, wait until the referendum is closed, hope that there are no other referenda they would like to vote on, then redelegate. At best it's a temporally extended friction. At worst the voter goes unrepresented in voting for the duration of the vote clearing period.

-

Stakeholders

+

Stakeholders

Runtime developers: If runtime developers are relying on the previous assumptions for their VotingHooks implementations, they will need to rethink their approach. In addition, a runtime migration is needed. Lastly, it is a serious change in governance that requires some consideration beyond the technical.

App developers: Apps like Subsquare and Polkassembly would need to update their user interface logic. They will also need to handle the new error.

Users: We will want users to be aware of the new functionality, though awareness is not required.

Technical Writers: This change will require rewrites of documentation and tutorials.

-

Explanation

+

Explanation

New Data & Runtime Logic

The new logic allows a delegator's vote on a specific poll to override their delegation for that poll only. When a delegator votes, their delegated voting power is temporarily "clawed back" from their delegate for that single referendum. This ensures a delegator's direct vote takes precedence.

The core of the algorithm is as follows:
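A minimal sketch of the claw-back step with simplified types (the real pallet operates on conviction-weighted votes and per-poll tallies, and must handle the case where the delegate has not voted on the poll):

// Simplified tally; the actual pallet tracks conviction-weighted ayes/nays.
struct Tally {
    ayes: u128,
    nays: u128,
}

// Hypothetical helper: when a delegator casts a direct vote on a poll on
// which their delegate has voted, the weight previously counted through the
// delegate is removed for that poll only, then the direct vote is counted.
fn vote_as_delegator(
    tally: &mut Tally,
    delegated_weight: u128,
    delegate_is_aye: bool,
    direct_weight: u128,
    direct_is_aye: bool,
) {
    // Claw back the delegated weight from the delegate's side of the tally.
    if delegate_is_aye {
        tally.ayes -= delegated_weight;
    } else {
        tally.nays -= delegated_weight;
    }
    // Count the delegator's own vote; the delegation itself stays intact.
    if direct_is_aye {
        tally.ayes += direct_weight;
    } else {
        tally.nays += direct_weight;
    }
}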

@@ -5129,318 +1158,28 @@ of this RFC.

A user's locked balance will be the greater of the delegation lock and the voting lock.

Migrations

A multi-block runtime migration is necessary. It would iterate over the VotingFor storage item and convert the old vote data structure to the new structure.

-

Drawbacks

+

Drawbacks

There are two potential drawbacks to this system:

An unbounded rate of change of the voter preferences function

If implemented, there will be no friction in delegating, undelegating, and voting. Therefore, there could be large and immediate shifts in the voter preferences function. In other voting systems we see bounds added to the rate of change (voting cycles, etc). That said, it is unclear whether this is desired or advantageous. Additionally, there are more easily parameterized and analytically tractable ways to handle this than what we currently have. See future directions.

Lessened value in becoming a delegate

If a delegate's voting power can be stripped from them at any point, then there is necessarily a reduction in their power within the system. This provides less incentive to become a delegate. But again, there are more customizable ways to handle this if it proves necessary.

-

Testing, Security, and Privacy

+

Testing, Security, and Privacy

This change would mean a more complicated STF for voting, which would increase the difficulty of hardening, though sufficient unit testing should handle this with ease.

-

Performance, Ergonomics, and Compatibility

-

Performance

+

Performance, Ergonomics, and Compatibility

+

Performance

The proposed changes would increase both the compute and storage requirements by about 2x for all voting functions. No change in complexity.

-

Ergonomics

+

Ergonomics

Voting and delegation will both become more ergonomic for users, as there are no longer hard constraints affecting what you can do and when you can do it.

-

Compatibility

+

Compatibility

Runtime developers will need to add the migration and ensure their hooks still work.

App developers will need to update their user interfaces to accommodate the new functionality. They will need to handle the new error as well.

-

Prior Art and References

+

Prior Art and References

A current implementation can be found here.

-

Unresolved Questions

+

Unresolved Questions

None

- +

It is possible we would like to add a system parameter for the rate of change of the voting/delegation system. This could prevent wild swings in the voter preferences function and motivate/shield delegates by solidifying their positions over some amount of time. However, it's unclear that this would be valuable or even desirable.

-

(source)

-

Table of Contents

- -

RFC-0152: Decentralized Convex-Preference Coretime Market for Polkadot

-
- - - - -
Start Date: 2025-06-30
Description: This RFC proposes a decentralized market mechanism for allocating Coretime on Polkadot, replacing the existing Dutch auction method (RFC17). The proposed model leverages convex preference interactions among agents, eliminating explicit bidding and centralized price determination. This ensures fairness, transparency, and decentralization.
Conflicts-With: RFC-0017
Authors: Diego Correa Tristain algoritmia@labormedia.cl
-
-

Summary

-

This RFC proposes a decentralized market mechanism for allocating Coretime on Polkadot, replacing the existing Dutch auction method (RFC17). The proposed model leverages convex preference interactions among agents, eliminating explicit bidding and centralized price determination. This ensures fairness, transparency, and decentralization.

-

Motivation

-

The current auction-based model (RFC17) presents critical issues:

- -

The decentralized convex-preference model addresses these issues by facilitating asynchronous, equitable, and transparent access before state coordination, and deterministic verifiability during and after protocol consensus.

-

Stakeholders

-

Primary set of stakeholders are:

- -

Explanation

-

Guide-Level Explanation

-

Agents participating in the Coretime market (such as parachains, parathreads, or smart contracts) declare two parameters:

- -

These parameters are recorded transparently on-chain. Transactions between agents are conducted through deterministic convex optimizations, ensuring locally Pareto-optimal exchanges. A global equilibrium price naturally emerges from these local interactions without any centralized authority or external pricing mechanism (Tristain, 2024).

-

Reference-Level Explanation

-

Economic Model

-

Agents' preferences are represented using a Cobb-Douglas utility function:

-

$U_i(x, y) = x^{α_i} y^{1-α_i}$

-

where:

- -
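To illustrate the economics, a sketch in Rust (floating point is used here purely for exposition; as noted under Testing, Security, and Privacy, an on-chain implementation must use deterministic fixed-point arithmetic):

// Cobb-Douglas utility for an agent holding x (Coretime) and y (tokens),
// with scalar preference alpha in (0, 1).
fn cobb_douglas_utility(x: f64, y: f64, alpha: f64) -> f64 {
    x.powf(alpha) * y.powf(1.0 - alpha)
}

// Standard Cobb-Douglas demand: at the optimum an agent spends an `alpha`
// share of wealth w on x and the remainder on y, given prices px and py.
fn demand(alpha: f64, w: f64, px: f64, py: f64) -> (f64, f64) {
    (alpha * w / px, (1.0 - alpha) * w / py)
}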

Mechanism Implementation

-

Implementation involves the following components:

-
1. Preference Declaration: Agents MUST explicitly register their scalar preference (α) and initial asset holdings on-chain.
2. Interaction Module: A dedicated runtime pallet or smart contract SHOULD manage interactions, ensuring Pareto-optimal deterministic outcomes.
3. Convergence Enforcement: Interaction ordering MUST follow a deterministic protocol prioritizing transactions that significantly enhance price convergence, sequencing from higher to lower exchange ratios.
4. On-chain Verifiability: Transaction histories and convergence processes MUST be transparently auditable and verifiable on-chain.
-

Example Flow Diagram

-
Preference & Asset Declaration → Paired-exchange Convex Optimization → Interaction Ordering (High-to-Low Exchange Impact) → Global Price Convergence → On-chain Auditability
-
-

Drawbacks

-

Performance

- -

User Experience

- -

Governance Burden

- -

Testing, Security, and Privacy

-

The implementation of this decentralized convex-preference Coretime market mechanism demands particular care in maintaining determinism, accuracy, and security in all on-chain interactions. Key considerations include:

-

Precision and Determinism in Arithmetic

- -

Security

- -

Privacy

- -

Testing and Recommendations

- -

Performance, Ergonomics, and Compatibility

-

This leads to a more fluid, computation-bound system where efficiency stems from algorithmic design and verification speed, not from externally imposed timing constraints. Compatibility with existing Substrate pallets can be explored through modular implementation.

-

Performance

-

The system's performance depends on the availability of computational resources, not on arbitrary time windows or rounds. Price discovery and convergence are calculated as fast as the system can process the deterministic interaction rules. Pair-wise interactions can be batched and accumulated asynchronously. This enhances real-time responsiveness while removing artificial scheduling constraints.

-

Ergonomics

-

Agents only need to express a simple scalar preference and their token/Coretime holdings, removing cognitive complexity. This lightweight interaction model improves usability, especially for smaller participants.

-

Compatibility

-

The mechanism is fully compatible with asynchronous execution architectures. Because it relies on deterministic local state transitions, it integrates seamlessly with Byzantine fault-tolerant consensus protocols and supports scalable, decentralized implementations.

-

Prior Art and References

-

RFC-1

-

Initial Forum Discussion (superseded) : Invitation to Critically Evaluate Core Time Pricing Model Framework

-

RFC Draft Proposal Preliminary Forum Thread: RFC: Decentralized Convex-Preference Coretime Market for Polkadot Draft

-

"Emergent Properties of Distributed Agents with Two-Stage Convex Zero-Sum Optimal Exchange Network": Tristain, 2024

-

Personally, I want to express special gratitude to Edmundo Beteta for introducing me to Microeconomic Theory and guiding my curiosity at the Faculty of Economics and Administration, Universidad de Chile.

-

Unresolved Questions

- - - -

(source)

-

Table of Contents

- -

RFC-0154: AURA Multi-Slot Collation

-
- - - -
Start Date: 25th of August 2025
Description: Multi-Slot AURA for System Parachains
Authors: bhargavbh, burdges, AlistairStewart
-
-

Summary

-

This RFC proposes a modification to the AURA round-robin block production mechanism for system parachains (e.g. Polkadot Hub). The proposed change increases the number of consecutive block production slots assigned to each collator from the current single-slot allocation to a configurable value, initially set at four. This modification aims to enhance censorship resistance by mitigating data-withholding attacks.

-

Motivation

-

The Polkadot Relay Chain guarantees the safety of parachain blocks, but it does not provide explicit guarantees for liveness or censorship resistance. With the planned migration of core Relay Chain functionalities (such as Balances, Staking, and Governance) to the Polkadot Hub system parachain in early November 2025, it becomes critical to establish a mechanism for achieving censorship resistance on these parachains without compromising throughput. For example, once governance functionality is migrated to Polkadot Hub, malicious collators could systematically censor aye votes for a Relay Chain runtime upgrade, potentially altering the referendum's outcome. This demonstrates that censorship attacks on a system parachain can have a direct and undesirable impact on the security of the Relay Chain. This proposal addresses such censorship vulnerabilities by modifying the AURA block production mechanism utilized by system parachain collators, with minimal honesty assumptions on the collators.

-

Stakeholders

- -

Threat Model

-

This analysis of censorship resistance for AURA-based parachains operates under the following assumptions:

- -

Proposed Changes

-

The current AURA mechanism, which assigns a single block production slot per collator, is vulnerable to data-withholding attacks. A malicious collator can strategically produce a block and then selectively withhold it from subsequent collators. This can prevent honest collators from building their blocks in a timely manner, effectively censoring their block production.

-

Illustrative Attack Scenario:

-

Consider 3 collators A, B, and C assigned to consecutive slots by the AURA mechanism. If A and C conspire to censor collator B, i.e. to prevent B's block from being backed, they can execute the following attack: A produces block $b_A$ and submits it to the backers, but selectively withholds $b_A$ from B. Then C builds on top of $b_A$ and gets its block in before B can recover $b_A$ from the availability layer and build on top of it.

-

Proposed Solution

-

This proposal modifies the AURA round-robin mechanism to assign $x$ consecutive slots to each collator. The specific value of $x$ is contingent upon the asynchronous backing parameters of the system parachain and will be derived using a generic formula provided in this document. The collator selected by AURA will be responsible for producing $x$ consecutive blocks. This modification will require corresponding adjustments to the AURA authorship checks within the PVF (Parachain Validation Function). For the current configuration of Polkadot Hub, $x=4$.

-

Analysis

-

The number of consecutive slots to be assigned to ensure AURA's censorship resistance depends on async backing parameters like unincluded_segment_length. We now describe our approach for deriving $x$ based on the parameters of async backing and other variables like block production time and latency in the availability layer. The relevant values can then be plugged in to obtain $x$ for any system parachain.

-

Clearly, the number of consecutive slots ($x$) in the round-robin is lower-bounded by the time required to reconstruct the previous block from the availability layer ($b$) in addition to the block building time ($a$). Hence, we need to set $x$ such that $x \geq a+b$. But with async backing, a malicious collator can sequentially withhold blocks and just-in-time front-run the honest collator for all the unincluded-segment blocks. Hence, $x \geq (a+b)\cdot m$ is sufficient, where $m$ is the maximum allowed candidate depth (the allowed unincluded segment).

-

Independently, there is a check on the relay chain which filters out parablocks anchoring to very old relay_parents in verify_backed_candidates. Any parablock which is anchored to a relay parent older than the oldest element in allowed_relay_parents gets rejected. Hence, the malicious collator cannot front-run and censor the subsequent collator after this delay, as the parablock is no longer valid. The update of allowed_relay_parents occurs in process_inherent_data, where the buffer length of AllowedRelayParents is set by the scheduler parameter lookahead (set to 3 by default). Therefore, the async backing delay (asyncdelay) tolerated by the relay chain backers is $3 \cdot 6s = 18s$. Hence, the number of consecutive slots is the minimum of the above two values:

-

$$x \geq min((a+b)\cdot m, a + b + asyncdelay)$$

-

where $m$ is the max_candidate_depth (i.e. the unincluded segment as seen from the collator's perspective).

-

Number of consecutive slots for Polkadot Hub

-

Assuming the previous block data can be fetched from the backers, we comfortably have $a+b \leq 6s$, i.e. block building plus reconstruction time is under 6s. Using the current asyncdelay of 18s, it suffices to set $x$ to 4. If the max_candidate_depth ($m$) for Polkadot Hub is set to $m\leq3$, then this will reduce (improve) $x$ from 4 to $m$. Note that a channel would have to be provided for collators to fetch blocks from backers as the preferred option, and only recover from the availability layer as the fail-safe option.
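A worked check of the bound, using the Polkadot Hub numbers from the text (illustrative only; units are seconds, with 6s slots):

// x >= min((a + b) * m, a + b + asyncdelay), rounded up to whole slots.
fn consecutive_slots(a_plus_b_secs: u32, m: u32, async_delay_secs: u32, slot_secs: u32) -> u32 {
    let bound_secs = (a_plus_b_secs * m).min(a_plus_b_secs + async_delay_secs);
    bound_secs.div_ceil(slot_secs) // round up to whole slots
}

fn main() {
    // With an effectively unbounded unincluded segment, the relay-chain
    // allowed_relay_parents check dominates: (6 + 18) / 6 = 4 slots.
    assert_eq!(consecutive_slots(6, 100, 18, 6), 4);
    // With max_candidate_depth m = 3, the first bound dominates:
    // (6 * 3) / 6 = 3 slots, matching the "x reduces to m" remark above.
    assert_eq!(consecutive_slots(6, 3, 18, 6), 3);
}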

-

Performance, Ergonomics, and Compatibility

-

The proposed changes are security-critical and mitigate censorship attacks on core functionality like balances, staking, and governance on Polkadot Hub. This approach is compatible with Slot-Based collation and the currently deployed FixedVelocityConsensusHook. Further analysis is needed to integrate with custom ConsensusHooks that leverage Elastic Scaling.

-

Multi-slot collation, however, is vulnerable to liveness attacks: adversarial collators may not show up, stalling liveness, but they then also lose out on block production rewards. The number of missed blocks due to collators skipping is the same as in the current implementation; only the distribution of missed slots changes (they are chunked together instead of being evenly distributed). Secondly, when the ratio of adversarial (censoring) collators $\alpha$ is high (close to 1), the ratio of uncensored blocks to all blocks produced drops to $(1-\alpha)/(x\alpha)$. For more practical, lower values of $\alpha<1/4$, the ratio of uncensored to all blocks is almost 1.

-

The latency for backing of blocks is affected as follows:

- -

Effective multi-slot collation requires that collators be able to prioritize transactions that have been targeted for censorship. The implementation should incorporate a framework for priority transactions (e.g., governance votes, election extrinsics) to ensure that such transactions are included in the uncensored blocks.

-

Prior Art and References

-

This RFC is related to RFC-7, which details the selection mechanism for System Parachain collators. In general, a more robust collator selection mechanism that reduces the proportion of malicious actors would directly benefit the effectiveness of the ideas presented in this RFC.

-

Future Directions

-

A resilient mechanism is needed for prioritising transactions in block production for collators that are actively targeted for censorship. There are two potential approaches:

-

(source)

Table of Contents

-

Stakeholders

+

Stakeholders

-

Explanation

+

Explanation

pUSD is implemented using the Honzon protocol stack used to power aUSD, adapted for DOT-only collateral on Asset Hub.

Protocol Overview

The Honzon protocol functions as a lending system where users can:

@@ -5600,13 +1339,13 @@ This approach is compatible with the Slot-Based collation and the currently depl

Emergency Shutdown

As a last resort, an emergency shutdown can be performed by the Fellowship to halt minting/liquidation and allow equitable settlement: lock oracle prices, cancel auctions, and let users settle pUSD against collateral at the locked rates. Savings deposits remain redeemable 1:1 for pUSD at the last savings index; interest accrual stops at shutdown.

-

Drawbacks

+

Drawbacks

-

Testing, Security, and Privacy

+

Testing, Security, and Privacy

Testing requirements

-

Performance, Ergonomics, and Compatibility

-

Performance

+

Performance, Ergonomics, and Compatibility

+

Performance

This proposal introduces necessary computational overhead to Asset Hub for CDP management, liquidation monitoring, and Savings accounting. The impact is minimized through:

-

Ergonomics

+

Ergonomics

The proposal optimizes for several key usage patterns:

-

Compatibility

+

Compatibility

-

Prior Art and References

+

Prior Art and References

The implementation follows the Honzon protocol pioneered by Acala for their aUSD stablecoin system. Key references include:

-

Unresolved Questions

+

Unresolved Questions

- +

Smart-Contract Liquidation Participation

Future versions of the system will allow smart contracts to register as liquidation participants, enabling:

-

Prior Art and References

+

Prior Art and References

Robert Habermeier initially wrote on the subject of a blockspace-centric Polkadot in the article Polkadot Blockspace over Blockchains. While not going into details, the article served as an early reframing piece for moving beyond one-slot-per-chain models and building out secondary market infrastructure for resource allocation.

(source)

Table of Contents

@@ -6477,12 +2064,12 @@
InstaPoolHistory: (empty)
Authors: Gavin Wood, Robert Habermeier

Summary

+

Summary

In the Agile Coretime model of the Polkadot Ubiquitous Computer, as proposed in RFC-1 and RFC-3, it is necessary for the allocating parachain (envisioned to be one or more pallets on a specialised Brokerage System Chain) to communicate the core assignments to the Relay-chain, which is responsible for ensuring those assignments are properly enacted.

This is a proposal for the interface which will exist around the Relay-chain in order to communicate this information and instructions.

-

Motivation

+

Motivation

The background motivation for this interface is splitting out coretime allocation functions and secondary markets from the Relay-chain onto System parachains. A well-understood and general interface is necessary for ensuring the Relay-chain receives coretime allocation instructions from one or more System chains without introducing dependencies on the implementation details of either side.

-

Requirements

+

Requirements

-

Stakeholders

+

Stakeholders

Primary stakeholder sets are:

Socialization:

The content of this RFC was discussed in the Polkadot Fellows channel.

-

Explanation

+

Explanation

The interface has two sections: The messages which the Relay-chain is able to receive from the allocating parachain (the UMP message types), and messages which the Relay-chain is able to send to the allocating parachain (the DMP message types). These messages are expected to be able to be implemented in a well-known pallet and called with the XCM Transact instruction.

Future work may include these messages being introduced into the XCM standard.

UMP Message Types

@@ -6575,17 +2162,17 @@ assert_eq!(targets.iter().map(|x| x.1).sum(), 57600);

Realistic Limits of the Usage

For request_revenue_info, a successful request should be possible if when is no less than the Relay-chain block number on arrival of the message less 100,000.

For assign_core, a successful request should be possible if begin is no less than the Relay-chain block number on arrival of the message plus 10 and workload contains no more than 100 items.

Performance, Ergonomics and Compatibility

No specific considerations.

Testing, Security and Privacy

Standard Polkadot testing and security auditing applies.

The proposal introduces no new privacy concerns.

Future Directions and Related Material

RFC-1 proposes a means of determining allocation of Coretime using this interface.

RFC-3 proposes a means of implementing the high-level allocations within the Relay-chain.

Drawbacks, Alternatives and Unknowns

None at present.

Prior Art and References

None.

(source)

Authors: Joe Petrowski

Summary

As core functionality moves from the Relay Chain into system chains, reliance on the liveness of these chains for the use of the network increases. It is not economically scalable, nor necessary from a game-theoretic perspective, to pay collators large rewards. This RFC proposes a mechanism -- part technical and part social -- for ensuring reliable collator sets that are resilient to attempts to stop any subsystem of the Polkadot protocol.

Motivation

In order to guarantee access to Polkadot's system, the collators on its system chains must propose blocks (provide liveness) and allow all transactions to eventually be included. That is, some collators may censor transactions, but there must exist one collator in the set who will include a given transaction. The set must also be resilient to coordinated attempts to halt a single chain or to censor a particular class of transactions.

In the case that users do not trust this set, this RFC also proposes that each chain always have available collator positions that can be acquired by anyone by placing a bond.

Requirements

Stakeholders

Explanation

This protocol builds on the existing Collator Selection pallet and its notion of Invulnerables. Invulnerables are collators (identified by their AccountIds) who are always selected into the collator set. Under this proposal, each chain's collator set would be, approximately:

  • of which 15 are Invulnerable, and
  • five are elected by bond.
Drawbacks

    The primary drawback is a reliance on governance for continued treasury funding of infrastructure costs for Invulnerable collators.

Testing, Security, and Privacy

The vast majority of cases can be covered by unit testing. Integration tests should ensure that the Collator Selection UpdateOrigin, which has permission to modify the Invulnerables and desired number of Candidates, can handle updates over XCM from the system's governance location.

Performance, Ergonomics, and Compatibility

    This proposal has very little impact on most users of Polkadot, and should improve the performance of system chains by reducing the number of missed blocks.

Performance

    As chains have strict PoV size limits, care must be taken in the PoV impact of the session manager. Appropriate benchmarking and tests should ensure that conservative limits are placed on the number of Invulnerables and Candidates.

Ergonomics

    The primary group affected is Candidate collators, who, after implementation of this RFC, will need to compete in a bond-based election rather than a race to claim a Candidate spot.

Compatibility

    This RFC is compatible with the existing implementation and can be handled via upgrades and migration.

Prior Art and References

    Written Discussions

Unresolved Questions

    None at this time.

Future Directions and Related Material

    There may exist in the future system chains for which this model of collator selection is not appropriate. These chains should be evaluated on a case-by-case basis.

    (source)

Authors: Pierre Krieger

Summary

    The full nodes of the Polkadot peer-to-peer network maintain a distributed hash table (DHT), which is currently used for full nodes discovery and validators discovery purposes.

    This RFC proposes to extend this DHT to be used to discover full nodes of the parachains of Polkadot.

Motivation

    The maintenance of bootnodes has long been an annoyance for everyone.

When a bootnode is newly-deployed or removed, every chain specification must be updated in order to take the update into account. This has led to various non-optimal solutions, such as pulling chain specifications from GitHub repositories. When it comes to RPC nodes, UX developers often have trouble finding up-to-date addresses of parachain RPC nodes. With the ongoing migration from RPC nodes to light clients, similar problems would happen with chain specifications as well.


    Because the list of bootnodes in chain specifications is so annoying to modify, the consequence is that the number of bootnodes is rather low (typically between 2 and 15). In order to better resist downtimes and DoS attacks, a better solution would be to use every node of a certain chain as potential bootnode, rather than special-casing some specific nodes.

While this RFC doesn't solve these problems for relay chains, it aims to solve them for parachains by storing the list of all the full nodes of a parachain on the relay chain DHT.

    Assuming that this RFC is implemented, and that light clients are used, deploying a parachain wouldn't require more work than registering it onto the relay chain and starting the collators. There wouldn't be any need for special infrastructure nodes anymore.

Stakeholders

    This RFC has been opened on my own initiative because I think that this is a good technical solution to a usability problem that many people are encountering and that they don't realize can be solved.

Explanation

The content of this RFC only applies to parachains and parachain nodes that are "Substrate-compatible". It is in no way mandatory for parachains to comply with this RFC.

    Note that "Substrate-compatible" is very loosely defined as "implements the same mechanisms and networking protocols as Substrate". The author of this RFC believes that "Substrate-compatible" should be very precisely specified, but there is controversy on this topic.

    While a lot of this RFC concerns the implementation of parachain nodes, it makes use of the resources of the Polkadot chain, and as such it is important to describe them in the Polkadot specification.


    The maximum size of a response is set to an arbitrary 16kiB. The responding side should make sure to conform to this limit. Given that fork_id is typically very small and that the only variable-length field is addrs, this is easily achieved by limiting the number of addresses.

    Implementers should be aware that addrs might be very large, and are encouraged to limit the number of addrs to an implementation-defined value.
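A sketch of one responder-side strategy implied above: drop trailing addresses until the encoded response fits within the limit. The encoding overhead is abstracted away here; only the truncation idea follows from the text.

const MAX_RESPONSE_SIZE: usize = 16 * 1024;

fn truncate_addrs(encoded_addrs: Vec<Vec<u8>>, fixed_overhead: usize) -> Vec<Vec<u8>> {
    let mut budget = MAX_RESPONSE_SIZE.saturating_sub(fixed_overhead);
    let mut kept = Vec::new();
    for addr in encoded_addrs {
        if addr.len() > budget {
            break; // dropping the remaining addresses keeps the response conformant
        }
        budget -= addr.len();
        kept.push(addr);
    }
    kept
}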

Drawbacks

The peer_id and addrs fields are in theory not strictly needed, as the PeerId and addresses could always be equal to the PeerId and addresses of the node being registered as the provider and serving the response. However, the Cumulus implementation currently uses two different networking stacks, one for the parachain and one for the relay chain, using two separate PeerIds and addresses, and as such the PeerId and addresses of the other networking stack must be indicated. Asking them to use only one networking stack wouldn't be feasible in a realistic time frame.

    The values of the genesis_hash and fork_id fields cannot be verified by the requester and are expected to be unused at the moment. Instead, a client that desires connecting to a parachain is expected to obtain the genesis hash and fork ID of the parachain from the parachain chain specification. These fields are included in the networking protocol nonetheless in case an acceptable solution is found in the future, and in order to allow use cases such as discovering parachains in a not-strictly-trusted way.

Testing, Security, and Privacy

    Because not all nodes want to be used as bootnodes, implementers are encouraged to provide a way to disable this mechanism. However, it is very much encouraged to leave this mechanism on by default for all parachain nodes.

    This mechanism doesn't add or remove any security by itself, as it relies on existing mechanisms. However, if the principle of chain specification bootnodes is entirely replaced with the mechanism described in this RFC (which is the objective), then it becomes important whether the mechanism in this RFC can be abused in order to make a parachain unreachable.

Furthermore, when a large number of providers (here, a provider is a bootnode) are registered, only the 20 providers whose identities are closest to the key are tracked by the DHT.

    For this reason, an attacker can abuse this mechanism by randomly generating libp2p PeerIds until they find the 20 entries closest to the key representing the target parachain. They are then in control of the parachain bootnodes. Because the key changes periodically and isn't predictable, and assuming that the Polkadot DHT is sufficiently large, it is not realistic for an attack like this to be maintained in the long term.

    Furthermore, parachain clients are expected to cache a list of known good nodes on their disk. If the mechanism described in this RFC went down, it would only prevent new nodes from accessing the parachain, while clients that have connected before would not be affected.

Performance, Ergonomics, and Compatibility

Performance

    The DHT mechanism generally has a low overhead, especially given that publishing providers is done only every 24 hours.

    Doing a Kademlia iterative query then sending a provider record shouldn't take more than around 50 kiB in total of bandwidth for the parachain bootnode.

    Assuming 1000 parachain full nodes, the 20 Polkadot full nodes corresponding to a specific parachain will each receive a sudden spike of a few megabytes of networking traffic when the key rotates. Again, this is relatively negligible. If this becomes a problem, one can add a random delay before a parachain full node registers itself to be the provider of the key corresponding to BabeApi_next_epoch.

Maybe the biggest uncertainty is the traffic that the 20 Polkadot full nodes will receive from light clients that desire knowing the bootnodes of a parachain. Light clients are generally encouraged to cache the peers that they use between restarts, so they should only query these 20 Polkadot full nodes at their first initialization. If this ever becomes a problem, this value of 20 is an arbitrary constant that can be increased for more redundancy.

Ergonomics

    Irrelevant.

Compatibility

    Irrelevant.

Prior Art and References

    None.

Unresolved Questions

    While it fundamentally doesn't change much to this RFC, using BabeApi_currentEpoch and BabeApi_nextEpoch might be inappropriate. I'm not familiar enough with good practices within the runtime to have an opinion here. Should it be an entirely new pallet?

Future Directions and Related Material

    It is possible that in the future a client could connect to a parachain without having to rely on a trusted parachain specification.

    (source)

Authors: Pierre Krieger

Summary

    Improve the networking messages that query storage items from the remote, in order to reduce the bandwidth usage and number of round trips of light clients.

Motivation

    Clients on the Polkadot peer-to-peer network can be divided into two categories: full nodes and light clients. So-called full nodes are nodes that store the content of the chain locally on their disk, while light clients are nodes that don't. In order to access for example the balance of an account, a full node can do a disk read, while a light client needs to send a network message to a full node and wait for the full node to reply with the desired value. This reply is in the form of a Merkle proof, which makes it possible for the light client to verify the exactness of the value.

    Unfortunately, this network protocol is suffering from some issues:

Once Polkadot and Kusama have transitioned to state_version = 1, which modifies the format of the trie entries, it will be possible to generate Merkle proofs that contain only the hashes of values in the storage. Thanks to this, it is already possible to prove the existence of a key without sending its entire value (only its hash), or to prove that a value has changed or not between two blocks (by sending just their hashes). Thus, the only reason why the aforementioned issues exist is that the existing networking messages don't give the querier the possibility to query this. This is what this proposal aims at fixing.

Stakeholders

    This is the continuation of https://github.com/w3f/PPPs/pull/10, which itself is the continuation of https://github.com/w3f/PPPs/pull/5.

Explanation

    The protobuf schema of the networking protocol can be found here: https://github.com/paritytech/substrate/blob/5b6519a7ff4a2d3cc424d78bc4830688f3b184c0/client/network/light/src/schema/light.v1.proto

    The proposal is to modify this protocol in this way:

    @@ -11,6 +11,7 @@ message Request {
An alternative could have been to specify the child_trie_info for each key individually.
     Also note that child tries aren't considered as descendants of the main trie when it comes to the includeDescendants flag. In other words, if the request concerns the main trie, no content coming from child tries is ever sent back.

    This protocol keeps the same maximum response size limit as currently exists (16 MiB). It is not possible for the querier to know in advance whether its query will lead to a reply that exceeds the maximum size. If the reply is too large, the replier should send back only a limited number (but at least one) of requested items in the proof. The querier should then send additional requests for the rest of the items. A response containing none of the requested items is invalid.

    The server is allowed to silently discard some keys of the request if it judges that the number of requested keys is too high. This is in line with the fact that the server might truncate the response.
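On the querier side, the retry rule above amounts to a simple loop; `fetch_proof` is a stand-in for the real networking call and is assumed to return the subset of keys actually answered:

fn fetch_all(mut pending: Vec<Vec<u8>>, fetch_proof: impl Fn(&[Vec<u8>]) -> Vec<Vec<u8>>) -> Vec<Vec<u8>> {
    let mut answered = Vec::new();
    while !pending.is_empty() {
        let got = fetch_proof(&pending);
        // A response containing none of the requested items is invalid.
        assert!(!got.is_empty(), "invalid reply: no requested item answered");
        pending.retain(|k| !got.contains(k));
        answered.extend(got);
    }
    answered
}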

Drawbacks

    This proposal doesn't handle one specific situation: what if a proof containing a single specific item would exceed the response size limit? For example, if the response size limit was 1 MiB, querying the runtime code (which is typically 1.0 to 1.5 MiB) would be impossible as it's impossible to generate a proof less than 1 MiB. The response size limit is currently 16 MiB, meaning that no single storage item must exceed 16 MiB.

    Unfortunately, because it's impossible to verify a Merkle proof before having received it entirely, parsing the proof in a streaming way is also not possible.

    A way to solve this issue would be to Merkle-ize large storage items, so that a proof could include only a portion of a large storage item. Since this would require a change to the trie format, it is not realistically feasible in a short time frame.

Testing, Security, and Privacy

    The main security consideration concerns the size of replies and the resources necessary to generate them. It is for example easily possible to ask for all keys and values of the chain, which would take a very long time to generate. Since responses to this networking protocol have a maximum size, the replier should truncate proofs that would lead to the response being too large. Note that it is already possible to send a query that would lead to a very large reply with the existing network protocol. The only thing that this proposal changes is that it would make it less complicated to perform such an attack.

Implementers of the replier side should be careful to detect early on when a reply would exceed the maximum reply size, rather than unconditionally generating a reply, as this could take a very large amount of CPU, disk I/O, and memory. Existing implementations might currently be accidentally protected from such an attack thanks to the fact that requests have a maximum size, and thus that the list of keys in the query was bounded. After this proposal, this accidental protection would no longer exist.

    Malicious server nodes might truncate Merkle proofs even when they don't strictly need to, and it is not possible for the client to (easily) detect this situation. However, malicious server nodes can already do undesirable things such as throttle down their upload bandwidth or simply not respond. There is no need to handle unnecessarily truncated Merkle proofs any differently than a server simply not answering the request.

Performance, Ergonomics, and Compatibility

Performance

    It is unclear to the author of the RFC what the performance implications are. Servers are supposed to have limits to the amount of resources they use to respond to requests, and as such the worst that can happen is that light client requests become a bit slower than they currently are.

Ergonomics

    Irrelevant.

Compatibility

The prior networking protocol is maintained for now. The older version of this protocol could be removed in the distant future.

Prior Art and References

    None. This RFC is a clean-up of an existing mechanism.

Unresolved Questions

    None

Future Directions and Related Material

    The current networking protocol could be deprecated in a long time. Additionally, the current "state requests" protocol (used for warp syncing) could also be deprecated in favor of this one.

    (source)

Authors: Jonas Gehrlein

Summary

The Polkadot UC will generate revenue from the sale of available Coretime. The question then arises: how should we handle these revenues? Broadly, there are two reasonable paths – burning the revenue and thereby removing it from total issuance, or diverting it to the Treasury. This Request for Comment (RFC) presents arguments favoring burning as the preferred mechanism for handling revenues from Coretime sales.

Motivation

How to handle the revenue accrued from Coretime sales is an important economic question that influences the value of DOT and should be properly discussed before deciding on either option. Now is the best time to start this discussion.

Stakeholders

    Polkadot DOT token holders.

Explanation

This RFC discusses the potential benefits of burning the revenue accrued from Coretime sales instead of diverting it to the Treasury. The arguments for doing so follow.

    It's in the interest of the Polkadot community to have a consistent and predictable Treasury income, because volatility in the inflow can be damaging, especially in situations when it is insufficient. As such, this RFC operates under the presumption of a steady and sustainable Treasury income flow, which is crucial for the Polkadot community's stability. The assurance of a predictable Treasury income, as outlined in a prior discussion here, or through other equally effective measures, serves as a baseline assumption for this argument.

    Consequently, we need not concern ourselves with this particular issue here. This naturally begs the question - why should we introduce additional volatility to the Treasury by aligning it with the variable Coretime sales? It's worth noting that Coretime revenues often exhibit an inverse relationship with periods when Treasury spending should ideally be ramped up. During periods of low Coretime utilization (indicated by lower revenue), Treasury should spend more on projects and endeavours to increase the demand for Coretime. This pattern underscores that Coretime sales, by their very nature, are an inconsistent and unpredictable source of funding for the Treasury. Given the importance of maintaining a steady and predictable inflow, it's unnecessary to rely on another volatile mechanism. Some might argue that we could have both: a steady inflow (from inflation) and some added bonus from Coretime sales, but burning the revenue would offer further benefits as described below.

Authors: Joe Petrowski

Summary

    Since the introduction of the Collectives parachain, many groups have expressed interest in forming new -- or migrating existing groups into -- on-chain collectives. While adding a new collective is relatively simple from a technical standpoint, the Fellowship will need to merge new pallets into the Collectives parachain for each new collective. This RFC proposes a means for the network to ratify a new collective, thus instructing the Fellowship to instate it in the runtime.

Motivation

    Many groups have expressed interest in representing collectives on-chain. Some of these include:

Prior Art and References

    This RFC builds extensively on the available ideas put forward in RFC-1.

    Additionally, I want to express a special thanks to Samuel Haefner, Shahar Dobzinski, and Alistair Stewart for fruitful discussions and helping me structure my thoughts.

    (source)

Authors: @brenzi for Encointer Association, 8000 Zurich, Switzerland

Summary

    Encointer is a system chain on Kusama since Jan 2022 and has been developed and maintained by the Encointer association. This RFC proposes to treat Encointer like any other system chain and include it in the fellowship repo with this PR.

Motivation

    Encointer does not seek to be in control of its runtime repository. As a decentralized system, the fellowship has a more suitable structure to maintain a system chain runtime repo than the Encointer association does.

    Also, Encointer aims to update its runtime in batches with other system chains in order to have consistency for interoperability across system chains.

Stakeholders

    • Fellowship: Will continue to take upon them the review and auditing work for the Encointer runtime, but the process is streamlined with other system chains and therefore less time-consuming compared to the separate repo and CI process we currently have.
    • Kusama Network: Tokenholders can easily see the changes of all system chains in one place.
• Encointer Association: Further decentralization of Encointer Network necessities, such as devops.
    • Encointer devs: Being able to work directly in the Fellowship runtimes repo to streamline and synergize with other developers.
Explanation

    Our PR has all details about our runtime and how we would move it into the fellowship repo.

    Noteworthy: All Encointer-specific pallets will still be located in encointer's repo for the time being: https://github.com/encointer/pallets

It will still be the duty of the Encointer team to keep its runtime up to date and provide adequate test fixtures. Frequent dependency bumps with Polkadot releases would be beneficial for interoperability and could be streamlined with other system chains, but that will not be a duty of the fellowship. Whenever possible, all system chains could be upgraded jointly (including Encointer) with a batch referendum.

• Encointer will publish all its crates on crates.io
  • Encointer does not carry out external auditing of its runtime nor pallets. It would be beneficial but not a requirement from our side if Encointer could join the auditing process of other system chains.
Drawbacks

Unlike all other system chains, development and maintenance of the Encointer Network is mainly financed by the KSM Treasury and possibly the DOT Treasury in the future. Encointer is dedicated to maintaining its network and runtime code for as long as possible, but there is a dependency on funding which is not in the hands of the fellowship. The only risk in the context of funding, however, is that the Encointer runtime will see less frequent updates if there's less funding.

Testing, Security, and Privacy

    No changes to the existing system are proposed. Only changes to how maintenance is organized.

Performance, Ergonomics, and Compatibility

    No changes

Prior Art and References

    Existing Encointer runtime repo

Unresolved Questions

    None identified

Future Directions and Related Material

    More info on Encointer: encointer.org

    (source)

Authors: Joe Petrowski, Gavin Wood

Summary

    The Relay Chain contains most of the core logic for the Polkadot network. While this was necessary prior to the launch of parachains and development of XCM, most of this logic can exist in parachains. This is a proposal to migrate several subsystems into system parachains.

Motivation

Polkadot's scaling approach allows many distinct state machines (known generally as parachains) to operate with common guarantees about the validity and security of their state transitions. Polkadot provides these common guarantees by executing the state transitions on a strict subset (a backing group) of the validator set, which lets it offer its primary product (secure blockspace) to the network.

    By minimising state transition logic on the Relay Chain by migrating it into "system chains" -- a set of parachains that, with the Relay Chain, make up the Polkadot protocol -- the Polkadot Ubiquitous Computer can maximise its primary offering: secure blockspace.

Stakeholders

    • Parachains that interact with affected logic on the Relay Chain;
    • Core protocol and XCM format developers;
    • Tooling, block explorer, and UI developers.
Explanation

    The following pallets and subsystems are good candidates to migrate from the Relay Chain:

    • Identity
Given the criticality of these subsystems, it would be sensible to rehearse a migration on Kusama first.

      Staking is the subsystem most constrained by PoV limits. Ensuring that elections, payouts, session changes, offences/slashes, etc. work in a parachain on Kusama -- with its larger validator set -- will give confidence to the chain's robustness on Polkadot.

Drawbacks

These subsystems will have fewer resources on cores than they had on the Relay Chain. Staking in particular may require some optimizations to deal with constraints.

Testing, Security, and Privacy

Standard audit/review requirements apply. More powerful multi-chain integration test tools would be useful in development.

Performance, Ergonomics, and Compatibility

      Describe the impact of the proposal on the exposed functionality of Polkadot.

Performance

      This is an optimization. The removal of public/user transactions on the Relay Chain ensures that its primary resources are allocated to system performance.

Ergonomics

      This proposal alters very little for coretime users (e.g. parachain developers). Application developers will need to interact with multiple chains, making ergonomic light client tools particularly important for application development.

Existing parachains that interact with these subsystems will need to configure their runtimes to recognize the new locations in the network.

Compatibility

      Implementing this proposal will require some changes to pallet APIs and/or a pub-sub protocol. Application developers will need to interact with multiple chains in the network.

Prior Art and References

Unresolved Questions

      There remain some implementation questions, like how to use balances for both Staking and Governance. See, for example, Moving Staking off the Relay Chain.

Future Directions and Related Material

      Ideally the Relay Chain becomes transactionless, such that not even balances are represented there. With Staking and Governance off the Relay Chain, this is not an unreasonable next step.

      With Identity on Polkadot, Kusama may opt to drop its People Chain.

Authors: Vedhavyas Singareddi

Summary

At the moment, we have a state_version field on RuntimeVersion that derives which state version is used for the Storage. We have a use case where we want the extrinsics root to be derived using StateVersion::V1. Without defining a new field under RuntimeVersion, we would like to propose adding system_version, which can be used to derive both the storage and extrinsic state version.

Motivation

Since the extrinsic state version is always StateVersion::V0, deriving the extrinsic root requires full extrinsic data. This would be problematic when we need to verify the extrinsics root if the extrinsics are large. This problem is further explored in https://github.com/polkadot-fellows/RFCs/issues/19

One of the main challenges here is that some extrinsics could be big enough that it is not feasible for them to be included in the consensus block due to the block's weight restriction. If the extrinsic root is derived using StateVersion::V1, then we do not need to pass the full extrinsic data but rather, at maximum, 32 bytes of extrinsic data.

Stakeholders

      • Technical Fellowship, in its role of maintaining system runtimes.
Explanation

In order to use a project-specific StateVersion for extrinsic roots, we proposed an implementation that introduced a parameter to frame_system::Config, but that unfortunately did not feel correct. Instead, RuntimeVersion's state_version is replaced with system_version, set for example as:

pub const VERSION: RuntimeVersion = RuntimeVersion {
    // ...
    system_version: 1,
};
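For illustration only, the intended effect could be expressed as a single derivation point. The concrete mapping from system_version values to extrinsic-root behavior is the RFC's design decision, not this sketch's:

enum StateVersion { V0, V1 }

fn extrinsics_root_state_version(system_version: u8) -> StateVersion {
    // Assumption for the sketch: a system_version >= 1 also derives the
    // extrinsics root with StateVersion::V1.
    if system_version >= 1 { StateVersion::V1 } else { StateVersion::V0 }
}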

Drawbacks

There should be no drawbacks, as it would replace state_version with the same behavior, but documentation should be updated so that chains know which system_version to use.

Testing, Security, and Privacy

    AFAIK, should not have any impact on the security or privacy.

Performance, Ergonomics, and Compatibility

These changes should be compatible for existing chains if they use their state_version value for system_version.

Performance

    I do not believe there is any performance hit with this change.

Ergonomics

This does not break any exposed APIs.

Compatibility

    This change should not break any compatibility.

Prior Art and References

    We proposed introducing a similar change by introducing a parameter to frame_system::Config but did not feel that is the correct way of introducing this change.

Unresolved Questions

    I do not have any specific questions about this change at the moment.

Future Directions and Related Material

    IMO, this change is pretty self-contained and there won't be any future work necessary.

    (source)

Authors: Sebastian Kunert

Summary

    This RFC proposes a new host function for parachains, storage_proof_size. It shall provide the size of the currently recorded storage proof to the runtime. Runtime authors can use the proof size to improve block utilization by retroactively reclaiming unused storage weight.
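A sketch of how a runtime might use the proposed host function for reclaim: record the proof size before and after applying an extrinsic, then refund the benchmarked worst case against what was actually consumed. Only storage_proof_size itself comes from this RFC; the surrounding names are illustrative.

fn reclaim_unused(benchmarked: u64, proof_size_before: u64, proof_size_after: u64) -> u64 {
    let consumed = proof_size_after.saturating_sub(proof_size_before);
    // Unused proof-size weight that can be handed back to the block.
    benchmarked.saturating_sub(consumed)
}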

Motivation

    The number of extrinsics that are included in a parachain block is limited by two constraints: execution time and proof size. FRAME weights cover both concepts, and block-builders use them to decide how many extrinsics to include in a block. However, these weights are calculated ahead of time by benchmarking on a machine with reference hardware. The execution-time properties of the state-trie and its storage items are unknown at benchmarking time. Therefore, we make some assumptions about the state-trie:

Transact Over Bridge

Drawbacks

    In terms of ergonomics and user experience, this support for combining an asset transfer with a subsequent action (like Transact) is a net positive.

In terms of performance and privacy, this is neutral with no changes.

    In terms of security, the feature by itself is also neutral because it allows preserve_origin: false usage for operating with no extra trust assumptions. When wanting to support preserving origin, chains need to configure secure origin aliasing filters. The one suggested in this RFC should be the right choice for the majority of chains, but each chain will ultimately choose depending on their business model and logic (e.g. chain does not plan to integrate with Asset Hub). It is up to the individual chains to configure accordingly.

Testing, Security, and Privacy

    Barriers should now allow AliasOrigin, DescendOrigin or ClearOrigin.

    Normally, XCM program builders should audit their programs and eliminate assumptions of "no origin" on remote side of this instruction. In this case, the InitiateAssetsTransfer has not been released yet, it will be part of XCMv5, and we can make this change part of the same XCMv5 so that there isn't even the possibility of someone in the wild having built XCM programs using this instruction on those wrong assumptions.

    The working assumption going forward is that the origin on the remote side can either be cleared or it can be the local origin's reanchored location. This assumption is in line with the current behavior of remote XCM programs sent over using pallet_xcm::send.

    The existing DepositReserveAsset, InitiateReserveWithdraw and InitiateTeleport cross chain asset transfer instructions will not attempt to do origin aliasing and will always clear origin same as before for compatibility reasons.

Performance, Ergonomics, and Compatibility

Performance

    No impact.

Ergonomics

    Improves ergonomics by allowing the local origin to operate on the remote chain even when the XCM program includes an asset transfer.

Compatibility

    At the executor-level this change is backwards and forwards compatible. Both types of programs can be executed on new and old versions of XCM with no changes in behavior.

The new version of the InitiateAssetsTransfer instruction acts the same as before when used with preserve_origin: false.

    For using the new capabilities, the XCM builder has to verify that the involved chains have the required origin-aliasing filters configured and use some new version of Barriers aware of AliasOrigin as an allowed alternative to ClearOrigin.

    For compatibility reasons, this RFC proposes this mechanism be added as an enhancement to the yet unreleased InitiateAssetsTransfer instruction, thus eliminating possibilities of XCM logic breakages in the wild. Following the same logic, the existing DepositReserveAsset, InitiateReserveWithdraw and InitiateTeleport cross chain asset transfer instructions will not attempt to do origin aliasing and will always clear the origin same as before for compatibility reasons.

Any one of the DepositReserveAsset, InitiateReserveWithdraw and InitiateTeleport instructions can be replaced with an InitiateAssetsTransfer instruction with or without origin aliasing, thus providing a clean and clear upgrade path for opting in to this new feature.

Prior Art and References

Unresolved Questions

    None


    (source)

Stakeholders

    • Runtime Developers
    • Tools/UI Developers
Explanation

    The core idea of PVQ is to have a unified interface that meets the aforementioned requirements.

On the runtime side, an extension-based system is introduced to serve as a standardization layer across different chains. Each extension specification defines a set of cohesive APIs. Failures are reported through a PvqError type, whose variants include:

  • ExceedsMaxMessageSize
  • Transport
Drawbacks

    Performance issues

    • PVQ Program Size: The size of a complicated PVQ program may be too large to be suitable for efficient storage and transmission via XCMP/HRMP.
Testing, Security, and Privacy

    • Testing:


Performance, Ergonomics, and Compatibility

Performance

    As a newly introduced feature, PVQ operates independently and does not impact or degrade the performance of existing runtime implementations.

Ergonomics

    From the perspective of off-chain tooling, this proposal streamlines development by unifying multiple chain-specific RuntimeAPIs under a single consistent interface. This significantly benefits wallet and dApp developers by eliminating the need to handle individual implementations for similar operations across different chains. The proposal also enhances development flexibility by allowing custom computations to be modularly encapsulated as PolkaVM programs that interact with the exposed APIs.

Compatibility

    For RuntimeAPI integration, the proposal defines new APIs, which do not break compatibility with existing interfaces. For XCM Integration, the proposal does not modify the existing XCM message format, which is backwards compatible.

Prior Art and References

    There are several discussions related to the proposal, including:

    • Original discussion about having a mechanism to avoid code duplications between the runtime and front-ends/wallets. In the original design, the custom computations are compiled as a wasm function.
    • View functions aims to provide view-only functions at the pallet level. Additionally, Facade Project aims to gather and return commonly wanted information in runtime level. PVQ does not conflict with them, and it can take advantage of these Pallet View Functions / Runtime APIs and allow people to build arbitrary PVQ programs to obtain more custom/complex data that is not otherwise expressed by these two proposals.
Unresolved Questions

    • The specific conversion between gas and weight has not been finalized and will likely require development of a suitable benchmarking methodology.
Future Directions and Related Material

    Once PVQ and the aforementioned Facade Project are ready, there are opportunities to consolidate overlapping functionality between the two systems. For example, the metadata APIs could potentially be unified to provide a more cohesive interface for runtime information. This would help reduce duplication and improve maintainability while preserving the distinct benefits of each approach.

    (source)

Authors: s0me0ne-unkn0wn (13WGadgNgqSjiGQvfhimw9pX26mvGdYQ6XgrjPANSEDRoGMt)

Summary

    This RFC proposes a change that makes it possible to identify types of compressed blobs stored on-chain, as well as used off-chain, without the need for decompression.

Motivation

    Currently, a compressed blob does not give any idea of what's inside because the only thing that can be inside, according to the spec, is Wasm. In reality, other blob types are already being used, and more are to come. Apart from being error-prone by itself, the current approach does not allow to properly route the blob through the execution paths before its decompression, which will result in suboptimal implementations when more blob types are used. Thus, it is necessary to introduce a mechanism allowing to identify the blob type without decompressing it.

    This proposal is intended to support future work enabling Polkadot to execute PolkaVM and, more generally, other-than-Wasm parachain runtimes, and allow developers to introduce arbitrary compression methods seamlessly in the future.

Stakeholders

    Node developers are the main stakeholders for this proposal. It also creates a foundation on which parachain runtime developers will build.

Explanation

Overview

    The current approach to compressing binary blobs involves using zstd compression, and the resulting compressed blob is prefixed with a unique 64-bit magic value specified in that subsection. The same procedure is used to compress both Wasm code blobs and proofs-of-validity. Currently, having solely a compressed blob, it's impossible to tell what's inside it without decompression, a Wasm blob, or a PoV. That doesn't cause problems in the current protocol, as Wasm blobs and PoV blobs take completely different execution paths in the code.

    The changes proposed below are intended to define the means for distinguishing compressed blob types in a backward-compatible and future-proof way.

    It is proposed to introduce an open list of 64-bit prefixes, each representing a compressed blob of a specific type compressed with a specific compression method. The currently used prefix becomes deprecated and will be removed or reused when it is no longer in use.
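A sketch of the resulting dispatch, with placeholder byte values — the real prefix constants are defined by the open list this RFC introduces, and only the name CBLOB_ZSTD_LEGACY comes from the text:

fn blob_prefix(blob: &[u8]) -> Option<[u8; 8]> {
    blob.get(..8)?.try_into().ok()
}

// CBLOB_ZSTD_LEGACY's real value is the currently used magic; these bytes are placeholders.
const CBLOB_ZSTD_LEGACY: [u8; 8] = [0; 8];

fn describe(blob: &[u8]) -> &'static str {
    match blob_prefix(blob) {
        Some(p) if p == CBLOB_ZSTD_LEGACY => "legacy zstd blob (Wasm or PoV, indistinguishable)",
        Some(_) => "look the prefix up in the open list",
        None => "too short to carry a prefix",
    }
}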

  • Conservatively, wait until no more PVFs prefixed with CBLOB_ZSTD_LEGACY remain on-chain. That may take quite some time. Alternatively, create a migration that alters prefixes of existing blobs;
  • Removing CBLOB_ZSTD_LEGACY prefix will be possible after all the nodes in all the networks cease using the prefix which is a long process, and additional incentives should be offered to the community to make people upgrade.
Drawbacks

    Currently, the only requirement for a compressed blob prefix is not to coincide with Wasm magic bytes (as stated in code comments). Changes proposed here increase prefix collision risk, given that arbitrary data may be compressed in the future. However, it must be taken into account that:

• Collision probability per arbitrary blob is ≈5.4×10⁻²⁰ for a single random 64-bit prefix (current situation) and ≈2.17×10⁻¹⁹ for the proposed set of four 64-bit prefixes (proposed situation), which is still low enough;
    • The current de facto protocol uses the current compression implementation to compress PoVs, which are arbitrary binary data, so the collision risk already exists and is not introduced by changes proposed here.
Testing, Security, and Privacy

    As the change increases granularity, it will positively affect both testing possibilities and security, allowing developers to check what's inside a given compressed blob precisely. Testing the change itself is trivial. Privacy is not affected by this change.

Performance, Ergonomics, and Compatibility

Performance

    The current implementation's performance is not affected by this change. Future implementations allowing for the execution of other-than-Wasm parachain runtimes will benefit from this change performance-wise.

Ergonomics

    The end-user ergonomics is not affected. The ergonomics for developers will benefit from this change as it enables exact checks and less guessing.

Compatibility

    The change is designed to be backward-compatible.

Prior Art and References

    SDK PR#6704 (WIP) introduces a mechanism similar to that described in this proposal and proves the necessity of such a change.

Unresolved Questions

    None

Future Directions and Related Material

    This proposal creates a foundation for two future work directions:

    • Proposing to introduce other-than-Wasm code executors, including PolkaVM, allowing parachain runtime authors to seamlessly change execution platform using the existing mechanism of runtime upgrades;
Authors: ordian

Summary

      This RFC proposes changes to the erasure coding algorithm and the method for computing the erasure root on Polkadot to improve performance of both processes.

Motivation

The Data Availability (DA) Layer in Polkadot provides a foundation for shared security, enabling Approval Checkers and Collators to download Proofs-of-Validity (PoV) for security and liveness purposes respectively. The proposed change is orthogonal to RFC-47 and can be used in conjunction with it. Since RFC-47 already requires a breaking upgrade (of validator and collator nodes), we propose bundling another performance-enhancing breaking change that addresses the CPU bottleneck in the erasure coding process, but using a separate node feature (NodeFeatures part of HostConfiguration) for its activation.

Stakeholders

      • Infrastructure providers (operators of validator/collator nodes) will need to upgrade their client version in a timely manner
Explanation

      We propose two specific changes:

1.

      2. Activate RFC-47 via Configuration::set_node_feature runtime change.
      3. Activate the new erasure coding scheme using another Configuration::set_node_feature runtime change.
Drawbacks

      Bundling this breaking change with RFC-47 might reset progress in updating collators. However, the omni node initiative should help mitigate this issue.

Testing, Security, and Privacy

      Testing is needed to ensure binary compatibility across implementations in multiple languages.

      Performance and Compatibility

Performance

      According to benchmarks:

      • A proper SIMD implementation of Reed-Solomon is 3-4× faster for encoding and up to 9× faster for full decoding
      • Binary Merkle Trees produce proofs that are 4× smaller and slightly faster to generate and verify
Compatibility

      This requires a breaking change that can be coordinated following the same approach as in RFC-47.

Prior Art and References

      JAM already utilizes the same optimizations described in the Graypaper.

Unresolved Questions

      None.

Future Directions and Related Material

      Future improvements could include:

      • Using ZK proofs to eliminate the need for re-encoding data to verify correct encoding
Authors: Jonas Gehrlein

Summary

        This RFC proposes burning 80% of transaction fees accrued on Polkadot’s Relay Chain and, more significantly, on all its system parachains. The remaining 20% would continue to incentivize Validators (on the Relay Chain) and Collators (on system parachains) for including transactions. The 80:20 split is motivated by preserving the incentives for Validators, which are crucial for the security of the network, while establishing a consistent fee policy across the Relay Chain and all system parachains.
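The proposed split, as plain arithmetic (illustrative only; integer rounding here favors the burn):

fn split_fee(fee: u128) -> (u128, u128) {
    let to_author = fee / 5;     // 20% to the block author (validator or collator)
    (fee - to_author, to_author) // 80% burned
}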


This proposal extends the system's deflationary direction and enables direct value capture for DOT holders from an overall increase in activity on the network.

Motivation

        Historically, transaction fees on both the Relay Chain and the system parachains (with a few exceptions) have been relatively low. This is by design—Polkadot is built to scale and offer low-cost transactions. While this principle remains unchanged, growing network activity could still result in a meaningful accumulation of fees over time.

Implementing this RFC ensures that potentially increasing activity, manifesting in more fees, is captured for all token holders. It further aligns the way the network handles fees (such as from transactions or for coretime usage). The arguments in support of this are close to those outlined in RFC0010. Specifically, burning transaction fees has the following benefits:

        Compensation for Coretime Usage


        Value Accrual and Deflationary Pressure

By burning the transaction fees, the system effectively reduces the token supply and thereby increases the scarcity of the native token. This deflationary pressure can increase the token's long-term value and ensures that the value captured is translated equally to all existing token holders.

        This proposal requires only minimal code changes, making it inexpensive to implement, yet it introduces a consistent policy for handling transaction fees across the network. Crucially, it positions Polkadot for a future where fee burning could serve as a counterweight to an otherwise inflationary token model, ensuring that value generated by network usage is returned to all DOT holders.

Stakeholders

        • All DOT Token Holders: Benefit from reduced supply and direct value capture as network usage increases.

Authors: eskimor

Summary

This RFC proposes an amendment to RFC-1 Agile Coretime: renewal prices will no longer be adjusted only by a configurable renewal bump, but also raised to the lower end of the current sale - if that turns out higher.

          An implementation can be found here.

Motivation

In RFC-1, we strived for perfect predictability on renewal prices, but what we expected unfortunately got proven in practice: perfect predictability allows for core hoarding and cheap market manipulation. The problems extend to elastic scaling and, in practice, even to existing teams wanting to keep their core, because they forgot to renew in the interlude.

          In a nutshell the current situation is severely hindering teams from deploying on Polkadot: We are essentially in a Denial of Service situation.

Stakeholders

Stakeholders are existing teams that already hold a core and new teams wanting to join the ecosystem.

Explanation

This RFC proposes to fix this situation by limiting renewal price predictability to reasonable levels, introducing a weak coupling to the current market price: we ensure that the price for renewals is at least as high as the lower end of the current sale. This ensures that any additional attack will be expensive, and leaves some leeway for governance in case of unforeseen attacks or weaknesses.
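The rule reduces to a single max() over the previously bumped price; the names here are illustrative, not the broker pallet's:

fn new_renewal_price(old_price: u128, renewal_bump_percent: u128, sale_floor_price: u128) -> u128 {
    let bumped = old_price.saturating_add(old_price.saturating_mul(renewal_bump_percent) / 100);
    bumped.max(sale_floor_price)
}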

Drawbacks

We are dropping almost perfect predictability on renewal prices, in favor of predictability within reasonable bounds. The introduction of a minimum price will also result in huge relative price adjustments for existing tenants, because prices were so unreasonably low on Kusama. In practice this should not be an issue for any real project.

Testing, Security, and Privacy

          This RFC is proposing a single line of code change. A test has been added to make sure it is working as expected.

The configured minimum price applies to renewals of existing tenants with a 10x reduction. Having them exposed at least with this 10x reduction seems a sensible valuation.

          There are no privacy concerns.

Performance, Ergonomics, and Compatibility

          The proposed changes are backwards compatible. No interfaces are changed. Performance is not affected. Ergonomics should be greatly improved especially for new entrants, as cores will be available for sale again. A configured minimum price also ensures that the starting price of the Dutch auction stays reasonably high, deterring sniping all the cores at the beginning of a sale.

Prior Art and References

          This RFC is altering RFC-1 and taking ideas from RFC-17, mainly the introduction of a minimum price.

Future Directions and Related Material

          This RFC should solve the immediate problems we are seeing in production right now. Longer term, improvements to the market in terms of price discovery (RFC-17) should be considered, especially once demand grows.


Mitigation for this edge case is relatively simple: bump renewals more aggressively the fewer cores are available on the free market. For now, leaving a few cores not for sale should be enough to mitigate such a situation.

(source)

Table of Contents

RFC-0000: Pre-ELVES soft consensus

Start Date: Date of initial proposal
Description: Provide and exploit a soft consensus before launching approval checks
Authors: Jeff Burdges, Alistair Stewart

          Summary

Availability (bitfield) votes gain a preferred_fork flag which expresses the validator's opinion upon relay chain equivocations and babe forks, while still sharing availability votes for all relay chain blocks. We make relay chain block production require a supermajority with preferred_fork set, so forks cannot advance if they split the honest validators, which creates an early soft consensus. We similarly defend ELVES from relay chain equivocation attacks and prevent redundant approvals across babe forks.

Motivation

We've always known relay chain equivocations break the ELVES threat model. We originally envisioned ELVES having fallback pathways, but doing fallbacks requires dangerous, subtle debugging. This approach also lets us support more assignment schemes in ELVES, including one novel post-quantum scheme and some very low CPU usage schemes.

We expect this early soft consensus creates back pressure that improves performance under babe forks.

Alistair: TODO?

Stakeholders

We modify the availability votes and restrict relay chain blocks, fork choice, and ELVES start conditions, so the changes mostly concern the parachain protocol. See the alternatives notes below on the flag under sassafras chains like JAM.

Explanation

Availability voting


          At present, availability votes have a bitfield representing the cores, a relay_parent, and a signature. We process these on-chain in several steps: We first validate the signatures, zero any bits for cores included/enacted between the relay_parent and our predecessor, sum the set bits for each core, and finally include/enact the core if this exceeds 2/3rds of the validators.


          Availability votes gain a preferred_fork flag, which honest validators set for exactly one relay_parent on their availability votes in a block production slot. We say a validator prefers a fork given by chain head h if it provides an availability vote with relay_parent = h and preferred_fork set.

Validators receive a minor equivocation slash if they claim to set preferred_fork for two different relay_parents in the same slot. In sassafras, this means preferred fork equivocations can only occur for relay chain equivocations, but under babe preferred fork equivocations could occur between primary and secondary blocks, or between other primary blocks.

All validators still provide availability votes for all forks, because those non-preferred votes could still help enact candidates faster, but those non-preferred votes have preferred_fork zeroed.
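For concreteness, a minimal Rust sketch of the extended vote; the type and field names here are hypothetical stand-ins, not the actual Polkadot primitives:

type Hash = [u8; 32];
type ValidatorSignature = [u8; 64];

/// Sketch of an availability vote carrying the proposed flag (names hypothetical).
struct AvailabilityVote {
    /// One bit per availability core.
    bitfield: Vec<bool>,
    /// The chain head this vote attests availability for.
    relay_parent: Hash,
    /// Set on at most one relay_parent per slot: this validator's preferred fork.
    preferred_fork: bool,
    /// Signature over the fields above.
    signature: ValidatorSignature,
}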

Around this, validators could optionally provide an early availability vote that commits to their preferred fork, and then later provide a second availability vote stating the same preferred fork but with a fuller bitfield, provided doing so somehow helps relay chain block producers.

Fork choice

We require relay chain block producers build upon forks preferred by 2f+1 validators. In other words, a relay chain block with parent p must contain availability bitfield votes from 2f+1 validators with relay_parent = p and preferred_fork set. It follows that our preferred fork votes override other fork choice priorities.

A relay chain block producer could lack this 2f+1 threshold for a prospective parent block p, in which case they must build upon the parent of p instead. We know slow availability votes alone would cause this sometimes, in which case adding slightly more delay could save the relay chain slot. Alternatively though, two distinct relay chain blocks in the same slot could each wind up preferred by f+1 validators, in which case we must abandon the slot entirely. The parent check itself reduces to counting preference votes, as sketched below.
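A minimal sketch of that check, reusing the AvailabilityVote sketch above and assuming signature verification and vote gossip are handled upstream:

use std::collections::HashSet;

/// May we build upon chain head `p`? True once 2f+1 distinct validators
/// provided a vote with relay_parent == p and preferred_fork set.
fn may_build_upon(p: Hash, votes: &[(u32, AvailabilityVote)], n_validators: usize) -> bool {
    let f = (n_validators - 1) / 3;
    let preferring: HashSet<u32> = votes
        .iter()
        .filter(|(_, v)| v.relay_parent == p && v.preferred_fork)
        .map(|(idx, _)| *idx)
        .collect();
    preferring.len() >= 2 * f + 1
}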

Elves

We only launch the approvals process, aka (machine) elves, for a relay chain block p once 2f+1 validators prefer that block, i.e. once 2f+1 validators provide availability votes with relay_parent = p and preferred_fork set. We could optionally delay this further until we have some valid descendant of p.

Fast pruning

In fact, this new fork choice logic creates more short relay chain forks than exist currently: if the validators split their votes, then we create a new fork in a later slot. We no longer need to process every fork now though.

Instead, availability votes from honest validators must express the correct preferred fork, which requires validators carefully time when they judge and announce their preference flags. In babe, we need primary slots to be preferred over secondary slots, so validators need logic that delays sending availability votes for a secondary slot, giving the primary slot enough time. We also prefer the primary slot with the smallest VRF output, so we need some delay even once we receive a primary. A sketch of this preference order appears below.
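A sketch of that preference order between babe slot claims; the representation is hypothetical, as the real claims live inside the block header's digest:

/// Hypothetical representation of a babe slot claim for fork preference.
enum SlotClaim {
    /// Primary slots carry a VRF output; the smallest output wins.
    Primary { vrf_output: [u8; 32] },
    /// Secondary slots lose to any primary that arrives in time.
    Secondary,
}

/// True if claim `a` is strictly preferable to claim `b` in the same slot.
fn preferable(a: &SlotClaim, b: &SlotClaim) -> bool {
    match (a, b) {
        (SlotClaim::Primary { vrf_output: va }, SlotClaim::Primary { vrf_output: vb }) => va < vb,
        (SlotClaim::Primary { .. }, SlotClaim::Secondary) => true,
        _ => false,
    }
}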


          We suggest roughly this approach:


          First, download only relay chain block headers, from which we determine our tentative preferred fork.


          Second, we download and import only our currently tentatively preferred fork. We download our availability chunks as soon as we import a currently tentatively preferred relay chain block. We've no particular target for availability chunks other than simply some delay timer. In babe, we add some extra delay here for secondary slots, like perhaps 2 seconds minus the actual execution time, so that a fast secondary slot cannot beat a primary slot.

We sometimes obtain an even more preferable header during import, chunk distribution, or the delays for our first tentatively preferred fork. Also, the first could simply turn out invalid. In either case, we loop and repeat this second step on our new tentative preferred fork. We repeat this process until an import succeeds and its timers run out without receiving any more preferable header. Actual equivocations cannot be preferable over one another, so this loop terminates reasonably quickly.


          Next, we broadcast our availability vote with its relay_parent set to our tentatively preferred fork, and with its preferred_fork set.

Finally, if 2f+1 other validators have a different preference from us, then we download and import their preferred relay chain block, fetch chunks for it, and provide availability votes with preferred_fork zeroed. It's possible this occurs before our own preference process finishes, in which case we probably still send out our preference, if only for forensic evidence.

Concerns: Drawbacks, Testing, Security, and Privacy

This adds subtle timing constraints, which could entrench existing performance obstacles. We might explore variations that ignore wall clock time.

We've always known relay chain equivocations break the ELVES threat model. We originally envisioned ELVES having fallback pathways, but these were complex and demanded unused code paths, which cannot realistically be debugged. Although complex, the early soft consensus scheme feels less complex overall. We know timing is painful to optimise in a distributed system, but at least doing so exercises everyday code paths.

Performance, Ergonomics, and Compatibility

We expect early soft consensus to introduce back pressure that radically alters performance. We no longer run approval checks upon all forks. As primary slots occur once every other slot in expectation, one might expect a 25% reduction in CPU load, but this depends upon diverse factors.

We apply back pressure by dropping some whole relay chain blocks though, so this shall increase the expected parachain block time somewhat, but how much depends upon future optimisation work.

Compatibility

Major upgrade

Prior Art and References

...

Unresolved Questions

We halt the chain when less than 2/3 of validators are online. We consider this reasonable since governance now runs on a parachain, ELVES would not be secure, and nothing could be finalized anyways. We could perhaps add some "recovery mode" where the relay chain embeds entire system parachain blocks, but doing so might not warrant the effort required.

Sassafras

Arguably, a sassafras relay chain like JAM could avoid the preferred_fork flag, by only releasing availability votes for at most one side of a sassafras equivocation. We wanted availability for babe forks, but sassafras has only equivocations, so those blocks can simply be dropped.


          In principle, a sassafras equivocation could still enter the valid chain, assuming 2/3rd of validators provide availability votes for the same equivocations. If JAM lacks the preferred_fork flag then enactment proceeds slower in this case, but this should almost never occur.

Threshold randomness

We think threshold randomness could reduce the tranche zero approval checker assignments by roughly 40%, meaning a fixed 15 vs the expected 25 in the elves paper (30 in production now).

We do know threshold VRF based schemes that address relay chain equivocations directly, by using the relay chain block hash as input. We have many more options with early soft consensus though. TODO In particular, we only know two post-quantum approaches to elves, and the bandwidth efficient one needs early soft consensus.

Mid-strength consensus

In this RFC, we only require that each relay chain block contain preference votes for its parent from 2/3rds of validators. We could enforce the opposite direction too: around y > 2 seconds after a validator V has seen preference votes for a chain head X from 2/3rds of validators, V begins rejecting any relay chain block that does not build upon X. This is tricky because the y > 2 second delay must be long enough that most honest nodes learn both X and its preference votes. In this, we might treat preferred_fork votes as evidence for finality of the parent of the vote's relay_parent. This strengthens MEV defenses that assume some honest nodes.

Avoid wall clock time

We know parachains could base their slots upon relay chain slots, instead of wall clock time (RFC ToDo). After this happens, we could avoid or minimize wall clock timing in the relay chain too, so that relay chain slots could have a floating duration based upon workload.

Partial relay chain blocks

Above, we only discuss abandoning relay chain blocks which fail early soft consensus. We could alternatively treat them as partial blocks and build extension partial blocks that complete them, with elves probably using randomness from the final partial block.

(source)

Table of Contents

RFC-0000: Validator Rewards

Start Date: Date of initial proposal
Description: Rewards protocol for Polkadot validators
Authors: Jeff Burdges, ...

          Summary


          An off-chain approximation protocol should assign rewards based upon the approvals and availability work done by validators.


          All validators track which approval votes they actually use, reporting the aggregate, after which an on-chain median computation gives a good approximation under byzantine assumptions. Approval checkers report aggregate information about which availability chunks they use too, but in availability we need a tit-for-tat game to enforce honesty, because approval committees could often bias results thanks to their small size.

Motivation

We want all or most polkadot subsystems to be profitable for validators, because otherwise operators might profit from running modified code. In particular, almost all rewards in Kusama/Polkadot should come from work done securing parachains, primarily approval checking, but also backing, availability, and support of XCMP.

Among these tasks, our highest priorities must be approval checks, which ensure soundness, and sending availability chunks to approval checkers. We prove backers must be paid strictly less than approval checkers.

At present though, validators' rewards have relatively little relationship to validators' operating costs, in terms of bandwidth and CPU time. Worse, polkadot's scaling makes us particularly vulnerable to "no-shows" caused by validators skipping their approval checks.

We're particularly concerned about hardware specs' impact upon the number of parachain cores. We've requested relatively low spec machines so far, only four physical CPU cores, although some run even lower specs like only two physical CPU cores. Alone, rewards cannot fix our low-spec validator problem, but rewards and outreach together should have far more impact than either alone.

In future, we'll further increase validator spec requirements, which directly improves polkadot's throughput, and which repeats this dynamic of purging under-spec'd nodes, except outreach becomes more important because de facto too many slow validators can "out vote" the faster ones.

Stakeholders

We alter the validators' rewards protocol, but with negligible impact upon rewards for honest validators who comply with hardware and bandwidth recommendations.

We shall still reward participation in relay chain consensus of course, which de facto means block production but not finality, but these current reward levels shall wind up greatly reduced. Any validators who manipulate block rewards now could lose rewards here, simply because rewards shift from block production to availability, but this sounds desirable.


          We've discussed roughly this rewards protocol in https://hackmd.io/@rgbPIkIdTwSICPuAq67Jbw/S1fHcvXSF and https://github.com/paritytech/polkadot-sdk/issues/1811 as well as related topics like https://github.com/paritytech/polkadot-sdk/issues/5122

Logic

Categories

We alter the current rewards scheme by reducing the existing reward categories to roughly these proportions of total rewards:

• 15-20% - Relay chain block production and uncle logic.
• 5% - Anything else related to relay chain finality, primarily beefy proving, but maybe other tasks exist.
• Any existing rewards for on-chain validity statements would only cover backers, so those rewards must be removed.

          We add roughly these proportions of total rewards covering parachain work:

• 70-75% - Approval and backing validity checks, with backing rewards required to be less than approval rewards.
• 5-10% - Availability redistribution from availability providers to approval checkers. We do not reward availability distribution from backers to availability providers.

          Observation


          We track this data for each candidate during the approvals process:

/// Our subjective record of availability transfers for this candidate.
struct CandidateRewards {
    /// Anyone who backed this parablock.
    backers: [AuthorityId; NUM_BACKERS],
    /// Anyone who we think no-showed, even only briefly.
    noshows: HashSet<AuthorityId>,
    /// Anyone who sent us chunks for this candidate.
    downloaded_from: HashMap<AuthorityId, u16>,
    /// Anyone to whom we sent chunks for this candidate.
    uploaded_to: HashMap<AuthorityId, u16>,
}

          We no longer require this data during disputes.

After we approve a relay chain block, we collect all its CandidateRewards into an ApprovalsTally, with one ApprovalTallyLine for each validator. In this, we compute approval_usages from the final run of the approvals loop, plus 0.8 for each backer.

As discussed below, we say a validator 𝑢 uses an approval vote by a validator 𝑣 on a candidate 𝑐 if the final approving run of the elves approval loop by 𝑢 counted the vote by 𝑣 towards approving the candidate 𝑐. We only count these useful votes that actually get used.

/// Our subjective record of what we used from, and provided to, all other validators on the finalized chain.
pub struct ApprovalsTally(Vec<ApprovalTallyLine>);

/// Our subjective record of what we used from, and provided to, one other validator on the finalized chain.
pub struct ApprovalTallyLine {
    /// Approvals by this validator which our approvals gadget used in marking candidates approved.
    approval_usages: u32,
    /// How many times we think this validator no-showed, even only briefly.
    noshows: u32,
    /// Availability chunks we downloaded from this validator for our approval checks that got used.
    used_downloads: u32,
    /// Availability chunks we uploaded to this validator whose approval checks we used.
    used_uploads: u32,
}

At finality, we sum these ApprovalsTallys into one ApprovalsTally for the whole epoch so far. We can optionally sum them earlier at chain heads, but this requires mutability.

Messages

After the epoch is finalized, we share the first three fields of each ApprovalTallyLine in its ApprovalsTally.

/// Our subjective record of what we used from some other validator on the finalized chain.
pub struct ApprovalTallyMessageLine {
    /// Approvals by this validator which our approvals gadget used in marking candidates approved.
    approval_usages: u32,
    /// How many times we think this validator no-showed, even only briefly.
    noshows: u32,
    /// Availability chunks we downloaded from this validator for our approval checks that got used.
    used_downloads: u32,
}

/// Our subjective record of what we used from all other validators on the finalized chain.
pub struct ApprovalsTallyMessage(Vec<ApprovalTallyMessageLine>);

          Actual ApprovalsTallyMessages sent over the wire must be signed of course, likely by the grandpa ed25519 key.

Rewards computation

We compute the approvals rewards for each validator by taking the median of the approval_usages fields reported for that validator across all validators' ApprovalsTallyMessages. We compute some noshows_percentiles for each validator similarly, but using a 2/3 percentile instead of the median.

let mut approval_usages_medians = Vec::new();
let mut noshows_percentiles = Vec::new();
for i in 0..num_validators {
    // Median of the approval usages reported for validator i.
    let mut v: Vec<u32> = approvals_tally_messages.iter().map(|atm| atm.0[i].approval_usages).collect();
    v.sort();
    approval_usages_medians.push(v[num_validators / 2]);
    // 2/3 percentile of the no-show reports for validator i: at least 2/3rds
    // of validators reported at least this many no-shows.
    let mut v: Vec<u32> = approvals_tally_messages.iter().map(|atm| atm.0[i].noshows).collect();
    v.sort();
    noshows_percentiles.push(v[num_validators / 3]);
}

Assuming more than 50% honesty, these medians tell us how many approval votes came from each validator.

We re-weight the used_downloads reported by each validator: each reporter's claims get scaled by that reporter's median approval count times the expected f+1 chunks per check, divided by the total chunk downloads the reporter claimed, and the results get summed per availability provider:

#[cfg(offchain)]
let mut my_missing_uploads: Vec<u64> =
    my_approvals_tally.0.iter().map(|l| l.used_uploads as u64).collect();
let mut reweighted_total_used_downloads = vec![0u64; num_validators];
for (j, atm) in approvals_tally_messages.iter().enumerate() {
    // Total chunk downloads validator j claimed, across all providers.
    let d: u64 = atm.0.iter().map(|l| l.used_downloads as u64).sum();
    if d == 0 { continue; }
    for i in 0..num_validators {
        // Scale j's per-provider claims so they sum to (f+1) chunks per
        // approval actually credited to j, matching the beta-prime
        // re-weighting described in the Explanation section.
        let from_j_for_i = (atm.0[i].used_downloads as u64) * (f + 1) * approval_usages_medians[j] as u64 / d;
        #[cfg(offchain)]
        if i == me {
            my_missing_uploads[j] = my_missing_uploads[j].saturating_sub(from_j_for_i);
        }
        reweighted_total_used_downloads[i] += from_j_for_i;
    }
}

We distribute rewards on-chain using approval_usages_medians and reweighted_total_used_downloads. Approval checkers could later change from whom they download chunks using my_missing_uploads.

We deduct a small amount of rewards using noshows_percentiles too, likely 1% of the rewards for an approval per no-show, but excuse some small number of no-shows, ala noshows_percentiles[i].saturating_sub(MAX_NO_PENALTY_NOSHOWS).
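Composing the pieces so far, a sketch of the per-validator reward line; points_per_approval and the excused no-show constant are illustrative assumptions, not values fixed by this RFC:

const MAX_NO_PENALTY_NOSHOWS: u32 = 2; // illustrative value, not specified here

/// Era points for validator i: credit per used approval, minus 1% of an
/// approval's reward for each unexcused no-show.
fn era_points(i: usize, approval_usages_medians: &[u32], noshows_percentiles: &[u32], points_per_approval: u64) -> u64 {
    let earned = approval_usages_medians[i] as u64 * points_per_approval;
    let unexcused = noshows_percentiles[i].saturating_sub(MAX_NO_PENALTY_NOSHOWS) as u64;
    earned.saturating_sub(unexcused * points_per_approval / 100)
}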

Strategies

In theory, validators could adopt whatever strategy they like to penalize validators who stiff them on availability redistribution rewards, except they should not stiff back, only choose other availability providers. We discuss one good strategy below, but initially this could go unimplemented.

Consensus

We avoid placing rewards logic on the relay chain for now, so we must either collect the signed ApprovalsTallyMessages and do the above computations somewhere sufficiently trusted, like a parachain, or via some distributed protocol with its own assumptions.

In-core

A dedicated rewards parachain could easily collect the ApprovalsTallyMessages and do the above computations. In this, we logically have two phases: first we build the on-chain Merkle tree M of ApprovalsTallyMessages, and second we process those into the rewards data.

Any in-core approach risks enough malicious collators biasing the rewards by censoring the ApprovalsTallyMessages of some validators during the first phase. After this first phase completes, our second phase proceeds deterministically.

As an option, each validator could handle this second phase itself by creating a single heavy transaction with n state accesses in this Merkle tree M, with this transaction sending the era points.

A remark for future developments:


          JAM-like non/sub-parachain accumulation could mitigate the risk of the rewards parachain being captured.


          JAM services all have either parachain accumulation or else non/sub-parachain accumulation.

• A parachain should mean any service that tracks mutable state roots on the relay chain, with its accumulation updating the state roots. Inherently, these state roots create some capture risk for the parachain, although how much depends upon numerous other factors.
• A non/sub-parachain means the service does not maintain state like a blockchain does, but could use some tiny state within the relay chain. Although seemingly less powerful than parachains, these non/sub-parachain accumulations could reduce the capture risk so that any validator could create a block for the service, without knowing any existing state.

          In our case, each ApprovalsTallyMessage would become a block for the first phase rewards service, so then the accumulation tracks an MMR of the rewards service block hashes, which becomes M from Option 1. At 1024 validators this requires 9 * 32 = 288 bytes for the MMR and 1024/8 = 128 bytes for a bitfield, so 416 bytes of relay chain state in total. Any validator could then add their ApprovalsTallyMessage in any order, but only one per relay chain block, so the submission timeframe should be long enough to prevent censorship.


          Arguably after JAM, we should migrate critical functions to non/sub-parachain aka JAM services without mutable state, so this covers validator elections, DKGs, and rewards. Yet, non/sub-parachains cannot eliminate all censorship risks, so the near term benefits seem questionable.

Off-core

All validators could collect ApprovalsTallyMessages and independently compute rewards off-core. At that point, all validators have opinions about all other validators' rewards, but even among honest validators these opinions could differ if some lack some ApprovalsTallyMessages.


          We'd have the same in-core computation problem if we perform statistics like medians upon these opinions. We could however take an optimistic approach where each validator computes medians like above, but then shares their hash of the final rewards list. If 2/3rds voted for the same hash, then we distribute rewards as above. If not, then we distribute no rewards until governance selects the correct hash.
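A minimal sketch of that optimistic tally, with the digest type and vote plumbing abstracted away:

use std::collections::HashMap;

/// Returns the rewards-list hash voted for by at least 2/3rds of validators,
/// or None, in which case rewards wait for governance.
fn agreed_rewards_hash(votes: &[[u8; 32]], num_validators: usize) -> Option<[u8; 32]> {
    let mut counts: HashMap<[u8; 32], usize> = HashMap::new();
    for h in votes {
        *counts.entry(*h).or_default() += 1;
    }
    counts.into_iter().find(|(_, c)| 3 * *c >= 2 * num_validators).map(|(h, _)| h)
}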

We never validate in-core the signatures on ApprovalsTallyMessages or the computation, so this approach permits more direct cheating by a malicious 2/3rds majority, but if that occurs then we've broken our security assumptions anyways. It's somewhat likely these hashes do diverge during some network disruptions though, which increases our "drama" factor considerably, which may be unacceptable.

Explanation

Backing

Polkadot's efficiency creates subtle liveness concerns: anytime one node cannot perform one of its approval checks, Polkadot loses in expectation 3.25 approval checks, or 0.10833 parablocks. This makes back pressure essential.

We cannot throttle approval checks securely either, so reactive off-chain back pressure only makes sense during or before the backing phase. In other words, if nodes feel overworked themselves, or perhaps believe others to be, then they should drop backing checks, never approval checks. It follows that backing work must be rewarded less well and less reliably than approvals, as otherwise validators could benefit from behavior that harms the network.


          We propose that one backing statement be rewarded at 80% of one approval statement, so backers earn only 80% of what approval checkers earn. We omit rewards for availability distribution, so backers spend more on bandwidth too. Approval checkers always fetch chunks first from backers though, so good backers earn roughly 7% there, meaning backing checks earn roughly 13% less than approval checks. We should lower this 80% if we ever increase availability redistribution rewards.

Although imperfect, we believe this simplifies implementation and provides robustness against mistakes elsewhere, including governance mistakes, but incurs minimal risk. In principle, backers might not distribute systemic chunks, but approval checkers fetch systemic chunks from backers first anyways, so likely this yields negligible gains.

As always, we require that backers' rewards cover their operational costs plus some profit, but approval checks must be more profitable.

Approvals

In polkadot, all validators run the elves approval loop for each candidate, in which the validator listens to other approval checkers' assignments and approval statements/votes, with which it marks checkers no-show or done, and marks candidates approved. This loop also determines and announces the validator's own approval checker assignments.

Any validator should always conclude whatever approval checks it begins, but our approval assignment loop ignores some approval checks, either because they were announced too soon or because an earlier no-show delivered its approval vote before the final approval. We say a validator $u$ uses an approval vote by a validator $v$ on a candidate $c$ if the approval assignments loop by $u$ counted the vote by $v$ towards approving the candidate $c$. We actually rerun the elves approval loop quite frequently, but only the final run that marks the candidate approved determines the useful approval votes.

We should not reward votes announced too soon, so by only counting the final run we unavoidably omit rewards for some honest no-show replacements too. We expect the 80%-ish discount for backing covers these losses, so approval checks remain more profitable than backing.


          We propose a simple approximate solution based upon computing medians across validators for used votes.

1. In an epoch $e$, each validator $u$ counts the number $\alpha_{u,v}$ of votes they used from each validator $v$, including themselves. Any time a validator marks a candidate approved, they increment these counts appropriately.

2. After epoch $e$'s last block gets finalized, all validators of epoch $e$ submit an approvals tally message ApprovalsTallyMessage that reveals the number $\alpha_{u,v}$ of useful approvals they saw from each validator $v$ on candidates that became available in epoch $e$. We do not send $\alpha_{u,u}$, for tit-for-tat reasons discussed below, not for bias concerns. We record these approvals tally messages on-chain.

3. After some delay, we compute on-chain the median $\alpha_v := \textrm{median} \{ \alpha_{u,v} : u \}$ of used approval statements for each validator $v$.

          As discussed in https://hackmd.io/@rgbPIkIdTwSICPuAq67Jbw/S1fHcvXSF we could compute these medians using the on-line algorithm if substrate had a nice priority queue.

We never achieve true consensus on approval checkers and their approval votes. Yet, our approval assignment loop gives a rough consensus, under our Byzantine assumption and some synchrony assumption. It then follows that mis-reporting by malicious validators should not appreciably alter the median $\alpha_v$ and hence rewards.


          We never tally used approval assignments to candidate equivocations or other forks. Any validator should always conclude whatever approval checks it begins, even on other forks, but we expect relay chain equivocations should be vanishingly rare, and sassafras should make forks uncommon.

We account for no-shows similarly, and deduct a much smaller amount of rewards, but require a 2/3 percentile level, not just a median.

Availability redistribution

As approval checkers could easily perform useless checks, we shall reward availability providers for the availability chunks they provide that result in useful approval checks. We enforce honesty using a tit-for-tat mechanism, because chunk transfers are inherently subjective.

An approval checker reconstructs the full parachain block by downloading $f+1$ distinct chunks from other validators, where at most $f$ validators are byzantine, out of the $n \ge 3 f + 1$ total validators. In downloading chunks, validators prefer the $f+1$ systemic chunks over the non-systemic chunks, and prefer fetching from validators who already voted valid, like backing checkers. It follows that some validators should receive credit for more than one chunk per candidate.


          We expect a validator $v$ has actually performed more approval checks $\omega_v$ than the median $\alpha_v$ for which they actually received credit. In fact, approval checkers even ignore some of their own approval checks, meaning $\alpha_{v,v} \le \omega_v$ too.

Alongside the approvals count for epoch $e$, approval checker $v$ computes the counts $\beta_{u,v}$ of the number of chunks they downloaded from each availability provider $u$, excluding themselves, for which they perceive the approval check turned out useful, meaning their own approval counts in $\alpha_{v,v}$. Approval checkers publish $\beta_{u,v}$ alongside $\alpha_{u,v}$ in the approvals tally message ApprovalsTallyMessage. We originally proposed including the self availability usage $\beta_{v,v}$ here, but this should not matter, and excluding it simplifies the code.

Symmetrically, availability provider $u$ computes the counts $\gamma_{u,v}$ of the number of chunks they uploaded to each approval checker $v$, again including themselves, and again only for those where they perceive the approval check turned out useful. Availability provider $u$ never reveals its $\gamma_{u,v}$ however.


          At this point, $\alpha_v$, $\alpha_{v,v}$, and $\alpha_{u,v}$ all potentially differ. We established consensus upon $\alpha_v$ above however, with which we avoid approval checkers printing unearned availability provider rewards:

After receiving "all" pairs $(\alpha_{u,v},\beta_{u,v})$, validator $w$ re-weights the $\beta_{u,v}$ and their own $\gamma_{w,v}$:

$$
\begin{aligned}
\beta'_{w,v} &= \frac{(f+1)\,\alpha_v}{\sum_u \beta_{u,v}}\, \beta_{w,v} \\
\gamma'_{w,v} &= \frac{(f+1)\,\alpha_w}{\sum_v \gamma_{w,v}}\, \gamma_{w,v}
\end{aligned}
$$

At this point, we compute $\beta'_w = \sum_v \beta'_{w,v}$ on-chain for each $w$ and reward $w$ proportionally.

Tit-for-tat

We employ a tit-for-tat strategy to punish validators who lie about from whom they obtain availability chunks. We only alter validators' future choices of from whom they obtain availability chunks, and never punish by lying ourselves, so nothing here breaks polkadot, but not having roughly this strategy enables cheating.

An availability provider $w$ defines $\delta'_{w,v} := \gamma'_{w,v} - \beta'_{w,v}$ to be the re-weighted number of chunks by which $v$ stiffed $w$. Now $w$ increments their cumulative stiffing perception $\eta_{w,v}$ of $v$ by the value $\delta'_{w,v}$, so $\eta_{w,v} \mathrel{+}= \delta'_{w,v}$.

In future, anytime $w$ seeks chunks in reconstruction, $w$ skips $v$ with probability proportional to $\eta_{w,v} / \sum_u \eta_{w,u}$, with each skip reducing $\eta_{w,v}$ by 1. We expect honest accidental availability stiffs have only small $\delta'_{w,v}$, so they clear out quickly, but intentional stiffing adds up more quickly.
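A sketch of this bookkeeping; names are hypothetical, and the randomness draw is abstracted as a uniform sample in [0, 1):

struct TitForTat {
    /// eta[v]: our cumulative perception of re-weighted chunks by which v stiffed us.
    eta: Vec<f64>,
}

impl TitForTat {
    /// After a rewards round, fold in delta'_{w,v} = gamma'_{w,v} - beta'_{w,v}.
    fn record_round(&mut self, delta_prime: &[f64]) {
        for (e, d) in self.eta.iter_mut().zip(delta_prime) {
            *e += d.max(0.0); // only positive stiffing accumulates
        }
    }

    /// When about to fetch a chunk from v, skip with probability
    /// eta[v] / sum(eta), decrementing eta[v] by one per skip.
    /// `roll` is a uniform sample in [0, 1).
    fn should_skip(&mut self, v: usize, roll: f64) -> bool {
        let total: f64 = self.eta.iter().sum();
        if total <= 0.0 {
            return false;
        }
        if roll < self.eta[v] / total {
            self.eta[v] = (self.eta[v] - 1.0).max(0.0);
            true
        } else {
            false
        }
    }
}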

We keep $\gamma_{w,v}$ and $\alpha_{u,u}$ secret so that approval checkers cannot really know others' stiffing perceptions, although $\alpha_{u,v}$ leaks some relevant information. We expect this secrecy keeps skips secret and thus prevents the tit-for-tat escalating beyond one round, which hopefully creates a desirable Nash equilibrium.

We favor systematic chunks to reduce reconstruction costs, so we face costs when skipping them. We could however fetch systematic chunks from availability providers as well as backers, or even other approval checkers, so this might not become problematic in practice.

Concerns: Drawbacks, Testing, Security, and Privacy

We do not pay backers individually for availability distribution per se. We could only do so by including this information into the availability bitfields, which complicates on-chain computation. Also, if one of the two backers does not distribute, then the availability core should remain occupied longer, meaning the lazy backer loses some rewards too. It's likely future protocol improvements change this, so we should monitor for lazy backers outside the rewards system.

Earlier drafts discussed having the tit-for-tat consider approvals too. An adversary who successfully manipulates the rewards median votes would have already violated polkadot's security assumptions though, which requires a hard fork and correcting the dot allocation. Incorrectly reported approval_usages remain interesting statistics though.

Adversarial validators could manipulate their availability votes though, even without being a supermajority. If they still download honestly, then this costs them more rewards than they earn. We do not prevent validators from preferentially obtaining their pieces from their friends though. We should analyze, or at least observe, the long-term consequences.

A priori, a whale nominator's validators could stiff validators but then rotate their validators quickly enough that they never suffer being skipped back. We discuss several possible solutions, and their difficulties, under "Rob's nominator-wise skipping" in https://hackmd.io/@rgbPIkIdTwSICPuAq67Jbw/S1fHcvXSF, but overall less seems like more here. Also, frequent validator rotation could be penalized elsewhere.

Performance, Ergonomics, and Compatibility

We operate off-chain except for final rewards votes and median tallies. We expect lower overhead rewards protocols would lack information, thereby admitting easier cheating.

Initially, we designed the ELVES approval gadget to allow on-chain operation, in part for rewards computation, but doing so looks expensive. Also, on-chain rewards computation remains only an approximation too, and could even be biased more easily than the off-chain protocol presented here.

We already teach validators about missed parachain blocks, but we'll emphasize approval checking more going forwards, because current efforts focus more upon backing.


          JAM's block exports should not complicate availability rewards, but could impact some alternative schemes.

Prior Art and References

None

Unresolved Questions

Provide specific questions to discuss and address before the RFC is voted on by the Fellowship. This should include, for example, alternatives to aspects of the proposed design where the appropriate trade-off to make is unclear.

Synthetic parachain flag

Any rewards protocol could simply be "out voted" by too many slow validators: an increase in the number of parachain cores increases the workload, but this creates no-shows if too few validators can handle this workload.

We could add a synthetic parachain flag, only settable by governance, which treats no-shows as positive approval votes for that parachain, but without adding rewards. We should never enable this for real parachains, only for synthetic ones like gluttons. We should not enable the synthetic parachain flag long-term even for gluttons, because validators could easily modify their code. Yet, synthetic approval checks might enable pushing hardware upgrades more aggressively over the short-term.

(source)

Table of Contents

RFC-0004: Remove the host-side runtime memory allocator

Start Date: 2023-07-04
Description: Update the runtime-host interface to no longer make use of a host-side allocator
Authors: Pierre Krieger

          Summary


          Update the runtime-host interface to no longer make use of a host-side allocator.


          Motivation


          The heap allocation of the runtime is currently controlled by the host using a memory allocator on the host side.


          The API of many host functions consists in allocating a buffer. For example, when calling ext_hashing_twox_256_version_1, the host allocates a 32 bytes buffer using the host allocator, and returns a pointer to this buffer to the runtime. The runtime later has to call ext_allocator_free_version_1 on this pointer in order to free the buffer.


          Even though no benchmark has been done, it is pretty obvious that this design is very inefficient. To continue with the example of ext_hashing_twox_256_version_1, it would be more efficient to instead write the output hash to a buffer that was allocated by the runtime on its stack and passed by pointer to the function. Allocating a buffer on the stack in the worst case scenario simply consists in decreasing a number, and in the best case scenario is free. Doing so would save many Wasm memory reads and writes by the allocator, and would save a function call to ext_allocator_free_version_1.


          Furthermore, the existence of the host-side allocator has become questionable over time. It is implemented in a very naive way, and for determinism and backwards compatibility reasons it needs to be implemented exactly identically in every client implementation. Runtimes make substantial use of heap memory allocations, and each allocation needs to go twice through the runtime <-> host boundary (once for allocating and once for freeing). Moving the allocator to the runtime side, while it would increase the size of the runtime, would be a good idea. But before the host-side allocator can be deprecated, all the host functions that make use of it need to be updated to not use it.

Stakeholders

No attempt was made at convincing stakeholders.

Explanation

New host functions

This section contains a list of new host functions to introduce.

(func $ext_storage_read_version_2
    (param $key i64) (param $value_out i64) (param $offset i32) (result i64))
(func $ext_default_child_storage_read_version_2
    (param $child_storage_key i64) (param $key i64) (param $value_out i64)
    (param $offset i32) (result i64))

The signature and behaviour of ext_storage_read_version_2 and ext_default_child_storage_read_version_2 is identical to their version 1 counterparts, but the return value has a different meaning. The new functions directly return the number of bytes that were written in the value_out buffer. If the entry doesn't exist, a value of -1 is returned. Given that the host must never write more bytes than the size of the buffer in value_out, and that the size of this buffer is expressed as a 32-bit number, a 64-bit value of -1 is not ambiguous.


          The runtime execution stops with an error if value_out is outside of the range of the memory of the virtual machine, even if the size of the buffer is 0 or if the amount of data to write would be 0 bytes.
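As an illustration of the intended calling convention from the runtime side, a hypothetical Rust wrapper; the packing below assumes the existing pointer-size convention of a 32-bit pointer in the low half and a 32-bit length in the high half of an i64:

extern "C" {
    /// Proposed host function: copy the storage value at `key`, starting at
    /// `offset` within the value, into the caller-provided buffer.
    fn ext_storage_read_version_2(key: i64, value_out: i64, offset: i32) -> i64;
}

/// Pack a pointer and a length into a pointer-size i64.
fn pack(ptr: u32, len: u32) -> i64 {
    (((len as u64) << 32) | ptr as u64) as i64
}

/// Hypothetical wrapper: None if the entry doesn't exist, otherwise the
/// number of bytes written into `buf`.
fn storage_read(key: &[u8], buf: &mut [u8], offset: u32) -> Option<u32> {
    let ret = unsafe {
        ext_storage_read_version_2(
            pack(key.as_ptr() as u32, key.len() as u32),
            pack(buf.as_mut_ptr() as u32, buf.len() as u32),
            offset as i32,
        )
    };
    if ret == -1 { None } else { Some(ret as u32) }
}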

(func $ext_storage_next_key_version_2
    (param $key i64) (param $out i64) (return i32))
(func $ext_default_child_storage_next_key_version_2
    (param $child_storage_key i64) (param $key i64) (param $out i64) (return i32))

The behaviour of these functions is identical to their version 1 counterparts. Instead of allocating a buffer, writing the next key to it, and returning a pointer to it, the new version of these functions accepts an out parameter containing a pointer-size to the memory location where the host writes the output. The runtime execution stops with an error if out is outside of the range of the memory of the virtual machine, even if the function wouldn't write anything to out. These functions return the size, in bytes, of the next key, or 0 if there is no next key. If the size of the next key is larger than the buffer in out, the bytes of the key that fit the buffer are written to out and any extra byte that doesn't fit is discarded.

Some notes:

• It is never possible for the next key to be an empty buffer, because an empty key has no preceding key. For this reason, a return value of 0 can unambiguously be used to indicate the lack of a next key.
• The ext_storage_next_key_version_2 and ext_default_child_storage_next_key_version_2 functions are typically used in order to enumerate keys that start with a certain prefix. Given that storage keys are constructed by concatenating hashes, the runtime is expected to know the size of the next key and can allocate a buffer that can fit said key. When the next key doesn't belong to the desired prefix, it might not fit the buffer, but given that the start of the key is written to the buffer anyway, this can be detected in order to avoid calling the function a second time with a larger buffer; see the sketch after this list.
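A hypothetical sketch of that enumeration pattern, with storage_next_key standing in for a runtime-side wrapper over ext_storage_next_key_version_2:

/// Hypothetical wrapper: writes the key following `key` into `buf` and
/// returns the full size in bytes of that next key, or 0 if none exists.
fn storage_next_key(key: &[u8], buf: &mut [u8]) -> u32 {
    unimplemented!("host call as described above")
}

fn enumerate_prefix(prefix: &[u8]) -> Vec<Vec<u8>> {
    let mut keys = Vec::new();
    let mut current = prefix.to_vec();
    // Keys are concatenated hashes, so a fixed-size buffer usually suffices.
    let mut buf = [0u8; 96];
    loop {
        let size = storage_next_key(&current, &mut buf) as usize;
        if size == 0 {
            return keys; // no next key in storage at all
        }
        let written = size.min(buf.len());
        // Even a truncated key begins with its true prefix, so leaving the
        // desired prefix is detectable without a second, larger-buffer call.
        if !buf[..written].starts_with(prefix) {
            return keys;
        }
        current = buf[..written].to_vec();
        keys.push(current.clone());
    }
}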
(func $ext_hashing_keccak_256_version_2
    (param $data i64) (param $out i32))
(func $ext_hashing_keccak_512_version_2
    (param $data i64) (param $out i32))
(func $ext_hashing_sha2_256_version_2
    (param $data i64) (param $out i32))
(func $ext_hashing_blake2_128_version_2
    (param $data i64) (param $out i32))
(func $ext_hashing_blake2_256_version_2
    (param $data i64) (param $out i32))
(func $ext_hashing_twox_64_version_2
    (param $data i64) (param $out i32))
(func $ext_hashing_twox_128_version_2
    (param $data i64) (param $out i32))
(func $ext_hashing_twox_256_version_2
    (param $data i64) (param $out i32))
(func $ext_trie_blake2_256_root_version_3
    (param $data i64) (param $version i32) (param $out i32))
(func $ext_trie_blake2_256_ordered_root_version_3
    (param $data i64) (param $version i32) (param $out i32))
(func $ext_trie_keccak_256_root_version_3
    (param $data i64) (param $version i32) (param $out i32))
(func $ext_trie_keccak_256_ordered_root_version_3
    (param $data i64) (param $version i32) (param $out i32))
(func $ext_default_child_storage_root_version_3
    (param $child_storage_key i64) (param $out i32))
(func $ext_crypto_ed25519_generate_version_2
    (param $key_type_id i32) (param $seed i64) (param $out i32))
(func $ext_crypto_sr25519_generate_version_2
    (param $key_type_id i32) (param $seed i64) (param $out i32) (return i32))
(func $ext_crypto_ecdsa_generate_version_2
    (param $key_type_id i32) (param $seed i64) (param $out i32) (return i32))

          The behaviour of these functions is identical to their version 1 or version 2 counterparts. Instead of allocating a buffer, writing the output to it, and returning a pointer to it, the new version of these functions accepts an out parameter containing the memory location where the host writes the output. The output is always of a size known at compilation time. The runtime execution stops with an error if out is outside of the range of the memory of the virtual machine.

(func $ext_default_child_storage_root_version_3
    (param $child_storage_key i64) (param $out i32))
(func $ext_storage_root_version_3
    (param $out i32))

The behaviour of these functions is identical to their version 1 and version 2 counterparts. Instead of allocating a buffer, writing the output to it, and returning a pointer to it, the new versions of these functions accept an out parameter containing the memory location where the host writes the output. The output is always of a size known at compilation time. The runtime execution stops with an error if out is outside of the range of the memory of the virtual machine.


          I have taken the liberty to take the version 1 of these functions as a base rather than the version 2, as a PPP deprecating the version 2 of these functions has previously been accepted: https://github.com/w3f/PPPs/pull/6.

(func $ext_storage_clear_prefix_version_3
    (param $prefix i64) (param $limit i64) (param $removed_count_out i32)
    (return i32))
(func $ext_default_child_storage_clear_prefix_version_3
    (param $child_storage_key i64) (param $prefix i64)
    (param $limit i64) (param $removed_count_out i32) (return i32))
(func $ext_default_child_storage_kill_version_4
    (param $child_storage_key i64) (param $limit i64)
    (param $removed_count_out i32) (return i32))

The behaviour of these functions is identical to their version 2 and 3 counterparts. Instead of allocating a buffer, writing the output to it, and returning a pointer to it, the version 3 and 4 of these functions accept a removed_count_out parameter containing the memory location of an 8-byte buffer where the host writes the number of keys that were removed, in little endian. The runtime execution stops with an error if removed_count_out is outside of the range of the memory of the virtual machine. The functions return 1 to indicate that there are keys remaining, and 0 to indicate that all keys have been removed.


          Note that there is an alternative proposal to add new host functions with the same names: https://github.com/w3f/PPPs/pull/7. This alternative doesn't conflict with this one except for the version number. One proposal or the other will have to use versions 4 and 5 rather than 3 and 4.

(func $ext_crypto_ed25519_sign_version_2
    (param $key_type_id i32) (param $key i32) (param $msg i64) (param $out i32) (return i32))
(func $ext_crypto_sr25519_sign_version_2
    (param $key_type_id i32) (param $key i32) (param $msg i64) (param $out i32) (return i32))
(func $ext_crypto_ecdsa_sign_version_2
    (param $key_type_id i32) (param $key i32) (param $msg i64) (param $out i32) (return i32))
(func $ext_crypto_ecdsa_sign_prehashed_version_2
    (param $key_type_id i32) (param $key i32) (param $msg i64) (param $out i32) (return i64))

          The behaviour of these functions is identical to their version 1 counterparts. The new versions of these functions accept an out parameter containing the memory location where the host writes the signature. The runtime execution stops with an error if out is outside of the range of the memory of the virtual machine, even if the function wouldn't write anything to out. The signatures are always of a size known at compilation time. On success, these functions return 0. If the public key can't be found in the keystore, these functions return 1 and do not write anything to out.


          Note that the return value is 0 on success and 1 on failure, while the previous version of these functions write 1 on success (as it represents a SCALE-encoded Some) and 0 on failure (as it represents a SCALE-encoded None). Returning 0 on success and non-zero on failure is consistent with common practices in the C programming language and is less surprising than the opposite.

(func $ext_crypto_secp256k1_ecdsa_recover_version_3
    (param $sig i32) (param $msg i32) (param $out i32) (return i64))
(func $ext_crypto_secp256k1_ecdsa_recover_compressed_version_3
    (param $sig i32) (param $msg i32) (param $out i32) (return i64))

The behaviour of these functions is identical to their version 2 counterparts. The new versions of these functions accept an out parameter containing the memory location where the host writes the recovered public key. The runtime execution stops with an error if out is outside of the range of the memory of the virtual machine, even if the function wouldn't write anything to out. The recovered public keys are always of a size known at compilation time. On success, these functions return 0. On failure, these functions return a non-zero value and do not write anything to out.

The non-zero value returned on failure is:

• 1: incorrect value of R or S
• 2: incorrect value of V
• 3: invalid signature

          These values are equal to the values returned on error by the version 2 (see https://spec.polkadot.network/chap-host-api#defn-ecdsa-verify-error), but incremented by 1 in order to reserve 0 for success.

(func $ext_crypto_ed25519_num_public_keys_version_1
    (param $key_type_id i32) (return i32))
(func $ext_crypto_ed25519_public_key_version_2
    (param $key_type_id i32) (param $key_index i32) (param $out i32))
(func $ext_crypto_sr25519_num_public_keys_version_1
    (param $key_type_id i32) (return i32))
(func $ext_crypto_sr25519_public_key_version_2
    (param $key_type_id i32) (param $key_index i32) (param $out i32))
(func $ext_crypto_ecdsa_num_public_keys_version_1
    (param $key_type_id i32) (return i32))
(func $ext_crypto_ecdsa_public_key_version_2
    (param $key_type_id i32) (param $key_index i32) (param $out i32))

These functions supersede the ext_crypto_ed25519_public_key_version_1, ext_crypto_sr25519_public_key_version_1, and ext_crypto_ecdsa_public_key_version_1 host functions.

Instead of calling ext_crypto_ed25519_public_key_version_1 in order to obtain the list of all keys at once, the runtime should instead call ext_crypto_ed25519_num_public_keys_version_1 in order to obtain the number of public keys available, then ext_crypto_ed25519_public_key_version_2 repeatedly. The ext_crypto_ed25519_public_key_version_2 function writes the public key of the given key_index to the memory location designated by out. The key_index must be between 0 (included) and n (excluded), where n is the value returned by ext_crypto_ed25519_num_public_keys_version_1. Execution must trap if key_index is out of range.
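A hypothetical runtime-side sketch of that enumeration for ed25519, whose public keys are 32 bytes:

extern "C" {
    fn ext_crypto_ed25519_num_public_keys_version_1(key_type_id: i32) -> i32;
    fn ext_crypto_ed25519_public_key_version_2(key_type_id: i32, key_index: i32, out: i32);
}

/// Collect all ed25519 public keys of a key type, one host call per key.
fn ed25519_public_keys(key_type_id: i32) -> Vec<[u8; 32]> {
    let n = unsafe { ext_crypto_ed25519_num_public_keys_version_1(key_type_id) };
    let mut keys = Vec::with_capacity(n as usize);
    for key_index in 0..n {
        let mut out = [0u8; 32];
        // key_index stays within [0, n); anything else would trap.
        unsafe { ext_crypto_ed25519_public_key_version_2(key_type_id, key_index, out.as_mut_ptr() as i32) };
        keys.push(out);
    }
    keys
}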


          The same explanations apply for ext_crypto_sr25519_public_key_version_1 and ext_crypto_ecdsa_public_key_version_1.


          Host implementers should be aware that the list of public keys (including their ordering) must not change while the runtime is running. This is most likely done by copying the list of all available keys either at the start of the execution or the first time the list is accessed.

(func $ext_offchain_http_request_start_version_2
  (param $method i64) (param $uri i64) (param $meta i64) (result i32))

          The behaviour of this function is identical to its version 1 counterpart. Instead of allocating a buffer, writing the request identifier in it, and returning a pointer to it, the version 2 of this function simply returns the newly-assigned identifier to the HTTP request. On failure, this function returns -1. An identifier of -1 is invalid and is reserved to indicate failure.

(func $ext_offchain_http_request_write_body_version_2
  (param $method i64) (param $uri i64) (param $meta i64) (result i32))
(func $ext_offchain_http_response_read_body_version_2
  (param $request_id i32) (param $buffer i64) (param $deadline i64) (result i64))

          The behaviour of these functions is identical to their version 1 counterpart. Instead of allocating a buffer, writing two bytes in it, and returning a pointer to it, the new version of these functions simply indicates what happened:

• For ext_offchain_http_request_write_body_version_2, 0 on success.
• For ext_offchain_http_response_read_body_version_2, 0 or a non-zero number of bytes on success.
• -1 if the deadline was reached.
• -2 if there was an I/O error while processing the request.
• -3 if the identifier of the request is invalid.

These values are equal to the values returned on error by the version 1 functions (see https://spec.polkadot.network/chap-host-api#defn-http-error), but tweaked in order to reserve positive numbers for success.

When it comes to ext_offchain_http_response_read_body_version_2, host implementers must not read too much data at once, in order not to create ambiguity in the returned value. Given that the size of the buffer is always less than or equal to 4 GiB, this is not a problem.

(func $ext_offchain_http_response_wait_version_2
    (param $ids i64) (param $deadline i64) (param $out i32))

The behaviour of this function is identical to its version 1 counterpart. Instead of allocating a buffer, writing the output to it, and returning a pointer to it, the new version of this function accepts an out parameter containing the memory location where the host writes the output. The runtime execution stops with an error if out is outside of the range of the memory of the virtual machine.

The encoding of the response code is also modified compared to its version 1 counterpart: each response code now encodes to 4 little-endian bytes as described below:

• 100-999: the request has finished with the given HTTP status code.
• -1 if the deadline was reached.
• -2 if there was an I/O error while processing the request.
• -3 if the identifier of the request is invalid.

The buffer passed to out must always have a size of 4 * n, where n is the number of elements in ids.
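As an illustration, the runtime could interpret the buffer written to out as follows; this helper is a sketch, not part of the proposed API.

/// Decodes the buffer written by ext_offchain_http_response_wait_version_2:
/// one little-endian i32 status per requested id, in order.
fn decode_statuses(out: &[u8]) -> Vec<i32> {
    out.chunks_exact(4)
        .map(|c| i32::from_le_bytes([c[0], c[1], c[2], c[3]]))
        .collect()
}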

(func $ext_offchain_http_response_header_name_version_1
    (param $request_id i32) (param $header_index i32) (param $out i64) (result i64))
(func $ext_offchain_http_response_header_value_version_1
    (param $request_id i32) (param $header_index i32) (param $out i64) (result i64))

These functions supersede the ext_offchain_http_response_headers_version_1 host function.

Contrary to ext_offchain_http_response_headers_version_1, only one header indicated by header_index can be read at a time. Instead of calling ext_offchain_http_response_headers_version_1 once, the runtime should call ext_offchain_http_response_header_name_version_1 and ext_offchain_http_response_header_value_version_1 multiple times with an increasing header_index, until a value of -1 is returned.

These functions accept an out parameter containing a pointer-size to the memory location where the header name or value should be written. The runtime execution stops with an error if out is outside of the range of the memory of the virtual machine, even if the function wouldn't write anything to out.

These functions return the size, in bytes, of the header name or header value. If the request doesn't exist or is in an invalid state (as documented for ext_offchain_http_response_headers_version_1), or the header_index is out of range, a value of -1 is returned. Given that the host must never write more bytes than the size of the buffer in out, and that the size of this buffer is expressed as a 32-bit number, a 64-bit value of -1 is not ambiguous.

If the buffer in out is too small to fit the entire header name or value, only the bytes that fit are written and the rest are discarded.
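A hypothetical iteration over all headers of a response could therefore look like the sketch below. The extern bindings and the fixed 256-byte buffers are illustrative assumptions; the pointer-size values pack the pointer in the lower 32 bits and the buffer length in the upper 32 bits.

extern "C" {
    fn ext_offchain_http_response_header_name_version_1(request_id: i32, header_index: i32, out: i64) -> i64;
    fn ext_offchain_http_response_header_value_version_1(request_id: i32, header_index: i32, out: i64) -> i64;
}

/// Packs a (pointer, length) pair into a pointer-size value.
fn pointer_size(buf: &mut [u8]) -> i64 {
    ((buf.as_mut_ptr() as u64) | ((buf.len() as u64) << 32)) as i64
}

/// Collects all (name, value) header pairs of the given request.
fn response_headers(request_id: i32) -> Vec<(Vec<u8>, Vec<u8>)> {
    let mut headers = Vec::new();
    for header_index in 0.. {
        let mut name = vec![0u8; 256];
        let mut value = vec![0u8; 256];
        let name_len = unsafe {
            ext_offchain_http_response_header_name_version_1(request_id, header_index, pointer_size(&mut name))
        };
        if name_len == -1 {
            break; // no more headers, or the request is in an invalid state
        }
        let value_len = unsafe {
            ext_offchain_http_response_header_value_version_1(request_id, header_index, pointer_size(&mut value))
        };
        // If a buffer was too small, only the bytes that fit were written.
        name.truncate((name_len as usize).min(256));
        value.truncate((value_len.max(0) as usize).min(256));
        headers.push((name, value));
    }
    headers
}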

(func $ext_offchain_submit_transaction_version_2
    (param $data i64) (result i32))
(func $ext_offchain_http_request_add_header_version_2
    (param $request_id i32) (param $name i64) (param $value i64) (result i32))

Instead of allocating a buffer, writing 1 or 0 in it, and returning a pointer to it, the version 2 of these functions returns 0 or 1, where 0 indicates success and 1 indicates failure. The runtime must interpret any non-zero value as failure, but the client must always return 1 in case of failure.

(func $ext_offchain_local_storage_read_version_1
    (param $kind i32) (param $key i64) (param $value_out i64) (param $offset i32) (result i64))

This function supersedes the ext_offchain_local_storage_get_version_1 host function, and uses an API and logic similar to ext_storage_read_version_2.

It reads the offchain local storage key indicated by kind and key, starting at the byte indicated by offset, and writes the value to the pointer-size indicated by value_out.

The function returns the number of bytes that were written to the value_out buffer. If the entry doesn't exist, a value of -1 is returned. Given that the host must never write more bytes than the size of the buffer in value_out, and that the size of this buffer is expressed as a 32-bit number, a 64-bit value of -1 is not ambiguous.

The runtime execution stops with an error if value_out is outside of the range of the memory of the virtual machine, even if the size of the buffer is 0 or if the amount of data to write would be 0 bytes.

(func $ext_offchain_network_peer_id_version_1
    (param $out i64))

This function writes the PeerId of the local node to the memory location indicated by out. A PeerId is always 38 bytes long. The runtime execution stops with an error if out is outside of the range of the memory of the virtual machine.

(func $ext_input_size_version_1
    (result i64))
(func $ext_input_read_version_1
    (param $offset i64) (param $out i64))

When a runtime function is called, the host uses the allocator to allocate memory within the runtime into which it writes the input data. These two new host functions provide an alternative way to access the input that doesn't make use of the allocator.

The ext_input_size_version_1 host function returns the size in bytes of the input data.

The ext_input_read_version_1 host function copies some data from the input data to the memory of the runtime. The offset parameter indicates the offset within the input data at which to start copying, and must be less than or equal to the value returned by ext_input_size_version_1. The out parameter is a pointer-size containing the buffer to write to. The runtime execution stops with an error if offset is strictly greater than the size of the input data, or if out is outside of the range of the memory of the virtual machine, even if the amount of data to copy would be 0 bytes.
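As an illustration, a runtime entry point could fetch its input without the allocator roughly as follows. The extern bindings are hypothetical stand-ins for the WAT signatures above, and the pointer-size packing (pointer in the lower 32 bits, length in the upper 32 bits) follows the convention used by the host API.

extern "C" {
    fn ext_input_size_version_1() -> i64;
    fn ext_input_read_version_1(offset: i64, out: i64);
}

/// Copies the entire call input into a Vec, without using the host-side allocator.
fn read_input() -> Vec<u8> {
    unsafe {
        let len = ext_input_size_version_1() as usize;
        let mut data = vec![0u8; len];
        // Pack (pointer, length) into a pointer-size value.
        let out = (data.as_mut_ptr() as u64) | ((len as u64) << 32);
        ext_input_read_version_1(0, out as i64);
        data
    }
}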

Other changes

In addition to the new host functions, this RFC proposes two changes to the runtime-host interface:

• The following function signature is now also accepted for runtime entry points: (func (result i64)).
• Runtimes no longer need to expose a constant named __heap_base.

All the host functions that are being superseded by new host functions are now considered deprecated and should no longer be used. The following other host functions are similarly considered deprecated:

• ext_storage_get_version_1
• ext_default_child_storage_get_version_1
• ext_allocator_malloc_version_1
• ext_allocator_free_version_1
• ext_offchain_network_state_version_1

Drawbacks

This RFC might be difficult to implement in Substrate due to the internal code design. It is not clear to the author of this RFC how difficult it would be.

Prior Art

The API of these new functions was heavily inspired by the APIs used by the C programming language.

Unresolved Questions

The changes in this RFC would need to be benchmarked. This involves implementing the RFC and measuring the speed difference.

It is expected that most host functions are faster than, or equal in speed to, their deprecated counterparts, with the following exceptions:

• ext_input_size_version_1/ext_input_read_version_1 is inherently slower than obtaining a buffer with the entire data, due to the two extra function calls and the extra copying. However, given that this only happens once per runtime call, the cost is expected to be negligible.
• The ext_crypto_*_public_keys, ext_offchain_network_state, and ext_offchain_http_* host functions are likely slightly slower than their deprecated counterparts, but given that they are used only in offchain workers this is acceptable.
• It is unclear how replacing ext_storage_get with ext_storage_read and ext_default_child_storage_get with ext_default_child_storage_read will impact performance.
• It is unclear how the changes to ext_storage_next_key and ext_default_child_storage_next_key will impact performance.

Future Possibilities

After this RFC, we can remove the allocator from the host's source code altogether in a future version, by removing support for all the deprecated host functions. This would remove the ability to synchronize older blocks, which is probably controversial and requires some preparation that is out of scope of this RFC.

(source)

          RFC-0006: Dynamic Pricing for Bulk Coretime Sales

Start Date: July 09, 2023
Description: A dynamic pricing model to adapt the regular price for bulk coretime sales
Authors: Tommi Enenkel (Alice und Bob)
License: MIT

          Summary

This RFC proposes a dynamic pricing model for the sale of Bulk Coretime on the Polkadot UC. The proposed model updates the regular price of cores for each sale period, taking into account the number of cores sold in the previous sale, as well as a limit of cores and a target number of cores sold. It ensures a minimum price and limits price growth to a maximum price increase factor, while also giving governance control over the steepness of the price change curve. It allows governance to address challenges arising from changing market conditions and should offer predictable and controlled price adjustments.

Accompanying visualizations are provided at [1].

Motivation

RFC-1 proposes periodic Bulk Coretime Sales as a mechanism to sell continuous regions of blockspace (suggested to be 4 weeks in length). A number of Blockspace Regions (compare RFC-1 & RFC-3) are provided for sale to the Broker-Chain each period and shall be sold in a way that provides value-capture for the Polkadot network. The exact pricing mechanism is out of scope for RFC-1 and shall be provided by this RFC.

A dynamic pricing model is needed. A limited number of Regions are offered for sale each period. The model needs to find the price for a period based on supply and demand in the previous period.

The model shall give Coretime consumers predictability about upcoming price developments and confidence that Polkadot governance can adapt the pricing model to changing market conditions.

Requirements

1. The solution SHOULD provide a dynamic pricing model that increases the price with growing demand and reduces the price with shrinking demand.
2. The solution SHOULD have a slow rate of change for the price if the number of Regions sold is close to a given sales target, and increase the rate of change as the number of sales deviates from the target.
3. The solution SHOULD provide the possibility to always have a minimum price per Region.
4. The solution SHOULD provide a maximum factor of price increase should the limit of Regions sold per period be reached.
5. The solution SHOULD allow governance to control the steepness of the price function.

Stakeholders

The primary stakeholders of this RFC are:

• Protocol researchers and developers
• Polkadot DOT token holders
• Polkadot parachain teams
• Brokers involved in the trade of Bulk Coretime

Explanation

Overview

The dynamic pricing model sets the new price based on supply and demand in the previous period. The model is a function of the number of Regions sold, piecewise-defined by two power functions.

• The left side ranges from 0 to the target. It represents situations where demand was lower than the target.
• The right side ranges from the target to the limit. It represents situations where demand was higher than the target.

The curve of the function forms a plateau around the target, falling off to the left and rising up to the right. The shape of the plateau can be controlled via a scale factor for the left side and the right side of the function respectively.

Parameters

From here on, we will also refer to Regions sold as 'cores' to stay congruent with RFC-1.

Name                      | Suggested Value | Description                                      | Constraints
BULK_LIMIT                | 45              | The maximum number of cores being sold           | 0 < BULK_LIMIT
BULK_TARGET               | 30              | The target number of cores being sold            | 0 < BULK_TARGET <= BULK_LIMIT
MIN_PRICE                 | 1               | The minimum price a core will always cost        | 0 < MIN_PRICE
MAX_PRICE_INCREASE_FACTOR | 2               | The maximum factor by which the price can change | 1 < MAX_PRICE_INCREASE_FACTOR
SCALE_DOWN                | 2               | The steepness of the left side of the function   | 0 < SCALE_DOWN
SCALE_UP                  | 2               | The steepness of the right side of the function  | 0 < SCALE_UP

Function

P(n) = \begin{cases}
    (P_{\text{old}} - P_{\text{min}}) \left(1 - \left(\frac{T - n}{T}\right)^d\right) + P_{\text{min}} & \text{if } n \leq T \\
    (F - 1) \cdot P_{\text{old}} \cdot \left(\frac{n - T}{L - T}\right)^u + P_{\text{old}} & \text{if } n > T
\end{cases}

• $P_{\text{old}}$ is old_price, the price of a core in the previous period.
• $P_{\text{min}}$ is MIN_PRICE, the minimum price a core will always cost.
• $F$ is MAX_PRICE_INCREASE_FACTOR, the factor by which the price can maximally change from one period to another.
• $d$ is SCALE_DOWN, the steepness of the left side of the function.
• $u$ is SCALE_UP, the steepness of the right side of the function.
• $T$ is BULK_TARGET, the target number of cores being sold.
• $L$ is BULK_LIMIT, the maximum number of cores being sold.
• $n$ is cores_sold, the number of cores being sold.

Left side

The left side is a power function that describes an increasing, concave-downward curve that approaches old_price. We realize this by using the form $y = a(1 - x^d)$, usually used as a downward-sloping curve, but in our case flipped horizontally by letting the argument $x = \frac{T-n}{T}$ decrease with $n$, doubly inverting the curve.

This approach is chosen over a decaying exponential because it lets us better control the shape of the plateau, especially allowing us to get a straight line by setting SCALE_DOWN to $1$.

Right side

The right side is a power function of the form $y = a(x^u)$.

Pseudo-code

NEW_PRICE := IF CORES_SOLD <= BULK_TARGET THEN
    (OLD_PRICE - MIN_PRICE) * (1 - ((BULK_TARGET - CORES_SOLD)^SCALE_DOWN / BULK_TARGET^SCALE_DOWN)) + MIN_PRICE
ELSE
    ((MAX_PRICE_INCREASE_FACTOR - 1) * OLD_PRICE * ((CORES_SOLD - BULK_TARGET)^SCALE_UP / (BULK_LIMIT - BULK_TARGET)^SCALE_UP)) + OLD_PRICE
END IF
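For concreteness, here is a minimal Rust sketch of the same formula, using f64 purely for readability (an on-chain implementation would use fixed-point arithmetic); parameter names follow the table above.

/// Sketch of the price-update function, piecewise-defined around the target.
fn new_price(
    old_price: f64,                 // P_old
    cores_sold: u32,                // n
    bulk_target: u32,               // T
    bulk_limit: u32,                // L
    min_price: f64,                 // P_min
    max_price_increase_factor: f64, // F
    scale_down: f64,                // d
    scale_up: f64,                  // u
) -> f64 {
    let (n, t, l) = (cores_sold as f64, bulk_target as f64, bulk_limit as f64);
    if n <= t {
        // Left side: rises from near MIN_PRICE towards OLD_PRICE at the target.
        (old_price - min_price) * (1.0 - ((t - n) / t).powf(scale_down)) + min_price
    } else {
        // Right side: grows from OLD_PRICE up to F * OLD_PRICE at the limit.
        (max_price_increase_factor - 1.0) * old_price * ((n - t) / (l - t)).powf(scale_up)
            + old_price
    }
}

fn main() {
    // Baseline configuration, selling exactly the target: the price is unchanged.
    let p = new_price(1000.0, 30, 30, 45, 1.0, 2.0, 2.0, 2.0);
    assert!((p - 1000.0).abs() < 1e-9);
    // Selling the limit doubles the price (MAX_PRICE_INCREASE_FACTOR = 2).
    let p = new_price(1000.0, 45, 30, 45, 1.0, 2.0, 2.0, 2.0);
    assert!((p - 2000.0).abs() < 1e-9);
}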

Properties of the Curve

Minimum Price

We introduce MIN_PRICE to control the minimum price.

The left side of the function shall be allowed to come close to 0 if the number of cores sold approaches 0. The rationale is that if there are actually 0 cores sold, the previous sale price was too high and the price needs to adapt quickly.

Price forms a plateau around the target

If the number of cores is close to BULK_TARGET, less extreme price changes might be sensible. This ensures that a drop or increase in sold cores doesn't lead to immediate price changes; instead, the price adapts slowly. Only if more extreme changes in the number of sold cores occur does the slope of the price curve increase.

We introduce SCALE_DOWN and SCALE_UP to control the steepness of the left and the right side of the function respectively.

Max price increase factor

We introduce MAX_PRICE_INCREASE_FACTOR as the factor that controls how much the price may increase from one period to another.

Introducing this variable gives governance an additional control lever and avoids the need for a future runtime upgrade.

Example Configurations

Baseline

This example proposes the baseline parameters. If not mentioned otherwise, the other examples use these values.

The minimum price of a core is 1 DOT, and the price can at most double every 4 weeks. Price change around BULK_TARGET is dampened slightly.

BULK_TARGET = 30
BULK_LIMIT = 45
MIN_PRICE = 1
MAX_PRICE_INCREASE_FACTOR = 2
SCALE_DOWN = 2
SCALE_UP = 2
OLD_PRICE = 1000

More aggressive pricing

We might want more aggressive price growth, allowing the price to triple every 4 weeks and having a linear increase in price on the right side.

BULK_TARGET = 30
BULK_LIMIT = 45
MIN_PRICE = 1
MAX_PRICE_INCREASE_FACTOR = 3
SCALE_DOWN = 2
SCALE_UP = 1
OLD_PRICE = 1000

Conservative pricing to ensure quick corrections in an affluent market

If governance considers the risk that a sudden surge in DOT price might price chains out of bulk coretime markets, it can ensure the model quickly reacts to a sharp drop in demand by setting 0 < SCALE_DOWN < 1 and setting the max price increase factor more conservatively.

BULK_TARGET = 30
BULK_LIMIT = 45
MIN_PRICE = 1
MAX_PRICE_INCREASE_FACTOR = 1.5
SCALE_DOWN = 0.5
SCALE_UP = 2
OLD_PRICE = 1000

Linear pricing

By setting the scaling factors to 1 and potentially adapting the max price increase factor, we can achieve a linear function.

BULK_TARGET = 30
BULK_LIMIT = 45
MIN_PRICE = 1
MAX_PRICE_INCREASE_FACTOR = 1.5
SCALE_DOWN = 1
SCALE_UP = 1
OLD_PRICE = 1000

Drawbacks

None at present.

Prior Art and References

This pricing model is based on the requirements from the basic linear solution proposed in RFC-1, which is a simple dynamic pricing model intended only as a proof of concept. The present model adds additional considerations to make the model more adaptable under real conditions.

Future Possibilities

This RFC, if accepted, shall be implemented in conjunction with RFC-1.

References

(source)

          RFC-34: XCM Absolute Location Account Derivation

Start Date: 05 October 2023
Description: XCM Absolute Location Account Derivation
Authors: Gabriel Facco de Arruda

Summary

This RFC proposes changes that enable the use of absolute locations in AccountId derivations, which allows protocols built using XCM to have static account derivations in any runtime, regardless of its position in the family hierarchy.

Motivation

These changes would allow protocol builders to leverage absolute locations to maintain the exact same derived account address across all networks in the ecosystem, thus enhancing user experience.

One such protocol, and the original motivation for this proposal, is InvArch's Saturn Multisig, which gives users a unifying multisig and DAO experience across all XCM-connected chains.

Stakeholders

• Ecosystem developers

Explanation

This proposal aims to make it possible to derive accounts for absolute locations, enabling protocols that require the ability to maintain the same derived account in any runtime. This is done by deriving accounts from the hash of described absolute locations, which are static across different destinations.

The same location can be represented in relative form and absolute form like so:

// Relative location (from own perspective)
{
    parents: 0,
    interior: Here
}

// Relative location (from perspective of parent)
{
    parents: 0,
    interior: [Parachain(1000)]
}

// Relative location (from perspective of sibling)
{
    parents: 1,
    interior: [Parachain(1000)]
}

// Absolute location
[GlobalConsensus(Kusama), Parachain(1000)]

Using DescribeFamily, the above relative locations would be described like so:

// Relative location (from own perspective)
// Not possible.

// Relative location (from perspective of parent)
(b"ChildChain", Compact::<u32>::from(*index)).encode()

// Relative location (from perspective of sibling)
(b"SiblingChain", Compact::<u32>::from(*index)).encode()

The proposed description for absolute locations would follow the same pattern, like so:

(
    b"GlobalConsensus",
    network_id,
    b"Parachain",
    Compact::<u32>::from(para_id),
    tail
).encode()
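To illustrate the effect, an account could be derived from this description roughly as follows. This sketch assumes blake2_256 as the hash, in line with HashedDescription's use of it, and treats network_id as raw bytes purely for simplicity.

use parity_scale_codec::{Compact, Encode};

/// Derives a 32-byte account from the proposed absolute-location description.
/// `network_id` is treated as raw bytes here purely for illustration.
fn derive_account(network_id: &[u8], para_id: u32, tail: &[u8]) -> [u8; 32] {
    let description = (
        b"GlobalConsensus",
        network_id,
        b"Parachain",
        Compact::<u32>::from(para_id),
        tail,
    )
        .encode();
    // The description is static for a given absolute location, so the derived
    // account is the same on every chain that uses this derivation.
    sp_core::hashing::blake2_256(&description)
}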

This proposal requires the modification of two XCM types defined in the xcm-builder crate: the WithComputedOrigin barrier and the DescribeFamily MultiLocation descriptor.

WithComputedOrigin

The WithComputedOrigin barrier serves as a wrapper around other barriers, consuming origin modification instructions and applying them to the message origin before passing it to the inner barriers. One of the origin-modifying instructions is UniversalOrigin, which signals that the origin should be a Universal Origin that represents the location as an absolute path prefixed by the GlobalConsensus junction.

In its current state the barrier transforms locations with the UniversalOrigin instruction into relative locations, so the proposed changes aim to make it return absolute locations instead.

DescribeFamily

The DescribeFamily location descriptor is part of the HashedDescription MultiLocation hashing system and exists to describe locations in an easy format for encoding and hashing, so that an AccountId can be derived from the MultiLocation.

This implementation contains a match statement that does not match against absolute locations, so the changes involve matching against absolute locations and providing appropriate descriptions for hashing.

Drawbacks

No drawbacks have been identified with this proposal.

Testing, Security, and Privacy

Tests can be done using simple unit tests, as this is not a change to XCM itself but rather to types defined in xcm-builder.

Security considerations should be taken with the implementation to make sure no unwanted behavior is introduced.

This proposal does not introduce any privacy considerations.

Performance, Ergonomics, and Compatibility

Performance

Depending on the final implementation, this proposal should not introduce much performance overhead.

Ergonomics

The ergonomics of this proposal depend on the final implementation details.

Compatibility

Backwards compatibility should remain unchanged, although that depends on the final implementation.

Prior Art and References

• DescribeFamily type: https://github.com/paritytech/polkadot-sdk/blob/master/polkadot/xcm/xcm-builder/src/location_conversion.rs#L122
• WithComputedOrigin type: https://github.com/paritytech/polkadot-sdk/blob/master/polkadot/xcm/xcm-builder/src/barriers.rs#L153

Unresolved Questions

Implementation details and overall code are still up for discussion.

          (source)


          RFC-0035: Conviction Voting Delegation Modifications

Start Date: October 10, 2023
Description: Conviction Voting Delegation Modifications
Authors: ChaosDAO

Summary

This RFC proposes modifications to voting power delegations as part of the Conviction Voting pallet. The changes being proposed include:

1. Allow a Delegator to vote independently of their Delegate if they so desire.
2. Allow nested delegations – for example, Charlie delegates to Bob, who delegates to Alice – when Alice votes, then both Bob and Charlie vote alongside Alice (in the current implementation Charlie will not vote when Alice votes).
3. Make a change so that when a Delegate votes abstain, their delegated votes also vote abstain.
4. Allow a Delegator to delegate/undelegate their votes for all tracks with a single call.

Motivation

It has become clear since the launch of OpenGov that there are a few common tropes which pop up time and time again:

1. The frequency of referenda is often too high for network participants to have sufficient time to review, comprehend, and ultimately vote on each individual referendum. This means that these network participants end up being inactive in on-chain governance.
2. There are active network participants who review every referendum and provide feedback in an attempt to help the network thrive – but oftentimes these participants do not control enough voting power to influence the network with their positive efforts.
3. Delegating votes for all tracks currently requires long batched calls, which result in high fees for the Delegator – resulting in a reluctance from many to delegate their votes.

We believe (based on feedback from token holders with a larger stake in the network) that if some changes were made to delegation mechanics, these larger stakeholders would be more likely to delegate their voting power to active network participants – thus greatly increasing the support turnout.

Stakeholders

The primary stakeholders of this RFC are:

• The Polkadot Technical Fellowship, who will have to research and implement the technical aspects of this RFC
• DOT token holders in general

Explanation

This RFC proposes 4 changes to the convictionVoting pallet logic in order to improve the user experience of those delegating their voting power to another account:

1. Allow a Delegator to vote independently of their Delegate if they so desire – this would empower network participants to more actively delegate their voting power to active voters, removing the tedious steps of having to undelegate across an entire track every time they do not agree with their Delegate's voting direction for a particular referendum.
2. Allow nested delegations – for example, Charlie delegates to Bob, who delegates to Alice – when Alice votes, then both Bob and Charlie vote alongside Alice (in the current runtime Charlie will not vote when Alice votes). This would allow network participants who control multiple (possibly derived) accounts to delegate all of their voting power to a single account under their control, which would in turn delegate to a more active voting participant. Then, if the Delegator wishes to vote independently of their Delegate, they can control all of their voting power from a single account, which again removes the pain point of having to issue multiple undelegate extrinsics in the event that they disagree with their Delegate.
3. Have delegated votes follow their Delegate's abstain votes – there are times when Delegates may vote abstain on a particular referendum, and adding this functionality will increase the support of that referendum. It has the secondary benefit that Validators who delegate their voting power do not lose points in the 1KV program in the event that their Delegate votes abstain (another pain point which may be preventing those network participants from delegating).
4. Allow a Delegator to delegate/undelegate their votes for all tracks with a single call – in order to delegate votes across all tracks, a user must currently batch 15 calls, resulting in high costs for delegation. A single delegate_all/undelegate_all call would considerably reduce the complexity, and therefore the costs, of delegation for prospective Delegators.

Drawbacks

We do not foresee any drawbacks from implementing these changes. If anything, we believe that this should help to increase overall voter turnout (via the means of delegation), which we see as a net positive.

Testing, Security, and Privacy

We feel that the Polkadot Technical Fellowship would be the most competent collective to identify the testing requirements for the ideas presented in this RFC.

Performance, Ergonomics, and Compatibility

Performance

This change may add extra chain storage requirements on Polkadot, especially with respect to nested delegations.

Ergonomics & Compatibility

The change to add nested delegations may affect governance interfaces such as Nova Wallet, which will have to apply changes to their indexers to support nested delegations. It may also affect the Polkadot Delegation Dashboard as well as Polkassembly & SubSquare.

We want to highlight the importance for ecosystem builders of creating a mechanism for indexers and wallets to be able to understand that changes have occurred, such as increasing the pallet version.

Prior Art and References

N/A

Unresolved Questions

N/A

Future Possibilities

Additionally, we would like to re-open the conversation about the potential for free delegations. This was discussed by Dr Gavin Wood at Sub0 2022, and we feel this would go a long way towards increasing the number of network participants that are delegating: https://youtu.be/hSoSA6laK3Q?t=526

Overall, we strongly feel that delegations are a great way to increase voter turnout, and the ideas presented in this RFC would hopefully help in that aspect.

(source)


          RFC-0044: Rent based registration model

Start Date: 6 November 2023
Description: A new rent-based parachain registration model
Authors: Sergej Sakac

Summary

This RFC proposes a new model for sustainable on-demand parachain registration, involving a smaller initial deposit and periodic rent payments. The new model considers that on-demand chains may be unregistered and later re-registered. The proposed solution also ensures a quick startup for on-demand chains on Polkadot in such cases.

Motivation

With the support of on-demand parachains on Polkadot, there is a need to explore a new, more cost-effective model for registering validation code. In the current model, the parachain manager is responsible for reserving a unique ParaId and covering the cost of storing the validation code of the parachain. These costs can escalate, particularly if the validation code is large. We need a better, sustainable model for registering on-demand parachains on Polkadot to help smaller teams deploy more easily.

This RFC suggests a new payment model to create a more financially viable approach to on-demand parachain registration. In this model, a lower initial deposit is required, followed by recurring payments upon parachain registration.

This new model will coexist with the existing one-time deposit payment model, offering teams seeking to deploy on-demand parachains on Polkadot a more cost-effective alternative.

Requirements

1. The solution SHOULD NOT affect the current model for registering validation code.
2. The solution SHOULD offer an easily configurable way for governance to adjust the initial deposit and recurring rent cost.
3. The solution SHOULD provide an incentive to prune validation code for which rent is not paid.
4. The solution SHOULD allow anyone to re-register validation code under the same ParaId without the need for redundant pre-checking if it was already verified before.
5. The solution MUST be compatible with the Agile Coretime model, as described in RFC#0001.
6. The solution MUST allow anyone to pay the rent.
7. The solution MUST prevent the removal of validation code if it could still be required for disputes or approval checking.

Stakeholders

• Future Polkadot on-demand Parachains

Explanation

This RFC proposes a set of changes that will enable the new rent-based approach to registering and storing validation code on-chain. Compared to the current one, the new model will require periodic rent payments. The parachain won't be pruned automatically if the rent is not paid, but by permitting anyone to prune the parachain and rewarding the caller, there will be an incentive for the removal of the validation code.

On-demand parachains should still be able to utilize the current one-time payment model. However, given the size of the deposit required, it's highly likely that most on-demand parachains will opt for the new rent-based model.

Importantly, this solution doesn't require any storage migrations in the current system, nor does it introduce any breaking changes. The following provides a detailed description of this solution.

Registering an on-demand parachain

In the current implementation of the registrar pallet, there are two constants that specify the deposit required for parachains to register and store their validation code:

trait Config {
	// -- snip --

	/// The deposit required for reserving a `ParaId`.
	#[pallet::constant]
	type ParaDeposit: Get<BalanceOf<Self>>;

	/// The deposit to be paid per byte stored on chain.
	#[pallet::constant]
	type DataDepositPerByte: Get<BalanceOf<Self>>;
}

This RFC proposes the addition of three new constants that will determine the payment amount and the frequency of the recurring rent payment:

trait Config {
	// -- snip --

	/// Defines how frequently the rent needs to be paid.
	///
	/// The duration is set in sessions instead of block numbers.
	#[pallet::constant]
	type RentDuration: Get<SessionIndex>;

	/// The initial deposit amount for registering validation code.
	///
	/// This is defined as a proportion of the deposit that would be required in the regular
	/// model.
	#[pallet::constant]
	type RentalDepositProportion: Get<Perbill>;

	/// The recurring rental cost, defined as a proportion of the initial rental registration deposit.
	#[pallet::constant]
	type RentalRecurringProportion: Get<Perbill>;
}

Users will be able to reserve a ParaId and register their validation code for a proportion of the regular deposit required. However, they must also make additional rent payments at intervals of T::RentDuration.

For registration under the new rental system, we will have to make modifications to the paras-registrar pallet. We should expose two new extrinsics for this:

mod pallet {
	// -- snip --

	pub fn register_rental(
		origin: OriginFor<T>,
		id: ParaId,
		genesis_head: HeadData,
		validation_code: ValidationCode,
	) -> DispatchResult { /* ... */ }

	pub fn pay_rent(origin: OriginFor<T>, id: ParaId) -> DispatchResult {
		/* ... */
	}
}

A call to register_rental will require the reservation of only a percentage of the deposit that would otherwise be required to register the validation code under the regular model. As described in the re-registration section below, we will also store the code hash of each parachain to enable faster re-registration after a parachain has been pruned. For this reason the total initial deposit amount is increased to account for that.

// The logic for calculating the initial deposit for a parachain registered with the
// new rent-based model:

let validation_code_deposit =
	per_byte_fee.saturating_mul((validation_code.0.len() as u32).into());
let head_deposit = per_byte_fee.saturating_mul((genesis_head.0.len() as u32).into());
let hash_deposit = per_byte_fee.saturating_mul(HASH_SIZE);

let deposit = T::RentalDepositProportion::get().mul_ceil(validation_code_deposit)
	.saturating_add(T::ParaDeposit::get())
	.saturating_add(head_deposit)
	.saturating_add(hash_deposit);

Once the ParaId is reserved and the validation code is registered, the rent must be paid periodically to ensure the on-demand parachain doesn't get removed from the state. The pay_rent extrinsic should be callable by anyone, removing the need for the parachain to depend on the parachain manager for rent payments.

On-demand parachain pruning

If the rent is not paid, anyone has the option to prune the on-demand parachain and claim a portion of the initial deposit reserved for storing the validation code. This type of 'light' pruning only removes the validation code, while the head data and validation code hash are retained. The validation code hash is stored to allow anyone to register it again, as well as to enable quicker re-registration by skipping the pre-checking process.

The moment the rent is no longer paid, the parachain can no longer purchase on-demand access, meaning no new blocks are allowed. This stage is called the "hibernation" stage, during which all the parachain-related data is still stored on-chain, but new blocks are not permitted. The reason for this is to ensure that the validation code is available in case it is needed by the dispute or approval-checking subsystems. Waiting for one entire session is enough to ensure it is safe to deregister the parachain.

This means that anyone can prune the parachain only once the "hibernation" stage is over, which lasts for an entire session after the moment the rent stops being paid.
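As a plain-Rust illustration of this rule, under the assumption that the pallet tracks the last session for which rent was paid (the names here are hypothetical, not actual storage items):

type SessionIndex = u32;

/// Rent is covered up to and including `rent_paid_until`. The parachain then spends
/// one full "hibernation" session on-chain before anyone may prune its code.
fn can_prune(current_session: SessionIndex, rent_paid_until: SessionIndex) -> bool {
    current_session > rent_paid_until.saturating_add(1)
}

fn main() {
    assert!(!can_prune(10, 10)); // rent still covered
    assert!(!can_prune(11, 10)); // hibernation session: code may still be needed
    assert!(can_prune(12, 10)); // hibernation over: anyone may prune
}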

The pruning described here is a light form of pruning, since it only removes the validation code. As with all parachains, the parachain or para manager can use the deregister extrinsic to remove all associated state.

Ensuring rent is paid

The paras pallet will be loosely coupled with the paras-registrar pallet. This approach enables all the pallets tightly coupled with the paras pallet to have access to the rent status information.

Once the validation code is stored without its rent being paid, the assigner_on_demand pallet will ensure that an order for that parachain cannot be placed. This is easily achievable given that the assigner_on_demand pallet is tightly coupled with the paras pallet.

On-demand para re-registration

If the rent isn't paid on time and the parachain gets pruned, the new model should provide a quick way to re-register the same validation code under the same ParaId. This can be achieved by skipping the pre-checking process, as the validation code hash will be stored on-chain, allowing us to easily verify that the uploaded code remains unchanged.

/// Stores the validation code hash for parachains that successfully completed the
/// pre-checking process.
///
/// This is stored to enable faster on-demand para re-registration in case its PVF has
/// previously been registered and checked.
///
/// NOTE: During a runtime upgrade where the pre-checking rules change, this storage map
/// should be cleared appropriately.
#[pallet::storage]
pub(super) type CheckedCodeHash<T: Config> =
	StorageMap<_, Twox64Concat, ParaId, ValidationCodeHash>;

To enable parachain re-registration, we should introduce a new extrinsic in the paras-registrar pallet that allows this. The logic of this extrinsic will be the same as regular registration, with the distinction that it can be called by anyone, and the required deposit will be smaller since it only has to cover the storage of the validation code.

Drawbacks

This RFC does not alter the process of reserving a ParaId, and therefore it does not propose reducing the reservation deposit, even though such a reduction could be beneficial.

While this RFC doesn't delve into the specifics of the configuration values for parachain registration but rather focuses on the mechanism, configuring these values carelessly could lead to problems.

Since the validation code hash and head data are not removed when the parachain is pruned, but only when the deregister extrinsic is called, T::DataDepositPerByte must be set to a higher value to create a strong enough incentive for removing them from the state.

Testing, Security, and Privacy

The implementation of this RFC will be tested on Rococo first.

Proper research should be conducted on setting the configuration values of the new system, since these values can have a great impact on the network.

An audit is required to ensure the implementation's correctness.

The proposal introduces no new privacy concerns.

Performance, Ergonomics, and Compatibility

Performance

This RFC should not introduce any performance impact.

Ergonomics

This RFC does not affect the current parachains, nor the parachains that intend to use the one-time payment model for registration.

Compatibility

This RFC does not break compatibility.

Prior Art and References

Prior discussion on this topic: https://github.com/paritytech/polkadot-sdk/issues/1796

Unresolved Questions

None at this time.

Future Possibilities

As noted in this GitHub issue, we want to raise the per-byte cost of on-chain data storage. However, a substantial increase in this cost would make it highly impractical for on-demand parachains to register on Polkadot. This RFC offers an alternative solution for on-demand parachains, ensuring that the per-byte cost increase doesn't overly burden the registration process.

(source)


          RFC-0054: Remove the concept of "heap pages" from the client

Start Date: 2023-11-24
Description: Remove the concept of heap pages from the client and move it to the runtime.
Authors: Pierre Krieger

Summary

Rather than enforcing a limit on total memory consumption on the client side by loading the value at :heappages, enforce that limit on the runtime side.

Motivation

From the early days of Substrate up until recently, the runtime was present in two forms: the wasm runtime (wasm bytecode passed through an interpreter) and the native runtime (native code directly run by the client).

Since the wasm runtime has a lower amount of available memory (4 GiB maximum) compared to the native runtime, and in order to ensure that the wasm and native runtimes always produce the same outcome, it was necessary to clamp the amount of memory available to both runtimes to the same value.

In order to achieve this, a special storage key (a "well-known" key) :heappages was introduced; it represents the number of "wasm pages" (one page equals 64 KiB) of memory available to the memory allocator of the runtimes. If this storage key is absent, it defaults to 2048, which is 128 MiB.

The native runtime has since been removed, but the concept of "heap pages" still exists. This RFC proposes a simplification to the design of Polkadot by removing the concept of "heap pages" as it is currently known, and proposes alternative ways to achieve the goal of limiting the amount of memory available.

Stakeholders

Client implementers and low-level runtime developers.

Explanation

This RFC proposes the following changes to the client:

• The client no longer considers :heappages as special.
• The memory allocator of the runtime is no longer bounded by the value of :heappages.

With these changes, the memory available to the runtime is now only bounded by the available memory space (4 GiB), and optionally by the maximum amount of memory specified in the Wasm binary (see https://webassembly.github.io/spec/core/bikeshed/#memories%E2%91%A0). In Rust, the latter can be controlled during compilation with the flag -Clink-arg=--max-memory=....

Since the client-side change is strictly more tolerant than before, we can perform the change immediately after the runtime has been updated, without having to worry about backwards compatibility.

This RFC proposes three alternative paths (different chains might choose to follow different paths):

• Path A: add back the same memory limit to the runtime (see the sketch after this list), like so:
  • At initialization, the runtime loads the value of :heappages from the storage (using ext_storage_get or similar), and sets a global variable to the decoded value.
  • The runtime tracks the total amount of memory that it has allocated using its instance of #[global_allocator] (https://github.com/paritytech/polkadot-sdk/blob/e3242d2c1e2018395c218357046cc88caaed78f3/substrate/primitives/io/src/lib.rs#L1748-L1762). This tracking should also be added around the host functions that perform allocations.
  • If an allocation is attempted that would go over the value in the global variable, the memory allocation fails.
• Path B: define the memory limit using the -Clink-arg=--max-memory=... flag.
• Path C: don't add anything to the runtime. This is effectively the same as setting the memory limit to ~4 GiB (compared to the current default limit of 128 MiB). This solution is viable only because we're compiling for 32-bit wasm rather than, for example, 64-bit wasm. If we ever compile for 64-bit wasm, this would need to be revisited.
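For path A, the tracking could be implemented with a wrapper around the runtime's global allocator, as in the following sketch. The initialization of HEAP_LIMIT from :heappages is elided, and this is illustrative rather than the actual polkadot-sdk allocator.

use core::alloc::{GlobalAlloc, Layout};
use core::sync::atomic::{AtomicUsize, Ordering};

/// Total bytes currently allocated by the runtime.
static ALLOCATED: AtomicUsize = AtomicUsize::new(0);
/// Limit decoded from `:heappages` at initialization (default shown: 128 MiB).
static HEAP_LIMIT: AtomicUsize = AtomicUsize::new(128 * 1024 * 1024);

struct TrackingAllocator<A>(A);

unsafe impl<A: GlobalAlloc> GlobalAlloc for TrackingAllocator<A> {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        let prev = ALLOCATED.fetch_add(layout.size(), Ordering::SeqCst);
        if prev + layout.size() > HEAP_LIMIT.load(Ordering::SeqCst) {
            // Refuse allocations that would exceed the configured limit.
            ALLOCATED.fetch_sub(layout.size(), Ordering::SeqCst);
            return core::ptr::null_mut();
        }
        self.0.alloc(layout)
    }
    unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
        ALLOCATED.fetch_sub(layout.size(), Ordering::SeqCst);
        self.0.dealloc(ptr, layout)
    }
}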

Each parachain can choose the path that it prefers, but the author of this RFC strongly suggests either path C or B.

Drawbacks

In the case of path A, there is one situation where the behaviour pre-RFC is not equivalent to the one post-RFC: when a host function that performs an allocation (for example ext_storage_get) is called, without this RFC this allocation might fail due to reaching the maximum heap pages, while after this RFC it will always succeed. This is most likely not a problem, as storage values aren't supposed to be larger than a few megabytes at the very maximum.

In the unfortunate event that the runtime runs out of memory, path B would make it more difficult to relax the memory limit, as we would need to re-upload the entire Wasm, compared to updating only :heappages in path A or before this RFC. In the case where the runtime runs out of memory only in the specific event where the Wasm runtime is modified, this could brick the chain. However, this situation is no different from the thousands of other ways that a bug in the runtime can brick a chain, and there's no reason to be particularly worried about this situation in particular.

Testing, Security, and Privacy

This RFC would reduce the chance of a consensus issue between clients. The :heappages value is a rather obscure feature, and it is not clear what happens in some corner cases, such as the value being too large (error? clamp?) or malformed. This RFC would completely erase these questions.

Performance, Ergonomics, and Compatibility

Performance

In the case of path A, it is unclear how performance would be affected. Path A consists of moving client-side operations to the runtime without changing these operations, so performance differences are expected to be minimal. Overall, we're talking about one addition/subtraction per malloc and per free, so this is more than likely completely negligible.

In the case of paths B and C, the performance gain would be a net positive, as this RFC strictly removes things.

Ergonomics

This RFC would isolate the client and runtime more from each other, making it a bit easier to reason about the client or the runtime in isolation.

Compatibility

Not a breaking change. The runtime-side changes can be applied immediately (without even having to wait for changes in the client); then, as soon as the runtime is updated, the client can be updated without any transition period. One can even consider updating the client before the runtime, as that corresponds to path C.

          +

          Prior Art and References

          +

          None.

          +

          Unresolved Questions


          None.

Future Directions and Related Material

          This RFC follows the same path as https://github.com/polkadot-fellows/RFCs/pull/4 by scoping everything related to memory allocations to the runtime.


          (source)


          Table of Contents


          RFC-0070: X Track for @kusamanetwork

Start Date: January 29, 2024
Description: Add a governance track to facilitate posts on the @kusamanetwork's X account
Author: Adam Clay Steeber

          Summary


This RFC proposes adding a trivial governance track on Kusama to facilitate X (formerly known as Twitter) posts on the @kusamanetwork account. The technical aspect of implementing this in the runtime is inconsequential and straightforward, though it might get more involved if the Fellowship wants to regulate this track with a permission set that does not yet exist. If this is implemented, it would need to be followed up with:

1. the establishment of specifications for proposing X posts via this track, and
2. the development of tools/processes to ensure that the content contained in referenda enacted in this track would be automatically posted on X.

          Motivation


The overall motivation for this RFC is to decentralize the management of the Kusama brand/communication channel to KSM holders. This is necessary in my opinion primarily because of the inactivity of the account in recent history, with posts spanning weeks or months apart. I am currently unaware of who/what entity manages the Kusama X account, but if they are affiliated with Parity or W3F this proposed solution could also offload some of the legal ramifications of making (or not making) announcements to the public regarding Kusama. While centralized control of the X account would still be present, it could become totally moot if this RFC is implemented and the community becomes totally autonomous in the management of Kusama's X posts.

This solution does not cover every single communication front for Kusama, but it does cover one of the largest. It also establishes a precedent for other communication channels that could be offloaded to openGov, provided this proof-of-concept is successful.

Finally, this RFC is the epitome of experimentation that Kusama is ideal for. This proposal may spark newfound excitement for Kusama and help us realize Kusama's potential for pushing boundaries and trying new unconventional ideas.

          Stakeholders


This idea has not been formalized by any individual (or group of) KSM holder(s). To my knowledge the socialization of this idea is contained entirely in my recent X post here, but it is possible that an idea like this one has been discussed in other places. It appears to me that the ecosystem would welcome a change like this, which is why I am taking action to formalize the discussion.

          Explanation


          The implementation of this idea can be broken down into 3 primary phases:


          Phase 1 - Track configurations


First, we begin with this RFC to ensure all feedback can be discussed and incorporated into the proposal. After the Fellowship and the community come to a reasonable agreement on the changes necessary to make this happen, the Fellowship can merge changes into Kusama's runtime to include this new track with appropriate track configurations. As a starting point, I recommend the following track configurations:

const APP_X_POST: Curve = Curve::make_linear(7, 28, percent(50), percent(100));
const SUP_X_POST: Curve = Curve::make_reciprocal(?, ?, percent(?), percent(?), percent(?));

// I don't know how to configure the make_reciprocal variables to get what I imagine for support,
// but I recommend starting at 50% support and sharply decreasing such that 1% is sufficient a quarter
// of the way through the decision period, hitting 0% at the end of the decision period, or something like that.

	(
		69,
		pallet_referenda::TrackInfo {
			name: "x_post",
			max_deciding: 50,
			decision_deposit: 1 * UNIT,
			prepare_period: 10 * MINUTES,
			decision_period: 4 * DAYS,
			confirm_period: 10 * MINUTES,
			min_enactment_period: 1 * MINUTES,
			min_approval: APP_X_POST,
			min_support: SUP_X_POST,
		},
	),

I also recommend restricting permissions of this track to only submitting remarks or batches of remarks - that's all we'll need for its purpose. I'm not sure how easy that is to configure, but it is important since we don't want such an agile track to be able to make highly consequential calls.

          Phase 2 - Establish Specs for X Post Track Referenda


It is important that we establish the specifications of referenda that will be submitted in this track to ensure that whatever automation tool is built can easily make posts once a referendum is enacted. As stated above, we really only need a system.remark (or batch of remarks) to indicate the contents of a proposed X post. The most straightforward way to do this is to require remarks to adhere to X's requirements for making posts via their API.

For example, if I wanted to propose a post that contained the text "Hello World!" I would propose a referendum in the X post track that contains the following call data: 0x0000607b2274657874223a202248656c6c6f20576f726c6421227d (i.e. system.remark('{"text": "Hello World!"}')).

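As a sanity check, the call data above can be reconstructed by hand. Assuming system is pallet index 0 and remark is call index 0 (as on Kusama), the remaining bytes are just the SCALE compact-encoded length followed by the UTF-8 payload; the snippet below is purely illustrative:

fn main() {
    let payload = br#"{"text": "Hello World!"}"#;
    // SCALE compact encoding of a length below 64 is a single byte: len << 2.
    let call_data: String = std::iter::once(format!("0x0000{:02x}", payload.len() << 2))
        .chain(payload.iter().map(|b| format!("{b:02x}")))
        .collect();
    assert_eq!(
        call_data,
        "0x0000607b2274657874223a202248656c6c6f20576f726c6421227d"
    );
}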

At first, we could support text posts only to prove the concept. Later on we could expand this spec to add support for media, likes, retweets, replies, polls, and whatever other X features we want.

          Phase 3 - Release, Tooling, & Documentation


Once we agree on track configurations and specs for referenda in this track, the Fellowship can move forward with merging these changes into Kusama's runtime and include them in its next release. We could also move forward with developing the necessary tools that would listen for enacted referenda to post automatically on X. This would require coordination with whoever controls the X account; they would either need to run the tools themselves or add a third party as an authorized user to run the tools to make posts on the account's behalf. This is a bottleneck for decentralization, but as long as the tools are run by the X account manager or by a trusted third party it should be fine. I'm open to more decentralized solutions, but those always come at a cost of complexity.

For the tools themselves, we could open a bounty on Kusama for developers/teams to bid on. We could also just ask the community to step up with a Treasury proposal to have anyone fund the build. Or, the Fellowship could make the release of these changes contingent on their endorsement of developers/teams to build these tools. Lots of options! For the record, my team and I could develop all the necessary tools, but just because I'm proposing these changes doesn't mean I'm entitled to funds to build the tools needed to implement them. Here's what would be needed:

• a listener tool that would listen for enacted referenda in this track, verify the format of the remark(s), and submit to X's API with authenticating credentials
• a UI to allow layman users to propose referenda on this track

          After everything is complete, we can update the Kusama wiki to include documentation on the X post specifications and include links to the tools/UI.


          Drawbacks


The main drawback to this change is that it requires a lot of off-chain coordination. It's easy enough to include the track on Kusama but it's a totally different challenge to make it function as intended. The tools need to be built and the auth tokens need to be managed. It would certainly add an administrative burden to whoever manages the X account since they would either need to run the tools themselves or manage auth tokens.

This change also introduces on-going costs to the Treasury since it would need to compensate people to support the tools necessary to facilitate this idea. The ultimate question is whether these on-going costs would be worth the ability for KSM holders to make posts on Kusama's X account.

There's also the risk of misconfiguring the track to make referenda too easy to pass, potentially allowing a malicious actor to get content posted on X that violates X's ToS. If that happens, we risk getting Kusama banned on X!

This change might also be outside the scope of the Fellowship/openGov. Perhaps the best solution for the X account is to have the Treasury pay for a professional agency to manage posts. It wouldn't be decentralized but it would probably be more effective in terms of creating good content.

Finally, this solution is merely pseudo-decentralization since the X account manager would still have ultimate control of the account. It's decentralized insofar as the auth tokens are given to people actually running the tools; a house of cards is required to facilitate X posts via this track. Not ideal.

          Testing, Security, and Privacy


There's major precedent for configuring tracks on openGov given the amount of power tracks have, so it shouldn't be hard to come up with a sound configuration. That's why I recommend restricting permissions of this track to remarks and batches of remarks, or something equally inconsequential.

Building the tools for this implementation is straightforward, and they could be audited by Fellowship members, and the community at large, on GitHub.

          The largest security concern would be the management of Kusama's X account's auth tokens. We would need to ensure that they aren't compromised.


          Performance, Ergonomics, and Compatibility


          Performance


If a track on Kusama promises users that compliant referenda enacted therein would be posted on Kusama's X account, users would expect that track to perform as promised. If the house of cards tumbles down and a compliant referendum doesn't actually get anything posted, users might think that Kusama is broken or unreliable. This could be damaging to Kusama's image and cause people to question the soundness of other features on Kusama.

As mentioned in the drawbacks, the performance of this feature would depend on off-chain coordination. We can reduce the administrative burden of this coordination by funding third parties with the Treasury to deal with it, but then we're relying on trusting those parties.

          Ergonomics


By adding a new track to Kusama, governance platforms like Polkassembly or Nova Wallet would need to include it on their applications. This shouldn't be too much of a burden or overhead since they've already built the infrastructure for other openGov tracks.

          Compatibility


          This change wouldn't break any compatibility as far as I know.


          References


One reference to a similar feature requiring on-chain/off-chain coordination would be the Kappa-Sigma-Mu Society. Nothing on-chain necessarily enforces the rules or facilitates bids, challenges, defenses, etc. However, the Society has managed to maintain itself with integrity to its rules. So I don't think this is totally out of Kusama's scope. But it will require some off-chain effort to maintain.

          Unresolved Questions

• Who will develop the tools necessary to implement this feature? How do we select them?
• How can this idea be better implemented with on-chain/substrate features?

          (source)


          Table of Contents


          RFC-0073: Decision Deposit Referendum Track

Start Date: 12 February 2024
Description: Add a referendum track which can place the decision deposit on any other track
Authors: JelliedOwl

          Summary


          The current size of the decision deposit on some tracks is too high for many proposers. As a result, those needing to use it have to find someone else willing to put up the deposit for them - and a number of legitimate attempts to use the root track have timed out. This track would provide a more affordable (though slower) route for these holders to use the root track.


          Motivation


There have been recent attempts to use the Kusama root track which have timed out with no decision deposit placed. Usually, these referenda have been related to parachain registration issues.

          Explanation


I propose to address this by adding a new referendum track, [22] Referendum Deposit, which can place the decision deposit on another referendum. This would require the following changes:

• [Referenda Pallet] Modify the placeDecisionDeposit function to additionally allow it to be called by root, with a root call bypassing the requirements for a deposit payment (a sketch follows below).
• [Runtime] Add a new referendum track which can only call referenda->placeDecisionDeposit and the utility functions.
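
A minimal sketch of the first change, assuming the pallet's internals are refactored as below (do_place_decision_deposit and do_place_decision_deposit_free are hypothetical helper names, not the pallet's actual internals):

pub fn place_decision_deposit(
	origin: OriginFor<T>,
	index: ReferendumIndex,
) -> DispatchResult {
	// Accept either a signed origin (existing behaviour) or root (proposed).
	match ensure_signed_or_root(origin)? {
		// A signed caller reserves the track's decision deposit as today.
		Some(who) => Self::do_place_decision_deposit(who, index),
		// Root marks the deposit as placed without any deposit payment.
		None => Self::do_place_decision_deposit_free(index),
	}
}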

          Referendum track parameters - Polkadot

• Decision deposit: 1000 DOT
• Decision period: 14 days
• Confirmation period: 12 hours
• Enactment period: 2 hours
• Approval & Support curves: As per the root track, timed to match the decision period
• Maximum deciding: 10

          Referendum track parameters - Kusama

• Decision deposit: 33.333333 KSM
• Decision period: 7 days
• Confirmation period: 6 hours
• Enactment period: 1 hour
• Approval & Support curves: As per the root track, timed to match the decision period
• Maximum deciding: 10

          Drawbacks


          This track would provide a route to starting a root referendum with a much-reduced slashable deposit. This might be undesirable but, assuming the decision deposit cost for this track is still high enough, slashing would still act as a disincentive.


An alternative to this might be to reduce the decision deposit size on some of the more expensive tracks. However, part of the purpose of the high deposit - at least on the root track - is to prevent spamming the limited queue with junk referenda.

          Testing, Security, and Privacy


Additional test cases will be needed for the modified pallet and runtime. No security or privacy issues.

          Performance, Ergonomics, and Compatibility


          Performance


          No significant performance impact.


          Ergonomics


          Only changes related to adding the track. Existing functionality is unchanged.


          Compatibility


          No compatibility issues.


          Prior Art and References


          Unresolved Questions


Feedback on whether my proposed implementation of this is the best way to address the issue - including which calls the track should be allowed to make. Are the track parameters correct, or should we use something different? Alternatives would be welcome.

          (source)


          Table of Contents


          RFC-0074: Stateful Multisig Pallet

Start Date: 15 February 2024
Description: Add Enhanced Multisig Pallet to System chains
Authors: Abdelrahman Soliman (Boda)

          Summary


A pallet to facilitate enhanced multisig accounts. The main enhancement is that we store a multisig account in the state with related info (signers, threshold, etc.). The pallet affords enhanced control over administrative operations such as adding/removing signers, changing the threshold, deleting the account, and canceling an existing proposal. Each signer can approve/reject a proposal while it still exists. The proposal is not intended for migrating away from or getting rid of the existing multisig; it's to allow both options to coexist.

For the rest of the RFC we use the following terms:

• proposal to refer to an extrinsic that is to be dispatched from a multisig account after getting enough approvals.
• Stateful Multisig to refer to the proposed pallet.
• Stateless Multisig to refer to the current multisig pallet in polkadot-sdk.

          Motivation


          Problem


          Entities in the Polkadot ecosystem need to have a way to manage their funds and other operations in a secure and efficient way. Multisig accounts are a common way to achieve this. Entities by definition change over time, members of the entity may change, threshold requirements may change, and the multisig account may need to be deleted. For even more enhanced hierarchical control, the multisig account may need to be controlled by other multisig accounts.


Current native solutions for multisig operations are suboptimal performance-wise (as we'll explain later in the RFC) and lack fine-grained control over the multisig account.

          Stateless Multisig


We refer to the current multisig pallet in polkadot-sdk as stateless because the multisig account is only derived and not stored in the state. Deriving the account is deterministic, as it relies on the exact set of users (sorted) and the threshold. This does not allow for control over the multisig account, and it is tightly coupled to the exact users and threshold. This makes it hard for an organization to manage existing accounts and to change the threshold or add/remove signers.

We also believe that the stateless multisig is not efficient in terms of block footprint, as we'll show in the performance section.

          Pure Proxy


A pure proxy can achieve a stored and deterministic multisig account made up of different users, but it's unneeded complexity as a workaround for the limitations of the current multisig pallet. It also doesn't have the same fine-grained control over the multisig account.

          Other points mentioned by @tbaut

• pure proxies aren't (yet) a thing cross-chain
• the end-user complexity is much, much higher with pure proxies; also, for new users, smart contract multisigs are widely known while pure proxies are obscure.
• you can shoot yourself in the foot by deleting the proxy, effectively losing access to funds with pure proxies.

          Requirements


          Basic requirements for the Stateful Multisig are:

• The ability to have concrete and permanent (unless deleted) multisig accounts in the state.
• The ability to add/remove signers from an existing multisig account by the multisig itself.
• The ability to change the threshold of an existing multisig account by the multisig itself.
• The ability to delete an existing multisig account by the multisig itself.
• The ability to cancel an existing proposal by the multisig itself.
• Signers of a multisig account can start a proposal on behalf of the multisig account, which will be dispatched after getting enough approvals.
• Signers of a multisig account can approve/reject a proposal while it still exists.

          Use Cases

• Corporate Governance: In a corporate setting, multisig accounts can be employed for decision-making processes. For example, a company may require the approval of multiple executives to initiate significant financial transactions.

• Joint Accounts: Multisig accounts can be used for joint accounts where multiple individuals need to authorize transactions. This is particularly useful in family finances or shared business accounts.

• Decentralized Autonomous Organizations (DAOs): DAOs can utilize multisig accounts to ensure that decisions are made collectively. Multiple key holders can be required to approve changes to the organization's rules or the allocation of funds.

          and much more...


          Stakeholders

• Polkadot holders
• Polkadot developers

          Explanation


I created the stateful multisig pallet during my studies at the Polkadot Blockchain Academy under supervision from @shawntabrizi and @ank4n. After that, I enhanced it to be fully functional, and this is a draft PR#3300 in polkadot-sdk. I'll list all the details and design decisions in the following sections. Note that the PR does not correspond 1:1 to the current RFC; the RFC is a more polished version of the PR, updated based on feedback and discussions.

          Let's start with a sequence diagram to illustrate the main operations of the Stateful Multisig.


[Figure: multisig operations sequence diagram]

Notes on the above diagram:

• It's a 3-step process to execute a proposal (Start Proposal --> Approvals --> Execute Proposal); a walkthrough sketch follows this list.
• Execute is an explicit extrinsic for a simpler API. It could be optimized to execute automatically after getting enough approvals.
• Any user can create a multisig account and they don't need to be part of it. (Alice in the diagram)
• A proposal is any extrinsic, including control extrinsics (e.g. add/remove signer, change threshold, etc.).
• Any multisig account signer can start a proposal on behalf of the multisig account. (Bob in the diagram)
• Any multisig account owner can execute a proposal if it's approved by enough signers. (Dave in the diagram)
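
To make the 3-step flow concrete, here is a hypothetical test-style walkthrough using the extrinsics proposed below (the account variables, the 2-of-3 threshold, and the pallet name are illustrative only):

// Alice creates a 2-of-3 multisig; she doesn't have to be a signer herself.
assert_ok!(Multisig::create_multisig(RuntimeOrigin::signed(alice), signers, 2));
// Bob (a signer) starts a proposal on behalf of the multisig account.
assert_ok!(Multisig::start_proposal(RuntimeOrigin::signed(bob), multisig.clone(), call.clone()));
// Charlie's approval brings the approvals up to the threshold of 2.
assert_ok!(Multisig::approve(RuntimeOrigin::signed(charlie), multisig.clone(), call.clone()));
// Dave (any signer) can now execute the approved proposal.
assert_ok!(Multisig::execute_proposal(RuntimeOrigin::signed(dave), multisig, call));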

          State Transition Functions


We use the following enum to store either the call itself or its hash:

enum CallOrHash<T: Config> {
	Call(<T as Config>::RuntimeCall),
	Hash(T::Hash),
}
• create_multisig - Create a multisig account with a given threshold and initial signers. (Needs Deposit)
/// Creates a new multisig account and attaches signers with a threshold to it.
///
/// The dispatch origin for this call must be _Signed_. It is expected to be a normal AccountId and not a
/// Multisig AccountId.
///
/// T::BaseCreationDeposit + T::PerSignerDeposit * signers.len() will be held from the caller's account.
///
/// # Arguments
///
/// - `signers`: Initial set of accounts to add to the multisig. These may be updated later via `add_signer`
///   and `remove_signer`.
/// - `threshold`: The threshold number of accounts required to approve an action. Must be greater than 0 and
///   less than or equal to the total number of signers.
///
/// # Errors
///
/// * `TooManySignatories` - The number of signatories exceeds the maximum allowed.
/// * `InvalidThreshold` - The threshold is greater than the total number of signers.
pub fn create_multisig(
	origin: OriginFor<T>,
	signers: BoundedBTreeSet<T::AccountId, T::MaxSignatories>,
	threshold: u32,
) -> DispatchResult
• start_proposal - Start a multisig proposal. (Needs Deposit)
/// Starts a new proposal for a dispatchable call for a multisig account.
/// The caller must be one of the signers of the multisig account.
/// T::ProposalDeposit will be held from the caller's account.
///
/// # Arguments
///
/// * `multisig_account` - The multisig account ID.
/// * `call_or_hash` - The enum holding the call, or the hash of the call, to be approved and executed later.
///
/// # Errors
///
/// * `MultisigNotFound` - The multisig account does not exist.
/// * `UnAuthorizedSigner` - The caller is not a signer of the multisig account.
/// * `TooManySignatories` - The number of signatories exceeds the maximum allowed. (Shouldn't really happen, as it's the first approval.)
pub fn start_proposal(
	origin: OriginFor<T>,
	multisig_account: T::AccountId,
	call_or_hash: CallOrHash,
) -> DispatchResult
• approve - Approve a multisig proposal.
/// Approves a proposal for a dispatchable call for a multisig account.
/// The caller must be one of the signers of the multisig account.
///
/// If a signer did approve -> reject -> approve, the proposal will be approved.
/// If a signer did approve -> reject, the proposal will be rejected.
///
/// # Arguments
///
/// * `multisig_account` - The multisig account ID.
/// * `call_or_hash` - The enum holding the call or the hash of the call to be approved.
///
/// # Errors
///
/// * `MultisigNotFound` - The multisig account does not exist.
/// * `UnAuthorizedSigner` - The caller is not a signer of the multisig account.
/// * `TooManySignatories` - The number of signatories exceeds the maximum allowed.
///   This shouldn't really happen, as it's an approval, not an addition of a new signer.
pub fn approve(
	origin: OriginFor<T>,
	multisig_account: T::AccountId,
	call_or_hash: CallOrHash,
) -> DispatchResult
• reject - Reject a multisig proposal.
/// Rejects a proposal for a multisig account.
/// The caller must be one of the signers of the multisig account.
///
/// Between approving and rejecting, the last call wins:
/// If a signer did approve -> reject -> approve, the proposal will be approved.
/// If a signer did approve -> reject, the proposal will be rejected.
///
/// # Arguments
///
/// * `multisig_account` - The multisig account ID.
/// * `call_or_hash` - The enum holding the call or the hash of the call to be rejected.
///
/// # Errors
///
/// * `MultisigNotFound` - The multisig account does not exist.
/// * `UnAuthorizedSigner` - The caller is not a signer of the multisig account.
/// * `SignerNotFound` - The caller has not approved the proposal.
#[pallet::call_index(3)]
#[pallet::weight(Weight::default())]
pub fn reject(
	origin: OriginFor<T>,
	multisig_account: T::AccountId,
	call_or_hash: CallOrHash,
) -> DispatchResult
• execute_proposal - Execute a multisig proposal. (Releases Deposit)
/// Executes a proposal for a dispatchable call for a multisig account.
/// A proposal needs to be approved by enough signers (meeting or exceeding the multisig threshold) before it can be executed.
/// The caller must be one of the signers of the multisig account.
///
/// This function does an extra check to make sure that all approvers still exist in the multisig account.
/// That is to make sure that the multisig account is not compromised by removing a signer during an active proposal.
///
/// Once finished, the withheld deposit will be returned to the proposal creator.
///
/// # Arguments
///
/// * `multisig_account` - The multisig account ID.
/// * `call_or_hash` - We should have gotten the RuntimeCall (preimage) and stored it in the proposal by the time the extrinsic is called.
///
/// # Errors
///
/// * `MultisigNotFound` - The multisig account does not exist.
/// * `UnAuthorizedSigner` - The caller is not a signer of the multisig account.
/// * `NotEnoughApprovers` - The approvals do not meet the threshold.
/// * `ProposalNotFound` - The proposal does not exist.
/// * `CallPreImageNotFound` - The proposal doesn't have the preimage of the call in the state.
pub fn execute_proposal(
	origin: OriginFor<T>,
	multisig_account: T::AccountId,
	call_or_hash: CallOrHash,
) -> DispatchResult
• cancel_proposal - Cancel a multisig proposal. (Releases Deposit)
/// Cancels an existing proposal for a multisig account.
/// A proposal needs to be rejected by enough signers (meeting or exceeding the multisig threshold) before it can be canceled.
/// The caller must be one of the signers of the multisig account.
///
/// This function does an extra check to make sure that all rejectors still exist in the multisig account.
/// That is to make sure that the multisig account is not compromised by removing a signer during an active proposal.
///
/// Once finished, the withheld deposit will be returned to the proposal creator.
///
/// # Arguments
///
/// * `origin` - The origin multisig account that wants to cancel the proposal.
/// * `call_or_hash` - The call or hash of the call to be canceled.
///
/// # Errors
///
/// * `MultisigNotFound` - The multisig account does not exist.
/// * `ProposalNotFound` - The proposal does not exist.
pub fn cancel_proposal(
	origin: OriginFor<T>,
	multisig_account: T::AccountId,
	call_or_hash: CallOrHash,
) -> DispatchResult
• cancel_own_proposal - Cancel a multisig proposal started by the caller, in case no other signers have approved it yet. (Releases Deposit)
/// Cancels an existing proposal for a multisig account, only if the proposal doesn't have approvers other than
/// the proposer.
///
/// This function needs to be called by the proposer of the proposal as the origin.
///
/// The withheld deposit will be returned to the proposal creator.
///
/// # Arguments
///
/// * `multisig_account` - The multisig account ID.
/// * `call_or_hash` - The hash of the call to be canceled.
///
/// # Errors
///
/// * `MultisigNotFound` - The multisig account does not exist.
/// * `ProposalNotFound` - The proposal does not exist.
pub fn cancel_own_proposal(
	origin: OriginFor<T>,
	multisig_account: T::AccountId,
	call_or_hash: CallOrHash,
) -> DispatchResult
• cleanup_proposals - Clean up the proposals of a multisig account. (Releases Deposit)
/// Cleans up the proposals of a multisig account. This function will iterate over at most a fixed limit of
/// proposals per extrinsic, to ensure we don't have unbounded iteration over the proposals.
///
/// The withheld deposit will be returned to the proposal creator.
///
/// # Arguments
///
/// * `multisig_account` - The multisig account ID.
///
/// # Errors
///
/// * `MultisigNotFound` - The multisig account does not exist.
/// * `ProposalNotFound` - The proposal does not exist.
pub fn cleanup_proposals(
	origin: OriginFor<T>,
	multisig_account: T::AccountId,
) -> DispatchResult

Note: The following functions need to be called from the multisig account itself. Deposits are reserved from the multisig account as well.

• add_signer - Add a new signer to a multisig account. (Needs Deposit)
/// Adds a new signer to the multisig account.
/// This function needs to be called from a Multisig account as the origin.
/// Otherwise it will fail with a MultisigNotFound error.
///
/// T::PerSignerDeposit will be held from the multisig account.
///
/// # Arguments
///
/// * `origin` - The origin multisig account that wants to add a new signer to the multisig account.
/// * `new_signer` - The AccountId of the new signer to be added.
/// * `new_threshold` - The new threshold for the multisig account after adding the new signer.
///
/// # Errors
///
/// * `MultisigNotFound` - The multisig account does not exist.
/// * `InvalidThreshold` - The threshold is greater than the total number of signers or is zero.
/// * `TooManySignatories` - The number of signatories exceeds the maximum allowed.
pub fn add_signer(
	origin: OriginFor<T>,
	new_signer: T::AccountId,
	new_threshold: u32,
) -> DispatchResult
• remove_signer - Remove a signer from a multisig account. (Releases Deposit)
/// Removes a signer from the multisig account.
/// This function needs to be called from a Multisig account as the origin.
/// Otherwise it will fail with a MultisigNotFound error.
/// If only one signer exists and is removed, the multisig account and any pending proposals for this account will be deleted from the state.
///
/// # Arguments
///
/// * `origin` - The origin multisig account that wants to remove a signer from the multisig account.
/// * `signer_to_remove` - The AccountId of the signer to be removed.
/// * `new_threshold` - The new threshold for the multisig account after removing the signer. Accepts zero if
///   the signer is the only one left.
///
/// # Errors
///
/// This function can return the following errors:
///
/// * `MultisigNotFound` - The multisig account does not exist.
/// * `InvalidThreshold` - The new threshold is greater than the total number of signers or is zero.
/// * `UnAuthorizedSigner` - The caller is not a signer of the multisig account.
pub fn remove_signer(
	origin: OriginFor<T>,
	signer_to_remove: T::AccountId,
	new_threshold: u32,
) -> DispatchResult
• set_threshold - Change the threshold of a multisig account.
/// Sets a new threshold for a multisig account.
/// This function needs to be called from a Multisig account as the origin.
/// Otherwise it will fail with a MultisigNotFound error.
///
/// # Arguments
///
/// * `origin` - The origin multisig account that wants to set the new threshold.
/// * `new_threshold` - The new threshold to be set.
///
/// # Errors
///
/// * `MultisigNotFound` - The multisig account does not exist.
/// * `InvalidThreshold` - The new threshold is greater than the total number of signers or is zero.
pub fn set_threshold(
	origin: OriginFor<T>,
	new_threshold: u32,
) -> DispatchResult
• delete_multisig - Delete a multisig account. (Releases Deposit)
/// Deletes a multisig account and all related proposals.
///
/// This function needs to be called from a Multisig account as the origin.
/// Otherwise it will fail with a MultisigNotFound error.
///
/// # Arguments
///
/// * `origin` - The origin multisig account that wants to delete the account.
///
/// # Errors
///
/// * `MultisigNotFound` - The multisig account does not exist.
pub fn delete_account(origin: OriginFor<T>) -> DispatchResult

          Storage/State

• Use 2 main storage maps to store multisig accounts and proposals.
#[pallet::storage]
pub type MultisigAccount<T: Config> = StorageMap<_, Twox64Concat, T::AccountId, MultisigAccountDetails<T>>;

/// The set of open multisig proposals. A proposal is uniquely identified by the multisig account and the call hash.
/// (maybe a nonce as well in the future)
#[pallet::storage]
pub type PendingProposals<T: Config> = StorageDoubleMap<
    _,
    Twox64Concat,
    T::AccountId, // Multisig Account
    Blake2_128Concat,
    T::Hash, // Call Hash
    MultisigProposal<T>,
>;

          As for the values:

pub struct MultisigAccountDetails<T: Config> {
	/// The signers of the multisig account. This is a BoundedBTreeSet to ensure faster operations (add, remove),
	/// as well as lookups and faster set operations to ensure `approvers` is always a subset of `signers`
	/// (e.g. in case of removal of a signer during an active proposal).
	pub signers: BoundedBTreeSet<T::AccountId, T::MaxSignatories>,
	/// The threshold of approvers required for the multisig account to be able to execute a call.
	pub threshold: u32,
	pub deposit: BalanceOf<T>,
}

pub struct MultisigProposal<T: Config> {
    /// Proposal creator.
    pub creator: T::AccountId,
    pub creation_deposit: BalanceOf<T>,
    /// The extrinsic when the multisig operation was opened.
    pub when: Timepoint<BlockNumberFor<T>>,
    /// The approvers achieved so far, including the depositor.
    /// The approvers are stored in a BoundedBTreeSet to ensure faster lookup and operations (approve, reject).
    /// It's also bounded to ensure that the size doesn't go over the limit required by the Runtime.
    pub approvers: BoundedBTreeSet<T::AccountId, T::MaxSignatories>,
    /// The rejectors of the proposal so far.
    /// The rejectors are stored in a BoundedBTreeSet to ensure faster lookup and operations (approve, reject).
    /// It's also bounded to ensure that the size doesn't go over the limit required by the Runtime.
    pub rejectors: BoundedBTreeSet<T::AccountId, T::MaxSignatories>,
    /// The block number until which this multisig operation is valid. None means no expiry.
    pub expire_after: Option<BlockNumberFor<T>>,
}

For optimization we're using BoundedBTreeSet to allow for efficient lookups and removals. Especially in the case of approvers, we need to be able to remove an approver from the list when they reject their approval (which we do lazily when execute_proposal is called).

There's an extra storage map for the deposits held from each multisig account per added signer. This ensures that we can release the correct deposit when the multisig removes a signer, even if the constant deposit per signer is changed in the runtime later on.

          Considerations & Edge cases


Removing a signer from the multisig account during an active proposal

We need to ensure that the approvers are always a subset of the signers. This is also partially why we're using BoundedBTreeSet for signers and approvers. Once execute_proposal is called, we ensure that the proposal is still valid and the approvers are still a subset of the current signers.
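
A self-contained sketch of that check, with plain BTreeSets standing in for the bounded sets used by the pallet and u64 standing in for AccountId:

use std::collections::BTreeSet;

/// Only approvals from accounts that are still signers count towards the
/// threshold, so a signer removed mid-proposal cannot tip the balance.
fn enough_valid_approvals(
    approvers: &BTreeSet<u64>,
    signers: &BTreeSet<u64>,
    threshold: u32,
) -> bool {
    approvers.intersection(signers).count() as u32 >= threshold
}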


          Multisig account deletion and cleaning up existing proposals


Once the last signer of a multisig account is removed, or the multisig approves the account deletion, we delete the multisig account from the state and keep the proposals until someone calls cleanup_proposals (possibly multiple times), which iterates over at most a fixed limit per extrinsic. This ensures we don't have unbounded iteration over the proposals. Users are already incentivized to call cleanup_proposals to get their deposits back.

          Multisig account deletion and existing deposits


We currently just delete the account without checking for deposits (I would like to hear your thoughts here). We can either:

• Don't make deposits to begin with and make it a fee.
• Transfer to treasury.
• Error on deletion. (don't like this)

          Approving a proposal after the threshold is changed


We always use the latest threshold and don't store each proposal with a different threshold. This allows the following:

• In case the threshold is lower than the number of approvers, the proposal is still valid.
• In case the threshold is higher than the number of approvers, we catch it during execute_proposal and error.

          Drawbacks

• New pallet to maintain.

          Testing, Security, and Privacy


          Standard audit/review requirements apply.


          Performance, Ergonomics, and Compatibility


          Performance


Here is a back-of-the-envelope calculation to show that the stateful multisig is more efficient than the stateless multisig, given its smaller footprint on blocks.

A quick review of the extrinsics for both, as this affects the block size:

Stateless Multisig: Both as_multi and approve_as_multi have similar parameters:
origin: OriginFor<T>,
threshold: u16,
other_signatories: Vec<T::AccountId>,
maybe_timepoint: Option<Timepoint<BlockNumberFor<T>>>,
call_hash: [u8; 32],
max_weight: Weight,

Stateful Multisig: We have the following extrinsics:
pub fn start_proposal(
	origin: OriginFor<T>,
	multisig_account: T::AccountId,
	call_or_hash: CallOrHash,
)

pub fn approve(
	origin: OriginFor<T>,
	multisig_account: T::AccountId,
	call_or_hash: CallOrHash,
)

pub fn execute_proposal(
	origin: OriginFor<T>,
	multisig_account: T::AccountId,
	call_or_hash: CallOrHash,
)

The main takeaway is that we don't need to pass the threshold and the other signatories in each extrinsic, because the threshold and signatories are already in the state (stored only once).

So now for the calculations, given the following:

• K is the number of multisig accounts.
• N is the number of signers in each multisig account.
• For each proposal we need to have 2N/3 approvals.

The table calculates, assuming each of the K multisig accounts has one proposal that gets approved by 2N/3 signers and then executed, how much the total block and state sizes have increased by the end of the day.

Note: We're not calculating the cost of the proposal itself, as in both the stateful and stateless multisig it is almost the same and gets cleaned up from the state once the proposal is executed or canceled.

Stateless effect on block sizes = 2/3·K·N² (as each of the 2N/3 approving users will need to call approve_as_multi with all the other signatories (N) in the extrinsic body)

Stateful effect on block sizes = K·N (as each user will need to call approve with only the multisig account in the extrinsic body)

Stateless effect on state sizes = Nil (as the multisig account is not stored in the state)

Stateful effect on state sizes = K·N (as each of the K multisig accounts will be stored in the state with all of its N signers)

| Pallet    | Block Size | State Size |
|-----------|:----------:|-----------:|
| Stateless | 2/3·K·N²   | Nil        |
| Stateful  | K·N        | K·N        |

Simplified table removing K from the equation:

| Pallet    | Block Size | State Size |
|-----------|:----------:|-----------:|
| Stateless | N^2        | Nil        |
| Stateful  | N          | N          |

So even though the stateful multisig has a larger state size, it's still more efficient in terms of block size and total footprint on the blockchain. For example, with N = 9 signers, a single proposal costs the stateless pallet roughly 6 × 9 = 54 account IDs in extrinsic bodies (2N/3 = 6 approval calls, each listing all 9 signatories), versus just 6 multisig-account references for the stateful pallet.

          Ergonomics


          The Stateful Multisig will have better ergonomics for managing multisig accounts for both developers and end-users.


          Compatibility


          This RFC is compatible with the existing implementation and can be handled via upgrades and migration. It's not intended to replace the existing multisig pallet.


          Prior Art and References


          multisig pallet in polkadot-sdk


          Unresolved Questions

• On account deletion, should we transfer remaining deposits to the treasury, or remove signers' addition deposits completely and consider them fees to start with?

Future Directions and Related Material

• Batch addition/removal of signers.
• Add expiry to proposals. After a certain time, proposals will not accept any more approvals or executions and will be deleted.
• Implement call filters. This will allow multisig accounts to only accept certain calls.

          (source)


          Table of Contents


          RFC-0077: Increase maximum length of identity PGP fingerprint values from 20 bytes

Start Date: 20 Feb 2024
Description: Increase the maximum length of identity PGP fingerprint values from 20 bytes
Authors: Luke Schoen

          Summary


          This proposes to increase the maximum length of PGP Fingerprint values from a 20 bytes/chars limit to a 40 bytes/chars limit.


          Motivation


          Background


          Pretty Good Privacy (PGP) Fingerprints are shorter versions of their corresponding Public Key that may be printed on a business card.


          They may be used by someone to validate the correct corresponding Public Key.


          It should be possible to add PGP Fingerprints to Polkadot on-chain identities.


          GNU Privacy Guard (GPG) is compliant with PGP and the two acronyms are used interchangeably.


          Problem


If users want to set a Polkadot on-chain identity, they may provide a PGP Fingerprint value in the "pgpFingerprint" field, which may be longer than 20 bytes/chars (e.g. PGP Fingerprints are 40 chars long when written in hexadecimal); however, that field can only store a maximum of 20 bytes/chars of information.

          Possible disadvantages of the current 20 bytes/chars limitation:

• Discourages users from using the "pgpFingerprint" field.
• Discourages users from using Polkadot on-chain identities for Web2 and Web3 dApp software releases, where the latest "pgpFingerprint" field could be used to verify the correct PGP Fingerprint that was used to sign the software releases, so users that download the software know that it came from a trusted source.
• Encourages dApps to link to Web2 sources to allow their users to verify the correct fingerprint associated with software releases, rather than to use the Web3 Polkadot on-chain identity "pgpFingerprint" field of the software's releaser, since it may be the case that the "pgpFingerprint" field of most on-chain identities is not widely used due to the maximum length of 20 bytes/chars restriction.
• Discourages users from setting an on-chain identity by creating an extrinsic using Polkadot.js with identity > setIdentity(info), since if they try to provide their 40-character-long PGP Fingerprint or GPG Fingerprint, which is longer than the maximum length of 20 bytes/chars, they will encounter an error.
• Discourages users from using on-chain Web3 registrars to judge on-chain identity fields, where the shortest value they are able to generate for a "pgpFingerprint" is not less than or equal to the maximum length of 20 bytes.

          Solution Requirements


The maximum length of identity PGP Fingerprint values should be increased from the current 20 bytes/chars limit to at least a 40 bytes/chars limit, to support PGP Fingerprints and GPG Fingerprints.

          Stakeholders

• Any Polkadot account holder wishing to use a Polkadot on-chain identity for their:
  • PGP Fingerprints that are longer than 32 characters
  • GPG Fingerprints that are longer than 32 characters

          Explanation


If a user tries to set an on-chain identity by creating an extrinsic using Polkadot.js with identity > setIdentity(info), and they provide their 40-character-long PGP Fingerprint or GPG Fingerprint, which is longer than the maximum length of 20 bytes/chars ([u8;20]), then they will encounter this error:

          createType(Call):: Call: failed decoding identity.setIdentity:: Struct: failed on args: {...}:: Struct: failed on pgpFingerprint: Option<[u8;20]>:: Expected input with 20 bytes (160 bits), found 40 bytes

Increasing the maximum length of identity PGP Fingerprint values from the current 20 bytes/chars limit to at least a 40 bytes/chars limit would overcome these errors and support PGP Fingerprints and GPG Fingerprints, satisfying the solution requirements.
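
Sketched against pallet_identity's IdentityInfo struct, the change amounts to widening one field; other fields are elided here and this should be read as illustrative rather than the exact patch:

pub struct IdentityInfo<FieldLimit: Get<u32>> {
	// ... other identity fields unchanged ...

	/// The PGP/GPG fingerprint of the identity owner.
	/// Widened from Option<[u8; 20]> so a full 40-byte value fits.
	pub pgp_fingerprint: Option<[u8; 40]>,
}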


          Drawbacks


          No drawbacks have been identified.


          Testing, Security, and Privacy


Implementations would be tested for adherence by checking that 40 bytes/chars PGP Fingerprints are supported.

No effect on security or privacy beyond what already exists has been identified.

          No implementation pitfalls have been identified.


          Performance, Ergonomics, and Compatibility


          Performance


It would be an improvement, since the associated interfaces exposed to developers and end-users could actually start being used.

          To minimize additional overhead the proposal suggests a 40 bytes/chars limit since that would at least provide support for PGP Fingerprints, satisfying the solution requirements.


          Ergonomics


          No potential ergonomic optimizations have been identified.


          Compatibility

Updates to Polkadot.js Apps and the API, their documentation, and materials referring to them may be required.

          Prior Art and References


          No prior articles or references.


          Unresolved Questions


          No further questions at this stage.

Future Directions and Related Material

          Relates to RFC entitled "Increase maximum length of identity raw data values from 32 bytes".


          (source)


          Table of Contents


          RFC-0088: Add slashable locked deposit, purchaser reputation, and reserved cores for on-chain identities to broker pallet

          +
          Start Date: 25 Apr 2024
          Description: Add slashable locked deposit, purchaser reputation, and reserved cores for on-chain identities to broker pallet
          Authors: Luke Schoen
          +
          +

          Summary

          +

          This proposes to require a slashable deposit in the broker pallet when initially purchasing or renewing Bulk Coretime or Instantaneous Coretime cores.

          +

          Additionally, it proposes to record a reputational status based on the behavior of the purchaser, as it relates to their use of Kusama Coretime cores that they purchase, and to possibly reserve a proportion of the cores for prospective purchasers that have an on-chain identity.

          +

          Motivation

          +

          Background

          +

          There are sales of Kusama Coretime cores that are scheduled to occur later this month by Coretime Marketplace Lastic.xyz, initially in limited quantities, and potentially also by RegionX in future, subject to their Polkadot referendum #582. This poses a risk that some purchasers may buy Kusama Coretime cores with no intention of actually placing a workload on them or leasing them out, which would prevent those that wish to purchase and actually use Kusama Coretime cores from being able to use any cores at all.

          +

          Problem

          +

          The types of purchasers may include:

          +
            +
          • Collectors (e.g. purchase a significant core, such as the first core sold, just to increase their likelihood of receiving an NFT airdrop for being one of the first purchasers).
          • Resellers (e.g. purchase a core that may be used at a popular period of time to resell closer to the date to realise a profit).
          • Market makers (e.g. buy cores just to change the floor price or volume).
          • Anti-competitive actors (e.g. a competitor to the Polkadot ecosystem purchases cores, possibly in violation of anti-trust laws, just to restrict access to prospective Kusama Coretime sales cores by the Kusama community that wish to do business in the Polkadot ecosystem).
          +

          Chaotic repercussions could include the following:

          +
            +
          • Generation of "white elephant" Kusama Coretime cores, similar to "white elephant" properties in the real-estate industry that never actually get used, leased or tenanted.
          • +
          • Kusama Coretime core resellers scalping the core time faster than the average core time consumer, and then choosing to use dynamic pricing that causes prices to fluctuate based on demand.
          • +
          • Resellers that own the Kusama Coretime scalping organisations may actually turn out to be the Official Kusama Coretime sellers.
          • +
          • Official Kusama Coretime sellers may establish a monopoly on the market and abuse that power by charging exhorbitant additional charge fees for each purchase, since they could then increase their floor prices even more, pretending that there are fewer cores available and more demand to make extra profits from their scalping organisations, similar to how it occurred in these concert ticket sales. This could caused Kusama Coretime costs to be no longer be affordable to the Kusama community.
          • +
          • Official Kusama Coretime sellers may run pre-sale events, but their websites may not be able to unable to handle the traffic and crash multiple times, causing them to end up cancelling those pre-sales and the pre-sale registrants missing out on getting a core that way, which would then cause available Kusama Coretime cores to be bought and resold at a higher price on third-party sites.
          • +
          • The scalping activity may be illegal in some jurisdictions and raise anti-trust issues similar to the Taylor Swift debacle over concert tickets.
          • +
          +

          Solution Requirements

          +
            +
          1. On-chain identity. It may be possible to circumvent bots and scalpers to an extent by requiring a proportion of Kusama Coretime purchasers to have an on-chain identity. As such, a possible solution could be to allow the configuration of a threshold in the Broker pallet that reserves a proportion of the cores for accounts that have an on-chain identity, and that reverts to a waiting list of anonymous account purchasers if the reserved proportion of cores remains unsold.

          2. Slashable deposit. A viable solution could be to require a slashable deposit to be locked prior to the purchase or renewal of a core, similar to how decision deposits are used in OpenGov to prevent spam. If you buy a Kusama Coretime core you could be challenged by one or more collectives of fishermen to provide proof against certain criteria of how you used it, and if you fail to provide adequate evidence in response to that scrutiny, you would lose a proportion of that deposit and face restrictions on purchasing or renewing cores in future, which may also be configured on-chain.

          3. Reputation. To disincentivise certain behaviours, a reputational status indicator could be used to record the historic behaviour of the purchaser and whether on-chain judgement has determined they have adequately rectified that behaviour, as it relates to their usage of Kusama Coretime cores that they purchase.

          A sketch of how these requirements could surface in the Broker pallet follows.
          +
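          This is a minimal sketch assuming FRAME conventions; all names (CoreDeposit, IdentityReservedRatio, Reputation, ReputationStatus) are hypothetical and only illustrate the shape of the change, not an existing broker pallet interface:

          // Hypothetical additions to the broker pallet's configuration and storage.
          #[pallet::config]
          pub trait Config: frame_system::Config {
            // ... existing broker configuration ...

            /// Deposit locked on purchase or renewal of a core, slashable if the
            /// purchaser fails a fishermen challenge about how the core was used.
            #[pallet::constant]
            type CoreDeposit: Get<BalanceOf<Self>>;

            /// Proportion of cores in each sale reserved for accounts with an
            /// on-chain identity; unsold reserved cores fall back to a waiting
            /// list of anonymous purchasers.
            #[pallet::constant]
            type IdentityReservedRatio: Get<Perbill>;
          }

          /// Historic behaviour of each purchaser, updated by on-chain judgement.
          #[pallet::storage]
          pub type Reputation<T: Config> =
            StorageMap<_, Blake2_128Concat, T::AccountId, ReputationStatus, ValueQuery>;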

          Stakeholders

          +
            +
          • Any Kusama account holder wishing to use the Broker pallet in any upcoming Kusama Coretime sales.
          • Any prospective Kusama Coretime purchaser, developer, and user.
          • KSM holders.
          +

          Drawbacks

          +

          Performance

          +

          The slashable deposit, if set too high, may have an economic impact whereby fewer Kusama Coretime cores are purchased.

          +

          Testing, Security, and Privacy

          +

          Lack of a slashable deposit in the Broker pallet is a security concern, since it exposes Kusama Coretime sales to potential abuse.

          +

          Reserving a proportion of Kusama Coretime sales cores for those with on-chain identities should not be to the exclusion of accounts that wish to remain anonymous or cause cores to be wasted unnecessarily. As such, if cores that are reserved for on-chain identities remain unsold then they should be released to anonymous accounts that are on a waiting list.

          +

          No implementation pitfalls have been identified.

          +

          Performance, Ergonomics, and Compatibility

          +

          Performance

          +

          It should improve performance, as it reduces the potential for state bloat: there is less risk of the undesirable Kusama Coretime sales activity that would occur if no slashable deposit were required and purchasers faced no reputational risk for wasting or misusing Kusama Coretime cores.

          +

          The solution proposes to minimize the risk of some Kusama Coretime cores not even being used or leased to perform any tasks at all.

          +

          It will be important to monitor and manage the slashable deposits, purchaser reputations, and utilization of the proportion of cores that are reserved for accounts with an on-chain identity.

          +

          Ergonomics

          +

          The mechanism for setting a slashable deposit amount should avoid undue complexity for users.

          +

          Compatibility

          +

          Updates to Polkadot.js Apps and API, their documentation, and materials referring to them may be required.

          +

          Prior Art and References

          +

          Prior Art

          +

          No prior articles.

          +

          Unresolved Questions

          +

          None

          Future Directions and Related Material

          None

          +

          (source)

          +


          RFC-0001: Secondary Market for Regions

          +
          Start Date: 2024-06-09
          Description: Implement a secondary market for region listings and sales
          Authors: Aurora Poppyseed, Philip Lucsok
          +
          +

          Summary

          +

          This RFC proposes the addition of a secondary market feature to either the broker pallet or as a separate pallet maintained by Lastic, enabling users to list and purchase regions. This includes creating, purchasing, and removing listings, as well as emitting relevant events and handling associated errors.

          +

          Motivation

          +

          Currently, the broker pallet lacks functionality for a secondary market, which limits users' ability to freely trade regions. This RFC aims to introduce a secure and straightforward mechanism for users to list regions they own for sale and allow other users to purchase these regions.

          +

          While integrating this functionality directly into the broker pallet is one option, another viable approach is to implement it as a separate pallet maintained by Lastic. This separate pallet would have access to the broker pallet and add minimal functionality necessary to support the secondary market.

          +

          Adding smart contracts to the Coretime chain could also address this need; however, this process is expected to be lengthy and complex. We cannot afford to wait for this extended timeline to enable basic secondary market functionality. By proposing either integration into the broker pallet or the creation of a dedicated pallet, we can quickly enhance the flexibility and utility of the broker pallet, making it more user-friendly and valuable.

          +

          Stakeholders

          +

          Primary stakeholders include:

          +
            +
          • Developers working on the broker pallet.
          • Secondary Coretime marketplaces.
          • Users who own regions and wish to trade them.
          • Community members interested in enhancing the broker pallet’s capabilities.
          +

          Explanation

          +

          This RFC introduces the following key features:

          +
            +
          1. Storage Changes:

            • Addition of a Listings storage map to keep track of regions listed for sale and their prices.

          2. New Dispatchable Functions:

            • create_listing: Allows a region owner to list a region for sale.
            • purchase_listing: Allows a user to purchase a listed region.
            • remove_listing: Allows a region owner to remove their listing.

          3. Events:

            • ListingCreated: Emitted when a new listing is created.
            • RegionSold: Emitted when a region is sold.
            • ListingRemoved: Emitted when a listing is removed.

          4. Error Handling:

            • ExpiredRegion: The region has expired and cannot be listed or sold.
            • UnknownListing: The listing does not exist.
            • InvalidPrice: The listing price is invalid.
            • NotOwner: The caller is not the owner of the region.

          5. Testing:

            • Comprehensive tests to verify the correct functionality of the new features, including listing creation, purchase, removal, and handling of edge cases such as expired regions and unauthorized actions.

          A condensed sketch of the Listings storage item and the create_listing dispatchable follows.
          +
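          The sketch below assumes FRAME conventions and a Regions storage map whose record carries an owner field, as in the existing broker pallet; the error and event names follow the list above, while everything else (including the exact shape of the region record) is illustrative:

          #[pallet::storage]
          pub type Listings<T: Config> =
            StorageMap<_, Blake2_128Concat, RegionId, BalanceOf<T>, OptionQuery>;

          #[pallet::call]
          impl<T: Config> Pallet<T> {
            /// List a region owned by the caller for sale at `price`.
            pub fn create_listing(
              origin: OriginFor<T>,
              region_id: RegionId,
              price: BalanceOf<T>,
            ) -> DispatchResult {
              let who = ensure_signed(origin)?;
              // The region must exist and be owned by the caller
              // (owner field shape assumed here).
              let record = Regions::<T>::get(&region_id).ok_or(Error::<T>::UnknownListing)?;
              ensure!(record.owner == Some(who), Error::<T>::NotOwner);
              // Zero is treated as an invalid price (`Zero` trait in scope).
              ensure!(!price.is_zero(), Error::<T>::InvalidPrice);
              Listings::<T>::insert(&region_id, price);
              Self::deposit_event(Event::ListingCreated { region_id, price });
              Ok(())
            }
          }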

          Drawbacks

          +

          The main drawback of adding the additional complexity directly to the broker pallet is the potential increase in maintenance overhead. Therefore, we propose adding the additional functionality as a separate pallet on the Coretime chain. To take the pressure off implementing these features, the implementation along with unit tests would be handled by Lastic (Aurora Makovac, Philip Lucsok).

          +

          There are potential risks of security vulnerabilities in the new market functionalities, such as unauthorized region transfers or incorrect balance adjustments. Therefore, extensive security measures would have to be implemented.

          +

          Testing, Security, and Privacy

          +

          Testing

          +
            +
          • Comprehensive unit tests need to be provided to ensure the correctness of the new functionalities.
          • Scenarios tested should include successful and failed listing creation, purchases, and removals, as well as edge cases like expired regions and non-owner actions.

          Security

          • Security audits should be performed to identify any vulnerabilities.
          • Ensure that only region owners can create or remove listings.
          • Validate all inputs to prevent invalid operations.

          Privacy

          • The proposal does not introduce new privacy concerns as it only affects region trading functionality within the existing framework.
          +

          Performance, Ergonomics, and Compatibility

          +

          Performance

          +
            +
          • This feature is expected to introduce minimal overhead since it primarily involves read and write operations to storage maps.
          • Efforts will be made to optimize the code to prevent unnecessary computational costs.

          Ergonomics

          • The new functions are designed to be intuitive and easy to use, providing clear feedback through events and errors.
          • Documentation and examples will be provided to assist developers and users.

          Compatibility

          • This proposal does not break compatibility with existing interfaces or previous versions.
          • No migrations are necessary as it introduces new functionality without altering existing features.
          +

          Prior Art and References

          +
            +
          • All related discussions are going to be under this PR.

          Unresolved Questions

          • Are there additional security measures needed to prevent potential abuses of the new functionalities?

          Future Directions and Related Material

          • Integration with external NFT marketplaces for more robust trading options.
          • Development of user interfaces to interact with the new marketplace features seamlessly.
          • Exploration of adding smart contracts to the Coretime chain, which would provide greater flexibility and functionality for the secondary market and other decentralized applications. This would require a longer implementation time, so this RFC proposes an intermediary solution.
          +

          (source)

          +


          RFC-0002: Smart Contracts on the Coretime Chain

          +
          Start Date: 2024-06-09
          Description: Implement smart contracts on the Coretime chain
          Authors: Aurora Poppyseed, Phil Lucksok
          +
          +

          Summary

          +

          This RFC proposes the integration of smart contracts on the Coretime chain to enhance flexibility and enable complex decentralized applications, including secondary market functionalities.

          +

          Motivation

          +

          Currently, the Coretime chain lacks the capability to support smart contracts, which limits the range of decentralized applications that can be developed and deployed. By enabling smart contracts, the Coretime chain can facilitate more sophisticated functionalities such as automated region trading, dynamic pricing mechanisms, and other decentralized applications that require programmable logic. This will enhance the utility of the Coretime chain, attract more developers, and create more opportunities for innovation.

          +

          Additionally, while there is a proposal (#885) to allow EVM-compatible contracts on Polkadot’s Asset Hub, the implementation of smart contracts directly on the Coretime chain will provide synchronous interactions and avoid the complexities of asynchronous operations via XCM.

          +

          Stakeholders

          +

          Primary stakeholders include:

          +
            +
          • Developers working on the Coretime chain.
          • Users who want to deploy decentralized applications on the Coretime chain.
          • Community members interested in expanding the capabilities of the Coretime chain.
          • Secondary Coretime marketplaces.
          +

          Explanation

          +

          This RFC introduces the following key components:

          +
            +
          1. Smart Contract Support:

            • Integrate support for deploying and executing smart contracts on the Coretime chain.
            • Use a well-established smart contract platform, such as Ethereum’s Solidity or Polkadot's Ink!, to ensure compatibility and ease of use.

          2. Storage and Execution:

            • Define a storage structure for smart contracts and their associated data.
            • Ensure efficient and secure execution of smart contracts, with proper resource management and gas fee mechanisms.

          3. Integration with Existing Pallets:

            • Ensure that smart contracts can interact with existing pallets on the Coretime chain, such as the broker pallet.
            • Provide APIs and interfaces for seamless integration and interaction.

          4. Security and Auditing:

            • Implement robust security measures to prevent vulnerabilities and exploits in smart contracts.
            • Conduct thorough security audits and testing before deployment.

          An illustrative example of the kind of contract involved follows.
          +
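          To make the integration point concrete: if Ink! were the chosen platform, the Coretime chain would need to instantiate and execute programs like the canonical Ink! "flipper" contract below. It is purely illustrative, a representative minimal example rather than a proposed contract:

          #![cfg_attr(not(feature = "std"), no_std, no_main)]

          #[ink::contract]
          mod flipper {
            /// A single boolean stored on chain.
            #[ink(storage)]
            pub struct Flipper {
              value: bool,
            }

            impl Flipper {
              /// Instantiate the contract with an initial value.
              #[ink(constructor)]
              pub fn new(init_value: bool) -> Self {
                Self { value: init_value }
              }

              /// Toggle the stored value.
              #[ink(message)]
              pub fn flip(&mut self) {
                self.value = !self.value;
              }

              /// Read the stored value.
              #[ink(message)]
              pub fn get(&self) -> bool {
                self.value
              }
            }
          }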

          Drawbacks

          +

          There are several drawbacks to consider:

          +
            +
          • Complexity: Adding smart contracts introduces significant complexity to the Coretime chain, which may increase maintenance overhead and the potential for bugs.
          • Performance: The execution of smart contracts can be resource-intensive, potentially affecting the performance of the Coretime chain.
          • Security: Smart contracts are prone to vulnerabilities and exploits, necessitating rigorous security measures and continuous monitoring.
          +

          Testing, Security, and Privacy

          +

          Testing

          +
            +
          • Comprehensive unit tests and integration tests should be developed to ensure the correct functionality of smart contracts.
          • Test scenarios should include various use cases and edge cases to validate the robustness of the implementation.

          Security

          • Security audits should be performed to identify and mitigate vulnerabilities.
          • Implement best practices for smart contract development to minimize the risk of exploits.
          • Continuous monitoring and updates will be necessary to address new security threats.

          Privacy

          • The proposal does not introduce new privacy concerns as it extends existing functionalities with programmable logic.
          +

          Performance, Ergonomics, and Compatibility

          +

          Performance

          +
            +
          • The introduction of smart contracts may impact performance due to the additional computational overhead.
          • Optimization techniques, such as efficient gas fee mechanisms and resource management, should be employed to minimize performance degradation.

          Ergonomics

          • The new functionality should be designed to be intuitive and easy to use for developers, with comprehensive documentation and examples.
          • Provide developer tools and SDKs to facilitate the creation and deployment of smart contracts.

          Compatibility

          • This proposal should maintain compatibility with existing interfaces and functionalities of the Coretime chain.
          • Ensure backward compatibility and provide migration paths if necessary.
          +

          Prior Art and References

          +
            +
          • Ethereum’s implementation of smart contracts using Solidity.
          • Polkadot’s Ink! smart contract platform.
          • Existing decentralized applications and use cases on other blockchain platforms.
          • Proposal #885: EVM-compatible contracts on Asset Hub, which highlights the community's interest in integrating smart contracts within the Polkadot ecosystem.

          Unresolved Questions

          • What specific security measures should be implemented to prevent smart contract vulnerabilities?
          • How can we ensure optimal performance while supporting complex smart contracts?
          • What are the best practices for integrating smart contracts with existing pallets on the Coretime chain?

          Future Directions and Related Material

          • Further enhancements could include advanced developer tools and SDKs for smart contract development.
          • Integration with external decentralized applications and platforms to expand the ecosystem.
          • Continuous updates and improvements to the smart contract platform based on community feedback and emerging best practices.
          • Exploration of additional use cases for smart contracts on the Coretime chain, such as decentralized finance (DeFi) applications, voting systems, and more.
          +

          By enabling smart contracts on the Coretime chain, we can significantly expand its capabilities and attract a wider range of developers and users, fostering innovation and growth in the ecosystem.

          +

          (source)

          +


          RFC-0000: Feature Name Here

          +
          Start Date: 13 July 2024
          Description: Implement off-chain parachain runtime upgrades
          Authors: eskimor
          +
          +

          Summary

          +

          Change the process of a parachain runtime upgrade to become off-chain with regards to the relay chain. Upgrades are still contained in parachain blocks, but will no longer need to end up in relay chain blocks nor in relay chain state.

          +

          Motivation

          +

          Having parachain runtime upgrades go through the relay chain has always been seen as a scalability concern. Due to optimizations in statement distribution and asynchronous backing it became less crucial and was de-prioritized; the original issue can be found here.

          +

          With the introduction of Agile Coretime and our efforts to further reduce barriers to entry for Polkadot, the issue becomes relevant again: We would like to reduce the required storage deposit for PVF registration, with the aim not only to make it cheaper to run a parachain (bulk + on-demand coretime), but also to reduce the amount of capital required for the deposit. With this we would hope for far more parachains to get registered, thousands, potentially even tens of thousands. With so many PVFs registered, updates are expected to become more frequent and even attacks on the service quality of other parachains would become a higher risk.

          +

          Stakeholders

          +
            +
          • Parachain Teams
          • Relay Chain Node implementation teams
          • Relay Chain runtime developers
          +

          Explanation

          +

          The issues with on-chain runtime upgrades are:

          +
            +
          1. Needlessly costly.
          2. A single runtime upgrade more or less occupies an entire relay chain block, thus it might also affect other parachains, especially if their candidates are also not negligible (due to messages, for example) or they want to upgrade their runtime at the same time.
          3. The signalling of the parachain to notify the relay chain of an upcoming runtime upgrade already contains the upgrade. Therefore the only way to rate limit upgrades is to drop an already distributed update megabytes in size, with the result that the parachain misses a block and, more importantly, tries again with the very next block until it finally succeeds. If we imagine reducing the capacity for runtime upgrades to, say, 1 every 100 relay chain blocks, this results in lots of wasted effort and lost blocks.
          +

          We discussed introducing separate signalling before submitting the actual runtime, but I think we should just go one step further and make upgrades fully off-chain. This also helps bring down deposit costs in a secure way, as we are actually reducing costs for the network.

          +

          Introduce a new UMP message type RequestCodeUpgrade

          +

          As part of elastic scaling we are already planning to increase the flexibility of UMP messages; we can now use this to our advantage and introduce another UMP message:

          +
          enum UMPSignal {
            // For elastic scaling
            OnCore(CoreIndex),
            // For off-chain upgrades
            RequestCodeUpgrade(Hash),
          }

          We could also make that new message a regular XCM, calling an extrinsic on the relay chain, but we will want to look into that message right after validation on the backers on the node side, making a straightforward semantic message more apt for the purpose.

          +

          Handle RequestCodeUpgrade on backers

          +

          We will introduce a new request/response protocol for both collators and validators, with the following request/response:

          +
          struct RequestBlob {
            blob_hash: Hash,
          }

          struct BlobResponse {
            blob: Vec<u8>,
          }

          This protocol will be used by backers to request the PVF from collators under the following conditions:

          +
            +
          1. They received a collation sending RequestCodeUpgrade.
          2. They received a collation, but they don't yet have the code that was previously registered on the relay chain (e.g. disk pruned, new validator).
          +

          In case they received the collation via PoV distribution instead of from the collator itself, they will use the exact same message to fetch from the validator they got the PoV from.

          +

          Get the new code to all validators

          +

          Once the candidate issuing RequestCodeUpgrade got backed on chain, validators will start fetching the code from the backers as part of availability distribution.

          +

          To mitigate attack vectors we should make sure that serving requests for code can be treated as low priority. Thus I am suggesting the following scheme:

          +

          Validators will notice via a runtime API (TODO: Define) that new code has been requested; the API will return the Hash and a counter, which starts at some configurable value, e.g. 10. The validators are now aware of the new hash and start fetching, but they don't have to wait for the fetch to succeed to sign their bitfield.

          +

          Then on each further candidate from that chain the counter gets decremented. Validators which have not yet succeeded fetching will try again. This game continues until the counter reaches 0. From then on it is mandatory to have the code in order to sign a 1 in the bitfield.

          +

          PVF pre-checking will happen after the candidate which brought the counter to 0 has been successfully included, and can thus also assume that 2/3 of the validators have the code.
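
          A minimal sketch of the counter rule described above; the names (PendingUpgrade, may_sign_available) and types are illustrative, not an existing API:

          type Hash = [u8; 32];

          struct PendingUpgrade {
            code_hash: Hash,
            /// Starts at a configurable value, e.g. 10; decremented per candidate.
            counter: u32,
          }

          /// Whether this validator may sign a `1` in its availability bitfield,
          /// evaluated per backed candidate of the chain in question.
          fn may_sign_available(pending: &mut Option<PendingUpgrade>, have_code: bool) -> bool {
            match pending {
              // No upgrade in flight: availability depends only on holding the chunk.
              None => true,
              Some(upgrade) if upgrade.counter > 0 => {
                // Fetching the new code is still best effort.
                upgrade.counter -= 1;
                true
              }
              // Counter reached 0: holding the code is now mandatory.
              Some(_) => have_code,
            }
          }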

          +

          This scheme serves two purposes:

          +
            +
          1. Fetching can happen over a longer period of time with low priority. E.g. if we waited for the PVF at the very first availability distribution, this might actually affect liveness of other chains on the same core. Distributing megabytes of data to a thousand validators might take a bit, so this helps isolate parachains from each other.
          2. By configuring the initial counter value we can affect how much an upgrade costs. E.g. forcing the parachain to produce 10 blocks means 10x the cost for issuing an update. If too frequent upgrades ever become a problem for the system, we have a knob to make them more costly.
          +

          On-chain code upgrade process

          +

          First, when a candidate is backed we need to make the new hash available (together with a counter) via a runtime API so validators in availability distribution can check for it and fetch it if changed (see previous section). For performance reasons, I think we should not do an additional call, but replace the existing one with one containing the new additional information (Option<(Hash, Counter)>).
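
          A sketch of what such a runtime API entry could look like; the trait and function names are placeholders (the RFC leaves the exact definition as a TODO), and in practice the existing call would be extended rather than a new trait added:

          use sp_core::H256;

          sp_api::decl_runtime_apis! {
            /// Hypothetical API surface for code upgrade discovery.
            pub trait CodeUpgrades {
              /// The code upgrade requested by the given para, if any, together
              /// with the countdown until holding the code becomes mandatory.
              fn pending_code_upgrade(para_id: u32) -> Option<(H256, u32)>;
            }
          }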

          +

          Once the candidate gets included (counter 0), the hash is given to pre-checking and only after pre-checking succeeded (and a full session passed) is it finally enacted and the parachain can switch to the new code. (Same process as it used to be.)

          +

          Handling new validators

          +

          Backers

          +

          If a backer receives a collation for a parachain whose code, as enacted on chain (see "On-chain code upgrade process"), it does not yet have, it will use the above request/response protocol to fetch it from whom it received the collation.

          +

          Availability Distribution

          +

          Validators in availability distribution will be changed to only sign a 1 in the bitfield of a candidate if they not only have the chunk, but also the currently active PVF. They will fetch it from backers in case they don't have it yet.

          +

          How do other parties get hold of the PVF?

          +

          Two ways:

          +
            +
          1. Discover collators via the relay chain DHT and request from them: preferred way, as it puts less load on validators.
          2. Request from validators, which will serve on a best effort basis.
          +

          Pruning

          +

          We covered how validators get hold of new code, but when can they prune old ones? In principle it is not an issue if some validators prune code, because:

          +
            +
          1. We changed it so that a candidate is not deemed available if validators were not able to fetch the PVF.
          2. Backers can always fetch the PVF from collators as part of the collation fetching.
          4. +
          +

          But the majority of validators should always keep the latest code of any parachain and only prune the previous one once the first candidate using the new code got finalized. This ensures that disputes will always be able to resolve.

          +

          Drawbacks

          +

          The major drawback of this solution is the same as for any solution that moves work off-chain: it adds complexity to the node. E.g. nodes needing the PVF have to store it separately, together with their own pruning strategy.

          +

          Testing, Security, and Privacy

          +

          Implementations adhering to this RFC will respond to PVF requests with the actual PVF, if they have it. Requesters will persist received PVFs on disk until they are replaced by a new one. Implementations must not be lazy here: if validators only fetched the PVF when needed, they could be prevented from participating in disputes.

          +

          Validators should treat incoming requests for PVFs in general with rather low priority, but should prefer fetches from other validators over requests from random peers.

          +

          Given that we are altering what set bits in the availability bitfields mean (not only chunk, but also PVF available), it is important to have enough validators upgraded before we allow collators to make use of the new runtime upgrade mechanism. Otherwise we would risk disputes not being able to succeed.

          +

          This RFC has no impact on privacy.

          +

          Performance, Ergonomics, and Compatibility

          +

          Performance

          +

          This proposal lightens the load on the relay chain and is thus in general beneficial for the performance of the network. This is achieved by the following:

          +
            +
          1. Code upgrades are still propagated to all validators, but only once, not twice (first via statements, then via the containing relay chain block).
          2. Code upgrades are only communicated to validators and other nodes which are interested, not to every full node as it has been before.
          3. Relay chain block space is preserved. Previously we could only do one runtime upgrade per relay chain block, occupying almost all of the blockspace.
          4. Signalling an upgrade no longer contains the upgrade, hence if we need to push back on an upgrade for whatever reason, no network bandwidth and core time gets wasted because of this.
          +

          Ergonomics

          +

          End users are only affected by better performance and more stable block times. Parachains will need to implement the introduced request/response protocol and adapt to the new signalling mechanism via a UMP message, instead of sending the code upgrade directly.

          +

          For parachain operators we should emit events on an initiated runtime upgrade and on each block, reporting the current counter and how many blocks to go until the upgrade gets passed to pre-checking. This is especially important for on-demand chains or bulk users not occupying a full core. Furthermore, the behaviour of requiring multiple blocks to fully initiate a runtime upgrade needs to be well documented.

          +

          Compatibility

          +

          We will continue to support the old mechanism for code upgrades for a while, but will start to impose stricter limits over time as the number of registered parachains goes up. With those limits in place, parachains not migrating to the new scheme might have a harder time upgrading and will miss more blocks. I guess we can be lenient for a while still, so the upgrade path for parachains should be rather smooth.

          +

          In total the protocol changes we need are:

          +

          For validators and collators:

          +
            +
          1. New request/response protocol for fetching PVF data from collators and validators.
          2. New UMP message type for signalling a runtime upgrade.
          +

          Only for validators:

          +
            +
          1. New runtime API for determining to-be-enacted code upgrades.
          2. Different behaviour of bitfields (only sign a 1 bit if the validator has the chunk + "hot" PVF).
          3. Altered behaviour in availability-distribution: fetch missing PVFs.
          +

          Prior Art and References

          +

          Off-chain runtime upgrades have been discussed before; the architecture described here is simpler though, as it piggybacks on already existing features, namely:

          +
            +
          1. availability-distribution: No separate I have code messages anymore.
          2. Existing pre-checking.
          +

          https://github.com/paritytech/polkadot-sdk/issues/971

          +

          Unresolved Questions

          +
            +
          1. What about the initial runtime, shall we make that off-chain as well?
          2. Good news: at least after the first upgrade, no code will be stored on chain any more. This means that we also have to redefine the storage deposit now: we no longer charge for chain storage, but for validator disk storage -> should be cheaper. Solution to this: not only store the hash on chain, but also the size of the data. Then define a price per byte and charge that, but:
            • how do we charge - I guess the deposit has to be provided via other means, and the runtime upgrade fails if it is not provided.
            • how do we signal to the chain that the code is too large, for it to reject the upgrade? Easy: make it available and vote nay in pre-checking.
          +

          TODO: Fully resolve these questions and incorporate in RFC text.

          Future Directions and Related Material

          Further Hardening

          +

          By no longer having code upgrades go through the relay chain, occupying a full relay chain block, the impact on other parachains is already greatly reduced, if we make distribution and PVF pre-checking low-priority processes on validators. The only thing attackers might be able to do is delay upgrades of other parachains.

          +

          That seems like a problem to be solved once we actually see it as a problem in the wild (and it can already be mitigated by adjusting the counter). The good thing is that we have all the ingredients to go further if need be. Signalling no longer actually includes the code, hence there is no need to reject the candidate: The parachain can make progress even if we choose not to immediately act on the request, and no relay chain resources are wasted either.

          +

          We could for example introduce another UMP signalling message RequestCodeUpgradeWithPriority which not just requests a code upgrade, but also offers some DOT to get ranked up in a queue.

          +

          Generalize this off-chain storage mechanism?

          +

          Making this storage mechanism more general purpose is worth thinking about. E.g. by resolving the above "fee" question, we might also be able to resolve the pruning question in a more generic way and thus could indeed open this storage facility for other purposes as well. E.g. smart contracts: the PoV would only need to reference contracts by hash, and the actual code is stored on validators and collators and thus no longer needs to be part of the PoV.

          +

          A possible avenue would be to change the response to:

          +
          enum BlobResponse {
            Blob(Vec<u8>),
            Blobs(MerkleTree),
          }

          With this, the hash specified in the request can also be a merkle root and the responder will respond with the entire merkle tree (only hashes, no payload). Then the requester can traverse the leaf hashes and use the same request/response protocol to request any locally missing blobs in that tree.

          +

          One leaf would for example be the PVF, others could be smart contracts. With a properly specified format (e.g. which leaf is the PVF?), what we get here is that a parachain can not only update its PVF, but additional data, incrementally. E.g. adding another smart contract does not require resubmitting the entire PVF to validators; only the root hash on the relay chain gets updated, then validators fetch the merkle tree and only fetch any missing leaves. That additional data could be made available to the PVF via a to-be-added host function. The nice thing about this approach is that, while we can upgrade incrementally, lifetime is still tied to the PVF and we get all the same guarantees. Assuming the validators store blobs by hash, we even get disk sharing if multiple parachains use the same data (e.g. same smart contracts).
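
          A sketch of the requester side of that scheme, assuming the hashes-only merkle tree response described above; all types here are illustrative stubs, not existing node-side APIs:

          use std::collections::HashMap;

          type Hash = [u8; 32];

          /// Hashes-only tree as returned by a `BlobResponse::Blobs` response.
          struct MerkleTree {
            leaf_hashes: Vec<Hash>,
          }

          /// Stand-in for the request/response protocol defined earlier.
          trait BlobFetcher {
            fn request_blob(&mut self, hash: &Hash) -> Option<Vec<u8>>;
          }

          /// Fetch only the leaves missing locally (e.g. one newly added smart
          /// contract) instead of re-downloading the entire PVF bundle.
          fn sync_blobs(
            tree: &MerkleTree,
            store: &mut HashMap<Hash, Vec<u8>>,
            net: &mut impl BlobFetcher,
          ) {
            for leaf in &tree.leaf_hashes {
              if !store.contains_key(leaf) {
                if let Some(blob) = net.request_blob(leaf) {
                  store.insert(*leaf, blob);
                }
              }
            }
          }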

          +

          (source)

          +


          RFC-0106: Remove XCM fees mode

          +
          Start Date: 23 July 2024
          Description: Remove the SetFeesMode instruction and fees_mode register from XCM
          Authors: Francisco Aguirre
          +
          +

          Summary

          +

          The SetFeesMode instruction and the fees_mode register allow for the existence of JIT withdrawal. JIT withdrawal complicates the fee mechanism and leads to bugs and unexpected behaviour. The proposal is to remove said functionality. This is another effort to simplify fee handling in XCM.

          +

          Motivation

          +

          The JIT withdrawal mechanism creates bugs such as not being able to get fees when all assets are put into holding and none are left in the origin location. This is confusing behavior, since there are funds for fees, just not where the XCVM wants them. The XCVM should have only one entrypoint to fee payment: the holding register. That way there is also less surface for bugs.

          +

          Stakeholders

          +
            +
          • Runtime Users
          • Runtime Devs
          • Wallets
          • dApps
          +

          Explanation

          +

          The SetFeesMode instruction will be removed. The Fees Mode register will be removed.

          +

          Drawbacks

          +

          Users will have to make sure to put enough assets in WithdrawAsset when previously some things might have been charged directly from their accounts. This leads to more predictable behaviour though, so it will only be a drawback for a minority of users.
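
          To illustrate the migration, a sketch of a program relying on JIT withdrawal next to its explicit replacement. Asset amounts and the beneficiary are placeholders, and the imports assume the xcm crate's prelude; this is illustrative, not a canonical program:

          use xcm::latest::prelude::*;

          fn programs(beneficiary: Location) -> (Xcm<()>, Xcm<()>) {
            // Before: fees are withdrawn just-in-time from the origin account.
            let with_jit: Xcm<()> = Xcm(vec![
              SetFeesMode { jit_withdraw: true },
              TransferAsset {
                assets: (Here, 10u128).into(),
                beneficiary: beneficiary.clone(),
              },
            ]);

            // After: everything needed, fees included, is withdrawn into the
            // holding register up front.
            let explicit: Xcm<()> = Xcm(vec![
              WithdrawAsset((Here, 11u128).into()), // transfer amount plus fees
              BuyExecution { fees: (Here, 1u128).into(), weight_limit: Unlimited },
              DepositAsset { assets: All.into(), beneficiary },
            ]);

            (with_jit, explicit)
          }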

          +

          Testing, Security, and Privacy

          +

          Implementations and benchmarking must change for most existing pallet calls that send XCMs to other locations.

          +

          Performance, Ergonomics, and Compatibility

          +

          Performance

          +

          Performance will be improved since unnecessary checks will be avoided.

          +

          Ergonomics

          +

          JIT withdrawal was a way of side-stepping the regular flow of XCM programs. By removing it, the spec is simplified, but now old use-cases have to work with the original intended behaviour, which may result in more implementation work.

          +

          Ergonomics for users will undoubtedly improve since the system is more predictable.

          +

          Compatibility

          +

          Existing programs in the ecosystem will break. The instruction should be deprecated as soon as this RFC is approved (but still fully supported), then removed in a subsequent XCM version (probably deprecate in v5, remove in v6).

          +

          Prior Art and References

          +

          The previous RFC PR on the xcm-format repo, before XCM RFCs were moved to fellowship RFCs: https://github.com/polkadot-fellows/xcm-format/pull/57.

          +

          Unresolved Questions

          +

          None.

          Future Directions and Related Material

          The new generic fees mechanism is related to this proposal and further motivates it, as the JIT withdraw fees mechanism will become useless anyway.

          +

          (source)

          +


          RFC-0111: Pure Proxy Replication

          +
          Start Date: 12 Aug 2024
          Description: Replication of pure proxy account ownership to a remote chain
          Authors: @muharem, @xlc
          +
          +

          Summary

          +

          This RFC proposes a solution to replicate an existing pure proxy from one chain to others. The aim is to address the current limitations where pure proxy accounts, which are keyless, cannot have their proxy relationships recreated on different chains. This leads to issues where funds or permissions transferred to the same keyless account address on chains other than its origin chain become inaccessible.

          +

          Motivation

          +

          A pure proxy is a new account created by a primary account. The primary account is set as a proxy for the pure proxy account, managing it. Pure proxies are keyless and non-reproducible, meaning they lack a private key and have an address derived from a preimage determined by on-chain logic. More on pure proxies can be found here.

          +

          For the purpose of this document, we define a keyless account as a "pure account", the controlling account as a "proxy account", and the entire relationship as a "pure proxy".

          +

          The relationship between a pure account (e.g., account ID: pure1) and its proxy (e.g., account ID: alice) is stored on-chain (e.g., parachain A) and currently cannot be replicated to another chain (e.g., parachain B). Because the account pure1 is keyless and its proxy relationship with alice is not replicable from the parachain A to the parachain B, alice does not control the pure1 account on the parachain B.

          +

          Although this behaviour is not promised, users and clients often mistakenly expect alice to control the same pure1 account on the parachain B. As a result, assets transferred to the account or permissions granted for it are inaccessible. Several factors contribute to this misuse:

          +
            +
          • regular accounts on different parachains with the same account ID are typically accessible to the owner and controlled by the same private key (e.g., within System Parachains);
          • users and clients do not distinguish between keyless and regular accounts;
          • members using the multisig account ID across different chains, where a member of a multisig is a pure account;
          • users may prefer an account with a registered identity (e.g. for a cross-chain treasury spend proposal), even if the account is keyless.
          +

          Given that these mistakes are likely, it is necessary to provide a solution to either prevent them or enable access to a pure account on a target chain.

          +

          Stakeholders

          +

          Runtime Users, Runtime Devs, wallets, cross-chain dApps.

          +

          Explanation

          +

          One possible solution is to allow a proxy to create or replicate a pure proxy relationship for the same pure account on a target chain. For example, Alice, as the proxy of the pure1 pure account on parachain A, should be able to set a proxy for the same pure1 account on parachain B.

          +

          To minimise security risks, parachain B should grant parachain A the least amount of permission necessary for the replication. First, parachain A claims to parachain B that the operation is commanded by the pure account, and thus by its proxy; second, it provides proof that the account is keyless.

          +

          The replication process will be facilitated by XCM, with the first claim made using the DescendOrigin instruction. The replication call on parachain A would require a signed origin by the pure account and construct an XCM program for parachain B, where it first descends the origin, resulting in the ParachainA/AccountId32(pure1) origin location on the receiving side.

          +

          To prove that the pure account is keyless, the client must provide the initial preimage used by the chain to derive the pure account. Parachain A verifies it and sends it to parachain B with the replication request.

          +

          We can draft a pallet extension for the proxy pallet, which needs to be initialised on both sides to enable replication:

          +
          // Simplified version to illustrate the concept.
          mod pallet_proxy_replica {
            /// The part of the pure account preimage that has to be provided by a client.
            struct Witness {
              /// Pure proxy spawner.
              spawner: AccountId,
              /// Disambiguation index.
              index: u16,
              /// The block height of when the pure account was created.
              block_number: BlockNumber,
              /// The extrinsic index of when the pure account was created.
              ext_index: u32,
              // Part of the preimage, but constant.
              // proxy_type: ProxyType::Any,
            }
            // ...

            /// The replication call to be initiated on the source chain.
            // Simplified version, the XCM part will be abstracted by the `Config` trait.
            fn replicate(origin: SignedOrigin, witness: Witness, proxy: xcm::Location) -> ... {
              let pure = ensure_signed(origin);
              ensure!(pure == proxy_pallet::derive_pure_account(witness), Error::NotPureAccount);
              let xcm = vec![
                DescendOrigin(who),
                Transact(
                  // ...
                  origin_kind: OriginKind::Xcm,
                  call: pallet_proxy_replica::create(witness, proxy).encode(),
                ),
              ];
              xcmTransport::send(xcm)?;
            }
            // ...

            /// The call initiated by the source chain on the receiving chain.
            // `Config::CreateOrigin` - generally open for whitelisted parachain IDs and
            // converts `Origin::Xcm(ParachainA/AccountId32(pure1))` to `AccountId(pure1)`.
            fn create(origin: Config::CreateOrigin, witness: Witness, proxy: xcm::Location) -> ... {
              let pure = T::CreateOrigin::ensure_origin(origin);
              ensure!(pure == proxy_pallet::derive_pure_account(witness), Error::NotPureAccount);
              proxy_pallet::create_pure_proxy(pure, proxy);
            }
          }

          Drawbacks

          +

          There are two disadvantages to this approach:

          +
            +
          • The receiving chain has to trust the sending chain's claim that the account controlling the pure account has commanded the replication.
          • Clients must obtain witness data.
          +

          We could eliminate the first disadvantage by allowing only the spawner of the pure proxy to recreate the pure proxies, if they sign the transaction on a remote chain and supply the witness/preimage. Since the preimage of a pure account includes the account ID of the spawner, we can verify that the account signing the transaction is indeed the spawner of the given pure account. However, this approach would grant exclusive rights to the spawner over the pure account, which is not a property of pure proxies at present. This is why it's not an option for us.

          +

          As an alternative to requiring clients to provide witness data, we could label pure accounts on the source chain and trust that label on the receiving chain. However, this would require the receiving chain to place greater trust in the source chain. If the source chain is compromised, any type of account on the trusting chain could also be compromised.

          +

          A conceptually different solution would be to not implement replication of pure proxies and instead inform users that ownership of a pure proxy on one chain does not imply ownership of the same account on another chain. This solution seems complex, as it would require UIs and clients to adapt to this understanding. Moreover, mistakes would likely remain unavoidable.

          +

          Testing, Security, and Privacy

          +

          Each chain expressly authorizes another chain to replicate its pure proxies, accepting the inherent risk of that chain potentially being compromised. This authorization allows a malicious actor from the compromised chain to take control of any pure proxy account on the chain that granted the authorization. However, this is limited to pure proxies that originated from the compromised chain if they have a chain-specific seed within the preimage.

          +

          There is a security issue, not introduced by the proposed solution but worth mentioning. The same spawner can create the pure accounts on different chains controlled by the different accounts. This is possible because the current preimage version of the proxy pallet does not include any non-reproducible, chain-specific data, and elements like block numbers and extrinsic indexes can be reproduced with some effort. This issue could be addressed by adding a chain-specific seed into the preimages of pure accounts.

          +

          Performance, Ergonomics, and Compatibility

          +

          Performance

          +

          The replication is facilitated by XCM, which adds some additional load to the communication channel. However, since the number of replications is not expected to be large, the impact is minimal.

          +

          Ergonomics

          +

          The proposed solution does not alter any existing interfaces. It does require clients to obtain the witness data, which should not be an issue with the support of an indexer.

          +

          Compatibility

          +

          None.

          +

          Prior Art and References

          +

          None.

          +

          Unresolved Questions

          +

          None.

          Future Directions and Related Material

          • Pure Proxy documentation - https://wiki.polkadot.network/docs/learn-proxies-pure

          (source)

          +


          RFC-0112: Compress the State Response Message in State Sync

          +
          Start Date: 14 August 2024
          Description: Compress the state response message to reduce the data transfer during the state syncing
          Authors: Liu-Cheng Xu
          +
          +

          Summary

          +

          This RFC proposes compressing the state response message during the state syncing process to reduce the amount of data transferred.

          +

          Motivation

          +

          State syncing can require downloading several gigabytes of data, particularly for blockchains with large state sizes, such as Astar, which has a state size exceeding 5 GiB (https://github.com/AstarNetwork/Astar/issues/1110). This presents a significant challenge for nodes with slower network connections. Additionally, the current state sync implementation lacks a persistence feature (https://github.com/paritytech/polkadot-sdk/issues/4), meaning any network disruption forces the node to re-download the entire state, making the process even more difficult.

          +

          Stakeholders

          +

          This RFC benefits all projects utilizing the Substrate framework, specifically in improving the efficiency of state syncing.

          +
            +
          • Node Operators.
          • +
          • Substrate Users.
          • +
          +

          Explanation

          +

          The largest portion of the state response message consists of either CompactProof or Vec<KeyValueStateEntry>, depending on whether a proof is requested (source):

          +
            +
          • CompactProof: When proof is requested, compression yields a lower ratio but remains beneficial, as shown in warp sync tests in the Performance section below.
          • +
• Vec<KeyValueStateEntry>: Without proof, this is theoretically compressible because the entries are generated by iterating the storage sequentially starting from an empty storage key, which means many entries in the message share the same storage prefix, making it ideal for compression.
          • +
          +

          Drawbacks

          +

          None identified.

          +

          Testing, Security, and Privacy

          +

          The code changes required for this RFC are straightforward: compress the state response on the sender side and decompress it on the receiver side. Existing sync tests should ensure functionality remains intact.

          +
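A minimal sketch of the change, assuming zstd as the codec (the codec choice and compression level here are illustrative assumptions, not part of this RFC's specification; the referenced patch may use a different algorithm or level):

use std::io;

// Sender side: compress the SCALE-encoded state response before sending.
fn compress_state_response(encoded: &[u8]) -> io::Result<Vec<u8>> {
    zstd::encode_all(encoded, 3)
}

// Receiver side: restore the original bytes before decoding the response.
fn decompress_state_response(compressed: &[u8]) -> io::Result<Vec<u8>> {
    zstd::decode_all(compressed)
}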

          Performance, Ergonomics, and Compatibility

          +

          Performance

          +

          This RFC optimizes network bandwidth usage during state syncing, particularly for blockchains with gigabyte-sized states, while introducing negligible CPU overhead for compression and decompression. For example, compressing the state response during a recent Polkadot warp sync (around height #22076653) reduces the data transferred from 530,310,121 bytes to 352,583,455 bytes — a 33% reduction, saving approximately 169 MiB of data.

          +

          Performance data is based on this patch, with logs available here.

          +

          Ergonomics

          +

          None.

          +

          Compatibility

          +

          No compatibility issues identified.

          +

          Prior Art and References

          +

          None.

          +

          Unresolved Questions

          +

          None.

          + +

          None.

          +

          (source)

          +

          Table of Contents

          + +

          RFC-0114: Introduce secp256r1_ecdsa_verify_prehashed Host Function to verify NIST-P256 elliptic curve signatures

          +
          + + + +
          Start Date16 August 2024
          DescriptionHost function to verify NIST-P256 elliptic curve signatures.
          AuthorsRodrigo Quelhas
          +
          +

          Summary

          +

          This RFC proposes a new host function, secp256r1_ecdsa_verify_prehashed, for verifying NIST-P256 signatures. The function takes as input the message hash, r and s components of the signature, and the x and y coordinates of the public key. By providing this function, runtime authors can leverage a more efficient verification mechanism for "secp256r1" elliptic curve signatures, reducing computational costs and improving overall performance.

          +

          Motivation

          +

The “secp256r1” elliptic curve is a NIST-standardized curve that uses the same underlying arithmetic as the “secp256k1” curve, with different input parameters. The cost of combined attacks and the security conditions are almost the same for both curves. Adding a host function to verify “secp256r1” signatures in the runtime brings multi-faceted benefits. One important factor is that this curve is widely used and supported in many modern devices, such as Apple’s Secure Enclave, WebAuthn, and the Android Keystore, which demonstrates broad user adoption. Additionally, the introduction of this host function could enable valuable features in account abstraction, allowing more efficient and flexible management of accounts through transactions signed on mobile devices. Most modern devices and applications rely on the “secp256r1” elliptic curve, and this host function enables more efficient verification of device-native transaction signing mechanisms. For example:

          +
            +
          1. Apple's Secure Enclave: There is a separate “Trusted Execution Environment” in Apple hardware which can sign arbitrary messages and can only be accessed by biometric identification.
          2. +
3. Webauthn: Web Authentication (WebAuthn) is a web standard published by the World Wide Web Consortium (W3C). WebAuthn aims to standardize an interface for authenticating users to web-based applications and services using public-key cryptography. It is supported by almost all modern web browsers.
          4. +
5. Android Keystore: Android Keystore is an API that manages private keys and signing methods. Private keys are not exposed to the application when Keystore is used as its signing method, and signing can be performed inside the hardware's “Trusted Execution Environment”.
          6. +
7. Passkeys: Passkeys utilize FIDO Alliance and W3C standards. They replace passwords with cryptographic key pairs, which can also be used with elliptic curve cryptography.
          8. +
          +

          Stakeholders

          +
            +
          • Runtime Authors
          • +
          +

          Explanation

          +

          This RFC proposes a new host function for runtime authors to leverage a more efficient verification mechanism for "secp256r1" elliptic curve signatures.

          +

          Proposed host function signature:

          +
fn ext_secp256r1_ecdsa_verify_prehashed_version_1(
    sig: &[u8; 64],
    msg: &[u8; 32],
    pub_key: &[u8; 64],
) -> bool;
          +

          The host function MUST return true if the signature is valid or false otherwise.

          +
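As an illustration of the expected node-side behavior, the sketch below delegates verification to the RustCrypto p256 crate; the dependency and the function body are assumptions for demonstration, not part of this RFC's specification:

use p256::ecdsa::{signature::hazmat::PrehashVerifier, Signature, VerifyingKey};

// Hypothetical host-side sketch: verify a 64-byte r||s signature over a
// 32-byte prehash against an uncompressed (x, y) public key.
fn secp256r1_ecdsa_verify_prehashed(
    sig: &[u8; 64],
    msg: &[u8; 32],
    pub_key: &[u8; 64],
) -> bool {
    // Rebuild the SEC1 uncompressed encoding (0x04 || x || y) expected by p256.
    let mut sec1 = [0u8; 65];
    sec1[0] = 0x04;
    sec1[1..].copy_from_slice(pub_key);
    let Ok(key) = VerifyingKey::from_sec1_bytes(&sec1) else { return false };
    let Ok(signature) = Signature::from_slice(sig) else { return false };
    key.verify_prehash(msg, &signature).is_ok()
}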

          Drawbacks

          +

          N/A

          +

          Testing, Security, and Privacy

          +

          Security

          +

The changes do not directly affect protocol security; parachains are not required to use the host function.

          +

          Performance, Ergonomics, and Compatibility

          +

          Performance

          +

          N/A

          +

          Ergonomics

          +

          The host function proposed in this RFC allows parachain runtime developers to use a more efficient verification mechanism for "secp256r1" elliptic curve signatures.

          +

          Compatibility

          +

Parachain teams will need to include this host function in their nodes before their runtimes can make use of it.

          +

          Prior Art and References

          + +

          (source)

          +

          Table of Contents

          + +

          RFC-0117: The Unbrick Collective

          +
          + + + +
          Start Date22 August 2024
DescriptionThe Unbrick Collective aims to help teams rescue a para once it stops producing blocks
          AuthorsBryan Chen, Pablo Dorado
          +
          +

          Summary

          +

A follow-up to RFC-0014. This RFC proposes adding a new collective to the Polkadot Collectives Chain: the Unbrick Collective, as well as improvements to the mechanisms that will allow teams operating paras that have stopped producing blocks to be assisted, in order to restore block production on these paras.

          +

          Motivation

          +

Since the initial launch of Polkadot parachains, there have been many incidents causing parachains to stop producing new blocks (therefore, being bricked) and many occurrences that required Polkadot governance to update the parachain head state/wasm. This can happen for many reasons, ranging from incorrectly registering the initial head state, inability to use the sudo key, bad runtime migrations, and bad weight configurations, to bugs in the development of the Polkadot SDK.

          +

Currently, when the para is not unlocked in the paras registrar1, the Root origin is required to perform such actions, involving the governance process to invoke this origin, which can be very resource-expensive for the teams. The long voting and enactment times could also result in significant damage to the parachain and its users.

          +

Finally, other instances of governance that might enact a call using the Root origin (like the Polkadot Fellowship), due to the nature of their mission, are not fit to carry out these kinds of tasks.

          +

In consequence, the idea of an Unbrick Collective that can provide assistance to para teams when their paras brick, and further protection against future halts, is reasonable enough.

          +

          Stakeholders

          +
            +
          • Parachain teams
          • +
          • Parachain users
          • +
          • OpenGov users
          • +
          • Polkadot Fellowship
          • +
          +

          Explanation

          +

          The Collective

          +

          The Unbrick Collective is defined as an unranked collective of members, not paid by the Polkadot +Treasury. Its main goal is to serve as a point of contact and assistance for enacting the actions +needed to unbrick a para. Such actions are:

          +
            +
• Updating the Parachain Validation Function (PVF, i.e. a new Wasm) of a para.
          • +
          • Updating the head state of a para.
          • +
          • A combination of the above.
          • +
          +

In order to ensure these changes are safe enough for the network, actions enacted by the Unbrick Collective must be whitelisted via mechanisms similar to those followed by collectives like the Polkadot Fellowship. This will prevent unintended, unreviewed changes to other paras from occurring.

          +

Also, teams might opt in to delegate the handling of their para in the registry to the Collective. This allows the Collective to perform similar actions using the paras registrar, allowing for a shorter path to unbrick a para.

          +

Initially, the Unbrick Collective has powers similar to a parachain's own sudo, but permits more decentralized control. In the future, Polkadot shall provide functionality like SPREE or JAM that exceeds sudo permissions, so the Unbrick Collective cannot modify those state roots or code.

          +

          The Unbrick Process

          +
flowchart TD
    A[Start]

    A -- Bricked --> C[Request para unlock via Root]
    C -- Approved --> Y
    C -- Rejected --> A

    D[unbrick call proposal on WhitelistedUnbrickCaller]
    E[whitelist call proposal on the Unbrick governance]
    E -- call whitelisted --> F[unbrick call enacted]
    D -- unbrick called --> F
    F --> Y

    A -- Not bricked --> O[Opt-in to the Collective]
    O -- Bricked --> D
    O -- Bricked --> E

    Y[update PVF / head state] -- Unbricked --> Z[End]
          +
          +

          Initially, a para team has two paths to handle a potential unbrick of their para in the case it +stops producing blocks.

          +
            +
1. Opt-in to the Unbrick Collective: This is done by delegating the handling of the para in the paras registrar to an origin related to the Collective. This doesn't require unlocking the para. This way, the Collective is enabled to perform changes in the paras module once the Unbrick Process has taken place.
          2. +
3. Request a Para Unlock: In case the para hasn't delegated its handling in the paras registrar, it will still be possible for the para team to submit a proposal to unlock the para, which can be assisted by the Collective. However, this involves submitting a proposal to the Root governance origin.
          4. +
          +

          Belonging to the Collective

          +

The collective will be initially created without members (no seeding). There will be additional governance proposals to set up the seed members.

          +

          The origins able to modify the members of the collective are:

          +
            +
          • The Fellows track in the Polkadot Fellowship.
          • +
          • Root track in the Relay.
          • +
          • More than two thirds of the existing Unbrick Collective.
          • +
          +

The members are responsible for verifying the technical details of unbrick requests (i.e. the hash of the new PVF being set). Therefore, they must have the technical capacity to perform such tasks.

          +

          Suggested requirements to become a member are the following:

          +
            +
          • Rank 3 or above in the Polkadot Fellowship.
          • +
• Being a CTO or Technical Lead in a para team that has opted in to delegate management of the para's PVF/head state to the Unbrick Collective.
          • +
          +

          Drawbacks

          +

The ability to modify the head state and/or the PVF of a para implies the possibility of performing arbitrary modifications to it (e.g. taking control of the native parachain token or any bridged assets in the para).

          +

          This could introduce a new attack vector, and therefore, such great power needs to be handled +carefully.

          +

          Testing, Security, and Privacy

          +

          The implementation of this RFC will be tested on testnets (Rococo and Westend) first.

          +

          An audit will be required to ensure the implementation doesn't introduce unwanted side effects.

          +

          There are no privacy related concerns.

          +

          Performance, Ergonomics, and Compatibility

          +

          Performance

          +

          This RFC should not introduce any performance impact.

          +

          Ergonomics

          +

          This RFC should improve the experience for new and existing parachain teams, lowering the barrier +to unbrick a stalled para.

          +

          Compatibility

          +

          This RFC is fully compatible with existing interfaces.

          +

          Prior Art and References

          + +

          Unresolved Questions

          +
            +
          • What are the parameters for the WhitelistedUnbrickCaller track?
          • +
          • Any other methods that shall be updated to accept Unbrick origin?
          • +
          • Any other requirements to become a member?
          • +
          • We would like to keep this simple, so no funding support from the Polkadot treasury. But do we +want to compensate the members somehow? i.e. Allow parachain teams to donate to the collective.
          • +
• We hope SPREE/JAM will be carefully audited for misuse risks before being provided to parachain teams, but could the Unbrick Collective hold an election that warranted trust beyond sudo powers?
          • +
• An auditing framework/collective makes sense for parachain code upgrades, but could also strengthen the Unbrick Collective.
          • +
          • Do we want to have this collective offer additional technical support to help bricked parachains? +i.e. help debug the code, create the rescue plan, create postmortem report, provide resources on +how to avoid getting bricked
          • +
          + +
          1 +

The paras registrar refers to a pallet in the Relay, responsible for gathering the registration info of the paras, the locked/unlocked state, and the manager info.

          +
          + +

          (source)

          +

          Table of Contents

          + +

          RFC-0120: Referenda Confirmation by Candle Mechanism

          +
          + + + +
          Start Date22 March 2024
          DescriptionProposal to decide polls after confirm period via a mechanism similar to a candle auction
          AuthorsPablo Dorado, Daniel Olano
          +
          +

          Summary

          +

In an attempt to mitigate risks derived from unwanted behaviours around long decision periods on referenda, this proposal describes how to finalize and decide the result of a poll via a mechanism similar to candle auctions.

          +

          Motivation

          +

The referenda protocol provides permissionless and efficient mechanisms that enable governance actors to decide the future of the blockchains in the Polkadot network. However, it poses a series of risks from a game-theoretic perspective. One of them is an actor using the public nature of a poll's tally to determine the best point in time to alter the poll in a meaningful way.

          +

While this behaviour is expected under the current design of the referenda logic, given the recent extension of ongoing times (up to 1 month), the incentives for a bad actor to cause losses to a proposer (reflected as wasted opportunity cost) increase. Thus, this otherwise reasonable outcome becomes an attack vector and a potential risk to mitigate, especially when such an attack can compromise critical guarantees of the protocol (such as its upgradeability).

          +

To mitigate this, the underlying referenda mechanisms should incentivize actors to cast their votes on a poll as early as possible. This proposal suggests a close similar to a candle auction's, determined right after the confirm period finishes, thus decreasing the chances for actors to alter the results of a poll in the confirming state, and instead incentivizing them to cast their votes earlier, in the deciding state.

          +

          Stakeholders

          +
            +
          • Governance actors: Tokenholders and Collectives that vote on polls that have this mechanism +enabled should be aware this change affects the outcome of failing a poll on its confirm period.
          • +
          • Runtime Developers: This change requires runtime developers to change configuration +parameters for the Referenda Pallet.
          • +
          • Tooling and UI developers: Applications that interact with referenda must update to reflect +the new Finalizing state.
          • +
          +

          Explanation

          +

Currently, the process of a referendum/poll is defined as a sequence within an ongoing state (where accounts can vote), comprising a preparation period, a decision period, and a confirm period. If the poll is passing before the decision period ends, it is possible to push forward to the confirm period and still go back in case the poll fails. Once the decision period ends, a failure of the poll in the confirm period will lead to the poll ultimately being rejected.

          +
stateDiagram-v2
    sb: Submission
    pp: Preparation Period
    dp: Decision Period
    cp: Confirmation Period
    state dpd <<choice>>
    state ps <<choice>>
    cf: Approved
    rj: Rejected

    [*] --> sb
    sb --> pp
    pp --> dp: decision period starts
    dp --> cp: poll is passing
    dp --> ps: decision period ends
    ps --> cp: poll is passing
    cp --> dpd: poll fails
    dpd --> dp: decision period not deadlined
    ps --> rj: poll is failing
    dpd --> rj: decision period deadlined
    cp --> cf
    cf --> [*]
    rj --> [*]
          +
          +

          This specification proposes three changes to implement this candle mechanism:

          +
            +
          1. +

            This mechanism MUST be enabled via a configuration parameter. Once enabled, the referenda system +MAY record the next poll ID from which to start enabling this mechanism. This is to preserve +backwards compatibility with currently ongoing polls.

            +
          2. +
          3. +

            A record of the poll status (whether it is passing or not) is stored once the decision period is +finished.

            +
          4. +
          5. +

Including a Finalization period as part of the ongoing state. The poll MUST be immutable from this point on.

            +

This period begins the moment the confirm period ends, and extends the decision for a couple of blocks, until the VRF seed used to determine the candle block can be considered "good enough", that is, not knowable before the ongoing period (decision/confirmation) was over.

            +

Once that happens, a random block within the confirm period is chosen, and the decision to approve or reject the poll is based on the status immediately before the block where the candle was "lit off" (see the sketch after this list).

            +
          6. +
          +
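A minimal sketch of the candle decision described above, assuming a helper passing_at(b) that returns the poll status recorded immediately before block b (all names are illustrative, not the pallet's actual API):

// Pick the "lit-off" block from a VRF seed revealed only after the
// confirm period ended (modulo bias is ignored for brevity).
fn candle_block(seed: [u8; 32], confirm_start: u32, confirm_end: u32) -> u32 {
    let span = u64::from(confirm_end - confirm_start + 1);
    let r = u64::from_le_bytes(seed[..8].try_into().expect("slice is 8 bytes"));
    confirm_start + (r % span) as u32
}

// Decide the poll from the status recorded just before the candle block.
fn decide(
    seed: [u8; 32],
    confirm_start: u32,
    confirm_end: u32,
    passing_at: impl Fn(u32) -> bool,
) -> bool {
    passing_at(candle_block(seed, confirm_start, confirm_end))
}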

          When enabled, the state diagram for the referenda system is the following:

          +
stateDiagram-v2
    sb: Submission
    pp: Preparation Period
    dp: Decision Period
    cp: Confirmation Period
    cds: Finalization
    state dpd <<choice>>
    state ps <<choice>>
    state cd <<choice>>
    cf: Approved
    rj: Rejected

    [*] --> sb
    sb --> pp
    pp --> dp: decision period starts
    dp --> cp: poll is passing
    ps --> cp: poll is passing
    dp --> ps: decision period ends
    ps --> rj: poll is failing
    cp --> dpd: poll fails
    dpd --> cp: decision period over
    dpd --> dp: decision period not over
    cp --> cds: confirmation period ends
    cds --> cd: define moment when candle lit-off
    cd --> cf: poll passed
    cd --> rj: poll failed
    cf --> [*]
    rj --> [*]
          +
          +

          Drawbacks

          +

This approach doesn't include a mechanism to determine whether a change of the poll status in the confirming period is due to a legitimate change of mind of the voters or an exploitation of the aforementioned vulnerabilities (like a sniping attack); instead, it treats all such changes as potential attacks.

          +

This is an issue that can be addressed by additional mechanisms and heuristics that help determine the probability that a change of poll status is the result of legitimate behaviour.

          +

          Testing, Security, and Privacy

          +

The implementation of this RFC will be tested on testnets (Paseo and Westend) first. Furthermore, it should be enabled on a canary network (like Kusama) to ensure the behaviours it is trying to address are indeed avoided.

          +

          An audit will be required to ensure the implementation doesn't introduce unwanted side effects.

          +

          There are no privacy related concerns.

          +

          Performance, Ergonomics, and Compatibility

          +

          Performance

          +

The added steps imply some pessimization, which is necessary to implement the expected changes. An implementation MUST exit from the Finalization period as early as possible to minimize this impact.

          +

          Ergonomics

          +

This proposal does not alter the interfaces already exposed to developers or end users. However, they must be aware of the additional overhead the new period might incur (this depends on the implemented VRF).

          +

          Compatibility

          +

This proposal does not break compatibility with existing interfaces or older versions, but it alters the previous implementation of the referendum processing algorithm.

          +

An acceptable upgrade strategy is to define a point in time (block number, poll index) from which to start applying the new mechanism, thus not affecting already ongoing referenda.

          +

          Prior Art and References

          + +

          Unresolved Questions

          +
            +
• How can we determine, in a statistically meaningful way, that a change in the poll status corresponds to organic behaviour rather than unwanted, malicious behaviour?
          • +
          + +

          A proposed implementation of this change can be seen on this Pull Request.

          +

          (source)

          +

          Table of Contents

          + +

          RFC-0124: Extrinsic version 5

          +
          + + + +
          Start Date18 October 2024
          DescriptionDefinition and specification of version 5 extrinsics
          AuthorsGeorge Pisaltu
          +
          +

          Summary

          +

          This RFC proposes the definition of version 5 extrinsics along with changes to the specification and +encoding from version 4.

          +

          Motivation

          +

          RFC84 +introduced the specification of General transactions, a new type of extrinsic besides the Signed +and Unsigned variants available previously in version 4. Additionally, +RFC99 +introduced versioning of transaction extensions through an extra byte in the extrinsic encoding. +Both of these changes require an extrinsic format version bump as both the semantics around +extensions as well as the actual encoding of extrinsics need to change to accommodate these new +features.

          +

          Stakeholders

          +
            +
          • Runtime users
          • +
          • Runtime devs
          • +
          • Wallet devs
          • +
          +

          Explanation

          +

          Changes to extrinsic authorization

          +

The introduction of General transactions allows the authorization of any and all origins through extensions. This means that, with the appropriate extension, General transactions can replicate the same behavior as present-day v4 Signed transactions. Specifically for Polkadot chains, an example implementation for such an extension is VerifySignature, introduced in the Transaction Extension PR3685. Other extensions can be inserted into the extension pipeline to authorize different custom origins. Therefore, a Signed extrinsic variant is redundant to a General one strictly in terms of user functionality and could eventually be deprecated and removed.

          +

          Encoding format for version 5

          +

          As with version 4, the encoded extrinsic v5 is a SCALE encoded vector of bytes (u8), therefore +starting with the encoded length of the following bytes in compact format. The leading byte after +the length determines the version and type of extrinsic, as specified by +RFC84. +For reasons mentioned above, this RFC removes the Signed variant for v5 extrinsics.

          +

          For Bare extrinsics, the following bytes will just be the encoded call and nothing else.

          +

          For General transactions, as stated in +RFC99, +an extension version byte must be added to the extrinsic format. This byte should allow runtimes to +expose more than one set of extensions which can be used for a transaction. As far as the v5 +extrinsic encoding is concerned, this extension byte should be encoded immediately after the leading +encoding byte. The extension version byte should be included in payloads to be signed by all +extensions configured by runtime devs to ensure a user's extension version choice cannot be altered +by third parties.

          +

          After the extension version byte, the extensions will be encoded next, followed by the call itself.

          +

          A quick visualization of the encoding:

          +
            +
          • Bare extrinsics: (extrinsic_encoded_len, 0b0000_0101, call)
          • +
• General transactions: (extrinsic_encoded_len, 0b0100_0101, extension_version_byte, extensions, call), as decoded in the sketch after this list
          • +
          +
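A hypothetical decoder for the leading byte, showing the split between the 2-bit extrinsic type and the 6-bit format version implied by the visualization above (constant and function names are illustrative, not normative):

const VERSION_MASK: u8 = 0b0011_1111;
const TYPE_MASK: u8 = 0b1100_0000;
const BARE: u8 = 0b0000_0000;
const GENERAL: u8 = 0b0100_0000;

// Split a v5 leading byte into (extrinsic type, format version).
fn decode_leading_byte(b: u8) -> Option<(&'static str, u8)> {
    let version = b & VERSION_MASK;
    match b & TYPE_MASK {
        BARE => Some(("Bare", version)),
        GENERAL => Some(("General", version)),
        // The Signed bit pattern is rejected starting with v5.
        _ => None,
    }
}

fn main() {
    assert_eq!(decode_leading_byte(0b0000_0101), Some(("Bare", 5)));
    assert_eq!(decode_leading_byte(0b0100_0101), Some(("General", 5)));
}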

          Signatures on Polkadot in General transactions

          +

          In order to run a transaction with a signed origin in extrinsic version 5, a user must create the +transaction with an instance of at least one extension responsible for authorizing Signed origins +with a provided signature.

          +

As stated before, PR3685 comes with a Transaction Extension which replicates the current Signed transactions in v5 extrinsics, namely VerifySignature. I will use this extension as an example of how to replicate current Signed transaction functionality in the new v5 extrinsic format, though the runtime logic is not constrained to this particular implementation.

          +

          This extension leverages the new inherited implication functionality introduced in +TransactionExtension and creates a payload to be signed using the data of all extensions after +itself in the extension pipeline. This extension can be configured to accept a MultiSignature, +which makes it compatible with all signature types currently used in Polkadot.

          +

          In the context of using an extension such as VerifySignature, for example, to replicate current +Signed transaction functionality, the steps to generate the payload to be signed would be:

          +
            +
          1. The extension version byte, call, extension and extension implicit should be encoded (by +"extension" and its implicit we mean only the data associated with extensions that follow this +one in the composite extension type);
          2. +
          3. The result of the encoding should then be hashed using the BLAKE2_256 hasher;
          4. +
          5. The result of the hash should then be signed with the signature type specified in the extension definition.
          6. +
          +
// Step 1: encode the bytes
let encoded = (extension_version_byte, call, transaction_extension, transaction_extension_implicit).encode();
// Step 2: hash them
let payload = blake2_256(&encoded[..]);
// Step 3: sign the payload
let signature = keyring.sign(&payload[..]);
          +

          Summary of changes in version 5

          +

          In order to minimize the number of changes to the extrinsic format version and also to help all +consumers downstream in the transition period between these extrinsic versions, we should:

          +
            +
          • Remove the Signed variant starting with v5 extrinsics
          • +
          • Add the General variant starting with v5 extrinsics
          • +
          • Enable runtimes to support both v4 and v5 extrinsics
          • +
          +

          Drawbacks

          +

          The metadata will have to accommodate two distinct extrinsic format versions at a given point in +time in order to provide the new functionality in a non-breaking way for users and tooling.

          +

          Although having to support multiple extrinsic versions in metadata involves extra work, the change +is ultimately an improvement to metadata and the extra functionality may be useful in other future +scenarios.

          +

          Testing, Security, and Privacy

          +

          There is no impact on testing, security or privacy.

          +

          Performance, Ergonomics, and Compatibility

          +

          This change makes the authorization through signatures configurable by runtime devs in version 5 +extrinsics, as opposed to version 4 where the signing payload algorithm and signatures were +hardcoded. This moves the responsibility of ensuring proper authentication through +TransactionExtension to the runtime devs, but a sensible default which closely resembles the +present day behavior will be provided in VerifySignature.

          +

          Performance

          +

          There is no performance impact.

          +

          Ergonomics

          +

          Tooling will have to adapt to be able to tell which authorization scheme is used by a particular +transaction by decoding the extension and checking which particular TransactionExtension in the +pipeline is enabled to do the origin authorization. Previously, this was done by simply checking +whether the transaction is signed or unsigned, as there was only one method of authentication.

          +

          Compatibility

          +

As long as extrinsic version 4 is still exposed in the metadata when version 5 is introduced, the changes will not break existing infrastructure. This should give enough time for tooling to support version 5 and for version 4 to be removed in the future.

          +

          Prior Art and References

          +

          This is a result of the work in Extrinsic +Horizon and +RFC99.

          +

          Unresolved Questions

          +

          None.

          + +

          Following this change, extrinsic version 5 will be introduced as part of the Extrinsic +Horizon effort, which will shape future +work.

          +

          (source)

          +

          Table of Contents

          + +

          RFC-0138: Election mechanism for invulnerable collators on system chains

          +
          + + + +
          Start Date28 January 2025
          DescriptionMechanism for electing invulnerable collators on system chains.
          AuthorsGeorge Pisaltu
          +
          +

          Summary

          +

          The current election mechanism for permissionless collators on system chains was introduced in +RFC-7. +This RFC proposes a mechanism to facilitate replacements in the invulnerable sets of system chains +by breaking down barriers that exist today.

          +

          Motivation

          +

          Following RFC-7 and the introduction of the collator election +mechanism, anyone can now collate on a system +chain on the permissionless slots, but the invulnerable set has been a contentious issue among +current collators on system chains as the path towards an invulnerable slot is almost impossible to +pursue. From a technical standpoint, nothing is preventing a permissionless collator, or anyone for +that matter, from submitting a referendum to remove one collator from the invulnerable set and add +themselves in their place. However, as it quickly becomes obvious, such a referendum would be very +difficult to pass under normal circumstances.

          +

          The first reason this would be contentious is that there is no significant difference between +collators with good performance. There is no reasonable way to keep track of arbitrary data on-chain +which could clearly and consistently distinguish between one collator or another. Collators that +perform well propose blocks when they are supposed to and that is what is being tracked on-chain. +Any other metrics for performance are arbitrary as far as the runtime logic is concerned and should +be reasoned upon by humans using public discussion and a referendum.

          +

          The second reason for this is the inherently social aspect of this action. Even just proposing the +referendum would be perceived as an attack on a specific collator in the set, singling them out, +when in reality the proposer likely just wants to be part of the set and doesn't necessarily care +who is kicked. In order to consolidate their position, the other invulnerables will rally behind the +one that was challenged and the bid to replace one invulnerable will probably fail.

          +

          Existing invulnerables have a vested interest in protecting any other invulnerable from such attacks +so that they themselves would be protected if need be. The existing collator set has already +demonstrated that they can work together and subvert the free market mechanism offered by the +runtime when they agreed to not outbid each other on permissionless slots after the new collator +selection mechanism was introduced.

          +

          The existing invulnerable set on a given system chain are there for a reason; they have demonstrated +reliability in the past and were rewarded by governance with invulnerable slots and a bounty to +cover their expenses. This means they have a solid reputation and a strong say in governance over +matters related to collation. The optics of a permissionless collator actively challenging an +invulnerable, even when it's justified, combined with the support of other invulnerables, make the +invulnerable set de facto immutable.

          +

          While there should be strong guarantees of stability for invulnerables, they should not be a closed +circle. The aim of this RFC is to provide a clear, reasonable, fair, and socially acceptable path +for a permissionless collator with a proven track record to become an invulnerable while preserving +the stability of the invulnerable set of a system parachain.

          +

          Stakeholders

          +
            +
          • Infrastructure providers (people who run validator/collator nodes)
          • +
          • Polkadot Treasury
          • +
          +

          Explanation

          +

          Proposal

          +

          This RFC proposes a periodic, mandatory, round-robin, two-round election mechanism for +invulnerables.

          +

          How it works

          +

          The election should be implemented on top of the current logic in the collator-selection pallet. +In this mechanism, candidates would register for the first round of the next election by placing +deposits.

          +

          When the period between elections passes, the first round of the election starts with every +candidate that registered, excluding the incumbent, as an option on the ballot. Votes should be +expressed using tokens which should not be available for other transactions while the election is +ongoing in order to introduce some opportunity cost to voting. After a certain amount of time +passes, the election closes and the candidate who wins the first round of the election advances to +the second and final round of the election. The deposits held for voting in the first round must be +released before the second round.

          +

          In the second round of the election, the winner of the first round has the chance to replace the +invulnerable currently holding the slot. A referendum is submitted to replace the incumbent with the +winner of the first round of the election, turning the second round of the election into a +conviction-voting compatible referendum. If the referendum fails, the incumbent keeps their slot.

          +

          The period between elections should be configurable at the collator-selection pallet level. A full +election cycle ends when the pallet held an election for every single invulnerable slot. To qualify +for the ballot, candidates must have been collating for at least one period from a permissionless +slot or be the incumbent.

          +

          Motivations behind the particularities of this mechanism

          +
            +
          • Round-robin - It is not desirable to allow any election of the entire invulnerable set at once +because the main purpose of invulnerables is to ensure the stability, reliability and liveness of +the parachain. It is safer to change them one by one and, in case mistakes happen, governance has +time to react without endangering the liveness of any chain.
          • +
          • Two-round voting - it's useful to separate the election process into two distinct steps: the +first, less important step of determining the challenger at the pallet level through deposits; the +second, more important step of actually trying to replace the invulnerable by referendum, which is +the same mechanism the invulnerable used to acquire the slot in the first place. It is not so +important who is trying to replace the incumbent as long as they meet the requirements and they +have a clear way to get to the second round of the election.
          • +
          • Mandatory - The runtime, not any particular individual, is actively pushing the invulnerables to +convince people that they not only deserve to keep their invulnerable slots, but that they deserve +it more than any of the other candidates that registered; the rules of the chain enforce this +mechanism so no blame or ill-intent can be attributed to other individuals.
          • +
          • Periodic - In order to provide a reasonable path towards an invulnerable slot, no seat can be +permanent and should be challenged periodically.
          • +
          • Ballot qualification - Any invulnerable collator must have a proven track record as a collator, so +allowing only current permissionless collators to run against the current invulnerable minimizes +the chance of human error by restricting the number of incompatible choices.
          • +
          +

          Corner cases

          +
            +
          • If no candidate registers for an election, the slot will become empty, unless the number of +collators is lower than the minimum number allowed by the pallet configuration, defined in +MinEligibleCollators.
          • +
• In case of a tie between the first and second positions, the candidate that registered first wins the election (see the sketch after this list).
          • +
          • In case no collator registers or qualifies for the first round of the election, the incumbent is +automatically granted the win and gets to keep the invulnerable slot.
          • +
          +
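The first-round rules, including the tie-break and the empty-ballot case, can be modelled with a small function; the sketch below is illustrative only and does not reflect the collator-selection pallet's actual types:

#[derive(Clone)]
struct Candidate {
    who: &'static str,
    votes: u128,        // tokens locked in favor of this candidate
    registered_at: u32, // registration block, used to break ties
}

// First-round winner: most votes, ties broken by earliest registration.
// Returns None when nobody registered, in which case the incumbent keeps
// the slot automatically.
fn first_round_winner(mut candidates: Vec<Candidate>) -> Option<Candidate> {
    candidates.sort_by(|a, b| {
        b.votes
            .cmp(&a.votes)
            .then(a.registered_at.cmp(&b.registered_at))
    });
    candidates.into_iter().next()
}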

          Drawbacks

          +

          The first major drawback of this proposal is that it would put more responsibility on governance by +having people vote regularly in order to maintain the invulnerable collator set on each chain. Today +the collator-selection pallet employs a fire-and-forget system where the invulnerables are chosen +once by governance vote. Although in theory governance can always intervene to elect new +invulnerables, for the reasons stated in this RFC this is not the case in practice. Moving away from +this system means more action is needed from governance to ensure the stability of the invulnerable +collator sets on each system chain, which automatically increases the probability of errors. +However, governance is the ultimate source of truth on-chain and there is a lot more at stake in the +hands of governance than the invulnerable collator sets on system chains, so I think this risk is +acceptable.

          +

The second drawback of this proposal is the imperfect voting mechanism. Probably the simplest and fairest voting system for this scenario would have been First Past the Post, where all candidates participate in a single election round and the candidate with the most votes wins the election outright. However, the downside of such a system is the technical complexity of running such an election on-chain. This election mechanism would require a multiple choice referendum implementation in the collator-selection pallet or at the system level somewhere else (e.g. on the Collectives chain), which would be a mix between the conviction-voting and staking pallets and would possibly communicate with all system chains via XCM. While this voting system could be useful in other contexts as well, I don't think it's worth conditioning the invulnerable collator redesign on a separate implementation of the multiple choice voting system when the proposed Two-Round System achieves the objectives of this RFC.

          +

          Testing, Security, and Privacy

          +

          All election mechanisms as well as corner cases can be covered with unit tests.

          +

          Performance, Ergonomics, and Compatibility

          +

          Performance

          +

          The chain will have to run extrinsics to start and end elections periodically, but the impact in +terms of weight and PoV size is negligible.

          +

          Ergonomics

          +

          The invulnerables will be the most affected group, as they will have to now compete in elections +periodically to secure their spots. Permissionless candidates will now have a clear, though not +guaranteed, path towards becoming an invulnerable, at least for a period of time.

          +

          Compatibility

          +

          Any changes to the election mechanism of invulnerables should be compatible with the current +invulnerable set interaction with the collator set chosen at the session boundary. The current +invulnerable set for each chain can be grandfathered in when upgrading the collator-selection +pallet version.

          +

          Prior Art and References

          +

          This RFC builds on RFC-7, which introduced the election mechanism for system chain collators.

          +

          Unresolved Questions

          +
            +
          • How long should the period between individual elections be? How long should the full election +cycle be? +
              +
            • There should be a bit more than one month between individual elections, so that if there are 5 +invulnerables on system chains, a full election cycle would take 6 months.
            • +
            +
          • +
          • How long should the voting stay open? +
              +
            • It probably should just be a fixed period (e.g. 1 week) or maybe it can be the entire period +before the next election begins.
            • +
            +
          • +
          + +

          The main spinoff of this RFC might be a multiple choice poll implementation in a separate pallet to +hold a First Past the Post election instead of the Two-Round System proposed, which would prompt a +migration to the new voting system within the collator-selection pallet. Additionally, a more +complex solution where the voting for all system chains happens in a single place which then sends +XCM responses with election results back to system chains can be implemented in the next iteration +of this RFC.

          +

          (source)

          +

          Table of Contents

          + +

          RFC-0152: Decentralized Convex-Preference Coretime Market for Polkadot

          +
          + + + + +
          Start Date2025-06-30
          DescriptionThis RFC proposes a decentralized market mechanism for allocating Coretime on Polkadot, replacing the existing Dutch auction method (RFC17). The proposed model leverages convex preference interactions among agents, eliminating explicit bidding and centralized price determination. This ensures fairness, transparency, and decentralization.
Conflicts-WithRFC-0017
          AuthorsDiego Correa Tristain algoritmia@labormedia.cl
          +
          +

          Summary

          +

          This RFC proposes a decentralized market mechanism for allocating Coretime on Polkadot, replacing the existing Dutch auction method (RFC17). The proposed model leverages convex preference interactions among agents, eliminating explicit bidding and centralized price determination. This ensures fairness, transparency, and decentralization.

          +

          Motivation

          +

          The current auction-based model (RFC17) presents critical issues:

          +
            +
          • +

            Front-running and timing asymmetry: Actors with superior infrastructure or timing strategies possess unfair advantages.

            +
          • +
          • +

            Complexity and cognitive overhead: Auctions pose challenges for participant comprehension and effective engagement.

            +
          • +
          • +

            Resource hoarding and inefficiency: Auctions allow strategic actors to monopolize resources, restricting equitable participation.

            +
          • +
          +

The decentralized convex-preference model addresses these issues by facilitating asynchronous, equitable, and transparent access before state coordination, and deterministic verifiability during and after protocol consensus.

          +

          Stakeholders

          +

          Primary set of stakeholders are:

          +
            +
          • Parachain Teams & Developers
          • +
          • Governance Bodies (Polkadot Fellowship, Polkadot Governance, Technical Committees)
          • +
          • Core Developers & Runtime Engineers
          • +
          • Application Builders / Smart Contract Developers
          • +
          • End Users of Polkadot Ecosystem dApps
          • +
          • Token Holders & Investors
          • +
          • Researchers / Economists / Protocol Designers
          • +
          • Communication Hubs (e.g., The Kusamarian, Polkadot Forum Moderators, Ecosystem Ambassadors)
          • +
          +

          Explanation

          +

          Guide-Level Explanation

          +

          Agents participating in the Coretime market (such as parachains, parathreads, or smart contracts) declare two parameters:

          +
            +
          • +

            Asset Holdings: Their initial allocation of Coretime and tokens (e.g., DOT).

            +
          • +
          • +

            Preference Parameter (α): A scalar value between 0 and 1 indicating their valuation preference between Coretime and tokens.

            +
          • +
          +

These parameters are recorded transparently on-chain. Transactions between agents are conducted through deterministic convex optimizations, ensuring local Pareto-optimal exchanges. A global equilibrium price naturally emerges from these local interactions without any centralized authority or external pricing mechanism (Tristain, 2024).

          +

          Reference-Level Explanation

          +

          Economic Model

          +

          Agents' preferences are represented using a Cobb-Douglas utility function:

          +

          $U_i(x, y) = x^{α_i} y^{1-α_i}$

          +

          where:

          +
            +
          • $x$ represents the quantity of Coretime.
          • +
          • $y$ represents the quantity of tokens.
          • +
          • $α_i \in [0,1]$ is the scalar preference parameter.
          • +
          +
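For intuition, Cobb-Douglas preferences give the market-clearing relative price of Coretime a closed form, $p^* = \frac{\sum_i α_i y_i}{\sum_i (1 - α_i) x_i}$, which follows from each agent demanding $x_i^* = α_i (p x_i + y_i) / p$ and the Coretime market clearing. The sketch below computes it for a toy set of agents; floating point is used for readability only, whereas an on-chain implementation would need the deterministic arithmetic discussed under Testing, Security, and Privacy:

struct Agent {
    alpha: f64, // preference for Coretime, in [0, 1]
    x: f64,     // Coretime endowment
    y: f64,     // token endowment
}

// Market-clearing price of Coretime in tokens:
// p* = sum(alpha_i * y_i) / sum((1 - alpha_i) * x_i).
fn equilibrium_price(agents: &[Agent]) -> f64 {
    let num: f64 = agents.iter().map(|a| a.alpha * a.y).sum();
    let den: f64 = agents.iter().map(|a| (1.0 - a.alpha) * a.x).sum();
    num / den
}

fn main() {
    let agents = [
        Agent { alpha: 0.7, x: 1.0, y: 50.0 },
        Agent { alpha: 0.2, x: 4.0, y: 10.0 },
    ];
    println!("p* = {:.4}", equilibrium_price(&agents));
}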

          Mechanism Implementation

          +

          Implementation involves the following components:

          +
            +
          1. Preference Declaration: Agents MUST explicitly register their scalar preference (α) and initial asset holdings on-chain.
          2. +
          3. Interaction Module: A dedicated runtime pallet or smart contract SHOULD manage interactions, ensuring Pareto-optimal deterministic outcomes.
          4. +
5. Convergence Enforcement: Interaction ordering MUST follow a deterministic protocol prioritizing transactions that significantly enhance price convergence, sequencing from higher to lower exchange ratios.
          6. +
          7. On-chain Verifiability: Transaction histories and convergence processes MUST be transparently auditable and verifiable on-chain.
          8. +
          +

          Example Flow Diagram

          +
          Preference & Asset Declaration → Paired-exchange Convex Optimization → Interaction Ordering (High-to-Low Exchange Impact) → Global Price Convergence → On-chain Auditability
          +
          +

          Drawbacks

          +

          Performance

          +
            +
          • Initial implementation complexity due to the introduction of a new runtime module.
          • +
          +

          User Experience

          +
            +
          • User education and UI development required for scalar preference parameter comprehension.
          • +
          +

          Governance Burden

          +
            +
          • Additional review and audit complexity due to innovative economic logic.
          • +
          +

          Testing, Security, and Privacy

          +

          The implementation of this decentralized convex-preference Coretime market mechanism demands particular care in maintaining determinism, accuracy, and security in all on-chain interactions. Key considerations include:

          +

          Precision and Determinism in Arithmetic

          +
            +
          • +

            The proposed mechanism relies on convex optimization over continuous variables, which REQUIRES floating-point arithmetic or high-precision fixed-point alternatives.

            +
          • +
          • +

To ensure deterministic behavior across all nodes, arithmetic operations MUST be implemented using deterministic libraries or Wasm-compatible fixed-point math, avoiding non-deterministic floating-point behavior across architectures (see the sketch after this list).

            +
          • +
          • +

            Verifiability of Pareto-optimal outcomes across interactions MUST be reproducible and provable, potentially leveraging range-limited arithmetic or bounded rational approximations for optimization solvers.

            +
          • +
          +
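As a concrete example of the determinism requirement, the earlier toy equilibrium-price computation can be restated in integer arithmetic with α expressed in parts per billion; the division-by-zero guard also covers the α-close-to-1 edge case raised in the Security subsection below. This is illustrative only:

const ONE: u128 = 1_000_000_000; // fixed-point scale: parts per billion

struct Agent {
    alpha_ppb: u128, // preference in [0, ONE]
    x: u128,         // Coretime endowment
    y: u128,         // token endowment
}

// Deterministic integer variant of the equilibrium price, scaled by ONE.
// Returns None when the denominator is zero (every alpha equal to ONE).
// Overflow handling is omitted for brevity; a real implementation would
// use checked or wide arithmetic throughout.
fn equilibrium_price_ppb(agents: &[Agent]) -> Option<u128> {
    let num: u128 = agents.iter().map(|a| a.alpha_ppb * a.y).sum();
    let den: u128 = agents.iter().map(|a| (ONE - a.alpha_ppb) * a.x).sum();
    (den != 0).then(|| num * ONE / den)
}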

          Security

          +
            +
          • +

            Preference declarations and asset holdings MUST be immutably recorded on-chain, subject to strict validation and input constraints to prevent manipulation.

            +
          • +
          • +

            The optimization process MUST prevent overflow, underflow, or division-by-zero attacks in edge-case preference combinations (e.g., α close to 0 or 1).

            +
          • +
          • +

            Any deterministic interaction ordering logic MUST be auditable and resistant to manipulation or reordering incentives by privileged actors.

            +
          • +
          +

          Privacy

          +
            +
          • +

            Although the model emphasizes transparency and verifiability, it MAY be beneficial in future iterations to support privacy-preserving preference commitment schemes (e.g., via homomorphic encryption or zero-knowledge commitments).

            +
          • +
          • +

            This MAY allow agents to express preferences without revealing them publicly, while still enabling fair participation and on-chain verification.

            +
          • +
          +

          Testing and Recommendations

          +
            +
          • +

            Simulation of multiple interacting agents with heterogeneous preferences and randomized initial allocations SHOULD be used to validate global convergence and equilibrium behavior.

            +
          • +
          • +

            Fuzz testing and symbolic execution SHOULD be applied to the interaction module to identify corner cases in the optimization pipeline.

            +
          • +
          • +

            Formal verification of convergence routines and boundedness of the optimization space is RECOMMENDED for high-assurance deployments.

            +
          • +
          +

          Performance, Ergonomics, and Compatibility

          +

The proposed mechanism leads to a more fluid, computation-bound system where efficiency stems from algorithmic design and verification speed, not from externally imposed timing constraints. Compatibility with existing Substrate pallets can be explored through modular implementation.

          +

          Performance

          +

          The system's performance depends on the availability of computational resources, not on arbitrary time windows or rounds. Price discovery and convergence are calculated as fast as the system can process the deterministic interaction rules. Pair-wise interactions can be batched and accumulated asynchronously. This enhances real-time responsiveness while removing artificial scheduling constraints.

          +

          Ergonomics

          +

          Agents only need to express a simple scalar preference and their token/Coretime holdings, removing cognitive complexity. This lightweight interaction model improves usability, especially for smaller participants.

          +

          Compatibility

          +

          The mechanism is fully compatible with asynchronous execution architectures. Because it relies on deterministic local state transitions, it integrates seamlessly with Byzantine fault-tolerant consensus protocols and supports scalable, decentralized implementations.

          +

          Prior Art and References

          +

          RFC-1

          +

          Initial Forum Discussion (superseded): Invitation to Critically Evaluate Core Time Pricing Model Framework

          +

          RFC Draft Proposal Preliminary Forum Thread: RFC: Decentralized Convex-Preference Coretime Market for Polkadot Draft

          +

          "Emergent Properties of Distributed Agents with Two-Stage Convex Zero-Sum Optimal Exchange Network": Tristain, 2024

          +

          Personally, I want to express special gratitude to Edmundo Beteta for introducing me to Microeconomic Theory and guiding my curiosity at the Faculty of Economics and Administration, Universidad de Chile.

          +

          Unresolved Questions

          +
            +
          • +

            Optimal method for initial rollout (experimental sandbox vs. partial deployment on Polkadot).

            +
          • +
          • +

            OPTIONAL criteria and heuristics for deterministic interaction ordering.

            +
          • +
          +

          Future Directions and Related Material

          +
            +
          • +

            Extend the model to support multi-asset allocations with additional priority mechanisms.

            +
          • +
          • +

            Apply similar decentralized convex-preference principles to broader decentralized resource allocation challenges (e.g. JAM, energy/resource coordination, price stabilization).

            +
          • +
          +

          (source)

          +

          Table of Contents

          + +

          RFC-0154: AURA Multi-Slot Collation

          +
          + + + +
          Start Date25th of August 2025
          DescriptionMulti-Slot AURA for System Parachains
          Authorsbhargavbh, burdges, AlistairStewart
          +
          +

          Summary

          +

          This RFC proposes a modification to the AURA round-robin block production mechanism for system parachains (e.g. Polkadot Hub). The proposed change increases the number of consecutive block production slots assigned to each collator from the current single-slot allocation to a configurable value, initially set at four. This modification aims to enhance censorship resistance by mitigating data-withholding attacks.

          +

          Motivation

          +

          The Polkadot Relay Chain guarantees the safety of parachain blocks, but it does not provide explicit guarantees for liveness or censorship resistance. With the planned migration of core Relay Chain functionalities (such as Balances, Staking, and Governance) to the Polkadot Hub system parachain in early November 2025, it becomes critical to establish a mechanism for achieving censorship resistance for these parachains without compromising throughput. For example, if governance functionality is migrated to Polkadot Hub, malicious collators could systematically censor aye votes for a Relay Chain runtime upgrade, potentially altering the referendum's outcome. This demonstrates that censorship attacks on a system parachain can have a direct and undesirable impact on the security of the Relay Chain. This proposal addresses such censorship vulnerabilities by modifying the AURA block production mechanism utilized by system parachain collators, with minimal honesty assumptions on the collators.

          +

          Stakeholders

          +
            +
          • Collators: Operators responsible for block production on the Polkadot Hub and other system parachains.
          • +
          • Users and Applications: Entities that interact with the Polkadot Hub or other system parachains.
          • +
          +

          Threat Model

          +

          This analysis of censorship resistance for AURA-based parachains operates under the following assumptions:

          +
            +
          • +

            Collator Honesty: The model assumes the presence of at least one honest collator. We intentionally chose the most relaxed security assumption as collators are not slashable (unlike validators). Note that all system parachains use AURA via the Aura-Ext pallet.

            +
          • +
          • +

            Backer Honesty: The backer assigned to a block candidate is assumed to be honest. This is a reasonable assumption given 2/3rds honesty on the relay chain and the fact that backers are assigned randomly by ELVES. Additionally, we assume that backers are responsible for disbursing the withheld block to the victim collators. Pre-PVFs can help improve the resilience of backers against DoS attacks: essentially, the pre-PVF lets backers check slot ownership, so backers can filter out spamming collators at this stage. However, pre-PVFs have not yet been implemented. The stronger assumption of the backer disbursing the block is only needed for efficiency and is not essential for censorship resistance itself (i.e. the collator can always reconstruct from the availability layer).

            +
          • +
          • +

            Availability Layer: We also assume that the availability layer is robust and that a collator can fetch the latest parablock (header and body) directly from the availability layer (or the backer) in a reasonable time, i.e., <6s from the backer and <18s from the availability layer provided by ELVES.

            +
          • +
          • +

            Scope: We focus mainly on honest collators' ability to produce blocks and get them backed, rather than on censorship at the transaction level. Ideally, we want to achieve the property that honest collators eventually get their blocks backed, even if with a slight delay (and to provide a provable bound on this delay).

            +
          • +
          +

          Proposed Changes

          +

          The current AURA mechanism, which assigns a single block production slot per collator, is vulnerable to data-withholding attacks. A malicious collator can strategically produce a block and then selectively withhold it from subsequent collators. This can prevent honest collators from building their blocks in a timely manner, effectively censoring their block production.

          +

          Illustrative Attack Scenario:

          +

          Consider 3 collators A, B and C assigned to consecutive slots by the AURA mechanism. If A and C conspire to censor collator B, i.e., to prevent B's block from being backed, they can execute the following attack: A produces block $b_A$ and submits it to the backers, but selectively withholds $b_A$ from B. C then builds on top of $b_A$ and gets its block in before B can recover $b_A$ from the availability layer and build on top of it.

          +

          Proposed Solution

          +

          This proposal modifies the AURA round-robin mechanism to assign $x$ consecutive slots to each collator. The specific value of $x$ is contingent upon the asynchronous backing parameters of the system parachain and will be derived using a generic formula provided in this document. The collator selected by AURA will be responsible for producing $x$ consecutive blocks. This modification will require corresponding adjustments to the AURA authorship checks within the PVF (Parachain Validation Function); a sketch of the modified selection rule follows. For the current configuration of Polkadot Hub, $x=4$.

          +
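
          As a sketch of the rule change (names and signature are illustrative, not the actual Aura-Ext code): classic AURA selects authorities[slot % n], while the multi-slot variant first groups slots into windows of $x$:

          ```rust
          /// Illustrative sketch of multi-slot author selection: x consecutive slots
          /// map to the same collator by dividing the slot index into windows of x
          /// before taking the round-robin modulus. Assumes a non-empty authority
          /// set and x >= 1.
          fn expected_author<Id>(authorities: &[Id], slot: u64, x: u64) -> &Id {
              let window = slot / x; // which x-slot window this slot belongs to
              &authorities[(window % authorities.len() as u64) as usize]
          }
          ```

          The PVF-side authorship check would apply the same windowed rule when validating a candidate's claimed slot.

          +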

          Analysis

          +

          The number of consecutive slots to be assigned to ensure AURA's censorship resistance depends on Async Backing Parameters like unincluded_segment_length. We now describe our approach for deriving $x$ based on the parameters of async backing and other variables such as block production time and the latency of the availability layer. The relevant values can then be plugged in to obtain $x$ for any system parachain.

          +

          Clearly, the number of consecutive slots ($x$) in the round-robin is lower bounded by the time required to reconstruct the previous block from the availability layer ($b$) plus the block building time ($a$), expressed in slots. Hence, we need to set $x$ such that $x\geq a+b$. But with async backing, a malicious collator can sequentially withhold each of its blocks and front-run the honest collator just in time, repeating this for every block of the unincluded segment. Hence, $x\geq (a+b)\cdot m$ is sufficient, where $m$ is the maximum allowed candidate depth (the allowed unincluded segment).

          +

          Independently, there is a check on the relay chain, in verify_backed_candidates, which filters out parablocks anchoring to very old relay_parents. Any parablock anchored to a relay parent older than the oldest element in allowed_relay_parents gets rejected. Hence, the malicious collator cannot front-run and censor the subsequent collator after this delay, as the parablock is no longer valid. The update of allowed_relay_parents occurs in process_inherent_data, where the buffer length of AllowedRelayParents is set by the scheduler parameter lookahead (set to 3 by default). Therefore, the async-backing delay (asyncdelay) tolerated by the relay chain backers is $3 \times 6s = 18s$. Hence, it suffices for the number of consecutive slots to meet the smaller of the two bounds:

          +

          $$x \geq \min\left((a+b)\cdot m,\; a + b + \mathit{asyncdelay}\right)$$

          +

          where $m$ is the max_candidate_depth (or the unincluded segment as seen from the collator's perspective).

          +

          Number of consecutive slots for Polkadot Hub

          +

          Assuming the previous block data can be fetched from backers, we comfortably have $a+b \leq 6s$, i.e. block building plus reconstruction time is under 6s. Using the current asyncdelay of 18s, it then suffices to set $x$ to 4. If the max_candidate_depth ($m$) for Polkadot Hub is set to $m\leq3$, this will reduce (improve) $x$ from 4 to $m$. Note that a channel would have to be provided for collators to fetch blocks from backers as the preferred option, with recovery from the availability layer only as the fail-safe option.

          +
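
          The bound can be checked numerically; the sketch below implements the formula with all inputs in seconds, rounding up to whole 6-second slots (function and parameter names are illustrative):

          ```rust
          /// x >= min((a + b) * m, a + b + asyncdelay), converted to whole slots.
          fn consecutive_slots(a_plus_b: u64, async_delay: u64, m: u64) -> u64 {
              const SLOT_SECS: u64 = 6;
              let withholding_bound = a_plus_b * m;     // (a + b) * m
              let relay_bound = a_plus_b + async_delay; // a + b + asyncdelay
              withholding_bound.min(relay_bound).div_ceil(SLOT_SECS)
          }

          // With the Polkadot Hub figures quoted above (a + b = 6s, asyncdelay = 18s,
          // and m = 4 assumed): min(24, 24) / 6 = 4 slots, matching x = 4.
          ```

          +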

          Performance, Ergonomics, and Compatibility

          +

          The proposed changes are security critical and mitigate censorship attacks on core functionality like balances, staking and governance on Polkadot Hub. +This approach is compatible with the Slot-Based collation and the currently deployed FixedVelocityConsensusHook. Further analysis is needed to integrate with custom ConsensusHooks that leverage Elastic Scaling.

          +

          Multi-slot collation is, however, vulnerable to liveness attacks: adversarial collators can stall the chain by not showing up, but they then also lose out on block production rewards. The number of missed blocks due to collators skipping slots is the same as in the current implementation; only the distribution of missed slots changes (they are chunked together instead of being evenly distributed). Secondly, when the ratio of adversarial (censoring) collators $\alpha$ is high (close to 1), the ratio of uncensored blocks to all blocks produced drops to $(1-\alpha)/(x\alpha)$. For more practical lower values of $\alpha<1/4$, the ratio of uncensored to all blocks is almost 1.

          +

          The latency for backing of blocks is affected as follows:

          +
            +
          • Censored Blocks: a delay of $(x-1)\times 6s$, compared to the blocks being indefinitely censored otherwise. $x$ is the number of consecutive slots per collator.
          • +
          • An adversarial collator not showing up can slow the chain by $x\times 6s$ instead of $6s$. This is, however, not an economically rational attack, as incentives for collating are paid retrospectively.
          • +
          +

          Effective multi-slot collation requires that collators be able to prioritize transactions that have been targeted for censorship. The implementation should incorporate a framework for priority transactions (e.g., governance votes, election extrinsics) to ensure that such transactions are included in the uncensored blocks.

          +
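
          A hypothetical shape for such a framework (none of these names exist in the runtime today):

          ```rust
          /// Hypothetical tagging: runtime engineers mark extrinsic classes that are
          /// likely censorship targets (governance votes, election extrinsics, ...).
          #[derive(Clone, Copy, PartialEq, Eq)]
          enum CensorshipPriority {
              Normal,
              Priority,
          }

          /// The collator's block builder drains priority transactions first when it
          /// finally gets to build an uncensored block.
          fn order_for_block<T>(pool: Vec<(CensorshipPriority, T)>) -> Vec<T> {
              let (priority, normal): (Vec<_>, Vec<_>) = pool
                  .into_iter()
                  .partition(|(p, _)| *p == CensorshipPriority::Priority);
              priority.into_iter().chain(normal).map(|(_, tx)| tx).collect()
          }
          ```

          +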

          Prior Art and References

          +

          This RFC is related to RFC-7, which details the selection mechanism for System Parachain Collators. In general, a more robust collator selection mechanism that reduces the proportion of malicious actors would directly benefit the effectiveness of the ideas presented in this RFC.

          +

          Future Directions

          +

          A resilient mechanism is needed for prioritising transactions in block production for collators that are actively targeted for censorship. There are two potential approaches:

          +
            +
          • One approach is to categorise which transactions or extrinsics are more likely to be censored and should be considered priority. This would allow an honest collator to maximize the utility of its consecutive block production slots and prioritise them when building the uncensored block. While this is dependent on the specific parachain's functionality, a generic framework would be beneficial for runtime engineers to tag relevant transaction types. However, if there exist transactions which are cheap and high priority (e.g. a governance vote), this approach is not ideal as it lets an adversary spam the collators with cheap high-priority transactions.
          • +
          • Alternatively, one could design a robust tipping mechanism where transactions actively being censored would have to pay a higher tip to get themselves included. Even if the adversary initiates a bidding war, since 100% of the tip is forwarded to the collator, it only increases the collator's revenue, further incentivising it to remain honest. A careful analysis of such an incentive mechanism is required; however, it is beyond the scope of this RFC.
          • +
          +

          (source)

          +

          Table of Contents

          + +

          RFC-TODO: Stale Nomination Reward Curve

          +
          + + + +
          Start Date10 July 2024
          DescriptionIntroduce a decaying reward curve for stale nominations in staking.
          AuthorsShawn Tabrizi
          +
          +

          Summary

          +

          This is a proposal to reduce the impact of stale nominations in the Polkadot staking system. With this proposal, nominators are incentivized to update or renew their selected validators once per time period. Nominators that do not update or renew their selected validators would be considered stale, and a decaying multiplier would be applied to their nominations, reducing the weight of their nomination and rewards.

          +
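
          The RFC does not yet specify the curve itself; purely as a hypothetical illustration of a decaying multiplier, a linear decay after a grace period could look like this (every constant below is assumed):

          ```rust
          /// Hypothetical decay curve (nothing here is prescribed by the RFC):
          /// nominations keep full weight for a grace period of eras, then lose
          /// weight linearly. The multiplier is in permill (1_000 = full weight).
          fn stale_multiplier_permill(eras_since_renewal: u32) -> u32 {
              const GRACE_ERAS: u32 = 84;    // assumed grace period before decay starts
              const DECAY_PER_ERA: u32 = 10; // assumed 1% of weight lost per stale era
              let stale_eras = eras_since_renewal.saturating_sub(GRACE_ERAS);
              1_000u32.saturating_sub(stale_eras.saturating_mul(DECAY_PER_ERA))
          }
          ```

          +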

          Motivation

          +

          Longer motivation behind the content of the RFC, presented as a combination of both problems and requirements for the solution.

          +

          One of Polkadot's primary utilities is providing a high quality security layer for applications built on top of it. To achieve this, Polkadot runs a Nominated Proof-of-Stake system, allowing nominators to vote on who they think are the best validators for Polkadot.

          +

          This system functions best when nominators and validators are active participants in the network. Nominators should consistently evaluate the quality and preferences of validators, and adjust their nominations accordingly.

          +

          Unfortunately, many Polkadot nominators do not play an active role in the NPoS system. Many set their nominations and then seldom look back at them.

          +

          This can lead to many negative behaviors:

          +
            +
          • Incumbents who received early nominations basically achieve tenure.
          • +
          • Validator quality and performance can decrease without recourse.
          • +
          • The validator set is not optimal for Polkadot.
          • +
          • New validators have a harder time entering the active set.
          • +
          • Validators are able to "sneakily" increase their commission.
          • +
          +

          Stakeholders

          +

          Primary stakeholders are:

          +
            +
          • Nominators
          • +
          • Validators
          • +
          +

          Explanation

          +

          Detail-heavy explanation of the RFC, suitable for explanation to an implementer of the changeset. This should address corner cases in detail and provide justification behind decisions, and provide rationale for how the design meets the solution requirements.

          +

          Drawbacks

          +

          Description of recognized drawbacks to the approach given in the RFC. Non-exhaustively, drawbacks relating to performance, ergonomics, user experience, security, or privacy.

          +

          Testing, Security, and Privacy

          +

          Describe the the impact of the proposal on these three high-importance areas - how implementations can be tested for adherence, effects that the proposal has on security and privacy per-se, as well as any possible implementation pitfalls which should be clearly avoided.

          +

          Performance, Ergonomics, and Compatibility

          +

          Describe the impact of the proposal on the exposed functionality of Polkadot.

          +

          Performance

          +

          Is this an optimization or a necessary pessimization? What steps have been taken to minimize additional overhead?

          +

          Ergonomics

          +

          If the proposal alters exposed interfaces to developers or end-users, which types of usage patterns have been optimized for?

          +

          Compatibility

          +

          Does this proposal break compatibility with existing interfaces, older versions of implementations? Summarize necessary migrations or upgrade strategies, if any.

          +

          Prior Art and References

          +

          Provide references to either prior art or other relevant research for the submitted design.

          +

          Unresolved Questions

          +

          Provide specific questions to discuss and address before the RFC is voted on by the Fellowship. This should include, for example, alternatives to aspects of the proposed design where the appropriate trade-off to make is unclear.

          +

          Future Directions and Related Material

          +

          Describe future work which could be enabled by this RFC, if it were accepted, as well as related RFCs. This is a place to brain-dump and explore possibilities, which themselves may become their own RFCs.

          +

          (source)

          +

          Table of Contents

          + +

          RFC-XXXX: Adding customized mandatory context to proof of possession statement

          +
          + + + +
          Start Date20 May 2025
          DescriptionChange SessionKeys runtime API to generate a customized ownership proof for each crypto type
          AuthorsAndrew Berger - Syed Hosseini
          +
          +

          Summary

          +

          This RFC is an amendment to RFC-0048. It proposes changing OpaqueKeysInner::create_ownership_proof and OpaqueKeys::ownership_proof_is_valid to invoke generation and validation procedures specific to each crypto type. This enables different crypto schemes to implement proof of possession that fits their security needs. In short, this RFC delegates the procedure of generating and validating proof of possession to the crypto schemes themselves, rather than dictating a uniform generation and verification procedure.

          +

          Motivation

          +

          Following RFC-0048, every submitted key is accompanied by a signature of the account_id by that same key, proving that the submitter knows the private key corresponding to the submitted key. However, a scheme should mandate a context-specific approach for generating proof of possession, and a different context for signing anything else, to prevent rogue-key attacks [3]. While this is critical for schemes with aggregatable public keys, the other (non-aggregatable) crypto schemes opt for backward compatibility and accept signatures not prepended with a mandatory context.

          +

          However, the current RFC does not allow using different API calls and procedures to generate proof of possession for different crypto schemes.

          +

          After this RFC, the procedure for generating and verifying proof of possession would be at the discretion of the crypto scheme itself, not deterministically tied to the way it signs other messages.

          +

          Stakeholders

          +
            +
          • Polkadot runtime implementors
          • +
          • Polkadot node implementors
          • +
          • Validator operators
          • +
          +

          Explanation

          +

          The RFC does not change the structure introduced by RFC-0048. The proof is a sequence of signatures:

          +
          #![allow(unused)]
          +fn main() {
          +type Proof = (Signature, Signature, ..);
          +}
          +

          However, each signature is generated by the crypto scheme instead of each private session key signing the account_id. By default, the following statement is signed by the crypto scheme:

          +
          rust
          +"POP_"|account_id
          +
          +

          The prefix could alert signers if they are misled into signing false proof of possession statements. More importantly, a new crypto scheme could specify a different structure for its proof of possession.

          +
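
          A minimal sketch of the default statement construction (helper name assumed; the actual API surface is defined by RFC-0048 and Pull 6010):

          ```rust
          /// Sketch: the default proof-of-possession statement is the mandatory
          /// "POP_" context prepended to the account id. A scheme's
          /// generate_proof_of_possession signs these bytes, and its
          /// verify_proof_of_possession checks a signature over the same bytes.
          fn pop_statement(account_id: &[u8]) -> Vec<u8> {
              let mut statement = Vec::with_capacity(4 + account_id.len());
              statement.extend_from_slice(b"POP_");
              statement.extend_from_slice(account_id);
              statement
          }
          ```

          +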

          Because RFC-0048 has not been deployed, the version of the SessionKeys could still be set to 1 as requested by RFC-0048.

          +

          Drawbacks

          +

          Each crypto scheme needs to implement explicit generate_proof_of_possession and verify_proof_of_possession runtime APIs in addition to its old capabilities (sign, verify, etc.).

          +

          Testing, Security, and Privacy

          +

          The proof of possession for current crypto schemes is virtually identical to the one defined in RFC-0048. On the other hand, the changes proposed by this RFC allow the generation of secure proof of possession for BLS keys.

          +

          Performance, Ergonomics, and Compatibility

          +

          Performance

          +

          The performance is the same as the one discussed in RFC-0048.

          +

          Ergonomics

          +

          Separating the generation of proof of possession from signing allows a crypto scheme more freedom to implement proof of possession that is fitted to its needs.

          +

          Compatibility

          +

          The significant difference is that the proof-of-possession statement signed under RFC-0048 is:

          +
          rust
          +account_id
          +
          +

          whereas the current proposal suggests changing the statement to:

          +
          rust
          +"POP_"|account_id
          +
          +

          for the current crypto schemes. However, future crypto schemes such as BLS, which are not bound by backward compatibility, could produce more sophisticated proofs of possession.

          +

          Prior Art and References

          +

          This is a minor amendment to RFC-0048.

          +

          Unresolved Questions

          +

          None.

          + +

          - [1] The Substrate generation of proof of possession for all crypto schemes (current and experimental ones) is implemented in Pull 6010.

          +

          - [2] Substrate implementation of RFC-0048, in which OpaqueKeysInner::create_ownership_proof and OpaqueKeys::ownership_proof_is_valid should be modified to call the generate_proof_of_possession and verify_proof_of_possession runtime APIs instead of directly calling sign.

          +

          - [3] Ristenpart, T., & Yilek, S. (2007). The power of proofs-of-possession: Securing multiparty signatures against rogue-key attacks. In Annual International Conference on the Theory and Applications of Cryptographic Techniques (pp. 228–245). Springer.

          diff --git a/proposed/0145-remove-unnecessary-allocator-usage.html b/proposed/0145-remove-unnecessary-allocator-usage.html index c7f8735..d62a36d 100644 --- a/proposed/0145-remove-unnecessary-allocator-usage.html +++ b/proposed/0145-remove-unnecessary-allocator-usage.html @@ -90,7 +90,7 @@ @@ -870,7 +870,7 @@