diff --git a/index.html b/index.html
index 6d00ec1..da60f29 100644
--- a/index.html
+++ b/index.html
@@ -90,7 +90,7 @@
diff --git a/introduction.html b/introduction.html
index 6d00ec1..da60f29 100644
--- a/introduction.html
+++ b/introduction.html
@@ -90,7 +90,7 @@
diff --git a/print.html b/print.html
index 693ea17..40615a0 100644
--- a/print.html
+++ b/print.html
@@ -91,7 +91,7 @@
@@ -1101,88 +1101,6 @@ Such conversion attempts will explicitly fail.
None.
None.
-
(source)
-Table of Contents
-
-
-
-Start Date 12 July 2024
-Description Remove require_weight_at_most parameter from XCM Transact
-Authors Adrian Catangiu
-
-
-
-The Transact XCM instruction currently forces the user to set a specific maximum weight allowed for the inner call and then pay for that much weight regardless of how much the call actually needs in practice.
-This RFC proposes improving the usability of Transact by removing that parameter and instead getting and charging the actual weight of the inner call from its dispatch info on the remote chain.
-
-The UX of using Transact is poor because of having to guess/estimate the require_weight_at_most weight used by the inner call on the target.
-We've seen multiple Transact on-chain failures caused by guessing wrong values for require_weight_at_most even though the rest of the XCM program would have worked.
-In practice, this parameter only adds UX overhead with no real practical value. Use cases fall into one of two categories:
-
-Unpaid execution of Transacts - in these cases require_weight_at_most is not really useful: the caller doesn't
-have to pay for it, and on the call site it either fits the block or not;
-Paid execution of a single Transact - the weight to be spent by the Transact is already covered by the BuyExecution
-weight limit parameter.
-
-We've had multiple OpenGov root/whitelisted_caller proposals initiated by core-devs completely or partially fail
-because of an incorrect configuration of the require_weight_at_most parameter. This is a strong indication that the
-instruction is hard to use.
-
-
-Runtime Users,
-Runtime Devs,
-Wallets,
-dApps,
-
-
-The proposed enhancement is simple: remove the require_weight_at_most parameter from the instruction:
-- Transact { origin_kind: OriginKind, require_weight_at_most: Weight, call: DoubleEncoded<Call> },
-+ Transact { origin_kind: OriginKind, call: DoubleEncoded<Call> },
-
-The XCVM implementation shall no longer use require_weight_at_most for weighing. Instead, it shall weigh the Transact instruction by decoding and weighing the inner call.
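As a sketch of this weighing change (simplified stand-in types, not the real polkadot-sdk API; `weight_from_dispatch_info` and its per-byte cost are invented purely for illustration):

```rust
// Illustrative sketch only: simplified stand-in types, not the real
// staging-xcm / polkadot-sdk API.

#[derive(Debug, Clone, Copy, PartialEq)]
struct Weight {
    ref_time: u64,
}

enum Instruction {
    // New form of Transact: the `require_weight_at_most` field is gone.
    Transact { call: Vec<u8> },
    ClearOrigin,
}

// Assumed hook: on the remote chain the runtime decodes the inner call
// and reads its weight from the call's dispatch info. Here we fake it
// with a per-byte cost purely for illustration.
fn weight_from_dispatch_info(encoded_call: &[u8]) -> Weight {
    Weight { ref_time: 1_000 * encoded_call.len() as u64 }
}

// The XCVM weighs Transact by decoding the inner call rather than by
// trusting a caller-supplied upper bound.
fn weigh(instruction: &Instruction) -> Weight {
    match instruction {
        Instruction::Transact { call } => weight_from_dispatch_info(call),
        Instruction::ClearOrigin => Weight { ref_time: 10 },
    }
}

fn main() {
    let transact = Instruction::Transact { call: vec![0u8; 4] };
    // The charged weight now tracks the actual call, not a guess.
    assert_eq!(weigh(&transact), Weight { ref_time: 4_000 });
    println!("charged ref_time = {}", weigh(&transact).ref_time);
}
```

The point of the sketch is only the shape of `weigh`: the caller no longer supplies (or pays for) an upper bound.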
-
-No drawbacks: existing scenarios work as before, while this also allows new/easier flows.
-
-Currently, an XCVM implementation can weigh a message just by looking at the decoded instructions without decoding the Transact's call, but assuming require_weight_at_most weight for it. With the new version it has to decode the inner call to know its actual weight.
-But this does not actually change the security considerations, as can be seen below.
-With the new Transact the weighing happens after decoding the inner call. The entirety of the XCM program containing this Transact needs to be either covered by enough bought weight using a BuyExecution, or the origin has to be allowed to do free execution.
-The security considerations around how much can someone execute for free are the same for
-both this new version and the old. In both cases, an "attacker" can do the XCM decoding (including Transact inner calls) for free by adding a large enough BuyExecution without actually having the funds available.
-In both cases, decoding is done for free, but in both cases execution fails early on BuyExecution.
-
-
-No performance change.
-
-Ergonomics are slightly improved by simplifying Transact API.
-
-Compatible with previous XCM programs.
-
-None.
-
-None.
-
-None.
(source)
Table of Contents
@@ -1229,12 +1147,12 @@ both this new version and the old. In both cases, an "attacker" can do
Authors eskimor
-
+
Change the upgrade process of a parachain runtime upgrade to become an off-chain
process with regards to the relay chain. Upgrades are still contained in
parachain blocks, but will no longer need to end up in relay chain blocks nor in
relay chain state.
-
+
Having parachain runtime upgrades go through the relay chain has always been
seen as a scalability concern. Due to optimizations in statement
distribution and asynchronous backing it became less crucial and got
@@ -1249,13 +1167,13 @@ this we would hope for far more parachains to get registered, thousands
potentially even ten thousands. With so many PVFs registered, updates are
expected to become more frequent and even attacks on service quality for other
parachains would become a higher risk.
-
+
Parachain Teams
Relay Chain Node implementation teams
Relay Chain runtime developers
-
+
The issues with on-chain runtime upgrades are:
Needlessly costly.
@@ -1385,11 +1303,11 @@ fetching.
parachain and only prune the previous one, once the first candidate using the
new code got finalized. This ensures that disputes will always be able to
resolve.
-
+
The major drawback of this solution is the same as with any solution that moves work
off-chain: it adds complexity to the node. E.g. nodes needing the PVF need to
store it separately, together with their own pruning strategy as well.
-
+
Implementations adhering to this RFC will respond to PVF requests with the
actual PVF, if they have it. Requesters will persist received PVFs on disk until
they are replaced by a new one. Implementations must not be lazy
@@ -1403,8 +1321,8 @@ only chunk, but also PVF available), it is important to have enough validators
upgraded before we allow collators to make use of the new runtime upgrade
mechanism. Otherwise we would risk disputes not being able to succeed.
This RFC has no impact on privacy.
-
-
+
+
This proposal lightens the load on the relay chain and is thus in general
beneficial for the performance of the network. This is achieved by the
following:
@@ -1419,7 +1337,7 @@ upgrade per relay chain block, occupying almost all of the blockspace.
push back on an upgrade for whatever reason, no network bandwidth and core
time gets wasted because of this.
-
+
End users are only affected by better performance and more stable block times.
Parachains will need to implement the introduced request/response protocol and
adapt to the new signalling mechanism via a UMP message, instead of sending
@@ -1430,7 +1348,7 @@ upgrade gets passed to pre-checking. This is especially important for on-demand
chains or bulk users not occupying a full core. Furthermore, the behaviour of
requiring multiple blocks to fully initiate a runtime upgrade needs to be well
documented.
-
+
We will continue to support the old mechanism for code upgrades for a while, but
will start to impose stricter limits over time, with the number of registered
parachains going up. With those limits in place parachains not migrating to the
@@ -1451,7 +1369,7 @@ validators.
"hot" PVF).
Altered behaviour in availability-distribution: Fetch missing PVFs.
-
+
Off-chain runtime upgrades have been discussed before, the architecture
described here is simpler though as it piggybacks on already existing features,
namely:
@@ -1460,7 +1378,7 @@ namely:
Existing pre-checking.
https://github.com/paritytech/polkadot-sdk/issues/971
-
+
What about the initial runtime, shall we make that off-chain as well?
Good news, at least after the first upgrade, no code will be stored on chain
@@ -1477,7 +1395,7 @@ the upgrade? Easy: Make available and vote nay in pre-checking.
TODO: Fully resolve these questions and incorporate in RFC text.
-
+
By no longer having code upgrades go through the relay chain, occupying a full relay
chain block, the impact on other parachains is already greatly reduced, if we
@@ -1545,6 +1463,7 @@ sharing if multiple parachains use the same data (e.g. same smart contracts).
Ergonomics
Compatibility
+Versioning
Runtime
Validators
Parachains
@@ -1564,20 +1483,20 @@ sharing if multiple parachains use the same data (e.g. same smart contracts).
Authors Andrei Sandu
-
+
The only requirement for collator nodes is to provide valid parachain blocks to the validators of a backing group, and by definition the collator set is trustless. However, in the case of elastic scaling, for security reasons, collators must be trusted to be non-malicious. CoreIndex commitments are required to remove this limitation. Additionally, we are introducing a SessionIndex field in the CandidateReceipt to make dispute resolution more secure and robust.
-
+
At present, misbehaving collator nodes, or anyone who has acquired a valid collation, can prevent a parachain from effectively using elastic scaling by providing the same collation to all backing groups assigned to the parachain. This happens before the next parachain block is authored and will prevent the chain of candidates from being formed, reducing the throughput of the parachain to a single core.
This RFC solves the problem by enabling a parachain to provide a core index commitment as part of its PVF execution output and in the associated candidate receipt data structure.
Once this RFC is implemented, the validity of a parachain block depends on the core it is being executed on.
-
+
Polkadot core developers.
Cumulus node developers.
Tooling, block explorer developers.
This approach and alternatives have been considered and discussed in this issue.
-
+
The approach proposed below was chosen primarily because it minimizes the number of breaking changes and the complexity, and takes less implementation and testing time. The proposal is to change the existing primitives while keeping binary compatibility with the older versions. We repurpose unused fields to introduce core index and session index information in the CandidateDescriptor and extend the UMP usage to output core index information.
The CandidateDescriptor currently includes collator and signature fields. The collator includes a signature on the following descriptor fields: parachain id, relay parent, validation data hash, validation code hash and the PoV hash.
@@ -1693,17 +1612,20 @@ impl VersionedCandidateReceipt {
the session_index is not equal to the session of the relay_parent in the descriptor
If core index (and session index) information is not available (backers got an old candidate receipt), there will be no changes compared to current behaviour.
-
+
The only drawback is that further additions to the descriptor are limited to the amount of remaining unused space.
-
+
Standard testing (unit tests, CI zombienet tests) for functionality and a mandatory security audit to ensure the implementation does not introduce any new security issues.
Backwards compatibility of the implementation will be tested on testnets (Versi and Westend).
There is no impact on privacy.
-
+
The expectation is that sending and processing the UMP message has negligible performance impact, both in the runtime and on the node side.
-
-Parachain that use elastic scaling must send the separator empty message followed by the UMPSignal::OnCore only after sending all of the UMP XCM messages.
-
+
+Parachains that use elastic scaling must send the separator empty message followed by the UMPSignal::SelectCore only after sending all of the UMP XCM messages.
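The required ordering can be sketched as a simple validity check (types are simplified stand-ins; the real messages are SCALE-encoded and the signal enum is richer):

```rust
// Sketch with simplified stand-in types: validates that an elastic-scaling
// parachain's UMP output has the required shape -- all XCM messages first,
// then one empty separator message, then the core-selection signal.
fn valid_ump_ordering(messages: &[Vec<u8>]) -> bool {
    // Find the separator: an empty message.
    match messages.iter().position(|m| m.is_empty()) {
        // No separator: plain XCM-only output is fine.
        None => true,
        // Separator present: exactly one (non-empty) signal message must
        // follow it, and nothing else.
        Some(i) => messages.len() == i + 2 && !messages[i + 1].is_empty(),
    }
}

fn main() {
    let xcm = vec![1u8, 2, 3]; // stand-in for an encoded XCM message
    let signal = vec![9u8];    // stand-in for an encoded UMPSignal::SelectCore
    assert!(valid_ump_ordering(&[xcm.clone(), vec![], signal.clone()]));
    // Signal before the XCM messages violates the ordering.
    assert!(!valid_ump_ordering(&[vec![], signal, xcm]));
    println!("ok");
}
```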
+
+
+At this point there is a simple way to determine the version of the receipt: testing for zeroed reserved bytes in the
descriptor. Supporting future changes will require a u8 version field to be introduced in the reserved space. We consider the current version to be 0, with the version check implicitly done when checking for the reserved space to be zeroed.
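A minimal sketch of that check, assuming only that the repurposed bytes of an old-format descriptor carry a collator signature (the 16-byte size here is arbitrary, not the actual layout):

```rust
// Sketch only: sizes and field names are assumptions, not the actual
// descriptor layout. An old-format descriptor carries collator signature
// bytes in the repurposed region, which is effectively never all-zero,
// so "reserved bytes all zero" doubles as the version-0 check.
fn is_new_format(reserved: &[u8]) -> bool {
    reserved.iter().all(|&b| b == 0)
}

fn main() {
    let new_descriptor_reserved = [0u8; 16];
    let old_descriptor_reserved = [0x5au8; 16]; // pretend signature bytes
    assert!(is_new_format(&new_descriptor_reserved));
    assert!(!is_new_format(&old_descriptor_reserved));
    println!("ok");
}
```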
The first step is to remove collator signature checking logic in the runtime, but keep the node side collator signature
checks.
@@ -1717,13 +1639,13 @@ The feature acts as a signal for supporting the new candidate receipts on the no
CoreIndex commitments are needed only by parachains using elastic scaling. Just upgrading the collator node and runtime should be sufficient and possible without manual changes.
Any tooling that decodes UMP XCM messages needs an update to support or ignore the new UMP messages, but they should be fine to decode the regular XCM messages that come before the separator.
-
+
Forum discussion about a new CandidateReceipt format: https://forum.polkadot.network/t/pre-rfc-discussion-candidate-receipt-format-v2/3738
-
+
N/A
-
+
The implementation is extensible and future proof to some extent. With minimal or no breaking changes, additional fields can be added in the candidate descriptor until the reserved space is exhausted.
-Once the reserved space is exhausted, versioning will be implemented. The candidate receipt format will be versioned. This will exteend to pvf execution which requires versioning for the validation function, inputs and outputs (CandidateCommitments).
+Once the reserved space is exhausted, versioning will be implemented. The candidate receipt format will be versioned. Versioning should also be implemented for the validation function, inputs and outputs (CandidateCommitments).
(source)
Table of Contents
@@ -1759,7 +1681,7 @@ The feature acts as a signal for supporting the new candidate receipts on the no
Authors Francisco Aguirre
-
+
XCM already handles execution fees in an effective and efficient manner using the BuyExecution instruction.
However, other types of fees are not handled as effectively -- for example, delivery fees.
Fees exist that can't be measured using Weight -- as execution fees can -- so a new method should be thought up for those cases.
@@ -1768,7 +1690,7 @@ This RFC proposes making the fee handling system simpler and more general, by do
Adding a fees register
Deprecating BuyExecution and adding a new instruction PayFees with new semantics to ultimately replace it.
-
+
Execution fees are handled correctly by XCM right now.
However, the addition of extra fees, like those for message delivery, results in awkward ways of integrating them into the XCVM implementation.
This is because these types of fees are not included in the language.
@@ -1776,14 +1698,14 @@ The standard should have a way to correctly deal with these implementation speci
The new instruction moves the specified amount of fees from the holding register to a dedicated fees register that the XCVM can use in flexible ways depending on its implementation.
The XCVM implementation is free to use these fees to pay for execution fees, transport fees, or any other type of fee that might be necessary.
This moves the specifics of fees further away from the XCM standard, and more into the actual underlying XCVM implementation, which is a good thing.
-
+
Runtime Users
Runtime Devs
Wallets
dApps
-
+
The new instruction that will replace BuyExecution is a much simpler and general version: PayFees.
This instruction takes one Asset, takes it from the holding register, and puts it into a new fees register.
The XCVM implementation can now use this Asset to make sure every necessary fee is paid for; this includes execution fees, delivery fees, or any other fee.
@@ -1810,27 +1732,27 @@ BuyExecution { asset, weight_limit }
PayFees { asset }
// ...rest
}
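The register semantics can be sketched as follows (mock single-asset registers, not the real XCVM; `charge_fee` is an invented stand-in for however an implementation draws on the register):

```rust
// Mock of the register semantics described above (not the real XCVM):
// PayFees moves a specified amount out of holding into a dedicated
// fees register, which the implementation then draws on for execution,
// delivery, or any other fee.
#[derive(Debug, PartialEq)]
struct Registers {
    holding: u128, // simplification: a single fungible balance
    fees: u128,
}

// PayFees { asset }: move the amount from holding into the fees register.
fn pay_fees(r: &mut Registers, amount: u128) -> Result<(), &'static str> {
    if r.holding < amount {
        return Err("not enough assets in holding");
    }
    r.holding -= amount;
    r.fees += amount;
    Ok(())
}

// The implementation charges individual fees against the fees register.
fn charge_fee(r: &mut Registers, fee: u128) -> Result<(), &'static str> {
    if r.fees < fee {
        return Err("fees register exhausted");
    }
    r.fees -= fee;
    Ok(())
}

fn main() {
    let mut r = Registers { holding: 100, fees: 0 }; // after WithdrawAsset
    pay_fees(&mut r, 30).unwrap();
    charge_fee(&mut r, 10).unwrap(); // e.g. execution fee
    charge_fee(&mut r, 5).unwrap();  // e.g. delivery fee
    assert_eq!(r, Registers { holding: 70, fees: 15 });
    // Leftover fees stay in the register rather than being returned
    // immediately, unlike BuyExecution.
    println!("leftover fees = {}", r.fees);
}
```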
-
+
There needs to be an explicit change from BuyExecution to PayFees, most often accompanied by a reduction in the assets passed in.
-
+
It might become a security concern if leftover fees are trapped, since a lot of them are expected.
-
-
+
+
There should be no performance downsides to this approach.
The fees register is a simplification that may actually result in better performance, in the case an implementation is doing a workaround to achieve what this RFC proposes.
-
+
The interface is going to be very similar to the already existing one.
Even simpler since PayFees will only receive one asset.
That asset will allow users to limit the amount of fees they are willing to pay.
-
+
This RFC can't just change the semantics of the BuyExecution instruction since that instruction accepts any funds, uses what it needs and returns the rest immediately.
The new proposed instruction, PayFees, doesn't return the leftover immediately, it keeps it in the fees register.
In practice, the deprecated BuyExecution needs to be slowly phased out in favour of PayFees.
-
+
The closed RFC PR on the xcm-format repository, before XCM RFCs got moved to fellowship RFCs: https://github.com/polkadot-fellows/xcm-format/pull/53.
-
+
None
-
+
This proposal would greatly benefit from an improved asset trapping system.
CustomAssetClaimer is also related, as it directly improves the ergonomics of this proposal.
LeftoverAssetsDestination execution hint would also similarly improve the ergonomics.
@@ -1866,52 +1788,52 @@ In practice, the deprecated BuyExecution needs to be slowly rolled
Authors Francisco Aguirre
-
+
The SetFeesMode instruction and the fees_mode register allow for the existence of JIT withdrawal.
JIT withdrawal complicates the fee mechanism and leads to bugs and unexpected behaviour.
The proposal is to remove said functionality.
Another effort to simplify fee handling in XCM.
-
+
The JIT withdrawal mechanism creates bugs such as not being able to get fees when all assets are put into holding and none left in the origin location.
This is a confusing behavior, since there are funds for fees, just not where the XCVM wants them.
The XCVM should have only one entrypoint to fee payment, the holding register.
That way there is also less surface for bugs.
-
+
Runtime Users
Runtime Devs
Wallets
dApps
-
+
The SetFeesMode instruction will be removed.
The Fees Mode register will be removed.
-
+
Users will have to make sure to put enough assets in WithdrawAsset when
previously some things might have been charged directly from their accounts.
This leads to more predictable behaviour, though, so it will only be
a drawback for a minority of users.
-
+
Implementations and benchmarking must change for most existing pallet calls
that send XCMs to other locations.
-
-
+
+
Performance will be improved since unnecessary checks will be avoided.
-
+
JIT withdrawal was a way of side-stepping the regular flow of XCM programs.
By removing it, the spec is simplified but now old use-cases have to work with
the original intended behaviour, which may result in more implementation work.
Ergonomics for users will undoubtedly improve since the system is more predictable.
-
+
Existing programs in the ecosystem will break.
The instruction should be deprecated as soon as this RFC is approved
(but still fully supported), then removed in a subsequent XCM version
(probably deprecate in v5, remove in v6).
-
+
The previous RFC PR on the xcm-format repo, before XCM RFCs were moved to fellowship RFCs: https://github.com/polkadot-fellows/xcm-format/pull/57.
-
+
None.
-
+
The new generic fees mechanism is related to this proposal and further stimulates it as the JIT withdraw fees mechanism will become useless anyway.
(source)
Table of Contents
@@ -1944,25 +1866,25 @@ The instruction should be deprecated as soon as this RFC is approved
Authors Francisco Aguirre
-
+
A previous XCM RFC (https://github.com/polkadot-fellows/xcm-format/pull/37) introduced a SetAssetClaimer instruction.
This idea of instructing the XCVM to change some implementation-specific behavior is useful.
In order to generalize this mechanism, this RFC introduces a new instruction SetExecutionHints
and makes the SetAssetClaimer be just one of many possible execution hints.
-
+
There is a need for specifying how certain implementation-specific things should behave.
Things like who can claim the assets or what can be done instead of trapping assets.
Another idea for a hint:
LeftoverAssetsDestination: for depositing leftover assets to a destination instead of trapping them
-
+
Runtime devs
Wallets
dApps
-
+
A new instruction, SetExecutionHints, will be added.
This instruction will take a single parameter of type ExecutionHint, an enumeration.
The first variant for this enum is AssetClaimer, which allows specifying a location that should be able to claim trapped assets.
@@ -1983,29 +1905,29 @@ enum ExecutionHint {
type NumVariants = /* Number of variants of the `ExecutionHint` enum */;
}
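A sketch of how a hint set at the top of a program might be consulted later (simplified types; the real `Location` type, bounds, and hint enum differ):

```rust
// Illustrative sketch only: stand-in types, not the real XCVM or
// staging-xcm API. `Xcvm` and `asset_claimer` are invented here to
// show the mechanism, not actual polkadot-sdk names.
#[derive(Clone, Debug, PartialEq)]
enum Location {
    Parachain(u32),
    Account(u8),
}

enum ExecutionHint {
    AssetClaimer { location: Location },
}

struct Xcvm {
    hints: Vec<ExecutionHint>,
    origin: Option<Location>,
}

impl Xcvm {
    // SetExecutionHints: record the hints for the rest of the message.
    fn set_execution_hints(&mut self, hints: Vec<ExecutionHint>) {
        self.hints = hints;
    }

    // On error, trapped assets become claimable by the hinted location,
    // falling back to the origin when no hint was set.
    fn asset_claimer(&self) -> Option<Location> {
        self.hints
            .iter()
            .find_map(|h| match h {
                ExecutionHint::AssetClaimer { location } => Some(location.clone()),
            })
            .or_else(|| self.origin.clone())
    }
}

fn main() {
    let mut vm = Xcvm { hints: vec![], origin: Some(Location::Parachain(1000)) };
    vm.set_execution_hints(vec![ExecutionHint::AssetClaimer {
        location: Location::Account(7),
    }]);
    assert_eq!(vm.asset_claimer(), Some(Location::Account(7)));
    println!("ok");
}
```

Because the hints live in a single instruction at the top of the program, a barrier only has to permit that one instruction rather than one per hint.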
-
+
The SetExecutionHints instruction might be hard to benchmark, since we should look into the actual hints being set to know how much weight to attribute to it.
-
+
ExecutionHints are specified on a per-message basis, so they have to be specified at the beginning of a message.
If they were to be specified at the end, hints like AssetClaimer would be useless if an error occurs beforehand and assets get trapped before ever reaching the hint.
The instruction takes a bounded vector of hints so as to not force barriers to allow an arbitrary number of SetExecutionHint instructions.
-
-
+
+
None.
-
+
The SetExecutionHints instruction provides better integration with barriers.
If we had to add one barrier for SetAssetClaimer and another for each new hint that's added, barriers would need to be changed all the time.
Also, this instruction would make it easy to write XCM programs.
You only need to specify the hints you want in one single instruction at the top of your program.
-
+
None.
-
+
The previous RFC PR in the xcm-format repository before XCM RFCs moved to fellowship RFCs: https://github.com/polkadot-fellows/xcm-format/pull/59.
-
+
SetLeftoverAssetsDestination is an idea of a hint that could be added.
What others are there?
This RFC creates a convenience for a pattern that was identified. Should we try to hinder that pattern instead?
-
+
None.
(source)
Table of Contents
@@ -2038,23 +1960,23 @@ This RFC creates a convenience for a pattern that was identified. Should we try
Authors Adrian Catangiu
-
+
XCM programs that want to ensure that the following XCM instructions cannot command the authority of the Original Origin (such as asset transfer programs) should consider, where possible, using DescendOrigin into a demoted, safer origin rather than ClearOrigin which clears the origin completely.
-
+
Currently, all XCM asset transfer instructions ultimately clear the origin in the remote XCM message by use of the ClearOrigin instruction. This is done for security considerations to ensure that later instructions cannot command the authority of the Original Origin (the sending chain).
The problem with this approach is that it limits what can be achieved on remote chains through XCM. Most XCM operations require having an origin, and following any asset transfer the origin is lost, meaning not much can be done other than depositing the transferred assets to some local account or transferring them onward to another chain.
For example, we cannot transfer some funds for buying execution, then do a Transact (all in the same XCM message).
The above example is a basic, core building block for cross-chain interactions and we should support it.
-
+
Runtime Users, Runtime Devs, wallets, cross-chain dApps.
-
+
In the case of XCM programs going from origin-chain directly to dest-chain without an intermediary hop, we can enable scenarios such as above by using the DescendOrigin instruction instead of the ClearOrigin instruction.
Instead of clearing the origin-chain origin, we can "descend" into a child location of origin-chain; specifically, we could "descend" into the actual origin of the initiator. The most common such descent would be X2(Parachain(origin-chain), AccountId32(origin-account)), when the initiator is a (signed/pure/proxy) account origin-account.
This allows an actor on chain A to Transact on chain B without having to prefund its SA account on chain B, instead they can simply transfer the required fees in the same XCM program as the Transact.
Unfortunately, this approach only works when the asset transfer has the same XCM route/hops as the rest of the program, meaning it only works if the assets can be directly transferred from chain A to chain B without going through intermediary hops or reserve chains. When going through a reserve chain, the original origin-chain/origin-account origin is lost and cannot be recreated using just the DescendOrigin instruction.
Even so, this proposal is still useful for the majority of use cases (where the asset transfer happens directly between A and B).
The TransferReserveAsset, DepositReserveAsset, InitiateReserveWithdraw and InitiateTeleport instructions should use a DescendOrigin instruction on the onward XCM program instead of the currently used ClearOrigin instruction. The DescendOrigin instruction should effectively mutate the origin on the remote chain to the SA of the origin on the local chain.
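The difference can be sketched with a simplified location type (the real `Junction`/`Location` types live in staging-xcm; this is illustrative only):

```rust
// Sketch with a simplified location type: what the remote origin looks
// like after today's ClearOrigin versus the proposed DescendOrigin.
#[derive(Clone, Debug, PartialEq)]
enum Junction {
    Parachain(u32),
    AccountId32([u8; 32]),
}

// The remote origin as seen by instructions after the transfer.
fn after_clear_origin(_sender: Vec<Junction>) -> Option<Vec<Junction>> {
    None // origin wiped: later instructions have no authority at all
}

fn after_descend_origin(
    sender_chain: Junction,
    sender_account: Junction,
) -> Option<Vec<Junction>> {
    // Descend into the actual initiator: X2(Parachain(..), AccountId32(..)).
    Some(vec![sender_chain, sender_account])
}

fn main() {
    let chain = Junction::Parachain(1000);
    let account = Junction::AccountId32([7u8; 32]);

    assert_eq!(after_clear_origin(vec![chain.clone()]), None);
    assert_eq!(
        after_descend_origin(chain.clone(), account.clone()),
        // A later Transact can now run as this demoted, safer origin.
        Some(vec![chain, account])
    );
    println!("ok");
}
```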
-
+
No performance, ergonomics, user experience, security, or privacy drawbacks.
In terms of ergonomics and user experience, the support for combining an asset transfer with a subsequent action (like Transact) is asymmetrical:
@@ -2062,15 +1984,15 @@ Even so, this proposal is still useful for the majority of usecases (where the a
doesn't natively work when assets have to go through a reserve location.
But it is still a net positive for ergonomics and user experience, while being neutral for the rest.
-
+
Barriers should also allow DescendOrigin, not just ClearOrigin.
XCM program builders should audit their programs and eliminate assumptions of "no origin" on the remote side. Instead, the working assumption is that the origin on the remote side is the reanchored location of the local origin. This new assumption is 100% in line with the behavior of remote XCM programs sent over using pallet_xcm::send.
-
-
+
+
No impact.
-
+
Improves ergonomics by allowing the local origin to operate on the remote chain even when the XCM program includes an asset transfer.
-
+
At the executor-level this change is backwards and forwards compatible. Both types of programs can be executed on new and old versions of XCM with no changes in behavior.
Programs switching to the new approach is however a breaking change from the existing XCM barriers point of view.
For example, the AllowTopLevelPaidExecutionFrom barrier permits programs containing ClearOrigin before BuyExecution, but will reject programs with DescendOrigin before BuyExecution.
@@ -2080,12 +2002,12 @@ For example, the
+
None.
-
+
How to achieve this for all workflows, not just point-to-point XCM programs with no intermediary hops?
As long as the intermediary hop(s) is/are not trusted to "impersonate" a location from the original origin chain, there is no way AFAICT to hold on to the original origin.
-
+
Similar (maybe even better) results can be achieved using the XCMv5 ExecuteWithOrigin instruction instead of DescendOrigin. But that introduces version downgrade compatibility challenges.
(source)
Table of Contents
@@ -2118,9 +2040,9 @@ For example, the
+
This is a proposal to reduce the impact of stale nominations in the Polkadot staking system. With this proposal, nominators are incentivized to update or renew their selected validators once per time period. Nominators that do not update or renew their selected validators would be considered stale, and a decaying multiplier would be applied to their nominations, reducing the weight of their nomination and rewards.
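As a purely hypothetical illustration — the text above does not fix a decay schedule — a multiplier that halves the effective stake for each period a nomination stays stale could look like:

```rust
// Hypothetical illustration only: the RFC does not specify a decay
// schedule. This sketches one possible multiplier that halves the
// effective nomination weight for each stale period.
fn stale_multiplier(periods_stale: u32) -> f64 {
    0.5_f64.powi(periods_stale as i32)
}

fn effective_stake(stake: u64, periods_stale: u32) -> u64 {
    (stake as f64 * stale_multiplier(periods_stale)) as u64
}

fn main() {
    assert_eq!(effective_stake(1_000, 0), 1_000); // freshly renewed nomination
    assert_eq!(effective_stake(1_000, 2), 250);   // stale for two periods
    println!("ok");
}
```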
-
+
Longer motivation behind the content of the RFC, presented as a combination of both problems and requirements for the solution.
One of Polkadot's primary utilities is providing a high quality security layer for applications built on top of it. To achieve this, Polkadot runs a Nominated Proof-of-Stake system, allowing nominators to vote on who they think are the best validators for Polkadot.
This system functions best when nominators and validators are active participants in the network. Nominators should consistently evaluate the quality and preferences of validators, and adjust their nominations accordingly.
@@ -2133,31 +2055,31 @@ For example, the
+
Primary stakeholders are:
-
+
Detail-heavy explanation of the RFC, suitable for explanation to an implementer of the changeset. This should address corner cases in detail and provide justification behind decisions, and provide rationale for how the design meets the solution requirements.
-
+
Description of recognized drawbacks to the approach given in the RFC. Non-exhaustively, drawbacks relating to performance, ergonomics, user experience, security, or privacy.
-
+
Describe the impact of the proposal on these three high-importance areas - how implementations can be tested for adherence, effects that the proposal has on security and privacy per se, as well as any possible implementation pitfalls which should be clearly avoided.
-
+
Describe the impact of the proposal on the exposed functionality of Polkadot.
-
+
Is this an optimization or a necessary pessimization? What steps have been taken to minimize additional overhead?
-
+
If the proposal alters exposed interfaces to developers or end-users, which types of usage patterns have been optimized for?
-
+
Does this proposal break compatibility with existing interfaces, older versions of implementations? Summarize necessary migrations or upgrade strategies, if any.
-
+
Provide references to either prior art or other relevant research for the submitted design.
-
+
Provide specific questions to discuss and address before the RFC is voted on by the Fellowship. This should include, for example, alternatives to aspects of the proposed design where the appropriate trade-off to make is unclear.
-
+
Describe future work which could be enabled by this RFC, if it were accepted, as well as related RFCs. This is a place to brain-dump and explore possibilities, which themselves may become their own RFCs.
(source)
Table of Contents
@@ -2199,9 +2121,9 @@ For example, the
+
This RFC proposes a periodic, sale-based method for assigning Polkadot Coretime, the analogue of "block space" within the Polkadot Network. The method takes into account the need for long-term capital expenditure planning for teams building on Polkadot, yet also provides a means to allow Polkadot to capture long-term value in the resource which it sells. It supports the possibility of building rich and dynamic secondary markets to optimize resource allocation and largely avoids the need for parameterization.
-
+
The Polkadot Ubiquitous Computer, or just Polkadot UC, represents the public service provided by the Polkadot Network. It is a trust-free, WebAssembly-based, multicore, internet-native omnipresent virtual machine which is highly resilient to interference and corruption.
The present system of allocating the limited resources of the Polkadot Ubiquitous Computer is through a process known as parachain slot auctions. This is a parachain-centric paradigm whereby a single core is long-term allocated to a single parachain which itself implies a Substrate/Cumulus-based chain secured and connected via the Relay-chain. Slot auctions are on-chain candle auctions which proceed for several days and result in the core being assigned to the parachain for six months at a time, up to 24 months in advance. Practically speaking, we only see two-year periods being bid upon and leased.
@@ -2222,7 +2144,7 @@ For example, the
+
Primary stakeholder sets are:
Protocol researchers and developers, largely represented by the Polkadot Fellowship and Parity Technologies' Engineering division.
@@ -2231,7 +2153,7 @@ For example, the
+
Upon implementation of this proposal, the parachain-centric slot auctions and associated crowdloans cease. Instead, Coretime on the Polkadot UC is sold by the Polkadot System in two separate formats: Bulk Coretime and Instantaneous Coretime.
When a Polkadot Core is utilized, we say it is dedicated to a Task rather than a "parachain". The Task to which a Core is dedicated may change at every Relay-chain block and while one predominant type of Task is to secure a Cumulus-based blockchain (i.e. a parachain), other types of Tasks are envisioned.
@@ -2663,16 +2585,16 @@ InstaPoolHistory: (empty)
Governance upgrade proposal(s).
Monitoring of the upgrade process.
-
+
No specific considerations.
Parachains already deployed into the Polkadot UC must have a clear plan of action to migrate to an agile Coretime market.
While this proposal does not introduce documentable features per se, adequate documentation must be provided to potential purchasers of Polkadot Coretime. This SHOULD include any alterations to the Polkadot-SDK software collection.
Regular testing through unit tests, integration tests, manual testnet tests, zombie-net tests and fuzzing SHOULD be conducted.
A regular security review SHOULD be conducted prior to deployment through a review by the Web3 Foundation economic research group.
Any final implementation MUST pass a professional external security audit.
The proposal introduces no new privacy concerns.
RFC-3 proposes a means of implementing the high-level allocations within the Relay-chain.
RFC-5 proposes the API for interacting with the Relay-chain.
Additional work should specify the interface for the instantaneous market revenue so that the Coretime-chain can ensure Bulk Coretime placed in the instantaneous market is properly compensated.
The percentage of cores to be sold as Bulk Coretime.
The fate of revenue collected.
Robert Habermeier initially wrote on the subject of Polkadot's blockspace-centric model in the article Polkadot Blockspace over Blockchains . While not going into details, the article served as an early reframing piece for moving beyond one-slot-per-chain models and building out secondary market infrastructure for resource allocation.
Authors Gavin Wood, Robert Habermeier
In the Agile Coretime model of the Polkadot Ubiquitous Computer, as proposed in RFC-1 and RFC-3, it is necessary for the allocating parachain (envisioned to be one or more pallets on a specialised Brokerage System Chain) to communicate the core assignments to the Relay-chain, which is responsible for ensuring those assignments are properly enacted.
This is a proposal for the interface which will exist around the Relay-chain in order to communicate this information and instructions.
The background motivation for this interface is splitting out coretime allocation functions and secondary markets from the Relay-chain onto System parachains. A well-understood and general interface is necessary for ensuring the Relay-chain receives coretime allocation instructions from one or more System chains without introducing dependencies on the implementation details of either side.
The interface MUST allow for the allocating chain to instruct changes to the number of cores which it is able to allocate.
The interface MUST allow for the allocating chain to be notified of changes to the number of cores which are able to be allocated by the allocating chain.
Primary stakeholder sets are:
Developers of the Relay-chain core-management logic.
Socialization:
The content of this RFC was discussed in the Polkadot Fellows channel.
The interface has two sections: the messages which the Relay-chain is able to receive from the allocating parachain (the UMP message types ), and the messages which the Relay-chain is able to send to the allocating parachain (the DMP message types ). These messages are expected to be implementable in a well-known pallet and invoked with the XCM Transact instruction.
Future work may include these messages being introduced into the XCM standard.
For request_revenue_info, a successful request should be possible if when is no less than the Relay-chain block number on arrival of the message less 100,000.
For assign_core, a successful request should be possible if begin is no less than the Relay-chain block number on arrival of the message plus 10 and workload contains no more than 100 items.
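The two acceptance rules above can be expressed as simple predicates. This is an illustrative sketch with hypothetical names, not the actual pallet API:

```rust
// Illustrative acceptance rules for the two UMP messages (names are
// assumptions, not the real pallet functions):
// - request_revenue_info(when): `when` may reach at most 100_000 blocks
//   into the past, measured from the block number on message arrival;
// - assign_core(begin, workload): `begin` must leave at least 10 blocks
//   of lead time, and the workload is bounded to 100 items.

type BlockNumber = u32;

fn revenue_request_acceptable(now: BlockNumber, when: BlockNumber) -> bool {
    // `when` is no less than `now` minus 100,000.
    when >= now.saturating_sub(100_000)
}

fn assign_core_acceptable(now: BlockNumber, begin: BlockNumber, workload_len: usize) -> bool {
    // `begin` is no less than `now` plus 10, and the workload holds
    // no more than 100 items.
    begin >= now + 10 && workload_len <= 100
}

fn main() {
    assert!(revenue_request_acceptable(200_000, 100_000));
    assert!(!revenue_request_acceptable(200_001, 100_000));
    assert!(assign_core_acceptable(50, 60, 100));
    assert!(!assign_core_acceptable(50, 59, 1));
}
```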
No specific considerations.
Standard Polkadot testing and security auditing applies.
The proposal introduces no new privacy concerns.
RFC-1 proposes a means of determining allocation of Coretime using this interface.
RFC-3 proposes a means of implementing the high-level allocations within the Relay-chain.
None at present.
None.
Authors Joe Petrowski
As core functionality moves from the Relay Chain into system chains, so increases the reliance on
the liveness of these chains for the use of the network. It is not economically scalable, nor
necessary from a game-theoretic perspective, to pay collators large rewards. This RFC proposes a
mechanism -- part technical and part social -- for ensuring reliable collator sets that are
resilient to attempts to stop any subsystem of the Polkadot protocol.
In order to guarantee access to Polkadot's system, the collators on its system chains must propose
blocks (provide liveness) and allow all transactions to eventually be included. That is, some
collators may censor transactions, but there must exist one collator in the set who will include a
Collators selected by governance SHOULD have a reasonable expectation that the Treasury will
reimburse their operating costs.
Infrastructure providers (people who run validator/collator nodes)
Polkadot Treasury
This protocol builds on the existing
Collator Selection pallet
and its notion of Invulnerables. Invulnerables are collators (identified by their AccountIds) who
of which 15 are Invulnerable, and
five are elected by bond.
The primary drawback is a reliance on governance for continued treasury funding of infrastructure
costs for Invulnerable collators.
The vast majority of cases can be covered by unit testing. Integration tests should ensure that the
Collator Selection UpdateOrigin, which has permission to modify the Invulnerables and desired
number of Candidates, can handle updates over XCM from the system's governance location.
This proposal has very little impact on most users of Polkadot, and should improve the performance
of system chains by reducing the number of missed blocks.
As chains have strict PoV size limits, care must be taken in the PoV impact of the session manager.
Appropriate benchmarking and tests should ensure that conservative limits are placed on the number
of Invulnerables and Candidates.
The primary group affected is Candidate collators, who, after implementation of this RFC, will need
to compete in a bond-based election rather than a race to claim a Candidate spot.
This RFC is compatible with the existing implementation and can be handled via upgrades and
migration.
GitHub: Collator Selection Roadmap
SR Labs Auditors
Current collators including Paranodes, Stake Plus, Turboflakes, Peter Mensik, SIK, and many more.
None at this time.
There may exist in the future system chains for which this model of collator selection is not
appropriate. These chains should be evaluated on a case-by-case basis.
Authors Pierre Krieger
The full nodes of the Polkadot peer-to-peer network maintain a distributed hash table (DHT), which is currently used for full nodes discovery and validators discovery purposes.
This RFC proposes to extend this DHT to be used to discover full nodes of the parachains of Polkadot.
The maintenance of bootnodes has long been an annoyance for everyone.
When a bootnode is newly-deployed or removed, every chain specification must be updated in order to take the update into account. This has led to various non-optimal solutions, such as pulling chain specifications from GitHub repositories.
When it comes to RPC nodes, UX developers often have trouble finding up-to-date addresses of parachain RPC nodes. With the ongoing migration from RPC nodes to light clients, similar problems would happen with chain specifications as well.
Because the list of bootnodes in chain specifications is so annoying to modify, the consequence is that the number of bootnodes is rather low (typically between 2 and 15). In order to better resist downtimes and DoS attacks, a better solution would be to use every node of a certain chain as potential bootnode, rather than special-casing some specific nodes.
While this RFC doesn't solve these problems for relay chains, it aims at solving it for parachains by storing the list of all the full nodes of a parachain on the relay chain DHT.
Assuming that this RFC is implemented, and that light clients are used, deploying a parachain wouldn't require more work than registering it onto the relay chain and starting the collators. There wouldn't be any need for special infrastructure nodes anymore.
This RFC has been opened on my own initiative because I think that this is a good technical solution to a usability problem that many people are encountering and that they don't realize can be solved.
The content of this RFC only applies for parachains and parachain nodes that are "Substrate-compatible". It is in no way mandatory for parachains to comply to this RFC.
Note that "Substrate-compatible" is very loosely defined as "implements the same mechanisms and networking protocols as Substrate". The author of this RFC believes that "Substrate-compatible" should be very precisely specified, but there is controversy on this topic.
While a lot of this RFC concerns the implementation of parachain nodes, it makes use of the resources of the Polkadot chain, and as such it is important to describe them in the Polkadot specification.
The maximum size of a response is set to an arbitrary 16kiB. The responding side should make sure to conform to this limit. Given that fork_id is typically very small and that the only variable-length field is addrs, this is easily achieved by limiting the number of addresses.
Implementers should be aware that addrs might be very large, and are encouraged to limit the number of addrs to an implementation-defined value.
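A minimal sketch of such an implementation-defined limit, standing in for the real protobuf encoding with a simple length estimate (the constant for fixed fields is an assumption, not part of the protocol):

```rust
// Drop trailing addresses until the estimated encoded response fits the
// 16 kiB ceiling. FIXED_OVERHEAD is an assumed budget for peer_id,
// genesis_hash and fork_id; real code would measure the actual encoding.
const MAX_RESPONSE_SIZE: usize = 16 * 1024;
const FIXED_OVERHEAD: usize = 128;

fn fit_addrs(mut addrs: Vec<Vec<u8>>) -> Vec<Vec<u8>> {
    // +2 per address approximates the per-field length prefix.
    while FIXED_OVERHEAD + addrs.iter().map(|a| a.len() + 2).sum::<usize>() > MAX_RESPONSE_SIZE {
        addrs.pop();
    }
    addrs
}

fn main() {
    // 200 addresses of 100 bytes (~20 kiB) must be truncated...
    let big = vec![vec![0u8; 100]; 200];
    assert!(fit_addrs(big).len() < 200);
    // ...while a small list passes through untouched.
    let small = vec![vec![0u8; 100]; 10];
    assert_eq!(fit_addrs(small).len(), 10);
}
```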
The peer_id and addrs fields are in theory not strictly needed, as the PeerId and addresses could always be equal to the PeerId and addresses of the node being registered as the provider and serving the response. However, the Cumulus implementation currently uses two different networking stacks, one for the parachain and one for the relay chain, with two separate PeerIds and addresses, and as such the PeerId and addresses of the other networking stack must be indicated. Asking them to use only one networking stack wouldn't be feasible in a realistic time frame.
The values of the genesis_hash and fork_id fields cannot be verified by the requester and are expected to be unused at the moment. Instead, a client that desires connecting to a parachain is expected to obtain the genesis hash and fork ID of the parachain from the parachain chain specification. These fields are included in the networking protocol nonetheless in case an acceptable solution is found in the future, and in order to allow use cases such as discovering parachains in a not-strictly-trusted way.
Because not all nodes want to be used as bootnodes, implementers are encouraged to provide a way to disable this mechanism. However, it is very much encouraged to leave this mechanism on by default for all parachain nodes.
This mechanism doesn't add or remove any security by itself, as it relies on existing mechanisms.
However, if the principle of chain specification bootnodes is entirely replaced with the mechanism described in this RFC (which is the objective), then it becomes important whether the mechanism in this RFC can be abused in order to make a parachain unreachable.
For this reason, an attacker can abuse this mechanism by randomly generating libp2p PeerIds until they find the 20 entries closest to the key representing the target parachain. They are then in control of the parachain bootnodes.
Because the key changes periodically and isn't predictable, and assuming that the Polkadot DHT is sufficiently large, it is not realistic for an attack like this to be maintained in the long term.
Furthermore, parachain clients are expected to cache a list of known good nodes on their disk. If the mechanism described in this RFC went down, it would only prevent new nodes from accessing the parachain, while clients that have connected before would not be affected.
The DHT mechanism generally has a low overhead, especially given that publishing providers is done only every 24 hours.
Doing a Kademlia iterative query then sending a provider record shouldn't take more than around 50 kiB in total of bandwidth for the parachain bootnode.
Assuming 1000 parachain full nodes, the 20 Polkadot full nodes corresponding to a specific parachain will each receive a sudden spike of a few megabytes of networking traffic when the key rotates. Again, this is relatively negligible. If this becomes a problem, one can add a random delay before a parachain full node registers itself to be the provider of the key corresponding to BabeApi_next_epoch.
Maybe the biggest uncertainty is the traffic that the 20 Polkadot full nodes will receive from light clients that desire knowing the bootnodes of a parachain. Light clients are generally encouraged to cache the peers that they use between restarts, so they should only query these 20 Polkadot full nodes at their first initialization.
If this ever becomes a problem, this value of 20 is an arbitrary constant that can be increased for more redundancy.
Irrelevant.
Irrelevant.
None.
While it fundamentally doesn't change much to this RFC, using BabeApi_currentEpoch and BabeApi_nextEpoch might be inappropriate. I'm not familiar enough with good practices within the runtime to have an opinion here. Should it be an entirely new pallet?
It is possible that in the future a client could connect to a parachain without having to rely on a trusted parachain specification.
Authors Jonas Gehrlein
The Polkadot UC will generate revenue from the sale of available Coretime. The question then arises: how should we handle these revenues? Broadly, there are two reasonable paths – burning the revenue and thereby removing it from total issuance, or diverting it to the Treasury. This Request for Comment (RFC) presents arguments favoring burning as the preferred mechanism for handling revenues from Coretime sales.
How to handle the revenue accrued from Coretime sales is an important economic question that influences the value of DOT and should be properly discussed before deciding on either option. Now is the best time to start this discussion.
Polkadot DOT token holders.
This RFC discusses the potential benefits of burning the revenue accrued from Coretime sales instead of diverting it to the Treasury. The arguments in favor are as follows.
It's in the interest of the Polkadot community to have a consistent and predictable Treasury income, because volatility in the inflow can be damaging, especially in situations when it is insufficient. As such, this RFC operates under the presumption of a steady and sustainable Treasury income flow, which is crucial for the Polkadot community's stability. The assurance of a predictable Treasury income, as outlined in a prior discussion here , or through other equally effective measures, serves as a baseline assumption for this argument.
Consequently, we need not concern ourselves with this particular issue here. This naturally begs the question - why should we introduce additional volatility to the Treasury by aligning it with the variable Coretime sales? It's worth noting that Coretime revenues often exhibit an inverse relationship with periods when Treasury spending should ideally be ramped up. During periods of low Coretime utilization (indicated by lower revenue), Treasury should spend more on projects and endeavours to increase the demand for Coretime. This pattern underscores that Coretime sales, by their very nature, are an inconsistent and unpredictable source of funding for the Treasury. Given the importance of maintaining a steady and predictable inflow, it's unnecessary to rely on another volatile mechanism. Some might argue that we could have both: a steady inflow (from inflation) and some added bonus from Coretime sales, but burning the revenue would offer further benefits as described below.
Authors Joe Petrowski
Since the introduction of the Collectives parachain, many groups have expressed interest in forming
new -- or migrating existing groups into -- on-chain collectives. While adding a new collective is
relatively simple from a technical standpoint, the Fellowship will need to merge new pallets into
the Collectives parachain for each new collective. This RFC proposes a means for the network to
ratify a new collective, thus instructing the Fellowship to instate it in the runtime.
Many groups have expressed interest in representing collectives on-chain. Some of these include:
Parachain technical fellowship (new)
the Fellowship to include the new collective with a given initial configuration into the runtime.
However, the network, not the Fellowship, should ultimately decide which collectives are in the
interest of the network.
Polkadot stakeholders who would like to organize on-chain.
Technical Fellowship, in its role of maintaining system runtimes.
The group that wishes to operate an on-chain collective should publish the following information:
Charter, including the collective's mandate and how it benefits Polkadot. This would be similar
or not the Fellowship member agrees with removal.
Collective removal may also come with other governance calls, for example voiding any scheduled
Treasury spends that would fund the given collective.
Passing a Root origin referendum is slow. However, given the network's investment (in terms of code
maintenance and salaries) in a new collective, this is an appropriate step.
No impacts.
Generally all new collectives will be in the Collectives parachain. Thus, performance impacts
should strictly be limited to this parachain and not affect others. As the majority of logic for
collectives is generalized and reusable, we expect most collectives to be instances of similar
subsets of modules. That is, new collectives should generally be compatible with UIs and other
services that provide collective-related functionality, with few modifications needed to support new ones.
The launch of the Technical Fellowship, see the
initial forum post .
None at this time.
Authors Oliver Tale-Yazdi
Introduces breaking changes to the Core runtime API by letting Core::initialize_block return an enum. The version of Core is bumped from 4 to 5.
The main feature that motivates this RFC are Multi-Block-Migrations (MBM); these make it possible to split a migration over multiple blocks.
Further, it would be nice not to hinder the possibility of implementing a new hook, poll, which runs at the beginning of the block when there are no MBMs and has access to AllPalletsWithSystem. This hook can then be used to replace the use of on_initialize and on_finalize for non-deadline-critical logic.
In a similar fashion, it should not hinder the future addition of a System::PostInherents callback that always runs after all inherents were applied.
Substrate Maintainers: They have to implement this, including tests, audit and
maintenance burden.
Polkadot Parachain Teams: They have to adapt to the breaking changes but then eventually have
multi-block migrations available.
This runtime API function is changed from returning () to ExtrinsicInclusionMode:
fn initialize_block(header: &<Block as BlockT>::Header) -> ExtrinsicInclusionMode;
1. Multi-Block-Migrations : The runtime is being put into lock-down mode for the duration of the migration process by returning OnlyInherents from initialize_block. This ensures that no user provided transaction can interfere with the migration process. It is absolutely necessary to ensure this, otherwise a transaction could call into un-migrated storage and violate storage invariants.
2. poll is possible by using apply_extrinsic as entry-point and not hindered by this approach. It would not be possible to use a pallet inherent like System::last_inherent to achieve this for two reasons: First is that pallets do not have access to AllPalletsWithSystem which is required to invoke the poll hook on all pallets. Second is that the runtime does currently not enforce an order of inherents.
3. System::PostInherents can be done in the same manner as poll.
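Point 1 above can be sketched as follows. This is a simplified illustration of the intended control flow, not the actual sp_runtime/block-builder types:

```rust
// Simplified sketch: the block author reads the mode returned by
// initialize_block and refuses user transactions while in lock-down.
#[derive(PartialEq)]
enum ExtrinsicInclusionMode {
    AllExtrinsics, // normal operation
    OnlyInherents, // e.g. a multi-block migration is in progress
}

enum Extrinsic {
    Inherent(&'static str),
    Transaction(&'static str),
}

fn build_block(mode: ExtrinsicInclusionMode, pool: Vec<Extrinsic>) -> Vec<&'static str> {
    pool.into_iter()
        .filter_map(|xt| match xt {
            // Inherents always get in.
            Extrinsic::Inherent(name) => Some(name),
            // User transactions are included only in normal mode, so they
            // cannot touch un-migrated storage during an MBM.
            Extrinsic::Transaction(name) if mode == ExtrinsicInclusionMode::AllExtrinsics => Some(name),
            Extrinsic::Transaction(_) => None,
        })
        .collect()
}

fn main() {
    let pool = vec![Extrinsic::Inherent("timestamp"), Extrinsic::Transaction("transfer")];
    assert_eq!(build_block(ExtrinsicInclusionMode::OnlyInherents, pool), vec!["timestamp"]);
}
```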
The previous drawback of cementing the order of inherents has been addressed and removed by redesigning the approach. No further drawbacks have been identified thus far.
The new logic of initialize_block can be tested by checking that the block-builder will skip transactions when OnlyInherents is returned.
Security: n/a
Privacy: n/a
The performance overhead is minimal in the sense that no clutter was added after fulfilling the
requirements. The only performance difference is that initialize_block also returns an enum that needs to be passed through the WASM boundary. This should be negligible.
The new interface allows for more extensible runtime logic. In the future, this will be utilized for
multi-block-migrations which should be a huge ergonomic advantage for parachain developers.
The advice here is OPTIONAL and outside of the RFC. To not degrade
user experience, it is recommended to ensure that an updated node can still import historic blocks.
The RFC is currently being implemented in polkadot-sdk#1781 (formerly substrate#14275 ). Related issues and merge
requests:
Please suggest a better name for BlockExecutiveMode. We already tried: RuntimeExecutiveMode,
ExtrinsicInclusionMode. The names of the modes Normal and Minimal were also called
AllExtrinsics and OnlyInherents, so if you have naming preferences, please post them.
=> renamed to ExtrinsicInclusionMode
Is post_inherents more consistent instead of last_inherent? Then we should change it.
=> renamed to last_inherent
The long-term future here is to move the block building logic into the runtime. Currently there is a tight dance between the block author and the runtime; the author has to call into different runtime functions in quick succession and in exact order. Any misstep causes the block to be invalid.
This can be unified and simplified by moving both parts into the runtime.
Authors Bryan Chen
This RFC proposes a set of changes to the parachain lock mechanism. The goal is to allow a parachain manager to self-service the parachain without root track governance action.
This is achieved by removing the existing lock conditions and only locking a parachain when:
A parachain manager explicitly locks the parachain
OR a parachain block is produced successfully
The manager of a parachain has permission to manage the parachain when the parachain is unlocked. Parachains are by default locked when onboarded to a slot. This requires that the parachain wasm/genesis be valid; otherwise, a root track governance action on the relaychain is required to update the parachain.
The current reliance on root track governance actions for managing parachains can be time-consuming and burdensome. This RFC aims to address this technical difficulty by allowing parachain managers to take self-service actions, rather than relying on general public voting.
The key scenarios this RFC seeks to improve are:
A parachain SHOULD be locked when it successfully produces its first block.
A parachain manager MUST be able to perform lease swap without having a running parachain.
Parachain teams
Parachain users
A parachain can either be locked or unlocked. With the parachain locked, the parachain manager does not have any privileges. With the parachain unlocked, the parachain manager can perform the following actions with the paras_registrar pallet:
The parachain never produced a block, including under expired leases.
The parachain manager never explicitly locked the parachain.
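The proposed rule can be sketched as a small predicate. The type and function names here are hypothetical, chosen only to illustrate the state machine, not the actual paras_registrar implementation:

```rust
// Hypothetical sketch: a parachain is locked iff the manager explicitly
// locked it OR it has successfully produced a block.
struct ParaLifecycle {
    explicitly_locked: bool,
    produced_block: bool, // set once the first block is produced
}

fn is_locked(para: &ParaLifecycle) -> bool {
    para.explicitly_locked || para.produced_block
}

fn manager_can_manage(para: &ParaLifecycle) -> bool {
    // The manager only has privileges (swap, replace wasm/genesis, ...)
    // while the parachain is unlocked.
    !is_locked(para)
}

fn main() {
    // A freshly registered parachain that never produced a block is
    // manageable without governance...
    let fresh = ParaLifecycle { explicitly_locked: false, produced_block: false };
    assert!(manager_can_manage(&fresh));
    // ...while a live parachain is locked against manager interference.
    let live = ParaLifecycle { explicitly_locked: false, produced_block: true };
    assert!(!manager_can_manage(&live));
}
```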
Parachain locks are designed in such a way as to ensure the decentralization of parachains. If parachains are not locked when they should be, it could introduce centralization risk for new parachains.
For example, one possible scenario is that a collective may decide to launch a parachain fully decentralized. However, if the parachain is unable to produce blocks, the parachain manager will be able to replace the wasm and genesis without the consent of the collective.
This risk is considered tolerable, as it requires the wasm/genesis to be invalid in the first place. It is not yet practically possible to develop a parachain without any centralization risk.
Another case is that a parachain team may decide to use a crowdloan to help secure a slot lease. Previously, creating a crowdloan would lock a parachain. This means crowdloan participants know exactly the genesis of the parachain for the crowdloan they are participating in. However, this actually provides little assurance to crowdloan participants. For example, even if the genesis block is determined before a crowdloan is started, it is not possible to have an onchain mechanism to enforce reward distributions for crowdloan participants. They always have to rely on the parachain team to fulfill the promise after the parachain is live.
Existing operational parachains will not be impacted.
The implementation of this RFC will be tested on testnets (Rococo and Westend) first.
An audit may be required to ensure the implementation does not introduce unwanted side effects.
There are no privacy-related concerns.
This RFC should not introduce any performance impact.
This RFC should improve the developer experience for new and existing parachain teams.
This RFC is fully compatible with existing interfaces.
Parachain Slot Extension Story: https://github.com/paritytech/polkadot/issues/4758
Allow parachain to renew lease without actually run another parachain: https://github.com/paritytech/polkadot/issues/6685
Always treat parachain that never produced block for a significant amount of time as unlocked: https://github.com/paritytech/polkadot/issues/7539
None at this stage.
This RFC is only intended to be a short-term solution. Slots will be removed in the future, and the lock mechanism is likely to be replaced with a more generalized parachain management & recovery system. Therefore, the long-term impacts of this RFC are not considered.
Encointer has been a system chain on Kusama since January 2022, developed and maintained by the Encointer association. This RFC proposes to treat Encointer like any other system chain and include it in the fellowship repo with this PR .
Encointer does not seek to be in control of its runtime repository. As a decentralized system, the fellowship has a more suitable structure to maintain a system chain runtime repo than the Encointer association does.
Also, Encointer aims to update its runtime in batches with other system chains in order to have consistency for interoperability across system chains.
Fellowship: Will continue to take on the review and auditing work for the Encointer runtime, but the process is streamlined with other system chains and therefore less time-consuming compared to the separate repo and CI process we currently have.
Kusama Network: Tokenholders can easily see the changes of all system chains in one place.
Encointer Association: Enables further decentralization of Encointer Network necessities like devops.
Encointer devs: Being able to work directly in the Fellowship runtimes repo to streamline and synergize with other developers.
Our PR has all details about our runtime and how we would move it into the fellowship repo.
Noteworthy: All Encointer-specific pallets will still be located in encointer's repo for the time being: https://github.com/encointer/pallets
It will still be the duty of the Encointer team to keep its runtime up to date and provide adequate test fixtures. Frequent dependency bumps with Polkadot releases would be beneficial for interoperability and could be streamlined with other system chains, but that will not be a duty of the fellowship. Whenever possible, all system chains could be upgraded jointly (including Encointer) with a batch referendum.
Encointer will publish all its crates on crates.io
Encointer does not carry out external auditing of its runtime nor pallets. It would be beneficial but not a requirement from our side if Encointer could join the auditing process of other system chains.
Unlike all other system chains, development and maintenance of the Encointer Network is mainly financed by the KSM Treasury and possibly the DOT Treasury in the future. Encointer is dedicated to maintaining its network and runtime code for as long as possible, but there is a dependency on funding which is not in the hands of the fellowship. The only risk in the context of funding, however, is that the Encointer runtime will see less frequent updates if there's less funding.
No changes to the existing system are proposed. Only changes to how maintenance is organized.
No changes
Existing Encointer runtime repo
None identified
More info on Encointer: encointer.org
Authors Joe Petrowski, Gavin Wood
The Relay Chain contains most of the core logic for the Polkadot network. While this was necessary
prior to the launch of parachains and development of XCM, most of this logic can exist in
parachains. This is a proposal to migrate several subsystems into system parachains.
Polkadot's scaling approach allows many distinct state machines (known generally as parachains) to
operate with common guarantees about the validity and security of their state transitions. Polkadot
provides these common guarantees by executing the state transitions on a strict subset (a backing
By minimising state transition logic on the Relay Chain by migrating it into "system chains" -- a
set of parachains that, with the Relay Chain, make up the Polkadot protocol -- the Polkadot
Ubiquitous Computer can maximise its primary offering: secure blockspace.
Parachains that interact with affected logic on the Relay Chain;
Core protocol and XCM format developers;
Tooling, block explorer, and UI developers.
The following pallets and subsystems are good candidates to migrate from the Relay Chain:
Identity
Staking is the subsystem most constrained by PoV limits. Ensuring that elections, payouts, session
changes, offences/slashes, etc. work in a parachain on Kusama -- with its larger validator set --
will give confidence to the chain's robustness on Polkadot.
These subsystems will have fewer resources on cores than they do on the Relay Chain. Staking in particular may require some optimizations to deal with these constraints.
Standard audit/review requirements apply. More powerful multi-chain integration test tools would be useful in development.
Describe the impact of the proposal on the exposed functionality of Polkadot.
This is an optimization. The removal of public/user transactions on the Relay Chain ensures that its
primary resources are allocated to system performance.
This proposal alters very little for coretime users (e.g. parachain developers). Application
developers will need to interact with multiple chains, making ergonomic light client tools
particularly important for application development.
For existing parachains that interact with these subsystems, they will need to configure their
runtimes to recognize the new locations in the network.
Implementing this proposal will require some changes to pallet APIs and/or a pub-sub protocol.
Application developers will need to interact with multiple chains in the network.
There remain some implementation questions, like how to use balances for both Staking and
Governance. See, for example, Moving Staking off the Relay
Chain .
Ideally the Relay Chain becomes transactionless, such that not even balances are represented there.
With Staking and Governance off the Relay Chain, this is not an unreasonable next step.
With Identity on Polkadot, Kusama may opt to drop its People Chain.
Authors Vedhavyas Singareddi
At the moment, we have the state_version field on RuntimeVersion that determines which state version is used for the storage. We have a use case where we want the extrinsics root to be derived using StateVersion::V1. Without defining a new field under RuntimeVersion, we would like to propose replacing state_version with system_version, which can be used to derive both the storage and extrinsic state versions.
Since the extrinsic state version is always StateVersion::V0, deriving the extrinsic root requires the full extrinsic data. This would be problematic when we need to verify the extrinsics root if the extrinsic sizes are large. This problem is further explored in https://github.com/polkadot-fellows/RFCs/issues/19
One of the main challenges here is that some extrinsics could be big enough that this cannot be included in the consensus block due to the block's weight restriction.
If the extrinsic root is derived using StateVersion::V1, then we do not need to pass the full extrinsic data but rather, at maximum, 32 bytes of extrinsic data.
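The size difference can be sketched as follows; `hash32` and `v1_trie_leaf` are illustrative stand-ins (Substrate actually uses blake2-256 and its own trie value-node encoding), showing only the idea that V1 replaces large values by a fixed 32-byte digest:

```rust
// Illustrative sketch: under StateVersion::V1, trie values longer than 32
// bytes contribute only a 32-byte hash to the trie (and hence the proof),
// instead of the full value. `hash32` is a toy stand-in for the blake2-256
// hashing actually used by Substrate.
fn hash32(data: &[u8]) -> [u8; 32] {
    let mut out = [0u8; 32];
    for (i, b) in data.iter().enumerate() {
        out[i % 32] ^= b; // deterministic toy digest, not cryptographic
    }
    out
}

fn v1_trie_leaf(value: &[u8]) -> Vec<u8> {
    if value.len() > 32 {
        hash32(value).to_vec() // only the 32-byte digest enters the proof
    } else {
        value.to_vec()
    }
}
```

So a 1 MiB extrinsic contributes 32 bytes to the proof under V1, while under V0 the full 1 MiB would be needed.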
Technical Fellowship, in its role of maintaining system runtimes.
In order to use a project-specific StateVersion for extrinsic roots, we proposed an implementation that introduced a parameter to frame_system::Config, but that unfortunately did not feel correct.
pub const VERSION: RuntimeVersion = RuntimeVersion {
    // ...
    system_version: 1,
};
There should be no drawbacks, as it would replace state_version with the same behavior, but documentation should be updated so that chains know which system_version value to use.
As far as we know, this should not have any impact on security or privacy.
These changes should be compatible for existing chains if they use their state_version value for system_version.
I do not believe there is any performance hit with this change.
This does not break any exposed APIs.
This change should not break any compatibility.
We proposed introducing a similar change by adding a parameter to frame_system::Config, but did not feel that was the correct way of introducing this change.
I do not have any specific questions about this change at the moment.
IMO, this change is pretty self-contained and there won't be any future work necessary.
Authors Sebastian Kunert
This RFC proposes a new host function for parachains, storage_proof_size. It shall provide the size of the currently recorded storage proof to the runtime. Runtime authors can use the proof size to improve block utilization by retroactively reclaiming unused storage weight.
The number of extrinsics that are included in a parachain block is limited by two constraints: execution time and proof size. FRAME weights cover both concepts, and block-builders use them to decide how many extrinsics to include in a block. However, these weights are calculated ahead of time by benchmarking on a machine with reference hardware. The execution-time properties of the state-trie and its storage items are unknown at benchmarking time. Therefore, we make some assumptions about the state-trie:
Trie Depth: We assume a trie depth to account for intermediary nodes.
These pessimistic assumptions lead to an overestimation of storage weight, negatively impacting block utilization on parachains.
In addition, the current model does not account for multiple accesses to the same storage items. While these repetitive accesses will not increase storage-proof size, the runtime-side weight monitoring will account for them multiple times. Since the proof size is completely opaque to the runtime, we can not implement retroactive storage weight correction.
A solution must provide a way for the runtime to track the exact storage-proof size consumed on a per-extrinsic basis.
Parachain Teams: They MUST include this host function in their runtime and node.
Light-client Implementors: They SHOULD include this host function in their runtime and node.
This RFC proposes a new host function that exposes the storage-proof size to the runtime. As a result, runtimes can implement storage weight reclaiming mechanisms that improve block utilization.
This RFC proposes the following host function signature:
#![allow(unused)]
fn main() {
fn ext_storage_proof_size_version_1() -> u64;
}
The host function MUST return an unsigned 64-bit integer value representing the current proof size. In block-execution and block-import contexts, this function MUST return the current size of the proof. To achieve this, parachain node implementors need to enable proof recording for block imports. In other contexts, this function MUST return 18446744073709551615 (u64::MAX), which represents disabled proof recording.
Parachain nodes need to enable proof recording during block import to correctly implement the proposed host function. Benchmarking conducted with balance transfers has shown a performance reduction of around 0.6% when proof recording is enabled.
The host function proposed in this RFC allows parachain runtime developers to keep track of the proof size. Typical usage patterns would be to keep track of the overall proof size or the difference between subsequent calls to the host function.
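A minimal sketch of the difference-based pattern, with the host function mocked out (in a real runtime this would call `ext_storage_proof_size_version_1`; the helper names here are hypothetical):

```rust
// Sketch of per-extrinsic proof-size tracking via the proposed host
// function. `storage_proof_size` is mocked here; in a runtime it would be
// the host function `ext_storage_proof_size_version_1`, and u64::MAX
// signals that proof recording is disabled.
fn storage_proof_size() -> u64 {
    1024 // fixed value for illustration
}

// Measure the proof size consumed by one extrinsic as the difference
// between two calls to the host function.
fn proof_size_of<F: FnOnce()>(extrinsic: F) -> Option<u64> {
    let before = storage_proof_size();
    if before == u64::MAX {
        return None; // proof recording disabled in this context
    }
    extrinsic();
    Some(storage_proof_size() - before)
}
```

A runtime could then retroactively refund the difference between the benchmarked proof-size weight and the measured value.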
Parachain teams will need to include this host function to upgrade.
Pull Request including proposed host function: PoV Reclaim (Clawback) Node Side .
Issue with discussion: [FRAME core] Clawback PoV Weights For Dispatchables
Authors Aurora Poppyseed, Just_Luuuu, Viki Val, Joe Petrowski
This RFC proposes changing the current deposit requirements on the Polkadot and Kusama Asset Hub for
creating an NFT collection, minting an individual NFT, and lowering its corresponding metadata and
attribute deposits. The objective is to lower the barrier to entry for NFT creators, fostering a
more inclusive and vibrant ecosystem while maintaining network integrity and preventing spam.
The current deposit of 10 DOT for collection creation (along with 0.01 DOT for item deposit and 0.2
DOT for metadata and attribute deposits) on the Polkadot Asset Hub and 0.1 KSM on Kusama Asset Hub
presents a significant financial barrier for many NFT creators. By lowering the deposit
Deposits SHOULD be derived from the deposit function, adjusted by the corresponding pricing mechanism.
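For reference, the deposit function mentioned here usually has the FRAME shape sketched below; the constants are assumptions modeled on typical Polkadot-SDK runtimes, not values defined by this RFC:

```rust
// Sketch of the standard Polkadot-SDK deposit helper the text refers to.
// The constants are assumptions modeled on the Polkadot runtime, not
// values defined by this RFC.
type Balance = u128;
const UNITS: Balance = 10_000_000_000; // 1 DOT = 10^10 plancks
const DOLLARS: Balance = UNITS;
const CENTS: Balance = DOLLARS / 100;
const MILLICENTS: Balance = CENTS / 1_000;

// Charge per stored item and per stored byte.
pub const fn deposit(items: u32, bytes: u32) -> Balance {
    items as Balance * 20 * DOLLARS + (bytes as Balance) * 100 * MILLICENTS
}
```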
NFT Creators : Primary beneficiaries of the proposed change, particularly those who found the
current deposit requirements prohibitive.
Previous discussions have been held within the Polkadot
Forum , with
artists expressing their concerns about the deposit amounts.
This RFC proposes a revision of the deposit constants in the configuration of the NFTs pallet on the
Polkadot Asset Hub. The new deposit amounts would be determined by a standard deposit formula.
As of v1.1.1, the Collection Deposit is 10 DOT and the Item Deposit is 0.01 DOT.
To avoid sudden rate changes, the deposit can follow a gradually applied curve, where the constant a moves the inflection to lower or higher x values, the constant b adjusts the rate of the deposit increase, and the independent variable x is the number of collections or items, depending on application.
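As an illustration only, a logistic curve with the parameters described above could look like this (it is not necessarily the RFC's exact formula):

```rust
// Illustration only: a generic logistic curve matching the description in
// the text; not necessarily the RFC's exact formula. `a` moves the
// inflection along x, `b` controls how fast the deposit ramps up, and `x`
// is the number of collections or items.
fn sigmoid(x: f64, a: f64, b: f64) -> f64 {
    1.0 / (1.0 + (-b * (x - a)).exp())
}
```

The resulting factor in (0, 1) could scale a maximum deposit so that early collections pay little and the deposit approaches the maximum as x grows.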
Modifying deposit requirements necessitates a balanced assessment of the potential drawbacks.
Highlighted below are cogent points extracted from the discourse on the Polkadot Forum
conversation ,
stakeholders wouldn't be much affected. As of 9 January 2024, there are 42 collections on the Polkadot Asset Hub and 191 on the Kusama Asset Hub, with relatively low volume.
As noted above, state bloat is a security concern. In the case of abuse, governance could adapt by
increasing deposit rates and/or using forceDestroy on collections agreed to be spam.
The primary performance consideration stems from the potential for state bloat due to increased
activity from lower deposit requirements. It's vital to monitor and manage this to avoid any
negative impact on the chain's performance. Strategies for mitigating state bloat, including
efficient data management and periodic reviews of storage requirements, will be essential.
The proposed change aims to enhance the user experience for artists, traders, and utilizers of
Kusama and Polkadot Asset Hubs, making Polkadot and Kusama more accessible and user-friendly.
The change does not impact compatibility as a redeposit function is already implemented.
If this RFC is accepted, there should not be any unresolved questions regarding how to adapt the
implementation of deposits for NFT collections.
Authors Alin Dima
Propose a way of permuting the availability chunk indices assigned to validators, in the context of
recovering available data from systematic chunks , with the
purpose of fairly distributing network bandwidth usage.
Currently, the ValidatorIndex is always identical to the ChunkIndex. Since the validator array is only shuffled once
per session, naively using the ValidatorIndex as the ChunkIndex would pose an unreasonable stress on the first N/3
validators during an entire session, when favouring availability recovery from systematic chunks.
This RFC proposes a way of assigning the systematic availability chunks to different validators, based on the relay chain block and core.
The main purpose is to ensure fair distribution of network bandwidth usage for availability recovery in general and in
particular for systematic chunk holders.
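A toy sketch of such a permutation; the offset derivation below is an assumption for illustration and not the RFC's actual mapping:

```rust
// Toy sketch: rotate the validator->chunk assignment by an offset derived
// from the relay-chain block number and core index. The offset derivation
// is an assumption for illustration, not the RFC's actual mapping.
fn chunk_index(n_validators: u32, validator_index: u32, block_number: u32, core_index: u32) -> u32 {
    let shuffle_offset = block_number.wrapping_add(core_index) % n_validators;
    (validator_index + shuffle_offset) % n_validators
}
```

Because the mapping is a rotation, it stays a bijection: every chunk is still held by exactly one validator, but the first N/3 systematic chunk indices land on different validators per block and core.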
Relay chain node core developers.
An erasure coding algorithm is considered systematic if it preserves the original unencoded data as part of the
resulting code.
struct (added in https://github.com/paritytech/polkadot-sdk/pull/2177) via the Configuration::set_node_feature extrinsic. Once the feature is enabled and the new configuration is live, the
validator->chunk mapping ceases to be a 1:1 mapping and systematic recovery may begin.
Getting access to the core_index that used to be occupied by a candidate in some parts of the dispute protocol is
very complicated (See appendix A ). This RFC assumes that availability-recovery processes initiated during
@@ -5310,28 +5232,28 @@ mitigate this problem and will likely be needed in the future for CoreJam and/or
Related discussion about updating CandidateReceipt
It's a breaking change that requires all validators and collators to upgrade their node version at least once.
Extensive testing will be conducted - both automated and manual.
This proposal doesn't affect security or privacy.
This is a necessary data availability optimisation, as Reed-Solomon erasure coding has proven to be a top consumer of CPU time in Polkadot as we scale up the parachain block size and number of availability cores.
With this optimisation, preliminary performance results show that CPU time used for Reed-Solomon coding/decoding can be halved and total PoV recovery time decreased by 80% for large PoVs. See more here.
Not applicable.
This is a breaking change. See upgrade path section above.
All validators and collators need to have upgraded their node versions before the feature will be enabled via a
governance call.
See comments on the tracking issue and the
in-progress PR
Not applicable.
This enables future optimisations for the performance of availability recovery, such as retrieving batched systematic
chunks from backers/approval-checkers.
Authors Bastian Köcher
This RFC proposes changes to the SessionKeys::generate_session_keys runtime API interface. This runtime API is used by validator operators to
generate new session keys on a node. The public session keys are then registered manually on chain by the validator operator.
Before this RFC it was not possible for the on-chain logic to ensure that the account setting the public session keys is also in
possession of the private session keys. To solve this, the RFC proposes to pass the account id used for the registration on chain to generate_session_keys. Further, this RFC proposes to change the return value of the generate_session_keys
function also to not only return the public session keys, but also the proof of ownership for the private session keys. The
validator operator will then need to send the public session keys and the proof together when registering new session keys on chain.
When submitting the new public session keys to the on-chain logic there is no verification of possession of the private session keys.
This means that users can basically register any kind of public session keys on chain. While the on-chain logic ensures that there are
no duplicate keys, someone could try to prevent others from registering new session keys by setting them first. While this wouldn't bring
the "attacker" any kind of advantage, more like disadvantages (potential costs), it could prevent a validator from,
e.g., changing its session key in the event of a private session key leak.
After this RFC this kind of attack would not be possible anymore, because the on chain logic can verify that the sending account
is in ownership of the private session keys.
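Conceptually, the on-chain check amounts to verifying one signature per public session key over the sender's encoded account id. The sketch below is illustrative; the names and the `verify` callback stand in for the actual Substrate per-key-type signature verification:

```rust
// Conceptual sketch of the check described above; names and the `verify`
// callback are illustrative stand-ins, not the actual Substrate API. The
// proof carries one signature per public session key over the sender's
// SCALE-encoded account id, and ownership holds only if all of them verify.
fn verify_ownership<F>(
    public_keys: &[Vec<u8>],
    signatures: &[Vec<u8>],
    account_id: &[u8],
    verify: F,
) -> bool
where
    F: Fn(&[u8], &[u8], &[u8]) -> bool, // (public_key, signature, message)
{
    public_keys.len() == signatures.len()
        && public_keys
            .iter()
            .zip(signatures.iter())
            .all(|(pk, sig)| verify(pk.as_slice(), sig.as_slice(), account_id))
}
```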
Polkadot runtime implementors
Polkadot node implementors
Validator operators
We are first going to explain the proof format being used:
already gets the proof passed as Vec<u8>. This proof needs to be decoded to
the actual Proof type as explained above. The proof and the SCALE encoded
account_id of the sender are used to verify the ownership of the SessionKeys.
Validator operators need to pass their account id when rotating their session keys on a node.
This will require updating some high level docs and making users familiar with the slightly changed ergonomics.
Testing of the new changes only requires passing an appropriate owner for the current testing context.
The changes to the proof generation and verification got audited to ensure they are correct.
The session key generation is an offchain process and thus doesn't influence the performance of the chain. Verifying the proof is done on chain as part of the transaction logic for setting the session keys.
The verification of the proof amounts to one signature verification per individual session key. As setting the session keys happens quite rarely, it should not influence the overall system performance.
The interfaces have been optimized to make it as easy as possible to generate the ownership proof.
Introduces a new version of the SessionKeys runtime api. Thus, nodes should be updated before
a runtime is enacted that contains these changes otherwise they will fail to generate session keys.
The RPC that exists around this runtime api needs to be updated to support passing the account id
and for returning the ownership proof alongside the public session keys.
UIs would need to be updated to support the new RPC and the changed on chain logic.
None.
None.
Substrate implementation of the RFC .
Authors Joe Petrowski, Gavin Wood
The Fellowship Manifesto states that members should receive a monthly allowance on par with gross
income in OECD countries. This RFC proposes concrete amounts.
One motivation for the Technical Fellowship is to provide an incentive mechanism that can induct and
retain technical talent for the continued progress of the network.
In order for members to uphold their commitment to the network, they should receive support to
on par with a full-time job. Providing a livable wage to those making such contributions makes it
pragmatic to work full-time on Polkadot.
Note: Goals of the Fellowship, expectations for each Dan, and conditions for promotion and demotion
are all explained in the Manifesto. This RFC is only to propose concrete values for allowances.
Fellowship members
Polkadot Treasury
This RFC proposes agreeing on salaries relative to a single level, the III Dan. As such, changes to
the amount or asset used would only be on a single value, and all others would adjust relatively. A
III Dan is someone whose contributions match the expectations of a full-time individual contributor.
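The "relative to a single level" scheme can be sketched as follows; the ratios are placeholders for illustration, not the RFC's values:

```rust
// Hypothetical sketch of "salaries relative to a single level": only the
// III Dan base amount is stored and every other rank derives from it by a
// fixed ratio. The ratios below are placeholders, not the RFC's values.
fn monthly_allowance(base_iii_dan: u64, dan: usize) -> Option<u64> {
    // ratios in percent of the III Dan allowance (placeholder values)
    const RATIO_PERCENT: [u64; 5] = [10, 50, 100, 150, 200];
    if dan == 0 {
        return None;
    }
    RATIO_PERCENT.get(dan - 1).map(|r| base_iii_dan * r / 100)
}
```

Changing the amount or asset then means changing a single stored value, with all other ranks adjusting relatively.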
Updates to these levels, whether relative ratios, the asset used, or the amount, shall be done via
RFC.
By not using DOT for payment, the protocol relies on the stability of other assets and the ability
to acquire them. However, the asset of choice can be changed in the future.
N/A.
N/A
N/A
N/A
None at present.
Authors Pierre Krieger
When two peers connect to each other, they open (amongst other things) a so-called "notifications protocol" substream dedicated to gossiping transactions to each other.
Each notification on this substream currently consists of a SCALE-encoded Vec<Transaction> where Transaction is defined in the runtime.
This RFC proposes to modify the format of the notification to become (Compact(1), Transaction). This maintains backwards compatibility, as this new format decodes as a Vec of length equal to 1.
There are three motivations behind this change:
It makes the implementation much more straightforward by not having to repeat code related to back-pressure. See explanations below.
Low-level developers.
To give an example, if you send one notification with three transactions, the bytes that are sent on the wire are:
concat(
leb128(total-size-in-bytes-of-the-rest),
A SCALE-compact encoded 1 is one byte of value 4.
This is equivalent to forcing the Vec<Transaction> to always have a length of 1, and I expect the Substrate implementation to simply modify the sending side to add a for loop that sends one notification per item in the Vec.
As explained in the motivation section, this allows extracting scale(transaction) items without having to know how to decode them.
By "flattening" the two-steps hierarchy, an implementation only needs to back-pressure individual notifications rather than back-pressure notifications and transactions within notifications.
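A sketch of why this stays backwards compatible: in SCALE, a Vec<T> is a compact-encoded length followed by the items, and compact(1) encodes to the single byte 0x04, so (Compact(1), Transaction) is byte-for-byte a Vec of length one:

```rust
// Sketch: in SCALE, a Vec<T> is a compact-encoded length followed by the
// items, and compact(1) encodes to the single byte 0x04. Prefixing one
// transaction with that byte therefore decodes as a Vec of length one.
fn compact_u32(n: u32) -> Vec<u8> {
    assert!(n <= 63); // only the single-byte compact mode is needed here
    vec![(n as u8) << 2]
}

fn encode_notification(transaction: &[u8]) -> Vec<u8> {
    let mut out = compact_u32(1); // length prefix: exactly one item
    out.extend_from_slice(transaction);
    out
}
```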
This RFC chooses to maintain backwards compatibility at the cost of introducing a very small wart (the Compact(1)).
An alternative could be to introduce a new version of the transactions notifications protocol that sends one Transaction per notification, but this is significantly more complicated to implement and can always be done later in case the Compact(1) is bothersome.
Irrelevant.
Irrelevant.
Irrelevant.
The change is backwards compatible if done in two steps: modify the sender to always send one transaction per notification, then, after a while, modify the receiver to enforce the new format.
Irrelevant.
None.
None. This is a simple isolated change.
Authors Pierre Krieger
This RFC proposes to make the mechanism of RFC #8 more generic by introducing the concept of "capabilities".
Implementations can implement certain "capabilities", such as serving old block headers or being a parachain bootnode.
The discovery mechanism of RFC #8 is extended to be able to discover nodes of specific capabilities.
The Polkadot peer-to-peer network is made of nodes. Not all these nodes are equal. Some nodes store only the headers of recent blocks, some nodes store all the block headers and bodies since the genesis, some nodes store the storage of all blocks since the genesis, and so on.
It is currently not possible to know ahead of time (without connecting to it and asking) which nodes have which data available, and it is not easily possible to build a list of nodes that have a specific piece of data available.
If you want to download for example the header of block 500, you have to connect to a randomly-chosen node, ask it for block 500, and if it says that it doesn't have the block, disconnect and try another randomly-chosen node.
In certain situations such as downloading the storage of old blocks, nodes that have the information are relatively rare, and finding through trial and error a node that has the data can take a long time.
This RFC attempts to solve this problem by giving the possibility to build a list of nodes that are capable of serving specific data.
Low-level client developers.
People interested in accessing the archive of the chain.
Reading RFC #8 first might help with comprehension, as this RFC is very similar.
Please keep in mind while reading that everything below applies to both relay chains and parachains, unless mentioned otherwise.
Implementations that have the head of the chain provider capability do not register themselves as providers, but instead are the nodes that participate in the main DHT. In other words, they are the nodes that serve requests of the /<genesis_hash>/kad protocol.
Any implementation that isn't a head of the chain provider (read: light clients) must not participate in the main DHT. This is already presently the case.
Implementations must not participate in the main DHT if they don't fulfill the capability yet. For example, a node that is still in the process of warp syncing must not participate in the main DHT. However, assuming that warp syncing doesn't last more than a few seconds, it is acceptable to ignore this requirement in order to avoid complicating implementations too much.
None that I can see.
The content of this section is basically the same as the one in RFC 8.
This mechanism doesn't add or remove any security by itself, as it relies on existing mechanisms.
Due to the way Kademlia works, it would become the responsibility of the 20 Polkadot nodes whose sha256(peer_id) is closest to the key (described in the explanations section) to store the list of nodes that have specific capabilities.
Furthermore, when a large number of providers are registered, only the providers closest to the key are kept, up to a certain implementation-defined limit.
For this reason, an attacker can abuse this mechanism by randomly generating libp2p PeerIds until they find the 20 entries closest to the key representing the target capability. They are then in control of the list of nodes with that capability. While doing this can in no way be actually harmful, it could lead to eclipse attacks.
Because the key changes periodically and isn't predictable, and assuming that the Polkadot DHT is sufficiently large, it is not realistic for an attack like this to be maintained in the long term.
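The "closest to the key" notion used above is XOR distance over the digests; a toy sketch with an 8-byte digest standing in for sha256(peer_id):

```rust
// Sketch of the Kademlia notion used above: the nodes "closest to the key"
// minimize the XOR distance between their digest and the key. An 8-byte
// digest stands in for sha256(peer_id) here.
fn xor_distance(a: &[u8; 8], b: &[u8; 8]) -> u64 {
    u64::from_be_bytes(*a) ^ u64::from_be_bytes(*b)
}

fn closest_k(mut digests: Vec<[u8; 8]>, key: &[u8; 8], k: usize) -> Vec<[u8; 8]> {
    digests.sort_by_key(|d| xor_distance(d, key));
    digests.truncate(k); // keep the k closest, e.g. k = 20 in the text
    digests
}
```

Grinding PeerIds until 20 generated digests beat all honest ones in this ordering is exactly the attack described above.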
The DHT mechanism generally has a low overhead, especially given that publishing providers is done only every 24 hours.
Doing a Kademlia iterative query then sending a provider record shouldn't take more than around 50 kiB in total of bandwidth for the parachain bootnode.
Assuming 1000 nodes with a specific capability, the 20 Polkadot full nodes corresponding to that capability will each receive a sudden spike of a few megabytes of networking traffic when the key rotates. Again, this is relatively negligible. If this becomes a problem, one can add a random delay before a node registers itself as the provider of the key corresponding to BabeApi_next_epoch.
Maybe the biggest uncertainty is the traffic that the 20 Polkadot full nodes will receive from light clients that desire knowing the nodes with a capability. If this ever becomes a problem, this value of 20 is an arbitrary constant that can be increased for more redundancy.
Irrelevant.
Irrelevant.
Unknown.
While it fundamentally doesn't change much to this RFC, using BabeApi_currentEpoch and BabeApi_nextEpoch might be inappropriate. I'm not familiar enough with good practices within the runtime to have an opinion here. Should it be an entirely new pallet?
This RFC would make it possible to reliably discover archive nodes, which would make it possible to reliably send archive node requests, something that isn't currently possible. This could solve the problem of finding archive RPC node providers by migrating archive-related request to using the native peer-to-peer protocol rather than JSON-RPC.
If we ever decide to break backwards compatibility, we could divide the "history" and "archive" capabilities in two, between nodes capable of serving older blocks and nodes capable of serving newer blocks.
We could even add to the peer-to-peer network nodes that are only capable of serving older blocks (by reading from a database) but do not participate in the head of the chain, and that just exist for historical purposes.
Authors Zondax AG, Parity Technologies
To interact with chains in the Polkadot ecosystem it is required to know how transactions are encoded and how to read state. For doing this, Polkadot-SDK, the framework used by most of the chains in the Polkadot ecosystem, exposes metadata about the runtime to the outside. UIs, wallets, and others can use this metadata to interact with these chains. This makes the metadata a crucial piece of the transaction encoding as users are relying on the interacting software to encode the transactions in the correct format.
It gets even more important when the user signs the transaction on an offline wallet, as the device by its nature cannot get access to the metadata without relying on the online wallet to provide it. The offline wallet therefore needs to trust an online party, rendering the security assumptions of the offline device moot.
This RFC proposes a way for offline wallets to leverage metadata within their constraints. The design idea is that the metadata is chunked and these chunks are put into a merkle tree. The root hash of this merkle tree represents the metadata. The offline wallets can use the root hash to decode transactions by getting proofs for the individual chunks of the metadata. This root hash is also included in the signed data of the transaction (but not sent as part of the transaction). The runtime then includes its known metadata root hash when verifying the transaction. If the metadata root hash known by the runtime differs from the one that the offline wallet used, it very likely means that the online wallet provided some fake data and the verification of the transaction fails.
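The chunk-and-merkleize idea can be sketched as below; the RFC specifies its own hash function and node layout, so std's DefaultHasher here is purely a stand-in to show the tree construction:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Toy stand-in hash; the RFC specifies its own hash function and node
// layout, so DefaultHasher is used here purely to show the construction.
fn h(data: &[u8]) -> u64 {
    let mut hasher = DefaultHasher::new();
    data.hash(&mut hasher);
    hasher.finish()
}

// Fold a level of leaves pairwise until a single root remains; an odd
// leaf is paired with itself.
fn merkle_root(mut leaves: Vec<u64>) -> u64 {
    if leaves.is_empty() {
        return 0;
    }
    while leaves.len() > 1 {
        leaves = leaves
            .chunks(2)
            .map(|pair| {
                let right = *pair.get(1).unwrap_or(&pair[0]);
                let mut buf = pair[0].to_le_bytes().to_vec();
                buf.extend_from_slice(&right.to_le_bytes());
                h(&buf)
            })
            .collect();
    }
    leaves[0]
}

// Chunk the metadata and commit to all chunks with a single root hash.
fn metadata_root(metadata: &[u8], chunk_size: usize) -> u64 {
    merkle_root(metadata.chunks(chunk_size).map(h).collect())
}
```

An offline wallet holding only the root can then accept individual chunks together with merkle proofs, without ever storing the full metadata.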
Users depend on offline wallets to correctly display decoded transactions before signing. With merkleized metadata, they can be assured of the transaction's legitimacy, as incorrect transactions will be rejected by the runtime.
Polkadot's innovative design (both relay chain and parachains) present the ability to developers to upgrade their network as frequently as they need. These systems manage to have integrations working after the upgrades with the help of FRAME Metadata. This Metadata, which is in the order of half a MiB for most Polkadot-SDK chains, completely describes chain interfaces and properties. Securing this metadata is key for users to be able to interact with the Polkadot-SDK chain in the expected way.
On the other hand, offline wallets provide a secure way for blockchain users to hold their own keys (some do a better job than others). These devices seldom get upgraded, usually account for one particular network, and have very small internal memories. Currently in the Polkadot ecosystem there is no secure way for these offline devices to know the latest metadata of the Polkadot-SDK chain they are interacting with. This results in a plethora of similar yet slightly different offline wallets for all the different Polkadot-SDK chains, as well as the impediment of keeping these regularly updated, thus not fully leveraging Polkadot-SDK's unique forkless upgrade feature.
The two main reasons why this is not possible today are:
Chunks handling mechanism SHOULD support chunks being sent in any order without memory utilization overhead;
Unused enum variants MUST be stripped (this has great impact on transmitted metadata size; examples: era enum, enum with all calls for call batching).
Runtime implementors
UI/wallet implementors
Offline wallet implementors
The idea for this RFC was brought up by runtime implementors and was extensively discussed with offline wallet implementors. It was designed in such a way that it can work easily with the existing offline wallet solutions in the Polkadot ecosystem.
The FRAME metadata provides a wide range of information about a FRAME based runtime. It contains information about the pallets, the calls per pallet, the storage entries per pallet, runtime APIs, and type information about most of the types that are used in the runtime. For decoding extrinsics on an offline wallet, what is mainly required is type information. Most of the other information in the FRAME metadata is actually not required for decoding extrinsics and thus it can be removed. Therefore, the following is a proposal on a custom representation of the metadata and how this custom metadata is chunked, ensuring that only the needed chunks required for decoding a particular extrinsic are sent to the offline wallet. The necessary information to transform the FRAME metadata type information into the type information presented in this RFC will be provided. However, not every single detail on how to convert from FRAME metadata into the RFC type information is described.
First, the MetadataDigest is introduced. After that, ExtrinsicMetadata is covered, and finally the actual format of the type information. Then the pruning of unrelated type information is covered, as well as how to generate the TypeRefs. In the last step, the merkle tree calculation is explained.
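The merkle step at the end of that pipeline can be sketched generically: pair adjacent nodes and hash upward until a single root remains. The toy `hash_pair` below is a stand-in for the RFC's actual hash function, and the exact tree shape used by the RFC differs in its details; this sketch only illustrates the bottom-up pairing idea.

```rust
// Toy stand-in for the RFC's actual hash function.
fn hash_pair(a: u64, b: u64) -> u64 {
    a.wrapping_mul(31).wrapping_add(b)
}

// Generic bottom-up merkleization over already-hashed chunk nodes.
fn merkle_root(mut nodes: Vec<u64>) -> Option<u64> {
    if nodes.is_empty() {
        return None;
    }
    while nodes.len() > 1 {
        nodes = nodes
            .chunks(2)
            // An unpaired last node is carried up to the next level unchanged.
            .map(|pair| if pair.len() == 2 { hash_pair(pair[0], pair[1]) } else { pair[0] })
            .collect();
    }
    Some(nodes[0])
}
```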
@@ -6181,23 +6103,23 @@ nodes: [[[2, 3], [4, 5]], [0, 1]]
Included in the extrinsic is a u8, the "mode". The mode is either 0, meaning the metadata hash is not included in the signed data, or 1, meaning the V1 metadata hash is included.
Included in the signed data is an Option<[u8; 32]>. Depending on the mode the value is either None or Some(metadata_hash).
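The mode/hash rule above can be sketched as follows; function and parameter names are ours, not from the RFC:

```rust
// The `mode` byte travels in the extrinsic itself; the hash only ever
// appears in the signed data, never on-chain.
fn signed_data_suffix(mode: u8, metadata_hash: [u8; 32]) -> Option<[u8; 32]> {
    match mode {
        0 => None,                // mode 0: hash not part of the signed data
        1 => Some(metadata_hash), // mode 1: include the V1 metadata hash
        _ => panic!("unknown mode"),
    }
}
```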
-
+
The chunking may not be the optimal scheme for every kind of offline wallet.
-
+
All implementations are required to strictly follow the RFC to generate the metadata hash. This includes which hash function to use and how to construct the metadata types tree. So, all implementations are following the same security criteria. As the chains will calculate the metadata hash at compile time, the build process needs to be trusted. However, this is already a solved problem in the Polkadot ecosystem by using reproducible builds. So, anyone can rebuild a chain runtime to ensure that a proposal is actually containing the changes as advertised.
Implementations can also be tested easily against each other by taking some metadata and ensuring that they all come to the same metadata hash.
Privacy of users should also not be impacted. This assumes that wallets will generate the metadata hash locally and don't leak any information to third party services about which chunks a user will send to their offline wallet. Besides that, there is no leak of private information as getting the raw metadata from the chain is an operation that is done by almost everyone.
-
-
+
+
There should be no measurable impact on performance to Polkadot or any other chain using this feature. The metadata root hash is calculated at compile time and at runtime it is optionally used when checking the signature of a transaction. This means that at runtime no performance heavy operations are done.
The proposal alters the way a transaction is built, signed, and verified. So, this imposes some required changes on any kind of developer who wants to construct transactions for Polkadot or any chain using this feature. As the developer can pass 0 to disable the verification of the metadata root hash, it can be easily ignored.
-
+
RFC 46 produced by the Alzymologist team is a previous work reference that goes in this direction as well.
On other ecosystems, there are other solutions to the problem of trusted signing. Cosmos for example has a standardized way of transforming a transaction into some textual representation and this textual representation is included in the signed data. Basically achieving the same as what the RFC proposes, but it requires that for every transaction applied in a block, every node in the network always has to generate this textual representation to ensure the transaction signature is valid.
-
+
None.
-
+
Does it work with all kinds of offline wallets?
Generic types currently appear multiple times in the metadata, once per instantiation. It might be useful to include each generic type only once in the metadata and declare the generic parameters at their instantiation sites.
@@ -6235,20 +6157,20 @@ nodes: [[[2, 3], [4, 5]], [0, 1]]
Authors George Pisaltu
-
+
This RFC proposes a change to the extrinsic format to incorporate a new transaction type, the "general" transaction.
-
+
"General" transactions, a new type of transaction that this RFC aims to support, are transactions which obey the runtime's extensions and carry the corresponding extension data, yet do not have hard-coded signatures. They are first described in Extrinsic Horizon and supported in 3685 . They enable users to authorize origins in new, more flexible ways (e.g. ZK proofs, mutations over pre-authenticated origins). As of now, all transactions are limited to the account signing model for origin authorization, and any additional origin changes happen in extrinsic logic, which cannot leverage the validation process of extensions.
An example of a use case for such an extension would be sponsoring the transaction fee for some other user. A new extension would be put in place to verify that a part of the initial payload was signed by the author under who the extrinsic should run and change the origin, but the payment for the whole transaction should be handled under a sponsor's account. A POC for this can be found in 3712 .
The new "general" transaction type would coexist with both current transaction types for a while and, therefore, the current number of supported transaction types, capped at 2, is insufficient. A new extrinsic type must be introduced alongside the current signed and unsigned types. Currently, an encoded extrinsic's first byte indicates the type of extrinsic using the most significant bit - 0 for unsigned, 1 for signed - and the following 7 bits indicate the extrinsic format version , which has been equal to 4 for a long time.
By taking one bit from the extrinsic format version encoding, we can support 2 additional extrinsic types while also having a minimal impact on our capability to extend and change the extrinsic format in the future.
-
+
Runtime users
Runtime devs
Wallet devs
-
+
An extrinsic is currently encoded as one byte to identify the extrinsic type and version. This RFC aims to change the interpretation of this byte regarding the reserved bits for the extrinsic type and version. In the following explanation, bits represented using T make up the extrinsic type and bits represented using V make up the extrinsic version.
Currently, the bit allocation within the leading encoded byte is 0bTVVV_VVVV. In practice in the Polkadot ecosystem, the leading byte would be 0bT000_0100 as the version has been equal to 4 for a long time.
This RFC proposes changing the bit allocation to 0bTTVV_VVVV. As a result, the extrinsic format version will be bumped to 5 and the extrinsic type bit representation will change as follows:
@@ -6259,23 +6181,23 @@ nodes: [[[2, 3], [4, 5]], [0, 1]]
11 reserved
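Under the proposed 0bTTVV_VVVV layout, parsing the leading byte reduces to one shift and one mask. The function below is an illustrative sketch; the names are ours, not from the RFC:

```rust
/// Split the leading extrinsic byte into (type bits, format version)
/// under the proposed 0bTTVV_VVVV allocation.
fn decode_leading_byte(byte: u8) -> (u8, u8) {
    let extrinsic_type = byte >> 6;          // two most significant bits
    let format_version = byte & 0b0011_1111; // remaining six bits
    (extrinsic_type, format_version)
}
```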
-
+
This change would reduce the maximum possible extrinsic format version from the current 127 to 63. In order to go beyond the new, lower limit, the extrinsic format would have to change again.
-
+
There is no impact on testing, security or privacy.
-
+
This change would allow Polkadot to support new types of transactions, with the specific "general" transaction type in mind at the time of writing this proposal.
-
+
There is no performance impact.
-
+
The impact on developers and end-users is minimal, as it would just be a bitmask update on their part when parsing the extrinsic type along with the version.
-
+
This change breaks backwards compatibility because any transaction that is neither signed nor unsigned, but a new transaction type, would be interpreted as having a future extrinsic format version.
-
+
The design was originally proposed in the TransactionExtension PR , which is also the motivation behind this effort.
-
+
None.
-
+
Following this change, the "general" transaction type will be introduced as part of the Extrinsic Horizon effort, which will shape future work.
@@ -6308,16 +6230,16 @@ nodes: [[[2, 3], [4, 5]], [0, 1]]
Authors Alex Gheorghe (alexggh)
-
+
Extend the DHT authority discovery records with a signed creation time, so that nodes can determine which record is newer and always prefer newer records to old ones.
-
+
Currently, we use the Kademlia DHT for storing records that map an authority discovery key to the p2p address of the node. The problem is that if a node decides to change its PeerId/network key, it publishes a new record; however, because of the distributed and replicated nature of the DHT, there is no way to tell which record is newer, so both the old PeerId and the new PeerId live in the network until the old one expires (36h). This creates all sorts of problems and means a node that changes its address may not be properly connected for up to 36h.
With this RFC, nodes keep the newer record and propagate it to nodes that still store the old record, so in the end all nodes converge to the new record much faster (in the order of minutes, not 36h).
Implementation of the RFC: https://github.com/paritytech/polkadot-sdk/pull/3786.
Current issue without this enhancement: https://github.com/paritytech/polkadot-sdk/issues/3673
-
+
Polkadot node developers.
-
+
This RFC heavily relies on the functionalities of the Kademlia DHT already in use by Polkadot.
You can find a link to the specification here .
In a nutshell, on a given node the current authority-discovery protocol publishes Kademlia DHT records at startup and periodically. The records contain the full address of the node for each authority key it owns. The node also tries to find the full address of all authorities in the network by querying the DHT and picking the first record it finds for each of the authority ids it found on chain.
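The record-selection rule this RFC introduces can be sketched as a simple comparison on the signed creation time. The types and field names below are illustrative, not the actual polkadot-sdk types:

```rust
/// Illustrative shape of an authority discovery record.
#[derive(Clone)]
struct AuthorityRecord {
    addresses: Vec<String>,
    creation_time: Option<u64>, // None for records from the old protocol
}

/// Prefer the incoming record only if it is strictly newer, or if the
/// currently stored record predates the protocol and has no timestamp.
fn prefer(current: AuthorityRecord, incoming: AuthorityRecord) -> AuthorityRecord {
    match (current.creation_time, incoming.creation_time) {
        (Some(cur), Some(new)) if new > cur => incoming,
        (None, Some(_)) => incoming,
        _ => current,
    }
}
```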
@@ -6350,24 +6272,24 @@ You can find a link to the specification
+
In theory the new protocol creates a bit more traffic on the DHT network, because it waits for DHT records to be received from more than one node, while in the current implementation we just take the first record we receive and cancel all in-flight requests to other peers. However, because the redundancy factor is relatively small and this operation happens rarely (every 10 minutes), this cost is negligible.
-
+
This RFC's implementation https://github.com/paritytech/polkadot-sdk/pull/3786 has been tested on various local test networks and on versi.
With regard to security, the creation time is wrapped inside SignedAuthorityRecord, so it is signed with the authority id key; there is no way for malicious nodes to manipulate this field without the receiving node noticing.
-
+
Irrelevant.
-
+
Irrelevant.
-
+
Irrelevant.
-
+
The changes are backwards compatible with the existing protocol, so nodes running the old and the new protocol can coexist in the network. This is achieved by using protobuf for serializing and deserializing the records: new fields are ignored when deserializing with the older protocol, and, vice versa, when deserializing an old record with the new protocol the new field will be None, which the new code accepts as a valid record.
-
+
The enhancements have been inspired by the algorithm specified here .
-
+
N/A
-
+
N/A
@@ -6400,20 +6322,20 @@ in order to speed up the time until all nodes have the newest record, nodes can
Authors Bastian Köcher
-
+
This RFC proposes a change to the extrinsic format to include a transaction extension version.
-
+
The extrinsic format can be extended with transaction extensions. These transaction extensions are runtime specific and can differ per chain. Each transaction extension can add data to the extrinsic itself or extend the signed payload.
This means that adding a transaction extension breaks the chain-specific extrinsic format. A recent example was the introduction of CheckMetadataHash to Polkadot and all its system chains.
As the extension added one byte to the extrinsic, it broke a lot of tooling. By introducing an extra version for the transaction extensions, it becomes possible to change these transaction extensions while staying backwards compatible.
Based on the version of the transaction extensions, each chain runtime could decode the extrinsic correctly and also create the correct signed payload.
-
+
Runtime users
Runtime devs
Wallet devs
-
+
RFC84 introduced extrinsic format version 5. The idea is to piggyback on this change of the extrinsic format to add the extra version for the transaction extensions. If required, this could also come as extrinsic format 6, but 5 is not yet deployed anywhere.
The extrinsic format supports the following types of transactions:
@@ -6429,20 +6351,102 @@ as extrinsic format 6, but 5 is not yet deployed anywh
The Version is a SCALE-encoded u8 representing the version of the transaction extensions.
In the chain runtime the version can be used to determine which set of transaction extensions should be used to decode and to validate the transaction.
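Since the version is a single SCALE-encoded u8 prefixed to the extension data, reading it is just splitting off one byte before dispatching to the matching extension set. A minimal sketch, with illustrative names:

```rust
/// Split the transaction extension version byte off the payload.
/// Returns None for an empty payload; `rest` would then be decoded
/// with the extension set registered for `version`.
fn split_extension_version(payload: &[u8]) -> Option<(u8, &[u8])> {
    let (&version, rest) = payload.split_first()?;
    Some((version, rest))
}
```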
-
+
This adds one byte more to each signed transaction.
-
+
There is no impact on testing, security or privacy.
-
+
This will ensure that changes to the transactions extensions can be done in a backwards compatible way.
-
+
There is no performance impact.
-
+
Runtime developers need to take care of the versioning and ensure they bump it as required, so that there are no compatibility-breaking changes without a version bump. It will also add a little bit more code in the runtime
to decode these old versions, but this should be negligible.
-
+
When introduced together with extrinsic format version 5 from RFC84 , it can be implemented in a backwards compatible way. So, transactions can still be sent using the
old extrinsic format and decoded by the runtime.
+
+None.
+
+None.
+
+None.
+
(source)
+Table of Contents
+
+
+
+Start Date 12 July 2024
+Description Remove require_weight_at_most parameter from XCM Transact
+Authors Adrian Catangiu
+
+
+
+The Transact XCM instruction currently forces the user to set a specific maximum weight allowed to the inner call and then also pay for that much weight regardless of how much the call actually needs in practice.
+This RFC proposes improving the usability of Transact by removing that parameter and instead getting and charging the actual weight of the inner call from its dispatch info on the remote chain.
+
+The UX of using Transact is poor because the user has to guess/estimate the require_weight_at_most weight used by the inner call on the target.
+We've seen multiple Transact on-chain failures caused by guessing wrong values for this require_weight_at_most even though the rest of the XCM program would have worked.
+In practice, this parameter only adds UX overhead with no real practical value. Use cases fall in one of two categories:
+
+Unpaid execution of Transacts - in these cases require_weight_at_most is not really useful: the caller doesn't
+have to pay for it, and at the call site the call either fits the block or it doesn't;
+Paid execution of single Transact - the weight to be spent by the Transact is already covered by the BuyExecution
+weight limit parameter.
+
+We've had multiple OpenGov root/whitelisted_caller proposals initiated by core-devs completely or partially fail
+because of incorrect configuration of require_weight_at_most parameter. This is a strong indication that the
+instruction is hard to use.
+
+
+Runtime Users,
+Runtime Devs,
+Wallets,
+dApps,
+
+
+The proposed enhancement is simple: remove require_weight_at_most parameter from the instruction:
+- Transact { origin_kind: OriginKind, require_weight_at_most: Weight, call: DoubleEncoded<Call> },
++ Transact { origin_kind: OriginKind, call: DoubleEncoded<Call> },
+
+The XCVM implementation shall no longer use require_weight_at_most for weighing. Instead, it shall weigh the Transact instruction by decoding and weighing the inner call.
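A hedged sketch of the proposed weighing flow follows: the executor decodes the inner call and takes its weight from the call's dispatch info, then adds the instruction's own overhead, rather than trusting a caller-supplied `require_weight_at_most`. Types and names here are illustrative, not the actual XCM executor API:

```rust
/// Simplified two-dimensional weight, mirroring FRAME's shape.
#[derive(Clone, Copy, PartialEq, Debug)]
struct Weight { ref_time: u64, proof_size: u64 }

/// Total weight charged for Transact: the decoded inner call's weight
/// (from its dispatch info) plus the instruction's fixed overhead.
fn transact_weight(inner_call_weight: Weight, instruction_overhead: Weight) -> Weight {
    Weight {
        ref_time: inner_call_weight.ref_time + instruction_overhead.ref_time,
        proof_size: inner_call_weight.proof_size + instruction_overhead.proof_size,
    }
}
```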
+
+No drawbacks, existing scenarios work as before, while this also allows new/easier flows.
+
+Currently, an XCVM implementation can weigh a message just by looking at the decoded instructions without decoding the Transact's inner call, instead assuming require_weight_at_most weight for it. With the new version it has to decode the inner call to know its actual weight.
+But this does not actually change the security considerations, as can be seen below.
+With the new Transact the weighing happens after decoding the inner call. The entirety of the XCM program containing this Transact needs to be either covered by enough bought weight using a BuyExecution, or the origin has to be allowed to do free execution.
+The security considerations around how much can someone execute for free are the same for
+both this new version and the old. In both cases, an "attacker" can do the XCM decoding (including Transact inner calls) for free by adding a large enough BuyExecution without actually having the funds available.
+In both cases, decoding is done for free, but in both cases execution fails early on BuyExecution.
+
+
+No performance change.
+
+Ergonomics are slightly improved by simplifying Transact API.
+
+Compatible with previous XCM programs.
None.
diff --git a/proposed/0088-broker-pallet-slashable-deposit-purchaser-reputation-reserved-cores.html b/proposed/0088-broker-pallet-slashable-deposit-purchaser-reputation-reserved-cores.html
index 797c374..f516029 100644
--- a/proposed/0088-broker-pallet-slashable-deposit-purchaser-reputation-reserved-cores.html
+++ b/proposed/0088-broker-pallet-slashable-deposit-purchaser-reputation-reserved-cores.html
@@ -90,7 +90,7 @@
diff --git a/proposed/0089-flexible-inflation.html b/proposed/0089-flexible-inflation.html
index d85b1b4..866c4a7 100644
--- a/proposed/0089-flexible-inflation.html
+++ b/proposed/0089-flexible-inflation.html
@@ -90,7 +90,7 @@
diff --git a/proposed/0097-unbonding_queue.html b/proposed/0097-unbonding_queue.html
index 59fa9b1..6881e98 100644
--- a/proposed/0097-unbonding_queue.html
+++ b/proposed/0097-unbonding_queue.html
@@ -90,7 +90,7 @@
diff --git a/proposed/00xx-smart-contracts-coretime-chain.html b/proposed/00xx-smart-contracts-coretime-chain.html
index 1acbaa8..55b91f2 100644
--- a/proposed/00xx-smart-contracts-coretime-chain.html
+++ b/proposed/00xx-smart-contracts-coretime-chain.html
@@ -90,7 +90,7 @@
diff --git a/proposed/0100-xcm-multi-type-asset-transfer.html b/proposed/0100-xcm-multi-type-asset-transfer.html
index e2e7f6d..6029085 100644
--- a/proposed/0100-xcm-multi-type-asset-transfer.html
+++ b/proposed/0100-xcm-multi-type-asset-transfer.html
@@ -90,7 +90,7 @@
@@ -458,7 +458,7 @@ Such conversion attempts will explicitly fail.
-
+
@@ -472,7 +472,7 @@ Such conversion attempts will explicitly fail.
-
+
diff --git a/proposed/0102-offchain-parachain-runtime-upgrades.html b/proposed/0102-offchain-parachain-runtime-upgrades.html
index 2e47908..050e8b9 100644
--- a/proposed/0102-offchain-parachain-runtime-upgrades.html
+++ b/proposed/0102-offchain-parachain-runtime-upgrades.html
@@ -90,7 +90,7 @@
@@ -518,7 +518,7 @@ sharing if multiple parachains use the same data (e.g. same smart contracts).
-