diff --git a/404.html b/404.html index fc6c187..d5c595a 100644 --- a/404.html +++ b/404.html @@ -91,7 +91,7 @@ diff --git a/approved/0001-agile-coretime.html b/approved/0001-agile-coretime.html index 5467c36..740e067 100644 --- a/approved/0001-agile-coretime.html +++ b/approved/0001-agile-coretime.html @@ -90,7 +90,7 @@ diff --git a/approved/0005-coretime-interface.html b/approved/0005-coretime-interface.html index b912398..5b7d034 100644 --- a/approved/0005-coretime-interface.html +++ b/approved/0005-coretime-interface.html @@ -90,7 +90,7 @@ diff --git a/approved/0007-system-collator-selection.html b/approved/0007-system-collator-selection.html index 1be04f3..eac74bf 100644 --- a/approved/0007-system-collator-selection.html +++ b/approved/0007-system-collator-selection.html @@ -90,7 +90,7 @@ diff --git a/approved/0008-parachain-bootnodes-dht.html b/approved/0008-parachain-bootnodes-dht.html index 120ef99..633bf7b 100644 --- a/approved/0008-parachain-bootnodes-dht.html +++ b/approved/0008-parachain-bootnodes-dht.html @@ -90,7 +90,7 @@ diff --git a/approved/0010-burn-coretime-revenue.html b/approved/0010-burn-coretime-revenue.html index 8e747c0..1f8bca6 100644 --- a/approved/0010-burn-coretime-revenue.html +++ b/approved/0010-burn-coretime-revenue.html @@ -90,7 +90,7 @@ diff --git a/approved/0012-process-for-adding-new-collectives.html b/approved/0012-process-for-adding-new-collectives.html index d3ca7ba..05d7b33 100644 --- a/approved/0012-process-for-adding-new-collectives.html +++ b/approved/0012-process-for-adding-new-collectives.html @@ -90,7 +90,7 @@ diff --git a/approved/0013-prepare-blockbuilder-and-core-runtime-apis-for-mbms.html b/approved/0013-prepare-blockbuilder-and-core-runtime-apis-for-mbms.html index 57aa988..e9eb042 100644 --- a/approved/0013-prepare-blockbuilder-and-core-runtime-apis-for-mbms.html +++ b/approved/0013-prepare-blockbuilder-and-core-runtime-apis-for-mbms.html @@ -90,7 +90,7 @@ diff --git 
a/approved/0014-improve-locking-mechanism-for-parachains.html b/approved/0014-improve-locking-mechanism-for-parachains.html index 3d8f7e6..7d7adb3 100644 --- a/approved/0014-improve-locking-mechanism-for-parachains.html +++ b/approved/0014-improve-locking-mechanism-for-parachains.html @@ -90,7 +90,7 @@ diff --git a/approved/0022-adopt-encointer-runtime.html b/approved/0022-adopt-encointer-runtime.html index 5dca18e..b743e0c 100644 --- a/approved/0022-adopt-encointer-runtime.html +++ b/approved/0022-adopt-encointer-runtime.html @@ -90,7 +90,7 @@ diff --git a/approved/0026-sassafras-consensus.html b/approved/0026-sassafras-consensus.html index 50b7200..badd567 100644 --- a/approved/0026-sassafras-consensus.html +++ b/approved/0026-sassafras-consensus.html @@ -90,7 +90,7 @@ diff --git a/approved/0032-minimal-relay.html b/approved/0032-minimal-relay.html index dd6d0cc..1ab016f 100644 --- a/approved/0032-minimal-relay.html +++ b/approved/0032-minimal-relay.html @@ -90,7 +90,7 @@ diff --git a/approved/0042-extrinsics-state-version.html b/approved/0042-extrinsics-state-version.html index 99c111a..2fb5873 100644 --- a/approved/0042-extrinsics-state-version.html +++ b/approved/0042-extrinsics-state-version.html @@ -90,7 +90,7 @@ diff --git a/approved/0043-storage-proof-size-hostfunction.html b/approved/0043-storage-proof-size-hostfunction.html index e9e1c18..493101f 100644 --- a/approved/0043-storage-proof-size-hostfunction.html +++ b/approved/0043-storage-proof-size-hostfunction.html @@ -90,7 +90,7 @@ diff --git a/approved/0045-nft-deposits-asset-hub.html b/approved/0045-nft-deposits-asset-hub.html index 5d3e683..14fd5df 100644 --- a/approved/0045-nft-deposits-asset-hub.html +++ b/approved/0045-nft-deposits-asset-hub.html @@ -90,7 +90,7 @@ diff --git a/approved/0047-assignment-of-availability-chunks.html b/approved/0047-assignment-of-availability-chunks.html index 76aacfc..ac3d20a 100644 --- a/approved/0047-assignment-of-availability-chunks.html +++ 
b/approved/0047-assignment-of-availability-chunks.html @@ -90,7 +90,7 @@ diff --git a/approved/0048-session-keys-runtime-api.html b/approved/0048-session-keys-runtime-api.html index 6563792..01c39fe 100644 --- a/approved/0048-session-keys-runtime-api.html +++ b/approved/0048-session-keys-runtime-api.html @@ -90,7 +90,7 @@ diff --git a/approved/0050-fellowship-salaries.html b/approved/0050-fellowship-salaries.html index 7ebeb21..5ec761a 100644 --- a/approved/0050-fellowship-salaries.html +++ b/approved/0050-fellowship-salaries.html @@ -90,7 +90,7 @@ diff --git a/approved/0056-one-transaction-per-notification.html b/approved/0056-one-transaction-per-notification.html index 9e9372b..8d9d23c 100644 --- a/approved/0056-one-transaction-per-notification.html +++ b/approved/0056-one-transaction-per-notification.html @@ -90,7 +90,7 @@ diff --git a/approved/0059-nodes-capabilities-discovery.html b/approved/0059-nodes-capabilities-discovery.html index df00ddf..89471ef 100644 --- a/approved/0059-nodes-capabilities-discovery.html +++ b/approved/0059-nodes-capabilities-discovery.html @@ -90,7 +90,7 @@ diff --git a/approved/0078-merkleized-metadata.html b/approved/0078-merkleized-metadata.html index b197dd5..d7e52f0 100644 --- a/approved/0078-merkleized-metadata.html +++ b/approved/0078-merkleized-metadata.html @@ -90,7 +90,7 @@ diff --git a/approved/0084-general-transaction-extrinsic-format.html b/approved/0084-general-transaction-extrinsic-format.html index d605e7b..bf374b0 100644 --- a/approved/0084-general-transaction-extrinsic-format.html +++ b/approved/0084-general-transaction-extrinsic-format.html @@ -90,7 +90,7 @@ diff --git a/index.html b/index.html index 1ef16d5..0d87840 100644 --- a/index.html +++ b/index.html @@ -90,7 +90,7 @@ diff --git a/introduction.html b/introduction.html index 1ef16d5..0d87840 100644 --- a/introduction.html +++ b/introduction.html @@ -90,7 +90,7 @@ diff --git a/new/0100-xcm-multi-type-asset-transfer.html 
b/new/0100-xcm-multi-type-asset-transfer.html index 5238246..67e42af 100644 --- a/new/0100-xcm-multi-type-asset-transfer.html +++ b/new/0100-xcm-multi-type-asset-transfer.html @@ -90,7 +90,7 @@ @@ -446,7 +446,7 @@ deliberate decision for enhancing UX - in practice, people/dApps care abount lim - @@ -460,7 +460,7 @@ deliberate decision for enhancing UX - in practice, people/dApps care abount lim - diff --git a/new/0101-xcm-transact-allow-unlimited-weight.html b/new/0101-xcm-transact-allow-unlimited-weight.html new file mode 100644 index 0000000..dd1959a --- /dev/null +++ b/new/0101-xcm-transact-allow-unlimited-weight.html @@ -0,0 +1,313 @@ + + + + + + + RFC-0101: XCM Transact shall also allow Unlimited weight - Polkadot Fellowship RFCs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + + + +
+
+

(source)

+

Table of Contents

+ +

RFC-0101: XCM Transact shall also allow Unlimited weight

+
+ + + +
Start Date12 July 2024
DescriptionEnhance XCM Transact to allow Unlimited weight for the inner call
AuthorsAdrian Catangiu
+
+

Summary

+

The Transact XCM instruction currently forces the user to set a specific maximum weight for the inner call, and then pay for that much weight regardless of how much the call actually needs in practice.

+

This RFC proposes improving the usability of Transact by having the remote chain (which executes the Transact) get and charge the actual weight of the inner call from its dispatch info.

+

Motivation

+

The UX of using Transact is poor because the user has to guess/estimate the require_weight_at_most weight that the inner call will use on the target chain.

+

We've seen multiple Transact on-chain failures caused by wrong guesses for this require_weight_at_most, even though the rest of the XCM program would have worked.

+

In practice, this parameter only adds UX overhead with no real practical value. Use cases fall into one of two categories:

+
    +
  1. Unpaid execution of Transacts - in these cases require_weight_at_most is not really useful: the caller doesn't have to pay for it, and at the call site the call either fits the block or it doesn't;
  2. +
  3. Paid execution of a single Transact - the weight to be spent by the Transact is already covered by the BuyExecution weight limit parameter.
  4. +
+

We've had multiple OpenGov root/whitelisted_caller proposals initiated by core devs fail, completely or partially, because of an incorrectly configured require_weight_at_most parameter. This is a strong indication that the instruction is hard to use.

+

Stakeholders

+
    +
  • Runtime Users,
  • +
  • Runtime Devs,
  • +
  • Wallets,
  • +
  • dApps,
  • +
+

Explanation

+

The proposed enhancement is simple: change the Transact instruction:

+
- Transact { origin_kind: OriginKind, require_weight_at_most: Weight, call: DoubleEncoded<Call> },
++ Transact { origin_kind: OriginKind, weight_limit: WeightLimit, call: DoubleEncoded<Call> },
+
+

With the new API, users who do not need to artificially limit the maximum weight used by the inner call can pass weight_limit: Unlimited, while those who need a cap can still set one.
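As a rough illustration of the proposed API shape, the sketch below uses simplified stand-in types; the real Weight, WeightLimit, and Instruction definitions live in the XCM crates and differ in detail:

```rust
// Simplified stand-ins for the real XCM types; this only illustrates the
// proposed field change from `require_weight_at_most: Weight` to
// `weight_limit: WeightLimit`.
#[derive(Debug, Clone, Copy, PartialEq)]
pub struct Weight {
    pub ref_time: u64,
    pub proof_size: u64,
}

#[derive(Debug, Clone, Copy, PartialEq)]
pub enum WeightLimit {
    Unlimited,
    Limited(Weight),
}

#[derive(Debug)]
pub enum Instruction {
    // Proposed shape of the instruction.
    Transact { weight_limit: WeightLimit, call: Vec<u8> },
}

// Most callers no longer need to guess a maximum weight for the inner call:
pub fn transact_unlimited(call: Vec<u8>) -> Instruction {
    Instruction::Transact { weight_limit: WeightLimit::Unlimited, call }
}
```

Callers that do want a cap would construct the instruction with `WeightLimit::Limited(...)` instead.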

+

The XCVM implementation shall not use the weight_limit for weighing. Instead, it shall weigh the Transact instruction by also decoding and weighing the inner call.

+

The weight_limit shall be used to bail early if the actual weight is more than the specified limit.
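A minimal sketch of this weigh-then-bail rule, using assumed simplified types rather than the actual XCVM executor code:

```rust
// Simplified stand-ins for the real XCM types; the real executor works with
// dispatch info from the decoded call, not a bare u64.
#[derive(Debug, Clone, Copy, PartialEq)]
pub struct Weight(pub u64);

pub enum WeightLimit {
    Unlimited,
    Limited(Weight),
}

#[derive(Debug, PartialEq)]
pub enum XcmError {
    MaxWeightInvalid,
}

/// Weigh a Transact: charge the inner call's actual weight (taken from its
/// dispatch info after decoding), erroring early when a caller-supplied
/// limit is exceeded.
pub fn weigh_transact(
    actual_call_weight: Weight,
    limit: &WeightLimit,
) -> Result<Weight, XcmError> {
    if let WeightLimit::Limited(max) = limit {
        if actual_call_weight.0 > max.0 {
            return Err(XcmError::MaxWeightInvalid);
        }
    }
    // Charge exactly what the call needs, not a guessed maximum.
    Ok(actual_call_weight)
}
```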

+

Drawbacks

+

None: existing scenarios work as before, and this also enables new, easier flows.

+

Testing, Security, and Privacy

+

Currently, an XCVM implementation can weigh a message just by looking at the decoded instructions, without decoding the Transact's inner call, by assuming require_weight_at_most as its weight. With the new version, it has to decode the inner call to know its actual weight.

+

This does not actually change the security considerations, as can be seen below.

+

When using weight_limit = Unlimited, the weighing happens after decoding the inner call. The entirety of the XCM program containing this Transact needs to be either covered by enough bought weight using a BuyExecution, or the origin has to be allowed to do free execution.

+

The security considerations around how much someone can execute for free are the same for both the new version and the old. In both cases, an "attacker" can get the XCM decoding (including Transact inner calls) done for free by adding a large enough BuyExecution without actually having the funds available.

+

In both cases, decoding is done for free, but execution fails early on BuyExecution.

+

Performance, Ergonomics, and Compatibility

+

Performance

+

No performance change.

+

Ergonomics

+

Ergonomics are slightly improved by allowing Unlimited max weight for most scenarios.

+

Compatibility

+

Compatible with previous XCM programs.

+

Prior Art and References

+

None.

+

Unresolved Questions

+

None.

+ +

If we see that nobody uses actual limits (i.e. all on-chain calls use weight_limit = Unlimited), we should remove the parameter completely.

+ +
+ + +
+
+ + + +
+ + + + + + + + + + + + + + + + + + +
+ + diff --git a/print.html b/print.html index daa6014..ffb44db 100644 --- a/print.html +++ b/print.html @@ -91,7 +91,7 @@ @@ -443,6 +443,91 @@ deliberate decision for enhancing UX - in practice, people/dApps care abount lim

None.

(source)

Table of Contents

-

Drawbacks

+

Drawbacks

The main drawback of adding the additional complexity directly to the broker pallet is the potential increase in maintenance overhead. Therefore, we propose adding the additional functionality as a separate pallet on the Coretime chain. To take the pressure off, implementation of these features along with unit tests would be taken care of by Lastic (Aurora Makovac, Philip Lucsok).

There are potential risks of security vulnerabilities in the new market functionalities, such as unauthorized region transfers or incorrect balance adjustments. Therefore, extensive security measures would have to be implemented.

-

Testing, Security, and Privacy

+

Testing, Security, and Privacy

Testing

-

Drawbacks

+

Drawbacks

The primary drawback is a reliance on governance for continued treasury funding of infrastructure costs for Invulnerable collators.

-

Testing, Security, and Privacy

+

Testing, Security, and Privacy

The vast majority of cases can be covered by unit testing. Integration tests should ensure that the Collator Selection UpdateOrigin, which has permission to modify the Invulnerables and desired number of Candidates, can handle updates over XCM from the system's governance location.

-

Performance, Ergonomics, and Compatibility

+

Performance, Ergonomics, and Compatibility

This proposal has very little impact on most users of Polkadot, and should improve the performance of system chains by reducing the number of missed blocks.

-

Performance

+

Performance

As chains have strict PoV size limits, care must be taken in the PoV impact of the session manager. Appropriate benchmarking and tests should ensure that conservative limits are placed on the number of Invulnerables and Candidates.

-

Ergonomics

+

Ergonomics

The primary group affected is Candidate collators, who, after implementation of this RFC, will need to compete in a bond-based election rather than a race to claim a Candidate spot.

-

Compatibility

+

Compatibility

This RFC is compatible with the existing implementation and can be handled via upgrades and migration.

-

Prior Art and References

+

Prior Art and References

Written Discussions

-

Unresolved Questions

+

Unresolved Questions

None at this time.

- +

There may exist in the future system chains for which this model of collator selection is not appropriate. These chains should be evaluated on a case-by-case basis.

(source)

@@ -1912,10 +1997,10 @@ appropriate. These chains should be evaluated on a case-by-case basis.

AuthorsPierre Krieger -

Summary

+

Summary

The full nodes of the Polkadot peer-to-peer network maintain a distributed hash table (DHT), which is currently used for full nodes discovery and validators discovery purposes.

This RFC proposes to extend this DHT to be used to discover full nodes of the parachains of Polkadot.

-

Motivation

+

Motivation

The maintenance of bootnodes has long been an annoyance for everyone.

When a bootnode is newly-deployed or removed, every chain specification must be updated in order to take the update into account. This has led to various non-optimal solutions, such as pulling chain specifications from GitHub repositories. When it comes to RPC nodes, UX developers often have trouble finding up-to-date addresses of parachain RPC nodes. With the ongoing migration from RPC nodes to light clients, similar problems would happen with chain specifications as well.

@@ -1924,9 +2009,9 @@ When it comes to RPC nodes, UX developers often have trouble finding up-to-date

Because the list of bootnodes in chain specifications is so annoying to modify, the consequence is that the number of bootnodes is rather low (typically between 2 and 15). In order to better resist downtimes and DoS attacks, a better solution would be to use every node of a certain chain as potential bootnode, rather than special-casing some specific nodes.

While this RFC doesn't solve these problems for relay chains, it aims to solve them for parachains by storing the list of all the full nodes of a parachain on the relay chain DHT.

Assuming that this RFC is implemented, and that light clients are used, deploying a parachain wouldn't require more work than registering it onto the relay chain and starting the collators. There wouldn't be any need for special infrastructure nodes anymore.

-

Stakeholders

+

Stakeholders

This RFC has been opened on my own initiative because I think that this is a good technical solution to a usability problem that many people are encountering and that they don't realize can be solved.

-

Explanation

+

Explanation

The content of this RFC only applies for parachains and parachain nodes that are "Substrate-compatible". It is in no way mandatory for parachains to comply to this RFC.

Note that "Substrate-compatible" is very loosely defined as "implements the same mechanisms and networking protocols as Substrate". The author of this RFC believes that "Substrate-compatible" should be very precisely specified, but there is controversy on this topic.

While a lot of this RFC concerns the implementation of parachain nodes, it makes use of the resources of the Polkadot chain, and as such it is important to describe them in the Polkadot specification.

@@ -1963,10 +2048,10 @@ message Response {

The maximum size of a response is set to an arbitrary 16kiB. The responding side should make sure to conform to this limit. Given that fork_id is typically very small and that the only variable-length field is addrs, this is easily achieved by limiting the number of addresses.

Implementers should be aware that addrs might be very large, and are encouraged to limit the number of addrs to an implementation-defined value.
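One way an implementation might enforce that limit is sketched below; the helper, the byte-level representation of addresses, and the per-entry framing overhead are all assumptions for illustration, not part of the protocol:

```rust
// Assumed cap from the text above: responses must stay within 16 kiB.
const MAX_RESPONSE_SIZE: usize = 16 * 1024;

/// Keep only as many addresses as fit in the size budget, leaving headroom
/// (`fixed_overhead`) for the fixed fields: peer_id, genesis_hash, fork_id,
/// and message framing. Addresses are modeled as raw byte strings here.
pub fn cap_addrs(addrs: Vec<Vec<u8>>, fixed_overhead: usize) -> Vec<Vec<u8>> {
    let mut budget = MAX_RESPONSE_SIZE.saturating_sub(fixed_overhead);
    let mut kept = Vec::new();
    for addr in addrs {
        // A couple of extra bytes per entry approximates length-prefix framing.
        let cost = addr.len() + 2;
        if cost > budget {
            break; // dropping the remaining addresses keeps the response valid
        }
        budget -= cost;
        kept.push(addr);
    }
    kept
}
```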

-

Drawbacks

+

Drawbacks

The peer_id and addrs fields are in theory not strictly needed, as the PeerId and addresses could always be equal to the PeerId and addresses of the node being registered as the provider and serving the response. However, the Cumulus implementation currently uses two different networking stacks, one for the parachain and one for the relay chain, with two separate PeerIds and addresses, and as such the PeerId and addresses of the other networking stack must be indicated. Asking them to use only one networking stack wouldn't be feasible in a realistic time frame.

The values of the genesis_hash and fork_id fields cannot be verified by the requester and are expected to be unused at the moment. Instead, a client that desires connecting to a parachain is expected to obtain the genesis hash and fork ID of the parachain from the parachain chain specification. These fields are included in the networking protocol nonetheless in case an acceptable solution is found in the future, and in order to allow use cases such as discovering parachains in a not-strictly-trusted way.

-

Testing, Security, and Privacy

+

Testing, Security, and Privacy

Because not all nodes want to be used as bootnodes, implementers are encouraged to provide a way to disable this mechanism. However, it is very much encouraged to leave this mechanism on by default for all parachain nodes.

This mechanism doesn't add or remove any security by itself, as it relies on existing mechanisms. However, if the principle of chain specification bootnodes is entirely replaced with the mechanism described in this RFC (which is the objective), then it becomes important whether the mechanism in this RFC can be abused in order to make a parachain unreachable.

@@ -1975,22 +2060,22 @@ Furthermore, when a large number of providers (here, a provider is a bootnode) a

For this reason, an attacker can abuse this mechanism by randomly generating libp2p PeerIds until they find the 20 entries closest to the key representing the target parachain. They are then in control of the parachain bootnodes. Because the key changes periodically and isn't predictable, and assuming that the Polkadot DHT is sufficiently large, it is not realistic for an attack like this to be maintained in the long term.

Furthermore, parachain clients are expected to cache a list of known good nodes on their disk. If the mechanism described in this RFC went down, it would only prevent new nodes from accessing the parachain, while clients that have connected before would not be affected.

-

Performance, Ergonomics, and Compatibility

-

Performance

+

Performance, Ergonomics, and Compatibility

+

Performance

The DHT mechanism generally has a low overhead, especially given that publishing providers is done only every 24 hours.

Doing a Kademlia iterative query then sending a provider record shouldn't take more than around 50 kiB in total of bandwidth for the parachain bootnode.

Assuming 1000 parachain full nodes, the 20 Polkadot full nodes corresponding to a specific parachain will each receive a sudden spike of a few megabytes of networking traffic when the key rotates. Again, this is relatively negligible. If this becomes a problem, one can add a random delay before a parachain full node registers itself to be the provider of the key corresponding to BabeApi_next_epoch.

Maybe the biggest uncertainty is the traffic that the 20 Polkadot full nodes will receive from light clients that desire knowing the bootnodes of a parachain. Light clients are generally encouraged to cache the peers that they use between restarts, so they should only query these 20 Polkadot full nodes at their first initialization. If this ever becomes a problem, this value of 20 is an arbitrary constant that can be increased for more redundancy.

-

Ergonomics

+

Ergonomics

Irrelevant.

-

Compatibility

+

Compatibility

Irrelevant.

-

Prior Art and References

+

Prior Art and References

None.

-

Unresolved Questions

+

Unresolved Questions

While it fundamentally doesn't change much to this RFC, using BabeApi_currentEpoch and BabeApi_nextEpoch might be inappropriate. I'm not familiar enough with good practices within the runtime to have an opinion here. Should it be an entirely new pallet?

- +

It is possible that in the future a client could connect to a parachain without having to rely on a trusted parachain specification.

(source)

Table of Contents

@@ -2011,13 +2096,13 @@ If this every becomes a problem, this value of 20 is an arbitrary constant that AuthorsJonas Gehrlein -

Summary

+

Summary

The Polkadot UC will generate revenue from the sale of available Coretime. The question then arises: how should we handle these revenues? Broadly, there are two reasonable paths – burning the revenue and thereby removing it from total issuance, or diverting it to the Treasury. This Request for Comment (RFC) presents arguments favoring burning as the preferred mechanism for handling revenues from Coretime sales.

-

Motivation

+

Motivation

How to handle the revenue accrued from Coretime sales is an important economic question that influences the value of DOT and should be properly discussed before deciding for either of the options. Now is the best time to start this discussion.

-

Stakeholders

+

Stakeholders

Polkadot DOT token holders.

-

Explanation

+

Explanation

This RFC discusses potential benefits of burning the revenue accrued from Coretime sales instead of diverting it to the Treasury. The arguments for burning are as follows.

It's in the interest of the Polkadot community to have a consistent and predictable Treasury income, because volatility in the inflow can be damaging, especially in situations when it is insufficient. As such, this RFC operates under the presumption of a steady and sustainable Treasury income flow, which is crucial for the Polkadot community's stability. The assurance of a predictable Treasury income, as outlined in a prior discussion here, or through other equally effective measures, serves as a baseline assumption for this argument.

Consequently, we need not concern ourselves with this particular issue here. This naturally begs the question - why should we introduce additional volatility to the Treasury by aligning it with the variable Coretime sales? It's worth noting that Coretime revenues often exhibit an inverse relationship with periods when Treasury spending should ideally be ramped up. During periods of low Coretime utilization (indicated by lower revenue), Treasury should spend more on projects and endeavours to increase the demand for Coretime. This pattern underscores that Coretime sales, by their very nature, are an inconsistent and unpredictable source of funding for the Treasury. Given the importance of maintaining a steady and predictable inflow, it's unnecessary to rely on another volatile mechanism. Some might argue that we could have both: a steady inflow (from inflation) and some added bonus from Coretime sales, but burning the revenue would offer further benefits as described below.

@@ -2060,13 +2145,13 @@ If this every becomes a problem, this value of 20 is an arbitrary constant that AuthorsJoe Petrowski -

Summary

+

Summary

Since the introduction of the Collectives parachain, many groups have expressed interest in forming new -- or migrating existing groups into -- on-chain collectives. While adding a new collective is relatively simple from a technical standpoint, the Fellowship will need to merge new pallets into the Collectives parachain for each new collective. This RFC proposes a means for the network to ratify a new collective, thus instructing the Fellowship to instate it in the runtime.

-

Motivation

+

Motivation

Many groups have expressed interest in representing collectives on-chain. Some of these include:

-

Drawbacks

+

Drawbacks

Unlike all other system chains, the Encointer Network's development and maintenance are mainly financed by the KSM Treasury and possibly the DOT Treasury in the future. Encointer is dedicated to maintaining its network and runtime code for as long as possible, but there is a dependency on funding which is not in the hands of the fellowship. The only risk in the context of funding, however, is that the Encointer runtime will see less frequent updates if there's less funding.

-

Testing, Security, and Privacy

+

Testing, Security, and Privacy

No changes to the existing system are proposed. Only changes to how maintenance is organized.

-

Performance, Ergonomics, and Compatibility

+

Performance, Ergonomics, and Compatibility

No changes

-

Prior Art and References

+

Prior Art and References

Existing Encointer runtime repo

-

Unresolved Questions

+

Unresolved Questions

None identified

- +

More info on Encointer: encointer.org

(source)

Table of Contents

@@ -3358,11 +3443,11 @@ other privacy-enhancing mechanisms to address this concern. AuthorsJoe Petrowski, Gavin Wood -

Summary

+

Summary

The Relay Chain contains most of the core logic for the Polkadot network. While this was necessary prior to the launch of parachains and development of XCM, most of this logic can exist in parachains. This is a proposal to migrate several subsystems into system parachains.

-

Motivation

+

Motivation

Polkadot's scaling approach allows many distinct state machines (known generally as parachains) to operate with common guarantees about the validity and security of their state transitions. Polkadot provides these common guarantees by executing the state transitions on a strict subset (a backing @@ -3374,13 +3459,13 @@ blockspace) to the network.

By minimising state transition logic on the Relay Chain by migrating it into "system chains" -- a set of parachains that, with the Relay Chain, make up the Polkadot protocol -- the Polkadot Ubiquitous Computer can maximise its primary offering: secure blockspace.

-

Stakeholders

+

Stakeholders

-

Explanation

+

Explanation

The following pallets and subsystems are good candidates to migrate from the Relay Chain: