fn initialize_block(header: &<Block as BlockT>::Header)
@@ -436,16 +396,16 @@ This can be unified and simplified by moving both parts into the runtime.
| Authors | Gabriel Facco de Arruda |
-
+
This RFC proposes changes that enable the use of absolute locations in AccountId derivations, which allows protocols built using XCM to have static account derivations in any runtime, regardless of its position in the family hierarchy.
-
+
These changes would allow protocol builders to leverage absolute locations to maintain the exact same derived account address across all networks in the ecosystem, thus enhancing user experience.
One such protocol, that is the original motivation for this proposal, is InvArch's Saturn Multisig, which gives users a unifying multisig and DAO experience across all XCM connected chains.
-
+
-
+
This proposal aims to make it possible to derive accounts for absolute locations, enabling protocols that require the ability to maintain the same derived account in any runtime. This is done by deriving accounts from the hash of described absolute locations, which are static across different destinations.
The same location can be represented in relative form and absolute form like so:
#![allow(unused)]
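A minimal sketch of these two forms, using simplified stand-in types rather than the actual `xcm` crate's `Location`/`Junction` definitions (the network name and para id below are illustrative):

```rust
// Illustrative stand-ins for XCM's Junction / Location types (not the
// actual `xcm` crate definitions).
#[derive(Debug, Clone, PartialEq)]
enum Junction {
    GlobalConsensus(&'static str),
    Parachain(u32),
}

#[derive(Debug, Clone, PartialEq)]
struct Location {
    parents: u8,
    interior: Vec<Junction>,
}

// Re-anchor a relative location (as seen from a sibling parachain of the
// same relay chain) into its absolute, globally-unique form.
fn to_absolute(relative: &Location, network: &'static str) -> Location {
    // One `parents` hop from a parachain reaches the relay chain; from
    // there the absolute path starts at the global consensus network.
    let mut interior = vec![Junction::GlobalConsensus(network)];
    interior.extend(relative.interior.iter().cloned());
    Location { parents: 0, interior }
}

fn main() {
    // Relative form, from a sibling chain: "../Parachain(2000)".
    let relative = Location { parents: 1, interior: vec![Junction::Parachain(2000)] };
    let absolute = to_absolute(&relative, "Polkadot");
    // The absolute form reads the same from every chain in the ecosystem,
    // so hashing its description yields a static derived account.
    assert_eq!(absolute.parents, 0);
    assert_eq!(absolute.interior[0], Junction::GlobalConsensus("Polkadot"));
}
```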
@@ -571,12 +531,12 @@ This can be unified and simplified by moving both parts into the runtime.
| Authors | Aurora Poppyseed, Just_Luuuu, Viki Val |
-
+
This RFC proposes changing the current deposit requirements on the Polkadot and Kusama Asset Hub for
creating an NFT collection, minting an individual NFT, and lowering its corresponding metadata and
attribute deposits. The objective is to lower the barrier to entry for NFT creators, fostering a
more inclusive and vibrant ecosystem while maintaining network integrity and preventing spam.
-
+
The current deposit of 10 DOT for collection creation (along with 0.01 DOT for item deposit and 0.2
DOT for metadata and attribute deposits) on the Polkadot Asset Hub and 0.1 KSM on Kusama Asset Hub
presents a significant financial barrier for many NFT creators. By lowering the deposit
@@ -593,7 +553,7 @@ low.
- Deposits SHOULD be derived from the
deposit function, adjusted by the corresponding pricing mechanism.
-
+
- NFT Creators: Primary beneficiaries of the proposed change, particularly those who found the
current deposit requirements prohibitive.
@@ -607,7 +567,7 @@ collections, enhancing the overall ecosystem.
Previous discussions have been held within the Polkadot
Forum, with
artists expressing their concerns about the deposit amounts.
-
+
This RFC proposes a revision of the deposit constants in the configuration of the NFTs pallet on the
Polkadot Asset Hub. The new deposit amounts would be determined by a standard deposit formula.
As of v1.1.1, the Collection Deposit is 10 DOT and the Item Deposit is 0.01 DOT (see
@@ -621,7 +581,7 @@ pub const NftsItemDeposit: Balance = system_para_deposit(1, 164);
This results in the following deposits (calculated using this
repository):
Polkadot
-| Name | Current Rate (DOT) | Proposed Rate (DOT) |
+| Name | Current Rate (DOT) | Calculated with Function (DOT) |
| collectionDeposit | 10 | 0.20064 |
| itemDeposit | 0.01 | 0.20081 |
| metadataDepositBase | 0.20129 | 0.20076 |
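As an illustrative reconstruction (assuming Polkadot's 10-decimal denomination and the usual system-parachain rates of 0.2 DOT per item and 0.00001 DOT per byte; the real runtime constants are authoritative), the deposit function can be sketched as:

```rust
// Hedged sketch of the standard system-parachain deposit function,
// assuming 1 DOT = 10^10 planck; real runtime constants may differ.
type Balance = u128;
const UNITS: Balance = 10_000_000_000; // 1 DOT
const CENTS: Balance = UNITS / 100;
const MILLICENTS: Balance = CENTS / 1_000;

const fn system_para_deposit(items: u32, bytes: u32) -> Balance {
    // One hundredth of the assumed Relay-chain deposit rate:
    // 0.2 DOT per item plus 0.00001 DOT per byte.
    (items as Balance * 20 * UNITS + bytes as Balance * 100 * MILLICENTS) / 100
}

fn main() {
    // deposit(1, 64) = 0.2 DOT + 64 * 0.00001 DOT = 0.20064 DOT,
    // matching the calculated collectionDeposit in the table above.
    assert_eq!(system_para_deposit(1, 64), 2_006_400_000);
}
```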
@@ -630,7 +590,7 @@ repository):
Similarly, the prices for Kusama were calculated as:
Kusama:
-| Name | Current Rate (KSM) | Proposed Rate (KSM) |
+| Name | Current Rate (KSM) | Calculated with Function (KSM) |
| collectionDeposit | 0.1 | 0.006688 |
| itemDeposit | 0.001 | 0.000167 |
| metadataDepositBase | 0.006709666617 | 0.0006709666617 |
@@ -674,7 +634,7 @@ discussed in the Ad
DOT and stablecoin values limiting the amount. With asset rates available via the Asset Conversion
pallet, the system could take the lower value required. A sigmoid curve would make sense for this
application to avoid sudden rate changes, as in:
-$$ \frac{\mathrm{min(DOT deposit, stable deposit)} }{\mathrm{1 + e^{a - b * x}} }$$
+$$ \mathrm{minDeposit} + \frac{\min(\mathrm{DotDeposit}, \mathrm{StableDeposit}) - \mathrm{minDeposit}}{1 + e^{a - b x}} $$
where the constant a moves the inflection to lower or higher x values, the constant b adjusts
the rate of the deposit increase, and the independent variable x is the number of collections or
items, depending on application.
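A sketch of how such a curve could be evaluated (the parameter values below are arbitrary, purely for illustration):

```rust
// Sigmoid-adjusted deposit from the formula above:
// minDeposit + (min(dotDeposit, stableDeposit) - minDeposit) / (1 + e^(a - b*x)).
// All parameter values used here are illustrative, not proposed constants.
fn sigmoid_deposit(min_deposit: f64, dot_deposit: f64, stable_deposit: f64, a: f64, b: f64, x: f64) -> f64 {
    // Take the lower of the DOT- and stablecoin-denominated requirements.
    let cap = dot_deposit.min(stable_deposit);
    min_deposit + (cap - min_deposit) / (1.0 + (a - b * x).exp())
}

fn main() {
    // With few existing items (small x) the deposit stays near the minimum;
    // as x grows it smoothly approaches the cap, avoiding sudden rate changes.
    let low = sigmoid_deposit(0.01, 0.2, 0.25, 6.0, 0.1, 0.0);
    let high = sigmoid_deposit(0.01, 0.2, 0.25, 6.0, 0.1, 200.0);
    assert!(low < high && high < 0.2);
}
```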
@@ -819,12 +779,12 @@ Polkadot and Kusama networks.
| Authors | Alzymologist Oy, Zondax LLC, Parity Technologies |
-
+
To interact with chains in the Polkadot ecosystem it is required to know how transactions are encoded and how to read state. To do this, Substrate, the framework used by most chains in the Polkadot ecosystem, exposes metadata about the runtime. UIs, wallets, signers, etc. can use this metadata to interact with the chains. This makes the metadata a crucial piece of transaction encoding, as users rely on the interacting software to encode transactions in the correct format. This becomes even more important when the user signs the transaction on an offline signer, which by its nature cannot access the metadata without relying on the online wallet to provide it. So, the idea is that the metadata is chunked and these chunks are put into a Merkle tree. The root hash of this Merkle tree represents the metadata. Cryptographically digested together with a small amount of extra metadata, it allows offline signers and wallets to decode transactions by obtaining proofs for the individual chunks of the metadata. This digest is also included in the signed data of the transaction (but not sent as part of the transaction). The runtime then includes its known metadata root hash and extra-metadata constants when verifying the transaction. If the digest known by the runtime differs from the one that the offline signer used, it very likely means that the online wallet provided invalid data, and verification of the transaction fails, making metadata substitution impossible. The user depends on the offline wallet showing them the correct decoded transaction before signing, and with the merkleized metadata they can be sure that it was the correct transaction, or the runtime will reject it.
Thus, we propose the following:
Add a metadata digest value to the signed data, supplementing the signing party with proof of correct extrinsic interpretation. This would ensure that hardware wallets always use correct metadata to decode the information for the user.
The digest value is generated once before release and is well-known and deterministic. The digest mechanism is designed to be modular and flexible. It also supports partial metadata transfer as needed by the signing party's extrinsic decoding mechanism. This accounts for signing devices' potentially limited communication bandwidth and/or memory capacity.
-
+
While all blockchain systems support (at least in some sense) offline signing used in air-gapped wallets and lightweight embedded devices, only a few simultaneously allow complex upgradeable logic and full message decoding on the cold offline-signer side. Substrate is one of these heartening few, and therefore we should build on this feature to greatly improve transaction security and thus, in general, network resilience.
As a starting point, it is important to recognise that prudence and due care are naturally required. As we build further reliance on this feature, we should be very careful to make sure it works correctly every time so as not to create a false sense of security.
@@ -854,7 +814,7 @@ Polkadot and Kusama networks.
- The chunk-handling mechanism SHOULD support chunks being sent in any order without memory-utilization overhead;
- Unused enum variants MUST be stripped (this has great impact on transmitted metadata size; examples: era enum, enum with all calls for call batching).
-
+
- Runtime implementors
- UI/wallet implementors
@@ -864,7 +824,7 @@ Polkadot and Kusama networks.
This feature is essential for all offline signer tools; many regular signing tools might make use of it. In general, this RFC greatly improves security of any network implementing it, as many governing keys are used with offline signers.
Implementing this RFC would remove the requirement to maintain metadata portals manually, as the task of metadata verification would effectively be moved to the consensus mechanism of the chain.
The idea for this RFC was brought up by runtime implementors and was extensively discussed with offline wallet implementors. It was designed in such a way that it can work easily with the existing offline wallet solutions in the Polkadot ecosystem.
-
+
The FRAME metadata provides a lot of information about a FRAME-based runtime. It contains information about the pallets, the calls per pallet, the storage entries per pallet, information about runtime APIs, and type information about most of the types used in the runtime. For decoding transactions on an offline wallet, we mainly require this type information. Most of the other information in the FRAME metadata is not required for decoding, so we can remove it. We have therefore come up with a custom representation of the metadata and a way to chunk it, ensuring that we only need to send the chunks required for decoding a certain transaction to the offline wallet.
A reference implementation of this process is provided in metadata-shortener crate.
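The chunk-and-digest idea can be sketched as follows. This toy model uses a trivial non-cryptographic hash and a naive tree fold purely for illustration; the actual hashing scheme and chunking rules are defined by the RFC and the metadata-shortener crate:

```rust
// Toy sketch: chunk the (shortened) metadata, hash each chunk, and fold
// the hashes into a Merkle root that is mixed into the signed data.
// FNV-1a stands in for a real cryptographic hash here.
fn hash(data: &[u8]) -> u64 {
    data.iter()
        .fold(1469598103934665603u64, |h, b| (h ^ *b as u64).wrapping_mul(1099511628211))
}

// Fold a non-empty layer of chunk hashes up to a single root.
fn merkle_root(mut layer: Vec<u64>) -> u64 {
    while layer.len() > 1 {
        let next: Vec<u64> = layer
            .chunks(2)
            .map(|pair| {
                // Duplicate the last hash when the layer has an odd length.
                let right = *pair.get(1).unwrap_or(&pair[0]);
                hash(&[pair[0].to_le_bytes(), right.to_le_bytes()].concat())
            })
            .collect();
        layer = next;
    }
    layer[0]
}

fn main() {
    let metadata = b"pallet calls, storage entries, type information";
    let chunk_hashes: Vec<u64> = metadata.chunks(8).map(hash).collect();
    // The offline signer receives only the chunks one transaction needs,
    // plus sibling hashes proving membership under this root.
    let root = merkle_root(chunk_hashes.clone());
    assert_eq!(root, merkle_root(chunk_hashes));
}
```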
@@ -1176,20 +1136,20 @@ Privacy of users should also not be impacted. This assumes that wallets will gen
| Authors | Pierre Krieger |
-
+
This RFC proposes to make the mechanism of RFC #8 more generic by introducing the concept of "capabilities".
Implementations can implement certain "capabilities", such as serving old block headers or being a parachain bootnode.
The discovery mechanism of RFC #8 is extended to be able to discover nodes of specific capabilities.
-
+
The Polkadot peer-to-peer network is made of nodes. Not all these nodes are equal. Some nodes store only the headers of recent blocks, some nodes store all the block headers and bodies since the genesis, some nodes store the storage of all blocks since the genesis, and so on.
It is currently not possible to know ahead of time (without connecting to it and asking) which nodes have which data available, and it is not easily possible to build a list of nodes that have a specific piece of data available.
If you want to download for example the header of block 500, you have to connect to a randomly-chosen node, ask it for block 500, and if it says that it doesn't have the block, disconnect and try another randomly-chosen node.
In certain situations such as downloading the storage of old blocks, nodes that have the information are relatively rare, and finding through trial and error a node that has the data can take a long time.
This RFC attempts to solve this problem by giving the possibility to build a list of nodes that are capable of serving specific data.
-
+
Low-level client developers.
People interested in accessing the archive of the chain.
-
+
Reading RFC #8 first might help with comprehension, as this RFC is very similar.
Please keep in mind while reading that everything below applies to both relay chains and parachains, except where mentioned otherwise.
@@ -1288,19 +1248,19 @@ We could even add to the peer-to-peer network nodes that are only capable of ser
| Authors | Jiahao Ye |
-
+
Currently, the Substrate runtime uses a simple allocator defined on the host side. Every runtime MUST
import these allocator functions for normal execution. This makes runtime code less versatile than it could be.
This RFC therefore proposes a new specification for the allocator, making the Substrate runtime more generic.
-
+
Since this RFC defines a new way to handle allocation, we now regard the old one as the legacy allocator.
Because the allocator implementation details are defined by the Substrate client, parachains/parathreads cannot customize the memory allocation algorithm. The new specification therefore allows the runtime to customize memory allocation and then export the allocator functions, according to the specification, for the client side to use.
Another benefit is that some new host functions can be designed without allocating memory on the client side, which may yield performance improvements. It will also help provide a unified and clean specification if the Substrate runtime supports multiple targets (e.g. RISC-V).
There is a further potential benefit: many programming languages that compile to Wasm may not support an external allocator well, so this change lowers the barrier for other languages to enter the Substrate runtime ecosystem.
The last and most important benefit is that, for offchain-context execution, the runtime can fully support pure Wasm. What this means here is that all imported host functions could be stubbed out, and the various verification logic of the runtime can then be compiled to pure Wasm, making it possible for the Substrate runtime to run block verification in other environments (such as browsers and other non-Substrate environments).
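For illustration, a runtime-owned allocator could be as simple as a bump allocator; everything below (names, alignment choice) is a hypothetical sketch, not part of the proposed spec:

```rust
// Hypothetical bump allocator a runtime could implement itself instead of
// importing the host's allocator. Real runtimes would use something more
// sophisticated; this only illustrates the "runtime owns allocation" idea.
struct BumpAllocator {
    heap: Vec<u8>, // stand-in for the runtime's linear memory
    next: usize,
}

impl BumpAllocator {
    fn new(size: usize) -> Self {
        Self { heap: vec![0u8; size], next: 0 }
    }

    // Returns an offset into the heap, or None when out of memory.
    fn alloc(&mut self, size: usize) -> Option<usize> {
        let aligned = (self.next + 7) & !7; // 8-byte alignment
        if aligned + size > self.heap.len() {
            return None;
        }
        self.next = aligned + size;
        Some(aligned)
    }
}

fn main() {
    let mut a = BumpAllocator::new(64);
    let first = a.alloc(10).unwrap();
    let second = a.alloc(10).unwrap();
    assert_eq!(first, 0);
    assert_eq!(second, 16); // 10 rounded up to the next 8-byte boundary
    assert!(a.alloc(100).is_none()); // exceeds the 64-byte heap
}
```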
-
+
No attempt was made at convincing stakeholders.
-
+
This section contains a list of functions that should be exported by the Substrate runtime.
We define the spec as version 1, so the following dummy function v1 MUST be exported to hint
@@ -1414,7 +1374,7 @@ For the first year, we SHALL disable the v1 by default, and enable it by default
| Authors | Sourabh Niyogi |
-
+
This RFC proposes to add the two dominant smart contract programming languages in the Polkadot ecosystem to AssetHub:
EVM + ink!/Coreplay. The objective is to increase DOT Revenue by making AssetHub accessible to
(1) Polkadot Rollups;
@@ -1423,7 +1383,7 @@ EVM + ink!/Coreplay. The objective is to increase DOT Revenue by making AssetHu
These changes in AssetHub are enabled by key Polkadot 2.0 technologies:
PolkaVM supporting Coreplay, and
hyper data availability in Blobs Chain.
-
+
EVM Contracts are pervasive in the Web3 blockchain ecosystem,
while Polkadot 2.0's Coreplay aims to surpass EVM Contracts in ease-of-use using PolkaVM's RISC architecture.
Asset Hub for Polkadot does not have smart contract capabilities,
@@ -1483,7 +1443,7 @@ This may be due to missing tooling, slow compile times, and/or simply because in
This has the key new promise of making smart contract languages easier to use -- instead of smart contract developers worrying about what can be done within the gas limits of a specific block or a specific transaction, Coreplay smart contracts can be much easier to program on (see here).
We believe AssetHub should support ink! as a precursor to support CorePlay's capabilities as soon as possible.
To the best of our knowledge, release times of this are unknown but having ink! inside AssetHub would be natural for Polkadot 2.0.
-
+
- Asset Hub Users: Those who call any extrinsic on Asset Hub for Polkadot.
- DOT Token Holders: Those who hold DOT on any chain in the Polkadot ecosystem.
@@ -1491,7 +1451,7 @@ To the best of our knowledge, release times of this are
+
AssetHub is a major component of the Polkadot 2.0 Minimal Relay Chain architecture. It is critical that smart contract developers not be able to clog AssetHub's blockspace for other mission critical applications, such as Staking and Governance.
As such, it is proposed that at most 50% of the available weight in AssetHub for Polkadot blocks be allocated to smart contract pallets (EVM, ink! and/or Coreplay). While to date AssetHub has seen limited usage, it is believed (see here) that imposing this limit on the smart contract pallets would limit the effect on non-smart-contract usage. An excessively small weight limit like 10% or 20% may limit the attractiveness of Polkadot as a platform for Polkadot rollups and EVM Contracts. An excessively large weight like 90% or 100% may threaten AssetHub usage.
@@ -1601,7 +1561,7 @@ This would enable CEXs and EVM L2 builders to choose Polkadot over Ethereum.
| Author | Adam Clay Steeber |
-
+
This RFC proposes adding a trivial governance track on Kusama to facilitate X (formerly known as Twitter) posts on the @kusamanetwork account. The technical aspect
of implementing this in the runtime is inconsequential and straightforward, though it might get more technical if the Fellowship wants to regulate this track
with a non-existent permission set. If this is implemented it would need to be followed up with:
@@ -1609,7 +1569,7 @@ with a non-existent permission set. If this is implemented it would need to be f
- the establishment of specifications for proposing X posts via this track, and
- the development of tools/processes to ensure that the content contained in referenda enacted in this track would be automatically posted on X.
-
+
The overall motivation for this RFC is to decentralize the management of the Kusama brand/communication channel to KSM holders. This is necessary in my opinion primarily
because of the inactivity of the account in recent history, with posts spanning weeks or months apart. I am currently unaware of who/what entity manages the Kusama
X account, but if they are affiliated with Parity or W3F this proposed solution could also offload some of the legal ramifications of making (or not making)
@@ -1619,11 +1579,11 @@ and the community becomes totally autonomous in the management of Kusama's X pos
that could be offloaded to openGov, provided this proof-of-concept is successful.
Finally, this RFC is the epitome of experimentation that Kusama is ideal for. This proposal may spark newfound excitement for Kusama and help us realize Kusama's potential
for pushing boundaries and trying new unconventional ideas.
-
+
This idea has not been formalized by any individual (or group of) KSM holder(s). To my knowledge the socialization of this idea is contained
entirely in my recent X post here, but it is possible that an idea like this one has been discussed in
other places. It appears to me that the ecosystem would welcome a change like this which is why I am taking action to formalize the discussion.
-
+
The implementation of this idea can be broken down into 3 primary phases:
First, we begin with this RFC to ensure all feedback can be discussed and implemented in the proposal. After the Fellowship and the community come to a reasonable
@@ -1754,9 +1714,9 @@ out of Kusama's scope. But it will require some off-chain effort to maintain.
Authors | Gavin Wood |
-
+
This proposes a periodic, sale-based method for assigning Polkadot Coretime, the analogue of "block space" within the Polkadot Network. The method takes into account the need for long-term capital expenditure planning for teams building on Polkadot, yet also provides a means to allow Polkadot to capture long-term value in the resource which it sells. It supports the possibility of building rich and dynamic secondary markets to optimize resource allocation and largely avoids the need for parameterization.
-
+
The Polkadot Ubiquitous Computer, or just Polkadot UC, represents the public service provided by the Polkadot Network. It is a trust-free, WebAssembly-based, multicore, internet-native omnipresent virtual machine which is highly resilient to interference and corruption.
The present system of allocating the limited resources of the Polkadot Ubiquitous Computer is through a process known as parachain slot auctions. This is a parachain-centric paradigm whereby a single core is allocated long-term to a single parachain, which itself implies a Substrate/Cumulus-based chain secured and connected via the Relay-chain. Slot auctions are on-chain candle auctions which proceed for several days and result in the core being assigned to the parachain for six months at a time, up to 24 months in advance. Practically speaking, we only see two-year periods being bid upon and leased.
@@ -1777,7 +1737,7 @@ out of Kusama's scope. But it will require some off-chain effort to maintain.The solution SHOULD avoid creating additional dependencies on functionality which the Relay-chain need not strictly provide for the delivery of the Polkadot UC.
Furthermore, the design SHOULD be implementable and deployable in a timely fashion; three months from the acceptance of this RFC should not be unreasonable.
-
+
Primary stakeholder sets are:
- Protocol researchers and developers, largely represented by the Polkadot Fellowship and Parity Technologies' Engineering division.
@@ -1786,7 +1746,7 @@ out of Kusama's scope. But it will require some off-chain effort to maintain.
Socialization:
The essentials of this proposal were presented at Polkadot Decoded 2023 Copenhagen on the Main Stage. A small amount of socialization at the Parachain Summit preceded it, and some substantial discussion followed it. The Parity Ecosystem team is currently soliciting views from ecosystem teams who would be key stakeholders.
-
+
Upon implementation of this proposal, the parachain-centric slot auctions and associated crowdloans cease. Instead, Coretime on the Polkadot UC is sold by the Polkadot System in two separate formats: Bulk Coretime and Instantaneous Coretime.
When a Polkadot Core is utilized, we say it is dedicated to a Task rather than a "parachain". The Task to which a Core is dedicated may change at every Relay-chain block and while one predominant type of Task is to secure a Cumulus-based blockchain (i.e. a parachain), other types of Tasks are envisioned.
@@ -2276,10 +2236,10 @@ InstaPoolHistory: (empty)
| Authors | Gavin Wood, Robert Habermeier |
-
+
In the Agile Coretime model of the Polkadot Ubiquitous Computer, as proposed in RFC-1 and RFC-3, it is necessary for the allocating parachain (envisioned to be one or more pallets on a specialised Brokerage System Chain) to communicate the core assignments to the Relay-chain, which is responsible for ensuring those assignments are properly enacted.
This is a proposal for the interface which will exist around the Relay-chain in order to communicate this information and instructions.
-
+
The background motivation for this interface is splitting out coretime allocation functions and secondary markets from the Relay-chain onto System parachains. A well-understood and general interface is necessary for ensuring the Relay-chain receives coretime allocation instructions from one or more System chains without introducing dependencies on the implementation details of either side.
@@ -2291,7 +2251,7 @@ InstaPoolHistory: (empty)
- The interface MUST allow for the allocating chain to instruct changes to the number of cores which it is able to allocate.
- The interface MUST allow for the allocating chain to be notified of changes to the number of cores which are able to be allocated by the allocating chain.
-
+
Primary stakeholder sets are:
- Developers of the Relay-chain core-management logic.
@@ -2299,7 +2259,7 @@ InstaPoolHistory: (empty)
Socialization:
The content of this RFC was discussed in the Polkadot Fellows channel.
-
+
The interface has two sections: the messages which the Relay-chain is able to receive from the allocating parachain (the UMP message types), and the messages which the Relay-chain is able to send to the allocating parachain (the DMP message types). These messages are expected to be implemented in a well-known pallet and called with the XCM Transact instruction.
Future work may include these messages being introduced into the XCM standard.
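The shape of such an interface might look roughly as follows; all type and variant names here are hypothetical placeholders, not the concrete messages the RFC defines:

```rust
// Hypothetical shapes for the two message directions between the
// allocating parachain and the Relay-chain (illustrative only).
#[derive(Debug, PartialEq)]
enum UmpMessage {
    // Allocating chain -> Relay-chain: dedicate a core to a task.
    AssignCore { core: u16, task: u32 },
    // Allocating chain -> Relay-chain: request a change in core count.
    RequestCoreCount { count: u16 },
}

#[derive(Debug, PartialEq)]
enum DmpMessage {
    // Relay-chain -> allocating chain: notify of the allocatable core count.
    NotifyCoreCount { count: u16 },
}

// Simplified model: a core-count request is acknowledged by notifying the
// allocating chain of the new count; assignments are enacted with no reply.
fn handle_ump(msg: &UmpMessage) -> Option<DmpMessage> {
    match msg {
        UmpMessage::RequestCoreCount { count } => {
            Some(DmpMessage::NotifyCoreCount { count: *count })
        }
        UmpMessage::AssignCore { .. } => None,
    }
}

fn main() {
    let reply = handle_ump(&UmpMessage::RequestCoreCount { count: 8 });
    assert_eq!(reply, Some(DmpMessage::NotifyCoreCount { count: 8 }));
}
```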
@@ -2430,13 +2390,13 @@ assert_eq!(targets.iter().map(|x| x.1).sum(), 57600);
| Authors | Joe Petrowski |
-
+
As core functionality moves from the Relay Chain into system chains, so increases the reliance on
the liveness of these chains for the use of the network. It is not economically scalable, nor
necessary from a game-theoretic perspective, to pay collators large rewards. This RFC proposes a
mechanism -- part technical and part social -- for ensuring reliable collator sets that are
resilient to attempts to stop any subsystem of the Polkadot protocol.
-
+
In order to guarantee access to Polkadot's system, the collators on its system chains must propose
blocks (provide liveness) and allow all transactions to eventually be included. That is, some
collators may censor transactions, but there must exist one collator in the set who will include a
@@ -2472,12 +2432,12 @@ to censor any subset of transactions.
- Collators selected by governance SHOULD have a reasonable expectation that the Treasury will
reimburse their operating costs.
-
+
- Infrastructure providers (people who run validator/collator nodes)
- Polkadot Treasury
-
+
This protocol builds on the existing
Collator Selection pallet
and its notion of Invulnerables. Invulnerables are collators (identified by their AccountIds) who
@@ -2589,10 +2549,10 @@ appropriate. These chains should be evaluated on a case-by-case basis.
| Authors | Pierre Krieger |
-
+
The full nodes of the Polkadot peer-to-peer network maintain a distributed hash table (DHT), which is currently used for full nodes discovery and validators discovery purposes.
This RFC proposes to extend this DHT to be used to discover full nodes of the parachains of Polkadot.
-
+
The maintenance of bootnodes has long been an annoyance for everyone.
When a bootnode is newly deployed or removed, every chain specification must be updated in order to take the change into account. This has led to various non-optimal solutions, such as pulling chain specifications from GitHub repositories.
When it comes to RPC nodes, UX developers often have trouble finding up-to-date addresses of parachain RPC nodes. With the ongoing migration from RPC nodes to light clients, similar problems would happen with chain specifications as well.
@@ -2601,9 +2561,9 @@ When it comes to RPC nodes, UX developers often have trouble finding up-to-date
Because the list of bootnodes in chain specifications is so annoying to modify, the consequence is that the number of bootnodes is rather low (typically between 2 and 15). In order to better resist downtime and DoS attacks, a better solution would be to use every node of a certain chain as a potential bootnode, rather than special-casing some specific nodes.
While this RFC doesn't solve these problems for relay chains, it aims to solve them for parachains by storing the list of all the full nodes of a parachain on the relay-chain DHT.
Assuming that this RFC is implemented, and that light clients are used, deploying a parachain wouldn't require more work than registering it onto the relay chain and starting the collators. There wouldn't be any need for special infrastructure nodes anymore.
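A toy model of the mechanism (the key derivation and record format here are invented for illustration; the RFC specifies the real ones):

```rust
// Toy model: parachain nodes advertise themselves in the relay-chain DHT
// under a key derived from their para id, and light clients look that key
// up instead of relying on hardcoded bootnodes in a chain specification.
use std::collections::HashMap;

// Hypothetical key derivation; a real scheme would use a cryptographic
// hash over a well-known prefix and the para id.
fn dht_key(para_id: u32) -> u64 {
    b"paranode"
        .iter()
        .fold(para_id as u64, |h, b| h.wrapping_mul(31).wrapping_add(*b as u64))
}

fn main() {
    let mut dht: HashMap<u64, Vec<&str>> = HashMap::new();
    // Collators of parachain 2000 publish their multiaddresses.
    dht.entry(dht_key(2000)).or_default().push("/dns/node-a/tcp/30333");
    dht.entry(dht_key(2000)).or_default().push("/dns/node-b/tcp/30333");
    // A light client discovers them without any chain-spec bootnode list.
    assert_eq!(dht[&dht_key(2000)].len(), 2);
}
```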
-
+
This RFC has been opened on my own initiative because I think that this is a good technical solution to a usability problem that many people are encountering and that they don't realize can be solved.
-
+
The content of this RFC only applies to parachains and parachain nodes that are "Substrate-compatible". It is in no way mandatory for parachains to comply with this RFC.
Note that "Substrate-compatible" is very loosely defined as "implements the same mechanisms and networking protocols as Substrate". The author of this RFC believes that "Substrate-compatible" should be very precisely specified, but there is controversy on this topic.
While a lot of this RFC concerns the implementation of parachain nodes, it makes use of the resources of the Polkadot chain, and as such it is important to describe them in the Polkadot specification.
@@ -2669,6 +2629,46 @@ If this ever becomes a problem, this value of 20 is an arbitrary constant that
While it fundamentally doesn't change much about this RFC, using BabeApi_currentEpoch and BabeApi_nextEpoch might be inappropriate. I'm not familiar enough with good practices within the runtime to have an opinion here. Should it be an entirely new pallet?
It is possible that in the future a client could connect to a parachain without having to rely on a trusted parachain specification.
+(source)
+Table of Contents
+
+
+ | |
+| Start Date | 19.07.2023 |
+| Description | Revenue from Coretime sales should be burned |
+| Authors | Jonas Gehrlein |
+
+
+
+The Polkadot UC will generate revenue from the sale of available Coretime. The question then arises: how should we handle these revenues? Broadly, there are two reasonable paths – burning the revenue and thereby removing it from total issuance, or diverting it to the Treasury. This Request for Comment (RFC) presents arguments favoring burning as the preferred mechanism for handling revenues from Coretime sales.
+
+How to handle the revenue accrued from Coretime sales is an important economic question that influences the value of DOT and should be properly discussed before deciding on either option. Now is the best time to start this discussion.
+
+Polkadot DOT token holders.
+
+This RFC discusses the potential benefits of burning the revenue accrued from Coretime sales instead of diverting it to the Treasury. The arguments in favor are as follows.
+It's in the interest of the Polkadot community to have a consistent and predictable Treasury income, because volatility in the inflow can be damaging, especially in situations when it is insufficient. As such, this RFC operates under the presumption of a steady and sustainable Treasury income flow, which is crucial for the Polkadot community's stability. The assurance of a predictable Treasury income, as outlined in a prior discussion here, or through other equally effective measures, serves as a baseline assumption for this argument.
+Consequently, we need not concern ourselves with this particular issue here. This naturally begs the question - why should we introduce additional volatility to the Treasury by aligning it with the variable Coretime sales? It's worth noting that Coretime revenues often exhibit an inverse relationship with periods when Treasury spending should ideally be ramped up. During periods of low Coretime utilization (indicated by lower revenue), Treasury should spend more on projects and endeavours to increase the demand for Coretime. This pattern underscores that Coretime sales, by their very nature, are an inconsistent and unpredictable source of funding for the Treasury. Given the importance of maintaining a steady and predictable inflow, it's unnecessary to rely on another volatile mechanism. Some might argue that we could have both: a steady inflow (from inflation) and some added bonus from Coretime sales, but burning the revenue would offer further benefits as described below.
+
+-
+
Balancing Inflation: While DOT as a utility token inherently profits from a (reasonable) net inflation, it also benefits from a deflationary force that functions as a counterbalance to the overall inflation. Right now, the only mechanism on Polkadot that burns fees is the one for underutilized DOT in the Treasury. Finding other, more direct targets for burns makes sense, and the Coretime market is a good option.
+
+-
+
Clear incentives: By burning the revenue accrued from Coretime sales, the prices paid by buyers are clearly costs. This removes a distortion from the market that might arise when the paid tokens end up somewhere else within the network. In that case, some actors might have secondary motives for influencing the price of Coretime sales, because they benefit down the line. For example, actors that actively participate in Coretime sales are likely to also benefit from a higher Treasury balance, because they might frequently request funds for their projects. While those effects might appear far-fetched, they could accumulate. Burning the revenues makes sure that the prices paid are clearly costs to the actors themselves.
+
+-
+
Collective Value Accrual: Following the previous argument, burning the revenue also generates a positive externality, because it reduces the overall issuance of DOT and thereby increases the value of each remaining token. In contrast to the aforementioned argument, this benefits all token holders collectively and equally. Therefore, I'd consider this the preferable option, because burning lets all token holders participate in Polkadot's success as Coretime usage increases.
+
+
(source)
Table of Contents
diff --git a/proposed/0013-prepare-blockbuilder-and-core-runtime-apis-for-mbms.html b/proposed/0013-prepare-blockbuilder-and-core-runtime-apis-for-mbms.html
index 8f770ae..c48be37 100644
--- a/proposed/0013-prepare-blockbuilder-and-core-runtime-apis-for-mbms.html
+++ b/proposed/0013-prepare-blockbuilder-and-core-runtime-apis-for-mbms.html
@@ -90,7 +90,7 @@
@@ -285,7 +285,7 @@ This can be unified and simplified by moving both parts into the runtime.