diff --git a/404.html b/404.html
index 80f0ede..b11c02b 100644
--- a/404.html
+++ b/404.html
@@ -91,7 +91,7 @@
diff --git a/approved/0001-agile-coretime.html b/approved/0001-agile-coretime.html
index c3a03ab..a066a83 100644
--- a/approved/0001-agile-coretime.html
+++ b/approved/0001-agile-coretime.html
@@ -90,7 +90,7 @@
diff --git a/approved/0005-coretime-interface.html b/approved/0005-coretime-interface.html
index 596b563..1c2cb29 100644
--- a/approved/0005-coretime-interface.html
+++ b/approved/0005-coretime-interface.html
@@ -90,7 +90,7 @@
diff --git a/approved/0007-system-collator-selection.html b/approved/0007-system-collator-selection.html
index 4ced1f3..4986be3 100644
--- a/approved/0007-system-collator-selection.html
+++ b/approved/0007-system-collator-selection.html
@@ -90,7 +90,7 @@
diff --git a/approved/0008-parachain-bootnodes-dht.html b/approved/0008-parachain-bootnodes-dht.html
index fe53b8d..357d1e5 100644
--- a/approved/0008-parachain-bootnodes-dht.html
+++ b/approved/0008-parachain-bootnodes-dht.html
@@ -90,7 +90,7 @@
diff --git a/approved/0012-process-for-adding-new-collectives.html b/approved/0012-process-for-adding-new-collectives.html
index 789a005..a7a6940 100644
--- a/approved/0012-process-for-adding-new-collectives.html
+++ b/approved/0012-process-for-adding-new-collectives.html
@@ -90,7 +90,7 @@
diff --git a/approved/0014-improve-locking-mechanism-for-parachains.html b/approved/0014-improve-locking-mechanism-for-parachains.html
index 8fc601a..027e999 100644
--- a/approved/0014-improve-locking-mechanism-for-parachains.html
+++ b/approved/0014-improve-locking-mechanism-for-parachains.html
@@ -90,7 +90,7 @@
diff --git a/approved/0022-adopt-encointer-runtime.html b/approved/0022-adopt-encointer-runtime.html
index b7d18f6..b3230ae 100644
--- a/approved/0022-adopt-encointer-runtime.html
+++ b/approved/0022-adopt-encointer-runtime.html
@@ -90,7 +90,7 @@
diff --git a/approved/0032-minimal-relay.html b/approved/0032-minimal-relay.html
index 23e5721..aa20315 100644
--- a/approved/0032-minimal-relay.html
+++ b/approved/0032-minimal-relay.html
@@ -90,7 +90,7 @@
diff --git a/approved/0050-fellowship-salaries.html b/approved/0050-fellowship-salaries.html
index 9d88118..c9fb979 100644
--- a/approved/0050-fellowship-salaries.html
+++ b/approved/0050-fellowship-salaries.html
@@ -90,7 +90,7 @@
diff --git a/approved/0056-one-transaction-per-notification.html b/approved/0056-one-transaction-per-notification.html
index d703e38..899280e 100644
--- a/approved/0056-one-transaction-per-notification.html
+++ b/approved/0056-one-transaction-per-notification.html
@@ -90,7 +90,7 @@
diff --git a/index.html b/index.html
index 6ae2c87..4d67879 100644
--- a/index.html
+++ b/index.html
@@ -90,7 +90,7 @@
diff --git a/introduction.html b/introduction.html
index 6ae2c87..4d67879 100644
--- a/introduction.html
+++ b/introduction.html
@@ -90,7 +90,7 @@
diff --git a/print.html b/print.html
index dd62199..24a81b8 100644
--- a/print.html
+++ b/print.html
@@ -91,7 +91,7 @@
@@ -2166,6 +2166,153 @@ The following other host functions are similarly also considered deprecated:

Future Possibilities

After this RFC, we can remove the allocator from the source code of the host altogether in a future version, by removing support for all the deprecated host functions. This would remove the ability to synchronize older blocks, which is probably controversial and requires some preparation that is out of the scope of this RFC.

+


+


+ +

RFC-0000: Lowering NFT Deposits on Polkadot and Kusama Asset Hubs

+
+ + + +
Start Date: 2 November 2023
Description: A proposal to reduce the minimum deposit required for collection creation on the Polkadot and Kusama Asset Hub, making it more accessible and affordable for artists.
Authors: Aurora Poppyseed, Just_Luuuu, Viki Val
+
+

Summary

+

This RFC proposes changing the current deposit requirements on the Polkadot and Kusama Asset Hub for creating NFT collections. The objective is to lower the barrier to entry for artists, fostering a more inclusive and vibrant ecosystem while maintaining network integrity and preventing spam.

+

Motivation

+

The current deposit of 10 DOT for collection creation on the Polkadot Asset Hub presents a significant financial barrier for many artists. By lowering the deposit requirements, we aim to encourage more artists to participate in the Polkadot NFT ecosystem, thereby enriching the diversity and vibrancy of the community and its offerings.

+

The deposit currently in place is an arbitrary number inherited from the Uniques pallet; it is not the result of any economic analysis. This proposal aims to change the deposit from a constant to dynamic pricing based on the deposit function, taking stakeholders into account.

+

Requirements

+ +

Stakeholders

+ +

Previous discussions have been held within the Polkadot Forum community and with artists expressing their concerns about the deposit amounts. Link.

+

Explanation

+

This RFC suggests modifying the deposit constants defined in the nfts pallet on the Polkadot Asset Hub to require a lower deposit. The reduced deposit amount should be determined by the deposit function adjusted by a pricing mechanism (an arbitrary divisor or another pricing function).

+

Current deposit requirements are as follows:

+
+parameter_types! {
+	// re-use the Uniques deposits
+	pub const NftsCollectionDeposit: Balance = UniquesCollectionDeposit::get();
+	pub const NftsItemDeposit: Balance = UniquesItemDeposit::get();
+	pub const NftsMetadataDepositBase: Balance = UniquesMetadataDepositBase::get();
+	pub const NftsAttributeDepositBase: Balance = UniquesAttributeDepositBase::get();
+	pub const NftsDepositPerByte: Balance = UniquesDepositPerByte::get();
+}
+
+// 
+parameter_types! {
+	pub const UniquesCollectionDeposit: Balance = UNITS / 10; // 1 / 10 UNIT deposit to create a collection
+	pub const UniquesItemDeposit: Balance = UNITS / 1_000; // 1 / 1000 UNIT deposit to mint an item
+	pub const UniquesMetadataDepositBase: Balance = deposit(1, 129);
+	pub const UniquesAttributeDepositBase: Balance = deposit(1, 0);
+	pub const UniquesDepositPerByte: Balance = deposit(0, 1);
+}
+

The proposed change would modify these constants to require a lower deposit, with the reduced amount determined by the deposit function adjusted by an arbitrary divisor.

+
+parameter_types! {
+	pub const NftsCollectionDeposit: Balance = deposit(1, 130);
+	pub const NftsItemDeposit: Balance = deposit(1, 164) / 40;
+	pub const NftsMetadataDepositBase: Balance = deposit(1, 129) / 10;
+	pub const NftsAttributeDepositBase: Balance = deposit(1, 0) / 10;
+	pub const NftsDepositPerByte: Balance = deposit(0, 1);
+}
+
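For intuition, below is a minimal sketch of a deposit helper of the shape used above. The constants and formula are assumptions calibrated so that the helper reproduces the current values in the tables below (for example, deposit(1, 129) = 0.20129 DOT); the actual Asset Hub runtime constants may differ.

// Illustrative only: a per-item plus per-byte deposit helper. The constants
// below are assumptions chosen to match the current-price table, not the
// runtime's actual definitions.
const UNITS: u128 = 10_000_000_000; // 1 DOT in plancks
const CENTS: u128 = UNITS / 100;
const MILLICENTS: u128 = CENTS / 1_000;

const fn deposit(items: u32, bytes: u32) -> u128 {
    // one flat component per stored item, plus a component per stored byte
    items as u128 * 20 * CENTS + bytes as u128 * MILLICENTS
}

fn main() {
    assert_eq!(deposit(1, 129), 2_012_900_000); // 0.20129 DOT (current metadataDepositBase)
    assert_eq!(deposit(1, 0), 2_000_000_000);   // 0.2 DOT (current attributeDepositBase)
}

Dividing the result of such a helper by a constant (as in deposit(1, 164) / 40 above) is what produces the lower proposed values.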

Prices and Proposed Prices on Polkadot Asset Hub (scroll right to see all columns):

+
| **Name**                  | **Current price implementation** | **Price if DOT = 5$**  | **Price if DOT goes to 50$**  | **Proposed Price in DOT** | **Proposed Price if DOT = 5$**   | **Proposed Price if DOT goes to 50$**|
+|---------------------------|----------------------------------|------------------------|-------------------------------|---------------------------|----------------------------------|--------------------------------------|
| collectionDeposit         | 10 DOT                           | 50 $                   | 500 $                         | 0.20064 DOT               | ~1 $                             | ~10.03 $                             |
| itemDeposit               | 0.01 DOT                         | 0.05 $                 | 0.5 $                         | 0.005 DOT                 | 0.025 $                          | 0.25 $                               |
+| metadataDepositBase       | 0.20129 DOT                      | 1.00645 $              | 10.0645 $                     | 0.0020129 DOT             | 0.0100645 $                      | 0.100645$                            |
+| attributeDepositBase      | 0.2 DOT                          | 1 $                    | 10 $                          | 0.002 DOT                 | 0.01 $                           | 0.1$                                 |
+
+

Prices and Proposed Prices on Kusama Asset Hub (scroll right to see all columns):

+
| **Name**                  | **Current price implementation** | **Price if KSM = 23$** | **Price if KSM goes to 500$** | **Proposed Price in KSM** | **Proposed Price if KSM = 23$**  | **Proposed Price if KSM goes to 500$** |
+|---------------------------|----------------------------------|------------------------|-------------------------------|---------------------------|----------------------------------|----------------------------------------|
+| collectionDeposit         | 0.1 KSM                          | 2.3 $                  | 50 $                          | 0.006688 KSM                  | 0.154 $                           | 3.34 $                                    |
+| itemDeposit               | 0.001 KSM                        | 0.023 $                | 0.5 $                         | 0.000167 KSM                | 0.00385 $                         | 0.0835 $                                 |
+| metadataDepositBase       | 0.006709666617 KSM               | 0.15432183319 $        | 3.3548333085 $                | 0.0006709666617 KSM       | 0.015432183319 $                 | 0.33548333085 $                        |
+| attributeDepositBase      | 0.00666666666 KSM                | 0.15333333318 $        | 3.333333333 $                 | 0.000666666666 KSM        | 0.015333333318 $                 | 0.3333333333 $                         |
+
+
+
+

Note: this is only a proposal for change and can be modified after further discussion.

+
+

Drawbacks

+

Modifying deposit requirements necessitates a balanced assessment of the potential drawbacks. Highlighted below are cogent points extracted from the Polkadot Forum discussion, which provide critical perspectives on the implications of such changes:

+
+

But NFT deposits were chosen somewhat arbitrarily at genesis and it’s a good exercise to re-evaluate them and adapt if they are causing pain and if lowering them has little or no negative side effect (or if the trade-off is worth it). +-> joepetrowski

+
+
+

Underestimates mean that state grows faster, although not unbounded - effectively an economic subsidy on activity. Overestimates mean that the state grows slower - effectively an economic depressant on activity. +-> rphmeier

+
+
+

Technical: We want to prevent state bloat, therefore using state should have a cost associated with it. +-> joepetrowski

+
+

Testing, Security, and Privacy

+

The change is backwards compatible. Spam could be prevented by an OpenGov proposal to forceDestroy a list of collections that are not suitable.

+

Performance, Ergonomics, and Compatibility

+

Performance

+

This change is not expected to have a significant impact on the overall performance of the Polkadot Asset Hub. However, monitoring the network closely, especially in the initial stages after implementation, is crucial to identify and mitigate any potential issues.

+

Additionally, a supplementary proposal aims to augment the network's adaptability:

+
+

Just from a technical perspective; I think the best we can do is to use a weak governance origin that is controlled by some consortium (ie. System Collective). +This origin could then update the NFT deposits any time the market conditions warrant it - obviously while honoring the storage deposit requirements. +To implement this, we need RFC#12 and the Parameters pallet from @xlc. +-> OliverTY

+
+

This dynamic governance approach would facilitate a responsive and agile economic model for deposit management, ensuring that the network remains accessible and robust in the face of market volatility.

+

Ergonomics

+

The proposed change aims to enhance the user experience for artists, making Polkadot more accessible and user-friendly.

+

Compatibility

+

The change does not impact compatibility, as the redeposit function is already implemented.

+

Unresolved Questions

+ + + +

If accepted, this RFC could pave the way for further discussions and proposals aimed at enhancing the inclusivity and accessibility of the Polkadot ecosystem. Future work could also explore having a weak governance origin for deposits as proposed by Oliver.


-

Stakeholders

+

Stakeholders

All chain teams are stakeholders, as implementing this feature would require timely effort on their side and would impact compatibility with older tools.

This feature is essential for all offline signer tools; many regular signing tools might also make use of it. In general, this RFC greatly improves the security of any network implementing it, as many governing keys are used with offline signers.

Implementing this RFC would remove the requirement to maintain metadata portals manually, as the task of metadata verification would effectively be moved to the consensus mechanism of the chain.

-

Explanation

+

Explanation

A detailed description of the metadata shortening and digest process is provided in the metadata-shortener crate (see cargo doc --open and the examples). The algorithms of the process are presented below.

Definitions

Metadata structure

@@ -3773,29 +3920,29 @@ modularized_registry.sort(|a, b| {
0x02 - 0xFF | reserved | reserved for future use
-

Drawbacks

+

Drawbacks

Increased transaction size

A 1-byte increase in transaction size due to the signed extension value. The digest is not included in the transferred transaction; it is only used in the signing process.
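As a rough sketch of why only one byte is added, consider a hypothetical extension of the following shape (the name CheckMetadataDigest, the 32-byte digest size, and the exact split between transmitted and signed data are illustrative assumptions, not the crate's actual API):

// Hypothetical sketch: the transmitted transaction only carries a one-byte
// mode flag; the metadata digest is appended to the payload that gets signed,
// so it never travels over the wire.
struct CheckMetadataDigest {
    mode: u8, // 0 = metadata check disabled, 1 = enabled
}

impl CheckMetadataDigest {
    // Bytes included in the transmitted transaction: exactly one.
    fn extra(&self) -> Vec<u8> {
        vec![self.mode]
    }

    // Bytes mixed into the signing payload only: the digest, when enabled.
    fn additional_signed(&self, metadata_digest: [u8; 32]) -> Vec<u8> {
        if self.mode == 0 { Vec::new() } else { metadata_digest.to_vec() }
    }
}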

Transition overhead

Some slightly out-of-spec systems might experience breaking changes as new content is added to the signed extensions. It is important to note that there is no real overhead in processing time or complexity, as the metadata checking mechanism is voluntary. The only drawbacks are expected for tools that do not implement the MetadataV14 self-describing features.

-

Testing, Security, and Privacy

+

Testing, Security, and Privacy

The metadata shortening protocol should be extensively tested on all available examples of metadata before releasing changes to either the metadata or the shortener. Careful code review should be performed on the shortener implementation to ensure security. The main metadata tree would inevitably be constructed at runtime build time, which would also ensure correctness.

To be able to recall shortener protocol in case of vulnerability issues, a version byte is included.

-

Performance, Ergonomics, and Compatibility

-

Performance

+

Performance, Ergonomics, and Compatibility

+

Performance

This is a negligibly small pessimization of build time on the chain side. Cold wallet performance would improve, mostly because the metadata validity mechanism, which was taking most of the effort in cold wallet support, would become trivial.

-

Ergonomics

+

Ergonomics

The proposal was optimized for cold storage wallet usage with minimal impact on all other parts of the ecosystem.

-

Compatibility

+

Compatibility

The proposal in this form is not compatible with older tools that do not implement the proper MetadataV14 self-describing features; those would have to be upgraded to include the new signed extension field.

Prior Art and References

This project was developed under a Polkadot Treasury grant; relevant development links are located in the metadata-offline-project repository.

-

Unresolved Questions

+

Unresolved Questions

  1. How would polkadot-js handle the transition?
  2. Where would non-rust tools like Ledger apps get shortened metadata content?
- +

Changes to the code of all cold signers to implement this mechanism SHOULD be done when this is enabled; non-cold signers may perform the extra metadata check for better security. Ultimately, signing anything without decoding it with verifiable metadata should become discouraged in all situations where a decision-making mechanism is involved (that is, outside of fully automated blind signers like trade bots or staking rewards payout tools).


@@ -3838,11 +3985,11 @@ modularized_registry.sort(|a, b| {
Authors: Alin Dima
-

Summary

+

Summary

Propose a way of permuting the availability chunk indices assigned to validators, in the context of recovering available data from systematic chunks, with the purpose of fairly distributing network bandwidth usage.

-

Motivation

+

Motivation

Currently, the ValidatorIndex is always identical to the ChunkIndex. Since the validator array is only shuffled once per session, naively using the ValidatorIndex as the ChunkIndex would pose an unreasonable stress on the first N/3 validators during an entire session, when favouring availability recovery from systematic chunks.

@@ -3850,9 +3997,9 @@ validators during an entire session, when favouring availability recovery from s
systematic availability chunks to different validators, based on the relay chain block and core. The main purpose is to ensure fair distribution of network bandwidth usage for availability recovery in general and in particular for systematic chunk holders.
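To make the idea concrete, here is an illustrative mapping that rotates chunk indices per relay-chain block and core, so that the systematic chunk indices do not always land on the same validators. This is only a sketch of the intent; it is not the permutation actually chosen by this RFC.

// Illustrative only: rotate the validator -> chunk assignment by an offset
// derived from the relay-chain block number and the core index.
// `n_validators` is assumed to be non-zero.
fn chunk_index(validator_index: u32, n_validators: u32, block_number: u32, core_index: u32) -> u32 {
    // offset that changes per block and per core
    let shift = (block_number.wrapping_add(core_index) % n_validators) as u64;
    ((validator_index as u64 + shift) % n_validators as u64) as u32
}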

-

Stakeholders

+

Stakeholders

Relay chain node core developers.

-

Explanation

+

Explanation

Systematic erasure codes

An erasure coding algorithm is considered systematic if it preserves the original unencoded data as part of the resulting code.
@@ -4007,7 +4154,7 @@ struct (added in https://github.com/paritytech/polkadot-sdk/pull/2177)
Configuration::set_node_feature extrinsic. Once the feature is enabled and new configuration is live, the validator->chunk mapping ceases to be a 1:1 mapping and systematic recovery may begin.
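For illustration, the systematic property mentioned above (the original data being preserved as part of the encoded output) can be observed with an off-the-shelf implementation. The reed-solomon-erasure crate is used here purely as an example; it is not the erasure-coding library Polkadot actually uses.

use reed_solomon_erasure::galois_8::ReedSolomon;

// After encoding, the first `data_shards` shards are the original data,
// unmodified: systematic recovery can fetch and concatenate them directly,
// with no decoding step.
fn main() {
    let data_shards = 4;
    let parity_shards = 2;
    let rs = ReedSolomon::new(data_shards, parity_shards).unwrap();

    let original: Vec<Vec<u8>> = (0..data_shards as u8)
        .map(|i| vec![i; 8]) // 4 data shards of 8 bytes each
        .collect();

    let mut shards = original.clone();
    shards.extend(std::iter::repeat(vec![0u8; 8]).take(parity_shards)); // room for parity

    rs.encode(&mut shards).unwrap();

    // Systematic: the data shards are untouched by encoding.
    assert_eq!(&shards[..data_shards], &original[..]);
}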

-

Drawbacks

+

Drawbacks

-

Testing, Security, and Privacy

+

Testing, Security, and Privacy

Extensive testing will be conducted - both automated and manual. This proposal doesn't affect security or privacy.

-

Performance, Ergonomics, and Compatibility

-

Performance

+

Performance, Ergonomics, and Compatibility

+

Performance

This is a necessary data availability optimisation, as Reed-Solomon erasure coding has proven to be a top consumer of CPU time in Polkadot as we scale up the parachain block size and the number of availability cores.

With this optimisation, preliminary performance results show that the CPU time used for Reed-Solomon coding/decoding can be halved and the total POV recovery time decreased by 80% for large POVs. See more here.

-

Ergonomics

+

Ergonomics

Not applicable.

-

Compatibility

+

Compatibility

This is a breaking change. See upgrade path section above. All validators and collators need to have upgraded their node versions before the feature will be enabled via a governance call.

Prior Art and References

See comments on the tracking issue and the in-progress PR

-

Unresolved Questions

+

Unresolved Questions

Not applicable.

- +

This enables future optimisations for the performance of availability recovery, such as retrieving batched systematic chunks from backers/approval-checkers.

Appendix A

@@ -4120,20 +4267,20 @@ dispute scenarios.

Authors: Pierre Krieger
-

Summary

+

Summary

This RFC proposes to make the mechanism of RFC #8 more generic by introducing the concept of "capabilities".

Implementations can implement certain "capabilities", such as serving old block headers or being a parachain bootnode.

The discovery mechanism of RFC #8 is extended to be able to discover nodes of specific capabilities.

-

Motivation

+

Motivation

The Polkadot peer-to-peer network is made of nodes. Not all of these nodes are equal. Some nodes store only the headers of recent blocks, some nodes store all the block headers and bodies since the genesis, some nodes store the storage of all blocks since the genesis, and so on.

It is currently not possible to know ahead of time (without connecting to it and asking) which nodes have which data available, and it is not easily possible to build a list of nodes that have a specific piece of data available.

If you want to download for example the header of block 500, you have to connect to a randomly-chosen node, ask it for block 500, and if it says that it doesn't have the block, disconnect and try another randomly-chosen node. In certain situations such as downloading the storage of old blocks, nodes that have the information are relatively rare, and finding through trial and error a node that has the data can take a long time.

This RFC attempts to solve this problem by giving the possibility to build a list of nodes that are capable of serving specific data.

-

Stakeholders

+

Stakeholders

Low-level client developers. People interested in accessing the archive of the chain.

-

Explanation

+

Explanation

Reading RFC #8 first might help with comprehension, as this RFC is very similar.

Please keep in mind while reading that everything below applies for both relay chains and parachains, except mentioned otherwise.

Capabilities

@@ -4168,30 +4315,30 @@ If blocks pruning is enabled and the chain is a relay chain, then Substrate unfo

Implementations that have the head of the chain provider capability do not register themselves as providers, but instead are the nodes that participate in the main DHT. In other words, they are the nodes that serve requests of the /<genesis_hash>/kad protocol.

Any implementation that isn't a head of the chain provider (read: light clients) must not participate in the main DHT. This is already presently the case.

Implementations must not participate in the main DHT if they don't fulfill the capability yet. For example, a node that is still in the process of warp syncing must not participate in the main DHT. However, assuming that warp syncing doesn't last more than a few seconds, it is acceptable to ignore this requirement in order to avoid complicating implementations too much.

-

Drawbacks

+

Drawbacks

None that I can see.

-

Testing, Security, and Privacy

+

Testing, Security, and Privacy

The content of this section is basically the same as the one in RFC 8.

This mechanism doesn't add or remove any security by itself, as it relies on existing mechanisms.

Due to the way Kademlia works, it would become the responsibility of the 20 Polkadot nodes whose sha256(peer_id) is closest to the key (described in the explanations section) to store the list of nodes that have specific capabilities. Furthermore, when a large number of providers are registered, only the providers closest to the key are kept, up to a certain implementation-defined limit.
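A small sketch of that proximity rule, assuming the sha2 crate for hashing (illustrative only; the real provider-record logic lives in the libp2p Kademlia implementation):

use sha2::{Digest, Sha256};

// Illustrative: the 20 nodes "responsible" for a capability key are the ones
// whose sha256(peer_id) is closest to the key by XOR distance.
fn closest_20<'a>(key: &[u8; 32], peer_ids: &'a [Vec<u8>]) -> Vec<&'a Vec<u8>> {
    let mut peers: Vec<&Vec<u8>> = peer_ids.iter().collect();
    peers.sort_by_key(|peer_id| {
        let hash = Sha256::digest(peer_id.as_slice());
        // XOR distance to the key, compared byte by byte from the left
        hash.iter().zip(key.iter()).map(|(a, b)| a ^ b).collect::<Vec<u8>>()
    });
    peers.truncate(20);
    peers
}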

For this reason, an attacker can abuse this mechanism by randomly generating libp2p PeerIds until they find the 20 entries closest to the key representing the target capability. They are then in control of the list of nodes with that capability. While doing this cannot directly cause harm by itself, it could lead to eclipse attacks.

Because the key changes periodically and isn't predictable, and assuming that the Polkadot DHT is sufficiently large, it is not realistic for an attack like this to be maintained in the long term.

-

Performance, Ergonomics, and Compatibility

-

Performance

+

Performance, Ergonomics, and Compatibility

+

Performance

The DHT mechanism generally has a low overhead, especially given that publishing providers is done only every 24 hours.

Doing a Kademlia iterative query then sending a provider record shouldn't take more than around 50 kiB in total of bandwidth for the parachain bootnode.

Assuming 1000 nodes with a specific capability, the 20 Polkadot full nodes corresponding to that capability will each receive a sudden spike of a few megabytes of networking traffic when the key rotates. Again, this is relatively negligible. If this becomes a problem, one can add a random delay before a node registers itself to be the provider of the key corresponding to BabeApi_next_epoch.

Maybe the biggest uncertainty is the traffic that the 20 Polkadot full nodes will receive from light clients that want to know the nodes with a capability. If this ever becomes a problem, this value of 20 is an arbitrary constant that can be increased for more redundancy.

-

Ergonomics

+

Ergonomics

Irrelevant.

-

Compatibility

+

Compatibility

Irrelevant.

Prior Art and References

Unknown.

-

Unresolved Questions

+

Unresolved Questions

While it fundamentally doesn't change much about this RFC, using BabeApi_currentEpoch and BabeApi_nextEpoch might be inappropriate. I'm not familiar enough with good practices within the runtime to have an opinion here. Should it be an entirely new pallet?

- +

This RFC would make it possible to reliably discover archive nodes, which would make it possible to reliably send archive node requests, something that isn't currently possible. This could solve the problem of finding archive RPC node providers by migrating archive-related requests to the native peer-to-peer protocol rather than JSON-RPC.

If we ever decide to break backwards compatibility, we could divide the "history" and "archive" capabilities in two, between nodes capable of serving older blocks and nodes capable of serving newer blocks. We could even add to the peer-to-peer network nodes that are only capable of serving older blocks (by reading from a database) but do not participate in the head of the chain, and that just exist for historical purposes.

@@ -4231,19 +4378,19 @@ We could even add to the peer-to-peer network nodes that are only capable of ser
Authors: Jiahao Ye
-

Summary

+

Summary

Currently, the Substrate runtime uses a simple allocator defined on the host side. Every runtime MUST import these allocator functions for normal execution. This situation makes runtime code not versatile enough.

So this RFC proposes to define a new spec for the allocator part to make the Substrate runtime more generic.

-

Motivation

+

Motivation

Since this RFC defines a new way of handling the allocator, we now regard the old one as the legacy allocator. Because the allocator implementation details are defined by the Substrate client, a parachain/parathread cannot customize the memory allocation algorithm; the new specification allows the runtime to customize memory allocation and then export the allocator functions according to the specification for the client side to use. Another benefit is that some new host functions can be designed without allocating memory on the client, which may bring potential performance improvements. It will also help provide a unified and clean specification if the Substrate runtime supports multiple targets (e.g. RISC-V). There is another potential benefit: many programming languages that support compilation to wasm may not be friendly to an external allocator, so this change makes it easier for other programming languages to enter the Substrate runtime ecosystem. The last and most important benefit is that, for offchain context execution, the runtime can fully support pure wasm. What this means here is that all imported host functions would not actually be called (they would be stub functions), so the various verification logic of the runtime can be converted into pure wasm, which makes it possible for the Substrate runtime to run block verification in other environments (such as in browsers and other non-Substrate environments).

-

Stakeholders

+

Stakeholders

No attempt was made at convincing stakeholders.

-

Explanation

+

Explanation

Runtime side spec

This section contains a list of functions that should be exported by the Substrate runtime.

We define the spec as version 1, so the following dummy function v1 MUST be exported to hint
@@ -4282,20 +4429,20 @@ Check if the parsed wasm module contains an exported v1 function:

allocator.

Detail-heavy explanation of the RFC, suitable for explanation to an implementer of the changeset. This should address corner cases in detail and provide justification behind decisions, and provide rationale for how the design meets the solution requirements.
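As a hedged illustration of what such exports could look like, here is a sketch for a wasm32 runtime that owns its allocator and exposes it to the host through plain exports instead of importing allocator host functions. The function names, signatures, and alignment choice below are invented for this sketch; the actual exported symbols are whatever the spec defines.

// Hypothetical sketch only; names are illustrative.
use std::alloc::{alloc, dealloc, Layout};

#[no_mangle]
pub extern "C" fn v1() {
    // dummy export: its presence hints that the v1 allocator spec is in use
}

#[no_mangle]
pub extern "C" fn runtime_alloc(size: u32) -> *mut u8 {
    // 8-byte alignment chosen arbitrarily for the sketch; callers pass a non-zero size
    let layout = Layout::from_size_align(size as usize, 8).expect("valid layout");
    unsafe { alloc(layout) }
}

#[no_mangle]
pub extern "C" fn runtime_dealloc(ptr: *mut u8, size: u32) {
    let layout = Layout::from_size_align(size as usize, 8).expect("valid layout");
    unsafe { dealloc(ptr, layout) }
}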

-

Drawbacks

+

Drawbacks

The allocator inside the runtime will make the code size bigger, but not noticeably so. The allocator inside the runtime may slow down (or speed up) the runtime, but again not noticeably.

We can ignore these drawbacks since they are not prominent. Execution efficiency is largely determined by the runtime developer; we cannot prevent poor efficiency if a developer chooses to accept it.

-

Testing, Security, and Privacy

+

Testing, Security, and Privacy

Keep the legacy allocator runtime test cases, add a new feature to compile the test cases for the v1 allocator spec, and then update the test assertions.

Update the template runtime to enable the v1 spec. Once the dev network runs well, the spec can be considered to be implemented correctly.

-

Performance, Ergonomics, and Compatibility

-

Performance

+

Performance, Ergonomics, and Compatibility

+

Performance

As stated above, there is no obvious performance impact. polkadot-sdk could offer a best-practice allocator for all chains, and third parties could also provide their own customized ones, so performance could improve over time.

-

Ergonomics

+

Ergonomics

This only affects runtime developers, who just need to import a new crate and enable a new feature. It may also be convenient for other wasm-target languages to implement.

-

Compatibility

+

Compatibility

It's 100% compatible. Only some runtime configs and executor configs need to be deprecated.

To support the new runtime spec, we MUST first upgrade the client binary to support the new client-side spec.

We SHALL add an optional primitive crate to enable the version 1 spec and disable the legacy allocator via a cargo feature.
@@ -4305,9 +4452,9 @@ For the first year, we SHALL disable the v1 by default, and enable it by default

  • Move the allocator inside of the runtime
  • Add new allocator design
  • -

    Unresolved Questions

    +

    Unresolved Questions

    None at this time.

    - +

The content discussed in RFC-0004 is basically orthogonal, but it could still be considered together with this one, and it is preferred that this RFC be implemented first.

This feature could make the Substrate runtime easier to support in other languages and to integrate into other ecosystems.


@@ -4340,17 +4487,17 @@ For the first year, we SHALL disable the v1 by default, and enable it by default
Authors: Sourabh Niyogi
-

    Summary

    +

    Summary

This RFC proposes lowering the existential deposit requirement on Asset Hub for Polkadot by a factor of 25, from 0.1 DOT to 0.004 DOT. The objective is to lower the barrier to entry for asset minters who want to mint a new asset to the entire DOT token holder base, and to make Asset Hub on Polkadot a place where everyone can do small asset conversions.

    -

    Motivation

    +

    Motivation

The current existential deposit is 0.1 DOT on Asset Hub for Polkadot. While this does not appear to be a significant financial barrier for most people (only $0.80), this value makes Asset Hub impractical for Asset Hub minters, specifically in the case where a minter wishes to mint a new asset for the entire community of DOT holders (e.g. 1.25MM DOT holders would cost 125K DOT @ $8 = $1MM).

By lowering the existential deposit requirement from 0.1 DOT to 0.004 DOT, the cost of minting to the entire community of DOT holders goes from an unmanageable number [125K DOT, the value of several houses circa December 2023] down to a manageable one [5K DOT, the value of a car circa December 2023].
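The arithmetic behind those figures, using the RFC's own illustrative assumptions (1.25 million DOT holders, DOT at $8 circa December 2023):

// Worked numbers for the paragraph above; the holder count and DOT price are
// the RFC's illustrative assumptions.
fn main() {
    let holders: f64 = 1_250_000.0;
    let dot_price_usd: f64 = 8.0;

    let cost_at_current_ed = holders * 0.1;    // ED = 0.1 DOT   -> 125_000 DOT
    let cost_at_proposed_ed = holders * 0.004; // ED = 0.004 DOT ->   5_000 DOT

    println!("current:  {} DOT (~${})", cost_at_current_ed, cost_at_current_ed * dot_price_usd);
    println!("proposed: {} DOT (~${})", cost_at_proposed_ed, cost_at_proposed_ed * dot_price_usd);
    // current:  125000 DOT (~$1000000)
    // proposed: 5000 DOT (~$40000)
}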

    -

    Stakeholders

    +

    Stakeholders

    -

    Explanation

    +

    Explanation

    The exact amount of the existential deposit (ED) is proposed to be 0.004 DOT based on