| Bits | Extrinsic type |
| 00 | unsigned |
| 10 | signed |
@@ -1039,7 +1046,7 @@ Implement call filters. This will allow multisig accounts to only accept certain
There is no impact on testing, security or privacy.
This change would allow Polkadot to support new types of transactions, with the specific "general" transaction type in mind at the time of writing this proposal.
-
+
There is no performance impact.
The impact to developers and end-users is minimal as it would just be a bitmask update on their part for parsing the extrinsic type along with the version.
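As a rough sketch of such a bitmask update, the snippet below parses a version byte whose two most significant bits select the extrinsic type per the table above (00 = unsigned, 10 = signed) and whose remaining bits carry the format version. The names and exact bit layout are illustrative assumptions, not the final encoding.

```rust
// Hypothetical sketch: names and bit layout are assumptions based on the
// table above (00 = unsigned, 10 = signed), not the final specification.

/// Extrinsic type, taken from the two most significant bits of the version byte.
#[derive(Debug, PartialEq)]
enum ExtrinsicType {
    Unsigned,      // 0b00
    Signed,        // 0b10
    Reserved(u8),  // any other bit pattern
}

/// Split a version byte into (extrinsic type, format version).
fn parse_version_byte(byte: u8) -> (ExtrinsicType, u8) {
    let ty = match byte >> 6 {
        0b00 => ExtrinsicType::Unsigned,
        0b10 => ExtrinsicType::Signed,
        other => ExtrinsicType::Reserved(other),
    };
    // The lower six bits carry the extrinsic format version.
    (ty, byte & 0b0011_1111)
}

fn main() {
    // 0b10_000101: signed extrinsic, format version 5.
    let (ty, version) = parse_version_byte(0b1000_0101);
    assert_eq!(ty, ExtrinsicType::Signed);
    assert_eq!(version, 5);
}
```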
@@ -1860,7 +1867,7 @@ number of Candidates, can handle updates over XCM from the system's governance l
This proposal has very little impact on most users of Polkadot, and should improve the performance
of system chains by reducing the number of missed blocks.
-
+
As chains have strict PoV size limits, care must be taken in the PoV impact of the session manager.
Appropriate benchmarking and tests should ensure that conservative limits are placed on the number
of Invulnerables and Candidates.
@@ -1990,7 +1997,7 @@ Furthermore, when a large number of providers (here, a provider is a bootnode) a
Because the key changes periodically and isn't predictable, and assuming that the Polkadot DHT is sufficiently large, it is not realistic for an attack like this to be maintained in the long term.
Furthermore, parachain clients are expected to cache a list of known good nodes on their disk. If the mechanism described in this RFC went down, it would only prevent new nodes from accessing the parachain, while clients that have connected before would not be affected.
-
+
The DHT mechanism generally has a low overhead, especially given that publishing providers is done only every 24 hours.
Doing a Kademlia iterative query then sending a provider record shouldn't take more than around 50 kiB in total of bandwidth for the parachain bootnode.
Assuming 1000 parachain full nodes, the 20 Polkadot full nodes corresponding to a specific parachain will each receive a sudden spike of a few megabytes of networking traffic when the key rotates. Again, this is relatively negligible. If this becomes a problem, one can add a random delay before a parachain full node registers itself to be the provider of the key corresponding to BabeApi_next_epoch.
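Both the eclipse-attack concern and the provider mechanism above hinge on Kademlia's XOR distance metric. The sketch below, using simplified types of our own rather than Polkadot's actual networking code, shows how the 20 peers closest to a key are selected, which is exactly the set an attacker tries to occupy by grinding PeerIds.

```rust
// Simplified illustration of Kademlia peer selection; types and names are
// ours, not taken from the Polkadot implementation.

/// XOR distance between two 32-byte identifiers, compared lexicographically.
fn xor_distance(a: &[u8; 32], b: &[u8; 32]) -> [u8; 32] {
    let mut out = [0u8; 32];
    for i in 0..32 {
        out[i] = a[i] ^ b[i];
    }
    out
}

/// Return the `k` peers closest to `key`, as a Kademlia lookup would.
fn closest_peers(key: &[u8; 32], mut peers: Vec<[u8; 32]>, k: usize) -> Vec<[u8; 32]> {
    peers.sort_by_key(|p| xor_distance(key, p));
    peers.truncate(k);
    peers
}

fn main() {
    let key = [0u8; 32];
    let mut near = [0u8; 32];
    near[31] = 1; // differs from the key only in the last bit
    let far = [0xffu8; 32];
    // The peer equal to the key and the one-bit-away peer are selected.
    let closest = closest_peers(&key, vec![far, near, key], 2);
    assert_eq!(closest, vec![key, near]);
}
```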
@@ -2229,7 +2236,7 @@ multi-block migrations available.
Security: n/a
Privacy: n/a
-
+
The performance overhead is minimal in the sense that nothing beyond what is needed to fulfil the
requirements was added. The only performance difference is that initialize_block also returns an enum that needs to be passed through the WASM boundary. This should be negligible.
@@ -2373,7 +2380,7 @@ This can be unified and simplified by moving both parts into the runtime.
The implementation of this RFC will be tested on testnets (Rococo and Westend) first.
An audit may be required to ensure the implementation does not introduce unwanted side effects.
There are no privacy-related concerns.
-
+
This RFC should not introduce any performance impact.
This RFC should improve the developer experience for new and existing parachain teams
@@ -2670,7 +2677,7 @@ may require some optimizations to deal with constraints.
useful in development.
Describe the impact of the proposal on the exposed functionality of Polkadot.
-
+
This is an optimization. The removal of public/user transactions on the Relay Chain ensures that its
primary resources are allocated to system performance.
@@ -2781,7 +2788,7 @@ so that chains know which system_version to use.
As far as I know, this should not have any impact on security or privacy.
These changes should be compatible for existing chains if they use the state_version value for system_version.
-
+
I do not believe there is any performance hit with this change.
This does not break any exposed APIs.
@@ -2847,7 +2854,7 @@ is the correct way of introducing this change.
}
The host function MUST return an unsigned 64-bit integer value representing the current proof size. In block-execution and block-import contexts, this function MUST return the current size of the proof. To achieve this, parachain node implementors need to enable proof recording for block imports. In other contexts, this function MUST return 18446744073709551615 (u64::MAX), which represents disabled proof recording.
-
+
Parachain nodes need to enable proof recording during block import to correctly implement the proposed host function. Benchmarking conducted with balance transfers has shown a performance reduction of around 0.6% when proof recording is enabled.
The host function proposed in this RFC allows parachain runtime developers to keep track of the proof size. Typical usage patterns would be to keep track of the overall proof size or the difference between subsequent calls to the host function.
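A minimal sketch of the delta-tracking pattern described above, with the host function stubbed out; in a real runtime the stub would be the proposed host call, which per this RFC returns u64::MAX when proof recording is disabled.

```rust
// Stub standing in for the proposed host function; a real runtime would call
// into the host here. Returns the current proof size, or u64::MAX when proof
// recording is disabled.
fn storage_proof_size() -> u64 {
    1024
}

/// Difference between two proof-size readings; `None` when recording is
/// disabled (either reading is u64::MAX).
fn proof_size_delta(before: u64, after: u64) -> Option<u64> {
    if before == u64::MAX || after == u64::MAX {
        None
    } else {
        Some(after.saturating_sub(before))
    }
}

fn main() {
    let before = storage_proof_size();
    // ... perform some storage accesses here ...
    let after = storage_proof_size() + 256; // pretend the proof grew by 256 bytes
    assert_eq!(proof_size_delta(before, after), Some(256));
    // Disabled recording is reported as None rather than a bogus delta.
    assert_eq!(proof_size_delta(u64::MAX, u64::MAX), None);
}
```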
@@ -3048,7 +3055,7 @@ Polkadot Asset Hub and 191 on Kusama Asset Hub with a relatively low volume.
As noted above, state bloat is a security concern. In the case of abuse, governance could adapt by
increasing deposit rates and/or using forceDestroy on collections agreed to be spam.
-
+
The primary performance consideration stems from the potential for state bloat due to increased
activity from lower deposit requirements. It's vital to monitor and manage this to avoid any
negative impact on the chain's performance. Strategies for mitigating state bloat, including
@@ -3328,7 +3335,7 @@ mitigate this problem and will likely be needed in the future for CoreJam and/or
Extensive testing will be conducted - both automated and manual.
This proposal doesn't affect security or privacy.
-
+
This is a necessary data availability optimisation, as Reed-Solomon erasure coding has proven to be a top consumer of
CPU time in Polkadot as we scale up the parachain block size and number of availability cores.
With this optimisation, preliminary performance results show that CPU time used for Reed-Solomon coding/decoding can be
@@ -3509,7 +3516,7 @@ to acquire them. However, the asset of choice can be changed in the future.
N/A.
-
+
N/A
N/A
@@ -3601,7 +3608,7 @@ This is equivalent to forcing the Vec<Transaction> to always
Irrelevant.
-
+
Irrelevant.
Irrelevant.
@@ -3710,7 +3717,7 @@ Furthermore, when a large number of providers are registered, only the providers
For this reason, an attacker can abuse this mechanism by randomly generating libp2p PeerIds until they find the 20 entries closest to the key representing the target capability. They are then in control of the list of nodes with that capability. While doing this is not in itself directly harmful, it could lead to eclipse attacks.
Because the key changes periodically and isn't predictable, and assuming that the Polkadot DHT is sufficiently large, it is not realistic for an attack like this to be maintained in the long term.
-
+
The DHT mechanism generally has a low overhead, especially given that publishing providers is done only every 24 hours.
Doing a Kademlia iterative query then sending a provider record shouldn't take more than around 50 kiB in total of bandwidth for the parachain bootnode.
Assuming 1000 nodes with a specific capability, the 20 Polkadot full nodes corresponding to that capability will each receive a sudden spike of a few megabytes of networking traffic when the key rotates. Again, this is relatively negligible. If this becomes a problem, one can add a random delay before a node registers itself to be the provider of the key corresponding to BabeApi_next_epoch.
@@ -4085,7 +4092,7 @@ nodes: [[[2, 3], [4, 5]], [0, 1]]
Implementations can also be tested easily against each other by taking some metadata and ensuring that they all come to the same metadata hash.
Privacy of users should also not be impacted. This assumes that wallets will generate the metadata hash locally and don't leak any information to third party services about which chunks a user will send to their offline wallet. Besides that, there is no leak of private information as getting the raw metadata from the chain is an operation that is done by almost everyone.
-
+
There should be no measurable impact on performance to Polkadot or any other chain using this feature. The metadata root hash is calculated at compile time and at runtime it is optionally used when checking the signature of a transaction. This means that at runtime no performance heavy operations are done.
The proposal alters the way a transaction is built, signed, and verified. So, this imposes some required changes to any kind of developer who wants to construct transactions for Polkadot or any chain using this feature. As the developer can pass 0 for disabling the verification of the metadata root hash, it can be easily ignored.
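To illustrate the cross-checking idea above, the toy example below folds metadata chunks into a single root hash. The hasher (std's DefaultHasher) and the tree shape are stand-ins chosen purely for illustration, not the RFC's actual hashing scheme.

```rust
// Toy chunk-merkleization sketch: DefaultHasher and the pairing strategy are
// illustrative stand-ins, NOT the RFC's actual metadata hashing scheme.
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

fn leaf_hash(chunk: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    chunk.hash(&mut h);
    h.finish()
}

fn node_hash(left: u64, right: u64) -> u64 {
    let mut h = DefaultHasher::new();
    (left, right).hash(&mut h);
    h.finish()
}

/// Fold chunk hashes pairwise into a single root, duplicating a trailing
/// odd node. Assumes a non-empty chunk list.
fn metadata_root(chunks: &[&[u8]]) -> u64 {
    let mut level: Vec<u64> = chunks.iter().map(|c| leaf_hash(c)).collect();
    while level.len() > 1 {
        level = level
            .chunks(2)
            .map(|pair| node_hash(pair[0], *pair.last().unwrap()))
            .collect();
    }
    level[0]
}

fn main() {
    let chunks: [&[u8]; 3] = [b"pallet A", b"pallet B", b"pallet C"];
    // Independent computations over identical chunks agree on the root...
    assert_eq!(metadata_root(&chunks), metadata_root(&chunks));
    // ...while changing any chunk changes the root.
    let tampered: [&[u8]; 3] = [b"pallet A", b"pallet X", b"pallet C"];
    assert_ne!(metadata_root(&chunks), metadata_root(&tampered));
}
```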
@@ -4343,7 +4350,7 @@ The following other host functions are similarly also considered deprecated:
This RFC might be difficult to implement in Substrate due to the internal code design. It is not clear to the author of this RFC how difficult it would be.
-
+
The API of these new functions was heavily inspired by the API used by the C programming language.
The changes in this RFC would need to be benchmarked. This involves implementing the RFC and measuring the speed difference.
@@ -4530,7 +4537,7 @@ OLD_PRICE = 1000
This pricing model is based on the requirements from the basic linear solution proposed in RFC-1, which is a simple dynamic pricing model used only as a proof of concept. The present model adds additional considerations to make the model more adaptable under real conditions.
This RFC, if accepted, shall be implemented in conjunction with RFC-1.
-
+
@@ -4646,7 +4653,7 @@ Also note that child tries aren't considered as descendants of the main trie whe
Implementers of the replier side should be careful to detect early on when a reply would exceed the maximum reply size, rather than unconditionally generating a reply, as this could take a very large amount of CPU, disk I/O, and memory. Existing implementations might currently be accidentally protected from such an attack thanks to the fact that requests have a maximum size, and thus that the list of keys in the query was bounded. After this proposal, this accidental protection would no longer exist.
Malicious server nodes might truncate Merkle proofs even when they don't strictly need to, and it is not possible for the client to (easily) detect this situation. However, malicious server nodes can already do undesirable things such as throttle down their upload bandwidth or simply not respond. There is no need to handle unnecessarily truncated Merkle proofs any differently than a server simply not answering the request.
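A sketch of the early-detection advice, with simplified types of our own: the builder stops as soon as the next entry would overflow the size limit, instead of generating the whole reply and truncating it afterwards.

```rust
// Illustrative only: entries stand in for Merkle proof nodes; the real reply
// format and size limit are defined by the protocol, not by this sketch.

/// Add entries until the next one would overflow `max_size`; report whether
/// the reply had to be truncated. No work is spent on entries past the limit.
fn build_reply(entries: &[Vec<u8>], max_size: usize) -> (Vec<Vec<u8>>, bool) {
    let mut reply = Vec::new();
    let mut size = 0usize;
    for entry in entries {
        if size + entry.len() > max_size {
            // Bail out early: no CPU, disk I/O, or memory is spent on
            // entries that cannot be sent anyway.
            return (reply, true);
        }
        size += entry.len();
        reply.push(entry.clone());
    }
    (reply, false)
}

fn main() {
    let entries = vec![vec![0u8; 6], vec![0u8; 6], vec![0u8; 6]];
    let (reply, truncated) = build_reply(&entries, 14);
    assert_eq!(reply.len(), 2); // a third 6-byte entry would exceed 14 bytes
    assert!(truncated);
}
```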
-
+
It is unclear to the author of the RFC what the performance implications are. Servers are supposed to have limits to the amount of resources they use to respond to requests, and as such the worst that can happen is that light client requests become a bit slower than they currently are.
Irrelevant.
@@ -4834,7 +4841,7 @@ Also note that child tries aren't considered as descendants of the main trie whe
This change will enhance / improve the security of the protocol as it relates to its treasury. The confirmation period is one of the last lines of defence for the collective Polkadot stakeholders to react to a potentially bad referendum and vote NAY in order for its confirmation period to be aborted. It makes sense for the treasurer track's confirmation period duration to be either equal to, or higher than, the big spender track confirmation period.
-
+
This is a simple change (code wise) which should not affect the performance of the Polkadot protocol, outside of increasing the duration of the confirmation period on the treasurer track.
If the proposal alters exposed interfaces to developers or end-users, which types of usage patterns have been optimized for?
@@ -5931,7 +5938,7 @@ privacy-enhancing mechanisms to address this concern.
Security considerations should be taken with the implementation to make sure no unwanted behavior is introduced.
This proposal does not introduce any privacy considerations.
-
+
Depending on the final implementation, this proposal should not introduce much overhead to performance.
The ergonomics of this proposal depend on the final implementation details.
@@ -6017,7 +6024,7 @@ privacy-enhancing mechanisms to address this concern.
We feel that the Polkadot Technical Fellowship would be the most competent collective to identify the testing requirements for the ideas presented in this RFC.
-
+
This change may add extra chain storage requirements on Polkadot, especially with respect to nested delegations.
The change to add nested delegations may affect governance interfaces such as Nova Wallet who will have to apply changes to their indexers to support nested delegations. It may also affect the Polkadot Delegation Dashboard as well as Polkassembly & SubSquare.
@@ -6205,7 +6212,7 @@ pub(super) type CheckedCodeHash<T: Config> =
An audit is required to ensure the implementation's correctness.
The proposal introduces no new privacy concerns.
-
+
This RFC should not introduce any performance impact.
This RFC does not affect the current parachains, nor the parachains that intend to use the one-time payment model for parachain registration.
@@ -6218,108 +6225,6 @@ pub(super) type CheckedCodeHash<T: Config> =
As noted in this GitHub issue, we want to raise the per-byte cost of on-chain data storage. However, a substantial increase in this cost would make it highly impractical for on-demand parachains to register on Polkadot.
This RFC offers an alternative solution for on-demand parachains, ensuring that the per-byte cost increase doesn't overly burden the registration process.
-(source)
-Table of Contents
-
-
- | |
-| Start Date | 13 November 2023 |
-| Description | Change SessionKeys runtime api to also create a proof of ownership for on chain registration. |
-| Authors | Bastian Köcher |
-
-
-
-When rotating/generating the SessionKeys of a node, the node calls into the runtime using the
-SessionKeys::generate_session_keys runtime api. This runtime api function needs to be changed
-to add an extra parameter owner and to change the return value to also include the proof of
-ownership. The owner should be the account id of the account setting the SessionKeys on chain
-to allow the on chain logic the verification of the proof. The on chain logic is then able to prove
-the possession of the private keys of the SessionKeys using the proof.
-
-When a user sets new SessionKeys on chain the chain can currently not ensure that the user
-actually has control over the private keys of the SessionKeys. With the RFC applied the chain is able
-to ensure that the user actually is in possession of the private keys.
-
-
-- Polkadot runtime implementors
-- Polkadot node implementors
-- Validator operators
-
-
-We are first going to explain the proof format being used:
-#![allow(unused)]
-fn main() {
-type Proof = (Signature, Signature, ..);
-}
-The proof being a SCALE encoded tuple over all signatures of each private session
-key signing the owner. The actual type of each signature depends on the
-corresponding session key cryptographic algorithm. The order of the signatures in
-the proof is the same as the order of the session keys in the SessionKeys type.
-The version of the SessionKeys needs to be bumped to 1 to reflect the changes to the
-signature of SessionKeys_generate_session_keys:
-#![allow(unused)]
-fn main() {
-pub struct OpaqueGeneratedSessionKeys {
- pub keys: Vec<u8>,
- pub proof: Vec<u8>,
-}
-
-fn SessionKeys_generate_session_keys(owner: Vec<u8>, seed: Option<Vec<u8>>) -> OpaqueGeneratedSessionKeys;
-}
-The default calling convention for runtime apis is applied, meaning the parameters are
-passed as a SCALE encoded array together with the length of the encoded array. The return
-value is the SCALE encoded return value packed into a u64 (array_ptr | length << 32). So, the
-actual exported function signature looks like:
-#![allow(unused)]
-fn main() {
-fn SessionKeys_generate_session_keys(array: *const u8, len: usize) -> u64;
-}
-The on chain logic for setting the SessionKeys needs to be changed as well. It
-already gets the proof passed as Vec<u8>. This proof needs to be decoded to
-the actual Proof type as explained above. The proof and the SCALE encoded
-account_id of the sender are used to verify the ownership of the SessionKeys.
-
-Validator operators need to pass their account id when rotating their session keys in a node.
-This will require updating some high level docs and making users familiar with the slightly changed ergonomics.
-
-Testing of the new changes is quite easy as it only requires passing an appropriate owner
-for the current testing context. The changes to the proof generation and verification should be
-audited to ensure they are correct.
-
-
-Does not have any impact on the overall performance, only setting SessionKeys will require more weight.
-
-If the proposal alters exposed interfaces to developers or end-users, which types of usage patterns have been optimized for?
-
-Introduces a new version of the SessionKeys runtime api. Thus, nodes should be updated before
-a runtime that contains these changes is enacted; otherwise, they will fail to generate session keys.
-
-None.
-
-None.
-
-Substrate implementation of the RFC.
(source)
Table of Contents
@@ -6351,16 +6256,16 @@ a runtime is enacted that contains these changes otherwise they will fail to gen
| Authors | Pierre Krieger |