diff --git a/404.html b/404.html index d14cb71..1dd48fe 100644 --- a/404.html +++ b/404.html @@ -91,7 +91,7 @@ - IntroductionNewly ProposedProposedRFC-0111: Pure Proxy ReplicationRFC-0112: Compress the State Response Message in State SyncRFC-0114: Introduce secp256r1_ecdsa_verify_prehashed Host Function to verify NIST-P256 elliptic curve signaturesRFC-0117: The Unbrick CollectiveRFC-114: Adjust Tipper Track Confirmation PeriodsApprovedRFC-1: Agile CoretimeRFC-5: Coretime InterfaceRFC-0007: System Collator SelectionRFC-0008: Store parachain bootnodes in relay chain DHTRFC-0010: Burn Coretime RevenueRFC-0012: Process for Adding New System CollectivesRFC-0013: Prepare Core runtime API for MBMsRFC-0014: Improve locking mechanism for parachainsRFC-0022: Adopt Encointer RuntimeRFC-0026: Sassafras Consensus ProtocolRFC-0032: Minimal RelayRFC-0042: Add System version that replaces StateVersion on RuntimeVersionRFC-0043: Introduce storage_proof_size Host Function for Improved Parachain Block UtilizationRFC-0045: Lowering NFT Deposits on Asset HubRFC-0047: Assignment of availability chunks to validatorsRFC-0048: Generate ownership proof for SessionKeysRFC-0050: Fellowship SalariesRFC-0056: Enforce only one transaction per notificationRFC-0059: Add a discovery mechanism for nodes based on their capabilitiesRFC-0078: Merkleized MetadataRFC-0084: General transactions in extrinsic formatRFC-0091: DHT Authority discovery record creation timeRFC-0097: Unbonding QueueRFC-0099: Introduce a transaction extension versionRFC-0100: New XCM instruction: InitiateAssetsTransferRFC-0101: XCM Transact remove require_weight_at_most parameterRFC-0103: Introduce a CoreIndex commitment and a SessionIndex field in candidate receiptsRFC-0105: XCM improved fee mechanismRFC-0107: XCM Execution hintsRFC-0108: Remove XCM testnet NetworkIdsStaleRFC-0004: Remove the host-side runtime memory allocatorRFC-0006: Dynamic Pricing for Bulk Coretime SalesRFC-0009: Improved light client requests networking protocolRFC-0015: Market Design RevisitRFC-34: XCM Absolute Location Account Derivation RFC-0035: Conviction Voting Delegation ModificationsRFC-0044: Rent based registration modelRFC-0054: Remove the concept of "heap pages" from the clientRFC-0070: X Track for @kusamanetworkRFC-0073: Decision Deposit Referendum TrackRFC-0074: Stateful Multisig PalletRFC-0077: Increase maximum length of identity PGP fingerprint values from 20 bytesRFC-0088: Add slashable locked deposit, purchaser reputation, and reserved cores for on-chain identities to broker palletRFC-0089: Flexible InflationRFC-0001: Secondary Market for RegionsRFC-0002: Smart Contracts on the Coretime ChainRFC-0000: Feature Name HereRFC-0106: Remove XCM fees modeRFC-0109: Descend XCM origin instead of clearing it where possibleRFC-TODO: Stale Nomination Reward Curve + IntroductionNewly ProposedProposedRFC-0004: Remove the host-side runtime memory allocatorRFC-0006: Dynamic Pricing for Bulk Coretime SalesRFC-0009: Improved light client requests networking protocolRFC-0015: Market Design RevisitRFC-34: XCM Absolute Location Account Derivation RFC-0035: Conviction Voting Delegation ModificationsRFC-0044: Rent based registration modelRFC-0054: Remove the concept of "heap pages" from the clientRFC-0070: X Track for @kusamanetworkRFC-0073: Decision Deposit Referendum TrackRFC-0074: Stateful Multisig PalletRFC-0077: Increase maximum length of identity PGP fingerprint values from 20 bytesRFC-0088: Add slashable locked deposit, purchaser reputation, and reserved cores for on-chain 
identities to broker palletRFC-0089: Flexible InflationRFC-0001: Secondary Market for RegionsRFC-0002: Smart Contracts on the Coretime ChainRFC-0111: Pure Proxy ReplicationRFC-0112: Compress the State Response Message in State SyncRFC-0114: Introduce secp256r1_ecdsa_verify_prehashed Host Function to verify NIST-P256 elliptic curve signaturesRFC-0117: The Unbrick CollectiveRFC-114: Adjust Tipper Track Confirmation PeriodsApprovedRFC-1: Agile CoretimeRFC-5: Coretime InterfaceRFC-0007: System Collator SelectionRFC-0008: Store parachain bootnodes in relay chain DHTRFC-0010: Burn Coretime RevenueRFC-0012: Process for Adding New System CollectivesRFC-0013: Prepare Core runtime API for MBMsRFC-0014: Improve locking mechanism for parachainsRFC-0022: Adopt Encointer RuntimeRFC-0026: Sassafras Consensus ProtocolRFC-0032: Minimal RelayRFC-0042: Add System version that replaces StateVersion on RuntimeVersionRFC-0043: Introduce storage_proof_size Host Function for Improved Parachain Block UtilizationRFC-0045: Lowering NFT Deposits on Asset HubRFC-0047: Assignment of availability chunks to validatorsRFC-0048: Generate ownership proof for SessionKeysRFC-0050: Fellowship SalariesRFC-0056: Enforce only one transaction per notificationRFC-0059: Add a discovery mechanism for nodes based on their capabilitiesRFC-0078: Merkleized MetadataRFC-0084: General transactions in extrinsic formatRFC-0091: DHT Authority discovery record creation timeRFC-0097: Unbonding QueueRFC-0099: Introduce a transaction extension versionRFC-0100: New XCM instruction: InitiateAssetsTransferRFC-0101: XCM Transact remove require_weight_at_most parameterRFC-0103: Introduce a CoreIndex commitment and a SessionIndex field in candidate receiptsRFC-0105: XCM improved fee mechanismRFC-0107: XCM Execution hintsRFC-0108: Remove XCM testnet NetworkIdsStaleRFC-0000: Feature Name HereRFC-0106: Remove XCM fees modeRFC-0109: Descend XCM origin instead of clearing it where possibleRFC-TODO: Stale Nomination Reward Curve diff --git a/approved/0001-agile-coretime.html b/approved/0001-agile-coretime.html index 03b34f8..95b7edd 100644 --- a/approved/0001-agile-coretime.html +++ b/approved/0001-agile-coretime.html @@ -90,7 +90,7 @@ - IntroductionNewly ProposedProposedRFC-0111: Pure Proxy ReplicationRFC-0112: Compress the State Response Message in State SyncRFC-0114: Introduce secp256r1_ecdsa_verify_prehashed Host Function to verify NIST-P256 elliptic curve signaturesRFC-0117: The Unbrick CollectiveRFC-114: Adjust Tipper Track Confirmation PeriodsApprovedRFC-1: Agile CoretimeRFC-5: Coretime InterfaceRFC-0007: System Collator SelectionRFC-0008: Store parachain bootnodes in relay chain DHTRFC-0010: Burn Coretime RevenueRFC-0012: Process for Adding New System CollectivesRFC-0013: Prepare Core runtime API for MBMsRFC-0014: Improve locking mechanism for parachainsRFC-0022: Adopt Encointer RuntimeRFC-0026: Sassafras Consensus ProtocolRFC-0032: Minimal RelayRFC-0042: Add System version that replaces StateVersion on RuntimeVersionRFC-0043: Introduce storage_proof_size Host Function for Improved Parachain Block UtilizationRFC-0045: Lowering NFT Deposits on Asset HubRFC-0047: Assignment of availability chunks to validatorsRFC-0048: Generate ownership proof for SessionKeysRFC-0050: Fellowship SalariesRFC-0056: Enforce only one transaction per notificationRFC-0059: Add a discovery mechanism for nodes based on their capabilitiesRFC-0078: Merkleized MetadataRFC-0084: General transactions in extrinsic formatRFC-0091: DHT Authority discovery record creation timeRFC-0097: 
Unbonding QueueRFC-0099: Introduce a transaction extension versionRFC-0100: New XCM instruction: InitiateAssetsTransferRFC-0101: XCM Transact remove require_weight_at_most parameterRFC-0103: Introduce a CoreIndex commitment and a SessionIndex field in candidate receiptsRFC-0105: XCM improved fee mechanismRFC-0107: XCM Execution hintsRFC-0108: Remove XCM testnet NetworkIdsStaleRFC-0004: Remove the host-side runtime memory allocatorRFC-0006: Dynamic Pricing for Bulk Coretime SalesRFC-0009: Improved light client requests networking protocolRFC-0015: Market Design RevisitRFC-34: XCM Absolute Location Account Derivation RFC-0035: Conviction Voting Delegation ModificationsRFC-0044: Rent based registration modelRFC-0054: Remove the concept of "heap pages" from the clientRFC-0070: X Track for @kusamanetworkRFC-0073: Decision Deposit Referendum TrackRFC-0074: Stateful Multisig PalletRFC-0077: Increase maximum length of identity PGP fingerprint values from 20 bytesRFC-0088: Add slashable locked deposit, purchaser reputation, and reserved cores for on-chain identities to broker palletRFC-0089: Flexible InflationRFC-0001: Secondary Market for RegionsRFC-0002: Smart Contracts on the Coretime ChainRFC-0000: Feature Name HereRFC-0106: Remove XCM fees modeRFC-0109: Descend XCM origin instead of clearing it where possibleRFC-TODO: Stale Nomination Reward Curve + IntroductionNewly ProposedProposedRFC-0004: Remove the host-side runtime memory allocatorRFC-0006: Dynamic Pricing for Bulk Coretime SalesRFC-0009: Improved light client requests networking protocolRFC-0015: Market Design RevisitRFC-34: XCM Absolute Location Account Derivation RFC-0035: Conviction Voting Delegation ModificationsRFC-0044: Rent based registration modelRFC-0054: Remove the concept of "heap pages" from the clientRFC-0070: X Track for @kusamanetworkRFC-0073: Decision Deposit Referendum TrackRFC-0074: Stateful Multisig PalletRFC-0077: Increase maximum length of identity PGP fingerprint values from 20 bytesRFC-0088: Add slashable locked deposit, purchaser reputation, and reserved cores for on-chain identities to broker palletRFC-0089: Flexible InflationRFC-0001: Secondary Market for RegionsRFC-0002: Smart Contracts on the Coretime ChainRFC-0111: Pure Proxy ReplicationRFC-0112: Compress the State Response Message in State SyncRFC-0114: Introduce secp256r1_ecdsa_verify_prehashed Host Function to verify NIST-P256 elliptic curve signaturesRFC-0117: The Unbrick CollectiveRFC-114: Adjust Tipper Track Confirmation PeriodsApprovedRFC-1: Agile CoretimeRFC-5: Coretime InterfaceRFC-0007: System Collator SelectionRFC-0008: Store parachain bootnodes in relay chain DHTRFC-0010: Burn Coretime RevenueRFC-0012: Process for Adding New System CollectivesRFC-0013: Prepare Core runtime API for MBMsRFC-0014: Improve locking mechanism for parachainsRFC-0022: Adopt Encointer RuntimeRFC-0026: Sassafras Consensus ProtocolRFC-0032: Minimal RelayRFC-0042: Add System version that replaces StateVersion on RuntimeVersionRFC-0043: Introduce storage_proof_size Host Function for Improved Parachain Block UtilizationRFC-0045: Lowering NFT Deposits on Asset HubRFC-0047: Assignment of availability chunks to validatorsRFC-0048: Generate ownership proof for SessionKeysRFC-0050: Fellowship SalariesRFC-0056: Enforce only one transaction per notificationRFC-0059: Add a discovery mechanism for nodes based on their capabilitiesRFC-0078: Merkleized MetadataRFC-0084: General transactions in extrinsic formatRFC-0091: DHT Authority discovery record creation timeRFC-0097: Unbonding 
QueueRFC-0099: Introduce a transaction extension versionRFC-0100: New XCM instruction: InitiateAssetsTransferRFC-0101: XCM Transact remove require_weight_at_most parameterRFC-0103: Introduce a CoreIndex commitment and a SessionIndex field in candidate receiptsRFC-0105: XCM improved fee mechanismRFC-0107: XCM Execution hintsRFC-0108: Remove XCM testnet NetworkIdsStaleRFC-0000: Feature Name HereRFC-0106: Remove XCM fees modeRFC-0109: Descend XCM origin instead of clearing it where possibleRFC-TODO: Stale Nomination Reward Curve diff --git a/approved/0005-coretime-interface.html b/approved/0005-coretime-interface.html index af2935a..516e724 100644 --- a/approved/0005-coretime-interface.html +++ b/approved/0005-coretime-interface.html @@ -90,7 +90,7 @@ - IntroductionNewly ProposedProposedRFC-0111: Pure Proxy ReplicationRFC-0112: Compress the State Response Message in State SyncRFC-0114: Introduce secp256r1_ecdsa_verify_prehashed Host Function to verify NIST-P256 elliptic curve signaturesRFC-0117: The Unbrick CollectiveRFC-114: Adjust Tipper Track Confirmation PeriodsApprovedRFC-1: Agile CoretimeRFC-5: Coretime InterfaceRFC-0007: System Collator SelectionRFC-0008: Store parachain bootnodes in relay chain DHTRFC-0010: Burn Coretime RevenueRFC-0012: Process for Adding New System CollectivesRFC-0013: Prepare Core runtime API for MBMsRFC-0014: Improve locking mechanism for parachainsRFC-0022: Adopt Encointer RuntimeRFC-0026: Sassafras Consensus ProtocolRFC-0032: Minimal RelayRFC-0042: Add System version that replaces StateVersion on RuntimeVersionRFC-0043: Introduce storage_proof_size Host Function for Improved Parachain Block UtilizationRFC-0045: Lowering NFT Deposits on Asset HubRFC-0047: Assignment of availability chunks to validatorsRFC-0048: Generate ownership proof for SessionKeysRFC-0050: Fellowship SalariesRFC-0056: Enforce only one transaction per notificationRFC-0059: Add a discovery mechanism for nodes based on their capabilitiesRFC-0078: Merkleized MetadataRFC-0084: General transactions in extrinsic formatRFC-0091: DHT Authority discovery record creation timeRFC-0097: Unbonding QueueRFC-0099: Introduce a transaction extension versionRFC-0100: New XCM instruction: InitiateAssetsTransferRFC-0101: XCM Transact remove require_weight_at_most parameterRFC-0103: Introduce a CoreIndex commitment and a SessionIndex field in candidate receiptsRFC-0105: XCM improved fee mechanismRFC-0107: XCM Execution hintsRFC-0108: Remove XCM testnet NetworkIdsStaleRFC-0004: Remove the host-side runtime memory allocatorRFC-0006: Dynamic Pricing for Bulk Coretime SalesRFC-0009: Improved light client requests networking protocolRFC-0015: Market Design RevisitRFC-34: XCM Absolute Location Account Derivation RFC-0035: Conviction Voting Delegation ModificationsRFC-0044: Rent based registration modelRFC-0054: Remove the concept of "heap pages" from the clientRFC-0070: X Track for @kusamanetworkRFC-0073: Decision Deposit Referendum TrackRFC-0074: Stateful Multisig PalletRFC-0077: Increase maximum length of identity PGP fingerprint values from 20 bytesRFC-0088: Add slashable locked deposit, purchaser reputation, and reserved cores for on-chain identities to broker palletRFC-0089: Flexible InflationRFC-0001: Secondary Market for RegionsRFC-0002: Smart Contracts on the Coretime ChainRFC-0000: Feature Name HereRFC-0106: Remove XCM fees modeRFC-0109: Descend XCM origin instead of clearing it where possibleRFC-TODO: Stale Nomination Reward Curve + IntroductionNewly ProposedProposedRFC-0004: Remove the host-side runtime 
memory allocatorRFC-0006: Dynamic Pricing for Bulk Coretime SalesRFC-0009: Improved light client requests networking protocolRFC-0015: Market Design RevisitRFC-34: XCM Absolute Location Account Derivation RFC-0035: Conviction Voting Delegation ModificationsRFC-0044: Rent based registration modelRFC-0054: Remove the concept of "heap pages" from the clientRFC-0070: X Track for @kusamanetworkRFC-0073: Decision Deposit Referendum TrackRFC-0074: Stateful Multisig PalletRFC-0077: Increase maximum length of identity PGP fingerprint values from 20 bytesRFC-0088: Add slashable locked deposit, purchaser reputation, and reserved cores for on-chain identities to broker palletRFC-0089: Flexible InflationRFC-0001: Secondary Market for RegionsRFC-0002: Smart Contracts on the Coretime ChainRFC-0111: Pure Proxy ReplicationRFC-0112: Compress the State Response Message in State SyncRFC-0114: Introduce secp256r1_ecdsa_verify_prehashed Host Function to verify NIST-P256 elliptic curve signaturesRFC-0117: The Unbrick CollectiveRFC-114: Adjust Tipper Track Confirmation PeriodsApprovedRFC-1: Agile CoretimeRFC-5: Coretime InterfaceRFC-0007: System Collator SelectionRFC-0008: Store parachain bootnodes in relay chain DHTRFC-0010: Burn Coretime RevenueRFC-0012: Process for Adding New System CollectivesRFC-0013: Prepare Core runtime API for MBMsRFC-0014: Improve locking mechanism for parachainsRFC-0022: Adopt Encointer RuntimeRFC-0026: Sassafras Consensus ProtocolRFC-0032: Minimal RelayRFC-0042: Add System version that replaces StateVersion on RuntimeVersionRFC-0043: Introduce storage_proof_size Host Function for Improved Parachain Block UtilizationRFC-0045: Lowering NFT Deposits on Asset HubRFC-0047: Assignment of availability chunks to validatorsRFC-0048: Generate ownership proof for SessionKeysRFC-0050: Fellowship SalariesRFC-0056: Enforce only one transaction per notificationRFC-0059: Add a discovery mechanism for nodes based on their capabilitiesRFC-0078: Merkleized MetadataRFC-0084: General transactions in extrinsic formatRFC-0091: DHT Authority discovery record creation timeRFC-0097: Unbonding QueueRFC-0099: Introduce a transaction extension versionRFC-0100: New XCM instruction: InitiateAssetsTransferRFC-0101: XCM Transact remove require_weight_at_most parameterRFC-0103: Introduce a CoreIndex commitment and a SessionIndex field in candidate receiptsRFC-0105: XCM improved fee mechanismRFC-0107: XCM Execution hintsRFC-0108: Remove XCM testnet NetworkIdsStaleRFC-0000: Feature Name HereRFC-0106: Remove XCM fees modeRFC-0109: Descend XCM origin instead of clearing it where possibleRFC-TODO: Stale Nomination Reward Curve diff --git a/approved/0007-system-collator-selection.html b/approved/0007-system-collator-selection.html index 9f705b6..1d965fb 100644 --- a/approved/0007-system-collator-selection.html +++ b/approved/0007-system-collator-selection.html @@ -90,7 +90,7 @@ - IntroductionNewly ProposedProposedRFC-0111: Pure Proxy ReplicationRFC-0112: Compress the State Response Message in State SyncRFC-0114: Introduce secp256r1_ecdsa_verify_prehashed Host Function to verify NIST-P256 elliptic curve signaturesRFC-0117: The Unbrick CollectiveRFC-114: Adjust Tipper Track Confirmation PeriodsApprovedRFC-1: Agile CoretimeRFC-5: Coretime InterfaceRFC-0007: System Collator SelectionRFC-0008: Store parachain bootnodes in relay chain DHTRFC-0010: Burn Coretime RevenueRFC-0012: Process for Adding New System CollectivesRFC-0013: Prepare Core runtime API for MBMsRFC-0014: Improve locking mechanism for parachainsRFC-0022: Adopt 
Encointer RuntimeRFC-0026: Sassafras Consensus ProtocolRFC-0032: Minimal RelayRFC-0042: Add System version that replaces StateVersion on RuntimeVersionRFC-0043: Introduce storage_proof_size Host Function for Improved Parachain Block UtilizationRFC-0045: Lowering NFT Deposits on Asset HubRFC-0047: Assignment of availability chunks to validatorsRFC-0048: Generate ownership proof for SessionKeysRFC-0050: Fellowship SalariesRFC-0056: Enforce only one transaction per notificationRFC-0059: Add a discovery mechanism for nodes based on their capabilitiesRFC-0078: Merkleized MetadataRFC-0084: General transactions in extrinsic formatRFC-0091: DHT Authority discovery record creation timeRFC-0097: Unbonding QueueRFC-0099: Introduce a transaction extension versionRFC-0100: New XCM instruction: InitiateAssetsTransferRFC-0101: XCM Transact remove require_weight_at_most parameterRFC-0103: Introduce a CoreIndex commitment and a SessionIndex field in candidate receiptsRFC-0105: XCM improved fee mechanismRFC-0107: XCM Execution hintsRFC-0108: Remove XCM testnet NetworkIdsStaleRFC-0004: Remove the host-side runtime memory allocatorRFC-0006: Dynamic Pricing for Bulk Coretime SalesRFC-0009: Improved light client requests networking protocolRFC-0015: Market Design RevisitRFC-34: XCM Absolute Location Account Derivation RFC-0035: Conviction Voting Delegation ModificationsRFC-0044: Rent based registration modelRFC-0054: Remove the concept of "heap pages" from the clientRFC-0070: X Track for @kusamanetworkRFC-0073: Decision Deposit Referendum TrackRFC-0074: Stateful Multisig PalletRFC-0077: Increase maximum length of identity PGP fingerprint values from 20 bytesRFC-0088: Add slashable locked deposit, purchaser reputation, and reserved cores for on-chain identities to broker palletRFC-0089: Flexible InflationRFC-0001: Secondary Market for RegionsRFC-0002: Smart Contracts on the Coretime ChainRFC-0000: Feature Name HereRFC-0106: Remove XCM fees modeRFC-0109: Descend XCM origin instead of clearing it where possibleRFC-TODO: Stale Nomination Reward Curve + IntroductionNewly ProposedProposedRFC-0004: Remove the host-side runtime memory allocatorRFC-0006: Dynamic Pricing for Bulk Coretime SalesRFC-0009: Improved light client requests networking protocolRFC-0015: Market Design RevisitRFC-34: XCM Absolute Location Account Derivation RFC-0035: Conviction Voting Delegation ModificationsRFC-0044: Rent based registration modelRFC-0054: Remove the concept of "heap pages" from the clientRFC-0070: X Track for @kusamanetworkRFC-0073: Decision Deposit Referendum TrackRFC-0074: Stateful Multisig PalletRFC-0077: Increase maximum length of identity PGP fingerprint values from 20 bytesRFC-0088: Add slashable locked deposit, purchaser reputation, and reserved cores for on-chain identities to broker palletRFC-0089: Flexible InflationRFC-0001: Secondary Market for RegionsRFC-0002: Smart Contracts on the Coretime ChainRFC-0111: Pure Proxy ReplicationRFC-0112: Compress the State Response Message in State SyncRFC-0114: Introduce secp256r1_ecdsa_verify_prehashed Host Function to verify NIST-P256 elliptic curve signaturesRFC-0117: The Unbrick CollectiveRFC-114: Adjust Tipper Track Confirmation PeriodsApprovedRFC-1: Agile CoretimeRFC-5: Coretime InterfaceRFC-0007: System Collator SelectionRFC-0008: Store parachain bootnodes in relay chain DHTRFC-0010: Burn Coretime RevenueRFC-0012: Process for Adding New System CollectivesRFC-0013: Prepare Core runtime API for MBMsRFC-0014: Improve locking mechanism for parachainsRFC-0022: Adopt Encointer 
RuntimeRFC-0026: Sassafras Consensus ProtocolRFC-0032: Minimal RelayRFC-0042: Add System version that replaces StateVersion on RuntimeVersionRFC-0043: Introduce storage_proof_size Host Function for Improved Parachain Block UtilizationRFC-0045: Lowering NFT Deposits on Asset HubRFC-0047: Assignment of availability chunks to validatorsRFC-0048: Generate ownership proof for SessionKeysRFC-0050: Fellowship SalariesRFC-0056: Enforce only one transaction per notificationRFC-0059: Add a discovery mechanism for nodes based on their capabilitiesRFC-0078: Merkleized MetadataRFC-0084: General transactions in extrinsic formatRFC-0091: DHT Authority discovery record creation timeRFC-0097: Unbonding QueueRFC-0099: Introduce a transaction extension versionRFC-0100: New XCM instruction: InitiateAssetsTransferRFC-0101: XCM Transact remove require_weight_at_most parameterRFC-0103: Introduce a CoreIndex commitment and a SessionIndex field in candidate receiptsRFC-0105: XCM improved fee mechanismRFC-0107: XCM Execution hintsRFC-0108: Remove XCM testnet NetworkIdsStaleRFC-0000: Feature Name HereRFC-0106: Remove XCM fees modeRFC-0109: Descend XCM origin instead of clearing it where possibleRFC-TODO: Stale Nomination Reward Curve diff --git a/approved/0008-parachain-bootnodes-dht.html b/approved/0008-parachain-bootnodes-dht.html index 2df6c1c..37a2d61 100644 --- a/approved/0008-parachain-bootnodes-dht.html +++ b/approved/0008-parachain-bootnodes-dht.html @@ -90,7 +90,7 @@ - IntroductionNewly ProposedProposedRFC-0111: Pure Proxy ReplicationRFC-0112: Compress the State Response Message in State SyncRFC-0114: Introduce secp256r1_ecdsa_verify_prehashed Host Function to verify NIST-P256 elliptic curve signaturesRFC-0117: The Unbrick CollectiveRFC-114: Adjust Tipper Track Confirmation PeriodsApprovedRFC-1: Agile CoretimeRFC-5: Coretime InterfaceRFC-0007: System Collator SelectionRFC-0008: Store parachain bootnodes in relay chain DHTRFC-0010: Burn Coretime RevenueRFC-0012: Process for Adding New System CollectivesRFC-0013: Prepare Core runtime API for MBMsRFC-0014: Improve locking mechanism for parachainsRFC-0022: Adopt Encointer RuntimeRFC-0026: Sassafras Consensus ProtocolRFC-0032: Minimal RelayRFC-0042: Add System version that replaces StateVersion on RuntimeVersionRFC-0043: Introduce storage_proof_size Host Function for Improved Parachain Block UtilizationRFC-0045: Lowering NFT Deposits on Asset HubRFC-0047: Assignment of availability chunks to validatorsRFC-0048: Generate ownership proof for SessionKeysRFC-0050: Fellowship SalariesRFC-0056: Enforce only one transaction per notificationRFC-0059: Add a discovery mechanism for nodes based on their capabilitiesRFC-0078: Merkleized MetadataRFC-0084: General transactions in extrinsic formatRFC-0091: DHT Authority discovery record creation timeRFC-0097: Unbonding QueueRFC-0099: Introduce a transaction extension versionRFC-0100: New XCM instruction: InitiateAssetsTransferRFC-0101: XCM Transact remove require_weight_at_most parameterRFC-0103: Introduce a CoreIndex commitment and a SessionIndex field in candidate receiptsRFC-0105: XCM improved fee mechanismRFC-0107: XCM Execution hintsRFC-0108: Remove XCM testnet NetworkIdsStaleRFC-0004: Remove the host-side runtime memory allocatorRFC-0006: Dynamic Pricing for Bulk Coretime SalesRFC-0009: Improved light client requests networking protocolRFC-0015: Market Design RevisitRFC-34: XCM Absolute Location Account Derivation RFC-0035: Conviction Voting Delegation ModificationsRFC-0044: Rent based registration modelRFC-0054: Remove 
the concept of "heap pages" from the clientRFC-0070: X Track for @kusamanetworkRFC-0073: Decision Deposit Referendum TrackRFC-0074: Stateful Multisig PalletRFC-0077: Increase maximum length of identity PGP fingerprint values from 20 bytesRFC-0088: Add slashable locked deposit, purchaser reputation, and reserved cores for on-chain identities to broker palletRFC-0089: Flexible InflationRFC-0001: Secondary Market for RegionsRFC-0002: Smart Contracts on the Coretime ChainRFC-0000: Feature Name HereRFC-0106: Remove XCM fees modeRFC-0109: Descend XCM origin instead of clearing it where possibleRFC-TODO: Stale Nomination Reward Curve + IntroductionNewly ProposedProposedRFC-0004: Remove the host-side runtime memory allocatorRFC-0006: Dynamic Pricing for Bulk Coretime SalesRFC-0009: Improved light client requests networking protocolRFC-0015: Market Design RevisitRFC-34: XCM Absolute Location Account Derivation RFC-0035: Conviction Voting Delegation ModificationsRFC-0044: Rent based registration modelRFC-0054: Remove the concept of "heap pages" from the clientRFC-0070: X Track for @kusamanetworkRFC-0073: Decision Deposit Referendum TrackRFC-0074: Stateful Multisig PalletRFC-0077: Increase maximum length of identity PGP fingerprint values from 20 bytesRFC-0088: Add slashable locked deposit, purchaser reputation, and reserved cores for on-chain identities to broker palletRFC-0089: Flexible InflationRFC-0001: Secondary Market for RegionsRFC-0002: Smart Contracts on the Coretime ChainRFC-0111: Pure Proxy ReplicationRFC-0112: Compress the State Response Message in State SyncRFC-0114: Introduce secp256r1_ecdsa_verify_prehashed Host Function to verify NIST-P256 elliptic curve signaturesRFC-0117: The Unbrick CollectiveRFC-114: Adjust Tipper Track Confirmation PeriodsApprovedRFC-1: Agile CoretimeRFC-5: Coretime InterfaceRFC-0007: System Collator SelectionRFC-0008: Store parachain bootnodes in relay chain DHTRFC-0010: Burn Coretime RevenueRFC-0012: Process for Adding New System CollectivesRFC-0013: Prepare Core runtime API for MBMsRFC-0014: Improve locking mechanism for parachainsRFC-0022: Adopt Encointer RuntimeRFC-0026: Sassafras Consensus ProtocolRFC-0032: Minimal RelayRFC-0042: Add System version that replaces StateVersion on RuntimeVersionRFC-0043: Introduce storage_proof_size Host Function for Improved Parachain Block UtilizationRFC-0045: Lowering NFT Deposits on Asset HubRFC-0047: Assignment of availability chunks to validatorsRFC-0048: Generate ownership proof for SessionKeysRFC-0050: Fellowship SalariesRFC-0056: Enforce only one transaction per notificationRFC-0059: Add a discovery mechanism for nodes based on their capabilitiesRFC-0078: Merkleized MetadataRFC-0084: General transactions in extrinsic formatRFC-0091: DHT Authority discovery record creation timeRFC-0097: Unbonding QueueRFC-0099: Introduce a transaction extension versionRFC-0100: New XCM instruction: InitiateAssetsTransferRFC-0101: XCM Transact remove require_weight_at_most parameterRFC-0103: Introduce a CoreIndex commitment and a SessionIndex field in candidate receiptsRFC-0105: XCM improved fee mechanismRFC-0107: XCM Execution hintsRFC-0108: Remove XCM testnet NetworkIdsStaleRFC-0000: Feature Name HereRFC-0106: Remove XCM fees modeRFC-0109: Descend XCM origin instead of clearing it where possibleRFC-TODO: Stale Nomination Reward Curve diff --git a/approved/0010-burn-coretime-revenue.html b/approved/0010-burn-coretime-revenue.html index 9a21062..703e5f7 100644 --- a/approved/0010-burn-coretime-revenue.html +++ 
b/approved/0010-burn-coretime-revenue.html @@ -90,7 +90,7 @@ - IntroductionNewly ProposedProposedRFC-0111: Pure Proxy ReplicationRFC-0112: Compress the State Response Message in State SyncRFC-0114: Introduce secp256r1_ecdsa_verify_prehashed Host Function to verify NIST-P256 elliptic curve signaturesRFC-0117: The Unbrick CollectiveRFC-114: Adjust Tipper Track Confirmation PeriodsApprovedRFC-1: Agile CoretimeRFC-5: Coretime InterfaceRFC-0007: System Collator SelectionRFC-0008: Store parachain bootnodes in relay chain DHTRFC-0010: Burn Coretime RevenueRFC-0012: Process for Adding New System CollectivesRFC-0013: Prepare Core runtime API for MBMsRFC-0014: Improve locking mechanism for parachainsRFC-0022: Adopt Encointer RuntimeRFC-0026: Sassafras Consensus ProtocolRFC-0032: Minimal RelayRFC-0042: Add System version that replaces StateVersion on RuntimeVersionRFC-0043: Introduce storage_proof_size Host Function for Improved Parachain Block UtilizationRFC-0045: Lowering NFT Deposits on Asset HubRFC-0047: Assignment of availability chunks to validatorsRFC-0048: Generate ownership proof for SessionKeysRFC-0050: Fellowship SalariesRFC-0056: Enforce only one transaction per notificationRFC-0059: Add a discovery mechanism for nodes based on their capabilitiesRFC-0078: Merkleized MetadataRFC-0084: General transactions in extrinsic formatRFC-0091: DHT Authority discovery record creation timeRFC-0097: Unbonding QueueRFC-0099: Introduce a transaction extension versionRFC-0100: New XCM instruction: InitiateAssetsTransferRFC-0101: XCM Transact remove require_weight_at_most parameterRFC-0103: Introduce a CoreIndex commitment and a SessionIndex field in candidate receiptsRFC-0105: XCM improved fee mechanismRFC-0107: XCM Execution hintsRFC-0108: Remove XCM testnet NetworkIdsStaleRFC-0004: Remove the host-side runtime memory allocatorRFC-0006: Dynamic Pricing for Bulk Coretime SalesRFC-0009: Improved light client requests networking protocolRFC-0015: Market Design RevisitRFC-34: XCM Absolute Location Account Derivation RFC-0035: Conviction Voting Delegation ModificationsRFC-0044: Rent based registration modelRFC-0054: Remove the concept of "heap pages" from the clientRFC-0070: X Track for @kusamanetworkRFC-0073: Decision Deposit Referendum TrackRFC-0074: Stateful Multisig PalletRFC-0077: Increase maximum length of identity PGP fingerprint values from 20 bytesRFC-0088: Add slashable locked deposit, purchaser reputation, and reserved cores for on-chain identities to broker palletRFC-0089: Flexible InflationRFC-0001: Secondary Market for RegionsRFC-0002: Smart Contracts on the Coretime ChainRFC-0000: Feature Name HereRFC-0106: Remove XCM fees modeRFC-0109: Descend XCM origin instead of clearing it where possibleRFC-TODO: Stale Nomination Reward Curve + IntroductionNewly ProposedProposedRFC-0004: Remove the host-side runtime memory allocatorRFC-0006: Dynamic Pricing for Bulk Coretime SalesRFC-0009: Improved light client requests networking protocolRFC-0015: Market Design RevisitRFC-34: XCM Absolute Location Account Derivation RFC-0035: Conviction Voting Delegation ModificationsRFC-0044: Rent based registration modelRFC-0054: Remove the concept of "heap pages" from the clientRFC-0070: X Track for @kusamanetworkRFC-0073: Decision Deposit Referendum TrackRFC-0074: Stateful Multisig PalletRFC-0077: Increase maximum length of identity PGP fingerprint values from 20 bytesRFC-0088: Add slashable locked deposit, purchaser reputation, and reserved cores for on-chain identities to broker palletRFC-0089: Flexible 
InflationRFC-0001: Secondary Market for RegionsRFC-0002: Smart Contracts on the Coretime ChainRFC-0111: Pure Proxy ReplicationRFC-0112: Compress the State Response Message in State SyncRFC-0114: Introduce secp256r1_ecdsa_verify_prehashed Host Function to verify NIST-P256 elliptic curve signaturesRFC-0117: The Unbrick CollectiveRFC-114: Adjust Tipper Track Confirmation PeriodsApprovedRFC-1: Agile CoretimeRFC-5: Coretime InterfaceRFC-0007: System Collator SelectionRFC-0008: Store parachain bootnodes in relay chain DHTRFC-0010: Burn Coretime RevenueRFC-0012: Process for Adding New System CollectivesRFC-0013: Prepare Core runtime API for MBMsRFC-0014: Improve locking mechanism for parachainsRFC-0022: Adopt Encointer RuntimeRFC-0026: Sassafras Consensus ProtocolRFC-0032: Minimal RelayRFC-0042: Add System version that replaces StateVersion on RuntimeVersionRFC-0043: Introduce storage_proof_size Host Function for Improved Parachain Block UtilizationRFC-0045: Lowering NFT Deposits on Asset HubRFC-0047: Assignment of availability chunks to validatorsRFC-0048: Generate ownership proof for SessionKeysRFC-0050: Fellowship SalariesRFC-0056: Enforce only one transaction per notificationRFC-0059: Add a discovery mechanism for nodes based on their capabilitiesRFC-0078: Merkleized MetadataRFC-0084: General transactions in extrinsic formatRFC-0091: DHT Authority discovery record creation timeRFC-0097: Unbonding QueueRFC-0099: Introduce a transaction extension versionRFC-0100: New XCM instruction: InitiateAssetsTransferRFC-0101: XCM Transact remove require_weight_at_most parameterRFC-0103: Introduce a CoreIndex commitment and a SessionIndex field in candidate receiptsRFC-0105: XCM improved fee mechanismRFC-0107: XCM Execution hintsRFC-0108: Remove XCM testnet NetworkIdsStaleRFC-0000: Feature Name HereRFC-0106: Remove XCM fees modeRFC-0109: Descend XCM origin instead of clearing it where possibleRFC-TODO: Stale Nomination Reward Curve diff --git a/approved/0012-process-for-adding-new-collectives.html b/approved/0012-process-for-adding-new-collectives.html index 064fe50..0b42b48 100644 --- a/approved/0012-process-for-adding-new-collectives.html +++ b/approved/0012-process-for-adding-new-collectives.html @@ -90,7 +90,7 @@ - IntroductionNewly ProposedProposedRFC-0111: Pure Proxy ReplicationRFC-0112: Compress the State Response Message in State SyncRFC-0114: Introduce secp256r1_ecdsa_verify_prehashed Host Function to verify NIST-P256 elliptic curve signaturesRFC-0117: The Unbrick CollectiveRFC-114: Adjust Tipper Track Confirmation PeriodsApprovedRFC-1: Agile CoretimeRFC-5: Coretime InterfaceRFC-0007: System Collator SelectionRFC-0008: Store parachain bootnodes in relay chain DHTRFC-0010: Burn Coretime RevenueRFC-0012: Process for Adding New System CollectivesRFC-0013: Prepare Core runtime API for MBMsRFC-0014: Improve locking mechanism for parachainsRFC-0022: Adopt Encointer RuntimeRFC-0026: Sassafras Consensus ProtocolRFC-0032: Minimal RelayRFC-0042: Add System version that replaces StateVersion on RuntimeVersionRFC-0043: Introduce storage_proof_size Host Function for Improved Parachain Block UtilizationRFC-0045: Lowering NFT Deposits on Asset HubRFC-0047: Assignment of availability chunks to validatorsRFC-0048: Generate ownership proof for SessionKeysRFC-0050: Fellowship SalariesRFC-0056: Enforce only one transaction per notificationRFC-0059: Add a discovery mechanism for nodes based on their capabilitiesRFC-0078: Merkleized MetadataRFC-0084: General transactions in extrinsic formatRFC-0091: DHT Authority 
discovery record creation timeRFC-0097: Unbonding QueueRFC-0099: Introduce a transaction extension versionRFC-0100: New XCM instruction: InitiateAssetsTransferRFC-0101: XCM Transact remove require_weight_at_most parameterRFC-0103: Introduce a CoreIndex commitment and a SessionIndex field in candidate receiptsRFC-0105: XCM improved fee mechanismRFC-0107: XCM Execution hintsRFC-0108: Remove XCM testnet NetworkIdsStaleRFC-0004: Remove the host-side runtime memory allocatorRFC-0006: Dynamic Pricing for Bulk Coretime SalesRFC-0009: Improved light client requests networking protocolRFC-0015: Market Design RevisitRFC-34: XCM Absolute Location Account Derivation RFC-0035: Conviction Voting Delegation ModificationsRFC-0044: Rent based registration modelRFC-0054: Remove the concept of "heap pages" from the clientRFC-0070: X Track for @kusamanetworkRFC-0073: Decision Deposit Referendum TrackRFC-0074: Stateful Multisig PalletRFC-0077: Increase maximum length of identity PGP fingerprint values from 20 bytesRFC-0088: Add slashable locked deposit, purchaser reputation, and reserved cores for on-chain identities to broker palletRFC-0089: Flexible InflationRFC-0001: Secondary Market for RegionsRFC-0002: Smart Contracts on the Coretime ChainRFC-0000: Feature Name HereRFC-0106: Remove XCM fees modeRFC-0109: Descend XCM origin instead of clearing it where possibleRFC-TODO: Stale Nomination Reward Curve + IntroductionNewly ProposedProposedRFC-0004: Remove the host-side runtime memory allocatorRFC-0006: Dynamic Pricing for Bulk Coretime SalesRFC-0009: Improved light client requests networking protocolRFC-0015: Market Design RevisitRFC-34: XCM Absolute Location Account Derivation RFC-0035: Conviction Voting Delegation ModificationsRFC-0044: Rent based registration modelRFC-0054: Remove the concept of "heap pages" from the clientRFC-0070: X Track for @kusamanetworkRFC-0073: Decision Deposit Referendum TrackRFC-0074: Stateful Multisig PalletRFC-0077: Increase maximum length of identity PGP fingerprint values from 20 bytesRFC-0088: Add slashable locked deposit, purchaser reputation, and reserved cores for on-chain identities to broker palletRFC-0089: Flexible InflationRFC-0001: Secondary Market for RegionsRFC-0002: Smart Contracts on the Coretime ChainRFC-0111: Pure Proxy ReplicationRFC-0112: Compress the State Response Message in State SyncRFC-0114: Introduce secp256r1_ecdsa_verify_prehashed Host Function to verify NIST-P256 elliptic curve signaturesRFC-0117: The Unbrick CollectiveRFC-114: Adjust Tipper Track Confirmation PeriodsApprovedRFC-1: Agile CoretimeRFC-5: Coretime InterfaceRFC-0007: System Collator SelectionRFC-0008: Store parachain bootnodes in relay chain DHTRFC-0010: Burn Coretime RevenueRFC-0012: Process for Adding New System CollectivesRFC-0013: Prepare Core runtime API for MBMsRFC-0014: Improve locking mechanism for parachainsRFC-0022: Adopt Encointer RuntimeRFC-0026: Sassafras Consensus ProtocolRFC-0032: Minimal RelayRFC-0042: Add System version that replaces StateVersion on RuntimeVersionRFC-0043: Introduce storage_proof_size Host Function for Improved Parachain Block UtilizationRFC-0045: Lowering NFT Deposits on Asset HubRFC-0047: Assignment of availability chunks to validatorsRFC-0048: Generate ownership proof for SessionKeysRFC-0050: Fellowship SalariesRFC-0056: Enforce only one transaction per notificationRFC-0059: Add a discovery mechanism for nodes based on their capabilitiesRFC-0078: Merkleized MetadataRFC-0084: General transactions in extrinsic formatRFC-0091: DHT Authority discovery 
record creation timeRFC-0097: Unbonding QueueRFC-0099: Introduce a transaction extension versionRFC-0100: New XCM instruction: InitiateAssetsTransferRFC-0101: XCM Transact remove require_weight_at_most parameterRFC-0103: Introduce a CoreIndex commitment and a SessionIndex field in candidate receiptsRFC-0105: XCM improved fee mechanismRFC-0107: XCM Execution hintsRFC-0108: Remove XCM testnet NetworkIdsStaleRFC-0000: Feature Name HereRFC-0106: Remove XCM fees modeRFC-0109: Descend XCM origin instead of clearing it where possibleRFC-TODO: Stale Nomination Reward Curve diff --git a/approved/0013-prepare-blockbuilder-and-core-runtime-apis-for-mbms.html b/approved/0013-prepare-blockbuilder-and-core-runtime-apis-for-mbms.html index d318b76..4c749ef 100644 --- a/approved/0013-prepare-blockbuilder-and-core-runtime-apis-for-mbms.html +++ b/approved/0013-prepare-blockbuilder-and-core-runtime-apis-for-mbms.html @@ -90,7 +90,7 @@ - IntroductionNewly ProposedProposedRFC-0111: Pure Proxy ReplicationRFC-0112: Compress the State Response Message in State SyncRFC-0114: Introduce secp256r1_ecdsa_verify_prehashed Host Function to verify NIST-P256 elliptic curve signaturesRFC-0117: The Unbrick CollectiveRFC-114: Adjust Tipper Track Confirmation PeriodsApprovedRFC-1: Agile CoretimeRFC-5: Coretime InterfaceRFC-0007: System Collator SelectionRFC-0008: Store parachain bootnodes in relay chain DHTRFC-0010: Burn Coretime RevenueRFC-0012: Process for Adding New System CollectivesRFC-0013: Prepare Core runtime API for MBMsRFC-0014: Improve locking mechanism for parachainsRFC-0022: Adopt Encointer RuntimeRFC-0026: Sassafras Consensus ProtocolRFC-0032: Minimal RelayRFC-0042: Add System version that replaces StateVersion on RuntimeVersionRFC-0043: Introduce storage_proof_size Host Function for Improved Parachain Block UtilizationRFC-0045: Lowering NFT Deposits on Asset HubRFC-0047: Assignment of availability chunks to validatorsRFC-0048: Generate ownership proof for SessionKeysRFC-0050: Fellowship SalariesRFC-0056: Enforce only one transaction per notificationRFC-0059: Add a discovery mechanism for nodes based on their capabilitiesRFC-0078: Merkleized MetadataRFC-0084: General transactions in extrinsic formatRFC-0091: DHT Authority discovery record creation timeRFC-0097: Unbonding QueueRFC-0099: Introduce a transaction extension versionRFC-0100: New XCM instruction: InitiateAssetsTransferRFC-0101: XCM Transact remove require_weight_at_most parameterRFC-0103: Introduce a CoreIndex commitment and a SessionIndex field in candidate receiptsRFC-0105: XCM improved fee mechanismRFC-0107: XCM Execution hintsRFC-0108: Remove XCM testnet NetworkIdsStaleRFC-0004: Remove the host-side runtime memory allocatorRFC-0006: Dynamic Pricing for Bulk Coretime SalesRFC-0009: Improved light client requests networking protocolRFC-0015: Market Design RevisitRFC-34: XCM Absolute Location Account Derivation RFC-0035: Conviction Voting Delegation ModificationsRFC-0044: Rent based registration modelRFC-0054: Remove the concept of "heap pages" from the clientRFC-0070: X Track for @kusamanetworkRFC-0073: Decision Deposit Referendum TrackRFC-0074: Stateful Multisig PalletRFC-0077: Increase maximum length of identity PGP fingerprint values from 20 bytesRFC-0088: Add slashable locked deposit, purchaser reputation, and reserved cores for on-chain identities to broker palletRFC-0089: Flexible InflationRFC-0001: Secondary Market for RegionsRFC-0002: Smart Contracts on the Coretime ChainRFC-0000: Feature Name HereRFC-0106: Remove XCM fees modeRFC-0109: 
Descend XCM origin instead of clearing it where possibleRFC-TODO: Stale Nomination Reward Curve + IntroductionNewly ProposedProposedRFC-0004: Remove the host-side runtime memory allocatorRFC-0006: Dynamic Pricing for Bulk Coretime SalesRFC-0009: Improved light client requests networking protocolRFC-0015: Market Design RevisitRFC-34: XCM Absolute Location Account Derivation RFC-0035: Conviction Voting Delegation ModificationsRFC-0044: Rent based registration modelRFC-0054: Remove the concept of "heap pages" from the clientRFC-0070: X Track for @kusamanetworkRFC-0073: Decision Deposit Referendum TrackRFC-0074: Stateful Multisig PalletRFC-0077: Increase maximum length of identity PGP fingerprint values from 20 bytesRFC-0088: Add slashable locked deposit, purchaser reputation, and reserved cores for on-chain identities to broker palletRFC-0089: Flexible InflationRFC-0001: Secondary Market for RegionsRFC-0002: Smart Contracts on the Coretime ChainRFC-0111: Pure Proxy ReplicationRFC-0112: Compress the State Response Message in State SyncRFC-0114: Introduce secp256r1_ecdsa_verify_prehashed Host Function to verify NIST-P256 elliptic curve signaturesRFC-0117: The Unbrick CollectiveRFC-114: Adjust Tipper Track Confirmation PeriodsApprovedRFC-1: Agile CoretimeRFC-5: Coretime InterfaceRFC-0007: System Collator SelectionRFC-0008: Store parachain bootnodes in relay chain DHTRFC-0010: Burn Coretime RevenueRFC-0012: Process for Adding New System CollectivesRFC-0013: Prepare Core runtime API for MBMsRFC-0014: Improve locking mechanism for parachainsRFC-0022: Adopt Encointer RuntimeRFC-0026: Sassafras Consensus ProtocolRFC-0032: Minimal RelayRFC-0042: Add System version that replaces StateVersion on RuntimeVersionRFC-0043: Introduce storage_proof_size Host Function for Improved Parachain Block UtilizationRFC-0045: Lowering NFT Deposits on Asset HubRFC-0047: Assignment of availability chunks to validatorsRFC-0048: Generate ownership proof for SessionKeysRFC-0050: Fellowship SalariesRFC-0056: Enforce only one transaction per notificationRFC-0059: Add a discovery mechanism for nodes based on their capabilitiesRFC-0078: Merkleized MetadataRFC-0084: General transactions in extrinsic formatRFC-0091: DHT Authority discovery record creation timeRFC-0097: Unbonding QueueRFC-0099: Introduce a transaction extension versionRFC-0100: New XCM instruction: InitiateAssetsTransferRFC-0101: XCM Transact remove require_weight_at_most parameterRFC-0103: Introduce a CoreIndex commitment and a SessionIndex field in candidate receiptsRFC-0105: XCM improved fee mechanismRFC-0107: XCM Execution hintsRFC-0108: Remove XCM testnet NetworkIdsStaleRFC-0000: Feature Name HereRFC-0106: Remove XCM fees modeRFC-0109: Descend XCM origin instead of clearing it where possibleRFC-TODO: Stale Nomination Reward Curve diff --git a/approved/0014-improve-locking-mechanism-for-parachains.html b/approved/0014-improve-locking-mechanism-for-parachains.html index 843e1d4..8625df0 100644 --- a/approved/0014-improve-locking-mechanism-for-parachains.html +++ b/approved/0014-improve-locking-mechanism-for-parachains.html @@ -90,7 +90,7 @@ - IntroductionNewly ProposedProposedRFC-0111: Pure Proxy ReplicationRFC-0112: Compress the State Response Message in State SyncRFC-0114: Introduce secp256r1_ecdsa_verify_prehashed Host Function to verify NIST-P256 elliptic curve signaturesRFC-0117: The Unbrick CollectiveRFC-114: Adjust Tipper Track Confirmation PeriodsApprovedRFC-1: Agile CoretimeRFC-5: Coretime InterfaceRFC-0007: System Collator SelectionRFC-0008: Store 
parachain bootnodes in relay chain DHTRFC-0010: Burn Coretime RevenueRFC-0012: Process for Adding New System CollectivesRFC-0013: Prepare Core runtime API for MBMsRFC-0014: Improve locking mechanism for parachainsRFC-0022: Adopt Encointer RuntimeRFC-0026: Sassafras Consensus ProtocolRFC-0032: Minimal RelayRFC-0042: Add System version that replaces StateVersion on RuntimeVersionRFC-0043: Introduce storage_proof_size Host Function for Improved Parachain Block UtilizationRFC-0045: Lowering NFT Deposits on Asset HubRFC-0047: Assignment of availability chunks to validatorsRFC-0048: Generate ownership proof for SessionKeysRFC-0050: Fellowship SalariesRFC-0056: Enforce only one transaction per notificationRFC-0059: Add a discovery mechanism for nodes based on their capabilitiesRFC-0078: Merkleized MetadataRFC-0084: General transactions in extrinsic formatRFC-0091: DHT Authority discovery record creation timeRFC-0097: Unbonding QueueRFC-0099: Introduce a transaction extension versionRFC-0100: New XCM instruction: InitiateAssetsTransferRFC-0101: XCM Transact remove require_weight_at_most parameterRFC-0103: Introduce a CoreIndex commitment and a SessionIndex field in candidate receiptsRFC-0105: XCM improved fee mechanismRFC-0107: XCM Execution hintsRFC-0108: Remove XCM testnet NetworkIdsStaleRFC-0004: Remove the host-side runtime memory allocatorRFC-0006: Dynamic Pricing for Bulk Coretime SalesRFC-0009: Improved light client requests networking protocolRFC-0015: Market Design RevisitRFC-34: XCM Absolute Location Account Derivation RFC-0035: Conviction Voting Delegation ModificationsRFC-0044: Rent based registration modelRFC-0054: Remove the concept of "heap pages" from the clientRFC-0070: X Track for @kusamanetworkRFC-0073: Decision Deposit Referendum TrackRFC-0074: Stateful Multisig PalletRFC-0077: Increase maximum length of identity PGP fingerprint values from 20 bytesRFC-0088: Add slashable locked deposit, purchaser reputation, and reserved cores for on-chain identities to broker palletRFC-0089: Flexible InflationRFC-0001: Secondary Market for RegionsRFC-0002: Smart Contracts on the Coretime ChainRFC-0000: Feature Name HereRFC-0106: Remove XCM fees modeRFC-0109: Descend XCM origin instead of clearing it where possibleRFC-TODO: Stale Nomination Reward Curve + IntroductionNewly ProposedProposedRFC-0004: Remove the host-side runtime memory allocatorRFC-0006: Dynamic Pricing for Bulk Coretime SalesRFC-0009: Improved light client requests networking protocolRFC-0015: Market Design RevisitRFC-34: XCM Absolute Location Account Derivation RFC-0035: Conviction Voting Delegation ModificationsRFC-0044: Rent based registration modelRFC-0054: Remove the concept of "heap pages" from the clientRFC-0070: X Track for @kusamanetworkRFC-0073: Decision Deposit Referendum TrackRFC-0074: Stateful Multisig PalletRFC-0077: Increase maximum length of identity PGP fingerprint values from 20 bytesRFC-0088: Add slashable locked deposit, purchaser reputation, and reserved cores for on-chain identities to broker palletRFC-0089: Flexible InflationRFC-0001: Secondary Market for RegionsRFC-0002: Smart Contracts on the Coretime ChainRFC-0111: Pure Proxy ReplicationRFC-0112: Compress the State Response Message in State SyncRFC-0114: Introduce secp256r1_ecdsa_verify_prehashed Host Function to verify NIST-P256 elliptic curve signaturesRFC-0117: The Unbrick CollectiveRFC-114: Adjust Tipper Track Confirmation PeriodsApprovedRFC-1: Agile CoretimeRFC-5: Coretime InterfaceRFC-0007: System Collator SelectionRFC-0008: Store parachain 
bootnodes in relay chain DHTRFC-0010: Burn Coretime RevenueRFC-0012: Process for Adding New System CollectivesRFC-0013: Prepare Core runtime API for MBMsRFC-0014: Improve locking mechanism for parachainsRFC-0022: Adopt Encointer RuntimeRFC-0026: Sassafras Consensus ProtocolRFC-0032: Minimal RelayRFC-0042: Add System version that replaces StateVersion on RuntimeVersionRFC-0043: Introduce storage_proof_size Host Function for Improved Parachain Block UtilizationRFC-0045: Lowering NFT Deposits on Asset HubRFC-0047: Assignment of availability chunks to validatorsRFC-0048: Generate ownership proof for SessionKeysRFC-0050: Fellowship SalariesRFC-0056: Enforce only one transaction per notificationRFC-0059: Add a discovery mechanism for nodes based on their capabilitiesRFC-0078: Merkleized MetadataRFC-0084: General transactions in extrinsic formatRFC-0091: DHT Authority discovery record creation timeRFC-0097: Unbonding QueueRFC-0099: Introduce a transaction extension versionRFC-0100: New XCM instruction: InitiateAssetsTransferRFC-0101: XCM Transact remove require_weight_at_most parameterRFC-0103: Introduce a CoreIndex commitment and a SessionIndex field in candidate receiptsRFC-0105: XCM improved fee mechanismRFC-0107: XCM Execution hintsRFC-0108: Remove XCM testnet NetworkIdsStaleRFC-0000: Feature Name HereRFC-0106: Remove XCM fees modeRFC-0109: Descend XCM origin instead of clearing it where possibleRFC-TODO: Stale Nomination Reward Curve diff --git a/approved/0022-adopt-encointer-runtime.html b/approved/0022-adopt-encointer-runtime.html index da39dd5..d0ae071 100644 --- a/approved/0022-adopt-encointer-runtime.html +++ b/approved/0022-adopt-encointer-runtime.html @@ -90,7 +90,7 @@ - IntroductionNewly ProposedProposedRFC-0111: Pure Proxy ReplicationRFC-0112: Compress the State Response Message in State SyncRFC-0114: Introduce secp256r1_ecdsa_verify_prehashed Host Function to verify NIST-P256 elliptic curve signaturesRFC-0117: The Unbrick CollectiveRFC-114: Adjust Tipper Track Confirmation PeriodsApprovedRFC-1: Agile CoretimeRFC-5: Coretime InterfaceRFC-0007: System Collator SelectionRFC-0008: Store parachain bootnodes in relay chain DHTRFC-0010: Burn Coretime RevenueRFC-0012: Process for Adding New System CollectivesRFC-0013: Prepare Core runtime API for MBMsRFC-0014: Improve locking mechanism for parachainsRFC-0022: Adopt Encointer RuntimeRFC-0026: Sassafras Consensus ProtocolRFC-0032: Minimal RelayRFC-0042: Add System version that replaces StateVersion on RuntimeVersionRFC-0043: Introduce storage_proof_size Host Function for Improved Parachain Block UtilizationRFC-0045: Lowering NFT Deposits on Asset HubRFC-0047: Assignment of availability chunks to validatorsRFC-0048: Generate ownership proof for SessionKeysRFC-0050: Fellowship SalariesRFC-0056: Enforce only one transaction per notificationRFC-0059: Add a discovery mechanism for nodes based on their capabilitiesRFC-0078: Merkleized MetadataRFC-0084: General transactions in extrinsic formatRFC-0091: DHT Authority discovery record creation timeRFC-0097: Unbonding QueueRFC-0099: Introduce a transaction extension versionRFC-0100: New XCM instruction: InitiateAssetsTransferRFC-0101: XCM Transact remove require_weight_at_most parameterRFC-0103: Introduce a CoreIndex commitment and a SessionIndex field in candidate receiptsRFC-0105: XCM improved fee mechanismRFC-0107: XCM Execution hintsRFC-0108: Remove XCM testnet NetworkIdsStaleRFC-0004: Remove the host-side runtime memory allocatorRFC-0006: Dynamic Pricing for Bulk Coretime SalesRFC-0009: Improved 
light client requests networking protocolRFC-0015: Market Design RevisitRFC-34: XCM Absolute Location Account Derivation RFC-0035: Conviction Voting Delegation ModificationsRFC-0044: Rent based registration modelRFC-0054: Remove the concept of "heap pages" from the clientRFC-0070: X Track for @kusamanetworkRFC-0073: Decision Deposit Referendum TrackRFC-0074: Stateful Multisig PalletRFC-0077: Increase maximum length of identity PGP fingerprint values from 20 bytesRFC-0088: Add slashable locked deposit, purchaser reputation, and reserved cores for on-chain identities to broker palletRFC-0089: Flexible InflationRFC-0001: Secondary Market for RegionsRFC-0002: Smart Contracts on the Coretime ChainRFC-0000: Feature Name HereRFC-0106: Remove XCM fees modeRFC-0109: Descend XCM origin instead of clearing it where possibleRFC-TODO: Stale Nomination Reward Curve + IntroductionNewly ProposedProposedRFC-0004: Remove the host-side runtime memory allocatorRFC-0006: Dynamic Pricing for Bulk Coretime SalesRFC-0009: Improved light client requests networking protocolRFC-0015: Market Design RevisitRFC-34: XCM Absolute Location Account Derivation RFC-0035: Conviction Voting Delegation ModificationsRFC-0044: Rent based registration modelRFC-0054: Remove the concept of "heap pages" from the clientRFC-0070: X Track for @kusamanetworkRFC-0073: Decision Deposit Referendum TrackRFC-0074: Stateful Multisig PalletRFC-0077: Increase maximum length of identity PGP fingerprint values from 20 bytesRFC-0088: Add slashable locked deposit, purchaser reputation, and reserved cores for on-chain identities to broker palletRFC-0089: Flexible InflationRFC-0001: Secondary Market for RegionsRFC-0002: Smart Contracts on the Coretime ChainRFC-0111: Pure Proxy ReplicationRFC-0112: Compress the State Response Message in State SyncRFC-0114: Introduce secp256r1_ecdsa_verify_prehashed Host Function to verify NIST-P256 elliptic curve signaturesRFC-0117: The Unbrick CollectiveRFC-114: Adjust Tipper Track Confirmation PeriodsApprovedRFC-1: Agile CoretimeRFC-5: Coretime InterfaceRFC-0007: System Collator SelectionRFC-0008: Store parachain bootnodes in relay chain DHTRFC-0010: Burn Coretime RevenueRFC-0012: Process for Adding New System CollectivesRFC-0013: Prepare Core runtime API for MBMsRFC-0014: Improve locking mechanism for parachainsRFC-0022: Adopt Encointer RuntimeRFC-0026: Sassafras Consensus ProtocolRFC-0032: Minimal RelayRFC-0042: Add System version that replaces StateVersion on RuntimeVersionRFC-0043: Introduce storage_proof_size Host Function for Improved Parachain Block UtilizationRFC-0045: Lowering NFT Deposits on Asset HubRFC-0047: Assignment of availability chunks to validatorsRFC-0048: Generate ownership proof for SessionKeysRFC-0050: Fellowship SalariesRFC-0056: Enforce only one transaction per notificationRFC-0059: Add a discovery mechanism for nodes based on their capabilitiesRFC-0078: Merkleized MetadataRFC-0084: General transactions in extrinsic formatRFC-0091: DHT Authority discovery record creation timeRFC-0097: Unbonding QueueRFC-0099: Introduce a transaction extension versionRFC-0100: New XCM instruction: InitiateAssetsTransferRFC-0101: XCM Transact remove require_weight_at_most parameterRFC-0103: Introduce a CoreIndex commitment and a SessionIndex field in candidate receiptsRFC-0105: XCM improved fee mechanismRFC-0107: XCM Execution hintsRFC-0108: Remove XCM testnet NetworkIdsStaleRFC-0000: Feature Name HereRFC-0106: Remove XCM fees modeRFC-0109: Descend XCM origin instead of clearing it where possibleRFC-TODO: Stale 
Nomination Reward Curve
[The same one-line sidebar reordering is applied as an identical hunk to each of the following generated pages: approved/0026-sassafras-consensus.html, approved/0032-minimal-relay.html, approved/0042-extrinsics-state-version.html, approved/0043-storage-proof-size-hostfunction.html, approved/0045-nft-deposits-asset-hub.html, approved/0047-assignment-of-availability-chunks.html, approved/0048-session-keys-runtime-api.html, approved/0050-fellowship-salaries.html, approved/0056-one-transaction-per-notification.html, approved/0059-nodes-capabilities-discovery.html, approved/0078-merkleized-metadata.html, approved/0084-general-transaction-extrinsic-format.html.]
SalariesRFC-0056: Enforce only one transaction per notificationRFC-0059: Add a discovery mechanism for nodes based on their capabilitiesRFC-0078: Merkleized MetadataRFC-0084: General transactions in extrinsic formatRFC-0091: DHT Authority discovery record creation timeRFC-0097: Unbonding QueueRFC-0099: Introduce a transaction extension versionRFC-0100: New XCM instruction: InitiateAssetsTransferRFC-0101: XCM Transact remove require_weight_at_most parameterRFC-0103: Introduce a CoreIndex commitment and a SessionIndex field in candidate receiptsRFC-0105: XCM improved fee mechanismRFC-0107: XCM Execution hintsRFC-0108: Remove XCM testnet NetworkIdsStaleRFC-0000: Feature Name HereRFC-0106: Remove XCM fees modeRFC-0109: Descend XCM origin instead of clearing it where possibleRFC-TODO: Stale Nomination Reward Curve diff --git a/approved/0091-dht-record-creation-time.html b/approved/0091-dht-record-creation-time.html index b115f45..229950e 100644 --- a/approved/0091-dht-record-creation-time.html +++ b/approved/0091-dht-record-creation-time.html @@ -90,7 +90,7 @@ - IntroductionNewly ProposedProposedRFC-0111: Pure Proxy ReplicationRFC-0112: Compress the State Response Message in State SyncRFC-0114: Introduce secp256r1_ecdsa_verify_prehashed Host Function to verify NIST-P256 elliptic curve signaturesRFC-0117: The Unbrick CollectiveRFC-114: Adjust Tipper Track Confirmation PeriodsApprovedRFC-1: Agile CoretimeRFC-5: Coretime InterfaceRFC-0007: System Collator SelectionRFC-0008: Store parachain bootnodes in relay chain DHTRFC-0010: Burn Coretime RevenueRFC-0012: Process for Adding New System CollectivesRFC-0013: Prepare Core runtime API for MBMsRFC-0014: Improve locking mechanism for parachainsRFC-0022: Adopt Encointer RuntimeRFC-0026: Sassafras Consensus ProtocolRFC-0032: Minimal RelayRFC-0042: Add System version that replaces StateVersion on RuntimeVersionRFC-0043: Introduce storage_proof_size Host Function for Improved Parachain Block UtilizationRFC-0045: Lowering NFT Deposits on Asset HubRFC-0047: Assignment of availability chunks to validatorsRFC-0048: Generate ownership proof for SessionKeysRFC-0050: Fellowship SalariesRFC-0056: Enforce only one transaction per notificationRFC-0059: Add a discovery mechanism for nodes based on their capabilitiesRFC-0078: Merkleized MetadataRFC-0084: General transactions in extrinsic formatRFC-0091: DHT Authority discovery record creation timeRFC-0097: Unbonding QueueRFC-0099: Introduce a transaction extension versionRFC-0100: New XCM instruction: InitiateAssetsTransferRFC-0101: XCM Transact remove require_weight_at_most parameterRFC-0103: Introduce a CoreIndex commitment and a SessionIndex field in candidate receiptsRFC-0105: XCM improved fee mechanismRFC-0107: XCM Execution hintsRFC-0108: Remove XCM testnet NetworkIdsStaleRFC-0004: Remove the host-side runtime memory allocatorRFC-0006: Dynamic Pricing for Bulk Coretime SalesRFC-0009: Improved light client requests networking protocolRFC-0015: Market Design RevisitRFC-34: XCM Absolute Location Account Derivation RFC-0035: Conviction Voting Delegation ModificationsRFC-0044: Rent based registration modelRFC-0054: Remove the concept of "heap pages" from the clientRFC-0070: X Track for @kusamanetworkRFC-0073: Decision Deposit Referendum TrackRFC-0074: Stateful Multisig PalletRFC-0077: Increase maximum length of identity PGP fingerprint values from 20 bytesRFC-0088: Add slashable locked deposit, purchaser reputation, and reserved cores for on-chain identities to broker palletRFC-0089: Flexible InflationRFC-0001: 
Secondary Market for RegionsRFC-0002: Smart Contracts on the Coretime ChainRFC-0000: Feature Name HereRFC-0106: Remove XCM fees modeRFC-0109: Descend XCM origin instead of clearing it where possibleRFC-TODO: Stale Nomination Reward Curve + IntroductionNewly ProposedProposedRFC-0004: Remove the host-side runtime memory allocatorRFC-0006: Dynamic Pricing for Bulk Coretime SalesRFC-0009: Improved light client requests networking protocolRFC-0015: Market Design RevisitRFC-34: XCM Absolute Location Account Derivation RFC-0035: Conviction Voting Delegation ModificationsRFC-0044: Rent based registration modelRFC-0054: Remove the concept of "heap pages" from the clientRFC-0070: X Track for @kusamanetworkRFC-0073: Decision Deposit Referendum TrackRFC-0074: Stateful Multisig PalletRFC-0077: Increase maximum length of identity PGP fingerprint values from 20 bytesRFC-0088: Add slashable locked deposit, purchaser reputation, and reserved cores for on-chain identities to broker palletRFC-0089: Flexible InflationRFC-0001: Secondary Market for RegionsRFC-0002: Smart Contracts on the Coretime ChainRFC-0111: Pure Proxy ReplicationRFC-0112: Compress the State Response Message in State SyncRFC-0114: Introduce secp256r1_ecdsa_verify_prehashed Host Function to verify NIST-P256 elliptic curve signaturesRFC-0117: The Unbrick CollectiveRFC-114: Adjust Tipper Track Confirmation PeriodsApprovedRFC-1: Agile CoretimeRFC-5: Coretime InterfaceRFC-0007: System Collator SelectionRFC-0008: Store parachain bootnodes in relay chain DHTRFC-0010: Burn Coretime RevenueRFC-0012: Process for Adding New System CollectivesRFC-0013: Prepare Core runtime API for MBMsRFC-0014: Improve locking mechanism for parachainsRFC-0022: Adopt Encointer RuntimeRFC-0026: Sassafras Consensus ProtocolRFC-0032: Minimal RelayRFC-0042: Add System version that replaces StateVersion on RuntimeVersionRFC-0043: Introduce storage_proof_size Host Function for Improved Parachain Block UtilizationRFC-0045: Lowering NFT Deposits on Asset HubRFC-0047: Assignment of availability chunks to validatorsRFC-0048: Generate ownership proof for SessionKeysRFC-0050: Fellowship SalariesRFC-0056: Enforce only one transaction per notificationRFC-0059: Add a discovery mechanism for nodes based on their capabilitiesRFC-0078: Merkleized MetadataRFC-0084: General transactions in extrinsic formatRFC-0091: DHT Authority discovery record creation timeRFC-0097: Unbonding QueueRFC-0099: Introduce a transaction extension versionRFC-0100: New XCM instruction: InitiateAssetsTransferRFC-0101: XCM Transact remove require_weight_at_most parameterRFC-0103: Introduce a CoreIndex commitment and a SessionIndex field in candidate receiptsRFC-0105: XCM improved fee mechanismRFC-0107: XCM Execution hintsRFC-0108: Remove XCM testnet NetworkIdsStaleRFC-0000: Feature Name HereRFC-0106: Remove XCM fees modeRFC-0109: Descend XCM origin instead of clearing it where possibleRFC-TODO: Stale Nomination Reward Curve diff --git a/approved/0097-unbonding_queue.html b/approved/0097-unbonding_queue.html index 8f9724b..150e2ba 100644 --- a/approved/0097-unbonding_queue.html +++ b/approved/0097-unbonding_queue.html @@ -90,7 +90,7 @@ - IntroductionNewly ProposedProposedRFC-0111: Pure Proxy ReplicationRFC-0112: Compress the State Response Message in State SyncRFC-0114: Introduce secp256r1_ecdsa_verify_prehashed Host Function to verify NIST-P256 elliptic curve signaturesRFC-0117: The Unbrick CollectiveRFC-114: Adjust Tipper Track Confirmation PeriodsApprovedRFC-1: Agile CoretimeRFC-5: Coretime InterfaceRFC-0007: 
System Collator SelectionRFC-0008: Store parachain bootnodes in relay chain DHTRFC-0010: Burn Coretime RevenueRFC-0012: Process for Adding New System CollectivesRFC-0013: Prepare Core runtime API for MBMsRFC-0014: Improve locking mechanism for parachainsRFC-0022: Adopt Encointer RuntimeRFC-0026: Sassafras Consensus ProtocolRFC-0032: Minimal RelayRFC-0042: Add System version that replaces StateVersion on RuntimeVersionRFC-0043: Introduce storage_proof_size Host Function for Improved Parachain Block UtilizationRFC-0045: Lowering NFT Deposits on Asset HubRFC-0047: Assignment of availability chunks to validatorsRFC-0048: Generate ownership proof for SessionKeysRFC-0050: Fellowship SalariesRFC-0056: Enforce only one transaction per notificationRFC-0059: Add a discovery mechanism for nodes based on their capabilitiesRFC-0078: Merkleized MetadataRFC-0084: General transactions in extrinsic formatRFC-0091: DHT Authority discovery record creation timeRFC-0097: Unbonding QueueRFC-0099: Introduce a transaction extension versionRFC-0100: New XCM instruction: InitiateAssetsTransferRFC-0101: XCM Transact remove require_weight_at_most parameterRFC-0103: Introduce a CoreIndex commitment and a SessionIndex field in candidate receiptsRFC-0105: XCM improved fee mechanismRFC-0107: XCM Execution hintsRFC-0108: Remove XCM testnet NetworkIdsStaleRFC-0004: Remove the host-side runtime memory allocatorRFC-0006: Dynamic Pricing for Bulk Coretime SalesRFC-0009: Improved light client requests networking protocolRFC-0015: Market Design RevisitRFC-34: XCM Absolute Location Account Derivation RFC-0035: Conviction Voting Delegation ModificationsRFC-0044: Rent based registration modelRFC-0054: Remove the concept of "heap pages" from the clientRFC-0070: X Track for @kusamanetworkRFC-0073: Decision Deposit Referendum TrackRFC-0074: Stateful Multisig PalletRFC-0077: Increase maximum length of identity PGP fingerprint values from 20 bytesRFC-0088: Add slashable locked deposit, purchaser reputation, and reserved cores for on-chain identities to broker palletRFC-0089: Flexible InflationRFC-0001: Secondary Market for RegionsRFC-0002: Smart Contracts on the Coretime ChainRFC-0000: Feature Name HereRFC-0106: Remove XCM fees modeRFC-0109: Descend XCM origin instead of clearing it where possibleRFC-TODO: Stale Nomination Reward Curve + IntroductionNewly ProposedProposedRFC-0004: Remove the host-side runtime memory allocatorRFC-0006: Dynamic Pricing for Bulk Coretime SalesRFC-0009: Improved light client requests networking protocolRFC-0015: Market Design RevisitRFC-34: XCM Absolute Location Account Derivation RFC-0035: Conviction Voting Delegation ModificationsRFC-0044: Rent based registration modelRFC-0054: Remove the concept of "heap pages" from the clientRFC-0070: X Track for @kusamanetworkRFC-0073: Decision Deposit Referendum TrackRFC-0074: Stateful Multisig PalletRFC-0077: Increase maximum length of identity PGP fingerprint values from 20 bytesRFC-0088: Add slashable locked deposit, purchaser reputation, and reserved cores for on-chain identities to broker palletRFC-0089: Flexible InflationRFC-0001: Secondary Market for RegionsRFC-0002: Smart Contracts on the Coretime ChainRFC-0111: Pure Proxy ReplicationRFC-0112: Compress the State Response Message in State SyncRFC-0114: Introduce secp256r1_ecdsa_verify_prehashed Host Function to verify NIST-P256 elliptic curve signaturesRFC-0117: The Unbrick CollectiveRFC-114: Adjust Tipper Track Confirmation PeriodsApprovedRFC-1: Agile CoretimeRFC-5: Coretime InterfaceRFC-0007: System Collator 
SelectionRFC-0008: Store parachain bootnodes in relay chain DHTRFC-0010: Burn Coretime RevenueRFC-0012: Process for Adding New System CollectivesRFC-0013: Prepare Core runtime API for MBMsRFC-0014: Improve locking mechanism for parachainsRFC-0022: Adopt Encointer RuntimeRFC-0026: Sassafras Consensus ProtocolRFC-0032: Minimal RelayRFC-0042: Add System version that replaces StateVersion on RuntimeVersionRFC-0043: Introduce storage_proof_size Host Function for Improved Parachain Block UtilizationRFC-0045: Lowering NFT Deposits on Asset HubRFC-0047: Assignment of availability chunks to validatorsRFC-0048: Generate ownership proof for SessionKeysRFC-0050: Fellowship SalariesRFC-0056: Enforce only one transaction per notificationRFC-0059: Add a discovery mechanism for nodes based on their capabilitiesRFC-0078: Merkleized MetadataRFC-0084: General transactions in extrinsic formatRFC-0091: DHT Authority discovery record creation timeRFC-0097: Unbonding QueueRFC-0099: Introduce a transaction extension versionRFC-0100: New XCM instruction: InitiateAssetsTransferRFC-0101: XCM Transact remove require_weight_at_most parameterRFC-0103: Introduce a CoreIndex commitment and a SessionIndex field in candidate receiptsRFC-0105: XCM improved fee mechanismRFC-0107: XCM Execution hintsRFC-0108: Remove XCM testnet NetworkIdsStaleRFC-0000: Feature Name HereRFC-0106: Remove XCM fees modeRFC-0109: Descend XCM origin instead of clearing it where possibleRFC-TODO: Stale Nomination Reward Curve diff --git a/approved/0099-transaction-extension-version.html b/approved/0099-transaction-extension-version.html index d783095..8c83589 100644 --- a/approved/0099-transaction-extension-version.html +++ b/approved/0099-transaction-extension-version.html @@ -90,7 +90,7 @@ - IntroductionNewly ProposedProposedRFC-0111: Pure Proxy ReplicationRFC-0112: Compress the State Response Message in State SyncRFC-0114: Introduce secp256r1_ecdsa_verify_prehashed Host Function to verify NIST-P256 elliptic curve signaturesRFC-0117: The Unbrick CollectiveRFC-114: Adjust Tipper Track Confirmation PeriodsApprovedRFC-1: Agile CoretimeRFC-5: Coretime InterfaceRFC-0007: System Collator SelectionRFC-0008: Store parachain bootnodes in relay chain DHTRFC-0010: Burn Coretime RevenueRFC-0012: Process for Adding New System CollectivesRFC-0013: Prepare Core runtime API for MBMsRFC-0014: Improve locking mechanism for parachainsRFC-0022: Adopt Encointer RuntimeRFC-0026: Sassafras Consensus ProtocolRFC-0032: Minimal RelayRFC-0042: Add System version that replaces StateVersion on RuntimeVersionRFC-0043: Introduce storage_proof_size Host Function for Improved Parachain Block UtilizationRFC-0045: Lowering NFT Deposits on Asset HubRFC-0047: Assignment of availability chunks to validatorsRFC-0048: Generate ownership proof for SessionKeysRFC-0050: Fellowship SalariesRFC-0056: Enforce only one transaction per notificationRFC-0059: Add a discovery mechanism for nodes based on their capabilitiesRFC-0078: Merkleized MetadataRFC-0084: General transactions in extrinsic formatRFC-0091: DHT Authority discovery record creation timeRFC-0097: Unbonding QueueRFC-0099: Introduce a transaction extension versionRFC-0100: New XCM instruction: InitiateAssetsTransferRFC-0101: XCM Transact remove require_weight_at_most parameterRFC-0103: Introduce a CoreIndex commitment and a SessionIndex field in candidate receiptsRFC-0105: XCM improved fee mechanismRFC-0107: XCM Execution hintsRFC-0108: Remove XCM testnet NetworkIdsStaleRFC-0004: Remove the host-side runtime memory allocatorRFC-0006: 
Dynamic Pricing for Bulk Coretime SalesRFC-0009: Improved light client requests networking protocolRFC-0015: Market Design RevisitRFC-34: XCM Absolute Location Account Derivation RFC-0035: Conviction Voting Delegation ModificationsRFC-0044: Rent based registration modelRFC-0054: Remove the concept of "heap pages" from the clientRFC-0070: X Track for @kusamanetworkRFC-0073: Decision Deposit Referendum TrackRFC-0074: Stateful Multisig PalletRFC-0077: Increase maximum length of identity PGP fingerprint values from 20 bytesRFC-0088: Add slashable locked deposit, purchaser reputation, and reserved cores for on-chain identities to broker palletRFC-0089: Flexible InflationRFC-0001: Secondary Market for RegionsRFC-0002: Smart Contracts on the Coretime ChainRFC-0000: Feature Name HereRFC-0106: Remove XCM fees modeRFC-0109: Descend XCM origin instead of clearing it where possibleRFC-TODO: Stale Nomination Reward Curve + IntroductionNewly ProposedProposedRFC-0004: Remove the host-side runtime memory allocatorRFC-0006: Dynamic Pricing for Bulk Coretime SalesRFC-0009: Improved light client requests networking protocolRFC-0015: Market Design RevisitRFC-34: XCM Absolute Location Account Derivation RFC-0035: Conviction Voting Delegation ModificationsRFC-0044: Rent based registration modelRFC-0054: Remove the concept of "heap pages" from the clientRFC-0070: X Track for @kusamanetworkRFC-0073: Decision Deposit Referendum TrackRFC-0074: Stateful Multisig PalletRFC-0077: Increase maximum length of identity PGP fingerprint values from 20 bytesRFC-0088: Add slashable locked deposit, purchaser reputation, and reserved cores for on-chain identities to broker palletRFC-0089: Flexible InflationRFC-0001: Secondary Market for RegionsRFC-0002: Smart Contracts on the Coretime ChainRFC-0111: Pure Proxy ReplicationRFC-0112: Compress the State Response Message in State SyncRFC-0114: Introduce secp256r1_ecdsa_verify_prehashed Host Function to verify NIST-P256 elliptic curve signaturesRFC-0117: The Unbrick CollectiveRFC-114: Adjust Tipper Track Confirmation PeriodsApprovedRFC-1: Agile CoretimeRFC-5: Coretime InterfaceRFC-0007: System Collator SelectionRFC-0008: Store parachain bootnodes in relay chain DHTRFC-0010: Burn Coretime RevenueRFC-0012: Process for Adding New System CollectivesRFC-0013: Prepare Core runtime API for MBMsRFC-0014: Improve locking mechanism for parachainsRFC-0022: Adopt Encointer RuntimeRFC-0026: Sassafras Consensus ProtocolRFC-0032: Minimal RelayRFC-0042: Add System version that replaces StateVersion on RuntimeVersionRFC-0043: Introduce storage_proof_size Host Function for Improved Parachain Block UtilizationRFC-0045: Lowering NFT Deposits on Asset HubRFC-0047: Assignment of availability chunks to validatorsRFC-0048: Generate ownership proof for SessionKeysRFC-0050: Fellowship SalariesRFC-0056: Enforce only one transaction per notificationRFC-0059: Add a discovery mechanism for nodes based on their capabilitiesRFC-0078: Merkleized MetadataRFC-0084: General transactions in extrinsic formatRFC-0091: DHT Authority discovery record creation timeRFC-0097: Unbonding QueueRFC-0099: Introduce a transaction extension versionRFC-0100: New XCM instruction: InitiateAssetsTransferRFC-0101: XCM Transact remove require_weight_at_most parameterRFC-0103: Introduce a CoreIndex commitment and a SessionIndex field in candidate receiptsRFC-0105: XCM improved fee mechanismRFC-0107: XCM Execution hintsRFC-0108: Remove XCM testnet NetworkIdsStaleRFC-0000: Feature Name HereRFC-0106: Remove XCM fees modeRFC-0109: Descend XCM 
origin instead of clearing it where possibleRFC-TODO: Stale Nomination Reward Curve diff --git a/approved/0100-xcm-multi-type-asset-transfer.html b/approved/0100-xcm-multi-type-asset-transfer.html index 4171abe..cfcbddd 100644 --- a/approved/0100-xcm-multi-type-asset-transfer.html +++ b/approved/0100-xcm-multi-type-asset-transfer.html @@ -90,7 +90,7 @@ - IntroductionNewly ProposedProposedRFC-0111: Pure Proxy ReplicationRFC-0112: Compress the State Response Message in State SyncRFC-0114: Introduce secp256r1_ecdsa_verify_prehashed Host Function to verify NIST-P256 elliptic curve signaturesRFC-0117: The Unbrick CollectiveRFC-114: Adjust Tipper Track Confirmation PeriodsApprovedRFC-1: Agile CoretimeRFC-5: Coretime InterfaceRFC-0007: System Collator SelectionRFC-0008: Store parachain bootnodes in relay chain DHTRFC-0010: Burn Coretime RevenueRFC-0012: Process for Adding New System CollectivesRFC-0013: Prepare Core runtime API for MBMsRFC-0014: Improve locking mechanism for parachainsRFC-0022: Adopt Encointer RuntimeRFC-0026: Sassafras Consensus ProtocolRFC-0032: Minimal RelayRFC-0042: Add System version that replaces StateVersion on RuntimeVersionRFC-0043: Introduce storage_proof_size Host Function for Improved Parachain Block UtilizationRFC-0045: Lowering NFT Deposits on Asset HubRFC-0047: Assignment of availability chunks to validatorsRFC-0048: Generate ownership proof for SessionKeysRFC-0050: Fellowship SalariesRFC-0056: Enforce only one transaction per notificationRFC-0059: Add a discovery mechanism for nodes based on their capabilitiesRFC-0078: Merkleized MetadataRFC-0084: General transactions in extrinsic formatRFC-0091: DHT Authority discovery record creation timeRFC-0097: Unbonding QueueRFC-0099: Introduce a transaction extension versionRFC-0100: New XCM instruction: InitiateAssetsTransferRFC-0101: XCM Transact remove require_weight_at_most parameterRFC-0103: Introduce a CoreIndex commitment and a SessionIndex field in candidate receiptsRFC-0105: XCM improved fee mechanismRFC-0107: XCM Execution hintsRFC-0108: Remove XCM testnet NetworkIdsStaleRFC-0004: Remove the host-side runtime memory allocatorRFC-0006: Dynamic Pricing for Bulk Coretime SalesRFC-0009: Improved light client requests networking protocolRFC-0015: Market Design RevisitRFC-34: XCM Absolute Location Account Derivation RFC-0035: Conviction Voting Delegation ModificationsRFC-0044: Rent based registration modelRFC-0054: Remove the concept of "heap pages" from the clientRFC-0070: X Track for @kusamanetworkRFC-0073: Decision Deposit Referendum TrackRFC-0074: Stateful Multisig PalletRFC-0077: Increase maximum length of identity PGP fingerprint values from 20 bytesRFC-0088: Add slashable locked deposit, purchaser reputation, and reserved cores for on-chain identities to broker palletRFC-0089: Flexible InflationRFC-0001: Secondary Market for RegionsRFC-0002: Smart Contracts on the Coretime ChainRFC-0000: Feature Name HereRFC-0106: Remove XCM fees modeRFC-0109: Descend XCM origin instead of clearing it where possibleRFC-TODO: Stale Nomination Reward Curve + IntroductionNewly ProposedProposedRFC-0004: Remove the host-side runtime memory allocatorRFC-0006: Dynamic Pricing for Bulk Coretime SalesRFC-0009: Improved light client requests networking protocolRFC-0015: Market Design RevisitRFC-34: XCM Absolute Location Account Derivation RFC-0035: Conviction Voting Delegation ModificationsRFC-0044: Rent based registration modelRFC-0054: Remove the concept of "heap pages" from the clientRFC-0070: X Track for @kusamanetworkRFC-0073: 
Decision Deposit Referendum TrackRFC-0074: Stateful Multisig PalletRFC-0077: Increase maximum length of identity PGP fingerprint values from 20 bytesRFC-0088: Add slashable locked deposit, purchaser reputation, and reserved cores for on-chain identities to broker palletRFC-0089: Flexible InflationRFC-0001: Secondary Market for RegionsRFC-0002: Smart Contracts on the Coretime ChainRFC-0111: Pure Proxy ReplicationRFC-0112: Compress the State Response Message in State SyncRFC-0114: Introduce secp256r1_ecdsa_verify_prehashed Host Function to verify NIST-P256 elliptic curve signaturesRFC-0117: The Unbrick CollectiveRFC-114: Adjust Tipper Track Confirmation PeriodsApprovedRFC-1: Agile CoretimeRFC-5: Coretime InterfaceRFC-0007: System Collator SelectionRFC-0008: Store parachain bootnodes in relay chain DHTRFC-0010: Burn Coretime RevenueRFC-0012: Process for Adding New System CollectivesRFC-0013: Prepare Core runtime API for MBMsRFC-0014: Improve locking mechanism for parachainsRFC-0022: Adopt Encointer RuntimeRFC-0026: Sassafras Consensus ProtocolRFC-0032: Minimal RelayRFC-0042: Add System version that replaces StateVersion on RuntimeVersionRFC-0043: Introduce storage_proof_size Host Function for Improved Parachain Block UtilizationRFC-0045: Lowering NFT Deposits on Asset HubRFC-0047: Assignment of availability chunks to validatorsRFC-0048: Generate ownership proof for SessionKeysRFC-0050: Fellowship SalariesRFC-0056: Enforce only one transaction per notificationRFC-0059: Add a discovery mechanism for nodes based on their capabilitiesRFC-0078: Merkleized MetadataRFC-0084: General transactions in extrinsic formatRFC-0091: DHT Authority discovery record creation timeRFC-0097: Unbonding QueueRFC-0099: Introduce a transaction extension versionRFC-0100: New XCM instruction: InitiateAssetsTransferRFC-0101: XCM Transact remove require_weight_at_most parameterRFC-0103: Introduce a CoreIndex commitment and a SessionIndex field in candidate receiptsRFC-0105: XCM improved fee mechanismRFC-0107: XCM Execution hintsRFC-0108: Remove XCM testnet NetworkIdsStaleRFC-0000: Feature Name HereRFC-0106: Remove XCM fees modeRFC-0109: Descend XCM origin instead of clearing it where possibleRFC-TODO: Stale Nomination Reward Curve diff --git a/approved/0101-xcm-transact-remove-max-weight-param.html b/approved/0101-xcm-transact-remove-max-weight-param.html index 0f5e6dc..f793c05 100644 --- a/approved/0101-xcm-transact-remove-max-weight-param.html +++ b/approved/0101-xcm-transact-remove-max-weight-param.html @@ -90,7 +90,7 @@ - IntroductionNewly ProposedProposedRFC-0111: Pure Proxy ReplicationRFC-0112: Compress the State Response Message in State SyncRFC-0114: Introduce secp256r1_ecdsa_verify_prehashed Host Function to verify NIST-P256 elliptic curve signaturesRFC-0117: The Unbrick CollectiveRFC-114: Adjust Tipper Track Confirmation PeriodsApprovedRFC-1: Agile CoretimeRFC-5: Coretime InterfaceRFC-0007: System Collator SelectionRFC-0008: Store parachain bootnodes in relay chain DHTRFC-0010: Burn Coretime RevenueRFC-0012: Process for Adding New System CollectivesRFC-0013: Prepare Core runtime API for MBMsRFC-0014: Improve locking mechanism for parachainsRFC-0022: Adopt Encointer RuntimeRFC-0026: Sassafras Consensus ProtocolRFC-0032: Minimal RelayRFC-0042: Add System version that replaces StateVersion on RuntimeVersionRFC-0043: Introduce storage_proof_size Host Function for Improved Parachain Block UtilizationRFC-0045: Lowering NFT Deposits on Asset HubRFC-0047: Assignment of availability chunks to validatorsRFC-0048: Generate 
ownership proof for SessionKeysRFC-0050: Fellowship SalariesRFC-0056: Enforce only one transaction per notificationRFC-0059: Add a discovery mechanism for nodes based on their capabilitiesRFC-0078: Merkleized MetadataRFC-0084: General transactions in extrinsic formatRFC-0091: DHT Authority discovery record creation timeRFC-0097: Unbonding QueueRFC-0099: Introduce a transaction extension versionRFC-0100: New XCM instruction: InitiateAssetsTransferRFC-0101: XCM Transact remove require_weight_at_most parameterRFC-0103: Introduce a CoreIndex commitment and a SessionIndex field in candidate receiptsRFC-0105: XCM improved fee mechanismRFC-0107: XCM Execution hintsRFC-0108: Remove XCM testnet NetworkIdsStaleRFC-0004: Remove the host-side runtime memory allocatorRFC-0006: Dynamic Pricing for Bulk Coretime SalesRFC-0009: Improved light client requests networking protocolRFC-0015: Market Design RevisitRFC-34: XCM Absolute Location Account Derivation RFC-0035: Conviction Voting Delegation ModificationsRFC-0044: Rent based registration modelRFC-0054: Remove the concept of "heap pages" from the clientRFC-0070: X Track for @kusamanetworkRFC-0073: Decision Deposit Referendum TrackRFC-0074: Stateful Multisig PalletRFC-0077: Increase maximum length of identity PGP fingerprint values from 20 bytesRFC-0088: Add slashable locked deposit, purchaser reputation, and reserved cores for on-chain identities to broker palletRFC-0089: Flexible InflationRFC-0001: Secondary Market for RegionsRFC-0002: Smart Contracts on the Coretime ChainRFC-0000: Feature Name HereRFC-0106: Remove XCM fees modeRFC-0109: Descend XCM origin instead of clearing it where possibleRFC-TODO: Stale Nomination Reward Curve + IntroductionNewly ProposedProposedRFC-0004: Remove the host-side runtime memory allocatorRFC-0006: Dynamic Pricing for Bulk Coretime SalesRFC-0009: Improved light client requests networking protocolRFC-0015: Market Design RevisitRFC-34: XCM Absolute Location Account Derivation RFC-0035: Conviction Voting Delegation ModificationsRFC-0044: Rent based registration modelRFC-0054: Remove the concept of "heap pages" from the clientRFC-0070: X Track for @kusamanetworkRFC-0073: Decision Deposit Referendum TrackRFC-0074: Stateful Multisig PalletRFC-0077: Increase maximum length of identity PGP fingerprint values from 20 bytesRFC-0088: Add slashable locked deposit, purchaser reputation, and reserved cores for on-chain identities to broker palletRFC-0089: Flexible InflationRFC-0001: Secondary Market for RegionsRFC-0002: Smart Contracts on the Coretime ChainRFC-0111: Pure Proxy ReplicationRFC-0112: Compress the State Response Message in State SyncRFC-0114: Introduce secp256r1_ecdsa_verify_prehashed Host Function to verify NIST-P256 elliptic curve signaturesRFC-0117: The Unbrick CollectiveRFC-114: Adjust Tipper Track Confirmation PeriodsApprovedRFC-1: Agile CoretimeRFC-5: Coretime InterfaceRFC-0007: System Collator SelectionRFC-0008: Store parachain bootnodes in relay chain DHTRFC-0010: Burn Coretime RevenueRFC-0012: Process for Adding New System CollectivesRFC-0013: Prepare Core runtime API for MBMsRFC-0014: Improve locking mechanism for parachainsRFC-0022: Adopt Encointer RuntimeRFC-0026: Sassafras Consensus ProtocolRFC-0032: Minimal RelayRFC-0042: Add System version that replaces StateVersion on RuntimeVersionRFC-0043: Introduce storage_proof_size Host Function for Improved Parachain Block UtilizationRFC-0045: Lowering NFT Deposits on Asset HubRFC-0047: Assignment of availability chunks to validatorsRFC-0048: Generate ownership proof 
for SessionKeysRFC-0050: Fellowship SalariesRFC-0056: Enforce only one transaction per notificationRFC-0059: Add a discovery mechanism for nodes based on their capabilitiesRFC-0078: Merkleized MetadataRFC-0084: General transactions in extrinsic formatRFC-0091: DHT Authority discovery record creation timeRFC-0097: Unbonding QueueRFC-0099: Introduce a transaction extension versionRFC-0100: New XCM instruction: InitiateAssetsTransferRFC-0101: XCM Transact remove require_weight_at_most parameterRFC-0103: Introduce a CoreIndex commitment and a SessionIndex field in candidate receiptsRFC-0105: XCM improved fee mechanismRFC-0107: XCM Execution hintsRFC-0108: Remove XCM testnet NetworkIdsStaleRFC-0000: Feature Name HereRFC-0106: Remove XCM fees modeRFC-0109: Descend XCM origin instead of clearing it where possibleRFC-TODO: Stale Nomination Reward Curve diff --git a/approved/0103-introduce-core-index-commitment.html b/approved/0103-introduce-core-index-commitment.html index b6c2f8a..91b4126 100644 --- a/approved/0103-introduce-core-index-commitment.html +++ b/approved/0103-introduce-core-index-commitment.html @@ -90,7 +90,7 @@ - IntroductionNewly ProposedProposedRFC-0111: Pure Proxy ReplicationRFC-0112: Compress the State Response Message in State SyncRFC-0114: Introduce secp256r1_ecdsa_verify_prehashed Host Function to verify NIST-P256 elliptic curve signaturesRFC-0117: The Unbrick CollectiveRFC-114: Adjust Tipper Track Confirmation PeriodsApprovedRFC-1: Agile CoretimeRFC-5: Coretime InterfaceRFC-0007: System Collator SelectionRFC-0008: Store parachain bootnodes in relay chain DHTRFC-0010: Burn Coretime RevenueRFC-0012: Process for Adding New System CollectivesRFC-0013: Prepare Core runtime API for MBMsRFC-0014: Improve locking mechanism for parachainsRFC-0022: Adopt Encointer RuntimeRFC-0026: Sassafras Consensus ProtocolRFC-0032: Minimal RelayRFC-0042: Add System version that replaces StateVersion on RuntimeVersionRFC-0043: Introduce storage_proof_size Host Function for Improved Parachain Block UtilizationRFC-0045: Lowering NFT Deposits on Asset HubRFC-0047: Assignment of availability chunks to validatorsRFC-0048: Generate ownership proof for SessionKeysRFC-0050: Fellowship SalariesRFC-0056: Enforce only one transaction per notificationRFC-0059: Add a discovery mechanism for nodes based on their capabilitiesRFC-0078: Merkleized MetadataRFC-0084: General transactions in extrinsic formatRFC-0091: DHT Authority discovery record creation timeRFC-0097: Unbonding QueueRFC-0099: Introduce a transaction extension versionRFC-0100: New XCM instruction: InitiateAssetsTransferRFC-0101: XCM Transact remove require_weight_at_most parameterRFC-0103: Introduce a CoreIndex commitment and a SessionIndex field in candidate receiptsRFC-0105: XCM improved fee mechanismRFC-0107: XCM Execution hintsRFC-0108: Remove XCM testnet NetworkIdsStaleRFC-0004: Remove the host-side runtime memory allocatorRFC-0006: Dynamic Pricing for Bulk Coretime SalesRFC-0009: Improved light client requests networking protocolRFC-0015: Market Design RevisitRFC-34: XCM Absolute Location Account Derivation RFC-0035: Conviction Voting Delegation ModificationsRFC-0044: Rent based registration modelRFC-0054: Remove the concept of "heap pages" from the clientRFC-0070: X Track for @kusamanetworkRFC-0073: Decision Deposit Referendum TrackRFC-0074: Stateful Multisig PalletRFC-0077: Increase maximum length of identity PGP fingerprint values from 20 bytesRFC-0088: Add slashable locked deposit, purchaser reputation, and reserved cores for on-chain 
identities to broker palletRFC-0089: Flexible InflationRFC-0001: Secondary Market for RegionsRFC-0002: Smart Contracts on the Coretime ChainRFC-0000: Feature Name HereRFC-0106: Remove XCM fees modeRFC-0109: Descend XCM origin instead of clearing it where possibleRFC-TODO: Stale Nomination Reward Curve + IntroductionNewly ProposedProposedRFC-0004: Remove the host-side runtime memory allocatorRFC-0006: Dynamic Pricing for Bulk Coretime SalesRFC-0009: Improved light client requests networking protocolRFC-0015: Market Design RevisitRFC-34: XCM Absolute Location Account Derivation RFC-0035: Conviction Voting Delegation ModificationsRFC-0044: Rent based registration modelRFC-0054: Remove the concept of "heap pages" from the clientRFC-0070: X Track for @kusamanetworkRFC-0073: Decision Deposit Referendum TrackRFC-0074: Stateful Multisig PalletRFC-0077: Increase maximum length of identity PGP fingerprint values from 20 bytesRFC-0088: Add slashable locked deposit, purchaser reputation, and reserved cores for on-chain identities to broker palletRFC-0089: Flexible InflationRFC-0001: Secondary Market for RegionsRFC-0002: Smart Contracts on the Coretime ChainRFC-0111: Pure Proxy ReplicationRFC-0112: Compress the State Response Message in State SyncRFC-0114: Introduce secp256r1_ecdsa_verify_prehashed Host Function to verify NIST-P256 elliptic curve signaturesRFC-0117: The Unbrick CollectiveRFC-114: Adjust Tipper Track Confirmation PeriodsApprovedRFC-1: Agile CoretimeRFC-5: Coretime InterfaceRFC-0007: System Collator SelectionRFC-0008: Store parachain bootnodes in relay chain DHTRFC-0010: Burn Coretime RevenueRFC-0012: Process for Adding New System CollectivesRFC-0013: Prepare Core runtime API for MBMsRFC-0014: Improve locking mechanism for parachainsRFC-0022: Adopt Encointer RuntimeRFC-0026: Sassafras Consensus ProtocolRFC-0032: Minimal RelayRFC-0042: Add System version that replaces StateVersion on RuntimeVersionRFC-0043: Introduce storage_proof_size Host Function for Improved Parachain Block UtilizationRFC-0045: Lowering NFT Deposits on Asset HubRFC-0047: Assignment of availability chunks to validatorsRFC-0048: Generate ownership proof for SessionKeysRFC-0050: Fellowship SalariesRFC-0056: Enforce only one transaction per notificationRFC-0059: Add a discovery mechanism for nodes based on their capabilitiesRFC-0078: Merkleized MetadataRFC-0084: General transactions in extrinsic formatRFC-0091: DHT Authority discovery record creation timeRFC-0097: Unbonding QueueRFC-0099: Introduce a transaction extension versionRFC-0100: New XCM instruction: InitiateAssetsTransferRFC-0101: XCM Transact remove require_weight_at_most parameterRFC-0103: Introduce a CoreIndex commitment and a SessionIndex field in candidate receiptsRFC-0105: XCM improved fee mechanismRFC-0107: XCM Execution hintsRFC-0108: Remove XCM testnet NetworkIdsStaleRFC-0000: Feature Name HereRFC-0106: Remove XCM fees modeRFC-0109: Descend XCM origin instead of clearing it where possibleRFC-TODO: Stale Nomination Reward Curve diff --git a/approved/0105-xcm-improved-fee-mechanism.html b/approved/0105-xcm-improved-fee-mechanism.html index ea2a5b9..8981d2d 100644 --- a/approved/0105-xcm-improved-fee-mechanism.html +++ b/approved/0105-xcm-improved-fee-mechanism.html @@ -90,7 +90,7 @@ - IntroductionNewly ProposedProposedRFC-0111: Pure Proxy ReplicationRFC-0112: Compress the State Response Message in State SyncRFC-0114: Introduce secp256r1_ecdsa_verify_prehashed Host Function to verify NIST-P256 elliptic curve signaturesRFC-0117: The Unbrick 
CollectiveRFC-114: Adjust Tipper Track Confirmation PeriodsApprovedRFC-1: Agile CoretimeRFC-5: Coretime InterfaceRFC-0007: System Collator SelectionRFC-0008: Store parachain bootnodes in relay chain DHTRFC-0010: Burn Coretime RevenueRFC-0012: Process for Adding New System CollectivesRFC-0013: Prepare Core runtime API for MBMsRFC-0014: Improve locking mechanism for parachainsRFC-0022: Adopt Encointer RuntimeRFC-0026: Sassafras Consensus ProtocolRFC-0032: Minimal RelayRFC-0042: Add System version that replaces StateVersion on RuntimeVersionRFC-0043: Introduce storage_proof_size Host Function for Improved Parachain Block UtilizationRFC-0045: Lowering NFT Deposits on Asset HubRFC-0047: Assignment of availability chunks to validatorsRFC-0048: Generate ownership proof for SessionKeysRFC-0050: Fellowship SalariesRFC-0056: Enforce only one transaction per notificationRFC-0059: Add a discovery mechanism for nodes based on their capabilitiesRFC-0078: Merkleized MetadataRFC-0084: General transactions in extrinsic formatRFC-0091: DHT Authority discovery record creation timeRFC-0097: Unbonding QueueRFC-0099: Introduce a transaction extension versionRFC-0100: New XCM instruction: InitiateAssetsTransferRFC-0101: XCM Transact remove require_weight_at_most parameterRFC-0103: Introduce a CoreIndex commitment and a SessionIndex field in candidate receiptsRFC-0105: XCM improved fee mechanismRFC-0107: XCM Execution hintsRFC-0108: Remove XCM testnet NetworkIdsStaleRFC-0004: Remove the host-side runtime memory allocatorRFC-0006: Dynamic Pricing for Bulk Coretime SalesRFC-0009: Improved light client requests networking protocolRFC-0015: Market Design RevisitRFC-34: XCM Absolute Location Account Derivation RFC-0035: Conviction Voting Delegation ModificationsRFC-0044: Rent based registration modelRFC-0054: Remove the concept of "heap pages" from the clientRFC-0070: X Track for @kusamanetworkRFC-0073: Decision Deposit Referendum TrackRFC-0074: Stateful Multisig PalletRFC-0077: Increase maximum length of identity PGP fingerprint values from 20 bytesRFC-0088: Add slashable locked deposit, purchaser reputation, and reserved cores for on-chain identities to broker palletRFC-0089: Flexible InflationRFC-0001: Secondary Market for RegionsRFC-0002: Smart Contracts on the Coretime ChainRFC-0000: Feature Name HereRFC-0106: Remove XCM fees modeRFC-0109: Descend XCM origin instead of clearing it where possibleRFC-TODO: Stale Nomination Reward Curve + IntroductionNewly ProposedProposedRFC-0004: Remove the host-side runtime memory allocatorRFC-0006: Dynamic Pricing for Bulk Coretime SalesRFC-0009: Improved light client requests networking protocolRFC-0015: Market Design RevisitRFC-34: XCM Absolute Location Account Derivation RFC-0035: Conviction Voting Delegation ModificationsRFC-0044: Rent based registration modelRFC-0054: Remove the concept of "heap pages" from the clientRFC-0070: X Track for @kusamanetworkRFC-0073: Decision Deposit Referendum TrackRFC-0074: Stateful Multisig PalletRFC-0077: Increase maximum length of identity PGP fingerprint values from 20 bytesRFC-0088: Add slashable locked deposit, purchaser reputation, and reserved cores for on-chain identities to broker palletRFC-0089: Flexible InflationRFC-0001: Secondary Market for RegionsRFC-0002: Smart Contracts on the Coretime ChainRFC-0111: Pure Proxy ReplicationRFC-0112: Compress the State Response Message in State SyncRFC-0114: Introduce secp256r1_ecdsa_verify_prehashed Host Function to verify NIST-P256 elliptic curve signaturesRFC-0117: The Unbrick 
CollectiveRFC-114: Adjust Tipper Track Confirmation PeriodsApprovedRFC-1: Agile CoretimeRFC-5: Coretime InterfaceRFC-0007: System Collator SelectionRFC-0008: Store parachain bootnodes in relay chain DHTRFC-0010: Burn Coretime RevenueRFC-0012: Process for Adding New System CollectivesRFC-0013: Prepare Core runtime API for MBMsRFC-0014: Improve locking mechanism for parachainsRFC-0022: Adopt Encointer RuntimeRFC-0026: Sassafras Consensus ProtocolRFC-0032: Minimal RelayRFC-0042: Add System version that replaces StateVersion on RuntimeVersionRFC-0043: Introduce storage_proof_size Host Function for Improved Parachain Block UtilizationRFC-0045: Lowering NFT Deposits on Asset HubRFC-0047: Assignment of availability chunks to validatorsRFC-0048: Generate ownership proof for SessionKeysRFC-0050: Fellowship SalariesRFC-0056: Enforce only one transaction per notificationRFC-0059: Add a discovery mechanism for nodes based on their capabilitiesRFC-0078: Merkleized MetadataRFC-0084: General transactions in extrinsic formatRFC-0091: DHT Authority discovery record creation timeRFC-0097: Unbonding QueueRFC-0099: Introduce a transaction extension versionRFC-0100: New XCM instruction: InitiateAssetsTransferRFC-0101: XCM Transact remove require_weight_at_most parameterRFC-0103: Introduce a CoreIndex commitment and a SessionIndex field in candidate receiptsRFC-0105: XCM improved fee mechanismRFC-0107: XCM Execution hintsRFC-0108: Remove XCM testnet NetworkIdsStaleRFC-0000: Feature Name HereRFC-0106: Remove XCM fees modeRFC-0109: Descend XCM origin instead of clearing it where possibleRFC-TODO: Stale Nomination Reward Curve diff --git a/approved/0107-xcm-execution-hints.html b/approved/0107-xcm-execution-hints.html index 5ed8746..4ca8758 100644 --- a/approved/0107-xcm-execution-hints.html +++ b/approved/0107-xcm-execution-hints.html @@ -90,7 +90,7 @@ - IntroductionNewly ProposedProposedRFC-0111: Pure Proxy ReplicationRFC-0112: Compress the State Response Message in State SyncRFC-0114: Introduce secp256r1_ecdsa_verify_prehashed Host Function to verify NIST-P256 elliptic curve signaturesRFC-0117: The Unbrick CollectiveRFC-114: Adjust Tipper Track Confirmation PeriodsApprovedRFC-1: Agile CoretimeRFC-5: Coretime InterfaceRFC-0007: System Collator SelectionRFC-0008: Store parachain bootnodes in relay chain DHTRFC-0010: Burn Coretime RevenueRFC-0012: Process for Adding New System CollectivesRFC-0013: Prepare Core runtime API for MBMsRFC-0014: Improve locking mechanism for parachainsRFC-0022: Adopt Encointer RuntimeRFC-0026: Sassafras Consensus ProtocolRFC-0032: Minimal RelayRFC-0042: Add System version that replaces StateVersion on RuntimeVersionRFC-0043: Introduce storage_proof_size Host Function for Improved Parachain Block UtilizationRFC-0045: Lowering NFT Deposits on Asset HubRFC-0047: Assignment of availability chunks to validatorsRFC-0048: Generate ownership proof for SessionKeysRFC-0050: Fellowship SalariesRFC-0056: Enforce only one transaction per notificationRFC-0059: Add a discovery mechanism for nodes based on their capabilitiesRFC-0078: Merkleized MetadataRFC-0084: General transactions in extrinsic formatRFC-0091: DHT Authority discovery record creation timeRFC-0097: Unbonding QueueRFC-0099: Introduce a transaction extension versionRFC-0100: New XCM instruction: InitiateAssetsTransferRFC-0101: XCM Transact remove require_weight_at_most parameterRFC-0103: Introduce a CoreIndex commitment and a SessionIndex field in candidate receiptsRFC-0105: XCM improved fee mechanismRFC-0107: XCM Execution hintsRFC-0108: 
Remove XCM testnet NetworkIdsStaleRFC-0004: Remove the host-side runtime memory allocatorRFC-0006: Dynamic Pricing for Bulk Coretime SalesRFC-0009: Improved light client requests networking protocolRFC-0015: Market Design RevisitRFC-34: XCM Absolute Location Account Derivation RFC-0035: Conviction Voting Delegation ModificationsRFC-0044: Rent based registration modelRFC-0054: Remove the concept of "heap pages" from the clientRFC-0070: X Track for @kusamanetworkRFC-0073: Decision Deposit Referendum TrackRFC-0074: Stateful Multisig PalletRFC-0077: Increase maximum length of identity PGP fingerprint values from 20 bytesRFC-0088: Add slashable locked deposit, purchaser reputation, and reserved cores for on-chain identities to broker palletRFC-0089: Flexible InflationRFC-0001: Secondary Market for RegionsRFC-0002: Smart Contracts on the Coretime ChainRFC-0000: Feature Name HereRFC-0106: Remove XCM fees modeRFC-0109: Descend XCM origin instead of clearing it where possibleRFC-TODO: Stale Nomination Reward Curve + IntroductionNewly ProposedProposedRFC-0004: Remove the host-side runtime memory allocatorRFC-0006: Dynamic Pricing for Bulk Coretime SalesRFC-0009: Improved light client requests networking protocolRFC-0015: Market Design RevisitRFC-34: XCM Absolute Location Account Derivation RFC-0035: Conviction Voting Delegation ModificationsRFC-0044: Rent based registration modelRFC-0054: Remove the concept of "heap pages" from the clientRFC-0070: X Track for @kusamanetworkRFC-0073: Decision Deposit Referendum TrackRFC-0074: Stateful Multisig PalletRFC-0077: Increase maximum length of identity PGP fingerprint values from 20 bytesRFC-0088: Add slashable locked deposit, purchaser reputation, and reserved cores for on-chain identities to broker palletRFC-0089: Flexible InflationRFC-0001: Secondary Market for RegionsRFC-0002: Smart Contracts on the Coretime ChainRFC-0111: Pure Proxy ReplicationRFC-0112: Compress the State Response Message in State SyncRFC-0114: Introduce secp256r1_ecdsa_verify_prehashed Host Function to verify NIST-P256 elliptic curve signaturesRFC-0117: The Unbrick CollectiveRFC-114: Adjust Tipper Track Confirmation PeriodsApprovedRFC-1: Agile CoretimeRFC-5: Coretime InterfaceRFC-0007: System Collator SelectionRFC-0008: Store parachain bootnodes in relay chain DHTRFC-0010: Burn Coretime RevenueRFC-0012: Process for Adding New System CollectivesRFC-0013: Prepare Core runtime API for MBMsRFC-0014: Improve locking mechanism for parachainsRFC-0022: Adopt Encointer RuntimeRFC-0026: Sassafras Consensus ProtocolRFC-0032: Minimal RelayRFC-0042: Add System version that replaces StateVersion on RuntimeVersionRFC-0043: Introduce storage_proof_size Host Function for Improved Parachain Block UtilizationRFC-0045: Lowering NFT Deposits on Asset HubRFC-0047: Assignment of availability chunks to validatorsRFC-0048: Generate ownership proof for SessionKeysRFC-0050: Fellowship SalariesRFC-0056: Enforce only one transaction per notificationRFC-0059: Add a discovery mechanism for nodes based on their capabilitiesRFC-0078: Merkleized MetadataRFC-0084: General transactions in extrinsic formatRFC-0091: DHT Authority discovery record creation timeRFC-0097: Unbonding QueueRFC-0099: Introduce a transaction extension versionRFC-0100: New XCM instruction: InitiateAssetsTransferRFC-0101: XCM Transact remove require_weight_at_most parameterRFC-0103: Introduce a CoreIndex commitment and a SessionIndex field in candidate receiptsRFC-0105: XCM improved fee mechanismRFC-0107: XCM Execution hintsRFC-0108: Remove XCM 
testnet NetworkIdsStaleRFC-0000: Feature Name HereRFC-0106: Remove XCM fees modeRFC-0109: Descend XCM origin instead of clearing it where possibleRFC-TODO: Stale Nomination Reward Curve diff --git a/approved/0108-xcm-remove-testnet-ids.html b/approved/0108-xcm-remove-testnet-ids.html index 31134b2..6341036 100644 --- a/approved/0108-xcm-remove-testnet-ids.html +++ b/approved/0108-xcm-remove-testnet-ids.html @@ -90,7 +90,7 @@ - IntroductionNewly ProposedProposedRFC-0111: Pure Proxy ReplicationRFC-0112: Compress the State Response Message in State SyncRFC-0114: Introduce secp256r1_ecdsa_verify_prehashed Host Function to verify NIST-P256 elliptic curve signaturesRFC-0117: The Unbrick CollectiveRFC-114: Adjust Tipper Track Confirmation PeriodsApprovedRFC-1: Agile CoretimeRFC-5: Coretime InterfaceRFC-0007: System Collator SelectionRFC-0008: Store parachain bootnodes in relay chain DHTRFC-0010: Burn Coretime RevenueRFC-0012: Process for Adding New System CollectivesRFC-0013: Prepare Core runtime API for MBMsRFC-0014: Improve locking mechanism for parachainsRFC-0022: Adopt Encointer RuntimeRFC-0026: Sassafras Consensus ProtocolRFC-0032: Minimal RelayRFC-0042: Add System version that replaces StateVersion on RuntimeVersionRFC-0043: Introduce storage_proof_size Host Function for Improved Parachain Block UtilizationRFC-0045: Lowering NFT Deposits on Asset HubRFC-0047: Assignment of availability chunks to validatorsRFC-0048: Generate ownership proof for SessionKeysRFC-0050: Fellowship SalariesRFC-0056: Enforce only one transaction per notificationRFC-0059: Add a discovery mechanism for nodes based on their capabilitiesRFC-0078: Merkleized MetadataRFC-0084: General transactions in extrinsic formatRFC-0091: DHT Authority discovery record creation timeRFC-0097: Unbonding QueueRFC-0099: Introduce a transaction extension versionRFC-0100: New XCM instruction: InitiateAssetsTransferRFC-0101: XCM Transact remove require_weight_at_most parameterRFC-0103: Introduce a CoreIndex commitment and a SessionIndex field in candidate receiptsRFC-0105: XCM improved fee mechanismRFC-0107: XCM Execution hintsRFC-0108: Remove XCM testnet NetworkIdsStaleRFC-0004: Remove the host-side runtime memory allocatorRFC-0006: Dynamic Pricing for Bulk Coretime SalesRFC-0009: Improved light client requests networking protocolRFC-0015: Market Design RevisitRFC-34: XCM Absolute Location Account Derivation RFC-0035: Conviction Voting Delegation ModificationsRFC-0044: Rent based registration modelRFC-0054: Remove the concept of "heap pages" from the clientRFC-0070: X Track for @kusamanetworkRFC-0073: Decision Deposit Referendum TrackRFC-0074: Stateful Multisig PalletRFC-0077: Increase maximum length of identity PGP fingerprint values from 20 bytesRFC-0088: Add slashable locked deposit, purchaser reputation, and reserved cores for on-chain identities to broker palletRFC-0089: Flexible InflationRFC-0001: Secondary Market for RegionsRFC-0002: Smart Contracts on the Coretime ChainRFC-0000: Feature Name HereRFC-0106: Remove XCM fees modeRFC-0109: Descend XCM origin instead of clearing it where possibleRFC-TODO: Stale Nomination Reward Curve + IntroductionNewly ProposedProposedRFC-0004: Remove the host-side runtime memory allocatorRFC-0006: Dynamic Pricing for Bulk Coretime SalesRFC-0009: Improved light client requests networking protocolRFC-0015: Market Design RevisitRFC-34: XCM Absolute Location Account Derivation RFC-0035: Conviction Voting Delegation ModificationsRFC-0044: Rent based registration modelRFC-0054: Remove the concept of "heap 
pages" from the clientRFC-0070: X Track for @kusamanetworkRFC-0073: Decision Deposit Referendum TrackRFC-0074: Stateful Multisig PalletRFC-0077: Increase maximum length of identity PGP fingerprint values from 20 bytesRFC-0088: Add slashable locked deposit, purchaser reputation, and reserved cores for on-chain identities to broker palletRFC-0089: Flexible InflationRFC-0001: Secondary Market for RegionsRFC-0002: Smart Contracts on the Coretime ChainRFC-0111: Pure Proxy ReplicationRFC-0112: Compress the State Response Message in State SyncRFC-0114: Introduce secp256r1_ecdsa_verify_prehashed Host Function to verify NIST-P256 elliptic curve signaturesRFC-0117: The Unbrick CollectiveRFC-114: Adjust Tipper Track Confirmation PeriodsApprovedRFC-1: Agile CoretimeRFC-5: Coretime InterfaceRFC-0007: System Collator SelectionRFC-0008: Store parachain bootnodes in relay chain DHTRFC-0010: Burn Coretime RevenueRFC-0012: Process for Adding New System CollectivesRFC-0013: Prepare Core runtime API for MBMsRFC-0014: Improve locking mechanism for parachainsRFC-0022: Adopt Encointer RuntimeRFC-0026: Sassafras Consensus ProtocolRFC-0032: Minimal RelayRFC-0042: Add System version that replaces StateVersion on RuntimeVersionRFC-0043: Introduce storage_proof_size Host Function for Improved Parachain Block UtilizationRFC-0045: Lowering NFT Deposits on Asset HubRFC-0047: Assignment of availability chunks to validatorsRFC-0048: Generate ownership proof for SessionKeysRFC-0050: Fellowship SalariesRFC-0056: Enforce only one transaction per notificationRFC-0059: Add a discovery mechanism for nodes based on their capabilitiesRFC-0078: Merkleized MetadataRFC-0084: General transactions in extrinsic formatRFC-0091: DHT Authority discovery record creation timeRFC-0097: Unbonding QueueRFC-0099: Introduce a transaction extension versionRFC-0100: New XCM instruction: InitiateAssetsTransferRFC-0101: XCM Transact remove require_weight_at_most parameterRFC-0103: Introduce a CoreIndex commitment and a SessionIndex field in candidate receiptsRFC-0105: XCM improved fee mechanismRFC-0107: XCM Execution hintsRFC-0108: Remove XCM testnet NetworkIdsStaleRFC-0000: Feature Name HereRFC-0106: Remove XCM fees modeRFC-0109: Descend XCM origin instead of clearing it where possibleRFC-TODO: Stale Nomination Reward Curve @@ -245,7 +245,7 @@ using NetworkId::ByGenesis.
NetworkId::ByGenesis
This book contains the Polkadot Fellowship Requests for Comments (RFCs) detailing proposed changes to the technical implementation of the Polkadot network.
polkadot-fellows/RFCs (source)
RFC-0004: Remove the host-side runtime memory allocator
Update the runtime-host interface to no longer make use of a host-side allocator.
The heap allocation of the runtime is currently controlled by the host using a memory allocator on the host side.
The API of many host functions involves allocating a buffer. For example, when calling ext_hashing_twox_256_version_1, the host allocates a 32-byte buffer using the host allocator and returns a pointer to this buffer to the runtime. The runtime later has to call ext_allocator_free_version_1 on this pointer in order to free the buffer.
Even though no benchmark has been done, it is pretty obvious that this design is very inefficient. To continue with the example of ext_hashing_twox_256_version_1, it would be more efficient to instead write the output hash to a buffer that the runtime allocated on its own stack and passed by pointer to the function. Allocating a buffer on the stack, in the worst case, amounts to decrementing a number, and in the best case is free. Doing so would save many Wasm memory reads and writes by the allocator, and would save a function call to ext_allocator_free_version_1.
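To make the difference concrete, here is a minimal, non-normative Rust sketch of the two calling conventions from the runtime's point of view. The version 2 declaration follows the signatures listed later in this RFC; the wrapper names and the pack_ptr_len helper (packing the pointer into the lower 32 bits and the length into the upper 32 bits of an i64) are illustrative, not definitions from the spec.

```rust
// Illustrative sketch only; wrapper names are not part of the RFC.
fn pack_ptr_len(ptr: u32, len: u32) -> i64 {
    (((len as u64) << 32) | ptr as u64) as i64
}

extern "C" {
    // Version 1: the host allocates the 32-byte output and returns a pointer to it.
    fn ext_hashing_twox_256_version_1(data: i64) -> i32;
    fn ext_allocator_free_version_1(ptr: i32);
    // Version 2 (proposed below): the runtime provides the output buffer itself.
    fn ext_hashing_twox_256_version_2(data: i64, out: i32);
}

// Current convention: copy the hash out of host-allocated memory, then free it.
fn twox_256_v1(data: &[u8]) -> [u8; 32] {
    let mut hash = [0u8; 32];
    unsafe {
        let ptr = ext_hashing_twox_256_version_1(pack_ptr_len(data.as_ptr() as u32, data.len() as u32));
        hash.copy_from_slice(core::slice::from_raw_parts(ptr as usize as *const u8, 32));
        ext_allocator_free_version_1(ptr);
    }
    hash
}

// Proposed convention: the output buffer lives on the runtime's own stack, so
// there is no host allocation and no follow-up free call.
fn twox_256_v2(data: &[u8]) -> [u8; 32] {
    let mut hash = [0u8; 32];
    unsafe {
        ext_hashing_twox_256_version_2(
            pack_ptr_len(data.as_ptr() as u32, data.len() as u32),
            hash.as_mut_ptr() as i32,
        );
    }
    hash
}
```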
Furthermore, the existence of the host-side allocator has become questionable over time. It is implemented in a very naive way, and for determinism and backwards compatibility reasons it needs to be implemented exactly identically in every client implementation. Runtimes make substantial use of heap memory allocations, and each allocation needs to go twice through the runtime <-> host boundary (once for allocating and once for freeing). Moving the allocator to the runtime side, while it would increase the size of the runtime, would be a good idea. But before the host-side allocator can be deprecated, all the host functions that make use of it need to be updated to not use it.
No attempt was made at convincing stakeholders.
This section contains a list of new host functions to introduce.
(func $ext_storage_read_version_2
    (param $key i64) (param $value_out i64) (param $offset i32) (result i64))
(func $ext_default_child_storage_read_version_2
    (param $child_storage_key i64) (param $key i64) (param $value_out i64)
    (param $offset i32) (result i64))
The signature and behaviour of ext_storage_read_version_2 and ext_default_child_storage_read_version_2 are identical to their version 1 counterparts, but the return value has a different meaning. The new functions directly return the number of bytes that were written in the value_out buffer. If the entry doesn't exist, a value of -1 is returned. Given that the host must never write more bytes than the size of the buffer in value_out, and that the size of this buffer is expressed as a 32-bit number, a 64-bit value of -1 is not ambiguous.
The runtime execution stops with an error if value_out is outside of the range of the memory of the virtual machine, even if the size of the buffer is 0 or if the amount of data to write would be 0 bytes.
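As an illustration, the following sketch shows how a runtime-side wrapper could use ext_storage_read_version_2 together with the offset parameter to read a value of unknown length into a runtime-owned buffer. The wrapper names and the pointer-size packing are assumptions made for this example.

#![allow(unused)]
fn main() {
    extern "C" {
        fn ext_storage_read_version_2(key: i64, value_out: i64, offset: i32) -> i64;
    }

    // Lower 32 bits: pointer, upper 32 bits: length.
    fn pointer_size(data: &[u8]) -> i64 {
        (data.as_ptr() as u32 as i64) | ((data.len() as i64) << 32)
    }

    fn storage_get(key: &[u8]) -> Option<Vec<u8>> {
        let mut value = Vec::new();
        let mut chunk = [0u8; 1024];
        loop {
            let written = unsafe {
                ext_storage_read_version_2(
                    pointer_size(key),
                    pointer_size(&chunk),
                    value.len() as i32, // continue where the previous call stopped
                )
            };
            if written == -1 {
                return None; // entry doesn't exist
            }
            value.extend_from_slice(&chunk[..written as usize]);
            if (written as usize) < chunk.len() {
                return Some(value); // the buffer wasn't filled, so the value is complete
            }
        }
    }
}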
(func $ext_storage_next_key_version_2
    (param $key i64) (param $out i64) (return i32))
(func $ext_default_child_storage_next_key_version_2
    (param $child_storage_key i64) (param $key i64) (param $out i64) (return i32))
The behaviour of these functions is identical to their version 1 counterparts. Instead of allocating a buffer, writing the next key to it, and returning a pointer to it, the new version of these functions accepts an out parameter containing a pointer-size to the memory location where the host writes the output. The runtime execution stops with an error if out is outside of the range of the memory of the virtual machine, even if the function wouldn't write anything to out. These functions return the size, in bytes, of the next key, or 0 if there is no next key. If the size of the next key is larger than the buffer in out, the bytes of the key that fit the buffer are written to out and any extra byte that doesn't fit is discarded.
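A hypothetical usage sketch: because a truncated key can be detected by comparing the returned size with the buffer size, a runtime-side wrapper can simply retry with a larger buffer. The names and pointer-size packing below are assumptions for illustration.

#![allow(unused)]
fn main() {
    extern "C" {
        fn ext_storage_next_key_version_2(key: i64, out: i64) -> i32;
    }

    fn pointer_size(data: &[u8]) -> i64 {
        (data.as_ptr() as u32 as i64) | ((data.len() as i64) << 32)
    }

    fn next_key(key: &[u8]) -> Option<Vec<u8>> {
        let mut out = vec![0u8; 64];
        loop {
            let size = unsafe { ext_storage_next_key_version_2(pointer_size(key), pointer_size(&out)) } as usize;
            if size == 0 {
                return None; // no next key
            }
            if size <= out.len() {
                out.truncate(size);
                return Some(out);
            }
            // The key was truncated; grow the buffer and try again.
            out = vec![0u8; size];
        }
    }
}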
Some notes:
(func $ext_hashing_keccak_256_version_2
    (param $data i64) (param $out i32))
(func $ext_hashing_keccak_512_version_2
    (param $data i64) (param $out i32))
(func $ext_hashing_sha2_256_version_2
    (param $data i64) (param $out i32))
(func $ext_hashing_blake2_128_version_2
    (param $data i64) (param $out i32))
(func $ext_hashing_blake2_256_version_2
    (param $data i64) (param $out i32))
(func $ext_hashing_twox_64_version_2
    (param $data i64) (param $out i32))
(func $ext_hashing_twox_128_version_2
    (param $data i64) (param $out i32))
(func $ext_hashing_twox_256_version_2
    (param $data i64) (param $out i32))
(func $ext_trie_blake2_256_root_version_3
    (param $data i64) (param $version i32) (param $out i32))
(func $ext_trie_blake2_256_ordered_root_version_3
    (param $data i64) (param $version i32) (param $out i32))
(func $ext_trie_keccak_256_root_version_3
    (param $data i64) (param $version i32) (param $out i32))
(func $ext_trie_keccak_256_ordered_root_version_3
    (param $data i64) (param $version i32) (param $out i32))
(func $ext_default_child_storage_root_version_3
    (param $child_storage_key i64) (param $out i32))
(func $ext_crypto_ed25519_generate_version_2
    (param $key_type_id i32) (param $seed i64) (param $out i32))
(func $ext_crypto_sr25519_generate_version_2
    (param $key_type_id i32) (param $seed i64) (param $out i32) (return i32))
(func $ext_crypto_ecdsa_generate_version_2
    (param $key_type_id i32) (param $seed i64) (param $out i32) (return i32))
The behaviour of these functions is identical to their version 1 or version 2 counterparts. Instead of allocating a buffer, writing the output to it, and returning a pointer to it, the new version of these functions accepts an out parameter containing the memory location where the host writes the output. The output is always of a size known at compilation time. The runtime execution stops with an error if out is outside of the range of the memory of the virtual machine.
(func $ext_default_child_storage_root_version_3 + (param $child_storage_key i64) (param $out i32)) +(func $ext_storage_root_version_3 + (param $out i32)) +
The behaviour of these functions is identical to their version 1 and version 2 counterparts. Instead of allocating a buffer, writing the output to it, and returning a pointer to it, the new versions of these functions accept an out parameter containing the memory location where the host writes the output. The output is always of a size known at compilation time. The runtime execution stops with an error if out is outside of the range of the memory of the virtual machine.
I have taken the liberty to take the version 1 of these functions as a base rather than the version 2, as a PPP deprecating the version 2 of these functions has previously been accepted: https://github.com/w3f/PPPs/pull/6.
(func $ext_storage_clear_prefix_version_3
    (param $prefix i64) (param $limit i64) (param $removed_count_out i32)
    (return i32))
(func $ext_default_child_storage_clear_prefix_version_3
    (param $child_storage_key i64) (param $prefix i64)
    (param $limit i64) (param $removed_count_out i32) (return i32))
(func $ext_default_child_storage_kill_version_4
    (param $child_storage_key i64) (param $limit i64)
    (param $removed_count_out i32) (return i32))
The behaviour of these functions is identical to their version 2 and 3 counterparts. Instead of allocating a buffer, writing the output to it, and returning a pointer to it, versions 3 and 4 of these functions accept a removed_count_out parameter containing the memory location of an 8-byte buffer where the host writes, in little endian, the number of keys that were removed. The runtime execution stops with an error if removed_count_out is outside of the range of the memory of the virtual machine. The functions return 1 to indicate that there are keys remaining, and 0 to indicate that all keys have been removed.
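For illustration, a small sketch of how the runtime could interpret the two outputs (the helper name is hypothetical):

#![allow(unused)]
fn main() {
    // Interprets the outputs of `ext_storage_clear_prefix_version_3`:
    // the 8-byte little-endian counter written to `removed_count_out`,
    // plus the "keys remaining" return flag.
    fn interpret_clear_prefix(removed_count_out: [u8; 8], ret: i32) -> (u64, bool) {
        let removed = u64::from_le_bytes(removed_count_out);
        let keys_remaining = ret != 0; // 1 = keys remaining, 0 = all keys removed
        (removed, keys_remaining)
    }
}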
Note that there is an alternative proposal to add new host functions with the same names: https://github.com/w3f/PPPs/pull/7. This alternative doesn't conflict with this one except for the version number. One proposal or the other will have to use versions 4 and 5 rather than 3 and 4.
(func $ext_crypto_ed25519_sign_version_2
    (param $key_type_id i32) (param $key i32) (param $msg i64) (param $out i32) (return i32))
(func $ext_crypto_sr25519_sign_version_2
    (param $key_type_id i32) (param $key i32) (param $msg i64) (param $out i32) (return i32))
(func $ext_crypto_ecdsa_sign_version_2
    (param $key_type_id i32) (param $key i32) (param $msg i64) (param $out i32) (return i32))
(func $ext_crypto_ecdsa_sign_prehashed_version_2
    (param $key_type_id i32) (param $key i32) (param $msg i64) (param $out i32) (return i64))
The behaviour of these functions is identical to their version 1 counterparts. The new versions of these functions accept an out parameter containing the memory location where the host writes the signature. The runtime execution stops with an error if out is outside of the range of the memory of the virtual machine, even if the function wouldn't write anything to out. The signatures are always of a size known at compilation time. On success, these functions return 0. If the public key can't be found in the keystore, these functions return 1 and do not write anything to out.
Note that the return value is 0 on success and 1 on failure, while the previous versions of these functions wrote 1 on success (as it represents a SCALE-encoded Some) and 0 on failure (as it represents a SCALE-encoded None). Returning 0 on success and non-zero on failure is consistent with common practice in the C programming language and is less surprising than the opposite.
(func $ext_crypto_secp256k1_ecdsa_recover_version_3
    (param $sig i32) (param $msg i32) (param $out i32) (return i64))
(func $ext_crypto_secp256k1_ecdsa_recover_compressed_version_3
    (param $sig i32) (param $msg i32) (param $out i32) (return i64))
The behaviour of these functions is identical to their version 2 counterparts. The new versions of these functions accept an out parameter containing the memory location where the host writes the recovered public key. The runtime execution stops with an error if out is outside of the range of the memory of the virtual machine, even if the function wouldn't write anything to out. The public keys are always of a size known at compilation time. On success, these functions return 0. On failure, these functions return a non-zero value and do not write anything to out.
The non-zero value written on failure is:
These values are equal to the values returned on error by the version 2 (see https://spec.polkadot.network/chap-host-api#defn-ecdsa-verify-error), but incremented by 1 in order to reserve 0 for success.
(func $ext_crypto_ed25519_num_public_keys_version_1
    (param $key_type_id i32) (return i32))
(func $ext_crypto_ed25519_public_key_version_2
    (param $key_type_id i32) (param $key_index i32) (param $out i32))
(func $ext_crypto_sr25519_num_public_keys_version_1
    (param $key_type_id i32) (return i32))
(func $ext_crypto_sr25519_public_key_version_2
    (param $key_type_id i32) (param $key_index i32) (param $out i32))
(func $ext_crypto_ecdsa_num_public_keys_version_1
    (param $key_type_id i32) (return i32))
(func $ext_crypto_ecdsa_public_key_version_2
    (param $key_type_id i32) (param $key_index i32) (param $out i32))
These functions supersede the ext_crypto_ed25519_public_key_version_1, ext_crypto_sr25519_public_key_version_1, and ext_crypto_ecdsa_public_key_version_1 host functions.
Instead of calling ext_crypto_ed25519_public_key_version_1 in order to obtain the list of all keys at once, the runtime should instead call ext_crypto_ed25519_num_public_keys_version_1 in order to obtain the number of public keys available, then call ext_crypto_ed25519_public_key_version_2 repeatedly. The ext_crypto_ed25519_public_key_version_2 function writes the public key of the given key_index to the memory location designated by out. The key_index must be between 0 (included) and n (excluded), where n is the value returned by ext_crypto_ed25519_num_public_keys_version_1. Execution must trap if key_index is out of range.
The same explanations apply to the sr25519 and ecdsa counterparts of these functions.
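A hypothetical sketch of the resulting enumeration pattern for ed25519 keys (the extern declarations, the 4-byte key type representation, and the pointer casts are assumptions for illustration; ed25519 public keys are 32 bytes):

#![allow(unused)]
fn main() {
    extern "C" {
        fn ext_crypto_ed25519_num_public_keys_version_1(key_type_id: i32) -> i32;
        fn ext_crypto_ed25519_public_key_version_2(key_type_id: i32, key_index: i32, out: i32);
    }

    fn ed25519_public_keys(key_type_id: &[u8; 4]) -> Vec<[u8; 32]> {
        let key_type_ptr = key_type_id.as_ptr() as u32 as i32;
        let n = unsafe { ext_crypto_ed25519_num_public_keys_version_1(key_type_ptr) };
        let mut keys = Vec::with_capacity(n as usize);
        for key_index in 0..n {
            let mut out = [0u8; 32];
            unsafe {
                ext_crypto_ed25519_public_key_version_2(key_type_ptr, key_index, out.as_mut_ptr() as u32 as i32)
            };
            keys.push(out);
        }
        keys
    }
}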
Host implementers should be aware that the list of public keys (including their ordering) must not change while the runtime is running. This is most likely done by copying the list of all available keys either at the start of the execution or the first time the list is accessed.
(func $ext_offchain_http_request_start_version_2
    (param $method i64) (param $uri i64) (param $meta i64) (result i32))
The behaviour of this function is identical to its version 1 counterpart. Instead of allocating a buffer, writing the request identifier to it, and returning a pointer to it, version 2 of this function simply returns the newly-assigned identifier of the HTTP request. On failure, this function returns -1. An identifier of -1 is invalid and is reserved to indicate failure.
(func $ext_offchain_http_request_write_body_version_2
    (param $request_id i32) (param $chunk i64) (param $deadline i64) (result i32))
(func $ext_offchain_http_response_read_body_version_2
    (param $request_id i32) (param $buffer i64) (param $deadline i64) (result i64))
The behaviour of these functions is identical to their version 1 counterparts. Instead of allocating a buffer, writing two bytes to it, and returning a pointer to it, the new versions of these functions simply indicate what happened:
These values are equal to the values returned on error by the version 1 (see https://spec.polkadot.network/chap-host-api#defn-http-error), but tweaked in order to reserve positive numbers for success.
When it comes to ext_offchain_http_response_read_body_version_2, host implementers must not read too much data at once in order not to create ambiguity in the returned value. Given that the size of the buffer is always at most 4 GiB, this is not a problem.
(func $ext_offchain_http_response_wait_version_2
    (param $ids i64) (param $deadline i64) (param $out i32))
The behaviour of this function is identical to its version 1 counterpart. Instead of allocating a buffer, writing the output to it, and returning a pointer to it, the new version of this function accepts an out parameter containing the memory location where the host writes the output. The runtime execution stops with an error if out is outside of the range of the memory of the virtual machine.
The encoding of the response code is also modified compared to its version 1 counterpart and each response code now encodes to 4 little endian bytes as described below:
The buffer passed to out must always have a size of 4 * n, where n is the number of elements in ids.
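For illustration, a sketch of splitting that output buffer back into one response code per requested id (the helper name is hypothetical; the meaning of each code is as specified in the encoding above):

#![allow(unused)]
fn main() {
    // Decodes the buffer filled by `ext_offchain_http_response_wait_version_2`:
    // one little-endian u32 per request id, in the same order as `ids`.
    fn decode_response_codes(out: &[u8]) -> Vec<u32> {
        out.chunks_exact(4)
            .map(|chunk| u32::from_le_bytes([chunk[0], chunk[1], chunk[2], chunk[3]]))
            .collect()
    }
}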
(func $ext_offchain_http_response_header_name_version_1
    (param $request_id i32) (param $header_index i32) (param $out i64) (result i64))
(func $ext_offchain_http_response_header_value_version_1
    (param $request_id i32) (param $header_index i32) (param $out i64) (result i64))
These functions supersede the ext_offchain_http_response_headers_version_1 host function.
Contrary to ext_offchain_http_response_headers_version_1, only one header indicated by header_index can be read at a time. Instead of calling ext_offchain_http_response_headers_version_1 once, the runtime should call ext_offchain_http_response_header_name_version_1 and ext_offchain_http_response_header_value_version_1 multiple times with an increasing header_index, until a value of -1 is returned.
These functions accept an out parameter containing a pointer-size to the memory location where the header name or value should be written. The runtime execution stops with an error if out is outside of the range of the memory of the virtual machine, even if the function wouldn't write anything to out.
These functions return the size, in bytes, of the header name or header value. If the request doesn't exist or is in an invalid state (as documented for ext_offchain_http_response_headers_version_1), or the header_index is out of range, a value of -1 is returned. Given that the host must never write more bytes than the size of the buffer in out, and that the size of this buffer is expressed as a 32-bit number, a 64-bit value of -1 is not ambiguous.
If the buffer in out is too small to fit the entire header name or value, only the bytes that fit are written and the rest are discarded.
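A hypothetical sketch of the resulting iteration pattern, using a fixed buffer and stopping at the first -1 (the names, buffer size, and pointer-size packing are assumptions for illustration):

#![allow(unused)]
fn main() {
    extern "C" {
        fn ext_offchain_http_response_header_name_version_1(request_id: i32, header_index: i32, out: i64) -> i64;
        fn ext_offchain_http_response_header_value_version_1(request_id: i32, header_index: i32, out: i64) -> i64;
    }

    fn pointer_size(data: &[u8]) -> i64 {
        (data.as_ptr() as u32 as i64) | ((data.len() as i64) << 32)
    }

    fn response_headers(request_id: i32) -> Vec<(Vec<u8>, Vec<u8>)> {
        let mut headers = Vec::new();
        let mut buf = [0u8; 256];
        let mut header_index = 0;
        loop {
            let name_len = unsafe {
                ext_offchain_http_response_header_name_version_1(request_id, header_index, pointer_size(&buf))
            };
            if name_len == -1 {
                break; // no more headers, or the request is in an invalid state
            }
            let name = buf[..(name_len as usize).min(buf.len())].to_vec();
            let value_len = unsafe {
                ext_offchain_http_response_header_value_version_1(request_id, header_index, pointer_size(&buf))
            };
            let value = if value_len == -1 {
                Vec::new()
            } else {
                buf[..(value_len as usize).min(buf.len())].to_vec()
            };
            headers.push((name, value));
            header_index += 1;
        }
        headers
    }
}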
(func $ext_offchain_submit_transaction_version_2
    (param $data i64) (return i32))
(func $ext_offchain_http_request_add_header_version_2
    (param $request_id i32) (param $name i64) (param $value i64) (result i32))
Instead of allocating a buffer, writing 1 or 0 in it, and returning a pointer to it, the version 2 of these functions return 0 or 1, where 0 indicates success and 1 indicates failure. The runtime must interpret any non-0 value as failure, but the client must always return 1 in case of failure.
(func $ext_offchain_local_storage_read_version_1
    (param $kind i32) (param $key i64) (param $value_out i64) (param $offset i32) (result i64))
This function supersedes the ext_offchain_local_storage_get_version_1 host function, and uses an API and logic similar to ext_storage_read_version_2.
It reads the offchain local storage key indicated by kind and key starting at the byte indicated by offset, and writes the value to the pointer-size indicated by value_out.
The function returns the number of bytes that were written in the value_out buffer. If the entry doesn't exist, a value of -1 is returned. Given that the host must never write more bytes than the size of the buffer in value_out, and that the size of this buffer is expressed as a 32-bit number, a 64-bit value of -1 is not ambiguous.
(func $ext_offchain_network_peer_id_version_1
    (param $out i64))
This function writes the PeerId of the local node to the memory location indicated by out. A PeerId is always 38 bytes long. The runtime execution stops with an error if out is outside of the range of the memory of the virtual machine.
(func $ext_input_size_version_1
    (return i64))
(func $ext_input_read_version_1
    (param $offset i64) (param $out i64))
When a runtime function is called, the host uses the allocator to allocate memory within the runtime into which it writes the input data. These two new host functions provide an alternative way to access the input that doesn't make use of the allocator.
The ext_input_size_version_1 host function returns the size in bytes of the input data.
The ext_input_read_version_1 host function copies some data from the input data to the memory of the runtime. The offset parameter indicates the offset within the input data at which to start copying, and must be less than or equal to the value returned by ext_input_size_version_1. The out parameter is a pointer-size containing the buffer to write to. The runtime execution stops with an error if offset is strictly greater than the size of the input data, or if out is outside of the range of the memory of the virtual machine, even if the amount of data to copy would be 0 bytes.
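As an illustration, a runtime could obtain the whole input with a sketch like the following (the extern declarations and the pointer-size packing are assumptions for illustration):

#![allow(unused)]
fn main() {
    extern "C" {
        fn ext_input_size_version_1() -> i64;
        fn ext_input_read_version_1(offset: i64, out: i64);
    }

    fn pointer_size(data: &[u8]) -> i64 {
        (data.as_ptr() as u32 as i64) | ((data.len() as i64) << 32)
    }

    fn read_input() -> Vec<u8> {
        let size = unsafe { ext_input_size_version_1() } as usize;
        let buffer = vec![0u8; size];
        // Copy the whole input in one call, starting at offset 0.
        unsafe { ext_input_read_version_1(0, pointer_size(&buffer)) };
        buffer
    }
}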
In addition to the new host functions, this RFC proposes two changes to the runtime-host interface:
Runtime entry points may additionally use the signature (func (result i64)), since the input data can now be obtained through ext_input_size_version_1 and ext_input_read_version_1 rather than being passed as parameters.
The __heap_base export of the runtime is no longer used, as the host no longer allocates memory within the runtime.
All the host functions that are being superseded by new host functions are now considered deprecated and should no longer be used. The following other host functions are similarly also considered deprecated:
ext_storage_get_version_1
ext_default_child_storage_get_version_1
ext_allocator_malloc_version_1
ext_offchain_network_state_version_1
This RFC might be difficult to implement in Substrate due to the internal code design. It is not clear to the author of this RFC how difficult it would be.
The API of these new functions was heavily inspired by APIs commonly used in the C programming language.
The changes in this RFC would need to be benchmarked. This involves implementing the RFC and measuring the speed difference.
It is expected that most host functions are faster or equal speed to their deprecated counterparts, with the following exceptions:
ext_input_size_version_1/ext_input_read_version_1 is inherently slower than obtaining a buffer with the entire data due to the two extra function calls and the extra copying. However, given that this only happens once per runtime call, the cost is expected to be negligible.
The ext_crypto_*_public_keys, ext_offchain_network_state, and ext_offchain_http_* host functions are likely slightly slower than their deprecated counterparts, but given that they are used only in offchain workers this is acceptable.
It is unclear how replacing ext_storage_get with ext_storage_read and ext_default_child_storage_get with ext_default_child_storage_read will impact performance.
It is unclear how the changes to ext_storage_next_key and ext_default_child_storage_next_key will impact performance.
After this RFC, we could remove the allocator from the host altogether in a future version, by removing support for all the deprecated host functions. This would remove the possibility to synchronize older blocks, which is probably controversial and requires some preparation that is out of scope of this RFC.
This RFC proposes a dynamic pricing model for the sale of Bulk Coretime on the Polkadot UC. The proposed model updates the regular price of cores for each sale period, taking into account the number of cores sold in the previous sale as well as a limit of cores and a target number of cores sold. It ensures a minimum price and limits price growth to a maximum price increase factor, while also giving governance control over the steepness of the price change curve. It allows governance to address challenges arising from changing market conditions and should offer predictable and controlled price adjustments.
Accompanying visualizations are provided at [1].
RFC-1 proposes periodic Bulk Coretime Sales as a mechanism to sell continuous regions of blockspace (suggested to be 4 weeks in length). A number of Blockspace Regions (compare RFC-1 & RFC-3) are provided for sale to the Broker-Chain each period and shall be sold in a way that provides value-capture for the Polkadot network. The exact pricing mechanism is out of scope for RFC-1 and shall be provided by this RFC.
A dynamic pricing model is needed. A limited number of Regions are offered for sale each period. The model needs to find the price for a period based on supply and demand of the previous period.
The model shall give Coretime consumers predictability about upcoming price developments and confidence that Polkadot governance can adapt the pricing model to changing market conditions.
The primary stakeholders of this RFC are:
The dynamic pricing model sets the new price based on supply and demand in the previous period. The model is a function of the number of Regions sold, piecewise-defined by two power functions.
The curve of the function forms a plateau around the target and then falls off to the left and rises up to the right. The shape of the plateau can be controlled via a scale factor for the left side and right side of the function respectively.
From here on, we will also refer to Regions sold as 'cores' to stay congruent with RFC-1.
BULK_LIMIT: 0 < BULK_LIMIT
BULK_TARGET: 0 < BULK_TARGET <= BULK_LIMIT
MIN_PRICE: 0 < MIN_PRICE
MAX_PRICE_INCREASE_FACTOR: 1 < MAX_PRICE_INCREASE_FACTOR
SCALE_DOWN: 0 < SCALE_DOWN
SCALE_UP: 0 < SCALE_UP
P(n) = \begin{cases}
(P_{\text{old}} - P_{\text{min}}) \left(1 - \left(\frac{T - n}{T}\right)^d\right) + P_{\text{min}} & \text{if } n \leq T \\
((F - 1) \cdot P_{\text{old}} \cdot \left(\frac{n - T}{L - T}\right)^u) + P_{\text{old}} & \text{if } n > T
\end{cases}
Here, $n$ is the number of cores sold in the previous period (cores_sold), $T$ is BULK_TARGET, $L$ is BULK_LIMIT, $P_{\text{old}}$ is the price of the previous period (old_price), $P_{\text{min}}$ is MIN_PRICE, $F$ is MAX_PRICE_INCREASE_FACTOR, $d$ is SCALE_DOWN, and $u$ is SCALE_UP.
The left side is a power function that describes an increasing, concave-downward curve that approaches old_price. We achieve this by using the form $y = a(1 - x^d)$, usually used as a downward-sloping curve, but in our case flipped horizontally by letting the argument $x = \frac{T-n}{T}$ decrease with $n$, inverting the curve twice.
This approach is chosen over a decaying exponential because it lets us better control the shape of the plateau, in particular allowing us to get a straight line by setting SCALE_DOWN to $1$.
The right side is a power function of the form $y = a(x^u)$.
NEW_PRICE := IF CORES_SOLD <= BULK_TARGET THEN
    (OLD_PRICE - MIN_PRICE) * (1 - ((BULK_TARGET - CORES_SOLD)^SCALE_DOWN / BULK_TARGET^SCALE_DOWN)) + MIN_PRICE
ELSE
    ((MAX_PRICE_INCREASE_FACTOR - 1) * OLD_PRICE * ((CORES_SOLD - BULK_TARGET)^SCALE_UP / (BULK_LIMIT - BULK_TARGET)^SCALE_UP)) + OLD_PRICE
END IF
We introduce MIN_PRICE to control the minimum price.
The left side of the function shall be allowed to come close to 0 if cores sold approaches 0. The rationale is that if there are actually 0 cores sold, the previous sale price was too high and the price needs to adapt quickly.
If the number of cores is close to BULK_TARGET, less extreme price changes might be sensible. This ensures that a drop in sold cores or an increase doesn’t lead to immediate price changes, but rather slowly adapts. Only if more extreme changes in the number of sold cores occur, does the price slope increase.
We introduce SCALE_DOWN and SCALE_UP to control for the steepness of the left and the right side of the function respectively.
We introduce MAX_PRICE_INCREASE_FACTOR as the factor that controls how much the price may increase from one period to another.
Introducing this variable gives governance an additional control lever and avoids the necessity for a future runtime upgrade.
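The following Rust sketch transcribes the pseudocode above one-to-one, using f64 parameters purely for illustration; it is not a normative implementation.

#![allow(unused)]
fn main() {
    struct PriceParams {
        bulk_target: f64,
        bulk_limit: f64,
        min_price: f64,
        max_price_increase_factor: f64,
        scale_down: f64,
        scale_up: f64,
    }

    // Computes NEW_PRICE from OLD_PRICE and CORES_SOLD, following the pseudocode above.
    fn new_price(p: &PriceParams, old_price: f64, cores_sold: f64) -> f64 {
        if cores_sold <= p.bulk_target {
            (old_price - p.min_price)
                * (1.0 - ((p.bulk_target - cores_sold).powf(p.scale_down) / p.bulk_target.powf(p.scale_down)))
                + p.min_price
        } else {
            (p.max_price_increase_factor - 1.0) * old_price
                * ((cores_sold - p.bulk_target).powf(p.scale_up) / (p.bulk_limit - p.bulk_target).powf(p.scale_up))
                + old_price
        }
    }
}

With the baseline parameters of the first example below, selling exactly BULK_TARGET cores keeps the price at OLD_PRICE, while selling BULK_LIMIT cores multiplies it by MAX_PRICE_INCREASE_FACTOR.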
This example proposes the baseline parameters. If not mentioned otherwise, other examples use these values.
The minimum price of a core is 1 DOT, and the price can double every 4 weeks. Price change around BULK_TARGET is dampened slightly.
BULK_TARGET = 30
BULK_LIMIT = 45
MIN_PRICE = 1
MAX_PRICE_INCREASE_FACTOR = 2
SCALE_DOWN = 2
SCALE_UP = 2
OLD_PRICE = 1000
We might want to have a more aggressive price growth, allowing the price to triple every 4 weeks and have a linear increase in price on the right side.
BULK_TARGET = 30
BULK_LIMIT = 45
MIN_PRICE = 1
MAX_PRICE_INCREASE_FACTOR = 3
SCALE_DOWN = 2
SCALE_UP = 1
OLD_PRICE = 1000
If governance considers the risk that a sudden surge in DOT price might price chains out from bulk coretime markets, it can ensure the model quickly reacts to a quick drop in demand, by setting 0 < SCALE_DOWN < 1 and setting the max price increase factor more conservatively.
BULK_TARGET = 30
BULK_LIMIT = 45
MIN_PRICE = 1
MAX_PRICE_INCREASE_FACTOR = 1.5
SCALE_DOWN = 0.5
SCALE_UP = 2
OLD_PRICE = 1000
By setting both scaling factors to 1 and potentially adapting the max price increase factor, we can achieve a linear function.
BULK_TARGET = 30
BULK_LIMIT = 45
MIN_PRICE = 1
MAX_PRICE_INCREASE_FACTOR = 1.5
SCALE_DOWN = 1
SCALE_UP = 1
OLD_PRICE = 1000
None at present.
This pricing model is based on the requirements from the basic linear solution proposed in RFC-1, which is a simple dynamic pricing model intended only as a proof of concept. The present model adds additional considerations to make it more adaptable to real market conditions.
This RFC, if accepted, shall be implemented in conjunction with RFC-1.
Improve the networking messages that query storage items from the remote, in order to reduce the bandwidth usage and number of round trips of light clients.
Clients on the Polkadot peer-to-peer network can be divided into two categories: full nodes and light clients. So-called full nodes are nodes that store the content of the chain locally on their disk, while light clients are nodes that don't. In order to access for example the balance of an account, a full node can do a disk read, while a light client needs to send a network message to a full node and wait for the full node to reply with the desired value. This reply is in the form of a Merkle proof, which makes it possible for the light client to verify the exactness of the value.
Unfortunately, this network protocol is suffering from some issues:
Once Polkadot and Kusama have transitioned to state_version = 1, which modifies the format of the trie entries, it will be possible to generate Merkle proofs that contain only the hashes of values in the storage. Thanks to this, it is already possible to prove the existence of a key without sending its entire value (only its hash), or to prove that a value has or hasn't changed between two blocks (by sending just their hashes). Thus, the only reason why the aforementioned issues exist is that the existing networking messages don't give the querier the possibility to request this. This is what this proposal aims to fix.
This is the continuation of https://github.com/w3f/PPPs/pull/10, which itself is the continuation of https://github.com/w3f/PPPs/pull/5.
The protobuf schema of the networking protocol can be found here: https://github.com/paritytech/substrate/blob/5b6519a7ff4a2d3cc424d78bc4830688f3b184c0/client/network/light/src/schema/light.v1.proto
The proposal is to modify this protocol in this way:
@@ -11,6 +11,7 @@ message Request {
     RemoteReadRequest remote_read_request = 2;
     RemoteReadChildRequest remote_read_child_request = 4;
     // Note: ids 3 and 5 were used in the past. It would be preferable to not re-use them.
+    RemoteReadRequestV2 remote_read_request_v2 = 6;
   }
 }
 
@@ -48,6 +49,21 @@ message RemoteReadRequest {
   repeated bytes keys = 3;
 }
 
+message RemoteReadRequestV2 {
+  required bytes block = 1;
+  optional ChildTrieInfo child_trie_info = 2; // Read from the main trie if missing.
+  repeated Key keys = 3;
+  optional bytes onlyKeysAfter = 4;
+  optional bool onlyKeysAfterIgnoreLastNibble = 5;
+}
+
+message ChildTrieInfo {
+  enum ChildTrieNamespace {
+    DEFAULT = 1;
+  }
+
+  required bytes hash = 1;
+  required ChildTrieNamespace namespace = 2;
+}
+
 // Remote read response.
 message RemoteReadResponse {
   // Read proof. If missing, indicates that the remote couldn't answer, for example because
@@ -65,3 +81,8 @@ message RemoteReadChildRequest {
   // Storage keys.
   repeated bytes keys = 6;
 }
+
+message Key {
+  required bytes key = 1;
+  optional bool skipValue = 2; // Defaults to `false` if missing
+  optional bool includeDescendants = 3; // Defaults to `false` if missing
+}
Note that the field names aren't very important as they are not sent over the wire. They can be changed at any time without any consequence. I would invite people to not discuss these field names as they are implementation details.
This diff adds a new type of request (RemoteReadRequestV2).
The new child_trie_info field in the request makes it possible to specify which trie is concerned by the request. The current networking protocol uses two different structs (RemoteReadRequest and RemoteReadChildRequest) for main trie and child trie queries, while this new request makes it possible to query either. This change doesn't fix any of the issues mentioned in the previous section, but is a side change made for simplicity. An alternative could have been to specify the child_trie_info for each individual Key. However, this would make it necessary to send the child trie hash many times over the network, which wastes bandwidth, and in my opinion makes things more complicated for no actual gain. If a querier would like to access more than one trie at the same time, it is always possible to send one query per trie.
If skipValue is true for a Key, then the value associated with this key isn't important to the querier, and the replier is encouraged to replace the value with its hash provided that the storage item has a state_version equal to 1. If the storage value has a state_version equal to 0, then the optimization isn't possible and the replier should behave as if skipValue was false.
If includeDescendants is true for a Key, then the replier must also include in the proof all keys that are descendant of the given key (in other words, its children, children of children, children of children of children, etc.). It must do so even if key itself doesn't have any storage value associated to it. The values of all of these descendants are replaced with their hashes if skipValue is true, similarly to key itself.
The optional onlyKeysAfter and onlyKeysAfterIgnoreLastNibble fields can provide a lower bound for the keys contained in the proof. The responder must not include in its proof any node whose key is strictly less than the value in onlyKeysAfter. If onlyKeysAfterIgnoreLastNibble is provided, then the last 4 bits of onlyKeysAfter must be ignored. This makes it possible to represent a trie branch node whose key doesn't have an even number of nibbles. If onlyKeysAfter is missing, it is equivalent to being empty, meaning that the response must start with the root node of the trie.
If onlyKeysAfterIgnoreLastNibble is missing, it is equivalent to false. If onlyKeysAfterIgnoreLastNibble is true and onlyKeysAfter is missing or empty, then the request is invalid.
For the purpose of this networking protocol, it should be considered as if the main trie contained an entry for each default child trie whose key is concat(":child_storage:default:", child_trie_hash) and whose value is equal to the trie root hash of that default child trie. This behavior is consistent with what the host functions observe when querying the storage. This behavior is already present in the existing networking protocol; in other words, this proposal doesn't change anything in that regard, but it is worth mentioning. Also note that child tries aren't considered as descendants of the main trie when it comes to the includeDescendants flag. In other words, if the request concerns the main trie, no content coming from child tries is ever sent back.
concat(":child_storage:default:", child_trie_hash)
This protocol keeps the same maximum response size limit as currently exists (16 MiB). It is not possible for the querier to know in advance whether its query will lead to a reply that exceeds the maximum size. If the reply is too large, the replier should send back only a limited number (but at least one) of requested items in the proof. The querier should then send additional requests for the rest of the items. A response containing none of the requested items is invalid.
The server is allowed to silently discard some keys of the request if it judges that the number of requested keys is too high. This is in line with the fact that the server might truncate the response.
This proposal doesn't handle one specific situation: what if a proof containing a single specific item would exceed the response size limit? For example, if the response size limit were 1 MiB, querying the runtime code (which is typically 1.0 to 1.5 MiB) would be impossible, as it's impossible to generate a proof smaller than 1 MiB. The response size limit is currently 16 MiB, so this is only a problem if a single storage item exceeds 16 MiB.
Unfortunately, because it's impossible to verify a Merkle proof before having received it entirely, parsing the proof in a streaming way is also not possible.
A way to solve this issue would be to Merkle-ize large storage items, so that a proof could include only a portion of a large storage item. Since this would require a change to the trie format, it is not realistically feasible in a short time frame.
The main security consideration concerns the size of replies and the resources necessary to generate them. It is for example easily possible to ask for all keys and values of the chain, which would take a very long time to generate. Since responses to this networking protocol have a maximum size, the replier should truncate proofs that would lead to the response being too large. Note that it is already possible to send a query that would lead to a very large reply with the existing network protocol. The only thing that this proposal changes is that it would make it less complicated to perform such an attack.
Implementers of the replier side should be careful to detect early on when a reply would exceed the maximum reply size, rather than unconditionally generating a reply, as this could take a very large amount of CPU, disk I/O, and memory. Existing implementations might currently be accidentally protected from such an attack thanks to the fact that requests have a maximum size, and thus that the list of keys in the query was bounded. After this proposal, this accidental protection would no longer exist.
Malicious server nodes might truncate Merkle proofs even when they don't strictly need to, and it is not possible for the client to (easily) detect this situation. However, malicious server nodes can already do undesirable things such as throttle down their upload bandwidth or simply not respond. There is no need to handle unnecessarily truncated Merkle proofs any differently than a server simply not answering the request.
It is unclear to the author of the RFC what the performance implications are. Servers are supposed to have limits to the amount of resources they use to respond to requests, and as such the worst that can happen is that light client requests become a bit slower than they currently are.
Irrelevant.
The prior networking protocol is maintained for now. The older version of this protocol could be removed in the distant future.
None. This RFC is a clean-up of an existing mechanism.
The current networking protocol could be deprecated in the distant future. Additionally, the current "state requests" protocol (used for warp syncing) could also be deprecated in favor of this one.
This document is a proposal for restructuring the bulk markets in the Polkadot UC's coretime allocation system to improve efficiency and fairness. The proposal suggests separating the BULK_PERIOD into MARKET_PERIOD and RENEWAL_PERIOD, allowing for a market-driven price discovery through a clearing price Dutch auction during the MARKET_PERIOD followed by renewal offers at the MARKET_PRICE during the RENEWAL_PERIOD. The new system ensures synchronicity between renewal and market prices, fairness among all current tenants, and efficient price discovery, while preserving price caps to provide security for current tenants. It seeks to start a discussion about the possibility of long-term leases.
While the initial RFC-1 has provided a robust framework for Coretime allocation within the Polkadot UC, this proposal builds upon its strengths and uses many provided building blocks to address some areas that could be further improved.
In particular, this proposal introduces the following changes:
The premise of this proposal is to reduce complexity by introducing a common price (that develops relative to the capacity consumption of the Polkadot UC), while still allowing market forces to add efficiency. Long-term lease owners still receive priority IF they can pay (close to) the market price. This prevents a situation where the renewal price significantly diverges from market prices, which would allow core capture. While maximum price increase certainty might seem contradictory to efficient price discovery, the proposed model aims to balance these elements, utilizing market forces to determine the price and allocate cores effectively within certain bounds. It must be stated that potential price increases remain predictable (in the worst case) but could be higher than in the originally proposed design. The argument remains, however, that we need to allow market forces to affect all prices for efficient Coretime pricing and allocation.
Ultimately, the framework proposed here adheres to all requirements stated in RFC-1.
Primary stakeholder sets are:
The BULK_PERIOD has been restructured into two primary segments: the MARKET_PERIOD and RENEWAL_PERIOD, along with an auxiliary SETTLEMENT_PERIOD. This latter period doesn't necessitate any actions from the coretime system chain, but it facilitates a more efficient allocation of coretime in secondary markets. A significant departure from the original proposal lies in the timing of renewals, which now occur post-market phase. This adjustment aims to harmonize renewal prices with their market counterparts, ensuring a more consistent and equitable pricing model.
During the market period, core sales are conducted through a well-established clearing price Dutch auction that features a RESERVE_PRICE. The price initiates at a premium, designated as PRICE_PREMIUM (for instance, 30%) and descends linearly to the RESERVE_PRICE throughout the duration of the MARKET_PERIOD. Each bidder is expected to submit both their desired price and the quantity (that is, the amount of Coretime) they wish to purchase. To secure these acquisitions, bidders must make a deposit equivalent to their bid multiplied by the chosen quantity, in DOT.
The market resolves once all quantities have been sold or the RESERVE_PRICE has been reached. The MARKET_PRICE is then determined either by the lowest bid that was successful in clearing the entire market or by the RESERVE_PRICE. This mechanism yields a uniform price shaped by market forces; in other words, all buyers pay the same price per unit of Coretime. The benefits of this variant of a Dutch auction are discussed further down.
Note: In cases where some cores remain unsold in the market, all buyers are obligated to pay the RESERVE_PRICE.
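A minimal sketch of this clearing rule, assuming bids are submitted as (price, quantity) pairs and that BULK_LIMIT cores are offered; illustrative only, not a normative implementation.

#![allow(unused)]
fn main() {
    fn market_price(reserve_price: u128, bulk_limit: u32, mut bids: Vec<(u128, u32)>) -> u128 {
        // Highest bids are served first as the auction price descends.
        bids.sort_by(|a, b| b.0.cmp(&a.0));
        let mut remaining = bulk_limit;
        let mut clearing_price = reserve_price;
        for (price, quantity) in bids {
            if remaining == 0 {
                break;
            }
            // The lowest bid needed to clear the market sets the uniform price.
            clearing_price = price;
            remaining = remaining.saturating_sub(quantity);
        }
        if remaining > 0 {
            // Some cores remained unsold: all buyers pay the reserve price.
            reserve_price
        } else {
            clearing_price.max(reserve_price)
        }
    }
}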
As the RENEWAL_PERIOD commences, all current tenants are granted the opportunity to renew their cores at a slight discount of MARKET_PRICE * RENEWAL_DISCOUNT (for instance, 10%). This provision affords marginal benefits to existing tenants, balancing out the non-transferability aspect of renewals.
At the end of the period, all available cores are allocated to the current tenants who have opted for renewal and to the participants who placed bids during the market period. If the demand for cores exceeds supply, the cores left unclaimed by renewals may be awarded to bidders who placed their bids early in the auction, thereby subtly incentivizing early participation. If supply exceeds demand, all unsold cores are transferred to the Instantaneous Market.
After all cores are allocated, the RESERVE_PRICE is adjusted following the process described in RFC-1 and serves as baseline price in the next BULK_PERIOD.
Note: The particular price curve is outside the scope of the proposal. The MARKET_PRICE (as a function of RESERVE_PRICE), however, is able to capture higher demand very well while being capped downwards. That means, the curve that adjusts the RESERVE_PRICE should be more sensitive to undercapacity.
Tasks that are in the "renewal pipeline" can determine the upper bound for the price they will pay in any future period. The main driver of any price increase over time is the adjustment of the RESERVE_PRICE, which occurs at the end of each BULK_PERIOD after determining the capacity utilization of the Polkadot UC. To calculate the maximum price in some future period, a task could assume maximum capacity in all upcoming periods and track the resulting increase of the RESERVE_PRICE. In the final period, that price can receive a maximum premium of PRICE_PREMIUM, and after deducting a potential RENEWAL_DISCOUNT, the maximum price can be determined.
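A sketch of that worst-case bound, where the per-period growth of the RESERVE_PRICE under full capacity is abstracted into a max_increase_factor parameter (an assumption standing in for the RFC-1 adjustment curve), and the premium and discount are applied as described above:

#![allow(unused)]
fn main() {
    fn max_renewal_price(
        reserve_price: f64,
        max_increase_factor: f64, // assumed worst-case RESERVE_PRICE growth per BULK_PERIOD
        periods_ahead: u32,
        price_premium: f64,   // e.g. 0.3 for 30%
        renewal_discount: f64, // e.g. 0.1 for 10%
    ) -> f64 {
        // Worst case: the reserve price grows at its maximum rate every period.
        let worst_case_reserve = reserve_price * max_increase_factor.powi(periods_ahead as i32);
        // The market price can be at most the reserve price plus the premium,
        // and renewals receive the renewal discount on top of that.
        worst_case_reserve * (1.0 + price_premium) * (1.0 - renewal_discount)
    }
}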
During the settlement period, participants have ample time to trade Coretime on secondary markets before the onset of the next BULK_PERIOD. This allows for trading with full Coretime availability. Trading transferable Coretime naturally continues during each BULK_PERIOD, albeit with cores already in use.
Having all bidders pay the market clearing price offers some benefits and disadvantages.
There are trade-offs that arise from this proposal compared to the initial model. The most notable one is that here, I prioritize requirement 6 over requirement 2. In the very worst case (meaning a huge explosion in demand for Coretime), the price could increase much more than in the original model. From an economic perspective, this (rare edge case) would also mean that we'd vastly underprice Coretime in the original model, leading to highly inefficient allocations.
This RFC builds extensively on the available ideas put forward in RFC-1.
Additionally, I want to express a special thanks to Samuel Haefner and Shahar Dobzinski for fruitful discussions and helping me structure my thoughts.
The technical feasibility needs to be assessed.
This RFC proposes changes that enable the use of absolute locations in AccountId derivations, which allows protocols built using XCM to have static account derivations in any runtime, regardless of its position in the family hierarchy.
These changes would allow protocol builders to leverage absolute locations to maintain the exact same derived account address across all networks in the ecosystem, thus enhancing user experience.
One such protocol, that is the original motivation for this proposal, is InvArch's Saturn Multisig, which gives users a unifying multisig and DAO experience across all XCM connected chains.
This proposal aims to make it possible to derive accounts for absolute locations, enabling protocols that require the ability to maintain the same derived account in any runtime. This is done by deriving accounts from the hash of described absolute locations, which are static across different destinations.
The same location can be represented in relative form and absolute form like so:
#![allow(unused)]
fn main() {
// Relative location (from own perspective)
{
    parents: 0,
    interior: Here
}

// Relative location (from perspective of parent)
{
    parents: 0,
    interior: [Parachain(1000)]
}

// Relative location (from perspective of sibling)
{
    parents: 1,
    interior: [Parachain(1000)]
}

// Absolute location
[GlobalConsensus(Kusama), Parachain(1000)]
}
Using DescribeFamily, the above relative locations would be described like so:
#![allow(unused)]
fn main() {
// Relative location (from own perspective)
// Not possible.

// Relative location (from perspective of parent)
(b"ChildChain", Compact::<u32>::from(*index)).encode()

// Relative location (from perspective of sibling)
(b"SiblingChain", Compact::<u32>::from(*index)).encode()
}
The proposed description for absolute location would follow the same pattern, like so:
#![allow(unused)]
fn main() {
(
    b"GlobalConsensus",
    network_id,
    b"Parachain",
    Compact::<u32>::from(para_id),
    tail
).encode()
}
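For illustration, one possible way to compute the derived account from such a description, assuming the existing blake2_256-based HashedDescription converter is reused (via the blake2_256 helper from sp_core); the byte values below are illustrative stand-ins rather than the exact SCALE encoding of a NetworkId:

#![allow(unused)]
fn main() {
    use sp_core::hashing::blake2_256;

    // Stand-in bytes for the encoded NetworkId; illustrative only.
    let network_id = b"Kusama".to_vec();
    // SCALE compact encoding of the para id 1000.
    let para_id_encoded = vec![0xa1, 0x0f];
    let tail: Vec<u8> = Vec::new();

    // Concatenate the parts of the description, mirroring the tuple above.
    let mut description = Vec::new();
    description.extend_from_slice(b"GlobalConsensus");
    description.extend_from_slice(&network_id);
    description.extend_from_slice(b"Parachain");
    description.extend_from_slice(&para_id_encoded);
    description.extend_from_slice(&tail);

    // The derived AccountId32 is the hash of the description, so it is the
    // same on every chain that describes the location identically.
    let account_id: [u8; 32] = blake2_256(&description);
}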
This proposal requires the modification of two XCM types defined in the xcm-builder crate: The WithComputedOrigin barrier and the DescribeFamily MultiLocation descriptor.
The WithComputedOrigin barrier serves as a wrapper around other barriers, consuming origin modification instructions and applying them to the message origin before passing to the inner barriers. One of the origin modifying instructions is UniversalOrigin, which serves the purpose of signaling that the origin should be a Universal Origin that represents the location as an absolute path prefixed by the GlobalConsensus junction.
In its current state, the barrier transforms locations with the UniversalOrigin instruction into relative locations, so the proposed changes aim to make it return absolute locations instead.
The DescribeFamily location descriptor is part of the HashedDescription MultiLocation hashing system and exists to describe locations in an easy format for encoding and hashing, so that an AccountId can be derived from this MultiLocation.
This implementation contains a match statement that does not match against absolute locations, so changes to it involve matching against absolute locations and providing appropriate descriptions for hashing.
No drawbacks have been identified with this proposal.
Tests can be done using simple unit tests, as this is not a change to XCM itself but rather to types defined in xcm-builder.
Security considerations should be taken with the implementation to make sure no unwanted behavior is introduced.
This proposal does not introduce any privacy considerations.
Depending on the final implementation, this proposal should not introduce much overhead to performance.
The ergonomics of this proposal depend on the final implementation details.
Backwards compatibility should remain unchanged, although that depends on the final implementation.
Implementation details and overall code is still up to discussion.
This RFC proposes to make modifications to voting power delegations as part of the Conviction Voting pallet. The changes being proposed include:
It has become clear since the launch of OpenGov that there are a few common tropes which pop up time and time again:
We believe (based on feedback from token holders with a larger stake in the network) that if some changes were made to delegation mechanics, these larger stakeholders would be more likely to delegate their voting power to active network participants, thus greatly increasing the support turnout.
This RFC proposes to make 4 changes to the convictionVoting pallet logic in order to improve the user experience of those delegating their voting power to another account.
Allow a Delegator to vote independently of their Delegate if they so desire – this would empower network participants to more actively delegate their voting power to active voters, removing the tedious steps of having to undelegate across an entire track every time they do not agree with their delegate's voting direction for a particular referendum.
Allow nested delegations – for example Charlie delegates to Bob who delegates to Alice – when Alice votes then both Bob and Charlie vote alongside Alice (in the current runtime Charlie will not vote when Alice votes) – This would allow network participants who control multiple (possibly derived) accounts to be able to delegate all of their voting power to a single account under their control, which would in turn delegate to a more active voting participant. Then if the delegator wishes to vote independently of their delegate they can control all of their voting power from a single account, which again removes the pain point of having to issue multiple undelegate extrinsics in the event that they disagree with their delegate.
Have delegated votes follow their delegate's abstain votes – there are times when delegates may vote abstain on a particular referendum, and adding this functionality will increase the support of a particular referendum. It has a secondary benefit of meaning that Validators who are delegating their voting power do not lose points in the 1KV program in the event that their delegate votes abstain (another pain point which may be preventing those network participants from delegating).
Allow a Delegator to delegate/undelegate their votes for all tracks with a single call – in order to delegate votes across all tracks, a user must currently batch 15 calls, resulting in high costs for delegation. A single delegate_all/undelegate_all call would reduce the complexity, and therefore the cost, of delegation considerably for prospective Delegators.
We do not foresee any drawbacks by implementing these changes. If anything we believe that this should help to increase overall voter turnout (via the means of delegation) which we see as a net positive.
We feel that the Polkadot Technical Fellowship would be the most competent collective to identify the testing requirements for the ideas presented in this RFC.
This change may add extra chain storage requirements on Polkadot, especially with respect to nested delegations.
The change to add nested delegations may affect governance interfaces such as Nova Wallet who will have to apply changes to their indexers to support nested delegations. It may also affect the Polkadot Delegation Dashboard as well as Polkassembly & SubSquare.
We want to highlight the importance for ecosystem builders to create a mechanism for indexers and wallets to be able to understand that changes have occurred such as increasing the pallet version, etc.
N/A
Additionally we would like to re-open the conversation about the potential for there to be free delegations. This was discussed by Dr Gavin Wood at Sub0 2022 and we feel like this would go a great way towards increasing the amount of network participants that are delegating: https://youtu.be/hSoSA6laK3Q?t=526
Overall, we strongly feel that delegations are a great way to increase voter turnout, and the ideas presented in this RFC would hopefully help in that aspect.
This RFC proposes a new model for a sustainable on-demand parachain registration, involving a smaller initial deposit and periodic rent payments. The new model considers that on-demand chains may be unregistered and later re-registered. The proposed solution also ensures a quick startup for on-demand chains on Polkadot in such cases.
With the support of on-demand parachains on Polkadot, there is a need to explore a new, more cost-effective model for registering validation code. In the current model, the parachain manager is responsible for reserving a unique ParaId and covering the cost of storing the validation code of the parachain. These costs can escalate, particularly if the validation code is large. We need a better, sustainable model for registering on-demand parachains on Polkadot to help smaller teams deploy more easily.
This RFC suggests a new payment model to create a more financially viable approach to on-demand parachain registration. In this model, a lower initial deposit is required, followed by recurring payments upon parachain registration.
This new model will coexist with the existing one-time deposit payment model, offering teams seeking to deploy on-demand parachains on Polkadot a more cost-effective alternative.
This RFC proposes a set of changes that will enable the new rent based approach to registering and storing validation code on-chain. The new model, compared to the current one, will require periodic rent payments. The parachain won't be pruned automatically if the rent is not paid, but by permitting anyone to prune the parachain and rewarding the caller, there will be an incentive for the removal of the validation code.
On-demand parachains should still be able to utilize the current one-time payment model. However, given the size of the deposit required, it's highly likely that most on-demand parachains will opt for the new rent-based model.
Importantly, this solution doesn't require any storage migrations in the current system nor does it introduce any breaking changes. The following provides a detailed description of this solution.
In the current implementation of the registrar pallet, there are two constants that specify the necessary deposit for parachains to register and store their validation code:
#![allow(unused)]
fn main() {
trait Config {
    // -- snip --

    /// The deposit required for reserving a `ParaId`.
    #[pallet::constant]
    type ParaDeposit: Get<BalanceOf<Self>>;

    /// The deposit to be paid per byte stored on chain.
    #[pallet::constant]
    type DataDepositPerByte: Get<BalanceOf<Self>>;
}
}
This RFC proposes the addition of three new constants that will determine the payment amount and the frequency of the recurring rent payment:
#![allow(unused)]
fn main() {
trait Config {
    // -- snip --

    /// Defines how frequently the rent needs to be paid.
    ///
    /// The duration is set in sessions instead of block numbers.
    #[pallet::constant]
    type RentDuration: Get<SessionIndex>;

    /// The initial deposit amount for registering validation code.
    ///
    /// This is defined as a proportion of the deposit that would be required in the regular
    /// model.
    #[pallet::constant]
    type RentalDepositProportion: Get<Perbill>;

    /// The recurring rental cost defined as a proportion of the initial rental registration deposit.
    #[pallet::constant]
    type RentalRecurringProportion: Get<Perbill>;
}
}
Users will be able to reserve a ParaId and register their validation code for a proportion of the regular deposit required. However, they must also make additional rent payments at intervals of T::RentDuration.
For registering using the new rental system we will have to make modifications to the paras-registrar pallet. We should expose two new extrinsics for this:
#![allow(unused)]
fn main() {
mod pallet {
    // -- snip --

    pub fn register_rental(
        origin: OriginFor<T>,
        id: ParaId,
        genesis_head: HeadData,
        validation_code: ValidationCode,
    ) -> DispatchResult { /* ... */ }

    pub fn pay_rent(origin: OriginFor<T>, id: ParaId) -> DispatchResult {
        /* ... */
    }
}
}
A call to register_rental will require the reservation of only a percentage of the deposit that would otherwise be required to register the validation code when using the regular model. As described later in the Quick para re-registering section below, we will also store the code hash of each parachain to enable faster re-registration after a parachain has been pruned. For this reason the total initial deposit amount is increased to account for that.
#![allow(unused)]
fn main() {
// The logic for calculating the initial deposit for a parachain registered with the
// new rent-based model:

let validation_code_deposit =
    per_byte_fee.saturating_mul((validation_code.0.len() as u32).into());

let head_deposit = per_byte_fee.saturating_mul((genesis_head.0.len() as u32).into());
let hash_deposit = per_byte_fee.saturating_mul(HASH_SIZE);

let deposit = T::RentalDepositProportion::get().mul_ceil(validation_code_deposit)
    .saturating_add(T::ParaDeposit::get())
    .saturating_add(head_deposit)
    .saturating_add(hash_deposit);
}
Once the ParaId is reserved and the validation code is registered, the rent must be paid periodically to ensure the on-demand parachain doesn't get removed from the state. The pay_rent extrinsic should be callable by anyone, removing the need for the parachain to depend on the parachain manager for rent payments.
If the rent is not paid, anyone has the option to prune the on-demand parachain and claim a portion of the initial deposit reserved for storing the validation code. This type of 'light' pruning only removes the validation code, while the head data and validation code hash are retained. The validation code hash is stored to allow anyone to register it again as well as to enable quicker re-registration by skipping the pre-checking process.
The moment the rent is no longer paid, the parachain won't be able to purchase on-demand access, meaning no new blocks are allowed. This stage is called the "hibernation" stage, during which all the parachain-related data is still stored on-chain, but new blocks are not permitted. The reason for this is to ensure that the validation code is available in case it is needed in the dispute or approval checking subsystems. Waiting for one entire session will be enough to ensure it is safe to deregister the parachain.
This means that anyone can prune the parachain only once the "hibernation" stage is over, which lasts for an entire session after the moment that the rent is not paid.
The pruning described here is a light form of pruning, since it only removes the validation code. As with all parachains, the parachain or para manager can use the deregister extrinsic to remove all associated state.
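As an illustration only, the pruning path could look roughly like the sketch below. The storage item, error variants, helper functions and reward constant are hypothetical and not part of the existing registrar pallet.

// Hypothetical sketch: `RentedParas`, `StillHibernating` and `PrunerReward` are illustrative names.
pub fn prune_unpaid_para(origin: OriginFor<T>, id: ParaId) -> DispatchResult {
    let who = ensure_signed(origin)?;
    let info = RentedParas::<T>::get(id).ok_or(Error::<T>::NotRented)?;
    // Pruning is only allowed once the rent is overdue and one full session
    // (the "hibernation" stage) has passed.
    ensure!(
        Self::current_session() > info.rent_due_session.saturating_add(1),
        Error::<T>::StillHibernating
    );
    // Light pruning: only the validation code is removed; head data and the
    // validation code hash remain on chain for quick re-registration.
    Self::remove_validation_code(id);
    // Reward the caller with a portion of the deposit reserved for the code.
    T::Currency::repatriate_reserved(&info.manager, &who, T::PrunerReward::get(), BalanceStatus::Free)?;
    Ok(())
}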
The paras pallet will be loosely coupled with the para-registrar pallet. This approach enables all the pallets tightly coupled with the paras pallet to have access to the rent status information.
If the validation code is still stored but its rent is unpaid, the assigner_on_demand pallet will ensure that an order for that parachain cannot be placed. This is easily achievable given that the assigner_on_demand pallet is tightly coupled with the paras pallet.
If the rent isn't paid on time, and the parachain gets pruned, the new model should provide a quick way to re-register the same validation code under the same ParaId. This can be achieved by skipping the pre-checking process, as the validation code hash will be stored on-chain, allowing us to easily verify that the uploaded code remains unchanged.
#![allow(unused)]
fn main() {
/// Stores the validation code hash for parachains that successfully completed the
/// pre-checking process.
///
/// This is stored to enable faster on-demand para re-registration in case its PVF has
/// already been registered and checked.
///
/// NOTE: During a runtime upgrade where the pre-checking rules change, this storage map should be
/// cleared appropriately.
#[pallet::storage]
pub(super) type CheckedCodeHash<T: Config> =
    StorageMap<_, Twox64Concat, ParaId, ValidationCodeHash>;
}
To enable parachain re-registration, we should introduce a new extrinsic in the paras-registrar pallet that allows this. The logic of this extrinsic will be the same as regular registration, with the distinction that it can be called by anyone, and the required deposit will be smaller since it only has to cover the storage of the validation code.
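A rough sketch of what such an extrinsic could look like, with hypothetical error variants and helper functions:

// Hypothetical sketch of the re-registration path: identical code can skip
// pre-checking because its hash is already stored in `CheckedCodeHash`.
pub fn re_register(
    origin: OriginFor<T>,
    id: ParaId,
    validation_code: ValidationCode,
) -> DispatchResult {
    // Callable by anyone, not only the para manager.
    let who = ensure_signed(origin)?;
    let known_hash = CheckedCodeHash::<T>::get(id).ok_or(Error::<T>::NotPreChecked)?;
    // The uploaded code must match the hash that already passed pre-checking.
    ensure!(validation_code.hash() == known_hash, Error::<T>::CodeMismatch);
    // The deposit only has to cover storing the validation code again.
    let deposit = T::DataDepositPerByte::get()
        .saturating_mul((validation_code.0.len() as u32).into());
    T::Currency::reserve(&who, deposit)?;
    Self::store_validation_code(id, validation_code);
    Ok(())
}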
This RFC does not alter the process of reserving a ParaId, and therefore, it does not propose reducing it, even though such a reduction could be beneficial.
This RFC doesn't delve into the specifics of the configuration values for parachain registration but rather focuses on the mechanism; nevertheless, configuring the values carelessly could lead to problems.
Since the validation code hash and head data are not removed when the parachain is pruned but only when the deregister extrinsic is called, the T::DataDepositPerByte must be set to a higher value to create a strong enough incentive for removing it from the state.
The implementation of this RFC will be tested on Rococo first.
Proper research should be conducted on setting the configuration values of the new system since these values can have great impact on the network.
An audit is required to ensure the implementation's correctness.
The proposal introduces no new privacy concerns.
This RFC should not introduce any performance impact.
This RFC does not affect the current parachains, nor the parachains that intend to use the one-time payment model for parachain registration.
This RFC does not break compatibility.
Prior discussion on this topic: https://github.com/paritytech/polkadot-sdk/issues/1796
None at this time.
As noted in this GitHub issue, we want to raise the per-byte cost of on-chain data storage. However, a substantial increase in this cost would make it highly impractical for on-demand parachains to register on Polkadot. This RFC offers an alternative solution for on-demand parachains, ensuring that the per-byte cost increase doesn't overly burden the registration process.
Rather than enforce a limit on the total memory consumption on the client side by loading the value at :heappages, enforce that limit on the runtime side.
From the early days of Substrate up until recently, the runtime was present in two forms: the wasm runtime (wasm bytecode passed through an interpreter) and the native runtime (native code directly run by the client).
Since the wasm runtime has a lower amount of available memory (4 GiB maximum) compared to the native runtime, and in order to ensure that the wasm and native runtimes always produce the same outcome, it was necessary to clamp the amount of memory available to both runtimes to the same value.
In order to achieve this, a special storage key (a "well-known" key) :heappages was introduced and represents the number of "wasm pages" (one page equals 64kiB) of memory that are available to the memory allocator of the runtimes. If this storage key is absent, it defaults to 2048, which is 128 MiB.
The native runtime has since been removed, but the concept of "heap pages" still exists. This RFC proposes a simplification to the design of Polkadot by removing the concept of "heap pages" as it is currently known, and proposes alternative ways to achieve the goal of limiting the amount of memory available.
Client implementers and low-level runtime developers.
This RFC proposes the following changes to the client:
With these changes, the memory available to the runtime is now only bounded by the available memory space (4 GiB), and optionally by the maximum amount of memory specified in the Wasm binary (see https://webassembly.github.io/spec/core/bikeshed/#memories%E2%91%A0). In Rust, the latter can be controlled during compilation with the flag -Clink-arg=--max-memory=....
Since the client-side change is strictly more tolerant than before, we can perform the change immediately after the runtime has been updated, and without having to worry about backwards compatibility.
This RFC proposes three alternative paths (different chains might choose to follow different paths):
Path A: add back the same memory limit to the runtime, like so:
#[global_allocator]
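For illustration, a minimal sketch of what such a runtime-side limit could look like, assuming a wrapper around whatever allocator the runtime already uses. The names below are illustrative, not an existing API.

use core::alloc::{GlobalAlloc, Layout};
use core::sync::atomic::{AtomicUsize, Ordering};

/// Wraps an inner allocator and refuses allocations beyond `limit` bytes in total.
pub struct LimitedAllocator<A> {
    inner: A,
    limit: usize,
    allocated: AtomicUsize,
}

unsafe impl<A: GlobalAlloc> GlobalAlloc for LimitedAllocator<A> {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        // Reject the allocation if it would push the running total over the limit.
        let prev = self.allocated.fetch_add(layout.size(), Ordering::Relaxed);
        if prev.saturating_add(layout.size()) > self.limit {
            self.allocated.fetch_sub(layout.size(), Ordering::Relaxed);
            return core::ptr::null_mut();
        }
        self.inner.alloc(layout)
    }

    unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
        self.inner.dealloc(ptr, layout);
        self.allocated.fetch_sub(layout.size(), Ordering::Relaxed);
    }
}

// #[global_allocator]
// static ALLOCATOR: LimitedAllocator<SomeInnerAllocator> = /* e.g. a 128 MiB limit */;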
Path B: define the memory limit using the -Clink-arg=--max-memory=... flag.
Path C: don't add anything to the runtime. This is effectively the same as setting the memory limit to ~4 GiB (compared to the current default limit of 128 MiB). This solution is viable only because we're compiling for 32bits wasm rather than for example 64bits wasm. If we ever compile for 64bits wasm, this would need to be revisited.
Each parachain can choose the option that they prefer, but the author of this RFC strongly suggests either option C or B.
In case of path A, there is one situation where the behaviour pre-RFC is not equivalent to the one post-RFC: when a host function that performs an allocation (for example ext_storage_get) is called, without this RFC this allocation might fail due to reaching the maximum heap pages, while after this RFC this will always succeed. This is most likely not a problem, as storage values aren't supposed to be larger than a few megabytes at the very maximum.
In the unfortunate event where the runtime runs out of memory, path B would make it more difficult to relax the memory limit, as we would need to re-upload the entire Wasm, compared to updating only :heappages in path A or before this RFC. In the case where the runtime runs out of memory only in the specific event where the Wasm runtime is modified, this could brick the chain. However, this situation is no different than the thousands of other ways that a bug in the runtime can brick a chain, and there's no reason to be particularly worried about this situation in particular.
This RFC would reduce the chance of a consensus issue between clients. The :heappages are a rather obscure feature, and it is not clear what happens in some corner cases such as the value being too large (error? clamp?) or malformed. This RFC would completely erase these questions.
In case of path A, it is unclear how performance would be affected. Path A consists of moving client-side operations to the runtime without changing these operations, and as such performance differences are expected to be minimal. Overall, we're talking about one addition/subtraction per malloc and per free, so this is more than likely completely negligible.
In case of path B and C, the performance gain would be a net positive, as this RFC strictly removes things.
This RFC would isolate the client and runtime more from each other, making it a bit easier to reason about the client or the runtime in isolation.
Not a breaking change. The runtime-side changes can be applied immediately (without even having to wait for changes in the client), then as soon as the runtime is updated, the client can be updated without any transition period. One can even consider updating the client before the runtime, as it corresponds to path C.
None.
This RFC follows the same path as https://github.com/polkadot-fellows/RFCs/pull/4 by scoping everything related to memory allocations to the runtime.
This RFC proposes adding a trivial governance track on Kusama to facilitate X (formerly known as Twitter) posts on the @kusamanetwork account. The technical aspect of implementing this in the runtime is very inconsequential and straightforward, though it might get more technical if the Fellowship wants to regulate this track with a permission set that does not yet exist. If this is implemented it would need to be followed up with:
The overall motivation for this RFC is to decentralize the management of the Kusama brand/communication channel to KSM holders. This is necessary in my opinion primarily because of the inactivity of the account in recent history, with posts spanning weeks or months apart. I am currently unaware of who/what entity manages the Kusama X account, but if they are affiliated with Parity or W3F this proposed solution could also offload some of the legal ramifications of making (or not making) announcements to the public regarding Kusama. While centralized control of the X account would still be present, it could become totally moot if this RFC is implemented and the community becomes totally autonomous in the management of Kusama's X posts.
This solution does not cover every single communication front for Kusama, but it does cover one of the largest. It also establishes a precedent for other communication channels that could be offloaded to openGov, provided this proof-of-concept is successful.
Finally, this RFC is the epitome of experimentation that Kusama is ideal for. This proposal may spark newfound excitement for Kusama and help us realize Kusama's potential for pushing boundaries and trying new unconventional ideas.
This idea has not been formalized by any individual (or group of) KSM holder(s). To my knowledge the socialization of this idea is contained entirely in my recent X post here, but it is possible that an idea like this one has been discussed in other places. It appears to me that the ecosystem would welcome a change like this, which is why I am taking action to formalize the discussion.
The implementation of this idea can be broken down into 3 primary phases:
First, we begin with this RFC to ensure all feedback can be discussed and implemented in the proposal. After the Fellowship and the community come to a reasonable agreement on the changes necessary to make this happen, the Fellowship can merge changes into Kusama's runtime to include this new track with appropriate track configurations. As a starting point, I recommend the following track configurations:
const APP_X_POST: Curve = Curve::make_linear(7, 28, percent(50), percent(100));
const SUP_X_POST: Curve = Curve::make_reciprocal(?, ?, percent(?), percent(?), percent(?));

// I don't know how to configure the make_reciprocal variables to get what I imagine for support,
// but I recommend starting at 50% support and sharply decreasing such that 1% is sufficient quarterway
// through the decision period and hitting 0% at the end of the decision period, or something like that.

(
    69,
    pallet_referenda::TrackInfo {
        name: "x_post",
        max_deciding: 50,
        decision_deposit: 1 * UNIT,
        prepare_period: 10 * MINUTES,
        decision_period: 4 * DAYS,
        confirm_period: 10 * MINUTES,
        min_enactment_period: 1 * MINUTES,
        min_approval: APP_X_POST,
        min_support: SUP_X_POST,
    },
),
I also recommend restricting permissions of this track to only submitting remarks or batches of remarks - that's all we'll need for its purpose. I'm not sure how easy that is to configure, but it is important since we don't want such an agile track to be able to make highly consequential calls.
It is important that we establish the specifications of referenda that will be submitted in this track to ensure that whatever automation tool is built can easily make posts once a referendum is enacted. As stated above, we really only need a system.remark (or batch of remarks) to indicate the contents of a proposed X post. The most straight-forward way to do this is to require remarks to adhere to X's requirements for making posts via their API.
For example, if I wanted to propose a post that contained the text "Hello World!" I would propose a referendum in the X post track that contains the following call data: 0x0000607b2274657874223a202248656c6c6f20576f726c6421227d (i.e. system.remark('{"text": "Hello World!"}')).
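To illustrate how an off-chain tool might interpret such a remark body, here is a small sketch. The struct and field set are assumptions for the text-only phase, not an agreed specification.

use serde::Deserialize;

/// Hypothetical shape for the text-only phase of the spec.
#[derive(Deserialize, Debug)]
struct XPost {
    text: String,
}

/// The remark bytes are expected to be UTF-8 JSON, e.g. `{"text": "Hello World!"}`.
fn parse_post(remark: &[u8]) -> Result<XPost, serde_json::Error> {
    serde_json::from_slice(remark)
}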
At first, we could support text posts only to prove the concept. Later on we could expand this spec to add support for media, likes, retweets, replies, polls, and whatever other X features we want.
Once we agree on track configurations and specs for referenda in this track, the Fellowship can move forward with merging these changes into Kusama's runtime and include them in its next release. We could also move forward with developing the necessary tools that would listen for enacted referenda to post automatically on X. This would require coordination with whoever controls the X account; they would either need to run the tools themselves or add a third party as an authorized user to run the tools to make posts on the account's behalf. This is a bottleneck for decentralization, but as long as the tools are run by the X account manager or by a trusted third party it should be fine. I'm open to more decentralized solutions, but those always come at a cost of complexity.
For the tools themselves, we could open a bounty on Kusama for developers/teams to bid on. We could also just ask the community to step up with a Treasury proposal to have anyone fund the build. Or, the Fellowship could make the release of these changes contingent on their endorsement of developers/teams to build these tools. Lots of options! For the record, my team and I could develop all the necessary tools, but just because I'm proposing these changes doesn't entitle me to funds to build the tools needed to implement them. Here's what would be needed:
After everything is complete, we can update the Kusama wiki to include documentation on the X post specifications and include links to the tools/UI.
The main drawback to this change is that it requires a lot of off-chain coordination. It's easy enough to include the track on Kusama but it's a totally different challenge to make it function as intended. The tools need to be built and the auth tokens need to be managed. It would certainly add an administrative burden to whoever manages the X account since they would either need to run the tools themselves or manage auth tokens.
This change also introduces on-going costs to the Treasury since it would need to compensate people to support the tools necessary to facilitate this idea. The ultimate question is whether these on-going costs would be worth the ability for KSM holders to make posts on Kusama's X account.
There's also the risk of misconfiguring the track to make referenda too easy to pass, potentially allowing a malicious actor to get content posted on X that violates X's ToS. If that happens, we risk getting Kusama banned on X!
This change might also be outside the scope of the Fellowship/openGov. Perhaps the best solution for the X account is to have the Treasury pay for a professional agency to manage posts. It wouldn't be decentralized but it would probably be more effective in terms of creating good content.
Finally, this solution is merely pseudo-decentralization since the X account manager would still have ultimate control of the account. It's decentralized insofar as the auth tokens are given to people actually running the tools; a house of cards is required to facilitate X posts via this track. Not ideal.
There's major precedent for configuring tracks on openGov given the amount of power tracks have, so it shouldn't be hard to come up with a sound configuration. That's why I recommend restricting permissions of this track to remarks and batches of remarks, or something equally inconsequential.
Building the tools for this implementation is really straightforward, and they could be audited by Fellowship members, and the community at large, on GitHub.
The largest security concern would be the management of Kusama's X account's auth tokens. We would need to ensure that they aren't compromised.
If a track on Kusama promises users that compliant referenda enacted therein would be posted on Kusama's X account, users would expect that track to perform as promised. If the house of cards tumbles down and a compliant referendum doesn't actually get anything posted, users might think that Kusama is broken or unreliable. This could be damaging to Kusama's image and cause people to question the soundness of other features on Kusama.
As mentioned in the drawbacks, the performance of this feature would depend on off-chain coordinations. We can reduce the administrative burden of these coordinations by funding third parties with the Treasury to deal with it, but then we're relying on trusting these parties.
By adding a new track to Kusama, governance platforms like Polkassembly or Nova Wallet would need to include it on their applications. This shouldn't be too much of a burden or overhead since they've already built the infrastructure for other openGov tracks.
This change wouldn't break any compatibility as far as I know.
One reference to a similar feature requiring on-chain/off-chain coordination would be the Kappa-Sigma-Mu Society. Nothing on-chain necessarily enforces the rules or facilitates bids, challenges, defenses, etc. However, the Society has managed to maintain itself with integrity to its rules. So I don't think this is totally out of Kusama's scope. But it will require some off-chain effort to maintain.
The current size of the decision deposit on some tracks is too high for many proposers. As a result, those needing to use it have to find someone else willing to put up the deposit for them - and a number of legitimate attempts to use the root track have timed out. This track would provide a more affordable (though slower) route for these holders to use the root track.
There have been recent attempts to use the Kusama root track which have timed out with no decision deposit placed. Usually, these referenda have been related to parachain registration issues.
Propose to address this by adding a new referendum track [22] Referendum Deposit which can place the decision deposit on another referendum. This would require the following changes:
placeDecisionDeposit
referenda->placeDecisionDeposit
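As an illustration only (not a worked-out runtime change), one way to keep the track's origin restricted to placing decision deposits is a call filter along these lines:

// Hypothetical sketch: whitelist only `referenda.place_decision_deposit` for the
// origin associated with this track.
pub struct DecisionDepositCallsOnly;
impl frame_support::traits::Contains<RuntimeCall> for DecisionDepositCallsOnly {
    fn contains(call: &RuntimeCall) -> bool {
        matches!(
            call,
            RuntimeCall::Referenda(pallet_referenda::Call::place_decision_deposit { .. })
        )
    }
}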
This track would provide a route to starting a root referendum with a much-reduced slashable deposit. This might be undesirable but, assuming the decision deposit cost for this track is still high enough, slashing would still act as a disincentive.
An alternative to this might be to reduce the decision deposit size of some of the more expensive tracks. However, part of the purpose of the high deposit - at least on the root track - is to prevent spamming the limited queue with junk referenda.
Will need additional test cases for the modified pallet and runtime. No security or privacy issues.
No significant performance impact.
Only changes related to adding the track. Existing functionality is unchanged.
No compatibility issues.
Feedback on whether my proposed implementation of this is the best way to address the issue - including which calls the track should be allowed to make. Are the track parameters correct or should we use something different? Alternatives would be welcome.
A pallet to facilitate enhanced multisig accounts. The main enhancement is that we store a multisig account in the state with related info (signers, threshold, etc.). The module affords enhanced control over administrative operations such as adding/removing signers, changing the threshold, deleting the account, and canceling an existing proposal. Each signer can approve/reject a proposal while it still exists. This proposal is not intended for migrating away from or getting rid of the existing multisig; it allows both options to coexist.
For the rest of the RFC we use the following terms:
proposal
Stateful Multisig
Stateless Multisig
Entities in the Polkadot ecosystem need to have a way to manage their funds and other operations in a secure and efficient way. Multisig accounts are a common way to achieve this. Entities by definition change over time, members of the entity may change, threshold requirements may change, and the multisig account may need to be deleted. For even more enhanced hierarchical control, the multisig account may need to be controlled by other multisig accounts.
Current native solutions for multisig operations are suboptimal performance-wise (as we'll explain later in the RFC) and lack fine-grained control over the multisig account.
We refer here to the current multisig pallet in polkadot-sdk, where the multisig account is only derived and not stored in the state. Although deriving the account is deterministic, since it relies on the exact (sorted) set of users and the threshold, this does not allow for control over the multisig account. It is also tightly coupled to the exact users and threshold, which makes it hard for an organization to manage existing accounts and to change the threshold or add/remove signers.
We believe as well that the stateless multisig is not efficient in terms of block footprint as we'll show in the performance section.
Pure proxy can achieve having a stored and deterministic multisig account composed of different users, but it is unneeded complexity as a workaround for the limitations of the current multisig pallet. It also doesn't have the same fine-grained control over the multisig account.
Other points mentioned by @tbaut
Basic requirements for the Stateful Multisig are:
Corporate Governance: In a corporate setting, multisig accounts can be employed for decision-making processes. For example, a company may require the approval of multiple executives to initiate significant financial transactions.
Joint Accounts: Multisig accounts can be used for joint accounts where multiple individuals need to authorize transactions. This is particularly useful in family finances or shared business accounts.
Decentralized Autonomous Organizations (DAOs): DAOs can utilize multisig accounts to ensure that decisions are made collectively. Multiple key holders can be required to approve changes to the organization's rules or the allocation of funds.
and much more...
I've created the stateful multisig pallet during my studies in the Polkadot Blockchain Academy under supervision from @shawntabrizi and @ank4n. After that, I've enhanced it to be fully functional, and this is a draft PR#3300 in polkadot-sdk. I'll list all the details and design decisions in the following sections. Note that the PR is not an exact 1-1 match with the current RFC, as the RFC is a more polished version of the PR after updating based on the feedback and discussions.
Let's start with a sequence diagram to illustrate the main operations of the Stateful Multisig.
Notes on above diagram:
Execute
We use the following enum to store either the call or its hash:
#![allow(unused)]
fn main() {
enum CallOrHash<T: Config> {
    Call(<T as Config>::RuntimeCall),
    Hash(T::Hash),
}
}
create_multisig
#![allow(unused)]
fn main() {
    /// Creates a new multisig account and attaches signers with a threshold to it.
    ///
    /// The dispatch origin for this call must be _Signed_. It is expected to be a normal AccountId and not a
    /// Multisig AccountId.
    ///
    /// T::BaseCreationDeposit + T::PerSignerDeposit * signers.len() will be held from the caller's account.
    ///
    /// # Arguments
    ///
    /// - `signers`: Initial set of accounts to add to the multisig. These may be updated later via `add_signer`
    ///   and `remove_signer`.
    /// - `threshold`: The threshold number of accounts required to approve an action. Must be greater than 0 and
    ///   less than or equal to the total number of signers.
    ///
    /// # Errors
    ///
    /// * `TooManySignatories` - The number of signatories exceeds the maximum allowed.
    /// * `InvalidThreshold` - The threshold is greater than the total number of signers.
    pub fn create_multisig(
        origin: OriginFor<T>,
        signers: BoundedBTreeSet<T::AccountId, T::MaxSignatories>,
        threshold: u32,
    ) -> DispatchResult
}
start_proposal
#![allow(unused)]
fn main() {
    /// Starts a new proposal for a dispatchable call for a multisig account.
    /// The caller must be one of the signers of the multisig account.
    /// T::ProposalDeposit will be held from the caller's account.
    ///
    /// # Arguments
    ///
    /// * `multisig_account` - The multisig account ID.
    /// * `call_or_hash` - The enum holding the call or the hash of the call to be approved and executed later.
    ///
    /// # Errors
    ///
    /// * `MultisigNotFound` - The multisig account does not exist.
    /// * `UnAuthorizedSigner` - The caller is not a signer of the multisig account.
    /// * `TooManySignatories` - The number of signatories exceeds the maximum allowed. (shouldn't really happen as it's the first approval)
    pub fn start_proposal(
        origin: OriginFor<T>,
        multisig_account: T::AccountId,
        call_or_hash: CallOrHash,
    ) -> DispatchResult
}
approve
#![allow(unused)]
fn main() {
    /// Approves a proposal for a dispatchable call for a multisig account.
    /// The caller must be one of the signers of the multisig account.
    ///
    /// If a signer did approve -> reject -> approve, the proposal will be approved.
    /// If a signer did approve -> reject, the proposal will be rejected.
    ///
    /// # Arguments
    ///
    /// * `multisig_account` - The multisig account ID.
    /// * `call_or_hash` - The enum holding the call or the hash of the call to be approved.
    ///
    /// # Errors
    ///
    /// * `MultisigNotFound` - The multisig account does not exist.
    /// * `UnAuthorizedSigner` - The caller is not a signer of the multisig account.
    /// * `TooManySignatories` - The number of signatories exceeds the maximum allowed.
    ///   This shouldn't really happen as it's an approval, not an addition of a new signer.
    pub fn approve(
        origin: OriginFor<T>,
        multisig_account: T::AccountId,
        call_or_hash: CallOrHash,
    ) -> DispatchResult
}
reject
#![allow(unused)]
fn main() {
    /// Rejects a proposal for a multisig account.
    /// The caller must be one of the signers of the multisig account.
    ///
    /// Between approving and rejecting, the last call wins.
    /// If a signer did approve -> reject -> approve, the proposal will be approved.
    /// If a signer did approve -> reject, the proposal will be rejected.
    ///
    /// # Arguments
    ///
    /// * `multisig_account` - The multisig account ID.
    /// * `call_or_hash` - The enum holding the call or the hash of the call to be rejected.
    ///
    /// # Errors
    ///
    /// * `MultisigNotFound` - The multisig account does not exist.
    /// * `UnAuthorizedSigner` - The caller is not a signer of the multisig account.
    /// * `SignerNotFound` - The caller has not approved the proposal.
    #[pallet::call_index(3)]
    #[pallet::weight(Weight::default())]
    pub fn reject(
        origin: OriginFor<T>,
        multisig_account: T::AccountId,
        call_or_hash: CallOrHash,
    ) -> DispatchResult
}
execute_proposal
#![allow(unused)]
fn main() {
    /// Executes a proposal for a dispatchable call for a multisig account.
    /// The proposal needs to be approved by enough signers (meeting or exceeding the multisig threshold) before it can be executed.
    /// The caller must be one of the signers of the multisig account.
    ///
    /// This function does an extra check to make sure that all approvers still exist in the multisig account.
    /// That is to make sure that the multisig account is not compromised by removing a signer during an active proposal.
    ///
    /// Once finished, the withheld deposit will be returned to the proposal creator.
    ///
    /// # Arguments
    ///
    /// * `multisig_account` - The multisig account ID.
    /// * `call_or_hash` - We should have gotten the RuntimeCall (preimage) and stored it in the proposal by the time the extrinsic is called.
    ///
    /// # Errors
    ///
    /// * `MultisigNotFound` - The multisig account does not exist.
    /// * `UnAuthorizedSigner` - The caller is not a signer of the multisig account.
    /// * `NotEnoughApprovers` - The approvers don't meet the threshold.
    /// * `ProposalNotFound` - The proposal does not exist.
    /// * `CallPreImageNotFound` - The proposal doesn't have the preimage of the call in the state.
    pub fn execute_proposal(
        origin: OriginFor<T>,
        multisig_account: T::AccountId,
        call_or_hash: CallOrHash,
    ) -> DispatchResult
}
cancel_proposal
#![allow(unused)]
fn main() {
    /// Cancels an existing proposal for a multisig account.
    /// The proposal needs to be rejected by enough signers (meeting or exceeding the multisig threshold) before it can be canceled.
    /// The caller must be one of the signers of the multisig account.
    ///
    /// This function does an extra check to make sure that all rejectors still exist in the multisig account.
    /// That is to make sure that the multisig account is not compromised by removing a signer during an active proposal.
    ///
    /// Once finished, the withheld deposit will be returned to the proposal creator.
    ///
    /// # Arguments
    ///
    /// * `origin` - The origin multisig account who wants to cancel the proposal.
    /// * `call_or_hash` - The call or hash of the call to be canceled.
    ///
    /// # Errors
    ///
    /// * `MultisigNotFound` - The multisig account does not exist.
    /// * `ProposalNotFound` - The proposal does not exist.
    pub fn cancel_proposal(
        origin: OriginFor<T>,
        multisig_account: T::AccountId,
        call_or_hash: CallOrHash,
    ) -> DispatchResult
}
cancel_own_proposal
#![allow(unused)]
fn main() {
    /// Cancels an existing proposal for a multisig account, only if the proposal doesn't have approvers other than
    /// the proposer.
    ///
    /// This function needs to be called by the proposer of the proposal as the origin.
    ///
    /// The withheld deposit will be returned to the proposal creator.
    ///
    /// # Arguments
    ///
    /// * `multisig_account` - The multisig account ID.
    /// * `call_or_hash` - The hash of the call to be canceled.
    ///
    /// # Errors
    ///
    /// * `MultisigNotFound` - The multisig account does not exist.
    /// * `ProposalNotFound` - The proposal does not exist.
    pub fn cancel_own_proposal(
        origin: OriginFor<T>,
        multisig_account: T::AccountId,
        call_or_hash: CallOrHash,
    ) -> DispatchResult
}
cleanup_proposals
#![allow(unused)]
fn main() {
    /// Cleanup proposals of a multisig account. This function will iterate over a max limit per extrinsic to ensure
    /// we don't have unbounded iteration over the proposals.
    ///
    /// The withheld deposit will be returned to the proposal creator.
    ///
    /// # Arguments
    ///
    /// * `multisig_account` - The multisig account ID.
    ///
    /// # Errors
    ///
    /// * `MultisigNotFound` - The multisig account does not exist.
    /// * `ProposalNotFound` - The proposal does not exist.
    pub fn cleanup_proposals(
        origin: OriginFor<T>,
        multisig_account: T::AccountId,
    ) -> DispatchResult
}
Note: Next functions need to be called from the multisig account itself. Deposits are reserved from the multisig account as well.
add_signer
#![allow(unused)]
fn main() {
    /// Adds a new signer to the multisig account.
    /// This function needs to be called from a Multisig account as the origin.
    /// Otherwise it will fail with MultisigNotFound error.
    ///
    /// T::PerSignerDeposit will be held from the multisig account.
    ///
    /// # Arguments
    ///
    /// * `origin` - The origin multisig account who wants to add a new signer to the multisig account.
    /// * `new_signer` - The AccountId of the new signer to be added.
    /// * `new_threshold` - The new threshold for the multisig account after adding the new signer.
    ///
    /// # Errors
    /// * `MultisigNotFound` - The multisig account does not exist.
    /// * `InvalidThreshold` - The threshold is greater than the total number of signers or is zero.
    /// * `TooManySignatories` - The number of signatories exceeds the maximum allowed.
    pub fn add_signer(
        origin: OriginFor<T>,
        new_signer: T::AccountId,
        new_threshold: u32,
    ) -> DispatchResult
}
remove_signer
#![allow(unused)]
fn main() {
    /// Removes a signer from the multisig account.
    /// This function needs to be called from a Multisig account as the origin.
    /// Otherwise it will fail with MultisigNotFound error.
    /// If only one signer exists and is removed, the multisig account and any pending proposals for this account will be deleted from the state.
    ///
    /// # Arguments
    ///
    /// * `origin` - The origin multisig account who wants to remove a signer from the multisig account.
    /// * `signer_to_remove` - The AccountId of the signer to be removed.
    /// * `new_threshold` - The new threshold for the multisig account after removing the signer. Accepts zero if
    ///   the signer is the only one left.
    ///
    /// # Errors
    ///
    /// This function can return the following errors:
    ///
    /// * `MultisigNotFound` - The multisig account does not exist.
    /// * `InvalidThreshold` - The new threshold is greater than the total number of signers or is zero.
    /// * `UnAuthorizedSigner` - The caller is not a signer of the multisig account.
    pub fn remove_signer(
        origin: OriginFor<T>,
        signer_to_remove: T::AccountId,
        new_threshold: u32,
    ) -> DispatchResult
}
set_threshold
#![allow(unused)]
fn main() {
    /// Sets a new threshold for a multisig account.
    /// This function needs to be called from a Multisig account as the origin.
    /// Otherwise it will fail with MultisigNotFound error.
    ///
    /// # Arguments
    ///
    /// * `origin` - The origin multisig account who wants to set the new threshold.
    /// * `new_threshold` - The new threshold to be set.
    ///
    /// # Errors
    ///
    /// * `MultisigNotFound` - The multisig account does not exist.
    /// * `InvalidThreshold` - The new threshold is greater than the total number of signers or is zero.
    pub fn set_threshold(origin: OriginFor<T>, new_threshold: u32) -> DispatchResult
}
delete_multisig
#![allow(unused)]
fn main() {
    /// Deletes a multisig account and all related proposals.
    ///
    /// This function needs to be called from a Multisig account as the origin.
    /// Otherwise it will fail with MultisigNotFound error.
    ///
    /// # Arguments
    ///
    /// * `origin` - The origin multisig account who wants to cancel the proposal.
    ///
    /// # Errors
    ///
    /// * `MultisigNotFound` - The multisig account does not exist.
    pub fn delete_account(origin: OriginFor<T>) -> DispatchResult
}
#![allow(unused)]
fn main() {
#[pallet::storage]
pub type MultisigAccount<T: Config> = StorageMap<_, Twox64Concat, T::AccountId, MultisigAccountDetails<T>>;

/// The set of open multisig proposals. A proposal is uniquely identified by the multisig account and the call hash.
/// (maybe a nonce as well in the future)
#[pallet::storage]
pub type PendingProposals<T: Config> = StorageDoubleMap<
    _,
    Twox64Concat,
    T::AccountId, // Multisig Account
    Blake2_128Concat,
    T::Hash, // Call Hash
    MultisigProposal<T>,
>;
}
As for the values:
#![allow(unused)]
fn main() {
pub struct MultisigAccountDetails<T: Config> {
    /// The signers of the multisig account. This is a BoundedBTreeSet to ensure faster operations (add, remove),
    /// as well as lookups and faster set operations to ensure the approvers are always a subset of the signers
    /// (e.g. in case of removal of a signer during an active proposal).
    pub signers: BoundedBTreeSet<T::AccountId, T::MaxSignatories>,
    /// The threshold of approvers required for the multisig account to be able to execute a call.
    pub threshold: u32,
    pub deposit: BalanceOf<T>,
}
}
#![allow(unused)]
fn main() {
pub struct MultisigProposal<T: Config> {
    /// Proposal creator.
    pub creator: T::AccountId,
    pub creation_deposit: BalanceOf<T>,
    /// The extrinsic when the multisig operation was opened.
    pub when: Timepoint<BlockNumberFor<T>>,
    /// The approvers achieved so far, including the depositor.
    /// The approvers are stored in a BoundedBTreeSet to ensure faster lookup and operations (approve, reject).
    /// It's also bounded to ensure that the size doesn't go over the limit required by the Runtime.
    pub approvers: BoundedBTreeSet<T::AccountId, T::MaxSignatories>,
    /// The rejectors for the proposal so far.
    /// The rejectors are stored in a BoundedBTreeSet to ensure faster lookup and operations (approve, reject).
    /// It's also bounded to ensure that the size doesn't go over the limit required by the Runtime.
    pub rejectors: BoundedBTreeSet<T::AccountId, T::MaxSignatories>,
    /// The block number until which this multisig operation is valid. None means no expiry.
    pub expire_after: Option<BlockNumberFor<T>>,
}
}
For optimization we're using BoundedBTreeSet to allow for efficient lookups and removals. Especially in the case of approvers, we need to be able to remove an approver from the list when they reject their approval. (which we do lazily when execute_proposal is called).
There's an extra storage map for the deposits of the multisig accounts per signer added. This is to ensure that we can release the deposit when the multisig removes a signer, even if the per-signer deposit constant changes in the runtime later on.
We need to ensure that the approvers are always a subset of the signers. This is also partially why we're using BoundedBTreeSet for signers and approvers. Once execute_proposal is called, we ensure that the proposal is still valid and that the approvers are still a subset of the current signers.
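A minimal sketch of that check, with hypothetical helper and struct field usage:

// Every recorded approver must still be a signer at execution time, and the
// surviving approvals must still meet the threshold.
fn ensure_still_approved<T: Config>(
    multisig: &MultisigAccountDetails<T>,
    proposal: &MultisigProposal<T>,
) -> DispatchResult {
    let valid_approvals = proposal
        .approvers
        .iter()
        .filter(|a| multisig.signers.contains(a))
        .count() as u32;
    ensure!(valid_approvals >= multisig.threshold, Error::<T>::NotEnoughApprovers);
    Ok(())
}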
Once the last signer of a multisig account is removed, or the multisig approved the account deletion, we delete the multisig account from the state and keep the proposals until someone calls cleanup_proposals (possibly multiple times), which iterates over at most a fixed limit per extrinsic. This ensures we don't have unbounded iteration over the proposals. Users are already incentivized to call cleanup_proposals to get their deposits back.
We currently just delete the account without checking for deposits (Would like to hear your thoughts here). We can either
We always use the latest threshold and don't store each proposal with a different threshold. This allows the following:
Standard audit/review requirements apply.
Doing a back-of-the-envelope calculation to prove that the stateful multisig is more efficient than the stateless multisig, given its smaller footprint on blocks.
Quick review over the extrinsics for both as it affects the block size:
Stateless Multisig: Both as_multi and approve_as_multi have similar parameters:
#![allow(unused)]
fn main() {
origin: OriginFor<T>,
threshold: u16,
other_signatories: Vec<T::AccountId>,
maybe_timepoint: Option<Timepoint<BlockNumberFor<T>>>,
call_hash: [u8; 32],
max_weight: Weight,
}
Stateful Multisig: We have the following extrinsics:
#![allow(unused)]
fn main() {
pub fn start_proposal(
    origin: OriginFor<T>,
    multisig_account: T::AccountId,
    call_or_hash: CallOrHash,
)
}
#![allow(unused)]
fn main() {
pub fn approve(
    origin: OriginFor<T>,
    multisig_account: T::AccountId,
    call_or_hash: CallOrHash,
)
}
#![allow(unused)]
fn main() {
pub fn execute_proposal(
    origin: OriginFor<T>,
    multisig_account: T::AccountId,
    call_or_hash: CallOrHash,
)
}
The main takeaway is that we don't need to pass the threshold and other signatories in the extrinsics. This is because we already have the threshold and signatories in the state (stored only once).
So now for the calculations, given the following:
The table calculates, assuming each of the K multisig accounts has one proposal that gets approved by 2N/3 of the signers and then executed, how much the total block and state sizes have increased by the end of the day.
Note: We're not calculating the cost of the proposal itself, as in both the stateful and stateless multisig it is almost the same and gets cleaned up from the state once the proposal is executed or canceled.
Stateless effect on block sizes = (2/3)KN^2 (as each of the 2N/3 approving users will need to call approve_as_multi with all the other signatories (N) in the extrinsic body)
Stateful effect on block sizes = K*N (as each approving user will need to call approve with only the multisig account in the extrinsic body)
Stateless effect on state sizes = Nil (as the multisig account is not stored in the state)
Stateful effect on state sizes = K*N (as each multisig account (K) will be stored with all its signers (N) in the state)
Simplified table removing K from the equation:

| Pallet    | Block Size | State Size |
|-----------|:----------:|-----------:|
| Stateless | N^2        | Nil        |
| Stateful  | N          | N          |
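For a concrete illustration (numbers chosen arbitrarily): with N = 10 signers, the stateless flow puts on the order of N^2 = 100 account IDs worth of signatory data into blocks per approved proposal, whereas the stateful flow puts on the order of N = 10 small approval calls into blocks while keeping N = 10 signers in state once per multisig account, regardless of how many proposals that account processes.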
So even though the stateful multisig has a larger state size, it's still more efficient in terms of block size and total footprint on the blockchain.
The Stateful Multisig will have better ergonomics for managing multisig accounts for both developers and end-users.
This RFC is compatible with the existing implementation and can be handled via upgrades and migration. It's not intended to replace the existing multisig pallet.
This proposes to increase the maximum length of PGP Fingerprint values from a 20 bytes/chars limit to a 40 bytes/chars limit.
Pretty Good Privacy (PGP) Fingerprints are shorter versions of their corresponding Public Key that may be printed on a business card.
They may be used by someone to validate the correct corresponding Public Key.
It should be possible to add PGP Fingerprints to Polkadot on-chain identities.
GNU Privacy Guard (GPG) is compliant with PGP and the two acronyms are used interchangeably.
When setting a Polkadot on-chain identity, users may provide a PGP Fingerprint value in the "pgpFingerprint" field. That value may be longer than 20 bytes/chars (e.g. PGP Fingerprints are 40 bytes/chars long), however the field can only store a maximum length of 20 bytes/chars of information.
Possible disadvantages of the current 20 bytes/chars limitation:
identity
setIdentity(info)
The maximum length of identity PGP Fingerprint values should be increased from the current 20 bytes/chars limit to at least a 40 bytes/chars limit to support PGP Fingerprints and GPG Fingerprints.
If a user tries to set an on-chain identity by creating an extrinsic using Polkadot.js with identity > setIdentity(info), and they provide their 40 character long PGP Fingerprint or GPG Fingerprint, which is longer than the maximum length of 20 bytes/chars [u8;20], then they will encounter this error:
createType(Call):: Call: failed decoding identity.setIdentity:: Struct: failed on args: {...}:: Struct: failed on pgpFingerprint: Option<[u8;20]>:: Expected input with 20 bytes (160 bits), found 40 bytes
Increasing the maximum length of identity PGP Fingerprint values from the current 20 bytes/chars limit to at least a 40 bytes/chars limit would overcome these errors and support PGP Fingerprints and GPG Fingerprints, satisfying the solution requirements.
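A minimal sketch of the change implied above, assuming the field continues to live in the identity pallet's IdentityInfo struct (other fields elided):

pub struct IdentityInfo {
    // -- snip --
    /// A PGP/GPG fingerprint, widened from `[u8; 20]` to `[u8; 40]`.
    pub pgp_fingerprint: Option<[u8; 40]>,
}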
No drawbacks have been identified.
Implementations would be tested for adherence by checking that 40 bytes/chars PGP Fingerprints are supported.
No effect on security or privacy beyond what already exists has been identified.
No implementation pitfalls have been identified.
It would be an optimization, since the associated interfaces exposed to developers and end-users could start being used for full-length fingerprints.
To minimize additional overhead the proposal suggests a 40 bytes/chars limit since that would at least provide support for PGP Fingerprints, satisfying the solution requirements.
No potential ergonomic optimizations have been identified.
Updates to Polkadot.js Apps, API and its documentation and those referring to it may be required.
No prior articles or references.
No further questions at this stage.
Relates to RFC entitled "Increase maximum length of identity raw data values from 32 bytes".
This proposes to require a slashable deposit in the broker pallet when initially purchasing or renewing Bulk Coretime or Instantaneous Coretime cores.
Additionally, it proposes to record a reputational status based on the behavior of the purchaser, as it relates to their use of Kusama Coretime cores that they purchase, and to possibly reserve a proportion of the cores for prospective purchasers that have an on-chain identity.
There are sales of Kusama Coretime cores that are scheduled to occur later this month by the Coretime Marketplace Lastic.xyz, initially in limited quantities, and potentially also by RegionX in future, subject to their Polkadot referendum #582. This poses a risk in that some Kusama Coretime core purchasers may buy cores when they have no intention of actually placing a workload on them or leasing them out, which would prevent those that wish to purchase and actually use Kusama Coretime cores from being able to use any cores at all.
The types of purchasers may include:
Chaotic repercussions could include the following:
On-chain identity. It may be possible to circumvent bots and scalpers to an extent by requiring a proportion of Kusama Coretime purchasers to have an on-chain identity. As such, a possible solution could be to allow the configuration of a threshold in the Broker pallet that reserves a proportion of the cores for accounts that have an on-chain identity, reverting to a waiting list of anonymous purchasers if the reserved proportion of cores remains unsold.
Slashable deposit. A viable solution could be to require a slashable deposit to be locked prior to the purchase or renewal of a core, similar to how decision deposits are used in OpenGov to prevent spam. If you buy a Kusama Coretime core, you could be challenged by one or more collectives of fishermen to provide proof against certain criteria of how you used it, and if you fail to provide adequate evidence in response to that scrutiny, you would lose a proportion of that deposit and face restrictions on purchasing or renewing cores in future, which may also be configured on-chain.
Reputation. To disincentivise certain behaviours, a reputational status indicator could be used to record the historic behavior of the purchaser and whether on-chain judgement has determined they have adequately rectified that behaviour, as it relates to their usage of Kusama Coretime cores that they purchase.
The slashable deposit, if set too high, may have an economic impact, resulting in fewer Kusama Coretime cores being purchased.
Lack of a slashable deposit in the Broker pallet is a security concern, since it exposes Kusama Coretime sales to potential abuse.
Reserving a proportion of Kusama Coretime sales cores for those with on-chain identities should not be to the exclusion of accounts that wish to remain anonymous or cause cores to be wasted unnecessarily. As such, if cores that are reserved for on-chain identities remain unsold then they should be released to anonymous accounts that are on a waiting list.
It should improve performance by reducing the potential for state bloat, since there is less risk of undesirable Kusama Coretime sales activity when a slashable deposit is required and when purchasers that waste or misuse Kusama Coretime cores face a reputational risk.
The solution proposes to minimize the risk of some Kusama Coretime cores not even being used or leased to perform any tasks at all.
It will be important to monitor and manage the slashable deposits, purchaser reputations, and utilization of the proportion of cores that are reserved for accounts with an on-chain identity.
The mechanism for setting a slashable deposit amount, should avoid undue complexity for users.
No prior articles.
This RFC proposes a new pallet_inflation to be added to the Polkadot runtime, which improves the inflation machinery of the Polkadot relay chain in a number of ways:
The existing inflation logic in the relay chain suffers from a number of drawbacks:
Event
This RFC, as iterated above, proposes a new pallet_inflation that addresses all of the named problems. However, this RFC does not propose any changes to the actual inflation rate, but rather provides a new technical substrate (pun intended), upon which token holders can decide on the future of the DOT token's inflation in a more clear and transparent way.
We argue that one reason why the inflation rate of Polkadot has not significantly changed in ~4 years has been the complicated process of updating it. We hope that with the tools provided in this RFC, stakeholders can experiment with the inflation rate in a more ergonomic way. Finally, this experimentation can be considered useful as a final step toward fixing the economics of DOT in JAM, as proposed in the JAM graypaper.
Within the scope of this RFC, we suggest deploying the new inflation pallet in a backwards compatible way, such that the inflation model does not change in practice, and leave the actual changes to the token holders and researchers and further governance proposals.
While mainly intended for Polkadot, the system proposed in this RFC is general enough such that it can be interpreted as a "general inflation system pallet", and can be used in newly onboarding parachains.
This RFC is relevant to the following stakeholders, listed from high to low impact:
First, let's further elaborate on the existing order. The current inflation logic is deeply nested in pallet_staking, and the pallet_staking::Config::EraPayout interface. Through this trait, the staking pallet is informed how many new tokens should possibly be minted. This amount is divided into two parts:
pallet_staking::Config::RewardRemainder
As it stands now, the implementation of EraPayout which specifies the two amounts above lives in the respective runtime, and uses the inflation model originally proposed by W3F for Polkadot. Read more about this model here.
At present, inflation always happens at the end of an era, which is a concept known by the staking system. The duration of an era is recorded in pallet_staking as milliseconds (as recorded by the standard pallet_timestamp), is passed to EraPayout as an input, and is measured against the full year to determine how much should be inflated.
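For illustration: with a 24-hour era, the era payout is computed against roughly 86,400,000 ms out of about 31,557,600,000 ms in a year, i.e. roughly 1/365 (~0.27%) of the annual inflation is minted per era.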
The naming used in this section is tentative, based on a WIP implementation, and subject to change before finalization of this RFC.
The new order splits the process for inflation into two steps:
In very abstract terms, an example of the above process can be:
A proper configuration of this pallet should use pallet_parameters where possible to allow for any of the actual values used to specify Sourcing and Distribution to be changed via on-chain governance. Please see the example configurations section for more details.
In the new model, inflation can happen at any point in time. Since a new pallet is now dedicated to inflation, it can internally store the timestamp of the last inflation point and always inflate the correct amount. This means that while the duration of a staking era is 1 day, the inflation process can happen e.g. every hour. The opposite is also possible, although more complicated: the staking/treasury system could receive their corresponding income on a weekly basis while the era duration is still 1 day. That being said, we don't recommend using this flexibility, as it brings no clear advantage and is only extra complexity. We recommend that the inflation still happen shortly before the end of the staking era. This means that if the inflation sourcing or distribution is a function of the staking rate, it can reliably use the staking rate of the last era.
Finally, as noted above, this RFC implies a new accounting system for staking to keep track of its staking reward. In short, the new process is as follows: pallet_inflation will mint the staking portion of inflation directly into a key-less account controlled by pallet_staking. At the end of each era, pallet_staking will inspect this account, and move whatever amount has been paid into it to another key-less account associated with the era number. The actual payouts, initiated by stakers, will transfer from this era account into the corresponding stakers' accounts.
Interestingly, this means that any account can possibly contribute to staking rewards by transferring DOTs to the key-less parent account controlled by the staking system.
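A rough sketch of the per-era accounting described above, with hypothetical account-derivation and configuration names:

// At the end of `era`, move whatever accumulated in the key-less parent income
// account into a key-less sub-account tied to that era; stakers later pay out
// from the era account.
fn close_era_accounting<T: Config>(era: EraIndex) -> DispatchResult {
    let parent = T::StakingIncomeAccount::get();
    let era_account = Pallet::<T>::era_reward_account(era);
    let income = T::Currency::free_balance(&parent);
    T::Currency::transfer(&parent, &era_account, income, ExistenceRequirement::AllowDeath)?;
    Ok(())
}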
A candidate implementation of this RFC can be found in this branch of the polkadot-sdk repository. Please note the changes to:
substrate/frame/inflation
substrate/frame/staking
substrate/bin/runtime
The following are working examples from the above implementation candidate, highlighting some of the outcomes that can be achieved.
First, to parameterize the existing proposed implementation to replicate what Polkadot does today, assuming we incorporate the fixed 2% treasury income, the outcome would be:
#![allow(unused)]
fn main() {
parameter_types! {
	pub Distribution: Vec<pallet_inflation::DistributionStep<Runtime>> = vec![
		// 2% goes to treasury, no questions asked.
		Box::new(pay::<Runtime, TreasuryAccount, dynamic_params::staking::FixedTreasuryIncome>),
		// from whatever is left, staking gets all the rest, based on the staking rate.
		Box::new(polkadot_staking_income::<
			Runtime,
			dynamic_params::staking::IdealStakingRate,
			dynamic_params::staking::Falloff,
			StakingIncomeAccount
		>),
		// Burn anything that is left.
		Box::new(burn::<Runtime, All>),
	];
}

impl pallet_inflation::Config for Runtime {
	/// Fixed 10% annual inflation.
	type InflationSource =
		pallet_inflation::FixedRatioAnnualInflation<Runtime, dynamic_params::staking::MaxInflation>;
	type Distribution = Distribution;
}
}
In this snippet, we use a number of components provided by pallet_inflation, namely pay, +polkadot_staking_income, burn and FixedRatioAnnualInflation. Yet, crucially, these components +are fed parameters that are all backed by an instance of the pallet_parameters, namely everything +prefixed by dynamic_params.
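For completeness, a hedged sketch of how those dynamic_params could be declared with pallet_parameters; the macro shape follows frame_support's dynamic_params facility, and the default values shown are illustrative, not values this RFC prescribes:
#[dynamic_params(RuntimeParameters, pallet_parameters::Parameters::<Runtime>)]
pub mod dynamic_params {
	use super::*;

	#[dynamic_pallet_params]
	#[codec(index = 0)]
	pub mod staking {
		/// Share of each inflation cycle routed to the treasury before anything else.
		#[codec(index = 0)]
		pub static FixedTreasuryIncome: Perquintill = Perquintill::from_percent(2);
		/// Staking rate at which staking income is maximal.
		#[codec(index = 1)]
		pub static IdealStakingRate: Perquintill = Perquintill::from_percent(75);
		/// Falloff of the staking income curve beyond the ideal rate.
		#[codec(index = 2)]
		pub static Falloff: Perquintill = Perquintill::from_percent(5);
		/// Maximum annual inflation ratio.
		#[codec(index = 3)]
		pub static MaxInflation: Perquintill = Perquintill::from_percent(10);
	}
}
Any of these values can then be updated later via a pallet_parameters referendum, without a runtime upgrade.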
The above is a purely inflationary system. To make inflation disinflationary instead, another pre-made component of pallet_inflation can be used:
impl pallet_inflation::Config for Runtime {
-	/// Fixed 10% annual inflation.
-	type InflationSource =
-		pallet_inflation::FixedRatioAnnualInflation<Runtime, dynamic_params::staking::MaxInflation>;
+	type InflationSource = pallet_inflation::FixedAnnualInflation<
+		Runtime,
+		dynamic_params::staking::FixedAnnualInflationAmount,
+	>;
}
Whereby FixedAnnualInflationAmount is the fixed absolute value (as opposed to ratio) by +which the chain inflates annually, for example 100m DOTs.
The following drawbacks are noted:
The new pallet_inflation, along with its integration into pallet_staking, must be thoroughly audited and reviewed by fellows. We also emphasize simulating the actual inflation logic using the real Polkadot state with Chopsticks and try-runtime.
The proposed system in this RFC implies a handful of extra storage reads and writes "per inflation cycle", but given that a reasonable instance of this pallet would probably decide to inflate, e.g., once per day, the performance impact is negligible.
The drawback section above noted some ergonomic concerns.
The "New Order" section above notes the compatibility notes with the existing staking +and inflation system.
This RFC proposes the addition of a secondary market feature to either the broker pallet or as a separate pallet maintained by Lastic, enabling users to list and purchase regions. This includes creating, purchasing, and removing listings, as well as emitting relevant events and handling associated errors.
Currently, the broker pallet lacks functionality for a secondary market, which limits users' ability to freely trade regions. This RFC aims to introduce a secure and straightforward mechanism for users to list regions they own for sale and allow other users to purchase these regions.
While integrating this functionality directly into the broker pallet is one option, another viable approach is to implement it as a separate pallet maintained by Lastic. This separate pallet would have access to the broker pallet and add minimal functionality necessary to support the secondary market.
Adding smart contracts to the Coretime chain could also address this need; however, this process is expected to be lengthy and complex. We cannot afford to wait for this extended timeline to enable basic secondary market functionality. By proposing either integration into the broker pallet or the creation of a dedicated pallet, we can quickly enhance the flexibility and utility of the broker pallet, making it more user-friendly and valuable.
Primary stakeholders include:
This RFC introduces the following key features:
Storage Changes:
Listings
New Dispatchable Functions:
create_listing
purchase_listing
remove_listing
Events:
ListingCreated
RegionSold
ListingRemoved
Error Handling:
ExpiredRegion
UnknownListing
InvalidPrice
NotOwner
Testing:
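To make the lists above concrete, here is a hedged sketch of the proposed surface, written with plain Rust types rather than the FRAME macros; RegionId, Balance and AccountId are placeholders for the Coretime chain's actual types, and the bodies are elided:
type RegionId = u128;
type Balance = u128;
type AccountId = [u8; 32];

/// Storage: `Listings` maps a region up for sale to its seller and asking price.
pub struct Listing {
	pub seller: AccountId,
	pub price: Balance,
}

pub enum Event {
	ListingCreated { region: RegionId, price: Balance },
	RegionSold { region: RegionId, buyer: AccountId, price: Balance },
	ListingRemoved { region: RegionId },
}

pub enum Error {
	ExpiredRegion,
	UnknownListing,
	InvalidPrice,
	NotOwner,
}

// Dispatchables (origin checks elided): list a region, buy a listed region, withdraw a listing.
pub fn create_listing(_seller: AccountId, _region: RegionId, _price: Balance) { /* ... */ }
pub fn purchase_listing(_buyer: AccountId, _region: RegionId) { /* ... */ }
pub fn remove_listing(_seller: AccountId, _region: RegionId) { /* ... */ }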
The main drawback of adding the additional complexity directly to the broker pallet is the potential increase in maintenance overhead. Therefore, we propose adding the additional functionality as a separate pallet on the Coretime chain. To reduce the implementation burden, the implementation, along with unit tests, would be taken care of by Lastic (Aurora Makovac, Philip Lucsok).
There are potential risks of security vulnerabilities in the new market functionalities, such as unauthorized region transfers or incorrect balance adjustments. Therefore, extensive security measures would have to be implemented.
This RFC proposes the integration of smart contracts on the Coretime chain to enhance flexibility and enable complex decentralized applications, including secondary market functionalities.
Currently, the Coretime chain lacks the capability to support smart contracts, which limits the range of decentralized applications that can be developed and deployed. By enabling smart contracts, the Coretime chain can facilitate more sophisticated functionalities such as automated region trading, dynamic pricing mechanisms, and other decentralized applications that require programmable logic. This will enhance the utility of the Coretime chain, attract more developers, and create more opportunities for innovation.
Additionally, while there is a proposal (#885) to allow EVM-compatible contracts on Polkadot’s Asset Hub, the implementation of smart contracts directly on the Coretime chain will provide synchronous interactions and avoid the complexities of asynchronous operations via XCM.
This RFC introduces the following key components:
Smart Contract Support:
Storage and Execution:
Integration with Existing Pallets:
Security and Auditing:
There are several drawbacks to consider:
By enabling smart contracts on the Coretime chain, we can significantly expand its capabilities and attract a wider range of developers and users, fostering innovation and growth in the ecosystem.
This RFC proposes a solution to replicate an existing pure proxy from one chain to others. The aim is to address the current limitations where pure proxy accounts, which are keyless, cannot have their proxy relationships recreated on different chains. This leads to issues where funds or permissions transferred to the same keyless account address on chains other than its origin chain become inaccessible.
A pure proxy is a new account created by a primary account. The primary account is set as a proxy for the pure proxy account, managing it. Pure proxies are keyless and non-reproducible, meaning they lack a private key and have an address derived from a preimage determined by on-chain logic. More on pure proxies can be found here.
For the purpose of this document, we define a keyless account as a "pure account", the controlling account as a "proxy account", and the entire relationship as a "pure proxy".
The relationship between a pure account (e.g., account ID: pure1) and its proxy (e.g., account ID: alice) is stored on-chain (e.g., parachain A) and currently cannot be replicated to another chain (e.g., parachain B). Because the account pure1 is keyless and its proxy relationship with alice is not replicable from the parachain A to the parachain B, alice does not control the pure1 account on the parachain B.
Given that these mistakes are likely, it is necessary to provide a solution to either prevent them or enable access to a pure account on a target chain.
Runtime Users, Runtime Devs, wallets, cross-chain dApps.
One possible solution is to allow a proxy to create or replicate a pure proxy relationship for the same pure account on a target chain. For example, Alice, as the proxy of the pure1 pure account on parachain A, should be able to set a proxy for the same pure1 account on parachain B.
To minimise security risks, parachain B should grant parachain A only the least amount of permission necessary for the replication. First, parachain A claims to parachain B that the operation is commanded by the pure account, and thus by its proxy; second, it provides proof that the account is keyless.
The replication process will be facilitated by XCM, with the first claim made using the DescendOrigin instruction. The replication call on parachain A would require a signed origin by the pure account and construct an XCM program for parachain B, where it first descends the origin, resulting in the ParachainA/AccountId32(pure1) origin location on the receiving side.
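A rough sketch of that XCM program, using XCM v3-style instructions; all identifiers here, including the call dispatched on parachain B and the weight limit, are illustrative assumptions rather than part of this RFC:
// Constructed on parachain A after a call signed by the pure account `pure1`.
let program = Xcm(vec![
	// Executed on parachain B with origin Parachain(A); afterwards the effective origin
	// location is Parachain(A)/AccountId32(pure1).
	DescendOrigin(X1(Junction::AccountId32 { network: None, id: pure1_raw })),
	// Parachain B then verifies the supplied keyless-account witness before registering
	// the new proxy relationship for `pure1`.
	Transact {
		origin_kind: OriginKind::SovereignAccount,
		require_weight_at_most: Weight::from_parts(1_000_000_000, 64 * 1024),
		call: add_proxy_call_on_b.encode().into(),
	},
]);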
There are two disadvantages to this approach:
We could eliminate the first disadvantage by allowing only the spawner of the pure proxy to recreate the pure proxies, if they sign the transaction on a remote chain and supply the witness/preimage. Since the preimage of a pure account includes the account ID of the spawner, we can verify that the account signing the transaction is indeed the spawner of the given pure account. However, this approach would grant exclusive rights to the spawner over the pure account, which is not a property of pure proxies at present. This is why it's not an option for us.
As an alternative to requiring clients to provide witness data, we could label pure accounts on the source chain and trust that label on the receiving chain. However, this would require the receiving chain to place greater trust in the source chain. If the source chain is compromised, any type of account on the trusting chain could also be compromised.
A conceptually different solution would be to not implement replication of pure proxies and instead inform users that ownership of a pure proxy on one chain does not imply ownership of the same account on another chain. This solution seems complex, as it would require UIs and clients to adapt to this understanding. Moreover, mistakes would likely remain unavoidable.
Each chain expressly authorizes another chain to replicate its pure proxies, accepting the inherent risk of that chain potentially being compromised. This authorization allows a malicious actor from the compromised chain to take control of any pure proxy account on the chain that granted the authorization. However, this is limited to pure proxies that originated from the compromised chain if they have a chain-specific seed within the preimage.
There is a security issue, not introduced by the proposed solution but worth mentioning. The same spawner can create the pure accounts on different chains controlled by the different accounts. This is possible because the current preimage version of the proxy pallet does not include any non-reproducible, chain-specific data, and elements like block numbers and extrinsic indexes can be reproduced with some effort. This issue could be addressed by adding a chain-specific seed into the preimages of pure accounts.
The replication is facilitated by XCM, which adds some additional load to the communication channel. However, since the number of replications is not expected to be large, the impact is minimal.
The proposed solution does not alter any existing interfaces. It does require clients to obtain the witness data which should not be an issue with support of an indexer.
This RFC proposes compressing the state response message during the state syncing process to reduce the amount of data transferred.
State syncing can require downloading several gigabytes of data, particularly for blockchains with large state sizes, such as Astar, which has a state size exceeding 5 GiB (https://github.com/AstarNetwork/Astar/issues/1110). This presents a significant challenge for nodes with slower network connections. Additionally, the current state sync implementation lacks a persistence feature (https://github.com/paritytech/polkadot-sdk/issues/4), meaning any network disruption forces the node to re-download the entire state, making the process even more difficult.
This RFC benefits all projects utilizing the Substrate framework, specifically in improving the efficiency of state syncing.
The largest portion of the state response message consists of either CompactProof or Vec<KeyValueStateEntry>, depending on whether a proof is requested (source):
None identified.
The code changes required for this RFC are straightforward: compress the state response on the sender side and decompress it on the receiver side. Existing sync tests should ensure functionality remains intact.
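Purely as an illustration of that change (the codec choice here is an assumption of this sketch, not something the RFC mandates), the sender-side and receiver-side hooks amount to something like:
// Sender: compress the SCALE-encoded state response before putting it on the wire.
fn compress_state_response(encoded: &[u8]) -> std::io::Result<Vec<u8>> {
	zstd::encode_all(encoded, 3) // moderate level: good ratio at low CPU cost
}

// Receiver: decompress before decoding and importing the key/value entries or proof.
fn decompress_state_response(compressed: &[u8]) -> std::io::Result<Vec<u8>> {
	zstd::decode_all(compressed)
}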
This RFC optimizes network bandwidth usage during state syncing, particularly for blockchains with gigabyte-sized states, while introducing negligible CPU overhead for compression and decompression. For example, compressing the state response during a recent Polkadot warp sync (around height #22076653) reduces the data transferred from 530,310,121 bytes to 352,583,455 bytes — a 33% reduction, saving approximately 169 MiB of data.
Performance data is based on this patch, with logs available here.
No compatibility issues identified.
This RFC proposes a new host function, secp256r1_ecdsa_verify_prehashed, for verifying NIST-P256 signatures. The function takes as input the message hash, r and s components of the signature, and the x and y coordinates of the public key. By providing this function, runtime authors can leverage a more efficient verification mechanism for "secp256r1" elliptic curve signatures, reducing computational costs and improving overall performance.
The “secp256r1” elliptic curve is a curve standardized by NIST that uses the same calculations as the “secp256k1” elliptic curve, only with different input parameters. The cost of combined attacks and the security conditions are almost the same for both curves. Adding a host function would make signature verification using the “secp256r1” elliptic curve available in the runtime, with multi-faceted benefits. One important factor is that this curve is widely used and supported in many modern devices, such as Apple’s Secure Enclave, WebAuthn and Android Keychain, which proves user adoption. Additionally, the introduction of this host function could enable valuable features in account abstraction, allowing more efficient and flexible management of accounts via transactions signed on mobile devices. Most modern devices and applications rely on the “secp256r1” elliptic curve, so the addition of this host function enables a more efficient verification of device-native transaction signing mechanisms. For example:
This RFC proposes a new host function for runtime authors to leverage a more efficient verification mechanism for "secp256r1" elliptic curve signatures.
Proposed host function signature:
#![allow(unused)]
fn main() {
// Parameter names and types below are reconstructed from the summary above (the message
// hash, the r and s signature components, and the x and y public key coordinates); the
// exact layout may differ in the final specification.
fn secp256r1_ecdsa_verify_prehashed(
	msg_hash: [u8; 32],
	r: [u8; 32],
	s: [u8; 32],
	x: [u8; 32],
	y: [u8; 32],
) -> bool;
}
The host function MUST return true if the signature is valid or false otherwise.
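For illustration, the check the host side would perform can be sketched with the p256 crate; the crate choice and the exact byte layout of the inputs are assumptions of this sketch, not part of the RFC:
use p256::ecdsa::signature::hazmat::PrehashVerifier;
use p256::ecdsa::{Signature, VerifyingKey};
use p256::{EncodedPoint, FieldBytes};

fn verify_prehashed(msg_hash: &[u8; 32], r: &[u8; 32], s: &[u8; 32], x: &[u8; 32], y: &[u8; 32]) -> bool {
	// Rebuild the public key from its affine coordinates (uncompressed SEC1 point).
	let point = EncodedPoint::from_affine_coordinates(
		FieldBytes::from_slice(x),
		FieldBytes::from_slice(y),
		false,
	);
	let Ok(key) = VerifyingKey::from_encoded_point(&point) else { return false };
	// Rebuild the signature from its r and s scalars.
	let Ok(sig) = Signature::from_scalars(*FieldBytes::from_slice(r), *FieldBytes::from_slice(s)) else {
		return false;
	};
	// Verify against the already-hashed message.
	key.verify_prehash(msg_hash, &sig).is_ok()
}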
The changes do not directly affect protocol security; parachains are not forced to use the host function.
The host function proposed in this RFC allows parachain runtime developers to use a more efficient verification mechanism for "secp256r1" elliptic curve signatures.
Parachain teams will need to include this host function to upgrade.
A follow-up to RFC-0014. This RFC proposes adding a new collective to the Polkadot Collectives Chain: the Unbrick Collective, as well as improvements to the mechanisms that will allow teams operating paras that have stopped producing blocks to be assisted, in order to restore block production for these paras.
Since the initial launch of Polkadot parachains, there have been many incidents causing parachains to stop producing new blocks (therefore, being bricked) and many occurrences that require Polkadot governance to update the parachain head state/wasm. This can be due to many reasons, and can cause damage to the parachain and its users.
In consequence, the idea of an Unbrick Collective that can provide assistance to para teams when their paras brick, and further protection against future halts, is reasonable enough.
The Unbrick Collective is defined as an unranked collective of members, not paid by the Polkadot Treasury. Its main goal is to serve as a point of contact and assistance for enacting the actions needed to restore block production for a bricked para; therefore, its members must have the technical capacity to do so.
The ability to modify the Head State and/or the PVF of a para implies the possibility of performing arbitrary modifications of it (e.g., taking control of the native parachain token or any bridged assets on the para).
This could introduce a new attack vector, and therefore such great power needs to be handled carefully.
The implementation of this RFC will be tested on testnets (Rococo and Westend) first.
An audit will be required to ensure the implementation doesn't introduce unwanted side effects.
There are no privacy related concerns.
This RFC should improve the experience for new and existing parachain teams, lowering the barrier to unbrick a stalled para.
This RFC is fully compatible with existing interfaces.
WhitelistedUnbrickCaller
Unbrick
This RFC proposes to change the duration of the Confirmation Period for the Big Tipper and Small Tipper tracks in Polkadot OpenGov:
Big Tipper: 1 Hour -> 1 Day
Currently, these are the durations of treasury tracks in Polkadot OpenGov. Confirmation periods for the Spender tracks were adjusted based on RFC20 and its related conversation.
You can see a general trend across the Spender tracks: as the privilege level (the amount the track can spend) increases, the confirmation period approximately doubles.
I believe that the Big Tipper and Small Tipper track's confirmation periods should be adjusted to match this trend.
In the current state it is possible to somewhat positively snipe these tracks, and whilst the power/privilege level of these tracks is very low (they cannot spend a large amount of funds), I believe we should increase the confirmation periods to something higher. This is backed up by the recent sentiment in the greater community regarding referendums submitted on these tracks. The parameters of Polkadot OpenGov can be adjusted based on the general sentiment of token holders when necessary.
The primary stakeholders of this RFC are:
DOT token holders – as this affects the protocol's treasury
Entities wishing to submit a referendum on these tracks – as this affects the referendum's timeline
Projects with governance app integrations – see the Performance, Ergonomics and Compatibility section below
This RFC proposes to change the duration of the confirmation period for both the Big Tipper and Small Tipper tracks. To achieve this the confirm_period parameter for those tracks should be changed.
You can see the lines of code that need to be adjusted here:
This RFC proposes to change the confirm_period for the Big Tipper track to DAYS (i.e. 1 Day) and the confirm_period for the Small Tipper track to 12 * HOURS (i.e. 12 Hours).
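In block counts, and assuming the runtime's usual 6-second block time constants, the proposed values work out as follows (a sketch; only the confirm_period field of each track definition changes, and the constant names mirror those used in the runtime):
const MINUTES: u32 = 60 / 6; // blocks per minute at 6-second blocks
const HOURS: u32 = 60 * MINUTES;
const DAYS: u32 = 24 * HOURS;

// Big Tipper track: confirm_period becomes one day.
const BIG_TIPPER_CONFIRM_PERIOD: u32 = DAYS;
// Small Tipper track: confirm_period becomes twelve hours.
const SMALL_TIPPER_CONFIRM_PERIOD: u32 = 12 * HOURS;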
The drawback of changing these confirmation periods is that the lifecycle of referenda submitted on those tracks would be ultimately longer, and it would add a greater potential to negatively "snipe" referenda on those tracks by knocking the referendum out of its confirmation period once the decision period has ended. This can be a good or a bad thing depending on your outlook of positive vs negative sniping.
This referendum will enhance the security of the protocol as it relates to its treasury. The confirmation period is one of the last lines of defense for the Polkadot token holder DAO to react to a potentially bad referendum and vote NAY in order for its confirmation period to be aborted.
This is a simple change (code wise) that should not affect the performance of the Polkadot protocol, outside of increasing the duration of the confirmation periods for these 2 tracks.
As per the implementation of changes described in RFC-20, it was identified that governance UIs automatically update to meet the new parameters:
Some token holders may want these confirmation periods to remain as they are currently and for them not to increase. If this is something that the Polkadot Technical Fellowship considers to be an issue to implement into a runtime upgrade then I can create a Wish For Change to obtain token holder approval.
The parameters of Polkadot OpenGov will likely continue to change over time, there are additional discussions in the community regarding adjusting the min_support for some tracks so that it does not trend towards 0%, similar to the current state of the Whitelisted Caller track. This is outside of the scope of this RFC and requires a lot more discussion.
This proposes a periodic, sale-based method for assigning Polkadot Coretime, the analogue of "block space" within the Polkadot Network. The method takes into account the need for long-term capital expenditure planning for teams building on Polkadot, yet also provides a means to allow Polkadot to capture long-term value in the resource which it sells. It supports the possibility of building rich and dynamic secondary markets to optimize resource allocation and largely avoids the need for parameterization.
The Polkadot Ubiquitous Computer, or just Polkadot UC, represents the public service provided by the Polkadot Network. It is a trust-free, WebAssembly-based, multicore, internet-native omnipresent virtual machine which is highly resilient to interference and corruption.
The present system of allocating the limited resources of the Polkadot Ubiquitous Computer is through a process known as parachain slot auctions. This is a parachain-centric paradigm whereby a single core is long-term allocated to a single parachain which itself implies a Substrate/Cumulus-based chain secured and connected via the Relay-chain. Slot auctions are on-chain candle auctions which proceed for several days and result in the core being assigned to the parachain for six months at a time up to 24 months in advance. Practically speaking, we only see two year periods being bid upon and leased.
More specifically, it is impossible to lease out cores at anything less than six months, and apparently unrealistic to do so at anything less than two years. This removes the ability to dynamically manage the underlying resource, and generally experimentation, iteration and innovation suffer. It bakes into the platform an assumption of permanence for anything deployed into it and restricts the market's ability to find a more optimal allocation of the finite resource.
There is no ability to determine capital requirements for hosting a parachain beyond two years from the point of its initial deployment onto Polkadot. While it would be unreasonable to have perfect and indefinite cost predictions for any real-world platform, not having any clarity whatsoever beyond "market rates" two years hence can be a very off-putting prospect for teams to buy into.
However, quite possibly the most substantial problem is both a perceived and often real high barrier to entry of the Polkadot ecosystem. By forcing innovators to either raise seven-figure sums through investors or appeal to the wider token-holding community, Polkadot makes it difficult for a small band of innovators to deploy their technology into Polkadot. While not being actually permissioned, it is also far from the barrierless, permissionless ideal which an innovation platform such as Polkadot should be striving for.
Furthermore, the design SHOULD be implementable and deployable in a timely fashion; three months from the acceptance of this RFC should not be unreasonable.
Socialization:
The essentials of this proposal were presented at Polkadot Decoded 2023 Copenhagen on the Main Stage. A small amount of socialization at the Parachain Summit preceded it and some substantial discussion followed it. The Parity Ecosystem team is currently soliciting views from ecosystem teams who would be key stakeholders.
Upon implementation of this proposal, the parachain-centric slot auctions and associated crowdloans cease. Instead, Coretime on the Polkadot UC is sold by the Polkadot System in two separate formats: Bulk Coretime and Instantaneous Coretime.
When a Polkadot Core is utilized, we say it is dedicated to a Task rather than a "parachain". The Task to which a Core is dedicated may change at every Relay-chain block and while one predominant type of Task is to secure a Cumulus-based blockchain (i.e. a parachain), other types of Tasks are envisioned.
Bulk Coretime is sold periodically on a specialised system chain known as the Coretime-chain and allocated in advance of its usage, whereas Instantaneous Coretime is sold on the Relay-chain immediately prior to usage on a block-by-block basis.
The specific interface is properly described in RFC-5.
This proposal includes a number of parameters which need not necessarily be fixed. Their usage is explained below, but their values are suggested or specified in the later section Parameter Values.
The Coretime-chain includes some governance-set reservations of Coretime; these cover every System-chain. Additionally, governance is expected to initialize details of the pre-existing leased chains.
No specific considerations.
Parachains already deployed into the Polkadot UC must have a clear plan of action to migrate to an agile Coretime market.
While this proposal does not introduce documentable features per se, adequate documentation must be provided to potential purchasers of Polkadot Coretime. This SHOULD include any alterations to the Polkadot-SDK software collection.
Regular testing through unit tests, integration tests, manual testnet tests, zombie-net tests and fuzzing SHOULD be conducted.
A regular security review SHOULD be conducted prior to deployment through a review by the Web3 Foundation economic research group.
Any final implementation MUST pass a professional external security audit.
RFC-3 proposes a means of implementing the high-level allocations within the Relay-chain.
RFC-5 proposes the API for interacting with Relay-chain.
Additional work should specify the interface for the instantaneous market revenue so that the Coretime-chain can ensure Bulk Coretime placed in the instantaneous market is properly compensated.
Robert Habermeier initially wrote on the subject of Polkadot blockspace-centric in the article Polkadot Blockspace over Blockchains. While not going into details, the article served as an early reframing piece for moving beyond one-slot-per-chain models and building out secondary market infrastructure for resource allocation.
In the Agile Coretime model of the Polkadot Ubiquitous Computer, as proposed in RFC-1 and RFC-3, it is necessary for the allocating parachain (envisioned to be one or more pallets on a specialised Brokerage System Chain) to communicate the core assignments to the Relay-chain, which is responsible for ensuring those assignments are properly enacted.
This is a proposal for the interface which will exist around the Relay-chain in order to communicate this information and instructions.
The background motivation for this interface is splitting out coretime allocation functions and secondary markets from the Relay-chain onto System parachains. A well-understood and general interface is necessary for ensuring the Relay-chain receives coretime allocation instructions from one or more System chains without introducing dependencies on the implementation details of either side.
The content of this RFC was discussed in the Polkadot Fellows channel.
The interface has two sections: The messages which the Relay-chain is able to receive from the allocating parachain (the UMP message types), and messages which the Relay-chain is able to send to the allocating parachain (the DMP message types). These messages are expected to be able to be implemented in a well-known pallet and called with the XCM Transact instruction.
Future work may include these messages being introduced into the XCM standard.
For request_revenue_info, a successful request should be possible if when is no less than the Relay-chain block number on arrival of the message less 100,000.
For assign_core, a successful request should be possible if begin is no less than the Relay-chain block number on arrival of the message plus 10 and workload contains no more than 100 items.
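Expressed as simple validity checks (a sketch; the constants come from the two rules above and the names are illustrative):
type BlockNumber = u32;

/// A revenue-info request is guaranteed to be serviceable at least when `when` looks back
/// no more than 100_000 blocks from the message's arrival.
fn revenue_request_is_serviceable(when: BlockNumber, arrival: BlockNumber) -> bool {
	when >= arrival.saturating_sub(100_000)
}

/// A core assignment is guaranteed to be acceptable at least when `begin` leaves 10 blocks
/// of lead time and the workload has no more than 100 items.
fn core_assignment_is_acceptable(begin: BlockNumber, arrival: BlockNumber, workload_len: usize) -> bool {
	begin >= arrival.saturating_add(10) && workload_len <= 100
}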
Standard Polkadot testing and security auditing applies.
RFC-1 proposes a means of determining allocation of Coretime using this interface.
As core functionality moves from the Relay Chain into system chains, so does the reliance on the liveness of these chains for the use of the network. It is not economically scalable, nor necessary from a game-theoretic perspective, to pay collators large rewards. This RFC proposes a mechanism -- part technical and part social -- for ensuring reliable collator sets that are resilient to attempts to stop any subsystem of the Polkadot protocol.
In order to guarantee access to Polkadot's system, the collators on its system chains must propose blocks (provide liveness) and allow all transactions to eventually be included. That is, some collators may censor transactions, but there must exist one collator in the set who will include a given transaction. The collator set should therefore be resilient to coordinated attempts to halt a single chain or to censor a particular class of transactions.
In the case that users do not trust this set, this RFC also proposes that each chain always have available collator positions that can be acquired by anyone by placing a bond.
reserve
hold
This protocol builds on the existing Collator Selection pallet and its notion of Invulnerables. Invulnerables are collators (identified by their AccountIds) who are always included in the collator set.
The primary drawback is a reliance on governance for continued treasury funding of infrastructure costs for Invulnerable collators.
The vast majority of cases can be covered by unit testing. Integration test should ensure that the Collator Selection UpdateOrigin, which has permission to modify the Invulnerables and desired number of Candidates, can handle updates over XCM from the system's governance location.
This proposal has very little impact on most users of Polkadot, and should improve the performance of system chains by reducing the number of missed blocks.
As chains have strict PoV size limits, care must be taken in the PoV impact of the session manager. Appropriate benchmarking and tests should ensure that conservative limits are placed on the number of Invulnerables and Candidates.
The primary group affected is Candidate collators, who, after implementation of this RFC, will need to compete in a bond-based election rather than a race to claim a Candidate spot.
This RFC is compatible with the existing implementation and can be handled via upgrades and migration.
There may exist in the future system chains for which this model of collator selection is not appropriate. These chains should be evaluated on a case-by-case basis.
The full nodes of the Polkadot peer-to-peer network maintain a distributed hash table (DHT), which is currently used for full nodes discovery and validators discovery purposes.
This RFC proposes to extend this DHT to be used to discover full nodes of the parachains of Polkadot.
The maintenance of bootnodes has long been an annoyance for everyone.
When a bootnode is newly-deployed or removed, every chain specification must be updated in order to take the update into account. This has led to various non-optimal solutions, such as pulling chain specifications from GitHub repositories. When it comes to RPC nodes, UX developers often have trouble finding up-to-date addresses of parachain RPC nodes. With the ongoing migration from RPC nodes to light clients, similar problems would happen with chain specifications as well.
Because the list of bootnodes in chain specifications is so annoying to modify, the consequence is that the number of bootnodes is rather low (typically between 2 and 15). In order to better resist downtimes and DoS attacks, a better solution would be to use every node of a certain chain as a potential bootnode, rather than special-casing some specific nodes.
While this RFC doesn't solve these problems for relay chains, it aims to solve them for parachains by storing the list of all the full nodes of a parachain on the relay chain DHT.
Assuming that this RFC is implemented, and that light clients are used, deploying a parachain wouldn't require more work than registering it onto the relay chain and starting the collators. There wouldn't be any need for special infrastructure nodes anymore.
This RFC has been opened on my own initiative because I think that this is a good technical solution to a usability problem that many people are encountering and that they don't realize can be solved.
The content of this RFC only applies for parachains and parachain nodes that are "Substrate-compatible". It is in no way mandatory for parachains to comply to this RFC.
Note that "Substrate-compatible" is very loosely defined as "implements the same mechanisms and networking protocols as Substrate". The author of this RFC believes that "Substrate-compatible" should be very precisely specified, but there is controversy on this topic.
While a lot of this RFC concerns the implementation of parachain nodes, it makes use of the resources of the Polkadot chain, and as such it is important to describe them in the Polkadot specification.
The maximum size of a response is set to an arbitrary 16kiB. The responding side should make sure to conform to this limit. Given that fork_id is typically very small and that the only variable-length field is addrs, this is easily achieved by limiting the number of addresses.
Implementers should be aware that addrs might be very large, and are encouraged to limit the number of addrs to an implementation-defined value.
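A minimal sketch of one way to stay within that limit, assuming the addresses are already encoded as byte strings; the names and the headroom value are illustrative:
const MAX_RESPONSE_SIZE: usize = 16 * 1024;
const FIXED_FIELDS_HEADROOM: usize = 512; // generous room for peer_id, fork_id and framing

/// Keep only as many addresses as fit within the response budget.
fn truncate_addrs(addrs: Vec<Vec<u8>>) -> Vec<Vec<u8>> {
	let mut used = 0usize;
	addrs
		.into_iter()
		.take_while(|addr| {
			used += addr.len();
			used <= MAX_RESPONSE_SIZE - FIXED_FIELDS_HEADROOM
		})
		.collect()
}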
The peer_id and addrs fields are in theory not strictly needed, as the PeerId and addresses could always be equal to the PeerId and addresses of the node being registered as the provider and serving the response. However, the Cumulus implementation currently uses two different networking stacks, one for the parachain and one for the relay chain, with two separate PeerIds and addresses, and as such the PeerId and addresses of the other networking stack must be indicated. Asking them to use only one networking stack wouldn't be feasible in a realistic time frame.
The values of the genesis_hash and fork_id fields cannot be verified by the requester and are expected to be unused at the moment. Instead, a client that desires connecting to a parachain is expected to obtain the genesis hash and fork ID of the parachain from the parachain chain specification. These fields are included in the networking protocol nonetheless in case an acceptable solution is found in the future, and in order to allow use cases such as discovering parachains in a not-strictly-trusted way.
Because not all nodes want to be used as bootnodes, implementers are encouraged to provide a way to disable this mechanism. However, it is very much encouraged to leave this mechanism on by default for all parachain nodes.
This mechanism doesn't add or remove any security by itself, as it relies on existing mechanisms. However, if the principle of chain specification bootnodes is entirely replaced with the mechanism described in this RFC (which is the objective), then it becomes important whether the mechanism in this RFC can be abused in order to make a parachain unreachable.
For this reason, an attacker can abuse this mechanism by randomly generating libp2p PeerIds until they find the 20 entries closest to the key representing the target parachain. They are then in control of the parachain bootnodes. Because the key changes periodically and isn't predictable, and assuming that the Polkadot DHT is sufficiently large, it is not realistic for an attack like this to be maintained in the long term.
Furthermore, parachain clients are expected to cache a list of known good nodes on their disk. If the mechanism described in this RFC went down, it would only prevent new nodes from accessing the parachain, while clients that have connected before would not be affected.
The DHT mechanism generally has a low overhead, especially given that publishing providers is done only every 24 hours.
Doing a Kademlia iterative query then sending a provider record shouldn't take more than around 50 kiB in total of bandwidth for the parachain bootnode.
Assuming 1000 parachain full nodes, the 20 Polkadot full nodes corresponding to a specific parachain will each receive a sudden spike of a few megabytes of networking traffic when the key rotates. Again, this is relatively negligible. If this becomes a problem, one can add a random delay before a parachain full node registers itself to be the provider of the key corresponding to BabeApi_next_epoch.
Maybe the biggest uncertainty is the traffic that the 20 Polkadot full nodes will receive from light clients that desire knowing the bootnodes of a parachain. Light clients are generally encouraged to cache the peers that they use between restarts, so they should only query these 20 Polkadot full nodes at their first initialization. If this ever becomes a problem, this value of 20 is an arbitrary constant that can be increased for more redundancy.
While it fundamentally doesn't change much about this RFC, using BabeApi_currentEpoch and BabeApi_nextEpoch might be inappropriate. I'm not familiar enough with good practices within the runtime to have an opinion here. Should it be an entirely new pallet?
It is possible that in the future a client could connect to a parachain without having to rely on a trusted parachain specification.
The Polkadot UC will generate revenue from the sale of available Coretime. The question then arises: how should we handle these revenues? Broadly, there are two reasonable paths – burning the revenue and thereby removing it from total issuance, or diverting it to the Treasury. This Request for Comment (RFC) presents arguments favoring burning as the preferred mechanism for handling revenues from Coretime sales.
How to handle the revenue accrued from Coretime sales is an important economic question that influences the value of DOT and should be properly discussed before deciding for either of the options. Now is the best time to start this discussion.
Polkadot DOT token holders.
This RFC discusses potential benefits of burning the revenue accrued from Coretime sales instead of diverting them to Treasury. Here are the following arguments for it.
It's in the interest of the Polkadot community to have a consistent and predictable Treasury income, because volatility in the inflow can be damaging, especially in situations when it is insufficient. As such, this RFC operates under the presumption of a steady and sustainable Treasury income flow, which is crucial for the Polkadot community's stability. The assurance of a predictable Treasury income, as outlined in a prior discussion here, or through other equally effective measures, serves as a baseline assumption for this argument.
Consequently, we need not concern ourselves with this particular issue here. This naturally begs the question - why should we introduce additional volatility to the Treasury by aligning it with the variable Coretime sales? It's worth noting that Coretime revenues often exhibit an inverse relationship with periods when Treasury spending should ideally be ramped up. During periods of low Coretime utilization (indicated by lower revenue), Treasury should spend more on projects and endeavours to increase the demand for Coretime. This pattern underscores that Coretime sales, by their very nature, are an inconsistent and unpredictable source of funding for the Treasury. Given the importance of maintaining a steady and predictable inflow, it's unnecessary to rely on another volatile mechanism. Some might argue that we could have both: a steady inflow (from inflation) and some added bonus from Coretime sales, but burning the revenue would offer further benefits as described below.
Since the introduction of the Collectives parachain, many groups have expressed interest in forming new -- or migrating existing groups into -- on-chain collectives. While adding a new collective is relatively simple from a technical standpoint, the Fellowship will need to merge new pallets into the Collectives parachain for each new collective. This RFC proposes a means for the network to ratify a new collective, thus instructing the Fellowship to instate it in the runtime.
Many groups have expressed interest in representing collectives on-chain. Some of these include:
The group that wishes to operate an on-chain collective should publish the following information:
Collective removal may also come with other governance calls, for example voiding any scheduled Treasury spends that would fund the given collective.
Passing a Root origin referendum is slow. However, given the network's investment (in terms of code maintenance and salaries) in a new collective, this is an appropriate step.
No impacts.
Generally all new collectives will be in the Collectives parachain. Thus, performance impacts should strictly be limited to this parachain and not affect others. As the majority of logic for collectives is generalized and reusable, we expect most collectives to be instances of similar subsets of modules. That is, new collectives should generally be compatible with UIs and other services that provide collective-related functionality, with little modifications to support new ones.
The launch of the Technical Fellowship, see the initial forum post.
Introduces breaking changes to the Core runtime API by letting Core::initialize_block return an enum. The version of Core is bumped from 4 to 5.
The main feature that motivates this RFC are Multi-Block-Migrations (MBM); these make it possible to split a migration over multiple blocks. Further it would be nice to not hinder the possibility of implementing a new hook poll, that runs at the beginning of the block when there are no MBMs and has access to AllPalletsWithSystem. This hook can then be used to replace the use of on_initialize and on_finalize for non-deadline critical logic. In a similar fashion, it should not hinder the future addition of a System::PostInherents callback that always runs after all inherents were applied.
This runtime API function is changed from returning () to ExtrinsicInclusionMode:
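Roughly, the enum carries two modes (a sketch; the Unresolved Questions below note that the naming was still being settled, with AllExtrinsics/OnlyInherents the variant names ultimately referenced), and the changed signature follows it:
pub enum ExtrinsicInclusionMode {
	/// All extrinsics are allowed in this block (the previous, implicit behaviour).
	AllExtrinsics,
	/// Only inherents are allowed, e.g. while a multi-block migration is ongoing.
	OnlyInherents,
}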
fn initialize_block(header: &<Block as BlockT>::Header) -> ExtrinsicInclusionMode
1. Multi-Block-Migrations: The runtime is put into lock-down mode for the duration of the migration process by returning OnlyInherents from initialize_block. This ensures that no user-provided transaction can interfere with the migration process. It is absolutely necessary to ensure this, otherwise a transaction could call into un-migrated storage and violate storage invariants.
2. poll is possible by using apply_extrinsic as entry point and is not hindered by this approach. It would not be possible to use a pallet inherent like System::last_inherent to achieve this, for two reasons: first, pallets do not have access to AllPalletsWithSystem, which is required to invoke the poll hook on all pallets; second, the runtime does not currently enforce an order of inherents.
3. System::PostInherents can be done in the same manner as poll.
Drawbacks
The previous drawback of cementing the order of inherents has been addressed and removed by redesigning the approach. No further drawbacks have been identified thus far.
Testing, Security, and Privacy
The new logic of initialize_block can be tested by checking that the block-builder will skip transactions when OnlyInherents is returned. Security: n/a. Privacy: n/a.
Performance, Ergonomics, and Compatibility
Performance
The performance overhead is minimal in the sense that no clutter was added after fulfilling the requirements. The only performance difference is that initialize_block also returns an enum that needs to be passed through the WASM boundary. This should be negligible.
Ergonomics
The new interface allows for more extensible runtime logic. In the future, this will be utilized for multi-block-migrations which should be a huge ergonomic advantage for parachain developers.
Compatibility
The advice here is OPTIONAL and outside of the RFC. To not degrade user experience, it is recommended to ensure that an updated node can still import historic blocks.
Prior Art and References
The RFC is currently being implemented in polkadot-sdk#1781 (formerly substrate#14275). Related issues and merge requests:
There is no module hook after inherents and before transactions
Unresolved Questions
Please suggest a better name for BlockExecutiveMode. We already tried: RuntimeExecutiveMode, ExtrinsicInclusionMode. The names of the modes Normal and Minimal were also called AllExtrinsics and OnlyInherents, so if you have naming preferences, please post them. => renamed to ExtrinsicInclusionMode
Is post_inherents more consistent instead of last_inherent? Then we should change it. => renamed to last_inherent
Future Directions and Related Material
The long-term future here is to move the block building logic into the runtime. Currently there is a tight dance between the block author and the runtime; the author has to call into different runtime functions in quick succession and in exact order. Any misstep causes the block to be invalid. This can be unified and simplified by moving both parts into the runtime.
Authors: Bryan Chen
Summary
This RFC proposes a set of changes to the parachain lock mechanism. The goal is to allow a parachain manager to self-service the parachain without a root track governance action. This is achieved by removing existing lock conditions and only locking a parachain when:
A parachain manager explicitly locks the parachain, OR
a parachain block is produced successfully.
Motivation
The manager of a parachain has permission to manage the parachain when the parachain is unlocked. Parachains are by default locked when onboarded to a slot. This requires that the parachain wasm/genesis be valid, otherwise a root track governance action on the relaychain is required to update the parachain.
The current reliance on root track governance actions for managing parachains can be time-consuming and burdensome. This RFC aims to address this technical difficulty by allowing parachain managers to take self-service actions, rather than relying on general public voting. The key scenarios this RFC seeks to improve are:
Perform lease renewal for an existing parachain. One way to perform lease renewal for a parachain is by doing a lease swap with another parachain with a longer lease. This requires that the other parachain be operational and able to perform an XCM Transact call into the relaychain to dispatch the swap call. Combined with the overhead of setting up a new parachain, this is a time-consuming and expensive process. Ideally, the parachain manager should be able to perform the lease swap call without having a running parachain2.
Requirements
A parachain manager SHOULD be able to rescue a parachain by updating the wasm/genesis without a root track governance action.
A parachain manager MUST NOT be able to update the wasm/genesis if the parachain is locked.
A parachain SHOULD be locked when it successfully produces its first block.
A parachain manager MUST be able to perform a lease swap without having a running parachain.
Stakeholders
Parachain teams
Parachain users
Explanation
Status quo
A parachain can either be locked or unlocked3. With the parachain locked, the parachain manager does not have any privileges. With the parachain unlocked, the parachain manager can perform the following actions with the paras_registrar pallet:
Parachain never produced a block, including from expired leases.
Parachain manager never explicitly locked the parachain.
Drawbacks
Parachain locks are designed in such a way as to ensure the decentralization of parachains. If parachains are not locked when they should be, it could introduce centralization risk for new parachains.
For example, one possible scenario is that a collective may decide to launch a parachain fully decentralized. However, if the parachain is unable to produce blocks, the parachain manager will be able to replace the wasm and genesis without the consent of the collective.
It is considered that this risk is tolerable, as it requires the wasm/genesis to be invalid in the first place. It is not yet practically possible to develop a parachain without any centralization risk currently.
Another case is that a parachain team may decide to use a crowdloan to help secure a slot lease. Previously, creating a crowdloan would lock a parachain.
This means crowdloan participants will know exactly the genesis of the parachain for the crowdloan they are participating in. However, this actually provides little assurance to crowdloan participants. For example, if the genesis block is determined before a crowdloan is started, it is not possible to have an on-chain mechanism to enforce reward distributions for crowdloan participants. They always have to rely on the parachain team to fulfill the promise after the parachain is live.
Existing operational parachains will not be impacted.
Testing, Security, and Privacy
The implementation of this RFC will be tested on testnets (Rococo and Westend) first.
An audit may be required to ensure the implementation does not introduce unwanted side effects.
There are no privacy-related concerns.
Performance
This RFC should not introduce any performance impact.
Ergonomics
This RFC should improve the developer experience for new and existing parachain teams.
Compatibility
This RFC is fully compatible with existing interfaces.
Prior Art and References
Parachain Slot Extension Story: https://github.com/paritytech/polkadot/issues/4758
Allow parachain to renew lease without actually run another parachain: https://github.com/paritytech/polkadot/issues/6685
Always treat parachain that never produced block for a significant amount of time as unlocked: https://github.com/paritytech/polkadot/issues/7539
Unresolved Questions
None at this stage.
Future Directions and Related Material
This RFC is only intended to be a short-term solution. Slots will be removed in the future and the lock mechanism is likely going to be replaced with a more generalized parachain management & recovery system. Therefore, long-term impacts of this RFC are not considered.
1 https://github.com/paritytech/cumulus/issues/377
Authors: @brenzi for Encointer Association, 8000 Zurich, Switzerland
Summary
Encointer has been a system chain on Kusama since January 2022 and has been developed and maintained by the Encointer Association. This RFC proposes to treat Encointer like any other system chain and include it in the fellowship repo with this PR.
Motivation
Encointer does not seek to be in control of its runtime repository. As a decentralized system, the fellowship has a more suitable structure to maintain a system chain runtime repo than the Encointer Association does. Also, Encointer aims to update its runtime in batches with other system chains in order to have consistency for interoperability across system chains.
Stakeholders
Fellowship: Will continue to take on the review and auditing work for the Encointer runtime, but the process is streamlined with other system chains and therefore less time-consuming compared to the separate repo and CI process we currently have.
Kusama Network: Tokenholders can easily see the changes of all system chains in one place.
Encointer Association: Further decentralization of Encointer Network necessities like devops.
Encointer devs: Being able to work directly in the Fellowship runtimes repo to streamline and synergize with other developers.
Explanation
Our PR has all details about our runtime and how we would move it into the fellowship repo.
Noteworthy: All Encointer-specific pallets will still be located in Encointer's repo for the time being: https://github.com/encointer/pallets
It will still be the duty of the Encointer team to keep its runtime up to date and provide adequate test fixtures. Frequent dependency bumps with Polkadot releases would be beneficial for interoperability and could be streamlined with other system chains, but that will not be a duty of the fellowship. Whenever possible, all system chains could be upgraded jointly (including Encointer) with a batch referendum.
Encointer will publish all its crates on crates.io.
Encointer does not carry out external auditing of its runtime nor pallets. It would be beneficial, but not a requirement from our side, if Encointer could join the auditing process of other system chains.
Drawbacks
Unlike all other system chains, development and maintenance of the Encointer Network is mainly financed by the KSM Treasury and possibly the DOT Treasury in the future. Encointer is dedicated to maintaining its network and runtime code for as long as possible, but there is a dependency on funding which is not in the hands of the fellowship. The only risk in the context of funding, however, is that the Encointer runtime will see less frequent updates if there's less funding.
Testing, Security, and Privacy
No changes to the existing system are proposed. Only changes to how maintenance is organized.
Performance, Ergonomics, and Compatibility
No changes
Prior Art and References
Existing Encointer runtime repo
Unresolved Questions
None identified
Future Directions and Related Material
More info on Encointer: encointer.org
Authors: Joe Petrowski, Gavin Wood
Summary
The Relay Chain contains most of the core logic for the Polkadot network. While this was necessary prior to the launch of parachains and development of XCM, most of this logic can exist in parachains. This is a proposal to migrate several subsystems into system parachains.
Motivation
Polkadot's scaling approach allows many distinct state machines (known generally as parachains) to operate with common guarantees about the validity and security of their state transitions. Polkadot provides these common guarantees by executing the state transitions on a strict subset (a backing
blockspace) to the network.
By minimising state transition logic on the Relay Chain by migrating it into "system chains" -- a set of parachains that, with the Relay Chain, make up the Polkadot protocol -- the Polkadot Ubiquitous Computer can maximise its primary offering: secure blockspace.
Stakeholders
Parachains that interact with affected logic on the Relay Chain;
Core protocol and XCM format developers;
Tooling, block explorer, and UI developers.
Explanation
The following pallets and subsystems are good candidates to migrate from the Relay Chain:
Identity
Staking is the subsystem most constrained by PoV limits. Ensuring that elections, payouts, session changes, offences/slashes, etc.
work in a parachain on Kusama -- with its larger validator set -- will give confidence to the chain's robustness on Polkadot. -Drawbacks +Drawbacks These subsystems will have reduced resources in cores than on the Relay Chain. Staking in particular may require some optimizations to deal with constraints. -Testing, Security, and Privacy +Testing, Security, and Privacy Standard audit/review requirements apply. More powerful multi-chain integration test tools would be useful in developement. -Performance, Ergonomics, and Compatibility +Performance, Ergonomics, and Compatibility Describe the impact of the proposal on the exposed functionality of Polkadot. -Performance +Performance This is an optimization. The removal of public/user transactions on the Relay Chain ensures that its primary resources are allocated to system performance. -Ergonomics +Ergonomics This proposal alters very little for coretime users (e.g. parachain developers). Application developers will need to interact with multiple chains, making ergonomic light client tools particularly important for application development. For existing parachains that interact with these subsystems, they will need to configure their runtimes to recognize the new locations in the network. -Compatibility +Compatibility Implementing this proposal will require some changes to pallet APIs and/or a pub-sub protocol. Application developers will need to interact with multiple chains in the network. -Prior Art and References +Prior Art and References Transactionless Relay-chain Moving Staking off the Relay Chain -Unresolved Questions +Unresolved Questions There remain some implementation questions, like how to use balances for both Staking and Governance. See, for example, Moving Staking off the Relay Chain. -Future Directions and Related Material +Future Directions and Related Material Ideally the Relay Chain becomes transactionless, such that not even balances are represented there. With Staking and Governance off the Relay Chain, this is not an unreasonable next step. With Identity on Polkadot, Kusama may opt to drop its People Chain. @@ -3263,13 +5977,13 @@ With Staking and Governance off the Relay Chain, this is not an unreasonable nex AuthorsVedhavyas Singareddi -Summary +Summary At the moment, we have system_version field on RuntimeVersion that derives which state version is used for the Storage. We have a use case where we want extrinsics root is derived using StateVersion::V1. Without defining a new field under RuntimeVersion, we would like to propose adding system_version that can be used to derive both storage and extrinsic state version. -Motivation +Motivation Since the extrinsic state version is always StateVersion::V0, deriving extrinsic root requires full extrinsic data. This would be problematic when we need to verify the extrinsics root if the extrinsic sizes are bigger. This problem is further explored in https://github.com/polkadot-fellows/RFCs/issues/19 @@ -3281,11 +5995,11 @@ One of the main challenge here is some extrinsics could be big enough that this included in the Consensus block due to Block's weight restriction. If the extrinsic root is derived using StateVersion::V1, then we do not need to pass the full extrinsic data but rather at maximum, 32 byte of extrinsic data. -Stakeholders +Stakeholders Technical Fellowship, in its role of maintaining system runtimes. 
-Explanation +Explanation In order to use project specific StateVersion for extrinsic roots, we proposed an implementation that introduced parameter to frame_system::Config but that unfortunately did not feel correct. @@ -3311,26 +6025,26 @@ pub const VERSION: RuntimeVersion = RuntimeVersion { system_version: 1, }; }
1. Multi-Block-Migrations: The runtime is being put into lock-down mode for the duration of the migration process by returning OnlyInherents from initialize_block. This ensures that no user provided transaction can interfere with the migration process. It is absolutely necessary to ensure this, otherwise a transaction could call into un-migrated storage and violate storage invariants.
2. poll remains possible by using apply_extrinsic as the entry point and is not hindered by this approach. It would not be possible to use a pallet inherent like System::last_inherent to achieve this, for two reasons: first, pallets do not have access to AllPalletsWithSystem, which is required to invoke the poll hook on all pallets; second, the runtime currently does not enforce an order of inherents.
3. System::PostInherents can be done in the same manner as poll.
The previous drawback of cementing the order of inherents has been addressed and removed by redesigning the approach. No further drawbacks have been identified thus far.
The new logic of initialize_block can be tested by checking that the block-builder will skip transactions when OnlyInherents is returned.
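To make this behaviour concrete, the following is a minimal, purely illustrative Rust sketch of a block builder reacting to the returned mode. The enum and function names follow the naming discussion in this RFC, but the signatures are simplified and are not the actual polkadot-sdk API.

```rust
// Illustrative sketch only: names follow the discussion above
// (ExtrinsicInclusionMode, initialize_block), but the signatures are
// simplified and do not mirror the actual polkadot-sdk interfaces.

/// Placeholder for a block header in this sketch.
pub struct Header;

/// What the block builder is allowed to include after initialization.
#[derive(Debug, PartialEq)]
pub enum ExtrinsicInclusionMode {
    /// Every extrinsic may be applied (the "Normal"/"AllExtrinsics" mode).
    AllExtrinsics,
    /// Only inherents may be applied, e.g. during a multi-block migration
    /// (the "Minimal"/"OnlyInherents" mode).
    OnlyInherents,
}

/// Simplified stand-in for the runtime's `initialize_block`: it now
/// returns the inclusion mode instead of `()`.
fn initialize_block(_header: &Header, migration_ongoing: bool) -> ExtrinsicInclusionMode {
    if migration_ongoing {
        ExtrinsicInclusionMode::OnlyInherents
    } else {
        ExtrinsicInclusionMode::AllExtrinsics
    }
}

fn main() {
    // A block builder would skip user transactions whenever the runtime
    // reports `OnlyInherents`.
    let mode = initialize_block(&Header, true);
    assert_eq!(mode, ExtrinsicInclusionMode::OnlyInherents);
}
```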
Security: n/a
Privacy: n/a
The performance overhead is minimal in the sense that no clutter was added after fulfilling the requirements. The only performance difference is that initialize_block also returns an enum that needs to be passed through the WASM boundary. This should be negligible.
The new interface allows for more extensible runtime logic. In the future, this will be utilized for multi-block-migrations which should be a huge ergonomic advantage for parachain developers.
The advice here is OPTIONAL and outside of the RFC. To not degrade user experience, it is recommended to ensure that an updated node can still import historic blocks.
The RFC is currently being implemented in polkadot-sdk#1781 (formerly substrate#14275). Related issues and merge requests:
Please suggest a better name for BlockExecutiveMode. We already tried: RuntimeExecutiveMode, ExtrinsicInclusionMode. The names of the modes Normal and Minimal were also called AllExtrinsics and OnlyInherents, so if you have naming preferences, please post them. => renamed to ExtrinsicInclusionMode
Is post_inherents more consistent than last_inherent? If so, we should change it. => renamed to last_inherent
The long-term future here is to move the block building logic into the runtime. Currently there is a tight dance between the block author and the runtime; the author has to call into different runtime functions in quick succession and exact order. Any misstep causes the block to be invalid. This can be unified and simplified by moving both parts into the runtime.
This RFC proposes a set of changes to the parachain lock mechanism. The goal is to allow a parachain manager to self-service the parachain without root track governance action.
This is achieved by removing the existing lock conditions and only locking a parachain when:
The manager of a parachain has permission to manage the parachain when the parachain is unlocked. Parachains are locked by default when onboarded to a slot. This requires the parachain wasm/genesis to be valid; otherwise, a root track governance action on the relay chain is required to update the parachain.
The current reliance on root track governance actions for managing parachains can be time-consuming and burdensome. This RFC aims to address this technical difficulty by allowing parachain managers to take self-service actions, rather than relying on general public voting.
The key scenarios this RFC seeks to improve are:
One way to perform lease renewal for a parachain is by doing a lease swap with another parachain that has a longer lease. This requires the other parachain to be operational and able to perform an XCM Transact call into the relay chain to dispatch the swap call. Combined with the overhead of setting up a new parachain, this is a time-consuming and expensive process. Ideally, the parachain manager should be able to perform the lease swap call without having a running parachain2.
A parachain can either be locked or unlocked3. With the parachain locked, the parachain manager does not have any privileges. With the parachain unlocked, the parachain manager can perform the following actions with the paras_registrar pallet:
Parachain locks are designed in such a way as to ensure the decentralization of parachains. If a parachain is not locked when it should be, it could introduce a centralization risk for new parachains.
For example, one possible scenario is that a collective may decide to launch a fully decentralized parachain. However, if the parachain is unable to produce blocks, the parachain manager will be able to replace the wasm and genesis without the consent of the collective.
This risk is considered tolerable, as it requires the wasm/genesis to be invalid in the first place. It is not yet practically possible to develop a parachain without any centralization risk.
Another case is that a parachain team may decide to use a crowdloan to help secure a slot lease. Previously, creating a crowdloan would lock a parachain. This means crowdloan participants would know exactly the genesis of the parachain for the crowdloan they are participating in. However, this actually provides little assurance to crowdloan participants. For example, even if the genesis block is determined before a crowdloan is started, it is not possible to have an onchain mechanism to enforce reward distributions for crowdloan participants. They always have to rely on the parachain team to fulfill the promise after the parachain is live.
Existing operational parachains will not be impacted.
An audit may be required to ensure the implementation does not introduce unwanted side effects.
There are no privacy-related concerns.
This RFC should improve the developer experience for new and existing parachain teams.
This RFC is fully compatible with existing interfaces.
None at this stage.
This RFC is only intended to be a short-term solution. Slots will be removed in the future, and the lock mechanism is likely going to be replaced with a more generalized parachain management and recovery system. Therefore, long-term impacts of this RFC are not considered.
https://github.com/paritytech/cumulus/issues/377
Encointer has been a system chain on Kusama since January 2022 and has been developed and maintained by the Encointer Association. This RFC proposes to treat Encointer like any other system chain and include it in the fellowship repo with this PR.
Encointer does not seek to be in control of its runtime repository. As a decentralized system, the fellowship has a more suitable structure to maintain a system chain runtime repo than the Encointer association does.
Also, Encointer aims to update its runtime in batches with other system chains in order to have consistency for interoperability across system chains.
Our PR has all details about our runtime and how we would move it into the fellowship repo.
Noteworthy: All Encointer-specific pallets will still be located in encointer's repo for the time being: https://github.com/encointer/pallets
It will still be the duty of the Encointer team to keep its runtime up to date and provide adequate test fixtures. Frequent dependency bumps with Polkadot releases would be beneficial for interoperability and could be streamlined with other system chains but that will not be a duty of fellowship. Whenever possible, all system chains could be upgraded jointly (including Encointer) with a batch referendum.
Unlike all other system chains, development and maintenance of the Encointer Network is mainly financed by the KSM Treasury and possibly the DOT Treasury in the future. Encointer is dedicated to maintaining its network and runtime code for as long as possible, but there is a dependency on funding which is not in the hands of the fellowship. The only risk in the context of funding, however, is that the Encointer runtime will see less frequent updates if there's less funding.
No changes to the existing system are proposed. Only changes to how maintenance is organized.
No changes
Existing Encointer runtime repo
None identified
More info on Encointer: encointer.org
The Relay Chain contains most of the core logic for the Polkadot network. While this was necessary prior to the launch of parachains and development of XCM, most of this logic can exist in parachains. This is a proposal to migrate several subsystems into system parachains.
Polkadot's scaling approach allows many distinct state machines (known generally as parachains) to operate with common guarantees about the validity and security of their state transitions. Polkadot provides these common guarantees by executing the state transitions on a strict subset (a backing [...] blockspace) to the network.
By minimising state transition logic on the Relay Chain -- migrating it into "system chains", a set of parachains that, with the Relay Chain, make up the Polkadot protocol -- the Polkadot Ubiquitous Computer can maximise its primary offering: secure blockspace.
The following pallets and subsystems are good candidates to migrate from the Relay Chain:
These subsystems will have fewer resources available on cores than they do on the Relay Chain. Staking in particular may require some optimizations to deal with these constraints.
Standard audit/review requirements apply. More powerful multi-chain integration test tools would be useful in development.
Describe the impact of the proposal on the exposed functionality of Polkadot.
This is an optimization. The removal of public/user transactions on the Relay Chain ensures that its primary resources are allocated to system performance.
This proposal alters very little for coretime users (e.g. parachain developers). Application developers will need to interact with multiple chains, making ergonomic light client tools particularly important for application development.
Existing parachains that interact with these subsystems will need to configure their runtimes to recognize the new locations in the network.
Implementing this proposal will require some changes to pallet APIs and/or a pub-sub protocol. Application developers will need to interact with multiple chains in the network.
There remain some implementation questions, like how to use balances for both Staking and Governance. See, for example, Moving Staking off the Relay Chain.
Ideally the Relay Chain becomes transactionless, such that not even balances are represented there. With Staking and Governance off the Relay Chain, this is not an unreasonable next step.
With Identity on Polkadot, Kusama may opt to drop its People Chain.
At the moment, we have the state_version field on RuntimeVersion, which determines which state version is used for the storage. We have a use case where we want the extrinsics root to be derived using StateVersion::V1. Without defining a new field under RuntimeVersion, we would like to propose adding system_version, which can be used to derive both the storage and extrinsic state version.
Since the extrinsic state version is always StateVersion::V0, deriving the extrinsic root requires the full extrinsic data. This would be problematic when we need to verify the extrinsics root and the extrinsic sizes are large. This problem is further explored in https://github.com/polkadot-fellows/RFCs/issues/19
In order to use a project-specific StateVersion for extrinsic roots, we proposed an implementation that introduced a parameter to frame_system::Config, but that unfortunately did not feel correct. [...] pub const VERSION: RuntimeVersion = RuntimeVersion { ..., system_version: 1, };
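Expanding the fragment above, a hedged sketch of how a runtime could declare this is shown below. The struct here is a trimmed-down stand-in for the real RuntimeVersion type, and every field value is a placeholder; a real runtime would use the actual type from sp-version with all of its fields.

```rust
// Hedged sketch: a trimmed-down stand-in for sp_version::RuntimeVersion,
// showing only the field this RFC is about. Real runtimes use the actual
// type from sp-version and fill in all of its fields.
pub struct RuntimeVersion {
    pub spec_name: &'static str,
    pub spec_version: u32,
    /// Replaces the old `state_version`; drives both the storage state
    /// version and the extrinsic-root state version.
    pub system_version: u8,
}

pub const VERSION: RuntimeVersion = RuntimeVersion {
    spec_name: "example-runtime", // placeholder value
    spec_version: 1,              // placeholder value
    // `1` selects StateVersion::V1 for storage and extrinsic roots.
    system_version: 1,
};

fn main() {
    println!("system_version = {}", VERSION.system_version);
}
```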
There should be no drawbacks, as it would replace state_version with the same behavior, but documentation should be updated so that chains know which system_version to use.
AFAIK, this should not have any impact on security or privacy.
These changes should be compatible for existing chains if they use their state_version value for system_version.
I do not believe there is any performance hit with this change.
This does not break any exposed APIs.
This change should not break any compatibility.
We proposed a similar change by introducing a parameter to frame_system::Config, but did not feel that was the correct way of introducing this change.
I do not have any specific questions about this change at the moment.
IMO, this change is pretty self-contained and there won't be any future work necessary.
This RFC proposes a new host function for parachains, storage_proof_size. It shall provide the size of the currently recorded storage proof to the runtime. Runtime authors can use the proof size to improve block utilization by retroactively reclaiming unused storage weight.
The number of extrinsics that are included in a parachain block is limited by two constraints: execution time and proof size. FRAME weights cover both concepts, and block-builders use them to decide how many extrinsics to include in a block. However, these weights are calculated ahead of time by benchmarking on a machine with reference hardware. The execution-time properties of the state-trie and its storage items are unknown at benchmarking time. Therefore, we make some assumptions about the state-trie:
These pessimistic assumptions lead to an overestimation of storage weight, negatively impacting block utilization on parachains.
In addition, the current model does not account for multiple accesses to the same storage items. While these repetitive accesses will not increase storage-proof size, the runtime-side weight monitoring will account for them multiple times. Since the proof size is completely opaque to the runtime, we can not implement retroactive storage weight correction.
A solution must provide a way for the runtime to track the exact storage-proof size consumed on a per-extrinsic basis.
This RFC proposes a new host function that exposes the storage-proof size to the runtime. As a result, runtimes can implement storage weight reclaiming mechanisms that improve block utilization.
This RFC proposes the following host function signature:
fn ext_storage_proof_size_version_1() -> u64;
The host function MUST return an unsigned 64-bit integer value representing the current proof size. In block-execution and block-import contexts, this function MUST return the current size of the proof. To achieve this, parachain node implementors need to enable proof recording for block imports. In other contexts, this function MUST return 18446744073709551615 (u64::MAX), which represents disabled proof recording.
Parachain nodes need to enable proof recording during block import to correctly implement the proposed host function. Benchmarking conducted with balance transfers has shown a performance reduction of around 0.6% when proof recording is enabled.
The host function proposed in this RFC allows parachain runtime developers to keep track of the proof size. Typical usage patterns would be to keep track of the overall proof size or the difference between subsequent calls to the host function.
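As an illustration of the "difference between subsequent calls" pattern described above, here is a hedged Rust sketch. The storage_proof_size function below is a mock stand-in for the proposed host function, and the reclaim logic is deliberately simplified; it is not the actual runtime implementation.

```rust
// Hedged sketch of the "measure before/after" usage pattern. The host
// function is mocked here; in a runtime it would be the proposed
// storage_proof_size host function (u64::MAX meaning recording disabled).

fn storage_proof_size() -> u64 {
    // Stand-in for the host call.
    1024
}

/// Returns how much proof size an operation actually consumed, or `None`
/// if proof recording is disabled (host function returns u64::MAX).
fn measure_proof_size<R>(op: impl FnOnce() -> R) -> (R, Option<u64>) {
    let before = storage_proof_size();
    let result = op();
    let after = storage_proof_size();
    let consumed = (before != u64::MAX).then(|| after.saturating_sub(before));
    (result, consumed)
}

fn main() {
    let (_out, consumed) = measure_proof_size(|| {
        // ... apply an extrinsic / touch storage ...
    });
    // A runtime could compare `consumed` with the benchmarked proof-size
    // weight and reclaim the difference.
    println!("actually consumed: {:?} bytes of proof", consumed);
}
```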
This RFC proposes changing the current deposit requirements on the Polkadot and Kusama Asset Hub for creating an NFT collection, minting an individual NFT, and lowering its corresponding metadata and attribute deposits. The objective is to lower the barrier to entry for NFT creators, fostering a more inclusive and vibrant ecosystem while maintaining network integrity and preventing spam.
The current deposit of 10 DOT for collection creation (along with 0.01 DOT for item deposit and 0.2 DOT for metadata and attribute deposits) on the Polkadot Asset Hub and 0.1 KSM on Kusama Asset Hub presents a significant financial barrier for many NFT creators. By lowering the deposit [...] introduced to storage and the size of corresponding values stored.
Further, it suggests a direction for a future of calculating deposits variably based on adoption and/or market conditions. There is a discussion on tradeoffs of setting deposits too high or too low.
Previous discussions have been held within the Polkadot Forum, with artists expressing their concerns about the deposit amounts.
This RFC proposes a revision of the deposit constants in the configuration of the NFTs pallet on the Polkadot Asset Hub. The new deposit amounts would be determined by a standard deposit formula.
As of v1.1.1, the Collection Deposit is 10 DOT and the Item Deposit is 0.01 DOT (see [...]). [...] application to avoid sudden rate changes, as in: [...]
where the constant a moves the inflection to lower or higher x values, the constant b adjusts the rate of the deposit increase, and the independent variable x is the number of collections or items, depending on application.
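Purely as an illustration of the shape implied by this description (the RFC's actual formula is not reproduced here and may differ), a logistic-style curve such as deposit(x) = D_max / (1 + e^(-b * (x - a))) behaves this way: the hypothetical constant D_max caps the deposit, a shifts the inflection point along x (the number of collections or items), and b controls how steeply the deposit rises around that point.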
Modifying deposit requirements necessitates a balanced assessment of the potential drawbacks. Highlighted below are cogent points extracted from the discourse in the Polkadot Forum conversation. [...] stakeholders wouldn't be much affected. As of 9th January 2024, there are 42 on the Polkadot Asset Hub and 191 on the Kusama Asset Hub, with relatively low volume.
As noted above, state bloat is a security concern. In the case of abuse, governance could adapt by increasing deposit rates and/or using forceDestroy on collections agreed to be spam.
The primary performance consideration stems from the potential for state bloat due to increased activity from lower deposit requirements. It's vital to monitor and manage this to avoid any negative impact on the chain's performance. Strategies for mitigating state bloat, including efficient data management and periodic reviews of storage requirements, will be essential.
The proposed change aims to enhance the user experience for artists, traders, and utilizers of Kusama and Polkadot Asset Hubs, making Polkadot and Kusama more accessible and user-friendly.
The change does not impact compatibility as a redeposit function is already implemented.
If this RFC is accepted, there should not be any unresolved questions regarding how to adapt the implementation of deposits for NFT collections.
Propose a way of permuting the availability chunk indices assigned to validators, in the context of recovering available data from systematic chunks, with the purpose of fairly distributing network bandwidth usage.
Currently, the ValidatorIndex is always identical to the ChunkIndex. Since the validator array is only shuffled once per session, naively using the ValidatorIndex as the ChunkIndex would pose an unreasonable stress on the first N/3 validators during an entire session, when favouring availability recovery from systematic chunks.
Relay chain node core developers.
An erasure coding algorithm is considered systematic if it preserves the original unencoded data as part of the resulting code. [...] struct (added in https://github.com/paritytech/polkadot-sdk/pull/2177) [...] Configuration::set_node_feature extrinsic. Once the feature is enabled and the new configuration is live, the validator->chunk mapping ceases to be a 1:1 mapping and systematic recovery may begin.
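To illustrate the kind of mapping this enables, the sketch below rotates chunk indices by the core index so that the systematic chunks no longer always land on the same validators. The exact permutation is defined in the RFC text; the shift-by-core-index rule here is only an assumed example.

```rust
// Hedged sketch: one simple way a validator->chunk permutation could be
// derived per core, so that systematic chunks rotate across the validator
// set instead of always landing on the first validators. The exact
// mapping is specified by the RFC, not by this example.

fn chunk_index(n_validators: u32, validator_index: u32, core_index: u32) -> u32 {
    // Rotate the assignment by the core index.
    (validator_index + core_index) % n_validators
}

fn main() {
    let n: u32 = 10;
    // For core 3, validator 0 now holds chunk 3 instead of chunk 0.
    assert_eq!(chunk_index(n, 0, 3), 3);
    // The mapping stays a bijection over validators for a fixed core.
    let mut seen: Vec<u32> = (0..n).map(|v| chunk_index(n, v, 3)).collect();
    seen.sort();
    assert_eq!(seen, (0..n).collect::<Vec<_>>());
}
```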
Extensive testing will be conducted - both automated and manual. This proposal doesn't affect security or privacy.
This is a necessary data availability optimisation, as reed-solomon erasure coding has proven to be a top consumer of CPU time in Polkadot as we scale up the parachain block size and number of availability cores.
With this optimisation, preliminary performance results show that CPU time used for reed-solomon coding/decoding can be halved and total PoV recovery time decreases by 80% for large PoVs. See more here.
Not applicable.
This is a breaking change. See upgrade path section above. All validators and collators need to have upgraded their node versions before the feature will be enabled via a governance call.
See comments on the tracking issue and the in-progress PR
This enables future optimisations for the performance of availability recovery, such as retrieving batched systematic chunks from backers/approval-checkers.
This RFC proposes to change the SessionKeys::generate_session_keys runtime api interface. This runtime api is used by validator operators to generate new session keys on a node. The public session keys are then registered manually on chain by the validator operator. Before this RFC, it was not possible for the on chain logic to ensure that the account setting the public session keys is also in possession of the private session keys. To solve this, the RFC proposes to pass the account id of the account doing the registration on chain to generate_session_keys. Further, this RFC proposes to change the return value of the generate_session_keys function to not only return the public session keys, but also the proof of ownership for the private session keys. The validator operator will then need to send the public session keys and the proof together when registering new session keys on chain.
When submitting the new public session keys to the on chain logic there doesn't exist any verification of possession of the private session keys. This means that users can basically register any kind of public session keys on chain. While the on chain logic ensures that there are no duplicate keys, someone could try to prevent others from registering new session keys by setting them first. While this wouldn't bring the "attacker" any kind of advantage, more like disadvantages (potenti[...]), e.g. changing its session key in the event of a private session key leak.
After this RFC this kind of attack would not be possible anymore, because the on chain logic can verify that the sending account is in ownership of the private session keys.
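Conceptually, the on-chain check boils down to verifying one signature per public session key over the submitting account, as discussed in the performance notes further below. The sketch that follows is hypothetical: its types and the verify helper are placeholders, not the RFC's actual definitions.

```rust
// Hypothetical sketch of the on-chain ownership check: one signature per
// public session key, each over the SCALE-encoded account id of the
// sender. Types and `verify` are placeholders, not the RFC's real types.

struct PublicKey([u8; 32]);
struct Signature([u8; 64]);

/// Placeholder for a real signature verification (e.g. sr25519/ed25519).
fn verify(_key: &PublicKey, _msg: &[u8], _sig: &Signature) -> bool {
    true
}

/// Accept the new session keys only if every key proves possession of its
/// private half by signing the caller's encoded account id.
fn check_ownership_proof(
    encoded_account_id: &[u8],
    keys_and_proofs: &[(PublicKey, Signature)],
) -> bool {
    keys_and_proofs
        .iter()
        .all(|(key, sig)| verify(key, encoded_account_id, sig))
}

fn main() {
    let ok = check_ownership_proof(b"<scale-encoded account id>", &[]);
    assert!(ok); // trivially true for an empty key set in this sketch
}
```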
We are first going to explain the proof format being used:
[...] The actual exported function signature looks like: [...] already gets the proof passed as Vec<u8>. This proof needs to be decoded to the actual Proof type as explained above. The proof and the SCALE-encoded account_id of the sender are used to verify the ownership of the SessionKeys.

Drawbacks
Validator operators need to pass their account id when rotating their session keys in a node. This will require updating some high-level docs and making users familiar with the slightly changed ergonomics.

Testing, Security, and Privacy
Testing of the new changes only requires passing an appropriate owner for the current testing context. The changes to the proof generation and verification got audited to ensure they are correct.

Performance, Ergonomics, and Compatibility

Performance
The session key generation is an offchain process and thus doesn't influence the performance of the chain. Verifying the proof is done on chain as part of the transaction logic for setting the session keys. Verification of the proof amounts to one signature verification per individual session key. As setting the session keys happens quite rarely, it should not influence the overall system performance.

Ergonomics
The interfaces have been optimized to make it as easy as possible to generate the ownership proof.

Compatibility
Introduces a new version of the SessionKeys runtime api. Thus, nodes should be updated before a runtime is enacted that contains these changes, otherwise they will fail to generate session keys. The RPC that exists around this runtime api needs to be updated to support passing the account id and returning the ownership proof alongside the public session keys. UIs would need to be updated to support the new RPC and the changed on chain logic.

Prior Art and References
None.

Unresolved Questions
None.

Future Directions and Related Material
Substrate implementation of the RFC.

Authors: Joe Petrowski, Gavin Wood

Summary
The Fellowship Manifesto states that members should receive a monthly allowance on par with gross income in OECD countries. This RFC proposes concrete amounts.

Motivation
One motivation for the Technical Fellowship is to provide an incentive mechanism that can induct and retain technical talent for the continued progress of the network. In order for members to uphold their commitment to the network, they should receive support to [...] on par with a full-time job. Providing a livable wage to those making such contributions [...] pragmatic to work full-time on Polkadot.
Note: Goals of the Fellowship, expectations for each Dan, and conditions for promotion and demotion are all explained in the Manifesto. This RFC is only to propose concrete values for allowances.

Stakeholders
Fellowship members
Polkadot Treasury

Explanation
This RFC proposes agreeing on salaries relative to a single level, the III Dan. As such, changes to the amount or asset used would only be on a single value, and all others would adjust relatively. A III Dan is someone whose contributions match the expectations of a full-time individual contributor.
[...] On the other hand, more people will likely join the Fellowship in the coming years.

Updates
Updates to these levels, whether relative ratios, the asset used, or the amount, shall be done via RFC.

Drawbacks
By not using DOT for payment, the protocol relies on the stability of other assets and the ability to acquire them. However, the asset of choice can be changed in the future.

Testing, Security, and Privacy
N/A.

Performance, Ergonomics, and Compatibility

Performance
N/A

Ergonomics
N/A

Compatibility
N/A

Prior Art and References
The Polkadot Fellowship Manifesto
Indeed: Average Salary for Engineers, United States

Unresolved Questions
None at present.

Authors: Pierre Krieger

Summary
When two peers connect to each other, they open (amongst other things) a so-called "notifications protocol" substream dedicated to gossiping transactions to each other. Each notification on this substream currently consists in a SCALE-encoded Vec<Transaction>, where Transaction is defined in the runtime. This RFC proposes to modify the format of the notification to become (Compact(1), Transaction). This maintains backwards compatibility, as this new format decodes as a Vec of length equal to 1.

Motivation
There exist three motivations behind this change: [...] It makes the implementation much more straightforward by not having to repeat code related to back-pressure. See explanations below.

Stakeholders
Low-level developers.

Explanation
To give an example, if you send one notification with three transactions, the bytes that are sent on the wire are: concat( leb128(total-size-in-bytes-of-the-rest), [...] A SCALE-compact encoded 1 is one byte of value 4. In o[...] This is equivalent to forcing the Vec<Transaction> to always have a length of 1, and I expect the Substrate implementation to simply modify the sending side to add a for loop that sends one notification per item in the Vec. As explained in the motivation section, this allows extracting scale(transaction) items without having to know how to decode them. By "flattening" the two-step hierarchy, an implementation only needs to back-pressure individual notifications rather than back-pressure notifications and transactions within notifications.

Drawbacks
This RFC chooses to maintain backwards compatibility at the cost of introducing a very small wart (the Compact(1)). An alternative could be to introduce a new version of the transactions notifications protocol that sends one Transaction per notification, but this is significantly more complicated to implement and can always be done later in case the Compact(1) is bothersome.

Testing, Security, and Privacy
Irrelevant.

Performance, Ergonomics, and Compatibility

Performance
Irrelevant.

Ergonomics
Irrelevant.

Compatibility
The change is backwards compatible if done in two steps: modify the sender to always send one transaction per notification, then, after a while, modify the receiver to enforce the new format.

Prior Art and References
Irrelevant.
Unresolved Questions
None.

Future Directions and Related Material
None. This is a simple isolated change.

Authors: Pierre Krieger

Summary
This RFC proposes to make the mechanism of RFC #8 more generic by introducing the concept of "capabilities". Implementations can implement certain "capabilities", such as serving old block headers or being a parachain bootnode. The discovery mechanism of RFC #8 is extended to be able to discover nodes of specific capabilities.

Motivation
The Polkadot peer-to-peer network is made of nodes. Not all these nodes are equal. Some nodes store only the headers of recent blocks, some nodes store all the block headers and bodies since the genesis, some nodes store the storage of all blocks since the genesis, and so on. It is currently not possible to know ahead of time (without connecting to it and asking) which nodes have which data available, and it is not easily possible to build a list of nodes that have a specific piece of data available. If you want to download, for example, the header of block 500, you have to connect to a randomly-chosen node, ask it for block 500, and if it says that it doesn't have the block, disconnect and try another randomly-chosen node. In certain situations, such as downloading the storage of old blocks, nodes that have the information are relatively rare, and finding through trial and error a node that has the data can take a long time. This RFC attempts to solve this problem by giving the possibility to build a list of nodes that are capable of serving specific data.

Stakeholders
Low-level client developers. People interested in accessing the archive of the chain.

Explanation
Reading RFC #8 first might help with comprehension, as this RFC is very similar. Please keep in mind while reading that everything below applies for both relay chains and parachains, except where mentioned otherwise.

Capabilities
[...] If blocks pruning is enabled and the chain is a relay chain, then Substrate unfo[...] Implementations that have the head of the chain provider capability do not register themselves as providers, but instead are the nodes that participate in the main DHT. In other words, they are the nodes that serve requests of the /<genesis_hash>/kad protocol. Any implementation that isn't a head of the chain provider (read: light clients) must not participate in the main DHT. This is already presently the case. Implementations must not participate in the main DHT if they don't fulfill the capability yet. For example, a node that is still in the process of warp syncing must not participate in the main DHT. However, assuming that warp syncing doesn't last more than a few seconds, it is acceptable to ignore this requirement in order to avoid complicating implementations too much.

Drawbacks
None that I can see.

Testing, Security, and Privacy
The content of this section is basically the same as the one in RFC 8. This mechanism doesn't add or remove any security by itself, as it relies on existing mechanisms. Due to the way Kademlia works, it would become the responsibility of the 20 Polkadot nodes whose sha256(peer_id) is closest to the key (described in the explanations section) to store the list of nodes that have specific capabilities.
Furthermore, when a large number of providers are registered, only the providers closest to the key are kept, up to a certain implementation-defined limit. For this reason, an attacker can abuse this mechanism by randomly generating libp2p PeerIds until they find the 20 entries closest to the key representing the target capability. They are then in control of the list of nodes with that capability. While doing this can in no way be actually harmful, it could lead to eclipse attacks. Because the key changes periodically and isn't predictable, and assuming that the Polkadot DHT is sufficiently large, it is not realistic for an attack like this to be maintained in the long term.

Performance, Ergonomics, and Compatibility

Performance
The DHT mechanism generally has a low overhead, especially given that publishing providers is done only every 24 hours. Doing a Kademlia iterative query then sending a provider record shouldn't take more than around 50 kiB in total of bandwidth for the parachain bootnode. Assuming 1000 nodes with a specific capability, the 20 Polkadot full nodes corresponding to that capability will each receive a sudden spike of a few megabytes of networking traffic when the key rotates. Again, this is relatively negligible. If this becomes a problem, one can add a random delay before a node registers itself to be the provider of the key corresponding to BabeApi_next_epoch. Maybe the biggest uncertainty is the traffic that the 20 Polkadot full nodes will receive from light clients that desire knowing the nodes with a capability. If this ever becomes a problem, this value of 20 is an arbitrary constant that can be increased for more redundancy.

Ergonomics
Irrelevant.

Compatibility
Irrelevant.

Prior Art and References
Unknown.

Unresolved Questions
While it fundamentally doesn't change much to this RFC, using BabeApi_currentEpoch and BabeApi_nextEpoch might be inappropriate. I'm not familiar enough with good practices within the runtime to have an opinion here. Should it be an entirely new pallet?

Future Directions and Related Material
This RFC would make it possible to reliably discover archive nodes, which would make it possible to reliably send archive node requests, something that isn't currently possible. This could solve the problem of finding archive RPC node providers by migrating archive-related requests to using the native peer-to-peer protocol rather than JSON-RPC. If we ever decide to break backwards compatibility, we could divide the "history" and "archive" capabilities in two, between nodes capable of serving older blocks and nodes capable of serving newer blocks. We could even add to the peer-to-peer network nodes that are only capable of serving older blocks (by reading from a database) but do not participate in the head of the chain, and that just exist for historical purposes.

Authors: Zondax AG, Parity Technologies

Summary
To interact with chains in the Polkadot ecosystem it is required to know how transactions are encoded and how to read state. For doing this, Polkadot-SDK, the framework used by most of the chains in the Polkadot ecosystem, exposes metadata about the runtime to the outside. UIs, wallets, and others can use this metadata to interact with these chains.
This makes the metadata a crucial piece of the transaction encoding, as users are relying on the interacting software to encode the transactions in the correct format. It gets even more important when the user signs the transaction in an offline wallet, as the device by its nature cannot get access to the metadata without relying on the online wallet to provide it. This means that the offline wallet needs to trust an online party, rendering the security assumptions of the offline device moot. This RFC proposes a way for offline wallets to leverage metadata within the constraints of these devices. The design idea is that the metadata is chunked and these chunks are put into a merkle tree. The root hash of this merkle tree represents the metadata. The offline wallets can use the root hash to decode transactions by getting proofs for the individual chunks of the metadata. This root hash is also included in the signed data of the transaction (but not sent as part of the transaction). The runtime then includes its known metadata root hash when verifying the transaction. If the metadata root hash known by the runtime differs from the one that the offline wallet used, it very likely means that the online wallet provided some fake data and the verification of the transaction fails. Users depend on offline wallets to correctly display decoded transactions before signing. With merkleized metadata, they can be assured of the transaction's legitimacy, as incorrect transactions will be rejected by the runtime.

Motivation
Polkadot's innovative design (both relay chain and parachains) presents developers with the ability to upgrade their network as frequently as they need. These systems manage to have integrations working after the upgrades with the help of FRAME Metadata. This Metadata, which is in the order of half a MiB for most Polkadot-SDK chains, completely describes chain interfaces and properties. Securing this metadata is key for users to be able to interact with the Polkadot-SDK chain in the expected way. On the other hand, offline wallets provide a secure way for blockchain users to hold their own keys (some do a better job than others). These devices seldomly get upgraded, usually account for one particular network and hold very small internal memories. Currently in the Polkadot ecosystem there is no secure way of having these offline devices know the latest Metadata of the Polkadot-SDK chain they are interacting with. This results in a plethora of similar yet slightly different offline wallets for all different Polkadot-SDK chains, as well as the impediment of keeping these regularly updated, thus not fully leveraging Polkadot-SDK's unique forkless upgrade feature. The two main reasons why this is not possible today are:
[...]
Metadata is not authenticated. Even if there was enough space on offline devices to hold the metadata, the user would be trusting the entity providing this metadata to the hardware wallet. In the Polkadot ecosystem, this is how Polkadot Vault currently works.
This RFC proposes a solution to make FRAME Metadata compatible with offline signers in a secure way. As it leverages FRAME Metadata, it not only ensures that offline devices can always keep up to date with every FRAME-based chain, but also that every offline wallet will be compatible with all FRAME-based chains, avoiding the need for per-chain implementations.
Requirements
Metadata's integrity MUST be preserved. If any compromise were to happen, extrinsics sent with compromised metadata SHOULD fail.
Metadata information that could be used in signable extrinsic decoding MAY be included in digest, yet its inclusion MUST be indicated in signed extensions.
[...]
Chunks handling mechanism SHOULD support chunks being sent in any order without memory utilization overhead;
Unused enum variants MUST be stripped (this has great impact on transmitted metadata size; examples: era enum, enum with all calls for call batching).

Stakeholders
Runtime implementors
UI/wallet implementors
Offline wallet implementors
The idea for this RFC was brought up by runtime implementors and was extensively discussed with offline wallet implementors. It was designed in such a way that it can work easily with the existing offline wallet solutions in the Polkadot ecosystem.

Explanation
The FRAME metadata provides a wide range of information about a FRAME-based runtime. It contains information about the pallets, the calls per pallet, the storage entries per pallet, runtime APIs, and type information about most of the types that are used in the runtime. For decoding extrinsics on an offline wallet, what is mainly required is type information. Most of the other information in the FRAME metadata is actually not required for decoding extrinsics and thus it can be removed. Therefore, the following is a proposal on a custom representation of the metadata and how this custom metadata is chunked, ensuring that only the chunks required for decoding a particular extrinsic are sent to the offline wallet. The necessary information to transform the FRAME metadata type information into the type information presented in this RFC will be provided. However, not every single detail on how to convert from FRAME metadata into the RFC type information is described. First, the MetadataDigest is introduced. After that, ExtrinsicMetadata is covered and finally the actual format of the type information. Then pruning of unrelated type information is covered and how to generate the TypeRefs. In the last step, merkle tree calculation is explained.

Metadata digest
[...] nodes: [[[2, 3], [4, 5]], [0, 1]] [...] Included in the extrinsic is a u8, the "mode". The mode is either 0, which means to not include the metadata hash in the signed data, or 1, to include the metadata hash in V1. Included in the signed data is an Option<[u8; 32]>. Depending on the mode, the value is either None or Some(metadata_hash).

Drawbacks
The chunking may not be the optimal case for every kind of offline wallet.

Testing, Security, and Privacy
All implementations are required to strictly follow the RFC to generate the metadata hash. This includes which hash function to use and how to construct the metadata types tree. So, all implementations are following the same security criteria. As the chains will calculate the metadata hash at compile time, the build process needs to be trusted. However, this is already a solved problem in the Polkadot ecosystem by using reproducible builds. So, anyone can rebuild a chain runtime to ensure that a proposal actually contains the changes as advertised.
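As a rough picture of the merkleization step described above, the sketch below hashes metadata "chunks" and combines them pairwise up to a single root. The hash function here is a toy stand-in and the leaf/node encoding is simplified; the RFC defines the exact hasher and tree construction, so only the overall shape is meant to match.

```rust
// Hedged sketch of building a root hash over metadata "chunks". The hash
// below is a toy stand-in (NOT the one the RFC specifies); only the shape
// -- leaves hashed, then pairwise-combined up to a root -- is illustrated.

type Hash = u64;

fn hash_bytes(data: &[u8]) -> Hash {
    // Toy FNV-style hash for illustration only.
    data.iter().fold(1469598103934665603u64, |acc, b| {
        (acc ^ *b as u64).wrapping_mul(1099511628211)
    })
}

fn hash_pair(left: Hash, right: Hash) -> Hash {
    let mut buf = Vec::with_capacity(16);
    buf.extend_from_slice(&left.to_le_bytes());
    buf.extend_from_slice(&right.to_le_bytes());
    hash_bytes(&buf)
}

/// Combine chunk hashes pairwise until a single root remains.
fn merkle_root(chunks: &[&[u8]]) -> Hash {
    let mut layer: Vec<Hash> = chunks.iter().map(|c| hash_bytes(c)).collect();
    while layer.len() > 1 {
        layer = layer
            .chunks(2)
            .map(|pair| if pair.len() == 2 { hash_pair(pair[0], pair[1]) } else { pair[0] })
            .collect();
    }
    layer.first().copied().unwrap_or(0)
}

fn main() {
    let chunks: [&[u8]; 3] = [b"type info chunk A", b"type info chunk B", b"chunk C"];
    let root = merkle_root(&chunks);
    println!("illustrative metadata root: {root:#x}");
}
```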
Implementations can also be tested easily against each other by taking some metadata and ensuring that they all come to the same metadata hash. Privacy of users should also not be impacted. This assumes that wallets will generate the metadata hash locally and don't leak any information to third party services about which chunks a user will send to their offline wallet. Besides that, there is no leak of private information, as getting the raw metadata from the chain is an operation that is done by almost everyone.

Performance, Ergonomics, and Compatibility

Performance
There should be no measurable impact on performance to Polkadot or any other chain using this feature. The metadata root hash is calculated at compile time and at runtime it is optionally used when checking the signature of a transaction. This means that at runtime no performance-heavy operations are done.

Ergonomics & Compatibility
The proposal alters the way a transaction is built, signed, and verified. So, this imposes some required changes on any kind of developer who wants to construct transactions for Polkadot or any chain using this feature. As the developer can pass 0 to disable the verification of the metadata root hash, it can be easily ignored.

Prior Art and References
RFC 46 produced by the Alzymologist team is a previous work reference that goes in this direction as well. In other ecosystems, there are other solutions to the problem of trusted signing. Cosmos for example has a standardized way of transforming a transaction into some textual representation, and this textual representation is included in the signed data. This basically achieves the same as what the RFC proposes, but it requires that for every transaction applied in a block, every node in the network always has to generate this textual representation to ensure the transaction signature is valid.

Unresolved Questions
None.

Future Directions and Related Material
Does it work with all kinds of offline wallets?
Generic types currently appear multiple times in the metadata with each instantiation. It may be useful to have a generic type only once in the metadata and declare the generic parameters at their instantiation.
A new extension would be put in place to verify that a part of the initial payload was signed by the author under whom the extrinsic should run and change the origin, but the payment for the whole transaction should be handled under a sponsor's account. A POC for this can be found in 3712. The new "general" transaction type would coexist with both current transaction types for a while and, therefore, the current number of supported transaction types, capped at 2, is insufficient. A new extrinsic type must be introduced alongside the current signed and unsigned types. Currently, an encoded extrinsic's first byte indicates the type of extrinsic using the most significant bit - 0 for unsigned, 1 for signed - and the 7 following bits indicate the extrinsic format version, which has been equal to 4 for a long time. By taking one bit from the extrinsic format version encoding, we can support 2 additional extrinsic types while also having a minimal impact on our capability to extend and change the extrinsic format in the future.

Stakeholders
Runtime users
Runtime devs
Wallet devs

Explanation
An extrinsic is currently encoded as one byte to identify the extrinsic type and version. This RFC aims to change the interpretation of this byte regarding the reserved bits for the extrinsic type and version. In the following explanation, bits represented using T make up the extrinsic type and bits represented using V make up the extrinsic version. Currently, the bit allocation within the leading encoded byte is 0bTVVV_VVVV. In practice in the Polkadot ecosystem, the leading byte would be 0bT000_0100, as the version has been equal to 4 for a long time. This RFC proposes for the bit allocation to change to 0bTTVV_VVVV. As a result, the extrinsic format version will be bumped to 5 and the extrinsic type bit representation would change as follows:
[...]
11: reserved

Drawbacks
This change would reduce the maximum possible transaction version from the current 127 to 63. In order to bypass the new, lower limit, the extrinsic format would have to change again.

Testing, Security, and Privacy
There is no impact on testing, security or privacy.

Performance, Ergonomics, and Compatibility
This change would allow Polkadot to support new types of transactions, with the specific "general" transaction type in mind at the time of writing this proposal.

Performance
There is no performance impact.

Ergonomics
The impact to developers and end-users is minimal, as it would just be a bitmask update on their part for parsing the extrinsic type along with the version.

Compatibility
This change breaks backwards compatibility because any transaction that is neither signed nor unsigned, but a new transaction type, would be interpreted as having a future extrinsic format version.

Prior Art and References
The design was originally proposed in the TransactionExtension PR, which is also the motivation behind this effort.

Unresolved Questions
None.

Future Directions and Related Material
Following this change, the "general" transaction type will be introduced as part of the Extrinsic Horizon effort, which will shape future work.
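A small, hedged sketch of separating the two fields in the proposed 0bTTVV_VVVV leading byte follows. Which two-bit values map to the bare/signed/general types is defined in the RFC's table (only 0b11 being reserved is repeated here); the sketch only demonstrates the bitmask change.

```rust
// Sketch of decoding the proposed leading byte layout 0bTTVV_VVVV: the two
// most significant bits select the extrinsic type and the lower six bits
// carry the extrinsic format version. The mapping of 0b00/0b01/0b10 to the
// concrete types is given in the RFC's table (0b11 is reserved).

fn parse_leading_byte(byte: u8) -> (u8, u8) {
    let type_bits = byte >> 6;        // 0b00..0b11, with 0b11 reserved
    let version = byte & 0b0011_1111; // at most 63, as noted in the drawbacks
    (type_bits, version)
}

fn main() {
    // Under the old layout 0bTVVV_VVVV, a signed version-4 extrinsic had the
    // leading byte 0b1000_0100; the same byte parsed with the new mask:
    let (type_bits, version) = parse_leading_byte(0b1000_0100);
    assert_eq!(type_bits, 0b10);
    assert_eq!(version, 4);
}
```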
Authors: Alex Gheorghe (alexggh)

Summary
Extend the DHT authority discovery records with a signed creation time, so that nodes can determine which record is newer and always decide to prefer the newer records to the old ones.

Motivation
Currently, we use the Kademlia DHT for storing records regarding the p2p address of an authority discovery key. The problem is that if a node decides to change its PeerId/network key, it will publish a new record; however, because of the distributed and replicated nature of the DHT, there is no way to tell which record is newer, so both the old PeerId and the new PeerId will live in the network until the old one expires (36h). That creates all sorts of problems and leads to the node changing its address not being properly connected for up to 36h. After this RFC, nodes are extended to decide to keep the new record and propagate the new record to nodes that have the old record stored, so in the end all the nodes will converge faster to the new record (in the order of minutes, not 36h). Implementation of the RFC: https://github.com/paritytech/polkadot-sdk/pull/3786. Current issue without this enhancement: https://github.com/paritytech/polkadot-sdk/issues/3673

Stakeholders
Polkadot node developers.

Explanation
This RFC heavily relies on the functionalities of the Kademlia DHT already in use by Polkadot. You can find a link to the specification here. In a nutshell, on a specific node the current authority-discovery protocol publishes Kademlia DHT records at startup and periodically. The records contain the full address of the node for each authority key it owns. The node also tries to find the full address of all authorities in the network by querying the DHT and picking up the first record it finds for each of the authority ids it found on chain. [...]

Drawbacks
In theory the new protocol creates a bit more traffic on the DHT network, because it waits for DHT records to be received from more than one node, while in the current implementation we just take the first record that we receive and cancel all in-flight requests to other peers. However, because the redundancy factor will be relatively small and this operation happens rarely, every 10 minutes, this cost is negligible.

Testing, Security, and Privacy
This RFC's implementation https://github.com/paritytech/polkadot-sdk/pull/3786 has been tested on various local test networks and versi. With regard to security, the creation time is wrapped inside SignedAuthorityRecord, so it will be signed with the authority id key; there is no way for other malicious nodes to manipulate this field without the receiving node observing it.

Performance, Ergonomics, and Compatibility
Irrelevant.

Performance
Irrelevant.

Ergonomics
Irrelevant.

Compatibility
The changes are backwards compatible with the existing protocol, so nodes with both the old protocol and the newer protocol can exist in the network. This is achieved by the fact that we use protobuf for serializing and deserializing the records, so new fields will be ignored when deserializing with the older protocol; vice-versa, when deserializing an old record with the new protocol, the new field will be None and the new code accepts this record as being valid.
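To illustrate the "prefer the newer record" rule described above, the sketch below compares two records for the same authority key and keeps the one with the larger signed creation time; records without the field (pre-RFC nodes) are treated as oldest. Field names are illustrative only, not the actual protobuf schema.

```rust
// Hedged sketch of the "prefer the newer record" rule. Field names are
// illustrative; the real record is a protobuf message whose creation time
// is signed with the authority key.

#[derive(Clone, Debug, PartialEq)]
struct AuthorityRecord {
    addresses: Vec<String>,
    /// Signed creation time; `None` models a record from a node that
    /// predates this RFC, which stays acceptable for compatibility.
    creation_time: Option<u64>,
}

fn prefer_newer(current: AuthorityRecord, incoming: AuthorityRecord) -> AuthorityRecord {
    // Records without a creation time are treated as oldest.
    if incoming.creation_time.unwrap_or(0) > current.creation_time.unwrap_or(0) {
        incoming
    } else {
        current
    }
}

fn main() {
    let old = AuthorityRecord { addresses: vec!["/dns/old".into()], creation_time: Some(100) };
    let new = AuthorityRecord { addresses: vec!["/dns/new".into()], creation_time: Some(200) };
    assert_eq!(prefer_newer(old, new.clone()), new);
}
```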
-Prior Art and References +Prior Art and References The enhancements have been inspired by the algorithm specified here -Unresolved Questions +Unresolved Questions N/A -Future Directions and Related Material +Future Directions and Related Material N/A (source) Table of Contents @@ -4964,23 +7678,23 @@ in order to speed up the time until all nodes have the newest record, nodes can AuthorsJonas Gehrlein & Alistair Stewart -Summary +Summary This RFC proposes a flexible unbonding mechanism for tokens that are locked from staking on the Relay Chain (DOT/KSM), aiming to enhance user convenience without compromising system security. Locking tokens for staking ensures that Polkadot is able to slash tokens backing misbehaving validators. When changing the locking period, we still need to make sure that Polkadot can slash enough tokens to deter misbehaviour. This means that not all tokens can be unbonded immediately; however, we can still allow some tokens to be unbonded quickly. The new mechanism leads to a significantly reduced unbonding time on average, by queuing up new unbonding requests and scaling their unbonding duration relative to the size of the queue. New requests are executed within a minimum of 2 days, when the queue is comparatively empty, up to the conventional 28 days, if the sum of requests (in terms of stake) exceeds some threshold. In scenarios between these two bounds, the unbonding duration scales proportionately. The new mechanism will never be worse than the current fixed 28 days. In this document we also present an empirical analysis by retrospectively fitting the proposed mechanism to the historic unbonding timeline and show that the average unbonding duration would drastically reduce, while still being sensitive to large unbonding events. Additionally, we discuss implications for UI, UX, and conviction voting. Note: Our proposition solely focuses on the locks imposed from staking. Other locks, such as governance, remain unchanged. Also, this mechanism should not be confused with the already existing FastUnstake feature, which lets users immediately unstake tokens that have not received rewards for 28 days or longer. As an initial step to gauge its effectiveness and stability, it is recommended to implement and test this model on Kusama before considering its integration into Polkadot, with appropriate adjustments to the parameters. In the following, however, we limit our discussion to Polkadot. -Motivation +Motivation Polkadot has one of the longest unbonding periods among all Proof-of-Stake protocols, because security is the most important goal. Staking on Polkadot is still attractive compared to other protocols because of its above-average staking APY. However, the long unbonding period harms usability and deters potential participants that want to contribute to the security of the network. The current length of the unbonding period imposes significant costs for any entity that even wants to perform basic tasks such as a reorganization / consolidation of their stashes, or updating their private key infrastructure. It also limits participation of users that have a large preference for liquidity. The combination of long unbonding periods and high returns has led to the proliferation of liquid staking, where parachains or centralised exchanges offer users their staked tokens before the 28-day unbonding period is over, either in original DOT/KSM form or as derivative tokens.
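The scaling rule from the summary above can be made concrete with a small sketch (illustrative only; the RFC's exact formula and the threshold it derives from the slashable-stake argument are given in the full proposal, and the names below are placeholders): the unbonding duration grows with how full the queue is, clamped between the 2-day and 28-day bounds.

```rust
// Illustrative sketch, not the RFC's exact formula: unbonding duration scales
// with the stake already queued, clamped between LOWER_BOUND and UPPER_BOUND.
const LOWER_BOUND_DAYS: f64 = 2.0;
const UPPER_BOUND_DAYS: f64 = 28.0;

/// `queued_stake` is the stake currently waiting in the queue and
/// `max_queue_stake` is a hypothetical threshold at which the full 28 days apply.
fn unbonding_duration_days(queued_stake: f64, max_queue_stake: f64) -> f64 {
    let fill = (queued_stake / max_queue_stake).clamp(0.0, 1.0);
    LOWER_BOUND_DAYS + fill * (UPPER_BOUND_DAYS - LOWER_BOUND_DAYS)
}

fn main() {
    assert_eq!(unbonding_duration_days(0.0, 1_000_000.0), 2.0);          // empty queue
    assert_eq!(unbonding_duration_days(1_000_000.0, 1_000_000.0), 28.0); // saturated queue
    assert_eq!(unbonding_duration_days(500_000.0, 1_000_000.0), 15.0);   // half full
}
```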
Liquid staking is harmless if few tokens are involved, but it could result in many validators being selected by a few entities if a large fraction of DOTs were involved. This may lead to centralization (see here for more discussion on threats of liquid staking) and an opportunity for attacks. The new mechanism greatly increases the competitiveness of Polkadot, while maintaining sufficient security. -Stakeholders +Stakeholders Every DOT/KSM token holder -Explanation +Explanation Before diving into the details of how to implement the unbonding queue, we give readers context about why Polkadot has a 28-day unbonding period in the first place. The reason for it is to prevent long-range attacks (LRAs), which become theoretically possible if more than 1/3 of validators collude. In essence, an LRA describes the inability of users who disconnect from the consensus at time t0 and reconnect later to realize that validators which were legitimate at a certain time, say t0, but dropped out in the meantime, are not to be trusted anymore. That means, for example, a user syncing the state could be fooled by trusting validators that fell outside the active set of validators after t0, and are building a competitive and malicious chain (fork). LRAs of longer than 28 days are mitigated by the use of trusted checkpoints, which are assumed to be no more than 28 days old. A new node that syncs Polkadot will start at the checkpoint and look for proofs of finality of later blocks, signed by 2/3 of the validators. In an LRA fork, some of the validator sets may be different but only if 2/3 of some validator set in the last 28 days signed something incorrect. If we detect an LRA of no more than 28 days with the current unbonding period, then we should be able to detect misbehaviour from over 1/3 of validators whose nominators are still bonded. The stake backing these validators is a considerable fraction of the total stake (empirically it is 0.287 or so). If we allowed more than this stake to unbond, without checking who it was backing, then the LRA attack might be free of cost for an attacker. The proposed mechanism allows up to half this stake to unbond within 28 days. This halves the amount of tokens that can be slashed, but this is still very high in absolute terms. For example, at the time of writing (19.06.2024) this would translate to around 120 million DOT. @@ -5038,23 +7752,23 @@ The analysis can be reproduced or changed to other parameters using Potential Extension In addition to a simple queue, we could add a market component that lets users always unbond from staking at the minimum possible waiting time (== LOWER_BOUND, e.g., 2 days), by paying a variable fee. To achieve this, it is reasonable to split the total unbonding capacity into two chunks, with the first capacity for the simple queue and the remaining capacity for the fee-based unbonding. By doing so, we allow users to choose whether they want the quickest unbond by paying a dynamic fee or to join the simple queue. Setting a capacity restriction for both queues enables us to guarantee a predictable unbonding time in the simple queue, while allowing users with the respective willingness to pay to get out even earlier. The fees are dynamically adjusted and are proportional to the unbonding stake (and thereby expressed as a percentage of the requested unbonding stake).
In contrast to a unified queue, this prevents the issue of users paying a fee jumping in front of other users not paying a fee and pushing their unbonding time back (which would be bad for UX). The revenue generated could be burned. This extension and further specifications are left out of this RFC, because it adds further complexity and the empirical analysis above suggests that average unbonding times will already be close to the LOWER_BOUND, making a more complex design unnecessary. We advise first implementing the discussed mechanism and assessing after some experience whether an extension is desirable. -Drawbacks +Drawbacks Lower security for LRAs: Without a doubt, the theoretical security against LRAs decreases. But, as we argue, the attack is still costly enough to deter attacks and the attack is sufficiently theoretical. Here, the benefits outweigh the costs. Griefing attacks: A large holder could pretend to unbond a large amount of their tokens to prevent other users from exiting the network earlier. This would, however, be costly because the holder loses out on staking rewards. The larger the impact on the queue, the higher the costs. In any case, it must be noted that the UPPER_BOUND is still 28 days, which means that nominators are never left with a longer unbonding period than currently. There is not enough gain for the attacker to endure this cost. Challenge for Custodians and Liquid Staking Providers: Changing the unbonding time, especially making it flexible, requires entities that offer staking derivatives to rethink and rework their products. -Testing, Security, and Privacy +Testing, Security, and Privacy NA -Performance, Ergonomics, and Compatibility +Performance, Ergonomics, and Compatibility NA -Performance +Performance The authors cannot see any potential impact on performance. -Ergonomics +Ergonomics The authors cannot see any potential impact on ergonomics for developers. We discussed potential impact on UX/UI for users above. -Compatibility +Compatibility The authors cannot see any potential impact on compatibility. This should be assessed by the technical fellows. -Prior Art and References +Prior Art and References Ethereum proposed a similar solution Alistair did some initial write-up @@ -5091,20 +7805,20 @@ The analysis can be reproduced or changed to other parameters using Summary +Summary This RFC proposes a change to the extrinsic format to include a transaction extension version. -Motivation +Motivation The extrinsic format supports being extended with transaction extensions. These transaction extensions are runtime-specific and can be different per chain. Each transaction extension can add data to the extrinsic itself or extend the signed payload. This means that adding a transaction extension breaks the chain-specific extrinsic format. A recent example was the introduction of the CheckMetadataHash extension to Polkadot and all its system chains. As the extension added one byte to the extrinsic, it broke a lot of tooling. By introducing an extra version for the transaction extensions, it will be possible to introduce changes to these transaction extensions while still being backwards compatible. Based on the version of the transaction extensions, each chain runtime could decode the extrinsic correctly and also create the correct signed payload. -Stakeholders +Stakeholders Runtime users Runtime devs Wallet devs -Explanation +Explanation RFC84 introduced the extrinsic format 5.
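As a purely illustrative sketch of this idea (the enum, helper name, and the exact decoding split are assumptions, not the actual runtime code), a single leading version byte lets the runtime pick the right set of transaction extensions before decoding the rest of the payload:

```rust
// Hypothetical sketch: a transaction-extension version byte gating which
// extension set is used to decode the remainder of the extrinsic. Not the
// actual polkadot-sdk decoding logic.
#[derive(Debug)]
enum ExtensionSet {
    /// The extension set deployed before, e.g., a new extension was added.
    V0,
    /// The extension set after adding one more extension (one extra byte).
    V1,
}

fn select_extension_set(payload: &[u8]) -> Option<(ExtensionSet, &[u8])> {
    let (&version, rest) = payload.split_first()?;
    match version {
        0 => Some((ExtensionSet::V0, rest)),
        1 => Some((ExtensionSet::V1, rest)),
        // Unknown versions are rejected instead of being mis-decoded.
        _ => None,
    }
}

fn main() {
    let payload = [1u8, 0xde, 0xad];
    let (set, rest) = select_extension_set(&payload).unwrap();
    println!("decode {} remaining bytes with extension set {:?}", rest.len(), set);
}
```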
The idea is to piggyback onto this change of the extrinsic format to add the extra version for the transaction extensions. If required, this could also come as extrinsic format 6, but 5 is not yet deployed anywhere. The extrinsic format supports the following types of transactions: @@ -5120,25 +7834,25 @@ as extrinsic format 6, but 5 is not yet deployed anywh The Version is a SCALE encoded u8 representing the version of the transaction extensions. In the chain runtime, the version can be used to determine which set of transaction extensions should be used to decode and to validate the transaction. -Drawbacks +Drawbacks This adds one more byte to each signed transaction. -Testing, Security, and Privacy +Testing, Security, and Privacy There is no impact on testing, security or privacy. -Performance, Ergonomics, and Compatibility +Performance, Ergonomics, and Compatibility This will ensure that changes to the transaction extensions can be done in a backwards compatible way. -Performance +Performance There is no performance impact. -Ergonomics +Ergonomics Runtime developers need to take care of the versioning and ensure it is bumped as required, so that there are no compatibility-breaking changes without a bump of the version. It will also add a little bit more code in the runtime to decode these old versions, but this should be negligible. -Compatibility +Compatibility When introduced together with extrinsic format version 5 from RFC84, it can be implemented in a backwards compatible way. So, transactions can still be sent using the old extrinsic format and decoded by the runtime. -Prior Art and References +Prior Art and References None. -Unresolved Questions +Unresolved Questions None. -Future Directions and Related Material +Future Directions and Related Material None. (source) Table of Contents @@ -5175,14 +7889,14 @@ old extrinsic format and decoded by the runtime. AuthorsAdrian Catangiu -Summary +Summary This RFC proposes a new instruction that provides a way to initiate, on remote chains, asset transfers which transfer multiple types (teleports, local-reserve, destination-reserve) of assets, using XCM alone. The currently existing instructions are too opinionated and force each XCM asset transfer to a single transfer type (teleport, local-reserve, destination-reserve). This results in an inability to combine different types of transfers in a single transfer, which results in overall poor UX when trying to move assets across chains. -Motivation +Motivation XCM is the de-facto cross-chain messaging protocol within the Polkadot ecosystem, and cross-chain asset transfers are one of its main use cases. Unfortunately, in its current spec, it does not support initiating, on a remote chain, one or more transfers that combine assets with different transfer types. @@ -5204,14 +7918,14 @@ For example, allows single XCM program execution to transfer multiple assets fro Kusama Asset Hub, over the bridge through Polkadot Asset Hub with final destination ParaP on Polkadot. With current XCM, we are limited to doing multiple independent transfers for each individual hop in order to move both "interesting" assets, but also "supporting" assets (used to pay fees). -Stakeholders +Stakeholders Runtime users Runtime devs Wallet devs dApps devs -Explanation +Explanation A new instruction InitiateAssetsTransfer is introduced that initiates an assets transfer from the chain it is executed on to another chain.
The executed transfer is point-to-point (chain-to-chain) with all of the transfer properties specified in the instruction parameters. The instruction also @@ -5399,9 +8113,9 @@ by executing a single XCM message, even though we'll be mixing multiple ).unwrap(); }) } -Drawbacks +Drawbacks No drawbacks identified. -Testing, Security, and Privacy +Testing, Security, and Privacy There should be no security risks related to the new instruction from the XCVM perspective. It follows the same pattern as with single-type asset transfers, only now it allows combining multiple types at once. Improves security by enabling @@ -5410,16 +8124,16 @@ which minimizes the potential free/unpaid work that a receiving chain has to do. required execution fee payment, part of the instruction logic through the remote_fees: Option<AssetTransferFilter> parameter, which will make sure the remote XCM starts with a single-asset-holding-loading-instruction, immediately followed by a BuyExecution using said asset. -Performance, Ergonomics, and Compatibility +Performance, Ergonomics, and Compatibility This brings no impact to the rest of the XCM spec. It is a new, independent instruction, no changes to existing instructions. Enhances the exposed functionality of Polkadot. Will allow multi-chain transfers that are currently forced to happen in multiple programs per asset per "hop", to be possible in a single XCM program. -Performance +Performance No performance changes/implications. -Ergonomics +Ergonomics The proposal enhances developers' and users' cross-chain asset transfer capabilities. This enhancement is optimized for XCM programs transferring multiple assets, needing to run their logic across multiple chains. -Compatibility +Compatibility Does this proposal break compatibility with existing interfaces, older versions of implementations? Summarize necessary migrations or upgrade strategies, if any. This enhancement is compatible with all existing XCM programs and versions. @@ -5428,11 +8142,11 @@ success. A program where the new instruction is used to initiate multiple types of asset transfers, cannot be downgraded to older XCM versions, because there is no equivalent capability there. Such conversion attempts will explicitly fail. -Prior Art and References +Prior Art and References None. -Unresolved Questions +Unresolved Questions None. -Future Directions and Related Material +Future Directions and Related Material None. (source) Table of Contents @@ -5465,10 +8179,10 @@ Such conversion attempts will explicitly fail. AuthorsAdrian Catangiu -Summary +Summary The Transact XCM instruction currently forces the user to set a specific maximum weight allowed to the inner call and then also pay for that much weight regardless of how much the call actually needs in practice. This RFC proposes improving the usability of Transact by removing that parameter and instead get and charge the actual weight of the inner call from its dispatch info on the remote chain. -Motivation +Motivation The UX of using Transact is poor because of having to guess/estimate the require_weight_at_most weight used by the inner call on the target. We've seen multiple Transact on-chain failures caused by guessing wrong values for this require_weight_at_most even though the rest of the XCM program would have worked. In practice, this parameter only adds UX overhead with no real practical value. Use cases fall in one of two categories: @@ -5481,40 +8195,40 @@ weight limit parameter. 
We've had multiple OpenGov root/whitelisted_caller proposals initiated by core-devs completely or partially fail because of incorrect configuration of the require_weight_at_most parameter. This is a strong indication that the instruction is hard to use. -Stakeholders +Stakeholders
#![allow(unused)] fn main() { @@ -4013,31 +6727,31 @@ actual exported function signature looks like: already gets the proof passed as Vec<u8>. This proof needs to be decoded to the actual Proof type as explained above. The proof and the SCALE encoded account_id of the sender are used to verify the ownership of the SessionKeys. -Drawbacks +Drawbacks Validator operators need to pass their account id when rotating their session keys in a node. This will require updating some high-level docs and making users familiar with the slightly changed ergonomics. -Testing, Security, and Privacy +Testing, Security, and Privacy Testing of the new changes only requires passing an appropriate owner for the current testing context. The changes to the proof generation and verification have been audited to ensure they are correct. -Performance, Ergonomics, and Compatibility -Performance +Performance, Ergonomics, and Compatibility +Performance The session key generation is an offchain process and thus doesn't influence the performance of the chain. Verifying the proof is done on chain as part of the transaction logic for setting the session keys. The verification of the proof requires one signature verification per individual session key. As setting the session keys happens quite rarely, it should not influence the overall system performance. -Ergonomics +Ergonomics The interfaces have been optimized to make it as easy as possible to generate the ownership proof. -Compatibility +Compatibility Introduces a new version of the SessionKeys runtime api. Thus, nodes should be updated before a runtime is enacted that contains these changes, otherwise they will fail to generate session keys. The RPC that exists around this runtime api needs to be updated to support passing the account id and returning the ownership proof alongside the public session keys. UIs would need to be updated to support the new RPC and the changed on chain logic. -Prior Art and References +Prior Art and References None. -Unresolved Questions +Unresolved Questions None. -Future Directions and Related Material +Future Directions and Related Material Substrate implementation of the RFC. (source) Table of Contents @@ -4075,10 +6789,10 @@ and for returning the ownership proof alongside the public session keys. AuthorsJoe Petrowski, Gavin Wood -Summary +Summary The Fellowship Manifesto states that members should receive a monthly allowance on par with gross income in OECD countries. This RFC proposes concrete amounts. -Motivation +Motivation One motivation for the Technical Fellowship is to provide an incentive mechanism that can induct and retain technical talent for the continued progress of the network. In order for members to uphold their commitment to the network, they should receive support to @@ -4088,12 +6802,12 @@ on par with a full-time job. Providing a livable wage to those making such contr pragmatic to work full-time on Polkadot. Note: Goals of the Fellowship, expectations for each Dan, and conditions for promotion and demotion are all explained in the Manifesto. This RFC is only to propose concrete values for allowances. -Stakeholders +Stakeholders Fellowship members Polkadot Treasury -Explanation +Explanation This RFC proposes agreeing on salaries relative to a single level, the III Dan. As such, changes to the amount or asset used would only be on a single value, and all others would adjust relatively. A III Dan is someone whose contributions match the expectations of a full-time individual contributor.
@@ -4153,19 +6867,19 @@ other hand, more people will likely join the Fellowship in the coming years. Updates Updates to these levels, whether relative ratios, the asset used, or the amount, shall be done via RFC. -Drawbacks +Drawbacks By not using DOT for payment, the protocol relies on the stability of other assets and the ability to acquire them. However, the asset of choice can be changed in the future. -Testing, Security, and Privacy +Testing, Security, and Privacy N/A. -Performance, Ergonomics, and Compatibility -Performance +Performance, Ergonomics, and Compatibility +Performance N/A -Ergonomics +Ergonomics N/A -Compatibility +Compatibility N/A -Prior Art and References +Prior Art and References The Polkadot Fellowship Manifesto @@ -4173,7 +6887,7 @@ Manifesto Indeed: Average Salary for Engineers, United States -Unresolved Questions +Unresolved Questions None at present. (source) Table of Contents @@ -4206,11 +6920,11 @@ States AuthorsPierre Krieger -Summary +Summary When two peers connect to each other, they open (amongst other things) a so-called "notifications protocol" substream dedicated to gossiping transactions to each other. Each notification on this substream currently consists of a SCALE-encoded Vec<Transaction> where Transaction is defined in the runtime. This RFC proposes to modify the format of the notification to become (Compact(1), Transaction). This maintains backwards compatibility, as this new format decodes as a Vec of length equal to 1. -Motivation +Motivation There are three motivations behind this change: @@ -4223,9 +6937,9 @@ States It makes the implementation much more straightforward by not having to repeat code related to back-pressure. See explanations below. -Stakeholders +Stakeholders Low-level developers. -Explanation +Explanation To give an example, if you send one notification with three transactions, the bytes that are sent on the wire are: concat( leb128(total-size-in-bytes-of-the-rest), @@ -4245,23 +6959,23 @@ A SCALE-compact encoded 1 is one byte of value 4. In o This is equivalent to forcing the Vec<Transaction> to always have a length of 1, and I expect the Substrate implementation to simply modify the sending side to add a for loop that sends one notification per item in the Vec. As explained in the motivation section, this allows extracting scale(transaction) items without having to know how to decode them. By "flattening" the two-step hierarchy, an implementation only needs to back-pressure individual notifications rather than back-pressure notifications and transactions within notifications. -Drawbacks +Drawbacks This RFC chooses to maintain backwards compatibility at the cost of introducing a very small wart (the Compact(1)). An alternative could be to introduce a new version of the transactions notifications protocol that sends one Transaction per notification, but this is significantly more complicated to implement and can always be done later in case the Compact(1) is bothersome. -Testing, Security, and Privacy +Testing, Security, and Privacy Irrelevant. -Performance, Ergonomics, and Compatibility -Performance +Performance, Ergonomics, and Compatibility +Performance Irrelevant. -Ergonomics +Ergonomics Irrelevant. -Compatibility +Compatibility The change is backwards compatible if done in two steps: modify the sender to always send one transaction per notification, then, after a while, modify the receiver to enforce the new format. -Prior Art and References +Prior Art and References Irrelevant.
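Referring back to the wire format above, the compatibility trick rests on the fact that a SCALE compact-encoded 1 is the single byte 0x04, so a notification built as (Compact(1), Transaction) still decodes as a one-element Vec<Transaction>. A minimal sketch with hypothetical helper names (not Substrate code):

```rust
// Hypothetical sketch of the proposed notification body: a SCALE compact-encoded
// length of 1 followed by the opaque transaction bytes. Not Substrate code.
fn encode_single_tx_notification(tx: &[u8]) -> Vec<u8> {
    let mut out = Vec::with_capacity(tx.len() + 1);
    // SCALE compact encoding of small values shifts them left by two bits and
    // uses 0b00 mode bits, so Compact(1) encodes as the single byte 0x04.
    out.push(0x04);
    out.extend_from_slice(tx);
    out
}

fn main() {
    let tx = b"opaque-transaction-bytes";
    let notification = encode_single_tx_notification(tx);
    // Old receivers decode this as a Vec<Transaction> of length 1.
    assert_eq!(notification[0], 0x04);
    assert_eq!(&notification[1..], &tx[..]);
}
```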
-Unresolved Questions +Unresolved Questions None. -Future Directions and Related Material +Future Directions and Related Material None. This is a simple isolated change. (source) Table of Contents @@ -4301,20 +7015,20 @@ This is equivalent to forcing the Vec<Transaction> to always AuthorsPierre Krieger -Summary +Summary This RFC proposes to make the mechanism of RFC #8 more generic by introducing the concept of "capabilities". Implementations can implement certain "capabilities", such as serving old block headers or being a parachain bootnode. The discovery mechanism of RFC #8 is extended to be able to discover nodes of specific capabilities. -Motivation +Motivation The Polkadot peer-to-peer network is made of nodes. Not all these nodes are equal. Some nodes store only the headers of recent blocks, some nodes store all the block headers and bodies since the genesis, some nodes store the storage of all blocks since the genesis, and so on. It is currently not possible to know ahead of time (without connecting to it and asking) which nodes have which data available, and it is not easily possible to build a list of nodes that have a specific piece of data available. If you want to download for example the header of block 500, you have to connect to a randomly-chosen node, ask it for block 500, and if it says that it doesn't have the block, disconnect and try another randomly-chosen node. In certain situations such as downloading the storage of old blocks, nodes that have the information are relatively rare, and finding through trial and error a node that has the data can take a long time. This RFC attempts to solve this problem by giving the possibility to build a list of nodes that are capable of serving specific data. -Stakeholders +Stakeholders Low-level client developers. People interested in accessing the archive of the chain. -Explanation +Explanation Reading RFC #8 first might help with comprehension, as this RFC is very similar. Please keep in mind while reading that everything below applies for both relay chains and parachains, except mentioned otherwise. Capabilities @@ -4350,30 +7064,30 @@ If blocks pruning is enabled and the chain is a relay chain, then Substrate unfo Implementations that have the head of the chain provider capability do not register themselves as providers, but instead are the nodes that participate in the main DHT. In other words, they are the nodes that serve requests of the /<genesis_hash>/kad protocol. Any implementation that isn't a head of the chain provider (read: light clients) must not participate in the main DHT. This is already presently the case. Implementations must not participate in the main DHT if they don't fulfill the capability yet. For example, a node that is still in the process of warp syncing must not participate in the main DHT. However, assuming that warp syncing doesn't last more than a few seconds, it is acceptable to ignore this requirement in order to avoid complicating implementations too much. -Drawbacks +Drawbacks None that I can see. -Testing, Security, and Privacy +Testing, Security, and Privacy The content of this section is basically the same as the one in RFC 8. This mechanism doesn't add or remove any security by itself, as it relies on existing mechanisms. Due to the way Kademlia works, it would become the responsibility of the 20 Polkadot nodes whose sha256(peer_id) is closest to the key (described in the explanations section) to store the list of nodes that have specific capabilities. 
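For intuition on the closeness notion used here, the following is a small sketch (not the actual implementation; the real protocol hashes PeerIds with sha256 and relies on the DHT's own provider-record machinery) of Kademlia-style selection: the nodes responsible for a capability key are those whose identifier has the smallest XOR distance to it.

```rust
// Illustrative sketch of Kademlia-style closeness over 32-byte identifiers.
type Key = [u8; 32];

/// XOR distance between two identifiers; comparing the results
/// lexicographically matches the usual big-endian numeric ordering.
fn xor_distance(a: &Key, b: &Key) -> Key {
    let mut out = [0u8; 32];
    for i in 0..32 {
        out[i] = a[i] ^ b[i];
    }
    out
}

/// Return the `k` node ids closest to `target` (k = 20 in the text above).
fn closest_nodes(target: &Key, mut nodes: Vec<Key>, k: usize) -> Vec<Key> {
    nodes.sort_by_key(|n| xor_distance(target, n));
    nodes.truncate(k);
    nodes
}

fn main() {
    let target = [0u8; 32];
    let mut far = [0u8; 32];
    far[0] = 0xff;
    let mut near = [0u8; 32];
    near[31] = 0x01;
    assert_eq!(closest_nodes(&target, vec![far, near], 1), vec![near]);
}
```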
Furthermore, when a large number of providers are registered, only the providers closest to the key are kept, up to a certain implementation-defined limit. For this reason, an attacker can abuse this mechanism by randomly generating libp2p PeerIds until they find the 20 entries closest to the key representing the target capability. They are then in control of the list of nodes with that capability. While doing this is not directly harmful by itself, it could lead to eclipse attacks. Because the key changes periodically and isn't predictable, and assuming that the Polkadot DHT is sufficiently large, it is not realistic for an attack like this to be maintained in the long term. -Performance, Ergonomics, and Compatibility -Performance +Performance, Ergonomics, and Compatibility +Performance The DHT mechanism generally has a low overhead, especially given that publishing providers is done only every 24 hours. Doing a Kademlia iterative query then sending a provider record shouldn't take more than around 50 kiB of bandwidth in total for the parachain bootnode. Assuming 1000 nodes with a specific capability, the 20 Polkadot full nodes corresponding to that capability will each receive a sudden spike of a few megabytes of networking traffic when the key rotates. Again, this is relatively negligible. If this becomes a problem, one can add a random delay before a node registers itself to be the provider of the key corresponding to BabeApi_next_epoch. Maybe the biggest uncertainty is the traffic that the 20 Polkadot full nodes will receive from light clients that want to know the nodes with a capability. If this ever becomes a problem, this value of 20 is an arbitrary constant that can be increased for more redundancy. -Ergonomics +Ergonomics Irrelevant. -Compatibility +Compatibility Irrelevant. -Prior Art and References +Prior Art and References Unknown. -Unresolved Questions +Unresolved Questions While it fundamentally doesn't change much in this RFC, using BabeApi_currentEpoch and BabeApi_nextEpoch might be inappropriate. I'm not familiar enough with good practices within the runtime to have an opinion here. Should it be an entirely new pallet? -Future Directions and Related Material +Future Directions and Related Material This RFC would make it possible to reliably discover archive nodes, which would make it possible to reliably send archive node requests, something that isn't currently possible. This could solve the problem of finding archive RPC node providers by migrating archive-related requests to using the native peer-to-peer protocol rather than JSON-RPC. If we ever decide to break backwards compatibility, we could divide the "history" and "archive" capabilities in two, between nodes capable of serving older blocks and nodes capable of serving newer blocks. We could even add to the peer-to-peer network nodes that are only capable of serving older blocks (by reading from a database) but do not participate in the head of the chain, and that just exist for historical purposes. @@ -4422,12 +7136,12 @@ We could even add to the peer-to-peer network nodes that are only capable of ser AuthorsZondax AG, Parity Technologies -Summary +Summary To interact with chains in the Polkadot ecosystem, it is required to know how transactions are encoded and how to read state. For doing this, Polkadot-SDK, the framework used by most of the chains in the Polkadot ecosystem, exposes metadata about the runtime to the outside. UIs, wallets, and others can use this metadata to interact with these chains.
This makes the metadata a crucial piece of the transaction encoding as users rely on the interacting software to encode the transactions in the correct format. It gets even more important when the user signs the transaction in an offline wallet, as the device by its nature cannot get access to the metadata without relying on the online wallet to provide it. This makes it so that the offline wallet needs to trust an online party, rendering the security assumptions of the offline devices moot. This RFC proposes a way for offline wallets to leverage metadata, within the constraints of these devices. The design idea is that the metadata is chunked and these chunks are put into a merkle tree. The root hash of this merkle tree represents the metadata. The offline wallets can use the root hash to decode transactions by getting proofs for the individual chunks of the metadata. This root hash is also included in the signed data of the transaction (but not sent as part of the transaction). The runtime then includes its known metadata root hash when verifying the transaction. If the metadata root hash known by the runtime differs from the one that the offline wallet used, it very likely means that the online wallet provided some fake data and the verification of the transaction fails. Users depend on offline wallets to correctly display decoded transactions before signing. With merkleized metadata, they can be assured of the transaction's legitimacy, as incorrect transactions will be rejected by the runtime. -Motivation +Motivation Polkadot's innovative design (both relay chain and parachains) gives developers the ability to upgrade their network as frequently as they need. These systems manage to have integrations working after the upgrades with the help of FRAME Metadata. This Metadata, which is in the order of half a MiB for most Polkadot-SDK chains, completely describes chain interfaces and properties. Securing this metadata is key for users to be able to interact with the Polkadot-SDK chain in the expected way. On the other hand, offline wallets provide a secure way for blockchain users to hold their own keys (some do a better job than others). These devices seldom get upgraded, usually target one particular network, and have very small internal memories. Currently, in the Polkadot ecosystem, there is no secure way of having these offline devices know the latest Metadata of the Polkadot-SDK chain they are interacting with. This results in a plethora of similar yet slightly different offline wallets for all different Polkadot-SDK chains, as well as the impediment of keeping these regularly updated, thus not fully leveraging Polkadot-SDK’s unique forkless upgrade feature. The two main reasons why this is not possible today are: @@ -4436,7 +7150,7 @@ We could even add to the peer-to-peer network nodes that are only capable of ser Metadata is not authenticated. Even if there was enough space on offline devices to hold the metadata, the user would be trusting the entity providing this metadata to the hardware wallet. In the Polkadot ecosystem, this is how Polkadot Vault currently works. This RFC proposes a solution to make FRAME Metadata compatible with offline signers in a secure way. As it leverages FRAME Metadata, it not only ensures that offline devices can always keep up to date with every FRAME-based chain, but also that every offline wallet will be compatible with all FRAME-based chains, avoiding the need for per-chain implementations.
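To make the core idea above concrete, here is a deliberately simplified sketch of why a wrong root makes verification fail: both sides append the metadata root they know to the payload before checking the signature, so a signature produced against a fake root cannot verify against the runtime's root. The hashing and the "signing" stand-in below are placeholders; the RFC specifies the real digest construction, hash function, and chunking.

```rust
// Simplified sketch only; `toy_hash` is a placeholder, not the RFC's hash.
fn toy_hash(data: &[u8]) -> [u8; 8] {
    // FNV-1a style folding as a stand-in for a real cryptographic hash.
    let mut acc: u64 = 0xcbf29ce484222325;
    for &b in data {
        acc ^= b as u64;
        acc = acc.wrapping_mul(0x100000001b3);
    }
    acc.to_be_bytes()
}

/// Stand-in for sign/verify: signer and runtime each extend the payload with
/// the metadata root they know; only matching roots yield the same digest.
fn signed_payload_digest(call: &[u8], metadata_root: &[u8; 8]) -> [u8; 8] {
    let mut payload = call.to_vec();
    payload.extend_from_slice(metadata_root); // included in signed data only
    toy_hash(&payload)
}

fn main() {
    let call = b"transfer";
    let runtime_root = toy_hash(b"real metadata chunks");
    let fake_root = toy_hash(b"metadata from a lying online wallet");

    // The wallet signed against the fake root, so the runtime's check fails.
    assert_ne!(
        signed_payload_digest(call, &fake_root),
        signed_payload_digest(call, &runtime_root)
    );
}
```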
-Requirements +Requirements Metadata's integrity MUST be preserved. If any compromise were to happen, extrinsics sent with compromised metadata SHOULD fail. Metadata information that could be used in signable extrinsic decoding MAY be included in digest, yet its inclusion MUST be indicated in signed extensions. @@ -4454,14 +7168,14 @@ We could even add to the peer-to-peer network nodes that are only capable of ser Chunks handling mechanism SHOULD support chunks being sent in any order without memory utilization overhead; Unused enum variants MUST be stripped (this has great impact on transmitted metadata size; examples: era enum, enum with all calls for call batching). -Stakeholders +Stakeholders Runtime implementors UI/wallet implementors Offline wallet implementors The idea for this RFC was brought up by runtime implementors and was extensively discussed with offline wallet implementors. It was designed in such a way that it can work easily with the existing offline wallet solutions in the Polkadot ecosystem. -Explanation +Explanation The FRAME metadata provides a wide range of information about a FRAME based runtime. It contains information about the pallets, the calls per pallet, the storage entries per pallet, runtime APIs, and type information about most of the types that are used in the runtime. For decoding extrinsics on an offline wallet, what is mainly required is type information. Most of the other information in the FRAME metadata is actually not required for decoding extrinsics and thus it can be removed. Therefore, the following is a proposal on a custom representation of the metadata and how this custom metadata is chunked, ensuring that only the needed chunks required for decoding a particular extrinsic are sent to the offline wallet. The necessary information to transform the FRAME metadata type information into the type information presented in this RFC will be provided. However, not every single detail on how to convert from FRAME metadata into the RFC type information is described. First, the MetadataDigest is introduced. After that, ExtrinsicMetadata is covered and finally the actual format of the type information. Then pruning of unrelated type information is covered and how to generate the TypeRefs. In the latest step, merkle tree calculation is explained. Metadata digest @@ -4732,23 +7446,23 @@ nodes: [[[2, 3], [4, 5]], [0, 1]] Included in the extrinsic is u8, the "mode". The mode is either 0 which means to not include the metadata hash in the signed data or the mode is 1 to include the metadata hash in V1. Included in the signed data is an Option<[u8; 32]>. Depending on the mode the value is either None or Some(metadata_hash). -Drawbacks +Drawbacks The chunking may not be the optimal case for every kind of offline wallet. -Testing, Security, and Privacy +Testing, Security, and Privacy All implementations are required to strictly follow the RFC to generate the metadata hash. This includes which hash function to use and how to construct the metadata types tree. So, all implementations are following the same security criteria. As the chains will calculate the metadata hash at compile time, the build process needs to be trusted. However, this is already a solved problem in the Polkadot ecosystem by using reproducible builds. So, anyone can rebuild a chain runtime to ensure that a proposal is actually containing the changes as advertised. 
Implementations can also be tested easily against each other by taking some metadata and ensuring that they all come to the same metadata hash. Privacy of users should also not be impacted. This assumes that wallets will generate the metadata hash locally and do not leak any information to third-party services about which chunks a user will send to their offline wallet. Besides that, there is no leak of private information as getting the raw metadata from the chain is an operation that is done by almost everyone. -Performance, Ergonomics, and Compatibility -Performance +Performance, Ergonomics, and Compatibility +Performance There should be no measurable impact on the performance of Polkadot or any other chain using this feature. The metadata root hash is calculated at compile time and at runtime it is optionally used when checking the signature of a transaction. This means that at runtime no performance-heavy operations are done. -Ergonomics & Compatibility +Ergonomics & Compatibility The proposal alters the way a transaction is built, signed, and verified. So, this imposes some required changes on any developer who wants to construct transactions for Polkadot or any chain using this feature. As the developer can pass 0 for disabling the verification of the metadata root hash, it can be easily ignored. -Prior Art and References +Prior Art and References RFC 46 produced by the Alzymologist team is a previous work reference that goes in this direction as well. On other ecosystems, there are other solutions to the problem of trusted signing. Cosmos, for example, has a standardized way of transforming a transaction into some textual representation, and this textual representation is included in the signed data. This basically achieves the same as what this RFC proposes, but it requires that for every transaction applied in a block, every node in the network always has to generate this textual representation to ensure the transaction signature is valid. -Unresolved Questions +Unresolved Questions None. -Future Directions and Related Material +Future Directions and Related Material Does it work with all kinds of offline wallets? Generic types currently appear multiple times in the metadata with each instantiation. It may be useful to have a generic type appear only once in the metadata and declare the generic parameters at their instantiation.
Validator operators need to pass their account id when rotating their session keys on a node. This will require updating some high-level docs and making users familiar with the slightly changed ergonomics.
Testing of the new changes only requires passing an appropriate owner for the current testing context. The changes to the proof generation and verification were audited to ensure they are correct.
The session key generation is an offchain process and thus does not influence the performance of the chain. Verifying the proof is done on chain as part of the transaction logic for setting the session keys. Verifying the proof requires one signature verification per individual session key. As setting the session keys happens quite rarely, it should not influence the overall system performance.
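To make the verification cost concrete, here is a minimal sketch of what the on-chain check amounts to: one signature verification per session key, over the owner's account id. The proof format, the function name verify_ownership_proof, and the use of sr25519 for every key are assumptions for illustration only; the RFC defines the actual proof structure.

```rust
// Hypothetical sketch, assuming sp-core with default (std) features.
// The real proof format is defined by the RFC; this only illustrates the
// verification cost: one signature check per individual session key.
use sp_core::{sr25519, Pair};

fn verify_ownership_proof(
    owner: &[u8],
    session_keys: &[sr25519::Public],
    proofs: &[sr25519::Signature],
) -> bool {
    session_keys.len() == proofs.len()
        && session_keys
            .iter()
            .zip(proofs)
            // One signature verification per individual session key.
            .all(|(key, sig)| sr25519::Pair::verify(sig, owner, key))
}

fn main() {
    let owner = b"hypothetical-account-id".to_vec();
    let (pair, _) = sr25519::Pair::generate();
    let proof = pair.sign(&owner);
    assert!(verify_ownership_proof(&owner, &[pair.public()], &[proof]));
}
```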
The interfaces have been optimized to make it as easy as possible to generate the ownership proof.
This introduces a new version of the SessionKeys runtime API. Thus, nodes should be updated before a runtime containing these changes is enacted, otherwise they will fail to generate session keys. The RPC that exists around this runtime API needs to be updated to support passing the account id and returning the ownership proof alongside the public session keys.
UIs would need to be updated to support the new RPC and the changed on-chain logic.
Substrate implementation of the RFC.
The Fellowship Manifesto states that members should receive a monthly allowance on par with gross income in OECD countries. This RFC proposes concrete amounts.
One motivation for the Technical Fellowship is to provide an incentive mechanism that can induct and retain technical talent for the continued progress of the network.
In order for members to uphold their commitment to the network, they should receive support to @@ -4088,12 +6802,12 @@ on par with a full-time job. Providing a livable wage to those making such contributions makes it pragmatic to work full-time on Polkadot.
Note: Goals of the Fellowship, expectations for each Dan, and conditions for promotion and demotion are all explained in the Manifesto. This RFC is only to propose concrete values for allowances.
This RFC proposes agreeing on salaries relative to a single level, the III Dan. As such, changes to the amount or asset used would only be on a single value, and all others would adjust relatively. A III Dan is someone whose contributions match the expectations of a full-time individual contributor. @@ -4153,19 +6867,19 @@ other hand, more people will likely join the Fellowship in the coming years.
Updates to these levels, whether relative ratios, the asset used, or the amount, shall be done via RFC.
By not using DOT for payment, the protocol relies on the stability of other assets and the ability to acquire them. However, the asset of choice can be changed in the future.
N/A.
When two peers connect to each other, they open (amongst other things) a so-called "notifications protocol" substream dedicated to gossiping transactions to each other.
Each notification on this substream currently consists of a SCALE-encoded Vec<Transaction>, where Transaction is defined in the runtime.
This RFC proposes to modify the format of the notification to become (Compact(1), Transaction). This maintains backwards compatibility, as this new format decodes as a Vec of length equal to 1.
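A minimal sketch of why this stays decodable, using the parity-scale-codec crate. The transaction payload is modelled as an opaque Vec<u8> stand-in for the runtime's Transaction type; the equivalence shown (a one-element Vec and a (Compact(1), item) tuple encode identically) holds regardless of the concrete type.

```rust
use parity_scale_codec::{Compact, Decode, Encode};

fn main() {
    // Stand-in for an already-defined runtime Transaction type.
    let tx: Vec<u8> = vec![0xde, 0xad, 0xbe, 0xef];

    // Old format: a Vec<Transaction>, length-prefixed with a compact integer.
    let old = vec![tx.clone()].encode();
    // New format: (Compact(1), Transaction) -- byte-for-byte identical for one item.
    let new = (Compact(1u32), tx.clone()).encode();
    assert_eq!(old, new);

    // A legacy receiver still decodes the new format as a one-element Vec.
    let decoded = Vec::<Vec<u8>>::decode(&mut &new[..]).unwrap();
    assert_eq!(decoded, vec![tx]);
}
```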
There exist three motivations behind this change:
It makes the implementation much more straightforward by not having to repeat code related to back-pressure. See explanations below.
Low-level developers.
To give an example, if you send one notification with three transactions, the bytes that are sent on the wire are:
concat( leb128(total-size-in-bytes-of-the-rest), @@ -4245,23 +6959,23 @@ A SCALE-compact encoded 1 is one byte of value 4. In o This is equivalent to forcing the Vec<Transaction> to always have a length of 1, and I expect the Substrate implementation to simply modify the sending side to add a for loop that sends one notification per item in the Vec. As explained in the motivation section, this allows extracting scale(transaction) items without having to know how to decode them. By "flattening" the two-steps hierarchy, an implementation only needs to back-pressure individual notifications rather than back-pressure notifications and transactions within notifications. -Drawbacks +Drawbacks This RFC chooses to maintain backwards compatibility at the cost of introducing a very small wart (the Compact(1)). An alternative could be to introduce a new version of the transactions notifications protocol that sends one Transaction per notification, but this is significantly more complicated to implement and can always be done later in case the Compact(1) is bothersome. -Testing, Security, and Privacy +Testing, Security, and Privacy Irrelevant. -Performance, Ergonomics, and Compatibility -Performance +Performance, Ergonomics, and Compatibility +Performance Irrelevant. -Ergonomics +Ergonomics Irrelevant. -Compatibility +Compatibility The change is backwards compatible if done in two steps: modify the sender to always send one transaction per notification, then, after a while, modify the receiver to enforce the new format. -Prior Art and References +Prior Art and References Irrelevant. -Unresolved Questions +Unresolved Questions None. -Future Directions and Related Material +Future Directions and Related Material None. This is a simple isolated change. (source) Table of Contents @@ -4301,20 +7015,20 @@ This is equivalent to forcing the Vec<Transaction> to always AuthorsPierre Krieger -Summary +Summary This RFC proposes to make the mechanism of RFC #8 more generic by introducing the concept of "capabilities". Implementations can implement certain "capabilities", such as serving old block headers or being a parachain bootnode. The discovery mechanism of RFC #8 is extended to be able to discover nodes of specific capabilities. -Motivation +Motivation The Polkadot peer-to-peer network is made of nodes. Not all these nodes are equal. Some nodes store only the headers of recent blocks, some nodes store all the block headers and bodies since the genesis, some nodes store the storage of all blocks since the genesis, and so on. It is currently not possible to know ahead of time (without connecting to it and asking) which nodes have which data available, and it is not easily possible to build a list of nodes that have a specific piece of data available. If you want to download for example the header of block 500, you have to connect to a randomly-chosen node, ask it for block 500, and if it says that it doesn't have the block, disconnect and try another randomly-chosen node. In certain situations such as downloading the storage of old blocks, nodes that have the information are relatively rare, and finding through trial and error a node that has the data can take a long time. This RFC attempts to solve this problem by giving the possibility to build a list of nodes that are capable of serving specific data. -Stakeholders +Stakeholders Low-level client developers. People interested in accessing the archive of the chain. 
-Explanation +Explanation Reading RFC #8 first might help with comprehension, as this RFC is very similar. Please keep in mind while reading that everything below applies for both relay chains and parachains, except mentioned otherwise. Capabilities @@ -4350,30 +7064,30 @@ If blocks pruning is enabled and the chain is a relay chain, then Substrate unfo Implementations that have the head of the chain provider capability do not register themselves as providers, but instead are the nodes that participate in the main DHT. In other words, they are the nodes that serve requests of the /<genesis_hash>/kad protocol. Any implementation that isn't a head of the chain provider (read: light clients) must not participate in the main DHT. This is already presently the case. Implementations must not participate in the main DHT if they don't fulfill the capability yet. For example, a node that is still in the process of warp syncing must not participate in the main DHT. However, assuming that warp syncing doesn't last more than a few seconds, it is acceptable to ignore this requirement in order to avoid complicating implementations too much. -Drawbacks +Drawbacks None that I can see. -Testing, Security, and Privacy +Testing, Security, and Privacy The content of this section is basically the same as the one in RFC 8. This mechanism doesn't add or remove any security by itself, as it relies on existing mechanisms. Due to the way Kademlia works, it would become the responsibility of the 20 Polkadot nodes whose sha256(peer_id) is closest to the key (described in the explanations section) to store the list of nodes that have specific capabilities. Furthermore, when a large number of providers are registered, only the providers closest to the key are kept, up to a certain implementation-defined limit. For this reason, an attacker can abuse this mechanism by randomly generating libp2p PeerIds until they find the 20 entries closest to the key representing the target capability. They are then in control of the list of nodes with that capability. While doing this can in no way be actually harmful, it could lead to eclipse attacks. Because the key changes periodically and isn't predictable, and assuming that the Polkadot DHT is sufficiently large, it is not realistic for an attack like this to be maintained in the long term. -Performance, Ergonomics, and Compatibility -Performance +Performance, Ergonomics, and Compatibility +Performance The DHT mechanism generally has a low overhead, especially given that publishing providers is done only every 24 hours. Doing a Kademlia iterative query then sending a provider record shouldn't take more than around 50 kiB in total of bandwidth for the parachain bootnode. Assuming 1000 nodes with a specific capability, the 20 Polkadot full nodes corresponding to that capability will each receive a sudden spike of a few megabytes of networking traffic when the key rotates. Again, this is relatively negligible. If this becomes a problem, one can add a random delay before a node registers itself to be the provider of the key corresponding to BabeApi_next_epoch. Maybe the biggest uncertainty is the traffic that the 20 Polkadot full nodes will receive from light clients that desire knowing the nodes with a capability. If this every becomes a problem, this value of 20 is an arbitrary constant that can be increased for more redundancy. -Ergonomics +Ergonomics Irrelevant. -Compatibility +Compatibility Irrelevant. -Prior Art and References +Prior Art and References Unknown. 
-Unresolved Questions +Unresolved Questions While it fundamentally doesn't change much to this RFC, using BabeApi_currentEpoch and BabeApi_nextEpoch might be inappropriate. I'm not familiar enough with good practices within the runtime to have an opinion here. Should it be an entirely new pallet? -Future Directions and Related Material +Future Directions and Related Material This RFC would make it possible to reliably discover archive nodes, which would make it possible to reliably send archive node requests, something that isn't currently possible. This could solve the problem of finding archive RPC node providers by migrating archive-related request to using the native peer-to-peer protocol rather than JSON-RPC. If we ever decide to break backwards compatibility, we could divide the "history" and "archive" capabilities in two, between nodes capable of serving older blocks and nodes capable of serving newer blocks. We could even add to the peer-to-peer network nodes that are only capable of serving older blocks (by reading from a database) but do not participate in the head of the chain, and that just exist for historical purposes. @@ -4422,12 +7136,12 @@ We could even add to the peer-to-peer network nodes that are only capable of ser AuthorsZondax AG, Parity Technologies -Summary +Summary To interact with chains in the Polkadot ecosystem it is required to know how transactions are encoded and how to read state. For doing this, Polkadot-SDK, the framework used by most of the chains in the Polkadot ecosystem, exposes metadata about the runtime to the outside. UIs, wallets, and others can use this metadata to interact with these chains. This makes the metadata a crucial piece of the transaction encoding as users are relying on the interacting software to encode the transactions in the correct format. It gets even more important when the user signs the transaction in an offline wallet, as the device by its nature cannot get access to the metadata without relying on the online wallet to provide it. This makes it so that the offline wallet needs to trust an online party, deeming the security assumptions of the offline devices, mute. This RFC proposes a way for offline wallets to leverage metadata, within the constraints of these. The design idea is that the metadata is chunked and these chunks are put into a merkle tree. The root hash of this merkle tree represents the metadata. The offline wallets can use the root hash to decode transactions by getting proofs for the individual chunks of the metadata. This root hash is also included in the signed data of the transaction (but not sent as part of the transaction). The runtime is then including its known metadata root hash when verifying the transaction. If the metadata root hash known by the runtime differs from the one that the offline wallet used, it very likely means that the online wallet provided some fake data and the verification of the transaction fails. Users depend on offline wallets to correctly display decoded transactions before signing. With merkleized metadata, they can be assured of the transaction's legitimacy, as incorrect transactions will be rejected by the runtime. -Motivation +Motivation Polkadot's innovative design (both relay chain and parachains) present the ability to developers to upgrade their network as frequently as they need. These systems manage to have integrations working after the upgrades with the help of FRAME Metadata. 
This Metadata, which is in the order of half a MiB for most Polkadot-SDK chains, completely describes chain interfaces and properties. Securing this metadata is key for users to be able to interact with the Polkadot-SDK chain in the expected way. On the other hand, offline wallets provide a secure way for Blockchain users to hold their own keys (some do a better job than others). These devices seldomly get upgraded, usually account for one particular network and hold very small internal memories. Currently in the Polkadot ecosystem there is no secure way of having these offline devices know the latest Metadata of the Polkadot-SDK chain they are interacting with. This results in a plethora of similar yet slightly different offline wallets for all different Polkadot-SDK chains, as well as the impediment of keeping these regularly updated, thus not fully leveraging Polkadot-SDK’s unique forkless upgrade feature. The two main reasons why this is not possible today are: @@ -4436,7 +7150,7 @@ We could even add to the peer-to-peer network nodes that are only capable of ser Metadata is not authenticated. Even if there was enough space on offline devices to hold the metadata, the user would be trusting the entity providing this metadata to the hardware wallet. In the Polkadot ecosystem, this is how currently Polkadot Vault works. This RFC proposes a solution to make FRAME Metadata compatible with offline signers in a secure way. As it leverages FRAME Metadata, it does not only ensure that offline devices can always keep up to date with every FRAME based chain, but also that every offline wallet will be compatible with all FRAME based chains, avoiding the need of per-chain implementations. -Requirements +Requirements Metadata's integrity MUST be preserved. If any compromise were to happen, extrinsics sent with compromised metadata SHOULD fail. Metadata information that could be used in signable extrinsic decoding MAY be included in digest, yet its inclusion MUST be indicated in signed extensions. @@ -4454,14 +7168,14 @@ We could even add to the peer-to-peer network nodes that are only capable of ser Chunks handling mechanism SHOULD support chunks being sent in any order without memory utilization overhead; Unused enum variants MUST be stripped (this has great impact on transmitted metadata size; examples: era enum, enum with all calls for call batching). -Stakeholders +Stakeholders Runtime implementors UI/wallet implementors Offline wallet implementors The idea for this RFC was brought up by runtime implementors and was extensively discussed with offline wallet implementors. It was designed in such a way that it can work easily with the existing offline wallet solutions in the Polkadot ecosystem. -Explanation +Explanation The FRAME metadata provides a wide range of information about a FRAME based runtime. It contains information about the pallets, the calls per pallet, the storage entries per pallet, runtime APIs, and type information about most of the types that are used in the runtime. For decoding extrinsics on an offline wallet, what is mainly required is type information. Most of the other information in the FRAME metadata is actually not required for decoding extrinsics and thus it can be removed. Therefore, the following is a proposal on a custom representation of the metadata and how this custom metadata is chunked, ensuring that only the needed chunks required for decoding a particular extrinsic are sent to the offline wallet. 
The necessary information to transform the FRAME metadata type information into the type information presented in this RFC will be provided. However, not every single detail on how to convert from FRAME metadata into the RFC type information is described. First, the MetadataDigest is introduced. After that, ExtrinsicMetadata is covered and finally the actual format of the type information. Then pruning of unrelated type information is covered and how to generate the TypeRefs. In the latest step, merkle tree calculation is explained. Metadata digest @@ -4732,23 +7446,23 @@ nodes: [[[2, 3], [4, 5]], [0, 1]] Included in the extrinsic is u8, the "mode". The mode is either 0 which means to not include the metadata hash in the signed data or the mode is 1 to include the metadata hash in V1. Included in the signed data is an Option<[u8; 32]>. Depending on the mode the value is either None or Some(metadata_hash). -Drawbacks +Drawbacks The chunking may not be the optimal case for every kind of offline wallet. -Testing, Security, and Privacy +Testing, Security, and Privacy All implementations are required to strictly follow the RFC to generate the metadata hash. This includes which hash function to use and how to construct the metadata types tree. So, all implementations are following the same security criteria. As the chains will calculate the metadata hash at compile time, the build process needs to be trusted. However, this is already a solved problem in the Polkadot ecosystem by using reproducible builds. So, anyone can rebuild a chain runtime to ensure that a proposal is actually containing the changes as advertised. Implementations can also be tested easily against each other by taking some metadata and ensuring that they all come to the same metadata hash. Privacy of users should also not be impacted. This assumes that wallets will generate the metadata hash locally and don't leak any information to third party services about which chunks a user will send to their offline wallet. Besides that, there is no leak of private information as getting the raw metadata from the chain is an operation that is done by almost everyone. -Performance, Ergonomics, and Compatibility -Performance +Performance, Ergonomics, and Compatibility +Performance There should be no measurable impact on performance to Polkadot or any other chain using this feature. The metadata root hash is calculated at compile time and at runtime it is optionally used when checking the signature of a transaction. This means that at runtime no performance heavy operations are done. -Ergonomics & Compatibility +Ergonomics & Compatibility The proposal alters the way a transaction is built, signed, and verified. So, this imposes some required changes to any kind of developer who wants to construct transactions for Polkadot or any chain using this feature. As the developer can pass 0 for disabling the verification of the metadata root hash, it can be easily ignored. -Prior Art and References +Prior Art and References RFC 46 produced by the Alzymologist team is a previous work reference that goes in this direction as well. On other ecosystems, there are other solutions to the problem of trusted signing. Cosmos for example has a standardized way of transforming a transaction into some textual representation and this textual representation is included in the signed data. 
Basically achieving the same as what the RFC proposes, but it requires that for every transaction applied in a block, every node in the network always has to generate this textual representation to ensure the transaction signature is valid. -Unresolved Questions +Unresolved Questions None. -Future Directions and Related Material +Future Directions and Related Material Does it work with all kind of offline wallets? Generic types currently appear multiple times in the metadata with each instantiation. It could be may be useful to have generic type only once in the metadata and declare the generic parameters at their instantiation. @@ -4786,20 +7500,20 @@ nodes: [[[2, 3], [4, 5]], [0, 1]] AuthorsGeorge Pisaltu -Summary +Summary This RFC proposes a change to the extrinsic format to incorporate a new transaction type, the "general" transaction. -Motivation +Motivation "General" transactions, a new type of transaction that this RFC aims to support, are transactions which obey the runtime's extensions and have according extension data yet do not have hard-coded signatures. They are first described in Extrinsic Horizon and supported in 3685. They enable users to authorize origins in new, more flexible ways (e.g. ZK proofs, mutations over pre-authenticated origins). As of now, all transactions are limited to the account signing model for origin authorization and any additional origin changes happen in extrinsic logic, which cannot leverage the validation process of extensions. An example of a use case for such an extension would be sponsoring the transaction fee for some other user. A new extension would be put in place to verify that a part of the initial payload was signed by the author under who the extrinsic should run and change the origin, but the payment for the whole transaction should be handled under a sponsor's account. A POC for this can be found in 3712. The new "general" transaction type would coexist with both current transaction types for a while and, therefore, the current number of supported transaction types, capped at 2, is insufficient. A new extrinsic type must be introduced alongside the current signed and unsigned types. Currently, an encoded extrinsic's first byte indicate the type of extrinsic using the most significant bit - 0 for unsigned, 1 for signed - and the 7 following bits indicate the extrinsic format version, which has been equal to 4 for a long time. By taking one bit from the extrinsic format version encoding, we can support 2 additional extrinsic types while also having a minimal impact on our capability to extend and change the extrinsic format in the future. -Stakeholders +Stakeholders Runtime users Runtime devs Wallet devs -Explanation +Explanation An extrinsic is currently encoded as one byte to identify the extrinsic type and version. This RFC aims to change the interpretation of this byte regarding the reserved bits for the extrinsic type and version. In the following explanation, bits represented using T make up the extrinsic type and bits represented using V make up the extrinsic version. Currently, the bit allocation within the leading encoded byte is 0bTVVV_VVVV. In practice in the Polkadot ecosystem, the leading byte would be 0bT000_0100 as the version has been equal to 4 for a long time. This RFC proposes for the bit allocation to change to 0bTTVV_VVVV. 
As a result, the extrinsic format version will be bumped to 5 and the extrinsic type bit representation would change as follows: @@ -4810,23 +7524,23 @@ nodes: [[[2, 3], [4, 5]], [0, 1]] 11reserved -Drawbacks +Drawbacks This change would reduce the maximum possible transaction version from the current 127 to 63. In order to bypass the new, lower limit, the extrinsic format would have to change again. -Testing, Security, and Privacy +Testing, Security, and Privacy There is no impact on testing, security or privacy. -Performance, Ergonomics, and Compatibility +Performance, Ergonomics, and Compatibility This change would allow Polkadot to support new types of transactions, with the specific "general" transaction type in mind at the time of writing this proposal. -Performance +Performance There is no performance impact. -Ergonomics +Ergonomics The impact to developers and end-users is minimal as it would just be a bitmask update on their part for parsing the extrinsic type along with the version. -Compatibility +Compatibility This change breaks backwards compatiblity because any transaction that is neither signed nor unsigned, but a new transaction type, would be interpreted as having a future extrinsic format version. -Prior Art and References +Prior Art and References The original design was originally proposed in the TransactionExtension PR, which is also the motivation behind this effort. -Unresolved Questions +Unresolved Questions None. -Future Directions and Related Material +Future Directions and Related Material Following this change, the "general" transaction type will be introduced as part of the Extrinsic Horizon effort, which will shape future work. (source) Table of Contents @@ -4859,16 +7573,16 @@ nodes: [[[2, 3], [4, 5]], [0, 1]] AuthorsAlex Gheorghe (alexggh) -Summary +Summary Extend the DHT authority discovery records with a signed creation time, so that nodes can determine which record is newer and always decide to prefer the newer records to the old ones. -Motivation +Motivation Currently, we use the Kademlia DHT for storing records regarding the p2p address of an authority discovery key, the problem is that if the nodes decide to change its PeerId/Network key it will publish a new record, however because of the distributed and replicated nature of the DHT there is no way to tell which record is newer so both old PeerId and the new PeerId will live in the network until the old one expires(36h), that creates all sort of problem and leads to the node changing its address not being properly connected for up to 36h. After this RFC, nodes are extended to decide to keep the new record and propagate the new record to nodes that have the old record stored, so in the end all the nodes will converge faster to the new record(in the order of minutes, not 36h) Implementation of the rfc: https://github.com/paritytech/polkadot-sdk/pull/3786. Current issue without this enhacement: https://github.com/paritytech/polkadot-sdk/issues/3673 -Stakeholders +Stakeholders Polkadot node developers. -Explanation +Explanation This RFC heavily relies on the functionalities of the Kademlia DHT already in use by Polkadot. You can find a link to the specification here. In a nutshell, on a specific node the current authority-discovery protocol publishes Kademila DHT records at startup and periodically. The records contain the full address of the node for each authorithy key it owns. 
The node tries also to find the full address of all authorities in the network by querying the DHT and picking up the first record it finds for each of the authority id it found on chain. @@ -4901,24 +7615,24 @@ You can find a link to the specification Drawbacks +Drawbacks In theory the new protocol creates a bit more traffic on the DHT network, because it waits for DHT records to be received from more than one node, while in the current implementation we just take the first record that we receive and cancel all in-flight requests to other peers. However, because the redundancy factor will be relatively small and this operation happens rarerly, every 10min, this cost is negligible. -Testing, Security, and Privacy +Testing, Security, and Privacy This RFC's implementation https://github.com/paritytech/polkadot-sdk/pull/3786 had been tested on various local test networks and versi. With regard to security the creation time is wrapped inside SignedAuthorityRecord wo it will be signed with the authority id key, so there is no way for other malicious nodes to manipulate this field without the received node observing. -Performance, Ergonomics, and Compatibility +Performance, Ergonomics, and Compatibility Irrelevant. -Performance +Performance Irrelevant. -Ergonomics +Ergonomics Irrelevant. -Compatibility +Compatibility The changes are backwards compatible with the existing protocol, so nodes with both the old protocol and newer protocol can exist in the network, this is achieved by the fact that we use protobuf for serializing and deserializing the records, so new fields will be ignore when deserializing with the older protocol and vice-versa when deserializing an old record with the new protocol the new field will be None and the new code accepts this record as being valid. -Prior Art and References +Prior Art and References The enhancements have been inspired by the algorithm specified in here -Unresolved Questions +Unresolved Questions N/A -Future Directions and Related Material +Future Directions and Related Material N/A (source) Table of Contents @@ -4964,23 +7678,23 @@ in order to speed up the time until all nodes have the newest record, nodes can AuthorsJonas Gehrlein & Alistair Stewart -Summary +Summary This RFC proposes a flexible unbonding mechanism for tokens that are locked from staking on the Relay Chain (DOT/KSM), aiming to enhance user convenience without compromising system security. Locking tokens for staking ensures that Polkadot is able to slash tokens backing misbehaving validators. With changing the locking period, we still need to make sure that Polkadot can slash enough tokens to deter misbehaviour. This means that not all tokens can be unbonded immediately, however we can still allow some tokens to be unbonded quickly. The new mechanism leads to a signficantly reduced unbonding time on average, by queuing up new unbonding requests and scaling their unbonding duration relative to the size of the queue. New requests are executed with a minimum of 2 days, when the queue is comparatively empty, to the conventional 28 days, if the sum of requests (in terms of stake) exceed some threshold. In scenarios between these two bounds, the unbonding duration scales proportionately. The new mechanism will never be worse than the current fixed 28 days. 
In this document we also present an empirical analysis by retrospectively fitting the proposed mechanism to the historic unbonding timeline and show that the average unbonding duration would drastically reduce, while still being sensitive to large unbonding events. Additionally, we discuss implications for UI, UX, and conviction voting. Note: Our proposition solely focuses on the locks imposed from staking. Other locks, such as governance, remain unchanged. Also, this mechanism should not be confused with the already existing feature of FastUnstake, which lets users unstake tokens immediately that have not received rewards for 28 days or longer. As an initial step to gauge its effectiveness and stability, it is recommended to implement and test this model on Kusama before considering its integration into Polkadot, with appropriate adjustments to the parameters. In the following, however, we limit our discussion to Polkadot. -Motivation +Motivation Polkadot has one of the longest unbonding periods among all Proof-of-Stake protocols, because security is the most important goal. Staking on Polkadot is still attractive compared to other protocols because of its above-average staking APY. However the long unbonding period harms usability and deters potential participants that want to contribute to the security of the network. The current length of the unbonding period imposes significant costs for any entity that even wants to perform basic tasks such as a reorganization / consolidation of their stashes, or updating their private key infrastructure. It also limits participation of users that have a large preference for liquidity. The combination of long unbonding periods and high returns has lead to the proliferation of liquid staking, where parachains or centralised exchanges offer users their staked tokens before the 28 days unbonding period is over either in original DOT/KSM form or derivative tokens. Liquid staking is harmless if few tokens are involved but it could result in many validators being selected by a few entities if a large fraction of DOTs were involved. This may lead to centralization (see here for more discussion on threats of liquid staking) and an opportunity for attacks. The new mechanism greatly increases the competitiveness of Polkadot, while maintaining sufficient security. -Stakeholders +Stakeholders Every DOT/KSM token holder -Explanation +Explanation Before diving into the details of how to implement the unbonding queue, we give readers context about why Polkadot has a 28-day unbonding period in the first place. The reason for it is to prevent long-range attacks (LRA) that becomes theoretically possible if more than 1/3 of validators collude. In essence, a LRA describes the inability of users, who disconnect from the consensus at time t0 and reconnects later, to realize that validators which were legitimate at a certain time, say t0 but dropped out in the meantime, are not to be trusted anymore. That means, for example, a user syncing the state could be fooled by trusting validators that fell outside the active set of validators after t0, and are building a competitive and malicious chain (fork). LRAs of longer than 28 days are mitigated by the use of trusted checkpoints, which are assumed to be no more than 28 days old. A new node that syncs Polkadot will start at the checkpoint and look for proofs of finality of later blocks, signed by 2/3 of the validators. 
In an LRA fork, some of the validator sets may be different but only if 2/3 of some validator set in the last 28 days signed something incorrect. If we detect an LRA of no more than 28 days with the current unbonding period, then we should be able to detect misbehaviour from over 1/3 of validators whose nominators are still bonded. The stake backing these validators is considerable fraction of the total stake (empirically it is 0.287 or so). If we allowed more than this stake to unbond, without checking who it was backing, then the LRA attack might be free of cost for an attacker. The proposed mechansim allows up to half this stake to unbond within 28 days. This halves the amount of tokens that can be slashed, but this is still very high in absolute terms. For example, at the time of writing (19.06.2024) this would translate to around 120 millions DOTs. @@ -5038,23 +7752,23 @@ The analysis can be reproduced or changed to other parameters using Potential Extension In addition to a simple queue, we could add a market component that lets users always unbond from staking at the minimum possible waiting time)(== LOWER_BOUND, e.g., 2 days), by paying a variable fee. To achieve this, it is reasonable to split the total unbonding capacity into two chunks, with the first capacity for the simple queue and the remaining capacity for the fee-based unbonding. By doing so, we allow users to choose whether they want the quickest unbond and paying a dynamic fee or join the simple queue. Setting a capacity restriction for both queues enables us to guarantee a predictable unbonding time in the simple queue, while allowing users with the respective willingness to pay to get out even earlier. The fees are dynamically adjusted and are proportional to the unbonding stake (and thereby expressed in a percentage of the requested unbonding stake). In contrast to a unified queue, this prevents the issue that users paying a fee jump in front of other users not paying a fee, pushing their unbonding time back (which would be bad for UX). The revenue generated could be burned. This extension and further specifications are left out of this RFC, because it adds further complexity and the empirical analysis above suggests that average unbonding times will already be close the LOWER_BOUND, making a more complex design unnecessary. We advise to first implement the discussed mechanism and assess after some experience whether an extension is desirable. -Drawbacks +Drawbacks Lower security for LRAs: Without a doubt, the theoretical security against LRAs decreases. But, as we argue, the attack is still costly enough to deter attacks and the attack is sufficiently theoretical. Here, the benefits outweigh the costs. Griefing attacks: A large holder could pretend to unbond a large amount of their tokens to prevent other users to exit the network earlier. This would, however be costly due to the fact that the holder loses out on staking rewards. The larger the impact on the queue, the higher the costs. In any case it must be noted that the UPPER_BOUND is still 28 days, which means that nominators are never left with a longer unbonding period than currently. There is not enough gain for the attacker to endure this cost. Challenge for Custodians and Liquid Staking Providers: Changing the unbonding time, especially making it flexible, requires entities that offer staking derivatives to rethink and rework their products. 
-Testing, Security, and Privacy +Testing, Security, and Privacy NA -Performance, Ergonomics, and Compatibility +Performance, Ergonomics, and Compatibility NA -Performance +Performance The authors cannot see any potential impact on performance. -Ergonomics +Ergonomics The authors cannot see any potential impact on ergonomics for developers. We discussed potential impact on UX/UI for users above. -Compatibility +Compatibility The authors cannot see any potential impact on compatibility. This should be assessed by the technical fellows. -Prior Art and References +Prior Art and References Ethereum proposed a similar solution Alistair did some initial write-up @@ -5091,20 +7805,20 @@ The analysis can be reproduced or changed to other parameters using Summary +Summary This RFC proposes a change to the extrinsic format to include a transaction extension version. -Motivation +Motivation The extrinsic format supports to be extended with transaction extensions. These transaction extensions are runtime specific and can be different per chain. Each transaction extension can add data to the extrinsic itself or extend the signed payload. This means that adding a transaction extension is breaking the chain specific extrinsic format. A recent example was the introduction of the CheckMetadatHash to Polkadot and all its system chains. As the extension was adding one byte to the extrinsic, it broke a lot of tooling. By introducing an extra version for the transaction extensions it will be possible to introduce changes to these transaction extensions while still being backwards compatible. Based on the version of the transaction extensions, each chain runtime could decode the extrinsic correctly and also create the correct signed payload. -Stakeholders +Stakeholders Runtime users Runtime devs Wallet devs -Explanation +Explanation RFC84 introduced the extrinsic format 5. The idea is to piggyback onto this change of the extrinsic format to add the extra version for the transaction extensions. If required, this could also come as extrinsic format 6, but 5 is not yet deployed anywhere. The extrinsic format supports the following types of transactions: @@ -5120,25 +7834,25 @@ as extrinsic format 6, but 5 is not yet deployed anywh The Version being a SCALE encoded u8 representing the version of the transaction extensions. In the chain runtime the version can be used to determine which set of transaction extensions should be used to decode and to validate the transaction. -Drawbacks +Drawbacks This adds one byte more to each signed transaction. -Testing, Security, and Privacy +Testing, Security, and Privacy There is no impact on testing, security or privacy. -Performance, Ergonomics, and Compatibility +Performance, Ergonomics, and Compatibility This will ensure that changes to the transactions extensions can be done in a backwards compatible way. -Performance +Performance There is no performance impact. -Ergonomics +Ergonomics Runtime developers need to take care of the versioning and ensure to bump as required, so that there are no compatibility breaking changes without a bump of the version. It will also add a little bit more code in the runtime to decode these old versions, but this should be neglectable. -Compatibility +Compatibility When introduced together with extrinsic format version 5 from RFC84, it can be implemented in a backwards compatible way. So, transactions can still be send using the old extrinsic format and decoded by the runtime. -Prior Art and References +Prior Art and References None. 
-Unresolved Questions +Unresolved Questions None. -Future Directions and Related Material +Future Directions and Related Material None. (source) Table of Contents @@ -5175,14 +7889,14 @@ old extrinsic format and decoded by the runtime. AuthorsAdrian Catangiu -Summary +Summary This RFC proposes a new instruction that provides a way to initiate on remote chains, asset transfers which transfer multiple types (teleports, local-reserve, destination-reserve) of assets, using XCM alone. The currently existing instructions are too opinionated and force each XCM asset transfer to a single transfer type (teleport, local-reserve, destination-reserve). This results in inability to combine different types of transfers in single transfer which results in overall poor UX when trying to move assets across chains. -Motivation +Motivation XCM is the de-facto cross-chain messaging protocol within the Polkadot ecosystem, and cross-chain assets transfers is one of its main use-cases. Unfortunately, in its current spec, it does not support initiating on a remote chain, one or more transfers that combine assets with different transfer types. @@ -5204,14 +7918,14 @@ For example, allows single XCM program execution to transfer multiple assets fro Kusama Asset Hub, over the bridge through Polkadot Asset Hub with final destination ParaP on Polkadot. With current XCM, we are limited to doing multiple independent transfers for each individual hop in order to move both "interesting" assets, but also "supporting" assets (used to pay fees). -Stakeholders +Stakeholders Runtime users Runtime devs Wallet devs dApps devs -Explanation +Explanation A new instruction InitiateAssetsTransfer is introduced that initiates an assets transfer from the chain it is executed on, to another chain. The executed transfer is point-to-point (chain-to-chain) with all of the transfer properties specified in the instruction parameters. The instruction also @@ -5399,9 +8113,9 @@ by executing a single XCM message, even though we'll be mixing multiple ).unwrap(); }) }
As explained in the motivation section, this allows extracting scale(transaction) items without having to know how to decode them.
By "flattening" the two-steps hierarchy, an implementation only needs to back-pressure individual notifications rather than back-pressure notifications and transactions within notifications.
This RFC chooses to maintain backwards compatibility at the cost of introducing a very small wart (the Compact(1)).
An alternative could be to introduce a new version of the transactions notifications protocol that sends one Transaction per notification, but this is significantly more complicated to implement and can always be done later in case the Compact(1) is bothersome.
The change is backwards compatible if done in two steps: modify the sender to always send one transaction per notification, then, after a while, modify the receiver to enforce the new format.
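The sender-side half of that migration could look roughly like the sketch below. The send_notification hook and the function name propagate_transactions are hypothetical; in Substrate this would be the existing transactions notifications substream.

```rust
use parity_scale_codec::{Compact, Encode};

// Hypothetical transport hook standing in for the notifications protocol sink.
fn send_notification(bytes: Vec<u8>) {
    let _ = bytes; // network send elided
}

// Step one of the migration: emit one notification per transaction instead of
// batching them into a single Vec<Transaction>.
fn propagate_transactions(encoded_txs: Vec<Vec<u8>>) {
    for tx in encoded_txs {
        // Prefix the already-encoded transaction with Compact(1); the notification
        // then still decodes as a one-element Vec<Transaction> on legacy receivers.
        let mut notification = Compact(1u32).encode();
        notification.extend_from_slice(&tx);
        send_notification(notification);
    }
}
```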
None. This is a simple isolated change.
This RFC proposes to make the mechanism of RFC #8 more generic by introducing the concept of "capabilities".
Implementations can implement certain "capabilities", such as serving old block headers or being a parachain bootnode.
The discovery mechanism of RFC #8 is extended to be able to discover nodes of specific capabilities.
The Polkadot peer-to-peer network is made of nodes. Not all these nodes are equal. Some nodes store only the headers of recent blocks, some nodes store all the block headers and bodies since the genesis, some nodes store the storage of all blocks since the genesis, and so on.
It is currently not possible to know ahead of time (without connecting to it and asking) which nodes have which data available, and it is not easily possible to build a list of nodes that have a specific piece of data available.
If you want to download for example the header of block 500, you have to connect to a randomly-chosen node, ask it for block 500, and if it says that it doesn't have the block, disconnect and try another randomly-chosen node. In certain situations such as downloading the storage of old blocks, nodes that have the information are relatively rare, and finding through trial and error a node that has the data can take a long time.
This RFC attempts to solve this problem by giving the possibility to build a list of nodes that are capable of serving specific data.
Low-level client developers. People interested in accessing the archive of the chain.
Reading RFC #8 first might help with comprehension, as this RFC is very similar.
Please keep in mind while reading that everything below applies to both relay chains and parachains, except where mentioned otherwise.
Implementations that have the head of the chain provider capability do not register themselves as providers, but instead are the nodes that participate in the main DHT. In other words, they are the nodes that serve requests of the /<genesis_hash>/kad protocol.
Any implementation that isn't a head of the chain provider (read: light clients) must not participate in the main DHT. This is already presently the case.
Implementations must not participate in the main DHT if they don't fulfill the capability yet. For example, a node that is still in the process of warp syncing must not participate in the main DHT. However, assuming that warp syncing doesn't last more than a few seconds, it is acceptable to ignore this requirement in order to avoid complicating implementations too much.
None that I can see.
The content of this section is basically the same as the one in RFC 8.
This mechanism doesn't add or remove any security by itself, as it relies on existing mechanisms.
Due to the way Kademlia works, it would become the responsibility of the 20 Polkadot nodes whose sha256(peer_id) is closest to the key (described in the explanations section) to store the list of nodes that have specific capabilities. Furthermore, when a large number of providers are registered, only the providers closest to the key are kept, up to a certain implementation-defined limit.
For this reason, an attacker can abuse this mechanism by randomly generating libp2p PeerIds until they find the 20 entries closest to the key representing the target capability. They are then in control of the list of nodes with that capability. While doing this is not directly harmful in itself, it could lead to eclipse attacks.
Because the key changes periodically and isn't predictable, and assuming that the Polkadot DHT is sufficiently large, it is not realistic for an attack like this to be maintained in the long term.
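To illustrate why the key is unpredictable, here is a hypothetical sketch of a capability provider-key derivation that mixes the chain's genesis hash, the capability name, and an epoch-derived value (the text above mentions the key rotating with BabeApi_next_epoch). The exact inputs, encoding, and function name capability_key are assumptions; the RFC's own derivation is defined in the sections elided here.

```rust
use sha2::{Digest, Sha256};

/// Hypothetical derivation of the provider-record key for a capability.
/// The key depends on the chain, the capability, and an epoch-derived value,
/// so it rotates periodically and cannot be precomputed far in advance.
fn capability_key(
    genesis_hash: &[u8; 32],
    capability: &str,
    epoch_randomness: &[u8; 32],
) -> [u8; 32] {
    let mut hasher = Sha256::new();
    hasher.update(genesis_hash);
    hasher.update(capability.as_bytes());
    hasher.update(epoch_randomness);
    hasher.finalize().into()
}
```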
Assuming 1000 nodes with a specific capability, the 20 Polkadot full nodes corresponding to that capability will each receive a sudden spike of a few megabytes of networking traffic when the key rotates. Again, this is relatively negligible. If this becomes a problem, one can add a random delay before a node registers itself to be the provider of the key corresponding to BabeApi_next_epoch.
Maybe the biggest uncertainty is the traffic that the 20 Polkadot full nodes will receive from light clients that want to know the nodes with a capability. If this ever becomes a problem, the value of 20 is an arbitrary constant that can be increased for more redundancy.
Unknown.
This RFC would make it possible to reliably discover archive nodes, which would make it possible to reliably send archive node requests, something that isn't currently possible. This could solve the problem of finding archive RPC node providers by migrating archive-related requests to the native peer-to-peer protocol rather than JSON-RPC.
If we ever decide to break backwards compatibility, we could divide the "history" and "archive" capabilities in two, between nodes capable of serving older blocks and nodes capable of serving newer blocks. We could even add to the peer-to-peer network nodes that are only capable of serving older blocks (by reading from a database) but do not participate in the head of the chain, and that just exist for historical purposes.
To interact with chains in the Polkadot ecosystem, it is required to know how transactions are encoded and how to read state. To make this possible, Polkadot-SDK, the framework used by most of the chains in the Polkadot ecosystem, exposes metadata about the runtime to the outside. UIs, wallets, and others can use this metadata to interact with these chains. This makes the metadata a crucial piece of the transaction encoding, as users rely on the interacting software to encode transactions in the correct format.
It gets even more important when the user signs the transaction in an offline wallet, as the device by its nature cannot get access to the metadata without relying on the online wallet to provide it. This means that the offline wallet needs to trust an online party, rendering the security assumptions of offline devices moot.
This RFC proposes a way for offline wallets to leverage metadata within the constraints of these devices. The design idea is that the metadata is chunked and these chunks are put into a merkle tree. The root hash of this merkle tree represents the metadata. Offline wallets can use the root hash to decode transactions by getting proofs for the individual chunks of the metadata. This root hash is also included in the signed data of the transaction (but not sent as part of the transaction). The runtime then includes its known metadata root hash when verifying the transaction. If the metadata root hash known by the runtime differs from the one that the offline wallet used, it very likely means that the online wallet provided some fake data and the verification of the transaction fails.
Users depend on offline wallets to correctly display decoded transactions before signing. With merkleized metadata, they can be assured of the transaction's legitimacy, as incorrect transactions will be rejected by the runtime.
Polkadot's innovative design (both relay chain and parachains) gives developers the ability to upgrade their networks as frequently as they need. These systems manage to keep integrations working after upgrades with the help of FRAME Metadata. This metadata, which is on the order of half a MiB for most Polkadot-SDK chains, completely describes chain interfaces and properties. Securing this metadata is key for users to be able to interact with the Polkadot-SDK chain in the expected way.
On the other hand, offline wallets provide a secure way for blockchain users to hold their own keys (some do a better job than others). These devices seldom get upgraded, usually target one particular network, and have very small internal memories. Currently, in the Polkadot ecosystem there is no secure way for these offline devices to know the latest metadata of the Polkadot-SDK chain they are interacting with. This results in a plethora of similar yet slightly different offline wallets for all the different Polkadot-SDK chains, as well as the difficulty of keeping these regularly updated, thus not fully leveraging Polkadot-SDK's unique forkless upgrade feature.
The two main reasons why this is not possible today are:
This RFC proposes a solution to make FRAME Metadata compatible with offline signers in a secure way. As it leverages FRAME Metadata, it does not only ensure that offline devices can always keep up to date with every FRAME based chain, but also that every offline wallet will be compatible with all FRAME based chains, avoiding the need of per-chain implementations.
The idea for this RFC was brought up by runtime implementors and was extensively discussed with offline wallet implementors. It was designed in such a way that it can work easily with the existing offline wallet solutions in the Polkadot ecosystem.
The FRAME metadata provides a wide range of information about a FRAME based runtime. It contains information about the pallets, the calls per pallet, the storage entries per pallet, runtime APIs, and type information about most of the types that are used in the runtime. For decoding extrinsics on an offline wallet, what is mainly required is type information. Most of the other information in the FRAME metadata is actually not required for decoding extrinsics and thus it can be removed. Therefore, the following is a proposal on a custom representation of the metadata and how this custom metadata is chunked, ensuring that only the needed chunks required for decoding a particular extrinsic are sent to the offline wallet. The necessary information to transform the FRAME metadata type information into the type information presented in this RFC will be provided. However, not every single detail on how to convert from FRAME metadata into the RFC type information is described.
First, the MetadataDigest is introduced. After that, ExtrinsicMetadata is covered, and finally the actual format of the type information. Then the pruning of unrelated type information and the generation of the TypeRefs are covered. In the last step, the Merkle tree calculation is explained.
u8
V1
Option<[u8; 32]>
Some(metadata_hash)
The chunking may not be the optimal case for every kind of offline wallet.
All implementations are required to strictly follow this RFC to generate the metadata hash. This includes which hash function to use and how to construct the metadata type tree, so all implementations follow the same security criteria. As chains will calculate the metadata hash at compile time, the build process needs to be trusted. However, this is already a solved problem in the Polkadot ecosystem thanks to reproducible builds, so anyone can rebuild a chain runtime to ensure that a proposal actually contains the changes as advertised.
Implementations can also be tested easily against each other by taking some metadata and ensuring that they all come to the same metadata hash.
Privacy of users should also not be impacted. This assumes that wallets will generate the metadata hash locally and don't leak any information to third party services about which chunks a user will send to their offline wallet. Besides that, there is no leak of private information as getting the raw metadata from the chain is an operation that is done by almost everyone.
There should be no measurable impact on performance to Polkadot or any other chain using this feature. The metadata root hash is calculated at compile time and at runtime it is optionally used when checking the signature of a transaction. This means that at runtime no performance heavy operations are done.
The proposal alters the way a transaction is built, signed, and verified. So, this imposes some required changes to any kind of developer who wants to construct transactions for Polkadot or any chain using this feature. As the developer can pass 0 for disabling the verification of the metadata root hash, it can be easily ignored.
RFC 46, produced by the Alzymologist team, is previous work that goes in this direction as well.
In other ecosystems, there are other solutions to the problem of trusted signing. Cosmos, for example, has a standardized way of transforming a transaction into a textual representation that is included in the signed data. This basically achieves the same as what this RFC proposes, but it requires that, for every transaction applied in a block, every node in the network always generates this textual representation to ensure the transaction signature is valid.
This RFC proposes a change to the extrinsic format to incorporate a new transaction type, the "general" transaction.
"General" transactions, a new type of transaction that this RFC aims to support, are transactions which obey the runtime's extensions and have according extension data yet do not have hard-coded signatures. They are first described in Extrinsic Horizon and supported in 3685. They enable users to authorize origins in new, more flexible ways (e.g. ZK proofs, mutations over pre-authenticated origins). As of now, all transactions are limited to the account signing model for origin authorization and any additional origin changes happen in extrinsic logic, which cannot leverage the validation process of extensions.
An example use case for such an extension would be sponsoring the transaction fee for another user. A new extension would be put in place to verify that a part of the initial payload was signed by the author under whom the extrinsic should run, and to change the origin accordingly, while the payment for the whole transaction is handled by a sponsor's account. A POC for this can be found in 3712.
The new "general" transaction type would coexist with both current transaction types for a while and, therefore, the current number of supported transaction types, capped at 2, is insufficient. A new extrinsic type must be introduced alongside the current signed and unsigned types. Currently, an encoded extrinsic's first byte indicate the type of extrinsic using the most significant bit - 0 for unsigned, 1 for signed - and the 7 following bits indicate the extrinsic format version, which has been equal to 4 for a long time.
By taking one bit from the extrinsic format version encoding, we can support 2 additional extrinsic types while also having a minimal impact on our capability to extend and change the extrinsic format in the future.
An extrinsic is currently encoded as one byte to identify the extrinsic type and version. This RFC aims to change the interpretation of this byte regarding the reserved bits for the extrinsic type and version. In the following explanation, bits represented using T make up the extrinsic type and bits represented using V make up the extrinsic version.
Currently, the bit allocation within the leading encoded byte is 0bTVVV_VVVV. In practice in the Polkadot ecosystem, the leading byte would be 0bT000_0100 as the version has been equal to 4 for a long time.
This RFC proposes for the bit allocation to change to 0bTTVV_VVVV. As a result, the extrinsic format version will be bumped to 5 and the extrinsic type bit representation would change as follows:
This change reduces the maximum possible extrinsic format version from the current 127 to 63. In order to go beyond the new, lower limit, the extrinsic format would have to change again.
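As a minimal sketch of the decoding side, assuming the 0bTTVV_VVVV layout described above (the function and values are illustrative, not taken from any existing crate):

/// Split the leading byte of an encoded extrinsic into (type bits, format version).
fn split_leading_byte(byte: u8) -> (u8, u8) {
    let extrinsic_type = byte >> 6; // top two bits: up to 4 extrinsic types
    let version = byte & 0b0011_1111; // lower six bits: format version, now capped at 63
    (extrinsic_type, version)
}

fn main() {
    // e.g. type bits 0b01 combined with format version 5
    assert_eq!(split_leading_byte(0b0100_0101), (0b01, 5));
}

This is also the "bitmask update" that downstream tooling would need to apply.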
There is no impact on testing, security or privacy.
This change would allow Polkadot to support new types of transactions, with the specific "general" transaction type in mind at the time of writing this proposal.
There is no performance impact.
The impact on developers and end-users is minimal, as it would just be a bitmask update on their part when parsing the extrinsic type along with the version.
This change breaks backwards compatibility because any transaction that is neither signed nor unsigned, but of a new transaction type, would be interpreted as having a future extrinsic format version.
The original design was originally proposed in the TransactionExtension PR, which is also the motivation behind this effort.
Following this change, the "general" transaction type will be introduced as part of the Extrinsic Horizon effort, which will shape future work.
Extend the DHT authority discovery records with a signed creation time, so that nodes can determine which record is newer and always decide to prefer the newer records to the old ones.
Currently, we use the Kademlia DHT for storing records containing the p2p address of an authority discovery key. The problem is that if a node decides to change its PeerId/network key, it will publish a new record; however, because of the distributed and replicated nature of the DHT, there is no way to tell which record is newer, so both the old PeerId and the new PeerId will live in the network until the old one expires (36h). That creates all sorts of problems and means the node changing its address may not be properly connected for up to 36h.
With this RFC, nodes are extended to keep the newer record and to propagate it to nodes that still store the old record, so in the end all nodes converge to the new record much faster (on the order of minutes, not 36h).
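A sketch of the record-selection rule, assuming each authority-discovery record now carries an optional signed creation time (struct and field names here are illustrative, not the actual protobuf schema):

/// Illustrative record; older nodes publish records without a creation time.
struct AuthorityRecord {
    addresses: Vec<String>,
    /// e.g. milliseconds since the UNIX epoch, covered by the authority key signature.
    creation_time: Option<u64>,
}

/// Keep whichever record is newer; records without a creation time are treated as
/// oldest, so records produced by upgraded nodes always win.
fn pick_newer(a: AuthorityRecord, b: AuthorityRecord) -> AuthorityRecord {
    if b.creation_time.unwrap_or(0) > a.creation_time.unwrap_or(0) { b } else { a }
}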
Implementation of the RFC: https://github.com/paritytech/polkadot-sdk/pull/3786.
Current issue without this enhancement: https://github.com/paritytech/polkadot-sdk/issues/3673
Polkadot node developers.
This RFC heavily relies on the functionalities of the Kademlia DHT already in use by Polkadot. You can find a link to the specification here.
In a nutshell, on a given node the current authority-discovery protocol publishes Kademlia DHT records at startup and periodically. The records contain the full address of the node for each authority key it owns. The node also tries to find the full addresses of all authorities in the network by querying the DHT and picking up the first record it finds for each of the authority ids it found on chain.
In theory, the new protocol creates a bit more traffic on the DHT network, because it waits for DHT records to be received from more than one node, while in the current implementation we just take the first record that we receive and cancel all in-flight requests to other peers. However, because the redundancy factor will be relatively small and this operation happens rarely (every 10 minutes), this cost is negligible.
This RFC's implementation (https://github.com/paritytech/polkadot-sdk/pull/3786) has been tested on various local test networks and on Versi.
With regard to security, the creation time is wrapped inside SignedAuthorityRecord, so it will be signed with the authority id key; there is no way for malicious nodes to manipulate this field without the receiving node noticing.
The changes are backwards compatible with the existing protocol, so nodes running the old protocol and the new protocol can coexist in the network. This is achieved by using protobuf for serializing and deserializing the records: new fields are ignored when deserializing with the older protocol, and, vice versa, when deserializing an old record with the new protocol the new field will be None and the new code accepts such a record as valid.
The enhancements have been inspired by the algorithm specified here.
This RFC proposes a flexible unbonding mechanism for tokens that are locked from staking on the Relay Chain (DOT/KSM), aiming to enhance user convenience without compromising system security.
Locking tokens for staking ensures that Polkadot is able to slash tokens backing misbehaving validators. With changing the locking period, we still need to make sure that Polkadot can slash enough tokens to deter misbehaviour. This means that not all tokens can be unbonded immediately, however we can still allow some tokens to be unbonded quickly.
The new mechanism leads to a significantly reduced unbonding time on average, by queuing up new unbonding requests and scaling their unbonding duration relative to the size of the queue. New requests are executed after a minimum of 2 days when the queue is comparatively empty, and up to the conventional 28 days if the sum of requests (in terms of stake) exceeds some threshold. In scenarios between these two bounds, the unbonding duration scales proportionately. The new mechanism will never be worse than the current fixed 28 days.
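As a rough illustration of the scaling rule (the linear interpolation and parameter names below are assumptions for this sketch, not the exact formula used by the pallet):

/// Sketch: unbonding duration in days as a function of how full the queue is.
/// `queued_stake` is the stake already waiting to unbond; `capacity` is the stake
/// threshold at which the full 28 days apply.
fn unbonding_days(queued_stake: f64, capacity: f64) -> f64 {
    const LOWER_BOUND: f64 = 2.0; // minimum unbonding time in days
    const UPPER_BOUND: f64 = 28.0; // conventional maximum, never exceeded
    let fill = (queued_stake / capacity).clamp(0.0, 1.0);
    LOWER_BOUND + (UPPER_BOUND - LOWER_BOUND) * fill
}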
In this document we also present an empirical analysis, retrospectively fitting the proposed mechanism to the historic unbonding timeline, and show that the average unbonding duration would be drastically reduced while still being sensitive to large unbonding events. Additionally, we discuss implications for UI, UX, and conviction voting.
Note: Our proposition solely focuses on the locks imposed by staking. Other locks, such as governance, remain unchanged. Also, this mechanism should not be confused with the already existing FastUnstake feature, which lets users immediately unstake tokens that have not received rewards for 28 days or longer.
As an initial step to gauge its effectiveness and stability, it is recommended to implement and test this model on Kusama before considering its integration into Polkadot, with appropriate adjustments to the parameters. In the following, however, we limit our discussion to Polkadot.
Polkadot has one of the longest unbonding periods among all Proof-of-Stake protocols, because security is the most important goal. Staking on Polkadot is still attractive compared to other protocols because of its above-average staking APY. However the long unbonding period harms usability and deters potential participants that want to contribute to the security of the network.
The current length of the unbonding period imposes significant costs for any entity that even wants to perform basic tasks such as a reorganization / consolidation of their stashes, or updating their private key infrastructure. It also limits participation of users that have a large preference for liquidity.
The combination of long unbonding periods and high returns has led to the proliferation of liquid staking, where parachains or centralised exchanges offer users their staked tokens before the 28-day unbonding period is over, either in original DOT/KSM form or as derivative tokens. Liquid staking is harmless if few tokens are involved, but it could result in many validators being selected by a few entities if a large fraction of DOTs were involved. This may lead to centralization (see here for more discussion on the threats of liquid staking) and an opportunity for attacks.
The new mechanism greatly increases the competitiveness of Polkadot, while maintaining sufficient security.
Before diving into the details of how to implement the unbonding queue, we give readers context about why Polkadot has a 28-day unbonding period in the first place. The reason is to prevent long-range attacks (LRAs), which become theoretically possible if more than 1/3 of validators collude. In essence, an LRA describes the inability of users who disconnect from the consensus at time t0 and reconnect later to realize that validators which were legitimate at t0, but dropped out in the meantime, are not to be trusted anymore. That means, for example, that a user syncing the state could be fooled by trusting validators that fell outside the active set after t0 and are building a competing, malicious chain (fork).
LRAs of longer than 28 days are mitigated by the use of trusted checkpoints, which are assumed to be no more than 28 days old. A new node that syncs Polkadot will start at the checkpoint and look for proofs of finality of later blocks, signed by 2/3 of the validators. In an LRA fork, some of the validator sets may be different but only if 2/3 of some validator set in the last 28 days signed something incorrect.
If we detect an LRA of no more than 28 days with the current unbonding period, then we should be able to detect misbehaviour from over 1/3 of validators whose nominators are still bonded. The stake backing these validators is a considerable fraction of the total stake (empirically around 0.287). If we allowed more than this stake to unbond, without checking who it was backing, then the LRA attack might be free of cost for an attacker. The proposed mechanism allows up to half of this stake to unbond within 28 days. This halves the amount of tokens that can be slashed, but the amount is still very high in absolute terms; for example, at the time of writing (19.06.2024) this would translate to around 120 million DOT.
In addition to a simple queue, we could add a market component that lets users always unbond from staking at the minimum possible waiting time (LOWER_BOUND, e.g., 2 days) by paying a variable fee. To achieve this, it is reasonable to split the total unbonding capacity into two chunks, the first for the simple queue and the remainder for fee-based unbonding. By doing so, we allow users to choose whether they want the quickest unbond by paying a dynamic fee, or to join the simple queue. Setting a capacity restriction for both queues enables us to guarantee a predictable unbonding time in the simple queue, while allowing users with the respective willingness to pay to get out even earlier. The fees are dynamically adjusted and are proportional to the unbonding stake (and thereby expressed as a percentage of the requested unbonding stake). In contrast to a unified queue, this prevents users paying a fee from jumping in front of users not paying a fee and pushing their unbonding time back (which would be bad for UX). The revenue generated could be burned.
This extension and further specifications are left out of this RFC, because it adds further complexity and the empirical analysis above suggests that average unbonding times will already be close to the LOWER_BOUND, making a more complex design unnecessary. We advise to first implement the discussed mechanism and assess after some experience whether an extension is desirable.
UPPER_BOUND
NA
The authors cannot see any potential impact on performance.
The authors cannot see any potential impact on ergonomics for developers. We discussed potential impact on UX/UI for users above.
The authors cannot see any potential impact on compatibility. This should be assessed by the technical fellows.
This RFC proposes a change to the extrinsic format to include a transaction extension version.
The extrinsic format supports being extended with transaction extensions. These transaction extensions are runtime-specific and can differ per chain. Each transaction extension can add data to the extrinsic itself or extend the signed payload, which means that adding a transaction extension breaks the chain-specific extrinsic format. A recent example was the introduction of CheckMetadataHash on Polkadot and all its system chains: as the extension added one byte to the extrinsic, it broke a lot of tooling. By introducing an extra version for the transaction extensions, it becomes possible to introduce changes to these transaction extensions while staying backwards compatible. Based on the version of the transaction extensions, each chain runtime can decode the extrinsic correctly and also create the correct signed payload.
RFC84 introduced the extrinsic format 5. The idea is to piggyback onto this change of the extrinsic format to add the extra version for the transaction extensions. If required, this could also come as extrinsic format 6, but 5 is not yet deployed anywhere.
The extrinsic format supports the following types of transactions:
The Version is a SCALE-encoded u8 representing the version of the transaction extensions.
In the chain runtime the version can be used to determine which set of transaction extensions should be used to decode and to validate the transaction.
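A minimal sketch of how a runtime might branch on that version byte when decoding (the enum and names below are purely illustrative):

/// Illustrative dispatch on the SCALE-encoded u8 transaction extension version.
enum KnownExtensions {
    /// The extension set shipped before the latest change (kept so old transactions still decode).
    V0,
    /// The current extension set, e.g. with a newly appended extension.
    V1,
}

fn select_extension_set(version: u8) -> Result<KnownExtensions, &'static str> {
    match version {
        0 => Ok(KnownExtensions::V0),
        1 => Ok(KnownExtensions::V1),
        _ => Err("unknown transaction extension version"),
    }
}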
This adds one byte more to each signed transaction.
This will ensure that changes to the transactions extensions can be done in a backwards compatible way.
Runtime developers need to take care of the versioning and bump it as required, so that there are no compatibility-breaking changes without a version bump. It will also add a little more code in the runtime to decode these old versions, but this should be negligible.
When introduced together with extrinsic format version 5 from RFC84, it can be implemented in a backwards compatible way. So, transactions can still be sent using the old extrinsic format and decoded by the runtime.
This RFC proposes a new instruction that provides a way to initiate, on remote chains, asset transfers which combine multiple transfer types (teleport, local-reserve, destination-reserve), using XCM alone.
The currently existing instructions are too opinionated and force each XCM asset transfer to a single transfer type (teleport, local-reserve, destination-reserve). This results in the inability to combine different types of transfers in a single transfer, which results in overall poor UX when trying to move assets across chains.
XCM is the de-facto cross-chain messaging protocol within the Polkadot ecosystem, and cross-chain asset transfers are one of its main use cases. Unfortunately, in its current spec, it does not support initiating, on a remote chain, one or more transfers that combine assets with different transfer types. For example, a single XCM program execution should be able to transfer multiple assets from Kusama Asset Hub, over the bridge through Polkadot Asset Hub, with final destination ParaP on Polkadot.
With current XCM, we are limited to doing multiple independent transfers for each individual hop in order to move both "interesting" assets, but also "supporting" assets (used to pay fees).
A new instruction InitiateAssetsTransfer is introduced that initiates an asset transfer from the chain it is executed on to another chain. The executed transfer is point-to-point (chain-to-chain), with all of the transfer properties specified in the instruction parameters. The instruction also allows the whole operation to be done by executing a single XCM message, even though multiple transfer types are being mixed.
No drawbacks identified.
There should be no security risks related to the new instruction from the XCVM perspective. It follows the same pattern as with single-type asset transfers, only now it allows combining multiple types at once.
Improves security by making required execution fee payment part of the instruction logic, through the remote_fees: Option<AssetTransferFilter> parameter, which minimizes the potential free/unpaid work that a receiving chain has to do. This will make sure the remote XCM starts with a single asset-holding-loading instruction, immediately followed by a BuyExecution using said asset.
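For orientation, the instruction can be pictured roughly as in the sketch below; the field names and types are approximations for illustration only, and the normative definition lives in the RFC's specification section:

// Sketch only: all types are simplified stand-ins for the real XCM types.
struct Location;
struct AssetTransferFilter;
struct Xcm;

enum Instruction {
    InitiateAssetsTransfer {
        /// Chain to transfer the assets to.
        destination: Location,
        /// One filter per group of assets, each carrying its own transfer type.
        assets: Vec<AssetTransferFilter>,
        /// If set, the remote XCM is prepended with loading this asset and a BuyExecution.
        remote_fees: Option<AssetTransferFilter>,
        /// Program to execute on `destination` once the assets arrive.
        remote_xcm: Xcm,
    },
}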
This brings no impact to the rest of the XCM spec. It is a new, independent instruction; no changes to existing instructions are required.
Enhances the exposed functionality of Polkadot. Will allow multi-chain transfers that are currently forced to happen in multiple programs per asset per "hop", to be possible in a single XCM program.
No performance changes/implications.
The proposal enhances developers' and users' cross-chain asset transfer capabilities. This enhancement is optimized for XCM programs transferring multiple assets, needing to run their logic across multiple chains.
This enhancement is compatible with all existing XCM programs and versions.
The Transact XCM instruction currently forces the user to set a specific maximum weight allowed to the inner call and then also pay for that much weight regardless of how much the call actually needs in practice.
This RFC proposes improving the usability of Transact by removing that parameter and instead getting and charging the actual weight of the inner call from its dispatch info on the remote chain.
The UX of using Transact is poor because of having to guess/estimate the require_weight_at_most weight used by the inner call on the target.
We've seen multiple Transact on-chain failures caused by guessing wrong values for this require_weight_at_most even though the rest of the XCM program would have worked.
In practice, this parameter only adds UX overhead with no real practical value. Use cases fall in one of two categories:
We've had multiple OpenGov root/whitelisted_caller proposals initiated by core-devs completely or partially fail because of incorrect configuration of require_weight_at_most parameter. This is a strong indication that the instruction is hard to use.
The proposed enhancement is simple: remove require_weight_at_most parameter from the instruction:
- Transact { origin_kind: OriginKind, require_weight_at_most: Weight, call: DoubleEncoded<Call> },
+ Transact { origin_kind: OriginKind, call: DoubleEncoded<Call> },
The XCVM implementation shall no longer use require_weight_at_most for weighing. Instead, it shall weigh the Transact instruction by decoding and weighing the inner call.
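A sketch of what that weighing step could look like, assuming something along the lines of FRAME's dispatch-info mechanism is used to obtain the inner call's weight (stand-in types; not the actual executor code):

// Illustrative only: `Weight`, `DispatchInfo` and the trait stand in for the real FRAME types.
struct Weight(u64);
struct DispatchInfo { call_weight: Weight }

trait GetDispatchInfo {
    fn get_dispatch_info(&self) -> DispatchInfo;
}

/// Weight of a Transact = fixed instruction overhead + the weight reported by the
/// decoded inner call itself, instead of a caller-supplied `require_weight_at_most`.
fn weigh_transact<Call: GetDispatchInfo>(decoded_call: &Call, transact_overhead: u64) -> Weight {
    Weight(transact_overhead + decoded_call.get_dispatch_info().call_weight.0)
}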
No drawbacks, existing scenarios work as before, while this also allows new/easier flows.
Currently, an XCVM implementation can weigh a message just by looking at the decoded instructions without decoding the Transact's call, but assuming require_weight_at_most weight for it. With the new version it has to decode the inner call to know its actual weight.
But this does not actually change the security considerations, as can be seen below.
With the new Transact the weighing happens after decoding the inner call. The entirety of the XCM program containing this Transact needs to be either covered by enough bought weight using a BuyExecution, or the origin has to be allowed to do free execution.
The security considerations around how much can someone execute for free are the same for both this new version and the old. In both cases, an "attacker" can do the XCM decoding (including Transact inner calls) for free by adding a large enough BuyExecution without actually having the funds available.
In both cases, decoding is done for free, but in both cases execution fails early on BuyExecution.
No performance change.
Ergonomics are slightly improved by simplifying Transact API.
Compatible with previous XCM programs.
Elastic scaling is not resilient against griefing attacks without a way for a PoV (Proof of Validity) to commit to the particular core index it was intended for. This RFC proposes a way to include core index information in the candidate commitments and the CandidateDescriptor data structure in a backward compatible way. Additionally, it proposes the addition of a SessionIndex field in the CandidateDescriptor to make dispute resolution more secure and robust.
This RFC proposes a way to solve two different problems:
This approach and alternatives have been considered and discussed in this issue.
The approach proposed below was chosen primarily because it minimizes the number of breaking changes and the complexity, and takes less implementation and testing time. The proposal is to change the existing primitives while keeping binary compatibility with the older versions, repurposing currently unused space in the candidate descriptor.
The only drawback is that further additions to the descriptor are limited to the amount of remaining unused space.
Standard testing (unit tests, CI zombienet tests) for functionality and mandatory security audit to ensure the implementation does not introduce any new security issues.
Backward compatibility of the implementation will be tested on testnets (Versi and Westend).
There is no impact on privacy.
Overall performance will be improved by not checking the collator signatures in runtime and nodes. The impact on the UMP queue and candidate receipt processing is negligible.
The ClaimQueueOffset along with the relay parent choice allows parachains to optimize their block production for either throughput or lower XCM message processing latency. A value of 0 with the newest relay parent provides the best latency while picking older relay parents avoids re-orgs.
It is mandatory for elastic parachains to switch to the new receipt format and commit to a core by sending the UMPSignal::SelectCore message. It is optional but desired that all parachains switch to the new receipts for providing the session index for disputes.
The implementation of this RFC itself must not introduce any breaking changes for the parachain runtime or collator nodes.
The proposed changes are not fully backward compatible, because older validators verify the collator signature of candidate descriptors.
Additional care must be taken before enabling the new descriptors, by waiting for enough validators to upgrade so that they can correctly validate the new information present in the receipt.
Any tooling that decodes UMP XCM messages needs an update to support or ignore the new UMP messages, but they should be fine to decode the regular XCM messages that come before the separator.
Forum discussion about a new CandidateReceipt format: https://forum.polkadot.network/t/pre-rfc-discussion-candidate-receipt-format-v2/3738
The implementation is extensible and future-proof to some extent. With minimal or no breaking changes, additional fields can be added in the candidate descriptor until the reserved space is exhausted.
XCM already handles execution fees in an effective and efficient manner using the BuyExecution instruction. However, other types of fees are not handled as effectively, delivery fees being one example. There are also fees that can't be measured using Weight (as execution fees can), so a new method is needed for those cases. This RFC proposes making the fee handling system simpler and more general, by dropping BuyExecution in favour of a new PayFees instruction and a dedicated fees register.
Execution fees are handled correctly by XCM right now. However, the addition of extra fees, like those for message delivery, results in awkward ways of integrating them into the XCVM implementation, because these types of fees are not included in the language. The standard should have a way to correctly deal with these implementation specifics. The new instruction moves the specified amount of fees from the holding register to a dedicated fees register that the XCVM can use in flexible ways depending on its implementation. The XCVM implementation is free to use these fees to pay for execution fees, transport fees, or any other type of fee that might be necessary. This moves the specifics of fees further away from the XCM standard and more into the actual underlying XCVM implementation, which is a good thing.
The new instruction that will replace BuyExecution is a much simpler and more general one: PayFees. This instruction takes one Asset, removes it from the holding register, and puts it into a new fees register. The XCVM implementation can then use this Asset to make sure every necessary fee is paid for; this includes execution fees, delivery fees, and any other type of fee that might be required (in a program, BuyExecution { asset, weight_limit } is replaced by PayFees { asset }).
There needs to be an explicit change from BuyExecution to PayFees, most often accompanied by a reduction in the assets passed in.
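Concretely, migrating a program could look roughly like the sketch below (stand-in types; amounts, asset filters and the rest of the program are omitted):

// Sketch with simplified stand-in types; real programs use the XCM builder APIs.
struct Asset;
enum Instruction {
    WithdrawAsset(Vec<Asset>),
    /// Old pattern: takes what it needs for execution and refunds the surplus immediately.
    BuyExecution { fees: Asset },
    /// New pattern: moves `asset` into the fees register, to be used for any kind of fee.
    PayFees { asset: Asset },
}

/// Old: WithdrawAsset -> BuyExecution -> ...
/// New: WithdrawAsset -> PayFees -> ... (typically with a smaller, dedicated fee amount)
fn migrated_program(assets: Vec<Asset>, fee_asset: Asset) -> Vec<Instruction> {
    vec![
        Instruction::WithdrawAsset(assets),
        Instruction::PayFees { asset: fee_asset },
        // ...rest of the program
    ]
}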
It might become a security concern if leftover fees are trapped, since a lot of them are expected.
There should be no performance downsides to this approach. The fees register is a simplification that may actually result in better performance, in the case an implementation is doing a workaround to achieve what this RFC proposes.
The interface is going to be very similar to the already existing one. Even simpler since PayFees will only receive one asset. That asset will allow users to limit the amount of fees they are willing to pay.
This RFC can't just change the semantics of the BuyExecution instruction, since that instruction accepts any funds, uses what it needs and returns the rest immediately. The new proposed instruction, PayFees, doesn't return the leftover immediately; it keeps it in the fees register. In practice, the deprecated BuyExecution needs to be slowly phased out in favour of PayFees.
The closed RFC PR on the xcm-format repository, before XCM RFCs got moved to fellowship RFCs: https://github.com/polkadot-fellows/xcm-format/pull/53.
This proposal would greatly benefit from an improved asset trapping system.
CustomAssetClaimer is also related, as it directly improves the ergonomics of this proposal.
LeftoverAssetsDestination execution hint would also similarly improve the ergonomics.
A previous XCM RFC (https://github.com/polkadot-fellows/xcm-format/pull/37) introduced a SetAssetClaimer instruction. This idea of instructing the XCVM to change some implementation-specific behavior is useful. In order to generalize this mechanism, this RFC introduces a new instruction SetHints and makes the SetAssetClaimer be just one of many possible execution hints.
There is a need for specifying how certain implementation-specific things should behave, such as who can claim the assets or what can be done instead of trapping assets. Other ideas for hints:
AssetForFees
LeftoverAssetsDestination
A new instruction, SetHints, will be added. This instruction will take a single parameter of type Hint, an enumeration. The first variant for this enum is AssetClaimer, which allows to specify a location that should be able to claim trapped assets. @@ -5976,27 +8690,27 @@ enum Hint { type NumVariants = /* Number of variants of the `Hint` enum */; } -
The SetHints instruction might be hard to benchmark, since we should look into the actual hints being set to know how much weight to attribute to it.
Hints are specified on a per-message basis, so they have to be specified at the beginning of a message. If they were to be specified at the end, hints like AssetClaimer would be useless if an error occurs beforehand and assets get trapped before ever reaching the hint.
The instruction takes a bounded vector of hints so as to not force barriers to allow an arbitrary number of SetHint instructions.
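A rough sketch of the shape being described, with stand-in types (the real definition, including the exact bound and the NumVariants associated value, lives in the XCM spec changes attached to this RFC):

// Stand-in types for illustration only.
struct Location;

enum Hint {
    /// Location allowed to claim assets trapped during this message's execution.
    AssetClaimer { location: Location },
    // Future hints (e.g. a leftover-assets destination) would be added as new variants.
}

enum Instruction {
    /// Takes a vector of hints, bounded by the number of `Hint` variants, so a barrier
    /// only ever needs to allow a single SetHints instruction per message.
    SetHints { hints: Vec<Hint> }, // sketch: the real type is a bounded vector
}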
The SetHints instruction provides a better integration with barriers. If we had to add one barrier for SetAssetClaimer and another for each new hint that's added, barriers would need to be changed all the time. Also, this instruction would make it simpler to write XCM programs. You only need to specify the hints you want in one single instruction at the top of your program.
The previous RFC PR in the xcm-format repository before XCM RFCs moved to fellowship RFCs: https://github.com/polkadot-fellows/xcm-format/pull/59.
This RFC aims to remove the NetworkIds of Westend and Rococo, arguing that testnets shouldn't go in the language.
We've already seen plans to phase out Rococo, and Paseo has appeared. Instead of constantly changing the testnets included in the language, we should favor specifying them via their genesis hash, using NetworkId::ByGenesis.
Remove Westend and Rococo from the included NetworkIds in the language.
This RFC will make it less convenient to specify a testnet, but not by a large amount.
It will very slightly reduce the ergonomics of testnet developers but improve the stability of the language.
NetworkId::Rococo and NetworkId::Westend can just use NetworkId::ByGenesis, as can other testnets.
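For example, a location that today uses the dedicated variant could instead carry the genesis hash (stand-in enum below; the hash constant is a placeholder, not the real Westend genesis hash):

// Simplified stand-in for the real XCM type.
enum NetworkId {
    ByGenesis([u8; 32]),
    // The dedicated Westend / Rococo variants are removed under this proposal.
}

// Placeholder value; use the actual chain genesis hash in practice.
const WESTEND_GENESIS: [u8; 32] = [0u8; 32];

fn westend_network() -> NetworkId {
    NetworkId::ByGenesis(WESTEND_GENESIS)
}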
A previous attempt to add NetworkId::Paseo: https://github.com/polkadot-fellows/xcm-format/pull/58.
(func $ext_storage_read_version_2
    (param $key i64) (param $value_out i64) (param $offset i32) (result i64))
(func $ext_default_child_storage_read_version_2
    (param $child_storage_key i64) (param $key i64) (param $value_out i64)
    (param $offset i32) (result i64))
The signature and behaviour of ext_storage_read_version_2 and ext_default_child_storage_read_version_2 is identical to their version 1 counterparts, but the return value has a different meaning. The new functions directly return the number of bytes that were written to the value_out buffer. If the entry doesn't exist, a value of -1 is returned. Given that the host must never write more bytes than the size of the buffer in value_out, and that the size of this buffer is expressed as a 32-bit number, a 64-bit value of -1 is not ambiguous.
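An illustrative helper for interpreting that return value on the caller side (the real bindings live in the runtime interface crates; this merely restates the convention above):

/// Interpret the i64 returned by the version 2 read host functions:
/// `None` if the entry does not exist, otherwise the number of bytes the host
/// wrote into the caller-provided buffer (never more than the buffer size).
fn interpret_read_result(ret: i64) -> Option<u32> {
    if ret == -1 { None } else { Some(ret as u32) }
}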
(func $ext_storage_next_key_version_2
    (param $key i64) (param $out i64) (return i32))
(func $ext_default_child_storage_next_key_version_2
    (param $child_storage_key i64) (param $key i64) (param $out i64) (return i32))
The behaviour of these functions is identical to their version 1 counterparts. Instead of allocating a buffer, writing the next key to it, and returning a pointer to it, the new version of these functions accepts an out parameter containing a pointer-size to the memory location where the host writes the output. The runtime execution stops with an error if out is outside of the range of the memory of the virtual machine, even if the function wouldn't write anything to out. These functions return the size, in bytes, of the next key, or 0 if there is no next key. If the size of the next key is larger than the buffer in out, the bytes of the key that fit the buffer are written to out and any extra bytes that don't fit are discarded.
(func $ext_hashing_keccak_256_version_2
    (param $data i64) (param $out i32))
(func $ext_hashing_keccak_512_version_2
    (param $data i64) (param $out i32))
(func $ext_hashing_sha2_256_version_2
    (param $data i64) (param $out i32))
(func $ext_hashing_blake2_128_version_2
    (param $data i64) (param $out i32))
(func $ext_hashing_blake2_256_version_2
    (param $data i64) (param $out i32))
(func $ext_hashing_twox_64_version_2
    (param $data i64) (param $out i32))
(func $ext_hashing_twox_128_version_2
    (param $data i64) (param $out i32))
(func $ext_hashing_twox_256_version_2
    (param $data i64) (param $out i32))
(func $ext_trie_blake2_256_root_version_3
    (param $data i64) (param $version i32) (param $out i32))
(func $ext_trie_blake2_256_ordered_root_version_3
    (param $data i64) (param $version i32) (param $out i32))
(func $ext_trie_keccak_256_root_version_3
    (param $data i64) (param $version i32) (param $out i32))
(func $ext_trie_keccak_256_ordered_root_version_3
    (param $data i64) (param $version i32) (param $out i32))
(func $ext_default_child_storage_root_version_3
    (param $child_storage_key i64) (param $out i32))
(func $ext_crypto_ed25519_generate_version_2
    (param $key_type_id i32) (param $seed i64) (param $out i32))
(func $ext_crypto_sr25519_generate_version_2
    (param $key_type_id i32) (param $seed i64) (param $out i32) (return i32))
(func $ext_crypto_ecdsa_generate_version_2
    (param $key_type_id i32) (param $seed i64) (param $out i32) (return i32))
(func $ext_default_child_storage_root_version_3
    (param $child_storage_key i64) (param $out i32))
(func $ext_storage_root_version_3
    (param $out i32))
(func $ext_storage_clear_prefix_version_3
    (param $prefix i64) (param $limit i64) (param $removed_count_out i32)
    (return i32))
(func $ext_default_child_storage_clear_prefix_version_3
    (param $child_storage_key i64) (param $prefix i64)
    (param $limit i64) (param $removed_count_out i32) (return i32))
(func $ext_default_child_storage_kill_version_4
    (param $child_storage_key i64) (param $limit i64)
    (param $removed_count_out i32) (return i32))
(func $ext_crypto_ed25519_sign_version_2
    (param $key_type_id i32) (param $key i32) (param $msg i64) (param $out i32) (return i32))
(func $ext_crypto_sr25519_sign_version_2
    (param $key_type_id i32) (param $key i32) (param $msg i64) (param $out i32) (return i32))
(func $ext_crypto_ecdsa_sign_version_2
    (param $key_type_id i32) (param $key i32) (param $msg i64) (param $out i32) (return i32))
(func $ext_crypto_ecdsa_sign_prehashed_version_2
    (param $key_type_id i32) (param $key i32) (param $msg i64) (param $out i32) (return i64))
(func $ext_crypto_secp256k1_ecdsa_recover_version_3
    (param $sig i32) (param $msg i32) (param $out i32) (return i64))
(func $ext_crypto_secp256k1_ecdsa_recover_compressed_version_3
    (param $sig i32) (param $msg i32) (param $out i32) (return i64))
(func $ext_crypto_ed25519_num_public_keys_version_1
    (param $key_type_id i32) (return i32))
(func $ext_crypto_ed25519_public_key_version_2
    (param $key_type_id i32) (param $key_index i32) (param $out i32))
(func $ext_crypto_sr25519_num_public_keys_version_1
    (param $key_type_id i32) (return i32))
(func $ext_crypto_sr25519_public_key_version_2
    (param $key_type_id i32) (param $key_index i32) (param $out i32))
(func $ext_crypto_ecdsa_num_public_keys_version_1
    (param $key_type_id i32) (return i32))
(func $ext_crypto_ecdsa_public_key_version_2
    (param $key_type_id i32) (param $key_index i32) (param $out i32))
Instead of calling ext_crypto_ed25519_public_key_version_1 in order to obtain the list of all keys at once, the runtime should instead call ext_crypto_ed25519_num_public_keys_version_1 in order to obtain the number of public keys available, then ext_crypto_ed25519_public_key_version_2 repeatedly. The ext_crypto_ed25519_public_key_version_2 function writes the public key of the given key_index to the memory location designated by out. The key_index must be between 0 (included) and n (excluded), where n is the value returned by ext_crypto_ed25519_num_public_keys_version_1. Execution must trap if key_index is out of range.
(func $ext_offchain_http_request_start_version_2
    (param $method i64) (param $uri i64) (param $meta i64) (result i32))
(func $ext_offchain_http_request_write_body_version_2
    (param $method i64) (param $uri i64) (param $meta i64) (result i32))
(func $ext_offchain_http_response_read_body_version_2
    (param $request_id i32) (param $buffer i64) (param $deadline i64) (result i64))
(func $ext_offchain_http_response_wait_version_2
    (param $ids i64) (param $deadline i64) (param $out i32))
(func $ext_offchain_http_response_header_name_version_1
    (param $request_id i32) (param $header_index i32) (param $out i64) (result i64))
(func $ext_offchain_http_response_header_value_version_1
    (param $request_id i32) (param $header_index i32) (param $out i64) (result i64))
(func $ext_offchain_submit_transaction_version_2
    (param $data i64) (return i32))
(func $ext_offchain_http_request_add_header_version_2
    (param $request_id i32) (param $name i64) (param $value i64) (result i32))
(func $ext_offchain_local_storage_read_version_1
    (param $kind i32) (param $key i64) (param $value_out i64) (param $offset i32) (result i64))
(func $ext_offchain_network_peer_id_version_1
    (param $out i64))
This function writes the PeerId of the local node to the memory location indicated by out. A PeerId is always 38 bytes long. The runtime execution stops with an error if out is outside of the range of the memory of the virtual machine.
(func $ext_input_size_version_1
    (return i64))
(func $ext_input_read_version_1
    (param $offset i64) (param $out i64))
The ext_input_read_version_1 host function copies some data from the input data to the memory of the runtime. The offset parameter indicates the offset within the input data at which to start copying, and must be less than or equal to the value returned by ext_input_size_version_1. The out parameter is a pointer-size containing the buffer to write to. The runtime execution stops with an error if offset is strictly greater than the size of the input data, or if out is outside of the range of the memory of the virtual machine, even if the amount of data to copy would be 0 bytes.
All the host functions that are superseded by new host functions are now considered deprecated and should no longer be used. The following other host functions are similarly considered deprecated:
After this RFC, we can, in a future version, remove the allocator from the host's source code altogether by removing support for all the deprecated host functions. This would remove the possibility of synchronizing older blocks, which is probably controversial and requires some preparations that are out of scope of this RFC.
P(n) = \begin{cases}
(P_{\text{old}} - P_{\text{min}}) \left(1 - \left(\frac{T - n}{T}\right)^d\right) + P_{\text{min}} & \text{if } n \leq T \\
((F - 1) \cdot P_{\text{old}} \cdot \left(\frac{n - T}{L - T}\right)^u) + P_{\text{old}} & \text{if } n > T
\end{cases}
NEW_PRICE := IF CORES_SOLD <= BULK_TARGET THEN
    (OLD_PRICE - MIN_PRICE) * (1 - ((BULK_TARGET - CORES_SOLD)^SCALE_DOWN / BULK_TARGET^SCALE_DOWN)) + MIN_PRICE
ELSE
    ((MAX_PRICE_INCREASE_FACTOR - 1) * OLD_PRICE * ((CORES_SOLD - BULK_TARGET)^SCALE_UP / (BULK_LIMIT - BULK_TARGET)^SCALE_UP)) + OLD_PRICE
END IF
BULK_TARGET = 30
BULK_LIMIT = 45
MIN_PRICE = 1
MAX_PRICE_INCREASE_FACTOR = 2
SCALE_DOWN = 2
SCALE_UP = 2
OLD_PRICE = 1000
BULK_TARGET = 30
BULK_LIMIT = 45
MIN_PRICE = 1
MAX_PRICE_INCREASE_FACTOR = 3
SCALE_DOWN = 2
SCALE_UP = 1
OLD_PRICE = 1000
BULK_TARGET = 30
BULK_LIMIT = 45
MIN_PRICE = 1
MAX_PRICE_INCREASE_FACTOR = 1.5
SCALE_DOWN = 0.5
SCALE_UP = 2
OLD_PRICE = 1000
BULK_TARGET = 30
BULK_LIMIT = 45
MIN_PRICE = 1
MAX_PRICE_INCREASE_FACTOR = 1.5
SCALE_DOWN = 1
SCALE_UP = 1
OLD_PRICE = 1000
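For reference, an f64 transcription of the formula above is shown below; this is purely illustrative (a runtime implementation would use saturating fixed-point arithmetic), and the example asserts use the first configuration listed:

fn new_price(
    old_price: f64, min_price: f64, max_increase_factor: f64,
    scale_down: f64, scale_up: f64,
    cores_sold: f64, bulk_target: f64, bulk_limit: f64,
) -> f64 {
    if cores_sold <= bulk_target {
        (old_price - min_price)
            * (1.0 - ((bulk_target - cores_sold) / bulk_target).powf(scale_down))
            + min_price
    } else {
        (max_increase_factor - 1.0) * old_price
            * ((cores_sold - bulk_target) / (bulk_limit - bulk_target)).powf(scale_up)
            + old_price
    }
}

fn main() {
    // Selling exactly BULK_TARGET cores keeps the price at OLD_PRICE; selling
    // BULK_LIMIT cores doubles it with MAX_PRICE_INCREASE_FACTOR = 2.
    assert_eq!(new_price(1000.0, 1.0, 2.0, 2.0, 2.0, 30.0, 30.0, 45.0), 1000.0);
    assert_eq!(new_price(1000.0, 1.0, 2.0, 2.0, 2.0, 45.0, 30.0, 45.0), 2000.0);
}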
Once Polkadot and Kusama have transitioned to state_version = 1, which modifies the format of the trie entries, it will be possible to generate Merkle proofs that contain only the hashes of values in the storage. Thanks to this, it is already possible to prove the existence of a key without sending its entire value (only its hash), or to prove that a value has changed or not between two blocks (by sending just their hashes). Thus, the only reason why the aforementioned issues exist is that the existing networking messages don't give the querier the possibility to query this. This is what this proposal aims at fixing.
@@ -11,6 +11,7 @@ message Request {
 		RemoteReadRequest remote_read_request = 2;
 		RemoteReadChildRequest remote_read_child_request = 4;
 		// Note: ids 3 and 5 were used in the past. It would be preferable to not re-use them.
+		RemoteReadRequestV2 remote_read_request_v2 = 6;
 	}
 }

@@ -48,6 +49,21 @@ message RemoteReadRequest {
 	repeated bytes keys = 3;
 }

+message RemoteReadRequestV2 {
+	required bytes block = 1;
+	optional ChildTrieInfo child_trie_info = 2; // Read from the main trie if missing.
+	repeated Key keys = 3;
+	optional bytes onlyKeysAfter = 4;
+	optional bool onlyKeysAfterIgnoreLastNibble = 5;
+}
+
+message ChildTrieInfo {
+	enum ChildTrieNamespace {
+		DEFAULT = 1;
+	}
+
+	required bytes hash = 1;
+	required ChildTrieNamespace namespace = 2;
+}
+
 // Remote read response.
 message RemoteReadResponse {
 	// Read proof. If missing, indicates that the remote couldn't answer, for example because
@@ -65,3 +81,8 @@ message RemoteReadChildRequest {
 	// Storage keys.
 	repeated bytes keys = 6;
 }
+
+message Key {
+	required bytes key = 1;
+	optional bool skipValue = 2; // Defaults to `false` if missing
+	optional bool includeDescendants = 3; // Defaults to `false` if missing
+}
The new child_trie_info field in the request makes it possible to specify which trie is concerned by the request. The current networking protocol uses two different structs (RemoteReadRequest and RemoteReadChildRequest) for main trie and child trie queries, while this new request makes it possible to query either. This change doesn't fix any of the issues mentioned in the previous section, but is a side change that has been done for simplicity. An alternative could have been to specify the child_trie_info for each individual Key. However, this would make it necessary to send the child trie hash many times over the network, which would waste bandwidth and, in my opinion, make things more complicated for no actual gain. If a querier would like to access more than one trie at the same time, it is always possible to send one query per trie.
For the purpose of this networking protocol, it should be considered as if the main trie contained an entry for each default child trie whose key is concat(":child_storage:default:", child_trie_hash) and whose value is equal to the trie root hash of that default child trie. This behavior is consistent with what the host functions observe when querying the storage. This behavior is already present in the existing networking protocol; in other words, this proposal doesn't change the situation, but it is worth mentioning. Also note that child tries aren't considered as descendants of the main trie when it comes to the includeDescendants flag. In other words, if the request concerns the main trie, no content coming from child tries is ever sent back.
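A sketch of the implied main-trie key for a default child trie, following the concat(":child_storage:default:", child_trie_hash) convention quoted above:

/// Build the main-trie key under which, for the purpose of this protocol, a default
/// child trie's root hash is considered to live.
fn child_trie_main_key(child_trie_hash: &[u8]) -> Vec<u8> {
    let mut key = b":child_storage:default:".to_vec();
    key.extend_from_slice(child_trie_hash);
    key
}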
#![allow(unused)]
fn main() {
// Relative location (from own perspective)
{
    parents: 0,
    interior: Here
}

// Relative location (from perspective of parent)
{
    parents: 0,
    interior: [Parachain(1000)]
}

// Relative location (from perspective of sibling)
{
    parents: 1,
    interior: [Parachain(1000)]
}

// Absolute location
[GlobalConsensus(Kusama), Parachain(1000)]
}
#![allow(unused)]
fn main() {
// Relative location (from own perspective)
// Not possible.

// Relative location (from perspective of parent)
(b"ChildChain", Compact::<u32>::from(*index)).encode()

// Relative location (from perspective of sibling)
(b"SiblingChain", Compact::<u32>::from(*index)).encode()
}
#![allow(unused)]
fn main() {
(
    b"GlobalConsensus",
    network_id,
    b"Parachain",
    Compact::<u32>::from(para_id),
    tail
).encode()
}
This RFC proposes a set of changes that will enable the new rent-based approach to registering and storing validation code on-chain. Compared to the current model, the new model will require periodic rent payments. The parachain won't be pruned automatically if the rent is not paid, but by permitting anyone to prune the parachain and rewarding the caller, there will be an incentive for the removal of the validation code.
#![allow(unused)]
fn main() {
trait Config {
    // -- snip --

    /// The deposit required for reserving a `ParaId`.
    #[pallet::constant]
    type ParaDeposit: Get<BalanceOf<Self>>;

    /// The deposit to be paid per byte stored on chain.
    #[pallet::constant]
    type DataDepositPerByte: Get<BalanceOf<Self>>;
}
}
#![allow(unused)]
fn main() {
trait Config {
    // -- snip --

    /// Defines how frequently the rent needs to be paid.
    ///
    /// The duration is set in sessions instead of block numbers.
    #[pallet::constant]
    type RentDuration: Get<SessionIndex>;

    /// The initial deposit amount for registering validation code.
    ///
    /// This is defined as a proportion of the deposit that would be required in the regular
    /// model.
    #[pallet::constant]
    type RentalDepositProportion: Get<Perbill>;

    /// The recurring rental cost defined as a proportion of the initial rental registration deposit.
    #[pallet::constant]
    type RentalRecurringProportion: Get<Perbill>;
}
}
#![allow(unused)]
fn main() {
mod pallet {
    // -- snip --

    pub fn register_rental(
        origin: OriginFor<T>,
        id: ParaId,
        genesis_head: HeadData,
        validation_code: ValidationCode,
    ) -> DispatchResult { /* ... */ }

    pub fn pay_rent(origin: OriginFor<T>, id: ParaId) -> DispatchResult {
        /* ... */
    }
}
}
A call to register_rental will require the reservation of only a percentage of the deposit that would otherwise be required to register the validation code when using the regular model. As described later in the Quick para re-registering section below, we will also store the code hash of each parachain to enable faster re-registration after a parachain has been pruned. For this reason, the total initial deposit amount is increased to account for that.
#![allow(unused)]
fn main() {
// The logic for calculating the initial deposit for a parachain registered with the
// new rent-based model:

let validation_code_deposit =
    per_byte_fee.saturating_mul((validation_code.0.len() as u32).into());

let head_deposit = per_byte_fee.saturating_mul((genesis_head.0.len() as u32).into());
let hash_deposit = per_byte_fee.saturating_mul(HASH_SIZE);

let deposit = T::RentalDepositProportion::get().mul_ceil(validation_code_deposit)
    .saturating_add(T::ParaDeposit::get())
    .saturating_add(head_deposit)
    .saturating_add(hash_deposit);
}
#![allow(unused)]
fn main() {
/// Stores the validation code hash for parachains that successfully completed the
/// pre-checking process.
///
/// This is stored to enable faster on-demand para re-registration in case its pvf has been earlier
/// registered and checked.
///
/// NOTE: During a runtime upgrade where the pre-checking rules change this storage map should be
/// cleared appropriately.
#[pallet::storage]
pub(super) type CheckedCodeHash<T: Config> =
    StorageMap<_, Twox64Concat, ParaId, ValidationCodeHash>;
}
As noted in this GitHub issue, we want to raise the per-byte cost of on-chain data storage. However, a substantial increase in this cost would make it highly impractical for on-demand parachains to register on Polkadot. This RFC offers an alternative solution for on-demand parachains, ensuring that the per-byte cost increase doesn't overly burden the registration process.
In case of path A, there is one situation where the behaviour pre-RFC is not equivalent to the one post-RFC: when a host function that performs an allocation (for example ext_storage_get) is called, without this RFC this allocation might fail due to reaching the maximum heap pages, while after this RFC it will always succeed. This is most likely not a problem, as storage values aren't supposed to be larger than a few megabytes at the very maximum.
In the unfortunate event where the runtime runs out of memory, path B would make it more difficult to relax the memory limit, as we would need to re-upload the entire Wasm, compared to updating only :heappages in path A or before this RFC. In the case where the runtime runs out of memory only in the specific event where the Wasm runtime is modified, this could brick the chain. However, this situation is no different from the thousands of other ways that a bug in the runtime can brick a chain, and there's no reason to be particularly worried about this situation in particular.
This RFC would reduce the chance of a consensus issue between clients. The :heappages value is a rather obscure feature, and it is not clear what happens in some corner cases, such as the value being too large (error? clamp?) or malformed. This RFC would completely erase these questions.
This RFC proposes adding a trivial governance track on Kusama to facilitate X (formerly known as Twitter) posts on the @kusamanetwork account. The technical aspect of implementing this in the runtime is very inconsequential and straight-forward, though it might get more technical if the Fellowship wants to regulate this track with a non-existent permission set. If this is implemented it would need to be followed up with:
The overall motivation for this RFC is to decentralize the management of the Kusama brand/communication channel to KSM holders. This is necessary in my opinion primarily because of the inactivity of the account in recent history, with posts spanning weeks or months apart. I am currently unaware of who/what entity manages the Kusama X account, but if they are affiliated with Parity or W3F this proposed solution could also offload some of the legal ramifications of making (or not making) announcements to the public regarding Kusama. While centralized control of the X account would still be present, it could become totally moot if this RFC is implemented and the community becomes totally autonomous in the management of Kusama's X posts.
This solution does not cover every single communication front for Kusama, but it does cover one of the largest. It also establishes a precedent for other communication channels that could be offloaded to OpenGov, provided this proof-of-concept is successful.
Finally, this RFC is the epitome of the experimentation that Kusama is ideal for. This proposal may spark newfound excitement for Kusama and help us realize Kusama's potential for pushing boundaries and trying new unconventional ideas.
This idea has not been formalized by any individual (or group of) KSM holder(s). To my knowledge the socialization of this idea is contained entirely in my recent X post here, but it is possible that an idea like this one has been discussed in other places. It appears to me that the ecosystem would welcome a change like this, which is why I am taking action to formalize the discussion.
First, we begin with this RFC to ensure all feedback can be discussed and implemented in the proposal. After the Fellowship and the community come to a reasonable agreement on the changes necessary to make this happen, the Fellowship can merge changes into Kusama's runtime to include this new track with appropriate track configurations. As a starting point, I recommend the following track configurations:
const APP_X_POST: Curve = Curve::make_linear(7, 28, percent(50), percent(100));
const SUP_X_POST: Curve = Curve::make_reciprocal(?, ?, percent(?), percent(?), percent(?));

// I don't know how to configure the make_reciprocal variables to get what I imagine for support,
// but I recommend starting at 50% support and sharply decreasing such that 1% is sufficient a quarter
// of the way through the decision period and hitting 0% at the end of the decision period, or something like that.

(
    69,
    pallet_referenda::TrackInfo {
        name: "x_post",
        max_deciding: 50,
        decision_deposit: 1 * UNIT,
        prepare_period: 10 * MINUTES,
        decision_period: 4 * DAYS,
        confirm_period: 10 * MINUTES,
        min_enactment_period: 1 * MINUTES,
        min_approval: APP_X_POST,
        min_support: SUP_X_POST,
    },
),
I also recommend restricting the permissions of this track to submitting remarks or batches of remarks only; that is all we need for its purpose. I'm not sure how easy that is to configure, but it is important, since we don't want such an agile track to be able to make highly consequential calls. A rough sketch of such a filter follows below.
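As a self-contained illustration of the kind of call filter this implies, the toy sketch below uses stand-in enum variants for System::remark and Utility::batch; the real filter would match on the runtime's RuntimeCall, and the exact variant paths would differ. A call qualifies only if it is a remark, or a batch made up exclusively of remarks.

// Toy stand-ins for the runtime call variants; illustrative only.
enum Call {
    SystemRemark { remark: Vec<u8> },
    UtilityBatch { calls: Vec<Call> },
    Other,
}

// A call is allowed on the track only if it is a remark, or a batch whose
// members are all (recursively) allowed.
fn is_x_post_call(call: &Call) -> bool {
    match call {
        Call::SystemRemark { .. } => true,
        Call::UtilityBatch { calls } => calls.iter().all(is_x_post_call),
        Call::Other => false,
    }
}

fn main() {
    let post = Call::UtilityBatch {
        calls: vec![Call::SystemRemark { remark: br#"{"text": "gm"}"#.to_vec() }],
    };
    assert!(is_x_post_call(&post));
    assert!(!is_x_post_call(&Call::Other));
}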
It is important that we establish the specifications of referenda submitted to this track, so that whatever automation tool is built can easily make posts once a referendum is enacted. As stated above, we really only need a system.remark (or a batch of remarks) to carry the contents of a proposed X post. The most straightforward way to do this is to require remarks to adhere to X's requirements for making posts via their API.
For example, if I wanted to propose a post containing the text "Hello World!", I would propose a referendum in the X post track with the following call data: 0x0000607b2274657874223a202248656c6c6f20576f726c6421227d (i.e. system.remark('{"text": "Hello World!"}')).
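To make the example concrete, the following minimal sketch (plain Rust, no external crates) reproduces that call data byte by byte. The only assumptions are the pallet/call indices (0, 0 for System::remark) and the single-byte SCALE compact length used for payloads under 64 bytes.

// Builds the call data for system.remark('{"text": "<text>"}').
fn encode_remark_call(text: &str) -> String {
    let payload = format!(r#"{{"text": "{text}"}}"#);
    let bytes = payload.as_bytes();
    assert!(bytes.len() < 64, "single-byte SCALE compact length only");

    let mut call = vec![
        0x00u8,                   // pallet index: System (assumed)
        0x00,                     // call index: remark (assumed)
        (bytes.len() as u8) << 2, // SCALE compact-encoded length
    ];
    call.extend_from_slice(bytes);

    let hex: String = call.iter().map(|b| format!("{b:02x}")).collect();
    format!("0x{hex}")
}

fn main() {
    // Prints 0x0000607b2274657874223a202248656c6c6f20576f726c6421227d
    println!("{}", encode_remark_call("Hello World!"));
}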
At first, we could support text posts only, to prove the concept. Later on we could expand this spec to add support for media, likes, retweets, replies, polls, and whatever other X features we want.
Once we agree on the track configuration and the specs for referenda in this track, the Fellowship can move forward with merging these changes into Kusama's runtime and including them in its next release. We could also move forward with developing the necessary tools that would listen for enacted referenda and post automatically on X. This would require coordination with whoever controls the X account: they would either need to run the tools themselves or add a third party as an authorized user to run the tools and make posts on the account's behalf. This is a bottleneck for decentralization, but as long as the tools are run by the X account manager or by a trusted third party it should be fine. I'm open to more decentralized solutions, but those always come at a cost in complexity.
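Whichever team builds the listener, the on-chain spec above keeps its job simple. Below is a hedged sketch of the validation step such a tool might run on an enacted remark before handing it to X's API; serde_json is assumed as a dependency, the 280-character limit and the {"text": ...} shape come from the spec above, and the actual posting is left out.

use serde_json::Value;

// Returns the post text if the remark follows the proposed spec, otherwise an error.
fn validate_post_spec(remark: &[u8]) -> Result<String, String> {
    let json = std::str::from_utf8(remark).map_err(|_| "remark is not valid UTF-8".to_string())?;
    let value: Value = serde_json::from_str(json).map_err(|e| format!("invalid JSON: {e}"))?;
    let text = value
        .get("text")
        .and_then(Value::as_str)
        .ok_or_else(|| "missing string field `text`".to_string())?;
    if text.chars().count() > 280 {
        return Err("text exceeds 280 characters".to_string());
    }
    Ok(text.to_owned())
}

fn main() {
    assert_eq!(
        validate_post_spec(br#"{"text": "Hello World!"}"#),
        Ok("Hello World!".to_string())
    );
}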
For the tools themselves, we could open a bounty on Kusama for developers or teams to bid on. We could also ask the community to step up with a Treasury proposal and have anyone fund the build. Or the Fellowship could make the release of these changes contingent on their endorsement of developers or teams to build these tools. Lots of options! For the record, my team and I could develop all the necessary tools, but the fact that I'm proposing these changes doesn't entitle me to funds to build the tools needed to implement them. Here's what would be needed:
The main drawback to this change is that it requires a lot of off-chain coordination. It's easy enough to include the track on Kusama, but it's a different challenge entirely to make it function as intended. The tools need to be built and the auth tokens need to be managed. It would certainly add an administrative burden for whoever manages the X account, since they would either need to run the tools themselves or manage auth tokens.
This change also introduces ongoing costs to the Treasury, since it would need to compensate people to support the tools necessary to facilitate this idea. The ultimate question is whether these ongoing costs are worth the ability for KSM holders to make posts on Kusama's X account.
There's also the risk of misconfiguring the track so that referenda are too easy to pass, potentially allowing a malicious actor to get content posted on X that violates X's ToS. If that happens, we risk getting Kusama banned from X!
This change might also be outside the scope of the Fellowship/openGov. Perhaps the best solution for the X account is to have the Treasury pay a professional agency to manage posts. It wouldn't be decentralized, but it would probably be more effective in terms of creating good content.
Finally, this solution is merely pseudo-decentralization, since the X account manager would still have ultimate control of the account. It is decentralized only insofar as the auth tokens are given to the people actually running the tools; a house of cards is required to facilitate X posts via this track. Not ideal.
There's major precedent for configuring tracks on openGov given the amount of power tracks have, so it shouldn't be hard to come up with a sound configuration. That's why I recommend restricting the permissions of this track to remarks and batches of remarks, or something equally inconsequential.
If a track on Kusama promises users that compliant referenda enacted in it will be posted on Kusama's X account, users will expect that track to perform as promised. If the house of cards tumbles down and a compliant referendum doesn't actually get anything posted, users might think that Kusama is broken or unreliable. This could damage Kusama's image and cause people to question the soundness of other features on Kusama.
As mentioned in the drawbacks, the performance of this feature depends on off-chain coordination. We can reduce the administrative burden of this coordination by funding third parties through the Treasury to deal with it, but then we're relying on trusting those parties.
By adding a new track to Kusama, governance platforms like Polkassembly or Nova Wallet would need to include it in their applications. This shouldn't be too much of a burden or overhead, since they've already built the infrastructure for other openGov tracks.
One reference to a similar feature requiring on-chain/off-chain coordination is the Kappa-Sigma-Mu Society. Nothing on-chain necessarily enforces the rules or facilitates bids, challenges, defenses, etc. However, the Society has managed to maintain itself with integrity to its rules, so I don't think this is totally outside Kusama's scope. But it will require some off-chain effort to maintain.
Corporate Governance:
In a corporate setting, multisig accounts can be employed for decision-making processes. For example, a company may require the approval of multiple executives to initiate significant financial transactions.
Joint Accounts:
Multisig accounts can be used for joint accounts where multiple individuals need to authorize transactions. This is particularly useful in family finances or shared business accounts.
Decentralized Autonomous Organizations (DAOs):
DAOs can utilize multisig accounts to ensure that decisions are made collectively. Multiple key holders can be required to approve changes to the organization's rules or the allocation of funds.
enum CallOrHash<T: Config> {
    Call(<T as Config>::RuntimeCall),
    Hash(T::Hash),
}
/// Creates a new multisig account and attaches signers with a threshold to it.
///
/// The dispatch origin for this call must be _Signed_. It is expected to be a normal AccountId and not a
/// multisig AccountId.
///
/// T::BaseCreationDeposit + T::PerSignerDeposit * signers.len() will be held from the caller's account.
///
/// # Arguments
///
/// - `signers`: Initial set of accounts to add to the multisig. These may be updated later via `add_signer`
///   and `remove_signer`.
/// - `threshold`: The threshold number of accounts required to approve an action. Must be greater than 0 and
///   less than or equal to the total number of signers.
///
/// # Errors
///
/// * `TooManySignatories` - The number of signatories exceeds the maximum allowed.
/// * `InvalidThreshold` - The threshold is greater than the total number of signers.
pub fn create_multisig(
    origin: OriginFor<T>,
    signers: BoundedBTreeSet<T::AccountId, T::MaxSignatories>,
    threshold: u32,
) -> DispatchResult
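For illustration, the held amount quoted in the doc comment above works out as follows. This is a standalone sketch with hypothetical constant values; the real figures would come from the runtime's Config.

// Hypothetical deposit constants, in plancks; illustrative only.
const BASE_CREATION_DEPOSIT: u128 = 1_000_000_000_000;
const PER_SIGNER_DEPOSIT: u128 = 200_000_000_000;

// T::BaseCreationDeposit + T::PerSignerDeposit * signers.len()
fn creation_deposit(signer_count: u128) -> u128 {
    BASE_CREATION_DEPOSIT + PER_SIGNER_DEPOSIT * signer_count
}

fn main() {
    // A multisig with 5 signers holds the base deposit plus 5 per-signer deposits.
    assert_eq!(creation_deposit(5), 2_000_000_000_000);
}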
/// Starts a new proposal for a dispatchable call for a multisig account.
/// The caller must be one of the signers of the multisig account.
/// T::ProposalDeposit will be held from the caller's account.
///
/// # Arguments
///
/// * `multisig_account` - The multisig account ID.
/// * `call_or_hash` - The enum carrying either the call or the hash of the call to be approved and executed later.
///
/// # Errors
///
/// * `MultisigNotFound` - The multisig account does not exist.
/// * `UnAuthorizedSigner` - The caller is not a signer of the multisig account.
/// * `TooManySignatories` - The number of signatories exceeds the maximum allowed.
///   (Shouldn't really happen, as this is the first approval.)
pub fn start_proposal(
    origin: OriginFor<T>,
    multisig_account: T::AccountId,
    call_or_hash: CallOrHash,
) -> DispatchResult
/// Approves a proposal for a dispatchable call for a multisig account.
/// The caller must be one of the signers of the multisig account.
///
/// Between approving and rejecting, the last call wins:
/// if a signer did approve -> reject -> approve, the proposal will be approved;
/// if a signer did approve -> reject, the proposal will be rejected.
///
/// # Arguments
///
/// * `multisig_account` - The multisig account ID.
/// * `call_or_hash` - The enum carrying either the call or the hash of the call to be approved.
///
/// # Errors
///
/// * `MultisigNotFound` - The multisig account does not exist.
/// * `UnAuthorizedSigner` - The caller is not a signer of the multisig account.
/// * `TooManySignatories` - The number of signatories exceeds the maximum allowed.
///   (Shouldn't really happen, as this is an approval, not the addition of a new signer.)
pub fn approve(
    origin: OriginFor<T>,
    multisig_account: T::AccountId,
    call_or_hash: CallOrHash,
) -> DispatchResult
/// Rejects a proposal for a multisig account.
/// The caller must be one of the signers of the multisig account.
///
/// Between approving and rejecting, the last call wins:
/// if a signer did approve -> reject -> approve, the proposal will be approved;
/// if a signer did approve -> reject, the proposal will be rejected.
///
/// # Arguments
///
/// * `multisig_account` - The multisig account ID.
/// * `call_or_hash` - The enum carrying either the call or the hash of the call to be rejected.
///
/// # Errors
///
/// * `MultisigNotFound` - The multisig account does not exist.
/// * `UnAuthorizedSigner` - The caller is not a signer of the multisig account.
/// * `SignerNotFound` - The caller has not approved the proposal.
#[pallet::call_index(3)]
#[pallet::weight(Weight::default())]
pub fn reject(
    origin: OriginFor<T>,
    multisig_account: T::AccountId,
    call_or_hash: CallOrHash,
) -> DispatchResult
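The "last call wins" rule shared by approve and reject can be captured by keeping each signer in at most one of the two sets. A self-contained sketch of that bookkeeping follows, with std's BTreeSet standing in for the pallet's BoundedBTreeSet and u64 for AccountId.

use std::collections::BTreeSet;

#[derive(Default)]
struct Votes {
    approvers: BTreeSet<u64>,
    rejectors: BTreeSet<u64>,
}

impl Votes {
    // Approving removes any earlier rejection by the same signer.
    fn approve(&mut self, who: u64) {
        self.rejectors.remove(&who);
        self.approvers.insert(who);
    }
    // Rejecting removes any earlier approval by the same signer.
    fn reject(&mut self, who: u64) {
        self.approvers.remove(&who);
        self.rejectors.insert(who);
    }
}

fn main() {
    let mut votes = Votes::default();
    // approve -> reject -> approve ends as an approval, as documented above.
    votes.approve(1);
    votes.reject(1);
    votes.approve(1);
    assert!(votes.approvers.contains(&1) && !votes.rejectors.contains(&1));
}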
/// Executes a proposal for a dispatchable call for a multisig account.
/// A proposal needs to be approved by enough signers (meeting or exceeding the multisig threshold) before it can be executed.
/// The caller must be one of the signers of the multisig account.
///
/// This function does an extra check to make sure that all approvers still exist in the multisig account.
/// That is to make sure that the multisig account is not compromised by removing a signer during an active proposal.
///
/// Once finished, the withheld deposit will be returned to the proposal creator.
///
/// # Arguments
///
/// * `multisig_account` - The multisig account ID.
/// * `call_or_hash` - We should have received the RuntimeCall (preimage) and stored it in the proposal by the time this extrinsic is called.
///
/// # Errors
///
/// * `MultisigNotFound` - The multisig account does not exist.
/// * `UnAuthorizedSigner` - The caller is not a signer of the multisig account.
/// * `NotEnoughApprovers` - The approvers do not meet the threshold.
/// * `ProposalNotFound` - The proposal does not exist.
/// * `CallPreImageNotFound` - The proposal doesn't have the preimage of the call in the state.
pub fn execute_proposal(
    origin: OriginFor<T>,
    multisig_account: T::AccountId,
    call_or_hash: CallOrHash,
) -> DispatchResult
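The extra check described above can be read as: the recorded approvers must still be a subset of the current signer set, and must meet the threshold. A minimal sketch of that predicate (my reading of the doc comment, with std collections standing in for the bounded ones):

use std::collections::BTreeSet;

// True if the proposal may be executed: every approver is still a signer and
// the approvals meet or exceed the threshold.
fn can_execute(signers: &BTreeSet<u64>, approvers: &BTreeSet<u64>, threshold: usize) -> bool {
    approvers.is_subset(signers) && approvers.len() >= threshold
}

fn main() {
    let signers: BTreeSet<u64> = [1, 2, 3].into();
    let approvers: BTreeSet<u64> = [1, 2].into();
    assert!(can_execute(&signers, &approvers, 2));
    // If signer 2 were removed mid-proposal, the same approvals no longer count.
    let signers_after_removal: BTreeSet<u64> = [1, 3].into();
    assert!(!can_execute(&signers_after_removal, &approvers, 2));
}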
/// Cancels an existing proposal for a multisig account.
/// A proposal needs to be rejected by enough signers (meeting or exceeding the multisig threshold) before it can be cancelled.
/// The caller must be one of the signers of the multisig account.
///
/// This function does an extra check to make sure that all rejectors still exist in the multisig account.
/// That is to make sure that the multisig account is not compromised by removing a signer during an active proposal.
///
/// Once finished, the withheld deposit will be returned to the proposal creator.
///
/// # Arguments
///
/// * `origin` - The signer who wants to cancel the proposal.
/// * `call_or_hash` - The call or hash of the call to be cancelled.
///
/// # Errors
///
/// * `MultisigNotFound` - The multisig account does not exist.
/// * `ProposalNotFound` - The proposal does not exist.
pub fn cancel_proposal(
    origin: OriginFor<T>,
    multisig_account: T::AccountId,
    call_or_hash: CallOrHash,
) -> DispatchResult
/// Cancels an existing proposal for a multisig account, but only if the proposal has no approvers other than
/// the proposer.
///
/// This function needs to be called by the proposer of the proposal as the origin.
///
/// The withheld deposit will be returned to the proposal creator.
///
/// # Arguments
///
/// * `multisig_account` - The multisig account ID.
/// * `call_or_hash` - The hash of the call to be cancelled.
///
/// # Errors
///
/// * `MultisigNotFound` - The multisig account does not exist.
/// * `ProposalNotFound` - The proposal does not exist.
pub fn cancel_own_proposal(
    origin: OriginFor<T>,
    multisig_account: T::AccountId,
    call_or_hash: CallOrHash,
) -> DispatchResult
/// Cleans up proposals of a multisig account. This function iterates up to a maximum limit per extrinsic to ensure
/// we don't have unbounded iteration over the proposals.
///
/// The withheld deposit will be returned to the proposal creator.
///
/// # Arguments
///
/// * `multisig_account` - The multisig account ID.
///
/// # Errors
///
/// * `MultisigNotFound` - The multisig account does not exist.
/// * `ProposalNotFound` - The proposal does not exist.
pub fn cleanup_proposals(
    origin: OriginFor<T>,
    multisig_account: T::AccountId,
) -> DispatchResult
/// Adds a new signer to the multisig account.
/// This function needs to be called with the multisig account as the origin.
/// Otherwise it will fail with a MultisigNotFound error.
///
/// T::PerSignerDeposit will be held from the multisig account.
///
/// # Arguments
///
/// * `origin` - The multisig account that wants to add a new signer.
/// * `new_signer` - The AccountId of the new signer to be added.
/// * `new_threshold` - The new threshold for the multisig account after adding the new signer.
///
/// # Errors
///
/// * `MultisigNotFound` - The multisig account does not exist.
/// * `InvalidThreshold` - The threshold is greater than the total number of signers or is zero.
/// * `TooManySignatories` - The number of signatories exceeds the maximum allowed.
pub fn add_signer(
    origin: OriginFor<T>,
    new_signer: T::AccountId,
    new_threshold: u32,
) -> DispatchResult
/// Removes a signer from the multisig account.
/// This function needs to be called with the multisig account as the origin.
/// Otherwise it will fail with a MultisigNotFound error.
/// If only one signer exists and is removed, the multisig account and any pending proposals for this account will be deleted from the state.
///
/// # Arguments
///
/// * `origin` - The multisig account that wants to remove a signer.
/// * `signer_to_remove` - The AccountId of the signer to be removed.
/// * `new_threshold` - The new threshold for the multisig account after removing the signer. Accepts zero if
///   the signer is the only one left.
///
/// # Errors
///
/// * `MultisigNotFound` - The multisig account does not exist.
/// * `InvalidThreshold` - The new threshold is greater than the total number of signers or is zero.
/// * `UnAuthorizedSigner` - The caller is not a signer of the multisig account.
pub fn remove_signer(
    origin: OriginFor<T>,
    signer_to_remove: T::AccountId,
    new_threshold: u32,
) -> DispatchResult
/// Sets a new threshold for a multisig account.
/// This function needs to be called with the multisig account as the origin.
/// Otherwise it will fail with a MultisigNotFound error.
///
/// # Arguments
///
/// * `origin` - The multisig account that wants to set the new threshold.
/// * `new_threshold` - The new threshold to be set.
///
/// # Errors
///
/// * `MultisigNotFound` - The multisig account does not exist.
/// * `InvalidThreshold` - The new threshold is greater than the total number of signers or is zero.
pub fn set_threshold(
    origin: OriginFor<T>,
    new_threshold: u32,
) -> DispatchResult
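The InvalidThreshold rule that recurs in add_signer, remove_signer and set_threshold boils down to a single predicate. The sketch below also covers the special case noted under remove_signer, where a threshold of zero is accepted when the last signer is removed; this is my reading of the doc comments above, not a normative rule.

// A threshold is valid if it is non-zero and does not exceed the signer count,
// with zero tolerated only when no signers remain (account about to be deleted).
fn threshold_is_valid(new_threshold: u32, signer_count: u32) -> bool {
    if signer_count == 0 {
        new_threshold == 0
    } else {
        new_threshold > 0 && new_threshold <= signer_count
    }
}

fn main() {
    assert!(threshold_is_valid(2, 3));
    assert!(!threshold_is_valid(0, 3)); // InvalidThreshold
    assert!(!threshold_is_valid(4, 3)); // InvalidThreshold
    assert!(threshold_is_valid(0, 0));  // last signer removed
}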
/// Deletes a multisig account and all related proposals.
///
/// This function needs to be called with the multisig account as the origin.
/// Otherwise it will fail with a MultisigNotFound error.
///
/// # Arguments
///
/// * `origin` - The multisig account to be deleted.
///
/// # Errors
///
/// * `MultisigNotFound` - The multisig account does not exist.
pub fn delete_account(origin: OriginFor<T>) -> DispatchResult
#[pallet::storage]
pub type MultisigAccount<T: Config> =
    StorageMap<_, Twox64Concat, T::AccountId, MultisigAccountDetails<T>>;

/// The set of open multisig proposals. A proposal is uniquely identified by the multisig account and the call hash
/// (maybe a nonce as well in the future).
#[pallet::storage]
pub type PendingProposals<T: Config> = StorageDoubleMap<
    _,
    Twox64Concat,
    T::AccountId, // Multisig Account
    Blake2_128Concat,
    T::Hash, // Call Hash
    MultisigProposal<T>,
>;
pub struct MultisigAccountDetails<T: Config> {
    /// The signers of the multisig account. This is a BoundedBTreeSet to ensure fast operations (add, remove)
    /// and lookups, as well as fast set operations to ensure approvers are always a subset of signers
    /// (e.g. in case a signer is removed during an active proposal).
    pub signers: BoundedBTreeSet<T::AccountId, T::MaxSignatories>,
    /// The threshold of approvers required for the multisig account to be able to execute a call.
    pub threshold: u32,
    pub deposit: BalanceOf<T>,
}
pub struct MultisigProposal<T: Config> {
    /// Proposal creator.
    pub creator: T::AccountId,
    pub creation_deposit: BalanceOf<T>,
    /// The extrinsic at which the multisig operation was opened.
    pub when: Timepoint<BlockNumberFor<T>>,
    /// The approvals achieved so far, including the depositor.
    /// The approvers are stored in a BoundedBTreeSet to ensure fast lookups and operations (approve, reject).
    /// It is also bounded to ensure that the size does not exceed the limit required by the runtime.
    pub approvers: BoundedBTreeSet<T::AccountId, T::MaxSignatories>,
    /// The rejections received for the proposal so far.
    /// The rejectors are stored in a BoundedBTreeSet to ensure fast lookups and operations (approve, reject).
    /// It is also bounded to ensure that the size does not exceed the limit required by the runtime.
    pub rejectors: BoundedBTreeSet<T::AccountId, T::MaxSignatories>,
    /// The block number until which this multisig operation is valid. None means no expiry.
    pub expire_after: Option<BlockNumberFor<T>>,
}
Stateless Multisig:
Both as_multi and approve_as_multi have similar parameters:
origin: OriginFor<T>,
threshold: u16,
other_signatories: Vec<T::AccountId>,
maybe_timepoint: Option<Timepoint<BlockNumberFor<T>>>,
call_hash: [u8; 32],
max_weight: Weight,
Stateful Multisig:
We have the following extrinsics:
pub fn start_proposal(
    origin: OriginFor<T>,
    multisig_account: T::AccountId,
    call_or_hash: CallOrHash,
)

pub fn approve(
    origin: OriginFor<T>,
    multisig_account: T::AccountId,
    call_or_hash: CallOrHash,
)

pub fn execute_proposal(
    origin: OriginFor<T>,
    multisig_account: T::AccountId,
    call_or_hash: CallOrHash,
)
Simplified table, removing K from the equation:

| Pallet    | Block Size | State Size |
|-----------|:----------:|-----------:|
| Stateless |    N^2     |        Nil |
| Stateful  |     N      |          N |

Intuitively, each stateless approval must re-submit the full list of the other signatories, so N approvals carry on the order of N^2 account IDs in blocks, while each stateful approval references only the multisig account and the signer set is stored in state once.
createType(Call):: Call: failed decoding identity.setIdentity:: Struct: failed on args: {...}:: Struct: failed on pgpFingerprint: Option<[u8;20]>:: Expected input with 20 bytes (160 bits), found 40 bytes
This RFC proposes a new pallet_inflation to be added to the Polkadot runtime, which improves the inflation machinery of the Polkadot relay chain in a number of ways:
This RFC, as stated above, proposes a new pallet_inflation that addresses all of the named problems. However, this RFC does not propose any changes to the actual inflation rate; rather, it provides a new technical substrate (pun intended) upon which token holders can decide on the future of the DOT token's inflation in a clearer and more transparent way.
We argue that one reason the inflation rate of Polkadot has not significantly changed in ~4 years is the complicated process of updating it. We hope that with the tools provided in this RFC, stakeholders can experiment with the inflation rate in a more ergonomic way. Finally, this experimentation can be considered a useful final step toward fixing the economics of DOT in JAM, as proposed in the JAM graypaper.
Within the scope of this RFC, we suggest deploying the new inflation pallet in a backwards-compatible way, such that the inflation model does not change in practice, and leaving the actual changes to token holders, researchers, and further governance proposals.
While mainly intended for Polkadot, the system proposed in this RFC is general enough that it can be interpreted as a "general inflation system pallet", and can be used by newly onboarding parachains.
First, let's further elaborate on the existing order. The current inflation logic is deeply nested in pallet_staking and the pallet_staking::Config::EraPayout interface. Through this trait, the staking pallet is informed how many new tokens should be minted. This amount is divided into two parts: the payout to stakers, and the rest, which is handled separately (today directed to the treasury).
As it stands now, the implementation of EraPayout that specifies the two amounts above lives in the respective runtime, and uses the original inflation rate proposed by W3F for Polkadot. Read more about this model here.
At present, inflation always happens at the end of an era, which is a concept known to the staking system. The duration of an era is recorded in pallet_staking as milliseconds (as reported by the standard pallet_timestamp), is passed to EraPayout as an input, and is measured against a full year to determine how much should be inflated.
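For reference, the trait in question has roughly the following shape in pallet_staking; the two values it returns correspond to the two parts described above.

pub trait EraPayout<Balance> {
    /// Determine the payout for this era, given the total staked amount, the total
    /// token issuance, and the era duration in milliseconds. Returns the amount paid
    /// to stakers and "the rest" (today routed to the treasury).
    fn era_payout(
        total_staked: Balance,
        total_issuance: Balance,
        era_duration_millis: u64,
    ) -> (Balance, Balance);
}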
The naming used in this section is tentative, based on a WIP implementation, and subject to change before finalization of this RFC.
A proper configuration of this pallet should use pallet_parameters where possible, so that any of the actual values used to specify Sourcing and Distribution can be changed via on-chain governance. Please see the example configurations section for more details.
In the new model, inflation can happen at any point in time. Since a new pallet is now dedicated to inflation, it can internally store the timestamp of the last inflation point and always inflate the correct amount. This means that while the duration of a staking era is 1 day, the inflation process can happen e.g. every hour. The opposite is also possible, although more complicated: the staking/treasury system could receive its corresponding income on a weekly basis while the era duration is still 1 day. That being said, we don't recommend using this flexibility, as it brings no clear advantage and only adds complexity. We recommend that inflation still happen shortly before the end of the staking era. This means that if the inflation sourcing or distribution is a function of the staking rate, it can reliably use the staking rate of the last era.
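A self-contained sketch of the pro-rata idea follows; the names and the fixed 10% annual ratio are illustrative, and the actual pallet would store the last inflation timestamp and derive the rate from its configured InflationSource.

// Milliseconds in a (non-leap) year.
const MILLIS_PER_YEAR: u128 = 365 * 24 * 60 * 60 * 1000;

// Amount to mint for one inflation cycle, pro-rated by the time elapsed since the
// last inflation point.
fn pro_rata_inflation(total_issuance: u128, annual_rate_percent: u128, elapsed_millis: u128) -> u128 {
    total_issuance * annual_rate_percent / 100 * elapsed_millis / MILLIS_PER_YEAR
}

fn main() {
    // 10% annual inflation on an issuance of 1.5 billion tokens, minted hourly,
    // is roughly 17,123 tokens per cycle.
    let hourly = pro_rata_inflation(1_500_000_000, 10, 60 * 60 * 1000);
    assert_eq!(hourly, 17_123);
}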
Finally, as noted above, this RFC implies a new accounting system for staking to keep track of its staking rewards. In short, the new process is as follows: pallet_inflation will mint the staking portion of inflation directly into a key-less account controlled by pallet_staking. At the end of each era, pallet_staking will inspect this account and move whatever amount has been paid into it to another key-less account associated with the era number. The actual payouts, initiated by stakers, will transfer from this era account into the corresponding stakers' accounts.
Interestingly, this means that any account can contribute to staking rewards by transferring DOT to the key-less parent account controlled by the staking system.
A candidate implementation of this RFC can be found in this branch of the polkadot-sdk repository. Please note the changes to:
The following are working examples from the above implementation candidate, highlighting some of the outcomes that can be achieved.
First, to parameterize the existing proposed implementation to replicate what Polkadot does today, assuming we incorporate the fixed 2% treasury income, the outcome would be:
parameter_types! {
    pub Distribution: Vec<pallet_inflation::DistributionStep<Runtime>> = vec![
        // 2% goes to treasury, no questions asked.
        Box::new(pay::<Runtime, TreasuryAccount, dynamic_params::staking::FixedTreasuryIncome>),
        // From whatever is left, staking gets all the rest, based on the staking rate.
        Box::new(polkadot_staking_income::<
            Runtime,
            dynamic_params::staking::IdealStakingRate,
            dynamic_params::staking::Falloff,
            StakingIncomeAccount
        >),
        // Burn anything that is left.
        Box::new(burn::<Runtime, All>),
    ];
}

impl pallet_inflation::Config for Runtime {
    /// Fixed 10% annual inflation.
    type InflationSource =
        pallet_inflation::FixedRatioAnnualInflation<Runtime, dynamic_params::staking::MaxInflation>;
    type Distribution = Distribution;
}
In this snippet, we use a number of components provided by pallet_inflation, namely pay, polkadot_staking_income, burn and FixedRatioAnnualInflation. Yet, crucially, these components are fed parameters that are all backed by an instance of pallet_parameters, namely everything prefixed by dynamic_params.
The above is a purely inflationary system. If one wants to change the inflation to be dis-inflationary, another pre-made component of pallet_inflation can be used:
 impl pallet_inflation::Config for Runtime {
-    /// Fixed 10% annual inflation.
-    type InflationSource =
-        pallet_inflation::FixedRatioAnnualInflation<Runtime, dynamic_params::staking::MaxInflation>;
+    type InflationSource = pallet_inflation::FixedAnnualInflation<
+        Runtime,
+        dynamic_params::staking::FixedAnnualInflationAmount,
+    >;
 }
Whereby FixedAnnualInflationAmount is the fixed absolute amount (as opposed to a ratio) by which the chain inflates annually, for example 100m DOT.
The new pallet_inflation, along with its integration into pallet_staking, must be thoroughly audited and reviewed by fellows. We also emphasize simulating the actual inflation logic against the real Polkadot state using Chopsticks and try-runtime.
The system proposed in this RFC implies a handful of extra storage reads and writes per inflation cycle, but given that a reasonable instance of this pallet would probably choose to inflate e.g. once per day, the performance impact is negligible.
The "New Order" section above notes the compatibility notes with the existing staking -and inflation system.