Markdown linter (#1309)

* Add markdown linting

- add linter default rules
- adapt rules to current code
- fix the code for linting to pass
- add CI check

fix #1243

* Fix markdown for Substrate
* Fix tooling install
* Fix workflow
* Add documentation
* Remove trailing spaces
* Update .github/.markdownlint.yaml

Co-authored-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
* Fix mangled markdown/lists
* Fix capitalization issues on known words
Commit by Chevdor, 2023-09-04 11:02:32 +02:00, committed by GitHub.
Parent 830fde2a60, commit a30092ab42.
271 changed files with 6289 additions and 4450 deletions.
# The `benchmark block` command
The whole benchmarking process in Substrate aims to predict the resource usage of an unexecuted block. This command
measures how accurate this prediction was by executing a block and comparing the predicted weight to its actual resource
usage. It can be used to measure the accuracy of the pallet benchmarking.

In the following it will be explained once for Polkadot and once for Substrate.
## Polkadot #1
<sup>(Also works for Kusama, Westend and Rococo)</sup>
Suppose you either have a synced Polkadot node or downloaded a snapshot from [Polkachu]. This example uses a pruned
ParityDB snapshot from 2022-04-19 with the last block being 9939462. For pruned snapshots you need to know the number of
the last block (to be improved [here]). Pruned snapshots normally store the last 256 blocks; archive nodes can use any
block range.

In this example we will benchmark just the last 10 blocks:
```sh
cargo run --profile=production -- benchmark block --from 9939453 --to 9939462 --db paritydb
```
Output:
```pre
Block 9939453 with 2 tx used 4.57% of its weight ( 26,458,801 of 579,047,053 ns)
Block 9939454 with 3 tx used 4.80% of its weight ( 28,335,826 of 590,414,831 ns)
Block 9939455 with 2 tx used 4.76% of its weight ( 27,889,567 of 586,484,595 ns)
Block 9939456 with 2 tx used 4.65% of its weight ( 27,101,306 of 582,789,723 ns)
Block 9939457 with 2 tx used 4.62% of its weight ( 26,908,882 of 582,789,723 ns)
Block 9939458 with 2 tx used 4.78% of its weight ( 28,211,440 of 590,179,467 ns)
Block 9939459 with 4 tx used 4.78% of its weight ( 27,866,077 of 583,260,451 ns)
Block 9939460 with 3 tx used 4.72% of its weight ( 27,845,836 of 590,462,629 ns)
Block 9939461 with 2 tx used 4.58% of its weight ( 26,685,119 of 582,789,723 ns)
Block 9939462 with 2 tx used 4.60% of its weight ( 26,840,938 of 583,697,101 ns)
```
### Output Interpretation
<sup>(Only results from reference hardware are relevant)</sup>
Each block is executed multiple times and the results are averaged. The percent number is the interesting part and
indicates how much weight was used as compared to how much was predicted. The closer to 100% this is without exceeding
100%, the better. If it exceeds 100%, the block is marked with "**OVER WEIGHT!**" to make it easier to spot. This is not
good, since it means the benchmarking under-estimated the weight. An honest validator would then possibly not be able
to keep up with importing blocks, since users did not pay for enough weight. If that happens the validator could lag
behind the chain and get slashed for missing deadlines. It is therefore important to investigate any overweight blocks.
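The percentage in the output is simply the measured execution time over the predicted weight. A minimal sketch of that check (the function name and the over-weight test are illustrative, not the actual implementation):

```rust
/// Fraction of the predicted weight that a block actually used, in percent.
/// `used_ns` and `predicted_ns` mirror the two numbers printed per block.
fn weight_usage_percent(used_ns: u64, predicted_ns: u64) -> f64 {
    used_ns as f64 / predicted_ns as f64 * 100.0
}

fn main() {
    // Values from block 9939453 in the output above.
    let pct = weight_usage_percent(26_458_801, 579_047_053);
    println!("used {pct:.2}% of its weight");
    // A block would be flagged "OVER WEIGHT!" when it exceeds its prediction.
    assert!(pct <= 100.0);
}
```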
In this example you can see an unexpected result; only < 5% of the weight was used! The measured blocks can be executed
much faster than predicted. This means that the benchmarking process massively over-estimated the execution time. Since
the results are off by so much, this is tracked in issue [`polkadot#5192`].
The ideal range for these results would be 85-100%.
## Polkadot #2
Let's take a more interesting example where the blocks use more of their predicted weight. Every day when validators pay
out rewards, the blocks are nearly full. Using an archive node here is the easiest.

The Polkadot blocks TODO-TODO for example contain large batch transactions for staking payout.
```sh
cargo run --profile=production -- benchmark block --from TODO --to TODO --db paritydb
```

TODO
## Substrate
It is also possible to try the procedure in Substrate, although it's a bit boring.
First you need to create some blocks with either a local or dev chain. This example will use the standard development
spec. Pick a non-existing directory where the chain data will be stored, e.g. `/tmp/dev`.
```sh
cargo run --profile=production -- --dev -d /tmp/dev
```
After a few seconds you should see that it has started to produce blocks:
```pre
✨ Imported #1 (0x801d…9189)
```
You can now kill the node with `Ctrl+C`. Then measure how long it takes to execute these blocks:
```sh
cargo run --profile=production -- benchmark block --from 1 --to 1 --dev -d /tmp/dev --pruning archive
```
This will benchmark the first block. If you killed the node at a later point, you can benchmark multiple blocks.

```pre
Block 1 with 1 tx used 72.04% of its weight ( 4,945,664 of 6,864,702 ns)
```
In this example the block used ~72% of its weight. The benchmarking therefore over-estimated the effort to execute the
block. Since this block is empty, it's not very interesting.
## Arguments
# The `benchmark machine` command
Different Substrate chains can have different hardware requirements.
It is therefore important to be able to quickly gauge if a piece of hardware fits a chain's requirements.
The `benchmark machine` command achieves this by measuring key metrics and making them comparable.

Invoking the command looks like this:
```sh
cargo run --profile=production -- benchmark machine --dev
```
## Output
The output on reference hardware:
```pre
+----------+----------------+---------------+--------------+-------------------+
+----------+----------------+---------------+--------------+-------------------+
```
The *score* is the average result of each benchmark. It always adheres to "higher is better".
The *category* indicates which part of the hardware was benchmarked:
- **CPU** Processor intensive task
- **Memory** RAM intensive task
- **Disk** Hard drive intensive task
The *function* is the concrete benchmark that was run:

- **BLAKE2-256** The throughput of the [Blake2-256] cryptographic hashing function with 32 KiB input. The [blake2_256
function] is used in many places in Substrate. The throughput of a hash function strongly depends on the input size,
therefore we settled on a fixed input size for comparable results.
- **SR25519 Verify** Sr25519 is an optimized version of the [Curve25519] signature scheme. Signature verification is
used by Substrate when verifying extrinsics and blocks.
- **Copy** The throughput of copying memory from one place in the RAM to another.
- **Seq Write** The throughput of writing data to the storage location sequentially. It is important that the same disk
is used that will later on be used to store the chain data.
- **Rnd Write** The throughput of writing data to the storage location in a random order. This is normally much slower
than the sequential write.
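For illustration, the *Copy* benchmark idea can be sketched in a few lines (buffer size and iteration count are arbitrary; the real benchmark is more careful about warm-up and averaging):

```rust
use std::time::Instant;

/// A simplified stand-in for the *Copy* benchmark: copy a buffer repeatedly
/// and report the throughput in MiB/s.
fn copy_throughput_mibs(size: usize, iters: u32) -> f64 {
    let src = vec![1u8; size];
    let mut dst = vec![0u8; size];
    let start = Instant::now();
    for _ in 0..iters {
        dst.copy_from_slice(&src);
    }
    let secs = start.elapsed().as_secs_f64();
    // Use `dst` so the copies cannot be optimized away entirely.
    assert_eq!(dst[size - 1], 1);
    (size as f64 * iters as f64) / (1024.0 * 1024.0) / secs
}

fn main() {
    let mibs = copy_throughput_mibs(8 * 1024 * 1024, 16);
    println!("copy throughput: {mibs:.0} MiB/s");
}
```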
The *score* needs to reach the *minimum* in order to pass the benchmark. This can be reduced with the `--tolerance`
flag.
The *result* indicates if a specific benchmark was passed by the machine or not. The percent number is the relative
score reached compared to the *minimum* that is needed. The `--tolerance` flag is taken into account for this decision.
For example a benchmark that passes even with 95%, because the *tolerance* was set to 10%, would look like this:
`✅ Pass ( 95.0 %)`.
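A sketch of how the tolerance could factor into the pass/fail decision (the exact formula here is an assumption, not taken from the implementation):

```rust
/// Whether a benchmark passes: the score must reach the minimum,
/// reduced by the `--tolerance` percentage (10% by default).
fn passes(score: f64, minimum: f64, tolerance_percent: f64) -> bool {
    score >= minimum * (1.0 - tolerance_percent / 100.0)
}

fn main() {
    // 95% of the minimum still passes with the default 10% tolerance.
    assert!(passes(95.0, 100.0, 10.0));
    // Without any tolerance it would fail.
    assert!(!passes(95.0, 100.0, 0.0));
    println!("tolerance check ok");
}
```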
## Interpretation
Ideally all results show a `Pass` and the program exits with code 0. Currently some of the benchmarks can fail even on
reference hardware; they are still being improved to make them more deterministic.

Make sure to run nothing else on the machine when benchmarking it.
You can re-run them multiple times to get more reliable results.
## Arguments
- `--tolerance` A percent number to reduce the *minimum* requirement. This should be used to ignore outliers of the
benchmarks. The default value is 10%.
- `--verify-duration` How long the verification benchmark should run.
- `--disk-duration` How long the *read* and *write* benchmarks should run each.
- `--allow-fail` Always exit the program with code 0.
- `--chain` / `--dev` Specify the chain config to use. This will be used to compare the results with the requirements of
the chain (WIP).
- [`--base-path`]
License: Apache-2.0
# The `benchmark overhead` command
Each time an extrinsic or a block is executed, a fixed weight is charged as "execution overhead". This is necessary
since the weight that is calculated by the pallet benchmarks does not include this overhead. The exact overhead can
vary per Substrate chain and needs to be calculated per chain. This command calculates the exact values of these
overhead weights for any Substrate chain that supports it.
## How does it work?
The benchmark consists of two parts: the [`BlockExecutionWeight`] and the [`ExtrinsicBaseWeight`]. Both are executed
sequentially when invoking the command.
## BlockExecutionWeight
The block execution weight is defined as the weight that it takes to execute an *empty block*. It is measured by
constructing an empty block and measuring its execution time. The results are written to a `block_weights.rs` file which
is created from a template. The file will contain the concrete weight value and various statistics about the
measurements. For example:
```rust
/// Time to execute an empty block.
/// Calculated by multiplying the *Average* with `1` and adding `0`.
pub const BlockExecutionWeight: Weight =
Weight::from_parts(WEIGHT_REF_TIME_PER_NANOS.saturating_mul(3_532_484), 0);
```
In this example it takes 3.5 ms to execute an empty block. That means that it always takes at least 3.5 ms to execute
*any* block. This constant weight is therefore added to each block to ensure that Substrate budgets enough time to
execute it.
## ExtrinsicBaseWeight
The extrinsic base weight is defined as the weight that it takes to execute an *empty* extrinsic. An *empty* extrinsic
is also called a *NO-OP*. It does nothing and is the equivalent of the empty block from above. The benchmark now
constructs a block which is filled with only NO-OP extrinsics. This block is then executed many times and the weights
are measured. The result is divided by the number of extrinsics in that block and the results are written to
`extrinsic_weights.rs`.
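The averaging step can be sketched as follows (the function name and the sample numbers are hypothetical):

```rust
/// Average per-extrinsic time: execute a block of NO-OPs several times and
/// divide the averaged block time by the number of extrinsics in it.
fn extrinsic_base_ns(block_times_ns: &[u64], extrinsics_per_block: u64) -> u64 {
    let avg_block = block_times_ns.iter().sum::<u64>() / block_times_ns.len() as u64;
    avg_block / extrinsics_per_block
}

fn main() {
    // Hypothetical measurements of a block containing 1_000 NO-OP extrinsics.
    let runs = [67_800_000, 67_700_000, 67_740_000];
    println!("{} ns per NO-OP", extrinsic_base_ns(&runs, 1_000));
}
```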
The relevant section in the output file looks like this:
```rust
pub const ExtrinsicBaseWeight: Weight =
Weight::from_parts(WEIGHT_REF_TIME_PER_NANOS.saturating_mul(67_745), 0);
```
In this example it takes 67.7 µs to execute a NO-OP extrinsic. That means that it always takes at least 67.7 µs to
execute *any* extrinsic. This constant weight is therefore added to each extrinsic to ensure that Substrate budgets
enough time to execute it.
## Invocation
The complete command for Polkadot looks like this:

```sh
cargo run --profile=production -- benchmark overhead --chain=polkadot-dev --wasm-execution=compiled --weight-path=runtime/polkadot/constants/src/weights/
```
This will overwrite the
[block_weights.rs](https://github.com/paritytech/polkadot/blob/c254e5975711a6497af256f6831e9a6c752d28f5/runtime/polkadot/constants/src/weights/block_weights.rs)
and
[extrinsic_weights.rs](https://github.com/paritytech/polkadot/blob/c254e5975711a6497af256f6831e9a6c752d28f5/runtime/polkadot/constants/src/weights/extrinsic_weights.rs)
files in the Polkadot runtime directory. You can try the same for *Rococo* and see that the results slightly differ.
👉 It is paramount to use `--profile=production` and `--wasm-execution=compiled` as the results are otherwise useless.
## Output Interpretation
Lower is better. The less weight the execution overhead needs, the better. Since the overhead weight is charged per
extrinsic and per block, a larger weight results in fewer extrinsics per block. Minimizing this is important to achieve
a large transaction throughput.
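To see why a larger base weight lowers throughput, a rough back-of-the-envelope sketch (all numbers illustrative: the example values from above plus an assumed 2 s block budget):

```rust
/// Rough upper bound on extrinsics per block: after subtracting the block
/// execution overhead, each extrinsic costs at least its base weight.
fn max_extrinsics(block_budget_ns: u64, block_execution_ns: u64, extrinsic_base_ns: u64) -> u64 {
    block_budget_ns.saturating_sub(block_execution_ns) / extrinsic_base_ns
}

fn main() {
    // Assumed 2 s budget, the 3.5 ms and 67.7 µs values from above.
    let n = max_extrinsics(2_000_000_000, 3_532_484, 67_745);
    println!("at most ~{n} empty extrinsics fit in one block");
}
```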
## Arguments
License: Apache-2.0
<!-- LINKS -->
[`ExtrinsicBaseWeight`]: https://github.com/paritytech/substrate/blob/580ebae17fa30082604f1c9720f6f4a1cfe95b50/frame/support/src/weights/extrinsic_weights.rs#L26
[`BlockExecutionWeight`]: https://github.com/paritytech/substrate/blob/580ebae17fa30082604f1c9720f6f4a1cfe95b50/frame/support/src/weights/block_weights.rs#L26
[System::Remark]: https://github.com/paritytech/substrate/blob/580ebae17fa30082604f1c9720f6f4a1cfe95b50/frame/system/src/lib.rs#L382
Contains code that is shared among multiple sub-commands.
- `--weight-path` Set the file or directory to write the weight files to.
- `--db` The database backend to use. This depends on your snapshot.
- `--pruning` Set the pruning mode of the node. Some benchmarks require you to set this to `archive`.
- `--base-path` The location on the disk that should be used for the benchmarks. You can try this on different disks or
even on a mounted RAM-disk. It is important to use the same location that will later on be used to store the chain
data to get the correct results.
- `--header` Optional file header which will be prepended to the weight output file. Can be used for adding LICENSE
headers.
License: Apache-2.0
# The `benchmark storage` command
The cost of storage operations in a Substrate chain depends on the current chain state.
It is therefore important to regularly update these weights as the chain grows.
This sub-command measures the cost of storage operations for a concrete snapshot.

For the Substrate node it looks like this (for debugging you can use `--release`):
```sh
cargo run --profile=production -- benchmark storage --dev --state-version=1
```
Running the command on Substrate itself is not very meaningful, since the genesis state of the `--dev` chain spec is
used.
The output for the Polkadot client with a recent chain snapshot will give you a better impression. A recent snapshot can
be downloaded from [Polkachu].
Then run (remove the `--db=paritydb` if you have a RocksDB snapshot):
```sh
cargo run --profile=production -- benchmark storage --dev --state-version=0 --db=paritydb --weight-path runtime/polkadot/constants/src/weights
```
This takes a while, since it reads and writes all keys from the snapshot:
```pre
# The 'read' benchmark
Preparing keys from block BlockId::Number(9939462)
Reading 1379083 keys
Time summary [ns]:
Total: 19668919930
Min: 6450, Max: 1217259
Value size summary:
Total: 265702275
Min: 1, Max: 1381859
Average: 192, Median: 80, Stddev: 3427.53
Percentiles 99th, 95th, 75th: 3368, 383, 80
# The 'write' benchmark
Preparing keys from block BlockId::Number(9939462)
Writing 1379083 keys
Time summary [ns]:
Total: 98393809781
Min: 12969, Max: 13282577
Writing weights to "paritydb_weights.rs"
```
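As a sanity check, the average read time follows directly from the 'read' summary above:

```rust
fn main() {
    // Numbers from the 'read' benchmark output above.
    let total_ns: u64 = 19_668_919_930;
    let keys: u64 = 1_379_083;
    let avg = total_ns / keys;
    // ~14.3 µs per read, matching the value discussed below.
    println!("average read: {avg} ns");
    assert!(avg > 14_000 && avg < 15_000);
}
```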
You will see that the [paritydb_weights.rs] file was modified and now contains new weights. The exact command for
Polkadot can be seen at the top of the file.

This uses the most recent block from your snapshot, which is printed at the top. The value size summary tells us that
the pruned Polkadot chain state is ~253 MiB in size. Reading a value on average takes (in this example) 14.3 µs and
writing 71.3 µs.

The interesting part in the generated weight file tells us the weight constants and some statistics about the
measurements:
```rust
/// Time to read one storage item.
/// Calculated by multiplying the *Average* of all values with `1.1` and adding `0`.
write: 71_347 * constants::WEIGHT_REF_TIME_PER_NANOS,
```
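The constants are derived from the statistics as described in the file comment (the *Average* multiplied by `--mul` and with `--add` added); a sketch with the assumed defaults of `1.1` and `0`:

```rust
/// Weight constant from the measured statistics: average * `--mul` + `--add`.
fn weight_ns(average_ns: u64, mul: f64, add: u64) -> u64 {
    (average_ns as f64 * mul) as u64 + add
}

fn main() {
    // The ~14.3 µs average read time from the summary above.
    println!("read weight: {} ns", weight_ns(14_262, 1.1, 0));
}
```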
## Arguments
- `--db` Specify which database backend to use. This greatly influences the results.
- `--state-version` Set the version of the state encoding that this snapshot uses. Should be set to `1` for Substrate
`--dev` and `0` for Polkadot et al. Using the wrong version can corrupt the snapshot.
- [`--mul`](../shared/README.md#arguments)
- [`--add`](../shared/README.md#arguments)
- [`--metric`](../shared/README.md#arguments)
License: Apache-2.0
<!-- LINKS -->
[Polkachu]: https://polkachu.com/snapshots
[paritydb_weights.rs]: https://github.com/paritytech/polkadot/blob/c254e5975711a6497af256f6831e9a6c752d28f5/runtime/polkadot/constants/src/weights/paritydb_weights.rs#L60