Compare commits

...

31 Commits

Author SHA1 Message Date
Omar Abdulla 7f4fadf7b1 Add a cached fs abstraction 2025-08-14 13:12:23 +03:00
Omar f2045db0e9 Add compiler directives to metadata (#139) 2025-08-14 07:38:56 +00:00
Omar 5a11f44673 Misc features/improvements (#138)
* Implement various needed features and improvements

* Reorder the metadata struct

* Format comments
2025-08-13 13:50:06 +00:00
James Wilson 46aea0890d Split reporter and case runner, use channels to pass test reports (#137)
* Use channels to send data to reporting thread and avoid hangs / mutex / duration. Limit max concurrent tasks to avoid too many open files

* More appropriate name for driver/reporter task fns

* Back to parallelise individual cases, report individual cases, address grumbles

* newline before 'Failures' title in report
2025-08-13 13:10:26 +00:00
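
A minimal sketch of the pattern this commit describes, using hypothetical names rather than the repository's actual types: a bounded mpsc channel feeds a single reporter task, and a semaphore caps how many cases run at once so the runner cannot exhaust file descriptors.

    use std::sync::Arc;
    use tokio::sync::{Semaphore, mpsc};

    // Hypothetical report type; the real reporter carries richer data.
    struct CaseReport {
        case_name: String,
        passed: bool,
    }

    #[tokio::main]
    async fn main() {
        // Bounded channel: runners send reports, one reporter task drains them,
        // so no mutex is shared and a slow reporter applies back-pressure.
        let (tx, mut rx) = mpsc::channel::<CaseReport>(128);
        // Cap concurrent cases to avoid "too many open files".
        let semaphore = Arc::new(Semaphore::new(16));

        let reporter = tokio::spawn(async move {
            while let Some(report) = rx.recv().await {
                eprintln!(
                    "{}: {}",
                    report.case_name,
                    if report.passed { "ok" } else { "FAILED" }
                );
            }
        });

        let mut runners = Vec::new();
        for i in 0..64 {
            let tx = tx.clone();
            let semaphore = semaphore.clone();
            runners.push(tokio::spawn(async move {
                let _permit = semaphore.acquire_owned().await.expect("semaphore closed");
                let report = CaseReport { case_name: format!("case-{i}"), passed: true };
                tx.send(report).await.ok();
            }));
        }
        for runner in runners {
            runner.await.unwrap();
        }
        drop(tx); // closing the last sender ends the reporter loop
        reporter.await.unwrap();
    }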
Omar 9b40c9b9e3 Add an EVM version filter (#136)
* Add an EVM version filter

* Update naming
2025-08-12 10:19:59 +00:00
Omar f67a9bf643 Refactor/ignore null values (#135)
* Skip serialization of null values

* Add support for comments in various steps
2025-08-12 08:55:21 +00:00
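
The "skip serialization of null values" change above is typically done in serde with a field attribute like the following; this is a generic sketch, not the repository's actual metadata struct.

    use serde::Serialize;

    #[derive(Serialize)]
    struct Step {
        name: String,
        // Omitted from the JSON entirely when `None`, instead of appearing as null.
        #[serde(skip_serializing_if = "Option::is_none")]
        comment: Option<String>,
    }

    fn main() {
        let step = Step { name: "deploy".into(), comment: None };
        // Prints {"name":"deploy"} with no "comment":null entry.
        println!("{}", serde_json::to_string(&step).unwrap());
    }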
Omar 67d767ffde Implement storage empty assertion (#134) 2025-08-11 13:17:19 +00:00
Omar f7fbe094ec Balance assertions (#133)
* Make metadata serializable

* Refactor tests to use steps

* Add a balance assertion test step

* Test balance deserialization

* Box the test steps

* Permit size difference in step output
2025-08-11 12:11:16 +00:00
Omar 90b2dd4cfe Make metadata serializable (#132) 2025-08-10 21:57:41 +00:00
Omar 64d63ef999 Remove the provider cache (#121)
* Remove the provider cache

* Add timing information to the CLI report
2025-08-07 03:55:24 +00:00
Omar 757bfbe116 Add more resolvable variables (#120)
* Allow resolution of base fee

* Fix block difficulty resolution

* Allow for the resolution of gas price
2025-08-06 15:17:36 +00:00
Omar 8619e7feb0 Fix the transaction tracing issues (#118)
* Set the gc mode to archive in geth

* Add a maximum to the exponential backoff wait duration

* Edit the formatting of the CLI case reporter
2025-08-06 12:25:39 +00:00
Omar edba49b301 Use SolidityLang for solc downloads (#117) 2025-08-06 10:35:05 +00:00
Omar 9980926d40 Add a case ignore flag (#114)
* Added a resolver tied to a specific block

* Increase the number of private keys

* Increase kitchensink wait time to 60 seconds

* Add a case ignore flag
2025-08-04 16:40:53 +00:00
Omar ff993d44a5 Added a resolver tied to a specific block (#111)
* Added a resolver tied to a specific block

* Increase the number of private keys

* Increase kitchensink wait time to 60 seconds
2025-08-04 12:45:47 +00:00
Omar 8cbb1a9f77 Added basic console reporting (#110)
* Added basic console reporting

* Add some waiting period to the printing task

* Print to the stderr and print logs to stdout
2025-08-04 06:05:49 +00:00
Omar 56c2fe8c0c Parallelize Cases (#109)
* Parallelize over cases

* Rename the state and driver

* Parallelize execution

* Update the default config of the tool

* Make codebase async

* Fix machete

* Fix tests & clear node directories before startup

* Cleanup the cleanup logic

* Rename geth node
2025-08-01 11:00:08 +00:00
Omar 330a773a1c Add variables support (#96) 2025-07-30 08:41:03 +00:00
Omar f51693cb9f Support multiple compiler versions (#92)
* Allow for downloader to use version requirements.

We will soon add support for honoring the compiler version requirement
from the metadata files. The compiler version is specified in the solc
modes section of the file, and it's specified as a `VersionReq` rather
than as a concrete version.

Therefore, we need to have the ability to honor this version requirement
and find the best version that satisfies the requirement.

* Request `VersionOrRequirement` in compiler interface

* Honor the compiler version requirement in metadata

This commit honors the compiler version requirement listed in the solc
modes of the metadata file. If this version requirement is provided then
it overrides what was passed in the CLI. Otherwise, the CLI version will
be used.

* Make compiler IO completely generic.

Before this commit, the types that were used for the compiler input and
output were the resolc compiler types, which was a leaky abstraction: we
have traits to abstract the compilers away, yet we exposed their internal
types to other crates.

This commit did the following:
1. Made the compiler IO types fully generic so that all of the logic for
   constructing the map of compiled contracts is done by the compiler
   implementation and not by the consuming code.
2. Changed the input types used for Solc to be the forge standard JSON
   types for Solc instead of resolc.

* Fix machete

* Add resolc to CI

* Add resolc to CI

* Add resolc to CI

* Add resolc to CI
2025-07-30 04:56:23 +00:00
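
The "find the best version that satisfies the requirement" logic can be illustrated with the semver crate. This is a sketch under the assumption that the downloader knows which versions are available; the function name is hypothetical.

    use semver::{Version, VersionReq};

    /// Picks the newest available compiler version satisfying the requirement.
    fn best_match(available: &[Version], req: &VersionReq) -> Option<Version> {
        available.iter().filter(|v| req.matches(v)).max().cloned()
    }

    fn main() {
        let available = vec![
            Version::new(0, 8, 19),
            Version::new(0, 8, 24),
            Version::new(0, 7, 6),
        ];
        let req = VersionReq::parse("^0.8.0").unwrap();
        assert_eq!(best_match(&available, &req), Some(Version::new(0, 8, 24)));
    }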
James Wilson 4db7009640 Ensure path in corpus is relative to corpus file (#85) 2025-07-29 13:12:16 +00:00
Omar 5a36e242ec Allow for files in corpus definitions (#87)
* Allow for files to be specified in the corpus file

* Attempt to improve the geth tx indexing issue.

We're facing an issue where Geth transaction indexing can sometimes stall
on some of the nodes we're running. The logs show that all transactions
need about 1 second of waiting time. However, during certain runs some of
the nodes hit an issue where their transaction indexer fails (either at
the start or after some amount of time), which means we never get the
receipts back from those specific nodes.

This is not a load issue, as all of the other nodes appear to handle the
same load just fine. However, once a node gets into this state it cannot
get out of it and is effectively bricked for the entire run.

This commit adds some more command line arguments to the geth command in
hopes of improving this issue.
2025-07-29 13:02:53 +00:00
James Wilson 33329632b5 Increase geth instantiate timeout from 2s to 5s (#86) 2025-07-29 10:34:31 +00:00
Omar 429f2e92a2 Fix contract discovery for simple tests (#83) 2025-07-28 07:05:53 +00:00
Omar 65f41f2038 Correct the type of address in matterlabs events (#82) 2025-07-28 05:01:52 +00:00
Omar 3ed8a1ca1c Support compiler-version aware exceptions (#81) 2025-07-25 14:23:17 +00:00
Omar 2923d675cd Support Compile-time Linking (#79)
* Use wrappers for libraries in metadata.

* Create a unified way to access deployed contracts

* Support linking at compile time
2025-07-25 07:03:21 +00:00
Omar 8f5bcf08ad Support Calldata arithmetic (#77)
* Re-order the input file.

This commit reorders the input file such that we have a definitions
section and an implementations section and such that the order of
the items in both sections is the same.

* Implement a reverse polish calculator for calldata arithmetic
2025-07-24 15:35:25 +00:00
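
A reverse-polish evaluator of the kind this commit describes can be sketched in a few lines. This illustrates the technique only; the repository's actual calldata step format and operand types may differ.

    /// A minimal reverse-polish evaluator over u64.
    fn eval_rpn(tokens: &[&str]) -> Option<u64> {
        let mut stack = Vec::new();
        for token in tokens {
            match *token {
                "+" | "-" | "*" => {
                    let rhs = stack.pop()?;
                    let lhs: u64 = stack.pop()?;
                    stack.push(match *token {
                        "+" => lhs.checked_add(rhs)?,
                        "-" => lhs.checked_sub(rhs)?,
                        _ => lhs.checked_mul(rhs)?,
                    });
                }
                number => stack.push(number.parse().ok()?),
            }
        }
        (stack.len() == 1).then(|| stack[0])
    }

    fn main() {
        // "3 4 + 2 *" => (3 + 4) * 2 = 14
        assert_eq!(eval_rpn(&["3", "4", "+", "2", "*"]), Some(14));
    }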
Omar 90fb89adc0 Add a common crate (#75)
* Add a barebones common crate

* Refactor some code into the common crate

* Add a `ResolverApi` interface.

This commit adds a `ResolverApi` trait to the `format` crate that can be
implemented by any type that can act as a resolver. A resolver is able
to provide information on the chain state. This chain state could be
fresh or it could be cached (which is something that we will do in a
future PR).

This cleans up our crate graph so that `format` is not depending on the
node interactions crate for the `EthereumNode` trait.

* Cleanup the blocking executor
2025-07-24 12:42:45 +00:00
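
The shape of such a resolver can be sketched as a trait like the one below. The signatures are hypothetical; the commit message only states that a resolver "provides information on the chain state", fresh or cached.

    // A hypothetical resolver interface: implementors answer chain-state queries,
    // whether by asking a live node or by serving previously cached answers.
    trait ResolverApi {
        fn chain_id(&self) -> anyhow::Result<u64>;
        fn base_fee(&self) -> anyhow::Result<u128>;
        fn block_number(&self) -> anyhow::Result<u64>;
    }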
Omar b03ad3027e Pre-seed accounts with more ETH. (#73)
* Pre-seed accounts with more ETH.

This commit fixes some issues around how much ETH we seed an
account with in genesis. Currently, any account that the node has keys
to sign for will be seeded with u128::MAX WEI in genesis. This also
includes the default signer account.

* Bump commit hash of polkadot SDK

* Change how the cache key is computed

* Revert "Change how the cache key is computed"

This reverts commit 75afdd9cfd.

* Revert "Bump commit hash of polkadot SDK"

This reverts commit 8aaa69780e.

* Add extra comments

* Revert "Add extra comments"

This reverts commit bd4de2c83d.

* Update the initial balance
2025-07-24 08:46:14 +00:00
Omar 972f3b6d5b Wait longer for geth receipts (#74) 2025-07-24 04:40:19 +00:00
Omar 6f4aa731ab Handle exceptions (#54)
* Add support for wrapper types

* Move `FilesWithExtensionIterator` to `core::common`

* Remove unneeded use of two `HashMap`s

* Make metadata structs more typed

* Impl new_from for wrapper types

* Implement the new input handling logic

* Fix edge-case in input handling

* Ignore macro doc comment tests

* Correct comment

* Fix edge-case in deployment order

* Handle calldata better

* Allow for the use of function signatures

* Add support for exceptions

* Cached nonce allocator

* Fix tests

* Add support for address replacement

* Cleanup implementation

* Cleanup mutability

* Wire up address replacement with rest of code

* Implement caller replacement

* Switch to callframe trace for exceptions

* Add a way to skip tests if they don't match the target

* Handle values from the metadata files

* Remove address replacement

* Correct the arguments

* Remove empty impl

* Remove address replacement

* Correct the arguments

* Remove empty impl

* Fix size_requirement underflow

* Add support for wildcards in exceptions

* Fix calldata construction of single calldata

* Better handling for length in equivalency checks

* Make initial balance a constant

* Fix size_requirement underflow

* Add support for wildcards in exceptions

* Fix calldata construction of single calldata

* Better handling for length in equivalency checks

* Fix tests
2025-07-24 03:45:53 +00:00
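
One of the bullets above mentions a cached nonce allocator. The usual shape of that pattern is to consult the node once per account and hand out subsequent nonces locally; the sketch below is generic, with hypothetical names and a plain byte-array key standing in for whatever address type the repository uses.

    use std::collections::HashMap;
    use std::sync::Mutex;

    #[derive(Default)]
    struct NonceAllocator {
        next_nonce: Mutex<HashMap<[u8; 20], u64>>,
    }

    impl NonceAllocator {
        // `fetch_from_chain` is only consulted the first time an account is seen;
        // afterwards nonces are allocated from the in-memory counter.
        fn allocate(&self, account: [u8; 20], fetch_from_chain: impl FnOnce() -> u64) -> u64 {
            let mut map = self.next_nonce.lock().unwrap();
            let entry = map.entry(account).or_insert_with(fetch_from_chain);
            let nonce = *entry;
            *entry += 1;
            nonce
        }
    }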
56 changed files with 4836 additions and 2266 deletions
+16
@@ -99,9 +99,12 @@ jobs:
       - name: Install Geth on Ubuntu
         if: matrix.os == 'ubuntu-24.04'
         run: |
+          sudo add-apt-repository -y ppa:ethereum/ethereum
           sudo apt-get update
           sudo apt-get install -y protobuf-compiler
+          sudo apt-get install -y solc
           # We were facing some issues in CI with the 1.16.* versions of geth, and specifically on
           # Ubuntu. Eventually, we found out that the last version of geth that worked in our CI was
           # version 1.15.11. Thus, this is the version that we want to use in CI. The PPA sadly does
@@ -122,12 +125,22 @@ jobs:
           wget -qO- "$URL" | sudo tar xz -C /usr/local/bin --strip-components=1
           geth --version
+          curl -sL https://github.com/paritytech/revive/releases/download/v0.3.0/resolc-x86_64-unknown-linux-musl -o resolc
+          chmod +x resolc
+          sudo mv resolc /usr/local/bin
       - name: Install Geth on macOS
         if: matrix.os == 'macos-14'
         run: |
           brew tap ethereum/ethereum
           brew install ethereum protobuf
+          brew install solidity
+          curl -sL https://github.com/paritytech/revive/releases/download/v0.3.0/resolc-universal-apple-darwin -o resolc
+          chmod +x resolc
+          sudo mv resolc /usr/local/bin
       - name: Machete
         uses: bnjbvr/cargo-machete@v0.7.1
@@ -143,5 +156,8 @@ jobs:
       - name: Check eth-rpc version
         run: eth-rpc --version
+      - name: Check resolc version
+        run: resolc --version
       - name: Test cargo workspace
         run: make test
Generated
+218 -85
@@ -67,9 +67,9 @@ checksum = "683d7910e743518b0e34f1186f92494becacb047c7b6bf616c96772180fef923"
 [[package]]
 name = "alloy"
-version = "1.0.20"
+version = "1.0.22"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "ae58d888221eecf621595e2096836ce7cfc37be06bfa39d7f64aa6a3ea4c9e5b"
+checksum = "8ad4eb51e7845257b70c51b38ef8d842d5e5e93196701fcbd757577971a043c6"
 dependencies = [
  "alloy-consensus",
  "alloy-contract",
@@ -102,15 +102,16 @@ dependencies = [
 [[package]]
 name = "alloy-consensus"
-version = "1.0.9"
+version = "1.0.22"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "ad451f9a70c341d951bca4e811d74dbe1e193897acd17e9dbac1353698cc430b"
+checksum = "ca3b746060277f3d7f9c36903bb39b593a741cb7afcb0044164c28f0e9b673f0"
 dependencies = [
  "alloy-eips",
  "alloy-primitives",
  "alloy-rlp",
  "alloy-serde",
  "alloy-trie",
+ "alloy-tx-macros",
  "auto_impl",
  "c-kzg",
  "derive_more 2.0.1",
@@ -126,9 +127,9 @@ dependencies = [
 [[package]]
 name = "alloy-consensus-any"
-version = "1.0.9"
+version = "1.0.22"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "142daffb15d5be1a2b20d2cd540edbcef03037b55d4ff69dc06beb4d06286dba"
+checksum = "bf98679329fa708fa809ea596db6d974da892b068ad45e48ac1956f582edf946"
 dependencies = [
  "alloy-consensus",
  "alloy-eips",
@@ -140,9 +141,9 @@ dependencies = [
 [[package]]
 name = "alloy-contract"
-version = "1.0.9"
+version = "1.0.22"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "ebf25443920ecb9728cb087fe4dc04a0b290bd6ac85638c58fe94aba70f1a44e"
+checksum = "a10e47f5305ea08c37b1772086c1573e9a0a257227143996841172d37d3831bb"
 dependencies = [
  "alloy-consensus",
  "alloy-dyn-abi",
@@ -157,6 +158,7 @@ dependencies = [
  "alloy-transport",
  "futures",
  "futures-util",
+ "serde_json",
  "thiserror 2.0.12",
 ]
@@ -227,9 +229,9 @@ dependencies = [
 [[package]]
 name = "alloy-eips"
-version = "1.0.9"
+version = "1.0.22"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "3056872f6da48046913e76edb5ddced272861f6032f09461aea1a2497be5ae5d"
+checksum = "f562a81278a3ed83290e68361f2d1c75d018ae3b8589a314faf9303883e18ec9"
 dependencies = [
  "alloy-eip2124",
  "alloy-eip2930",
@@ -247,15 +249,16 @@ dependencies = [
 [[package]]
 name = "alloy-genesis"
-version = "1.0.9"
+version = "1.0.22"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "c98fb40f07997529235cc474de814cd7bd9de561e101716289095696c0e4639d"
+checksum = "dc41384e9ab8c9b2fb387c52774d9d432656a28edcda1c2d4083e96051524518"
 dependencies = [
  "alloy-eips",
  "alloy-primitives",
  "alloy-serde",
  "alloy-trie",
  "serde",
+ "serde_with",
 ]

 [[package]]
@@ -272,12 +275,13 @@ dependencies = [
 [[package]]
 name = "alloy-json-rpc"
-version = "1.0.9"
+version = "1.0.22"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "dc08b31ebf9273839bd9a01f9333cbb7a3abb4e820c312ade349dd18bdc79581"
+checksum = "12c454fcfcd5d26ed3b8cae5933cbee9da5f0b05df19b46d4bd4446d1f082565"
 dependencies = [
  "alloy-primitives",
  "alloy-sol-types",
+ "http",
  "serde",
  "serde_json",
  "thiserror 2.0.12",
@@ -286,9 +290,9 @@ dependencies = [
 [[package]]
 name = "alloy-network"
-version = "1.0.9"
+version = "1.0.22"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "ed117b08f0cc190312bf0c38c34cf4f0dabfb4ea8f330071c587cd7160a88cb2"
+checksum = "42d6d39eabe5c7b3d8f23ac47b0b683b99faa4359797114636c66e0743103d05"
 dependencies = [
  "alloy-consensus",
  "alloy-consensus-any",
@@ -312,9 +316,9 @@ dependencies = [
 [[package]]
 name = "alloy-network-primitives"
-version = "1.0.9"
+version = "1.0.22"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "c7162ff7be8649c0c391f4e248d1273e85c62076703a1f3ec7daf76b283d886d"
+checksum = "3704fa8b7ba9ba3f378d99b3d628c8bc8c2fc431b709947930f154e22a8368b6"
 dependencies = [
  "alloy-consensus",
  "alloy-eips",
@@ -335,6 +339,7 @@ dependencies = [
  "const-hex",
  "derive_more 2.0.1",
  "foldhash",
+ "getrandom 0.3.3",
  "hashbrown 0.15.3",
  "indexmap 2.10.0",
  "itoa",
@@ -342,7 +347,7 @@ dependencies = [
  "keccak-asm",
  "paste",
  "proptest",
- "rand 0.9.1",
+ "rand 0.9.2",
  "ruint",
  "rustc-hash",
  "serde",
@@ -352,9 +357,9 @@ dependencies = [
 [[package]]
 name = "alloy-provider"
-version = "1.0.9"
+version = "1.0.22"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "d84eba1fd8b6fe8b02f2acd5dd7033d0f179e304bd722d11e817db570d1fa6c4"
+checksum = "08800e8cbe70c19e2eb7cf3d7ff4b28bdd9b3933f8e1c8136c7d910617ba03bf"
 dependencies = [
  "alloy-chains",
  "alloy-consensus",
@@ -380,6 +385,7 @@ dependencies = [
  "either",
  "futures",
  "futures-utils-wasm",
+ "http",
  "lru",
  "parking_lot",
  "pin-project",
@@ -395,9 +401,9 @@ dependencies = [
 [[package]]
 name = "alloy-pubsub"
-version = "1.0.9"
+version = "1.0.22"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "8550f7306e0230fc835eb2ff4af0a96362db4b6fc3f25767d161e0ad0ac765bf"
+checksum = "ae68457a2c2ead6bd7d7acb5bf5f1623324b1962d4f8e7b0250657a3c3ab0a0b"
 dependencies = [
  "alloy-json-rpc",
  "alloy-primitives",
@@ -438,9 +444,9 @@ dependencies = [
 [[package]]
 name = "alloy-rpc-client"
-version = "1.0.9"
+version = "1.0.22"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "518a699422a3eab800f3dac2130d8f2edba8e4fff267b27a9c7dc6a2b0d313ee"
+checksum = "162301b5a57d4d8f000bf30f4dcb82f9f468f3e5e846eeb8598dd39e7886932c"
 dependencies = [
  "alloy-json-rpc",
  "alloy-primitives",
@@ -448,7 +454,6 @@ dependencies = [
  "alloy-transport",
  "alloy-transport-http",
  "alloy-transport-ipc",
- "async-stream",
  "futures",
  "pin-project",
  "reqwest",
@@ -458,16 +463,15 @@ dependencies = [
  "tokio-stream",
  "tower",
  "tracing",
- "tracing-futures",
  "url",
  "wasmtimer",
 ]

 [[package]]
 name = "alloy-rpc-types"
-version = "1.0.9"
+version = "1.0.22"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "c000cab4ec26a4b3e29d144e999e1c539c2fa0abed871bf90311eb3466187ca8"
+checksum = "6cd8ca94ae7e2b32cc3895d9981f3772aab0b4756aa60e9ed0bcfee50f0e1328"
 dependencies = [
  "alloy-primitives",
  "alloy-rpc-types-eth",
@@ -478,9 +482,9 @@ dependencies = [
 [[package]]
 name = "alloy-rpc-types-any"
-version = "1.0.9"
+version = "1.0.22"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "508b2fbe66d952089aa694e53802327798806498cd29ff88c75135770ecaabfc"
+checksum = "076b47e834b367d8618c52dd0a0d6a711ddf66154636df394805300af4923b8a"
 dependencies = [
  "alloy-consensus-any",
  "alloy-rpc-types-eth",
@@ -489,9 +493,9 @@ dependencies = [
 [[package]]
 name = "alloy-rpc-types-debug"
-version = "1.0.9"
+version = "1.0.22"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "8c832f2e851801093928dbb4b7bd83cd22270faf76b2e080646b806a285c8757"
+checksum = "94a2a86ad7b7d718c15e79d0779bd255561b6b22968dc5ed2e7c0fbc43bb55fe"
 dependencies = [
  "alloy-primitives",
  "serde",
@@ -499,9 +503,9 @@ dependencies = [
 [[package]]
 name = "alloy-rpc-types-eth"
-version = "1.0.9"
+version = "1.0.22"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "fcaf7dff0fdd756a714d58014f4f8354a1706ebf9fa2cf73431e0aeec3c9431e"
+checksum = "2c2f847e635ec0be819d06e2ada4bcc4e4204026a83c4bfd78ae8d550e027ae7"
 dependencies = [
  "alloy-consensus",
  "alloy-consensus-any",
@@ -514,14 +518,15 @@ dependencies = [
  "itertools 0.14.0",
  "serde",
  "serde_json",
+ "serde_with",
  "thiserror 2.0.12",
 ]

 [[package]]
 name = "alloy-rpc-types-trace"
-version = "1.0.9"
+version = "1.0.22"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "6e3507a04e868dd83219ad3cd6a8c58aefccb64d33f426b3934423a206343e84"
+checksum = "6fc58180302a94c934d455eeedb3ecb99cdc93da1dbddcdbbdb79dd6fe618b2a"
 dependencies = [
  "alloy-primitives",
  "alloy-rpc-types-eth",
@@ -533,9 +538,9 @@ dependencies = [
 [[package]]
 name = "alloy-serde"
-version = "1.0.9"
+version = "1.0.22"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "730e8f2edf2fc224cabd1c25d090e1655fa6137b2e409f92e5eec735903f1507"
+checksum = "ae699248d02ade9db493bbdae61822277dc14ae0f82a5a4153203b60e34422a6"
 dependencies = [
  "alloy-primitives",
  "serde",
@@ -544,9 +549,9 @@ dependencies = [
 [[package]]
 name = "alloy-signer"
-version = "1.0.9"
+version = "1.0.22"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "6b0d2428445ec13edc711909e023d7779618504c4800be055a5b940025dbafe3"
+checksum = "3cf7d793c813515e2b627b19a15693960b3ed06670f9f66759396d06ebe5747b"
 dependencies = [
  "alloy-primitives",
  "async-trait",
@@ -559,9 +564,9 @@ dependencies = [
 [[package]]
 name = "alloy-signer-local"
-version = "1.0.9"
+version = "1.0.22"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "e14fe6fedb7fe6e0dfae47fe020684f1d8e063274ef14bca387ddb7a6efa8ec1"
+checksum = "51a424bc5a11df0d898ce0fd15906b88ebe2a6e4f17a514b51bc93946bb756bd"
 dependencies = [
  "alloy-consensus",
  "alloy-network",
@@ -648,9 +653,9 @@ dependencies = [
 [[package]]
 name = "alloy-transport"
-version = "1.0.9"
+version = "1.0.22"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "a712bdfeff42401a7dd9518f72f617574c36226a9b5414537fedc34350b73bf9"
+checksum = "4f317d20f047b3de4d9728c556e2e9a92c9a507702d2016424cd8be13a74ca5e"
 dependencies = [
  "alloy-json-rpc",
  "alloy-primitives",
@@ -671,9 +676,9 @@ dependencies = [
 [[package]]
 name = "alloy-transport-http"
-version = "1.0.9"
+version = "1.0.22"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "7ea5a76d7f2572174a382aedf36875bedf60bcc41116c9f031cf08040703a2dc"
+checksum = "ff084ac7b1f318c87b579d221f11b748341d68b9ddaa4ffca5e62ed2b8cfefb4"
 dependencies = [
  "alloy-json-rpc",
  "alloy-transport",
@@ -686,9 +691,9 @@ dependencies = [
 [[package]]
 name = "alloy-transport-ipc"
-version = "1.0.9"
+version = "1.0.22"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "606af17a7e064d219746f6d2625676122c79d78bf73dfe746d6db9ecd7dbcb85"
+checksum = "edb099cdad8ed2e6a80811cdf9bbf715ebf4e34c981b4a6e2d1f9daacbf8b218"
 dependencies = [
  "alloy-json-rpc",
  "alloy-pubsub",
@@ -706,9 +711,9 @@ dependencies = [
 [[package]]
 name = "alloy-trie"
-version = "0.8.1"
+version = "0.9.0"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "983d99aa81f586cef9dae38443245e585840fcf0fc58b09aee0b1f27aed1d500"
+checksum = "bada1fc392a33665de0dc50d401a3701b62583c655e3522a323490a5da016962"
 dependencies = [
  "alloy-primitives",
  "alloy-rlp",
@@ -720,6 +725,19 @@ dependencies = [
  "tracing",
 ]

+[[package]]
+name = "alloy-tx-macros"
+version = "1.0.22"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "1154c8187a5ff985c95a8b2daa2fedcf778b17d7668e5e50e556c4ff9c881154"
+dependencies = [
+ "alloy-primitives",
+ "darling",
+ "proc-macro2",
+ "quote",
+ "syn 2.0.101",
+]
+
 [[package]]
 name = "android-tzdata"
 version = "0.1.1"
@@ -2210,6 +2228,66 @@ dependencies = [
  "percent-encoding",
 ]

+[[package]]
+name = "foundry-compilers-artifacts"
+version = "0.18.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "c2676d70082ed23680fe2d08c0b750d5f7f2438c6d946f1cb140a76c5e5e0392"
+dependencies = [
+ "foundry-compilers-artifacts-solc",
+ "foundry-compilers-artifacts-vyper",
+]
+
+[[package]]
+name = "foundry-compilers-artifacts-solc"
+version = "0.18.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "a3ada94dc5946334bb08df574855ba345ab03ba8c6f233560c72c8d61fa9db80"
+dependencies = [
+ "alloy-json-abi",
+ "alloy-primitives",
+ "foundry-compilers-core",
+ "path-slash",
+ "regex",
+ "semver 1.0.26",
+ "serde",
+ "serde_json",
+ "thiserror 2.0.12",
+ "tracing",
+ "yansi",
+]
+
+[[package]]
+name = "foundry-compilers-artifacts-vyper"
+version = "0.18.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "372052af72652e375a6e7eed22179bd8935114e25e1c5a8cca7f00e8f20bd94c"
+dependencies = [
+ "alloy-json-abi",
+ "alloy-primitives",
+ "foundry-compilers-artifacts-solc",
+ "foundry-compilers-core",
+ "path-slash",
+ "semver 1.0.26",
+ "serde",
+]
+
+[[package]]
+name = "foundry-compilers-core"
+version = "0.18.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "bf0962c46855979300f6526ed57f987ccf6a025c2b92ce574b281d9cb2ef666b"
+dependencies = [
+ "alloy-primitives",
+ "cfg-if",
+ "dunce",
+ "path-slash",
+ "semver 1.0.26",
+ "serde",
+ "serde_json",
+ "thiserror 2.0.12",
+]
+
 [[package]]
 name = "fs-err"
 version = "2.11.0"
@@ -2635,7 +2713,7 @@ dependencies = [
  "libc",
  "percent-encoding",
  "pin-project-lite",
- "socket2",
+ "socket2 0.5.10",
  "system-configuration",
  "tokio",
  "tower-service",
@@ -2875,6 +2953,17 @@ dependencies = [
  "windows-sys 0.52.0",
 ]

+[[package]]
+name = "io-uring"
+version = "0.7.9"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "d93587f37623a1a17d94ef2bc9ada592f5465fe7732084ab7beefabe5c77c0c4"
+dependencies = [
+ "bitflags 2.9.1",
+ "cfg-if",
+ "libc",
+]
+
 [[package]]
 name = "ipnet"
 version = "2.11.0"
@@ -3268,13 +3357,14 @@ dependencies = [
 [[package]]
 name = "nybbles"
-version = "0.3.4"
+version = "0.4.1"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "8983bb634df7248924ee0c4c3a749609b5abcb082c28fffe3254b3eb3602b307"
+checksum = "675b3a54e5b12af997abc8b6638b0aee51a28caedab70d4967e0d5db3a3f1d06"
 dependencies = [
  "alloy-rlp",
- "const-hex",
+ "cfg-if",
  "proptest",
+ "ruint",
  "serde",
  "smallvec",
 ]
@@ -3438,6 +3528,12 @@ version = "1.0.15"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "57c0d7b74b563b49d38dae00a0c37d4d6de9b432382b2892f0574ddcae73fd0a"

+[[package]]
+name = "path-slash"
+version = "0.2.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "1e91099d4268b0e11973f036e885d652fb0b21fedcf69738c627f94db6a44f42"
+
 [[package]]
 name = "pbkdf2"
 version = "0.12.2"
@@ -3719,9 +3815,9 @@ dependencies = [
 [[package]]
 name = "rand"
-version = "0.9.1"
+version = "0.9.2"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "9fbfd9d094a40bf3ae768db9361049ace4c0e04a4fd6b359518bd7b73a73dd97"
+checksum = "6db2770f06117d490610c7488547d543617b21bfa07796d7a12f6f1bd53850d1"
 dependencies = [
  "rand_chacha 0.9.0",
  "rand_core 0.9.3",
@@ -3884,9 +3980,7 @@ dependencies = [
  "base64",
  "bytes",
  "encoding_rs",
- "futures-channel",
  "futures-core",
- "futures-util",
  "h2",
  "http",
  "http-body",
@@ -3931,16 +4025,31 @@ dependencies = [
 ]

 [[package]]
-name = "revive-dt-compiler"
+name = "revive-dt-common"
 version = "0.1.0"
 dependencies = [
  "anyhow",
+ "semver 1.0.26",
+ "tokio",
+]
+
+[[package]]
+name = "revive-dt-compiler"
+version = "0.1.0"
+dependencies = [
+ "alloy",
+ "alloy-primitives",
+ "anyhow",
+ "foundry-compilers-artifacts",
  "revive-common",
+ "revive-dt-common",
  "revive-dt-config",
  "revive-dt-solc-binaries",
  "revive-solc-json-interface",
  "semver 1.0.26",
+ "serde",
  "serde_json",
+ "tokio",
  "tracing",
 ]
@@ -3962,17 +4071,19 @@ dependencies = [
  "alloy",
  "anyhow",
  "clap",
+ "futures",
  "indexmap 2.10.0",
- "rayon",
+ "revive-dt-common",
  "revive-dt-compiler",
  "revive-dt-config",
  "revive-dt-format",
  "revive-dt-node",
  "revive-dt-node-interaction",
  "revive-dt-report",
- "revive-solc-json-interface",
+ "semver 1.0.26",
  "serde_json",
  "temp-dir",
+ "tokio",
  "tracing",
  "tracing-subscriber",
 ]
@@ -3985,10 +4096,12 @@ dependencies = [
  "alloy-primitives",
  "alloy-sol-types",
  "anyhow",
- "revive-dt-node-interaction",
+ "revive-common",
+ "revive-dt-common",
  "semver 1.0.26",
  "serde",
  "serde_json",
+ "tokio",
  "tracing",
 ]
@@ -3998,7 +4111,10 @@ version = "0.1.0"
 dependencies = [
  "alloy",
  "anyhow",
+ "revive-common",
+ "revive-dt-common",
  "revive-dt-config",
+ "revive-dt-format",
  "revive-dt-node-interaction",
  "serde",
  "serde_json",
@@ -4015,10 +4131,6 @@ version = "0.1.0"
 dependencies = [
  "alloy",
  "anyhow",
- "futures",
- "once_cell",
- "tokio",
- "tracing",
 ]

 [[package]]
@@ -4026,9 +4138,9 @@ name = "revive-dt-report"
 version = "0.1.0"
 dependencies = [
  "anyhow",
+ "revive-dt-compiler",
  "revive-dt-config",
  "revive-dt-format",
- "revive-solc-json-interface",
  "serde",
  "serde_json",
  "tracing",
@@ -4041,9 +4153,11 @@ dependencies = [
  "anyhow",
  "hex",
  "reqwest",
+ "revive-dt-common",
  "semver 1.0.26",
  "serde",
  "sha2 0.10.9",
+ "tokio",
  "tracing",
 ]
@@ -4113,7 +4227,7 @@ dependencies = [
  "primitive-types 0.12.2",
  "proptest",
  "rand 0.8.5",
- "rand 0.9.1",
+ "rand 0.9.2",
  "rlp",
  "ruint-macro",
  "serde",
@@ -4138,6 +4252,9 @@ name = "rustc-hash"
 version = "2.1.1"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "357703d41365b4b27c590e3ed91eabb1b663f07c4c084095e60cbed4362dff0d"
+dependencies = [
+ "rand 0.8.5",
+]

 [[package]]
 name = "rustc-hex"
@@ -4597,6 +4714,15 @@ version = "1.3.0"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "0fda2ff0d084019ba4d7c6f371c95d8fd75ce3524c3cb8fb653a3023f6323e64"

+[[package]]
+name = "signal-hook-registry"
+version = "1.4.5"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "9203b8055f63a2a00e2f593bb0510367fe707d7ff1e5c872de2f537b339e5410"
+dependencies = [
+ "libc",
+]
+
 [[package]]
 name = "signature"
 version = "2.2.0"
@@ -4641,6 +4767,16 @@ dependencies = [
  "windows-sys 0.52.0",
 ]

+[[package]]
+name = "socket2"
+version = "0.6.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "233504af464074f9d066d7b5416c5f9b894a5862a6506e306f7b816cdd6f1807"
+dependencies = [
+ "libc",
+ "windows-sys 0.59.0",
+]
+
 [[package]]
 name = "sp-application-crypto"
 version = "40.1.0"
@@ -5300,18 +5436,21 @@ checksum = "1f3ccbac311fea05f86f61904b462b55fb3df8837a366dfc601a0161d0532f20"
 [[package]]
 name = "tokio"
-version = "1.45.1"
+version = "1.47.0"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "75ef51a33ef1da925cea3e4eb122833cb377c61439ca401b770f54902b806779"
+checksum = "43864ed400b6043a4757a25c7a64a8efde741aed79a056a2fb348a406701bb35"
 dependencies = [
  "backtrace",
  "bytes",
+ "io-uring",
  "libc",
  "mio",
  "pin-project-lite",
- "socket2",
+ "signal-hook-registry",
+ "slab",
+ "socket2 0.6.0",
  "tokio-macros",
- "windows-sys 0.52.0",
+ "windows-sys 0.59.0",
 ]
@@ -5489,18 +5628,6 @@ dependencies = [
  "valuable",
 ]

-[[package]]
-name = "tracing-futures"
-version = "0.2.5"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "97d095ae15e245a057c8e8451bab9b3ee1e1f68e9ba2b4fbc18d0ac5237835f2"
-dependencies = [
- "futures",
- "futures-task",
- "pin-project",
- "tracing",
-]
-
 [[package]]
 name = "tracing-log"
 version = "0.2.0"
@@ -6200,6 +6327,12 @@ dependencies = [
  "tap",
 ]

+[[package]]
+name = "yansi"
+version = "1.0.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "cfe53a6657fd280eaa890a3bc59152892ffa3e30101319d168b781ed6529b049"
+
 [[package]]
 name = "yoke"
 version = "0.8.0"
+10 -5
@@ -8,9 +8,10 @@ authors = ["Parity Technologies <admin@parity.io>"]
 license = "MIT/Apache-2.0"
 edition = "2024"
 repository = "https://github.com/paritytech/revive-differential-testing.git"
-rust-version = "1.85.0"
+rust-version = "1.87.0"

 [workspace.dependencies]
+revive-dt-common = { version = "0.1.0", path = "crates/common" }
 revive-dt-compiler = { version = "0.1.0", path = "crates/compiler" }
 revive-dt-config = { version = "0.1.0", path = "crates/config" }
 revive-dt-core = { version = "0.1.0", path = "crates/core" }
@@ -25,24 +26,27 @@ alloy-primitives = "1.2.1"
 alloy-sol-types = "1.2.1"
 anyhow = "1.0"
 clap = { version = "4", features = ["derive"] }
+foundry-compilers-artifacts = { version = "0.18.0" }
 futures = { version = "0.3.31" }
 hex = "0.4.3"
-reqwest = { version = "0.12.15", features = ["blocking", "json"] }
+reqwest = { version = "0.12.15", features = ["json"] }
 once_cell = "1.21"
-rayon = { version = "1.10" }
 semver = { version = "1.0", features = ["serde"] }
 serde = { version = "1.0", default-features = false, features = ["derive"] }
 serde_json = { version = "1.0", default-features = false, features = [
     "arbitrary_precision",
     "std",
+    "unbounded_depth",
 ] }
 sha2 = { version = "0.10.9" }
 sp-core = "36.1.0"
 sp-runtime = "41.1.0"
 temp-dir = { version = "0.1.16" }
 tempfile = "3.3"
-tokio = { version = "1", default-features = false, features = [
+tokio = { version = "1.47.0", default-features = false, features = [
     "rt-multi-thread",
+    "process",
+    "rt",
 ] }
 uuid = { version = "1.8", features = ["v4"] }
 tracing = "0.1.41"
@@ -59,7 +63,7 @@ revive-common = { git = "https://github.com/paritytech/revive", rev = "3389865af7c3ff6f29a586d82157e8bc573c1a8e" }
 revive-differential = { git = "https://github.com/paritytech/revive", rev = "3389865af7c3ff6f29a586d82157e8bc573c1a8e" }

 [workspace.dependencies.alloy]
-version = "1.0"
+version = "1.0.22"
 default-features = false
 features = [
     "json-abi",
@@ -73,6 +77,7 @@ features = [
     "network",
     "serde",
     "rpc-types-eth",
+    "genesis",
 ]

 [profile.bench]
+12
@@ -8,6 +8,18 @@
         {
             "name": "first",
             "inputs": [
+                {
+                    "address": "0xdeadbeef00000000000000000000000000000042",
+                    "expected_balance": "1233"
+                },
+                {
+                    "address": "0xdeadbeef00000000000000000000000000000042",
+                    "is_storage_empty": true
+                },
+                {
+                    "address": "0xdeadbeef00000000000000000000000000000042",
+                    "is_storage_empty": false
+                },
                 {
                     "instance": "WBTC_1",
                     "method": "#deployer",
+14
@@ -0,0 +1,14 @@
[package]
name = "revive-dt-common"
description = "A library containing common concepts that other crates in the workspace can rely on"
version.workspace = true
authors.workspace = true
license.workspace = true
edition.workspace = true
repository.workspace = true
rust-version.workspace = true

[dependencies]
anyhow = { workspace = true }
semver = { workspace = true }
tokio = { workspace = true, default-features = false, features = ["time"] }
@@ -0,0 +1,38 @@
//! Implements a cached file system that allows for files to be read once into memory and then when
//! they're requested to be read again they will be returned from the cache.

use std::{
    collections::HashMap,
    path::{Path, PathBuf},
    sync::{Arc, LazyLock},
};

use anyhow::Result;
use tokio::sync::RwLock;

#[allow(clippy::type_complexity)]
static CACHE: LazyLock<Arc<RwLock<HashMap<PathBuf, Vec<u8>>>>> = LazyLock::new(Default::default);

pub struct CachedFileSystem;

impl CachedFileSystem {
    pub async fn read(path: impl AsRef<Path>) -> Result<Vec<u8>> {
        let cache_read_lock = CACHE.read().await;
        match cache_read_lock.get(path.as_ref()) {
            Some(entry) => Ok(entry.clone()),
            None => {
                drop(cache_read_lock);
                let content = std::fs::read(&path)?;
                let mut cache_write_lock = CACHE.write().await;
                cache_write_lock.insert(path.as_ref().to_path_buf(), content.clone());
                Ok(content)
            }
        }
    }

    pub async fn read_to_string(path: impl AsRef<Path>) -> Result<String> {
        let content = Self::read(path).await?;
        String::from_utf8(content).map_err(Into::into)
    }
}
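
A possible call-site for the cache above; the source path here is made up for illustration.

    async fn demo() -> anyhow::Result<()> {
        // The first call reads from disk and populates the cache; the second
        // call for the same path is served from memory.
        let first = CachedFileSystem::read_to_string("contracts/Token.sol").await?;
        let second = CachedFileSystem::read_to_string("contracts/Token.sol").await?;
        assert_eq!(first, second);
        Ok(())
    }

Note that the read lock is dropped before the file is read, so two tasks that miss simultaneously may both read the file and insert; the second insert overwrites the first with identical bytes, which is harmless for immutable inputs.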
+22
@@ -0,0 +1,22 @@
use std::{
    fs::{read_dir, remove_dir_all, remove_file},
    path::Path,
};

use anyhow::Result;

/// This method clears the passed directory of all of the files and directories contained within
/// without deleting the directory.
pub fn clear_directory(path: impl AsRef<Path>) -> Result<()> {
    for entry in read_dir(path.as_ref())? {
        let entry = entry?;
        let entry_path = entry.path();
        if entry_path.is_file() {
            remove_file(entry_path)?
        } else {
            remove_dir_all(entry_path)?
        }
    }

    Ok(())
}
+5
@@ -0,0 +1,5 @@
mod cached_file_system;
mod clear_dir;

pub use cached_file_system::*;
pub use clear_dir::*;
+3
@@ -0,0 +1,3 @@
mod poll;

pub use poll::*;
+69
@@ -0,0 +1,69 @@
use std::ops::ControlFlow;
use std::time::Duration;

use anyhow::{Result, anyhow};

const EXPONENTIAL_BACKOFF_MAX_WAIT_DURATION: Duration = Duration::from_secs(60);

/// A function that polls a fallible future for some period of time and errors if it fails to get a
/// result after polling.
///
/// Given a future that returns a [`Result<ControlFlow<O, ()>>`], this function calls the future
/// repeatedly (with some wait period) until the future returns a [`ControlFlow::Break`] or until it
/// returns an [`Err`], in which case the function stops polling and returns the error.
///
/// If the future keeps returning [`ControlFlow::Continue`] and fails to return a [`Break`] within
/// the permitted polling duration then this function returns an [`Err`].
///
/// [`Break`]: ControlFlow::Break
/// [`Continue`]: ControlFlow::Continue
pub async fn poll<F, O>(
    polling_duration: Duration,
    polling_wait_behavior: PollingWaitBehavior,
    mut future: impl FnMut() -> F,
) -> Result<O>
where
    F: Future<Output = Result<ControlFlow<O, ()>>>,
{
    let mut retries = 0;
    let mut total_wait_duration = Duration::ZERO;
    let max_allowed_wait_duration = polling_duration;

    loop {
        if total_wait_duration >= max_allowed_wait_duration {
            break Err(anyhow!(
                "Polling failed after {} retries and a total of {:?} of wait time",
                retries,
                total_wait_duration
            ));
        }

        match future().await? {
            ControlFlow::Continue(()) => {
                let next_wait_duration = match polling_wait_behavior {
                    PollingWaitBehavior::Constant(duration) => duration,
                    PollingWaitBehavior::ExponentialBackoff => {
                        Duration::from_secs(2u64.pow(retries))
                            .min(EXPONENTIAL_BACKOFF_MAX_WAIT_DURATION)
                    }
                };
                let next_wait_duration =
                    next_wait_duration.min(max_allowed_wait_duration - total_wait_duration);

                total_wait_duration += next_wait_duration;
                retries += 1;

                tokio::time::sleep(next_wait_duration).await;
            }
            ControlFlow::Break(output) => {
                break Ok(output);
            }
        }
    }
}

#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Hash, Default)]
pub enum PollingWaitBehavior {
    Constant(Duration),
    #[default]
    ExponentialBackoff,
}
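
A hypothetical call-site for `poll`, waiting up to two minutes for a receipt with exponential backoff; the receipt fetcher below is a stand-in for a real RPC call.

    use std::ops::ControlFlow;
    use std::time::Duration;

    async fn wait_for_receipt() -> anyhow::Result<String> {
        poll(
            Duration::from_secs(120),
            PollingWaitBehavior::ExponentialBackoff,
            || async {
                match try_fetch_receipt().await? {
                    // Break as soon as the receipt shows up.
                    Some(receipt) => Ok(ControlFlow::Break(receipt)),
                    // Otherwise keep polling.
                    None => Ok(ControlFlow::Continue(())),
                }
            },
        )
        .await
    }

    async fn try_fetch_receipt() -> anyhow::Result<Option<String>> {
        Ok(None) // stand-in for a real RPC call
    }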
@@ -1,4 +1,8 @@
-use std::{borrow::Cow, collections::HashSet, path::PathBuf};
+use std::{
+    borrow::Cow,
+    collections::HashSet,
+    path::{Path, PathBuf},
+};

 /// An iterator that finds files of a certain extension in the provided directory. You can think of
 /// this as a glob pattern similar to: `${path}/**/*.md`
@@ -18,10 +22,10 @@ pub struct FilesWithExtensionIterator {
 }

 impl FilesWithExtensionIterator {
-    pub fn new(root_directory: PathBuf) -> Self {
+    pub fn new(root_directory: impl AsRef<Path>) -> Self {
         Self {
             allowed_extensions: Default::default(),
-            directories_to_search: vec![root_directory],
+            directories_to_search: vec![root_directory.as_ref().to_path_buf()],
             files_matching_allowed_extensions: Default::default(),
         }
     }
+3
@@ -0,0 +1,3 @@
mod files_with_extension_iterator;

pub use files_with_extension_iterator::*;
+8
@@ -0,0 +1,8 @@
//! This crate provides common concepts, functionality, types, macros, and more that other crates in
//! the workspace can benefit from.

pub mod fs;
pub mod futures;
pub mod iterators;
pub mod macros;
pub mod types;
@@ -12,11 +12,9 @@
 /// pub struct CaseId(usize);
 /// ```
 ///
-/// And would also implement a number of methods on this type making it easier
-/// to use.
+/// And would also implement a number of methods on this type making it easier to use.
 ///
-/// These wrapper types become very useful as they make the code a lot easier
-/// to read.
+/// These wrapper types become very useful as they make the code a lot easier to read.
 ///
 /// Take the following as an example:
 ///
@@ -26,33 +24,31 @@
 /// }
 /// ```
 ///
-/// In the above code it's hard to understand what the various types refer to or
-/// what to expect them to contain.
+/// In the above code it's hard to understand what the various types refer to or what to expect them
+/// to contain.
 ///
-/// With these wrapper types we're able to create code that's self-documenting
-/// in that the types tell us what the code is referring to. The above code is
-/// transformed into
+/// With these wrapper types we're able to create code that's self-documenting in that the types
+/// tell us what the code is referring to. The above code is transformed into
 ///
 /// ```rust,ignore
 /// struct State {
 ///     contracts: HashMap<CaseId, HashMap<ContractName, ContractByteCode>>
 /// }
 /// ```
+///
+/// Note that we follow the same syntax for defining wrapper structs but we do not permit the use of
+/// generics.
 #[macro_export]
 macro_rules! define_wrapper_type {
     (
         $(#[$meta: meta])*
-        $ident: ident($ty: ty) $(;)?
+        $vis:vis struct $ident: ident($ty: ty);
     ) => {
         $(#[$meta])*
-        pub struct $ident($ty);
+        $vis struct $ident($ty);

         impl $ident {
-            pub fn new(value: $ty) -> Self {
-                Self(value)
-            }
-
-            pub fn new_from<T: Into<$ty>>(value: T) -> Self {
+            pub fn new(value: impl Into<$ty>) -> Self {
                 Self(value.into())
             }
@@ -104,3 +100,7 @@
         }
     };
 }
+
+/// Technically not needed but this allows for the macro to be found in the `macros` module of the
+/// crate in addition to being found in the root of the crate.
+pub use define_wrapper_type;
+3
@@ -0,0 +1,3 @@
mod define_wrapper_type;

pub use define_wrapper_type::*;
+3
@@ -0,0 +1,3 @@
mod version_or_requirement;

pub use version_or_requirement::*;
@@ -0,0 +1,41 @@
use semver::{Version, VersionReq};

#[derive(Clone, Debug)]
pub enum VersionOrRequirement {
    Version(Version),
    Requirement(VersionReq),
}

impl From<Version> for VersionOrRequirement {
    fn from(value: Version) -> Self {
        Self::Version(value)
    }
}

impl From<VersionReq> for VersionOrRequirement {
    fn from(value: VersionReq) -> Self {
        Self::Requirement(value)
    }
}

impl TryFrom<VersionOrRequirement> for Version {
    type Error = anyhow::Error;

    fn try_from(value: VersionOrRequirement) -> Result<Self, Self::Error> {
        let VersionOrRequirement::Version(version) = value else {
            anyhow::bail!("Version or requirement was not a version");
        };
        Ok(version)
    }
}

impl TryFrom<VersionOrRequirement> for VersionReq {
    type Error = anyhow::Error;

    fn try_from(value: VersionOrRequirement) -> Result<Self, Self::Error> {
        let VersionOrRequirement::Requirement(requirement) = value else {
            anyhow::bail!("Version or requirement was not a requirement");
        };
        Ok(requirement)
    }
}
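
A small usage sketch of the conversions above; the `wants` function is hypothetical, the semver types are real.

    use semver::{Version, VersionReq};

    // Both concrete versions and requirements convert into the same argument
    // type, so one entry point can serve both cases.
    fn wants(version: impl Into<VersionOrRequirement>) -> VersionOrRequirement {
        version.into()
    }

    fn main() {
        let exact = wants(Version::new(0, 8, 24));
        let ranged = wants(VersionReq::parse("^0.8").unwrap());
        assert!(matches!(exact, VersionOrRequirement::Version(_)));
        assert!(matches!(ranged, VersionOrRequirement::Requirement(_)));
    }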
+8 -1
@@ -9,11 +9,18 @@ repository.workspace = true
 rust-version.workspace = true

 [dependencies]
-anyhow = { workspace = true }
 revive-solc-json-interface = { workspace = true }
+revive-dt-common = { workspace = true }
 revive-dt-config = { workspace = true }
 revive-dt-solc-binaries = { workspace = true }
 revive-common = { workspace = true }
+alloy = { workspace = true }
+alloy-primitives = { workspace = true }
+anyhow = { workspace = true }
+foundry-compilers-artifacts = { workspace = true }
 semver = { workspace = true }
+serde = { workspace = true }
 serde_json = { workspace = true }
 tracing = { workspace = true }
+tokio = { workspace = true }
+107 -97
View File
@@ -4,20 +4,19 @@
//! - Polkadot revive Wasm compiler //! - Polkadot revive Wasm compiler
use std::{ use std::{
fs::read_to_string, collections::HashMap,
hash::Hash, hash::Hash,
path::{Path, PathBuf}, path::{Path, PathBuf},
}; };
use revive_dt_config::Arguments; use alloy::json_abi::JsonAbi;
use alloy_primitives::Address;
use semver::Version;
use serde::{Deserialize, Serialize};
use revive_common::EVMVersion; use revive_common::EVMVersion;
use revive_solc_json_interface::{ use revive_dt_common::{fs::CachedFileSystem, types::VersionOrRequirement};
SolcStandardJsonInput, SolcStandardJsonInputLanguage, SolcStandardJsonInputSettings, use revive_dt_config::Arguments;
SolcStandardJsonInputSettingsOptimizer, SolcStandardJsonInputSettingsSelection,
SolcStandardJsonOutput,
};
use semver::Version;
pub mod revive_js; pub mod revive_js;
pub mod revive_resolc; pub mod revive_resolc;
@@ -31,63 +30,45 @@ pub trait SolidityCompiler {
/// The low-level compiler interface. /// The low-level compiler interface.
fn build( fn build(
&self, &self,
input: CompilerInput<Self::Options>, input: CompilerInput,
) -> anyhow::Result<CompilerOutput<Self::Options>>; additional_options: Self::Options,
) -> impl Future<Output = anyhow::Result<CompilerOutput>>;
fn new(solc_executable: PathBuf) -> Self; fn new(solc_executable: PathBuf) -> Self;
fn get_compiler_executable(config: &Arguments, version: Version) -> anyhow::Result<PathBuf>; fn get_compiler_executable(
config: &Arguments,
version: impl Into<VersionOrRequirement>,
) -> impl Future<Output = anyhow::Result<PathBuf>>;
fn version(&self) -> anyhow::Result<Version>;
} }
/// The generic compilation input configuration. /// The generic compilation input configuration.
#[derive(Debug)] #[derive(Debug, Clone, Serialize, Deserialize)]
pub struct CompilerInput<T: PartialEq + Eq + Hash> { pub struct CompilerInput {
pub extra_options: T, pub enable_optimization: Option<bool>,
pub input: SolcStandardJsonInput, pub via_ir: Option<bool>,
pub evm_version: Option<EVMVersion>,
pub allow_paths: Vec<PathBuf>, pub allow_paths: Vec<PathBuf>,
pub base_path: Option<PathBuf>, pub base_path: Option<PathBuf>,
pub sources: HashMap<PathBuf, String>,
pub libraries: HashMap<PathBuf, HashMap<String, Address>>,
pub revert_string_handling: Option<RevertString>,
} }
/// The generic compilation output configuration. /// The generic compilation output configuration.
#[derive(Debug)] #[derive(Debug, Clone, Default, Serialize, Deserialize)]
pub struct CompilerOutput<T: PartialEq + Eq + Hash> { pub struct CompilerOutput {
/// The solc standard JSON input. /// The compiled contracts. The bytecode of the contract is kept as a string incase linking is
pub input: CompilerInput<T>, /// required and the compiled source has placeholders.
/// The produced solc standard JSON output. pub contracts: HashMap<PathBuf, HashMap<String, (String, JsonAbi)>>,
pub output: SolcStandardJsonOutput,
/// The error message in case the compiler returns abnormally.
pub error: Option<String>,
} }
impl<T> PartialEq for CompilerInput<T> /// A generic builder style interface for configuring the supported compiler options.
where
T: PartialEq + Eq + Hash,
{
fn eq(&self, other: &Self) -> bool {
let self_input = serde_json::to_vec(&self.input).unwrap_or_default();
let other_input = serde_json::to_vec(&self.input).unwrap_or_default();
self.extra_options.eq(&other.extra_options) && self_input == other_input
}
}
impl<T> Eq for CompilerInput<T> where T: PartialEq + Eq + Hash {}
impl<T> Hash for CompilerInput<T>
where
T: PartialEq + Eq + Hash,
{
fn hash<H: std::hash::Hasher>(&self, state: &mut H) {
self.extra_options.hash(state);
state.write(&serde_json::to_vec(&self.input).unwrap_or_default());
}
}
/// A generic builder style interface for configuring all compiler options.
pub struct Compiler<T: SolidityCompiler> { pub struct Compiler<T: SolidityCompiler> {
input: SolcStandardJsonInput, input: CompilerInput,
extra_options: T::Options, additional_options: T::Options,
allow_paths: Vec<PathBuf>,
base_path: Option<PathBuf>,
} }
impl Default for Compiler<solc::Solc> { impl Default for Compiler<solc::Solc> {
@@ -102,73 +83,102 @@ where
{ {
pub fn new() -> Self { pub fn new() -> Self {
Self { Self {
input: SolcStandardJsonInput { input: CompilerInput {
language: SolcStandardJsonInputLanguage::Solidity, enable_optimization: Default::default(),
sources: Default::default(), via_ir: Default::default(),
settings: SolcStandardJsonInputSettings::new( evm_version: Default::default(),
None,
Default::default(),
None,
SolcStandardJsonInputSettingsSelection::new_required(),
SolcStandardJsonInputSettingsOptimizer::new(
false,
None,
&Version::new(0, 0, 0),
false,
),
None,
None,
),
},
extra_options: Default::default(),
allow_paths: Default::default(), allow_paths: Default::default(),
base_path: None, base_path: Default::default(),
sources: Default::default(),
libraries: Default::default(),
revert_string_handling: Default::default(),
},
additional_options: T::Options::default(),
} }
} }
-    pub fn solc_optimizer(mut self, enabled: bool) -> Self {
-        self.input.settings.optimizer.enabled = enabled;
+    pub fn with_optimization(mut self, value: impl Into<Option<bool>>) -> Self {
+        self.input.enable_optimization = value.into();
         self
     }

-    pub fn with_source(mut self, path: &Path) -> anyhow::Result<Self> {
-        self.input
-            .sources
-            .insert(path.display().to_string(), read_to_string(path)?.into());
-        Ok(self)
+    pub fn with_via_ir(mut self, value: impl Into<Option<bool>>) -> Self {
+        self.input.via_ir = value.into();
+        self
     }

-    pub fn evm_version(mut self, evm_version: EVMVersion) -> Self {
-        self.input.settings.evm_version = Some(evm_version);
+    pub fn with_evm_version(mut self, version: impl Into<Option<EVMVersion>>) -> Self {
+        self.input.evm_version = version.into();
         self
     }

+    pub fn with_allow_path(mut self, path: impl AsRef<Path>) -> Self {
+        self.input.allow_paths.push(path.as_ref().into());
+        self
+    }
+
+    pub fn with_base_path(mut self, path: impl Into<Option<PathBuf>>) -> Self {
+        self.input.base_path = path.into();
+        self
+    }
+
+    pub async fn with_source(mut self, path: impl AsRef<Path>) -> anyhow::Result<Self> {
+        self.input.sources.insert(
+            path.as_ref().to_path_buf(),
+            CachedFileSystem::read_to_string(path.as_ref()).await?,
+        );
+        Ok(self)
+    }
+
+    pub fn with_library(
+        mut self,
+        path: impl AsRef<Path>,
+        name: impl AsRef<str>,
+        address: Address,
+    ) -> Self {
+        self.input
+            .libraries
+            .entry(path.as_ref().to_path_buf())
+            .or_default()
+            .insert(name.as_ref().into(), address);
+        self
+    }
+
-    pub fn extra_options(mut self, extra_options: T::Options) -> Self {
-        self.extra_options = extra_options;
+    pub fn with_revert_string_handling(
+        mut self,
+        revert_string_handling: impl Into<Option<RevertString>>,
+    ) -> Self {
+        self.input.revert_string_handling = revert_string_handling.into();
         self
     }

-    pub fn allow_path(mut self, path: PathBuf) -> Self {
-        self.allow_paths.push(path);
+    pub fn with_additional_options(mut self, options: impl Into<T::Options>) -> Self {
+        self.additional_options = options.into();
         self
     }

-    pub fn base_path(mut self, base_path: PathBuf) -> Self {
-        self.base_path = Some(base_path);
-        self
-    }
-
-    pub fn try_build(self, solc_path: PathBuf) -> anyhow::Result<CompilerOutput<T::Options>> {
-        T::new(solc_path).build(CompilerInput {
-            extra_options: self.extra_options,
-            input: self.input,
-            allow_paths: self.allow_paths,
-            base_path: self.base_path,
-        })
+    pub async fn try_build(
+        self,
+        compiler_path: impl AsRef<Path>,
+    ) -> anyhow::Result<CompilerOutput> {
+        T::new(compiler_path.as_ref().to_path_buf())
+            .build(self.input, self.additional_options)
+            .await
     }

-    /// Returns the compiler JSON input.
-    pub fn input(&self) -> SolcStandardJsonInput {
+    pub fn input(&self) -> CompilerInput {
         self.input.clone()
     }
 }
/// Defines how the compiler should handle revert strings.
#[derive(
Clone, Debug, Copy, PartialEq, Eq, PartialOrd, Ord, Hash, Default, Serialize, Deserialize,
)]
pub enum RevertString {
#[default]
Default,
Debug,
Strip,
VerboseDebug,
}
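With the pieces above in place, the compiler API reads as a fluent pipeline: configure, add sources, build. The following is a minimal usage sketch and not code from the repository; the contract path is hypothetical, and the `solc_path` argument is assumed to come from `get_compiler_executable`:

    async fn compile_example(solc_path: std::path::PathBuf) -> anyhow::Result<()> {
        let output = Compiler::<solc::Solc>::new()
            .with_optimization(true)
            .with_source("./contracts/Token.sol") // hypothetical source file
            .await?
            .try_build(solc_path)
            .await?;
        // CompilerOutput maps source path -> contract name -> (bytecode hex, ABI).
        for (path, contracts) in &output.contracts {
            for (name, (bytecode, _abi)) in contracts {
                println!("{}:{} ({} hex chars)", path.display(), name, bytecode.len());
            }
        }
        Ok(())
    }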
+179 -77
@@ -6,9 +6,24 @@ use std::{
     process::{Command, Stdio},
 };

-use crate::{CompilerInput, CompilerOutput, SolidityCompiler};
+use revive_dt_common::types::VersionOrRequirement;
 use revive_dt_config::Arguments;
-use revive_solc_json_interface::SolcStandardJsonOutput;
+use revive_solc_json_interface::{
+    SolcStandardJsonInput, SolcStandardJsonInputLanguage, SolcStandardJsonInputSettings,
+    SolcStandardJsonInputSettingsOptimizer, SolcStandardJsonInputSettingsSelection,
+    SolcStandardJsonOutput,
+};
+
+use crate::{CompilerInput, CompilerOutput, SolidityCompiler};
+use alloy::json_abi::JsonAbi;
+use anyhow::Context;
+use semver::Version;
+use tokio::{io::AsyncWriteExt, process::Command as AsyncCommand};
+
+// TODO: I believe that we need to also pass the solc compiler to resolc so that resolc uses the
+// specified solc compiler. I believe that currently we completely ignore the specified solc binary
+// when invoking resolc, which doesn't seem right if we're using solc as a compiler frontend.

 /// A wrapper around the `resolc` binary, emitting PVM-compatible bytecode.
 #[derive(Debug)]
@@ -21,24 +36,74 @@ impl SolidityCompiler for Resolc {
     type Options = Vec<String>;

     #[tracing::instrument(level = "debug", ret)]
-    fn build(
+    async fn build(
         &self,
-        input: CompilerInput<Self::Options>,
-    ) -> anyhow::Result<CompilerOutput<Self::Options>> {
-        let mut command = Command::new(&self.resolc_path);
+        CompilerInput {
+            enable_optimization,
+            // Ignored and not honored since this is required for the resolc compilation.
+            via_ir: _via_ir,
+            evm_version,
+            allow_paths,
+            base_path,
+            sources,
+            libraries,
+            // TODO: this is currently not being handled since there is no way to pass it into
+            // resolc. So, we need to go back to this later once it's supported.
+            revert_string_handling: _,
+        }: CompilerInput,
+        additional_options: Self::Options,
+    ) -> anyhow::Result<CompilerOutput> {
+        let input = SolcStandardJsonInput {
+            language: SolcStandardJsonInputLanguage::Solidity,
+            sources: sources
+                .into_iter()
+                .map(|(path, source)| (path.display().to_string(), source.into()))
+                .collect(),
+            settings: SolcStandardJsonInputSettings {
+                evm_version,
+                libraries: Some(
+                    libraries
+                        .into_iter()
+                        .map(|(source_code, libraries_map)| {
+                            (
+                                source_code.display().to_string(),
+                                libraries_map
+                                    .into_iter()
+                                    .map(|(library_ident, library_address)| {
+                                        (library_ident, library_address.to_string())
+                                    })
+                                    .collect(),
+                            )
+                        })
+                        .collect(),
+                ),
+                remappings: None,
+                output_selection: Some(SolcStandardJsonInputSettingsSelection::new_required()),
+                via_ir: Some(true),
+                optimizer: SolcStandardJsonInputSettingsOptimizer::new(
+                    enable_optimization.unwrap_or(false),
+                    None,
+                    &Version::new(0, 0, 0),
+                    false,
+                ),
+                metadata: None,
+                polkavm: None,
+            },
+        };
+
+        let mut command = AsyncCommand::new(&self.resolc_path);
         command
             .stdin(Stdio::piped())
             .stdout(Stdio::piped())
             .stderr(Stdio::piped())
             .arg("--standard-json");
-        if let Some(ref base_path) = input.base_path {
+        if let Some(ref base_path) = base_path {
             command.arg("--base-path").arg(base_path);
         }
-        if !input.allow_paths.is_empty() {
+        if !allow_paths.is_empty() {
             command.arg("--allow-paths").arg(
-                input
-                    .allow_paths
+                allow_paths
                     .iter()
                     .map(|path| path.display().to_string())
                     .collect::<Vec<_>>()
@@ -48,102 +113,96 @@ impl SolidityCompiler for Resolc {
         let mut child = command.spawn()?;
         let stdin_pipe = child.stdin.as_mut().expect("stdin must be piped");
-        serde_json::to_writer(stdin_pipe, &input.input)?;
+        let serialized_input = serde_json::to_vec(&input)?;
+        stdin_pipe.write_all(&serialized_input).await?;

-        let json_in = serde_json::to_string_pretty(&input.input)?;
-        let output = child.wait_with_output()?;
+        let output = child.wait_with_output().await?;
         let stdout = output.stdout;
         let stderr = output.stderr;

         if !output.status.success() {
+            let json_in = serde_json::to_string_pretty(&input)?;
             let message = String::from_utf8_lossy(&stderr);
             tracing::error!(
-                "resolc failed exit={} stderr={} JSON-in={} ",
-                output.status,
-                &message,
-                json_in,
+                status = %output.status,
+                message = %message,
+                json_input = json_in,
+                "Compilation using resolc failed"
             );
-            return Ok(CompilerOutput {
-                input,
-                output: Default::default(),
-                error: Some(message.into()),
-            });
+            anyhow::bail!("Compilation failed with an error: {message}");
         }

-        let mut parsed =
-            serde_json::from_slice::<SolcStandardJsonOutput>(&stdout).map_err(|e| {
-                anyhow::anyhow!(
-                    "failed to parse resolc JSON output: {e}\nstderr: {}",
-                    String::from_utf8_lossy(&stderr)
-                )
-            })?;
+        let parsed = serde_json::from_slice::<SolcStandardJsonOutput>(&stdout).map_err(|e| {
+            anyhow::anyhow!(
+                "failed to parse resolc JSON output: {e}\nstderr: {}",
+                String::from_utf8_lossy(&stderr)
+            )
+        })?;

-        // Detecting if the compiler output contained errors and reporting them through logs and
-        // errors instead of returning the compiler output that might contain errors.
-        for error in parsed.errors.iter().flatten() {
-            if error.severity == "error" {
-                tracing::error!(?error, ?input, "Encountered an error in the compilation");
-                anyhow::bail!("Encountered an error in the compilation: {error}")
-            }
-        }
-
-        // We need to do some post processing on the output to make it in the same format that solc
-        // outputs. More specifically, for each contract, the `.metadata` field should be replaced
-        // with the `.metadata.solc_metadata` field which contains the ABI and other information
-        // about the compiled contracts. We do this because we do not want any downstream logic to
-        // need to differentiate between which compiler is being used when extracting the ABI of the
-        // contracts.
-        if let Some(ref mut contracts) = parsed.contracts {
-            for (contract_path, contracts_map) in contracts.iter_mut() {
-                for (contract_name, contract_info) in contracts_map.iter_mut() {
-                    let Some(metadata) = contract_info.metadata.take() else {
-                        continue;
-                    };
-
-                    // Get the `solc_metadata` in the metadata of the contract.
-                    let Some(solc_metadata) = metadata
-                        .get("solc_metadata")
-                        .and_then(|metadata| metadata.as_str())
-                    else {
-                        tracing::error!(
-                            contract_path,
-                            contract_name,
-                            metadata = serde_json::to_string(&metadata).unwrap(),
-                            "Encountered a contract compiled with resolc that has no solc_metadata"
-                        );
-                        anyhow::bail!(
-                            "Contract {} compiled with resolc has no solc_metadata",
-                            contract_name
-                        );
-                    };
-
-                    // Replace the original metadata with the new solc_metadata.
-                    contract_info.metadata =
-                        Some(serde_json::Value::String(solc_metadata.to_string()));
-                }
-            }
-        }
-
         tracing::debug!(
             output = %serde_json::to_string(&parsed).unwrap(),
             "Compiled successfully"
         );

-        Ok(CompilerOutput {
-            input,
-            output: parsed,
-            error: None,
-        })
+        // Detecting if the compiler output contained errors and reporting them through logs and
+        // errors instead of returning the compiler output that might contain errors.
+        for error in parsed.errors.iter().flatten() {
+            if error.severity == "error" {
+                tracing::error!(
+                    ?error,
+                    ?input,
+                    output = %serde_json::to_string(&parsed).unwrap(),
+                    "Encountered an error in the compilation"
+                );
+                anyhow::bail!("Encountered an error in the compilation: {error}")
+            }
+        }
+
+        let Some(contracts) = parsed.contracts else {
+            anyhow::bail!("Unexpected error - resolc output doesn't have a contracts section");
+        };
+        let mut compiler_output = CompilerOutput::default();
+        for (source_path, contracts) in contracts.into_iter() {
+            let source_path = PathBuf::from(source_path).canonicalize()?;
+            let map = compiler_output.contracts.entry(source_path).or_default();
+            for (contract_name, contract_information) in contracts.into_iter() {
+                let bytecode = contract_information
+                    .evm
+                    .and_then(|evm| evm.bytecode.clone())
+                    .context("Unexpected - Contract compiled with resolc has no bytecode")?;
+                let abi = contract_information
+                    .metadata
+                    .as_ref()
+                    .and_then(|metadata| metadata.as_object())
+                    .and_then(|metadata| metadata.get("solc_metadata"))
+                    .and_then(|solc_metadata| solc_metadata.as_str())
+                    .and_then(|metadata| serde_json::from_str::<serde_json::Value>(metadata).ok())
+                    .and_then(|metadata| {
+                        metadata.get("output").and_then(|output| {
+                            output
+                                .get("abi")
+                                .and_then(|abi| serde_json::from_value::<JsonAbi>(abi.clone()).ok())
+                        })
+                    })
+                    .context(
+                        "Unexpected - Failed to get the ABI for a contract compiled with resolc",
+                    )?;
+                map.insert(contract_name, (bytecode.object, abi));
+            }
+        }
+        Ok(compiler_output)
     }
     fn new(resolc_path: PathBuf) -> Self {
         Resolc { resolc_path }
     }
-    fn get_compiler_executable(
+    async fn get_compiler_executable(
         config: &Arguments,
-        _version: semver::Version,
+        _version: impl Into<VersionOrRequirement>,
     ) -> anyhow::Result<PathBuf> {
         if !config.resolc.as_os_str().is_empty() {
             return Ok(config.resolc.clone());
@@ -151,4 +210,47 @@ impl SolidityCompiler for Resolc {
        Ok(PathBuf::from("resolc"))
    }
fn version(&self) -> anyhow::Result<semver::Version> {
// Logic for parsing the resolc version from the following string:
// Solidity frontend for the revive compiler version 0.3.0+commit.b238913.llvm-18.1.8
let output = Command::new(self.resolc_path.as_path())
.arg("--version")
.stdout(Stdio::piped())
.spawn()?
.wait_with_output()?
.stdout;
let output = String::from_utf8_lossy(&output);
let version_string = output
.split("version ")
.nth(1)
.context("Version parsing failed")?
.split("+")
.next()
.context("Version parsing failed")?;
Version::parse(version_string).map_err(Into::into)
}
}
#[cfg(test)]
mod test {
use super::*;
#[tokio::test]
async fn compiler_version_can_be_obtained() {
// Arrange
let args = Arguments::default();
let path = Resolc::get_compiler_executable(&args, Version::new(0, 7, 6))
.await
.unwrap();
let compiler = Resolc::new(path);
// Act
let version = compiler.version();
// Assert
let _ = version.expect("Failed to get version");
}
}
+198 -29
@@ -6,10 +6,22 @@ use std::{
     process::{Command, Stdio},
 };

-use crate::{CompilerInput, CompilerOutput, SolidityCompiler};
+use revive_dt_common::types::VersionOrRequirement;
 use revive_dt_config::Arguments;
 use revive_dt_solc_binaries::download_solc;
-use revive_solc_json_interface::SolcStandardJsonOutput;
+
+use crate::{CompilerInput, CompilerOutput, SolidityCompiler};
+use anyhow::Context;
+use foundry_compilers_artifacts::{
+    output_selection::{
+        BytecodeOutputSelection, ContractOutputSelection, EvmOutputSelection, OutputSelection,
+    },
+    solc::CompilerOutput as SolcOutput,
+    solc::*,
+};
+use semver::Version;
+use tokio::{io::AsyncWriteExt, process::Command as AsyncCommand};

 #[derive(Debug)]
 pub struct Solc {
@@ -20,24 +32,88 @@ impl SolidityCompiler for Solc {
     type Options = ();

     #[tracing::instrument(level = "debug", ret)]
-    fn build(
+    async fn build(
         &self,
-        input: CompilerInput<Self::Options>,
-    ) -> anyhow::Result<CompilerOutput<Self::Options>> {
-        let mut command = Command::new(&self.solc_path);
+        CompilerInput {
+            enable_optimization,
+            via_ir,
+            evm_version,
+            allow_paths,
+            base_path,
+            sources,
+            libraries,
+            revert_string_handling,
+        }: CompilerInput,
+        _: Self::Options,
+    ) -> anyhow::Result<CompilerOutput> {
+        let input = SolcInput {
+            language: SolcLanguage::Solidity,
+            sources: Sources(
+                sources
+                    .into_iter()
+                    .map(|(source_path, source_code)| (source_path, Source::new(source_code)))
+                    .collect(),
+            ),
+            settings: Settings {
+                optimizer: Optimizer {
+                    enabled: enable_optimization,
+                    details: Some(Default::default()),
+                    ..Default::default()
+                },
+                output_selection: OutputSelection::common_output_selection(
+                    [
+                        ContractOutputSelection::Abi,
+                        ContractOutputSelection::Evm(EvmOutputSelection::ByteCode(
+                            BytecodeOutputSelection::Object,
+                        )),
+                    ]
+                    .into_iter()
+                    .map(|item| item.to_string()),
+                ),
+                evm_version: evm_version.map(|version| version.to_string().parse().unwrap()),
+                via_ir,
+                libraries: Libraries {
+                    libs: libraries
+                        .into_iter()
+                        .map(|(file_path, libraries)| {
+                            (
+                                file_path,
+                                libraries
+                                    .into_iter()
+                                    .map(|(library_name, library_address)| {
+                                        (library_name, library_address.to_string())
+                                    })
+                                    .collect(),
+                            )
+                        })
+                        .collect(),
+                },
+                debug: revert_string_handling.map(|revert_string_handling| DebuggingSettings {
+                    revert_strings: match revert_string_handling {
+                        crate::RevertString::Default => Some(RevertStrings::Default),
+                        crate::RevertString::Debug => Some(RevertStrings::Debug),
+                        crate::RevertString::Strip => Some(RevertStrings::Strip),
+                        crate::RevertString::VerboseDebug => Some(RevertStrings::VerboseDebug),
+                    },
+                    debug_info: Default::default(),
+                }),
+                ..Default::default()
+            },
+        };
+
+        let mut command = AsyncCommand::new(&self.solc_path);
         command
             .stdin(Stdio::piped())
             .stdout(Stdio::piped())
             .stderr(Stdio::piped())
             .arg("--standard-json");
-        if let Some(ref base_path) = input.base_path {
+        if let Some(ref base_path) = base_path {
             command.arg("--base-path").arg(base_path);
         }
-        if !input.allow_paths.is_empty() {
+        if !allow_paths.is_empty() {
             command.arg("--allow-paths").arg(
-                input
-                    .allow_paths
+                allow_paths
                     .iter()
                     .map(|path| path.display().to_string())
                     .collect::<Vec<_>>()
@@ -47,21 +123,23 @@ impl SolidityCompiler for Solc {
         let mut child = command.spawn()?;
         let stdin = child.stdin.as_mut().expect("should be piped");
-        serde_json::to_writer(stdin, &input.input)?;
-        let output = child.wait_with_output()?;
+        let serialized_input = serde_json::to_vec(&input)?;
+        stdin.write_all(&serialized_input).await?;
+        let output = child.wait_with_output().await?;

         if !output.status.success() {
+            let json_in = serde_json::to_string_pretty(&input)?;
             let message = String::from_utf8_lossy(&output.stderr);
-            tracing::error!("solc failed exit={} stderr={}", output.status, &message);
-            return Ok(CompilerOutput {
-                input,
-                output: Default::default(),
-                error: Some(message.into()),
-            });
+            tracing::error!(
+                status = %output.status,
+                message = %message,
+                json_input = json_in,
+                "Compilation using solc failed"
+            );
+            anyhow::bail!("Compilation failed with an error: {message}");
         }

-        let parsed =
-            serde_json::from_slice::<SolcStandardJsonOutput>(&output.stdout).map_err(|e| {
+        let parsed = serde_json::from_slice::<SolcOutput>(&output.stdout).map_err(|e| {
             anyhow::anyhow!(
-                "failed to parse resolc JSON output: {e}\nstderr: {}",
+                "failed to parse solc JSON output: {e}\nstdout: {}",
                 String::from_utf8_lossy(&output.stdout)
             )
@@ -70,8 +148,8 @@ impl SolidityCompiler for Solc {
         // Detecting if the compiler output contained errors and reporting them through logs and
         // errors instead of returning the compiler output that might contain errors.
-        for error in parsed.errors.iter().flatten() {
-            if error.severity == "error" {
+        for error in parsed.errors.iter() {
+            if error.severity == Severity::Error {
                 tracing::error!(?error, ?input, "Encountered an error in the compilation");
                 anyhow::bail!("Encountered an error in the compilation: {error}")
             }
@@ -82,22 +160,113 @@ impl SolidityCompiler for Solc {
"Compiled successfully" "Compiled successfully"
); );
Ok(CompilerOutput { let mut compiler_output = CompilerOutput::default();
input, for (contract_path, contracts) in parsed.contracts {
output: parsed, let map = compiler_output
error: None, .contracts
.entry(contract_path.canonicalize()?)
.or_default();
for (contract_name, contract_info) in contracts.into_iter() {
let source_code = contract_info
.evm
.and_then(|evm| evm.bytecode)
.map(|bytecode| match bytecode.object {
BytecodeObject::Bytecode(bytecode) => bytecode.to_string(),
BytecodeObject::Unlinked(unlinked) => unlinked,
}) })
.context("Unexpected - contract compiled with solc has no source code")?;
let abi = contract_info
.abi
.context("Unexpected - contract compiled with solc as no ABI")?;
map.insert(contract_name, (source_code, abi));
}
}
Ok(compiler_output)
} }
     fn new(solc_path: PathBuf) -> Self {
         Self { solc_path }
     }

-    fn get_compiler_executable(
+    async fn get_compiler_executable(
         config: &Arguments,
-        version: semver::Version,
+        version: impl Into<VersionOrRequirement>,
     ) -> anyhow::Result<PathBuf> {
-        let path = download_solc(config.directory(), version, config.wasm)?;
+        let path = download_solc(config.directory(), version, config.wasm).await?;
         Ok(path)
     }
fn version(&self) -> anyhow::Result<semver::Version> {
// The following is the parsing code for the version from the solc version strings which
// look like the following:
// ```
// solc, the solidity compiler commandline interface
// Version: 0.8.30+commit.73712a01.Darwin.appleclang
// ```
let child = Command::new(self.solc_path.as_path())
.arg("--version")
.stdout(Stdio::piped())
.spawn()?;
let output = child.wait_with_output()?;
let output = String::from_utf8_lossy(&output.stdout);
let version_line = output
.split("Version: ")
.nth(1)
.context("Version parsing failed")?;
let version_string = version_line
.split("+")
.next()
.context("Version parsing failed")?;
Version::parse(version_string).map_err(Into::into)
}
}
#[cfg(test)]
mod test {
use super::*;
#[tokio::test]
async fn compiler_version_can_be_obtained() {
// Arrange
let args = Arguments::default();
println!("Getting compiler path");
let path = Solc::get_compiler_executable(&args, Version::new(0, 7, 6))
.await
.unwrap();
println!("Got compiler path");
let compiler = Solc::new(path);
// Act
let version = compiler.version();
// Assert
assert_eq!(
version.expect("Failed to get version"),
Version::new(0, 7, 6)
)
}
#[tokio::test]
async fn compiler_version_can_be_obtained1() {
// Arrange
let args = Arguments::default();
println!("Getting compiler path");
let path = Solc::get_compiler_executable(&args, Version::new(0, 4, 21))
.await
.unwrap();
println!("Got compiler path");
let compiler = Solc::new(path);
// Act
let version = compiler.version();
// Assert
assert_eq!(
version.expect("Failed to get version"),
Version::new(0, 4, 21)
)
}
}
@@ -0,0 +1,9 @@
// SPDX-License-Identifier: MIT
pragma solidity >=0.6.9;
contract Callable {
function f(uint[1] memory p1) public pure returns(uint) {
return p1[0];
}
}
@@ -0,0 +1,13 @@
// SPDX-License-Identifier: MIT
// Report https://linear.app/matterlabs/issue/CPR-269/call-with-calldata-variable-bug
pragma solidity >=0.6.9;
import "./callable.sol";
contract Main {
function main(uint[1] calldata p1, Callable callable) public returns(uint) {
return callable.f(p1);
}
}
@@ -0,0 +1,21 @@
{ "cases": [ {
"name": "first",
"inputs": [
{
"instance": "Main",
"method": "main",
"calldata": [
"1",
"Callable.address"
]
}
],
"expected": [
"1"
]
        }
    ],
"contracts": {
"Main": "main.sol:Main",
"Callable": "callable.sol:Callable"
}
}
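The "Callable.address" entry in the calldata above leans on the variable resolution support added earlier in this series: before encoding, the runner substitutes the deployed address of the "Callable" instance named in the "contracts" map. A hedged sketch of that substitution step, with illustrative names rather than the runner's actual API:

    use std::collections::HashMap;

    /// Replace "<Instance>.address" placeholders with deployed addresses
    /// (sketch only; the real resolver supports more variables than this).
    fn resolve_calldata(
        calldata: &[String],
        deployed: &HashMap<String, String>, // instance name -> 0x-prefixed address
    ) -> anyhow::Result<Vec<String>> {
        calldata
            .iter()
            .map(|arg| match arg.strip_suffix(".address") {
                Some(instance) => deployed
                    .get(instance)
                    .cloned()
                    .ok_or_else(|| anyhow::anyhow!("unknown instance: {instance}")),
                None => Ok(arg.clone()),
            })
            .collect()
    }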
+88
@@ -0,0 +1,88 @@
use std::path::PathBuf;
use revive_dt_compiler::{Compiler, SolidityCompiler, revive_resolc::Resolc, solc::Solc};
use revive_dt_config::Arguments;
use semver::Version;
#[tokio::test]
async fn contracts_can_be_compiled_with_solc() {
// Arrange
let args = Arguments::default();
let compiler_path = Solc::get_compiler_executable(&args, Version::new(0, 8, 30))
.await
.unwrap();
println!("About to assert");
// Act
let output = Compiler::<Solc>::new()
.with_source("./tests/assets/array_one_element/callable.sol")
.unwrap()
.with_source("./tests/assets/array_one_element/main.sol")
.unwrap()
.try_build(compiler_path)
.await;
// Assert
let output = output.expect("Failed to compile");
assert_eq!(output.contracts.len(), 2);
let main_file_contracts = output
.contracts
.get(
&PathBuf::from("./tests/assets/array_one_element/main.sol")
.canonicalize()
.unwrap(),
)
.unwrap();
let callable_file_contracts = output
.contracts
.get(
&PathBuf::from("./tests/assets/array_one_element/callable.sol")
.canonicalize()
.unwrap(),
)
.unwrap();
assert!(main_file_contracts.contains_key("Main"));
assert!(callable_file_contracts.contains_key("Callable"));
}
#[tokio::test]
async fn contracts_can_be_compiled_with_resolc() {
// Arrange
let args = Arguments::default();
let compiler_path = Resolc::get_compiler_executable(&args, Version::new(0, 8, 30))
.await
.unwrap();
// Act
let output = Compiler::<Resolc>::new()
.with_source("./tests/assets/array_one_element/callable.sol")
.unwrap()
.with_source("./tests/assets/array_one_element/main.sol")
.unwrap()
.try_build(compiler_path)
.await;
// Assert
let output = output.expect("Failed to compile");
assert_eq!(output.contracts.len(), 2);
let main_file_contracts = output
.contracts
.get(
&PathBuf::from("./tests/assets/array_one_element/main.sol")
.canonicalize()
.unwrap(),
)
.unwrap();
let callable_file_contracts = output
.contracts
.get(
&PathBuf::from("./tests/assets/array_one_element/callable.sol")
.canonicalize()
.unwrap(),
)
.unwrap();
assert!(main_file_contracts.contains_key("Main"));
assert!(callable_file_contracts.contains_key("Callable"));
}
+42 -6
@@ -3,6 +3,7 @@
 use std::{
     fmt::Display,
     path::{Path, PathBuf},
+    sync::LazyLock,
 };

 use alloy::{network::EthereumWallet, signers::local::PrivateKeySigner};
@@ -54,7 +55,7 @@ pub struct Arguments {
     pub geth: PathBuf,

     /// The maximum time in milliseconds to wait for geth to start.
-    #[arg(long = "geth-start-timeout", default_value = "2000")]
+    #[arg(long = "geth-start-timeout", default_value = "5000")]
     pub geth_start_timeout: u64,

     /// The test network chain ID.
@@ -73,6 +74,12 @@ pub struct Arguments {
    )]
    pub account: String,
    /// Controls which private keys the nodes have access to and add to their wallet signers.
    /// With a value of N, the private keys with indices in (0, N] are added to the node's
    /// signer set.
#[arg(long = "private-keys-count", default_value_t = 100_000)]
pub private_keys_to_add: usize,
    /// The differential testing leader node implementation.
    #[arg(short, long = "leader", default_value = "geth")]
    pub leader: TestingPlatform,
@@ -85,9 +92,22 @@ pub struct Arguments {
#[arg(long = "compile-only")] #[arg(long = "compile-only")]
pub compile_only: Option<TestingPlatform>, pub compile_only: Option<TestingPlatform>,
/// Determines the amount of tests that are executed in parallel. /// Determines the amount of nodes that will be spawned for each chain.
#[arg(long = "workers", default_value = "12")] #[arg(long, default_value = "1")]
pub workers: usize, pub number_of_nodes: usize,
    /// Determines the number of tokio worker threads that will be used.
#[arg(
long,
default_value_t = std::thread::available_parallelism()
.map(|n| n.get())
.unwrap_or(1)
)]
pub number_of_threads: usize,
    /// Determines the number of concurrent tasks that will be spawned to run tests. Defaults to 20 x the number of nodes.
#[arg(long)]
pub number_concurrent_tasks: Option<usize>,
    /// Extract problems back to the test corpus.
    #[arg(short, long = "extract-problems")]
@@ -123,6 +143,13 @@ impl Arguments {
panic!("should have a workdir configured") panic!("should have a workdir configured")
} }
/// Return the number of concurrent tasks to run. This is provided via the
/// `--number-concurrent-tasks` argument, and otherwise defaults to --number-of-nodes * 20.
pub fn number_of_concurrent_tasks(&self) -> usize {
self.number_concurrent_tasks
.unwrap_or(20 * self.number_of_nodes)
}
    /// Try to parse `self.account` into a [PrivateKeySigner],
    /// panicking on error.
    pub fn wallet(&self) -> EthereumWallet {
@@ -138,14 +165,23 @@ impl Arguments {
 impl Default for Arguments {
     fn default() -> Self {
-        Arguments::parse_from(["retester"])
+        static TEMP_DIR: LazyLock<TempDir> = LazyLock::new(|| TempDir::new().unwrap());
+        let default = Arguments::parse_from(["retester"]);
+        Arguments {
+            temp_dir: Some(&TEMP_DIR),
+            ..default
+        }
     }
 }
 /// The Solidity compatible node implementation.
 ///
 /// This describes the solutions to be tested against on a high level.
-#[derive(Clone, Copy, Debug, Eq, Hash, PartialEq, ValueEnum, Serialize, Deserialize)]
+#[derive(
+    Clone, Copy, Debug, PartialEq, Eq, PartialOrd, Ord, Hash, ValueEnum, Serialize, Deserialize,
+)]
 #[clap(rename_all = "lower")]
 pub enum TestingPlatform {
     /// The go-ethereum reference full node EVM implementation.
+4 -2
@@ -13,6 +13,7 @@ name = "retester"
path = "src/main.rs" path = "src/main.rs"
[dependencies] [dependencies]
revive-dt-common = { workspace = true }
revive-dt-compiler = { workspace = true } revive-dt-compiler = { workspace = true }
revive-dt-config = { workspace = true } revive-dt-config = { workspace = true }
revive-dt-format = { workspace = true } revive-dt-format = { workspace = true }
@@ -23,10 +24,11 @@ revive-dt-report = { workspace = true }
 alloy = { workspace = true }
 anyhow = { workspace = true }
 clap = { workspace = true }
+futures = { workspace = true }
 indexmap = { workspace = true }
+tokio = { workspace = true }
 tracing = { workspace = true }
 tracing-subscriber = { workspace = true }
-rayon = { workspace = true }
+semver = { workspace = true }
+revive-solc-json-interface = { workspace = true }
 serde_json = { workspace = true }
 temp-dir = { workspace = true }
File diff suppressed because it is too large
+4 -4
@@ -5,17 +5,17 @@
 use revive_dt_compiler::{SolidityCompiler, revive_resolc, solc};
 use revive_dt_config::TestingPlatform;
-use revive_dt_node::{geth, kitchensink::KitchensinkNode};
+use revive_dt_format::traits::ResolverApi;
+use revive_dt_node::{Node, geth, kitchensink::KitchensinkNode};
 use revive_dt_node_interaction::EthereumNode;

-pub mod common;
 pub mod driver;

 /// One platform can be tested differentially against another.
 ///
 /// For this we need a blockchain node implementation and a compiler.
 pub trait Platform {
-    type Blockchain: EthereumNode;
+    type Blockchain: EthereumNode + Node + ResolverApi;
     type Compiler: SolidityCompiler;

     /// Returns the matching [TestingPlatform] of the [revive_dt_config::Arguments].
@@ -26,7 +26,7 @@ pub trait Platform {
 pub struct Geth;

 impl Platform for Geth {
-    type Blockchain = geth::Instance;
+    type Blockchain = geth::GethNode;
     type Compiler = solc::Solc;

     fn config_id() -> TestingPlatform {
+684 -78
@@ -1,37 +1,89 @@
-use std::{collections::HashMap, sync::LazyLock};
+use std::{
+    collections::HashMap,
+    path::{Path, PathBuf},
+    sync::{Arc, LazyLock},
+    time::Instant,
+};
+
+use alloy::{
+    json_abi::JsonAbi,
+    network::{Ethereum, TransactionBuilder},
+    primitives::Address,
+    rpc::types::TransactionRequest,
+};
+use anyhow::Context;
 use clap::Parser;
-use rayon::{ThreadPoolBuilder, prelude::*};
+use futures::StreamExt;
+use revive_dt_common::iterators::FilesWithExtensionIterator;
+use revive_dt_node_interaction::EthereumNode;
+use semver::Version;
+use temp_dir::TempDir;
+use tokio::sync::{Mutex, RwLock, mpsc};
+use tracing::{Instrument, Level};
+use tracing_subscriber::{EnvFilter, FmtSubscriber};

+use revive_dt_compiler::SolidityCompiler;
+use revive_dt_compiler::{Compiler, CompilerOutput};
 use revive_dt_config::*;
 use revive_dt_core::{
     Geth, Kitchensink, Platform,
-    driver::{Driver, State},
+    driver::{CaseDriver, CaseState},
 };
+use revive_dt_format::{
+    case::{Case, CaseIdx},
+    corpus::Corpus,
+    input::{Input, Step},
+    metadata::{ContractInstance, ContractPathAndIdent, Metadata, MetadataFile},
+    mode::SolcMode,
+};
-use revive_dt_format::{corpus::Corpus, metadata::MetadataFile};
 use revive_dt_node::pool::NodePool;
 use revive_dt_report::reporter::{Report, Span};
-use temp_dir::TempDir;
-use tracing::Level;
-use tracing_subscriber::{EnvFilter, FmtSubscriber};
static TEMP_DIR: LazyLock<TempDir> = LazyLock::new(|| TempDir::new().unwrap());
type CompilationCache = Arc<
RwLock<
HashMap<
(PathBuf, SolcMode, TestingPlatform),
Arc<Mutex<Option<Arc<(Version, CompilerOutput)>>>>,
>,
>,
>;
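The nesting in this alias is what makes the cache once-only per key: the outer RwLock guards the key map, while the inner Mutex<Option<...>> makes concurrent requests for the same key queue behind a single in-flight compilation instead of recompiling. A simplified, self-contained sketch of the same pattern, with generic names and a plain write-lock `entry` in place of the read-then-write sequence used further down:

    use std::{collections::HashMap, hash::Hash, sync::Arc};
    use tokio::sync::{Mutex, RwLock};

    type OnceCache<K, V> = Arc<RwLock<HashMap<K, Arc<Mutex<Option<Arc<V>>>>>>>;

    async fn get_or_compute<K, V, F, Fut>(cache: OnceCache<K, V>, key: K, compute: F) -> Arc<V>
    where
        K: Hash + Eq,
        F: FnOnce() -> Fut,
        Fut: std::future::Future<Output = V>,
    {
        // Grab (or create) the per-key slot, then release the map lock immediately.
        let slot = {
            let mut map = cache.write().await;
            map.entry(key)
                .or_insert_with(|| Arc::new(Mutex::new(None)))
                .clone()
        };
        // Only one caller computes; everyone queued behind it sees Some(...).
        let mut guard = slot.lock().await;
        if let Some(value) = &*guard {
            return value.clone();
        }
        let value = Arc::new(compute().await);
        *guard = Some(value.clone());
        value
    }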
/// This represents a single "test": a mode, a metadata file path, and one case from that file.
#[derive(Clone)]
struct Test {
metadata: Metadata,
path: PathBuf,
mode: SolcMode,
case_idx: usize,
case: Case,
}
/// This represents the results that we gather from running test cases.
type CaseResult = Result<usize, anyhow::Error>;
 fn main() -> anyhow::Result<()> {
     let args = init_cli()?;

-    for (corpus, tests) in collect_corpora(&args)? {
-        let span = Span::new(corpus, args.clone())?;
-
-        match &args.compile_only {
-            Some(platform) => compile_corpus(&args, &tests, platform, span),
-            None => execute_corpus(&args, &tests, span)?,
-        }
-
-        Report::save()?;
-    }
-
-    Ok(())
+    let body = async {
+        for (corpus, tests) in collect_corpora(&args).await? {
+            let span = Span::new(corpus, args.clone())?;
+
+            match &args.compile_only {
+                Some(platform) => compile_corpus(&args, &tests, platform, span).await,
+                None => execute_corpus(&args, &tests, span).await?,
+            }
+
+            Report::save()?;
+        }
+
+        Ok(())
+    };
+
+    tokio::runtime::Builder::new_multi_thread()
+        .worker_threads(args.number_of_threads)
+        .enable_all()
+        .build()
+        .expect("Failed building the Runtime")
+        .block_on(body)
 }
fn init_cli() -> anyhow::Result<Arguments> {
@@ -62,20 +114,16 @@ fn init_cli() -> anyhow::Result<Arguments> {
     }

     tracing::info!("workdir: {}", args.directory().display());

-    ThreadPoolBuilder::new()
-        .num_threads(args.workers)
-        .build_global()?;
-
     Ok(args)
 }
-fn collect_corpora(args: &Arguments) -> anyhow::Result<HashMap<Corpus, Vec<MetadataFile>>> {
+async fn collect_corpora(args: &Arguments) -> anyhow::Result<HashMap<Corpus, Vec<MetadataFile>>> {
     let mut corpora = HashMap::new();

     for path in &args.corpus {
         let corpus = Corpus::try_from_path(path)?;
         tracing::info!("found corpus: {}", path.display());

-        let tests = corpus.enumerate_tests();
+        let tests = corpus.enumerate_tests().await;
         tracing::info!("corpus '{}' contains {} tests", &corpus.name, tests.len());

         corpora.insert(corpus, tests);
     }
@@ -83,71 +131,610 @@ fn collect_corpora(args: &Arguments) -> anyhow::Result<HashMap<Corpus, Vec<Metad
    Ok(corpora)
}
-fn run_driver<L, F>(args: &Arguments, tests: &[MetadataFile], span: Span) -> anyhow::Result<()>
+async fn run_driver<L, F>(
+    args: &Arguments,
+    metadata_files: &[MetadataFile],
+    span: Span,
+) -> anyhow::Result<()>
 where
     L: Platform,
     F: Platform,
     L::Blockchain: revive_dt_node::Node + Send + Sync + 'static,
     F::Blockchain: revive_dt_node::Node + Send + Sync + 'static,
 {
-    let leader_nodes = NodePool::<L::Blockchain>::new(args)?;
-    let follower_nodes = NodePool::<F::Blockchain>::new(args)?;
-
-    tests.par_iter().for_each(
-        |MetadataFile {
-             content: metadata,
-             path: metadata_file_path,
-         }| {
-            // Starting a new tracing span for this metadata file. This allows our logs to be clear
-            // about which metadata file the logs belong to. We can add other information into this
-            // as well to be able to associate the logs with the correct metadata file and case
-            // that's being executed.
-            let tracing_span = tracing::span!(
-                Level::INFO,
-                "Running driver",
-                metadata_file_path = metadata_file_path.display().to_string(),
-            );
-            let _guard = tracing_span.enter();
-
-            let mut driver = Driver::<L, F>::new(
-                metadata,
-                args,
-                leader_nodes.round_robbin(),
-                follower_nodes.round_robbin(),
-            );
-            let execution_result = driver.execute(span);
-            tracing::info!(
-                case_success_count = execution_result.successful_cases_count,
-                case_failure_count = execution_result.failed_cases_count,
-                "Execution completed"
-            );
-
-            let mut error_count = 0;
-            for result in execution_result.results.iter() {
-                if !result.is_success() {
-                    tracing::error!(execution_error = ?result, "Encountered an error");
-                    error_count += 1;
-                }
-            }
-            if error_count == 0 {
-                tracing::info!("Execution succeeded");
-            } else {
-                tracing::info!("Execution failed");
-            }
-        },
-    );
+    let (report_tx, report_rx) = mpsc::unbounded_channel::<(Test, CaseResult)>();
+
+    let tests = prepare_tests::<L, F>(metadata_files);
+    let driver_task = start_driver_task::<L, F>(args, tests, span, report_tx).await?;
+    let status_reporter_task = start_reporter_task(report_rx);
+
+    tokio::join!(status_reporter_task, driver_task);

     Ok(())
 }
-fn execute_corpus(args: &Arguments, tests: &[MetadataFile], span: Span) -> anyhow::Result<()> {
+fn prepare_tests<L, F>(metadata_files: &[MetadataFile]) -> impl Iterator<Item = Test>
where
L: Platform,
F: Platform,
L::Blockchain: revive_dt_node::Node + Send + Sync + 'static,
F::Blockchain: revive_dt_node::Node + Send + Sync + 'static,
{
metadata_files
.iter()
.flat_map(
|MetadataFile {
path,
content: metadata,
}| {
metadata
.cases
.iter()
.enumerate()
.flat_map(move |(case_idx, case)| {
metadata
.solc_modes()
.into_iter()
.map(move |solc_mode| (path, metadata, case_idx, case, solc_mode))
})
},
)
.filter(
|(metadata_file_path, metadata, _, _, _)| match metadata.ignore {
Some(true) => {
tracing::warn!(
metadata_file_path = %metadata_file_path.display(),
"Ignoring metadata file"
);
false
}
Some(false) | None => true,
},
)
.filter(
|(metadata_file_path, _, case_idx, case, _)| match case.ignore {
Some(true) => {
tracing::warn!(
metadata_file_path = %metadata_file_path.display(),
case_idx,
case_name = ?case.name,
"Ignoring case"
);
false
}
Some(false) | None => true,
},
)
.filter(|(metadata_file_path, metadata, ..)| match metadata.required_evm_version {
Some(evm_version_requirement) => {
let is_allowed = evm_version_requirement
.matches(&<L::Blockchain as revive_dt_node::Node>::evm_version())
&& evm_version_requirement
.matches(&<F::Blockchain as revive_dt_node::Node>::evm_version());
if !is_allowed {
tracing::warn!(
metadata_file_path = %metadata_file_path.display(),
leader_evm_version = %<L::Blockchain as revive_dt_node::Node>::evm_version(),
follower_evm_version = %<F::Blockchain as revive_dt_node::Node>::evm_version(),
version_requirement = %evm_version_requirement,
"Skipped test since the EVM version requirement was not fulfilled."
);
}
is_allowed
}
None => true,
})
.map(|(metadata_file_path, metadata, case_idx, case, solc_mode)| {
Test {
metadata: metadata.clone(),
path: metadata_file_path.to_path_buf(),
mode: solc_mode,
case_idx,
case: case.clone(),
}
})
}
async fn start_driver_task<L, F>(
args: &Arguments,
tests: impl Iterator<Item = Test>,
span: Span,
report_tx: mpsc::UnboundedSender<(Test, CaseResult)>,
) -> anyhow::Result<impl Future<Output = ()>>
where
L: Platform,
F: Platform,
L::Blockchain: revive_dt_node::Node + Send + Sync + 'static,
F::Blockchain: revive_dt_node::Node + Send + Sync + 'static,
{
let leader_nodes = Arc::new(NodePool::<L::Blockchain>::new(args).await?);
let follower_nodes = Arc::new(NodePool::<F::Blockchain>::new(args).await?);
let compilation_cache = Arc::new(RwLock::new(HashMap::new()));
let number_concurrent_tasks = args.number_of_concurrent_tasks();
Ok(futures::stream::iter(tests).for_each_concurrent(
// We want to limit the concurrent tasks here because:
//
// 1. We don't want to overwhelm the nodes with too many requests, leading to responses timing out.
// 2. We don't want to open too many files at once, leading to the OS running out of file descriptors.
//
        // By default, we allow a maximum of 20 ongoing requests per node in order to limit (1), and assume that
// this number will automatically be low enough to address (2). The user can override this.
Some(number_concurrent_tasks),
move |test| {
let leader_nodes = leader_nodes.clone();
let follower_nodes = follower_nodes.clone();
let compilation_cache = compilation_cache.clone();
let report_tx = report_tx.clone();
async move {
let leader_node = leader_nodes.round_robbin();
let follower_node = follower_nodes.round_robbin();
let tracing_span = tracing::span!(
Level::INFO,
"Running driver",
metadata_file_path = %test.path.display(),
case_idx = ?test.case_idx,
solc_mode = ?test.mode,
);
let result = handle_case_driver::<L, F>(
&test.path,
&test.metadata,
test.case_idx.into(),
&test.case,
test.mode.clone(),
args,
compilation_cache.clone(),
leader_node,
follower_node,
span,
)
.instrument(tracing_span)
.await;
report_tx
.send((test, result))
.expect("Failed to send report");
}
},
))
}
async fn start_reporter_task(mut report_rx: mpsc::UnboundedReceiver<(Test, CaseResult)>) {
let start = Instant::now();
const GREEN: &str = "\x1B[32m";
const RED: &str = "\x1B[31m";
const COLOUR_RESET: &str = "\x1B[0m";
const BOLD: &str = "\x1B[1m";
const BOLD_RESET: &str = "\x1B[22m";
let mut number_of_successes = 0;
let mut number_of_failures = 0;
let mut failures = vec![];
// Wait for reports to come from our test runner. When the channel closes, this ends.
while let Some((test, case_result)) = report_rx.recv().await {
let case_name = test.case.name.as_deref().unwrap_or("unnamed_case");
let case_idx = test.case_idx;
let test_path = test.path.display();
let test_mode = test.mode.clone();
match case_result {
Ok(_inputs) => {
number_of_successes += 1;
eprintln!(
"{GREEN}Case Succeeded:{COLOUR_RESET} {test_path} -> {case_name}:{case_idx} (mode: {test_mode:?})"
);
}
Err(err) => {
number_of_failures += 1;
eprintln!(
"{RED}Case Failed:{COLOUR_RESET} {test_path} -> {case_name}:{case_idx} (mode: {test_mode:?})"
);
failures.push((test, err));
}
}
}
eprintln!();
let elapsed = start.elapsed();
// Now, log the failures with more complete errors at the bottom, like `cargo test` does, so
// that we don't have to scroll through the entire output to find them.
if !failures.is_empty() {
eprintln!("{BOLD}Failures:{BOLD_RESET}\n");
for failure in failures {
let (test, err) = failure;
let case_name = test.case.name.as_deref().unwrap_or("unnamed_case");
let case_idx = test.case_idx;
let test_path = test.path.display();
let test_mode = test.mode.clone();
eprintln!(
"---- {RED}Case Failed:{COLOUR_RESET} {test_path} -> {case_name}:{case_idx} (mode: {test_mode:?}) ----\n\n{err}\n"
);
}
}
// Summary at the end.
eprintln!(
"{} cases: {GREEN}{number_of_successes}{COLOUR_RESET} cases succeeded, {RED}{number_of_failures}{COLOUR_RESET} cases failed in {} seconds",
number_of_successes + number_of_failures,
elapsed.as_secs()
);
}
#[allow(clippy::too_many_arguments)]
async fn handle_case_driver<L, F>(
metadata_file_path: &Path,
metadata: &Metadata,
case_idx: CaseIdx,
case: &Case,
mode: SolcMode,
config: &Arguments,
compilation_cache: CompilationCache,
leader_node: &L::Blockchain,
follower_node: &F::Blockchain,
_: Span,
) -> anyhow::Result<usize>
where
L: Platform,
F: Platform,
L::Blockchain: revive_dt_node::Node + Send + Sync + 'static,
F::Blockchain: revive_dt_node::Node + Send + Sync + 'static,
{
let leader_pre_link_contracts = get_or_build_contracts::<L>(
metadata,
metadata_file_path,
mode.clone(),
config,
compilation_cache.clone(),
&HashMap::new(),
)
.await?;
let follower_pre_link_contracts = get_or_build_contracts::<F>(
metadata,
metadata_file_path,
mode.clone(),
config,
compilation_cache.clone(),
&HashMap::new(),
)
.await?;
let mut leader_deployed_libraries = HashMap::new();
let mut follower_deployed_libraries = HashMap::new();
let mut contract_sources = metadata.contract_sources()?;
for library_instance in metadata
.libraries
.iter()
.flatten()
.flat_map(|(_, map)| map.values())
{
let ContractPathAndIdent {
contract_source_path: library_source_path,
contract_ident: library_ident,
} = contract_sources
.remove(library_instance)
.context("Failed to find the contract source")?;
let (leader_code, leader_abi) = leader_pre_link_contracts
.1
.contracts
.get(&library_source_path)
.and_then(|contracts| contracts.get(library_ident.as_str()))
.context("Declared library was not compiled")?;
let (follower_code, follower_abi) = follower_pre_link_contracts
.1
.contracts
.get(&library_source_path)
.and_then(|contracts| contracts.get(library_ident.as_str()))
.context("Declared library was not compiled")?;
let leader_code = match alloy::hex::decode(leader_code) {
Ok(code) => code,
Err(error) => {
tracing::error!(
?error,
contract_source_path = library_source_path.display().to_string(),
contract_ident = library_ident.as_ref(),
"Failed to hex-decode byte code - This could possibly mean that the bytecode requires linking"
);
anyhow::bail!("Failed to hex-decode the byte code {}", error)
}
};
let follower_code = match alloy::hex::decode(follower_code) {
Ok(code) => code,
Err(error) => {
tracing::error!(
?error,
contract_source_path = library_source_path.display().to_string(),
contract_ident = library_ident.as_ref(),
"Failed to hex-decode byte code - This could possibly mean that the bytecode requires linking"
);
anyhow::bail!("Failed to hex-decode the byte code {}", error)
}
};
// Getting the deployer address from the cases themselves. This is to ensure that we're
// doing the deployments from different accounts and therefore we're not slowed down by
// the nonce.
let deployer_address = case
.steps
.iter()
.filter_map(|step| match step {
Step::FunctionCall(input) => Some(input.caller),
Step::BalanceAssertion(..) => None,
Step::StorageEmptyAssertion(..) => None,
})
.next()
.unwrap_or(Input::default_caller());
let leader_tx = TransactionBuilder::<Ethereum>::with_deploy_code(
TransactionRequest::default().from(deployer_address),
leader_code,
);
let follower_tx = TransactionBuilder::<Ethereum>::with_deploy_code(
TransactionRequest::default().from(deployer_address),
follower_code,
);
let leader_receipt = match leader_node.execute_transaction(leader_tx).await {
Ok(receipt) => receipt,
Err(error) => {
tracing::error!(
node = std::any::type_name::<L>(),
?error,
"Contract deployment transaction failed."
);
return Err(error);
}
};
let follower_receipt = match follower_node.execute_transaction(follower_tx).await {
Ok(receipt) => receipt,
Err(error) => {
tracing::error!(
node = std::any::type_name::<F>(),
?error,
"Contract deployment transaction failed."
);
return Err(error);
}
};
tracing::info!(
?library_instance,
library_address = ?leader_receipt.contract_address,
"Deployed library to leader"
);
tracing::info!(
?library_instance,
library_address = ?follower_receipt.contract_address,
"Deployed library to follower"
);
let Some(leader_library_address) = leader_receipt.contract_address else {
anyhow::bail!("Contract deployment didn't return an address");
};
let Some(follower_library_address) = follower_receipt.contract_address else {
anyhow::bail!("Contract deployment didn't return an address");
};
leader_deployed_libraries.insert(
library_instance.clone(),
(leader_library_address, leader_abi.clone()),
);
follower_deployed_libraries.insert(
library_instance.clone(),
(follower_library_address, follower_abi.clone()),
);
}
let metadata_file_contains_libraries = metadata
.libraries
.iter()
.flat_map(|map| map.iter())
.flat_map(|(_, value)| value.iter())
.next()
.is_some();
let compiled_contracts_require_linking = leader_pre_link_contracts
.1
.contracts
.values()
.chain(follower_pre_link_contracts.1.contracts.values())
.flat_map(|value| value.values())
.any(|(code, _)| !code.chars().all(|char| char.is_ascii_hexdigit()));
let (leader_compiled_contracts, follower_compiled_contracts) =
if metadata_file_contains_libraries && compiled_contracts_require_linking {
let leader_key = (
metadata_file_path.to_path_buf(),
mode.clone(),
L::config_id(),
);
let follower_key = (
metadata_file_path.to_path_buf(),
mode.clone(),
F::config_id(),
);
{
let mut cache = compilation_cache.write().await;
cache.remove(&leader_key);
cache.remove(&follower_key);
}
let leader_post_link_contracts = get_or_build_contracts::<L>(
metadata,
metadata_file_path,
mode.clone(),
config,
compilation_cache.clone(),
&leader_deployed_libraries,
)
.await?;
let follower_post_link_contracts = get_or_build_contracts::<F>(
metadata,
metadata_file_path,
mode.clone(),
config,
compilation_cache,
&follower_deployed_libraries,
)
.await?;
(leader_post_link_contracts, follower_post_link_contracts)
} else {
(leader_pre_link_contracts, follower_pre_link_contracts)
};
let leader_state = CaseState::<L>::new(
leader_compiled_contracts.0.clone(),
leader_compiled_contracts.1.contracts.clone(),
leader_deployed_libraries,
);
let follower_state = CaseState::<F>::new(
follower_compiled_contracts.0.clone(),
follower_compiled_contracts.1.contracts.clone(),
follower_deployed_libraries,
);
let mut driver = CaseDriver::<L, F>::new(
metadata,
case,
case_idx,
leader_node,
follower_node,
leader_state,
follower_state,
);
driver.execute().await
}
async fn get_or_build_contracts<P: Platform>(
metadata: &Metadata,
metadata_file_path: &Path,
mode: SolcMode,
config: &Arguments,
compilation_cache: CompilationCache,
deployed_libraries: &HashMap<ContractInstance, (Address, JsonAbi)>,
) -> anyhow::Result<Arc<(Version, CompilerOutput)>> {
let key = (
metadata_file_path.to_path_buf(),
mode.clone(),
P::config_id(),
);
if let Some(compilation_artifact) = compilation_cache.read().await.get(&key).cloned() {
let mut compilation_artifact = compilation_artifact.lock().await;
match *compilation_artifact {
Some(ref compiled_contracts) => {
tracing::debug!(?key, "Compiled contracts cache hit");
return Ok(compiled_contracts.clone());
}
None => {
tracing::debug!(?key, "Compiled contracts cache miss");
let compiled_contracts = Arc::new(
compile_contracts::<P>(
metadata,
metadata_file_path,
&mode,
config,
deployed_libraries,
)
.await?,
);
*compilation_artifact = Some(compiled_contracts.clone());
return Ok(compiled_contracts.clone());
}
}
};
tracing::debug!(?key, "Compiled contracts cache miss");
let mutex = {
let mut compilation_cache = compilation_cache.write().await;
let mutex = Arc::new(Mutex::new(None));
compilation_cache.insert(key, mutex.clone());
mutex
};
let mut compilation_artifact = mutex.lock().await;
let compiled_contracts = Arc::new(
compile_contracts::<P>(
metadata,
metadata_file_path,
&mode,
config,
deployed_libraries,
)
.await?,
);
*compilation_artifact = Some(compiled_contracts.clone());
Ok(compiled_contracts.clone())
}
async fn compile_contracts<P: Platform>(
metadata: &Metadata,
metadata_file_path: &Path,
mode: &SolcMode,
config: &Arguments,
deployed_libraries: &HashMap<ContractInstance, (Address, JsonAbi)>,
) -> anyhow::Result<(Version, CompilerOutput)> {
let compiler_version_or_requirement = mode.compiler_version_to_use(config.solc.clone());
let compiler_path =
P::Compiler::get_compiler_executable(config, compiler_version_or_requirement).await?;
let compiler_version = P::Compiler::new(compiler_path.clone()).version()?;
tracing::info!(
%compiler_version,
metadata_file_path = %metadata_file_path.display(),
mode = ?mode,
"Compiling contracts"
);
let mut compiler = Compiler::<P::Compiler>::new()
.with_allow_path(metadata.directory()?)
.with_optimization(mode.solc_optimize());
for path in metadata.files_to_compile()? {
compiler = compiler.with_source(path).await?;
}
for (library_instance, (library_address, _)) in deployed_libraries.iter() {
let library_ident = &metadata
.contracts
.as_ref()
.and_then(|contracts| contracts.get(library_instance))
.expect("Impossible for library to not be found in contracts")
.contract_ident;
// Note the following: we need to tell solc which files require the libraries to be linked
// into them. We do not have access to this information and therefore we choose an easier,
// yet more compute intensive route, of telling solc that all of the files need to link the
// library and it will only perform the linking for the files that do actually need the
// library.
compiler = FilesWithExtensionIterator::new(metadata.directory()?)
.with_allowed_extension("sol")
.fold(compiler, |compiler, path| {
compiler.with_library(&path, library_ident.as_str(), *library_address)
});
}
let compiler_output = compiler.try_build(compiler_path).await?;
Ok((compiler_version, compiler_output))
}
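The blanket-linking note in compile_contracts above has a concrete shape in the standard JSON it produces: every Solidity file under the metadata directory receives the same library mapping, and solc only performs linking for the files that actually reference the library. A hedged illustration of the resulting `libraries` section, with a made-up library name and address:

    "libraries": {
        "a.sol":    { "Utils": "0x1111111111111111111111111111111111111111" },
        "b.sol":    { "Utils": "0x1111111111111111111111111111111111111111" },
        "main.sol": { "Utils": "0x1111111111111111111111111111111111111111" }
    }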
async fn execute_corpus(
args: &Arguments,
tests: &[MetadataFile],
span: Span,
) -> anyhow::Result<()> {
     match (&args.leader, &args.follower) {
         (TestingPlatform::Geth, TestingPlatform::Kitchensink) => {
-            run_driver::<Geth, Kitchensink>(args, tests, span)?
+            run_driver::<Geth, Kitchensink>(args, tests, span).await?
         }
         (TestingPlatform::Geth, TestingPlatform::Geth) => {
-            run_driver::<Geth, Geth>(args, tests, span)?
+            run_driver::<Geth, Geth>(args, tests, span).await?
         }
         _ => unimplemented!(),
     }
@@ -155,24 +742,43 @@ fn execute_corpus(args: &Arguments, tests: &[MetadataFile], span: Span) -> anyho
    Ok(())
}
-fn compile_corpus(
+async fn compile_corpus(
     config: &Arguments,
     tests: &[MetadataFile],
     platform: &TestingPlatform,
-    span: Span,
+    _: Span,
 ) {
-    tests.par_iter().for_each(|metadata| {
-        for mode in &metadata.solc_modes() {
+    let tests = tests.iter().flat_map(|metadata| {
+        metadata
+            .solc_modes()
+            .into_iter()
+            .map(move |solc_mode| (metadata, solc_mode))
+    });
+    futures::stream::iter(tests)
+        .for_each_concurrent(None, |(metadata, mode)| async move {
             match platform {
                 TestingPlatform::Geth => {
-                    let mut state = State::<Geth>::new(config, span);
-                    let _ = state.build_contracts(mode, metadata);
+                    let _ = compile_contracts::<Geth>(
+                        &metadata.content,
+                        &metadata.path,
+                        &mode,
+                        config,
+                        &Default::default(),
+                    )
+                    .await;
                 }
                 TestingPlatform::Kitchensink => {
-                    let mut state = State::<Kitchensink>::new(config, span);
-                    let _ = state.build_contracts(mode, metadata);
+                    let _ = compile_contracts::<Kitchensink>(
+                        &metadata.content,
+                        &metadata.path,
+                        &mode,
+                        config,
+                        &Default::default(),
+                    )
+                    .await;
                 }
-            };
-        }
-    });
+            }
+        })
+        .await;
 }
+6 -1
@@ -9,7 +9,9 @@ repository.workspace = true
 rust-version.workspace = true

 [dependencies]
-revive-dt-node-interaction = { workspace = true }
+revive-dt-common = { workspace = true }
+revive-common = { workspace = true }

 alloy = { workspace = true }
 alloy-primitives = { workspace = true }
@@ -19,3 +21,6 @@ tracing = { workspace = true }
 semver = { workspace = true }
 serde = { workspace = true, features = ["derive"] }
 serde_json = { workspace = true }
+
+[dev-dependencies]
+tokio = { workspace = true }
+56 -5
@@ -1,18 +1,69 @@
-use serde::Deserialize;
+use serde::{Deserialize, Serialize};

-use crate::{define_wrapper_type, input::Input, mode::Mode};
+use revive_dt_common::macros::define_wrapper_type;
+
+use crate::{
+    input::{Expected, Step},
+    mode::Mode,
+};

-#[derive(Debug, Default, Deserialize, Clone, Eq, PartialEq)]
+#[derive(Debug, Default, Serialize, Deserialize, Clone, Eq, PartialEq)]
 pub struct Case {
+    #[serde(skip_serializing_if = "Option::is_none")]
     pub name: Option<String>,
+    #[serde(skip_serializing_if = "Option::is_none")]
     pub comment: Option<String>,
+    #[serde(skip_serializing_if = "Option::is_none")]
     pub modes: Option<Vec<Mode>>,
-    pub inputs: Vec<Input>,
+    #[serde(rename = "inputs")]
+    pub steps: Vec<Step>,
+    #[serde(skip_serializing_if = "Option::is_none")]
     pub group: Option<String>,
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub expected: Option<Expected>,
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub ignore: Option<bool>,
 }
impl Case {
#[allow(irrefutable_let_patterns)]
pub fn steps_iterator(&self) -> impl Iterator<Item = Step> {
let steps_len = self.steps.len();
self.steps
.clone()
.into_iter()
.enumerate()
.map(move |(idx, mut step)| {
let Step::FunctionCall(ref mut input) = step else {
return step;
};
if idx + 1 == steps_len {
if input.expected.is_none() {
input.expected = self.expected.clone();
}
// TODO: What does it mean for us to have an `expected` field on the case itself
// but the final input also has an expected field that doesn't match the one on
// the case? What are we supposed to do with that final expected field on the
// case?
step
} else {
step
}
})
}
} }
define_wrapper_type!( define_wrapper_type!(
/// A wrapper type for the index of test cases found in metadata file. /// A wrapper type for the index of test cases found in metadata file.
#[derive(Clone, Copy, Debug, PartialEq, Eq, PartialOrd, Ord, Hash)] #[derive(Clone, Copy, Debug, PartialEq, Eq, PartialOrd, Ord, Hash)]
CaseIdx(usize); pub struct CaseIdx(usize);
); );
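steps_iterator exists so that a case-level expected assertion is folded into the final function-call step when that step has none of its own. The core of that rule, reduced to stand-in types (the real Step and Expected are richer):

// Stand-in types: the real `Step`/`Expected` live in the format crate.
#[derive(Debug, PartialEq)]
struct Expected(u32);

struct Input {
    expected: Option<Expected>,
}

fn fold_case_expected(mut steps: Vec<Input>, case_expected: Option<Expected>) -> Vec<Input> {
    if let Some(last) = steps.last_mut() {
        // Only the final step inherits the case-level `expected`, and only
        // when it does not define its own.
        if last.expected.is_none() {
            last.expected = case_expected;
        }
    }
    steps
}

fn main() {
    let steps = vec![Input { expected: None }, Input { expected: None }];
    let folded = fold_case_expected(steps, Some(Expected(7)));
    assert_eq!(folded[0].expected, None);
    assert_eq!(folded[1].expected, Some(Expected(7)));
}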
+38 -6
@@ -17,13 +17,31 @@ impl Corpus {
/// Try to read and parse the corpus definition file at given `path`. /// Try to read and parse the corpus definition file at given `path`.
pub fn try_from_path(path: &Path) -> anyhow::Result<Self> { pub fn try_from_path(path: &Path) -> anyhow::Result<Self> {
let file = File::open(path)?; let file = File::open(path)?;
Ok(serde_json::from_reader(file)?) let mut corpus: Corpus = serde_json::from_reader(file)?;
// Ensure that the path mentioned in the corpus is relative to the corpus file.
// Canonicalizing also helps make the path in any errors unambiguous.
corpus.path = path
.parent()
.ok_or_else(|| {
anyhow::anyhow!("Corpus path '{}' does not point to a file", path.display())
})?
.canonicalize()
.map_err(|error| {
anyhow::anyhow!(
"Failed to canonicalize path to corpus '{}': {error}",
path.display()
)
})?
.join(corpus.path);
Ok(corpus)
} }
/// Scan the corpus base directory and return all tests found. /// Scan the corpus base directory and return all tests found.
pub fn enumerate_tests(&self) -> Vec<MetadataFile> { pub async fn enumerate_tests(&self) -> Vec<MetadataFile> {
let mut tests = Vec::new(); let mut tests = Vec::new();
collect_metadata(&self.path, &mut tests); collect_metadata(&self.path, &mut tests).await;
tests tests
} }
} }
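The parent-directory join above makes the corpus's relative path absolute with respect to the corpus file itself. With hypothetical paths, and canonicalize() omitted so the sketch runs without real files:

use std::path::{Path, PathBuf};

// Sketch of the path resolution above: a corpus file at
// /repo/corpora/main.json whose `path` field is the relative "tests".
fn resolve(corpus_file: &Path, relative: &Path) -> PathBuf {
    corpus_file
        .parent()
        .expect("corpus path must point to a file")
        .join(relative) // canonicalize() omitted in this sketch
}

fn main() {
    let resolved = resolve(Path::new("/repo/corpora/main.json"), Path::new("tests"));
    assert_eq!(resolved, PathBuf::from("/repo/corpora/tests"));
}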
@@ -34,7 +52,8 @@ impl Corpus {
/// Found tests are inserted into `tests`. /// Found tests are inserted into `tests`.
/// ///
/// `path` is expected to be a directory. /// `path` is expected to be a directory.
pub fn collect_metadata(path: &Path, tests: &mut Vec<MetadataFile>) { pub async fn collect_metadata(path: &Path, tests: &mut Vec<MetadataFile>) {
if path.is_dir() {
let dir_entry = match std::fs::read_dir(path) { let dir_entry = match std::fs::read_dir(path) {
Ok(dir_entry) => dir_entry, Ok(dir_entry) => dir_entry,
Err(error) => { Err(error) => {
@@ -54,14 +73,27 @@ pub fn collect_metadata(path: &Path, tests: &mut Vec<MetadataFile>) {
let path = entry.path(); let path = entry.path();
if path.is_dir() { if path.is_dir() {
collect_metadata(&path, tests); Box::pin(collect_metadata(&path, tests)).await;
continue; continue;
} }
if path.is_file() { if path.is_file() {
if let Some(metadata) = MetadataFile::try_from_file(&path) { if let Some(metadata) = MetadataFile::try_from_file(&path).await {
tests.push(metadata) tests.push(metadata)
} }
} }
} }
} else {
let Some(extension) = path.extension() else {
tracing::error!("Failed to get file extension");
return;
};
if extension.eq_ignore_ascii_case("sol") || extension.eq_ignore_ascii_case("json") {
if let Some(metadata) = MetadataFile::try_from_file(path).await {
tests.push(metadata)
}
} else {
tracing::error!(?extension, "Unsupported file extension");
}
}
} }
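The Box::pin around the recursive collect_metadata call is required because an async fn cannot await itself directly: its future type would have to contain itself and so has no finite size. Boxing breaks the cycle:

// Why `Box::pin` is needed for async recursion: without boxing, the compiler
// rejects the recursive call because the future type would contain itself.
async fn count_down(n: u32) {
    if n == 0 {
        return;
    }
    println!("{n}");
    Box::pin(count_down(n - 1)).await; // boxing gives the future a fixed size
}

#[tokio::main]
async fn main() {
    count_down(3).await;
}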
+835 -280
File diff suppressed because it is too large
+1 -1
@@ -3,6 +3,6 @@
pub mod case; pub mod case;
pub mod corpus; pub mod corpus;
pub mod input; pub mod input;
pub mod macros;
pub mod metadata; pub mod metadata;
pub mod mode; pub mod mode;
pub mod traits;
+255 -39
@@ -1,7 +1,7 @@
use std::{ use std::{
cmp::Ordering,
collections::BTreeMap, collections::BTreeMap,
fmt::Display, fmt::Display,
fs::{File, read_to_string},
ops::Deref, ops::Deref,
path::{Path, PathBuf}, path::{Path, PathBuf},
str::FromStr, str::FromStr,
@@ -9,9 +9,13 @@ use std::{
use serde::{Deserialize, Serialize}; use serde::{Deserialize, Serialize};
use revive_common::EVMVersion;
use revive_dt_common::{
fs::CachedFileSystem, iterators::FilesWithExtensionIterator, macros::define_wrapper_type,
};
use crate::{ use crate::{
case::Case, case::Case,
define_wrapper_type,
mode::{Mode, SolcMode}, mode::{Mode, SolcMode},
}; };
@@ -26,8 +30,8 @@ pub struct MetadataFile {
} }
impl MetadataFile { impl MetadataFile {
pub fn try_from_file(path: &Path) -> Option<Self> { pub async fn try_from_file(path: &Path) -> Option<Self> {
Metadata::try_from_file(path).map(|metadata| Self { Metadata::try_from_file(path).await.map(|metadata| Self {
path: path.to_owned(), path: path.to_owned(),
content: metadata, content: metadata,
}) })
@@ -42,15 +46,42 @@ impl Deref for MetadataFile {
} }
} }
#[derive(Debug, Default, Deserialize, Clone, Eq, PartialEq)] #[derive(Debug, Default, Serialize, Deserialize, Clone, Eq, PartialEq)]
pub struct Metadata { pub struct Metadata {
pub cases: Vec<Case>, /// A comment on the test case that's added for human-readability.
pub contracts: Option<BTreeMap<ContractInstance, ContractPathAndIdentifier>>, #[serde(skip_serializing_if = "Option::is_none")]
// TODO: Convert into wrapper types for clarity. pub comment: Option<String>,
pub libraries: Option<BTreeMap<String, BTreeMap<String, String>>>,
#[serde(skip_serializing_if = "Option::is_none")]
pub ignore: Option<bool>, pub ignore: Option<bool>,
#[serde(skip_serializing_if = "Option::is_none")]
pub targets: Option<Vec<String>>,
pub cases: Vec<Case>,
#[serde(skip_serializing_if = "Option::is_none")]
pub contracts: Option<BTreeMap<ContractInstance, ContractPathAndIdent>>,
#[serde(skip_serializing_if = "Option::is_none")]
pub libraries: Option<BTreeMap<PathBuf, BTreeMap<ContractIdent, ContractInstance>>>,
#[serde(skip_serializing_if = "Option::is_none")]
pub modes: Option<Vec<Mode>>, pub modes: Option<Vec<Mode>>,
#[serde(skip_serializing_if = "Option::is_none")]
pub file_path: Option<PathBuf>, pub file_path: Option<PathBuf>,
/// This field specifies an EVM version requirement for the test case: the test may only be
/// run if the EVM version of the nodes matches the EVM version requirement specified here.
#[serde(skip_serializing_if = "Option::is_none")]
pub required_evm_version: Option<EvmVersionRequirement>,
/// A set of compilation directives that will be passed to the compiler whenever the contracts for
/// the test are being compiled. Note that this differs from the [`Mode`]s in that a [`Mode`] is
/// just a filter for when a test can run whereas this is an instruction to the compiler.
#[serde(skip_serializing_if = "Option::is_none")]
pub compiler_directives: Option<CompilationDirectives>,
} }
impl Metadata { impl Metadata {
@@ -84,7 +115,7 @@ impl Metadata {
/// Returns the contract sources with canonicalized paths for the files /// Returns the contract sources with canonicalized paths for the files
pub fn contract_sources( pub fn contract_sources(
&self, &self,
) -> anyhow::Result<BTreeMap<ContractInstance, ContractPathAndIdentifier>> { ) -> anyhow::Result<BTreeMap<ContractInstance, ContractPathAndIdent>> {
let directory = self.directory()?; let directory = self.directory()?;
let mut sources = BTreeMap::new(); let mut sources = BTreeMap::new();
let Some(contracts) = &self.contracts else { let Some(contracts) = &self.contracts else {
@@ -93,7 +124,7 @@ impl Metadata {
for ( for (
alias, alias,
ContractPathAndIdentifier { ContractPathAndIdent {
contract_source_path, contract_source_path,
contract_ident, contract_ident,
}, },
@@ -105,7 +136,7 @@ impl Metadata {
sources.insert( sources.insert(
alias, alias,
ContractPathAndIdentifier { ContractPathAndIdent {
contract_source_path: absolute_path, contract_source_path: absolute_path,
contract_ident, contract_ident,
}, },
@@ -121,7 +152,7 @@ impl Metadata {
/// ///
/// # Panics /// # Panics
/// Expects the supplied `path` to be a file. /// Expects the supplied `path` to be a file.
pub fn try_from_file(path: &Path) -> Option<Self> { pub async fn try_from_file(path: &Path) -> Option<Self> {
assert!(path.is_file(), "not a file: {}", path.display()); assert!(path.is_file(), "not a file: {}", path.display());
let Some(file_extension) = path.extension() else { let Some(file_extension) = path.extension() else {
@@ -130,19 +161,20 @@ impl Metadata {
}; };
if file_extension == METADATA_FILE_EXTENSION { if file_extension == METADATA_FILE_EXTENSION {
return Self::try_from_json(path); return Self::try_from_json(path).await;
} }
if file_extension == SOLIDITY_CASE_FILE_EXTENSION { if file_extension == SOLIDITY_CASE_FILE_EXTENSION {
return Self::try_from_solidity(path); return Self::try_from_solidity(path).await;
} }
tracing::debug!("ignoring invalid corpus file: {}", path.display()); tracing::debug!("ignoring invalid corpus file: {}", path.display());
None None
} }
fn try_from_json(path: &Path) -> Option<Self> { async fn try_from_json(path: &Path) -> Option<Self> {
let file = File::open(path) let content = CachedFileSystem::read(path)
.await
.inspect_err(|error| { .inspect_err(|error| {
tracing::error!( tracing::error!(
"opening JSON test metadata file '{}' error: {error}", "opening JSON test metadata file '{}' error: {error}",
@@ -151,7 +183,7 @@ impl Metadata {
}) })
.ok()?; .ok()?;
match serde_json::from_reader::<_, Metadata>(file) { match serde_json::from_slice::<Metadata>(content.as_slice()) {
Ok(mut metadata) => { Ok(mut metadata) => {
metadata.file_path = Some(path.to_path_buf()); metadata.file_path = Some(path.to_path_buf());
Some(metadata) Some(metadata)
@@ -166,8 +198,9 @@ impl Metadata {
} }
} }
fn try_from_solidity(path: &Path) -> Option<Self> { async fn try_from_solidity(path: &Path) -> Option<Self> {
let spec = read_to_string(path) let spec = CachedFileSystem::read_to_string(path)
.await
.inspect_err(|error| { .inspect_err(|error| {
tracing::error!( tracing::error!(
"opening JSON test metadata file '{}' error: {error}", "opening JSON test metadata file '{}' error: {error}",
@@ -191,10 +224,10 @@ impl Metadata {
metadata.file_path = Some(path.to_path_buf()); metadata.file_path = Some(path.to_path_buf());
metadata.contracts = Some( metadata.contracts = Some(
[( [(
ContractInstance::new_from("test"), ContractInstance::new("Test"),
ContractPathAndIdentifier { ContractPathAndIdent {
contract_source_path: path.to_path_buf(), contract_source_path: path.to_path_buf(),
contract_ident: ContractIdent::new_from("Test"), contract_ident: ContractIdent::new("Test"),
}, },
)] )]
.into(), .into(),
@@ -210,24 +243,51 @@ impl Metadata {
} }
} }
} }
/// Returns an iterator over all of the solidity files that need to be compiled for this
/// [`Metadata`] object
///
/// Note: if the metadata is contained within a solidity file then this is the only file that
/// we wish to compile since this is a self-contained test. Otherwise, if it's a JSON file
/// then we need to compile all of the contracts that are in the directory since imports are
/// allowed in there.
pub fn files_to_compile(&self) -> anyhow::Result<Box<dyn Iterator<Item = PathBuf>>> {
let Some(ref metadata_file_path) = self.file_path else {
anyhow::bail!("The metadata file path is not defined");
};
if metadata_file_path
.extension()
.is_some_and(|extension| extension.eq_ignore_ascii_case("sol"))
{
Ok(Box::new(std::iter::once(metadata_file_path.clone())))
} else {
Ok(Box::new(
FilesWithExtensionIterator::new(self.directory()?).with_allowed_extension("sol"),
))
}
}
} }
define_wrapper_type!( define_wrapper_type!(
/// Represents a contract instance found in a metadata file. /// Represents a contract instance found in a metadata file.
/// ///
/// Typically, this is used as the key to the "contracts" field of metadata files. /// Typically, this is used as the key to the "contracts" field of metadata files.
#[derive(Clone, Debug, Default, PartialEq, Eq, PartialOrd, Ord, Hash, Serialize, Deserialize)] #[derive(
Clone, Debug, Default, PartialEq, Eq, PartialOrd, Ord, Hash, Serialize, Deserialize,
)]
#[serde(transparent)] #[serde(transparent)]
ContractInstance(String); pub struct ContractInstance(String);
); );
define_wrapper_type!( define_wrapper_type!(
/// Represents a contract identifier found in a metadata file. /// Represents a contract identifier found in a metadata file.
/// ///
/// A contract identifier is the name of the contract in the source code. /// A contract identifier is the name of the contract in the source code.
#[derive(Clone, Debug, Default, PartialEq, Eq, PartialOrd, Ord, Hash, Serialize, Deserialize)] #[derive(
Clone, Debug, Default, PartialEq, Eq, PartialOrd, Ord, Hash, Serialize, Deserialize,
)]
#[serde(transparent)] #[serde(transparent)]
ContractIdent(String); pub struct ContractIdent(String);
); );
/// Represents an identifier used for contracts. /// Represents an identifier used for contracts.
@@ -239,7 +299,7 @@ define_wrapper_type!(
/// ``` /// ```
#[derive(Clone, Debug, PartialEq, Eq, PartialOrd, Ord, Hash, Serialize, Deserialize)] #[derive(Clone, Debug, PartialEq, Eq, PartialOrd, Ord, Hash, Serialize, Deserialize)]
#[serde(try_from = "String", into = "String")] #[serde(try_from = "String", into = "String")]
pub struct ContractPathAndIdentifier { pub struct ContractPathAndIdent {
/// The path of the contract source code relative to the directory containing the metadata file. /// The path of the contract source code relative to the directory containing the metadata file.
pub contract_source_path: PathBuf, pub contract_source_path: PathBuf,
@@ -247,7 +307,7 @@ pub struct ContractPathAndIdentifier {
pub contract_ident: ContractIdent, pub contract_ident: ContractIdent,
} }
impl Display for ContractPathAndIdentifier { impl Display for ContractPathAndIdent {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!( write!(
f, f,
@@ -258,7 +318,7 @@ impl Display for ContractPathAndIdentifier {
} }
} }
impl FromStr for ContractPathAndIdentifier { impl FromStr for ContractPathAndIdent {
type Err = anyhow::Error; type Err = anyhow::Error;
fn from_str(s: &str) -> Result<Self, Self::Err> { fn from_str(s: &str) -> Result<Self, Self::Err> {
@@ -281,20 +341,26 @@ impl FromStr for ContractPathAndIdentifier {
identifier = Some(next_item.to_owned()) identifier = Some(next_item.to_owned())
} }
} }
let Some(path) = path else { match (path, identifier) {
anyhow::bail!("Path is not defined"); (Some(path), Some(identifier)) => Ok(Self {
}; contract_source_path: PathBuf::from(path),
let Some(identifier) = identifier else { contract_ident: ContractIdent::new(identifier),
anyhow::bail!("Contract identifier is not defined") }),
(None, Some(path)) | (Some(path), None) => {
let Some(identifier) = path.split(".").next().map(ToOwned::to_owned) else {
anyhow::bail!("Failed to find identifier");
}; };
Ok(Self { Ok(Self {
contract_source_path: PathBuf::from(path), contract_source_path: PathBuf::from(path),
contract_ident: ContractIdent::new(identifier), contract_ident: ContractIdent::new(identifier),
}) })
} }
(None, None) => anyhow::bail!("Failed to find the path and identifier"),
}
}
} }
impl TryFrom<String> for ContractPathAndIdentifier { impl TryFrom<String> for ContractPathAndIdent {
type Error = anyhow::Error; type Error = anyhow::Error;
fn try_from(value: String) -> Result<Self, Self::Error> { fn try_from(value: String) -> Result<Self, Self::Error> {
@@ -302,12 +368,162 @@ impl TryFrom<String> for ContractPathAndIdentifier {
} }
} }
impl From<ContractPathAndIdentifier> for String { impl From<ContractPathAndIdent> for String {
fn from(value: ContractPathAndIdentifier) -> Self { fn from(value: ContractPathAndIdent) -> Self {
value.to_string() value.to_string()
} }
} }
/// An EVM version requirement that the test case declares. This is serialized to and
/// deserialized from a [`String`].
#[derive(Clone, Copy, Debug, PartialEq, Eq, PartialOrd, Ord, Hash, Serialize, Deserialize)]
#[serde(try_from = "String", into = "String")]
pub struct EvmVersionRequirement {
ordering: Ordering,
or_equal: bool,
evm_version: EVMVersion,
}
impl EvmVersionRequirement {
pub fn new_greater_than_or_equals(version: EVMVersion) -> Self {
Self {
ordering: Ordering::Greater,
or_equal: true,
evm_version: version,
}
}
pub fn new_greater_than(version: EVMVersion) -> Self {
Self {
ordering: Ordering::Greater,
or_equal: false,
evm_version: version,
}
}
pub fn new_equals(version: EVMVersion) -> Self {
Self {
ordering: Ordering::Equal,
or_equal: false,
evm_version: version,
}
}
pub fn new_less_than(version: EVMVersion) -> Self {
Self {
ordering: Ordering::Less,
or_equal: false,
evm_version: version,
}
}
pub fn new_less_than_or_equals(version: EVMVersion) -> Self {
Self {
ordering: Ordering::Less,
or_equal: true,
evm_version: version,
}
}
pub fn matches(&self, other: &EVMVersion) -> bool {
let ordering = other.cmp(&self.evm_version);
ordering == self.ordering || (self.or_equal && matches!(ordering, Ordering::Equal))
}
}
impl Display for EvmVersionRequirement {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let Self {
ordering,
or_equal,
evm_version,
} = self;
match ordering {
Ordering::Less => write!(f, "<")?,
Ordering::Equal => write!(f, "=")?,
Ordering::Greater => write!(f, ">")?,
}
if *or_equal && !matches!(ordering, Ordering::Equal) {
write!(f, "=")?;
}
write!(f, "{evm_version}")
}
}
impl FromStr for EvmVersionRequirement {
type Err = anyhow::Error;
fn from_str(s: &str) -> Result<Self, Self::Err> {
match s.as_bytes() {
[b'>', b'=', remaining @ ..] => Ok(Self {
ordering: Ordering::Greater,
or_equal: true,
evm_version: str::from_utf8(remaining)?.try_into()?,
}),
[b'>', remaining @ ..] => Ok(Self {
ordering: Ordering::Greater,
or_equal: false,
evm_version: str::from_utf8(remaining)?.try_into()?,
}),
[b'<', b'=', remaining @ ..] => Ok(Self {
ordering: Ordering::Less,
or_equal: true,
evm_version: str::from_utf8(remaining)?.try_into()?,
}),
[b'<', remaining @ ..] => Ok(Self {
ordering: Ordering::Less,
or_equal: false,
evm_version: str::from_utf8(remaining)?.try_into()?,
}),
[b'=', remaining @ ..] => Ok(Self {
ordering: Ordering::Equal,
or_equal: false,
evm_version: str::from_utf8(remaining)?.try_into()?,
}),
_ => anyhow::bail!("Invalid EVM version requirement {s}"),
}
}
}
impl TryFrom<String> for EvmVersionRequirement {
type Error = anyhow::Error;
fn try_from(value: String) -> Result<Self, Self::Error> {
value.parse()
}
}
impl From<EvmVersionRequirement> for String {
fn from(value: EvmVersionRequirement) -> Self {
value.to_string()
}
}
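A requirement string is therefore an ordering prefix (>=, >, <=, <, =) followed by an EVM version name. The matching rule from matches() above, demonstrated on a stand-in version enum ordered chronologically:

use std::cmp::Ordering;

// Stand-in for `revive_common::EVMVersion`: orders fork names chronologically.
#[derive(Clone, Copy, Debug, PartialEq, Eq, PartialOrd, Ord)]
enum EvmVersion {
    London,
    Shanghai,
    Cancun,
}

// The same rule as `EvmVersionRequirement::matches` above.
fn matches(req_ordering: Ordering, or_equal: bool, req: EvmVersion, node: EvmVersion) -> bool {
    let ordering = node.cmp(&req);
    ordering == req_ordering || (or_equal && ordering == Ordering::Equal)
}

fn main() {
    // ">=shanghai" accepts shanghai and cancun but rejects london.
    assert!(matches(Ordering::Greater, true, EvmVersion::Shanghai, EvmVersion::Shanghai));
    assert!(matches(Ordering::Greater, true, EvmVersion::Shanghai, EvmVersion::Cancun));
    assert!(!matches(Ordering::Greater, true, EvmVersion::Shanghai, EvmVersion::London));
}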
/// A set of compilation directives that will be passed to the compiler whenever the contracts for
/// the test are being compiled. Note that this differs from the [`Mode`]s in that a [`Mode`] is
/// just a filter for when a test can run whereas this is an instruction to the compiler.
#[derive(
Clone, Debug, Copy, PartialEq, Eq, PartialOrd, Ord, Hash, Default, Serialize, Deserialize,
)]
pub struct CompilationDirectives {
/// Defines how the revert strings should be handled.
pub revert_string_handling: Option<RevertString>,
}
/// Defines how the compiler should handle revert strings.
#[derive(
Clone, Debug, Copy, PartialEq, Eq, PartialOrd, Ord, Hash, Default, Serialize, Deserialize,
)]
#[serde(rename_all = "camelCase")]
pub enum RevertString {
#[default]
Default,
Debug,
Strip,
VerboseDebug,
}
#[cfg(test)] #[cfg(test)]
mod test { mod test {
use super::*; use super::*;
@@ -318,7 +534,7 @@ mod test {
let string = "ERC20/ERC20.sol:ERC20"; let string = "ERC20/ERC20.sol:ERC20";
// Act // Act
let identifier = ContractPathAndIdentifier::from_str(string); let identifier = ContractPathAndIdent::from_str(string);
// Assert // Assert
let identifier = identifier.expect("Failed to parse"); let identifier = identifier.expect("Failed to parse");
+28 -1
@@ -1,3 +1,4 @@
use revive_dt_common::types::VersionOrRequirement;
use semver::Version; use semver::Version;
use serde::de::Deserializer; use serde::de::Deserializer;
use serde::{Deserialize, Serialize}; use serde::{Deserialize, Serialize};
@@ -15,6 +16,7 @@ pub struct SolcMode {
pub solc_version: Option<semver::VersionReq>, pub solc_version: Option<semver::VersionReq>,
solc_optimize: Option<bool>, solc_optimize: Option<bool>,
pub llvm_optimizer_settings: Vec<String>, pub llvm_optimizer_settings: Vec<String>,
mode_string: String,
} }
impl SolcMode { impl SolcMode {
@@ -28,7 +30,10 @@ impl SolcMode {
/// - A solc `SemVer version requirement` string /// - A solc `SemVer version requirement` string
/// - One or more `-OX` where X is supposed to be an LLVM opt mode /// - One or more `-OX` where X is supposed to be an LLVM opt mode
pub fn parse_from_mode_string(mode_string: &str) -> Option<Self> { pub fn parse_from_mode_string(mode_string: &str) -> Option<Self> {
let mut result = Self::default(); let mut result = Self {
mode_string: mode_string.to_string(),
..Default::default()
};
let mut parts = mode_string.trim().split(" "); let mut parts = mode_string.trim().split(" ");
@@ -78,6 +83,15 @@ impl SolcMode {
None None
} }
/// Resolves the [`SolcMode`]'s solidity version requirement into a [`VersionOrRequirement`] if
/// the requirement is present on the object. Otherwise, the passed default version is used.
pub fn compiler_version_to_use(&self, default: Version) -> VersionOrRequirement {
match self.solc_version {
Some(ref requirement) => requirement.clone().into(),
None => default.into(),
}
}
} }
impl<'de> Deserialize<'de> for Mode { impl<'de> Deserialize<'de> for Mode {
@@ -94,3 +108,16 @@ impl<'de> Deserialize<'de> for Mode {
Ok(Self::Unknown(mode_string)) Ok(Self::Unknown(mode_string))
} }
} }
impl Serialize for Mode {
fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
where
S: serde::Serializer,
{
let string = match self {
Mode::Solidity(solc_mode) => &solc_mode.mode_string,
Mode::Unknown(string) => string,
};
string.serialize(serializer)
}
}
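Storing the raw mode_string lets Serialize emit exactly the text that was parsed, so modes round-trip losslessly through JSON. The idea, reduced to a transparent stand-in type (the real Mode additionally parses the string into a structured SolcMode):

use serde::{Deserialize, Serialize};

// Simplified stand-in for `Mode`: deserializes from a plain string and keeps
// the original text so serialization reproduces it byte-for-byte.
#[derive(Serialize, Deserialize, Debug, PartialEq)]
#[serde(transparent)]
struct Mode {
    mode_string: String,
}

fn main() {
    let mode: Mode = serde_json::from_str("\"Y+ >=0.8.0 -O3\"").unwrap();
    let out = serde_json::to_string(&mode).unwrap();
    assert_eq!(out, "\"Y+ >=0.8.0 -O3\""); // lossless round-trip
}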
+150
@@ -0,0 +1,150 @@
use std::collections::HashMap;
use alloy::eips::BlockNumberOrTag;
use alloy::json_abi::JsonAbi;
use alloy::primitives::{Address, BlockHash, BlockNumber, BlockTimestamp, ChainId, U256};
use alloy_primitives::TxHash;
use anyhow::Result;
use crate::metadata::ContractInstance;
/// A trait describing the interface that nodes are required to implement in order to be used by
/// the resolution logic that this crate implements to go from string calldata to byte calldata.
pub trait ResolverApi {
/// Returns the ID of the chain that the node is on.
fn chain_id(&self) -> impl Future<Output = Result<ChainId>>;
/// Returns the gas price for the specified transaction.
fn transaction_gas_price(&self, tx_hash: &TxHash) -> impl Future<Output = Result<u128>>;
// TODO: This is currently a u128 due to Kitchensink needing more than 64 bits for its gas limit
// when we implement the changes to the gas we need to adjust this to be a u64.
/// Returns the gas limit of the specified block.
fn block_gas_limit(&self, number: BlockNumberOrTag) -> impl Future<Output = Result<u128>>;
/// Returns the coinbase of the specified block.
fn block_coinbase(&self, number: BlockNumberOrTag) -> impl Future<Output = Result<Address>>;
/// Returns the difficulty of the specified block.
fn block_difficulty(&self, number: BlockNumberOrTag) -> impl Future<Output = Result<U256>>;
/// Returns the base fee of the specified block.
fn block_base_fee(&self, number: BlockNumberOrTag) -> impl Future<Output = Result<u64>>;
/// Returns the hash of the specified block.
fn block_hash(&self, number: BlockNumberOrTag) -> impl Future<Output = Result<BlockHash>>;
/// Returns the timestamp of the specified block.
fn block_timestamp(
&self,
number: BlockNumberOrTag,
) -> impl Future<Output = Result<BlockTimestamp>>;
/// Returns the number of the last block.
fn last_block_number(&self) -> impl Future<Output = Result<BlockNumber>>;
}
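The -> impl Future methods make this a return-position impl-Trait-in-trait interface (stable since Rust 1.75), so an implementor can simply write async fn. A minimal mock, with a single-method trait standing in for the full ResolverApi:

use std::future::Future;

use anyhow::Result;

// The `-> impl Future` methods above desugar the same way `async fn` does, so
// implementors can write plain `async fn`. A one-method mock of the same shape:
trait ChainIdSource {
    fn chain_id(&self) -> impl Future<Output = Result<u64>>;
}

struct MockNode;

impl ChainIdSource for MockNode {
    async fn chain_id(&self) -> Result<u64> {
        Ok(420_420_420) // fixed id for the mock
    }
}

#[tokio::main]
async fn main() -> Result<()> {
    println!("chain id: {}", MockNode.chain_id().await?);
    Ok(())
}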
#[derive(Clone, Copy, Debug, Default)]
/// Contextual information required by the code that's performing the resolution.
pub struct ResolutionContext<'a> {
/// When provided, these contracts will be used for resolutions.
deployed_contracts: Option<&'a HashMap<ContractInstance, (Address, JsonAbi)>>,
/// When provided, these variables will be used when performing resolutions.
variables: Option<&'a HashMap<String, U256>>,
/// When provided, this block number will be treated as the tip of the chain.
block_number: Option<&'a BlockNumber>,
/// When provided, the resolver will use this transaction hash for all of its resolutions.
transaction_hash: Option<&'a TxHash>,
}
impl<'a> ResolutionContext<'a> {
pub fn new() -> Self {
Default::default()
}
pub fn new_from_parts(
deployed_contracts: impl Into<Option<&'a HashMap<ContractInstance, (Address, JsonAbi)>>>,
variables: impl Into<Option<&'a HashMap<String, U256>>>,
block_number: impl Into<Option<&'a BlockNumber>>,
transaction_hash: impl Into<Option<&'a TxHash>>,
) -> Self {
Self {
deployed_contracts: deployed_contracts.into(),
variables: variables.into(),
block_number: block_number.into(),
transaction_hash: transaction_hash.into(),
}
}
pub fn with_deployed_contracts(
mut self,
deployed_contracts: impl Into<Option<&'a HashMap<ContractInstance, (Address, JsonAbi)>>>,
) -> Self {
self.deployed_contracts = deployed_contracts.into();
self
}
pub fn with_variables(
mut self,
variables: impl Into<Option<&'a HashMap<String, U256>>>,
) -> Self {
self.variables = variables.into();
self
}
pub fn with_block_number(mut self, block_number: impl Into<Option<&'a BlockNumber>>) -> Self {
self.block_number = block_number.into();
self
}
pub fn with_transaction_hash(
mut self,
transaction_hash: impl Into<Option<&'a TxHash>>,
) -> Self {
self.transaction_hash = transaction_hash.into();
self
}
pub fn resolve_block_number(&self, number: BlockNumberOrTag) -> BlockNumberOrTag {
match self.block_number {
Some(block_number) => match number {
BlockNumberOrTag::Latest => BlockNumberOrTag::Number(*block_number),
n @ (BlockNumberOrTag::Finalized
| BlockNumberOrTag::Safe
| BlockNumberOrTag::Earliest
| BlockNumberOrTag::Pending
| BlockNumberOrTag::Number(_)) => n,
},
None => number,
}
}
pub fn deployed_contract(&self, instance: &ContractInstance) -> Option<&(Address, JsonAbi)> {
self.deployed_contracts
.and_then(|deployed_contracts| deployed_contracts.get(instance))
}
pub fn deployed_contract_address(&self, instance: &ContractInstance) -> Option<&Address> {
self.deployed_contract(instance).map(|(a, _)| a)
}
pub fn deployed_contract_abi(&self, instance: &ContractInstance) -> Option<&JsonAbi> {
self.deployed_contract(instance).map(|(_, a)| a)
}
pub fn variable(&self, name: impl AsRef<str>) -> Option<&U256> {
self.variables
.and_then(|variables| variables.get(name.as_ref()))
}
pub fn tip_block_number(&self) -> Option<&'a BlockNumber> {
self.block_number
}
pub fn transaction_hash(&self) -> Option<&'a TxHash> {
self.transaction_hash
}
}
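The impl Into<Option<&T>> bounds let call sites pass either a plain reference or None without wrapping it themselves. A reduced stand-in with invented field types showing the same builder style:

use std::collections::HashMap;

// Stand-in mirroring `ResolutionContext`'s builder style: `impl Into<Option<&T>>`
// parameters accept either `&value` or `None` at the call site.
#[derive(Default, Debug)]
struct Context<'a> {
    variables: Option<&'a HashMap<String, u64>>,
    block_number: Option<&'a u64>,
}

impl<'a> Context<'a> {
    fn with_variables(mut self, v: impl Into<Option<&'a HashMap<String, u64>>>) -> Self {
        self.variables = v.into();
        self
    }
    fn with_block_number(mut self, n: impl Into<Option<&'a u64>>) -> Self {
        self.block_number = n.into();
        self
    }
}

fn main() {
    let vars = HashMap::from([("$VALUE".to_string(), 42u64)]);
    let tip = 1_000u64;
    // `&vars` coerces into `Some(&vars)` via the `Into<Option<_>>` bound.
    let ctx = Context::default().with_variables(&vars).with_block_number(&tip);
    println!("{ctx:?}");
}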
-4
@@ -11,7 +11,3 @@ rust-version.workspace = true
[dependencies] [dependencies]
alloy = { workspace = true } alloy = { workspace = true }
anyhow = { workspace = true } anyhow = { workspace = true }
futures = { workspace = true }
tracing = { workspace = true }
once_cell = { workspace = true }
tokio = { workspace = true }
@@ -1,220 +0,0 @@
//! The alloy crate __requires__ a tokio runtime.
//! We contain any async rust right here.
use std::{any::Any, panic::AssertUnwindSafe, pin::Pin, thread};
use futures::FutureExt;
use once_cell::sync::Lazy;
use tokio::{
runtime::Builder,
sync::{mpsc::UnboundedSender, oneshot},
};
/// A blocking async executor.
///
/// This struct exposes the abstraction of a blocking async executor. It is a global and static
/// executor which means that it doesn't require for new instances of it to be created, it's a
/// singleton and can be accessed by any thread that wants to perform some async computation on the
/// blocking executor thread.
///
/// The API of the blocking executor is created in a way so that it's very natural, simple to use,
/// and unbounded to specific tasks or return types. The following is an example of using this
/// executor to drive an async computation:
///
/// ```rust
/// use revive_dt_node_interaction::*;
///
/// fn blocking_function() {
/// let result = BlockingExecutor::execute(async move {
/// tokio::time::sleep(std::time::Duration::from_secs(1)).await;
/// 0xFFu8
/// })
/// .expect("Computation failed");
///
/// assert_eq!(result, 0xFF);
/// }
/// ```
///
/// Users get to pass in their async tasks without needing to worry about putting them in a [`Box`],
/// [`Pin`], needing to perform down-casting, or the internal channel mechanism used by the runtime.
/// To the user, it just looks like a function that converts some async code into sync code.
///
/// This struct also handles panics that occur in the passed futures and converts them into errors
/// that can be handled by the user. This is done to allow the executor to be robust.
///
/// Internally, the executor communicates with the tokio runtime thread through channels which carry
/// the [`TaskMessage`] and the results of the execution.
pub struct BlockingExecutor;
impl BlockingExecutor {
pub fn execute<R>(future: impl Future<Output = R> + Send + 'static) -> Result<R, anyhow::Error>
where
R: Send + 'static,
{
// Note: The blocking executor is a singleton and therefore we store its state in a static
// so that it's assigned only once. Additionally, when we set the state of the executor we
// spawn the thread where the async runtime runs.
static STATE: Lazy<ExecutorState> = Lazy::new(|| {
tracing::trace!("Initializing the BlockingExecutor state");
// All communication with the tokio runtime thread happens over mpsc channels where the
// producers here are the threads that want to run async tasks and the consumer here is
// the tokio runtime thread.
let (tx, mut rx) = tokio::sync::mpsc::unbounded_channel::<TaskMessage>();
thread::spawn(move || {
let runtime = Builder::new_current_thread()
.enable_all()
.build()
.expect("Failed to create the async runtime");
runtime.block_on(async move {
while let Some(TaskMessage {
future: task,
response_tx: response_channel,
}) = rx.recv().await
{
tracing::trace!("Received a new future to execute");
tokio::spawn(async move {
// One of the things that the blocking executor does is that it allows
// us to catch panics if they occur. By wrapping the given future in an
// AssertUnwindSafe::catch_unwind we are able to catch all panic unwinds
// in the given future and convert them into errors.
let task = AssertUnwindSafe(task).catch_unwind();
let result = task.await;
let _ = response_channel.send(result);
});
}
})
});
ExecutorState { tx }
});
// We need to perform blocking synchronous communication between the current thread and the
// tokio runtime thread with the result of the async computation and the oneshot channels
// from tokio allows us to do that. The sender side of the channel will be given to the
// tokio runtime thread to send the result when the computation is completed and the receive
// side of the channel will be kept with this thread to await for the response of the async
// task to come back.
let (response_tx, response_rx) =
oneshot::channel::<Result<Box<dyn Any + Send>, Box<dyn Any + Send>>>();
// The tokio runtime thread expects a Future<Output = Box<dyn Any + Send>> + Send to be
// sent to it to execute. However, this function has a typed Future<Output = R> + Send and
// therefore we need to change the type of the future to fit what the runtime thread expects
// in the task message. In doing this conversion, we lose some of the type information since
// we're converting R => dyn Any. However, we will perform down-casting on the result to
// convert it back into R.
let future = Box::pin(async move { Box::new(future.await) as Box<dyn Any + Send> });
let task = TaskMessage::new(future, response_tx);
if let Err(error) = STATE.tx.send(task) {
tracing::error!(?error, "Failed to send the task to the blocking executor");
anyhow::bail!("Failed to send the task to the blocking executor: {error:?}")
}
let result = match response_rx.blocking_recv() {
Ok(result) => result,
Err(error) => {
tracing::error!(
?error,
"Failed to get the response from the blocking executor"
);
anyhow::bail!("Failed to get the response from the blocking executor: {error:?}")
}
};
match result.map(|result| {
*result
.downcast::<R>()
.expect("Type mismatch in the downcast")
}) {
Ok(result) => Ok(result),
Err(error) => {
tracing::error!(
?error,
"Failed to downcast the returned result into the expected type"
);
anyhow::bail!(
"Failed to downcast the returned result into the expected type: {error:?}"
)
}
}
}
}
/// Represents the state of the async runtime. This runtime is designed to be a singleton runtime
/// which means that in the current running program there's just a single thread that has an async
/// runtime.
struct ExecutorState {
/// The sending side of the task messages channel. This is used by all of the other threads to
/// communicate with the async runtime thread.
tx: UnboundedSender<TaskMessage>,
}
/// Represents a message that contains an asynchronous task that's to be executed by the runtime
/// as well as a way for the runtime to report back on the result of the execution.
struct TaskMessage {
/// The task that's being requested to run. This is a future that returns an object that does
/// implement [`Any`] and [`Send`] to allow it to be sent between the requesting thread and the
/// async thread.
future: Pin<Box<dyn Future<Output = Box<dyn Any + Send>> + Send>>,
/// A one shot sender channel where the sender of the task is expecting to hear back on the
/// result of the task.
response_tx: oneshot::Sender<Result<Box<dyn Any + Send>, Box<dyn Any + Send>>>,
}
impl TaskMessage {
pub fn new(
future: Pin<Box<dyn Future<Output = Box<dyn Any + Send>> + Send>>,
response_tx: oneshot::Sender<Result<Box<dyn Any + Send>, Box<dyn Any + Send>>>,
) -> Self {
Self {
future,
response_tx,
}
}
}
#[cfg(test)]
mod test {
use super::*;
#[test]
fn simple_future_works() {
// Act
let result = BlockingExecutor::execute(async move {
tokio::time::sleep(std::time::Duration::from_secs(1)).await;
0xFFu8
})
.unwrap();
// Assert
assert_eq!(result, 0xFFu8);
}
#[test]
#[allow(unreachable_code, clippy::unreachable)]
fn panics_in_futures_are_caught() {
// Act
let result = BlockingExecutor::execute(async move {
panic!("This is a panic!");
0xFFu8
});
// Assert
assert!(result.is_err());
// Act
let result = BlockingExecutor::execute(async move {
tokio::time::sleep(std::time::Duration::from_secs(1)).await;
0xFFu8
})
.unwrap();
// Assert
assert_eq!(result, 0xFFu8)
}
}
+21 -34
@@ -1,48 +1,35 @@
//! This crate implements all node interactions. //! This crate implements all node interactions.
use alloy::eips::BlockNumberOrTag; use alloy::primitives::{Address, StorageKey, U256};
use alloy::primitives::{Address, BlockHash, BlockNumber, BlockTimestamp, ChainId, U256}; use alloy::rpc::types::trace::geth::{DiffMode, GethDebugTracingOptions, GethTrace};
use alloy::rpc::types::trace::geth::{DiffMode, GethTrace}; use alloy::rpc::types::{EIP1186AccountProofResponse, TransactionReceipt, TransactionRequest};
use alloy::rpc::types::{TransactionReceipt, TransactionRequest};
use anyhow::Result; use anyhow::Result;
mod blocking_executor;
pub use blocking_executor::*;
/// An interface for all interactions with Ethereum compatible nodes. /// An interface for all interactions with Ethereum compatible nodes.
pub trait EthereumNode { pub trait EthereumNode {
/// Execute the [TransactionRequest] and return a [TransactionReceipt]. /// Execute the [TransactionRequest] and return a [TransactionReceipt].
fn execute_transaction(&self, transaction: TransactionRequest) -> Result<TransactionReceipt>; fn execute_transaction(
&self,
transaction: TransactionRequest,
) -> impl Future<Output = Result<TransactionReceipt>>;
/// Trace the transaction in the [TransactionReceipt] and return a [GethTrace]. /// Trace the transaction in the [TransactionReceipt] and return a [GethTrace].
fn trace_transaction(&self, transaction: TransactionReceipt) -> Result<GethTrace>; fn trace_transaction(
&self,
receipt: &TransactionReceipt,
trace_options: GethDebugTracingOptions,
) -> impl Future<Output = Result<GethTrace>>;
/// Returns the state diff of the transaction hash in the [TransactionReceipt]. /// Returns the state diff of the transaction hash in the [TransactionReceipt].
fn state_diff(&self, transaction: TransactionReceipt) -> Result<DiffMode>; fn state_diff(&self, receipt: &TransactionReceipt) -> impl Future<Output = Result<DiffMode>>;
/// Returns the next available nonce for the given [Address]. /// Returns the balance of the provided [`Address`].
fn fetch_add_nonce(&self, address: Address) -> Result<u64>; fn balance_of(&self, address: Address) -> impl Future<Output = Result<U256>>;
/// Returns the ID of the chain that the node is on. /// Returns the latest storage proof of the provided [`Address`].
fn chain_id(&self) -> Result<ChainId>; fn latest_state_proof(
&self,
// TODO: This is currently a u128 due to Kitchensink needing more than 64 bits for its gas limit address: Address,
// when we implement the changes to the gas we need to adjust this to be a u64. keys: Vec<StorageKey>,
/// Returns the gas limit of the specified block. ) -> impl Future<Output = Result<EIP1186AccountProofResponse>>;
fn block_gas_limit(&self, number: BlockNumberOrTag) -> Result<u128>;
/// Returns the coinbase of the specified block.
fn block_coinbase(&self, number: BlockNumberOrTag) -> Result<Address>;
/// Returns the difficulty of the specified block.
fn block_difficulty(&self, number: BlockNumberOrTag) -> Result<U256>;
/// Returns the hash of the specified block.
fn block_hash(&self, number: BlockNumberOrTag) -> Result<BlockHash>;
/// Returns the timestamp of the specified block,
fn block_timestamp(&self, number: BlockNumberOrTag) -> Result<BlockTimestamp>;
/// Returns the number of the last block.
fn last_block_number(&self) -> Result<BlockNumber>;
} }
+4 -1
@@ -14,8 +14,11 @@ alloy = { workspace = true }
tracing = { workspace = true } tracing = { workspace = true }
tokio = { workspace = true } tokio = { workspace = true }
revive-dt-node-interaction = { workspace = true } revive-common = { workspace = true }
revive-dt-common = { workspace = true }
revive-dt-config = { workspace = true } revive-dt-config = { workspace = true }
revive-dt-format = { workspace = true }
revive-dt-node-interaction = { workspace = true }
serde = { workspace = true } serde = { workspace = true }
serde_json = { workspace = true } serde_json = { workspace = true }
+78
@@ -0,0 +1,78 @@
use alloy::{
network::{Network, TransactionBuilder},
providers::{
Provider, SendableTx,
fillers::{GasFiller, TxFiller},
},
transports::TransportResult,
};
#[derive(Clone, Debug)]
pub struct FallbackGasFiller {
inner: GasFiller,
default_gas_limit: u64,
default_max_fee_per_gas: u128,
default_priority_fee: u128,
}
impl FallbackGasFiller {
pub fn new(
default_gas_limit: u64,
default_max_fee_per_gas: u128,
default_priority_fee: u128,
) -> Self {
Self {
inner: GasFiller,
default_gas_limit,
default_max_fee_per_gas,
default_priority_fee,
}
}
}
impl<N> TxFiller<N> for FallbackGasFiller
where
N: Network,
{
type Fillable = Option<<GasFiller as TxFiller<N>>::Fillable>;
fn status(
&self,
tx: &<N as Network>::TransactionRequest,
) -> alloy::providers::fillers::FillerControlFlow {
<GasFiller as TxFiller<N>>::status(&self.inner, tx)
}
fn fill_sync(&self, _: &mut alloy::providers::SendableTx<N>) {}
async fn prepare<P: Provider<N>>(
&self,
provider: &P,
tx: &<N as Network>::TransactionRequest,
) -> TransportResult<Self::Fillable> {
// Try to fetch the GasFiller's "fillable" (gas_price, base_fee, estimate_gas, …)
// If it errors (i.e. tx would revert under eth_estimateGas), swallow it.
match self.inner.prepare(provider, tx).await {
Ok(fill) => Ok(Some(fill)),
Err(_) => Ok(None),
}
}
async fn fill(
&self,
fillable: Self::Fillable,
mut tx: alloy::providers::SendableTx<N>,
) -> TransportResult<SendableTx<N>> {
if let Some(fill) = fillable {
// our inner GasFiller succeeded — use it
self.inner.fill(fill, tx).await
} else {
if let Some(builder) = tx.as_mut_builder() {
builder.set_gas_limit(self.default_gas_limit);
builder.set_max_fee_per_gas(self.default_max_fee_per_gas);
builder.set_max_priority_fee_per_gas(self.default_priority_fee);
}
Ok(tx)
}
}
}
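The filler composes with alloy's ProviderBuilder like any built-in filler; the geth node later in this diff stacks it before the chain-id and nonce fillers. A sketch of that wiring, with a placeholder endpoint and an assumed import path for the type defined above:

use alloy::providers::fillers::ChainIdFiller;
use alloy::providers::{Provider, ProviderBuilder};

// `FallbackGasFiller` is the type defined in this file diff; the import path
// below is an assumption about where it lives in the node crate.
use revive_dt_node::common::FallbackGasFiller;

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let provider = ProviderBuilder::new()
        .disable_recommended_fillers()
        // Fall back to fixed gas values whenever eth_estimateGas would fail.
        .filler(FallbackGasFiller::new(500_000_000, 500_000_000, 1))
        .filler(ChainIdFiller::default())
        .connect("http://127.0.0.1:8545") // placeholder endpoint
        .await?;
    println!("chain id: {}", provider.get_chain_id().await?);
    Ok(())
}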
+5
@@ -0,0 +1,5 @@
/// This constant defines how much Wei accounts are pre-seeded with in genesis.
///
/// Note: After changing this number, check that the tests for kitchensink work as we encountered
/// some issues with different values of the initial balance on Kitchensink.
pub const INITIAL_BALANCE: u128 = 10u128.pow(37);
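Since 1 ether is 10^18 wei, this pre-seeds every genesis account with 10^19 ether, far above any gas a test could burn:

// Quick arithmetic check of the constant above.
fn main() {
    const WEI_PER_ETHER: u128 = 10u128.pow(18);
    const INITIAL_BALANCE: u128 = 10u128.pow(37);
    assert_eq!(INITIAL_BALANCE / WEI_PER_ETHER, 10u128.pow(19)); // 10^19 ether per account
}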
+268 -188
@@ -1,13 +1,13 @@
//! The go-ethereum node implementation. //! The go-ethereum node implementation.
use std::{ use std::{
collections::HashMap,
fs::{File, OpenOptions, create_dir_all, remove_dir_all}, fs::{File, OpenOptions, create_dir_all, remove_dir_all},
io::{BufRead, BufReader, Read, Write}, io::{BufRead, BufReader, Read, Write},
ops::ControlFlow,
path::PathBuf, path::PathBuf,
process::{Child, Command, Stdio}, process::{Child, Command, Stdio},
sync::{ sync::{
Mutex, Arc,
atomic::{AtomicU32, Ordering}, atomic::{AtomicU32, Ordering},
}, },
time::{Duration, Instant}, time::{Duration, Instant},
@@ -15,23 +15,32 @@ use std::{
use alloy::{ use alloy::{
eips::BlockNumberOrTag, eips::BlockNumberOrTag,
network::{Ethereum, EthereumWallet}, genesis::{Genesis, GenesisAccount},
primitives::{Address, BlockHash, BlockNumber, BlockTimestamp, U256}, network::{Ethereum, EthereumWallet, NetworkWallet},
primitives::{
Address, BlockHash, BlockNumber, BlockTimestamp, FixedBytes, StorageKey, TxHash, U256,
},
providers::{ providers::{
Provider, ProviderBuilder, Provider, ProviderBuilder,
ext::DebugApi, ext::DebugApi,
fillers::{FillProvider, TxFiller}, fillers::{CachedNonceManager, ChainIdFiller, FillProvider, NonceFiller, TxFiller},
}, },
rpc::types::{ rpc::types::{
TransactionReceipt, TransactionRequest, EIP1186AccountProofResponse, TransactionReceipt, TransactionRequest,
trace::geth::{DiffMode, GethDebugTracingOptions, PreStateConfig, PreStateFrame}, trace::geth::{DiffMode, GethDebugTracingOptions, PreStateConfig, PreStateFrame},
}, },
signers::local::PrivateKeySigner,
}; };
use revive_dt_config::Arguments; use anyhow::Context;
use revive_dt_node_interaction::{BlockingExecutor, EthereumNode}; use revive_common::EVMVersion;
use tracing::Level; use tracing::{Instrument, Level};
use crate::Node; use revive_dt_common::{fs::clear_directory, futures::poll};
use revive_dt_config::Arguments;
use revive_dt_format::traits::ResolverApi;
use revive_dt_node_interaction::EthereumNode;
use crate::{Node, common::FallbackGasFiller, constants::INITIAL_BALANCE};
static NODE_COUNT: AtomicU32 = AtomicU32::new(0); static NODE_COUNT: AtomicU32 = AtomicU32::new(0);
@@ -43,7 +52,7 @@ static NODE_COUNT: AtomicU32 = AtomicU32::new(0);
/// ///
/// Prunes the child process and the base directory on drop. /// Prunes the child process and the base directory on drop.
#[derive(Debug)] #[derive(Debug)]
pub struct Instance { pub struct GethNode {
connection_string: String, connection_string: String,
base_directory: PathBuf, base_directory: PathBuf,
data_directory: PathBuf, data_directory: PathBuf,
@@ -54,7 +63,7 @@ pub struct Instance {
network_id: u64, network_id: u64,
start_timeout: u64, start_timeout: u64,
wallet: EthereumWallet, wallet: EthereumWallet,
nonces: Mutex<HashMap<Address, u64>>, nonce_manager: CachedNonceManager,
/// This vector stores [`File`] objects that we use for logging which we want to flush when the /// This vector stores [`File`] objects that we use for logging which we want to flush when the
/// node object is dropped. We do not store them in a structured fashion at the moment (in /// node object is dropped. We do not store them in a structured fashion at the moment (in
/// separate fields) as the logic that we need to apply to them is all the same regardless of /// separate fields) as the logic that we need to apply to them is all the same regardless of
@@ -62,7 +71,7 @@ pub struct Instance {
logs_file_to_flush: Vec<File>, logs_file_to_flush: Vec<File>,
} }
impl Instance { impl GethNode {
const BASE_DIRECTORY: &str = "geth"; const BASE_DIRECTORY: &str = "geth";
const DATA_DIRECTORY: &str = "data"; const DATA_DIRECTORY: &str = "data";
const LOGS_DIRECTORY: &str = "logs"; const LOGS_DIRECTORY: &str = "logs";
@@ -76,16 +85,38 @@ impl Instance {
const GETH_STDOUT_LOG_FILE_NAME: &str = "node_stdout.log"; const GETH_STDOUT_LOG_FILE_NAME: &str = "node_stdout.log";
const GETH_STDERR_LOG_FILE_NAME: &str = "node_stderr.log"; const GETH_STDERR_LOG_FILE_NAME: &str = "node_stderr.log";
const TRANSACTION_INDEXING_ERROR: &str = "transaction indexing is in progress";
const TRANSACTION_TRACING_ERROR: &str = "historical state not available in path scheme yet";
const RECEIPT_POLLING_DURATION: Duration = Duration::from_secs(5 * 60);
const TRACE_POLLING_DURATION: Duration = Duration::from_secs(60);
/// Create the node directory and call `geth init` to configure the genesis. /// Create the node directory and call `geth init` to configure the genesis.
#[tracing::instrument(skip_all, fields(geth_node_id = self.id))] #[tracing::instrument(level = "info", skip_all, fields(geth_node_id = self.id))]
fn init(&mut self, genesis: String) -> anyhow::Result<&mut Self> { fn init(&mut self, genesis: String) -> anyhow::Result<&mut Self> {
let _ = clear_directory(&self.base_directory);
let _ = clear_directory(&self.logs_directory);
create_dir_all(&self.base_directory)?; create_dir_all(&self.base_directory)?;
create_dir_all(&self.logs_directory)?; create_dir_all(&self.logs_directory)?;
let mut genesis = serde_json::from_str::<Genesis>(&genesis)?;
for signer_address in
<EthereumWallet as NetworkWallet<Ethereum>>::signer_addresses(&self.wallet)
{
// Note: using the entry API here means that we only insert balances for accounts
// that are not already present in the `alloc` field of the genesis state.
genesis
.alloc
.entry(signer_address)
.or_insert(GenesisAccount::default().with_balance(U256::from(INITIAL_BALANCE)));
}
let genesis_path = self.base_directory.join(Self::GENESIS_JSON_FILE); let genesis_path = self.base_directory.join(Self::GENESIS_JSON_FILE);
File::create(&genesis_path)?.write_all(genesis.as_bytes())?; serde_json::to_writer(File::create(&genesis_path)?, &genesis)?;
let mut child = Command::new(&self.geth) let mut child = Command::new(&self.geth)
.arg("--state.scheme")
.arg("hash")
.arg("init") .arg("init")
.arg("--datadir") .arg("--datadir")
.arg(&self.data_directory) .arg(&self.data_directory)
@@ -111,7 +142,7 @@ impl Instance {
/// Spawn the go-ethereum node child process. /// Spawn the go-ethereum node child process.
/// ///
/// [Instance::init] must be called prior. /// [Instance::init] must be called prior.
#[tracing::instrument(skip_all, fields(geth_node_id = self.id))] #[tracing::instrument(level = "info", skip_all, fields(geth_node_id = self.id))]
fn spawn_process(&mut self) -> anyhow::Result<&mut Self> { fn spawn_process(&mut self) -> anyhow::Result<&mut Self> {
// This is the `OpenOptions` that we wish to use for all of the log files that we will be // This is the `OpenOptions` that we wish to use for all of the log files that we will be
// opening in this method. We need to construct it in this way to: // opening in this method. We need to construct it in this way to:
@@ -139,6 +170,16 @@ impl Instance {
.arg("--nodiscover") .arg("--nodiscover")
.arg("--maxpeers") .arg("--maxpeers")
.arg("0") .arg("0")
.arg("--txlookuplimit")
.arg("0")
.arg("--cache.blocklogs")
.arg("512")
.arg("--state.scheme")
.arg("hash")
.arg("--syncmode")
.arg("full")
.arg("--gcmode")
.arg("archive")
.stderr(stderr_logs_file.try_clone()?) .stderr(stderr_logs_file.try_clone()?)
.stdout(stdout_logs_file.try_clone()?) .stdout(stdout_logs_file.try_clone()?)
.spawn()? .spawn()?
@@ -159,7 +200,7 @@ impl Instance {
/// Wait for the go-ethereum node child process to become ready. /// Wait for the go-ethereum node child process to become ready.
/// ///
/// [Instance::spawn_process] must be called priorly. /// [Instance::spawn_process] must be called priorly.
#[tracing::instrument(skip_all, fields(geth_node_id = self.id))] #[tracing::instrument(level = "info", skip_all, fields(geth_node_id = self.id))]
fn wait_ready(&mut self) -> anyhow::Result<&mut Self> { fn wait_ready(&mut self) -> anyhow::Result<&mut Self> {
let start_time = Instant::now(); let start_time = Instant::now();
@@ -206,8 +247,19 @@ impl Instance {
> + 'static { > + 'static {
let connection_string = self.connection_string(); let connection_string = self.connection_string();
let wallet = self.wallet.clone(); let wallet = self.wallet.clone();
// Note: We would like all providers to make use of the same nonce manager so that we have
// monotonically increasing nonces that are cached. The cached nonce manager uses `Arc`s
// internally, so cloning it still references the same shared state.
let nonce_manager = self.nonce_manager.clone();
Box::pin(async move { Box::pin(async move {
ProviderBuilder::new() ProviderBuilder::new()
.disable_recommended_fillers()
.filler(FallbackGasFiller::new(500_000_000, 500_000_000, 1))
.filler(ChainIdFiller::default())
.filler(NonceFiller::new(nonce_manager))
.wallet(wallet) .wallet(wallet)
.connect(&connection_string) .connect(&connection_string)
.await .await
@@ -216,25 +268,18 @@ impl Instance {
} }
} }
impl EthereumNode for Instance { impl EthereumNode for GethNode {
#[tracing::instrument(skip_all, fields(geth_node_id = self.id))] #[tracing::instrument(level = "info", skip_all, fields(geth_node_id = self.id), err)]
fn execute_transaction( async fn execute_transaction(
&self, &self,
transaction: TransactionRequest, transaction: TransactionRequest,
) -> anyhow::Result<alloy::rpc::types::TransactionReceipt> { ) -> anyhow::Result<alloy::rpc::types::TransactionReceipt> {
let provider = self.provider(); let span = tracing::debug_span!("Submitting transaction", ?transaction);
BlockingExecutor::execute(async move {
let outer_span = tracing::debug_span!("Submitting transaction", ?transaction,);
let _outer_guard = outer_span.enter();
let provider = provider.await?;
let pending_transaction = provider.send_transaction(transaction).await?;
let transaction_hash = pending_transaction.tx_hash();
let span = tracing::info_span!("Awaiting transaction receipt", ?transaction_hash);
let _guard = span.enter(); let _guard = span.enter();
let provider = Arc::new(self.provider().await?);
let transaction_hash = *provider.send_transaction(transaction).await?.tx_hash();
// The following is a fix for the "transaction indexing is in progress" error that we // The following is a fix for the "transaction indexing is in progress" error that we
// used to get. You can find more information on this in the following GH issue in geth // used to get. You can find more information on this in the following GH issue in geth
// https://github.com/ethereum/go-ethereum/issues/28877. To summarize what's going on, // https://github.com/ethereum/go-ethereum/issues/28877. To summarize what's going on,
@@ -247,88 +292,82 @@ impl EthereumNode for Instance {
// it eventually works, but we only do that if the error we get back is the "transaction // it eventually works, but we only do that if the error we get back is the "transaction
// indexing is in progress" error or if the receipt is None. // indexing is in progress" error or if the receipt is None.
// //
// At the moment we do not allow for the 60 seconds to be modified and we take it as // Getting the transaction indexed and taking a receipt can take a long time especially
// being an implementation detail that's invisible to anything outside of this module. // when a lot of transactions are being submitted to the node. Thus, while initially we
// // only allowed for 60 seconds of waiting with a 1 second delay in polling, we need to
// We allow a total of 60 retries for getting the receipt with one second between each // allow for a larger wait time. Therefore, in here we allow for 5 minutes of waiting
// retry and the next which means that we allow for a total of 60 seconds of waiting // with exponential backoff each time we attempt to get the receipt and find that it's
// before we consider that we're unable to get the transaction receipt. // not available.
let mut retries = 0; poll(
loop { Self::RECEIPT_POLLING_DURATION,
match provider.get_transaction_receipt(*transaction_hash).await { Default::default(),
Ok(Some(receipt)) => { move || {
tracing::info!("Obtained the transaction receipt"); let provider = provider.clone();
break Ok(receipt); async move {
} match provider.get_transaction_receipt(transaction_hash).await {
Ok(None) => { Ok(Some(receipt)) => Ok(ControlFlow::Break(receipt)),
if retries == 60 { Ok(None) => Ok(ControlFlow::Continue(())),
tracing::error!(
"Polled for transaction receipt for 60 seconds but failed to get it"
);
break Err(anyhow::anyhow!("Failed to get the transaction receipt"));
} else {
tracing::trace!(
retries,
"Sleeping for 1 second and trying to get the receipt again"
);
retries += 1;
tokio::time::sleep(std::time::Duration::from_secs(1)).await;
continue;
}
}
Err(error) => { Err(error) => {
let error_string = error.to_string(); let error_string = error.to_string();
if error_string.contains("transaction indexing is in progress") { match error_string.contains(Self::TRANSACTION_INDEXING_ERROR) {
if retries == 60 { true => Ok(ControlFlow::Continue(())),
tracing::error!( false => Err(error.into()),
"Polled for transaction receipt for 60 seconds but failed to get it"
);
break Err(error.into());
} else {
tracing::trace!(
retries,
"Sleeping for 1 second and trying to get the receipt again"
);
retries += 1;
tokio::time::sleep(std::time::Duration::from_secs(1)).await;
continue;
}
} else {
break Err(error.into());
} }
} }
} }
} }
})? },
)
.instrument(tracing::info_span!(
"Awaiting transaction receipt",
?transaction_hash
))
.await
} }
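poll here comes from revive_dt_common::futures, which this diff does not show. From its call sites, its shape is roughly: a total timeout, a backoff configuration (the Default::default() above), and a closure whose future yields Result<ControlFlow<T>>, where Continue means retry. A hypothetical reconstruction; the real signature may differ:

use std::future::Future;
use std::ops::ControlFlow;
use std::time::{Duration, Instant};

// Hypothetical reconstruction of `revive_dt_common::futures::poll`, inferred
// from its call sites in this diff; the real backoff configuration is likely a
// struct rather than the plain initial delay used here.
async fn poll<T, F, Fut>(timeout: Duration, mut delay: Duration, mut f: F) -> anyhow::Result<T>
where
    F: FnMut() -> Fut,
    Fut: Future<Output = anyhow::Result<ControlFlow<T>>>,
{
    let start = Instant::now();
    loop {
        match f().await? {
            ControlFlow::Break(value) => return Ok(value),
            ControlFlow::Continue(()) => {
                if start.elapsed() > timeout {
                    anyhow::bail!("timed out after {timeout:?}");
                }
                tokio::time::sleep(delay).await;
                // Exponential backoff, capped so waits stay bounded.
                delay = (delay * 2).min(Duration::from_secs(8));
            }
        }
    }
}

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let mut tries = 0;
    let receipt = poll(Duration::from_secs(5), Duration::from_millis(10), || {
        tries += 1;
        let done = tries >= 3;
        async move { Ok(if done { ControlFlow::Break(42) } else { ControlFlow::Continue(()) }) }
    })
    .await?;
    assert_eq!(receipt, 42);
    Ok(())
}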
#[tracing::instrument(skip_all, fields(geth_node_id = self.id))] #[tracing::instrument(level = "info", skip_all, fields(geth_node_id = self.id), err)]
fn trace_transaction( async fn trace_transaction(
&self, &self,
transaction: TransactionReceipt, transaction: &TransactionReceipt,
trace_options: GethDebugTracingOptions,
) -> anyhow::Result<alloy::rpc::types::trace::geth::GethTrace> { ) -> anyhow::Result<alloy::rpc::types::trace::geth::GethTrace> {
let provider = Arc::new(self.provider().await?);
poll(
Self::TRACE_POLLING_DURATION,
Default::default(),
move || {
let provider = provider.clone();
let trace_options = trace_options.clone();
async move {
match provider
.debug_trace_transaction(transaction.transaction_hash, trace_options)
.await
{
Ok(trace) => Ok(ControlFlow::Break(trace)),
Err(error) => {
let error_string = error.to_string();
match error_string.contains(Self::TRANSACTION_TRACING_ERROR) {
true => Ok(ControlFlow::Continue(())),
false => Err(error.into()),
}
}
}
}
},
)
.await
}
#[tracing::instrument(level = "info", skip_all, fields(geth_node_id = self.id), err)]
async fn state_diff(&self, transaction: &TransactionReceipt) -> anyhow::Result<DiffMode> {
let trace_options = GethDebugTracingOptions::prestate_tracer(PreStateConfig { let trace_options = GethDebugTracingOptions::prestate_tracer(PreStateConfig {
diff_mode: Some(true), diff_mode: Some(true),
disable_code: None, disable_code: None,
disable_storage: None, disable_storage: None,
}); });
let provider = self.provider();
BlockingExecutor::execute(async move {
Ok(provider
.await?
.debug_trace_transaction(transaction.transaction_hash, trace_options)
.await?)
})?
}
#[tracing::instrument(skip_all, fields(geth_node_id = self.id))]
fn state_diff(
&self,
transaction: alloy::rpc::types::TransactionReceipt,
) -> anyhow::Result<DiffMode> {
match self match self
.trace_transaction(transaction)? .trace_transaction(transaction, trace_options)
.await?
.try_into_pre_state_frame()? .try_into_pre_state_frame()?
{ {
PreStateFrame::Diff(diff) => Ok(diff), PreStateFrame::Diff(diff) => Ok(diff),
@@ -336,112 +375,140 @@ impl EthereumNode for Instance {
} }
} }
-    #[tracing::instrument(skip_all, fields(geth_node_id = self.id))]
-    fn fetch_add_nonce(&self, address: Address) -> anyhow::Result<u64> {
-        let provider = self.provider();
-        let onchain_nonce = BlockingExecutor::execute::<anyhow::Result<_>>(async move {
-            provider
-                .await?
-                .get_transaction_count(address)
-                .await
-                .map_err(Into::into)
-        })??;
-        let mut nonces = self.nonces.lock().unwrap();
-        let current = nonces.entry(address).or_insert(onchain_nonce);
-        let value = *current;
-        *current += 1;
-        Ok(value)
+    #[tracing::instrument(level = "info", skip_all, fields(geth_node_id = self.id), err)]
+    async fn balance_of(&self, address: Address) -> anyhow::Result<U256> {
+        self.provider()
+            .await?
+            .get_balance(address)
+            .await
+            .map_err(Into::into)
    }

-    #[tracing::instrument(skip_all, fields(geth_node_id = self.id))]
-    fn chain_id(&self) -> anyhow::Result<alloy::primitives::ChainId> {
-        let provider = self.provider();
-        BlockingExecutor::execute(async move {
-            provider.await?.get_chain_id().await.map_err(Into::into)
-        })?
+    #[tracing::instrument(level = "info", skip_all, fields(geth_node_id = self.id), err)]
+    async fn latest_state_proof(
+        &self,
+        address: Address,
+        keys: Vec<StorageKey>,
+    ) -> anyhow::Result<EIP1186AccountProofResponse> {
+        self.provider()
+            .await?
+            .get_proof(address, keys)
+            .latest()
+            .await
+            .map_err(Into::into)
+    }
+}
+
+impl ResolverApi for GethNode {
+    #[tracing::instrument(level = "info", skip_all, fields(geth_node_id = self.id), err)]
+    async fn chain_id(&self) -> anyhow::Result<alloy::primitives::ChainId> {
+        self.provider()
+            .await?
+            .get_chain_id()
+            .await
+            .map_err(Into::into)
    }

-    #[tracing::instrument(skip_all, fields(geth_node_id = self.id))]
-    fn block_gas_limit(&self, number: BlockNumberOrTag) -> anyhow::Result<u128> {
-        let provider = self.provider();
-        BlockingExecutor::execute(async move {
-            provider
+    #[tracing::instrument(level = "info", skip_all, fields(geth_node_id = self.id), err)]
+    async fn transaction_gas_price(&self, tx_hash: &TxHash) -> anyhow::Result<u128> {
+        self.provider()
+            .await?
+            .get_transaction_receipt(*tx_hash)
+            .await?
+            .context("Failed to get the transaction receipt")
+            .map(|receipt| receipt.effective_gas_price)
+    }
+
+    #[tracing::instrument(level = "info", skip_all, fields(geth_node_id = self.id), err)]
+    async fn block_gas_limit(&self, number: BlockNumberOrTag) -> anyhow::Result<u128> {
+        self.provider()
            .await?
            .get_block_by_number(number)
            .await?
            .ok_or(anyhow::Error::msg("Blockchain has no blocks"))
            .map(|block| block.header.gas_limit as _)
-        })?
    }

-    #[tracing::instrument(skip_all, fields(geth_node_id = self.id))]
-    fn block_coinbase(&self, number: BlockNumberOrTag) -> anyhow::Result<Address> {
-        let provider = self.provider();
-        BlockingExecutor::execute(async move {
-            provider
+    #[tracing::instrument(level = "info", skip_all, fields(geth_node_id = self.id), err)]
+    async fn block_coinbase(&self, number: BlockNumberOrTag) -> anyhow::Result<Address> {
+        self.provider()
            .await?
            .get_block_by_number(number)
            .await?
            .ok_or(anyhow::Error::msg("Blockchain has no blocks"))
            .map(|block| block.header.beneficiary)
-        })?
    }

-    #[tracing::instrument(skip_all, fields(geth_node_id = self.id))]
-    fn block_difficulty(&self, number: BlockNumberOrTag) -> anyhow::Result<U256> {
-        let provider = self.provider();
-        BlockingExecutor::execute(async move {
-            provider
+    #[tracing::instrument(level = "info", skip_all, fields(geth_node_id = self.id), err)]
+    async fn block_difficulty(&self, number: BlockNumberOrTag) -> anyhow::Result<U256> {
+        self.provider()
            .await?
            .get_block_by_number(number)
            .await?
            .ok_or(anyhow::Error::msg("Blockchain has no blocks"))
-            .map(|block| block.header.difficulty)
-        })?
+            .map(|block| U256::from_be_bytes(block.header.mix_hash.0))
    }
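The `block_difficulty` change is semantic, not cosmetic: on post-merge networks the header's `difficulty` field is always zero (EIP-4399) and the value the EVM exposes as `block.difficulty`/`prevrandao` travels in the `mixHash` header field, so the resolver reinterprets those 32 bytes as a big-endian integer:

use alloy::primitives::{B256, U256};

// Post-merge, header.difficulty is 0 and header.mix_hash carries prevrandao;
// reading it big-endian reproduces what contracts observe in block.difficulty.
fn prevrandao_as_difficulty(mix_hash: B256) -> U256 {
    U256::from_be_bytes(mix_hash.0)
}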
-    #[tracing::instrument(skip_all, fields(geth_node_id = self.id))]
-    fn block_hash(&self, number: BlockNumberOrTag) -> anyhow::Result<BlockHash> {
-        let provider = self.provider();
-        BlockingExecutor::execute(async move {
-            provider
+    #[tracing::instrument(level = "info", skip_all, fields(geth_node_id = self.id), err)]
+    async fn block_base_fee(&self, number: BlockNumberOrTag) -> anyhow::Result<u64> {
+        self.provider()
+            .await?
+            .get_block_by_number(number)
+            .await?
+            .ok_or(anyhow::Error::msg("Blockchain has no blocks"))
+            .and_then(|block| {
+                block
+                    .header
+                    .base_fee_per_gas
+                    .context("Failed to get the base fee per gas")
+            })
+    }
+
+    #[tracing::instrument(level = "info", skip_all, fields(geth_node_id = self.id), err)]
+    async fn block_hash(&self, number: BlockNumberOrTag) -> anyhow::Result<BlockHash> {
+        self.provider()
            .await?
            .get_block_by_number(number)
            .await?
            .ok_or(anyhow::Error::msg("Blockchain has no blocks"))
            .map(|block| block.header.hash)
-        })?
    }

-    #[tracing::instrument(skip_all, fields(geth_node_id = self.id))]
-    fn block_timestamp(&self, number: BlockNumberOrTag) -> anyhow::Result<BlockTimestamp> {
-        let provider = self.provider();
-        BlockingExecutor::execute(async move {
-            provider
+    #[tracing::instrument(level = "info", skip_all, fields(geth_node_id = self.id), err)]
+    async fn block_timestamp(&self, number: BlockNumberOrTag) -> anyhow::Result<BlockTimestamp> {
+        self.provider()
            .await?
            .get_block_by_number(number)
            .await?
            .ok_or(anyhow::Error::msg("Blockchain has no blocks"))
            .map(|block| block.header.timestamp)
-        })?
    }

-    #[tracing::instrument(skip_all, fields(geth_node_id = self.id))]
-    fn last_block_number(&self) -> anyhow::Result<BlockNumber> {
-        let provider = self.provider();
-        BlockingExecutor::execute(async move {
-            provider.await?.get_block_number().await.map_err(Into::into)
-        })?
+    #[tracing::instrument(level = "info", skip_all, fields(geth_node_id = self.id), err)]
+    async fn last_block_number(&self) -> anyhow::Result<BlockNumber> {
+        self.provider()
+            .await?
+            .get_block_number()
+            .await
+            .map_err(Into::into)
    }
}

-impl Node for Instance {
+impl Node for GethNode {
    fn new(config: &Arguments) -> Self {
        let geth_directory = config.directory().join(Self::BASE_DIRECTORY);
        let id = NODE_COUNT.fetch_add(1, Ordering::SeqCst);
        let base_directory = geth_directory.join(id.to_string());

+        let mut wallet = config.wallet();
+        for signer in (1..=config.private_keys_to_add)
+            .map(|id| U256::from(id))
+            .map(|id| id.to_be_bytes::<32>())
+            .map(|id| PrivateKeySigner::from_bytes(&FixedBytes(id)).unwrap())
+        {
+            wallet.register_signer(signer);
+        }
+
        Self {
            connection_string: base_directory.join(Self::IPC_FILE).display().to_string(),
            data_directory: base_directory.join(Self::DATA_DIRECTORY),
@@ -452,20 +519,20 @@ impl Node for Instance {
            handle: None,
            network_id: config.network_id,
            start_timeout: config.geth_start_timeout,
-            wallet: config.wallet(),
-            nonces: Mutex::new(HashMap::new()),
+            wallet,
            // We know that we only need to be storing 2 files so we can specify that when creating
            // the vector. It's the stdout and stderr of the geth node.
            logs_file_to_flush: Vec::with_capacity(2),
+            nonce_manager: Default::default(),
        }
    }
-    #[tracing::instrument(skip_all, fields(geth_node_id = self.id))]
+    #[tracing::instrument(level = "info", skip_all, fields(geth_node_id = self.id))]
    fn connection_string(&self) -> String {
        self.connection_string.clone()
    }

-    #[tracing::instrument(skip_all, fields(geth_node_id = self.id))]
+    #[tracing::instrument(level = "info", skip_all, fields(geth_node_id = self.id), err)]
    fn shutdown(&mut self) -> anyhow::Result<()> {
        // Terminate the processes in a graceful manner to allow for the output to be flushed.
        if let Some(mut child) = self.handle.take() {
@@ -487,13 +554,13 @@ impl Node for Instance {
        Ok(())
    }

-    #[tracing::instrument(skip_all, fields(geth_node_id = self.id))]
+    #[tracing::instrument(level = "info", skip_all, fields(geth_node_id = self.id), err)]
    fn spawn(&mut self, genesis: String) -> anyhow::Result<()> {
        self.init(genesis)?.spawn_process()?;
        Ok(())
    }

-    #[tracing::instrument(skip_all, fields(geth_node_id = self.id))]
+    #[tracing::instrument(level = "info", skip_all, fields(geth_node_id = self.id), err)]
    fn version(&self) -> anyhow::Result<String> {
        let output = Command::new(&self.geth)
            .arg("--version")
@@ -505,10 +572,22 @@ impl Node for Instance {
            .stdout;
        Ok(String::from_utf8_lossy(&output).into())
    }
+
+    #[tracing::instrument(level = "info", skip_all, fields(geth_node_id = self.id))]
+    fn matches_target(&self, targets: Option<&[String]>) -> bool {
+        match targets {
+            None => true,
+            Some(targets) => targets.iter().any(|str| str.as_str() == "evm"),
+        }
+    }
+
+    fn evm_version() -> EVMVersion {
+        EVMVersion::Cancun
+    }
}

-impl Drop for Instance {
-    #[tracing::instrument(skip_all, fields(geth_node_id = self.id))]
+impl Drop for GethNode {
+    #[tracing::instrument(level = "info", skip_all, fields(geth_node_id = self.id))]
    fn drop(&mut self) {
        self.shutdown().expect("Failed to shutdown")
    }
@@ -517,6 +596,7 @@ impl Drop for Instance {
#[cfg(test)]
mod tests {
    use revive_dt_config::Arguments;
    use temp_dir::TempDir;

    use crate::{GENESIS_JSON, Node};
@@ -531,9 +611,9 @@ mod tests {
        (config, temp_dir)
    }

-    fn new_node() -> (Instance, TempDir) {
+    fn new_node() -> (GethNode, TempDir) {
        let (args, temp_dir) = test_config();
-        let mut node = Instance::new(&args);
+        let mut node = GethNode::new(&args);
        node.init(GENESIS_JSON.to_owned())
            .expect("Failed to initialize the node")
            .spawn_process()
@@ -543,110 +623,110 @@ mod tests {
    #[test]
    fn init_works() {
-        Instance::new(&test_config().0)
+        GethNode::new(&test_config().0)
            .init(GENESIS_JSON.to_string())
            .unwrap();
    }

    #[test]
    fn spawn_works() {
-        Instance::new(&test_config().0)
+        GethNode::new(&test_config().0)
            .spawn(GENESIS_JSON.to_string())
            .unwrap();
    }

    #[test]
    fn version_works() {
-        let version = Instance::new(&test_config().0).version().unwrap();
+        let version = GethNode::new(&test_config().0).version().unwrap();
        assert!(
            version.starts_with("geth version"),
            "expected version string, got: '{version}'"
        );
    }

-    #[test]
-    fn can_get_chain_id_from_node() {
+    #[tokio::test]
+    async fn can_get_chain_id_from_node() {
        // Arrange
        let (node, _temp_dir) = new_node();

        // Act
-        let chain_id = node.chain_id();
+        let chain_id = node.chain_id().await;

        // Assert
        let chain_id = chain_id.expect("Failed to get the chain id");
        assert_eq!(chain_id, 420_420_420);
    }

-    #[test]
-    fn can_get_gas_limit_from_node() {
+    #[tokio::test]
+    async fn can_get_gas_limit_from_node() {
        // Arrange
        let (node, _temp_dir) = new_node();

        // Act
-        let gas_limit = node.block_gas_limit(BlockNumberOrTag::Latest);
+        let gas_limit = node.block_gas_limit(BlockNumberOrTag::Latest).await;

        // Assert
        let gas_limit = gas_limit.expect("Failed to get the gas limit");
        assert_eq!(gas_limit, u32::MAX as u128)
    }

-    #[test]
-    fn can_get_coinbase_from_node() {
+    #[tokio::test]
+    async fn can_get_coinbase_from_node() {
        // Arrange
        let (node, _temp_dir) = new_node();

        // Act
-        let coinbase = node.block_coinbase(BlockNumberOrTag::Latest);
+        let coinbase = node.block_coinbase(BlockNumberOrTag::Latest).await;

        // Assert
        let coinbase = coinbase.expect("Failed to get the coinbase");
        assert_eq!(coinbase, Address::new([0xFF; 20]))
    }

-    #[test]
-    fn can_get_block_difficulty_from_node() {
+    #[tokio::test]
+    async fn can_get_block_difficulty_from_node() {
        // Arrange
        let (node, _temp_dir) = new_node();

        // Act
-        let block_difficulty = node.block_difficulty(BlockNumberOrTag::Latest);
+        let block_difficulty = node.block_difficulty(BlockNumberOrTag::Latest).await;

        // Assert
        let block_difficulty = block_difficulty.expect("Failed to get the block difficulty");
        assert_eq!(block_difficulty, U256::ZERO)
    }

-    #[test]
-    fn can_get_block_hash_from_node() {
+    #[tokio::test]
+    async fn can_get_block_hash_from_node() {
        // Arrange
        let (node, _temp_dir) = new_node();

        // Act
-        let block_hash = node.block_hash(BlockNumberOrTag::Latest);
+        let block_hash = node.block_hash(BlockNumberOrTag::Latest).await;

        // Assert
        let _ = block_hash.expect("Failed to get the block hash");
    }

-    #[test]
-    fn can_get_block_timestamp_from_node() {
+    #[tokio::test]
+    async fn can_get_block_timestamp_from_node() {
        // Arrange
        let (node, _temp_dir) = new_node();

        // Act
-        let block_timestamp = node.block_timestamp(BlockNumberOrTag::Latest);
+        let block_timestamp = node.block_timestamp(BlockNumberOrTag::Latest).await;

        // Assert
        let _ = block_timestamp.expect("Failed to get the block timestamp");
    }

-    #[test]
-    fn can_get_block_number_from_node() {
+    #[tokio::test]
+    async fn can_get_block_number_from_node() {
        // Arrange
        let (node, _temp_dir) = new_node();

        // Act
-        let block_number = node.last_block_number();
+        let block_number = node.last_block_number().await;

        // Assert
        let block_number = block_number.expect("Failed to get the block number");
+248 -188
@@ -1,36 +1,40 @@
use std::{
-    collections::HashMap,
    fs::{File, OpenOptions, create_dir_all, remove_dir_all},
    io::{BufRead, Write},
    path::{Path, PathBuf},
    process::{Child, Command, Stdio},
-    sync::{
-        Mutex,
-        atomic::{AtomicU32, Ordering},
-    },
+    sync::atomic::{AtomicU32, Ordering},
    time::Duration,
};

use alloy::{
    consensus::{BlockHeader, TxEnvelope},
    eips::BlockNumberOrTag,
-    hex,
+    genesis::{Genesis, GenesisAccount},
    network::{
-        Ethereum, EthereumWallet, Network, TransactionBuilder, TransactionBuilderError,
-        UnbuiltTransactionError,
+        Ethereum, EthereumWallet, Network, NetworkWallet, TransactionBuilder,
+        TransactionBuilderError, UnbuiltTransactionError,
    },
-    primitives::{Address, B64, B256, BlockHash, BlockNumber, BlockTimestamp, Bloom, Bytes, U256},
+    primitives::{
+        Address, B64, B256, BlockHash, BlockNumber, BlockTimestamp, Bloom, Bytes, FixedBytes,
+        StorageKey, TxHash, U256,
+    },
    providers::{
        Provider, ProviderBuilder,
        ext::DebugApi,
-        fillers::{FillProvider, TxFiller},
+        fillers::{CachedNonceManager, ChainIdFiller, FillProvider, NonceFiller, TxFiller},
    },
    rpc::types::{
-        TransactionReceipt,
+        EIP1186AccountProofResponse, TransactionReceipt,
        eth::{Block, Header, Transaction},
        trace::geth::{DiffMode, GethDebugTracingOptions, PreStateConfig, PreStateFrame},
    },
+    signers::local::PrivateKeySigner,
};
+use anyhow::Context;
+use revive_common::EVMVersion;
+use revive_dt_common::fs::clear_directory;
+use revive_dt_format::traits::ResolverApi;
use serde::{Deserialize, Serialize};
use serde_json::{Value as JsonValue, json};
use sp_core::crypto::Ss58Codec;
@@ -38,9 +42,9 @@ use sp_runtime::AccountId32;
use tracing::Level;

use revive_dt_config::Arguments;
-use revive_dt_node_interaction::{BlockingExecutor, EthereumNode};
+use revive_dt_node_interaction::EthereumNode;

-use crate::Node;
+use crate::{Node, common::FallbackGasFiller, constants::INITIAL_BALANCE};
static NODE_COUNT: AtomicU32 = AtomicU32::new(0);
@@ -55,7 +59,7 @@ pub struct KitchensinkNode {
    logs_directory: PathBuf,
    process_substrate: Option<Child>,
    process_proxy: Option<Child>,
-    nonces: Mutex<HashMap<Address, u64>>,
+    nonce_manager: CachedNonceManager,
    /// This vector stores [`File`] objects that we use for logging which we want to flush when the
    /// node object is dropped. We do not store them in a structured fashion at the moment (in
    /// separate fields) as the logic that we need to apply to them is all the same regardless of
@@ -85,6 +89,9 @@ impl KitchensinkNode {
    #[tracing::instrument(skip_all, fields(kitchensink_node_id = self.id))]
    fn init(&mut self, genesis: &str) -> anyhow::Result<&mut Self> {
+        let _ = clear_directory(&self.base_directory);
+        let _ = clear_directory(&self.logs_directory);
+
        create_dir_all(&self.base_directory)?;
        create_dir_all(&self.logs_directory)?;
@@ -127,7 +134,20 @@ impl KitchensinkNode {
                None
            })
            .collect();
-        let mut eth_balances = self.extract_balance_from_genesis_file(genesis)?;
+        let mut eth_balances = {
+            let mut genesis = serde_json::from_str::<Genesis>(genesis)?;
+            for signer_address in
+                <EthereumWallet as NetworkWallet<Ethereum>>::signer_addresses(&self.wallet)
+            {
+                // Note, the use of the entry API here means that we only modify the entries for any
+                // account that is not in the `alloc` field of the genesis state.
+                genesis
+                    .alloc
+                    .entry(signer_address)
+                    .or_insert(GenesisAccount::default().with_balance(U256::from(INITIAL_BALANCE)));
+            }
+            self.extract_balance_from_genesis_file(&genesis)?
+        };
        merged_balances.append(&mut eth_balances);

        chainspec_json["genesis"]["runtimeGenesis"]["patch"]["balances"]["balances"] =
@@ -140,7 +160,7 @@ impl KitchensinkNode {
        Ok(self)
    }
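The genesis block above only tops up signer accounts that the genesis file does not already fund, thanks to the `BTreeMap` entry API. A reduced sketch of the pattern; `INITIAL_BALANCE` stands in for the crate's constant, and the value below is made up for illustration:

use alloy::genesis::{Genesis, GenesisAccount};
use alloy::primitives::{Address, U256};

// Hypothetical stand-in for the crate's constant.
const INITIAL_BALANCE: u128 = 1_000_000_000_000_000_000;

// Accounts already present in `alloc` keep their configured balance;
// only absent signers receive the default allocation.
fn fund_if_absent(genesis: &mut Genesis, signer: Address) {
    genesis
        .alloc
        .entry(signer)
        .or_insert(GenesisAccount::default().with_balance(U256::from(INITIAL_BALANCE)));
}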
-    #[tracing::instrument(skip_all, fields(kitchensink_node_id = self.id))]
+    #[tracing::instrument(skip_all, fields(kitchensink_node_id = self.id), err)]
    fn spawn_process(&mut self) -> anyhow::Result<()> {
        let substrate_rpc_port = Self::BASE_SUBSTRATE_RPC_PORT + self.id as u16;
        let proxy_rpc_port = Self::BASE_PROXY_RPC_PORT + self.id as u16;
@@ -192,7 +212,7 @@ impl KitchensinkNode {
        if let Err(error) = Self::wait_ready(
            self.kitchensink_stderr_log_file_path().as_path(),
            Self::SUBSTRATE_READY_MARKER,
-            Duration::from_secs(30),
+            Duration::from_secs(60),
        ) {
            tracing::error!(
                ?error,
@@ -221,7 +241,7 @@ impl KitchensinkNode {
        if let Err(error) = Self::wait_ready(
            self.proxy_stderr_log_file_path().as_path(),
            Self::ETH_PROXY_READY_MARKER,
-            Duration::from_secs(30),
+            Duration::from_secs(60),
        ) {
            tracing::error!(?error, "Failed to start proxy, shutting down gracefully");
            self.shutdown()?;
@@ -238,45 +258,30 @@ impl KitchensinkNode {
        Ok(())
    }

-    #[tracing::instrument(skip_all, fields(kitchensink_node_id = self.id))]
+    #[tracing::instrument(skip_all, fields(kitchensink_node_id = self.id), err)]
    fn extract_balance_from_genesis_file(
        &self,
-        genesis_str: &str,
+        genesis: &Genesis,
    ) -> anyhow::Result<Vec<(String, u128)>> {
-        let genesis_json: JsonValue = serde_json::from_str(genesis_str)?;
-        let alloc = genesis_json
-            .get("alloc")
-            .and_then(|a| a.as_object())
-            .ok_or_else(|| anyhow::anyhow!("Missing 'alloc' in genesis"))?;
-
-        let mut balances = Vec::new();
-        for (eth_addr, obj) in alloc.iter() {
-            let balance_str = obj.get("balance").and_then(|b| b.as_str()).unwrap_or("0");
-            let balance = if balance_str.starts_with("0x") {
-                u128::from_str_radix(balance_str.trim_start_matches("0x"), 16)?
-            } else {
-                balance_str.parse::<u128>()?
-            };
-            let substrate_addr = Self::eth_to_substrate_address(eth_addr)?;
-            balances.push((substrate_addr.clone(), balance));
-        }
-        Ok(balances)
+        genesis
+            .alloc
+            .iter()
+            .try_fold(Vec::new(), |mut vec, (address, acc)| {
+                let substrate_address = Self::eth_to_substrate_address(address);
+                let balance = acc.balance.try_into()?;
+                vec.push((substrate_address, balance));
+                Ok(vec)
+            })
    }

-    fn eth_to_substrate_address(eth_addr: &str) -> anyhow::Result<String> {
-        let eth_bytes = hex::decode(eth_addr.trim_start_matches("0x"))?;
-        if eth_bytes.len() != 20 {
-            anyhow::bail!(
-                "Invalid Ethereum address length: expected 20 bytes, got {}",
-                eth_bytes.len()
-            );
-        }
+    fn eth_to_substrate_address(address: &Address) -> String {
+        let eth_bytes = address.0.0;
        let mut padded = [0xEEu8; 32];
        padded[..20].copy_from_slice(&eth_bytes);
        let account_id = AccountId32::from(padded);
-        Ok(account_id.to_ss58check())
+        account_id.to_ss58check()
    }
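The rewritten `eth_to_substrate_address` is infallible because `Address` already guarantees 20 bytes; the conversion pads them with `0xEE` to 32 bytes before SS58-encoding, mirroring pallet-revive's AccountId32 mapping. A hypothetical usage sketch (the address is one of the well-known dev accounts from the tests below, and the call assumes crate-internal visibility):

use alloy::primitives::Address;

fn main() {
    let address: Address = "90F8bf6A479f320ead074411a4B0e7944Ea8c9C1".parse().unwrap();
    let ss58 = KitchensinkNode::eth_to_substrate_address(&address);
    // SS58 with the generic (42) network prefix yields addresses starting with '5'.
    assert!(ss58.starts_with('5'));
}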
    fn wait_ready(logs_file_path: &Path, marker: &str, timeout: Duration) -> anyhow::Result<()> {
@@ -302,7 +307,7 @@ impl KitchensinkNode {
        }
    }

-    #[tracing::instrument(skip_all, fields(kitchensink_node_id = self.id))]
+    #[tracing::instrument(skip_all, fields(kitchensink_node_id = self.id), err)]
    pub fn eth_rpc_version(&self) -> anyhow::Result<String> {
        let output = Command::new(&self.eth_proxy_binary)
            .arg("--version")
@@ -350,9 +355,24 @@ impl KitchensinkNode {
    > + 'static {
        let connection_string = self.connection_string();
        let wallet = self.wallet.clone();
+        // Note: We would like all providers to make use of the same nonce manager so that we have
+        // monotonically increasing nonces that are cached. The cached nonce manager uses Arc's in
+        // its implementation and therefore it means that when we clone it then it still references
+        // the same state.
+        let nonce_manager = self.nonce_manager.clone();
        Box::pin(async move {
            ProviderBuilder::new()
+                .disable_recommended_fillers()
                .network::<KitchenSinkNetwork>()
+                .filler(FallbackGasFiller::new(
+                    30_000_000,
+                    200_000_000_000,
+                    3_000_000_000,
+                ))
+                .filler(ChainIdFiller::default())
+                .filler(NonceFiller::new(nonce_manager))
                .wallet(wallet)
                .connect(&connection_string)
                .await
@@ -362,49 +382,47 @@ impl KitchensinkNode {
}
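Two things happen in the builder above: the recommended fillers are disabled so gas handling can go through the crate's `FallbackGasFiller` (the three numbers read as gas limit, max fee and priority fee, though that ordering is an assumption), and every provider clones the node's single `CachedNonceManager`. A sketch of why the clone is intentional:

use alloy::providers::fillers::CachedNonceManager;

fn main() {
    // CachedNonceManager is Arc-backed: clones share one nonce map, so every
    // provider built from the same node hands out a single monotonically
    // increasing nonce sequence per signer instead of racing on RPC lookups.
    let shared = CachedNonceManager::default();
    let for_provider_a = shared.clone();
    let for_provider_b = shared.clone();
    let _ = (for_provider_a, for_provider_b);
}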
impl EthereumNode for KitchensinkNode {
-    #[tracing::instrument(skip_all, fields(kitchensink_node_id = self.id))]
-    fn execute_transaction(
+    #[tracing::instrument(skip_all, fields(kitchensink_node_id = self.id), err)]
+    async fn execute_transaction(
        &self,
        transaction: alloy::rpc::types::TransactionRequest,
    ) -> anyhow::Result<TransactionReceipt> {
        tracing::debug!(?transaction, "Submitting transaction");
-        let provider = self.provider();
-        let receipt = BlockingExecutor::execute(async move {
-            Ok(provider
-                .await?
-                .send_transaction(transaction)
-                .await?
-                .get_receipt()
-                .await?)
-        })?;
+        let receipt = self
+            .provider()
+            .await?
+            .send_transaction(transaction)
+            .await?
+            .get_receipt()
+            .await?;
        tracing::info!(?receipt, "Submitted tx to kitchensink");
-        receipt
+        Ok(receipt)
    }

-    #[tracing::instrument(skip_all, fields(kitchensink_node_id = self.id))]
-    fn trace_transaction(
+    #[tracing::instrument(skip_all, fields(kitchensink_node_id = self.id), err)]
+    async fn trace_transaction(
        &self,
-        transaction: TransactionReceipt,
+        transaction: &TransactionReceipt,
+        trace_options: GethDebugTracingOptions,
    ) -> anyhow::Result<alloy::rpc::types::trace::geth::GethTrace> {
+        let tx_hash = transaction.transaction_hash;
+        Ok(self
+            .provider()
+            .await?
+            .debug_trace_transaction(tx_hash, trace_options)
+            .await?)
+    }
+
+    #[tracing::instrument(skip_all, fields(kitchensink_node_id = self.id), err)]
+    async fn state_diff(&self, transaction: &TransactionReceipt) -> anyhow::Result<DiffMode> {
        let trace_options = GethDebugTracingOptions::prestate_tracer(PreStateConfig {
            diff_mode: Some(true),
            disable_code: None,
            disable_storage: None,
        });
-        let provider = self.provider();
-        BlockingExecutor::execute(async move {
-            Ok(provider
-                .await?
-                .debug_trace_transaction(transaction.transaction_hash, trace_options)
-                .await?)
-        })?
-    }
-
-    #[tracing::instrument(skip_all, fields(kitchensink_node_id = self.id))]
-    fn state_diff(&self, transaction: TransactionReceipt) -> anyhow::Result<DiffMode> {
        match self
-            .trace_transaction(transaction)?
+            .trace_transaction(transaction, trace_options)
+            .await?
            .try_into_pre_state_frame()?
        {
            PreStateFrame::Diff(diff) => Ok(diff),
@@ -412,103 +430,122 @@ impl EthereumNode for KitchensinkNode {
        }
    }

-    #[tracing::instrument(skip_all, fields(kitchensink_node_id = self.id))]
-    fn fetch_add_nonce(&self, address: Address) -> anyhow::Result<u64> {
-        let provider = self.provider();
-        let onchain_nonce = BlockingExecutor::execute::<anyhow::Result<_>>(async move {
-            provider
-                .await?
-                .get_transaction_count(address)
-                .await
-                .map_err(Into::into)
-        })??;
-        let mut nonces = self.nonces.lock().unwrap();
-        let current = nonces.entry(address).or_insert(onchain_nonce);
-        let value = *current;
-        *current += 1;
-        Ok(value)
+    #[tracing::instrument(skip_all, fields(kitchensink_node_id = self.id), err)]
+    async fn balance_of(&self, address: Address) -> anyhow::Result<U256> {
+        self.provider()
+            .await?
+            .get_balance(address)
+            .await
+            .map_err(Into::into)
    }

-    #[tracing::instrument(skip_all, fields(geth_node_id = self.id))]
-    fn chain_id(&self) -> anyhow::Result<alloy::primitives::ChainId> {
-        let provider = self.provider();
-        BlockingExecutor::execute(async move {
-            provider.await?.get_chain_id().await.map_err(Into::into)
-        })?
+    #[tracing::instrument(skip_all, fields(kitchensink_node_id = self.id), err)]
+    async fn latest_state_proof(
+        &self,
+        address: Address,
+        keys: Vec<StorageKey>,
+    ) -> anyhow::Result<EIP1186AccountProofResponse> {
+        self.provider()
+            .await?
+            .get_proof(address, keys)
+            .latest()
+            .await
+            .map_err(Into::into)
+    }
+}
+
+impl ResolverApi for KitchensinkNode {
+    #[tracing::instrument(skip_all, fields(kitchensink_node_id = self.id), err)]
+    async fn chain_id(&self) -> anyhow::Result<alloy::primitives::ChainId> {
+        self.provider()
+            .await?
+            .get_chain_id()
+            .await
+            .map_err(Into::into)
    }

-    #[tracing::instrument(skip_all, fields(geth_node_id = self.id))]
-    fn block_gas_limit(&self, number: BlockNumberOrTag) -> anyhow::Result<u128> {
-        let provider = self.provider();
-        BlockingExecutor::execute(async move {
-            provider
+    #[tracing::instrument(skip_all, fields(kitchensink_node_id = self.id), err)]
+    async fn transaction_gas_price(&self, tx_hash: &TxHash) -> anyhow::Result<u128> {
+        self.provider()
+            .await?
+            .get_transaction_receipt(*tx_hash)
+            .await?
+            .context("Failed to get the transaction receipt")
+            .map(|receipt| receipt.effective_gas_price)
+    }
+
+    #[tracing::instrument(skip_all, fields(kitchensink_node_id = self.id), err)]
+    async fn block_gas_limit(&self, number: BlockNumberOrTag) -> anyhow::Result<u128> {
+        self.provider()
            .await?
            .get_block_by_number(number)
            .await?
            .ok_or(anyhow::Error::msg("Blockchain has no blocks"))
-            .map(|block| block.header.gas_limit)
-        })?
+            .map(|block| block.header.gas_limit as _)
    }

-    #[tracing::instrument(skip_all, fields(geth_node_id = self.id))]
-    fn block_coinbase(&self, number: BlockNumberOrTag) -> anyhow::Result<Address> {
-        let provider = self.provider();
-        BlockingExecutor::execute(async move {
-            provider
+    #[tracing::instrument(skip_all, fields(kitchensink_node_id = self.id), err)]
+    async fn block_coinbase(&self, number: BlockNumberOrTag) -> anyhow::Result<Address> {
+        self.provider()
            .await?
            .get_block_by_number(number)
            .await?
            .ok_or(anyhow::Error::msg("Blockchain has no blocks"))
            .map(|block| block.header.beneficiary)
-        })?
    }

-    #[tracing::instrument(skip_all, fields(geth_node_id = self.id))]
-    fn block_difficulty(&self, number: BlockNumberOrTag) -> anyhow::Result<U256> {
-        let provider = self.provider();
-        BlockingExecutor::execute(async move {
-            provider
+    #[tracing::instrument(skip_all, fields(kitchensink_node_id = self.id), err)]
+    async fn block_difficulty(&self, number: BlockNumberOrTag) -> anyhow::Result<U256> {
+        self.provider()
            .await?
            .get_block_by_number(number)
            .await?
            .ok_or(anyhow::Error::msg("Blockchain has no blocks"))
-            .map(|block| block.header.difficulty)
-        })?
+            .map(|block| U256::from_be_bytes(block.header.mix_hash.0))
    }

-    #[tracing::instrument(skip_all, fields(geth_node_id = self.id))]
-    fn block_hash(&self, number: BlockNumberOrTag) -> anyhow::Result<BlockHash> {
-        let provider = self.provider();
-        BlockingExecutor::execute(async move {
-            provider
+    #[tracing::instrument(skip_all, fields(kitchensink_node_id = self.id), err)]
+    async fn block_base_fee(&self, number: BlockNumberOrTag) -> anyhow::Result<u64> {
+        self.provider()
+            .await?
+            .get_block_by_number(number)
+            .await?
+            .ok_or(anyhow::Error::msg("Blockchain has no blocks"))
+            .and_then(|block| {
+                block
+                    .header
+                    .base_fee_per_gas
+                    .context("Failed to get the base fee per gas")
+            })
+    }
+
+    #[tracing::instrument(skip_all, fields(kitchensink_node_id = self.id), err)]
+    async fn block_hash(&self, number: BlockNumberOrTag) -> anyhow::Result<BlockHash> {
+        self.provider()
            .await?
            .get_block_by_number(number)
            .await?
            .ok_or(anyhow::Error::msg("Blockchain has no blocks"))
            .map(|block| block.header.hash)
-        })?
    }

-    #[tracing::instrument(skip_all, fields(geth_node_id = self.id))]
-    fn block_timestamp(&self, number: BlockNumberOrTag) -> anyhow::Result<BlockTimestamp> {
-        let provider = self.provider();
-        BlockingExecutor::execute(async move {
-            provider
+    #[tracing::instrument(skip_all, fields(kitchensink_node_id = self.id), err)]
+    async fn block_timestamp(&self, number: BlockNumberOrTag) -> anyhow::Result<BlockTimestamp> {
+        self.provider()
            .await?
            .get_block_by_number(number)
            .await?
            .ok_or(anyhow::Error::msg("Blockchain has no blocks"))
            .map(|block| block.header.timestamp)
-        })?
    }

-    #[tracing::instrument(skip_all, fields(geth_node_id = self.id))]
-    fn last_block_number(&self) -> anyhow::Result<BlockNumber> {
-        let provider = self.provider();
-        BlockingExecutor::execute(async move {
-            provider.await?.get_block_number().await.map_err(Into::into)
-        })?
+    #[tracing::instrument(skip_all, fields(kitchensink_node_id = self.id), err)]
+    async fn last_block_number(&self) -> anyhow::Result<BlockNumber> {
+        self.provider()
+            .await?
+            .get_block_number()
+            .await
+            .map_err(Into::into)
    }
}
@@ -519,17 +556,26 @@ impl Node for KitchensinkNode {
        let base_directory = kitchensink_directory.join(id.to_string());
        let logs_directory = base_directory.join(Self::LOGS_DIRECTORY);

+        let mut wallet = config.wallet();
+        for signer in (1..=config.private_keys_to_add)
+            .map(|id| U256::from(id))
+            .map(|id| id.to_be_bytes::<32>())
+            .map(|id| PrivateKeySigner::from_bytes(&FixedBytes(id)).unwrap())
+        {
+            wallet.register_signer(signer);
+        }
+
        Self {
            id,
            substrate_binary: config.kitchensink.clone(),
            eth_proxy_binary: config.eth_proxy.clone(),
            rpc_url: String::new(),
-            wallet: config.wallet(),
+            wallet,
            base_directory,
            logs_directory,
            process_substrate: None,
            process_proxy: None,
-            nonces: Mutex::new(HashMap::new()),
+            nonce_manager: Default::default(),
            // We know that we only need to be storing 4 files so we can specify that when creating
            // the vector. It's the stdout and stderr of the substrate-node and the eth-rpc.
            logs_file_to_flush: Vec::with_capacity(4),
@@ -541,7 +587,7 @@ impl Node for KitchensinkNode {
        self.rpc_url.clone()
    }

-    #[tracing::instrument(skip_all, fields(kitchensink_node_id = self.id))]
+    #[tracing::instrument(skip_all, fields(kitchensink_node_id = self.id), err)]
    fn shutdown(&mut self) -> anyhow::Result<()> {
        // Terminate the processes in a graceful manner to allow for the output to be flushed.
        if let Some(mut child) = self.process_proxy.take() {
@@ -568,12 +614,12 @@ impl Node for KitchensinkNode {
        Ok(())
    }

-    #[tracing::instrument(skip_all, fields(kitchensink_node_id = self.id))]
+    #[tracing::instrument(skip_all, fields(kitchensink_node_id = self.id), err)]
    fn spawn(&mut self, genesis: String) -> anyhow::Result<()> {
        self.init(&genesis)?.spawn_process()
    }

-    #[tracing::instrument(skip_all, fields(kitchensink_node_id = self.id))]
+    #[tracing::instrument(skip_all, fields(kitchensink_node_id = self.id), err)]
    fn version(&self) -> anyhow::Result<String> {
        let output = Command::new(&self.substrate_binary)
            .arg("--version")
@@ -585,6 +631,18 @@ impl Node for KitchensinkNode {
            .stdout;
        Ok(String::from_utf8_lossy(&output).into())
    }
+
+    #[tracing::instrument(skip_all, fields(kitchensink_node_id = self.id))]
+    fn matches_target(&self, targets: Option<&[String]>) -> bool {
+        match targets {
+            None => true,
+            Some(targets) => targets.iter().any(|str| str.as_str() == "pvm"),
+        }
+    }
+
+    fn evm_version() -> EVMVersion {
+        EVMVersion::Cancun
+    }
}
impl Drop for KitchensinkNode {
@@ -640,6 +698,12 @@ impl TransactionBuilder<KitchenSinkNetwork> for <Ethereum as Network>::Transacti
        )
    }

+    fn take_nonce(&mut self) -> Option<u64> {
+        <<Ethereum as Network>::TransactionRequest as TransactionBuilder<Ethereum>>::take_nonce(
+            self,
+        )
+    }
+
    fn input(&self) -> Option<&alloy::primitives::Bytes> {
        <<Ethereum as Network>::TransactionRequest as TransactionBuilder<Ethereum>>::input(self)
    }
@@ -1020,27 +1084,22 @@ mod tests {
    use alloy::rpc::types::TransactionRequest;
    use revive_dt_config::Arguments;
    use std::path::PathBuf;
-    use std::sync::LazyLock;
-    use temp_dir::TempDir;
+    use std::sync::{LazyLock, Mutex};

    use std::fs;

    use super::*;
    use crate::{GENESIS_JSON, Node};

-    fn test_config() -> (Arguments, TempDir) {
-        let mut config = Arguments::default();
-        let temp_dir = TempDir::new().unwrap();
-
-        config.working_directory = temp_dir.path().to_path_buf().into();
-
-        config.kitchensink = PathBuf::from("substrate-node");
-        config.eth_proxy = PathBuf::from("eth-rpc");
-
-        (config, temp_dir)
+    fn test_config() -> Arguments {
+        Arguments {
+            kitchensink: PathBuf::from("substrate-node"),
+            eth_proxy: PathBuf::from("eth-rpc"),
+            ..Default::default()
+        }
    }

-    fn new_node() -> (KitchensinkNode, Arguments, TempDir) {
+    fn new_node() -> (KitchensinkNode, Arguments) {
        // Note: When we run the tests in the CI we found that if they're all
        // run in parallel then the CI is unable to start all of the nodes in
        // time and their startup times out. Therefore, we want all of the
@@ -1059,20 +1118,20 @@ mod tests {
        static NODE_START_MUTEX: Mutex<()> = Mutex::new(());
        let _guard = NODE_START_MUTEX.lock().unwrap();

-        let (args, temp_dir) = test_config();
+        let args = test_config();
        let mut node = KitchensinkNode::new(&args);
        node.init(GENESIS_JSON)
            .expect("Failed to initialize the node")
            .spawn_process()
            .expect("Failed to spawn the node process");
-        (node, args, temp_dir)
+        (node, args)
    }

    /// A shared node that multiple tests can use. It starts up once.
    fn shared_node() -> &'static KitchensinkNode {
-        static NODE: LazyLock<(KitchensinkNode, TempDir)> = LazyLock::new(|| {
-            let (node, _, temp_dir) = new_node();
-            (node, temp_dir)
+        static NODE: LazyLock<(KitchensinkNode, Arguments)> = LazyLock::new(|| {
+            let (node, args) = new_node();
+            (node, args)
        });
        &NODE.0
    }
@@ -1080,7 +1139,7 @@ mod tests {
    #[tokio::test]
    async fn node_mines_simple_transfer_transaction_and_returns_receipt() {
        // Arrange
-        let (node, args, _temp_dir) = new_node();
+        let (node, args) = new_node();

        let provider = node.provider().await.expect("Failed to create provider");
@@ -1115,7 +1174,7 @@ mod tests {
        }
        "#;

-        let mut dummy_node = KitchensinkNode::new(&test_config().0);
+        let mut dummy_node = KitchensinkNode::new(&test_config());

        // Call `init()`
        dummy_node.init(genesis_content).expect("init failed");
@@ -1129,12 +1188,12 @@ mod tests {
        let contents = fs::read_to_string(&final_chainspec_path).expect("Failed to read chainspec");

        // Validate that the Substrate addresses derived from the Ethereum addresses are in the file
-        let first_eth_addr =
-            KitchensinkNode::eth_to_substrate_address("90F8bf6A479f320ead074411a4B0e7944Ea8c9C1")
-                .unwrap();
-        let second_eth_addr =
-            KitchensinkNode::eth_to_substrate_address("Ab8483F64d9C6d1EcF9b849Ae677dD3315835cb2")
-                .unwrap();
+        let first_eth_addr = KitchensinkNode::eth_to_substrate_address(
+            &"90F8bf6A479f320ead074411a4B0e7944Ea8c9C1".parse().unwrap(),
+        );
+        let second_eth_addr = KitchensinkNode::eth_to_substrate_address(
+            &"Ab8483F64d9C6d1EcF9b849Ae677dD3315835cb2".parse().unwrap(),
+        );

        assert!(
            contents.contains(&first_eth_addr),
@@ -1159,10 +1218,10 @@ mod tests {
        }
        "#;

-        let node = KitchensinkNode::new(&test_config().0);
+        let node = KitchensinkNode::new(&test_config());

        let result = node
-            .extract_balance_from_genesis_file(genesis_json)
+            .extract_balance_from_genesis_file(&serde_json::from_str(genesis_json).unwrap())
            .unwrap();

        let result_map: std::collections::HashMap<_, _> = result.into_iter().collect();
@@ -1192,7 +1251,7 @@ mod tests {
        ];

        for eth_addr in eth_addresses {
-            let ss58 = KitchensinkNode::eth_to_substrate_address(eth_addr).unwrap();
+            let ss58 = KitchensinkNode::eth_to_substrate_address(&eth_addr.parse().unwrap());
            println!("Ethereum: {eth_addr} -> Substrate SS58: {ss58}");
        }
@@ -1220,7 +1279,7 @@ mod tests {
        ];

        for (eth_addr, expected_ss58) in cases {
-            let result = KitchensinkNode::eth_to_substrate_address(eth_addr).unwrap();
+            let result = KitchensinkNode::eth_to_substrate_address(&eth_addr.parse().unwrap());
            assert_eq!(
                result, expected_ss58,
                "Mismatch for Ethereum address {eth_addr}"
@@ -1230,15 +1289,16 @@ mod tests {
    #[test]
    fn spawn_works() {
-        let (config, _temp_dir) = test_config();
+        let config = test_config();
        let mut node = KitchensinkNode::new(&config);
        node.spawn(GENESIS_JSON.to_string()).unwrap();
    }

    #[test]
    fn version_works() {
-        let (config, _temp_dir) = test_config();
+        let config = test_config();
        let node = KitchensinkNode::new(&config);
        let version = node.version().unwrap();
@@ -1251,7 +1311,7 @@ mod tests {
    #[test]
    fn eth_rpc_version_works() {
-        let (config, _temp_dir) = test_config();
+        let config = test_config();
        let node = KitchensinkNode::new(&config);
        let version = node.eth_rpc_version().unwrap();
@@ -1262,86 +1322,86 @@ mod tests {
        );
    }

-    #[test]
-    fn can_get_chain_id_from_node() {
+    #[tokio::test]
+    async fn can_get_chain_id_from_node() {
        // Arrange
        let node = shared_node();

        // Act
-        let chain_id = node.chain_id();
+        let chain_id = node.chain_id().await;

        // Assert
        let chain_id = chain_id.expect("Failed to get the chain id");
        assert_eq!(chain_id, 420_420_420);
    }

-    #[test]
-    fn can_get_gas_limit_from_node() {
+    #[tokio::test]
+    async fn can_get_gas_limit_from_node() {
        // Arrange
        let node = shared_node();

        // Act
-        let gas_limit = node.block_gas_limit(BlockNumberOrTag::Latest);
+        let gas_limit = node.block_gas_limit(BlockNumberOrTag::Latest).await;

        // Assert
        let _ = gas_limit.expect("Failed to get the gas limit");
    }

-    #[test]
-    fn can_get_coinbase_from_node() {
+    #[tokio::test]
+    async fn can_get_coinbase_from_node() {
        // Arrange
        let node = shared_node();

        // Act
-        let coinbase = node.block_coinbase(BlockNumberOrTag::Latest);
+        let coinbase = node.block_coinbase(BlockNumberOrTag::Latest).await;

        // Assert
        let _ = coinbase.expect("Failed to get the coinbase");
    }

-    #[test]
-    fn can_get_block_difficulty_from_node() {
+    #[tokio::test]
+    async fn can_get_block_difficulty_from_node() {
        // Arrange
        let node = shared_node();

        // Act
-        let block_difficulty = node.block_difficulty(BlockNumberOrTag::Latest);
+        let block_difficulty = node.block_difficulty(BlockNumberOrTag::Latest).await;

        // Assert
        let _ = block_difficulty.expect("Failed to get the block difficulty");
    }

-    #[test]
-    fn can_get_block_hash_from_node() {
+    #[tokio::test]
+    async fn can_get_block_hash_from_node() {
        // Arrange
        let node = shared_node();

        // Act
-        let block_hash = node.block_hash(BlockNumberOrTag::Latest);
+        let block_hash = node.block_hash(BlockNumberOrTag::Latest).await;

        // Assert
        let _ = block_hash.expect("Failed to get the block hash");
    }

-    #[test]
-    fn can_get_block_timestamp_from_node() {
+    #[tokio::test]
+    async fn can_get_block_timestamp_from_node() {
        // Arrange
        let node = shared_node();

        // Act
-        let block_timestamp = node.block_timestamp(BlockNumberOrTag::Latest);
+        let block_timestamp = node.block_timestamp(BlockNumberOrTag::Latest).await;

        // Assert
        let _ = block_timestamp.expect("Failed to get the block timestamp");
    }

-    #[test]
-    fn can_get_block_number_from_node() {
+    #[tokio::test]
+    async fn can_get_block_number_from_node() {
        // Arrange
        let node = shared_node();

        // Act
-        let block_number = node.last_block_number();
+        let block_number = node.last_block_number().await;

        // Assert
        let _ = block_number.expect("Failed to get the block number");
+10
@@ -1,8 +1,11 @@
//! This crate implements the testing nodes.

+use revive_common::EVMVersion;
use revive_dt_config::Arguments;
use revive_dt_node_interaction::EthereumNode;

+pub mod common;
+pub mod constants;
pub mod geth;
pub mod kitchensink;
pub mod pool;
@@ -30,4 +33,11 @@ pub trait Node: EthereumNode {
    /// Returns the node version.
    fn version(&self) -> anyhow::Result<String>;
+
+    /// Given a list of targets from the metadata file, this function determines if the metadata
+    /// file can be run on this node or not.
+    fn matches_target(&self, targets: Option<&[String]>) -> bool;
+
+    /// Returns the EVM version of the node.
+    fn evm_version() -> EVMVersion;
}
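Both node types implement the new trait hooks the same way: geth answers to the "evm" tag, kitchensink to "pvm", and an absent target list matches everything. A standalone sketch of that rule:

// The matching rule the trait methods encode: no target list in the metadata
// file means "run everywhere"; otherwise the node's own tag must appear.
fn matches_target(node_tag: &str, targets: Option<&[String]>) -> bool {
    match targets {
        None => true,
        Some(targets) => targets.iter().any(|t| t.as_str() == node_tag),
    }
}

fn main() {
    let targets = vec!["pvm".to_string()];
    assert!(matches_target("evm", None));
    assert!(matches_target("pvm", Some(&targets)));
    assert!(!matches_target("evm", Some(&targets)));
}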
+6 -7
@@ -1,12 +1,12 @@
//! This crate implements concurrent handling of testing nodes.

use std::{
-    fs::read_to_string,
    sync::atomic::{AtomicUsize, Ordering},
    thread,
};

use anyhow::Context;
+use revive_dt_common::fs::CachedFileSystem;
use revive_dt_config::Arguments;

use crate::Node;
@@ -23,12 +23,11 @@ where
    T: Node + Send + 'static,
{
    /// Create a new Pool. This will start as many nodes as there are workers in `config`.
-    pub fn new(config: &Arguments) -> anyhow::Result<Self> {
-        let nodes = config.workers;
+    pub async fn new(config: &Arguments) -> anyhow::Result<Self> {
+        let nodes = config.number_of_nodes;

-        let genesis = read_to_string(&config.genesis_file).context(format!(
-            "can not read genesis file: {}",
-            config.genesis_file.display()
-        ))?;
+        let genesis = CachedFileSystem::read_to_string(&config.genesis_file)
+            .await
+            .context("Failed to read genesis file")?;

        let mut handles = Vec::with_capacity(nodes);
        for _ in 0..nodes {
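`CachedFileSystem` is the cached fs abstraction from revive-dt-common; its API is not shown in these hunks, so the stand-in below only illustrates the behaviour the pool relies on, namely one disk read per path no matter how many nodes start:

use std::collections::HashMap;
use std::path::{Path, PathBuf};
use std::sync::LazyLock;

use tokio::sync::Mutex;

// Hypothetical stand-in for the real CachedFileSystem: memoize file contents
// by path so repeated reads (one per spawned node) hit the cache.
static FILES: LazyLock<Mutex<HashMap<PathBuf, String>>> = LazyLock::new(Default::default);

async fn read_to_string_cached(path: &Path) -> anyhow::Result<String> {
    let mut files = FILES.lock().await;
    if let Some(contents) = files.get(path) {
        return Ok(contents.clone());
    }
    let contents = tokio::fs::read_to_string(path).await?;
    files.insert(path.to_path_buf(), contents.clone());
    Ok(contents)
}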
+1 -1
@@ -10,9 +10,9 @@ rust-version.workspace = true
[dependencies]
revive-dt-config = { workspace = true }
revive-dt-format = { workspace = true }
+revive-dt-compiler = { workspace = true }
anyhow = { workspace = true }
tracing = { workspace = true }
serde = { workspace = true }
serde_json = { workspace = true }
-revive-solc-json-interface = { workspace = true }
+8 -21
@@ -1,5 +1,6 @@
//! The report analyzer enriches the raw report data.

+use revive_dt_compiler::CompilerOutput;
use serde::{Deserialize, Serialize};

use crate::reporter::CompilationTask;
@@ -13,41 +14,27 @@ pub struct CompilerStatistics {
    pub mean_code_size: usize,
    /// The mean size of the optimized YUL IR.
    pub mean_yul_size: usize,
-    /// Is a proxy because the YUL also containes a lot of comments.
+    /// Is a proxy because the YUL also contains a lot of comments.
    pub yul_to_bytecode_size_ratio: f32,
}

impl CompilerStatistics {
    /// Cumulatively update the statistics with the next compiler task.
    pub fn sample(&mut self, compilation_task: &CompilationTask) {
-        let Some(output) = &compilation_task.json_output else {
-            return;
-        };
-        let Some(contracts) = &output.contracts else {
+        let Some(CompilerOutput { contracts }) = &compilation_task.json_output else {
            return;
        };
        for (_solidity, contracts) in contracts.iter() {
-            for (_name, contract) in contracts.iter() {
-                let Some(evm) = &contract.evm else {
-                    continue;
-                };
-                let Some(deploy_code) = &evm.deployed_bytecode else {
-                    continue;
-                };
+            for (_name, (bytecode, _)) in contracts.iter() {
                // The EVM bytecode can be unlinked and thus is not necessarily a decodable hex
                // string; for our statistics this is a good enough approximation.
-                let bytecode_size = deploy_code.object.len() / 2;
-                let yul_size = contract
-                    .ir_optimized
-                    .as_ref()
-                    .expect("if the contract has a deploy code it should also have the opimized IR")
-                    .len();
-                self.update_sizes(bytecode_size, yul_size);
+                let bytecode_size = bytecode.len() / 2;
+                // TODO: for the time being we set the yul_size to be zero. We need to change this
+                // when we overhaul the reporting.
+                self.update_sizes(bytecode_size, 0);
            }
        }
    }
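The statistics above approximate byte size as half the hex string length instead of decoding it, which tolerates unlinked bytecode containing non-hex placeholder characters. A tiny sketch of the approximation (the `0x` trim is an extra robustness step, not in the original):

fn approximate_bytecode_size(bytecode_hex: &str) -> usize {
    // Two hex characters encode one byte; unlinked placeholders are counted
    // the same way, which is acceptable for aggregate statistics.
    bytecode_hex.trim_start_matches("0x").len() / 2
}

fn main() {
    assert_eq!(approximate_bytecode_size("0x6080604052"), 5);
}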
+4 -12
@@ -12,11 +12,11 @@ use std::{
};

use anyhow::Context;
+use revive_dt_compiler::{CompilerInput, CompilerOutput};
use serde::{Deserialize, Serialize};

use revive_dt_config::{Arguments, TestingPlatform};
use revive_dt_format::{corpus::Corpus, mode::SolcMode};
-use revive_solc_json_interface::{SolcStandardJsonInput, SolcStandardJsonOutput};

use crate::analyzer::CompilerStatistics;
@@ -44,9 +44,9 @@ pub struct Report {
#[derive(Clone, Debug, Serialize, Deserialize)]
pub struct CompilationTask {
    /// The observed compiler input.
-    pub json_input: SolcStandardJsonInput,
+    pub json_input: CompilerInput,
    /// The observed compiler output.
-    pub json_output: Option<SolcStandardJsonOutput>,
+    pub json_output: Option<CompilerOutput>,
    /// The observed compiler mode.
    pub mode: SolcMode,
    /// The observed compiler version.
@@ -152,15 +152,7 @@ impl Report {
        for (platform, results) in self.compiler_results.iter() {
            for result in results {
                // ignore if there were no errors
-                if result.compilation_task.error.is_none()
-                    && result
-                        .compilation_task
-                        .json_output
-                        .as_ref()
-                        .and_then(|output| output.errors.as_ref())
-                        .map(|errors| errors.is_empty())
-                        .unwrap_or(true)
-                {
+                if result.compilation_task.error.is_none() {
                    continue;
                }
+3
@@ -9,9 +9,12 @@ repository.workspace = true
rust-version.workspace = true

[dependencies]
+revive-dt-common = { workspace = true }
anyhow = { workspace = true }
hex = { workspace = true }
tracing = { workspace = true }
+tokio = { workspace = true }
reqwest = { workspace = true }
semver = { workspace = true }
serde = { workspace = true }
+10 -8
@@ -6,37 +6,39 @@ use std::{
     io::{BufWriter, Write},
     os::unix::fs::PermissionsExt,
     path::{Path, PathBuf},
-    sync::{LazyLock, Mutex},
+    sync::LazyLock,
 };

-use crate::download::GHDownloader;
+use tokio::sync::Mutex;
+
+use crate::download::SolcDownloader;

 pub const SOLC_CACHE_DIRECTORY: &str = "solc";

 pub(crate) static SOLC_CACHER: LazyLock<Mutex<HashSet<PathBuf>>> = LazyLock::new(Default::default);

-pub(crate) fn get_or_download(
+pub(crate) async fn get_or_download(
     working_directory: &Path,
-    downloader: &GHDownloader,
+    downloader: &SolcDownloader,
 ) -> anyhow::Result<PathBuf> {
     let target_directory = working_directory
         .join(SOLC_CACHE_DIRECTORY)
         .join(downloader.version.to_string());
     let target_file = target_directory.join(downloader.target);

-    let mut cache = SOLC_CACHER.lock().unwrap();
+    let mut cache = SOLC_CACHER.lock().await;
     if cache.contains(&target_file) {
         tracing::debug!("using cached solc: {}", target_file.display());
         return Ok(target_file);
     }

     create_dir_all(target_directory)?;
-    download_to_file(&target_file, downloader)?;
+    download_to_file(&target_file, downloader).await?;
     cache.insert(target_file.clone());

     Ok(target_file)
 }

-fn download_to_file(path: &Path, downloader: &GHDownloader) -> anyhow::Result<()> {
+async fn download_to_file(path: &Path, downloader: &SolcDownloader) -> anyhow::Result<()> {
     tracing::info!("caching file: {}", path.display());

     let Ok(file) = File::create_new(path) else {
@@ -52,7 +54,7 @@ fn download_to_file(path: &Path, downloader: &GHDownloader) -> anyhow::Result<()>
     }

     let mut file = BufWriter::new(file);
-    file.write_all(&downloader.download()?)?;
+    file.write_all(&downloader.download().await?)?;
     file.flush()?;
     drop(file);
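The switch from std::sync::Mutex to tokio::sync::Mutex matters here because the guard is now held across an .await point (the download), which a std guard cannot safely be. A minimal sketch of the pattern, assuming a tokio runtime, with an illustrative fetch standing in for the real download_to_file:

// Sketch of the async cache-or-download pattern; names are illustrative.
use std::{collections::HashSet, path::PathBuf, sync::LazyLock};
use tokio::sync::Mutex;

static CACHE: LazyLock<Mutex<HashSet<PathBuf>>> = LazyLock::new(Default::default);

async fn fetch(_path: &PathBuf) -> anyhow::Result<()> {
    Ok(()) // placeholder for the actual network download
}

async fn get_or_fetch(path: PathBuf) -> anyhow::Result<PathBuf> {
    // The tokio guard may be held across the .await below, so concurrent
    // tasks cannot race to download the same file.
    let mut cache = CACHE.lock().await;
    if cache.contains(&path) {
        return Ok(path);
    }
    fetch(&path).await?;
    cache.insert(path.clone());
    Ok(path)
}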
+105 -48
@@ -5,6 +5,8 @@ use std::{
     sync::{LazyLock, Mutex},
 };

+use revive_dt_common::types::VersionOrRequirement;
+
 use semver::Version;
 use sha2::{Digest, Sha256};
@@ -23,12 +25,12 @@ impl List {
     ///
     /// Caches the list retrieved from the `url` into [LIST_CACHE],
     /// subsequent calls with the same `url` will return the cached list.
-    pub fn download(url: &'static str) -> anyhow::Result<Self> {
+    pub async fn download(url: &'static str) -> anyhow::Result<Self> {
         if let Some(list) = LIST_CACHE.lock().unwrap().get(url) {
             return Ok(list.clone());
         }

-        let body: List = reqwest::blocking::get(url)?.json()?;
+        let body: List = reqwest::get(url).await?.json().await?;
         LIST_CACHE.lock().unwrap().insert(url, body.clone());
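Unlike the solc binary cache above, the std Mutex can stay here: each guard is dropped before the request's .await, so no async-aware lock is needed. A minimal sketch of the shape of this cache, assuming only reqwest's async API (the String payload stands in for the real deserialized List):

// Illustrative cache-then-fetch shape; types are simplified.
use std::{
    collections::HashMap,
    sync::{LazyLock, Mutex},
};

static LIST_CACHE: LazyLock<Mutex<HashMap<&'static str, String>>> =
    LazyLock::new(Default::default);

async fn cached_get(url: &'static str) -> anyhow::Result<String> {
    // The guard is released before the network .await below.
    if let Some(body) = LIST_CACHE.lock().unwrap().get(url) {
        return Ok(body.clone());
    }
    let body = reqwest::get(url).await?.text().await?;
    LIST_CACHE.lock().unwrap().insert(url, body.clone());
    Ok(body)
}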
@@ -36,65 +38,91 @@ impl List {
     }
 }

-/// Download solc binaries from GitHub releases (IPFS links aren't reliable).
+/// Download solc binaries from the official SolidityLang site
 #[derive(Clone, Debug)]
-pub struct GHDownloader {
+pub struct SolcDownloader {
     pub version: Version,
     pub target: &'static str,
     pub list: &'static str,
 }

-impl GHDownloader {
-    pub const BASE_URL: &str = "https://github.com/ethereum/solidity/releases/download";
+impl SolcDownloader {
+    pub const BASE_URL: &str = "https://binaries.soliditylang.org";

-    pub const LINUX_NAME: &str = "solc-static-linux";
-    pub const MACOSX_NAME: &str = "solc-macos";
-    pub const WINDOWS_NAME: &str = "solc-windows.exe";
-    pub const WASM_NAME: &str = "soljson.js";
+    pub const LINUX_NAME: &str = "linux-amd64";
+    pub const MACOSX_NAME: &str = "macosx-amd64";
+    pub const WINDOWS_NAME: &str = "windows-amd64";
+    pub const WASM_NAME: &str = "wasm";

-    fn new(version: Version, target: &'static str, list: &'static str) -> Self {
-        Self {
-            version,
-            target,
-            list,
-        }
+    async fn new(
+        version: impl Into<VersionOrRequirement>,
+        target: &'static str,
+        list: &'static str,
+    ) -> anyhow::Result<Self> {
+        let version_or_requirement = version.into();
+        match version_or_requirement {
+            VersionOrRequirement::Version(version) => Ok(Self {
+                version,
+                target,
+                list,
+            }),
+            VersionOrRequirement::Requirement(requirement) => {
+                let Some(version) = List::download(list)
+                    .await?
+                    .builds
+                    .into_iter()
+                    .map(|build| build.version)
+                    .filter(|version| requirement.matches(version))
+                    .max()
+                else {
+                    anyhow::bail!("Failed to find a version that satisfies {requirement:?}");
+                };
+                Ok(Self {
+                    version,
+                    target,
+                    list,
+                })
+            }
+        }
     }

-    pub fn linux(version: Version) -> Self {
-        Self::new(version, Self::LINUX_NAME, List::LINUX_URL)
+    pub async fn linux(version: impl Into<VersionOrRequirement>) -> anyhow::Result<Self> {
+        Self::new(version, Self::LINUX_NAME, List::LINUX_URL).await
     }

-    pub fn macosx(version: Version) -> Self {
-        Self::new(version, Self::MACOSX_NAME, List::MACOSX_URL)
+    pub async fn macosx(version: impl Into<VersionOrRequirement>) -> anyhow::Result<Self> {
+        Self::new(version, Self::MACOSX_NAME, List::MACOSX_URL).await
     }

-    pub fn windows(version: Version) -> Self {
-        Self::new(version, Self::WINDOWS_NAME, List::WINDOWS_URL)
+    pub async fn windows(version: impl Into<VersionOrRequirement>) -> anyhow::Result<Self> {
+        Self::new(version, Self::WINDOWS_NAME, List::WINDOWS_URL).await
     }

-    pub fn wasm(version: Version) -> Self {
-        Self::new(version, Self::WASM_NAME, List::WASM_URL)
-    }
-
-    /// Returns the download link.
-    pub fn url(&self) -> String {
-        format!("{}/v{}/{}", Self::BASE_URL, &self.version, &self.target)
+    pub async fn wasm(version: impl Into<VersionOrRequirement>) -> anyhow::Result<Self> {
+        Self::new(version, Self::WASM_NAME, List::WASM_URL).await
     }

     /// Download the solc binary.
     ///
     /// Errors out if the download fails or the digest of the downloaded file
     /// mismatches the expected digest from the release [List].
-    pub fn download(&self) -> anyhow::Result<Vec<u8>> {
+    pub async fn download(&self) -> anyhow::Result<Vec<u8>> {
         tracing::info!("downloading solc: {self:?}");

-        let expected_digest = List::download(self.list)?
-            .builds
+        let builds = List::download(self.list).await?.builds;
+        let build = builds
             .iter()
             .find(|build| build.version == self.version)
-            .ok_or_else(|| anyhow::anyhow!("solc v{} not found builds", self.version))
-            .map(|b| b.sha256.strip_prefix("0x").unwrap_or(&b.sha256).to_string())?;
+            .ok_or_else(|| anyhow::anyhow!("solc v{} not found builds", self.version))?;

-        let file = reqwest::blocking::get(self.url())?.bytes()?.to_vec();
+        let path = build.path.clone();
+        let expected_digest = build
+            .sha256
+            .strip_prefix("0x")
+            .unwrap_or(&build.sha256)
+            .to_string();
+
+        let url = format!("{}/{}/{}", Self::BASE_URL, self.target, path.display());
+        let file = reqwest::get(url).await?.bytes().await?.to_vec();

         if hex::encode(Sha256::digest(&file)) != expected_digest {
             anyhow::bail!("sha256 mismatch for solc version {}", self.version);
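Two pieces of the new download path can be isolated for clarity: resolving a semver requirement to the newest matching release, and checking the payload digest against the published release list. A hedged sketch of both, where available and expected_hex stand in for data that really comes from the downloaded list:

// Illustrative helpers only; the real code inlines both steps.
use semver::{Version, VersionReq};
use sha2::{Digest, Sha256};

// Pick the highest available version that satisfies the requirement,
// mirroring the filter + max in SolcDownloader::new.
fn resolve(available: &[Version], requirement: &VersionReq) -> Option<Version> {
    available
        .iter()
        .filter(|version| requirement.matches(version))
        .max()
        .cloned()
}

// Verify downloaded bytes against the digest from the list; digests may
// carry a 0x prefix, hence the strip.
fn verify(bytes: &[u8], expected_hex: &str) -> anyhow::Result<()> {
    let expected = expected_hex.strip_prefix("0x").unwrap_or(expected_hex);
    if hex::encode(Sha256::digest(bytes)) != expected {
        anyhow::bail!("sha256 mismatch");
    }
    Ok(())
}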
@@ -106,29 +134,58 @@ impl GHDownloader {
 #[cfg(test)]
 mod tests {
-    use crate::{download::GHDownloader, list::List};
+    use crate::{download::SolcDownloader, list::List};

-    #[test]
-    fn try_get_windows() {
-        let version = List::download(List::WINDOWS_URL).unwrap().latest_release;
-        GHDownloader::windows(version).download().unwrap();
+    #[tokio::test]
+    async fn try_get_windows() {
+        let version = List::download(List::WINDOWS_URL)
+            .await
+            .unwrap()
+            .latest_release;
+        SolcDownloader::windows(version)
+            .await
+            .unwrap()
+            .download()
+            .await
+            .unwrap();
     }

-    #[test]
-    fn try_get_macosx() {
-        let version = List::download(List::MACOSX_URL).unwrap().latest_release;
-        GHDownloader::macosx(version).download().unwrap();
+    #[tokio::test]
+    async fn try_get_macosx() {
+        let version = List::download(List::MACOSX_URL)
+            .await
+            .unwrap()
+            .latest_release;
+        SolcDownloader::macosx(version)
+            .await
+            .unwrap()
+            .download()
+            .await
+            .unwrap();
     }

-    #[test]
-    fn try_get_linux() {
-        let version = List::download(List::LINUX_URL).unwrap().latest_release;
-        GHDownloader::linux(version).download().unwrap();
+    #[tokio::test]
+    async fn try_get_linux() {
+        let version = List::download(List::LINUX_URL)
+            .await
+            .unwrap()
+            .latest_release;
+        SolcDownloader::linux(version)
+            .await
+            .unwrap()
+            .download()
+            .await
+            .unwrap();
     }

-    #[test]
-    fn try_get_wasm() {
-        let version = List::download(List::WASM_URL).unwrap().latest_release;
-        GHDownloader::wasm(version).download().unwrap();
+    #[tokio::test]
+    async fn try_get_wasm() {
+        let version = List::download(List::WASM_URL).await.unwrap().latest_release;
+        SolcDownloader::wasm(version)
+            .await
+            .unwrap()
+            .download()
+            .await
+            .unwrap();
     }
 }
+11 -10
@@ -6,8 +6,9 @@
 use std::path::{Path, PathBuf};

 use cache::get_or_download;
-use download::GHDownloader;
-use semver::Version;
+use download::SolcDownloader;
+
+use revive_dt_common::types::VersionOrRequirement;

 pub mod cache;
 pub mod download;
@@ -18,22 +19,22 @@ pub mod list;
 ///
 /// Subsequent calls for the same version will use a cached artifact
 /// and not download it again.
-pub fn download_solc(
+pub async fn download_solc(
     cache_directory: &Path,
-    version: Version,
+    version: impl Into<VersionOrRequirement>,
     wasm: bool,
 ) -> anyhow::Result<PathBuf> {
     let downloader = if wasm {
-        GHDownloader::wasm(version)
+        SolcDownloader::wasm(version).await
     } else if cfg!(target_os = "linux") {
-        GHDownloader::linux(version)
+        SolcDownloader::linux(version).await
     } else if cfg!(target_os = "macos") {
-        GHDownloader::macosx(version)
+        SolcDownloader::macosx(version).await
     } else if cfg!(target_os = "windows") {
-        GHDownloader::windows(version)
+        SolcDownloader::windows(version).await
     } else {
         unimplemented!()
-    };
+    }?;

-    get_or_download(cache_directory, &downloader)
+    get_or_download(cache_directory, &downloader).await
 }
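With download_solc now async and generic over impl Into<VersionOrRequirement>, callers need an async runtime and may pass either an exact version or a requirement. A hypothetical call site follows; the cache path and requirement are illustrative, and it assumes both semver::Version and semver::VersionReq convert into VersionOrRequirement, as the constructor signatures suggest:

// Hypothetical caller; the import of download_solc from this crate is
// omitted because the crate name is not shown in the diff.
use std::path::Path;

use semver::VersionReq;

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    // Resolves remotely to the newest published release matching ^0.8.
    let solc = download_solc(
        Path::new("/tmp/solc-cache"), // illustrative cache directory
        VersionReq::parse("^0.8")?,   // or an exact semver::Version
        false,                        // native binary, not the wasm build
    )
    .await?;
    println!("solc binary at {}", solc.display());
    Ok(())
}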
+1 -5
@@ -33,9 +33,5 @@
   "mixhash": "0x0000000000000000000000000000000000000000000000000000000000000000",
   "parentHash": "0x0000000000000000000000000000000000000000000000000000000000000000",
   "timestamp": "0x00",
-  "alloc": {
-    "90F8bf6A479f320ead074411a4B0e7944Ea8c9C1": {
-      "balance": "1000000000000000000"
-    }
-  }
+  "alloc": {}
 }