Compare commits

...

77 Commits

Author SHA1 Message Date
Omar Abdulla dde74af22b Fix tests 2025-08-28 16:52:05 +03:00
Omar Abdulla 453fcca4d6 Update tests stream func 2025-08-28 16:44:09 +03:00
Omar Abdulla 31f69de3cb Update compilers to use interior caching 2025-08-28 16:00:37 +03:00
Omar Abdulla f36a132e05 Switch to a for loop when reporting cases 2025-08-28 15:49:17 +03:00
Omar Abdulla a5a27957d5 Uncomment instrument macro 2025-08-28 15:41:38 +03:00
Omar Abdulla 680cc5c174 Remove the static testing target from the report 2025-08-28 15:40:23 +03:00
Omar Abdulla 90214f1afe Add reporting back to the compilers 2025-08-28 09:42:54 +03:00
Omar Abdulla b69aab903e Fix Cargo machete 2025-08-28 09:25:17 +03:00
Omar Abdulla af1eba0ccf Update the compilers interface 2025-08-27 22:01:25 +03:00
Omar Abdulla 4b641b947b Add leader and follower node assignment to test 2025-08-27 18:22:56 +03:00
Omar 8b1afc36a3 Better errors in report (#155) 2025-08-26 15:35:19 +00:00
Omar 60328cd493 Add a Quick Run Script (#152)
* Add a quick run script

* Add more context to errors

* Fix the issue with corpus directory canonicalization

* Update the quick run script

* Edit the runner script

* Support specifying the path of the polkadot sdk
2025-08-25 21:03:28 +00:00
Omar eb264fcc7b feature/fix abi finding resolc (#154)
* Configure kitchensink to use devnode by default

* Update the kitchensink tests

* Fix the logic for finding the ABI in resolc

* Edit how CLI reporter prints
2025-08-25 20:47:29 +00:00
Omar 84b139d3b4 Configure kitchensink to use devnode by default (#153)
* Configure kitchensink to use devnode by default

* Update the kitchensink tests
2025-08-25 15:46:06 +00:00
Omar d93824d973 Updated Reporting Infrastructure (#151)
* Remove the old reporting infra

* Use the Test struct more in the code

* Implement the initial set of reporter events

* Add more runner events to the reporter and refine the structure

* Add reporting infra for reporting ignored tests

* Update report to use better map data structures

* Add case status information to the report

* Integrate the reporting infrastructure with the
CLI reporter used by the program.

* Include contract compilation information in report

* Cleanup report model

* Add information on the deployed contracts
2025-08-25 11:16:09 +00:00
Omar bec5a7e390 Increase Kitchensink maximum http connections (#148)
* Throttle the Kitchensink requests

* Increase max connections limit for kitchensink
2025-08-20 22:25:17 +00:00
Omar 85033cfead Update the readme (#145) 2025-08-19 17:41:26 +00:00
Omar 76d6a154c1 Fix concurrency issues (#142)
* Fix the OS FD error

* Cache the compiler versions

* Allow for auto display impl in declare wrapper type macro

* Better logging and fix concurrency issues

* Fix tests

* Format

* Make the code even more concurrent
2025-08-19 06:47:36 +00:00
Omar c58551803d Allow multiple files in corpus (#144) 2025-08-16 16:04:17 +00:00
Omar 185edcfad9 Cached compiler artifacts (#143)
* WIP compilation cache

* Implement a persistent compilation cache

* Correct the key and value encoding for the cache
2025-08-16 16:04:13 +00:00
James Wilson 09d56f5177 Redo how we parse and use modes (#125)
* WIP redo how we parse and use modes

* test expanding, too

* WIP integrate new Mode/ParsedMode into rest of code

* First pass integrated new mode bits

* fmt

* clippy

* Remove mode we no longer support from test metadata

* Address nits

* Add ability for compiler to opt out if it can't work with some Mode/version

* Elide viaIR input if compiler does not support it

* Improve test output a little; string modes and list ignored tests

* Move Mode to common crate

* constants.mod, and Display for CaseIdx to use it

* fmt

* Rename ModePipeline::E/Y

* Re-arrange Mode things; ParsedMode in format and Mode etc in common

* Move compile check to prepare_tests

* Remove now-unused deps

* clippy nits

* Update fallback tx weights to avoid out of gas errors

* Update kitchensink weights too and fmt

* Bump default geth timeout to 10s

* 30s timeout

* Improve geth stdout logging on failure

* fix line logging

* remove --networkid and arg, back to 5s timeout for geth
2025-08-16 11:38:17 +00:00
Omar a59e287fa1 Add a cached fs abstraction (#141) 2025-08-14 15:21:05 +00:00
Omar f2045db0e9 Add compiler directives to metadata (#139) 2025-08-14 07:38:56 +00:00
Omar 5a11f44673 Misc features/improvements (#138)
* Implement various needed features and improvements

* Reorder the metadata struct

* Format comments
2025-08-13 13:50:06 +00:00
James Wilson 46aea0890d Split reporter and case runner, use channels to pass test reports (#137)
* Use channels to send data to reporting thread and avoid hangs / mutex / duration. Limit max concurrent tasks to avoid too many open files

* More appropriate name for driver/reporter task fns

* Back to parallelise individual cases, report individual cases, address grumbles

* newline before 'Failures' title in report
2025-08-13 13:10:26 +00:00
Omar 9b40c9b9e3 Add an EVM version filter (#136)
* Add an EVM version filter

* Update naming
2025-08-12 10:19:59 +00:00
Omar f67a9bf643 Refactor/ignore null values (#135)
* Skip serialization of null values

* Add support for comments in various steps
2025-08-12 08:55:21 +00:00
Omar 67d767ffde Implement storage empty assertion (#134) 2025-08-11 13:17:19 +00:00
Omar f7fbe094ec Balance assertions (#133)
* Make metadata serializable

* Refactor tests to use steps

* Add a balance assertion test step

* Test balance deserialization

* Box the test steps

* Permit size difference in step output
2025-08-11 12:11:16 +00:00
Omar 90b2dd4cfe Make metadata serializable (#132) 2025-08-10 21:57:41 +00:00
Omar 64d63ef999 Remove the provider cache (#121)
* Remove the provider cache

* Add timing information to the CLI report
2025-08-07 03:55:24 +00:00
Omar 757bfbe116 Add more resolvable variables (#120)
* Allow resolution of base fee

* Fix block difficulty resolution

* Allow for the resolution of gas price
2025-08-06 15:17:36 +00:00
Omar 8619e7feb0 Fix the transaction tracing issues (#118)
* Set the gc mode to archive in geth

* Add a maximum to the exponential backoff wait duration

* Edit the formatting of the CLI case reporter
2025-08-06 12:25:39 +00:00
Omar edba49b301 Use SolidityLang for solc downloads (#117) 2025-08-06 10:35:05 +00:00
Omar 9980926d40 Add a case ignore flag (#114)
* Added a resolver tied to a specific block

* Increase the number of private keys

* Increase kitchensink wait time to 60 seconds

* Add a case ignore flag
2025-08-04 16:40:53 +00:00
Omar ff993d44a5 Added a resolver tied to a specific block (#111)
* Added a resolver tied to a specific block

* Increase the number of private keys

* Increase kitchensink wait time to 60 seconds
2025-08-04 12:45:47 +00:00
Omar 8cbb1a9f77 Added basic console reporting (#110)
* Added basic console reporting

* Add some waiting period to the printing task

* Print to the stderr and print logs to stdout
2025-08-04 06:05:49 +00:00
Omar 56c2fe8c0c Parallelize Cases (#109)
* Parallelize over cases

* Rename the state and driver

* Parallelize execution

* Update the default config of the tool

* Make codebase async

* Fix machete

* Fix tests & clear node directories before startup

* Cleanup the cleanup logic

* Rename geth node
2025-08-01 11:00:08 +00:00
Omar 330a773a1c Add variables support (#96) 2025-07-30 08:41:03 +00:00
Omar f51693cb9f Support multiple compiler versions (#92)
* Allow for downloader to use version requirements.

We will soon add support for the compiler version requirement from the
metadata files to be honored. The compiler version is specified in the
solc modes section of the file and its specified as a `VersionReq` and
not as a version.

Therefore, we need to have the ability to honor this version requirement
and find the best version that satisfies the requirement.

* Request `VersionOrRequirement` in compiler interface

* Honor the compiler version requirement in metadata

This commit honors the compiler version requirement listed in the solc
modes of the metadata file. If this version requirement is provided then
it overrides what was passed in the CLI. Otherwise, the CLI version will
be used.

* Make compiler IO completely generic.

Before this commit, the types that were used for the compiler input and
output were the resolc compiler types which was a leaky abstraction as
we have traits to abstract the compilers away but we expose their
internal types out to other crates.

This commit did the following:
1. Made the compiler IO types fully generic so that all of the logic for
   constructing the map of compiled contracts is all done by the
   compiler implementation and not by the consuming code.
2. Changed the input types used for Solc to be the forge standard JSON
   types for Solc instead of resolc.

* Fix machete

* Add resolc to CI

* Add resolc to CI

* Add resolc to CI

* Add resolc to CI
2025-07-30 04:56:23 +00:00
James Wilson 4db7009640 Ensure path in corpus is relative to corpus file (#85) 2025-07-29 13:12:16 +00:00
Omar 5a36e242ec Allow for files in corpus definitions (#87)
* Allow for files to be specified in the corpus file

* Attempt to improve the geth tx indexing issue.

We're facing an issue where Geth transaction indexing can sometimes stall
on some of the nodes we're running. The logs show that for all transactions
we always need 1 second of waiting time. However, during certain runs we
sometimes run into an issue with some of the nodes where it seems like
their transaction indexer fails (either at the start or after some amount
of time) which leads us to never get the receipts back from these specific
nodes.

This is not a load issue as it appears like all of the other nodes handle
it just fine. However, it looks like once a node gets into this state it
cannot get out of it and it's bricked for the entire run.

This commit adds some more command line arguments to the geth command in
hopes of improving this issue.
2025-07-29 13:02:53 +00:00
James Wilson 33329632b5 Increase geth instantiate timeout from 2s to 5s (#86) 2025-07-29 10:34:31 +00:00
Omar 429f2e92a2 Fix contract discovery for simple tests (#83) 2025-07-28 07:05:53 +00:00
Omar 65f41f2038 Correct the type of address in matterlabs events (#82) 2025-07-28 05:01:52 +00:00
Omar 3ed8a1ca1c Support compiler-version aware exceptions (#81) 2025-07-25 14:23:17 +00:00
Omar 2923d675cd Support Compile-time Linking (#79)
* Use wrappers for libraries in metadata.

* Create a unified way to access deployed contracts

* Support linking at compile time
2025-07-25 07:03:21 +00:00
Omar 8f5bcf08ad Support Calldata arithmetic (#77)
* Re-order the input file.

This commit reorders the input file such that we have a definitions
section and an implementations section and such that the order of
the items in both sections is the same.

* Implement a reverse polish calculator for calldata arithmetic
2025-07-24 15:35:25 +00:00
Omar 90fb89adc0 Add a common crate (#75)
* Add a barebones common crate

* Refactor some code into the common crate

* Add a `ResolverApi` interface.

This commit adds a `ResolverApi` trait to the `format` crate that can be
implemented by any type that can act as a resolver. A resolver is able
to provide information on the chain state. This chain state could be
fresh or it could be cached (which is something that we will do in a
future PR).

This cleans up our crate graph so that `format` is not depending on the
node interactions crate for the `EthereumNode` trait.

* Cleanup the blocking executor
2025-07-24 12:42:45 +00:00
Omar b03ad3027e Pre-seed accounts with more ETH. (#73)
* Pre-seed accounts with more ETH.

This commit fixes some issues around how much ETH we seed an
account with in genesis. Currently, any account that the node has keys
to sign for will be seeded with u128::MAX WEI in genesis. This also
includes the default signer account.

* Bump commit hash of polkadot SDK

* Change how the cache key is computed

* Revert "Change how the cache key is computed"

This reverts commit 75afdd9cfd.

* Revert "Bump commit hash of polkadot SDK"

This reverts commit 8aaa69780e.

* Add extra comments

* Revert "Add extra comments"

This reverts commit bd4de2c83d.

* Update the initial balance
2025-07-24 08:46:14 +00:00
Omar 972f3b6d5b Wait longer for geth receipts (#74) 2025-07-24 04:40:19 +00:00
Omar 6f4aa731ab Handle exceptions (#54)
* Add support for wrapper types

* Move `FilesWithExtensionIterator` to `core::common`

* Remove unneeded use of two `HashMap`s

* Make metadata structs more typed

* Impl new_from for wrapper types

* Implement the new input handling logic

* Fix edge-case in input handling

* Ignore macro doc comment tests

* Correct comment

* Fix edge-case in deployment order

* Handle calldata better

* Allow for the use of function signatures

* Add support for exceptions

* Cached nonce allocator

* Fix tests

* Add support for address replacement

* Cleanup implementation

* Cleanup mutability

* Wire up address replacement with rest of code

* Implement caller replacement

* Switch to callframe trace for exceptions

* Add a way to skip tests if they don't match the target

* Handle values from the metadata files

* Remove address replacement

* Correct the arguments

* Remove empty impl

* Remove address replacement

* Correct the arguments

* Remove empty impl

* Fix size_requirement underflow

* Add support for wildcards in exceptions

* Fix calldata construction of single calldata

* Better handling for length in equivalency checks

* Make initial balance a constant

* Fix size_requirement underflow

* Add support for wildcards in exceptions

* Fix calldata construction of single calldata

* Better handling for length in equivalency checks

* Fix tests
2025-07-24 03:45:53 +00:00
Omar 589a5dc988 Handle calldata better (#49)
* Add support for wrapper types

* Move `FilesWithExtensionIterator` to `core::common`

* Remove unneeded use of two `HashMap`s

* Make metadata structs more typed

* Impl new_from for wrapper types

* Implement the new input handling logic

* Fix edge-case in input handling

* Ignore macro doc comment tests

* Correct comment

* Fix edge-case in deployment order

* Handle calldata better

* Remove todo
2025-07-22 03:39:35 +00:00
Omar c6d55515be Allow for the use of function signatures (#50)
* Allow for the use of function signatures

* Add test
2025-07-21 10:43:17 +00:00
Omar a9970eb2bb Refactor the input handling logic (#48)
* Add support for wrapper types

* Move `FilesWithExtensionIterator` to `core::common`

* Remove unneeded use of two `HashMap`s

* Make metadata structs more typed

* Impl new_from for wrapper types

* Implement the new input handling logic

* Fix edge-case in input handling

* Ignore macro doc comment tests

* Correct comment

* Fix edge-case in deployment order
2025-07-21 09:01:52 +00:00
Omar 2259942363 Cleanup execution logic (#45)
* Introduce a custom kitchensink network

* fix formatting

* Added `--dev` to `substrate-node` arguments.

This commit adds the `--dev` argument to the `substrate-node` to allow
the chain to keep advancing as time goes on. We have found that if this
option is not added then the chain won't advance forward.

* fix clippy warning

* fix clippy warning

* Fix the ABI finding logic

* Fix function selector and argument encoding

* Avoid extra buffer allocation

* Remove reliance on the web3 crate

* Implement ABI fix in the compiler trait impl

* Update the async runtime with syntactic sugar.

* Fix tests

* Fix doc test

* Give nodes a standard way to get their alloy provider

* Add ability to get the chain_id from node

* Get kitchensink provider to use kitchensink network

* Use provider method in tests

* Add support for getting the gas limit from the node

* Add a way to get the coinbase address

* Add a way to get the block difficulty from the node

* Add a way to get block info from the node

* Expose APIs for getting the info of a specific block

* Add resolution logic for other matterlabs variables

* Fix tests

* Add comment on alternative solutions

* Change kitchensink gas limit assertion

* Cleanup execution logic
2025-07-18 12:08:13 +00:00
Omar 0b97d7dc29 Support other matterlabs variables (#43)
* Introduce a custom kitchensink network

* fix formatting

* Added `--dev` to `substrate-node` arguments.

This commit adds the `--dev` argument to the `substrate-node` to allow
the chain to keep advancing as time goes on. We have found that if this
option is not added then the chain won't advance forward.

* fix clippy warning

* fix clippy warning

* Fix function selector and argument encoding

* Avoid extra buffer allocation

* Remove reliance on the web3 crate

* Update the async runtime with syntactic sugar.

* Fix tests

* Fix doc test

* Give nodes a standard way to get their alloy provider

* Add ability to get the chain_id from node

* Get kitchensink provider to use kitchensink network

* Use provider method in tests

* Add support for getting the gas limit from the node

* Add a way to get the coinbase address

* Add a way to get the block difficulty from the node

* Add a way to get block info from the node

* Expose APIs for getting the info of a specific block

* Add resolution logic for other matterlabs variables

* Fix tests

* Add comment on alternative solutions

* Change kitchensink gas limit assertion

* Remove un-needed profile config
2025-07-18 12:06:40 +00:00
Omar 2bee2d5c8b Fix the ABI finding logic (#38)
* Fix the ABI finding logic

* Implement ABI fix in the compiler trait impl
2025-07-18 11:22:51 +00:00
Omar 854e8d9690 Fix deserialization error: invalid value: string "0x2d79dd80ff729c000" (#34)
* Introduce a custom kitchensink network

* fix formatting

* Added `--dev` to `substrate-node` arguments.

This commit adds the `--dev` argument to the `substrate-node` to allow
the chain to keep advancing as time goes on. We have found that if this
option is not added then the chain won't advance forward.

* fix clippy warning

* fix clippy warning
2025-07-18 11:22:13 +00:00
Omar 2d517784dd Better logging for contract deployment (#46)
* Log certain errors better

* Remove unneeded code
2025-07-16 18:16:12 +00:00
Omar baa11ad28f Correctly identify which contracts to compile (#44)
* Compile all contracts for a test file

* Fix compilation errors related to paths

* Set the base path if specified
2025-07-16 11:52:40 +00:00
Omar c2e65f9e33 Fix function selector & argument encoding (#39)
* Fix function selector and argument encoding

* Avoid extra buffer allocation

* Remove reliance on the web3 crate

* Fix tests
2025-07-15 20:00:10 +00:00
Omar 14888f9767 Update the async runtime (#42)
* Update the async runtime with syntactic sugar.

* Fix doc test

* Update crates/node-interaction/src/blocking_executor.rs

Co-authored-by: xermicus <cyrill@parity.io>

* Update crates/node-interaction/src/blocking_executor.rs

Co-authored-by: xermicus <cyrill@parity.io>

* Update crates/node-interaction/src/blocking_executor.rs

Co-authored-by: xermicus <cyrill@parity.io>

* Update crates/node-interaction/src/blocking_executor.rs

Co-authored-by: xermicus <cyrill@parity.io>

* Update crates/node-interaction/src/blocking_executor.rs

Co-authored-by: xermicus <cyrill@parity.io>

* Update crates/node-interaction/src/blocking_executor.rs

Co-authored-by: xermicus <cyrill@parity.io>

* Update crates/node-interaction/src/blocking_executor.rs

Co-authored-by: xermicus <cyrill@parity.io>

* Update crates/node-interaction/src/blocking_executor.rs

Co-authored-by: xermicus <cyrill@parity.io>

* Update crates/node-interaction/src/blocking_executor.rs

Co-authored-by: xermicus <cyrill@parity.io>

* Update crates/node-interaction/src/blocking_executor.rs

Co-authored-by: xermicus <cyrill@parity.io>

* Improve the comments

* Update the release profile

---------

Co-authored-by: xermicus <cyrill@parity.io>
2025-07-15 11:19:17 +00:00
Omar 3e99d1c2a5 Allow alloy to estimate tx gas (#37) 2025-07-14 17:34:44 +00:00
Omar 4e234aa1bd Remove code that was accidentally committed. (#41)
* Remove code that was accidentally committed.

* Remove unneeded dependency
2025-07-14 16:24:39 +00:00
Omar b204de5484 Persist node logs (#36)
* Persist node logs

* Fix clippy lints

* Delete the node's db on shutdown but persist logs

* Fix tests

* Separate stdout and stderr and use more consts.

* More consistent handling of open options

* Revert the use of subprocess

* Remove outdated comment

* Flush the log files on drop

* Rename `log_files` -> `logs_file_to_flush`
2025-07-14 16:08:47 +00:00
Omar 5eb3a0e1b5 Fix for "transaction indexing is in progress" (#32)
* Retry getting transaction receipt

* Small fix to logging consistency

* Introduce a custom kitchensink network

* Fix formtting and clippy
2025-07-14 09:32:57 +00:00
Omar 772bd217c3 Fixing the CI on Ubuntu (#31)
* pin the version of geth used in CI

* pin the version of geth used in CI

* temp: run on each push

* pin the version of geth used in CI

* Make geth installation arch dependent

* Remove temp run on push to branch

* Add a comment on the need for pre-built binaries
2025-07-14 09:17:13 +00:00
Omar 0513a4befb Use tracing for logging. (#29)
This commit updates how logging is done in the differential testing
harness to use `tracing` instead of the `log` crate. This allows
us to better associate logs with the cases being executed,
which makes it easier to debug and understand what the harness is doing.
2025-07-10 07:28:16 +00:00
activecoder10 de7c7d6703 Compute transaction input for executing transactions (#28)
* Parsed ABI field in order to get method parameter

* Added logic for ABI

* Refactored dependencies

* Small refactoring

* Added unit tests for ABI parameter extraction logic

* Fixed format issues

* Fixed format

* Added new changes to format

* Added bail to stop execution when we have an error during deployment
2025-07-09 11:03:38 +00:00
activecoder10 3a537c2812 Added extra logging for critical part of the flow. (#27)
* Fix legacy_transaction to address for execution part

* updated polkadot-sdk to latest

* Update polkadot-sdk to latest main with fixes

* Added extra logging

* Applied some clippy improvements
2025-06-27 15:24:57 +00:00
activecoder10 4ab79ed97e Fixed the contract deployment logic. Added new tracing logging for differential for leader and follower receipt structure (#26) 2025-06-20 13:02:54 +00:00
activecoder10 ee97b62e70 Added fetch_add_nonce method for NodeInteraction trait. Added extra logging. (#25)
* added logging

* added fetch_add_nonce method

* Added nonce for legacy transaction also

* Addressed PR comments
2025-06-18 19:43:16 +00:00
xermicus e9b5a06aec fix the simple test case definition (#24)
Signed-off-by: xermicus <cyrill@parity.io>
2025-06-17 10:23:09 +00:00
xermicus 534170db6f dont fail machete on polkadot-sdk submodule (#23)
Signed-off-by: xermicus <cyrill@parity.io>
2025-06-14 10:12:30 +00:00
activecoder10 090b56c46a deploy contracts (#22) 2025-06-12 11:09:01 +00:00
activecoder10 547563e718 Extended execute_input method (#21)
* Extended execute_input method

* Improve tracing part
2025-06-10 08:23:37 +00:00
73 changed files with 10636 additions and 1778 deletions
+36 -1
@@ -101,7 +101,33 @@ jobs:
         run: |
           sudo add-apt-repository -y ppa:ethereum/ethereum
           sudo apt-get update
-          sudo apt-get install -y ethereum protobuf-compiler
+          sudo apt-get install -y protobuf-compiler
+          sudo apt-get install -y solc
+          # We were facing some issues in CI with the 1.16.* versions of geth, and specifically on
+          # Ubuntu. Eventually, we found out that the last version of geth that worked in our CI was
+          # version 1.15.11. Thus, this is the version that we want to use in CI. The PPA sadly does
+          # not have historic versions of Geth and therefore we need to resort to downloading pre
+          # built binaries for Geth and the surrounding tools which is what the following parts of
+          # the script do.
+          sudo apt-get install -y wget ca-certificates tar
+          ARCH=$(uname -m)
+          if [ "$ARCH" = "x86_64" ]; then
+            URL="https://gethstore.blob.core.windows.net/builds/geth-alltools-linux-amd64-1.15.11-36b2371c.tar.gz"
+          elif [ "$ARCH" = "aarch64" ]; then
+            URL="https://gethstore.blob.core.windows.net/builds/geth-alltools-linux-arm64-1.15.11-36b2371c.tar.gz"
+          else
+            echo "Unsupported architecture: $ARCH"
+            exit 1
+          fi
+          wget -qO- "$URL" | sudo tar xz -C /usr/local/bin --strip-components=1
+          geth --version
+          curl -sL https://github.com/paritytech/revive/releases/download/v0.3.0/resolc-x86_64-unknown-linux-musl -o resolc
+          chmod +x resolc
+          sudo mv resolc /usr/local/bin
       - name: Install Geth on macOS
         if: matrix.os == 'macos-14'
@@ -109,6 +135,12 @@ jobs:
           brew tap ethereum/ethereum
           brew install ethereum protobuf
+          brew install solidity
+          curl -sL https://github.com/paritytech/revive/releases/download/v0.3.0/resolc-universal-apple-darwin -o resolc
+          chmod +x resolc
+          sudo mv resolc /usr/local/bin
       - name: Machete
         uses: bnjbvr/cargo-machete@v0.7.1
@@ -124,5 +156,8 @@ jobs:
       - name: Check eth-rpc version
         run: eth-rpc --version
+      - name: Check resolc version
+        run: resolc --version
       - name: Test cargo workspace
         run: make test
+8
@@ -3,3 +3,11 @@
 .DS_Store
 node_modules
 /*.json
+
+# We do not want to commit any log files that we produce from running the code locally so this is
+# added to the .gitignore file.
+*.log
+profile.json.gz
+resolc-compiler-tests
+workdir
+Generated
+905 -185
File diff suppressed because it is too large
+41 -11
@@ -4,15 +4,14 @@ members = ["crates/*"]
 
 [workspace.package]
 version = "0.1.0"
-authors = [
-    "Parity Technologies <admin@parity.io>",
-]
+authors = ["Parity Technologies <admin@parity.io>"]
 license = "MIT/Apache-2.0"
 edition = "2024"
 repository = "https://github.com/paritytech/revive-differential-testing.git"
-rust-version = "1.85.0"
+rust-version = "1.87.0"
 
 [workspace.dependencies]
+revive-dt-common = { version = "0.1.0", path = "crates/common" }
 revive-dt-compiler = { version = "0.1.0", path = "crates/compiler" }
 revive-dt-config = { version = "0.1.0", path = "crates/config" }
 revive-dt-core = { version = "0.1.0", path = "crates/core" }
@@ -23,24 +22,49 @@ revive-dt-node-pool = { version = "0.1.0", path = "crates/node-pool" }
 revive-dt-report = { version = "0.1.0", path = "crates/report" }
 revive-dt-solc-binaries = { version = "0.1.0", path = "crates/solc-binaries" }
 
+alloy-primitives = "1.2.1"
+alloy-sol-types = "1.2.1"
 anyhow = "1.0"
+bson = { version = "2.15.0" }
+cacache = { version = "13.1.0" }
 clap = { version = "4", features = ["derive"] }
-env_logger = "0.11.8"
+dashmap = { version = "6.1.0" }
+foundry-compilers-artifacts = { version = "0.18.0" }
+futures = { version = "0.3.31" }
 hex = "0.4.3"
-reqwest = { version = "0.12.15", features = ["blocking", "json"] }
-log = "0.4.27"
+regex = "1"
+moka = "0.12.10"
+paste = "1.0.15"
+reqwest = { version = "0.12.15", features = ["json"] }
 once_cell = "1.21"
+rayon = { version = "1.10" }
 semver = { version = "1.0", features = ["serde"] }
 serde = { version = "1.0", default-features = false, features = ["derive"] }
-serde_json = { version = "1.0", default-features = false, features = ["arbitrary_precision", "std"] }
+serde_json = { version = "1.0", default-features = false, features = [
+    "arbitrary_precision",
+    "std",
+    "unbounded_depth",
+] }
+serde_with = { version = "3.14.0" }
 sha2 = { version = "0.10.9" }
 sp-core = "36.1.0"
 sp-runtime = "41.1.0"
 temp-dir = { version = "0.1.16" }
 tempfile = "3.3"
-tokio = { version = "1", default-features = false, features = ["rt-multi-thread"] }
+thiserror = "2"
+tokio = { version = "1.47.0", default-features = false, features = [
+    "rt-multi-thread",
+    "process",
+    "rt",
+] }
 uuid = { version = "1.8", features = ["v4"] }
+tracing = { version = "0.1.41" }
+tracing-appender = { version = "0.2.3" }
+tracing-subscriber = { version = "0.3.19", default-features = false, features = [
+    "fmt",
+    "json",
+    "env-filter",
+] }
+indexmap = { version = "2.10.0", default-features = false }
 
 # revive compiler
 revive-solc-json-interface = { git = "https://github.com/paritytech/revive", rev = "3389865af7c3ff6f29a586d82157e8bc573c1a8e" }
@@ -48,7 +72,7 @@ revive-common = { git = "https://github.com/paritytech/revive", rev = "3389865af7c3ff6f29a586d82157e8bc573c1a8e" }
 revive-differential = { git = "https://github.com/paritytech/revive", rev = "3389865af7c3ff6f29a586d82157e8bc573c1a8e" }
 
 [workspace.dependencies.alloy]
-version = "1.0"
+version = "1.0.22"
 default-features = false
 features = [
     "json-abi",
@@ -59,9 +83,15 @@ features = [
     "rpc-types",
     "signer-local",
     "std",
+    "network",
+    "serde",
+    "rpc-types-eth",
+    "genesis",
 ]
 
 [profile.bench]
 inherits = "release"
 lto = true
 codegen-units = 1
+
+[workspace.lints.clippy]
+1 -1
@@ -8,7 +8,7 @@ clippy:
 
 machete:
 	cargo install cargo-machete
-	cargo machete
+	cargo machete crates
 
 test: format clippy machete
 	cargo test --workspace -- --nocapture
+193 -17
@@ -1,34 +1,210 @@
-# revive-differential-tests
-
-The revive differential testing framework allows to define smart contract tests in a declarative manner in order to compile and execute them against different Ethereum-compatible blockchain implmentations. This is useful to:
-
-- Analyze observable differences in contract compilation and execution across different blockchain implementations, including contract storage, account balances, transaction output and emitted events on a per-transaction base.
-- Collect and compare benchmark metrics such as code size, gas usage or transaction throughput per seconds (TPS) of different blockchain implementations.
-- Ensure reproducible contract builds across multiple compiler implementations or multiple host platforms.
-- Implement end-to-end regression tests for Ethereum-compatible smart contract stacks.
-
-# Declarative test format
-
-For now, the format used to write tests is the [matter-labs era compiler format](https://github.com/matter-labs/era-compiler-tests?tab=readme-ov-file#matter-labs-simplecomplex-format). This allows us to re-use many tests from their corpora.
-
-# The `retester` utility
-
-The `retester` helper utilty is used to run the tests. To get an idea of what `retester` can do, please consults its command line help:
-
-```
-cargo run -p revive-dt-core -- --help
-```
-
-For example, to run the [complex Solidity tests](https://github.com/matter-labs/era-compiler-tests/tree/main/solidity/complex), define a corpus structure as follows:
-
-```json
-{
-    "name": "ML Solidity Complex",
-    "path": "/path/to/era-compiler-tests/solidity/complex"
-}
-```
-
-Assuming this to be saved in a `ml-solidity-complex.json` file, the following command will try to compile and execute the tests found inside the corpus:
-
-```bash
-RUST_LOG=debug cargo r --release -p revive-dt-core -- --corpus ml-solidity-complex.json
-```
+<div align="center">
+  <h1><code>Revive Differential Tests</code></h1>
+  <p>
+    <strong>Differential testing for Ethereum-compatible smart contract stacks</strong>
+  </p>
+</div>
+
+This project compiles and executes declarative smart-contract tests against multiple platforms, then compares behavior (status, return data, events, and state diffs). Today it supports:
+
+- Geth (EVM reference implementation)
+- Revive Kitchensink (Substrate-based PolkaVM + `eth-rpc` proxy)
+
+Use it to:
+
+- Detect observable differences between platforms (execution success, logs, state changes)
+- Ensure reproducible builds across compilers/hosts
+- Run end-to-end regression suites
+
+This framework uses the [MatterLabs tests format](https://github.com/matter-labs/era-compiler-tests/tree/main/solidity) for declarative tests, which is composed of the following:
+
+- Metadata files: a metadata file is akin to a module of tests in Rust.
+- Each metadata file contains multiple cases: a case is akin to a Rust test, where a module can contain multiple tests.
+- Each case contains multiple steps and assertions: this is akin to a Rust test that contains multiple statements.
+
+Metadata files are JSON files, but Solidity files can also be metadata files if they include inline metadata provided as a comment at the top of the contract.
+
+All of the steps contained within each test case are either:
+
+- Transactions that need to be submitted and assertions to run on the submitted transactions.
+- Assertions on the state of the chain (e.g., account balances, storage, etc.)
+
+All of the transactions submitted by this tool to the test nodes follow a similar logic to what wallets do: we first use alloy to estimate the transaction fees, then attach them to the transaction, submit it to the node, and await the transaction receipt.
+
+This repository contains none of the tests; it only contains the testing framework (the test runner). The tests can be found in the [`resolc-compiler-tests`](https://github.com/paritytech/resolc-compiler-tests) repository, which is a clone of [MatterLabs' test suite](https://github.com/matter-labs/era-compiler-tests) with some modifications and adjustments made to suit our use case.
+
+## Requirements
+
+This section describes the dependencies that this framework requires to run. Compiling the framework is straightforward, and no dependencies beyond what's specified in the `Cargo.toml` file should be required.
+
+- Stable Rust
+- Geth - When doing differential testing against the PVM we submit transactions to a Geth node and to Kitchensink to compare them.
+- Kitchensink - When doing differential testing against the PVM we submit transactions to a Geth node and to Kitchensink to compare them.
+- ETH-RPC - All communication with Kitchensink is done through the ETH RPC.
+- Solc - This is actually a transitive dependency: while this tool doesn't require solc (it downloads the versions that it requires), resolc requires that solc is installed and available in the path.
+- Resolc - This is required to compile the contracts to PolkaVM bytecode.
+
+All of the above need to be installed and available in the path in order for the tool to work.
+
+## Running The Tool
+
+This tool is updated quite frequently. Therefore, it's recommended that you don't install the tool and then run it, but rather that you run it from the root of the directory using `cargo run --release`. The help command of the tool gives you all of the information you need about each of the options and flags that the tool offers.
+
+```bash
+$ cargo run --release -- --help
+Usage: retester [OPTIONS]
+
+Options:
+  -s, --solc <SOLC>
+          The `solc` version to use if the test didn't specify it explicitly
+          [default: 0.8.29]
+      --wasm
+          Use the Wasm compiler versions
+  -r, --resolc <RESOLC>
+          The path to the `resolc` executable to be tested.
+          By default it uses the `resolc` binary found in `$PATH`.
+          If `--wasm` is set, this should point to the resolc Wasm file.
+          [default: resolc]
+  -c, --corpus <CORPUS>
+          A list of test corpus JSON files to be tested
+  -w, --workdir <WORKING_DIRECTORY>
+          A place to store temporary artifacts during test execution.
+          Creates a temporary dir if not specified.
+  -g, --geth <GETH>
+          The path to the `geth` executable.
+          By default it uses the `geth` binary found in `$PATH`.
+          [default: geth]
+      --geth-start-timeout <GETH_START_TIMEOUT>
+          The maximum time in milliseconds to wait for geth to start
+          [default: 5000]
+      --genesis <GENESIS_FILE>
+          Configure nodes according to this genesis.json file
+          [default: genesis.json]
+  -a, --account <ACCOUNT>
+          The signing account private key
+          [default: 0x4f3edf983ac636a65a842ce7c78d9aa706d3b113bce9c46f30d7d21715b23b1d]
+      --private-keys-count <PRIVATE_KEYS_TO_ADD>
+          This argument controls which private keys the nodes should have access to and be added to its wallet signers. With a value of N, private keys (0, N] will be added to the signer set of the node
+          [default: 100000]
+  -l, --leader <LEADER>
+          The differential testing leader node implementation
+          [default: geth]
+          Possible values:
+          - geth:        The go-ethereum reference full node EVM implementation
+          - kitchensink: The kitchensink runtime provides the PolkaVM (PVM) based node implementation
+  -f, --follower <FOLLOWER>
+          The differential testing follower node implementation
+          [default: kitchensink]
+          Possible values:
+          - geth:        The go-ethereum reference full node EVM implementation
+          - kitchensink: The kitchensink runtime provides the PolkaVM (PVM) based node implementation
+      --compile-only <COMPILE_ONLY>
+          Only compile against this testing platform (doesn't execute the tests)
+          Possible values:
+          - geth:        The go-ethereum reference full node EVM implementation
+          - kitchensink: The kitchensink runtime provides the PolkaVM (PVM) based node implementation
+      --number-of-nodes <NUMBER_OF_NODES>
+          Determines the number of nodes that will be spawned for each chain
+          [default: 1]
+      --number-of-threads <NUMBER_OF_THREADS>
+          Determines the number of tokio worker threads that will be used
+          [default: 16]
+      --number-concurrent-tasks <NUMBER_CONCURRENT_TASKS>
+          Determines the number of concurrent tasks that will be spawned to run tests. Defaults to 10 x the number of nodes
+  -e, --extract-problems
+          Extract problems back to the test corpus
+  -k, --kitchensink <KITCHENSINK>
+          The path to the `kitchensink` executable.
+          By default it uses the `substrate-node` binary found in `$PATH`.
+          [default: substrate-node]
+  -p, --eth_proxy <ETH_PROXY>
+          The path to the `eth_proxy` executable.
+          By default it uses the `eth-rpc` binary found in `$PATH`.
+          [default: eth-rpc]
+  -i, --invalidate-compilation-cache
+          Controls if the compilation cache should be invalidated or not
+  -h, --help
+          Print help (see a summary with '-h')
+```
+
+To run tests with this tool you need a corpus JSON file that defines the tests included in the corpus. The simplest corpus file looks like the following:
+
+```json
+{
+    "name": "MatterLabs Solidity Simple, Complex, and Semantic Tests",
+    "path": "resolc-compiler-tests/fixtures/solidity"
+}
+```
+
+> [!NOTE]
+> Note that the tests can be found in the [`resolc-compiler-tests`](https://github.com/paritytech/resolc-compiler-tests) repository.
+
+The above corpus file instructs the tool to look for all of the test cases contained within all of the metadata files of the specified directory.
+
+The simplest command to run this tool is the following:
+
+```bash
+RUST_LOG="info" cargo run --release -- \
+    --corpus path_to_your_corpus_file.json \
+    --workdir path_to_a_temporary_directory_to_cache_things_in \
+    --number-of-nodes 5 \
+    > logs.log \
+    2> output.log
+```
+
+The above command runs every one of the tests discovered in the path specified in the corpus file. All of the logs from the execution will be persisted in the `logs.log` file and all of the output of the tool will be persisted to the `output.log` file. If all that you're looking for is to run the tool and check which tests succeeded and failed, then the `output.log` file is what you need to look at. However, if you're contributing to the tool, then the `logs.log` file will be very valuable.
+
+If you only want to run a subset of tests, then you can specify that in your corpus file. The following is an example:
+
+```json
+{
+    "name": "MatterLabs Solidity Simple, Complex, and Semantic Tests",
+    "paths": [
+        "path/to/a/single/metadata/file/I/want/to/run.json",
+        "path/to/a/directory/to/find/all/metadata/files/within"
+    ]
+}
+```
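As a rough illustration of the two corpus shapes shown in the README above, they could be deserialized along these lines (a hypothetical sketch; the framework's actual corpus types live in its crates and may be named and shaped differently):

```rust
use std::path::PathBuf;

use serde::Deserialize;

// Hypothetical mirror of the corpus files shown in the README; the real
// definitions in the framework may differ.
#[derive(Debug, Deserialize)]
#[serde(untagged)]
enum CorpusPaths {
    /// The `"path": "..."` form pointing at a directory to scan.
    Single { path: PathBuf },
    /// The `"paths": [...]` form listing individual files and/or directories.
    Multiple { paths: Vec<PathBuf> },
}

#[derive(Debug, Deserialize)]
struct Corpus {
    name: String,
    #[serde(flatten)]
    paths: CorpusPaths,
}

fn main() -> anyhow::Result<()> {
    let corpus: Corpus = serde_json::from_str(
        r#"{ "name": "Example", "path": "resolc-compiler-tests/fixtures/solidity" }"#,
    )?;
    println!("{}: {:?}", corpus.name, corpus.paths);
    Ok(())
}
```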
+337
@@ -0,0 +1,337 @@
{
"modes": [
"Y >=0.8.9",
"E"
],
"cases": [
{
"name": "first",
"inputs": [
{
"address": "0xdeadbeef00000000000000000000000000000042",
"expected_balance": "1233"
},
{
"address": "0xdeadbeef00000000000000000000000000000042",
"is_storage_empty": true
},
{
"address": "0xdeadbeef00000000000000000000000000000042",
"is_storage_empty": false
},
{
"instance": "WBTC_1",
"method": "#deployer",
"calldata": [
"0x40",
"0x80",
"4",
"0x5742544300000000000000000000000000000000000000000000000000000000",
"14",
"0x5772617070656420425443000000000000000000000000000000000000000000"
],
"expected": [
"WBTC_1.address"
]
},
{
"instance": "WBTC_2",
"method": "#deployer",
"calldata": [
"0x40",
"0x80",
"4",
"0x5742544300000000000000000000000000000000000000000000000000000000",
"14",
"0x5772617070656420425443000000000000000000000000000000000000000000"
],
"expected": [
"WBTC_2.address"
]
},
{
"instance": "Mooniswap",
"method": "#deployer",
"calldata": [
"0x0000000000000000000000000000000000000000000000000000000000000060",
"0x00000000000000000000000000000000000000000000000000000000000000c0",
"0x0000000000000000000000000000000000000000000000000000000000000100",
"0x0000000000000000000000000000000000000000000000000000000000000002",
"WBTC_1.address",
"WBTC_2.address",
"4",
"0x5742544300000000000000000000000000000000000000000000000000000000",
"14",
"0x5772617070656420425443000000000000000000000000000000000000000000"
],
"expected": {
"return_data": [
"Mooniswap.address"
],
"events": [
{
"topics": [
"0x8be0079c531659141344cd1fd0a4f28419497f9722a3daafe3b4186f6b6457e0",
"0x0000000000000000000000000000000000000000000000000000000000000000",
"0xdeadbeef01000000000000000000000000000000"
],
"values": []
}
],
"exception": false
}
},
{
"instance": "WBTC_1",
"method": "_mint",
"calldata": [
"0xdeadbeef00000000000000000000000000000042",
"1000000000"
],
"expected": {
"return_data": [],
"events": [
{
"topics": [
"0xddf252ad1be2c89b69c2b068fc378daa952ba7f163c4a11628f55a4df523b3ef",
"0x0000000000000000000000000000000000000000000000000000000000000000",
"0xdeadbeef00000000000000000000000000000042"
],
"values": [
"1000000000"
]
}
],
"exception": false
}
},
{
"instance": "WBTC_2",
"method": "_mint",
"calldata": [
"0xdeadbeef00000000000000000000000000000042",
"1000000000"
],
"expected": {
"return_data": [],
"events": [
{
"topics": [
"0xddf252ad1be2c89b69c2b068fc378daa952ba7f163c4a11628f55a4df523b3ef",
"0x0000000000000000000000000000000000000000000000000000000000000000",
"0xdeadbeef00000000000000000000000000000042"
],
"values": [
"1000000000"
]
}
],
"exception": false
}
},
{
"instance": "WBTC_1",
"caller": "0xdeadbeef00000000000000000000000000000042",
"method": "approve",
"calldata": [
"Mooniswap.address",
"500000000"
],
"expected": {
"return_data": [
"0x0000000000000000000000000000000000000000000000000000000000000001"
],
"events": [
{
"topics": [
"0x8c5be1e5ebec7d5bd14f71427d1e84f3dd0314c0f7b2291e5b200ac8c7c3b925",
"0xdeadbeef00000000000000000000000000000042",
"Mooniswap.address"
],
"values": [
"500000000"
]
}
],
"exception": false
}
},
{
"instance": "WBTC_2",
"caller": "0xdeadbeef00000000000000000000000000000042",
"method": "approve",
"calldata": [
"Mooniswap.address",
"500000000"
],
"expected": {
"return_data": [
"0x0000000000000000000000000000000000000000000000000000000000000001"
],
"events": [
{
"topics": [
"0x8c5be1e5ebec7d5bd14f71427d1e84f3dd0314c0f7b2291e5b200ac8c7c3b925",
"0xdeadbeef00000000000000000000000000000042",
"Mooniswap.address"
],
"values": [
"500000000"
]
}
],
"exception": false
}
},
{
"instance": "Mooniswap",
"caller": "0xdeadbeef00000000000000000000000000000042",
"method": "deposit",
"calldata": [
"0x0000000000000000000000000000000000000000000000000000000000000040",
"0x00000000000000000000000000000000000000000000000000000000000000a0",
"0x0000000000000000000000000000000000000000000000000000000000000002",
"10000000",
"10000000",
"0x0000000000000000000000000000000000000000000000000000000000000002",
"1000000",
"1000000"
],
"expected": {
"return_data": [
"10000000"
],
"events": [
{
"topics": [
"0xddf252ad1be2c89b69c2b068fc378daa952ba7f163c4a11628f55a4df523b3ef",
"0x0000000000000000000000000000000000000000000000000000000000000000",
"Mooniswap.address"
],
"values": [
"1000"
]
},
{
"topics": [
"0xddf252ad1be2c89b69c2b068fc378daa952ba7f163c4a11628f55a4df523b3ef",
"0xdeadbeef00000000000000000000000000000042",
"Mooniswap.address"
],
"values": [
"10000000"
]
},
{
"topics": [
"0x8c5be1e5ebec7d5bd14f71427d1e84f3dd0314c0f7b2291e5b200ac8c7c3b925",
"0xdeadbeef00000000000000000000000000000042",
"Mooniswap.address"
],
"values": [
"490000000"
]
},
{
"topics": [
"0xddf252ad1be2c89b69c2b068fc378daa952ba7f163c4a11628f55a4df523b3ef",
"0xdeadbeef00000000000000000000000000000042",
"Mooniswap.address"
],
"values": [
"10000000"
]
},
{
"topics": [
"0x8c5be1e5ebec7d5bd14f71427d1e84f3dd0314c0f7b2291e5b200ac8c7c3b925",
"0xdeadbeef00000000000000000000000000000042",
"Mooniswap.address"
],
"values": [
"490000000"
]
},
{
"topics": [
"0xddf252ad1be2c89b69c2b068fc378daa952ba7f163c4a11628f55a4df523b3ef",
"0x0000000000000000000000000000000000000000000000000000000000000000",
"0xdeadbeef00000000000000000000000000000042"
],
"values": [
"10000000"
]
},
{
"topics": [
"0x2da466a7b24304f47e87fa2e1e5a81b9831ce54fec19055ce277ca2f39ba42c4",
"0xdeadbeef00000000000000000000000000000042"
],
"values": [
"10000000"
]
}
],
"exception": false
}
},
{
"instance": "Mooniswap",
"caller": "0xdeadbeef00000000000000000000000000000042",
"method": "swap",
"calldata": [
"WBTC_1.address",
"WBTC_2.address",
"5000",
"5000",
"0"
]
}
],
"expected": {
"return_data": [
"5000"
],
"events": [
{
"topics": [
"0xddf252ad1be2c89b69c2b068fc378daa952ba7f163c4a11628f55a4df523b3ef",
"0xdeadbeef00000000000000000000000000000042",
"Mooniswap.address"
],
"values": [
"5000"
]
},
{
"topics": [
"0x8c5be1e5ebec7d5bd14f71427d1e84f3dd0314c0f7b2291e5b200ac8c7c3b925",
"0xdeadbeef00000000000000000000000000000042",
"Mooniswap.address"
],
"values": [
"489995000"
]
}
],
"exception": false
}
}
],
"contracts": {
"Mooniswap": "Mooniswap.sol:Mooniswap",
"WBTC_1": "ERC20/ERC20.sol:ERC20",
"WBTC_2": "ERC20/ERC20.sol:ERC20",
"VirtualBalance": "Mooniswap.sol:VirtualBalance",
"Math": "math/Math.sol:Math"
},
"libraries": {
"Mooniswap.sol": {
"VirtualBalance": "VirtualBalance"
},
"math/Math.sol": {
"Math": "Math"
}
},
"group": "Real life"
}
+1
@@ -0,0 +1 @@
+20
@@ -0,0 +1,20 @@
[package]
name = "revive-dt-common"
description = "A library containing common concepts that other crates in the workspace can rely on"
version.workspace = true
authors.workspace = true
license.workspace = true
edition.workspace = true
repository.workspace = true
rust-version.workspace = true
[dependencies]
anyhow = { workspace = true }
moka = { workspace = true, features = ["sync"] }
once_cell = { workspace = true }
semver = { workspace = true }
serde = { workspace = true }
tokio = { workspace = true, default-features = false, features = ["time"] }
[lints]
workspace = true
+49
@@ -0,0 +1,49 @@
//! This module implements a cached file system, allowing results to be stored in-memory rather
//! than being queried from the file system again.
use std::fs;
use std::io::{Error, Result};
use std::path::{Path, PathBuf};
use moka::sync::Cache;
use once_cell::sync::Lazy;
pub fn read(path: impl AsRef<Path>) -> Result<Vec<u8>> {
static READ_CACHE: Lazy<Cache<PathBuf, Vec<u8>>> = Lazy::new(|| Cache::new(10_000));
let path = path.as_ref().canonicalize()?;
match READ_CACHE.get(path.as_path()) {
Some(content) => Ok(content),
None => {
let content = fs::read(path.as_path())?;
READ_CACHE.insert(path, content.clone());
Ok(content)
}
}
}
pub fn read_to_string(path: impl AsRef<Path>) -> Result<String> {
let content = read(path)?;
String::from_utf8(content).map_err(|_| {
Error::new(
std::io::ErrorKind::InvalidData,
"The contents of the file are not valid UTF8",
)
})
}
pub fn read_dir(path: impl AsRef<Path>) -> Result<Box<dyn Iterator<Item = Result<PathBuf>>>> {
static READ_DIR_CACHE: Lazy<Cache<PathBuf, Vec<PathBuf>>> = Lazy::new(|| Cache::new(10_000));
let path = path.as_ref().canonicalize()?;
match READ_DIR_CACHE.get(path.as_path()) {
Some(entries) => Ok(Box::new(entries.into_iter().map(Ok)) as Box<_>),
None => {
let entries = fs::read_dir(path.as_path())?
.flat_map(|maybe_entry| maybe_entry.map(|entry| entry.path()))
.collect();
READ_DIR_CACHE.insert(path.clone(), entries);
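            // The cache has just been populated for this path, so this
            // recursive call is guaranteed to take the cached branch above.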
Ok(read_dir(path).unwrap())
}
}
}
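A minimal usage sketch of the helpers above, assuming the crate is consumed as `revive-dt-common` (the paths are illustrative): a repeated read of the same canonicalized path is served from the in-memory cache instead of touching the file system again.

```rust
use revive_dt_common::cached_fs;

fn main() -> std::io::Result<()> {
    // The first read populates the cache; the second is served from memory.
    let first = cached_fs::read_to_string("metadata/test.json")?;
    let second = cached_fs::read_to_string("metadata/test.json")?;
    assert_eq!(first, second);

    // Directory listings are cached the same way.
    for entry in cached_fs::read_dir("metadata")? {
        println!("{}", entry?.display());
    }
    Ok(())
}
```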
+31
@@ -0,0 +1,31 @@
use std::{
fs::{read_dir, remove_dir_all, remove_file},
path::Path,
};
use anyhow::{Context, Result};
/// This function clears the passed directory of all of the files and directories contained within
/// it, without deleting the directory itself.
pub fn clear_directory(path: impl AsRef<Path>) -> Result<()> {
for entry in read_dir(path.as_ref())
.with_context(|| format!("Failed to read directory: {}", path.as_ref().display()))?
{
let entry = entry.with_context(|| {
format!(
"Failed to read an entry in directory: {}",
path.as_ref().display()
)
})?;
let entry_path = entry.path();
if entry_path.is_file() {
remove_file(&entry_path)
.with_context(|| format!("Failed to remove file: {}", entry_path.display()))?
} else {
remove_dir_all(&entry_path)
.with_context(|| format!("Failed to remove directory: {}", entry_path.display()))?
}
}
Ok(())
}
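A short usage sketch (the `workdir` path is illustrative): the directory itself survives, so anything holding a handle to it stays valid.

```rust
use revive_dt_common::fs::clear_directory;

fn main() -> anyhow::Result<()> {
    std::fs::create_dir_all("workdir")?;
    // Empty the working directory between runs without removing it.
    clear_directory("workdir")?;
    Ok(())
}
```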
+3
@@ -0,0 +1,3 @@
mod clear_dir;
pub use clear_dir::*;
+3
@@ -0,0 +1,3 @@
mod poll;
pub use poll::*;
+72
@@ -0,0 +1,72 @@
use std::ops::ControlFlow;
use std::time::Duration;
use anyhow::{Context as _, Result, anyhow};
const EXPONENTIAL_BACKOFF_MAX_WAIT_DURATION: Duration = Duration::from_secs(60);
/// A function that polls for a fallible future for some period of time and errors if it fails to
/// get a result after polling.
///
/// Given a future that returns a [`Result<ControlFlow<O, ()>>`], this function calls the future
/// repeatedly (with some wait period) until the future returns a [`ControlFlow::Break`] or until it
/// returns an [`Err`] in which case the function stops polling and returns the error.
///
/// If the future keeps returning [`ControlFlow::Continue`] and fails to return a [`Break`] within
/// the permitted polling duration, then this function returns an [`Err`].
///
/// [`Break`]: ControlFlow::Break
/// [`Continue`]: ControlFlow::Continue
pub async fn poll<F, O>(
polling_duration: Duration,
polling_wait_behavior: PollingWaitBehavior,
mut future: impl FnMut() -> F,
) -> Result<O>
where
F: Future<Output = Result<ControlFlow<O, ()>>>,
{
let mut retries = 0;
let mut total_wait_duration = Duration::ZERO;
let max_allowed_wait_duration = polling_duration;
loop {
if total_wait_duration >= max_allowed_wait_duration {
break Err(anyhow!(
"Polling failed after {} retries and a total of {:?} of wait time",
retries,
total_wait_duration
));
}
match future()
.await
.context("Polled future returned an error during polling loop")?
{
ControlFlow::Continue(()) => {
let next_wait_duration = match polling_wait_behavior {
PollingWaitBehavior::Constant(duration) => duration,
PollingWaitBehavior::ExponentialBackoff => {
Duration::from_secs(2u64.pow(retries))
.min(EXPONENTIAL_BACKOFF_MAX_WAIT_DURATION)
}
};
let next_wait_duration =
next_wait_duration.min(max_allowed_wait_duration - total_wait_duration);
total_wait_duration += next_wait_duration;
retries += 1;
tokio::time::sleep(next_wait_duration).await;
}
ControlFlow::Break(output) => {
break Ok(output);
}
}
}
}
#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Hash, Default)]
pub enum PollingWaitBehavior {
Constant(Duration),
#[default]
ExponentialBackoff,
}
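A minimal usage sketch of `poll`, with a counter standing in for a real node query such as fetching a transaction receipt (assumes tokio's `macros` feature is available to the consumer):

```rust
use std::ops::ControlFlow;
use std::time::Duration;

use revive_dt_common::futures::{PollingWaitBehavior, poll};

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    // Stand-in for a node query that only becomes ready on the third attempt.
    let mut attempts = 0u32;
    let receipt = poll(
        Duration::from_secs(60),
        PollingWaitBehavior::ExponentialBackoff,
        move || {
            attempts += 1;
            let ready = attempts >= 3;
            async move {
                if ready {
                    Ok(ControlFlow::Break("receipt"))
                } else {
                    // Not ready yet; `poll` sleeps 1s, 2s, 4s, ... between calls,
                    // capped at 60s and at the remaining polling budget.
                    Ok(ControlFlow::Continue(()))
                }
            }
        },
    )
    .await?;
    assert_eq!(receipt, "receipt");
    Ok(())
}
```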
@@ -0,0 +1,21 @@
/// An iterator that could be either of two iterators.
#[derive(Clone, Debug)]
pub enum EitherIter<A, B> {
A(A),
B(B),
}
impl<A, B, T> Iterator for EitherIter<A, B>
where
A: Iterator<Item = T>,
B: Iterator<Item = T>,
{
type Item = T;
fn next(&mut self) -> Option<Self::Item> {
match self {
EitherIter::A(iter) => iter.next(),
EitherIter::B(iter) => iter.next(),
}
}
}
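A usage sketch: `EitherIter` lets a function return one concrete iterator type from branches that would otherwise produce two incompatible types, without boxing.

```rust
use revive_dt_common::iterators::EitherIter;

fn digits(reversed: bool) -> EitherIter<impl Iterator<Item = u32>, impl Iterator<Item = u32>> {
    if reversed {
        EitherIter::A((0..10).rev())
    } else {
        EitherIter::B(0..10)
    }
}

fn main() {
    let mut forward = digits(false);
    let mut backward = digits(true);
    assert_eq!(forward.next(), Some(0));
    assert_eq!(backward.next(), Some(9));
}
```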
@@ -0,0 +1,91 @@
use std::{
borrow::Cow,
collections::HashSet,
path::{Path, PathBuf},
};
/// An iterator that finds files of a certain extension in the provided directory. You can think of
/// this as a glob pattern similar to: `${path}/**/*.md`
pub struct FilesWithExtensionIterator {
/// The set of allowed extensions that match the requirement and that should be returned
/// when found.
allowed_extensions: HashSet<Cow<'static, str>>,
/// The set of directories to visit next. This iterator does a depth-first traversal and so these
/// directories will only be visited if we can't find any files in our state.
directories_to_search: Vec<PathBuf>,
/// The set of files matching the allowed extensions that were found. If there are entries in
/// this vector then they will be returned when the [`Iterator::next`] method is called. If not
/// then we visit one of the next directories to visit.
files_matching_allowed_extensions: Vec<PathBuf>,
/// This option controls whether the cached file system should be used or not. This could be
/// better for certain cases where the entries in the directories do not change and therefore
/// caching can be used.
use_cached_fs: bool,
}
impl FilesWithExtensionIterator {
pub fn new(root_directory: impl AsRef<Path>) -> Self {
Self {
allowed_extensions: Default::default(),
directories_to_search: vec![root_directory.as_ref().to_path_buf()],
files_matching_allowed_extensions: Default::default(),
use_cached_fs: Default::default(),
}
}
pub fn with_allowed_extension(
mut self,
allowed_extension: impl Into<Cow<'static, str>>,
) -> Self {
self.allowed_extensions.insert(allowed_extension.into());
self
}
pub fn with_use_cached_fs(mut self, use_cached_fs: bool) -> Self {
self.use_cached_fs = use_cached_fs;
self
}
}
impl Iterator for FilesWithExtensionIterator {
type Item = PathBuf;
fn next(&mut self) -> Option<Self::Item> {
if let Some(file_path) = self.files_matching_allowed_extensions.pop() {
return Some(file_path);
};
let directory_to_search = self.directories_to_search.pop()?;
let iterator = if self.use_cached_fs {
let Ok(dir_entries) = crate::cached_fs::read_dir(directory_to_search.as_path()) else {
return self.next();
};
Box::new(dir_entries) as Box<dyn Iterator<Item = std::io::Result<PathBuf>>>
} else {
let Ok(dir_entries) = std::fs::read_dir(directory_to_search) else {
return self.next();
};
Box::new(dir_entries.map(|maybe_entry| maybe_entry.map(|entry| entry.path()))) as Box<_>
};
for entry_path in iterator.flatten() {
if entry_path.is_dir() {
self.directories_to_search.push(entry_path)
} else if entry_path.is_file()
&& entry_path.extension().is_some_and(|ext| {
self.allowed_extensions
.iter()
.any(|allowed| ext.eq_ignore_ascii_case(allowed.as_ref()))
})
{
self.files_matching_allowed_extensions.push(entry_path)
}
}
self.next()
}
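A usage sketch of the builder (the `corpus` directory is illustrative): this is the moral equivalent of the glob `corpus/**/*.{json,sol}`, with the cached file system opted into because a test corpus does not change mid-run.

```rust
use revive_dt_common::iterators::FilesWithExtensionIterator;

fn main() {
    let metadata_files: Vec<_> = FilesWithExtensionIterator::new("corpus")
        .with_allowed_extension("json")
        .with_allowed_extension("sol")
        .with_use_cached_fs(true)
        .collect();
    println!("found {} metadata files", metadata_files.len());
}
```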
}
+5
@@ -0,0 +1,5 @@
mod either_iter;
mod files_with_extension_iterator;
pub use either_iter::*;
pub use files_with_extension_iterator::*;
+9
@@ -0,0 +1,9 @@
//! This crate provides common concepts, functionality, types, macros, and more that other crates in
//! the workspace can benefit from.
pub mod cached_fs;
pub mod fs;
pub mod futures;
pub mod iterators;
pub mod macros;
pub mod types;
@@ -0,0 +1,140 @@
#[macro_export]
macro_rules! impl_for_wrapper {
(Display, $ident: ident) => {
#[automatically_derived]
impl std::fmt::Display for $ident {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
std::fmt::Display::fmt(&self.0, f)
}
}
};
(FromStr, $ident: ident) => {
#[automatically_derived]
impl std::str::FromStr for $ident {
type Err = anyhow::Error;
fn from_str(s: &str) -> anyhow::Result<Self> {
s.parse().map(Self).map_err(Into::into)
}
}
};
}
/// Defines wrappers around types.
///
/// For example, the macro invocation seen below:
///
/// ```rust,ignore
/// define_wrapper_type!(
///     pub struct CaseId(usize);
/// );
/// ```
///
/// Would define a wrapper type that looks like the following:
///
/// ```rust,ignore
/// pub struct CaseId(usize);
/// ```
///
/// And would also implement a number of methods on this type making it easier to use.
///
/// These wrapper types become very useful as they make the code a lot easier to read.
///
/// Take the following as an example:
///
/// ```rust,ignore
/// struct State {
/// contracts: HashMap<usize, HashMap<String, Vec<u8>>>
/// }
/// ```
///
/// In the above code it's hard to understand what the various types refer to or what to expect them
/// to contain.
///
/// With these wrapper types we're able to create code that's self-documenting in that the types
/// tell us what the code is referring to. The above code is transformed into
///
/// ```rust,ignore
/// struct State {
/// contracts: HashMap<CaseId, HashMap<ContractName, ContractByteCode>>
/// }
/// ```
///
/// Note that we follow the same syntax for defining wrapper structs but we do not permit the use of
/// generics.
#[macro_export]
macro_rules! define_wrapper_type {
(
$(#[$meta: meta])*
$vis:vis struct $ident: ident($ty: ty)
$(
impl $($trait_ident: ident),*
)?
;
) => {
$(#[$meta])*
$vis struct $ident($ty);
impl $ident {
pub fn new(value: impl Into<$ty>) -> Self {
Self(value.into())
}
pub fn into_inner(self) -> $ty {
self.0
}
pub fn as_inner(&self) -> &$ty {
&self.0
}
}
impl AsRef<$ty> for $ident {
fn as_ref(&self) -> &$ty {
&self.0
}
}
impl AsMut<$ty> for $ident {
fn as_mut(&mut self) -> &mut $ty {
&mut self.0
}
}
impl std::ops::Deref for $ident {
type Target = $ty;
fn deref(&self) -> &Self::Target {
&self.0
}
}
impl std::ops::DerefMut for $ident {
fn deref_mut(&mut self) -> &mut Self::Target {
&mut self.0
}
}
impl From<$ty> for $ident {
fn from(value: $ty) -> Self {
Self(value)
}
}
impl From<$ident> for $ty {
fn from(value: $ident) -> Self {
value.0
}
}
$(
$(
$crate::macros::impl_for_wrapper!($trait_ident, $ident);
)*
)?
};
}
/// Technically not needed, but this allows the macros to be found in the `macros` module of the
/// crate in addition to the crate root.
pub use {define_wrapper_type, impl_for_wrapper};
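A hedged sketch of invoking the macro above (the derive list is an assumption; the `impl Display` arm only requires that the inner type implements `Display`):
define_wrapper_type!(
#[derive(Clone, Debug, PartialEq, Eq, Hash)]
pub struct CaseId(usize)
impl Display;
);
// `new` accepts anything convertible into the inner type.
let id = CaseId::new(42usize);
assert_eq!(id.to_string(), "42");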
@@ -0,0 +1,3 @@
mod define_wrapper_type;
pub use define_wrapper_type::*;
@@ -0,0 +1,5 @@
mod mode;
mod version_or_requirement;
pub use mode::*;
pub use version_or_requirement::*;
@@ -0,0 +1,173 @@
use crate::types::VersionOrRequirement;
use semver::Version;
use serde::{Deserialize, Serialize};
use std::fmt::Display;
use std::str::FromStr;
use std::sync::LazyLock;
/// This represents a mode that a given test should be run with, if possible.
///
/// We obtain this by taking a [`ParsedMode`], which may be looser or more strict
/// in its requirements, and then expanding it out into a list of [`Mode`]s.
///
/// Use [`ParsedMode::to_test_modes()`] to do this.
#[derive(Clone, Debug, PartialEq, Eq, Hash, Serialize, Deserialize)]
pub struct Mode {
pub pipeline: ModePipeline,
pub optimize_setting: ModeOptimizerSetting,
pub version: Option<semver::VersionReq>,
}
impl Display for Mode {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
self.pipeline.fmt(f)?;
f.write_str(" ")?;
self.optimize_setting.fmt(f)?;
if let Some(version) = &self.version {
f.write_str(" ")?;
version.fmt(f)?;
}
Ok(())
}
}
impl Mode {
/// Return all of the available mode combinations.
pub fn all() -> impl Iterator<Item = &'static Mode> {
static ALL_MODES: LazyLock<Vec<Mode>> = LazyLock::new(|| {
ModePipeline::test_cases()
.flat_map(|pipeline| {
ModeOptimizerSetting::test_cases().map(move |optimize_setting| Mode {
pipeline,
optimize_setting,
version: None,
})
})
.collect::<Vec<_>>()
});
ALL_MODES.iter()
}
/// Resolves the [`Mode`]'s solidity version requirement into a [`VersionOrRequirement`] if
/// the requirement is present on the object. Otherwise, the passed default version is used.
pub fn compiler_version_to_use(&self, default: Version) -> VersionOrRequirement {
match self.version {
Some(ref requirement) => requirement.clone().into(),
None => default.into(),
}
}
}
/// What do we want the compiler to do?
#[derive(Clone, Copy, Debug, Eq, PartialEq, Ord, PartialOrd, Hash, Serialize, Deserialize)]
pub enum ModePipeline {
/// Compile Solidity code via Yul IR
ViaYulIR,
/// Compile Solidity direct to assembly
ViaEVMAssembly,
}
impl FromStr for ModePipeline {
type Err = anyhow::Error;
fn from_str(s: &str) -> Result<Self, Self::Err> {
match s {
// via Yul IR
"Y" => Ok(ModePipeline::ViaYulIR),
// Don't go via Yul IR
"E" => Ok(ModePipeline::ViaEVMAssembly),
// Anything else that we see isn't a mode at all
_ => Err(anyhow::anyhow!(
"Unsupported pipeline '{s}': expected 'Y' or 'E'"
)),
}
}
}
impl Display for ModePipeline {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match self {
ModePipeline::ViaYulIR => f.write_str("Y"),
ModePipeline::ViaEVMAssembly => f.write_str("E"),
}
}
}
impl ModePipeline {
/// Should we go via Yul IR?
pub fn via_yul_ir(&self) -> bool {
matches!(self, ModePipeline::ViaYulIR)
}
/// An iterator over the available pipelines that we'd like to test,
/// when an explicit pipeline was not specified.
pub fn test_cases() -> impl Iterator<Item = ModePipeline> + Clone {
[ModePipeline::ViaYulIR, ModePipeline::ViaEVMAssembly].into_iter()
}
}
#[derive(Clone, Copy, Debug, Eq, PartialEq, Ord, PartialOrd, Hash, Serialize, Deserialize)]
pub enum ModeOptimizerSetting {
/// 0 / -: Don't apply any optimizations
M0,
/// 1: Apply less than default optimizations
M1,
/// 2: Apply the default optimizations
M2,
/// 3 / +: Apply aggressive optimizations
M3,
/// s: Optimize for size
Ms,
/// z: Aggressively optimize for size
Mz,
}
impl FromStr for ModeOptimizerSetting {
type Err = anyhow::Error;
fn from_str(s: &str) -> Result<Self, Self::Err> {
match s {
"M0" => Ok(ModeOptimizerSetting::M0),
"M1" => Ok(ModeOptimizerSetting::M1),
"M2" => Ok(ModeOptimizerSetting::M2),
"M3" => Ok(ModeOptimizerSetting::M3),
"Ms" => Ok(ModeOptimizerSetting::Ms),
"Mz" => Ok(ModeOptimizerSetting::Mz),
_ => Err(anyhow::anyhow!(
"Unsupported optimizer setting '{s}': expected 'M0', 'M1', 'M2', 'M3', 'Ms' or 'Mz'"
)),
}
}
}
impl Display for ModeOptimizerSetting {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match self {
ModeOptimizerSetting::M0 => f.write_str("M0"),
ModeOptimizerSetting::M1 => f.write_str("M1"),
ModeOptimizerSetting::M2 => f.write_str("M2"),
ModeOptimizerSetting::M3 => f.write_str("M3"),
ModeOptimizerSetting::Ms => f.write_str("Ms"),
ModeOptimizerSetting::Mz => f.write_str("Mz"),
}
}
}
impl ModeOptimizerSetting {
/// An iterator over the available optimizer settings that we'd like to test,
/// when an explicit optimizer setting was not specified.
pub fn test_cases() -> impl Iterator<Item = ModeOptimizerSetting> + Clone {
[
// No optimizations:
ModeOptimizerSetting::M0,
// Aggressive optimizations:
ModeOptimizerSetting::M3,
]
.into_iter()
}
/// Are any optimizations enabled?
pub fn optimizations_enabled(&self) -> bool {
!matches!(self, ModeOptimizerSetting::M0)
}
}
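Taken together, a hedged sketch of how these types behave, with values following the `test_cases` iterators above:
// Mode::all() expands to the pipeline x optimizer test matrix: Y M0, Y M3, E M0, E M3.
for mode in Mode::all() {
println!("{mode}");
}
assert_eq!("Y".parse::<ModePipeline>().unwrap(), ModePipeline::ViaYulIR);
assert_eq!("M3".parse::<ModeOptimizerSetting>().unwrap(), ModeOptimizerSetting::M3);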
@@ -0,0 +1,41 @@
use semver::{Version, VersionReq};
#[derive(Clone, Debug)]
pub enum VersionOrRequirement {
Version(Version),
Requirement(VersionReq),
}
impl From<Version> for VersionOrRequirement {
fn from(value: Version) -> Self {
Self::Version(value)
}
}
impl From<VersionReq> for VersionOrRequirement {
fn from(value: VersionReq) -> Self {
Self::Requirement(value)
}
}
impl TryFrom<VersionOrRequirement> for Version {
type Error = anyhow::Error;
fn try_from(value: VersionOrRequirement) -> Result<Self, Self::Error> {
let VersionOrRequirement::Version(version) = value else {
anyhow::bail!("Version or requirement was not a version");
};
Ok(version)
}
}
impl TryFrom<VersionOrRequirement> for VersionReq {
type Error = anyhow::Error;
fn try_from(value: VersionOrRequirement) -> Result<Self, Self::Error> {
let VersionOrRequirement::Requirement(requirement) = value else {
anyhow::bail!("Version or requirement was not a requirement");
};
Ok(requirement)
}
}
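A small hedged sketch of the conversions above:
let v: VersionOrRequirement = Version::new(0, 8, 30).into();
// Converting back succeeds only for the matching variant.
assert!(Version::try_from(v.clone()).is_ok());
assert!(VersionReq::try_from(v).is_err());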
@@ -9,11 +9,22 @@ repository.workspace = true
rust-version.workspace = true
[dependencies]
revive-solc-json-interface = { workspace = true }
revive-dt-common = { workspace = true }
revive-dt-config = { workspace = true }
revive-dt-solc-binaries = { workspace = true }
revive-common = { workspace = true }
alloy = { workspace = true }
alloy-primitives = { workspace = true }
anyhow = { workspace = true }
dashmap = { workspace = true }
foundry-compilers-artifacts = { workspace = true }
semver = { workspace = true }
serde = { workspace = true }
serde_json = { workspace = true }
tracing = { workspace = true }
tokio = { workspace = true }
[lints]
workspace = true
@@ -4,166 +4,181 @@
//! - Polkadot revive Wasm compiler
use std::{
collections::HashMap,
hash::Hash,
path::{Path, PathBuf},
};
use alloy::json_abi::JsonAbi;
use alloy_primitives::Address;
use anyhow::{Context, Result};
use semver::Version;
use serde::{Deserialize, Serialize};
use revive_common::EVMVersion;
use revive_dt_common::cached_fs::read_to_string;
use revive_dt_common::types::VersionOrRequirement;
use revive_dt_config::Arguments;
// Re-export this as it's a part of the compiler interface.
pub use revive_dt_common::types::{Mode, ModeOptimizerSetting, ModePipeline};
pub mod revive_js;
pub mod revive_resolc;
pub mod solc;
/// A common interface for all supported Solidity compilers.
pub trait SolidityCompiler: Sized {
/// Instantiates a new compiler object.
///
/// Based on the given [`Arguments`] and [`VersionOrRequirement`] this function instantiates a
/// new compiler object. Certain implementations of this trait might choose to cache the
/// compiler objects and return the same ones over and over again.
fn new(
config: &Arguments,
version: impl Into<Option<VersionOrRequirement>>,
) -> impl Future<Output = Result<Self>>;
/// Returns the version of the compiler.
fn version(&self) -> &Version;
/// Returns the path of the compiler executable.
fn path(&self) -> &Path;
/// The low-level compiler interface.
fn build(&self, input: CompilerInput) -> impl Future<Output = Result<CompilerOutput>>;
/// Whether the compiler supports the provided mode and version settings.
fn supports_mode(
&self,
optimizer_setting: ModeOptimizerSetting,
pipeline: ModePipeline,
) -> bool;
}
/// The generic compilation input configuration.
#[derive(Clone, Debug, Default, Serialize, Deserialize)]
pub struct CompilerInput {
pub pipeline: Option<ModePipeline>,
pub optimization: Option<ModeOptimizerSetting>,
pub evm_version: Option<EVMVersion>,
pub allow_paths: Vec<PathBuf>,
pub base_path: Option<PathBuf>,
pub sources: HashMap<PathBuf, String>,
pub libraries: HashMap<PathBuf, HashMap<String, Address>>,
pub revert_string_handling: Option<RevertString>,
}
/// The generic compilation output configuration.
#[derive(Debug, Clone, Default, Serialize, Deserialize)]
pub struct CompilerOutput {
/// The compiled contracts. The bytecode of the contract is kept as a string in case linking
/// is required and the compiled source has placeholders.
pub contracts: HashMap<PathBuf, HashMap<String, (String, JsonAbi)>>,
}
/// A generic builder style interface for configuring the supported compiler options.
#[derive(Default)]
pub struct Compiler {
input: CompilerInput,
}
impl Compiler {
pub fn new() -> Self {
Self {
input: CompilerInput {
pipeline: Default::default(),
optimization: Default::default(),
evm_version: Default::default(),
allow_paths: Default::default(),
base_path: Default::default(),
sources: Default::default(),
libraries: Default::default(),
revert_string_handling: Default::default(),
},
}
}
pub fn with_optimization(mut self, value: impl Into<Option<ModeOptimizerSetting>>) -> Self {
self.input.optimization = value.into();
self
}
pub fn with_pipeline(mut self, value: impl Into<Option<ModePipeline>>) -> Self {
self.input.pipeline = value.into();
self
}
pub fn with_evm_version(mut self, version: impl Into<Option<EVMVersion>>) -> Self {
self.input.evm_version = version.into();
self
}
pub fn with_allow_path(mut self, path: impl AsRef<Path>) -> Self {
self.input.allow_paths.push(path.as_ref().into());
self
}
pub fn with_base_path(mut self, path: impl Into<Option<PathBuf>>) -> Self {
self.input.base_path = path.into();
self
}
pub fn with_source(mut self, path: impl AsRef<Path>) -> Result<Self> {
self.input.sources.insert(
path.as_ref().to_path_buf(),
read_to_string(path.as_ref()).context("Failed to read the contract source")?,
);
Ok(self)
}
pub fn with_library(
mut self,
path: impl AsRef<Path>,
name: impl AsRef<str>,
address: Address,
) -> Self {
self.input
.libraries
.entry(path.as_ref().to_path_buf())
.or_default()
.insert(name.as_ref().into(), address);
self
}
pub fn with_revert_string_handling(
mut self,
revert_string_handling: impl Into<Option<RevertString>>,
) -> Self {
self.input.revert_string_handling = revert_string_handling.into();
self
}
pub fn then(self, callback: impl FnOnce(Self) -> Self) -> Self {
callback(self)
}
pub fn try_then<E>(self, callback: impl FnOnce(Self) -> Result<Self, E>) -> Result<Self, E> {
callback(self)
}
pub async fn try_build(self, compiler: &impl SolidityCompiler) -> Result<CompilerOutput> {
compiler.build(self.input).await
}
pub fn input(&self) -> &CompilerInput {
&self.input
}
}
/// Defines how the compiler should handle revert strings.
#[derive(
Clone, Debug, Copy, PartialEq, Eq, PartialOrd, Ord, Hash, Default, Serialize, Deserialize,
)]
pub enum RevertString {
#[default]
Default,
Debug,
Strip,
VerboseDebug,
}
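For orientation, a hedged sketch of driving the builder above from inside an async function returning anyhow::Result; the source path is illustrative and `compiler` is any `SolidityCompiler` implementation (the tests later in this diff show the real usage):
let output = Compiler::new()
.with_optimization(ModeOptimizerSetting::M3)
.with_pipeline(ModePipeline::ViaYulIR)
.with_source("contracts/Main.sol")? // hypothetical path
.try_build(&compiler)
.await?;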
@@ -3,84 +3,280 @@
use std::{
path::PathBuf,
process::Stdio,
sync::{Arc, LazyLock},
};
use dashmap::DashMap;
use revive_dt_common::types::VersionOrRequirement;
use revive_dt_config::Arguments;
use revive_solc_json_interface::{
SolcStandardJsonInput, SolcStandardJsonInputLanguage, SolcStandardJsonInputSettings,
SolcStandardJsonInputSettingsOptimizer, SolcStandardJsonInputSettingsSelection,
SolcStandardJsonOutput,
};
use crate::{
CompilerInput, CompilerOutput, ModeOptimizerSetting, ModePipeline, SolidityCompiler, solc::Solc,
};
use alloy::json_abi::JsonAbi;
use anyhow::{Context, Result};
use semver::Version;
use tokio::{io::AsyncWriteExt, process::Command as AsyncCommand};
/// A wrapper around the `resolc` binary, emitting PVM-compatible bytecode.
#[derive(Clone, Debug, PartialEq, Eq, PartialOrd, Ord, Hash)]
pub struct Resolc(Arc<ResolcInner>);
#[derive(Clone, Debug, PartialEq, Eq, PartialOrd, Ord, Hash)]
struct ResolcInner {
/// The internal solc compiler that the resolc compiler uses as a compiler frontend.
solc: Solc,
/// Path to the `resolc` executable.
resolc_path: PathBuf,
}
impl SolidityCompiler for Resolc {
async fn new(
config: &Arguments,
version: impl Into<Option<VersionOrRequirement>>,
) -> Result<Self> {
/// This is a cache of all of the resolc compiler objects. We do not currently support
/// multiple resolc compiler versions, so the cache is keyed by the solc compiler (and
/// therefore its version) to the resolc compiler.
static COMPILERS_CACHE: LazyLock<DashMap<Solc, Resolc>> = LazyLock::new(Default::default);
let solc = Solc::new(config, version)
.await
.context("Failed to create the solc compiler frontend for resolc")?;
Ok(COMPILERS_CACHE
.entry(solc.clone())
.or_insert_with(|| {
Self(Arc::new(ResolcInner {
solc,
resolc_path: config.resolc.clone(),
}))
})
.clone())
}
fn version(&self) -> &Version {
// We currently return the solc compiler version since we do not support multiple resolc
// compiler versions.
self.0.solc.version()
}
fn path(&self) -> &std::path::Path {
&self.0.resolc_path
}
#[tracing::instrument(level = "debug", ret)]
async fn build(
&self,
CompilerInput {
pipeline,
optimization,
evm_version,
allow_paths,
base_path,
sources,
libraries,
// TODO: this is currently not being handled since there is no way to pass it into
// resolc. So, we need to go back to this later once it's supported.
revert_string_handling: _,
}: CompilerInput,
) -> Result<CompilerOutput> {
if !matches!(pipeline, None | Some(ModePipeline::ViaYulIR)) {
anyhow::bail!(
"Resolc only supports the Y (via Yul IR) pipeline, but the provided pipeline is {pipeline:?}"
);
}
let input = SolcStandardJsonInput {
language: SolcStandardJsonInputLanguage::Solidity,
sources: sources
.into_iter()
.map(|(path, source)| (path.display().to_string(), source.into()))
.collect(),
settings: SolcStandardJsonInputSettings {
evm_version,
libraries: Some(
libraries
.into_iter()
.map(|(source_code, libraries_map)| {
(
source_code.display().to_string(),
libraries_map
.into_iter()
.map(|(library_ident, library_address)| {
(library_ident, library_address.to_string())
})
.collect(),
)
})
.collect(),
),
remappings: None,
output_selection: Some(SolcStandardJsonInputSettingsSelection::new_required()),
via_ir: Some(true),
optimizer: SolcStandardJsonInputSettingsOptimizer::new(
optimization
.unwrap_or(ModeOptimizerSetting::M0)
.optimizations_enabled(),
None,
&Version::new(0, 0, 0),
false,
),
metadata: None,
polkavm: None,
},
};
let mut command = AsyncCommand::new(self.path());
command
.stdin(Stdio::piped())
.stdout(Stdio::piped())
.stderr(Stdio::piped())
.arg("--standard-json");
if let Some(ref base_path) = base_path {
command.arg("--base-path").arg(base_path);
}
if !allow_paths.is_empty() {
command.arg("--allow-paths").arg(
allow_paths
.iter()
.map(|path| path.display().to_string())
.collect::<Vec<_>>()
.join(","),
);
}
let mut child = command
.spawn()
.with_context(|| format!("Failed to spawn resolc at {}", self.path().display()))?;
let stdin_pipe = child.stdin.as_mut().expect("stdin must be piped");
let serialized_input = serde_json::to_vec(&input)
.context("Failed to serialize Standard JSON input for resolc")?;
stdin_pipe
.write_all(&serialized_input)
.await
.context("Failed to write Standard JSON to resolc stdin")?;
let output = child
.wait_with_output()
.await
.context("Failed while waiting for resolc process to finish")?;
let stdout = output.stdout;
let stderr = output.stderr;
if !output.status.success() {
let json_in = serde_json::to_string_pretty(&input)
.context("Failed to pretty-print Standard JSON input for logging")?;
let message = String::from_utf8_lossy(&stderr);
tracing::error!(
status = %output.status,
message = %message,
json_input = json_in,
"Compilation using resolc failed"
);
anyhow::bail!("Compilation failed with an error: {message}");
}
let parsed = serde_json::from_slice::<SolcStandardJsonOutput>(&stdout)
.map_err(|e| {
anyhow::anyhow!(
"failed to parse resolc JSON output: {e}\nstderr: {}",
String::from_utf8_lossy(&stderr)
)
})
.context("Failed to parse resolc standard JSON output")?;
tracing::debug!(
output = %serde_json::to_string(&parsed).unwrap(),
"Compiled successfully"
);
// Detecting if the compiler output contained errors and reporting them through logs and
// errors instead of returning the compiler output that might contain errors.
for error in parsed.errors.iter().flatten() {
if error.severity == "error" {
tracing::error!(
?error,
?input,
output = %serde_json::to_string(&parsed).unwrap(),
"Encountered an error in the compilation"
);
anyhow::bail!("Encountered an error in the compilation: {error}")
}
}
let Some(contracts) = parsed.contracts else {
anyhow::bail!("Unexpected error - resolc output doesn't have a contracts section");
};
let mut compiler_output = CompilerOutput::default();
for (source_path, contracts) in contracts.into_iter() {
let src_for_msg = source_path.clone();
let source_path = PathBuf::from(source_path)
.canonicalize()
.with_context(|| format!("Failed to canonicalize path {src_for_msg}"))?;
let map = compiler_output.contracts.entry(source_path).or_default();
for (contract_name, contract_information) in contracts.into_iter() {
let bytecode = contract_information
.evm
.and_then(|evm| evm.bytecode.clone())
.context("Unexpected - Contract compiled with resolc has no bytecode")?;
let abi = {
let metadata = contract_information
.metadata
.as_ref()
.context("No metadata found for the contract")?;
let solc_metadata_str = match metadata {
serde_json::Value::String(solc_metadata_str) => solc_metadata_str.as_str(),
serde_json::Value::Object(metadata_object) => {
let solc_metadata_value = metadata_object
.get("solc_metadata")
.context("Contract doesn't have a 'solc_metadata' field")?;
solc_metadata_value
.as_str()
.context("The 'solc_metadata' field is not a string")?
}
serde_json::Value::Null
| serde_json::Value::Bool(_)
| serde_json::Value::Number(_)
| serde_json::Value::Array(_) => {
anyhow::bail!("Unsupported type of metadata {metadata:?}")
}
};
let solc_metadata =
serde_json::from_str::<serde_json::Value>(solc_metadata_str).context(
"Failed to deserialize the solc_metadata as a serde_json generic value",
)?;
let output_value = solc_metadata
.get("output")
.context("solc_metadata doesn't have an output field")?;
let abi_value = output_value
.get("abi")
.context("solc_metadata output doesn't contain an abi field")?;
serde_json::from_value::<JsonAbi>(abi_value.clone())
.context("ABI found in solc_metadata output is not valid ABI")?
};
map.insert(contract_name, (bytecode.object, abi));
}
}
Ok(compiler_output)
}
fn supports_mode(
&self,
optimize_setting: ModeOptimizerSetting,
pipeline: ModePipeline,
) -> bool {
pipeline == ModePipeline::ViaYulIR && self.0.solc.supports_mode(optimize_setting, pipeline)
} }
}
@@ -3,61 +3,271 @@
use std::{
path::PathBuf,
process::Stdio,
sync::{Arc, LazyLock},
};
use dashmap::DashMap;
use revive_dt_common::types::VersionOrRequirement;
use revive_dt_config::Arguments;
use revive_dt_solc_binaries::download_solc;
use crate::{CompilerInput, CompilerOutput, ModeOptimizerSetting, ModePipeline, SolidityCompiler};
use anyhow::{Context, Result};
use foundry_compilers_artifacts::{
output_selection::{
BytecodeOutputSelection, ContractOutputSelection, EvmOutputSelection, OutputSelection,
},
solc::CompilerOutput as SolcOutput,
solc::*,
};
use semver::Version;
use tokio::{io::AsyncWriteExt, process::Command as AsyncCommand};
#[derive(Clone, Debug, PartialEq, Eq, PartialOrd, Ord, Hash)]
pub struct Solc(Arc<SolcInner>);
#[derive(Clone, Debug, PartialEq, Eq, PartialOrd, Ord, Hash)]
struct SolcInner {
/// The path of the solidity compiler executable that this object uses.
solc_path: PathBuf,
/// The version of the solidity compiler executable that this object uses.
solc_version: Version,
}
impl SolidityCompiler for Solc {
async fn new(
config: &Arguments,
version: impl Into<Option<VersionOrRequirement>>,
) -> Result<Self> {
// This is a cache for the compiler objects so that whenever the same compiler version is
// requested the same object is returned. We do this as we do not want to keep cloning the
// compiler around.
static COMPILERS_CACHE: LazyLock<DashMap<Version, Solc>> = LazyLock::new(Default::default);
// We attempt to download the solc binary. Note the following: this call does the version
// resolution for us. Therefore, even if the download didn't proceed, this function will
// resolve the version requirement into a canonical version of the compiler. It's then up
// to us to either use the provided path or not.
let version = version.into().unwrap_or_else(|| config.solc.clone().into());
let (version, path) = download_solc(config.directory(), version, false)
.await
.context("Failed to download/get path to solc binary")?;
Ok(COMPILERS_CACHE
.entry(version.clone())
.or_insert_with(|| {
Self(Arc::new(SolcInner {
solc_path: path,
solc_version: version,
}))
})
.clone())
}
fn version(&self) -> &Version {
&self.0.solc_version
}
fn path(&self) -> &std::path::Path {
&self.0.solc_path
}
#[tracing::instrument(level = "debug", ret)]
async fn build(
&self,
CompilerInput {
pipeline,
optimization,
evm_version,
allow_paths,
base_path,
sources,
libraries,
revert_string_handling,
}: CompilerInput,
) -> Result<CompilerOutput> {
// Be careful to entirely omit the viaIR field if the compiler does not support it,
// as it will error if you provide fields it does not know about. Because
// `supports_mode` is called prior to instantiating a compiler, we should never
// ask for something which is invalid.
let via_ir = match (pipeline, self.compiler_supports_yul()) {
(pipeline, true) => pipeline.map(|p| p.via_yul_ir()),
(_pipeline, false) => None,
};
let input = SolcInput {
language: SolcLanguage::Solidity,
sources: Sources(
sources
.into_iter()
.map(|(source_path, source_code)| (source_path, Source::new(source_code)))
.collect(),
),
settings: Settings {
optimizer: Optimizer {
enabled: optimization.map(|o| o.optimizations_enabled()),
details: Some(Default::default()),
..Default::default()
},
output_selection: OutputSelection::common_output_selection(
[
ContractOutputSelection::Abi,
ContractOutputSelection::Evm(EvmOutputSelection::ByteCode(
BytecodeOutputSelection::Object,
)),
]
.into_iter()
.map(|item| item.to_string()),
),
evm_version: evm_version.map(|version| version.to_string().parse().unwrap()),
via_ir,
libraries: Libraries {
libs: libraries
.into_iter()
.map(|(file_path, libraries)| {
(
file_path,
libraries
.into_iter()
.map(|(library_name, library_address)| {
(library_name, library_address.to_string())
})
.collect(),
)
})
.collect(),
},
debug: revert_string_handling.map(|revert_string_handling| DebuggingSettings {
revert_strings: match revert_string_handling {
crate::RevertString::Default => Some(RevertStrings::Default),
crate::RevertString::Debug => Some(RevertStrings::Debug),
crate::RevertString::Strip => Some(RevertStrings::Strip),
crate::RevertString::VerboseDebug => Some(RevertStrings::VerboseDebug),
},
debug_info: Default::default(),
}),
..Default::default()
},
};
let mut command = AsyncCommand::new(self.path());
command
.stdin(Stdio::piped())
.stdout(Stdio::piped())
.stderr(Stdio::piped())
.arg("--standard-json");
if let Some(ref base_path) = base_path {
command.arg("--base-path").arg(base_path);
}
if !allow_paths.is_empty() {
command.arg("--allow-paths").arg(
allow_paths
.iter()
.map(|path| path.display().to_string())
.collect::<Vec<_>>()
.join(","),
);
}
let mut child = command
.spawn()
.with_context(|| format!("Failed to spawn solc at {}", self.path().display()))?;
let stdin = child.stdin.as_mut().expect("should be piped");
let serialized_input = serde_json::to_vec(&input)
.context("Failed to serialize Standard JSON input for solc")?;
stdin
.write_all(&serialized_input)
.await
.context("Failed to write Standard JSON to solc stdin")?;
let output = child
.wait_with_output()
.await
.context("Failed while waiting for solc process to finish")?;
if !output.status.success() {
let json_in = serde_json::to_string_pretty(&input)
.context("Failed to pretty-print Standard JSON input for logging")?;
let message = String::from_utf8_lossy(&output.stderr);
tracing::error!(
status = %output.status,
message = %message,
json_input = json_in,
"Compilation using solc failed"
);
anyhow::bail!("Compilation failed with an error: {message}");
}
let parsed = serde_json::from_slice::<SolcOutput>(&output.stdout)
.map_err(|e| {
anyhow::anyhow!(
"failed to parse solc JSON output: {e}\nstdout: {}",
String::from_utf8_lossy(&output.stdout)
)
)
})
.context("Failed to parse solc standard JSON output")?;
// Detecting if the compiler output contained errors and reporting them through logs and
// errors instead of returning the compiler output that might contain errors.
for error in parsed.errors.iter() {
if error.severity == Severity::Error {
tracing::error!(?error, ?input, "Encountered an error in the compilation");
anyhow::bail!("Encountered an error in the compilation: {error}")
}
}
tracing::debug!(
output = %String::from_utf8_lossy(&output.stdout).to_string(),
"Compiled successfully"
);
let mut compiler_output = CompilerOutput::default();
for (contract_path, contracts) in parsed.contracts {
let map = compiler_output
.contracts
.entry(contract_path.canonicalize().with_context(|| {
format!(
"Failed to canonicalize contract path {}",
contract_path.display()
)
})?)
.or_default();
for (contract_name, contract_info) in contracts.into_iter() {
let bytecode = contract_info
.evm
.and_then(|evm| evm.bytecode)
.map(|bytecode| match bytecode.object {
BytecodeObject::Bytecode(bytecode) => bytecode.to_string(),
BytecodeObject::Unlinked(unlinked) => unlinked,
})
.context("Unexpected - contract compiled with solc has no bytecode")?;
let abi = contract_info
.abi
.context("Unexpected - contract compiled with solc has no ABI")?;
map.insert(contract_name, (bytecode, abi));
}
}
Ok(compiler_output)
} }
fn supports_mode(
&self,
_optimize_setting: ModeOptimizerSetting,
pipeline: ModePipeline,
) -> bool {
// solc 0.8.13 and above supports --via-ir, and older versions do not. Thus, we support
// mode E (i.e. no Yul IR) in either case, but only support Y (via Yul IR) if the
// compiler is new enough.
pipeline == ModePipeline::ViaEVMAssembly
|| (pipeline == ModePipeline::ViaYulIR && self.compiler_supports_yul())
}
}
impl Solc {
fn compiler_supports_yul(&self) -> bool {
const SOLC_VERSION_SUPPORTING_VIA_YUL_IR: Version = Version::new(0, 8, 13);
self.version() >= &SOLC_VERSION_SUPPORTING_VIA_YUL_IR
}
}
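A hedged illustration of the gating above; `old_solc` is a hypothetical `Solc` instance constructed for a pre-0.8.13 version:
// The E pipeline is always supported; Y requires solc >= 0.8.13.
assert!(old_solc.supports_mode(ModeOptimizerSetting::M0, ModePipeline::ViaEVMAssembly));
assert!(!old_solc.supports_mode(ModeOptimizerSetting::M0, ModePipeline::ViaYulIR));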
@@ -0,0 +1,9 @@
// SPDX-License-Identifier: MIT
pragma solidity >=0.6.9;
contract Callable {
function f(uint[1] memory p1) public pure returns(uint) {
return p1[0];
}
}
@@ -0,0 +1,13 @@
// SPDX-License-Identifier: MIT
// Report https://linear.app/matterlabs/issue/CPR-269/call-with-calldata-variable-bug
pragma solidity >=0.6.9;
import "./callable.sol";
contract Main {
function main(uint[1] calldata p1, Callable callable) public returns(uint) {
return callable.f(p1);
}
}
@@ -0,0 +1,21 @@
{ "cases": [ {
"name": "first",
"inputs": [
{
"instance": "Main",
"method": "main",
"calldata": [
"1",
"Callable.address"
]
}
],
"expected": [
"1"
]
} ],
"contracts": {
"Main": "main.sol:Main",
"Callable": "callable.sol:Callable"
}
}
@@ -0,0 +1,88 @@
use std::path::PathBuf;
use revive_dt_common::types::VersionOrRequirement;
use revive_dt_compiler::{Compiler, SolidityCompiler, revive_resolc::Resolc, solc::Solc};
use revive_dt_config::Arguments;
use semver::Version;
#[tokio::test]
async fn contracts_can_be_compiled_with_solc() {
// Arrange
let args = Arguments::default();
let solc = Solc::new(&args, VersionOrRequirement::Version(Version::new(0, 8, 30)))
.await
.unwrap();
// Act
let output = Compiler::new()
.with_source("./tests/assets/array_one_element/callable.sol")
.unwrap()
.with_source("./tests/assets/array_one_element/main.sol")
.unwrap()
.try_build(&solc)
.await;
// Assert
let output = output.expect("Failed to compile");
assert_eq!(output.contracts.len(), 2);
let main_file_contracts = output
.contracts
.get(
&PathBuf::from("./tests/assets/array_one_element/main.sol")
.canonicalize()
.unwrap(),
)
.unwrap();
let callable_file_contracts = output
.contracts
.get(
&PathBuf::from("./tests/assets/array_one_element/callable.sol")
.canonicalize()
.unwrap(),
)
.unwrap();
assert!(main_file_contracts.contains_key("Main"));
assert!(callable_file_contracts.contains_key("Callable"));
}
#[tokio::test]
async fn contracts_can_be_compiled_with_resolc() {
// Arrange
let args = Arguments::default();
let resolc = Resolc::new(&args, VersionOrRequirement::Version(Version::new(0, 8, 30)))
.await
.unwrap();
// Act
let output = Compiler::new()
.with_source("./tests/assets/array_one_element/callable.sol")
.unwrap()
.with_source("./tests/assets/array_one_element/main.sol")
.unwrap()
.try_build(&resolc)
.await;
// Assert
let output = output.expect("Failed to compile");
assert_eq!(output.contracts.len(), 2);
let main_file_contracts = output
.contracts
.get(
&PathBuf::from("./tests/assets/array_one_element/main.sol")
.canonicalize()
.unwrap(),
)
.unwrap();
let callable_file_contracts = output
.contracts
.get(
&PathBuf::from("./tests/assets/array_one_element/callable.sol")
.canonicalize()
.unwrap(),
)
.unwrap();
assert!(main_file_contracts.contains_key("Main"));
assert!(callable_file_contracts.contains_key("Callable"));
}
@@ -15,3 +15,5 @@ semver = { workspace = true }
temp-dir = { workspace = true }
serde = { workspace = true }
[lints]
workspace = true
@@ -1,8 +1,9 @@
//! The global configuration used across all revive differential testing crates.
use std::{
fmt::Display,
path::{Path, PathBuf},
sync::LazyLock,
};
use alloy::{network::EthereumWallet, signers::local::PrivateKeySigner};
@@ -54,13 +55,9 @@ pub struct Arguments {
pub geth: PathBuf,
/// The maximum time in milliseconds to wait for geth to start.
#[arg(long = "geth-start-timeout", default_value = "5000")]
pub geth_start_timeout: u64,
/// Configure nodes according to this genesis.json file.
#[arg(long = "genesis", default_value = "genesis.json")]
pub genesis_file: PathBuf,
@@ -73,6 +70,12 @@ pub struct Arguments {
)]
pub account: String,
/// This argument controls which private keys each node has access to and adds to its wallet
/// signer set. With a value of N, private keys (0, N] will be added to the signer set of
/// the node.
#[arg(long = "private-keys-count", default_value_t = 100_000)]
pub private_keys_to_add: usize,
/// The differential testing leader node implementation.
#[arg(short, long = "leader", default_value = "geth")]
pub leader: TestingPlatform,
@@ -81,13 +84,22 @@ pub struct Arguments {
#[arg(short, long = "follower", default_value = "kitchensink")] #[arg(short, long = "follower", default_value = "kitchensink")]
pub follower: TestingPlatform, pub follower: TestingPlatform,
/// Only compile against this testing platform (doesn't execute the tests). /// Determines the amount of nodes that will be spawned for each chain.
#[arg(long = "compile-only")] #[arg(long, default_value = "1")]
pub compile_only: Option<TestingPlatform>, pub number_of_nodes: usize,
/// Determines the amount of tests that are executed in parallel. /// Determines the amount of tokio worker threads that will will be used.
#[arg(long = "workers", default_value = "12")] #[arg(
pub workers: usize, long,
default_value_t = std::thread::available_parallelism()
.map(|n| n.get())
.unwrap_or(1)
)]
pub number_of_threads: usize,
/// Determines the amount of concurrent tasks that will be spawned to run tests. Defaults to 10 x the number of nodes.
#[arg(long)]
pub number_concurrent_tasks: Option<usize>,
/// Extract problems back to the test corpus. /// Extract problems back to the test corpus.
#[arg(short, long = "extract-problems")] #[arg(short, long = "extract-problems")]
@@ -99,11 +111,35 @@ pub struct Arguments {
#[arg(short, long = "kitchensink", default_value = "substrate-node")] #[arg(short, long = "kitchensink", default_value = "substrate-node")]
pub kitchensink: PathBuf, pub kitchensink: PathBuf,
/// The path to the `revive-dev-node` executable.
///
/// By default it uses the `revive-dev-node` binary found in `$PATH`.
#[arg(long = "revive-dev-node", default_value = "revive-dev-node")]
pub revive_dev_node: PathBuf,
/// By default the tool uses the revive-dev-node when it's running differential tests against
/// PolkaVM since the dev-node is much faster than kitchensink. This flag allows the caller to
/// configure the tool to use kitchensink rather than the dev-node.
#[arg(long)]
pub use_kitchensink_not_dev_node: bool,
/// The path to the `eth_proxy` executable.
///
/// By default it uses the `eth-rpc` binary found in `$PATH`.
#[arg(short = 'p', long = "eth_proxy", default_value = "eth-rpc")]
pub eth_proxy: PathBuf,
/// Controls if the compilation cache should be invalidated or not.
#[arg(short, long)]
pub invalidate_compilation_cache: bool,
/// Controls if the compiler input is included in the final report.
#[clap(long = "report.include-compiler-input")]
pub report_include_compiler_input: bool,
/// Controls if the compiler output is included in the final report.
#[clap(long = "report.include-compiler-output")]
pub report_include_compiler_output: bool,
}
impl Arguments {
@@ -123,6 +159,13 @@ impl Arguments {
panic!("should have a workdir configured")
}
/// Return the number of concurrent tasks to run. This is provided via the
/// `--number-concurrent-tasks` argument, and otherwise defaults to --number-of-nodes * 20.
pub fn number_of_concurrent_tasks(&self) -> usize {
self.number_concurrent_tasks
.unwrap_or(20 * self.number_of_nodes)
}
/// Try to parse `self.account` into a [PrivateKeySigner],
/// panicking on error.
pub fn wallet(&self) -> EthereumWallet {
@@ -138,14 +181,23 @@ impl Arguments {
impl Default for Arguments {
fn default() -> Self {
static TEMP_DIR: LazyLock<TempDir> = LazyLock::new(|| TempDir::new().unwrap());
let default = Arguments::parse_from(["retester"]);
Arguments {
temp_dir: Some(&TEMP_DIR),
..default
}
}
}
/// The Solidity compatible node implementation.
///
/// This describes the solutions to be tested against on a high level.
#[derive(
Clone, Copy, Debug, PartialEq, Eq, PartialOrd, Ord, Hash, ValueEnum, Serialize, Deserialize,
)]
#[clap(rename_all = "lower")]
pub enum TestingPlatform {
/// The go-ethereum reference full node EVM implementation.
@@ -13,6 +13,7 @@ name = "retester"
path = "src/main.rs" path = "src/main.rs"
[dependencies] [dependencies]
revive-dt-common = { workspace = true }
revive-dt-compiler = { workspace = true } revive-dt-compiler = { workspace = true }
revive-dt-config = { workspace = true } revive-dt-config = { workspace = true }
revive-dt-format = { workspace = true } revive-dt-format = { workspace = true }
@@ -22,9 +23,19 @@ revive-dt-report = { workspace = true }
alloy = { workspace = true }
anyhow = { workspace = true }
bson = { workspace = true }
cacache = { workspace = true }
clap = { workspace = true }
futures = { workspace = true }
indexmap = { workspace = true }
tokio = { workspace = true }
tracing = { workspace = true }
tracing-appender = { workspace = true }
tracing-subscriber = { workspace = true }
semver = { workspace = true }
serde = { workspace = true }
serde_json = { workspace = true }
temp-dir = { workspace = true }
[lints]
workspace = true
@@ -0,0 +1,359 @@
//! A wrapper around the compiler which allows for caching of compilation artifacts so that they can
//! be reused between runs.
use std::{
borrow::Cow,
collections::HashMap,
path::{Path, PathBuf},
sync::Arc,
};
use futures::FutureExt;
use revive_dt_common::iterators::FilesWithExtensionIterator;
use revive_dt_compiler::{Compiler, CompilerOutput, Mode, SolidityCompiler};
use revive_dt_config::TestingPlatform;
use revive_dt_format::metadata::{ContractIdent, ContractInstance, Metadata};
use alloy::{hex::ToHexExt, json_abi::JsonAbi, primitives::Address};
use anyhow::{Context as _, Error, Result};
use revive_dt_report::ExecutionSpecificReporter;
use semver::Version;
use serde::{Deserialize, Serialize};
use tokio::sync::{Mutex, RwLock};
use tracing::{Instrument, debug, debug_span, instrument};
use crate::Platform;
pub struct CachedCompiler<'a> {
/// The cache that stores the compiled contracts.
artifacts_cache: ArtifactsCache,
/// This mechanism ensures that if multiple compilation requests come in concurrently for the
/// same contract we only compile it once; every other task that requests the same compilation
/// concurrently receives the cached version instead.
cache_key_lock: RwLock<HashMap<CacheKey<'a>, Arc<Mutex<()>>>>,
}
impl<'a> CachedCompiler<'a> {
pub async fn new(path: impl AsRef<Path>, invalidate_cache: bool) -> Result<Self> {
let mut cache = ArtifactsCache::new(path);
if invalidate_cache {
cache = cache
.with_invalidated_cache()
.await
.context("Failed to invalidate compilation cache directory")?;
}
Ok(Self {
artifacts_cache: cache,
cache_key_lock: Default::default(),
})
}
/// Compiles or gets the compilation artifacts from the cache.
#[allow(clippy::too_many_arguments)]
#[instrument(
level = "debug",
skip_all,
fields(
metadata_file_path = %metadata_file_path.display(),
%mode,
platform = P::config_id().to_string()
),
err
)]
pub async fn compile_contracts<P: Platform>(
&self,
metadata: &'a Metadata,
metadata_file_path: &'a Path,
mode: Cow<'a, Mode>,
deployed_libraries: Option<&HashMap<ContractInstance, (ContractIdent, Address, JsonAbi)>>,
compiler: &P::Compiler,
reporter: &ExecutionSpecificReporter,
) -> Result<CompilerOutput> {
let cache_key = CacheKey {
platform_key: P::config_id(),
compiler_version: compiler.version().clone(),
metadata_file_path,
solc_mode: mode.clone(),
};
let compilation_callback = || {
async move {
compile_contracts::<P>(
metadata
.directory()
.context("Failed to get metadata directory while preparing compilation")?,
metadata
.files_to_compile()
.context("Failed to enumerate files to compile from metadata")?,
&mode,
deployed_libraries,
compiler,
reporter,
)
.map(|compilation_result| compilation_result.map(CacheValue::new))
.await
}
.instrument(debug_span!(
"Running compilation for the cache key",
cache_key.platform_key = %cache_key.platform_key,
cache_key.compiler_version = %cache_key.compiler_version,
cache_key.metadata_file_path = %cache_key.metadata_file_path.display(),
cache_key.solc_mode = %cache_key.solc_mode,
))
};
let compiled_contracts = match deployed_libraries {
// If deployed libraries have been specified then we will re-compile the contract as it
// means that linking is required in this case.
Some(_) => {
debug!("Deployed libraries defined, recompilation must take place");
debug!("Cache miss");
compilation_callback()
.await
.context("Compilation callback for deployed libraries failed")?
.compiler_output
}
// If no deployed libraries are specified then we can follow the cached flow and attempt
// to lookup the compilation artifacts in the cache.
None => {
debug!("Deployed libraries undefined, attempting to make use of cache");
// Lock this specific cache key so that we do not get inconsistent state. When multiple
// cases ask for the same compilation artifacts concurrently, we don't want them all to
// trigger a compilation on a cache miss. Hence, the lock here.
let read_guard = self.cache_key_lock.read().await;
let mutex = match read_guard.get(&cache_key).cloned() {
Some(value) => {
drop(read_guard);
value
}
None => {
drop(read_guard);
self.cache_key_lock
.write()
.await
.entry(cache_key.clone())
.or_default()
.clone()
}
};
let _guard = mutex.lock().await;
match self.artifacts_cache.get(&cache_key).await {
Some(cache_value) => {
// We are on the no-deployed-libraries path here, so the cached artifact is
// always a pre-link compilation output.
reporter
.report_pre_link_contracts_compilation_succeeded_event(
compiler.version().clone(),
compiler.path(),
true,
None,
cache_value.compiler_output.clone(),
)
.expect("Can't happen");
cache_value.compiler_output
}
None => {
// Run the compilation and persist the result so that subsequent runs hit
// the cache.
self.artifacts_cache
.get_or_insert_with(&cache_key, compilation_callback)
.await
.context("Compilation callback failed (cache miss path)")?
.compiler_output
}
}
}
};
Ok(compiled_contracts)
}
}
async fn compile_contracts<P: Platform>(
metadata_directory: impl AsRef<Path>,
mut files_to_compile: impl Iterator<Item = PathBuf>,
mode: &Mode,
deployed_libraries: Option<&HashMap<ContractInstance, (ContractIdent, Address, JsonAbi)>>,
compiler: &P::Compiler,
reporter: &ExecutionSpecificReporter,
) -> Result<CompilerOutput> {
let all_sources_in_dir = FilesWithExtensionIterator::new(metadata_directory.as_ref())
.with_allowed_extension("sol")
.with_use_cached_fs(true)
.collect::<Vec<_>>();
let compilation = Compiler::new()
.with_allow_path(metadata_directory)
// Handling the modes
.with_optimization(mode.optimize_setting)
.with_pipeline(mode.pipeline)
// Adding the contract sources to the compiler.
.try_then(|compiler| {
files_to_compile.try_fold(compiler, |compiler, path| compiler.with_source(path))
})?
// Adding the deployed libraries to the compiler.
.then(|compiler| {
deployed_libraries
.iter()
.flat_map(|value| value.iter())
.map(|(instance, (ident, address, abi))| (instance, ident, address, abi))
.flat_map(|(_, ident, address, _)| {
all_sources_in_dir
.iter()
.map(move |path| (ident, address, path))
})
.fold(compiler, |compiler, (ident, address, path)| {
compiler.with_library(path, ident.as_str(), *address)
})
});
let input = compilation.input().clone();
let output = compilation.try_build(compiler).await;
match (output.as_ref(), deployed_libraries.is_some()) {
(Ok(output), true) => {
reporter
.report_post_link_contracts_compilation_succeeded_event(
compiler.version().clone(),
compiler.path(),
false,
input,
output.clone(),
)
.expect("Can't happen");
}
(Ok(output), false) => {
reporter
.report_pre_link_contracts_compilation_succeeded_event(
compiler.version().clone(),
compiler.path(),
false,
input,
output.clone(),
)
.expect("Can't happen");
}
(Err(err), true) => {
reporter
.report_post_link_contracts_compilation_failed_event(
compiler.version().clone(),
compiler.path().to_path_buf(),
input,
format!("{err:#}"),
)
.expect("Can't happen");
}
(Err(err), false) => {
reporter
.report_pre_link_contracts_compilation_failed_event(
compiler.version().clone(),
compiler.path().to_path_buf(),
input,
format!("{err:#}"),
)
.expect("Can't happen");
}
}
output
}
struct ArtifactsCache {
path: PathBuf,
}
impl ArtifactsCache {
pub fn new(path: impl AsRef<Path>) -> Self {
Self {
path: path.as_ref().to_path_buf(),
}
}
#[instrument(level = "debug", skip_all, err)]
pub async fn with_invalidated_cache(self) -> Result<Self> {
cacache::clear(self.path.as_path())
.await
.map_err(Into::<Error>::into)
.with_context(|| format!("Failed to clear cache at {}", self.path.display()))?;
Ok(self)
}
#[instrument(level = "debug", skip_all, err)]
pub async fn insert(&self, key: &CacheKey<'_>, value: &CacheValue) -> Result<()> {
let key = bson::to_vec(key).context("Failed to serialize cache key (bson)")?;
let value = bson::to_vec(value).context("Failed to serialize cache value (bson)")?;
cacache::write(self.path.as_path(), key.encode_hex(), value)
.await
.with_context(|| {
format!("Failed to write cache entry under {}", self.path.display())
})?;
Ok(())
}
pub async fn get(&self, key: &CacheKey<'_>) -> Option<CacheValue> {
let key = bson::to_vec(key).ok()?;
let value = cacache::read(self.path.as_path(), key.encode_hex())
.await
.ok()?;
let value = bson::from_slice::<CacheValue>(&value).ok()?;
Some(value)
}
#[instrument(level = "debug", skip_all, err)]
pub async fn get_or_insert_with(
&self,
key: &CacheKey<'_>,
callback: impl AsyncFnOnce() -> Result<CacheValue>,
) -> Result<CacheValue> {
match self.get(key).await {
Some(value) => {
debug!("Cache hit");
Ok(value)
}
None => {
debug!("Cache miss");
let value = callback().await?;
self.insert(key, &value).await?;
Ok(value)
}
}
}
}
#[derive(Clone, Debug, PartialEq, Eq, Hash, Serialize)]
struct CacheKey<'a> {
/// The platform name that this artifact was compiled for. For example, this could be EVM or
/// PVM.
platform_key: &'a TestingPlatform,
/// The version of the compiler that was used to compile the artifacts.
compiler_version: Version,
/// The path of the metadata file that the compilation artifacts are for.
metadata_file_path: &'a Path,
/// The mode that the compilation artifacts were compiled with.
solc_mode: Cow<'a, Mode>,
}
#[derive(Clone, Debug, Serialize, Deserialize)]
struct CacheValue {
/// The compiler output from the compilation run.
compiler_output: CompilerOutput,
}
impl CacheValue {
pub fn new(compiler_output: CompilerOutput) -> Self {
Self { compiler_output }
}
}
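As a hedged sketch of the intended flow through `ArtifactsCache` above (key construction elided; the callback shown is illustrative and assumes a plain closure returning a future satisfies the `AsyncFnOnce` bound):
// On a hit this logs "Cache hit" and returns the stored value; on a miss it runs the
// callback, persists the result via `insert`, and returns it.
let value = cache
.get_or_insert_with(&key, || async { Ok(CacheValue::new(CompilerOutput::default())) })
.await?;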
@@ -1,153 +1,856 @@
//! The test driver handles the compilation and execution of the test cases.
use std::collections::HashMap;
use std::marker::PhantomData;
use std::path::PathBuf;
use alloy::consensus::EMPTY_ROOT_HASH;
use alloy::hex;
use alloy::json_abi::JsonAbi;
use alloy::network::{Ethereum, TransactionBuilder};
use alloy::primitives::U256;
use alloy::rpc::types::TransactionReceipt;
use alloy::rpc::types::trace::geth::{
CallFrame, GethDebugBuiltInTracerType, GethDebugTracerConfig, GethDebugTracerType,
GethDebugTracingOptions, GethTrace, PreStateConfig,
};
use alloy::{
primitives::Address,
rpc::types::{TransactionRequest, trace::geth::DiffMode},
};
use anyhow::Context;
use futures::TryStreamExt;
use indexmap::IndexMap;
use revive_dt_format::traits::{ResolutionContext, ResolverApi};
use revive_dt_report::ExecutionSpecificReporter;
use semver::Version;
use revive_dt_format::case::Case;
use revive_dt_format::input::{
BalanceAssertion, Calldata, EtherValue, Expected, ExpectedOutput, Input, Method, StepIdx,
StorageEmptyAssertion,
};
use revive_dt_format::metadata::{ContractIdent, ContractInstance, ContractPathAndIdent};
use revive_dt_format::{input::Step, metadata::Metadata};
use revive_dt_node_interaction::EthereumNode;
use tokio::try_join;
use tracing::{Instrument, info, info_span, instrument};
use crate::Platform;
pub struct CaseState<T: Platform> {
/// A map of all of the compiled contracts for the given metadata file.
compiled_contracts: HashMap<PathBuf, HashMap<String, (String, JsonAbi)>>,
/// This map stores the contracts deployments for this case.
deployed_contracts: HashMap<ContractInstance, (ContractIdent, Address, JsonAbi)>,
/// This map stores the variables used for each one of the cases contained in the metadata
/// file.
variables: HashMap<String, U256>,
/// Stores the version used for the current case.
compiler_version: Version,
/// The execution reporter.
execution_reporter: ExecutionSpecificReporter,
phantom: PhantomData<T>,
}
impl<T> CaseState<T>
where
T: Platform,
{
pub fn new(
compiler_version: Version,
compiled_contracts: HashMap<PathBuf, HashMap<String, (String, JsonAbi)>>,
deployed_contracts: HashMap<ContractInstance, (ContractIdent, Address, JsonAbi)>,
execution_reporter: ExecutionSpecificReporter,
) -> Self {
Self {
compiled_contracts,
deployed_contracts,
variables: Default::default(),
compiler_version,
execution_reporter,
phantom: PhantomData,
}
}
pub async fn handle_step(
&mut self,
metadata: &Metadata,
step: &Step,
node: &T::Blockchain,
) -> anyhow::Result<StepOutput> {
match step {
Step::FunctionCall(input) => {
let (receipt, geth_trace, diff_mode) = self
.handle_input(metadata, input, node)
.await
.context("Failed to handle function call step")?;
Ok(StepOutput::FunctionCall(receipt, geth_trace, diff_mode))
}
Step::BalanceAssertion(balance_assertion) => {
self.handle_balance_assertion(metadata, balance_assertion, node)
.await
.context("Failed to handle balance assertion step")?;
Ok(StepOutput::BalanceAssertion)
}
Step::StorageEmptyAssertion(storage_empty) => {
self.handle_storage_empty(metadata, storage_empty, node)
.await
.context("Failed to handle storage empty assertion step")?;
Ok(StepOutput::StorageEmptyAssertion)
}
}
.inspect(|_| info!("Step Succeeded"))
}
#[instrument(level = "info", name = "Handling Input", skip_all)]
pub async fn handle_input(
&mut self,
metadata: &Metadata,
input: &Input,
node: &T::Blockchain,
) -> anyhow::Result<(TransactionReceipt, GethTrace, DiffMode)> {
let deployment_receipts = self
.handle_input_contract_deployment(metadata, input, node)
.await
.context("Failed during contract deployment phase of input handling")?;
let execution_receipt = self
.handle_input_execution(input, deployment_receipts, node)
.await
.context("Failed during transaction execution phase of input handling")?;
let tracing_result = self
.handle_input_call_frame_tracing(&execution_receipt, node)
.await
.context("Failed during callframe tracing phase of input handling")?;
self.handle_input_variable_assignment(input, &tracing_result)
.context("Failed to assign variables from callframe output")?;
let (_, (geth_trace, diff_mode)) = try_join!(
self.handle_input_expectations(input, &execution_receipt, node, &tracing_result),
self.handle_input_diff(&execution_receipt, node)
)
.context("Failed while evaluating expectations and diffs in parallel")?;
Ok((execution_receipt, geth_trace, diff_mode))
}
#[instrument(level = "info", name = "Handling Balance Assertion", skip_all)]
pub async fn handle_balance_assertion(
&mut self,
metadata: &Metadata,
balance_assertion: &BalanceAssertion,
node: &T::Blockchain,
) -> anyhow::Result<()> {
self.handle_balance_assertion_contract_deployment(metadata, balance_assertion, node)
.await
.context("Failed to deploy contract for balance assertion")?;
self.handle_balance_assertion_execution(balance_assertion, node)
.await
.context("Failed to execute balance assertion")?;
Ok(())
}
#[instrument(level = "info", name = "Handling Storage Assertion", skip_all)]
pub async fn handle_storage_empty(
&mut self,
metadata: &Metadata,
storage_empty: &StorageEmptyAssertion,
node: &T::Blockchain,
) -> anyhow::Result<()> {
self.handle_storage_empty_assertion_contract_deployment(metadata, storage_empty, node)
.await
.context("Failed to deploy contract for storage empty assertion")?;
self.handle_storage_empty_assertion_execution(storage_empty, node)
.await
.context("Failed to execute storage empty assertion")?;
Ok(())
}
/// Handles the contract deployment for a given input performing it if it needs to be performed.
#[instrument(level = "info", skip_all)]
async fn handle_input_contract_deployment(
&mut self,
metadata: &Metadata,
input: &Input,
node: &T::Blockchain,
) -> anyhow::Result<HashMap<ContractInstance, TransactionReceipt>> {
let mut instances_we_must_deploy = IndexMap::<ContractInstance, bool>::new();
for instance in input.find_all_contract_instances().into_iter() {
if !self.deployed_contracts.contains_key(&instance) {
instances_we_must_deploy.entry(instance).or_insert(false);
}
}
if let Method::Deployer = input.method {
instances_we_must_deploy.swap_remove(&input.instance);
instances_we_must_deploy.insert(input.instance.clone(), true);
}
let mut receipts = HashMap::new();
for (instance, deploy_with_constructor_arguments) in instances_we_must_deploy.into_iter() {
let calldata = deploy_with_constructor_arguments.then_some(&input.calldata);
let value = deploy_with_constructor_arguments
.then_some(input.value)
.flatten();
if let (_, _, Some(receipt)) = self
.get_or_deploy_contract_instance(
&instance,
metadata,
input.caller,
calldata,
value,
node,
)
.await
.context("Failed to get or deploy contract instance during input execution")?
{
receipts.insert(instance.clone(), receipt);
}
}
Ok(receipts)
}
/// Handles the execution of the input in terms of the calls that need to be made.
#[instrument(level = "info", skip_all)]
async fn handle_input_execution(
&mut self,
input: &Input,
mut deployment_receipts: HashMap<ContractInstance, TransactionReceipt>,
node: &T::Blockchain,
) -> anyhow::Result<TransactionReceipt> {
match input.method {
// This input was already executed when `handle_input` was called. We just need to
// lookup the transaction receipt in this case and continue on.
Method::Deployer => deployment_receipts
.remove(&input.instance)
.context("Failed to find deployment receipt for constructor call"),
Method::Fallback | Method::FunctionName(_) => {
let tx = match input
.legacy_transaction(node, self.default_resolution_context())
.await
{
Ok(tx) => tx,
Err(err) => {
return Err(err);
}
};
match node.execute_transaction(tx).await {
Ok(receipt) => Ok(receipt),
Err(err) => Err(err),
}
}
}
}
#[instrument(level = "info", skip_all)]
async fn handle_input_call_frame_tracing(
&self,
execution_receipt: &TransactionReceipt,
node: &T::Blockchain,
) -> anyhow::Result<CallFrame> {
node.trace_transaction(
execution_receipt,
GethDebugTracingOptions {
tracer: Some(GethDebugTracerType::BuiltInTracer(
GethDebugBuiltInTracerType::CallTracer,
)),
tracer_config: GethDebugTracerConfig(serde_json::json! {{
"onlyTopCall": true,
"withLog": false,
"withStorage": false,
"withMemory": false,
"withStack": false,
"withReturnData": true
}}),
..Default::default()
},
)
.await
.map(|trace| {
trace
.try_into_call_frame()
.expect("Impossible - we requested a callframe trace so we must get it back")
})
}
#[instrument(level = "info", skip_all)]
fn handle_input_variable_assignment(
&mut self,
input: &Input,
tracing_result: &CallFrame,
) -> anyhow::Result<()> {
let Some(ref assignments) = input.variable_assignments else {
return Ok(());
};
// Handling the return data variable assignments.
for (variable_name, output_word) in assignments.return_data.iter().zip(
tracing_result
.output
.as_ref()
.unwrap_or_default()
.to_vec()
.chunks(32),
) {
let value = U256::from_be_slice(output_word);
self.variables.insert(variable_name.clone(), value);
tracing::info!(
variable_name,
variable_value = hex::encode(value.to_be_bytes::<32>()),
"Assigned variable"
);
}
Ok(())
}
#[instrument(level = "info", skip_all)]
async fn handle_input_expectations(
&self,
input: &Input,
execution_receipt: &TransactionReceipt,
resolver: &impl ResolverApi,
tracing_result: &CallFrame,
) -> anyhow::Result<()> {
// Resolving the `input.expected` into a series of expectations that we can then assert on.
let mut expectations = match input {
Input {
expected: Some(Expected::Calldata(calldata)),
..
} => vec![ExpectedOutput::new().with_calldata(calldata.clone())],
Input {
expected: Some(Expected::Expected(expected)),
..
} => vec![expected.clone()],
Input {
expected: Some(Expected::ExpectedMany(expected)),
..
} => expected.clone(),
Input { expected: None, .. } => vec![ExpectedOutput::new().with_success()],
};
// This is a bit of a special case and we have to support it separately on its own. If it's
// a call to the deployer method, then the tests will assert that it "returns" the address
// of the contract. Deployments do not return the address of the contract but the runtime
// code of the contract. Therefore, this assertion would always fail. So, we replace it
// with an assertion of "check if it succeeded".
if let Method::Deployer = &input.method {
for expectation in expectations.iter_mut() {
expectation.return_data = None;
}
}
futures::stream::iter(expectations.into_iter().map(Ok))
.try_for_each_concurrent(None, |expectation| async move {
self.handle_input_expectation_item(
execution_receipt,
resolver,
expectation,
tracing_result,
)
.await
})
.await
}
#[instrument(level = "info", skip_all)]
async fn handle_input_expectation_item(
&self,
execution_receipt: &TransactionReceipt,
resolver: &impl ResolverApi,
expectation: ExpectedOutput,
tracing_result: &CallFrame,
) -> anyhow::Result<()> {
if let Some(ref version_requirement) = expectation.compiler_version {
if !version_requirement.matches(&self.compiler_version) {
return Ok(());
}
}
let resolution_context = self
.default_resolution_context()
.with_block_number(execution_receipt.block_number.as_ref())
.with_transaction_hash(&execution_receipt.transaction_hash);
// Handling the receipt state assertion.
let expected = !expectation.exception;
let actual = execution_receipt.status();
if actual != expected {
tracing::error!(
expected,
actual,
?execution_receipt,
?tracing_result,
"Transaction status assertion failed"
);
anyhow::bail!(
"Transaction status assertion failed - Expected {expected} but got {actual}",
);
}
// Handling the calldata assertion
if let Some(ref expected_calldata) = expectation.return_data {
let expected = expected_calldata;
let actual = &tracing_result.output.as_ref().unwrap_or_default();
if !expected
.is_equivalent(actual, resolver, resolution_context)
.await
.context("Failed to resolve calldata equivalence for return data assertion")?
{
tracing::error!(
?execution_receipt,
?expected,
%actual,
"Calldata assertion failed"
);
anyhow::bail!("Calldata assertion failed - Expected {expected:?} but got {actual}",);
}
}
// Handling the events assertion
if let Some(ref expected_events) = expectation.events {
// Handling the events length assertion.
let expected = expected_events.len();
let actual = execution_receipt.logs().len();
if actual != expected {
tracing::error!(expected, actual, "Event count assertion failed",);
anyhow::bail!(
"Event count assertion failed - Expected {expected} but got {actual}",
);
}
// Handling the events assertion.
for (event_idx, (expected_event, actual_event)) in expected_events
.iter()
.zip(execution_receipt.logs())
.enumerate()
{
// Handling the emitter assertion.
if let Some(ref expected_address) = expected_event.address {
let expected = Address::from_slice(
Calldata::new_compound([expected_address])
.calldata(resolver, resolution_context)
.await?
.get(12..32)
.expect("Can't fail"),
);
let actual = actual_event.address();
if actual != expected {
tracing::error!(
event_idx,
%expected,
%actual,
"Event emitter assertion failed",
);
anyhow::bail!(
"Event emitter assertion failed - Expected {expected} but got {actual}",
);
}
}
// Handling the topics assertion.
for (expected, actual) in expected_event
.topics
.as_slice()
.iter()
.zip(actual_event.topics())
{
let expected = Calldata::new_compound([expected]);
if !expected
.is_equivalent(&actual.0, resolver, resolution_context)
.await
.context("Failed to resolve event topic equivalence")?
{
tracing::error!(
event_idx,
?execution_receipt,
?expected,
?actual,
"Event topics assertion failed",
);
anyhow::bail!(
"Event topics assertion failed - Expected {expected:?} but got {actual:?}",
);
}
}
// Handling the values assertion.
let expected = &expected_event.values;
let actual = &actual_event.data().data;
if !expected
.is_equivalent(&actual.0, resolver, resolution_context)
.await
.context("Failed to resolve event value equivalence")?
{
tracing::error!(
event_idx,
?execution_receipt,
?expected,
?actual,
"Event value assertion failed",
);
anyhow::bail!(
"Event value assertion failed - Expected {expected:?} but got {actual:?}",
);
}
}
}
Ok(())
}
#[instrument(level = "info", skip_all)]
async fn handle_input_diff(
&self,
execution_receipt: &TransactionReceipt,
node: &T::Blockchain,
) -> anyhow::Result<(GethTrace, DiffMode)> {
let trace_options = GethDebugTracingOptions::prestate_tracer(PreStateConfig {
diff_mode: Some(true),
disable_code: None,
disable_storage: None,
});
let trace = node
.trace_transaction(execution_receipt, trace_options)
.await
.context("Failed to obtain geth prestate tracer output")?;
let diff = node
.state_diff(execution_receipt)
.await
.context("Failed to obtain state diff for transaction")?;
Ok((trace, diff))
}
#[instrument(level = "info", skip_all)]
pub async fn handle_balance_assertion_contract_deployment(
&mut self,
metadata: &Metadata,
balance_assertion: &BalanceAssertion,
node: &T::Blockchain,
) -> anyhow::Result<()> {
let Some(instance) = balance_assertion
.address
.strip_suffix(".address")
.map(ContractInstance::new)
else {
return Ok(());
};
self.get_or_deploy_contract_instance(
&instance,
metadata,
Input::default_caller(),
None,
None,
node,
)
.await?;
Ok(())
}
#[instrument(level = "info", skip_all)]
pub async fn handle_balance_assertion_execution(
&mut self,
BalanceAssertion {
address: address_string,
expected_balance: amount,
..
}: &BalanceAssertion,
node: &T::Blockchain,
) -> anyhow::Result<()> {
let address = Address::from_slice(
Calldata::new_compound([address_string])
.calldata(node, self.default_resolution_context())
.await?
.get(12..32)
.expect("Can't fail"),
);
let balance = node.balance_of(address).await?;
let expected = *amount;
let actual = balance;
if expected != actual {
tracing::error!(%expected, %actual, %address, "Balance assertion failed");
anyhow::bail!(
"Balance assertion failed - Expected {} but got {} for {} resolved to {}",
expected,
actual,
address_string,
address,
)
}
Ok(())
}
#[instrument(level = "info", skip_all)]
pub async fn handle_storage_empty_assertion_contract_deployment(
&mut self,
metadata: &Metadata,
storage_empty_assertion: &StorageEmptyAssertion,
node: &T::Blockchain,
) -> anyhow::Result<()> {
let Some(instance) = storage_empty_assertion
.address
.strip_suffix(".address")
.map(ContractInstance::new)
else {
return Ok(());
};
self.get_or_deploy_contract_instance(
&instance,
metadata,
Input::default_caller(),
None,
None,
node,
)
.await?;
Ok(())
}
#[instrument(level = "info", skip_all)]
pub async fn handle_storage_empty_assertion_execution(
&mut self,
StorageEmptyAssertion {
address: address_string,
is_storage_empty,
..
}: &StorageEmptyAssertion,
node: &T::Blockchain,
) -> anyhow::Result<()> {
let address = Address::from_slice(
Calldata::new_compound([address_string])
.calldata(node, self.default_resolution_context())
.await?
.get(12..32)
.expect("Can't fail"),
);
let storage = node.latest_state_proof(address, Default::default()).await?;
let is_empty = storage.storage_hash == EMPTY_ROOT_HASH;
let expected = is_storage_empty;
let actual = is_empty;
if *expected != actual {
tracing::error!(%expected, %actual, %address, "Storage Empty Assertion failed");
anyhow::bail!(
"Storage Empty Assertion failed - Expected {} but got {} for {} resolved to {}",
expected,
actual,
address_string,
address,
)
};
Ok(())
}
/// Gets the information of a deployed contract or library from the state. If it's found to not
/// be deployed then it will be deployed.
///
/// If a [`CaseIdx`] is not specified then this contract instance address will be stored in the
/// cross-case deployed contracts address mapping.
#[allow(clippy::too_many_arguments)]
pub async fn get_or_deploy_contract_instance(
&mut self,
contract_instance: &ContractInstance,
metadata: &Metadata,
deployer: Address,
calldata: Option<&Calldata>,
value: Option<EtherValue>,
node: &T::Blockchain,
) -> anyhow::Result<(Address, JsonAbi, Option<TransactionReceipt>)> {
if let Some((_, address, abi)) = self.deployed_contracts.get(contract_instance) {
return Ok((*address, abi.clone(), None));
}
let Some(ContractPathAndIdent {
contract_source_path,
contract_ident,
}) = metadata.contract_sources()?.remove(contract_instance)
else {
anyhow::bail!(
"Contract source not found for instance {:?}",
contract_instance
)
};
let Some((code, abi)) = self
.compiled_contracts
.get(&contract_source_path)
.and_then(|source_file_contracts| source_file_contracts.get(contract_ident.as_ref()))
.cloned()
else {
anyhow::bail!(
"Failed to find information for contract {:?}",
contract_instance
)
};
let mut code = match alloy::hex::decode(&code) {
Ok(code) => code,
Err(error) => {
tracing::error!(
?error,
contract_source_path = contract_source_path.display().to_string(),
contract_ident = contract_ident.as_ref(),
"Failed to hex-decode byte code - This could possibly mean that the bytecode requires linking"
);
anyhow::bail!("Failed to hex-decode the byte code {}", error)
}
};
if let Some(calldata) = calldata {
let calldata = calldata
.calldata(node, self.default_resolution_context())
.await?;
code.extend(calldata);
}
let tx = {
let tx = TransactionRequest::default().from(deployer);
let tx = match value {
Some(ref value) => tx.value(value.into_inner()),
_ => tx,
};
TransactionBuilder::<Ethereum>::with_deploy_code(tx, code)
};
let receipt = match node.execute_transaction(tx).await {
Ok(receipt) => receipt,
Err(error) => {
tracing::error!(
node = std::any::type_name::<T>(),
?error,
"Contract deployment transaction failed."
);
return Err(error);
}
};
let Some(address) = receipt.contract_address else {
anyhow::bail!("Contract deployment didn't return an address");
};
tracing::info!(
instance_name = ?contract_instance,
instance_address = ?address,
"Deployed contract"
);
self.execution_reporter
.report_contract_deployed_event(contract_instance.clone(), address)?;
self.deployed_contracts.insert(
contract_instance.clone(),
(contract_ident, address, abi.clone()),
);
Ok((address, abi, Some(receipt)))
}
fn default_resolution_context(&self) -> ResolutionContext<'_> {
ResolutionContext::default()
.with_deployed_contracts(&self.deployed_contracts)
.with_variables(&self.variables)
}
}
pub struct CaseDriver<'a, Leader: Platform, Follower: Platform> {
metadata: &'a Metadata,
case: &'a Case,
leader_node: &'a Leader::Blockchain,
follower_node: &'a Follower::Blockchain,
leader_state: CaseState<Leader>,
follower_state: CaseState<Follower>,
}
impl<'a, L, F> CaseDriver<'a, L, F>
where
L: Platform,
F: Platform,
{
#[allow(clippy::too_many_arguments)]
pub fn new(
metadata: &'a Metadata,
case: &'a Case,
leader_node: &'a L::Blockchain,
follower_node: &'a F::Blockchain,
leader_state: CaseState<L>,
follower_state: CaseState<F>,
) -> CaseDriver<'a, L, F> {
Self {
metadata,
case,
leader_node,
follower_node,
leader_state,
follower_state,
}
}
#[instrument(level = "info", name = "Executing Case", skip_all)]
pub async fn execute(&mut self) -> anyhow::Result<usize> {
let mut steps_executed = 0;
for (step_idx, step) in self
.case
.steps_iterator()
.enumerate()
.map(|(idx, v)| (StepIdx::new(idx), v))
{
let (leader_step_output, follower_step_output) = try_join!(
self.leader_state
.handle_step(self.metadata, &step, self.leader_node)
.instrument(info_span!(
"Handling Step",
%step_idx,
target = "Leader",
)),
self.follower_state
.handle_step(self.metadata, &step, self.follower_node)
.instrument(info_span!(
"Handling Step",
%step_idx,
target = "Follower",
))
)?;
match (leader_step_output, follower_step_output) {
(StepOutput::FunctionCall(..), StepOutput::FunctionCall(..)) => {
// TODO: We need to actually work out how/if we will compare the diff between
// the leader and the follower. The diffs are almost guaranteed to be different
// from leader and follower and therefore without an actual strategy for this
// we have something that's guaranteed to fail. Even a simple call to some
// contract will produce two non-equal diffs because on the leader the contract
// has address X and on the follower it has address Y. On the leader contract X
// contains address A in the state and on the follower it contains address B. So
// this isn't exactly a straightforward thing to do and I'm not even sure that
// it's possible to do. Once we have an actual strategy for doing the diffs we
// will implement it here. Until then, this remains empty.
}
(StepOutput::BalanceAssertion, StepOutput::BalanceAssertion) => {}
(StepOutput::StorageEmptyAssertion, StepOutput::StorageEmptyAssertion) => {}
_ => unreachable!("The two step outputs can not be of a different kind"),
}
steps_executed += 1;
}
Ok(steps_executed)
}
}
#[derive(Clone, Debug)]
#[allow(clippy::large_enum_variant)]
pub enum StepOutput {
FunctionCall(TransactionReceipt, GethTrace, DiffMode),
BalanceAssertion,
StorageEmptyAssertion,
}
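For reference, `handle_input_variable_assignment` above zips the declared variable names against 32-byte words of the traced return data and parses each word as a big-endian `U256`. Here is a small, self-contained sketch of that chunking; the variable names and return data are hypothetical and only the `alloy` primitives used by the driver are assumed.

use alloy::primitives::U256;

fn main() {
    // Hypothetical return data: two ABI-encoded 32-byte words (7 and 42).
    let output: Vec<u8> = [
        U256::from(7u64).to_be_bytes::<32>(),
        U256::from(42u64).to_be_bytes::<32>(),
    ]
    .concat();

    // "$first" and "$second" are illustrative variable names.
    for (name, word) in ["$first", "$second"].iter().zip(output.chunks(32)) {
        let value = U256::from_be_slice(word);
        println!("{name} = {value}");
    }
}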
+11 -10
@@ -1,11 +1,12 @@
//! The revive differential testing core library.
//!
//! This crate defines the testing configuration and
//! provides a helper utility to execute tests.
use revive_dt_compiler::{SolidityCompiler, revive_resolc, solc};
use revive_dt_config::TestingPlatform;
use revive_dt_format::traits::ResolverApi;
use revive_dt_node::{Node, geth, kitchensink::KitchensinkNode};
use revive_dt_node_interaction::EthereumNode;
pub mod driver;
@@ -14,22 +15,22 @@ pub mod driver;
///
/// For this we need a blockchain node implementation and a compiler.
pub trait Platform {
type Blockchain: EthereumNode + Node + ResolverApi;
type Compiler: SolidityCompiler;
/// Returns the matching [TestingPlatform] of the [revive_dt_config::Arguments].
fn config_id() -> &'static TestingPlatform;
}
#[derive(Default)]
pub struct Geth;
impl Platform for Geth {
type Blockchain = geth::GethNode;
type Compiler = solc::Solc;
fn config_id() -> &'static TestingPlatform {
&TestingPlatform::Geth
}
}
@@ -37,10 +38,10 @@ impl Platform for Geth {
pub struct Kitchensink;
impl Platform for Kitchensink {
type Blockchain = KitchensinkNode;
type Compiler = revive_resolc::Resolc;
fn config_id() -> &'static TestingPlatform {
&TestingPlatform::Kitchensink
}
}
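A reduced sketch of how a `Platform` implementation ties a node type and a compiler type together so driver code can be written once and be generic over the leader/follower pair. The types below are stand-ins (plain strings instead of `TestingPlatform`, unit structs instead of the real node and compiler), not the crate's actual definitions.

trait Platform {
    type Blockchain;
    type Compiler;
    fn config_id() -> &'static str;
}

struct GethNode; // stand-in node type
struct Solc; // stand-in compiler type
struct Geth;

impl Platform for Geth {
    type Blockchain = GethNode;
    type Compiler = Solc;
    fn config_id() -> &'static str {
        "geth"
    }
}

// Generic driver code is written once for any leader/follower pair.
fn describe<L: Platform, F: Platform>() -> String {
    format!("leader={}, follower={}", L::config_id(), F::config_id())
}

fn main() {
    println!("{}", describe::<Geth, Geth>());
}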
+790 -69
@@ -1,39 +1,122 @@
mod cached_compiler;
use std::{
borrow::Cow,
collections::{BTreeMap, HashMap},
io::{BufWriter, Write, stderr},
path::Path,
sync::{Arc, LazyLock},
time::Instant,
};
use alloy::{
network::{Ethereum, TransactionBuilder},
rpc::types::TransactionRequest,
};
use anyhow::Context;
use clap::Parser;
use futures::stream;
use futures::{Stream, StreamExt};
use indexmap::{IndexMap, indexmap};
use revive_dt_node_interaction::EthereumNode;
use revive_dt_report::{
NodeDesignation, ReportAggregator, Reporter, ReporterEvent, TestCaseStatus,
TestSpecificReporter, TestSpecifier,
};
use serde_json::{Value, json};
use temp_dir::TempDir;
use tokio::try_join;
use tracing::{debug, error, info, info_span, instrument};
use tracing_appender::non_blocking::WorkerGuard;
use tracing_subscriber::{EnvFilter, FmtSubscriber};
use revive_dt_common::{iterators::EitherIter, types::Mode};
use revive_dt_compiler::{CompilerOutput, SolidityCompiler};
use revive_dt_config::*;
use revive_dt_core::{
Geth, Kitchensink, Platform,
driver::{CaseDriver, CaseState},
};
use revive_dt_format::{
case::{Case, CaseIdx},
corpus::Corpus,
input::{Input, Step},
metadata::{ContractPathAndIdent, MetadataFile},
mode::ParsedMode,
};
use revive_dt_node::{Node, pool::NodePool};
use crate::cached_compiler::CachedCompiler;
static TEMP_DIR: LazyLock<TempDir> = LazyLock::new(|| TempDir::new().unwrap());
fn main() -> anyhow::Result<()> {
let (args, _guard) = init_cli().context("Failed to initialize CLI and tracing subscriber")?;
info!(
leader = args.leader.to_string(),
follower = args.follower.to_string(),
working_directory = %args.directory().display(),
number_of_nodes = args.number_of_nodes,
invalidate_compilation_cache = args.invalidate_compilation_cache,
"Differential testing tool has been initialized"
);
let (reporter, report_aggregator_task) = ReportAggregator::new(args.clone()).into_task();
let number_of_threads = args.number_of_threads;
let body = async move {
let tests = collect_corpora(&args)
.context("Failed to collect corpus files from provided arguments")?
.into_iter()
.inspect(|(corpus, _)| {
reporter
.report_corpus_file_discovery_event(corpus.clone())
.expect("Can't fail")
})
.flat_map(|(_, files)| files.into_iter())
.inspect(|metadata_file| {
reporter
.report_metadata_file_discovery_event(
metadata_file.metadata_file_path.clone(),
metadata_file.content.clone(),
)
.expect("Can't fail")
})
.collect::<Vec<_>>();
execute_corpus(&args, &tests, reporter, report_aggregator_task)
.await
.context("Failed to execute corpus")?;
Ok(())
};
tokio::runtime::Builder::new_multi_thread()
.worker_threads(number_of_threads)
.enable_all()
.build()
.expect("Failed building the Runtime")
.block_on(body)
}
fn init_cli() -> anyhow::Result<(Arguments, WorkerGuard)> {
let (writer, guard) = tracing_appender::non_blocking::NonBlockingBuilder::default()
.lossy(false)
// Assuming that each line contains 255 characters and that each character is one byte, then
// this means that our buffer is about 4GBs large.
.buffered_lines_limit(0x1000000)
.thread_name("buffered writer")
.finish(std::io::stdout());
let subscriber = FmtSubscriber::builder()
.with_writer(writer)
.with_thread_ids(false)
.with_thread_names(false)
.with_env_filter(EnvFilter::from_default_env())
.with_ansi(false)
.pretty()
.finish();
tracing::subscriber::set_global_default(subscriber)?;
info!("Differential testing tool is starting");
let mut args = Arguments::parse();
@@ -51,77 +134,715 @@ fn init_cli() -> anyhow::Result<Arguments> {
args.temp_dir = Some(&TEMP_DIR);
}
}
Ok((args, guard))
}
#[instrument(level = "debug", name = "Collecting Corpora", skip_all)]
fn collect_corpora(args: &Arguments) -> anyhow::Result<HashMap<Corpus, Vec<MetadataFile>>> {
let mut corpora = HashMap::new();
for path in &args.corpus {
let span = info_span!("Processing corpus file", path = %path.display());
let _guard = span.enter();
let corpus = Corpus::try_from_path(path)?;
info!(
name = corpus.name(),
number_of_contained_paths = corpus.path_count(),
"Deserialized corpus file"
);
let tests = corpus.enumerate_tests();
corpora.insert(corpus, tests);
}
Ok(corpora)
}
async fn run_driver<L, F>(
args: &Arguments,
metadata_files: &[MetadataFile],
reporter: Reporter,
report_aggregator_task: impl Future<Output = anyhow::Result<()>>,
) -> anyhow::Result<()>
where
L: Platform,
F: Platform,
L::Blockchain: revive_dt_node::Node + Send + Sync + 'static,
F::Blockchain: revive_dt_node::Node + Send + Sync + 'static,
{
let leader_nodes =
NodePool::<L::Blockchain>::new(args).context("Failed to initialize leader node pool")?;
let follower_nodes =
NodePool::<F::Blockchain>::new(args).context("Failed to initialize follower node pool")?;
let tests_stream = tests_stream(
args,
metadata_files.iter(),
&leader_nodes,
&follower_nodes,
reporter.clone(),
)
.await;
let driver_task = start_driver_task::<L, F>(args, tests_stream)
.await
.context("Failed to start driver task")?;
let cli_reporting_task = start_cli_reporting_task(reporter);
let (_, _, rtn) = tokio::join!(cli_reporting_task, driver_task, report_aggregator_task);
rtn?;
Ok(())
}
async fn tests_stream<'a, L, F>(
args: &Arguments,
metadata_files: impl IntoIterator<Item = &'a MetadataFile> + Clone,
leader_node_pool: &'a NodePool<L::Blockchain>,
follower_node_pool: &'a NodePool<F::Blockchain>,
reporter: Reporter,
) -> impl Stream<Item = Test<'a, L, F>>
where
L: Platform,
F: Platform,
L::Blockchain: revive_dt_node::Node + Send + Sync + 'static,
F::Blockchain: revive_dt_node::Node + Send + Sync + 'static,
{
let tests = metadata_files
.into_iter()
.flat_map(|metadata_file| {
metadata_file
.cases
.iter()
.enumerate()
.map(move |(case_idx, case)| (metadata_file, case_idx, case))
})
// Flatten over the modes, prefer the case modes over the metadata file modes.
.flat_map(|(metadata_file, case_idx, case)| {
let reporter = reporter.clone();
let modes = case.modes.as_ref().or(metadata_file.modes.as_ref());
let modes = match modes {
Some(modes) => EitherIter::A(
ParsedMode::many_to_modes(modes.iter()).map(Cow::<'static, _>::Owned),
),
None => EitherIter::B(Mode::all().map(Cow::<'static, _>::Borrowed)),
};
modes.into_iter().map(move |mode| {
(
metadata_file,
case_idx,
case,
mode.clone(),
reporter.test_specific_reporter(Arc::new(TestSpecifier {
solc_mode: mode.as_ref().clone(),
metadata_file_path: metadata_file.metadata_file_path.clone(),
case_idx: CaseIdx::new(case_idx),
})),
)
})
})
.collect::<Vec<_>>();
// Note: before we do any kind of filtering or process the iterator in any way, we need to
// inform the report aggregator of all of the cases that were found as it keeps a state of the
// test cases for its internal use.
for (_, _, _, _, reporter) in tests.iter() {
reporter
.report_test_case_discovery_event()
.expect("Can't fail")
}
stream::iter(tests.into_iter())
.filter_map(
move |(metadata_file, case_idx, case, mode, reporter)| async move {
let leader_compiler = <L::Compiler as SolidityCompiler>::new(
args,
mode.version.clone().map(Into::into),
)
.await
.inspect_err(|err| error!(?err, "Failed to instantiate the leader compiler"))
.ok()?;
let follower_compiler = <F::Compiler as SolidityCompiler>::new(
args,
mode.version.clone().map(Into::into),
)
.await
.inspect_err(|err| error!(?err, "Failed to instantiate the follower compiler"))
.ok()?;
let leader_node = leader_node_pool.round_robbin();
let follower_node = follower_node_pool.round_robbin();
Some(Test::<L, F> {
metadata: metadata_file,
metadata_file_path: metadata_file.metadata_file_path.as_path(),
mode: mode.clone(),
case_idx: CaseIdx::new(case_idx),
case,
leader_node,
follower_node,
leader_compiler,
follower_compiler,
reporter,
})
},
)
.filter_map(move |test| async move {
match test.check_compatibility() {
Ok(()) => Some(test),
Err((reason, additional_information)) => {
debug!(
metadata_file_path = %test.metadata.metadata_file_path.display(),
case_idx = %test.case_idx,
mode = %test.mode,
reason,
additional_information =
serde_json::to_string(&additional_information).unwrap(),
"Ignoring Test Case"
);
test.reporter
.report_test_ignored_event(
reason.to_string(),
additional_information
.into_iter()
.map(|(k, v)| (k.into(), v))
.collect::<IndexMap<_, _>>(),
)
.expect("Can't fail");
None
}
}
})
}
async fn start_driver_task<'a, L, F>(
args: &Arguments,
tests: impl Stream<Item = Test<'a, L, F>>,
) -> anyhow::Result<impl Future<Output = ()>>
where
L: Platform,
F: Platform,
L::Blockchain: revive_dt_node::Node + Send + Sync + 'static,
F::Blockchain: revive_dt_node::Node + Send + Sync + 'static,
L::Compiler: 'a,
F::Compiler: 'a,
{
info!("Starting driver task");
let number_concurrent_tasks = args.number_of_concurrent_tasks();
let cached_compiler = Arc::new(
CachedCompiler::new(
args.directory().join("compilation_cache"),
args.invalidate_compilation_cache,
)
.await
.context("Failed to initialize cached compiler")?,
);
Ok(tests.for_each_concurrent(
// We want to limit the concurrent tasks here because:
//
// 1. We don't want to overwhelm the nodes with too many requests, leading to responses timing out.
// 2. We don't want to open too many files at once, leading to the OS running out of file descriptors.
//
// By default, we allow a maximum of 10 ongoing requests per node in order to limit (1), and assume that
// this number will automatically be low enough to address (2). The user can override this.
Some(number_concurrent_tasks),
move |test| {
let cached_compiler = cached_compiler.clone();
async move {
test.reporter
.report_leader_node_assigned_event(
test.leader_node.id(),
*L::config_id(),
test.leader_node.connection_string(),
)
.expect("Can't fail");
test.reporter
.report_follower_node_assigned_event(
test.follower_node.id(),
*F::config_id(),
test.follower_node.connection_string(),
)
.expect("Can't fail");
let reporter = test.reporter.clone();
let result = handle_case_driver::<L, F>(test, cached_compiler).await;
match result {
Ok(steps_executed) => reporter
.report_test_succeeded_event(steps_executed)
.expect("Can't fail"),
Err(error) => reporter
.report_test_failed_event(format!("{error:#}"))
.expect("Can't fail"),
}
}
},
))
}
#[allow(clippy::uninlined_format_args)]
#[allow(irrefutable_let_patterns)]
async fn start_cli_reporting_task(reporter: Reporter) {
let mut aggregator_events_rx = reporter.subscribe().await.expect("Can't fail");
drop(reporter);
let start = Instant::now();
const GREEN: &str = "\x1B[32m";
const RED: &str = "\x1B[31m";
const GREY: &str = "\x1B[90m";
const COLOR_RESET: &str = "\x1B[0m";
const BOLD: &str = "\x1B[1m";
const BOLD_RESET: &str = "\x1B[22m";
let mut number_of_successes = 0;
let mut number_of_failures = 0;
let mut buf = BufWriter::new(stderr());
while let Ok(event) = aggregator_events_rx.recv().await {
let ReporterEvent::MetadataFileSolcModeCombinationExecutionCompleted {
metadata_file_path,
mode,
case_status,
} = event
else {
continue;
};
let _ = writeln!(buf, "{} - {}", mode, metadata_file_path.display());
for (case_idx, case_status) in case_status.into_iter() {
let _ = write!(buf, "\tCase Index {case_idx:>3}: ");
let _ = match case_status {
TestCaseStatus::Succeeded { steps_executed } => {
number_of_successes += 1;
writeln!(
buf,
"{}{}Case Succeeded{} - Steps Executed: {}{}",
GREEN, BOLD, BOLD_RESET, steps_executed, COLOR_RESET
)
}
TestCaseStatus::Failed { reason } => {
number_of_failures += 1;
writeln!(
buf,
"{}{}Case Failed{} - Reason: {}{}",
RED,
BOLD,
BOLD_RESET,
reason.trim(),
COLOR_RESET,
)
}
TestCaseStatus::Ignored { reason, .. } => writeln!(
buf,
"{}{}Case Ignored{} - Reason: {}{}",
GREY,
BOLD,
BOLD_RESET,
reason.trim(),
COLOR_RESET,
),
};
}
let _ = writeln!(buf);
}
// Summary at the end.
let _ = writeln!(
buf,
"{} cases: {}{}{} cases succeeded, {}{}{} cases failed in {} seconds",
number_of_successes + number_of_failures,
GREEN,
number_of_successes,
COLOR_RESET,
RED,
number_of_failures,
COLOR_RESET,
start.elapsed().as_secs()
);
}
#[allow(clippy::too_many_arguments)]
#[instrument(
level = "info",
name = "Handling Case"
skip_all,
fields(
metadata_file_path = %test.metadata.relative_path().display(),
mode = %test.mode,
case_idx = %test.case_idx,
case_name = test.case.name.as_deref().unwrap_or("Unnamed Case"),
leader_node = test.leader_node.id(),
follower_node = test.follower_node.id(),
)
)]
async fn handle_case_driver<'a, L, F>(
test: Test<'a, L, F>,
cached_compiler: Arc<CachedCompiler<'a>>,
) -> anyhow::Result<usize>
where
L: Platform,
F: Platform,
L::Blockchain: revive_dt_node::Node + Send + Sync + 'static,
F::Blockchain: revive_dt_node::Node + Send + Sync + 'static,
L::Compiler: 'a,
F::Compiler: 'a,
{
let leader_reporter = test
.reporter
.execution_specific_reporter(test.leader_node.id(), NodeDesignation::Leader);
let follower_reporter = test
.reporter
.execution_specific_reporter(test.follower_node.id(), NodeDesignation::Follower);
let (
CompilerOutput {
contracts: leader_pre_link_contracts,
},
CompilerOutput {
contracts: follower_pre_link_contracts,
},
) = try_join!(
cached_compiler.compile_contracts::<L>(
test.metadata,
test.metadata_file_path,
test.mode.clone(),
None,
&test.leader_compiler,
&leader_reporter,
),
cached_compiler.compile_contracts::<F>(
test.metadata,
test.metadata_file_path,
test.mode.clone(),
None,
&test.follower_compiler,
&follower_reporter
)
)
.context("Failed to compile pre-link contracts for leader/follower in parallel")?;
let mut leader_deployed_libraries = None::<HashMap<_, _>>;
let mut follower_deployed_libraries = None::<HashMap<_, _>>;
let mut contract_sources = test
.metadata
.contract_sources()
.context("Failed to retrieve contract sources from metadata")?;
for library_instance in test
.metadata
.libraries
.iter()
.flatten()
.flat_map(|(_, map)| map.values())
{
debug!(%library_instance, "Deploying Library Instance");
let ContractPathAndIdent {
contract_source_path: library_source_path,
contract_ident: library_ident,
} = contract_sources
.remove(library_instance)
.context("Failed to find the contract source")?;
let (leader_code, leader_abi) = leader_pre_link_contracts
.get(&library_source_path)
.and_then(|contracts| contracts.get(library_ident.as_str()))
.context("Declared library was not compiled")?;
let (follower_code, follower_abi) = follower_pre_link_contracts
.get(&library_source_path)
.and_then(|contracts| contracts.get(library_ident.as_str()))
.context("Declared library was not compiled")?;
let leader_code = match alloy::hex::decode(leader_code) {
Ok(code) => code,
Err(error) => {
anyhow::bail!("Failed to hex-decode the byte code {}", error)
}
};
let follower_code = match alloy::hex::decode(follower_code) {
Ok(code) => code,
Err(error) => {
anyhow::bail!("Failed to hex-decode the byte code {}", error)
}
};
// Getting the deployer address from the cases themselves. This is to ensure that we're
// doing the deployments from different accounts and therefore we're not slowed down by
// the nonce.
let deployer_address = test
.case
.steps
.iter()
.filter_map(|step| match step {
Step::FunctionCall(input) => Some(input.caller),
Step::BalanceAssertion(..) => None,
Step::StorageEmptyAssertion(..) => None,
})
.next()
.unwrap_or(Input::default_caller());
let leader_tx = TransactionBuilder::<Ethereum>::with_deploy_code(
TransactionRequest::default().from(deployer_address),
leader_code,
);
let follower_tx = TransactionBuilder::<Ethereum>::with_deploy_code(
TransactionRequest::default().from(deployer_address),
follower_code,
);
let (leader_receipt, follower_receipt) = try_join!(
test.leader_node.execute_transaction(leader_tx),
test.follower_node.execute_transaction(follower_tx)
)?;
debug!(
?library_instance,
library_address = ?leader_receipt.contract_address,
"Deployed library to leader"
);
debug!(
?library_instance,
library_address = ?follower_receipt.contract_address,
"Deployed library to follower"
);
let leader_library_address = leader_receipt
.contract_address
.context("Contract deployment didn't return an address")?;
let follower_library_address = follower_receipt
.contract_address
.context("Contract deployment didn't return an address")?;
leader_deployed_libraries.get_or_insert_default().insert(
library_instance.clone(),
(
library_ident.clone(),
leader_library_address,
leader_abi.clone(),
),
);
follower_deployed_libraries.get_or_insert_default().insert(
library_instance.clone(),
(
library_ident,
follower_library_address,
follower_abi.clone(),
),
);
}
if let Some(ref leader_deployed_libraries) = leader_deployed_libraries {
leader_reporter.report_libraries_deployed_event(
leader_deployed_libraries
.clone()
.into_iter()
.map(|(key, (_, address, _))| (key, address))
.collect::<BTreeMap<_, _>>(),
)?;
}
if let Some(ref follower_deployed_libraries) = follower_deployed_libraries {
follower_reporter.report_libraries_deployed_event(
follower_deployed_libraries
.clone()
.into_iter()
.map(|(key, (_, address, _))| (key, address))
.collect::<BTreeMap<_, _>>(),
)?;
}
let (
CompilerOutput {
contracts: leader_post_link_contracts,
},
CompilerOutput {
contracts: follower_post_link_contracts,
},
) = try_join!(
cached_compiler.compile_contracts::<L>(
test.metadata,
test.metadata_file_path,
test.mode.clone(),
leader_deployed_libraries.as_ref(),
&test.leader_compiler,
&leader_reporter,
),
cached_compiler.compile_contracts::<F>(
test.metadata,
test.metadata_file_path,
test.mode.clone(),
follower_deployed_libraries.as_ref(),
&test.follower_compiler,
&follower_reporter
)
)
.context("Failed to compile post-link contracts for leader/follower in parallel")?;
let leader_state = CaseState::<L>::new(
test.leader_compiler.version().clone(),
leader_post_link_contracts,
leader_deployed_libraries.unwrap_or_default(),
leader_reporter,
);
let follower_state = CaseState::<F>::new(
test.follower_compiler.version().clone(),
follower_post_link_contracts,
follower_deployed_libraries.unwrap_or_default(),
follower_reporter,
);
let mut driver = CaseDriver::<L, F>::new(
test.metadata,
test.case,
test.leader_node,
test.follower_node,
leader_state,
follower_state,
);
driver
.execute()
.await
.inspect(|steps_executed| info!(steps_executed, "Case succeeded"))
}
async fn execute_corpus(
args: &Arguments,
tests: &[MetadataFile],
reporter: Reporter,
report_aggregator_task: impl Future<Output = anyhow::Result<()>>,
) -> anyhow::Result<()> {
match (&args.leader, &args.follower) {
(TestingPlatform::Geth, TestingPlatform::Kitchensink) => {
run_driver::<Geth, Kitchensink>(args, tests, reporter, report_aggregator_task).await?
}
(TestingPlatform::Geth, TestingPlatform::Geth) => {
run_driver::<Geth, Geth>(args, tests, reporter, report_aggregator_task).await?
}
_ => unimplemented!(),
}
Ok(())
}
/// This represents a single "test": a mode, path, and collection of cases.
#[derive(Clone)]
struct Test<'a, L: Platform, F: Platform> {
metadata: &'a MetadataFile,
metadata_file_path: &'a Path,
mode: Cow<'a, Mode>,
case_idx: CaseIdx,
case: &'a Case,
leader_node: &'a <L as Platform>::Blockchain,
follower_node: &'a <F as Platform>::Blockchain,
leader_compiler: L::Compiler,
follower_compiler: F::Compiler,
reporter: TestSpecificReporter,
}
impl<'a, L: Platform, F: Platform> Test<'a, L, F> {
/// Checks if this test can be run with the current configuration.
pub fn check_compatibility(&self) -> TestCheckFunctionResult {
self.check_metadata_file_ignored()?;
self.check_case_file_ignored()?;
self.check_target_compatibility()?;
self.check_evm_version_compatibility()?;
self.check_compiler_compatibility()?;
Ok(())
}
/// Checks if the metadata file is ignored or not.
fn check_metadata_file_ignored(&self) -> TestCheckFunctionResult {
if self.metadata.ignore.is_some_and(|ignore| ignore) {
Err(("Metadata file is ignored.", indexmap! {}))
} else {
Ok(())
}
}
/// Checks if the case file is ignored or not.
fn check_case_file_ignored(&self) -> TestCheckFunctionResult {
if self.case.ignore.is_some_and(|ignore| ignore) {
Err(("Case is ignored.", indexmap! {}))
} else {
Ok(())
}
}
/// Checks if the leader and the follower both support the desired targets in the metadata file.
fn check_target_compatibility(&self) -> TestCheckFunctionResult {
let leader_support =
<L::Blockchain as Node>::matches_target(self.metadata.targets.as_deref());
let follower_support =
<F::Blockchain as Node>::matches_target(self.metadata.targets.as_deref());
let is_allowed = leader_support && follower_support;
if is_allowed {
Ok(())
} else {
Err((
"Either the leader or the follower do not support the target desired by the test.",
indexmap! {
"test_desired_targets" => json!(self.metadata.targets.as_ref()),
"leader_support" => json!(leader_support),
"follower_support" => json!(follower_support),
},
))
}
}
// Checks for the compatibility of the EVM version with the leader and follower nodes.
fn check_evm_version_compatibility(&self) -> TestCheckFunctionResult {
let Some(evm_version_requirement) = self.metadata.required_evm_version else {
return Ok(());
};
let leader_support = evm_version_requirement
.matches(&<L::Blockchain as revive_dt_node::Node>::evm_version());
let follower_support = evm_version_requirement
.matches(&<F::Blockchain as revive_dt_node::Node>::evm_version());
let is_allowed = leader_support && follower_support;
if is_allowed {
Ok(())
} else {
Err((
"EVM version is incompatible with either the leader or the follower.",
indexmap! {
"test_desired_evm_version" => json!(self.metadata.required_evm_version),
"leader_support" => json!(leader_support),
"follower_support" => json!(follower_support),
},
))
}
}
/// Checks if the leader and follower compilers support the mode that the test is for.
fn check_compiler_compatibility(&self) -> TestCheckFunctionResult {
let leader_support = self
.leader_compiler
.supports_mode(self.mode.optimize_setting, self.mode.pipeline);
let follower_support = self
.follower_compiler
.supports_mode(self.mode.optimize_setting, self.mode.pipeline);
let is_allowed = leader_support && follower_support;
if is_allowed {
Ok(())
} else {
Err((
"Compilers do not support this mode either for the leader or for the follower.",
indexmap! {
"mode" => json!(self.mode),
"leader_support" => json!(leader_support),
"follower_support" => json!(follower_support),
},
))
}
}
}
type TestCheckFunctionResult = Result<(), (&'static str, IndexMap<&'static str, Value>)>;
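The `TestCheckFunctionResult` pattern above returns a static reason plus a JSON map of structured details, which the driver turns into a "test ignored" report event. Below is a self-contained sketch of one such check; the version numbers and the `check_version` helper are hypothetical, and only the `indexmap` and `serde_json` crates already used by this file are assumed.

use indexmap::{IndexMap, indexmap};
use serde_json::{Value, json};

type CheckResult = Result<(), (&'static str, IndexMap<&'static str, Value>)>;

// A made-up compatibility check in the same shape as check_target_compatibility.
fn check_version(supported: u64, required: u64) -> CheckResult {
    if supported >= required {
        Ok(())
    } else {
        Err((
            "Compiler version is too old for this test.",
            indexmap! {
                "supported" => json!(supported),
                "required" => json!(required),
            },
        ))
    }
}

fn main() {
    if let Err((reason, details)) = check_version(8, 19) {
        println!("ignored: {reason} ({details:?})");
    }
}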
+16 -2
@@ -9,9 +9,23 @@ repository.workspace = true
rust-version.workspace = true
[dependencies]
revive-dt-common = { workspace = true }
revive-common = { workspace = true }
alloy = { workspace = true }
alloy-primitives = { workspace = true }
alloy-sol-types = { workspace = true }
anyhow = { workspace = true }
futures = { workspace = true }
regex = { workspace = true }
tracing = { workspace = true }
semver = { workspace = true }
serde = { workspace = true, features = ["derive"] }
serde_json = { workspace = true }
[dev-dependencies]
tokio = { workspace = true }
[lints]
workspace = true
+70 -5
@@ -1,12 +1,77 @@
use serde::{Deserialize, Serialize};
use revive_dt_common::{macros::define_wrapper_type, types::Mode};
use crate::{
input::{Expected, Step},
mode::ParsedMode,
};
#[derive(Debug, Default, Serialize, Deserialize, Clone, Eq, PartialEq)]
pub struct Case {
#[serde(skip_serializing_if = "Option::is_none")]
pub name: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
pub comment: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
pub modes: Option<Vec<ParsedMode>>,
#[serde(rename = "inputs")]
pub steps: Vec<Step>,
#[serde(skip_serializing_if = "Option::is_none")]
pub group: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
pub expected: Option<Expected>,
#[serde(skip_serializing_if = "Option::is_none")]
pub ignore: Option<bool>,
}
impl Case {
#[allow(irrefutable_let_patterns)]
pub fn steps_iterator(&self) -> impl Iterator<Item = Step> {
let steps_len = self.steps.len();
self.steps
.clone()
.into_iter()
.enumerate()
.map(move |(idx, mut step)| {
let Step::FunctionCall(ref mut input) = step else {
return step;
};
if idx + 1 == steps_len {
if input.expected.is_none() {
input.expected = self.expected.clone();
}
// TODO: What does it mean for us to have an `expected` field on the case itself
// but the final input also has an expected field that doesn't match the one on
// the case? What are we supposed to do with that final expected field on the
// case?
step
} else {
step
}
})
}
pub fn solc_modes(&self) -> Vec<Mode> {
match &self.modes {
Some(modes) => ParsedMode::many_to_modes(modes.iter()).collect(),
None => Mode::all().cloned().collect(),
}
}
}
define_wrapper_type!(
/// A wrapper type for the index of test cases found in metadata file.
#[derive(Clone, Copy, Debug, PartialEq, Eq, PartialOrd, Ord, Hash, Serialize, Deserialize)]
#[serde(transparent)]
pub struct CaseIdx(usize) impl Display, FromStr;
);
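`steps_iterator` above clones the steps and, when the final step is a function call with no `expected` of its own, falls back to the case-level `expected`. A simplified, self-contained sketch of that fall-through, with a plain `String` standing in for the `Expected` type:

// Simplified model of the `expected` fall-through in Case::steps_iterator.
#[derive(Clone, Debug)]
struct Input {
    expected: Option<String>,
}

fn steps_with_fallback(mut steps: Vec<Input>, case_expected: Option<String>) -> Vec<Input> {
    // Only the last step inherits the case-level expectation, and only
    // if it does not already carry one of its own.
    if let Some(last) = steps.last_mut() {
        if last.expected.is_none() {
            last.expected = case_expected;
        }
    }
    steps
}

fn main() {
    let steps = vec![Input { expected: None }, Input { expected: None }];
    let steps = steps_with_fallback(steps, Some("Success".into()));
    assert!(steps[0].expected.is_none());
    assert_eq!(steps[1].expected.as_deref(), Some("Success"));
}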
+108 -44
@@ -3,65 +3,129 @@ use std::{
path::{Path, PathBuf},
};
use revive_dt_common::iterators::FilesWithExtensionIterator;
use serde::{Deserialize, Serialize};
use tracing::{debug, info};
use crate::metadata::{Metadata, MetadataFile};
use anyhow::Context as _;
#[derive(Clone, Debug, PartialEq, Eq, PartialOrd, Ord, Hash, Serialize, Deserialize)]
#[serde(untagged)]
pub enum Corpus {
SinglePath { name: String, path: PathBuf },
MultiplePaths { name: String, paths: Vec<PathBuf> },
}
impl Corpus {
pub fn try_from_path(file_path: impl AsRef<Path>) -> anyhow::Result<Self> {
let mut corpus = File::open(file_path.as_ref())
.map_err(anyhow::Error::from)
.and_then(|file| serde_json::from_reader::<_, Corpus>(file).map_err(Into::into))
.with_context(|| {
format!(
"Failed to open and deserialize corpus file at {}",
file_path.as_ref().display()
)
})?;
let corpus_directory = file_path
.as_ref()
.canonicalize()
.context("Failed to canonicalize the path to the corpus file")?
.parent()
.context("Corpus file has no parent")?
.to_path_buf();
for path in corpus.paths_iter_mut() {
*path = corpus_directory.join(path.as_path())
}
Ok(corpus)
} }
    pub fn enumerate_tests(&self) -> Vec<MetadataFile> {
        let mut tests = self
            .paths_iter()
            .flat_map(|root_path| {
                if !root_path.is_dir() {
                    Box::new(std::iter::once(root_path.to_path_buf()))
                        as Box<dyn Iterator<Item = _>>
                } else {
                    Box::new(
                        FilesWithExtensionIterator::new(root_path)
                            .with_use_cached_fs(true)
                            .with_allowed_extension("sol")
                            .with_allowed_extension("json"),
                    )
                }
                .map(move |metadata_file_path| (root_path, metadata_file_path))
            })
            .filter_map(|(root_path, metadata_file_path)| {
                Metadata::try_from_file(&metadata_file_path)
                    .or_else(|| {
                        debug!(
                            discovered_from = %root_path.display(),
                            metadata_file_path = %metadata_file_path.display(),
                            "Skipping file since it doesn't contain valid metadata"
                        );
                        None
                    })
                    .map(|metadata| MetadataFile {
                        metadata_file_path,
                        corpus_file_path: root_path.to_path_buf(),
                        content: metadata,
                    })
                    .inspect(|metadata_file| {
                        debug!(
                            metadata_file_path = %metadata_file.relative_path().display(),
                            "Loaded metadata file"
                        )
                    })
            })
            .collect::<Vec<_>>();
        tests.sort_by(|a, b| a.metadata_file_path.cmp(&b.metadata_file_path));
        tests.dedup_by(|a, b| a.metadata_file_path == b.metadata_file_path);
        info!(
            len = tests.len(),
            corpus_name = self.name(),
            "Found tests in Corpus"
        );
        tests
    }
    pub fn name(&self) -> &str {
        match self {
            Corpus::SinglePath { name, .. } | Corpus::MultiplePaths { name, .. } => name.as_str(),
        }
    }

    pub fn paths_iter(&self) -> impl Iterator<Item = &Path> {
        match self {
            Corpus::SinglePath { path, .. } => {
                Box::new(std::iter::once(path.as_path())) as Box<dyn Iterator<Item = _>>
            }
            Corpus::MultiplePaths { paths, .. } => {
                Box::new(paths.iter().map(|path| path.as_path())) as Box<dyn Iterator<Item = _>>
            }
        }
    }

    pub fn paths_iter_mut(&mut self) -> impl Iterator<Item = &mut PathBuf> {
        match self {
            Corpus::SinglePath { path, .. } => {
                Box::new(std::iter::once(path)) as Box<dyn Iterator<Item = _>>
            }
            Corpus::MultiplePaths { paths, .. } => {
                Box::new(paths.iter_mut()) as Box<dyn Iterator<Item = _>>
            }
        }
    }

    pub fn path_count(&self) -> usize {
        match self {
            Corpus::SinglePath { .. } => 1,
            Corpus::MultiplePaths { paths, .. } => paths.len(),
        }
    }
}
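Since `Corpus` is `#[serde(untagged)]`, a corpus file can take either shape. A minimal sketch (not part of the diff; file contents are illustrative) of how the two variants deserialize:

fn corpus_shapes_example() -> anyhow::Result<()> {
    // A single root path...
    let single: Corpus = serde_json::from_str(r#"{ "name": "solidity", "path": "tests" }"#)?;
    // ...or several roots, which may mix directories and individual files.
    let multi: Corpus =
        serde_json::from_str(r#"{ "name": "mixed", "paths": ["tests/a", "tests/b.sol"] }"#)?;
    assert!(matches!(single, Corpus::SinglePath { .. }));
    assert_eq!(multi.path_count(), 2);
    Ok(())
}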
+1187 -80
File diff suppressed because it is too large
+1
@@ -5,3 +5,4 @@ pub mod corpus;
pub mod input;
pub mod metadata;
pub mod mode;
pub mod traits;
+455 -69
@@ -1,46 +1,106 @@
use std::{
    cmp::Ordering,
    collections::BTreeMap,
    fmt::Display,
    fs::File,
    ops::Deref,
    path::{Path, PathBuf},
    str::FromStr,
};

use serde::{Deserialize, Serialize};

use revive_common::EVMVersion;
use revive_dt_common::{
    cached_fs::read_to_string, iterators::FilesWithExtensionIterator, macros::define_wrapper_type,
    types::Mode,
};
use tracing::error;

use crate::{case::Case, mode::ParsedMode};
pub const METADATA_FILE_EXTENSION: &str = "json";
pub const SOLIDITY_CASE_FILE_EXTENSION: &str = "sol";
pub const SOLIDITY_CASE_COMMENT_MARKER: &str = "//!";

#[derive(Debug, Default, Deserialize, Clone, Eq, PartialEq)]
pub struct MetadataFile {
/// The path of the metadata file. This will either be a JSON or solidity file.
pub metadata_file_path: PathBuf,
/// This is the path contained within the corpus file. This could either be the path of some dir
/// or could be the actual metadata file path.
pub corpus_file_path: PathBuf,
/// The metadata contained within the file.
pub content: Metadata,
}
impl MetadataFile {
pub fn relative_path(&self) -> &Path {
if self.corpus_file_path.is_file() {
&self.corpus_file_path
} else {
self.metadata_file_path
.strip_prefix(&self.corpus_file_path)
.unwrap()
}
}
}
impl Deref for MetadataFile {
type Target = Metadata;
fn deref(&self) -> &Self::Target {
&self.content
}
}
#[derive(Debug, Default, Serialize, Deserialize, Clone, Eq, PartialEq)]
pub struct Metadata {
    /// A comment on the test case that's added for human-readability.
    #[serde(skip_serializing_if = "Option::is_none")]
    pub comment: Option<String>,
    #[serde(skip_serializing_if = "Option::is_none")]
    pub ignore: Option<bool>,
    #[serde(skip_serializing_if = "Option::is_none")]
    pub targets: Option<Vec<String>>,
    pub cases: Vec<Case>,
    #[serde(skip_serializing_if = "Option::is_none")]
    pub contracts: Option<BTreeMap<ContractInstance, ContractPathAndIdent>>,
    #[serde(skip_serializing_if = "Option::is_none")]
    pub libraries: Option<BTreeMap<PathBuf, BTreeMap<ContractIdent, ContractInstance>>>,
    #[serde(skip_serializing_if = "Option::is_none")]
    pub modes: Option<Vec<ParsedMode>>,
    #[serde(skip_serializing_if = "Option::is_none")]
    pub file_path: Option<PathBuf>,
    /// This field specifies an EVM version requirement for the test case: the test is only run
    /// if the EVM version of the node matches the requirement specified here.
    #[serde(skip_serializing_if = "Option::is_none")]
    pub required_evm_version: Option<EvmVersionRequirement>,
    /// A set of compilation directives that will be passed to the compiler whenever the contracts
    /// for the test are being compiled. Note that this differs from the [`Mode`]s in that a
    /// [`Mode`] is just a filter for when a test can run whereas this is an instruction to the
    /// compiler.
    #[serde(skip_serializing_if = "Option::is_none")]
    pub compiler_directives: Option<CompilationDirectives>,
}
impl Metadata {
    /// Returns the modes that we should test from this metadata.
    pub fn solc_modes(&self) -> Vec<Mode> {
        match &self.modes {
            Some(modes) => ParsedMode::many_to_modes(modes.iter()).collect(),
            None => Mode::all().cloned().collect(),
        }
    }
    /// Returns the base directory of this metadata.
@@ -53,28 +113,43 @@ impl Metadata {
            .to_path_buf())
    }
    /// Returns the contract sources with canonicalized paths for the files.
    pub fn contract_sources(
        &self,
    ) -> anyhow::Result<BTreeMap<ContractInstance, ContractPathAndIdent>> {
        let directory = self.directory()?;
        let mut sources = BTreeMap::new();

        let Some(contracts) = &self.contracts else {
            return Ok(sources);
        };

        for (
            alias,
            ContractPathAndIdent {
                contract_source_path,
                contract_ident,
            },
        ) in contracts
        {
            let alias = alias.clone();
            let absolute_path = directory
                .join(contract_source_path)
                .canonicalize()
                .map_err(|error| {
                    anyhow::anyhow!(
                        "Failed to canonicalize contract source path '{}': {error}",
                        directory.join(contract_source_path).display()
                    )
                })?;
            let contract_ident = contract_ident.clone();

            sources.insert(
                alias,
                ContractPathAndIdent {
                    contract_source_path: absolute_path,
                    contract_ident,
                },
            );
        }

        Ok(sources)
@@ -89,10 +164,7 @@ impl Metadata {
    pub fn try_from_file(path: &Path) -> Option<Self> {
        assert!(path.is_file(), "not a file: {}", path.display());

        let file_extension = path.extension()?;

        if file_extension == METADATA_FILE_EXTENSION {
            return Self::try_from_json(path);
@@ -102,18 +174,12 @@ impl Metadata {
            return Self::try_from_solidity(path);
        }

        None
    }
    fn try_from_json(path: &Path) -> Option<Self> {
        let file = File::open(path)
            .inspect_err(|err| error!(path = %path.display(), %err, "Failed to open file"))
            .ok()?;

        match serde_json::from_reader::<_, Metadata>(file) {
@@ -121,24 +187,16 @@ impl Metadata {
                metadata.file_path = Some(path.to_path_buf());
                Some(metadata)
            }
            Err(err) => {
                error!(path = %path.display(), %err, "Deserialization of metadata failed");
                None
            }
        }
    }
    fn try_from_solidity(path: &Path) -> Option<Self> {
        let spec = read_to_string(path)
            .inspect_err(|err| error!(path = %path.display(), %err, "Failed to read file content"))
            .ok()?
            .lines()
            .filter_map(|line| line.strip_prefix(SOLIDITY_CASE_COMMENT_MARKER))
@@ -147,22 +205,350 @@ impl Metadata {
                buf
            });

        if spec.is_empty() {
            return None;
        }

        match serde_json::from_str::<Self>(&spec) {
            Ok(mut metadata) => {
                metadata.file_path = Some(path.to_path_buf());
                metadata.contracts = Some(
                    [(
                        ContractInstance::new("Test"),
                        ContractPathAndIdent {
                            contract_source_path: path.to_path_buf(),
                            contract_ident: ContractIdent::new("Test"),
                        },
                    )]
                    .into(),
                );
                Some(metadata)
            }
            Err(err) => {
                error!(path = %path.display(), %err, "Failed to deserialize metadata");
                None
            }
        }
    }
    /// Returns an iterator over all of the Solidity files that need to be compiled for this
    /// [`Metadata`] object.
    ///
    /// Note: if the metadata is contained within a Solidity file then this is the only file that
    /// we wish to compile since it is a self-contained test. Otherwise, if it's a JSON file
    /// then we need to compile all of the contracts that are in the directory since imports are
    /// allowed there.
pub fn files_to_compile(&self) -> anyhow::Result<Box<dyn Iterator<Item = PathBuf>>> {
let Some(ref metadata_file_path) = self.file_path else {
anyhow::bail!("The metadata file path is not defined");
};
if metadata_file_path
.extension()
.is_some_and(|extension| extension.eq_ignore_ascii_case("sol"))
{
Ok(Box::new(std::iter::once(metadata_file_path.clone())))
} else {
Ok(Box::new(
FilesWithExtensionIterator::new(self.directory()?)
.with_allowed_extension("sol")
.with_use_cached_fs(true),
))
}
}
}
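A quick sketch (not part of the diff; the path is illustrative) of the `.sol` branch of the discovery behaviour described above, where a self-contained Solidity test compiles only itself:

fn files_to_compile_example() -> anyhow::Result<()> {
    let metadata = Metadata {
        file_path: Some(PathBuf::from("tests/storage.sol")),
        ..Default::default()
    };
    // Self-contained Solidity test: the iterator yields exactly this one file.
    let files: Vec<PathBuf> = metadata.files_to_compile()?.collect();
    assert_eq!(files, vec![PathBuf::from("tests/storage.sol")]);
    Ok(())
}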
define_wrapper_type!(
    /// Represents a contract instance found in a metadata file.
///
/// Typically, this is used as the key to the "contracts" field of metadata files.
#[derive(
Clone, Debug, Default, PartialEq, Eq, PartialOrd, Ord, Hash, Serialize, Deserialize,
)]
#[serde(transparent)]
pub struct ContractInstance(String) impl Display;
);
define_wrapper_type!(
    /// Represents a contract identifier found in a metadata file.
///
/// A contract identifier is the name of the contract in the source code.
#[derive(
Clone, Debug, Default, PartialEq, Eq, PartialOrd, Ord, Hash, Serialize, Deserialize,
)]
#[serde(transparent)]
pub struct ContractIdent(String) impl Display;
);
/// Represents an identifier used for contracts.
///
/// The type supports serialization from and into the following string format:
///
/// ```text
/// ${path}:${contract_ident}
/// ```
#[derive(Clone, Debug, PartialEq, Eq, PartialOrd, Ord, Hash, Serialize, Deserialize)]
#[serde(try_from = "String", into = "String")]
pub struct ContractPathAndIdent {
/// The path of the contract source code relative to the directory containing the metadata file.
pub contract_source_path: PathBuf,
/// The identifier of the contract.
pub contract_ident: ContractIdent,
}
impl Display for ContractPathAndIdent {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(
f,
"{}:{}",
self.contract_source_path.display(),
self.contract_ident.as_ref()
)
}
}
impl FromStr for ContractPathAndIdent {
type Err = anyhow::Error;
fn from_str(s: &str) -> Result<Self, Self::Err> {
let mut splitted_string = s.split(":").peekable();
let mut path = None::<String>;
let mut identifier = None::<String>;
loop {
let Some(next_item) = splitted_string.next() else {
break;
};
if splitted_string.peek().is_some() {
match path {
Some(ref mut path) => {
path.push(':');
path.push_str(next_item);
}
None => path = Some(next_item.to_owned()),
}
} else {
identifier = Some(next_item.to_owned())
}
}
match (path, identifier) {
(Some(path), Some(identifier)) => Ok(Self {
contract_source_path: PathBuf::from(path),
contract_ident: ContractIdent::new(identifier),
}),
(None, Some(path)) | (Some(path), None) => {
let Some(identifier) = path.split(".").next().map(ToOwned::to_owned) else {
anyhow::bail!("Failed to find identifier");
};
Ok(Self {
contract_source_path: PathBuf::from(path),
contract_ident: ContractIdent::new(identifier),
})
}
(None, None) => anyhow::bail!("Failed to find the path and identifier"),
}
}
}
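Because every `:` except the last is folded back into the path, Windows-style drive letters survive the round trip. A sketch (not part of the diff; the path is illustrative):

fn colon_in_path_example() -> anyhow::Result<()> {
    let parsed = ContractPathAndIdent::from_str(r"C:\contracts\Token.sol:Token")?;
    assert_eq!(
        parsed.contract_source_path,
        PathBuf::from(r"C:\contracts\Token.sol")
    );
    assert_eq!(parsed.contract_ident, ContractIdent::new("Token"));
    Ok(())
}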
impl TryFrom<String> for ContractPathAndIdent {
type Error = anyhow::Error;
fn try_from(value: String) -> Result<Self, Self::Error> {
Self::from_str(&value)
}
}
impl From<ContractPathAndIdent> for String {
fn from(value: ContractPathAndIdent) -> Self {
value.to_string()
}
}
/// An EVM version requirement that the test case has. This gets serialized and
/// deserialized from and into [`String`].
#[derive(Clone, Copy, Debug, PartialEq, Eq, PartialOrd, Ord, Hash, Serialize, Deserialize)]
#[serde(try_from = "String", into = "String")]
pub struct EvmVersionRequirement {
ordering: Ordering,
or_equal: bool,
evm_version: EVMVersion,
}
impl EvmVersionRequirement {
pub fn new_greater_than_or_equals(version: EVMVersion) -> Self {
Self {
ordering: Ordering::Greater,
or_equal: true,
evm_version: version,
}
}
pub fn new_greater_than(version: EVMVersion) -> Self {
Self {
ordering: Ordering::Greater,
or_equal: false,
evm_version: version,
}
}
pub fn new_equals(version: EVMVersion) -> Self {
Self {
ordering: Ordering::Equal,
or_equal: false,
evm_version: version,
}
}
pub fn new_less_than(version: EVMVersion) -> Self {
Self {
ordering: Ordering::Less,
or_equal: false,
evm_version: version,
}
}
pub fn new_less_than_or_equals(version: EVMVersion) -> Self {
Self {
ordering: Ordering::Less,
or_equal: true,
evm_version: version,
}
}
pub fn matches(&self, other: &EVMVersion) -> bool {
let ordering = other.cmp(&self.evm_version);
ordering == self.ordering || (self.or_equal && matches!(ordering, Ordering::Equal))
}
}
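A sketch (not part of the diff) of how a requirement is parsed and checked; it assumes `EVMVersion` parses lowercase fork names such as "shanghai" and "cancun" via `TryFrom<&str>` and orders them chronologically:

fn evm_requirement_example() -> anyhow::Result<()> {
    let requirement: EvmVersionRequirement = ">=shanghai".parse()?;
    // Assumed fork name; any later fork satisfies a `>=` requirement.
    let cancun = EVMVersion::try_from("cancun")?;
    assert!(requirement.matches(&cancun));
    Ok(())
}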
impl Display for EvmVersionRequirement {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let Self {
ordering,
or_equal,
evm_version,
} = self;
match ordering {
Ordering::Less => write!(f, "<")?,
Ordering::Equal => write!(f, "=")?,
Ordering::Greater => write!(f, ">")?,
}
if *or_equal && !matches!(ordering, Ordering::Equal) {
write!(f, "=")?;
}
write!(f, "{evm_version}")
}
}
impl FromStr for EvmVersionRequirement {
type Err = anyhow::Error;
fn from_str(s: &str) -> Result<Self, Self::Err> {
match s.as_bytes() {
[b'>', b'=', remaining @ ..] => Ok(Self {
ordering: Ordering::Greater,
or_equal: true,
evm_version: str::from_utf8(remaining)?.try_into()?,
}),
[b'>', remaining @ ..] => Ok(Self {
ordering: Ordering::Greater,
or_equal: false,
evm_version: str::from_utf8(remaining)?.try_into()?,
}),
[b'<', b'=', remaining @ ..] => Ok(Self {
ordering: Ordering::Less,
or_equal: true,
evm_version: str::from_utf8(remaining)?.try_into()?,
}),
[b'<', remaining @ ..] => Ok(Self {
ordering: Ordering::Less,
or_equal: false,
evm_version: str::from_utf8(remaining)?.try_into()?,
}),
[b'=', remaining @ ..] => Ok(Self {
ordering: Ordering::Equal,
or_equal: false,
evm_version: str::from_utf8(remaining)?.try_into()?,
}),
_ => anyhow::bail!("Invalid EVM version requirement {s}"),
}
}
}
impl TryFrom<String> for EvmVersionRequirement {
type Error = anyhow::Error;
fn try_from(value: String) -> Result<Self, Self::Error> {
value.parse()
}
}
impl From<EvmVersionRequirement> for String {
fn from(value: EvmVersionRequirement) -> Self {
value.to_string()
}
}
/// A set of compilation directives that will be passed to the compiler whenever the contracts for
/// the test are being compiled. Note that this differs from the [`Mode`]s in that a [`Mode`] is
/// just a filter for when a test can run whereas this is an instruction to the compiler.
#[derive(
Clone, Debug, Copy, PartialEq, Eq, PartialOrd, Ord, Hash, Default, Serialize, Deserialize,
)]
pub struct CompilationDirectives {
/// Defines how the revert strings should be handled.
pub revert_string_handling: Option<RevertString>,
}
/// Defines how the compiler should handle revert strings.
#[derive(
Clone, Debug, Copy, PartialEq, Eq, PartialOrd, Ord, Hash, Default, Serialize, Deserialize,
)]
#[serde(rename_all = "camelCase")]
pub enum RevertString {
#[default]
Default,
Debug,
Strip,
VerboseDebug,
}
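A sketch (not part of the diff) of what the `compiler_directives` object looks like in a metadata JSON file, given the serde attributes above:

fn compilation_directives_example() -> anyhow::Result<()> {
    // `RevertString` is `rename_all = "camelCase"`, so `VerboseDebug` is spelled "verboseDebug".
    let directives: CompilationDirectives =
        serde_json::from_str(r#"{ "revert_string_handling": "verboseDebug" }"#)?;
    assert_eq!(
        directives.revert_string_handling,
        Some(RevertString::VerboseDebug)
    );
    Ok(())
}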
#[cfg(test)]
mod test {
use super::*;
#[test]
fn contract_identifier_respects_roundtrip_property() {
// Arrange
let string = "ERC20/ERC20.sol:ERC20";
// Act
let identifier = ContractPathAndIdent::from_str(string);
// Assert
let identifier = identifier.expect("Failed to parse");
assert_eq!(
identifier.contract_source_path.display().to_string(),
"ERC20/ERC20.sol"
);
assert_eq!(identifier.contract_ident, "ERC20".to_owned().into());
// Act
let reserialized = identifier.to_string();
// Assert
assert_eq!(string, reserialized);
}
#[test]
fn complex_metadata_file_can_be_deserialized() {
// Arrange
const JSON: &str = include_str!("../../../assets/test_metadata.json");
// Act
let metadata = serde_json::from_str::<Metadata>(JSON);
// Assert
metadata.expect("Failed to deserialize metadata");
}
}
+241 -81
@@ -1,96 +1,256 @@
use anyhow::Context;
use regex::Regex;
use revive_dt_common::iterators::EitherIter;
use revive_dt_common::types::{Mode, ModeOptimizerSetting, ModePipeline};
use serde::{Deserialize, Serialize};
use std::collections::HashSet;
use std::fmt::Display;
use std::str::FromStr;
use std::sync::LazyLock;

/// This represents a mode that has been parsed from test metadata.
///
/// Mode strings can take the following form (in pseudo-regex):
///
/// ```text
/// [YEILV][+-]? (M[0123sz])? <semver>?
/// ```
///
/// We can parse valid mode strings into [`ParsedMode`] using [`ParsedMode::from_str`].
#[derive(Clone, Debug, PartialEq, Eq, Hash, Deserialize, Serialize)]
#[serde(try_from = "String", into = "String")]
pub struct ParsedMode {
    pub pipeline: Option<ModePipeline>,
    pub optimize_flag: Option<bool>,
    pub optimize_setting: Option<ModeOptimizerSetting>,
    pub version: Option<semver::VersionReq>,
}
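Because of the `try_from`/`into` serde attributes, modes round-trip through plain JSON strings. A sketch (not part of the diff; assumes `serde_json` is available):

fn parsed_mode_serde_example() -> anyhow::Result<()> {
    let mode: ParsedMode = serde_json::from_str(r#""Y+ >=0.8.0""#)?;
    assert_eq!(serde_json::to_string(&mode)?, r#""Y+ >=0.8.0""#);
    Ok(())
}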
impl FromStr for ParsedMode {
    type Err = anyhow::Error;
    fn from_str(s: &str) -> Result<Self, Self::Err> {
        static REGEX: LazyLock<Regex> = LazyLock::new(|| {
            Regex::new(r"(?x)
                ^
                (?:(?P<pipeline>[YEILV])(?P<optimize_flag>[+-])?)? # Pipeline to use eg Y, E+, E-
                \s*
                (?P<optimize_setting>M[a-zA-Z0-9])?                # Optimize setting eg M0, Ms, Mz
                \s*
                (?P<version>[>=<]*\d+(?:\.\d+)*)?                  # Optional semver version eg >=0.8.0, 0.7, <0.8
                $
            ").unwrap()
        });

        let Some(caps) = REGEX.captures(s) else {
            anyhow::bail!("Cannot parse mode '{s}' from string");
        };

        let pipeline = match caps.name("pipeline") {
            Some(m) => Some(
                ModePipeline::from_str(m.as_str())
                    .context("Failed to parse mode pipeline from string")?,
            ),
            None => None,
        };
        let optimize_flag = caps.name("optimize_flag").map(|m| m.as_str() == "+");
        let optimize_setting = match caps.name("optimize_setting") {
            Some(m) => Some(
                ModeOptimizerSetting::from_str(m.as_str())
                    .context("Failed to parse optimizer setting from string")?,
            ),
            None => None,
        };
        let version = match caps.name("version") {
            Some(m) => Some(
                semver::VersionReq::parse(m.as_str())
                    .map_err(|e| {
                        anyhow::anyhow!(
                            "Cannot parse the version requirement '{}': {e}",
                            m.as_str()
                        )
                    })
                    .context("Failed to parse semver requirement from mode string")?,
            ),
            None => None,
        };

        Ok(ParsedMode {
            pipeline,
            optimize_flag,
            optimize_setting,
            version,
        })
    }
}
impl Display for ParsedMode {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let mut has_written = false;
if let Some(pipeline) = self.pipeline {
pipeline.fmt(f)?;
if let Some(optimize_flag) = self.optimize_flag {
f.write_str(if optimize_flag { "+" } else { "-" })?;
} }
has_written = true;
} }
        if let Some(optimize_setting) = self.optimize_setting {
if has_written {
f.write_str(" ")?;
}
optimize_setting.fmt(f)?;
has_written = true;
}
if let Some(version) = &self.version {
if has_written {
f.write_str(" ")?;
}
version.fmt(f)?;
}
Ok(())
} }
}
impl From<ParsedMode> for String {
    fn from(parsed_mode: ParsedMode) -> Self {
        parsed_mode.to_string()
    }
}

impl TryFrom<String> for ParsedMode {
    type Error = anyhow::Error;
    fn try_from(value: String) -> Result<Self, Self::Error> {
        ParsedMode::from_str(&value)
    }
}
impl ParsedMode {
/// This takes a [`ParsedMode`] and expands it into a list of [`Mode`]s that we should try.
pub fn to_modes(&self) -> impl Iterator<Item = Mode> {
let pipeline_iter = self.pipeline.as_ref().map_or_else(
|| EitherIter::A(ModePipeline::test_cases()),
|p| EitherIter::B(std::iter::once(*p)),
);
let optimize_flag_setting = self.optimize_flag.map(|flag| {
if flag {
ModeOptimizerSetting::M3
} else {
ModeOptimizerSetting::M0
}
});
let optimize_flag_iter = match optimize_flag_setting {
Some(setting) => EitherIter::A(std::iter::once(setting)),
None => EitherIter::B(ModeOptimizerSetting::test_cases()),
};
let optimize_settings_iter = self.optimize_setting.as_ref().map_or_else(
|| EitherIter::A(optimize_flag_iter),
|s| EitherIter::B(std::iter::once(*s)),
);
pipeline_iter.flat_map(move |pipeline| {
optimize_settings_iter
.clone()
.map(move |optimize_setting| Mode {
pipeline,
optimize_setting,
version: self.version.clone(),
})
})
}
/// Return a set of [`Mode`]s that correspond to the given [`ParsedMode`]s.
/// This avoids any duplicate entries.
pub fn many_to_modes<'a>(
parsed: impl Iterator<Item = &'a ParsedMode>,
) -> impl Iterator<Item = Mode> {
let modes: HashSet<_> = parsed.flat_map(|p| p.to_modes()).collect();
modes.into_iter()
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_parsed_mode_from_str() {
let strings = vec![
("Mz", "Mz"),
("Y", "Y"),
("Y+", "Y+"),
("Y-", "Y-"),
("E", "E"),
("E+", "E+"),
("E-", "E-"),
("Y M0", "Y M0"),
("Y M1", "Y M1"),
("Y M2", "Y M2"),
("Y M3", "Y M3"),
("Y Ms", "Y Ms"),
("Y Mz", "Y Mz"),
("E M0", "E M0"),
("E M1", "E M1"),
("E M2", "E M2"),
("E M3", "E M3"),
("E Ms", "E Ms"),
("E Mz", "E Mz"),
// When stringifying semver again, 0.8.0 becomes ^0.8.0 (same meaning)
("Y 0.8.0", "Y ^0.8.0"),
("E+ 0.8.0", "E+ ^0.8.0"),
("Y M3 >=0.8.0", "Y M3 >=0.8.0"),
("E Mz <0.7.0", "E Mz <0.7.0"),
// We can parse +- _and_ M1/M2 but the latter takes priority.
("Y+ M1 0.8.0", "Y+ M1 ^0.8.0"),
("E- M2 0.7.0", "E- M2 ^0.7.0"),
// We don't see this in the wild but it is parsed.
("<=0.8", "<=0.8"),
];
for (actual, expected) in strings {
let parsed = ParsedMode::from_str(actual)
.unwrap_or_else(|_| panic!("Failed to parse mode string '{actual}'"));
assert_eq!(
expected,
parsed.to_string(),
"Mode string '{actual}' did not parse to '{expected}': got '{parsed}'"
);
}
}
#[test]
fn test_parsed_mode_to_test_modes() {
let strings = vec![
("Mz", vec!["Y Mz", "E Mz"]),
("Y", vec!["Y M0", "Y M3"]),
("E", vec!["E M0", "E M3"]),
("Y+", vec!["Y M3"]),
("Y-", vec!["Y M0"]),
("Y <=0.8", vec!["Y M0 <=0.8", "Y M3 <=0.8"]),
(
"<=0.8",
vec!["Y M0 <=0.8", "Y M3 <=0.8", "E M0 <=0.8", "E M3 <=0.8"],
),
];
for (actual, expected) in strings {
let parsed = ParsedMode::from_str(actual)
.unwrap_or_else(|_| panic!("Failed to parse mode string '{actual}'"));
let expected_set: HashSet<_> = expected.into_iter().map(|s| s.to_owned()).collect();
let actual_set: HashSet<_> = parsed.to_modes().map(|m| m.to_string()).collect();
assert_eq!(
expected_set, actual_set,
"Mode string '{actual}' did not expand to '{expected_set:?}': got '{actual_set:?}'"
);
}
    }
}
+157
@@ -0,0 +1,157 @@
use std::collections::HashMap;
use alloy::eips::BlockNumberOrTag;
use alloy::json_abi::JsonAbi;
use alloy::primitives::{Address, BlockHash, BlockNumber, BlockTimestamp, ChainId, U256};
use alloy_primitives::TxHash;
use anyhow::Result;
use crate::metadata::{ContractIdent, ContractInstance};
/// A trait for the interface that nodes are required to implement in order to be used by the
/// resolution logic this crate implements to go from string calldata to bytes calldata.
pub trait ResolverApi {
/// Returns the ID of the chain that the node is on.
fn chain_id(&self) -> impl Future<Output = Result<ChainId>>;
/// Returns the gas price for the specified transaction.
fn transaction_gas_price(&self, tx_hash: &TxHash) -> impl Future<Output = Result<u128>>;
// TODO: This is currently a u128 due to Kitchensink needing more than 64 bits for its gas limit
// when we implement the changes to the gas we need to adjust this to be a u64.
/// Returns the gas limit of the specified block.
fn block_gas_limit(&self, number: BlockNumberOrTag) -> impl Future<Output = Result<u128>>;
/// Returns the coinbase of the specified block.
fn block_coinbase(&self, number: BlockNumberOrTag) -> impl Future<Output = Result<Address>>;
/// Returns the difficulty of the specified block.
fn block_difficulty(&self, number: BlockNumberOrTag) -> impl Future<Output = Result<U256>>;
/// Returns the base fee of the specified block.
fn block_base_fee(&self, number: BlockNumberOrTag) -> impl Future<Output = Result<u64>>;
/// Returns the hash of the specified block.
fn block_hash(&self, number: BlockNumberOrTag) -> impl Future<Output = Result<BlockHash>>;
    /// Returns the timestamp of the specified block.
fn block_timestamp(
&self,
number: BlockNumberOrTag,
) -> impl Future<Output = Result<BlockTimestamp>>;
/// Returns the number of the last block.
fn last_block_number(&self) -> impl Future<Output = Result<BlockNumber>>;
}
/// Contextual information required by the code that's performing the resolution.
#[derive(Clone, Copy, Debug, Default)]
pub struct ResolutionContext<'a> {
/// When provided the contracts provided here will be used for resolutions.
deployed_contracts: Option<&'a HashMap<ContractInstance, (ContractIdent, Address, JsonAbi)>>,
/// When provided the variables in here will be used for performing resolutions.
variables: Option<&'a HashMap<String, U256>>,
/// When provided this block number will be treated as the tip of the chain.
block_number: Option<&'a BlockNumber>,
/// When provided the resolver will use this transaction hash for all of its resolutions.
transaction_hash: Option<&'a TxHash>,
}
impl<'a> ResolutionContext<'a> {
pub fn new() -> Self {
Default::default()
}
pub fn new_from_parts(
deployed_contracts: impl Into<
Option<&'a HashMap<ContractInstance, (ContractIdent, Address, JsonAbi)>>,
>,
variables: impl Into<Option<&'a HashMap<String, U256>>>,
block_number: impl Into<Option<&'a BlockNumber>>,
transaction_hash: impl Into<Option<&'a TxHash>>,
) -> Self {
Self {
deployed_contracts: deployed_contracts.into(),
variables: variables.into(),
block_number: block_number.into(),
transaction_hash: transaction_hash.into(),
}
}
pub fn with_deployed_contracts(
mut self,
deployed_contracts: impl Into<
Option<&'a HashMap<ContractInstance, (ContractIdent, Address, JsonAbi)>>,
>,
) -> Self {
self.deployed_contracts = deployed_contracts.into();
self
}
pub fn with_variables(
mut self,
variables: impl Into<Option<&'a HashMap<String, U256>>>,
) -> Self {
self.variables = variables.into();
self
}
pub fn with_block_number(mut self, block_number: impl Into<Option<&'a BlockNumber>>) -> Self {
self.block_number = block_number.into();
self
}
pub fn with_transaction_hash(
mut self,
transaction_hash: impl Into<Option<&'a TxHash>>,
) -> Self {
self.transaction_hash = transaction_hash.into();
self
}
pub fn resolve_block_number(&self, number: BlockNumberOrTag) -> BlockNumberOrTag {
match self.block_number {
Some(block_number) => match number {
BlockNumberOrTag::Latest => BlockNumberOrTag::Number(*block_number),
n @ (BlockNumberOrTag::Finalized
| BlockNumberOrTag::Safe
| BlockNumberOrTag::Earliest
| BlockNumberOrTag::Pending
| BlockNumberOrTag::Number(_)) => n,
},
None => number,
}
}
pub fn deployed_contract(
&self,
instance: &ContractInstance,
) -> Option<&(ContractIdent, Address, JsonAbi)> {
self.deployed_contracts
.and_then(|deployed_contracts| deployed_contracts.get(instance))
}
pub fn deployed_contract_address(&self, instance: &ContractInstance) -> Option<&Address> {
self.deployed_contract(instance).map(|(_, a, _)| a)
}
pub fn deployed_contract_abi(&self, instance: &ContractInstance) -> Option<&JsonAbi> {
self.deployed_contract(instance).map(|(_, _, a)| a)
}
pub fn variable(&self, name: impl AsRef<str>) -> Option<&U256> {
self.variables
.and_then(|variables| variables.get(name.as_ref()))
}
pub fn tip_block_number(&self) -> Option<&'a BlockNumber> {
self.block_number
}
pub fn transaction_hash(&self) -> Option<&'a TxHash> {
self.transaction_hash
}
}
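A sketch (not part of the diff; names and values are illustrative) of the builder-style setters above, and of how `Latest` is pinned to the configured tip block:

fn resolution_context_example() {
    let variables: HashMap<String, U256> = HashMap::from([("$VALUE".to_owned(), U256::from(1))]);
    let block_number: BlockNumber = 42;

    let context = ResolutionContext::new()
        .with_variables(&variables)
        .with_block_number(&block_number);

    assert_eq!(context.variable("$VALUE"), Some(&U256::from(1)));
    // `Latest` resolves to the pinned tip; explicit numbers pass through untouched.
    assert!(matches!(
        context.resolve_block_number(BlockNumberOrTag::Latest),
        BlockNumberOrTag::Number(42)
    ));
}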
+3 -3
@@ -11,6 +11,6 @@ rust-version.workspace = true
[dependencies]
alloy = { workspace = true }
anyhow = { workspace = true }

[lints]
workspace = true
+21 -10
@@ -1,12 +1,9 @@
//! This crate implements all node interactions.

use alloy::primitives::{Address, StorageKey, U256};
use alloy::rpc::types::trace::geth::{DiffMode, GethDebugTracingOptions, GethTrace};
use alloy::rpc::types::{EIP1186AccountProofResponse, TransactionReceipt, TransactionRequest};
use anyhow::Result;
/// An interface for all interactions with Ethereum compatible nodes.
pub trait EthereumNode {
@@ -14,11 +11,25 @@ pub trait EthereumNode {
    fn execute_transaction(
        &self,
        transaction: TransactionRequest,
    ) -> impl Future<Output = Result<TransactionReceipt>>;

    /// Trace the transaction in the [TransactionReceipt] and return a [GethTrace].
    fn trace_transaction(
        &self,
        receipt: &TransactionReceipt,
        trace_options: GethDebugTracingOptions,
    ) -> impl Future<Output = Result<GethTrace>>;

    /// Returns the state diff of the transaction hash in the [TransactionReceipt].
    fn state_diff(&self, receipt: &TransactionReceipt) -> impl Future<Output = Result<DiffMode>>;

    /// Returns the balance of the provided [`Address`].
    fn balance_of(&self, address: Address) -> impl Future<Output = Result<U256>>;

    /// Returns the latest storage proof of the provided [`Address`].
    fn latest_state_proof(
        &self,
        address: Address,
        keys: Vec<StorageKey>,
    ) -> impl Future<Output = Result<EIP1186AccountProofResponse>>;
}
@@ -1,79 +0,0 @@
//! The alloy crate __requires__ a tokio runtime.
//! We contain any async rust right here.
use once_cell::sync::Lazy;
use std::pin::Pin;
use std::sync::Mutex;
use std::thread;
use tokio::runtime::Runtime;
use tokio::spawn;
use tokio::sync::{mpsc, oneshot};
use tokio::task::JoinError;
use crate::trace::Trace;
use crate::transaction::Transaction;
pub(crate) static TO_TOKIO: Lazy<Mutex<TokioRuntime>> =
Lazy::new(|| Mutex::new(TokioRuntime::spawn()));
/// Common interface for executing async node interactions from a non-async context.
#[allow(clippy::type_complexity)]
pub(crate) trait AsyncNodeInteraction: Send + 'static {
type Output: Send;
    /// Returns the task and the output sender.
fn split(
self,
) -> (
Pin<Box<dyn Future<Output = Self::Output> + Send>>,
oneshot::Sender<Self::Output>,
);
}
pub(crate) struct TokioRuntime {
pub(crate) transaction_sender: mpsc::Sender<Transaction>,
pub(crate) trace_sender: mpsc::Sender<Trace>,
}
impl TokioRuntime {
fn spawn() -> Self {
let rt = Runtime::new().expect("should be able to create the tokio runtime");
let (transaction_sender, transaction_receiver) = mpsc::channel::<Transaction>(1024);
let (trace_sender, trace_receiver) = mpsc::channel::<Trace>(1024);
thread::spawn(move || {
rt.block_on(async move {
let transaction_task = spawn(interaction::<Transaction>(transaction_receiver));
let trace_task = spawn(interaction::<Trace>(trace_receiver));
if let Err(error) = transaction_task.await {
log::error!("tokio transaction task failed: {error}");
}
if let Err(error) = trace_task.await {
log::error!("tokio trace transaction task failed: {error}");
}
});
});
Self {
transaction_sender,
trace_sender,
}
}
}
async fn interaction<T>(mut receiver: mpsc::Receiver<T>) -> Result<(), JoinError>
where
T: AsyncNodeInteraction,
{
while let Some(task) = receiver.recv().await {
spawn(async move {
let (task, sender) = task.split();
sender
.send(task.await)
.unwrap_or_else(|_| panic!("failed to send task output"));
});
}
Ok(())
}
-43
@@ -1,43 +0,0 @@
//! Trace transactions in a sync context.
use std::pin::Pin;
use alloy::rpc::types::trace::geth::GethTrace;
use tokio::sync::oneshot;
use crate::TO_TOKIO;
use crate::tokio_runtime::AsyncNodeInteraction;
pub type Task = Pin<Box<dyn Future<Output = anyhow::Result<GethTrace>> + Send>>;
pub(crate) struct Trace {
sender: oneshot::Sender<anyhow::Result<GethTrace>>,
task: Task,
}
impl AsyncNodeInteraction for Trace {
type Output = anyhow::Result<GethTrace>;
fn split(
self,
) -> (
std::pin::Pin<Box<dyn Future<Output = Self::Output> + Send>>,
oneshot::Sender<Self::Output>,
) {
(self.task, self.sender)
}
}
/// Execute some [Task] that return a [GethTrace] result.
pub fn trace_transaction(task: Task) -> anyhow::Result<GethTrace> {
let task_sender = TO_TOKIO.lock().unwrap().trace_sender.clone();
let (sender, receiver) = oneshot::channel();
task_sender
.blocking_send(Trace { task, sender })
.expect("we are not calling this from an async context");
receiver
.blocking_recv()
.unwrap_or_else(|error| anyhow::bail!("no trace received: {error}"))
}
@@ -1,46 +0,0 @@
//! Execute transactions in a sync context.
use std::pin::Pin;
use alloy::rpc::types::TransactionReceipt;
use tokio::sync::oneshot;
use crate::TO_TOKIO;
use crate::tokio_runtime::AsyncNodeInteraction;
pub type Task = Pin<Box<dyn Future<Output = anyhow::Result<TransactionReceipt>> + Send>>;
pub(crate) struct Transaction {
receipt_sender: oneshot::Sender<anyhow::Result<TransactionReceipt>>,
task: Task,
}
impl AsyncNodeInteraction for Transaction {
type Output = anyhow::Result<TransactionReceipt>;
fn split(
self,
) -> (
Pin<Box<dyn Future<Output = Self::Output> + Send>>,
oneshot::Sender<Self::Output>,
) {
(self.task, self.receipt_sender)
}
}
/// Execute some [Task] that returns a [TransactionReceipt].
pub fn execute_transaction(task: Task) -> anyhow::Result<TransactionReceipt> {
let request_sender = TO_TOKIO.lock().unwrap().transaction_sender.clone();
let (receipt_sender, receipt_receiver) = oneshot::channel();
request_sender
.blocking_send(Transaction {
receipt_sender,
task,
})
.expect("we are not calling this from an async context");
receipt_receiver
.blocking_recv()
.unwrap_or_else(|error| anyhow::bail!("no receipt received: {error}"))
}
+11 -2
@@ -11,11 +11,16 @@ rust-version.workspace = true
[dependencies]
anyhow = { workspace = true }
alloy = { workspace = true }
tracing = { workspace = true }
tokio = { workspace = true }

revive-common = { workspace = true }
revive-dt-common = { workspace = true }
revive-dt-config = { workspace = true }
revive-dt-format = { workspace = true }
revive-dt-node-interaction = { workspace = true }

serde = { workspace = true }
serde_json = { workspace = true }
sp-core = { workspace = true }
@@ -23,3 +28,7 @@ sp-runtime = { workspace = true }
[dev-dependencies]
temp-dir = { workspace = true }
tokio = { workspace = true }

[lints]
workspace = true
+78
@@ -0,0 +1,78 @@
use alloy::{
network::{Network, TransactionBuilder},
providers::{
Provider, SendableTx,
fillers::{GasFiller, TxFiller},
},
transports::TransportResult,
};
#[derive(Clone, Debug)]
pub struct FallbackGasFiller {
inner: GasFiller,
default_gas_limit: u64,
default_max_fee_per_gas: u128,
default_priority_fee: u128,
}
impl FallbackGasFiller {
pub fn new(
default_gas_limit: u64,
default_max_fee_per_gas: u128,
default_priority_fee: u128,
) -> Self {
Self {
inner: GasFiller,
default_gas_limit,
default_max_fee_per_gas,
default_priority_fee,
}
}
}
impl<N> TxFiller<N> for FallbackGasFiller
where
N: Network,
{
type Fillable = Option<<GasFiller as TxFiller<N>>::Fillable>;
fn status(
&self,
tx: &<N as Network>::TransactionRequest,
) -> alloy::providers::fillers::FillerControlFlow {
<GasFiller as TxFiller<N>>::status(&self.inner, tx)
}
fn fill_sync(&self, _: &mut alloy::providers::SendableTx<N>) {}
async fn prepare<P: Provider<N>>(
&self,
provider: &P,
tx: &<N as Network>::TransactionRequest,
) -> TransportResult<Self::Fillable> {
        // Try to fetch the GasFiller's "fillable" values (gas_price, base_fee, estimate_gas, ...).
        // If that errors (i.e. the tx would revert under eth_estimateGas), swallow the error.
match self.inner.prepare(provider, tx).await {
Ok(fill) => Ok(Some(fill)),
Err(_) => Ok(None),
}
}
async fn fill(
&self,
fillable: Self::Fillable,
mut tx: alloy::providers::SendableTx<N>,
) -> TransportResult<SendableTx<N>> {
if let Some(fill) = fillable {
// our inner GasFiller succeeded — use it
self.inner.fill(fill, tx).await
} else {
if let Some(builder) = tx.as_mut_builder() {
builder.set_gas_limit(self.default_gas_limit);
builder.set_max_fee_per_gas(self.default_max_fee_per_gas);
builder.set_max_priority_fee_per_gas(self.default_priority_fee);
}
Ok(tx)
}
}
}
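A sketch (not part of the diff; the IPC path is illustrative and `Provider`/`ProviderBuilder` come from `alloy::providers`) of how the filler slots into a provider stack, mirroring the geth node code further down:

async fn provider_with_fallback_gas() -> anyhow::Result<()> {
    use alloy::providers::{Provider, ProviderBuilder};

    let provider = ProviderBuilder::new()
        .disable_recommended_fillers()
        // The defaults are used only when eth_estimateGas would fail.
        .filler(FallbackGasFiller::new(25_000_000, 1_000_000_000, 1_000_000_000))
        .connect("/tmp/geth.ipc")
        .await?;
    let _chain_id = provider.get_chain_id().await?;
    Ok(())
}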
+5
@@ -0,0 +1,5 @@
/// This constant defines how much Wei accounts are pre-seeded with in genesis.
///
/// Note: After changing this number, check that the tests for kitchensink work as we encountered
/// some issues with different values of the initial balance on Kitchensink.
pub const INITIAL_BALANCE: u128 = 10u128.pow(37);
+606 -102
@@ -1,29 +1,49 @@
//! The go-ethereum node implementation.

use std::{
    fs::{File, OpenOptions, create_dir_all, remove_dir_all},
    io::{BufRead, BufReader, Read, Write},
    ops::ControlFlow,
    path::PathBuf,
    process::{Child, Command, Stdio},
    sync::{
        Arc,
        atomic::{AtomicU32, Ordering},
    },
    time::{Duration, Instant},
};

use alloy::{
    eips::BlockNumberOrTag,
    genesis::{Genesis, GenesisAccount},
    network::{Ethereum, EthereumWallet, NetworkWallet},
    primitives::{
        Address, BlockHash, BlockNumber, BlockTimestamp, FixedBytes, StorageKey, TxHash, U256,
    },
    providers::{
        Provider, ProviderBuilder,
        ext::DebugApi,
        fillers::{CachedNonceManager, ChainIdFiller, FillProvider, NonceFiller, TxFiller},
    },
    rpc::types::{
        EIP1186AccountProofResponse, TransactionReceipt, TransactionRequest,
        trace::geth::{DiffMode, GethDebugTracingOptions, PreStateConfig, PreStateFrame},
    },
    signers::local::PrivateKeySigner,
};
use anyhow::Context;
use revive_common::EVMVersion;
use tracing::{Instrument, instrument};

use revive_dt_common::{
    fs::clear_directory,
    futures::{PollingWaitBehavior, poll},
};
use revive_dt_config::Arguments;
use revive_dt_format::traits::ResolverApi;
use revive_dt_node_interaction::EthereumNode;

use crate::{Node, common::FallbackGasFiller, constants::INITIAL_BALANCE};

static NODE_COUNT: AtomicU32 = AtomicU32::new(0);
@@ -35,21 +55,30 @@ static NODE_COUNT: AtomicU32 = AtomicU32::new(0);
///
/// Prunes the child process and the base directory on drop.
#[derive(Debug)]
#[allow(clippy::type_complexity)]
pub struct GethNode {
    connection_string: String,
    base_directory: PathBuf,
    data_directory: PathBuf,
    logs_directory: PathBuf,
    geth: PathBuf,
    id: u32,
    handle: Option<Child>,
    network_id: u64,
    start_timeout: u64,
    wallet: Arc<EthereumWallet>,
    nonce_manager: CachedNonceManager,
    chain_id_filler: ChainIdFiller,
    /// This vector stores [`File`] objects that we use for logging and want to flush when the
    /// node object is dropped. We do not store them in a structured fashion at the moment (in
    /// separate fields) as the logic that we need to apply to them is all the same regardless of
    /// what each belongs to; we just want to flush them on [`Drop`] of the node.
    logs_file_to_flush: Vec<File>,
}
impl GethNode {
    const BASE_DIRECTORY: &str = "geth";
    const DATA_DIRECTORY: &str = "data";
    const LOGS_DIRECTORY: &str = "logs";
    const IPC_FILE: &str = "geth.ipc";
    const GENESIS_JSON_FILE: &str = "genesis.json";
@@ -57,30 +86,70 @@ impl Instance {
    const READY_MARKER: &str = "IPC endpoint opened";
    const ERROR_MARKER: &str = "Fatal:";

    const GETH_STDOUT_LOG_FILE_NAME: &str = "node_stdout.log";
    const GETH_STDERR_LOG_FILE_NAME: &str = "node_stderr.log";

    const TRANSACTION_INDEXING_ERROR: &str = "transaction indexing is in progress";
    const TRANSACTION_TRACING_ERROR: &str = "historical state not available in path scheme yet";

    const RECEIPT_POLLING_DURATION: Duration = Duration::from_secs(5 * 60);
    const TRACE_POLLING_DURATION: Duration = Duration::from_secs(60);

    /// Create the node directory and call `geth init` to configure the genesis.
    #[instrument(level = "info", skip_all, fields(geth_node_id = self.id))]
    fn init(&mut self, genesis: String) -> anyhow::Result<&mut Self> {
        let _ = clear_directory(&self.base_directory);
        let _ = clear_directory(&self.logs_directory);
        create_dir_all(&self.base_directory)
            .context("Failed to create base directory for geth node")?;
        create_dir_all(&self.logs_directory)
            .context("Failed to create logs directory for geth node")?;

        let mut genesis = serde_json::from_str::<Genesis>(&genesis)
            .context("Failed to deserialize geth genesis JSON")?;
        for signer_address in
            <EthereumWallet as NetworkWallet<Ethereum>>::signer_addresses(&self.wallet)
        {
            // Note, the use of the entry API here means that we only modify the entries for any
            // account that is not in the `alloc` field of the genesis state.
            genesis
                .alloc
                .entry(signer_address)
                .or_insert(GenesisAccount::default().with_balance(U256::from(INITIAL_BALANCE)));
        }

        let genesis_path = self.base_directory.join(Self::GENESIS_JSON_FILE);
        serde_json::to_writer(
            File::create(&genesis_path).context("Failed to create geth genesis file")?,
            &genesis,
        )
        .context("Failed to serialize geth genesis JSON to file")?;

        let mut child = Command::new(&self.geth)
            .arg("--state.scheme")
            .arg("hash")
            .arg("init")
            .arg("--datadir")
            .arg(&self.data_directory)
            .arg(genesis_path)
            .stderr(Stdio::piped())
            .stdout(Stdio::null())
            .spawn()
            .context("Failed to spawn geth --init process")?;

        let mut stderr = String::new();
        child
            .stderr
            .take()
            .expect("should be piped")
            .read_to_string(&mut stderr)
            .context("Failed to read geth --init stderr")?;

        if !child
            .wait()
            .context("Failed waiting for geth --init process to finish")?
            .success()
        {
            anyhow::bail!("failed to initialize geth node #{:?}: {stderr}", &self.id);
        }
@@ -89,181 +158,517 @@ impl Instance {
    /// Spawn the go-ethereum node child process.
    ///
    /// [GethNode::init] must be called beforehand.
    #[instrument(level = "info", skip_all, fields(geth_node_id = self.id))]
    fn spawn_process(&mut self) -> anyhow::Result<&mut Self> {
        // This is the `OpenOptions` that we wish to use for all of the log files that we will be
        // opening in this method. We construct it this way to:
        // 1. Be consistent.
        // 2. Be less verbose and more DRY.
        // 3. Work around the fact that the builder pattern uses mutable references.
        let open_options = {
            let mut options = OpenOptions::new();
            options.create(true).truncate(true).write(true);
            options
        };

        let stdout_logs_file = open_options
            .clone()
            .open(self.geth_stdout_log_file_path())
            .context("Failed to open geth stdout logs file")?;
        let stderr_logs_file = open_options
            .open(self.geth_stderr_log_file_path())
            .context("Failed to open geth stderr logs file")?;

        self.handle = Command::new(&self.geth)
            .arg("--dev")
            .arg("--datadir")
            .arg(&self.data_directory)
            .arg("--ipcpath")
            .arg(&self.connection_string)
            .arg("--networkid")
            .arg(self.network_id.to_string())
            .arg("--nodiscover")
            .arg("--maxpeers")
            .arg("0")
            .arg("--txlookuplimit")
            .arg("0")
            .arg("--cache.blocklogs")
            .arg("512")
            .arg("--state.scheme")
            .arg("hash")
            .arg("--syncmode")
            .arg("full")
            .arg("--gcmode")
            .arg("archive")
            .stderr(
                stderr_logs_file
                    .try_clone()
                    .context("Failed to clone geth stderr log file handle")?,
            )
            .stdout(
                stdout_logs_file
                    .try_clone()
                    .context("Failed to clone geth stdout log file handle")?,
            )
            .spawn()
            .context("Failed to spawn geth node process")?
            .into();

        if let Err(error) = self.wait_ready() {
            tracing::error!(?error, "Failed to start geth, shutting down gracefully");
            self.shutdown()
                .context("Failed to gracefully shutdown after geth start error")?;
            return Err(error);
        }

        self.logs_file_to_flush
            .extend([stderr_logs_file, stdout_logs_file]);

        Ok(self)
    }
    /// Wait for the go-ethereum node child process to become ready.
    ///
    /// [GethNode::spawn_process] must be called beforehand.
#[instrument(level = "info", skip_all, fields(geth_node_id = self.id))]
    fn wait_ready(&mut self) -> anyhow::Result<&mut Self> {
        let start_time = Instant::now();

        let logs_file = OpenOptions::new()
            .read(true)
            .write(false)
            .append(false)
            .truncate(false)
            .open(self.geth_stderr_log_file_path())
            .context("Failed to open geth stderr logs file for readiness check")?;

        let maximum_wait_time = Duration::from_millis(self.start_timeout);
        let mut stderr = BufReader::new(logs_file).lines();
        let mut lines = vec![];
        loop {
            if let Some(Ok(line)) = stderr.next() {
                if line.contains(Self::ERROR_MARKER) {
                    anyhow::bail!("Failed to start geth {line}");
                }
                if line.contains(Self::READY_MARKER) {
                    return Ok(self);
                }
                lines.push(line);
            }
            if Instant::now().duration_since(start_time) > maximum_wait_time {
                anyhow::bail!(
                    "Timeout in starting geth: took longer than {}ms. stdout:\n\n{}\n",
                    self.start_timeout,
                    lines.join("\n")
                );
            }
        }
    }

    #[instrument(level = "info", skip_all, fields(geth_node_id = self.id))]
    fn geth_stdout_log_file_path(&self) -> PathBuf {
        self.logs_directory.join(Self::GETH_STDOUT_LOG_FILE_NAME)
    }

    #[instrument(level = "info", skip_all, fields(geth_node_id = self.id))]
    fn geth_stderr_log_file_path(&self) -> PathBuf {
        self.logs_directory.join(Self::GETH_STDERR_LOG_FILE_NAME)
    }

    async fn provider(
        &self,
    ) -> anyhow::Result<FillProvider<impl TxFiller<Ethereum>, impl Provider<Ethereum>, Ethereum>>
    {
        ProviderBuilder::new()
            .disable_recommended_fillers()
            .filler(FallbackGasFiller::new(
                25_000_000,
                1_000_000_000,
                1_000_000_000,
            ))
            .filler(self.chain_id_filler.clone())
            .filler(NonceFiller::new(self.nonce_manager.clone()))
            .wallet(self.wallet.clone())
            .connect(&self.connection_string)
            .await
            .map_err(Into::into)
    }
}
impl EthereumNode for GethNode {
    #[instrument(
        level = "info",
        skip_all,
        fields(geth_node_id = self.id, connection_string = self.connection_string),
        err,
    )]
    async fn execute_transaction(
        &self,
        transaction: TransactionRequest,
    ) -> anyhow::Result<alloy::rpc::types::TransactionReceipt> {
        let provider = self
            .provider()
            .await
            .context("Failed to create provider for transaction submission")?;
        let pending_transaction = provider
            .send_transaction(transaction)
            .await
            .inspect_err(
                |err| tracing::error!(%err, "Encountered an error when submitting the transaction"),
            )
            .context("Failed to submit transaction to geth node")?;
        let transaction_hash = *pending_transaction.tx_hash();

        // The following is a fix for the "transaction indexing is in progress" error that we used
        // to get. You can find more information on this in the following GH issue in geth:
        // https://github.com/ethereum/go-ethereum/issues/28877. To summarize what's going on:
        // before we can get the receipt of a transaction it needs to have been indexed by the
        // node's indexer, and just because the transaction has been confirmed doesn't mean that
        // it has been indexed. When we call alloy's `get_receipt` it checks whether the
        // transaction was confirmed. If it has been, it calls the `eth_getTransactionReceipt`
        // method, which _might_ return the above error if the tx has not been indexed yet. So we
        // need a retry mechanism that keeps trying to get the receipt until it eventually works,
        // but we only retry if the error we get back is the "transaction indexing is in progress"
        // error or if the receipt is None.
        //
        // Getting the transaction indexed and obtaining the receipt can take a long time,
        // especially when a lot of transactions are being submitted to the node. While we
        // initially only allowed for 60 seconds of waiting with a 1 second polling delay, we need
        // to allow for a larger wait time. Therefore, in here we allow for 5 minutes of waiting,
        // polling at a constant interval each time we attempt to get the receipt and find that
        // it's not yet available.
        let provider = Arc::new(provider);
        poll(
            Self::RECEIPT_POLLING_DURATION,
            PollingWaitBehavior::Constant(Duration::from_millis(200)),
            move || {
                let provider = provider.clone();
                async move {
                    match provider.get_transaction_receipt(transaction_hash).await {
                        Ok(Some(receipt)) => Ok(ControlFlow::Break(receipt)),
                        Ok(None) => Ok(ControlFlow::Continue(())),
                        Err(error) => {
                            let error_string = error.to_string();
                            match error_string.contains(Self::TRANSACTION_INDEXING_ERROR) {
                                true => Ok(ControlFlow::Continue(())),
                                false => Err(error.into()),
                            }
                        }
                    }
                }
            },
        )
        .instrument(tracing::info_span!(
            "Awaiting transaction receipt",
            ?transaction_hash
        ))
        .await
    }
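
    // For orientation: `poll` is a helper defined elsewhere in this workspace. Inferred
    // from the call sites above (an assumption, not the actual definition), its shape is
    // roughly:
    //
    //     async fn poll<T, F, Fut>(
    //         timeout: Duration,
    //         wait: PollingWaitBehavior,
    //         f: F,
    //     ) -> anyhow::Result<T>
    //     where
    //         F: FnMut() -> Fut,
    //         Fut: Future<Output = anyhow::Result<ControlFlow<T>>>;
    //
    // i.e. it re-invokes `f` with the given wait behavior until the closure breaks with a
    // value, returns an error, or the timeout elapses.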
    #[instrument(level = "info", skip_all, fields(geth_node_id = self.id))]
    async fn trace_transaction(
        &self,
        transaction: &TransactionReceipt,
        trace_options: GethDebugTracingOptions,
    ) -> anyhow::Result<alloy::rpc::types::trace::geth::GethTrace> {
        let provider = Arc::new(
            self.provider()
                .await
                .context("Failed to create provider for tracing")?,
        );
        poll(
            Self::TRACE_POLLING_DURATION,
            PollingWaitBehavior::Constant(Duration::from_millis(200)),
            move || {
                let provider = provider.clone();
                let trace_options = trace_options.clone();
                async move {
                    match provider
                        .debug_trace_transaction(transaction.transaction_hash, trace_options)
                        .await
                    {
                        Ok(trace) => Ok(ControlFlow::Break(trace)),
                        Err(error) => {
                            let error_string = error.to_string();
                            match error_string.contains(Self::TRANSACTION_TRACING_ERROR) {
                                true => Ok(ControlFlow::Continue(())),
                                false => Err(error.into()),
                            }
                        }
                    }
                }
            },
        )
        .await
    }
    #[instrument(level = "info", skip_all, fields(geth_node_id = self.id))]
    async fn state_diff(&self, transaction: &TransactionReceipt) -> anyhow::Result<DiffMode> {
        let trace_options = GethDebugTracingOptions::prestate_tracer(PreStateConfig {
            diff_mode: Some(true),
            disable_code: None,
            disable_storage: None,
        });
        match self
            .trace_transaction(transaction, trace_options)
            .await
            .context("Failed to trace transaction for prestate diff")?
            .try_into_pre_state_frame()
            .context("Failed to convert trace into pre-state frame")?
        {
            PreStateFrame::Diff(diff) => Ok(diff),
            _ => anyhow::bail!("expected a diff mode trace"),
        }
    }
#[instrument(level = "info", skip_all, fields(geth_node_id = self.id))]
async fn balance_of(&self, address: Address) -> anyhow::Result<U256> {
self.provider()
.await
.context("Failed to get the Geth provider")?
.get_balance(address)
.await
.map_err(Into::into)
}
#[instrument(level = "info", skip_all, fields(geth_node_id = self.id))]
async fn latest_state_proof(
&self,
address: Address,
keys: Vec<StorageKey>,
) -> anyhow::Result<EIP1186AccountProofResponse> {
self.provider()
.await
.context("Failed to get the Geth provider")?
.get_proof(address, keys)
.latest()
.await
.map_err(Into::into)
}
}

impl ResolverApi for GethNode {
#[instrument(level = "info", skip_all, fields(geth_node_id = self.id))]
async fn chain_id(&self) -> anyhow::Result<alloy::primitives::ChainId> {
self.provider()
.await
.context("Failed to get the Geth provider")?
.get_chain_id()
.await
.map_err(Into::into)
}
#[instrument(level = "info", skip_all, fields(geth_node_id = self.id))]
async fn transaction_gas_price(&self, tx_hash: &TxHash) -> anyhow::Result<u128> {
self.provider()
.await
.context("Failed to get the Geth provider")?
.get_transaction_receipt(*tx_hash)
.await?
.context("Failed to get the transaction receipt")
.map(|receipt| receipt.effective_gas_price)
}
#[instrument(level = "info", skip_all, fields(geth_node_id = self.id))]
async fn block_gas_limit(&self, number: BlockNumberOrTag) -> anyhow::Result<u128> {
self.provider()
.await
.context("Failed to get the Geth provider")?
.get_block_by_number(number)
.await
.context("Failed to get the geth block")?
.context("Failed to get the Geth block, perhaps there are no blocks?")
.map(|block| block.header.gas_limit as _)
}
#[instrument(level = "info", skip_all, fields(geth_node_id = self.id))]
async fn block_coinbase(&self, number: BlockNumberOrTag) -> anyhow::Result<Address> {
self.provider()
.await
.context("Failed to get the Geth provider")?
.get_block_by_number(number)
.await
.context("Failed to get the geth block")?
.context("Failed to get the Geth block, perhaps there are no blocks?")
.map(|block| block.header.beneficiary)
}
#[instrument(level = "info", skip_all, fields(geth_node_id = self.id))]
async fn block_difficulty(&self, number: BlockNumberOrTag) -> anyhow::Result<U256> {
self.provider()
.await
.context("Failed to get the Geth provider")?
.get_block_by_number(number)
.await
.context("Failed to get the geth block")?
.context("Failed to get the Geth block, perhaps there are no blocks?")
.map(|block| U256::from_be_bytes(block.header.mix_hash.0))
}
#[instrument(level = "info", skip_all, fields(geth_node_id = self.id))]
async fn block_base_fee(&self, number: BlockNumberOrTag) -> anyhow::Result<u64> {
self.provider()
.await
.context("Failed to get the Geth provider")?
.get_block_by_number(number)
.await
.context("Failed to get the geth block")?
.context("Failed to get the Geth block, perhaps there are no blocks?")
.and_then(|block| {
block
.header
.base_fee_per_gas
.context("Failed to get the base fee per gas")
})
}
#[instrument(level = "info", skip_all, fields(geth_node_id = self.id))]
async fn block_hash(&self, number: BlockNumberOrTag) -> anyhow::Result<BlockHash> {
self.provider()
.await
.context("Failed to get the Geth provider")?
.get_block_by_number(number)
.await
.context("Failed to get the geth block")?
.context("Failed to get the Geth block, perhaps there are no blocks?")
.map(|block| block.header.hash)
}
#[instrument(level = "info", skip_all, fields(geth_node_id = self.id))]
async fn block_timestamp(&self, number: BlockNumberOrTag) -> anyhow::Result<BlockTimestamp> {
self.provider()
.await
.context("Failed to get the Geth provider")?
.get_block_by_number(number)
.await
.context("Failed to get the geth block")?
.context("Failed to get the Geth block, perhaps there are no blocks?")
.map(|block| block.header.timestamp)
}
#[instrument(level = "info", skip_all, fields(geth_node_id = self.id))]
async fn last_block_number(&self) -> anyhow::Result<BlockNumber> {
self.provider()
.await
.context("Failed to get the Geth provider")?
.get_block_number()
.await
.map_err(Into::into)
}
}
impl Node for GethNode {
    fn new(config: &Arguments) -> Self {
        let geth_directory = config.directory().join(Self::BASE_DIRECTORY);
        let id = NODE_COUNT.fetch_add(1, Ordering::SeqCst);
        let base_directory = geth_directory.join(id.to_string());
        let mut wallet = config.wallet();
        for signer in (1..=config.private_keys_to_add)
            .map(|id| U256::from(id))
            .map(|id| id.to_be_bytes::<32>())
            .map(|id| PrivateKeySigner::from_bytes(&FixedBytes(id)).unwrap())
        {
            wallet.register_signer(signer);
        }
        Self {
            connection_string: base_directory.join(Self::IPC_FILE).display().to_string(),
            data_directory: base_directory.join(Self::DATA_DIRECTORY),
            logs_directory: base_directory.join(Self::LOGS_DIRECTORY),
            base_directory,
            geth: config.geth.clone(),
            id,
            handle: None,
            network_id: config.network_id,
            start_timeout: config.geth_start_timeout,
            wallet: Arc::new(wallet),
            chain_id_filler: Default::default(),
            nonce_manager: Default::default(),
            // We know we only need to store 2 files, so we can specify that when creating the
            // vector: the stdout and stderr of the geth node.
            logs_file_to_flush: Vec::with_capacity(2),
        }
    }
#[instrument(level = "info", skip_all, fields(geth_node_id = self.id))]
fn id(&self) -> usize {
self.id as _
}
#[instrument(level = "info", skip_all, fields(geth_node_id = self.id))]
    fn connection_string(&self) -> String {
        self.connection_string.clone()
    }
    #[instrument(level = "info", skip_all, fields(geth_node_id = self.id))]
    fn shutdown(&mut self) -> anyhow::Result<()> {
        // Terminate the process in a graceful manner to allow for the output to be flushed.
        if let Some(mut child) = self.handle.take() {
            child
                .kill()
                .map_err(|error| anyhow::anyhow!("Failed to kill the geth process: {error:?}"))?;
        }
        // Flush the files that we're using for keeping the logs before shutdown.
        for file in self.logs_file_to_flush.iter_mut() {
            file.flush()?
        }
        // Remove the node's database so that subsequent runs do not run on the same database. We
        // ignore the error in case the directory didn't exist in the first place and there's
        // nothing to be deleted.
        let _ = remove_dir_all(self.base_directory.join(Self::DATA_DIRECTORY));
        Ok(())
    }
#[instrument(level = "info", skip_all, fields(geth_node_id = self.id))]
    fn spawn(&mut self, genesis: String) -> anyhow::Result<()> {
        self.init(genesis)?.spawn_process()?;
        Ok(())
    }
#[instrument(level = "info", skip_all, fields(geth_node_id = self.id))]
    fn version(&self) -> anyhow::Result<String> {
        let output = Command::new(&self.geth)
            .arg("--version")
            .stdin(Stdio::null())
            .stdout(Stdio::piped())
            .stderr(Stdio::null())
            .spawn()
            .context("Failed to spawn geth --version process")?
            .wait_with_output()
            .context("Failed to wait for geth --version output")?
            .stdout;
        Ok(String::from_utf8_lossy(&output).into())
    }
fn matches_target(targets: Option<&[String]>) -> bool {
match targets {
None => true,
Some(targets) => targets.iter().any(|str| str.as_str() == "evm"),
}
}
fn evm_version() -> EVMVersion {
EVMVersion::Cancun
}
}
impl Drop for GethNode {
    #[instrument(level = "info", skip_all, fields(geth_node_id = self.id))]
    fn drop(&mut self) {
        self.shutdown().expect("Failed to shutdown")
    }
}
#[cfg(test)]
mod tests {
    use revive_dt_config::Arguments;
    use temp_dir::TempDir;

    use crate::{GENESIS_JSON, Node};

    use super::*;

    fn test_config() -> (Arguments, TempDir) {
        let mut config = Arguments::default();
@@ -273,26 +678,125 @@ mod tests {
        (config, temp_dir)
    }
fn new_node() -> (GethNode, TempDir) {
let (args, temp_dir) = test_config();
let mut node = GethNode::new(&args);
node.init(GENESIS_JSON.to_owned())
.expect("Failed to initialize the node")
.spawn_process()
.expect("Failed to spawn the node process");
(node, temp_dir)
}
    #[test]
    fn init_works() {
        GethNode::new(&test_config().0)
            .init(GENESIS_JSON.to_string())
            .unwrap();
    }

    #[test]
    fn spawn_works() {
        GethNode::new(&test_config().0)
            .spawn(GENESIS_JSON.to_string())
            .unwrap();
    }

    #[test]
    fn version_works() {
        let version = GethNode::new(&test_config().0).version().unwrap();
        assert!(
            version.starts_with("geth version"),
            "expected version string, got: '{version}'"
        );
    }
#[tokio::test]
async fn can_get_chain_id_from_node() {
// Arrange
let (node, _temp_dir) = new_node();
// Act
let chain_id = node.chain_id().await;
// Assert
let chain_id = chain_id.expect("Failed to get the chain id");
assert_eq!(chain_id, 420_420_420);
}
#[tokio::test]
async fn can_get_gas_limit_from_node() {
// Arrange
let (node, _temp_dir) = new_node();
// Act
let gas_limit = node.block_gas_limit(BlockNumberOrTag::Latest).await;
// Assert
let gas_limit = gas_limit.expect("Failed to get the gas limit");
assert_eq!(gas_limit, u32::MAX as u128)
}
#[tokio::test]
async fn can_get_coinbase_from_node() {
// Arrange
let (node, _temp_dir) = new_node();
// Act
let coinbase = node.block_coinbase(BlockNumberOrTag::Latest).await;
// Assert
let coinbase = coinbase.expect("Failed to get the coinbase");
assert_eq!(coinbase, Address::new([0xFF; 20]))
}
#[tokio::test]
async fn can_get_block_difficulty_from_node() {
// Arrange
let (node, _temp_dir) = new_node();
// Act
let block_difficulty = node.block_difficulty(BlockNumberOrTag::Latest).await;
// Assert
let block_difficulty = block_difficulty.expect("Failed to get the block difficulty");
assert_eq!(block_difficulty, U256::ZERO)
}
#[tokio::test]
async fn can_get_block_hash_from_node() {
// Arrange
let (node, _temp_dir) = new_node();
// Act
let block_hash = node.block_hash(BlockNumberOrTag::Latest).await;
// Assert
let _ = block_hash.expect("Failed to get the block hash");
}
#[tokio::test]
async fn can_get_block_timestamp_from_node() {
// Arrange
let (node, _temp_dir) = new_node();
// Act
let block_timestamp = node.block_timestamp(BlockNumberOrTag::Latest).await;
// Assert
let _ = block_timestamp.expect("Failed to get the block timestamp");
}
#[tokio::test]
async fn can_get_block_number_from_node() {
// Arrange
let (node, _temp_dir) = new_node();
// Act
let block_number = node.last_block_number().await;
// Assert
let block_number = block_number.expect("Failed to get the block number");
assert_eq!(block_number, 0)
}
}
File diff suppressed because it is too large
+14 -1
@@ -1,8 +1,11 @@
//! This crate implements the testing nodes.

use revive_common::EVMVersion;
use revive_dt_config::Arguments;
use revive_dt_node_interaction::EthereumNode;

pub mod common;
pub mod constants;
pub mod geth;
pub mod kitchensink;
pub mod pool;
@@ -15,6 +18,9 @@ pub trait Node: EthereumNode {
    /// Create a new uninitialized instance.
    fn new(config: &Arguments) -> Self;

    /// Returns the identifier of the node.
    fn id(&self) -> usize;

    /// Spawns a node configured according to the genesis json.
    ///
    /// Blocking until it's ready to accept transactions.
@@ -23,11 +29,18 @@ pub trait Node: EthereumNode {
    /// Prune the node instance and related data.
    ///
    /// Blocking until it's completely stopped.
    fn shutdown(&mut self) -> anyhow::Result<()>;

    /// Returns the node's connection string.
    fn connection_string(&self) -> String;

    /// Returns the node version.
    fn version(&self) -> anyhow::Result<String>;

    /// Given a list of targets from the metadata file, this function determines whether the
    /// metadata file can be run on this node or not.
    fn matches_target(targets: Option<&[String]>) -> bool;

    /// Returns the EVM version of the node.
    fn evm_version() -> EVMVersion;
}
+20 -6
@@ -1,13 +1,15 @@
//! This crate implements concurrent handling of testing nodes.

use std::{
    sync::atomic::{AtomicUsize, Ordering},
    thread,
};

use revive_dt_common::cached_fs::read_to_string;

use anyhow::Context;
use revive_dt_config::Arguments;
use tracing::info;

use crate::Node;
@@ -24,7 +26,7 @@ where
{
    /// Create a new Pool. This will start as many nodes as configured in `config`.
    pub fn new(config: &Arguments) -> anyhow::Result<Self> {
        let nodes = config.number_of_nodes;
        let genesis = read_to_string(&config.genesis_file).context(format!(
            "can not read genesis file: {}",
            config.genesis_file.display()
@@ -42,8 +44,10 @@ where
            nodes.push(
                handle
                    .join()
                    .map_err(|error| anyhow::anyhow!("failed to spawn node: {:?}", error))
                    .context("Failed to join node spawn thread")?
                    .map_err(|error| anyhow::anyhow!("node failed to spawn: {error}"))
                    .context("Node failed to spawn")?,
            );
        }
@@ -62,7 +66,17 @@ where
fn spawn_node<T: Node + Send>(args: &Arguments, genesis: String) -> anyhow::Result<T> {
    let mut node = T::new(args);
    info!(
        id = node.id(),
        connection_string = node.connection_string(),
        "Spawning node"
    );
    node.spawn(genesis)
        .context("Failed to spawn node process")?;
    info!(
        id = node.id(),
        connection_string = node.connection_string(),
        "Spawned node"
    );
    Ok(node)
}
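
// A minimal usage sketch, assuming the `GethNode` type from this workspace's node crate
// (the turbofish form is an assumption about `Pool`'s generics):
//
//     let pool = Pool::<GethNode>::new(&args)?;
//     // `new` blocks until all `args.number_of_nodes` nodes have been spawned.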
+11 -2
@@ -8,12 +8,21 @@ repository.workspace = true
rust-version.workspace = true

[dependencies]
revive-dt-common = { workspace = true }
revive-dt-config = { workspace = true }
revive-dt-format = { workspace = true }
revive-dt-compiler = { workspace = true }
alloy-primitives = { workspace = true }
anyhow = { workspace = true }
paste = { workspace = true }
indexmap = { workspace = true, features = ["serde"] }
semver = { workspace = true }
serde = { workspace = true }
serde_json = { workspace = true }
serde_with = { workspace = true }
tokio = { workspace = true }
tracing = { workspace = true }

[lints]
workspace = true
+545
@@ -0,0 +1,545 @@
//! Implementation of the report aggregator task which consumes the events sent by the various
//! reporters and combines them into a single unified report.
use std::{
collections::{BTreeMap, BTreeSet, HashMap, HashSet},
fs::OpenOptions,
path::PathBuf,
time::{SystemTime, UNIX_EPOCH},
};
use alloy_primitives::Address;
use anyhow::{Context as _, Result};
use indexmap::IndexMap;
use revive_dt_compiler::{CompilerInput, CompilerOutput, Mode};
use revive_dt_config::{Arguments, TestingPlatform};
use revive_dt_format::{case::CaseIdx, corpus::Corpus, metadata::ContractInstance};
use semver::Version;
use serde::Serialize;
use serde_with::{DisplayFromStr, serde_as};
use tokio::sync::{
broadcast::{Sender, channel},
mpsc::{UnboundedReceiver, UnboundedSender, unbounded_channel},
};
use tracing::debug;
use crate::*;
pub struct ReportAggregator {
/* Internal Report State */
report: Report,
remaining_cases: HashMap<MetadataFilePath, HashMap<Mode, HashSet<CaseIdx>>>,
/* Channels */
runner_tx: Option<UnboundedSender<RunnerEvent>>,
runner_rx: UnboundedReceiver<RunnerEvent>,
listener_tx: Sender<ReporterEvent>,
}
impl ReportAggregator {
pub fn new(config: Arguments) -> Self {
let (runner_tx, runner_rx) = unbounded_channel::<RunnerEvent>();
let (listener_tx, _) = channel::<ReporterEvent>(1024);
Self {
report: Report::new(config),
remaining_cases: Default::default(),
runner_tx: Some(runner_tx),
runner_rx,
listener_tx,
}
}
pub fn into_task(mut self) -> (Reporter, impl Future<Output = Result<()>>) {
let reporter = self
.runner_tx
.take()
.map(Into::into)
.expect("Can't fail since this can only be called once");
(reporter, async move { self.aggregate().await })
}
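
    // A minimal wiring sketch, assuming a tokio runtime and an `Arguments` value named
    // `config` (the variable names are illustrative):
    //
    //     let (reporter, task) = ReportAggregator::new(config).into_task();
    //     let aggregation = tokio::spawn(task);
    //     // ... hand clones of `reporter` to the runners, then drop them all so that the
    //     // channel closes and `aggregate` can finish and write the report to disk ...
    //     aggregation.await??;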
async fn aggregate(mut self) -> Result<()> {
debug!("Starting to aggregate report");
while let Some(event) = self.runner_rx.recv().await {
debug!(?event, "Received Event");
match event {
RunnerEvent::SubscribeToEvents(event) => {
self.handle_subscribe_to_events_event(*event);
}
RunnerEvent::CorpusFileDiscovery(event) => {
self.handle_corpus_file_discovered_event(*event)
}
RunnerEvent::MetadataFileDiscovery(event) => {
self.handle_metadata_file_discovery_event(*event);
}
RunnerEvent::TestCaseDiscovery(event) => {
self.handle_test_case_discovery(*event);
}
RunnerEvent::TestSucceeded(event) => {
self.handle_test_succeeded_event(*event);
}
RunnerEvent::TestFailed(event) => {
self.handle_test_failed_event(*event);
}
RunnerEvent::TestIgnored(event) => {
self.handle_test_ignored_event(*event);
}
RunnerEvent::LeaderNodeAssigned(event) => {
self.handle_leader_node_assigned_event(*event);
}
RunnerEvent::FollowerNodeAssigned(event) => {
self.handle_follower_node_assigned_event(*event);
}
RunnerEvent::PreLinkContractsCompilationSucceeded(event) => {
self.handle_pre_link_contracts_compilation_succeeded_event(*event)
}
RunnerEvent::PostLinkContractsCompilationSucceeded(event) => {
self.handle_post_link_contracts_compilation_succeeded_event(*event)
}
RunnerEvent::PreLinkContractsCompilationFailed(event) => {
self.handle_pre_link_contracts_compilation_failed_event(*event)
}
RunnerEvent::PostLinkContractsCompilationFailed(event) => {
self.handle_post_link_contracts_compilation_failed_event(*event)
}
RunnerEvent::LibrariesDeployed(event) => {
self.handle_libraries_deployed_event(*event);
}
RunnerEvent::ContractDeployed(event) => {
self.handle_contract_deployed_event(*event);
}
}
}
debug!("Report aggregation completed");
let file_name = {
let current_timestamp = SystemTime::now()
.duration_since(UNIX_EPOCH)
.context("System clock is before UNIX_EPOCH; cannot compute report timestamp")?
.as_secs();
let mut file_name = current_timestamp.to_string();
file_name.push_str(".json");
file_name
};
let file_path = self.report.config.directory().join(file_name);
let file = OpenOptions::new()
.create(true)
.write(true)
.truncate(true)
.read(false)
.open(&file_path)
.with_context(|| {
format!(
"Failed to open report file for writing: {}",
file_path.display()
)
})?;
serde_json::to_writer_pretty(&file, &self.report).with_context(|| {
format!("Failed to serialize report JSON to {}", file_path.display())
})?;
Ok(())
}
fn handle_subscribe_to_events_event(&self, event: SubscribeToEventsEvent) {
let _ = event.tx.send(self.listener_tx.subscribe());
}
fn handle_corpus_file_discovered_event(&mut self, event: CorpusFileDiscoveryEvent) {
self.report.corpora.push(event.corpus);
}
fn handle_metadata_file_discovery_event(&mut self, event: MetadataFileDiscoveryEvent) {
self.report.metadata_files.insert(event.path.clone());
}
fn handle_test_case_discovery(&mut self, event: TestCaseDiscoveryEvent) {
self.remaining_cases
.entry(event.test_specifier.metadata_file_path.clone().into())
.or_default()
.entry(event.test_specifier.solc_mode.clone())
.or_default()
.insert(event.test_specifier.case_idx);
}
fn handle_test_succeeded_event(&mut self, event: TestSucceededEvent) {
// Remove this from the set of cases we're tracking since it has completed.
self.remaining_cases
.entry(event.test_specifier.metadata_file_path.clone().into())
.or_default()
.entry(event.test_specifier.solc_mode.clone())
.or_default()
.remove(&event.test_specifier.case_idx);
        // Record in the report that the test case succeeded.
let test_case_report = self.test_case_report(&event.test_specifier);
test_case_report.status = Some(TestCaseStatus::Succeeded {
steps_executed: event.steps_executed,
});
self.handle_post_test_case_status_update(&event.test_specifier);
}
fn handle_test_failed_event(&mut self, event: TestFailedEvent) {
// Remove this from the set of cases we're tracking since it has completed.
self.remaining_cases
.entry(event.test_specifier.metadata_file_path.clone().into())
.or_default()
.entry(event.test_specifier.solc_mode.clone())
.or_default()
.remove(&event.test_specifier.case_idx);
        // Record in the report that the test case failed.
let test_case_report = self.test_case_report(&event.test_specifier);
test_case_report.status = Some(TestCaseStatus::Failed {
reason: event.reason,
});
self.handle_post_test_case_status_update(&event.test_specifier);
}
fn handle_test_ignored_event(&mut self, event: TestIgnoredEvent) {
// Remove this from the set of cases we're tracking since it has completed.
self.remaining_cases
.entry(event.test_specifier.metadata_file_path.clone().into())
.or_default()
.entry(event.test_specifier.solc_mode.clone())
.or_default()
.remove(&event.test_specifier.case_idx);
// Add information on the fact that the case was ignored to the report.
let test_case_report = self.test_case_report(&event.test_specifier);
test_case_report.status = Some(TestCaseStatus::Ignored {
reason: event.reason,
additional_fields: event.additional_fields,
});
self.handle_post_test_case_status_update(&event.test_specifier);
}
fn handle_post_test_case_status_update(&mut self, specifier: &TestSpecifier) {
let remaining_cases = self
.remaining_cases
.entry(specifier.metadata_file_path.clone().into())
.or_default()
.entry(specifier.solc_mode.clone())
.or_default();
if !remaining_cases.is_empty() {
return;
}
let case_status = self
.report
.test_case_information
.entry(specifier.metadata_file_path.clone().into())
.or_default()
.entry(specifier.solc_mode.clone())
.or_default()
.iter()
.map(|(case_idx, case_report)| {
(
*case_idx,
case_report.status.clone().expect("Can't be uninitialized"),
)
})
.collect::<BTreeMap<_, _>>();
let event = ReporterEvent::MetadataFileSolcModeCombinationExecutionCompleted {
metadata_file_path: specifier.metadata_file_path.clone().into(),
mode: specifier.solc_mode.clone(),
case_status,
};
// According to the documentation on send, the sending fails if there are no more receiver
// handles. Therefore, this isn't an error that we want to bubble up or anything. If we fail
// to send then we ignore the error.
let _ = self.listener_tx.send(event);
}
fn handle_leader_node_assigned_event(&mut self, event: LeaderNodeAssignedEvent) {
let execution_information = self.execution_information(&ExecutionSpecifier {
test_specifier: event.test_specifier,
node_id: event.id,
node_designation: NodeDesignation::Leader,
});
execution_information.node = Some(TestCaseNodeInformation {
id: event.id,
platform: event.platform,
connection_string: event.connection_string,
});
}
fn handle_follower_node_assigned_event(&mut self, event: FollowerNodeAssignedEvent) {
let execution_information = self.execution_information(&ExecutionSpecifier {
test_specifier: event.test_specifier,
node_id: event.id,
node_designation: NodeDesignation::Follower,
});
execution_information.node = Some(TestCaseNodeInformation {
id: event.id,
platform: event.platform,
connection_string: event.connection_string,
});
}
fn handle_pre_link_contracts_compilation_succeeded_event(
&mut self,
event: PreLinkContractsCompilationSucceededEvent,
) {
let include_input = self.report.config.report_include_compiler_input;
let include_output = self.report.config.report_include_compiler_output;
let execution_information = self.execution_information(&event.execution_specifier);
let compiler_input = if include_input {
event.compiler_input
} else {
None
};
let compiler_output = if include_output {
Some(event.compiler_output)
} else {
None
};
execution_information.pre_link_compilation_status = Some(CompilationStatus::Success {
is_cached: event.is_cached,
compiler_version: event.compiler_version,
compiler_path: event.compiler_path,
compiler_input,
compiler_output,
});
}
fn handle_post_link_contracts_compilation_succeeded_event(
&mut self,
event: PostLinkContractsCompilationSucceededEvent,
) {
let include_input = self.report.config.report_include_compiler_input;
let include_output = self.report.config.report_include_compiler_output;
let execution_information = self.execution_information(&event.execution_specifier);
let compiler_input = if include_input {
event.compiler_input
} else {
None
};
let compiler_output = if include_output {
Some(event.compiler_output)
} else {
None
};
execution_information.post_link_compilation_status = Some(CompilationStatus::Success {
is_cached: event.is_cached,
compiler_version: event.compiler_version,
compiler_path: event.compiler_path,
compiler_input,
compiler_output,
});
}
fn handle_pre_link_contracts_compilation_failed_event(
&mut self,
event: PreLinkContractsCompilationFailedEvent,
) {
let execution_information = self.execution_information(&event.execution_specifier);
execution_information.pre_link_compilation_status = Some(CompilationStatus::Failure {
reason: event.reason,
compiler_version: event.compiler_version,
compiler_path: event.compiler_path,
compiler_input: event.compiler_input,
});
}
fn handle_post_link_contracts_compilation_failed_event(
&mut self,
event: PostLinkContractsCompilationFailedEvent,
) {
let execution_information = self.execution_information(&event.execution_specifier);
execution_information.post_link_compilation_status = Some(CompilationStatus::Failure {
reason: event.reason,
compiler_version: event.compiler_version,
compiler_path: event.compiler_path,
compiler_input: event.compiler_input,
});
}
fn handle_libraries_deployed_event(&mut self, event: LibrariesDeployedEvent) {
self.execution_information(&event.execution_specifier)
.deployed_libraries = Some(event.libraries);
}
fn handle_contract_deployed_event(&mut self, event: ContractDeployedEvent) {
self.execution_information(&event.execution_specifier)
.deployed_contracts
.get_or_insert_default()
.insert(event.contract_instance, event.address);
}
fn test_case_report(&mut self, specifier: &TestSpecifier) -> &mut TestCaseReport {
self.report
.test_case_information
.entry(specifier.metadata_file_path.clone().into())
.or_default()
.entry(specifier.solc_mode.clone())
.or_default()
.entry(specifier.case_idx)
.or_default()
}
fn execution_information(
&mut self,
specifier: &ExecutionSpecifier,
) -> &mut ExecutionInformation {
let test_case_report = self.test_case_report(&specifier.test_specifier);
match specifier.node_designation {
NodeDesignation::Leader => test_case_report
.leader_execution_information
.get_or_insert_default(),
NodeDesignation::Follower => test_case_report
.follower_execution_information
.get_or_insert_default(),
}
}
}
#[serde_as]
#[derive(Clone, Debug, Serialize)]
pub struct Report {
/// The configuration that the tool was started up with.
pub config: Arguments,
/// The platform of the leader chain.
pub leader_platform: TestingPlatform,
/// The platform of the follower chain.
pub follower_platform: TestingPlatform,
/// The list of corpus files that the tool found.
pub corpora: Vec<Corpus>,
/// The list of metadata files that were found by the tool.
pub metadata_files: BTreeSet<MetadataFilePath>,
/// Information relating to each test case.
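    ///
    /// JSON object keys must be strings, so the `Mode` and `CaseIdx` map keys below are
    /// serialized through their `Display` implementations via `DisplayFromStr`.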
#[serde_as(as = "BTreeMap<_, HashMap<DisplayFromStr, BTreeMap<DisplayFromStr, _>>>")]
pub test_case_information:
BTreeMap<MetadataFilePath, HashMap<Mode, BTreeMap<CaseIdx, TestCaseReport>>>,
}
impl Report {
pub fn new(config: Arguments) -> Self {
Self {
leader_platform: config.leader,
follower_platform: config.follower,
config,
corpora: Default::default(),
metadata_files: Default::default(),
test_case_information: Default::default(),
}
}
}
#[derive(Clone, Debug, Serialize, Default)]
pub struct TestCaseReport {
/// Information on the status of the test case and whether it succeeded, failed, or was ignored.
#[serde(skip_serializing_if = "Option::is_none")]
pub status: Option<TestCaseStatus>,
/// Information related to the execution on the leader.
#[serde(skip_serializing_if = "Option::is_none")]
pub leader_execution_information: Option<ExecutionInformation>,
/// Information related to the execution on the follower.
#[serde(skip_serializing_if = "Option::is_none")]
pub follower_execution_information: Option<ExecutionInformation>,
}
/// Information related to the status of the test: whether it succeeded, failed, or was
/// ignored.
#[derive(Clone, Debug, Serialize)]
#[serde(tag = "status")]
pub enum TestCaseStatus {
/// The test case succeeded.
Succeeded {
/// The number of steps of the case that were executed.
steps_executed: usize,
},
/// The test case failed.
Failed {
/// The reason for the failure of the test case.
reason: String,
},
/// The test case was ignored. This variant carries information related to why it was ignored.
Ignored {
/// The reason behind the test case being ignored.
reason: String,
/// Additional fields that describe more information on why the test case is ignored.
#[serde(flatten)]
additional_fields: IndexMap<String, serde_json::Value>,
},
}
/// Information related to the leader or follower node that's being used to execute the test case.
#[derive(Clone, Debug, Serialize)]
pub struct TestCaseNodeInformation {
/// The ID of the node that this case is being executed on.
pub id: usize,
/// The platform of the node.
pub platform: TestingPlatform,
/// The connection string of the node.
pub connection_string: String,
}
/// Execution information tied to the leader or the follower.
#[derive(Clone, Debug, Default, Serialize)]
pub struct ExecutionInformation {
/// Information related to the node assigned to this test case.
#[serde(skip_serializing_if = "Option::is_none")]
pub node: Option<TestCaseNodeInformation>,
/// Information on the pre-link compiled contracts.
#[serde(skip_serializing_if = "Option::is_none")]
pub pre_link_compilation_status: Option<CompilationStatus>,
/// Information on the post-link compiled contracts.
#[serde(skip_serializing_if = "Option::is_none")]
pub post_link_compilation_status: Option<CompilationStatus>,
/// Information on the deployed libraries.
#[serde(skip_serializing_if = "Option::is_none")]
pub deployed_libraries: Option<BTreeMap<ContractInstance, Address>>,
/// Information on the deployed contracts.
#[serde(skip_serializing_if = "Option::is_none")]
pub deployed_contracts: Option<BTreeMap<ContractInstance, Address>>,
}
/// Information related to compilation.
#[derive(Clone, Debug, Serialize)]
#[serde(tag = "status")]
pub enum CompilationStatus {
/// The compilation was successful.
Success {
/// A flag with information on whether the compilation artifacts were cached or not.
is_cached: bool,
/// The version of the compiler used to compile the contracts.
compiler_version: Version,
/// The path of the compiler used to compile the contracts.
compiler_path: PathBuf,
/// The input provided to the compiler to compile the contracts. This is only included if
/// the appropriate flag is set in the CLI configuration and if the contracts were not
/// cached and the compiler was invoked.
#[serde(skip_serializing_if = "Option::is_none")]
compiler_input: Option<CompilerInput>,
/// The output of the compiler. This is only included if the appropriate flag is set in the
/// CLI configurations.
#[serde(skip_serializing_if = "Option::is_none")]
compiler_output: Option<CompilerOutput>,
},
/// The compilation failed.
Failure {
/// The failure reason.
reason: String,
/// The version of the compiler used to compile the contracts.
#[serde(skip_serializing_if = "Option::is_none")]
compiler_version: Option<Version>,
/// The path of the compiler used to compile the contracts.
#[serde(skip_serializing_if = "Option::is_none")]
compiler_path: Option<PathBuf>,
/// The input provided to the compiler to compile the contracts. This is only included if
/// the appropriate flag is set in the CLI configuration and if the contracts were not
/// cached and the compiler was invoked.
#[serde(skip_serializing_if = "Option::is_none")]
compiler_input: Option<CompilerInput>,
},
}
-94
@@ -1,94 +0,0 @@
//! The report analyzer enriches the raw report data.
use serde::{Deserialize, Serialize};
use crate::reporter::CompilationTask;
/// Provides insights into how well the compilers perform.
#[derive(Clone, Default, Debug, Serialize, Deserialize, PartialEq, PartialOrd)]
pub struct CompilerStatistics {
/// The sum of contracts observed.
pub n_contracts: usize,
/// The mean size of compiled contracts.
pub mean_code_size: usize,
/// The mean size of the optimized YUL IR.
pub mean_yul_size: usize,
    /// This is a proxy metric because the YUL also contains a lot of comments.
pub yul_to_bytecode_size_ratio: f32,
}
impl CompilerStatistics {
/// Cumulatively update the statistics with the next compiler task.
pub fn sample(&mut self, compilation_task: &CompilationTask) {
let Some(output) = &compilation_task.json_output else {
return;
};
let Some(contracts) = &output.contracts else {
return;
};
for (_solidity, contracts) in contracts.iter() {
for (_name, contract) in contracts.iter() {
let Some(evm) = &contract.evm else {
continue;
};
let Some(deploy_code) = &evm.deployed_bytecode else {
continue;
};
// The EVM bytecode can be unlinked and thus is not necessarily a decodable hex
// string; for our statistics this is a good enough approximation.
let bytecode_size = deploy_code.object.len() / 2;
let yul_size = contract
.ir_optimized
.as_ref()
.expect("if the contract has a deploy code it should also have the opimized IR")
.len();
self.update_sizes(bytecode_size, yul_size);
}
}
}
/// Updates the size statistics cumulatively.
fn update_sizes(&mut self, bytecode_size: usize, yul_size: usize) {
let n_previous = self.n_contracts;
let n_current = self.n_contracts + 1;
self.n_contracts = n_current;
self.mean_code_size = (n_previous * self.mean_code_size + bytecode_size) / n_current;
self.mean_yul_size = (n_previous * self.mean_yul_size + yul_size) / n_current;
if self.mean_code_size > 0 {
self.yul_to_bytecode_size_ratio =
self.mean_yul_size as f32 / self.mean_code_size as f32;
}
}
}
#[cfg(test)]
mod tests {
use super::CompilerStatistics;
#[test]
fn compiler_statistics() {
let mut received = CompilerStatistics::default();
received.update_sizes(0, 0);
received.update_sizes(3, 37);
received.update_sizes(123, 456);
let mean_code_size = 41; // rounding error from integer truncation
let mean_yul_size = 164;
let expected = CompilerStatistics {
n_contracts: 3,
mean_code_size,
mean_yul_size,
yul_to_bytecode_size_ratio: mean_yul_size as f32 / mean_code_size as f32,
};
assert_eq!(received, expected);
}
}
+43
@@ -0,0 +1,43 @@
//! Common types and functions used throughout the crate.
use std::{path::PathBuf, sync::Arc};
use revive_dt_common::define_wrapper_type;
use revive_dt_compiler::Mode;
use revive_dt_format::{case::CaseIdx, input::StepIdx};
use serde::{Deserialize, Serialize};
define_wrapper_type!(
#[derive(Clone, Debug, PartialEq, Eq, PartialOrd, Ord, Hash, Serialize, Deserialize)]
#[serde(transparent)]
pub struct MetadataFilePath(PathBuf);
);
/// An absolute specifier for a test.
#[derive(Clone, Debug, PartialEq, Eq, Hash)]
pub struct TestSpecifier {
pub solc_mode: Mode,
pub metadata_file_path: PathBuf,
pub case_idx: CaseIdx,
}
/// An absolute specifier for a test that also includes information about the node that it's assigned to
/// and whether it's the leader or follower.
#[derive(Clone, Debug, PartialEq, Eq, Hash)]
pub struct ExecutionSpecifier {
pub test_specifier: Arc<TestSpecifier>,
pub node_id: usize,
pub node_designation: NodeDesignation,
}
#[derive(Clone, Debug, PartialEq, Eq, PartialOrd, Ord, Hash)]
pub enum NodeDesignation {
Leader,
Follower,
}
#[derive(Clone, Debug, PartialEq, Eq, Hash)]
pub struct StepExecutionSpecifier {
pub execution_specifier: Arc<ExecutionSpecifier>,
pub step_idx: StepIdx,
}
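
// The specifiers nest: a `StepExecutionSpecifier` wraps an `ExecutionSpecifier`, which in
// turn wraps a `TestSpecifier`, so a step is addressed as
// (metadata file, solc mode, case index) -> (node id, leader/follower) -> step index.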
+10 -3
@@ -1,4 +1,11 @@
//! This crate implements the reporting infrastructure for the differential testing tool.

mod aggregator;
mod common;
mod reporter_event;
mod runner_event;
pub use aggregator::*;
pub use common::*;
pub use reporter_event::*;
pub use runner_event::*;
-243
@@ -1,243 +0,0 @@
//! The reporter is the central place observing test execution by collecting data.
//!
//! The data collected gives useful insights into the outcome of the test run
//! and helps identify and reproduce failing cases.
use std::{
collections::HashMap,
fs::{self, File, create_dir_all},
path::PathBuf,
sync::{Mutex, OnceLock},
time::{SystemTime, UNIX_EPOCH},
};
use anyhow::Context;
use serde::{Deserialize, Serialize};
use revive_dt_config::{Arguments, TestingPlatform};
use revive_dt_format::{corpus::Corpus, mode::SolcMode};
use revive_solc_json_interface::{SolcStandardJsonInput, SolcStandardJsonOutput};
use crate::analyzer::CompilerStatistics;
pub(crate) static REPORTER: OnceLock<Mutex<Report>> = OnceLock::new();
/// The `Report` data structure stores all relevant information required for generating reports.
#[derive(Clone, Debug, Default, Serialize, Deserialize)]
pub struct Report {
/// The configuration used during the test.
pub config: Arguments,
/// The observed test corpora.
pub corpora: Vec<Corpus>,
/// The observed test definitions.
pub metadata_files: Vec<PathBuf>,
/// The observed compilation results.
pub compiler_results: HashMap<TestingPlatform, Vec<CompilationResult>>,
/// The observed compilation statistics.
pub compiler_statistics: HashMap<TestingPlatform, CompilerStatistics>,
/// The file name this is serialized to.
#[serde(skip)]
directory: PathBuf,
}
/// Contains a compiled contract.
#[derive(Clone, Debug, Serialize, Deserialize)]
pub struct CompilationTask {
/// The observed compiler input.
pub json_input: SolcStandardJsonInput,
/// The observed compiler output.
pub json_output: Option<SolcStandardJsonOutput>,
/// The observed compiler mode.
pub mode: SolcMode,
/// The observed compiler version.
pub compiler_version: String,
/// The observed error, if any.
pub error: Option<String>,
}
/// Represents a report about a compilation task.
#[derive(Clone, Debug, Serialize, Deserialize)]
pub struct CompilationResult {
/// The observed compilation task.
pub compilation_task: CompilationTask,
/// The linked span.
pub span: Span,
}
/// The [Span] struct indicates the context of what is being reported.
#[derive(Clone, Copy, Debug, Serialize, Deserialize)]
pub struct Span {
/// The corpus index this belongs to.
corpus: usize,
/// The metadata file this belongs to.
metadata_file: usize,
/// The index of the case definition this belongs to.
case: usize,
/// The index of the case input this belongs to.
input: usize,
}
impl Report {
/// The file name where this report will be written to.
pub const FILE_NAME: &str = "report.json";
/// The [Span] is expected to initialize the reporter by providing the config.
const INITIALIZED_VIA_SPAN: &str = "requires a Span which initializes the reporter";
/// Create a new [Report].
fn new(config: Arguments) -> anyhow::Result<Self> {
let now = SystemTime::now()
.duration_since(UNIX_EPOCH)
.unwrap()
.as_millis();
let directory = config.directory().join("report").join(format!("{now}"));
if !directory.exists() {
create_dir_all(&directory)?;
}
Ok(Self {
config,
directory,
..Default::default()
})
}
/// Add a compilation task to the report.
pub fn compilation(span: Span, platform: TestingPlatform, compilation_task: CompilationTask) {
let mut report = REPORTER
.get()
.expect(Report::INITIALIZED_VIA_SPAN)
.lock()
.unwrap();
report
.compiler_statistics
.entry(platform)
.or_default()
.sample(&compilation_task);
report
.compiler_results
.entry(platform)
.or_default()
.push(CompilationResult {
compilation_task,
span,
});
}
/// Write the report to disk.
pub fn save() -> anyhow::Result<()> {
let Some(reporter) = REPORTER.get() else {
return Ok(());
};
let report = reporter.lock().unwrap();
if let Err(error) = report.write_to_file() {
anyhow::bail!("can not write report: {error}");
}
if report.config.extract_problems {
if let Err(error) = report.save_compiler_problems() {
anyhow::bail!("can not write compiler problems: {error}");
}
}
Ok(())
}
/// Write compiler problems to disk for later debugging.
pub fn save_compiler_problems(&self) -> anyhow::Result<()> {
for (platform, results) in self.compiler_results.iter() {
for result in results {
// ignore if there were no errors
if result.compilation_task.error.is_none()
&& result
.compilation_task
.json_output
.as_ref()
.and_then(|output| output.errors.as_ref())
.map(|errors| errors.is_empty())
.unwrap_or(true)
{
continue;
}
let path = &self.metadata_files[result.span.metadata_file]
.parent()
.unwrap()
.join(format!("{platform}_errors"));
if !path.exists() {
create_dir_all(path)?;
}
if let Some(error) = result.compilation_task.error.as_ref() {
fs::write(path.join("compiler_error.txt"), error)?;
}
if let Some(errors) = result.compilation_task.json_output.as_ref() {
let file = File::create(path.join("compiler_output.txt"))?;
serde_json::to_writer_pretty(file, &errors)?;
}
}
}
Ok(())
}
fn write_to_file(&self) -> anyhow::Result<()> {
let path = self.directory.join(Self::FILE_NAME);
let file = File::create(&path).context(path.display().to_string())?;
serde_json::to_writer_pretty(file, &self)?;
log::info!("report written to: {}", path.display());
Ok(())
}
}
impl Span {
/// Create a new [Span] with case and input index at 0.
///
/// Initializes the reporting facility on the first call.
pub fn new(corpus: Corpus, config: Arguments) -> anyhow::Result<Self> {
let report = Mutex::new(Report::new(config)?);
let mut reporter = REPORTER.get_or_init(|| report).lock().unwrap();
reporter.corpora.push(corpus);
Ok(Self {
corpus: reporter.corpora.len() - 1,
metadata_file: 0,
case: 0,
input: 0,
})
}
/// Advance to the next metadata file: Resets the case input index to 0.
pub fn next_metadata(&mut self, metadata_file: PathBuf) {
let mut reporter = REPORTER
.get()
.expect(Report::INITIALIZED_VIA_SPAN)
.lock()
.unwrap();
reporter.metadata_files.push(metadata_file);
self.metadata_file = reporter.metadata_files.len() - 1;
self.case = 0;
self.input = 0;
}
    /// Advance to the next case: increases the case index by one and resets the input index to 0.
pub fn next_case(&mut self) {
self.case += 1;
self.input = 0;
}
/// Advance to the next input.
pub fn next_input(&mut self) {
self.input += 1;
}
}
+22
@@ -0,0 +1,22 @@
//! A reporter event sent by the report aggregator to the various listeners.
use std::collections::BTreeMap;
use revive_dt_compiler::Mode;
use revive_dt_format::case::CaseIdx;
use crate::{MetadataFilePath, TestCaseStatus};
#[derive(Clone, Debug)]
pub enum ReporterEvent {
/// An event sent by the reporter once an entire metadata file and solc mode combination has
/// finished execution.
MetadataFileSolcModeCombinationExecutionCompleted {
/// The path of the metadata file.
metadata_file_path: MetadataFilePath,
/// The Solc mode that this metadata file was executed in.
mode: Mode,
/// The status of each one of the cases.
case_status: BTreeMap<CaseIdx, TestCaseStatus>,
},
}
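
// A sketch of how a listener might consume these events, assuming it has obtained a
// broadcast receiver from the aggregator (the variable names are illustrative):
//
//     while let Ok(event) = receiver.recv().await {
//         match event {
//             ReporterEvent::MetadataFileSolcModeCombinationExecutionCompleted {
//                 metadata_file_path,
//                 mode,
//                 case_status,
//             } => {
//                 // e.g. render a per-file, per-mode summary of the case statuses
//             }
//         }
//     }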
+642
@@ -0,0 +1,642 @@
//! The types associated with the events sent by the runner to the reporter.
#![allow(dead_code)]
use std::{collections::BTreeMap, path::PathBuf, sync::Arc};
use alloy_primitives::Address;
use anyhow::Context as _;
use indexmap::IndexMap;
use revive_dt_compiler::{CompilerInput, CompilerOutput};
use revive_dt_config::TestingPlatform;
use revive_dt_format::metadata::Metadata;
use revive_dt_format::{corpus::Corpus, metadata::ContractInstance};
use semver::Version;
use tokio::sync::{broadcast, oneshot};
use crate::{ExecutionSpecifier, ReporterEvent, TestSpecifier, common::MetadataFilePath};
macro_rules! __report_gen_emit_test_specific {
(
$ident:ident,
$variant_ident:ident,
$skip_field:ident;
$( $bname:ident : $bty:ty, )*
;
$( $aname:ident : $aty:ty, )*
) => {
paste::paste! {
pub fn [< report_ $variant_ident:snake _event >](
&self
$(, $bname: impl Into<$bty> )*
$(, $aname: impl Into<$aty> )*
) -> anyhow::Result<()> {
self.report([< $variant_ident Event >] {
$skip_field: self.test_specifier.clone()
$(, $bname: $bname.into() )*
$(, $aname: $aname.into() )*
})
}
}
};
}
macro_rules! __report_gen_emit_test_specific_by_parse {
(
$ident:ident,
$variant_ident:ident,
$skip_field:ident;
$( $bname:ident : $bty:ty, )* ; $( $aname:ident : $aty:ty, )*
) => {
__report_gen_emit_test_specific!(
$ident, $variant_ident, $skip_field;
$( $bname : $bty, )* ; $( $aname : $aty, )*
);
};
}
macro_rules! __report_gen_scan_before {
(
$ident:ident, $variant_ident:ident;
$( $before:ident : $bty:ty, )*
;
test_specifier : $skip_ty:ty,
$( $after:ident : $aty:ty, )*
;
) => {
__report_gen_emit_test_specific_by_parse!(
$ident, $variant_ident, test_specifier;
$( $before : $bty, )* ; $( $after : $aty, )*
);
};
(
$ident:ident, $variant_ident:ident;
$( $before:ident : $bty:ty, )*
;
$name:ident : $ty:ty, $( $after:ident : $aty:ty, )*
;
) => {
__report_gen_scan_before!(
$ident, $variant_ident;
$( $before : $bty, )* $name : $ty,
;
$( $after : $aty, )*
;
);
};
(
$ident:ident, $variant_ident:ident;
$( $before:ident : $bty:ty, )*
;
;
) => {};
}
macro_rules! __report_gen_for_variant {
(
$ident:ident,
$variant_ident:ident;
) => {};
(
$ident:ident,
$variant_ident:ident;
$( $field_ident:ident : $field_ty:ty ),+ $(,)?
) => {
__report_gen_scan_before!(
$ident, $variant_ident;
;
$( $field_ident : $field_ty, )*
;
);
};
}
macro_rules! __report_gen_emit_execution_specific {
(
$ident:ident,
$variant_ident:ident,
$skip_field:ident;
$( $bname:ident : $bty:ty, )*
;
$( $aname:ident : $aty:ty, )*
) => {
paste::paste! {
pub fn [< report_ $variant_ident:snake _event >](
&self
$(, $bname: impl Into<$bty> )*
$(, $aname: impl Into<$aty> )*
) -> anyhow::Result<()> {
self.report([< $variant_ident Event >] {
$skip_field: self.execution_specifier.clone()
$(, $bname: $bname.into() )*
$(, $aname: $aname.into() )*
})
}
}
};
}
macro_rules! __report_gen_emit_execution_specific_by_parse {
(
$ident:ident,
$variant_ident:ident,
$skip_field:ident;
$( $bname:ident : $bty:ty, )* ; $( $aname:ident : $aty:ty, )*
) => {
__report_gen_emit_execution_specific!(
$ident, $variant_ident, $skip_field;
$( $bname : $bty, )* ; $( $aname : $aty, )*
);
};
}
macro_rules! __report_gen_scan_before_exec {
(
$ident:ident, $variant_ident:ident;
$( $before:ident : $bty:ty, )*
;
execution_specifier : $skip_ty:ty,
$( $after:ident : $aty:ty, )*
;
) => {
__report_gen_emit_execution_specific_by_parse!(
$ident, $variant_ident, execution_specifier;
$( $before : $bty, )* ; $( $after : $aty, )*
);
};
(
$ident:ident, $variant_ident:ident;
$( $before:ident : $bty:ty, )*
;
$name:ident : $ty:ty, $( $after:ident : $aty:ty, )*
;
) => {
__report_gen_scan_before_exec!(
$ident, $variant_ident;
$( $before : $bty, )* $name : $ty,
;
$( $after : $aty, )*
;
);
};
(
$ident:ident, $variant_ident:ident;
$( $before:ident : $bty:ty, )*
;
;
) => {};
}
macro_rules! __report_gen_for_variant_exec {
(
$ident:ident,
$variant_ident:ident;
) => {};
(
$ident:ident,
$variant_ident:ident;
$( $field_ident:ident : $field_ty:ty ),+ $(,)?
) => {
__report_gen_scan_before_exec!(
$ident, $variant_ident;
;
$( $field_ident : $field_ty, )*
;
);
};
}
macro_rules! __report_gen_emit_step_execution_specific {
(
$ident:ident,
$variant_ident:ident,
$skip_field:ident;
$( $bname:ident : $bty:ty, )*
;
$( $aname:ident : $aty:ty, )*
) => {
paste::paste! {
pub fn [< report_ $variant_ident:snake _event >](
&self
$(, $bname: impl Into<$bty> )*
$(, $aname: impl Into<$aty> )*
) -> anyhow::Result<()> {
self.report([< $variant_ident Event >] {
$skip_field: self.step_specifier.clone()
$(, $bname: $bname.into() )*
$(, $aname: $aname.into() )*
})
}
}
};
}
macro_rules! __report_gen_emit_step_execution_specific_by_parse {
(
$ident:ident,
$variant_ident:ident,
$skip_field:ident;
$( $bname:ident : $bty:ty, )* ; $( $aname:ident : $aty:ty, )*
) => {
__report_gen_emit_step_execution_specific!(
$ident, $variant_ident, $skip_field;
$( $bname : $bty, )* ; $( $aname : $aty, )*
);
};
}
macro_rules! __report_gen_scan_before_step {
(
$ident:ident, $variant_ident:ident;
$( $before:ident : $bty:ty, )*
;
step_specifier : $skip_ty:ty,
$( $after:ident : $aty:ty, )*
;
) => {
__report_gen_emit_step_execution_specific_by_parse!(
$ident, $variant_ident, step_specifier;
$( $before : $bty, )* ; $( $after : $aty, )*
);
};
(
$ident:ident, $variant_ident:ident;
$( $before:ident : $bty:ty, )*
;
$name:ident : $ty:ty, $( $after:ident : $aty:ty, )*
;
) => {
__report_gen_scan_before_step!(
$ident, $variant_ident;
$( $before : $bty, )* $name : $ty,
;
$( $after : $aty, )*
;
);
};
(
$ident:ident, $variant_ident:ident;
$( $before:ident : $bty:ty, )*
;
;
) => {};
}
macro_rules! __report_gen_for_variant_step {
(
$ident:ident,
$variant_ident:ident;
) => {};
(
$ident:ident,
$variant_ident:ident;
$( $field_ident:ident : $field_ty:ty ),+ $(,)?
) => {
__report_gen_scan_before_step!(
$ident, $variant_ident;
;
$( $field_ident : $field_ty, )*
;
);
};
}
/// Defines the runner-event which is sent from the test runners to the report aggregator.
///
/// This macro defines a number of things related to the reporting infrastructure and the interface
/// used. First of all, it defines the enum of all of the possible events that the runners can send
/// to the aggregator. For each one of the variants it defines a separate struct for it to allow the
/// variant field in the enum to be put in a [`Box`].
///
/// In addition, it defines [`From`] implementations from each event type into the
/// [`RunnerEvent`] enum, allowing events such as [`CorpusFileDiscoveryEvent`] to be
/// converted into a [`RunnerEvent`].
///
/// It also defines the [`RunnerEventReporter`], a wrapper around an [`UnboundedSender`]
/// that allows events to be sent to the report aggregator.
///
/// Altogether, this macro defines almost all of the interface of the reporting
/// infrastructure: the event enum itself, its associated types, and the reporter used to
/// send events to the aggregator.
///
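/// # Example
///
/// A minimal sketch of an invocation (the `Foo` variant and its `bar` field are purely
/// illustrative):
///
/// ```ignore
/// define_event! {
///     pub(crate) enum RunnerEvent {
///         /// An illustrative event.
///         Foo {
///             /// An illustrative field.
///             bar: String,
///         },
///     }
/// }
/// ```
///
/// This expands to a `RunnerEvent` enum with a `Foo(Box<FooEvent>)` variant, a `FooEvent`
/// struct holding `bar: String`, an `impl From<FooEvent> for RunnerEvent`, and a
/// `RunnerEventReporter` exposing `report_foo_event(&self, bar: impl Into<String>)`.
///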
/// [`UnboundedSender`]: tokio::sync::mpsc::UnboundedSender
macro_rules! define_event {
(
$(#[$enum_meta: meta])*
$vis: vis enum $ident: ident {
$(
$(#[$variant_meta: meta])*
$variant_ident: ident {
$(
$(#[$field_meta: meta])*
$field_ident: ident: $field_ty: ty
),* $(,)?
}
),* $(,)?
}
) => {
paste::paste! {
$(#[$enum_meta])*
#[derive(Debug)]
$vis enum $ident {
$(
$(#[$variant_meta])*
$variant_ident(Box<[<$variant_ident Event>]>)
),*
}
$(
#[derive(Debug)]
$(#[$variant_meta])*
$vis struct [<$variant_ident Event>] {
$(
$(#[$field_meta])*
$vis $field_ident: $field_ty
),*
}
)*
$(
impl From<[<$variant_ident Event>]> for $ident {
fn from(value: [<$variant_ident Event>]) -> Self {
Self::$variant_ident(Box::new(value))
}
}
)*
/// Provides a way to report events to the aggregator.
///
/// Under the hood, this is a wrapper around an [`UnboundedSender`] which abstracts away
/// the fact that channels are used and that implements high-level methods for reporting
/// various events to the aggregator.
#[derive(Clone, Debug)]
pub struct [< $ident Reporter >]($vis tokio::sync::mpsc::UnboundedSender<$ident>);
impl From<tokio::sync::mpsc::UnboundedSender<$ident>> for [< $ident Reporter >] {
fn from(value: tokio::sync::mpsc::UnboundedSender<$ident>) -> Self {
Self(value)
}
}
impl [< $ident Reporter >] {
pub fn test_specific_reporter(
&self,
test_specifier: impl Into<std::sync::Arc<crate::common::TestSpecifier>>
) -> [< $ident TestSpecificReporter >] {
[< $ident TestSpecificReporter >] {
reporter: self.clone(),
test_specifier: test_specifier.into(),
}
}
fn report(&self, event: impl Into<$ident>) -> anyhow::Result<()> {
self.0.send(event.into()).map_err(Into::into)
}
$(
pub fn [< report_ $variant_ident:snake _event >](&self, $($field_ident: impl Into<$field_ty>),*) -> anyhow::Result<()> {
self.report([< $variant_ident Event >] {
$($field_ident: $field_ident.into()),*
})
}
)*
}
/// A reporter that's tied to a specific test case.
#[derive(Clone, Debug)]
pub struct [< $ident TestSpecificReporter >] {
$vis reporter: [< $ident Reporter >],
$vis test_specifier: std::sync::Arc<crate::common::TestSpecifier>,
}
impl [< $ident TestSpecificReporter >] {
pub fn execution_specific_reporter(
&self,
node_id: impl Into<usize>,
node_designation: impl Into<$crate::common::NodeDesignation>
) -> [< $ident ExecutionSpecificReporter >] {
[< $ident ExecutionSpecificReporter >] {
reporter: self.reporter.clone(),
execution_specifier: std::sync::Arc::new($crate::common::ExecutionSpecifier {
test_specifier: self.test_specifier.clone(),
node_id: node_id.into(),
node_designation: node_designation.into(),
})
}
}
fn report(&self, event: impl Into<$ident>) -> anyhow::Result<()> {
self.reporter.report(event)
}
$(
__report_gen_for_variant! { $ident, $variant_ident; $( $field_ident : $field_ty ),* }
)*
}
/// A reporter that's tied to a specific execution of the test case such as execution on
/// a specific node like the leader or follower.
#[derive(Clone, Debug)]
pub struct [< $ident ExecutionSpecificReporter >] {
$vis reporter: [< $ident Reporter >],
$vis execution_specifier: std::sync::Arc<$crate::common::ExecutionSpecifier>,
}
impl [< $ident ExecutionSpecificReporter >] {
fn report(&self, event: impl Into<$ident>) -> anyhow::Result<()> {
self.reporter.report(event)
}
$(
__report_gen_for_variant_exec! { $ident, $variant_ident; $( $field_ident : $field_ty ),* }
)*
}
/// A reporter that's tied to a specific step execution
#[derive(Clone, Debug)]
pub struct [< $ident StepExecutionSpecificReporter >] {
$vis reporter: [< $ident Reporter >],
$vis step_specifier: std::sync::Arc<$crate::common::StepExecutionSpecifier>,
}
impl [< $ident StepExecutionSpecificReporter >] {
fn report(&self, event: impl Into<$ident>) -> anyhow::Result<()> {
self.reporter.report(event)
}
$(
__report_gen_for_variant_step! { $ident, $variant_ident; $( $field_ident : $field_ty ),* }
)*
}
}
};
}
define_event! {
/// An event type that's sent by the test runners/drivers to the report aggregator.
pub(crate) enum RunnerEvent {
/// An event emitted by the reporter when it wishes to listen to events emitted by the
/// aggregator.
SubscribeToEvents {
/// The one-shot channel on which the aggregator sends back the receive half of its
/// broadcast channel.
tx: oneshot::Sender<broadcast::Receiver<ReporterEvent>>
},
/// An event emitted by runners when they've discovered a corpus file.
CorpusFileDiscovery {
/// The contents of the corpus file.
corpus: Corpus
},
/// An event emitted by runners when they've discovered a metadata file.
MetadataFileDiscovery {
/// The path of the metadata file discovered.
path: MetadataFilePath,
/// The content of the metadata file.
metadata: Metadata
},
/// An event emitted by the runners when they discover a test case.
TestCaseDiscovery {
/// A specifier for the test that was discovered.
test_specifier: Arc<TestSpecifier>,
},
/// An event emitted by the runners when a test case is ignored.
TestIgnored {
/// A specifier for the test that's been ignored.
test_specifier: Arc<TestSpecifier>,
/// A reason for the test to be ignored.
reason: String,
/// Additional fields that describe more information on why the test was ignored.
additional_fields: IndexMap<String, serde_json::Value>
},
/// An event emitted by the runners when a test case has succeeded.
TestSucceeded {
/// A specifier for the test that succeeded.
test_specifier: Arc<TestSpecifier>,
/// The number of steps of the case that were executed by the driver.
steps_executed: usize,
},
/// An event emitted by the runners when a test case has failed.
TestFailed {
/// A specifier for the test that failed.
test_specifier: Arc<TestSpecifier>,
/// A reason for the failure of the test.
reason: String,
},
/// An event emitted when the test case is assigned a leader node.
LeaderNodeAssigned {
/// A specifier for the test that the assignment is for.
test_specifier: Arc<TestSpecifier>,
/// The ID of the node that this case is being executed on.
id: usize,
/// The platform of the node.
platform: TestingPlatform,
/// The connection string of the node.
connection_string: String,
},
/// An event emitted when the test case is assigned a follower node.
FollowerNodeAssigned {
/// A specifier for the test that the assignment is for.
test_specifier: Arc<TestSpecifier>,
/// The ID of the node that this case is being executed on.
id: usize,
/// The platform of the node.
platform: TestingPlatform,
/// The connection string of the node.
connection_string: String,
},
/// An event emitted by the runners when compilation of the pre-link contracts has succeeded.
PreLinkContractsCompilationSucceeded {
/// A specifier for the execution that's taking place.
execution_specifier: Arc<ExecutionSpecifier>,
/// The version of the compiler used to compile the contracts.
compiler_version: Version,
/// The path of the compiler used to compile the contracts.
compiler_path: PathBuf,
/// A flag indicating whether the contract bytecode and ABI were served from the cache or
/// compiled anew.
is_cached: bool,
/// The input provided to the compiler - this is optional and not provided if the
/// contracts were obtained from the cache.
compiler_input: Option<CompilerInput>,
/// The output of the compiler.
compiler_output: CompilerOutput
},
/// An event emitted by the runners when compilation of the post-link contracts has succeeded.
PostLinkContractsCompilationSucceeded {
/// A specifier for the execution that's taking place.
execution_specifier: Arc<ExecutionSpecifier>,
/// The version of the compiler used to compile the contracts.
compiler_version: Version,
/// The path of the compiler used to compile the contracts.
compiler_path: PathBuf,
/// A flag indicating whether the contract bytecode and ABI were served from the cache or
/// compiled anew.
is_cached: bool,
/// The input provided to the compiler - this is optional and not provided if the
/// contracts were obtained from the cache.
compiler_input: Option<CompilerInput>,
/// The output of the compiler.
compiler_output: CompilerOutput
},
/// An event emitted by the runners when compilation of the pre-link contracts has failed.
PreLinkContractsCompilationFailed {
/// A specifier for the execution that's taking place.
execution_specifier: Arc<ExecutionSpecifier>,
/// The version of the compiler used to compile the contracts.
compiler_version: Option<Version>,
/// The path of the compiler used to compile the contracts.
compiler_path: Option<PathBuf>,
/// The input provided to the compiler - this is optional and not provided if the
/// contracts were obtained from the cache.
compiler_input: Option<CompilerInput>,
/// The failure reason.
reason: String,
},
/// An event emitted by the runners when compilation of the post-link contracts has failed.
PostLinkContractsCompilationFailed {
/// A specifier for the execution that's taking place.
execution_specifier: Arc<ExecutionSpecifier>,
/// The version of the compiler used to compile the contracts.
compiler_version: Option<Version>,
/// The path of the compiler used to compile the contracts.
compiler_path: Option<PathBuf>,
/// The input provided to the compiler - this is optional and not provided if the
/// contracts were obtained from the cache.
compiler_input: Option<CompilerInput>,
/// The failure reason.
reason: String,
},
/// An event emitted by the runners when the libraries have been deployed.
LibrariesDeployed {
/// A specifier for the execution that's taking place.
execution_specifier: Arc<ExecutionSpecifier>,
/// The addresses of the libraries that were deployed.
libraries: BTreeMap<ContractInstance, Address>
},
/// An event emitted by the runners when they've deployed a new contract.
ContractDeployed {
/// A specifier for the execution that's taking place.
execution_specifier: Arc<ExecutionSpecifier>,
/// The instance name of the contract.
contract_instance: ContractInstance,
/// The address of the contract.
address: Address
},
}
}
/// An extension to the [`RunnerEventReporter`] generated by the macro above.
impl RunnerEventReporter {
pub async fn subscribe(&self) -> anyhow::Result<broadcast::Receiver<ReporterEvent>> {
let (tx, rx) = oneshot::channel::<broadcast::Receiver<ReporterEvent>>();
self.report_subscribe_to_events_event(tx)
.context("Failed to send subscribe request to reporter task")?;
rx.await.map_err(Into::into)
}
}
pub type Reporter = RunnerEventReporter;
pub type TestSpecificReporter = RunnerEventTestSpecificReporter;
pub type ExecutionSpecificReporter = RunnerEventExecutionSpecificReporter;
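// Illustrative wiring of the generated reporter (a sketch only; the aggregator task that
// drains the receiving end of the channel is assumed to run elsewhere):
//
//     let (tx, _rx) = tokio::sync::mpsc::unbounded_channel::<RunnerEvent>();
//     let reporter = Reporter::from(tx);
//     reporter.report_test_case_discovery_event(test_specifier)?;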
+7 -1
@@ -9,10 +9,16 @@ repository.workspace = true
 rust-version.workspace = true
 
 [dependencies]
+revive-dt-common = { workspace = true }
 anyhow = { workspace = true }
 hex = { workspace = true }
-log = { workspace = true }
+tracing = { workspace = true }
+tokio = { workspace = true }
 reqwest = { workspace = true }
 semver = { workspace = true }
 serde = { workspace = true }
 sha2 = { workspace = true }
+
+[lints]
+workspace = true
+58 -21
@@ -6,54 +6,79 @@ use std::{
     io::{BufWriter, Write},
     os::unix::fs::PermissionsExt,
     path::{Path, PathBuf},
-    sync::{LazyLock, Mutex},
+    sync::LazyLock,
 };
 
-use crate::download::GHDownloader;
+use semver::Version;
+use tokio::sync::Mutex;
+
+use crate::download::SolcDownloader;
+use anyhow::Context;
 
 pub const SOLC_CACHE_DIRECTORY: &str = "solc";
 
 pub(crate) static SOLC_CACHER: LazyLock<Mutex<HashSet<PathBuf>>> = LazyLock::new(Default::default);
 
-pub(crate) fn get_or_download(
+pub(crate) async fn get_or_download(
     working_directory: &Path,
-    downloader: &GHDownloader,
-) -> anyhow::Result<PathBuf> {
+    downloader: &SolcDownloader,
+) -> anyhow::Result<(Version, PathBuf)> {
     let target_directory = working_directory
         .join(SOLC_CACHE_DIRECTORY)
         .join(downloader.version.to_string());
     let target_file = target_directory.join(downloader.target);
 
-    let mut cache = SOLC_CACHER.lock().unwrap();
+    let mut cache = SOLC_CACHER.lock().await;
     if cache.contains(&target_file) {
-        log::debug!("using cached solc: {}", target_file.display());
-        return Ok(target_file);
+        tracing::debug!("using cached solc: {}", target_file.display());
+        return Ok((downloader.version.clone(), target_file));
     }
 
-    create_dir_all(target_directory)?;
-    download_to_file(&target_file, downloader)?;
+    create_dir_all(&target_directory).with_context(|| {
+        format!(
+            "Failed to create solc cache directory: {}",
+            target_directory.display()
+        )
+    })?;
+    download_to_file(&target_file, downloader)
+        .await
+        .with_context(|| {
+            format!(
+                "Failed to write downloaded solc to {}",
+                target_file.display()
+            )
+        })?;
     cache.insert(target_file.clone());
 
-    Ok(target_file)
+    Ok((downloader.version.clone(), target_file))
 }
 
-fn download_to_file(path: &Path, downloader: &GHDownloader) -> anyhow::Result<()> {
-    log::info!("caching file: {}", path.display());
+async fn download_to_file(path: &Path, downloader: &SolcDownloader) -> anyhow::Result<()> {
     let Ok(file) = File::create_new(path) else {
-        log::debug!("cache file already exists: {}", path.display());
         return Ok(());
     };
 
     #[cfg(unix)]
     {
-        let mut permissions = file.metadata()?.permissions();
+        let mut permissions = file
+            .metadata()
+            .with_context(|| format!("Failed to read metadata for {}", path.display()))?
+            .permissions();
         permissions.set_mode(permissions.mode() | 0o111);
-        file.set_permissions(permissions)?;
+        file.set_permissions(permissions).with_context(|| {
+            format!("Failed to set executable permissions on {}", path.display())
+        })?;
     }
 
     let mut file = BufWriter::new(file);
-    file.write_all(&downloader.download()?)?;
-    file.flush()?;
+    file.write_all(
+        &downloader
+            .download()
+            .await
+            .context("Failed to download solc binary bytes")?,
+    )
+    .with_context(|| format!("Failed to write solc binary to {}", path.display()))?;
+    file.flush()
+        .with_context(|| format!("Failed to flush file {}", path.display()))?;
     drop(file);
 
     #[cfg(target_os = "macos")]
@@ -64,8 +89,20 @@ fn download_to_file(path: &Path, downloader: &GHDownloader) -> anyhow::Result<()
         .stderr(std::process::Stdio::null())
         .stdout(std::process::Stdio::null())
-        .spawn()?
-        .wait()?;
+        .spawn()
+        .with_context(|| {
+            format!(
+                "Failed to spawn xattr to remove quarantine attribute on {}",
+                path.display()
+            )
+        })?
+        .wait()
+        .with_context(|| {
+            format!(
+                "Failed waiting for xattr operation to complete on {}",
+                path.display()
+            )
+        })?;
 
     Ok(())
 }
+122 -56
@@ -5,10 +5,13 @@ use std::{
     sync::{LazyLock, Mutex},
 };
 
+use revive_dt_common::types::VersionOrRequirement;
 use semver::Version;
 use sha2::{Digest, Sha256};
 
 use crate::list::List;
+use anyhow::Context;
 
 pub static LIST_CACHE: LazyLock<Mutex<HashMap<&'static str, List>>> =
     LazyLock::new(Default::default);
@@ -23,12 +26,17 @@ impl List {
     ///
     /// Caches the list retrieved from the `url` into [LIST_CACHE],
     /// subsequent calls with the same `url` will return the cached list.
-    pub fn download(url: &'static str) -> anyhow::Result<Self> {
+    pub async fn download(url: &'static str) -> anyhow::Result<Self> {
         if let Some(list) = LIST_CACHE.lock().unwrap().get(url) {
             return Ok(list.clone());
         }
 
-        let body: List = reqwest::blocking::get(url)?.json()?;
+        let body: List = reqwest::get(url)
+            .await
+            .with_context(|| format!("Failed to GET solc list from {url}"))?
+            .json()
+            .await
+            .with_context(|| format!("Failed to deserialize solc list JSON from {url}"))?;
 
         LIST_CACHE.lock().unwrap().insert(url, body.clone());
@@ -36,65 +44,106 @@
     }
 }
 
-/// Download solc binaries from GitHub releases (IPFS links aren't reliable).
+/// Download solc binaries from the official SolidityLang site.
 #[derive(Clone, Debug)]
-pub struct GHDownloader {
+pub struct SolcDownloader {
     pub version: Version,
     pub target: &'static str,
     pub list: &'static str,
 }
 
-impl GHDownloader {
-    pub const BASE_URL: &str = "https://github.com/ethereum/solidity/releases/download";
-    pub const LINUX_NAME: &str = "solc-static-linux";
-    pub const MACOSX_NAME: &str = "solc-macos";
-    pub const WINDOWS_NAME: &str = "solc-windows.exe";
-    pub const WASM_NAME: &str = "soljson.js";
+impl SolcDownloader {
+    pub const BASE_URL: &str = "https://binaries.soliditylang.org";
+    pub const LINUX_NAME: &str = "linux-amd64";
+    pub const MACOSX_NAME: &str = "macosx-amd64";
+    pub const WINDOWS_NAME: &str = "windows-amd64";
+    pub const WASM_NAME: &str = "wasm";
 
-    fn new(version: Version, target: &'static str, list: &'static str) -> Self {
-        Self {
-            version,
-            target,
-            list,
-        }
+    async fn new(
+        version: impl Into<VersionOrRequirement>,
+        target: &'static str,
+        list: &'static str,
+    ) -> anyhow::Result<Self> {
+        let version_or_requirement = version.into();
+        match version_or_requirement {
+            VersionOrRequirement::Version(version) => Ok(Self {
+                version,
+                target,
+                list,
+            }),
+            VersionOrRequirement::Requirement(requirement) => {
+                let Some(version) = List::download(list)
+                    .await
+                    .with_context(|| format!("Failed to download solc builds list from {list}"))?
+                    .builds
+                    .into_iter()
+                    .map(|build| build.version)
+                    .filter(|version| requirement.matches(version))
+                    .max()
+                else {
+                    anyhow::bail!("Failed to find a version that satisfies {requirement:?}");
+                };
+                Ok(Self {
+                    version,
+                    target,
+                    list,
+                })
+            }
+        }
     }
 
-    pub fn linux(version: Version) -> Self {
-        Self::new(version, Self::LINUX_NAME, List::LINUX_URL)
+    pub async fn linux(version: impl Into<VersionOrRequirement>) -> anyhow::Result<Self> {
+        Self::new(version, Self::LINUX_NAME, List::LINUX_URL).await
     }
 
-    pub fn macosx(version: Version) -> Self {
-        Self::new(version, Self::MACOSX_NAME, List::MACOSX_URL)
+    pub async fn macosx(version: impl Into<VersionOrRequirement>) -> anyhow::Result<Self> {
+        Self::new(version, Self::MACOSX_NAME, List::MACOSX_URL).await
     }
 
-    pub fn windows(version: Version) -> Self {
-        Self::new(version, Self::WINDOWS_NAME, List::WINDOWS_URL)
+    pub async fn windows(version: impl Into<VersionOrRequirement>) -> anyhow::Result<Self> {
        Self::new(version, Self::WINDOWS_NAME, List::WINDOWS_URL).await
     }
 
-    pub fn wasm(version: Version) -> Self {
-        Self::new(version, Self::WASM_NAME, List::WASM_URL)
-    }
-
-    /// Returns the download link.
-    pub fn url(&self) -> String {
-        format!("{}/v{}/{}", Self::BASE_URL, &self.version, &self.target)
+    pub async fn wasm(version: impl Into<VersionOrRequirement>) -> anyhow::Result<Self> {
+        Self::new(version, Self::WASM_NAME, List::WASM_URL).await
     }
 
     /// Download the solc binary.
     ///
     /// Errors out if the download fails or the digest of the downloaded file
     /// mismatches the expected digest from the release [List].
-    pub fn download(&self) -> anyhow::Result<Vec<u8>> {
-        log::info!("downloading solc: {self:?}");
-        let expected_digest = List::download(self.list)?
-            .builds
-            .iter()
-            .find(|build| build.version == self.version)
-            .ok_or_else(|| anyhow::anyhow!("solc v{} not found builds", self.version))
-            .map(|b| b.sha256.strip_prefix("0x").unwrap_or(&b.sha256).to_string())?;
-
-        let file = reqwest::blocking::get(self.url())?.bytes()?.to_vec();
+    pub async fn download(&self) -> anyhow::Result<Vec<u8>> {
+        let builds = List::download(self.list)
+            .await
+            .with_context(|| format!("Failed to download solc builds list from {}", self.list))?
+            .builds;
+        let build = builds
+            .iter()
+            .find(|build| build.version == self.version)
+            .with_context(|| {
+                format!(
+                    "Requested solc version {} was not found in builds list fetched from {}",
+                    self.version, self.list
+                )
+            })?;
+
+        let path = build.path.clone();
+        let expected_digest = build
+            .sha256
+            .strip_prefix("0x")
+            .unwrap_or(&build.sha256)
+            .to_string();
+
+        let url = format!("{}/{}/{}", Self::BASE_URL, self.target, path.display());
+        let file = reqwest::get(&url)
+            .await
+            .with_context(|| format!("Failed to GET solc binary from {url}"))?
+            .bytes()
+            .await
+            .with_context(|| format!("Failed to read solc binary bytes from {url}"))?
+            .to_vec();
 
         if hex::encode(Sha256::digest(&file)) != expected_digest {
             anyhow::bail!("sha256 mismatch for solc version {}", self.version);
@@ -106,41 +155,58 @@ impl GHDownloader {
 #[cfg(test)]
 mod tests {
-    use crate::{download::GHDownloader, list::List};
+    use crate::{download::SolcDownloader, list::List};
 
-    #[test]
-    fn try_get_windows() {
-        let version = List::download(List::WINDOWS_URL)
-            .unwrap()
-            .latest_release
-            .into();
-        GHDownloader::windows(version).download().unwrap();
+    #[tokio::test]
+    async fn try_get_windows() {
+        let version = List::download(List::WINDOWS_URL)
+            .await
+            .unwrap()
+            .latest_release;
+        SolcDownloader::windows(version)
+            .await
+            .unwrap()
+            .download()
+            .await
+            .unwrap();
     }
 
-    #[test]
-    fn try_get_macosx() {
-        let version = List::download(List::MACOSX_URL)
-            .unwrap()
-            .latest_release
-            .into();
-        GHDownloader::macosx(version).download().unwrap();
+    #[tokio::test]
+    async fn try_get_macosx() {
+        let version = List::download(List::MACOSX_URL)
+            .await
+            .unwrap()
+            .latest_release;
+        SolcDownloader::macosx(version)
+            .await
+            .unwrap()
+            .download()
+            .await
+            .unwrap();
     }
 
-    #[test]
-    fn try_get_linux() {
-        let version = List::download(List::LINUX_URL)
-            .unwrap()
-            .latest_release
-            .into();
-        GHDownloader::linux(version).download().unwrap();
+    #[tokio::test]
+    async fn try_get_linux() {
+        let version = List::download(List::LINUX_URL)
+            .await
+            .unwrap()
+            .latest_release;
+        SolcDownloader::linux(version)
+            .await
+            .unwrap()
+            .download()
+            .await
+            .unwrap();
    }
 
-    #[test]
-    fn try_get_wasm() {
-        let version = List::download(List::WASM_URL)
-            .unwrap()
-            .latest_release
-            .into();
-        GHDownloader::wasm(version).download().unwrap();
+    #[tokio::test]
+    async fn try_get_wasm() {
+        let version = List::download(List::WASM_URL).await.unwrap().latest_release;
+        SolcDownloader::wasm(version)
+            .await
+            .unwrap()
+            .download()
+            .await
+            .unwrap();
     }
 }
+14 -10
@@ -5,8 +5,11 @@
 use std::path::{Path, PathBuf};
 
+use anyhow::Context;
 use cache::get_or_download;
-use download::GHDownloader;
+use download::SolcDownloader;
+use revive_dt_common::types::VersionOrRequirement;
 use semver::Version;
 
 pub mod cache;
@@ -18,22 +21,23 @@ pub mod list;
 ///
 /// Subsequent calls for the same version will use a cached artifact
 /// and not download it again.
-pub fn download_solc(
+pub async fn download_solc(
     cache_directory: &Path,
-    version: Version,
+    version: impl Into<VersionOrRequirement>,
     wasm: bool,
-) -> anyhow::Result<PathBuf> {
+) -> anyhow::Result<(Version, PathBuf)> {
     let downloader = if wasm {
-        GHDownloader::wasm(version)
+        SolcDownloader::wasm(version).await
     } else if cfg!(target_os = "linux") {
-        GHDownloader::linux(version)
+        SolcDownloader::linux(version).await
     } else if cfg!(target_os = "macos") {
-        GHDownloader::macosx(version)
+        SolcDownloader::macosx(version).await
     } else if cfg!(target_os = "windows") {
-        GHDownloader::windows(version)
+        SolcDownloader::windows(version).await
     } else {
         unimplemented!()
-    };
+    }
+    .context("Failed to initialize the Solc Downloader")?;
 
-    get_or_download(cache_directory, &downloader)
+    get_or_download(cache_directory, &downloader).await
 }
+1 -5
@@ -33,9 +33,5 @@
     "mixhash": "0x0000000000000000000000000000000000000000000000000000000000000000",
     "parentHash": "0x0000000000000000000000000000000000000000000000000000000000000000",
     "timestamp": "0x00",
-    "alloc": {
-        "90F8bf6A479f320ead074411a4B0e7944Ea8c9C1": {
-            "balance": "1000000000000000000"
-        }
-    }
+    "alloc": {}
 }
Executable
+102
@@ -0,0 +1,102 @@
#!/bin/bash
# Revive Differential Tests - Quick Start Script
# This script clones the test repository, sets up the corpus file, and runs the tool
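# Usage (the script filename is whatever this file is saved as):
#   ./<this-script> [path/to/polkadot-sdk]
# If no path is given, revive-dev-node, eth-rpc, and substrate-node are looked up on $PATH.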
set -e # Exit on any error
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color
# Configuration
TEST_REPO_URL="https://github.com/paritytech/resolc-compiler-tests"
TEST_REPO_DIR="resolc-compiler-tests"
CORPUS_FILE="./corpus.json"
WORKDIR="workdir"
# Optional positional argument: path to polkadot-sdk directory
POLKADOT_SDK_DIR="${1:-}"
# Binary paths (default to names in $PATH)
REVIVE_DEV_NODE_BIN="revive-dev-node"
ETH_RPC_BIN="eth-rpc"
SUBSTRATE_NODE_BIN="substrate-node"
echo -e "${GREEN}=== Revive Differential Tests Quick Start ===${NC}"
echo ""
# Check if test repo already exists
if [ -d "$TEST_REPO_DIR" ]; then
echo -e "${YELLOW}Test repository already exists. Pulling latest changes...${NC}"
cd "$TEST_REPO_DIR"
git pull
cd ..
else
echo -e "${GREEN}Cloning test repository...${NC}"
git clone "$TEST_REPO_URL"
fi
# If polkadot-sdk path is provided, verify and use binaries from there; build if needed
if [ -n "$POLKADOT_SDK_DIR" ]; then
if [ ! -d "$POLKADOT_SDK_DIR" ]; then
echo -e "${RED}Provided polkadot-sdk directory does not exist: $POLKADOT_SDK_DIR${NC}"
exit 1
fi
POLKADOT_SDK_DIR=$(realpath "$POLKADOT_SDK_DIR")
echo -e "${GREEN}Using polkadot-sdk at: $POLKADOT_SDK_DIR${NC}"
REVIVE_DEV_NODE_BIN="$POLKADOT_SDK_DIR/target/release/revive-dev-node"
ETH_RPC_BIN="$POLKADOT_SDK_DIR/target/release/eth-rpc"
SUBSTRATE_NODE_BIN="$POLKADOT_SDK_DIR/target/release/substrate-node"
if [ ! -x "$REVIVE_DEV_NODE_BIN" ] || [ ! -x "$ETH_RPC_BIN" ] || [ ! -x "$SUBSTRATE_NODE_BIN" ]; then
echo -e "${YELLOW}Required binaries not found in release target. Building...${NC}"
(cd "$POLKADOT_SDK_DIR" && cargo build --release --package staging-node-cli --package pallet-revive-eth-rpc --package revive-dev-node)
fi
for bin in "$REVIVE_DEV_NODE_BIN" "$ETH_RPC_BIN" "$SUBSTRATE_NODE_BIN"; do
if [ ! -x "$bin" ]; then
echo -e "${RED}Expected binary not found after build: $bin${NC}"
exit 1
fi
done
else
echo -e "${YELLOW}No polkadot-sdk path provided. Using binaries from $PATH.${NC}"
fi
# Create corpus file with absolute path resolved at runtime
echo -e "${GREEN}Creating corpus file...${NC}"
ABSOLUTE_PATH=$(realpath "$TEST_REPO_DIR/fixtures/solidity/")
cat > "$CORPUS_FILE" << EOF
{
"name": "MatterLabs Solidity Simple, Complex, and Semantic Tests",
"path": "$ABSOLUTE_PATH"
}
EOF
echo -e "${GREEN}Corpus file created: $CORPUS_FILE${NC}"
# Create workdir if it doesn't exist
mkdir -p "$WORKDIR"
echo -e "${GREEN}Starting differential tests...${NC}"
echo "This may take a while..."
echo ""
# Run the tool
RUST_LOG="error" cargo run --release -- \
--corpus "$CORPUS_FILE" \
--workdir "$WORKDIR" \
--number-of-nodes 5 \
--kitchensink "$SUBSTRATE_NODE_BIN" \
--revive-dev-node "$REVIVE_DEV_NODE_BIN" \
--eth_proxy "$ETH_RPC_BIN" \
> logs.log \
2> output.log
echo -e "${GREEN}=== Test run completed! ===${NC}"