Compare commits

...

56 Commits

Author SHA1 Message Date
Omar Abdulla ed1a0a1dcd Edit the formatting of the CLI case reporter 2025-08-06 15:09:22 +03:00
Omar Abdulla 746f5db66f Add a maximum to the exponential backoff wait duration 2025-08-06 14:55:48 +03:00
Omar Abdulla 1400086794 Set the gc mode to archive in geth 2025-08-06 13:53:37 +03:00
Omar edba49b301 Use SolidityLang for solc downloads (#117) 2025-08-06 10:35:05 +00:00
Omar 9980926d40 Add a case ignore flag (#114)
* Added a resolver tied to a specific block

* Increase the number of private keys

* Increase kitchensink wait time to 60 seconds

* Add a case ignore flag
2025-08-04 16:40:53 +00:00
Omar ff993d44a5 Added a resolver tied to a specific block (#111)
* Added a resolver tied to a specific block

* Increase the number of private keys

* Increase kitchensink wait time to 60 seconds
2025-08-04 12:45:47 +00:00
Omar 8cbb1a9f77 Added basic console reporting (#110)
* Added basic console reporting

* Add some waiting period to the printing task

* Print to the stderr and print logs to stdout
2025-08-04 06:05:49 +00:00
Omar 56c2fe8c0c Parallelize Cases (#109)
* Parallelize over cases

* Rename the state and driver

* Parallelize execution

* Update the default config of the tool

* Make codebase async

* Fix machete

* Fix tests & clear node directories before startup

* Cleanup the cleanup logic

* Rename geth node
2025-08-01 11:00:08 +00:00
Omar 330a773a1c Add variables support (#96) 2025-07-30 08:41:03 +00:00
Omar f51693cb9f Support multiple compiler versions (#92)
* Allow for downloader to use version requirements.

We will soon add support for honoring the compiler version requirement from the
metadata files. The compiler version is specified in the solc modes section of
the file, and it's specified as a `VersionReq` rather than as a concrete version.

Therefore, we need the ability to honor this version requirement and find the
best version that satisfies it (a sketch of this resolution follows this entry).

* Request `VersionOrRequirement` in compiler interface

* Honor the compiler version requirement in metadata

This commit honors the compiler version requirement listed in the solc
modes of the metadata file. If this version requirement is provided then
it overrides what was passed in the CLI. Otherwise, the CLI version will
be used.

* Make compiler IO completely generic.

Before this commit, the types used for the compiler input and output were
the resolc compiler types, which was a leaky abstraction: we have traits to
abstract the compilers away, yet we exposed their internal types to other
crates.

This commit did the following:
1. Made the compiler IO types fully generic so that all of the logic for
   constructing the map of compiled contracts is done by the compiler
   implementation and not by the consuming code.
2. Changed the input types used for Solc to be the forge standard JSON
   types for Solc instead of the resolc ones.

* Fix machete

* Add resolc to CI

* Add resolc to CI

* Add resolc to CI

* Add resolc to CI
2025-07-30 04:56:23 +00:00
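As a rough illustration of the version-requirement resolution described above, here is a minimal sketch (not the repository's actual implementation) that picks the newest available version satisfying a semver `VersionReq`:

```rust
use semver::{Version, VersionReq};

/// Picks the newest version from `available` that satisfies `req`, if any.
fn best_matching_version(available: &[Version], req: &VersionReq) -> Option<Version> {
    available
        .iter()
        .filter(|version| req.matches(version))
        .max() // `Version` is `Ord`, so `max` yields the newest match
        .cloned()
}

fn main() {
    let available = [Version::new(0, 7, 6), Version::new(0, 8, 9), Version::new(0, 8, 20)];
    let req: VersionReq = ">=0.8.9".parse().unwrap();
    assert_eq!(best_matching_version(&available, &req), Some(Version::new(0, 8, 20)));
}
```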
James Wilson 4db7009640 Ensure path in corpus is relative to corpus file (#85) 2025-07-29 13:12:16 +00:00
Omar 5a36e242ec Allow for files in corpus definitions (#87)
* Allow for files to be specified in the corpus file

* Attempt to improve the geth tx indexing issue.

We're facing an issue where Geth transaction indexing can sometimes stall
on some of the nodes we're running. The logs show that all transactions
always need 1 second of waiting time. However, during certain runs we
sometimes hit an issue where the transaction indexer of some nodes fails
(either at the start or after some amount of time), which means we never
get the receipts back from these specific nodes.

This is not a load issue, as all of the other nodes appear to handle the
load just fine. However, once a node gets into this state it cannot get
out of it and is bricked for the entire run.

This commit adds some more command line arguments to the geth command in
hopes of improving this issue.
2025-07-29 13:02:53 +00:00
James Wilson 33329632b5 Increase geth instantiate timeout from 2s to 5s (#86) 2025-07-29 10:34:31 +00:00
Omar 429f2e92a2 Fix contract discovery for simple tests (#83) 2025-07-28 07:05:53 +00:00
Omar 65f41f2038 Correct the type of address in matterlabs events (#82) 2025-07-28 05:01:52 +00:00
Omar 3ed8a1ca1c Support compiler-version aware exceptions (#81) 2025-07-25 14:23:17 +00:00
Omar 2923d675cd Support Compile-time Linking (#79)
* Use wrappers for libraries in metadata.

* Create a unified way to access deployed contracts

* Support linking at compile time
2025-07-25 07:03:21 +00:00
Omar 8f5bcf08ad Support Calldata arithmetic (#77)
* Re-order the input file.

This commit reorders the input file such that we have a definitions
section and an implementations section, and such that the order of
the items in both sections is the same.

* Implement a reverse polish calculator for calldata arithmetic (sketched below)
2025-07-24 15:35:25 +00:00
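A minimal sketch of a reverse Polish evaluator in the spirit of the entry above, assuming `U256` operands and only `+`/`-` operators (the actual token set and semantics may differ):

```rust
use alloy_primitives::U256;

/// Evaluates a reverse-Polish expression such as `["2", "3", "+"]` over U256
/// values. Returns None on malformed input (stack underflow or leftovers).
fn eval_rpn(tokens: &[&str]) -> Option<U256> {
    let mut stack: Vec<U256> = Vec::new();
    for token in tokens {
        match *token {
            "+" => {
                let (b, a) = (stack.pop()?, stack.pop()?);
                stack.push(a.wrapping_add(b)); // EVM-style wrapping arithmetic
            }
            "-" => {
                let (b, a) = (stack.pop()?, stack.pop()?);
                stack.push(a.wrapping_sub(b));
            }
            literal => stack.push(U256::from_str_radix(literal, 10).ok()?),
        }
    }
    (stack.len() == 1).then(|| stack[0])
}
```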
Omar 90fb89adc0 Add a common crate (#75)
* Add a barebones common crate

* Refactor some code into the common crate

* Add a `ResolverApi` interface.

This commit adds a `ResolverApi` trait to the `format` crate that can be
implemented by any type that can act as a resolver. A resolver is able
to provide information on the chain state. This chain state could be
fresh or it could be cached (which is something that we will do in a
future PR). A sketch of such a trait follows this entry.

This cleans up our crate graph so that `format` does not depend on the
node interactions crate for the `EthereumNode` trait.

* Cleanup the blocking executor
2025-07-24 12:42:45 +00:00
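The `ResolverApi` trait mentioned above could look roughly like the following sketch; the method names and types here are illustrative, not the crate's actual API:

```rust
use alloy_primitives::{Address, U256};

/// Illustrative sketch only: something that can answer questions about chain
/// state, whether fetched fresh from a node or served from a cache.
pub trait ResolverApi {
    fn chain_id(&self) -> anyhow::Result<u64>;
    fn gas_limit(&self) -> anyhow::Result<U256>;
    fn coinbase(&self) -> anyhow::Result<Address>;
    fn block_timestamp(&self, block_number: u64) -> anyhow::Result<U256>;
}
```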
Omar b03ad3027e Pre-seed accounts with more ETH. (#73)
* Pre-seed accounts with more ETH.

This commit fixes some issues around how much ETH we seed an account
with in genesis. Currently, any account that the node has keys to sign
for will be seeded with u128::MAX wei in genesis. This also includes the
default signer account.

* Bump commit hash of polkadot SDK

* Change how the cache key is computed

* Revert "Change how the cache key is computed"

This reverts commit 75afdd9cfd.

* Revert "Bump commit hash of polkadot SDK"

This reverts commit 8aaa69780e.

* Add extra comments

* Revert "Add extra comments"

This reverts commit bd4de2c83d.

* Update the initial balance
2025-07-24 08:46:14 +00:00
Omar 972f3b6d5b Wait longer for geth receipts (#74) 2025-07-24 04:40:19 +00:00
Omar 6f4aa731ab Handle exceptions (#54)
* Add support for wrapper types

* Move `FilesWithExtensionIterator` to `core::common`

* Remove unneeded use of two `HashMap`s

* Make metadata structs more typed

* Impl new_from for wrapper types

* Implement the new input handling logic

* Fix edge-case in input handling

* Ignore macro doc comment tests

* Correct comment

* Fix edge-case in deployment order

* Handle calldata better

* Allow for the use of function signatures

* Add support for exceptions

* Cached nonce allocator

* Fix tests

* Add support for address replacement

* Cleanup implementation

* Cleanup mutability

* Wire up address replacement with rest of code

* Implement caller replacement

* Switch to callframe trace for exceptions

* Add a way to skip tests if they don't match the target

* Handle values from the metadata files

* Remove address replacement

* Correct the arguments

* Remove empty impl

* Remove address replacement

* Correct the arguments

* Remove empty impl

* Fix size_requirement underflow

* Add support for wildcards in exceptions

* Fix calldata construction of single calldata

* Better handling for length in equivalency checks

* Make initial balance a constant

* Fix size_requirement underflow

* Add support for wildcards in exceptions

* Fix calldata construction of single calldata

* Better handling for length in equivalency checks

* Fix tests
2025-07-24 03:45:53 +00:00
Omar 589a5dc988 Handle calldata better (#49)
* Add support for wrapper types

* Move `FilesWithExtensionIterator` to `core::common`

* Remove unneeded use of two `HashMap`s

* Make metadata structs more typed

* Impl new_from for wrapper types

* Implement the new input handling logic

* Fix edge-case in input handling

* Ignore macro doc comment tests

* Correct comment

* Fix edge-case in deployment order

* Handle calldata better

* Remove todo
2025-07-22 03:39:35 +00:00
Omar c6d55515be Allow for the use of function signatures (#50)
* Allow for the use of function signatures

* Add test
2025-07-21 10:43:17 +00:00
Omar a9970eb2bb Refactor the input handling logic (#48)
* Add support for wrapper types

* Move `FilesWithExtensionIterator` to `core::common`

* Remove unneeded use of two `HashMap`s

* Make metadata structs more typed

* Impl new_from for wrapper types

* Implement the new input handling logic

* Fix edge-case in input handling

* Ignore macro doc comment tests

* Correct comment

* Fix edge-case in deployment order
2025-07-21 09:01:52 +00:00
Omar 2259942363 Cleanup execution logic (#45)
* Introduce a custom kitchensink network

* fix formatting

* Added `--dev` to `substrate-node` arguments.

This commit adds the `--dev` argument to `substrate-node` to allow
the chain to keep advancing as time goes on. We have found that if this
option is not added then the chain won't advance.

* fix clippy warning

* fix clippy warning

* Fix the ABI finding logic

* Fix function selector and argument encoding

* Avoid extra buffer allocation

* Remove reliance on the web3 crate

* Implement ABI fix in the compiler trait impl

* Update the async runtime with syntactic sugar.

* Fix tests

* Fix doc test

* Give nodes a standard way to get their alloy provider

* Add ability to get the chain_id from node

* Get kitchensink provider to use kitchensink network

* Use provider method in tests

* Add support for getting the gas limit from the node

* Add a way to get the coinbase address

* Add a way to get the block difficulty from the node

* Add a way to get block info from the node

* Expose APIs for getting the info of a specific block

* Add resolution logic for other matterlabs variables

* Fix tests

* Add comment on alternative solutions

* Change kitchensink gas limit assertion

* Cleanup execution logic
2025-07-18 12:08:13 +00:00
Omar 0b97d7dc29 Support other matterlabs variables (#43)
* Introduce a custom kitchensink network

* fix formatting

* Added `--dev` to `substrate-node` arguments.

This commit adds the `--dev` argument to `substrate-node` to allow
the chain to keep advancing as time goes on. We have found that if this
option is not added then the chain won't advance.

* fix clippy warning

* fix clippy warning

* Fix function selector and argument encoding

* Avoid extra buffer allocation

* Remove reliance on the web3 crate

* Update the async runtime with syntactic sugar.

* Fix tests

* Fix doc test

* Give nodes a standard way to get their alloy provider

* Add ability to get the chain_id from node

* Get kitchensink provider to use kitchensink network

* Use provider method in tests

* Add support for getting the gas limit from the node

* Add a way to get the coinbase address

* Add a way to get the block difficulty from the node

* Add a way to get block info from the node

* Expose APIs for getting the info of a specific block

* Add resolution logic for other matterlabs variables

* Fix tests

* Add comment on alternative solutions

* Change kitchensink gas limit assertion

* Remove un-needed profile config
2025-07-18 12:06:40 +00:00
Omar 2bee2d5c8b Fix the ABI finding logic (#38)
* Fix the ABI finding logic

* Implement ABI fix in the compiler trait impl
2025-07-18 11:22:51 +00:00
Omar 854e8d9690 Fix deserialization error: invalid value: string "0x2d79dd80ff729c000" (#34)
* Introduce a custom kitchensink network

* fix formatting

* Added `--dev` to `substrate-node` arguments.

This commit adds the `--dev` argument to `substrate-node` to allow
the chain to keep advancing as time goes on. We have found that if this
option is not added then the chain won't advance.

* fix clippy warning

* fix clippy warning
2025-07-18 11:22:13 +00:00
Omar 2d517784dd Better logging for contract deployment (#46)
* Log certain errors better

* Remove unneeded code
2025-07-16 18:16:12 +00:00
Omar baa11ad28f Correctly identify which contracts to compile (#44)
* Compile all contracts for a test file

* Fix compilation errors related to paths

* Set the base path if specified
2025-07-16 11:52:40 +00:00
Omar c2e65f9e33 Fix function selector & argument encoding (#39)
* Fix function selector and argument encoding

* Avoid extra buffer allocation

* Remove reliance on the web3 crate

* Fix tests
2025-07-15 20:00:10 +00:00
Omar 14888f9767 Update the async runtime (#42)
* Update the async runtime with syntactic sugar.

* Fix doc test

* Update crates/node-interaction/src/blocking_executor.rs

Co-authored-by: xermicus <cyrill@parity.io>

* Update crates/node-interaction/src/blocking_executor.rs

Co-authored-by: xermicus <cyrill@parity.io>

* Update crates/node-interaction/src/blocking_executor.rs

Co-authored-by: xermicus <cyrill@parity.io>

* Update crates/node-interaction/src/blocking_executor.rs

Co-authored-by: xermicus <cyrill@parity.io>

* Update crates/node-interaction/src/blocking_executor.rs

Co-authored-by: xermicus <cyrill@parity.io>

* Update crates/node-interaction/src/blocking_executor.rs

Co-authored-by: xermicus <cyrill@parity.io>

* Update crates/node-interaction/src/blocking_executor.rs

Co-authored-by: xermicus <cyrill@parity.io>

* Update crates/node-interaction/src/blocking_executor.rs

Co-authored-by: xermicus <cyrill@parity.io>

* Update crates/node-interaction/src/blocking_executor.rs

Co-authored-by: xermicus <cyrill@parity.io>

* Update crates/node-interaction/src/blocking_executor.rs

Co-authored-by: xermicus <cyrill@parity.io>

* Improve the comments

* Update the release profile

---------

Co-authored-by: xermicus <cyrill@parity.io>
2025-07-15 11:19:17 +00:00
Omar 3e99d1c2a5 Allow alloy to estimate tx gas (#37) 2025-07-14 17:34:44 +00:00
Omar 4e234aa1bd Remove code that was accidentally committed. (#41)
* Remove code that was accidentally committed.

* Remove unneeded dependency
2025-07-14 16:24:39 +00:00
Omar b204de5484 Persist node logs (#36)
* Persist node logs

* Fix clippy lints

* Delete the node's db on shutdown but persist logs

* Fix tests

* Separate stdout and stderr and use more consts.

* More consistent handling of open options

* Revert the use of subprocess

* Remove outdated comment

* Flush the log files on drop

* Rename `log_files` -> `logs_file_to_flush`
2025-07-14 16:08:47 +00:00
Omar 5eb3a0e1b5 Fix for "transaction indexing is in progress" (#32)
* Retry getting transaction receipt

* Small fix to logging consistency

* Introduce a custom kitchensink network

* Fix formatting and clippy
2025-07-14 09:32:57 +00:00
Omar 772bd217c3 Fixing the CI on Ubuntu (#31)
* pin the version of geth used in CI

* pin the version of geth used in CI

* temp: run on each push

* pin the version of geth used in CI

* Make geth installation arch dependent

* Remove temp run on push to branch

* Add a comment on the need for pre-built binaries
2025-07-14 09:17:13 +00:00
Omar 0513a4befb Use tracing for logging. (#29)
This commit updates how logging is done in the differential testing
harness to use `tracing` instead of the `log` crate. This allows us to
better associate logs with the cases being executed, which makes it
easier to debug and understand what the harness is doing (a sketch of
this pattern follows this entry).
2025-07-10 07:28:16 +00:00
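A minimal sketch of the pattern (the span and field names are illustrative): a `tracing` span carries the case identity, so every log emitted inside it can be attributed to that case:

```rust
use tracing::{info, info_span};

fn run_case(case_name: &str) {
    // All events emitted while the guard is alive carry the `case` field,
    // so harness output can be grouped and filtered per case.
    let span = info_span!("case", case = %case_name);
    let _guard = span.enter();
    info!("starting execution");
}
```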
activecoder10 de7c7d6703 Compute transaction input for executing transactions (#28)
* Parsed ABI field in order to get method parameter

* Added logic for ABI

* Refactored dependencies

* Small refactoring

* Added unit tests for ABI parameter extraction logic

* Fixed format issues

* Fixed format

* Added new changes to format

* Added bail to stop execution when we have an error during deployment
2025-07-09 11:03:38 +00:00
activecoder10 3a537c2812 Added extra logging for critical part of the flow. (#27)
* Fix legacy_transaction to address for execution part

* updated polkadot-sdk to latest

* Update polkadot-sdk to latest main with fixes

* Added extra logging

* Applied some clippy improvements
2025-06-27 15:24:57 +00:00
activecoder10 4ab79ed97e Fixed the contract deployment logic. Added new tracing logging for the differential of leader and follower receipt structures (#26) 2025-06-20 13:02:54 +00:00
activecoder10 ee97b62e70 Added fetch_add_nonce method for NodeInteraction trait. Added extra logging. (#25)
* added logging

* added fetch_add_nonce method

* Added nonce for legacy transaction also

* Addressed PR comments
2025-06-18 19:43:16 +00:00
xermicus e9b5a06aec fix the simple test case definition (#24)
Signed-off-by: xermicus <cyrill@parity.io>
2025-06-17 10:23:09 +00:00
xermicus 534170db6f dont fail machete on polkadot-sdk submodule (#23)
Signed-off-by: xermicus <cyrill@parity.io>
2025-06-14 10:12:30 +00:00
activecoder10 090b56c46a deploy contracts (#22) 2025-06-12 11:09:01 +00:00
activecoder10 547563e718 Extended execute_input method (#21)
* Extended execute_input method

* Improve tracing part
2025-06-10 08:23:37 +00:00
xermicus c8eb8cf7b0 the state diff method belongs to node interactions (#20)
Signed-off-by: Cyrill Leutwiler <bigcyrill@hotmail.com>
2025-06-05 07:50:54 +00:00
activecoder10 3b26e1e1d6 Implement the Node trait for kitchensink (#16)
* feat: implement Node trait for Kitchensink node

* removed self from eth_to_substrate_address method
2025-06-05 06:12:54 +00:00
xermicus 1bc20d088f update dependencies (#19)
Signed-off-by: Cyrill Leutwiler <bigcyrill@hotmail.com>
2025-05-26 07:02:27 +00:00
xermicus 10bfaed461 Implement basic reporting facility (#18)
* wip

Signed-off-by: Cyrill Leutwiler <bigcyrill@hotmail.com>

* save to file after all tasks done

Signed-off-by: Cyrill Leutwiler <bigcyrill@hotmail.com>

* error out early if the workdir does not exist

Signed-off-by: Cyrill Leutwiler <bigcyrill@hotmail.com>

* the compiler statistics

Signed-off-by: Cyrill Leutwiler <bigcyrill@hotmail.com>

* allow compiler statistics per implementation

Signed-off-by: Cyrill Leutwiler <bigcyrill@hotmail.com>

* save compiler problems

Signed-off-by: Cyrill Leutwiler <bigcyrill@hotmail.com>

* add flag whether to extract compiler errors

Signed-off-by: Cyrill Leutwiler <bigcyrill@hotmail.com>

* whitespace

Signed-off-by: Cyrill Leutwiler <bigcyrill@hotmail.com>

---------

Signed-off-by: Cyrill Leutwiler <bigcyrill@hotmail.com>
2025-05-23 17:15:04 +00:00
xermicus 399f7820cd add all cargo tasks to the test target (#14)
Signed-off-by: Cyrill Leutwiler <bigcyrill@hotmail.com>
2025-05-15 11:15:50 +00:00
activecoder10 ae1174febe Added basic CI workflow (#13) 2025-05-12 13:00:13 +03:00
activecoder10 38b42560ec Added implementation for resolc trait (#12)
Implement the Solidity Compiler trait for resolc
2025-05-08 11:09:02 +02:00
Cyrill Leutwiler 8009f5880c update README.md
Signed-off-by: Cyrill Leutwiler <bigcyrill@hotmail.com>
2025-03-31 16:44:16 +02:00
xermicus c590fa7bfd Scaffold utility and library (#3)
Signed-off-by: Cyrill Leutwiler <bigcyrill@hotmail.com>
Signed-off-by: xermicus <bigcyrill@hotmail.com>
2025-03-31 11:40:05 +02:00
63 changed files with 14906 additions and 1 deletion
+163
@@ -0,0 +1,163 @@
name: Test workflow
on:
push:
branches:
- main
pull_request:
branches:
- main
types: [opened, synchronize]
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
env:
CARGO_TERM_COLOR: always
jobs:
cache-polkadot:
name: Build and cache Polkadot binaries on ${{ matrix.os }}
runs-on: ${{ matrix.os }}
strategy:
matrix:
os: [ubuntu-24.04, macos-14]
steps:
- name: Checkout repo and submodules
uses: actions/checkout@v4
with:
submodules: recursive
- name: Install dependencies (Linux)
if: matrix.os == 'ubuntu-24.04'
run: |
sudo apt-get update
sudo apt-get install -y protobuf-compiler clang libclang-dev
rustup target add wasm32-unknown-unknown
rustup component add rust-src
- name: Install dependencies (macOS)
if: matrix.os == 'macos-14'
run: |
brew install protobuf
rustup target add wasm32-unknown-unknown
rustup component add rust-src
- name: Cache binaries
id: cache
uses: actions/cache@v3
with:
path: |
~/.cargo/bin/substrate-node
~/.cargo/bin/eth-rpc
key: polkadot-binaries-${{ matrix.os }}-${{ hashFiles('polkadot-sdk/.git') }}
- name: Build substrate-node
if: steps.cache.outputs.cache-hit != 'true'
run: |
cd polkadot-sdk
cargo install --locked --force --profile=production --path substrate/bin/node/cli --bin substrate-node --features cli
- name: Build eth-rpc
if: steps.cache.outputs.cache-hit != 'true'
run: |
cd polkadot-sdk
cargo install --path substrate/frame/revive/rpc --bin eth-rpc
ci:
name: CI on ${{ matrix.os }}
needs: cache-polkadot
runs-on: ${{ matrix.os }}
strategy:
matrix:
os: [ubuntu-24.04, macos-14]
steps:
- name: Checkout repo
uses: actions/checkout@v4
- name: Restore binaries from cache
uses: actions/cache@v3
with:
path: |
~/.cargo/bin/substrate-node
~/.cargo/bin/eth-rpc
key: polkadot-binaries-${{ matrix.os }}-${{ hashFiles('polkadot-sdk/.git') }}
- name: Setup Rust toolchain
uses: actions-rust-lang/setup-rust-toolchain@v1
with:
rustflags: ""
- name: Add wasm32 target
run: |
rustup target add wasm32-unknown-unknown
rustup component add rust-src
- name: Install Geth on Ubuntu
if: matrix.os == 'ubuntu-24.04'
run: |
sudo add-apt-repository -y ppa:ethereum/ethereum
sudo apt-get update
sudo apt-get install -y protobuf-compiler
sudo apt-get install -y solc
# We were facing some issues in CI with the 1.16.* versions of geth, and specifically on
# Ubuntu. Eventually, we found out that the last version of geth that worked in our CI was
# version 1.15.11. Thus, this is the version that we want to use in CI. The PPA sadly does
# not have historic versions of Geth, and therefore we need to resort to downloading
# pre-built binaries for Geth and the surrounding tools, which is what the following
# parts of the script do.
sudo apt-get install -y wget ca-certificates tar
ARCH=$(uname -m)
if [ "$ARCH" = "x86_64" ]; then
URL="https://gethstore.blob.core.windows.net/builds/geth-alltools-linux-amd64-1.15.11-36b2371c.tar.gz"
elif [ "$ARCH" = "aarch64" ]; then
URL="https://gethstore.blob.core.windows.net/builds/geth-alltools-linux-arm64-1.15.11-36b2371c.tar.gz"
else
echo "Unsupported architecture: $ARCH"
exit 1
fi
wget -qO- "$URL" | sudo tar xz -C /usr/local/bin --strip-components=1
geth --version
curl -sL https://github.com/paritytech/revive/releases/download/v0.3.0/resolc-x86_64-unknown-linux-musl -o resolc
chmod +x resolc
sudo mv resolc /usr/local/bin
- name: Install Geth on macOS
if: matrix.os == 'macos-14'
run: |
brew tap ethereum/ethereum
brew install ethereum protobuf
brew install solidity
curl -sL https://github.com/paritytech/revive/releases/download/v0.3.0/resolc-universal-apple-darwin -o resolc
chmod +x resolc
sudo mv resolc /usr/local/bin
- name: Machete
uses: bnjbvr/cargo-machete@v0.7.1
- name: Format
run: make format
- name: Clippy
run: make clippy
- name: Check substrate-node version
run: substrate-node --version
- name: Check eth-rpc version
run: eth-rpc --version
- name: Check resolc version
run: resolc --version
- name: Test cargo workspace
run: make test
+9
@@ -0,0 +1,9 @@
/target
.vscode/
.DS_Store
node_modules
/*.json
# We do not want to commit any log files that we produce from running the code locally so this is
# added to the .gitignore file.
*.log
+3
@@ -0,0 +1,3 @@
[submodule "polkadot-sdk"]
path = polkadot-sdk
url = https://github.com/paritytech/polkadot-sdk.git
Generated
+6449
File diff suppressed because it is too large
+85
@@ -0,0 +1,85 @@
[workspace]
resolver = "2"
members = ["crates/*"]
[workspace.package]
version = "0.1.0"
authors = ["Parity Technologies <admin@parity.io>"]
license = "MIT/Apache-2.0"
edition = "2024"
repository = "https://github.com/paritytech/revive-differential-testing.git"
rust-version = "1.85.0"
[workspace.dependencies]
revive-dt-common = { version = "0.1.0", path = "crates/common" }
revive-dt-compiler = { version = "0.1.0", path = "crates/compiler" }
revive-dt-config = { version = "0.1.0", path = "crates/config" }
revive-dt-core = { version = "0.1.0", path = "crates/core" }
revive-dt-format = { version = "0.1.0", path = "crates/format" }
revive-dt-node = { version = "0.1.0", path = "crates/node" }
revive-dt-node-interaction = { version = "0.1.0", path = "crates/node-interaction" }
revive-dt-node-pool = { version = "0.1.0", path = "crates/node-pool" }
revive-dt-report = { version = "0.1.0", path = "crates/report" }
revive-dt-solc-binaries = { version = "0.1.0", path = "crates/solc-binaries" }
alloy-primitives = "1.2.1"
alloy-sol-types = "1.2.1"
anyhow = "1.0"
clap = { version = "4", features = ["derive"] }
foundry-compilers-artifacts = { version = "0.18.0" }
futures = { version = "0.3.31" }
hex = "0.4.3"
reqwest = { version = "0.12.15", features = ["json"] }
once_cell = "1.21"
semver = { version = "1.0", features = ["serde"] }
serde = { version = "1.0", default-features = false, features = ["derive"] }
serde_json = { version = "1.0", default-features = false, features = [
"arbitrary_precision",
"std",
] }
sha2 = { version = "0.10.9" }
sp-core = "36.1.0"
sp-runtime = "41.1.0"
temp-dir = { version = "0.1.16" }
tempfile = "3.3"
tokio = { version = "1.47.0", default-features = false, features = [
"rt-multi-thread",
"process",
"rt",
] }
uuid = { version = "1.8", features = ["v4"] }
tracing = "0.1.41"
tracing-subscriber = { version = "0.3.19", default-features = false, features = [
"fmt",
"json",
"env-filter",
] }
indexmap = { version = "2.10.0", default-features = false }
# revive compiler
revive-solc-json-interface = { git = "https://github.com/paritytech/revive", rev = "3389865af7c3ff6f29a586d82157e8bc573c1a8e" }
revive-common = { git = "https://github.com/paritytech/revive", rev = "3389865af7c3ff6f29a586d82157e8bc573c1a8e" }
revive-differential = { git = "https://github.com/paritytech/revive", rev = "3389865af7c3ff6f29a586d82157e8bc573c1a8e" }
[workspace.dependencies.alloy]
version = "1.0.22"
default-features = false
features = [
"json-abi",
"providers",
"provider-ipc",
"provider-debug-api",
"reqwest",
"rpc-types",
"signer-local",
"std",
"network",
"serde",
"rpc-types-eth",
"genesis",
]
[profile.bench]
inherits = "release"
lto = true
codegen-units = 1
+15
@@ -0,0 +1,15 @@
.PHONY: format clippy test machete
format:
cargo fmt --all -- --check
clippy:
cargo clippy --all-features --workspace -- --deny warnings
machete:
cargo install cargo-machete
cargo machete crates
test: format clippy machete
cargo test --workspace -- --nocapture
+33 -1
@@ -1,2 +1,34 @@
# revive-differential-tests
revive differential testing framework
The revive differential testing framework allows you to define smart contract tests in a declarative manner in order to compile and execute them against different Ethereum-compatible blockchain implementations. This is useful to:
- Analyze observable differences in contract compilation and execution across different blockchain implementations, including contract storage, account balances, transaction output and emitted events on a per-transaction basis.
- Collect and compare benchmark metrics such as code size, gas usage or transaction throughput in transactions per second (TPS) across different blockchain implementations.
- Ensure reproducible contract builds across multiple compiler implementations or multiple host platforms.
- Implement end-to-end regression tests for Ethereum-compatible smart contract stacks.
# Declarative test format
For now, the format used to write tests is the [matter-labs era compiler format](https://github.com/matter-labs/era-compiler-tests?tab=readme-ov-file#matter-labs-simplecomplex-format). This allows us to re-use many tests from their corpora.
# The `retester` utility
The `retester` helper utility is used to run the tests. To get an idea of what `retester` can do, please consult its command line help:
```
cargo run -p revive-dt-core -- --help
```
For example, to run the [complex Solidity tests](https://github.com/matter-labs/era-compiler-tests/tree/main/solidity/complex), define a corpus structure as follows:
```json
{
"name": "ML Solidity Complex",
"path": "/path/to/era-compiler-tests/solidity/complex"
}
```
Assuming this is saved in an `ml-solidity-complex.json` file, the following command will try to compile and execute the tests found inside the corpus:
```bash
RUST_LOG=debug cargo r --release -p revive-dt-core -- --corpus ml-solidity-complex.json
```
+326
@@ -0,0 +1,326 @@
{
"modes": [
"Y >=0.8.9",
"E",
"I"
],
"cases": [
{
"name": "first",
"inputs": [
{
"instance": "WBTC_1",
"method": "#deployer",
"calldata": [
"0x40",
"0x80",
"4",
"0x5742544300000000000000000000000000000000000000000000000000000000",
"14",
"0x5772617070656420425443000000000000000000000000000000000000000000"
],
"expected": [
"WBTC_1.address"
]
},
{
"instance": "WBTC_2",
"method": "#deployer",
"calldata": [
"0x40",
"0x80",
"4",
"0x5742544300000000000000000000000000000000000000000000000000000000",
"14",
"0x5772617070656420425443000000000000000000000000000000000000000000"
],
"expected": [
"WBTC_2.address"
]
},
{
"instance": "Mooniswap",
"method": "#deployer",
"calldata": [
"0x0000000000000000000000000000000000000000000000000000000000000060",
"0x00000000000000000000000000000000000000000000000000000000000000c0",
"0x0000000000000000000000000000000000000000000000000000000000000100",
"0x0000000000000000000000000000000000000000000000000000000000000002",
"WBTC_1.address",
"WBTC_2.address",
"4",
"0x5742544300000000000000000000000000000000000000000000000000000000",
"14",
"0x5772617070656420425443000000000000000000000000000000000000000000"
],
"expected": {
"return_data": [
"Mooniswap.address"
],
"events": [
{
"topics": [
"0x8be0079c531659141344cd1fd0a4f28419497f9722a3daafe3b4186f6b6457e0",
"0x0000000000000000000000000000000000000000000000000000000000000000",
"0xdeadbeef01000000000000000000000000000000"
],
"values": []
}
],
"exception": false
}
},
{
"instance": "WBTC_1",
"method": "_mint",
"calldata": [
"0xdeadbeef00000000000000000000000000000042",
"1000000000"
],
"expected": {
"return_data": [],
"events": [
{
"topics": [
"0xddf252ad1be2c89b69c2b068fc378daa952ba7f163c4a11628f55a4df523b3ef",
"0x0000000000000000000000000000000000000000000000000000000000000000",
"0xdeadbeef00000000000000000000000000000042"
],
"values": [
"1000000000"
]
}
],
"exception": false
}
},
{
"instance": "WBTC_2",
"method": "_mint",
"calldata": [
"0xdeadbeef00000000000000000000000000000042",
"1000000000"
],
"expected": {
"return_data": [],
"events": [
{
"topics": [
"0xddf252ad1be2c89b69c2b068fc378daa952ba7f163c4a11628f55a4df523b3ef",
"0x0000000000000000000000000000000000000000000000000000000000000000",
"0xdeadbeef00000000000000000000000000000042"
],
"values": [
"1000000000"
]
}
],
"exception": false
}
},
{
"instance": "WBTC_1",
"caller": "0xdeadbeef00000000000000000000000000000042",
"method": "approve",
"calldata": [
"Mooniswap.address",
"500000000"
],
"expected": {
"return_data": [
"0x0000000000000000000000000000000000000000000000000000000000000001"
],
"events": [
{
"topics": [
"0x8c5be1e5ebec7d5bd14f71427d1e84f3dd0314c0f7b2291e5b200ac8c7c3b925",
"0xdeadbeef00000000000000000000000000000042",
"Mooniswap.address"
],
"values": [
"500000000"
]
}
],
"exception": false
}
},
{
"instance": "WBTC_2",
"caller": "0xdeadbeef00000000000000000000000000000042",
"method": "approve",
"calldata": [
"Mooniswap.address",
"500000000"
],
"expected": {
"return_data": [
"0x0000000000000000000000000000000000000000000000000000000000000001"
],
"events": [
{
"topics": [
"0x8c5be1e5ebec7d5bd14f71427d1e84f3dd0314c0f7b2291e5b200ac8c7c3b925",
"0xdeadbeef00000000000000000000000000000042",
"Mooniswap.address"
],
"values": [
"500000000"
]
}
],
"exception": false
}
},
{
"instance": "Mooniswap",
"caller": "0xdeadbeef00000000000000000000000000000042",
"method": "deposit",
"calldata": [
"0x0000000000000000000000000000000000000000000000000000000000000040",
"0x00000000000000000000000000000000000000000000000000000000000000a0",
"0x0000000000000000000000000000000000000000000000000000000000000002",
"10000000",
"10000000",
"0x0000000000000000000000000000000000000000000000000000000000000002",
"1000000",
"1000000"
],
"expected": {
"return_data": [
"10000000"
],
"events": [
{
"topics": [
"0xddf252ad1be2c89b69c2b068fc378daa952ba7f163c4a11628f55a4df523b3ef",
"0x0000000000000000000000000000000000000000000000000000000000000000",
"Mooniswap.address"
],
"values": [
"1000"
]
},
{
"topics": [
"0xddf252ad1be2c89b69c2b068fc378daa952ba7f163c4a11628f55a4df523b3ef",
"0xdeadbeef00000000000000000000000000000042",
"Mooniswap.address"
],
"values": [
"10000000"
]
},
{
"topics": [
"0x8c5be1e5ebec7d5bd14f71427d1e84f3dd0314c0f7b2291e5b200ac8c7c3b925",
"0xdeadbeef00000000000000000000000000000042",
"Mooniswap.address"
],
"values": [
"490000000"
]
},
{
"topics": [
"0xddf252ad1be2c89b69c2b068fc378daa952ba7f163c4a11628f55a4df523b3ef",
"0xdeadbeef00000000000000000000000000000042",
"Mooniswap.address"
],
"values": [
"10000000"
]
},
{
"topics": [
"0x8c5be1e5ebec7d5bd14f71427d1e84f3dd0314c0f7b2291e5b200ac8c7c3b925",
"0xdeadbeef00000000000000000000000000000042",
"Mooniswap.address"
],
"values": [
"490000000"
]
},
{
"topics": [
"0xddf252ad1be2c89b69c2b068fc378daa952ba7f163c4a11628f55a4df523b3ef",
"0x0000000000000000000000000000000000000000000000000000000000000000",
"0xdeadbeef00000000000000000000000000000042"
],
"values": [
"10000000"
]
},
{
"topics": [
"0x2da466a7b24304f47e87fa2e1e5a81b9831ce54fec19055ce277ca2f39ba42c4",
"0xdeadbeef00000000000000000000000000000042"
],
"values": [
"10000000"
]
}
],
"exception": false
}
},
{
"instance": "Mooniswap",
"caller": "0xdeadbeef00000000000000000000000000000042",
"method": "swap",
"calldata": [
"WBTC_1.address",
"WBTC_2.address",
"5000",
"5000",
"0"
]
}
],
"expected": {
"return_data": [
"5000"
],
"events": [
{
"topics": [
"0xddf252ad1be2c89b69c2b068fc378daa952ba7f163c4a11628f55a4df523b3ef",
"0xdeadbeef00000000000000000000000000000042",
"Mooniswap.address"
],
"values": [
"5000"
]
},
{
"topics": [
"0x8c5be1e5ebec7d5bd14f71427d1e84f3dd0314c0f7b2291e5b200ac8c7c3b925",
"0xdeadbeef00000000000000000000000000000042",
"Mooniswap.address"
],
"values": [
"489995000"
]
}
],
"exception": false
}
}
],
"contracts": {
"Mooniswap": "Mooniswap.sol:Mooniswap",
"WBTC_1": "ERC20/ERC20.sol:ERC20",
"WBTC_2": "ERC20/ERC20.sol:ERC20",
"VirtualBalance": "Mooniswap.sol:VirtualBalance",
"Math": "math/Math.sol:Math"
},
"libraries": {
"Mooniswap.sol": {
"VirtualBalance": "VirtualBalance"
},
"math/Math.sol": {
"Math": "Math"
}
},
"group": "Real life"
}
+14
@@ -0,0 +1,14 @@
[package]
name = "revive-dt-common"
description = "A library containing common concepts that other crates in the workspace can rely on"
version.workspace = true
authors.workspace = true
license.workspace = true
edition.workspace = true
repository.workspace = true
rust-version.workspace = true
[dependencies]
anyhow = { workspace = true }
semver = { workspace = true }
tokio = { workspace = true, default-features = false, features = ["time"] }
+22
@@ -0,0 +1,22 @@
use std::{
fs::{read_dir, remove_dir_all, remove_file},
path::Path,
};
use anyhow::Result;
/// This method clears the passed directory of all of the files and directories contained within
/// without deleting the directory.
pub fn clear_directory(path: impl AsRef<Path>) -> Result<()> {
for entry in read_dir(path.as_ref())? {
let entry = entry?;
let entry_path = entry.path();
if entry_path.is_file() {
remove_file(entry_path)?
} else {
remove_dir_all(entry_path)?
}
}
Ok(())
}
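A usage sketch of `clear_directory`, assuming a hypothetical scratch directory:

```rust
use revive_dt_common::fs::clear_directory;

fn main() -> anyhow::Result<()> {
    // Hypothetical example directory: clear_directory removes its contents
    // but keeps the directory itself in place.
    let dir = std::env::temp_dir().join("retester-example");
    std::fs::create_dir_all(&dir)?;
    std::fs::write(dir.join("stale.log"), b"old run")?;
    clear_directory(&dir)?;
    assert!(std::fs::read_dir(&dir)?.next().is_none());
    Ok(())
}
```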
+3
@@ -0,0 +1,3 @@
mod clear_dir;
pub use clear_dir::*;
+3
@@ -0,0 +1,3 @@
mod poll;
pub use poll::*;
+69
@@ -0,0 +1,69 @@
use std::ops::ControlFlow;
use std::time::Duration;
use anyhow::{Result, anyhow};
const EXPONENTIAL_BACKOFF_MAX_WAIT_DURATION: Duration = Duration::from_secs(60);
/// A function that repeatedly polls a fallible future for some period of time and errors if it
/// fails to produce a result within that time.
///
/// Given a future that returns a [`Result<ControlFlow<O, ()>>`], this function calls the future
/// repeatedly (with some wait period) until the future returns a [`ControlFlow::Break`] or until it
/// returns an [`Err`], in which case the function stops polling and returns the error.
///
/// If the future keeps returning [`ControlFlow::Continue`] and fails to return a [`Break`] within
/// the permitted polling duration then this function returns an [`Err`].
///
/// [`Break`]: ControlFlow::Break
/// [`Continue`]: ControlFlow::Continue
pub async fn poll<F, O>(
polling_duration: Duration,
polling_wait_behavior: PollingWaitBehavior,
mut future: impl FnMut() -> F,
) -> Result<O>
where
F: Future<Output = Result<ControlFlow<O, ()>>>,
{
let mut retries = 0;
let mut total_wait_duration = Duration::ZERO;
let max_allowed_wait_duration = polling_duration;
loop {
if total_wait_duration >= max_allowed_wait_duration {
break Err(anyhow!(
"Polling failed after {} retries and a total of {:?} of wait time",
retries,
total_wait_duration
));
}
match future().await? {
ControlFlow::Continue(()) => {
let next_wait_duration = match polling_wait_behavior {
PollingWaitBehavior::Constant(duration) => duration,
PollingWaitBehavior::ExponentialBackoff => {
Duration::from_secs(2u64.pow(retries))
.min(EXPONENTIAL_BACKOFF_MAX_WAIT_DURATION)
}
};
let next_wait_duration =
next_wait_duration.min(max_allowed_wait_duration - total_wait_duration);
total_wait_duration += next_wait_duration;
retries += 1;
tokio::time::sleep(next_wait_duration).await;
}
ControlFlow::Break(output) => {
break Ok(output);
}
}
}
}
#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Hash, Default)]
pub enum PollingWaitBehavior {
Constant(Duration),
#[default]
ExponentialBackoff,
}
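A usage sketch of `poll`, assuming `poll` and `PollingWaitBehavior` are in scope (e.g. via `revive_dt_common::futures`) and a hypothetical `try_fetch_receipt` helper:

```rust
use std::ops::ControlFlow;
use std::time::Duration;

use revive_dt_common::futures::{poll, PollingWaitBehavior};

/// Polls for up to 30 seconds with exponential backoff until a receipt shows up.
async fn wait_for_receipt() -> anyhow::Result<String> {
    poll(
        Duration::from_secs(30),
        PollingWaitBehavior::ExponentialBackoff,
        || async {
            match try_fetch_receipt().await? {
                Some(receipt) => Ok(ControlFlow::Break(receipt)),
                None => Ok(ControlFlow::Continue(())),
            }
        },
    )
    .await
}

/// Hypothetical helper standing in for a real node query.
async fn try_fetch_receipt() -> anyhow::Result<Option<String>> {
    Ok(None)
}
```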
@@ -0,0 +1,73 @@
use std::{borrow::Cow, collections::HashSet, path::PathBuf};
/// An iterator that finds files of a certain extension in the provided directory. You can think of
/// this as a glob pattern similar to: `${path}/**/*.md`
pub struct FilesWithExtensionIterator {
/// The set of allowed extensions that match the requirement and that should be returned
/// when found.
allowed_extensions: HashSet<Cow<'static, str>>,
/// The set of directories to visit next. This iterator does a depth-first search (the vector
/// is used as a stack), and these directories will only be visited if we can't find any files
/// in our state.
directories_to_search: Vec<PathBuf>,
/// The set of files matching the allowed extensions that were found. If there are entries in
/// this vector then they will be returned when the [`Iterator::next`] method is called. If not
/// then we visit one of the next directories to visit.
files_matching_allowed_extensions: Vec<PathBuf>,
}
impl FilesWithExtensionIterator {
pub fn new(root_directory: PathBuf) -> Self {
Self {
allowed_extensions: Default::default(),
directories_to_search: vec![root_directory],
files_matching_allowed_extensions: Default::default(),
}
}
pub fn with_allowed_extension(
mut self,
allowed_extension: impl Into<Cow<'static, str>>,
) -> Self {
self.allowed_extensions.insert(allowed_extension.into());
self
}
}
impl Iterator for FilesWithExtensionIterator {
type Item = PathBuf;
fn next(&mut self) -> Option<Self::Item> {
if let Some(file_path) = self.files_matching_allowed_extensions.pop() {
return Some(file_path);
};
let directory_to_search = self.directories_to_search.pop()?;
// Read all of the entries in the directory. If we fail to read this dir's entries then we
// elect to just ignore it and look in the next directory; we do that by calling the next
// method again on the iterator, which is an intentional decision that we made here instead
// of panicking.
let Ok(dir_entries) = std::fs::read_dir(directory_to_search) else {
return self.next();
};
for entry in dir_entries.flatten() {
let entry_path = entry.path();
if entry_path.is_dir() {
self.directories_to_search.push(entry_path)
} else if entry_path.is_file()
&& entry_path.extension().is_some_and(|ext| {
self.allowed_extensions
.iter()
.any(|allowed| ext.eq_ignore_ascii_case(allowed.as_ref()))
})
{
self.files_matching_allowed_extensions.push(entry_path)
}
}
self.next()
}
}
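A usage sketch of the iterator, assuming a hypothetical `tests` directory:

```rust
use std::path::PathBuf;

use revive_dt_common::iterators::FilesWithExtensionIterator;

fn main() {
    // Collect every .sol and .json file under ./tests (hypothetical path),
    // matching extensions case-insensitively.
    let files: Vec<PathBuf> = FilesWithExtensionIterator::new(PathBuf::from("tests"))
        .with_allowed_extension("sol")
        .with_allowed_extension("json")
        .collect();
    for file in files {
        println!("{}", file.display());
    }
}
```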
+3
@@ -0,0 +1,3 @@
mod files_with_extension_iterator;
pub use files_with_extension_iterator::*;
+8
@@ -0,0 +1,8 @@
//! This crate provides common concepts, functionality, types, macros, and more that other crates in
//! the workspace can benefit from.
pub mod fs;
pub mod futures;
pub mod iterators;
pub mod macros;
pub mod types;
@@ -0,0 +1,106 @@
/// Defines wrappers around types.
///
/// For example, the macro invocation seen below:
///
/// ```rust,ignore
/// define_wrapper_type!(pub struct CaseId(usize));
/// ```
///
/// Would define a wrapper type that looks like the following:
///
/// ```rust,ignore
/// pub struct CaseId(usize);
/// ```
///
/// And would also implement a number of methods on this type making it easier to use.
///
/// These wrapper types become very useful as they make the code a lot easier to read.
///
/// Take the following as an example:
///
/// ```rust,ignore
/// struct State {
/// contracts: HashMap<usize, HashMap<String, Vec<u8>>>
/// }
/// ```
///
/// In the above code it's hard to understand what the various types refer to or what to expect them
/// to contain.
///
/// With these wrapper types we're able to create code that's self-documenting in that the types
/// tell us what the code is referring to. The above code is transformed into
///
/// ```rust,ignore
/// struct State {
/// contracts: HashMap<CaseId, HashMap<ContractName, ContractByteCode>>
/// }
/// ```
///
/// Note that we follow the same syntax for defining wrapper structs but we do not permit the use of
/// generics.
#[macro_export]
macro_rules! define_wrapper_type {
(
$(#[$meta: meta])*
$vis:vis struct $ident: ident($ty: ty);
) => {
$(#[$meta])*
$vis struct $ident($ty);
impl $ident {
pub fn new(value: impl Into<$ty>) -> Self {
Self(value.into())
}
pub fn into_inner(self) -> $ty {
self.0
}
pub fn as_inner(&self) -> &$ty {
&self.0
}
}
impl AsRef<$ty> for $ident {
fn as_ref(&self) -> &$ty {
&self.0
}
}
impl AsMut<$ty> for $ident {
fn as_mut(&mut self) -> &mut $ty {
&mut self.0
}
}
impl std::ops::Deref for $ident {
type Target = $ty;
fn deref(&self) -> &Self::Target {
&self.0
}
}
impl std::ops::DerefMut for $ident {
fn deref_mut(&mut self) -> &mut Self::Target {
&mut self.0
}
}
impl From<$ty> for $ident {
fn from(value: $ty) -> Self {
Self(value)
}
}
impl From<$ident> for $ty {
fn from(value: $ident) -> Self {
value.0
}
}
};
}
/// Technically not needed but this allows for the macro to be found in the `macros` module of the
/// crate in addition to being found in the root of the crate.
pub use define_wrapper_type;
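A usage sketch of the macro; the `CaseId` type and its derives here are illustrative:

```rust
use std::collections::HashMap;

use revive_dt_common::define_wrapper_type;

define_wrapper_type!(
    #[derive(Clone, Copy, Debug, PartialEq, Eq, Hash)]
    pub struct CaseId(usize);
);

fn main() {
    // The wrapper makes the key's meaning explicit without any runtime cost.
    let mut contracts: HashMap<CaseId, Vec<u8>> = HashMap::new();
    contracts.insert(CaseId::new(0usize), vec![0x60, 0x00]);
    assert_eq!(contracts[&CaseId::from(0usize)], vec![0x60, 0x00]);
}
```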
+3
@@ -0,0 +1,3 @@
mod define_wrapper_type;
pub use define_wrapper_type::*;
+3
@@ -0,0 +1,3 @@
mod version_or_requirement;
pub use version_or_requirement::*;
@@ -0,0 +1,41 @@
use semver::{Version, VersionReq};
#[derive(Clone, Debug)]
pub enum VersionOrRequirement {
Version(Version),
Requirement(VersionReq),
}
impl From<Version> for VersionOrRequirement {
fn from(value: Version) -> Self {
Self::Version(value)
}
}
impl From<VersionReq> for VersionOrRequirement {
fn from(value: VersionReq) -> Self {
Self::Requirement(value)
}
}
impl TryFrom<VersionOrRequirement> for Version {
type Error = anyhow::Error;
fn try_from(value: VersionOrRequirement) -> Result<Self, Self::Error> {
let VersionOrRequirement::Version(version) = value else {
anyhow::bail!("Version or requirement was not a version");
};
Ok(version)
}
}
impl TryFrom<VersionOrRequirement> for VersionReq {
type Error = anyhow::Error;
fn try_from(value: VersionOrRequirement) -> Result<Self, Self::Error> {
let VersionOrRequirement::Requirement(requirement) = value else {
anyhow::bail!("Version or requirement was not a requirement");
};
Ok(requirement)
}
}
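A usage sketch of the conversions:

```rust
use revive_dt_common::types::VersionOrRequirement;
use semver::{Version, VersionReq};

fn main() -> anyhow::Result<()> {
    let exact: VersionOrRequirement = Version::new(0, 8, 20).into();
    let ranged: VersionOrRequirement = ">=0.8.9".parse::<VersionReq>()?.into();

    // Converting back is fallible because the enum may hold the other variant.
    let version: Version = exact.try_into()?;
    assert!(VersionReq::try_from(ranged)?.matches(&version));
    Ok(())
}
```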
+26
@@ -0,0 +1,26 @@
[package]
name = "revive-dt-compiler"
description = "Library for compiling Solidity contracts to EVM and PVM"
version.workspace = true
authors.workspace = true
license.workspace = true
edition.workspace = true
repository.workspace = true
rust-version.workspace = true
[dependencies]
revive-solc-json-interface = { workspace = true }
revive-dt-common = { workspace = true }
revive-dt-config = { workspace = true }
revive-dt-solc-binaries = { workspace = true }
revive-common = { workspace = true }
alloy = { workspace = true }
alloy-primitives = { workspace = true }
anyhow = { workspace = true }
foundry-compilers-artifacts = { workspace = true }
semver = { workspace = true }
serde = { workspace = true }
serde_json = { workspace = true }
tracing = { workspace = true }
tokio = { workspace = true }
+162
@@ -0,0 +1,162 @@
//! This crate provides compiler helpers for all supported Solidity targets:
//! - Ethereum solc compiler
//! - Polkadot revive resolc compiler
//! - Polkadot revive Wasm compiler
use std::{
collections::HashMap,
fs::read_to_string,
hash::Hash,
path::{Path, PathBuf},
};
use alloy::json_abi::JsonAbi;
use alloy_primitives::Address;
use semver::Version;
use serde::{Deserialize, Serialize};
use revive_common::EVMVersion;
use revive_dt_common::types::VersionOrRequirement;
use revive_dt_config::Arguments;
pub mod revive_js;
pub mod revive_resolc;
pub mod solc;
/// A common interface for all supported Solidity compilers.
pub trait SolidityCompiler {
/// Extra options specific to the compiler.
type Options: Default + PartialEq + Eq + Hash;
/// The low-level compiler interface.
fn build(
&self,
input: CompilerInput,
additional_options: Self::Options,
) -> impl Future<Output = anyhow::Result<CompilerOutput>>;
fn new(solc_executable: PathBuf) -> Self;
fn get_compiler_executable(
config: &Arguments,
version: impl Into<VersionOrRequirement>,
) -> impl Future<Output = anyhow::Result<PathBuf>>;
fn version(&self) -> anyhow::Result<Version>;
}
/// The generic compilation input configuration.
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct CompilerInput {
pub enable_optimization: Option<bool>,
pub via_ir: Option<bool>,
pub evm_version: Option<EVMVersion>,
pub allow_paths: Vec<PathBuf>,
pub base_path: Option<PathBuf>,
pub sources: HashMap<PathBuf, String>,
pub libraries: HashMap<PathBuf, HashMap<String, Address>>,
}
/// The generic compilation output configuration.
#[derive(Debug, Clone, Default, Serialize, Deserialize)]
pub struct CompilerOutput {
/// The compiled contracts. The bytecode of the contract is kept as a string in case linking is
/// required and the compiled source has placeholders.
pub contracts: HashMap<PathBuf, HashMap<String, (String, JsonAbi)>>,
}
/// A generic builder style interface for configuring the supported compiler options.
pub struct Compiler<T: SolidityCompiler> {
input: CompilerInput,
additional_options: T::Options,
}
impl Default for Compiler<solc::Solc> {
fn default() -> Self {
Self::new()
}
}
impl<T> Compiler<T>
where
T: SolidityCompiler,
{
pub fn new() -> Self {
Self {
input: CompilerInput {
enable_optimization: Default::default(),
via_ir: Default::default(),
evm_version: Default::default(),
allow_paths: Default::default(),
base_path: Default::default(),
sources: Default::default(),
libraries: Default::default(),
},
additional_options: T::Options::default(),
}
}
pub fn with_optimization(mut self, value: impl Into<Option<bool>>) -> Self {
self.input.enable_optimization = value.into();
self
}
pub fn with_via_ir(mut self, value: impl Into<Option<bool>>) -> Self {
self.input.via_ir = value.into();
self
}
pub fn with_evm_version(mut self, version: impl Into<Option<EVMVersion>>) -> Self {
self.input.evm_version = version.into();
self
}
pub fn with_allow_path(mut self, path: impl AsRef<Path>) -> Self {
self.input.allow_paths.push(path.as_ref().into());
self
}
pub fn with_base_path(mut self, path: impl Into<Option<PathBuf>>) -> Self {
self.input.base_path = path.into();
self
}
pub fn with_source(mut self, path: impl AsRef<Path>) -> anyhow::Result<Self> {
self.input
.sources
.insert(path.as_ref().to_path_buf(), read_to_string(path.as_ref())?);
Ok(self)
}
pub fn with_library(
mut self,
path: impl AsRef<Path>,
name: impl AsRef<str>,
address: Address,
) -> Self {
self.input
.libraries
.entry(path.as_ref().to_path_buf())
.or_default()
.insert(name.as_ref().into(), address);
self
}
pub fn with_additional_options(mut self, options: impl Into<T::Options>) -> Self {
self.additional_options = options.into();
self
}
pub async fn try_build(
self,
compiler_path: impl AsRef<Path>,
) -> anyhow::Result<CompilerOutput> {
T::new(compiler_path.as_ref().to_path_buf())
.build(self.input, self.additional_options)
.await
}
pub fn input(&self) -> CompilerInput {
self.input.clone()
}
}
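A usage sketch of the builder, assuming a hypothetical contract path and a solc binary already resolved (for example via `Solc::get_compiler_executable`):

```rust
use std::path::PathBuf;

// Assumes the crate's `Compiler`, `CompilerOutput` and `solc` module are in scope.
async fn compile_example(solc_path: PathBuf) -> anyhow::Result<CompilerOutput> {
    Compiler::<solc::Solc>::new()
        .with_optimization(true)
        .with_via_ir(true)
        .with_source("contracts/Token.sol")? // hypothetical source file
        .try_build(solc_path)
        .await
}
```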
+2
@@ -0,0 +1,2 @@
//! Implements the [crate::SolidityCompiler] trait with revive Wasm for
//! compiling contracts to PVM bytecode (via Wasm).
+253
@@ -0,0 +1,253 @@
//! Implements the [SolidityCompiler] trait with `resolc` for
//! compiling contracts to PolkaVM (PVM) bytecode.
use std::{
path::PathBuf,
process::{Command, Stdio},
};
use revive_dt_common::types::VersionOrRequirement;
use revive_dt_config::Arguments;
use revive_solc_json_interface::{
SolcStandardJsonInput, SolcStandardJsonInputLanguage, SolcStandardJsonInputSettings,
SolcStandardJsonInputSettingsOptimizer, SolcStandardJsonInputSettingsSelection,
SolcStandardJsonOutput,
};
use crate::{CompilerInput, CompilerOutput, SolidityCompiler};
use alloy::json_abi::JsonAbi;
use anyhow::Context;
use semver::Version;
use tokio::{io::AsyncWriteExt, process::Command as AsyncCommand};
// TODO: I believe that we need to also pass the solc compiler to resolc so that resolc uses the
// specified solc compiler. I believe that currently we completely ignore the specified solc binary
// when invoking resolc which doesn't seem right if we're using solc as a compiler frontend.
/// A wrapper around the `resolc` binary, emitting PVM-compatible bytecode.
#[derive(Debug)]
pub struct Resolc {
/// Path to the `resolc` executable
resolc_path: PathBuf,
}
impl SolidityCompiler for Resolc {
type Options = Vec<String>;
#[tracing::instrument(level = "debug", ret)]
async fn build(
&self,
CompilerInput {
enable_optimization,
// Ignored and not honored since via-IR is always required for resolc compilation.
via_ir: _via_ir,
evm_version,
allow_paths,
base_path,
sources,
libraries,
}: CompilerInput,
additional_options: Self::Options,
) -> anyhow::Result<CompilerOutput> {
let input = SolcStandardJsonInput {
language: SolcStandardJsonInputLanguage::Solidity,
sources: sources
.into_iter()
.map(|(path, source)| (path.display().to_string(), source.into()))
.collect(),
settings: SolcStandardJsonInputSettings {
evm_version,
libraries: Some(
libraries
.into_iter()
.map(|(source_code, libraries_map)| {
(
source_code.display().to_string(),
libraries_map
.into_iter()
.map(|(library_ident, library_address)| {
(library_ident, library_address.to_string())
})
.collect(),
)
})
.collect(),
),
remappings: None,
output_selection: Some(SolcStandardJsonInputSettingsSelection::new_required()),
via_ir: Some(true),
optimizer: SolcStandardJsonInputSettingsOptimizer::new(
enable_optimization.unwrap_or(false),
None,
&Version::new(0, 0, 0),
false,
),
metadata: None,
polkavm: None,
},
};
let mut command = AsyncCommand::new(&self.resolc_path);
command
.stdin(Stdio::piped())
.stdout(Stdio::piped())
.stderr(Stdio::piped())
.arg("--standard-json");
if let Some(ref base_path) = base_path {
command.arg("--base-path").arg(base_path);
}
if !allow_paths.is_empty() {
command.arg("--allow-paths").arg(
allow_paths
.iter()
.map(|path| path.display().to_string())
.collect::<Vec<_>>()
.join(","),
);
}
let mut child = command.spawn()?;
let stdin_pipe = child.stdin.as_mut().expect("stdin must be piped");
let serialized_input = serde_json::to_vec(&input)?;
stdin_pipe.write_all(&serialized_input).await?;
let output = child.wait_with_output().await?;
let stdout = output.stdout;
let stderr = output.stderr;
if !output.status.success() {
let json_in = serde_json::to_string_pretty(&input)?;
let message = String::from_utf8_lossy(&stderr);
tracing::error!(
status = %output.status,
message = %message,
json_input = json_in,
"Compilation using resolc failed"
);
anyhow::bail!("Compilation failed with an error: {message}");
}
let parsed = serde_json::from_slice::<SolcStandardJsonOutput>(&stdout).map_err(|e| {
anyhow::anyhow!(
"failed to parse resolc JSON output: {e}\nstderr: {}",
String::from_utf8_lossy(&stderr)
)
})?;
tracing::debug!(
output = %serde_json::to_string(&parsed).unwrap(),
"Compiled successfully"
);
// Detecting if the compiler output contained errors and reporting them through logs and
// errors instead of returning the compiler output that might contain errors.
for error in parsed.errors.iter().flatten() {
if error.severity == "error" {
tracing::error!(
?error,
?input,
output = %serde_json::to_string(&parsed).unwrap(),
"Encountered an error in the compilation"
);
anyhow::bail!("Encountered an error in the compilation: {error}")
}
}
let Some(contracts) = parsed.contracts else {
anyhow::bail!("Unexpected error - resolc output doesn't have a contracts section");
};
let mut compiler_output = CompilerOutput::default();
for (source_path, contracts) in contracts.into_iter() {
let source_path = PathBuf::from(source_path).canonicalize()?;
let map = compiler_output.contracts.entry(source_path).or_default();
for (contract_name, contract_information) in contracts.into_iter() {
let bytecode = contract_information
.evm
.and_then(|evm| evm.bytecode.clone())
.context("Unexpected - Contract compiled with resolc has no bytecode")?;
let abi = contract_information
.metadata
.as_ref()
.and_then(|metadata| metadata.as_object())
.and_then(|metadata| metadata.get("solc_metadata"))
.and_then(|solc_metadata| solc_metadata.as_str())
.and_then(|metadata| serde_json::from_str::<serde_json::Value>(metadata).ok())
.and_then(|metadata| {
metadata.get("output").and_then(|output| {
output
.get("abi")
.and_then(|abi| serde_json::from_value::<JsonAbi>(abi.clone()).ok())
})
})
.context(
"Unexpected - Failed to get the ABI for a contract compiled with resolc",
)?;
map.insert(contract_name, (bytecode.object, abi));
}
}
Ok(compiler_output)
}
fn new(resolc_path: PathBuf) -> Self {
Resolc { resolc_path }
}
async fn get_compiler_executable(
config: &Arguments,
_version: impl Into<VersionOrRequirement>,
) -> anyhow::Result<PathBuf> {
if !config.resolc.as_os_str().is_empty() {
return Ok(config.resolc.clone());
}
Ok(PathBuf::from("resolc"))
}
fn version(&self) -> anyhow::Result<semver::Version> {
// Logic for parsing the resolc version from the following string:
// Solidity frontend for the revive compiler version 0.3.0+commit.b238913.llvm-18.1.8
let output = Command::new(self.resolc_path.as_path())
.arg("--version")
.stdout(Stdio::piped())
.spawn()?
.wait_with_output()?
.stdout;
let output = String::from_utf8_lossy(&output);
let version_string = output
.split("version ")
.nth(1)
.context("Version parsing failed")?
.split("+")
.next()
.context("Version parsing failed")?;
Version::parse(version_string).map_err(Into::into)
}
}
#[cfg(test)]
mod test {
use super::*;
#[tokio::test]
async fn compiler_version_can_be_obtained() {
// Arrange
let args = Arguments::default();
let path = Resolc::get_compiler_executable(&args, Version::new(0, 7, 6))
.await
.unwrap();
let compiler = Resolc::new(path);
// Act
let version = compiler.version();
// Assert
let _ = version.expect("Failed to get version");
}
}
+262
@@ -0,0 +1,262 @@
//! Implements the [SolidityCompiler] trait with solc for
//! compiling contracts to EVM bytecode.
use std::{
path::PathBuf,
process::{Command, Stdio},
};
use revive_dt_common::types::VersionOrRequirement;
use revive_dt_config::Arguments;
use revive_dt_solc_binaries::download_solc;
use crate::{CompilerInput, CompilerOutput, SolidityCompiler};
use anyhow::Context;
use foundry_compilers_artifacts::{
output_selection::{
BytecodeOutputSelection, ContractOutputSelection, EvmOutputSelection, OutputSelection,
},
solc::CompilerOutput as SolcOutput,
solc::*,
};
use semver::Version;
use tokio::{io::AsyncWriteExt, process::Command as AsyncCommand};
#[derive(Debug)]
pub struct Solc {
solc_path: PathBuf,
}
impl SolidityCompiler for Solc {
type Options = ();
#[tracing::instrument(level = "debug", ret)]
async fn build(
&self,
CompilerInput {
enable_optimization,
via_ir,
evm_version,
allow_paths,
base_path,
sources,
libraries,
}: CompilerInput,
_: Self::Options,
) -> anyhow::Result<CompilerOutput> {
let input = SolcInput {
language: SolcLanguage::Solidity,
sources: Sources(
sources
.into_iter()
.map(|(source_path, source_code)| (source_path, Source::new(source_code)))
.collect(),
),
settings: Settings {
optimizer: Optimizer {
enabled: enable_optimization,
details: Some(Default::default()),
..Default::default()
},
output_selection: OutputSelection::common_output_selection(
[
ContractOutputSelection::Abi,
ContractOutputSelection::Evm(EvmOutputSelection::ByteCode(
BytecodeOutputSelection::Object,
)),
]
.into_iter()
.map(|item| item.to_string()),
),
evm_version: evm_version.map(|version| version.to_string().parse().unwrap()),
via_ir,
libraries: Libraries {
libs: libraries
.into_iter()
.map(|(file_path, libraries)| {
(
file_path,
libraries
.into_iter()
.map(|(library_name, library_address)| {
(library_name, library_address.to_string())
})
.collect(),
)
})
.collect(),
},
..Default::default()
},
};
let mut command = AsyncCommand::new(&self.solc_path);
command
.stdin(Stdio::piped())
.stdout(Stdio::piped())
.stderr(Stdio::piped())
.arg("--standard-json");
if let Some(ref base_path) = base_path {
command.arg("--base-path").arg(base_path);
}
if !allow_paths.is_empty() {
command.arg("--allow-paths").arg(
allow_paths
.iter()
.map(|path| path.display().to_string())
.collect::<Vec<_>>()
.join(","),
);
}
let mut child = command.spawn()?;
let stdin = child.stdin.as_mut().expect("should be piped");
let serialized_input = serde_json::to_vec(&input)?;
stdin.write_all(&serialized_input).await?;
let output = child.wait_with_output().await?;
if !output.status.success() {
let json_in = serde_json::to_string_pretty(&input)?;
let message = String::from_utf8_lossy(&output.stderr);
tracing::error!(
status = %output.status,
message = %message,
json_input = json_in,
"Compilation using solc failed"
);
anyhow::bail!("Compilation failed with an error: {message}");
}
let parsed = serde_json::from_slice::<SolcOutput>(&output.stdout).map_err(|e| {
anyhow::anyhow!(
"failed to parse resolc JSON output: {e}\nstderr: {}",
String::from_utf8_lossy(&output.stdout)
)
})?;
// Detect whether the compiler output contains errors and report them through logs and
// returned errors instead of passing on a compiler output that might contain errors.
for error in parsed.errors.iter() {
if error.severity == Severity::Error {
tracing::error!(?error, ?input, "Encountered an error in the compilation");
anyhow::bail!("Encountered an error in the compilation: {error}")
}
}
tracing::debug!(
output = %String::from_utf8_lossy(&output.stdout).to_string(),
"Compiled successfully"
);
let mut compiler_output = CompilerOutput::default();
for (contract_path, contracts) in parsed.contracts {
let map = compiler_output
.contracts
.entry(contract_path.canonicalize()?)
.or_default();
for (contract_name, contract_info) in contracts.into_iter() {
let bytecode = contract_info
.evm
.and_then(|evm| evm.bytecode)
.map(|bytecode| match bytecode.object {
BytecodeObject::Bytecode(bytecode) => bytecode.to_string(),
BytecodeObject::Unlinked(unlinked) => unlinked,
})
.context("Unexpected - contract compiled with solc has no bytecode")?;
let abi = contract_info
.abi
.context("Unexpected - contract compiled with solc has no ABI")?;
map.insert(contract_name, (bytecode, abi));
}
}
Ok(compiler_output)
}
fn new(solc_path: PathBuf) -> Self {
Self { solc_path }
}
async fn get_compiler_executable(
config: &Arguments,
version: impl Into<VersionOrRequirement>,
) -> anyhow::Result<PathBuf> {
let path = download_solc(config.directory(), version, config.wasm).await?;
Ok(path)
}
fn version(&self) -> anyhow::Result<semver::Version> {
// The following is the parsing code for the version from the solc version strings which
// look like the following:
// ```
// solc, the solidity compiler commandline interface
// Version: 0.8.30+commit.73712a01.Darwin.appleclang
// ```
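// e.g. the "Version:" line above is reduced to "0.8.30" by the splits below.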
let child = Command::new(self.solc_path.as_path())
.arg("--version")
.stdout(Stdio::piped())
.spawn()?;
let output = child.wait_with_output()?;
let output = String::from_utf8_lossy(&output.stdout);
let version_line = output
.split("Version: ")
.nth(1)
.context("Version parsing failed")?;
let version_string = version_line
.split("+")
.next()
.context("Version parsing failed")?;
Version::parse(version_string).map_err(Into::into)
}
}
#[cfg(test)]
mod test {
use super::*;
#[tokio::test]
async fn compiler_version_can_be_obtained() {
// Arrange
let args = Arguments::default();
println!("Getting compiler path");
let path = Solc::get_compiler_executable(&args, Version::new(0, 7, 6))
.await
.unwrap();
println!("Got compiler path");
let compiler = Solc::new(path);
// Act
let version = compiler.version();
// Assert
assert_eq!(
version.expect("Failed to get version"),
Version::new(0, 7, 6)
)
}
#[tokio::test]
async fn compiler_version_can_be_obtained1() {
// Arrange
let args = Arguments::default();
println!("Getting compiler path");
let path = Solc::get_compiler_executable(&args, Version::new(0, 4, 21))
.await
.unwrap();
println!("Got compiler path");
let compiler = Solc::new(path);
// Act
let version = compiler.version();
// Assert
assert_eq!(
version.expect("Failed to get version"),
Version::new(0, 4, 21)
)
}
}
@@ -0,0 +1,9 @@
// SPDX-License-Identifier: MIT
pragma solidity >=0.6.9;
contract Callable {
function f(uint[1] memory p1) public pure returns(uint) {
return p1[0];
}
}
@@ -0,0 +1,13 @@
// SPDX-License-Identifier: MIT
// Report https://linear.app/matterlabs/issue/CPR-269/call-with-calldata-variable-bug
pragma solidity >=0.6.9;
import "./callable.sol";
contract Main {
function main(uint[1] calldata p1, Callable callable) public returns(uint) {
return callable.f(p1);
}
}
@@ -0,0 +1,21 @@
{ "cases": [ {
"name": "first",
"inputs": [
{
"instance": "Main",
"method": "main",
"calldata": [
"1",
"Callable.address"
]
}
],
"expected": [
"1"
]
} ],
"contracts": {
"Main": "main.sol:Main",
"Callable": "callable.sol:Callable"
}
}
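The "Callable.address" entry above is a placeholder that the driver resolves against the deployed-contracts map before the calldata is encoded. A simplified sketch of that resolution rule (a hypothetical helper; the actual logic lives in `Calldata::calldata`):
use std::collections::HashMap;
use alloy::primitives::Address;
// Hypothetical, simplified version of the "<Instance>.address" placeholder
// resolution that `Calldata::calldata` performs against the deployed contracts.
fn resolve_argument(arg: &str, deployed: &HashMap<String, Address>) -> Option<String> {
    match arg.strip_suffix(".address") {
        // "Callable.address" resolves to the deployed instance's address.
        Some(instance) => deployed.get(instance).map(|address| format!("{address:#x}")),
        // Anything else (e.g. "1") is passed through for ABI encoding.
        None => Some(arg.to_owned()),
    }
}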
@@ -0,0 +1,88 @@
use std::path::PathBuf;
use revive_dt_compiler::{Compiler, SolidityCompiler, revive_resolc::Resolc, solc::Solc};
use revive_dt_config::Arguments;
use semver::Version;
#[tokio::test]
async fn contracts_can_be_compiled_with_solc() {
// Arrange
let args = Arguments::default();
let compiler_path = Solc::get_compiler_executable(&args, Version::new(0, 8, 30))
.await
.unwrap();
println!("About to assert");
// Act
let output = Compiler::<Solc>::new()
.with_source("./tests/assets/array_one_element/callable.sol")
.unwrap()
.with_source("./tests/assets/array_one_element/main.sol")
.unwrap()
.try_build(compiler_path)
.await;
// Assert
let output = output.expect("Failed to compile");
assert_eq!(output.contracts.len(), 2);
let main_file_contracts = output
.contracts
.get(
&PathBuf::from("./tests/assets/array_one_element/main.sol")
.canonicalize()
.unwrap(),
)
.unwrap();
let callable_file_contracts = output
.contracts
.get(
&PathBuf::from("./tests/assets/array_one_element/callable.sol")
.canonicalize()
.unwrap(),
)
.unwrap();
assert!(main_file_contracts.contains_key("Main"));
assert!(callable_file_contracts.contains_key("Callable"));
}
#[tokio::test]
async fn contracts_can_be_compiled_with_resolc() {
// Arrange
let args = Arguments::default();
let compiler_path = Resolc::get_compiler_executable(&args, Version::new(0, 8, 30))
.await
.unwrap();
// Act
let output = Compiler::<Resolc>::new()
.with_source("./tests/assets/array_one_element/callable.sol")
.unwrap()
.with_source("./tests/assets/array_one_element/main.sol")
.unwrap()
.try_build(compiler_path)
.await;
// Assert
let output = output.expect("Failed to compile");
assert_eq!(output.contracts.len(), 2);
let main_file_contracts = output
.contracts
.get(
&PathBuf::from("./tests/assets/array_one_element/main.sol")
.canonicalize()
.unwrap(),
)
.unwrap();
let callable_file_contracts = output
.contracts
.get(
&PathBuf::from("./tests/assets/array_one_element/callable.sol")
.canonicalize()
.unwrap(),
)
.unwrap();
assert!(main_file_contracts.contains_key("Main"));
assert!(callable_file_contracts.contains_key("Callable"));
}
@@ -0,0 +1,17 @@
[package]
name = "revive-dt-config"
description = "global configuration for the revive differential tester"
version.workspace = true
authors.workspace = true
license.workspace = true
edition.workspace = true
repository.workspace = true
rust-version.workspace = true
[dependencies]
alloy = { workspace = true }
clap = { workspace = true }
semver = { workspace = true }
temp-dir = { workspace = true }
serde = { workspace = true }
@@ -0,0 +1,184 @@
//! The global configuration used across all revive differential testing crates.
use std::{
fmt::Display,
path::{Path, PathBuf},
sync::LazyLock,
};
use alloy::{network::EthereumWallet, signers::local::PrivateKeySigner};
use clap::{Parser, ValueEnum};
use semver::Version;
use serde::{Deserialize, Serialize};
use temp_dir::TempDir;
#[derive(Debug, Parser, Clone, Serialize, Deserialize)]
#[command(name = "retester")]
pub struct Arguments {
/// The `solc` version to use if the test didn't specify it explicitly.
#[arg(long = "solc", short, default_value = "0.8.29")]
pub solc: Version,
/// Use the Wasm compiler versions.
#[arg(long = "wasm")]
pub wasm: bool,
/// The path to the `resolc` executable to be tested.
///
/// By default it uses the `resolc` binary found in `$PATH`.
///
/// If `--wasm` is set, this should point to the resolc Wasm file.
#[arg(long = "resolc", short, default_value = "resolc")]
pub resolc: PathBuf,
/// A list of test corpus JSON files to be tested.
#[arg(long = "corpus", short)]
pub corpus: Vec<PathBuf>,
/// A place to store temporary artifacts during test execution.
///
/// Creates a temporary dir if not specified.
#[arg(long = "workdir", short)]
pub working_directory: Option<PathBuf>,
/// Add a tempdir manually if `working_directory` was not given.
///
/// We attach it here because [TempDir] prunes itself on drop.
#[clap(skip)]
#[serde(skip)]
pub temp_dir: Option<&'static TempDir>,
/// The path to the `geth` executable.
///
/// By default it uses the `geth` binary found in `$PATH`.
#[arg(short, long = "geth", default_value = "geth")]
pub geth: PathBuf,
/// The maximum time in milliseconds to wait for geth to start.
#[arg(long = "geth-start-timeout", default_value = "5000")]
pub geth_start_timeout: u64,
/// The test network chain ID.
#[arg(short, long = "network-id", default_value = "420420420")]
pub network_id: u64,
/// Configure nodes according to this genesis.json file.
#[arg(long = "genesis", default_value = "genesis.json")]
pub genesis_file: PathBuf,
/// The signing account private key.
#[arg(
short,
long = "account",
default_value = "0x4f3edf983ac636a65a842ce7c78d9aa706d3b113bce9c46f30d7d21715b23b1d"
)]
pub account: String,
/// Controls which private keys the nodes should have access to and add to their wallet
/// signers. With a value of N, private keys (0, N] will be added to the signer set of
/// each node.
#[arg(long = "private-keys-count", default_value_t = 100_000)]
pub private_keys_to_add: usize,
/// The differential testing leader node implementation.
#[arg(short, long = "leader", default_value = "geth")]
pub leader: TestingPlatform,
/// The differential testing follower node implementation.
#[arg(short, long = "follower", default_value = "kitchensink")]
pub follower: TestingPlatform,
/// Only compile against this testing platform (doesn't execute the tests).
#[arg(long = "compile-only")]
pub compile_only: Option<TestingPlatform>,
/// Determines the number of nodes that will be spawned for each chain.
#[arg(long, default_value = "1")]
pub number_of_nodes: usize,
/// Determines the number of threads that will be used.
#[arg(long, default_value = "12")]
pub number_of_threads: usize,
/// Extract problems back to the test corpus.
#[arg(short, long = "extract-problems")]
pub extract_problems: bool,
/// The path to the `kitchensink` executable.
///
/// By default it uses the `substrate-node` binary found in `$PATH`.
#[arg(short, long = "kitchensink", default_value = "substrate-node")]
pub kitchensink: PathBuf,
/// The path to the `eth_proxy` executable.
///
/// By default it uses the `eth-rpc` binary found in `$PATH`.
#[arg(short = 'p', long = "eth_proxy", default_value = "eth-rpc")]
pub eth_proxy: PathBuf,
}
impl Arguments {
/// Return the configured working directory with the following precedence:
/// 1. `self.working_directory` if it was provided.
/// 2. `self.temp_dir` if it was provided.
/// 3. Panic.
pub fn directory(&self) -> &Path {
if let Some(path) = &self.working_directory {
return path.as_path();
}
if let Some(temp_dir) = &self.temp_dir {
return temp_dir.path();
}
panic!("should have a workdir configured")
}
/// Try to parse `self.account` into a [PrivateKeySigner],
/// panicking on error.
pub fn wallet(&self) -> EthereumWallet {
let signer = self
.account
.parse::<PrivateKeySigner>()
.unwrap_or_else(|error| {
panic!("private key '{}' parsing error: {error}", self.account);
});
EthereumWallet::new(signer)
}
}
impl Default for Arguments {
fn default() -> Self {
static TEMP_DIR: LazyLock<TempDir> = LazyLock::new(|| TempDir::new().unwrap());
let default = Arguments::parse_from(["retester"]);
Arguments {
temp_dir: Some(&TEMP_DIR),
..default
}
}
}
/// The Solidity compatible node implementation.
///
/// This describes, at a high level, the solutions to be tested against.
#[derive(
Clone, Copy, Debug, PartialEq, Eq, PartialOrd, Ord, Hash, ValueEnum, Serialize, Deserialize,
)]
#[clap(rename_all = "lower")]
pub enum TestingPlatform {
/// The go-ethereum reference full node EVM implementation.
Geth,
/// The kitchensink runtime provides the PolkaVM (PVM) based node implementation.
Kitchensink,
}
impl Display for TestingPlatform {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match self {
Self::Geth => f.write_str("geth"),
Self::Kitchensink => f.write_str("revive"),
}
}
}
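Because `Arguments` derives clap's `Parser`, the flags declared above map one-to-one onto the struct fields. A minimal sketch of parsing a command line (hypothetical values; the flag names follow the `#[arg]` attributes):
use clap::Parser;
fn parse_example() {
    // Hypothetical invocation; defaults fill in every flag that is omitted.
    let args = Arguments::parse_from([
        "retester",
        "--corpus", "corpus.json",
        "--leader", "geth",
        "--follower", "kitchensink",
        "--number-of-nodes", "2",
    ]);
    assert_eq!(args.number_of_nodes, 2);
    assert_eq!(args.leader, TestingPlatform::Geth);
}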
@@ -0,0 +1,33 @@
[package]
name = "revive-dt-core"
description = "revive differential testing core utility"
version.workspace = true
authors.workspace = true
license.workspace = true
edition.workspace = true
repository.workspace = true
rust-version.workspace = true
[[bin]]
name = "retester"
path = "src/main.rs"
[dependencies]
revive-dt-common = { workspace = true }
revive-dt-compiler = { workspace = true }
revive-dt-config = { workspace = true }
revive-dt-format = { workspace = true }
revive-dt-node = { workspace = true }
revive-dt-node-interaction = { workspace = true }
revive-dt-report = { workspace = true }
alloy = { workspace = true }
anyhow = { workspace = true }
clap = { workspace = true }
futures = { workspace = true }
indexmap = { workspace = true }
tokio = { workspace = true }
tracing = { workspace = true }
tracing-subscriber = { workspace = true }
semver = { workspace = true }
temp-dir = { workspace = true }
@@ -0,0 +1,751 @@
//! The test driver handles the compilation and execution of the test cases.
use std::collections::HashMap;
use std::marker::PhantomData;
use std::path::PathBuf;
use alloy::eips::BlockNumberOrTag;
use alloy::hex;
use alloy::json_abi::JsonAbi;
use alloy::network::{Ethereum, TransactionBuilder};
use alloy::primitives::{BlockNumber, U256};
use alloy::rpc::types::TransactionReceipt;
use alloy::rpc::types::trace::geth::{
CallFrame, GethDebugBuiltInTracerType, GethDebugTracerType, GethDebugTracingOptions, GethTrace,
PreStateConfig,
};
use alloy::{
primitives::Address,
rpc::types::{
TransactionRequest,
trace::geth::{AccountState, DiffMode},
},
};
use anyhow::Context;
use indexmap::IndexMap;
use revive_dt_format::traits::ResolverApi;
use semver::Version;
use revive_dt_format::case::{Case, CaseIdx};
use revive_dt_format::input::{Calldata, EtherValue, Expected, ExpectedOutput, Method};
use revive_dt_format::metadata::{ContractInstance, ContractPathAndIdent};
use revive_dt_format::{input::Input, metadata::Metadata};
use revive_dt_node::Node;
use revive_dt_node_interaction::EthereumNode;
use tracing::Instrument;
use crate::Platform;
pub struct CaseState<T: Platform> {
/// A map of all of the compiled contracts for the given metadata file.
compiled_contracts: HashMap<PathBuf, HashMap<String, (String, JsonAbi)>>,
/// This map stores the contract deployments for this case.
deployed_contracts: HashMap<ContractInstance, (Address, JsonAbi)>,
/// This map stores the variables used within the currently executing case from the metadata
/// file.
variables: HashMap<String, U256>,
/// Stores the version used for the current case.
compiler_version: Version,
phantom: PhantomData<T>,
}
impl<T> CaseState<T>
where
T: Platform,
{
pub fn new(
compiler_version: Version,
compiled_contracts: HashMap<PathBuf, HashMap<String, (String, JsonAbi)>>,
deployed_contracts: HashMap<ContractInstance, (Address, JsonAbi)>,
) -> Self {
Self {
compiled_contracts,
deployed_contracts,
variables: Default::default(),
compiler_version,
phantom: PhantomData,
}
}
pub async fn handle_input(
&mut self,
metadata: &Metadata,
case_idx: CaseIdx,
input: &Input,
node: &T::Blockchain,
) -> anyhow::Result<(TransactionReceipt, GethTrace, DiffMode)> {
let deployment_receipts = self
.handle_contract_deployment(metadata, case_idx, input, node)
.await?;
let execution_receipt = self
.handle_input_execution(input, deployment_receipts, node)
.await?;
let tracing_result = self
.handle_input_call_frame_tracing(&execution_receipt, node)
.await?;
self.handle_input_variable_assignment(input, &tracing_result)?;
let resolver = BlockPinnedResolver::<'_, T> {
node,
block_number: execution_receipt
.block_number
.context("Transaction was not included in a block")?,
};
self.handle_input_expectations(input, &execution_receipt, &resolver, &tracing_result)
.await?;
self.handle_input_diff(case_idx, execution_receipt, node)
.await
}
/// Handles the contract deployment for a given input, performing the deployment if it is needed.
async fn handle_contract_deployment(
&mut self,
metadata: &Metadata,
case_idx: CaseIdx,
input: &Input,
node: &T::Blockchain,
) -> anyhow::Result<HashMap<ContractInstance, TransactionReceipt>> {
let span = tracing::debug_span!(
"Handling contract deployment",
?case_idx,
instance = ?input.instance
);
let _guard = span.enter();
let mut instances_we_must_deploy = IndexMap::<ContractInstance, bool>::new();
for instance in input.find_all_contract_instances().into_iter() {
if !self.deployed_contracts.contains_key(&instance) {
instances_we_must_deploy.entry(instance).or_insert(false);
}
}
if let Method::Deployer = input.method {
instances_we_must_deploy.swap_remove(&input.instance);
instances_we_must_deploy.insert(input.instance.clone(), true);
}
tracing::debug!(
instances_to_deploy = instances_we_must_deploy.len(),
"Computed the number of required deployments for input"
);
let mut receipts = HashMap::new();
for (instance, deploy_with_constructor_arguments) in instances_we_must_deploy.into_iter() {
let calldata = deploy_with_constructor_arguments.then_some(&input.calldata);
let value = deploy_with_constructor_arguments
.then_some(input.value)
.flatten();
if let (_, _, Some(receipt)) = self
.get_or_deploy_contract_instance(
&instance,
metadata,
input.caller,
calldata,
value,
node,
)
.await?
{
receipts.insert(instance.clone(), receipt);
}
}
Ok(receipts)
}
/// Handles the execution of the input in terms of the calls that need to be made.
async fn handle_input_execution(
&mut self,
input: &Input,
mut deployment_receipts: HashMap<ContractInstance, TransactionReceipt>,
node: &T::Blockchain,
) -> anyhow::Result<TransactionReceipt> {
match input.method {
// This input was already executed during the contract deployment step above. We just
// need to look up the transaction receipt in this case and continue on.
Method::Deployer => deployment_receipts
.remove(&input.instance)
.context("Failed to find deployment receipt"),
Method::Fallback | Method::FunctionName(_) => {
let tx = match input
.legacy_transaction(&self.deployed_contracts, &self.variables, node)
.await
{
Ok(tx) => {
tracing::debug!("Legacy transaction data: {tx:#?}");
tx
}
Err(err) => {
tracing::error!("Failed to construct legacy transaction: {err:?}");
return Err(err);
}
};
tracing::trace!("Executing transaction for input: {input:?}");
match node.execute_transaction(tx).await {
Ok(receipt) => Ok(receipt),
Err(err) => {
tracing::error!(
"Failed to execute transaction when executing the contract: {}, {:?}",
&*input.instance,
err
);
Err(err)
}
}
}
}
}
async fn handle_input_call_frame_tracing(
&self,
execution_receipt: &TransactionReceipt,
node: &T::Blockchain,
) -> anyhow::Result<CallFrame> {
node.trace_transaction(
execution_receipt,
GethDebugTracingOptions {
tracer: Some(GethDebugTracerType::BuiltInTracer(
GethDebugBuiltInTracerType::CallTracer,
)),
..Default::default()
},
)
.await
.map(|trace| {
trace
.try_into_call_frame()
.expect("Impossible - we requested a callframe trace so we must get it back")
})
}
fn handle_input_variable_assignment(
&mut self,
input: &Input,
tracing_result: &CallFrame,
) -> anyhow::Result<()> {
let Some(ref assignments) = input.variable_assignments else {
return Ok(());
};
// Handling the return data variable assignments.
for (variable_name, output_word) in assignments.return_data.iter().zip(
tracing_result
.output
.as_ref()
.unwrap_or_default()
.to_vec()
.chunks(32),
) {
let value = U256::from_be_slice(output_word);
self.variables.insert(variable_name.clone(), value);
tracing::info!(
variable_name,
variable_value = hex::encode(value.to_be_bytes::<32>()),
"Assigned variable"
);
}
Ok(())
}
async fn handle_input_expectations(
&mut self,
input: &Input,
execution_receipt: &TransactionReceipt,
resolver: &impl ResolverApi,
tracing_result: &CallFrame,
) -> anyhow::Result<()> {
let span = tracing::info_span!("Handling input expectations");
let _guard = span.enter();
// Resolving the `input.expected` into a series of expectations that we can then assert on.
let mut expectations = match input {
Input {
expected: Some(Expected::Calldata(calldata)),
..
} => vec![ExpectedOutput::new().with_calldata(calldata.clone())],
Input {
expected: Some(Expected::Expected(expected)),
..
} => vec![expected.clone()],
Input {
expected: Some(Expected::ExpectedMany(expected)),
..
} => expected.clone(),
Input { expected: None, .. } => vec![ExpectedOutput::new().with_success()],
};
// This is a bit of a special case that we have to support separately on its own. If it's
// a call to the deployer method, then the tests will assert that it "returns" the address
// of the contract. Deployments do not return the address of the contract but the runtime
// code of the contract. Therefore, this assertion would always fail, so we replace it
// with an assertion that only checks whether the deployment succeeded.
if let Method::Deployer = &input.method {
for expectation in expectations.iter_mut() {
expectation.return_data = None;
}
}
for expectation in expectations.iter() {
self.handle_input_expectation_item(
execution_receipt,
resolver,
expectation,
tracing_result,
)
.await?;
}
Ok(())
}
async fn handle_input_expectation_item(
&mut self,
execution_receipt: &TransactionReceipt,
resolver: &impl ResolverApi,
expectation: &ExpectedOutput,
tracing_result: &CallFrame,
) -> anyhow::Result<()> {
if let Some(ref version_requirement) = expectation.compiler_version {
if !version_requirement.matches(&self.compiler_version) {
return Ok(());
}
}
let deployed_contracts = &mut self.deployed_contracts;
let variables = &mut self.variables;
// Handling the receipt state assertion.
let expected = !expectation.exception;
let actual = execution_receipt.status();
if actual != expected {
tracing::error!(
expected,
actual,
?execution_receipt,
?tracing_result,
"Transaction status assertion failed"
);
anyhow::bail!(
"Transaction status assertion failed - Expected {expected} but got {actual}",
);
}
// Handling the calldata assertion
if let Some(ref expected_calldata) = expectation.return_data {
let expected = expected_calldata;
let actual = &tracing_result.output.as_ref().unwrap_or_default();
if !expected
.is_equivalent(actual, deployed_contracts, &*variables, resolver)
.await?
{
tracing::error!(
?execution_receipt,
?expected,
%actual,
"Calldata assertion failed"
);
anyhow::bail!("Calldata assertion failed - Expected {expected:?} but got {actual}",);
}
}
// Handling the events assertion
if let Some(ref expected_events) = expectation.events {
// Handling the events length assertion.
let expected = expected_events.len();
let actual = execution_receipt.logs().len();
if actual != expected {
tracing::error!(expected, actual, "Event count assertion failed",);
anyhow::bail!(
"Event count assertion failed - Expected {expected} but got {actual}",
);
}
// Handling the events assertion.
for (event_idx, (expected_event, actual_event)) in expected_events
.iter()
.zip(execution_receipt.logs())
.enumerate()
{
// Handling the emitter assertion.
if let Some(ref expected_address) = expected_event.address {
let expected = Address::from_slice(
Calldata::new_compound([expected_address])
.calldata(deployed_contracts, &*variables, resolver)
.await?
.get(12..32)
.expect("Can't fail"),
);
let actual = actual_event.address();
if actual != expected {
tracing::error!(
event_idx,
%expected,
%actual,
"Event emitter assertion failed",
);
anyhow::bail!(
"Event emitter assertion failed - Expected {expected} but got {actual}",
);
}
}
// Handling the topics assertion.
for (expected, actual) in expected_event
.topics
.as_slice()
.iter()
.zip(actual_event.topics())
{
let expected = Calldata::new_compound([expected]);
if !expected
.is_equivalent(&actual.0, deployed_contracts, &*variables, resolver)
.await?
{
tracing::error!(
event_idx,
?execution_receipt,
?expected,
?actual,
"Event topics assertion failed",
);
anyhow::bail!(
"Event topics assertion failed - Expected {expected:?} but got {actual:?}",
);
}
}
// Handling the values assertion.
let expected = &expected_event.values;
let actual = &actual_event.data().data;
if !expected
.is_equivalent(&actual.0, deployed_contracts, &*variables, resolver)
.await?
{
tracing::error!(
event_idx,
?execution_receipt,
?expected,
?actual,
"Event value assertion failed",
);
anyhow::bail!(
"Event value assertion failed - Expected {expected:?} but got {actual:?}",
);
}
}
}
Ok(())
}
async fn handle_input_diff(
&mut self,
_: CaseIdx,
execution_receipt: TransactionReceipt,
node: &T::Blockchain,
) -> anyhow::Result<(TransactionReceipt, GethTrace, DiffMode)> {
let span = tracing::info_span!("Handling input diff");
let _guard = span.enter();
let trace_options = GethDebugTracingOptions::prestate_tracer(PreStateConfig {
diff_mode: Some(true),
disable_code: None,
disable_storage: None,
});
let trace = node
.trace_transaction(&execution_receipt, trace_options)
.await?;
let diff = node.state_diff(&execution_receipt).await?;
Ok((execution_receipt, trace, diff))
}
/// Gets the information of a deployed contract or library from the state. If it is found to
/// not be deployed yet then it is deployed first.
///
/// The resulting contract instance address is stored in the deployed contracts mapping so
/// that subsequent lookups reuse the same deployment.
#[allow(clippy::too_many_arguments)]
pub async fn get_or_deploy_contract_instance(
&mut self,
contract_instance: &ContractInstance,
metadata: &Metadata,
deployer: Address,
calldata: Option<&Calldata>,
value: Option<EtherValue>,
node: &T::Blockchain,
) -> anyhow::Result<(Address, JsonAbi, Option<TransactionReceipt>)> {
if let Some((address, abi)) = self.deployed_contracts.get(contract_instance) {
return Ok((*address, abi.clone(), None));
}
let Some(ContractPathAndIdent {
contract_source_path,
contract_ident,
}) = metadata.contract_sources()?.remove(contract_instance)
else {
tracing::error!("Contract source not found for instance");
anyhow::bail!(
"Contract source not found for instance {:?}",
contract_instance
)
};
let Some((code, abi)) = self
.compiled_contracts
.get(&contract_source_path)
.and_then(|source_file_contracts| source_file_contracts.get(contract_ident.as_ref()))
.cloned()
else {
tracing::error!(
contract_source_path = contract_source_path.display().to_string(),
contract_ident = contract_ident.as_ref(),
"Failed to find information for contract"
);
anyhow::bail!(
"Failed to find information for contract {:?}",
contract_instance
)
};
let mut code = match alloy::hex::decode(&code) {
Ok(code) => code,
Err(error) => {
tracing::error!(
?error,
contract_source_path = contract_source_path.display().to_string(),
contract_ident = contract_ident.as_ref(),
"Failed to hex-decode byte code - This could possibly mean that the bytecode requires linking"
);
anyhow::bail!("Failed to hex-decode the byte code {}", error)
}
};
if let Some(calldata) = calldata {
let calldata = calldata
.calldata(&self.deployed_contracts, None, node)
.await?;
code.extend(calldata);
}
let tx = {
let tx = TransactionRequest::default().from(deployer);
let tx = match value {
Some(ref value) => tx.value(value.into_inner()),
_ => tx,
};
TransactionBuilder::<Ethereum>::with_deploy_code(tx, code)
};
let receipt = match node.execute_transaction(tx).await {
Ok(receipt) => receipt,
Err(error) => {
tracing::error!(
node = std::any::type_name::<T>(),
?error,
"Contract deployment transaction failed."
);
return Err(error);
}
};
let Some(address) = receipt.contract_address else {
tracing::error!("Contract deployment transaction didn't return an address");
anyhow::bail!("Contract deployment didn't return an address");
};
tracing::info!(
instance_name = ?contract_instance,
instance_address = ?address,
"Deployed contract"
);
self.deployed_contracts
.insert(contract_instance.clone(), (address, abi.clone()));
Ok((address, abi, Some(receipt)))
}
}
pub struct CaseDriver<'a, Leader: Platform, Follower: Platform> {
metadata: &'a Metadata,
case: &'a Case,
case_idx: CaseIdx,
leader_node: &'a Leader::Blockchain,
follower_node: &'a Follower::Blockchain,
leader_state: CaseState<Leader>,
follower_state: CaseState<Follower>,
}
impl<'a, L, F> CaseDriver<'a, L, F>
where
L: Platform,
F: Platform,
{
#[allow(clippy::too_many_arguments)]
pub fn new(
metadata: &'a Metadata,
case: &'a Case,
case_idx: impl Into<CaseIdx>,
leader_node: &'a L::Blockchain,
follower_node: &'a F::Blockchain,
leader_state: CaseState<L>,
follower_state: CaseState<F>,
) -> CaseDriver<'a, L, F> {
Self {
metadata,
case,
case_idx: case_idx.into(),
leader_node,
follower_node,
leader_state,
follower_state,
}
}
pub fn trace_diff_mode(label: &str, diff: &DiffMode) {
tracing::trace!("{label} - PRE STATE:");
for (addr, state) in &diff.pre {
Self::trace_account_state(" [pre]", addr, state);
}
tracing::trace!("{label} - POST STATE:");
for (addr, state) in &diff.post {
Self::trace_account_state(" [post]", addr, state);
}
}
fn trace_account_state(prefix: &str, addr: &Address, state: &AccountState) {
tracing::trace!("{prefix} 0x{addr:x}");
if let Some(balance) = &state.balance {
tracing::trace!("{prefix} balance: {balance}");
}
if let Some(nonce) = &state.nonce {
tracing::trace!("{prefix} nonce: {nonce}");
}
if let Some(code) = &state.code {
tracing::trace!("{prefix} code: {code}");
}
}
pub async fn execute(&mut self) -> anyhow::Result<usize> {
if !self
.leader_node
.matches_target(self.metadata.targets.as_deref())
|| !self
.follower_node
.matches_target(self.metadata.targets.as_deref())
{
tracing::warn!(
targets = ?self.metadata.targets,
"Either the leader or follower node do not support the targets of the file"
);
return Ok(0);
}
let mut inputs_executed = 0;
for (input_idx, input) in self.case.inputs_iterator().enumerate() {
let tracing_span = tracing::info_span!("Handling input", input_idx);
let (leader_receipt, _, leader_diff) = self
.leader_state
.handle_input(self.metadata, self.case_idx, &input, self.leader_node)
.instrument(tracing_span.clone())
.await?;
let (follower_receipt, _, follower_diff) = self
.follower_state
.handle_input(self.metadata, self.case_idx, &input, self.follower_node)
.instrument(tracing_span)
.await?;
if leader_diff == follower_diff {
tracing::debug!("State diffs match between leader and follower.");
} else {
tracing::debug!("State diffs mismatch between leader and follower.");
Self::trace_diff_mode("Leader", &leader_diff);
Self::trace_diff_mode("Follower", &follower_diff);
}
if leader_receipt.logs() != follower_receipt.logs() {
tracing::debug!("Log/event mismatch between leader and follower.");
tracing::trace!("Leader logs: {:?}", leader_receipt.logs());
tracing::trace!("Follower logs: {:?}", follower_receipt.logs());
}
inputs_executed += 1;
}
Ok(inputs_executed)
}
}
pub struct BlockPinnedResolver<'a, T: Platform> {
block_number: BlockNumber,
node: &'a T::Blockchain,
}
impl<'a, T: Platform> ResolverApi for BlockPinnedResolver<'a, T> {
async fn chain_id(&self) -> anyhow::Result<alloy::primitives::ChainId> {
self.node.chain_id().await
}
async fn block_gas_limit(&self, number: BlockNumberOrTag) -> anyhow::Result<u128> {
self.node
.block_gas_limit(self.resolve_block_number_or_tag(number))
.await
}
async fn block_coinbase(&self, number: BlockNumberOrTag) -> anyhow::Result<Address> {
self.node
.block_coinbase(self.resolve_block_number_or_tag(number))
.await
}
async fn block_difficulty(&self, number: BlockNumberOrTag) -> anyhow::Result<U256> {
self.node
.block_difficulty(self.resolve_block_number_or_tag(number))
.await
}
async fn block_hash(
&self,
number: BlockNumberOrTag,
) -> anyhow::Result<alloy::primitives::BlockHash> {
self.node
.block_hash(self.resolve_block_number_or_tag(number))
.await
}
async fn block_timestamp(
&self,
number: BlockNumberOrTag,
) -> anyhow::Result<alloy::primitives::BlockTimestamp> {
self.node
.block_timestamp(self.resolve_block_number_or_tag(number))
.await
}
async fn last_block_number(&self) -> anyhow::Result<alloy::primitives::BlockNumber> {
Ok(self.block_number)
}
}
impl<'a, T: Platform> BlockPinnedResolver<'a, T> {
fn resolve_block_number_or_tag(&self, number: BlockNumberOrTag) -> BlockNumberOrTag {
match number {
BlockNumberOrTag::Latest => BlockNumberOrTag::Number(self.block_number),
n @ BlockNumberOrTag::Finalized
| n @ BlockNumberOrTag::Safe
| n @ BlockNumberOrTag::Earliest
| n @ BlockNumberOrTag::Pending
| n @ BlockNumberOrTag::Number(_) => n,
}
}
}
@@ -0,0 +1,47 @@
//! The revive differential testing core library.
//!
//! This crate defines the testing configuration and
//! provides a helper utility to execute tests.
use revive_dt_compiler::{SolidityCompiler, revive_resolc, solc};
use revive_dt_config::TestingPlatform;
use revive_dt_format::traits::ResolverApi;
use revive_dt_node::{Node, geth, kitchensink::KitchensinkNode};
use revive_dt_node_interaction::EthereumNode;
pub mod driver;
/// One platform can be tested differentially against another.
///
/// For this we need a blockchain node implementation and a compiler.
pub trait Platform {
type Blockchain: EthereumNode + Node + ResolverApi;
type Compiler: SolidityCompiler;
/// Returns the matching [TestingPlatform] of the [revive_dt_config::Arguments].
fn config_id() -> TestingPlatform;
}
#[derive(Default)]
pub struct Geth;
impl Platform for Geth {
type Blockchain = geth::GethNode;
type Compiler = solc::Solc;
fn config_id() -> TestingPlatform {
TestingPlatform::Geth
}
}
#[derive(Default)]
pub struct Kitchensink;
impl Platform for Kitchensink {
type Blockchain = KitchensinkNode;
type Compiler = revive_resolc::Resolc;
fn config_id() -> TestingPlatform {
TestingPlatform::Kitchensink
}
}
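Downstream code is meant to stay generic over `Platform` and reach the node and compiler through the associated types rather than naming either implementation. A minimal sketch (hypothetical helper) of that usage:
// Hypothetical helper, generic over any testing platform: resolve the
// platform's compiler binary and query its version.
async fn compiler_version_for<P: Platform>(
    config: &revive_dt_config::Arguments,
) -> anyhow::Result<semver::Version> {
    let path = P::Compiler::get_compiler_executable(config, config.solc.clone()).await?;
    P::Compiler::new(path).version()
}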
@@ -0,0 +1,744 @@
use std::{
collections::HashMap,
path::Path,
sync::{Arc, LazyLock},
};
use alloy::{
json_abi::JsonAbi,
network::{Ethereum, TransactionBuilder},
primitives::Address,
rpc::types::TransactionRequest,
};
use anyhow::Context;
use clap::Parser;
use futures::StreamExt;
use revive_dt_common::iterators::FilesWithExtensionIterator;
use revive_dt_node_interaction::EthereumNode;
use semver::Version;
use temp_dir::TempDir;
use tokio::sync::{Mutex, RwLock};
use tracing::{Instrument, Level};
use tracing_subscriber::{EnvFilter, FmtSubscriber};
use revive_dt_compiler::SolidityCompiler;
use revive_dt_compiler::{Compiler, CompilerOutput};
use revive_dt_config::*;
use revive_dt_core::{
Geth, Kitchensink, Platform,
driver::{CaseDriver, CaseState},
};
use revive_dt_format::{
case::{Case, CaseIdx},
corpus::Corpus,
input::Input,
metadata::{ContractInstance, ContractPathAndIdent, Metadata, MetadataFile},
mode::SolcMode,
};
use revive_dt_node::pool::NodePool;
use revive_dt_report::reporter::{Report, Span};
static TEMP_DIR: LazyLock<TempDir> = LazyLock::new(|| TempDir::new().unwrap());
type CompilationCache<'a> = Arc<
RwLock<
HashMap<
(&'a Path, SolcMode, TestingPlatform),
Arc<Mutex<Option<Arc<(Version, CompilerOutput)>>>>,
>,
>,
>;
fn main() -> anyhow::Result<()> {
let args = init_cli()?;
let body = async {
for (corpus, tests) in collect_corpora(&args)? {
let span = Span::new(corpus, args.clone())?;
match &args.compile_only {
Some(platform) => compile_corpus(&args, &tests, platform, span).await,
None => execute_corpus(&args, &tests, span).await?,
}
Report::save()?;
}
Ok(())
};
tokio::runtime::Builder::new_multi_thread()
.worker_threads(args.number_of_threads)
.enable_all()
.build()
.expect("Failed building the Runtime")
.block_on(body)
}
fn init_cli() -> anyhow::Result<Arguments> {
let subscriber = FmtSubscriber::builder()
.with_thread_ids(true)
.with_thread_names(true)
.with_env_filter(EnvFilter::from_default_env())
.with_ansi(false)
.pretty()
.finish();
tracing::subscriber::set_global_default(subscriber)?;
let mut args = Arguments::parse();
if args.corpus.is_empty() {
anyhow::bail!("no test corpus specified");
}
match args.working_directory.as_ref() {
Some(dir) => {
if !dir.exists() {
anyhow::bail!("workdir {} does not exist", dir.display());
}
}
None => {
args.temp_dir = Some(&TEMP_DIR);
}
}
tracing::info!("workdir: {}", args.directory().display());
Ok(args)
}
fn collect_corpora(args: &Arguments) -> anyhow::Result<HashMap<Corpus, Vec<MetadataFile>>> {
let mut corpora = HashMap::new();
for path in &args.corpus {
let corpus = Corpus::try_from_path(path)?;
tracing::info!("found corpus: {}", path.display());
let tests = corpus.enumerate_tests();
tracing::info!("corpus '{}' contains {} tests", &corpus.name, tests.len());
corpora.insert(corpus, tests);
}
Ok(corpora)
}
async fn run_driver<L, F>(
args: &Arguments,
tests: &[MetadataFile],
span: Span,
) -> anyhow::Result<()>
where
L: Platform,
F: Platform,
L::Blockchain: revive_dt_node::Node + Send + Sync + 'static,
F::Blockchain: revive_dt_node::Node + Send + Sync + 'static,
{
let leader_nodes = NodePool::<L::Blockchain>::new(args)?;
let follower_nodes = NodePool::<F::Blockchain>::new(args)?;
let test_cases = tests
.iter()
.flat_map(
|MetadataFile {
path,
content: metadata,
}| {
metadata
.cases
.iter()
.enumerate()
.flat_map(move |(case_idx, case)| {
metadata
.solc_modes()
.into_iter()
.map(move |solc_mode| (path, metadata, case_idx, case, solc_mode))
})
},
)
.filter(
|(metadata_file_path, metadata, _, _, _)| match metadata.ignore {
Some(true) => {
tracing::warn!(
metadata_file_path = %metadata_file_path.display(),
"Ignoring metadata file"
);
false
}
Some(false) | None => true,
},
)
.filter(
|(metadata_file_path, _, case_idx, case, _)| match case.ignore {
Some(true) => {
tracing::warn!(
metadata_file_path = %metadata_file_path.display(),
case_idx,
case_name = ?case.name,
"Ignoring case"
);
false
}
Some(false) | None => true,
},
)
.collect::<Vec<_>>();
let metadata_case_status = Arc::new(RwLock::new(test_cases.iter().fold(
HashMap::<_, HashMap<_, _>>::new(),
|mut map, (path, _, case_idx, case, solc_mode)| {
map.entry((path.to_path_buf(), solc_mode.clone()))
.or_default()
.insert((CaseIdx::new(*case_idx), case.name.clone()), None::<bool>);
map
},
)));
let status_reporter_task = {
let metadata_case_status = metadata_case_status.clone();
async move {
const GREEN: &str = "\x1B[32m";
const RED: &str = "\x1B[31m";
const RESET: &str = "\x1B[0m";
let mut entries_to_delete = Vec::new();
let mut number_of_successes = 0;
let mut number_of_failures = 0;
loop {
let metadata_case_status_read = metadata_case_status.read().await;
if metadata_case_status_read.is_empty() {
break;
}
for ((metadata_file_path, solc_mode), case_status) in
metadata_case_status_read.iter()
{
if case_status.values().any(|value| value.is_none()) {
continue;
}
let contains_failures = case_status
.values()
.any(|value| value.is_some_and(|value| !value));
if !contains_failures {
eprintln!(
"{}Succeeded:{} {} - {:?}",
GREEN,
RESET,
metadata_file_path.display(),
solc_mode
)
} else {
eprintln!(
"{}Failed:{} {} - {:?}",
RED,
RESET,
metadata_file_path.display(),
solc_mode
)
};
number_of_successes += case_status
.values()
.filter(|value| value.is_some_and(|value| value))
.count();
number_of_failures += case_status
.values()
.filter(|value| value.is_some_and(|value| !value))
.count();
let mut case_status = case_status
.iter()
.map(|((case_idx, case_name), case_status)| {
(case_idx.into_inner(), case_name, case_status.unwrap())
})
.collect::<Vec<_>>();
case_status.sort_by(|a, b| a.0.cmp(&b.0));
for (case_idx, case_name, case_status) in case_status.into_iter() {
if case_status {
eprintln!(
" {GREEN}Case Succeeded:{RESET} {} - Case Idx: {case_idx}",
case_name
.as_ref()
.map(|string| string.as_str())
.unwrap_or("Unnamed case")
)
} else {
eprintln!(
" {RED}Case Failed:{RESET} {} - Case Idx: {case_idx}",
case_name
.as_ref()
.map(|string| string.as_str())
.unwrap_or("Unnamed case")
)
};
}
eprintln!();
entries_to_delete.push((metadata_file_path.clone(), solc_mode.clone()));
}
drop(metadata_case_status_read);
let mut metadata_case_status_write = metadata_case_status.write().await;
for entry in entries_to_delete.drain(..) {
metadata_case_status_write.remove(&entry);
}
tokio::time::sleep(std::time::Duration::from_secs(3)).await;
}
eprintln!(
"{GREEN}{number_of_successes}{RESET} cases succeeded, {RED}{number_of_failures}{RESET} cases failed"
);
}
};
let compilation_cache = Arc::new(RwLock::new(HashMap::new()));
let driver_task = futures::stream::iter(test_cases).for_each_concurrent(
None,
|(metadata_file_path, metadata, case_idx, case, solc_mode)| {
let compilation_cache = compilation_cache.clone();
let leader_node = leader_nodes.round_robbin();
let follower_node = follower_nodes.round_robbin();
let tracing_span = tracing::span!(
Level::INFO,
"Running driver",
metadata_file_path = %metadata_file_path.display(),
case_idx = case_idx,
solc_mode = ?solc_mode,
);
let metadata_case_status = metadata_case_status.clone();
async move {
let result = handle_case_driver::<L, F>(
metadata_file_path.as_path(),
metadata,
case_idx.into(),
case,
solc_mode.clone(),
args,
compilation_cache.clone(),
leader_node,
follower_node,
span,
)
.await;
let mut metadata_case_status = metadata_case_status.write().await;
match result {
Ok(inputs_executed) => {
tracing::info!(inputs_executed, "Execution succeeded");
metadata_case_status
.entry((metadata_file_path.clone(), solc_mode))
.or_default()
.insert((CaseIdx::new(case_idx), case.name.clone()), Some(true));
}
Err(error) => {
metadata_case_status
.entry((metadata_file_path.clone(), solc_mode))
.or_default()
.insert((CaseIdx::new(case_idx), case.name.clone()), Some(false));
tracing::error!(%error, "Execution failed")
}
}
tracing::info!("Execution completed");
}
.instrument(tracing_span)
},
);
tokio::join!(status_reporter_task, driver_task);
Ok(())
}
#[allow(clippy::too_many_arguments)]
async fn handle_case_driver<'a, L, F>(
metadata_file_path: &'a Path,
metadata: &'a Metadata,
case_idx: CaseIdx,
case: &Case,
mode: SolcMode,
config: &Arguments,
compilation_cache: CompilationCache<'a>,
leader_node: &L::Blockchain,
follower_node: &F::Blockchain,
_: Span,
) -> anyhow::Result<usize>
where
L: Platform,
F: Platform,
L::Blockchain: revive_dt_node::Node + Send + Sync + 'static,
F::Blockchain: revive_dt_node::Node + Send + Sync + 'static,
{
let leader_pre_link_contracts = get_or_build_contracts::<L>(
metadata,
metadata_file_path,
mode.clone(),
config,
compilation_cache.clone(),
&HashMap::new(),
)
.await?;
let follower_pre_link_contracts = get_or_build_contracts::<F>(
metadata,
metadata_file_path,
mode.clone(),
config,
compilation_cache.clone(),
&HashMap::new(),
)
.await?;
let mut leader_deployed_libraries = HashMap::new();
let mut follower_deployed_libraries = HashMap::new();
let mut contract_sources = metadata.contract_sources()?;
for library_instance in metadata
.libraries
.iter()
.flatten()
.flat_map(|(_, map)| map.values())
{
let ContractPathAndIdent {
contract_source_path: library_source_path,
contract_ident: library_ident,
} = contract_sources
.remove(library_instance)
.context("Failed to find the contract source")?;
let (leader_code, leader_abi) = leader_pre_link_contracts
.1
.contracts
.get(&library_source_path)
.and_then(|contracts| contracts.get(library_ident.as_str()))
.context("Declared library was not compiled")?;
let (follower_code, follower_abi) = follower_pre_link_contracts
.1
.contracts
.get(&library_source_path)
.and_then(|contracts| contracts.get(library_ident.as_str()))
.context("Declared library was not compiled")?;
let leader_code = match alloy::hex::decode(leader_code) {
Ok(code) => code,
Err(error) => {
tracing::error!(
?error,
contract_source_path = library_source_path.display().to_string(),
contract_ident = library_ident.as_ref(),
"Failed to hex-decode byte code - This could possibly mean that the bytecode requires linking"
);
anyhow::bail!("Failed to hex-decode the byte code {}", error)
}
};
let follower_code = match alloy::hex::decode(follower_code) {
Ok(code) => code,
Err(error) => {
tracing::error!(
?error,
contract_source_path = library_source_path.display().to_string(),
contract_ident = library_ident.as_ref(),
"Failed to hex-decode byte code - This could possibly mean that the bytecode requires linking"
);
anyhow::bail!("Failed to hex-decode the byte code {}", error)
}
};
// Getting the deployer address from the cases themselves. This ensures that deployments
// are performed from different accounts and are therefore not serialized on a single
// account's nonce.
let deployer_address = case
.inputs
.iter()
.map(|input| input.caller)
.next()
.unwrap_or(Input::default_caller());
let leader_tx = TransactionBuilder::<Ethereum>::with_deploy_code(
TransactionRequest::default().from(deployer_address),
leader_code,
);
let follower_tx = TransactionBuilder::<Ethereum>::with_deploy_code(
TransactionRequest::default().from(deployer_address),
follower_code,
);
let leader_receipt = match leader_node.execute_transaction(leader_tx).await {
Ok(receipt) => receipt,
Err(error) => {
tracing::error!(
node = std::any::type_name::<L>(),
?error,
"Contract deployment transaction failed."
);
return Err(error);
}
};
let follower_receipt = match follower_node.execute_transaction(follower_tx).await {
Ok(receipt) => receipt,
Err(error) => {
tracing::error!(
node = std::any::type_name::<F>(),
?error,
"Contract deployment transaction failed."
);
return Err(error);
}
};
tracing::info!(
?library_instance,
library_address = ?leader_receipt.contract_address,
"Deployed library to leader"
);
tracing::info!(
?library_instance,
library_address = ?follower_receipt.contract_address,
"Deployed library to follower"
);
let Some(leader_library_address) = leader_receipt.contract_address else {
tracing::error!("Contract deployment transaction didn't return an address");
anyhow::bail!("Contract deployment didn't return an address");
};
let Some(follower_library_address) = follower_receipt.contract_address else {
tracing::error!("Contract deployment transaction didn't return an address");
anyhow::bail!("Contract deployment didn't return an address");
};
leader_deployed_libraries.insert(
library_instance.clone(),
(leader_library_address, leader_abi.clone()),
);
follower_deployed_libraries.insert(
library_instance.clone(),
(follower_library_address, follower_abi.clone()),
);
}
let metadata_file_contains_libraries = metadata
.libraries
.iter()
.flat_map(|map| map.iter())
.flat_map(|(_, value)| value.iter())
.next()
.is_some();
let compiled_contracts_require_linking = leader_pre_link_contracts
.1
.contracts
.values()
.chain(follower_pre_link_contracts.1.contracts.values())
.flat_map(|value| value.values())
.any(|(code, _)| !code.chars().all(|char| char.is_ascii_hexdigit()));
let (leader_compiled_contracts, follower_compiled_contracts) =
if metadata_file_contains_libraries && compiled_contracts_require_linking {
let leader_key = (metadata_file_path, mode.clone(), L::config_id());
let follower_key = (metadata_file_path, mode.clone(), F::config_id());
{
let mut cache = compilation_cache.write().await;
cache.remove(&leader_key);
cache.remove(&follower_key);
}
let leader_post_link_contracts = get_or_build_contracts::<L>(
metadata,
metadata_file_path,
mode.clone(),
config,
compilation_cache.clone(),
&leader_deployed_libraries,
)
.await?;
let follower_post_link_contracts = get_or_build_contracts::<F>(
metadata,
metadata_file_path,
mode.clone(),
config,
compilation_cache,
&follower_deployed_libraries,
)
.await?;
(leader_post_link_contracts, follower_post_link_contracts)
} else {
(leader_pre_link_contracts, follower_pre_link_contracts)
};
let leader_state = CaseState::<L>::new(
leader_compiled_contracts.0.clone(),
leader_compiled_contracts.1.contracts.clone(),
leader_deployed_libraries,
);
let follower_state = CaseState::<F>::new(
follower_compiled_contracts.0.clone(),
follower_compiled_contracts.1.contracts.clone(),
follower_deployed_libraries,
);
let mut driver = CaseDriver::<L, F>::new(
metadata,
case,
case_idx,
leader_node,
follower_node,
leader_state,
follower_state,
);
driver.execute().await
}
async fn get_or_build_contracts<'a, P: Platform>(
metadata: &'a Metadata,
metadata_file_path: &'a Path,
mode: SolcMode,
config: &Arguments,
compilation_cache: CompilationCache<'a>,
deployed_libraries: &HashMap<ContractInstance, (Address, JsonAbi)>,
) -> anyhow::Result<Arc<(Version, CompilerOutput)>> {
let key = (metadata_file_path, mode.clone(), P::config_id());
if let Some(compilation_artifact) = compilation_cache.read().await.get(&key).cloned() {
let mut compilation_artifact = compilation_artifact.lock().await;
match *compilation_artifact {
Some(ref compiled_contracts) => {
tracing::debug!(?key, "Compiled contracts cache hit");
return Ok(compiled_contracts.clone());
}
None => {
tracing::debug!(?key, "Compiled contracts cache miss");
let compiled_contracts = Arc::new(
compile_contracts::<P>(
metadata,
metadata_file_path,
&mode,
config,
deployed_libraries,
)
.await?,
);
*compilation_artifact = Some(compiled_contracts.clone());
return Ok(compiled_contracts.clone());
}
}
};
tracing::debug!(?key, "Compiled contracts cache miss");
let mutex = {
let mut compilation_cache = compilation_cache.write().await;
let mutex = Arc::new(Mutex::new(None));
compilation_cache.insert(key, mutex.clone());
mutex
};
let mut compilation_artifact = mutex.lock().await;
let compiled_contracts = Arc::new(
compile_contracts::<P>(
metadata,
metadata_file_path,
&mode,
config,
deployed_libraries,
)
.await?,
);
*compilation_artifact = Some(compiled_contracts.clone());
Ok(compiled_contracts.clone())
}
async fn compile_contracts<P: Platform>(
metadata: &Metadata,
metadata_file_path: &Path,
mode: &SolcMode,
config: &Arguments,
deployed_libraries: &HashMap<ContractInstance, (Address, JsonAbi)>,
) -> anyhow::Result<(Version, CompilerOutput)> {
let compiler_version_or_requirement = mode.compiler_version_to_use(config.solc.clone());
let compiler_path =
P::Compiler::get_compiler_executable(config, compiler_version_or_requirement).await?;
let compiler_version = P::Compiler::new(compiler_path.clone()).version()?;
tracing::info!(
%compiler_version,
metadata_file_path = %metadata_file_path.display(),
mode = ?mode,
"Compiling contracts"
);
let compiler = Compiler::<P::Compiler>::new()
.with_allow_path(metadata.directory()?)
.with_optimization(mode.solc_optimize());
let mut compiler = metadata
.files_to_compile()?
.try_fold(compiler, |compiler, path| compiler.with_source(&path))?;
for (library_instance, (library_address, _)) in deployed_libraries.iter() {
let library_ident = &metadata
.contracts
.as_ref()
.and_then(|contracts| contracts.get(library_instance))
.expect("Impossible for library to not be found in contracts")
.contract_ident;
// Note: we need to tell solc which files require the library to be linked into them. We
// do not have access to this information, so we take the easier, more compute-intensive
// route of telling solc that all of the files need to link the library; solc will only
// perform the linking for the files that actually need it.
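// For illustration: with a library `Lib` deployed at some address, every `*.sol` file
// under the metadata directory receives a `Lib => <address>` library setting, and solc
// ignores the setting for files that do not reference `Lib`.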
compiler = FilesWithExtensionIterator::new(metadata.directory()?)
.with_allowed_extension("sol")
.fold(compiler, |compiler, path| {
compiler.with_library(&path, library_ident.as_str(), *library_address)
});
}
let compiler_output = compiler.try_build(compiler_path).await?;
Ok((compiler_version, compiler_output))
}
async fn execute_corpus(
args: &Arguments,
tests: &[MetadataFile],
span: Span,
) -> anyhow::Result<()> {
match (&args.leader, &args.follower) {
(TestingPlatform::Geth, TestingPlatform::Kitchensink) => {
run_driver::<Geth, Kitchensink>(args, tests, span).await?
}
(TestingPlatform::Geth, TestingPlatform::Geth) => {
run_driver::<Geth, Geth>(args, tests, span).await?
}
_ => unimplemented!(),
}
Ok(())
}
async fn compile_corpus(
config: &Arguments,
tests: &[MetadataFile],
platform: &TestingPlatform,
_: Span,
) {
let tests = tests.iter().flat_map(|metadata| {
metadata
.solc_modes()
.into_iter()
.map(move |solc_mode| (metadata, solc_mode))
});
futures::stream::iter(tests)
.for_each_concurrent(None, |(metadata, mode)| async move {
match platform {
TestingPlatform::Geth => {
let _ = compile_contracts::<Geth>(
&metadata.content,
&metadata.path,
&mode,
config,
&Default::default(),
)
.await;
}
TestingPlatform::Kitchensink => {
let _ = compile_contracts::<Kitchensink>(
&metadata.content,
&metadata.path,
&mode,
config,
&Default::default(),
)
.await;
}
}
})
.await;
}
@@ -0,0 +1,24 @@
[package]
name = "revive-dt-format"
description = "declarative test definition format"
version.workspace = true
authors.workspace = true
license.workspace = true
edition.workspace = true
repository.workspace = true
rust-version.workspace = true
[dependencies]
revive-dt-common = { workspace = true }
alloy = { workspace = true }
alloy-primitives = { workspace = true }
alloy-sol-types = { workspace = true }
anyhow = { workspace = true }
tracing = { workspace = true }
semver = { workspace = true }
serde = { workspace = true, features = ["derive"] }
serde_json = { workspace = true }
[dev-dependencies]
tokio = { workspace = true }
@@ -0,0 +1,51 @@
use serde::Deserialize;
use revive_dt_common::macros::define_wrapper_type;
use crate::{
input::{Expected, Input},
mode::Mode,
};
#[derive(Debug, Default, Deserialize, Clone, Eq, PartialEq)]
pub struct Case {
pub name: Option<String>,
pub comment: Option<String>,
pub modes: Option<Vec<Mode>>,
pub inputs: Vec<Input>,
pub group: Option<String>,
pub expected: Option<Expected>,
pub ignore: Option<bool>,
}
impl Case {
pub fn inputs_iterator(&self) -> impl Iterator<Item = Input> {
let inputs_len = self.inputs.len();
self.inputs
.clone()
.into_iter()
.enumerate()
.map(move |(idx, mut input)| {
if idx + 1 == inputs_len {
if input.expected.is_none() {
input.expected = self.expected.clone();
}
// TODO: What does it mean for us to have an `expected` field on the case itself
// but the final input also has an expected field that doesn't match the one on
// the case? What are we supposed to do with that final expected field on the
// case?
input
} else {
input
}
})
}
}
define_wrapper_type!(
/// A wrapper type for the index of test cases found in a metadata file.
#[derive(Clone, Copy, Debug, PartialEq, Eq, PartialOrd, Ord, Hash)]
pub struct CaseIdx(usize);
);
@@ -0,0 +1,99 @@
use std::{
fs::File,
path::{Path, PathBuf},
};
use serde::{Deserialize, Serialize};
use crate::metadata::MetadataFile;
#[derive(Clone, Debug, Default, Serialize, Deserialize, Eq, PartialEq, Hash)]
pub struct Corpus {
pub name: String,
pub path: PathBuf,
}
impl Corpus {
/// Try to read and parse the corpus definition file at given `path`.
pub fn try_from_path(path: &Path) -> anyhow::Result<Self> {
let file = File::open(path)?;
let mut corpus: Corpus = serde_json::from_reader(file)?;
// Ensure that the path mentioned in the corpus is relative to the corpus file.
// Canonicalizing also helps make the path in any errors unambiguous.
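// e.g. a corpus file at /repo/corpus.json whose "path" field is "tests" resolves to /repo/tests.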
corpus.path = path
.parent()
.ok_or_else(|| {
anyhow::anyhow!("Corpus path '{}' does not point to a file", path.display())
})?
.canonicalize()
.map_err(|error| {
anyhow::anyhow!(
"Failed to canonicalize path to corpus '{}': {error}",
path.display()
)
})?
.join(corpus.path);
Ok(corpus)
}
/// Scan the corpus base directory and return all tests found.
pub fn enumerate_tests(&self) -> Vec<MetadataFile> {
let mut tests = Vec::new();
collect_metadata(&self.path, &mut tests);
tests
}
}
/// Recursively walks `path` and parses any JSON or Solidity file into a test
/// definition [Metadata].
///
/// Found tests are inserted into `tests`.
///
/// `path` is expected to be a directory.
pub fn collect_metadata(path: &Path, tests: &mut Vec<MetadataFile>) {
if path.is_dir() {
let dir_entry = match std::fs::read_dir(path) {
Ok(dir_entry) => dir_entry,
Err(error) => {
tracing::error!("failed to read dir '{}': {error}", path.display());
return;
}
};
for entry in dir_entry {
let entry = match entry {
Ok(entry) => entry,
Err(error) => {
tracing::error!("error reading dir entry: {error}");
continue;
}
};
let path = entry.path();
if path.is_dir() {
collect_metadata(&path, tests);
continue;
}
if path.is_file() {
if let Some(metadata) = MetadataFile::try_from_file(&path) {
tests.push(metadata)
}
}
}
} else {
let Some(extension) = path.extension() else {
tracing::error!("Failed to get file extension");
return;
};
if extension.eq_ignore_ascii_case("sol") || extension.eq_ignore_ascii_case("json") {
if let Some(metadata) = MetadataFile::try_from_file(path) {
tests.push(metadata)
}
} else {
tracing::error!(?extension, "Unsupported file extension");
}
}
}
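A usage sketch of the path handling above, assuming a hypothetical corpus file `/corpora/main.json` containing `{"name": "main", "path": "tests"}` and that `/corpora` canonicalizes to itself:
use std::path::{Path, PathBuf};
// The relative `path` inside the corpus file is joined onto the canonicalized parent directory.
let corpus = Corpus::try_from_path(Path::new("/corpora/main.json"))?;
assert_eq!(corpus.path, PathBuf::from("/corpora/tests"));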
File diff suppressed because it is too large
@@ -0,0 +1,8 @@
//! The revive differential tests case format.
pub mod case;
pub mod corpus;
pub mod input;
pub mod metadata;
pub mod mode;
pub mod traits;
@@ -0,0 +1,384 @@
use std::{
collections::BTreeMap,
fmt::Display,
fs::{File, read_to_string},
ops::Deref,
path::{Path, PathBuf},
str::FromStr,
};
use serde::{Deserialize, Serialize};
use revive_dt_common::{iterators::FilesWithExtensionIterator, macros::define_wrapper_type};
use crate::{
case::Case,
mode::{Mode, SolcMode},
};
pub const METADATA_FILE_EXTENSION: &str = "json";
pub const SOLIDITY_CASE_FILE_EXTENSION: &str = "sol";
pub const SOLIDITY_CASE_COMMENT_MARKER: &str = "//!";
#[derive(Debug, Default, Deserialize, Clone, Eq, PartialEq)]
pub struct MetadataFile {
pub path: PathBuf,
pub content: Metadata,
}
impl MetadataFile {
pub fn try_from_file(path: &Path) -> Option<Self> {
Metadata::try_from_file(path).map(|metadata| Self {
path: path.to_owned(),
content: metadata,
})
}
}
impl Deref for MetadataFile {
type Target = Metadata;
fn deref(&self) -> &Self::Target {
&self.content
}
}
#[derive(Debug, Default, Deserialize, Clone, Eq, PartialEq)]
pub struct Metadata {
pub targets: Option<Vec<String>>,
pub cases: Vec<Case>,
pub contracts: Option<BTreeMap<ContractInstance, ContractPathAndIdent>>,
// TODO: Convert into wrapper types for clarity.
pub libraries: Option<BTreeMap<PathBuf, BTreeMap<ContractIdent, ContractInstance>>>,
pub ignore: Option<bool>,
pub modes: Option<Vec<Mode>>,
pub file_path: Option<PathBuf>,
}
impl Metadata {
/// Returns the solc modes of this metadata, falling back to a default mode if none are present.
pub fn solc_modes(&self) -> Vec<SolcMode> {
self.modes
.to_owned()
.unwrap_or_else(|| vec![Mode::Solidity(Default::default())])
.iter()
.filter_map(|mode| match mode {
Mode::Solidity(solc_mode) => Some(solc_mode),
Mode::Unknown(mode) => {
tracing::debug!("compiler: ignoring unknown mode '{mode}'");
None
}
})
.cloned()
.collect()
}
/// Returns the base directory of this metadata.
pub fn directory(&self) -> anyhow::Result<PathBuf> {
Ok(self
.file_path
.as_ref()
.and_then(|path| path.parent())
.ok_or_else(|| anyhow::anyhow!("metadata invalid file path: {:?}", self.file_path))?
.to_path_buf())
}
/// Returns the contract sources, with the source file paths canonicalized.
pub fn contract_sources(
&self,
) -> anyhow::Result<BTreeMap<ContractInstance, ContractPathAndIdent>> {
let directory = self.directory()?;
let mut sources = BTreeMap::new();
let Some(contracts) = &self.contracts else {
return Ok(sources);
};
for (
alias,
ContractPathAndIdent {
contract_source_path,
contract_ident,
},
) in contracts
{
let alias = alias.clone();
let absolute_path = directory.join(contract_source_path).canonicalize()?;
let contract_ident = contract_ident.clone();
sources.insert(
alias,
ContractPathAndIdent {
contract_source_path: absolute_path,
contract_ident,
},
);
}
Ok(sources)
}
/// Try to parse the test metadata struct from the given file at `path`.
///
/// Returns `None` if `path` didn't contain a test metadata or case definition.
///
/// # Panics
/// Expects the supplied `path` to be a file.
pub fn try_from_file(path: &Path) -> Option<Self> {
assert!(path.is_file(), "not a file: {}", path.display());
let Some(file_extension) = path.extension() else {
tracing::debug!("skipping corpus file: {}", path.display());
return None;
};
if file_extension == METADATA_FILE_EXTENSION {
return Self::try_from_json(path);
}
if file_extension == SOLIDITY_CASE_FILE_EXTENSION {
return Self::try_from_solidity(path);
}
tracing::debug!("ignoring invalid corpus file: {}", path.display());
None
}
fn try_from_json(path: &Path) -> Option<Self> {
let file = File::open(path)
.inspect_err(|error| {
tracing::error!(
"opening JSON test metadata file '{}' error: {error}",
path.display()
);
})
.ok()?;
match serde_json::from_reader::<_, Metadata>(file) {
Ok(mut metadata) => {
metadata.file_path = Some(path.to_path_buf());
Some(metadata)
}
Err(error) => {
tracing::error!(
"parsing JSON test metadata file '{}' error: {error}",
path.display()
);
None
}
}
}
fn try_from_solidity(path: &Path) -> Option<Self> {
let spec = read_to_string(path)
.inspect_err(|error| {
tracing::error!(
"opening JSON test metadata file '{}' error: {error}",
path.display()
);
})
.ok()?
.lines()
.filter_map(|line| line.strip_prefix(SOLIDITY_CASE_COMMENT_MARKER))
.fold(String::new(), |mut buf, string| {
buf.push_str(string);
buf
});
if spec.is_empty() {
return None;
}
match serde_json::from_str::<Self>(&spec) {
Ok(mut metadata) => {
metadata.file_path = Some(path.to_path_buf());
metadata.contracts = Some(
[(
ContractInstance::new("Test"),
ContractPathAndIdent {
contract_source_path: path.to_path_buf(),
contract_ident: ContractIdent::new("Test"),
},
)]
.into(),
);
Some(metadata)
}
Err(error) => {
tracing::error!(
"parsing Solidity test metadata file '{}' error: '{error}' from data: {spec}",
path.display()
);
None
}
}
}
/// Returns an iterator over all of the Solidity files that need to be compiled for this
/// [`Metadata`] object.
///
/// Note: if the metadata is contained within a Solidity file then this is the only file that
/// we wish to compile, since it is a self-contained test. Otherwise, if it's a JSON file,
/// then we need to compile all of the contracts in the directory, since imports are
/// allowed there.
pub fn files_to_compile(&self) -> anyhow::Result<Box<dyn Iterator<Item = PathBuf>>> {
let Some(ref metadata_file_path) = self.file_path else {
anyhow::bail!("The metadata file path is not defined");
};
if metadata_file_path
.extension()
.is_some_and(|extension| extension.eq_ignore_ascii_case("sol"))
{
Ok(Box::new(std::iter::once(metadata_file_path.clone())))
} else {
Ok(Box::new(
FilesWithExtensionIterator::new(self.directory()?).with_allowed_extension("sol"),
))
}
}
}
define_wrapper_type!(
/// Represents a contract instance found in a metadata file.
///
/// Typically, this is used as the key to the "contracts" field of metadata files.
#[derive(
Clone, Debug, Default, PartialEq, Eq, PartialOrd, Ord, Hash, Serialize, Deserialize,
)]
#[serde(transparent)]
pub struct ContractInstance(String);
);
define_wrapper_type!(
/// Represents a contract identifier found in a metadata file.
///
/// A contract identifier is the name of the contract in the source code.
#[derive(
Clone, Debug, Default, PartialEq, Eq, PartialOrd, Ord, Hash, Serialize, Deserialize,
)]
#[serde(transparent)]
pub struct ContractIdent(String);
);
/// Represents an identifier used for contracts.
///
/// The type supports serialization from and into the following string format:
///
/// ```text
/// ${path}:${contract_ident}
/// ```
#[derive(Clone, Debug, PartialEq, Eq, PartialOrd, Ord, Hash, Serialize, Deserialize)]
#[serde(try_from = "String", into = "String")]
pub struct ContractPathAndIdent {
/// The path of the contract source code relative to the directory containing the metadata file.
pub contract_source_path: PathBuf,
/// The identifier of the contract.
pub contract_ident: ContractIdent,
}
impl Display for ContractPathAndIdent {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(
f,
"{}:{}",
self.contract_source_path.display(),
self.contract_ident.as_ref()
)
}
}
impl FromStr for ContractPathAndIdent {
type Err = anyhow::Error;
fn from_str(s: &str) -> Result<Self, Self::Err> {
let mut splitted_string = s.split(":").peekable();
let mut path = None::<String>;
let mut identifier = None::<String>;
loop {
let Some(next_item) = splitted_string.next() else {
break;
};
if splitted_string.peek().is_some() {
match path {
Some(ref mut path) => {
path.push(':');
path.push_str(next_item);
}
None => path = Some(next_item.to_owned()),
}
} else {
identifier = Some(next_item.to_owned())
}
}
match (path, identifier) {
(Some(path), Some(identifier)) => Ok(Self {
contract_source_path: PathBuf::from(path),
contract_ident: ContractIdent::new(identifier),
}),
(None, Some(path)) | (Some(path), None) => {
let Some(identifier) = path.split(".").next().map(ToOwned::to_owned) else {
anyhow::bail!("Failed to find identifier");
};
Ok(Self {
contract_source_path: PathBuf::from(path),
contract_ident: ContractIdent::new(identifier),
})
}
(None, None) => anyhow::bail!("Failed to find the path and identifier"),
}
}
}
impl TryFrom<String> for ContractPathAndIdent {
type Error = anyhow::Error;
fn try_from(value: String) -> Result<Self, Self::Error> {
Self::from_str(&value)
}
}
impl From<ContractPathAndIdent> for String {
fn from(value: ContractPathAndIdent) -> Self {
value.to_string()
}
}
#[cfg(test)]
mod test {
use super::*;
#[test]
fn contract_identifier_respects_roundtrip_property() {
// Arrange
let string = "ERC20/ERC20.sol:ERC20";
// Act
let identifier = ContractPathAndIdent::from_str(string);
// Assert
let identifier = identifier.expect("Failed to parse");
assert_eq!(
identifier.contract_source_path.display().to_string(),
"ERC20/ERC20.sol"
);
assert_eq!(identifier.contract_ident, "ERC20".to_owned().into());
// Act
let reserialized = identifier.to_string();
// Assert
assert_eq!(string, reserialized);
}
#[test]
fn complex_metadata_file_can_be_deserialized() {
// Arrange
const JSON: &str = include_str!("../../../assets/test_metadata.json");
// Act
let metadata = serde_json::from_str::<Metadata>(JSON);
// Assert
metadata.expect("Failed to deserialize metadata");
}
}
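To make the `//!` convention concrete, a hedged sketch of how a self-contained Solidity case is parsed: the marker is stripped from each comment line, the remainders are concatenated, and the result is deserialized as JSON (the embedded spec below is hypothetical):
// Each `//!` line contributes its suffix to a single JSON document.
let solidity = r#"
//! {"cases": [{"name": "default",
//! "inputs": []}]}
contract Test {}
"#;
let spec: String = solidity
.lines()
.filter_map(|line| line.strip_prefix("//!"))
.collect();
let metadata = serde_json::from_str::<Metadata>(&spec).expect("valid embedded spec");
assert_eq!(metadata.cases.len(), 1);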
@@ -0,0 +1,106 @@
use revive_dt_common::types::VersionOrRequirement;
use semver::Version;
use serde::de::Deserializer;
use serde::{Deserialize, Serialize};
/// Specifies the compilation mode of the test artifact.
#[derive(Hash, Debug, Clone, Eq, PartialEq)]
pub enum Mode {
Solidity(SolcMode),
Unknown(String),
}
/// Specify Solidity specific compiler options.
#[derive(Hash, Debug, Default, Clone, Eq, PartialEq, Serialize, Deserialize)]
pub struct SolcMode {
pub solc_version: Option<semver::VersionReq>,
solc_optimize: Option<bool>,
pub llvm_optimizer_settings: Vec<String>,
}
impl SolcMode {
/// Try to parse a mode string into a solc mode.
/// Returns `None` if the string wasn't a solc YUL mode string.
///
/// The mode string is expected to start with the `Y` ID (YUL ID),
/// optionally followed by `+` or `-` for the solc optimizer settings.
///
/// Whitespace-separated options may follow, containing:
/// - A solc SemVer version requirement string
/// - One or more `-OX` options, where `X` is an LLVM optimization level
pub fn parse_from_mode_string(mode_string: &str) -> Option<Self> {
let mut result = Self::default();
let mut parts = mode_string.trim().split(" ");
match parts.next()? {
"Y" => {}
"Y+" => result.solc_optimize = Some(true),
"Y-" => result.solc_optimize = Some(false),
_ => return None,
}
for part in parts {
if let Ok(solc_version) = semver::VersionReq::parse(part) {
result.solc_version = Some(solc_version);
continue;
}
if let Some(level) = part.strip_prefix("-O") {
result.llvm_optimizer_settings.push(level.to_string());
continue;
}
panic!("the YUL mode string {mode_string} failed to parse, invalid part: {part}")
}
Some(result)
}
/// Returns whether to enable the solc optimizer.
pub fn solc_optimize(&self) -> bool {
self.solc_optimize.unwrap_or(true)
}
/// Calculate the latest matching solc patch version. Returns:
/// - `latest_supported` if no version request was specified.
/// - A matching version with the same minor version as `latest_supported`, if any.
/// - `None` if no minor version of the `latest_supported` version matches.
pub fn last_patch_version(&self, latest_supported: &Version) -> Option<Version> {
let Some(version_req) = self.solc_version.as_ref() else {
return Some(latest_supported.to_owned());
};
// Walk the patch versions downwards so that the newest matching patch wins.
for patch in (0..=latest_supported.patch).rev() {
let version = Version::new(0, latest_supported.minor, patch);
if version_req.matches(&version) {
return Some(version);
}
}
None
}
/// Resolves the [`SolcMode`]'s solidity version requirement into a [`VersionOrRequirement`] if
/// the requirement is present on the object. Otherwise, the passed default version is used.
pub fn compiler_version_to_use(&self, default: Version) -> VersionOrRequirement {
match self.solc_version {
Some(ref requirement) => requirement.clone().into(),
None => default.into(),
}
}
}
impl<'de> Deserialize<'de> for Mode {
fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>
where
D: Deserializer<'de>,
{
let mode_string = String::deserialize(deserializer)?;
if let Some(solc_mode) = SolcMode::parse_from_mode_string(&mode_string) {
return Ok(Self::Solidity(solc_mode));
}
Ok(Self::Unknown(mode_string))
}
}
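A short sketch of the accepted grammar, using a hypothetical mode string (`-O3` stands in for an LLVM optimization level; the parser accepts any suffix after `-O`):
let mode = SolcMode::parse_from_mode_string("Y+ >=0.8.0 -O3").expect("a valid YUL mode string");
assert!(mode.solc_optimize()); // `Y+` switches the solc optimizer on
assert_eq!(mode.llvm_optimizer_settings, vec!["3".to_string()]);
assert!(mode.solc_version.unwrap().matches(&semver::Version::new(0, 8, 24)));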
@@ -0,0 +1,33 @@
use alloy::eips::BlockNumberOrTag;
use alloy::primitives::{Address, BlockHash, BlockNumber, BlockTimestamp, ChainId, U256};
use anyhow::Result;
/// A trait describing the interface that nodes are required to implement so that the resolution
/// logic in this crate can turn string calldata into bytes calldata.
pub trait ResolverApi {
/// Returns the ID of the chain that the node is on.
fn chain_id(&self) -> impl Future<Output = Result<ChainId>>;
// TODO: This is currently a u128 because Kitchensink needs more than 64 bits for its gas
// limit; once we implement the gas changes we should adjust this to be a u64.
/// Returns the gas limit of the specified block.
fn block_gas_limit(&self, number: BlockNumberOrTag) -> impl Future<Output = Result<u128>>;
/// Returns the coinbase of the specified block.
fn block_coinbase(&self, number: BlockNumberOrTag) -> impl Future<Output = Result<Address>>;
/// Returns the difficulty of the specified block.
fn block_difficulty(&self, number: BlockNumberOrTag) -> impl Future<Output = Result<U256>>;
/// Returns the hash of the specified block.
fn block_hash(&self, number: BlockNumberOrTag) -> impl Future<Output = Result<BlockHash>>;
/// Returns the timestamp of the specified block.
fn block_timestamp(
&self,
number: BlockNumberOrTag,
) -> impl Future<Output = Result<BlockTimestamp>>;
/// Returns the number of the last block.
fn last_block_number(&self) -> impl Future<Output = Result<BlockNumber>>;
}
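As a consumption sketch, a hypothetical helper (not part of the crate) that works against any implementor of the trait:
// Any resolver can answer block-context queries generically.
async fn describe_latest_block(resolver: &impl ResolverApi) -> Result<String> {
let number = resolver.last_block_number().await?;
let timestamp = resolver.block_timestamp(BlockNumberOrTag::Number(number)).await?;
Ok(format!("block {number} at timestamp {timestamp}"))
}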
@@ -0,0 +1,13 @@
[package]
name = "revive-dt-node-interaction"
description = "send and trace transactions to nodes"
version.workspace = true
authors.workspace = true
license.workspace = true
edition.workspace = true
repository.workspace = true
rust-version.workspace = true
[dependencies]
alloy = { workspace = true }
anyhow = { workspace = true }
@@ -0,0 +1,24 @@
//! This crate implements all node interactions.
use alloy::rpc::types::trace::geth::{DiffMode, GethDebugTracingOptions, GethTrace};
use alloy::rpc::types::{TransactionReceipt, TransactionRequest};
use anyhow::Result;
/// An interface for all interactions with Ethereum compatible nodes.
pub trait EthereumNode {
/// Execute the [TransactionRequest] and return a [TransactionReceipt].
fn execute_transaction(
&self,
transaction: TransactionRequest,
) -> impl Future<Output = Result<TransactionReceipt>>;
/// Trace the transaction in the [TransactionReceipt] and return a [GethTrace].
fn trace_transaction(
&self,
receipt: &TransactionReceipt,
trace_options: GethDebugTracingOptions,
) -> impl Future<Output = Result<GethTrace>>;
/// Returns the state diff of the transaction hash in the [TransactionReceipt].
fn state_diff(&self, receipt: &TransactionReceipt) -> impl Future<Output = Result<DiffMode>>;
}
@@ -0,0 +1,30 @@
[package]
name = "revive-dt-node"
description = "abstraction over blockchain nodes"
version.workspace = true
authors.workspace = true
license.workspace = true
edition.workspace = true
repository.workspace = true
rust-version.workspace = true
[dependencies]
anyhow = { workspace = true }
alloy = { workspace = true }
tracing = { workspace = true }
tokio = { workspace = true }
revive-dt-common = { workspace = true }
revive-dt-config = { workspace = true }
revive-dt-format = { workspace = true }
revive-dt-node-interaction = { workspace = true }
serde = { workspace = true }
serde_json = { workspace = true }
sp-core = { workspace = true }
sp-runtime = { workspace = true }
[dev-dependencies]
temp-dir = { workspace = true }
tokio = { workspace = true }
@@ -0,0 +1,78 @@
use alloy::{
network::{Network, TransactionBuilder},
providers::{
Provider, SendableTx,
fillers::{GasFiller, TxFiller},
},
transports::TransportResult,
};
#[derive(Clone, Debug)]
pub struct FallbackGasFiller {
inner: GasFiller,
default_gas_limit: u64,
default_max_fee_per_gas: u128,
default_priority_fee: u128,
}
impl FallbackGasFiller {
pub fn new(
default_gas_limit: u64,
default_max_fee_per_gas: u128,
default_priority_fee: u128,
) -> Self {
Self {
inner: GasFiller,
default_gas_limit,
default_max_fee_per_gas,
default_priority_fee,
}
}
}
impl<N> TxFiller<N> for FallbackGasFiller
where
N: Network,
{
type Fillable = Option<<GasFiller as TxFiller<N>>::Fillable>;
fn status(
&self,
tx: &<N as Network>::TransactionRequest,
) -> alloy::providers::fillers::FillerControlFlow {
<GasFiller as TxFiller<N>>::status(&self.inner, tx)
}
fn fill_sync(&self, _: &mut alloy::providers::SendableTx<N>) {}
async fn prepare<P: Provider<N>>(
&self,
provider: &P,
tx: &<N as Network>::TransactionRequest,
) -> TransportResult<Self::Fillable> {
// Try to fetch the GasFiller's fillable values (gas_price, base_fee, estimate_gas, …).
// If it errors (i.e. the tx would revert under eth_estimateGas), swallow the error.
match self.inner.prepare(provider, tx).await {
Ok(fill) => Ok(Some(fill)),
Err(_) => Ok(None),
}
}
async fn fill(
&self,
fillable: Self::Fillable,
mut tx: alloy::providers::SendableTx<N>,
) -> TransportResult<SendableTx<N>> {
if let Some(fill) = fillable {
// The inner GasFiller succeeded, so use its values.
self.inner.fill(fill, tx).await
} else {
// Gas estimation failed; fall back to the configured defaults.
if let Some(builder) = tx.as_mut_builder() {
builder.set_gas_limit(self.default_gas_limit);
builder.set_max_fee_per_gas(self.default_max_fee_per_gas);
builder.set_max_priority_fee_per_gas(self.default_priority_fee);
}
Ok(tx)
}
}
}
@@ -0,0 +1,5 @@
/// This constant defines how much Wei accounts are pre-seeded with in genesis.
///
/// Note: After changing this number, check that the tests for kitchensink work as we encountered
/// some issues with different values of the initial balance on Kitchensink.
pub const INITIAL_BALANCE: u128 = 10u128.pow(37);
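For scale, 10^37 wei works out to 10^19 ether (one ether being 10^18 wei), which comfortably covers even the 500_000_000-unit default gas limits configured for the fallback gas filler above.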
@@ -0,0 +1,679 @@
//! The go-ethereum node implementation.
use std::{
fs::{File, OpenOptions, create_dir_all, remove_dir_all},
io::{BufRead, BufReader, Read, Write},
ops::ControlFlow,
path::PathBuf,
process::{Child, Command, Stdio},
sync::{
Arc,
atomic::{AtomicU32, Ordering},
},
time::{Duration, Instant},
};
use alloy::{
eips::BlockNumberOrTag,
genesis::{Genesis, GenesisAccount},
network::{Ethereum, EthereumWallet, NetworkWallet},
primitives::{Address, BlockHash, BlockNumber, BlockTimestamp, FixedBytes, U256},
providers::{
Provider, ProviderBuilder,
ext::DebugApi,
fillers::{CachedNonceManager, ChainIdFiller, FillProvider, NonceFiller, TxFiller},
},
rpc::types::{
TransactionReceipt, TransactionRequest,
trace::geth::{DiffMode, GethDebugTracingOptions, PreStateConfig, PreStateFrame},
},
signers::local::PrivateKeySigner,
};
use tracing::{Instrument, Level};
use revive_dt_common::{fs::clear_directory, futures::poll};
use revive_dt_config::Arguments;
use revive_dt_format::traits::ResolverApi;
use revive_dt_node_interaction::EthereumNode;
use crate::{Node, common::FallbackGasFiller, constants::INITIAL_BALANCE};
static NODE_COUNT: AtomicU32 = AtomicU32::new(0);
/// The go-ethereum node instance implementation.
///
/// Implements helpers to initialize, spawn and wait the node.
///
/// Assumes dev mode and IPC only (`P2P`, `http`, etc. are kept disabled).
///
/// Prunes the child process and the base directory on drop.
#[derive(Debug)]
pub struct GethNode {
connection_string: String,
base_directory: PathBuf,
data_directory: PathBuf,
logs_directory: PathBuf,
geth: PathBuf,
id: u32,
handle: Option<Child>,
network_id: u64,
start_timeout: u64,
wallet: EthereumWallet,
nonce_manager: CachedNonceManager,
/// This vector stores the [`File`] objects used for logging, which we want to flush when the
/// node object is dropped. We do not store them in separate, structured fields at the moment,
/// as the same logic applies to all of them regardless of what they belong to: we just want
/// to flush them on [`Drop`] of the node.
logs_file_to_flush: Vec<File>,
}
impl GethNode {
const BASE_DIRECTORY: &str = "geth";
const DATA_DIRECTORY: &str = "data";
const LOGS_DIRECTORY: &str = "logs";
const IPC_FILE: &str = "geth.ipc";
const GENESIS_JSON_FILE: &str = "genesis.json";
const READY_MARKER: &str = "IPC endpoint opened";
const ERROR_MARKER: &str = "Fatal:";
const GETH_STDOUT_LOG_FILE_NAME: &str = "node_stdout.log";
const GETH_STDERR_LOG_FILE_NAME: &str = "node_stderr.log";
const TRANSACTION_INDEXING_ERROR: &str = "transaction indexing is in progress";
const TRANSACTION_TRACING_ERROR: &str = "historical state not available in path scheme yet";
const RECEIPT_POLLING_DURATION: Duration = Duration::from_secs(5 * 60);
const TRACE_POLLING_DURATION: Duration = Duration::from_secs(60);
/// Create the node directory and call `geth init` to configure the genesis.
#[tracing::instrument(skip_all, fields(geth_node_id = self.id))]
fn init(&mut self, genesis: String) -> anyhow::Result<&mut Self> {
let _ = clear_directory(&self.base_directory);
let _ = clear_directory(&self.logs_directory);
create_dir_all(&self.base_directory)?;
create_dir_all(&self.logs_directory)?;
let mut genesis = serde_json::from_str::<Genesis>(&genesis)?;
for signer_address in
<EthereumWallet as NetworkWallet<Ethereum>>::signer_addresses(&self.wallet)
{
// Note: the use of the entry API here means that we only insert balances for accounts
// that are not already present in the `alloc` field of the genesis state.
genesis
.alloc
.entry(signer_address)
.or_insert(GenesisAccount::default().with_balance(U256::from(INITIAL_BALANCE)));
}
let genesis_path = self.base_directory.join(Self::GENESIS_JSON_FILE);
serde_json::to_writer(File::create(&genesis_path)?, &genesis)?;
let mut child = Command::new(&self.geth)
.arg("--state.scheme")
.arg("hash")
.arg("init")
.arg("--datadir")
.arg(&self.data_directory)
.arg(genesis_path)
.stderr(Stdio::piped())
.stdout(Stdio::null())
.spawn()?;
let mut stderr = String::new();
child
.stderr
.take()
.expect("should be piped")
.read_to_string(&mut stderr)?;
if !child.wait()?.success() {
anyhow::bail!("failed to initialize geth node #{:?}: {stderr}", &self.id);
}
Ok(self)
}
/// Spawn the go-ethereum node child process.
///
/// [`Self::init`] must be called first.
#[tracing::instrument(skip_all, fields(geth_node_id = self.id))]
fn spawn_process(&mut self) -> anyhow::Result<&mut Self> {
// This is the `OpenOptions` that we wish to use for all of the log files opened in this
// method. We construct it this way to:
// 1. Stay consistent between the files.
// 2. Keep the code less verbose and more DRY.
// 3. Work around the builder pattern's use of mutable references.
let open_options = {
let mut options = OpenOptions::new();
options.create(true).truncate(true).write(true);
options
};
let stdout_logs_file = open_options
.clone()
.open(self.geth_stdout_log_file_path())?;
let stderr_logs_file = open_options.open(self.geth_stderr_log_file_path())?;
self.handle = Command::new(&self.geth)
.arg("--dev")
.arg("--datadir")
.arg(&self.data_directory)
.arg("--ipcpath")
.arg(&self.connection_string)
.arg("--networkid")
.arg(self.network_id.to_string())
.arg("--nodiscover")
.arg("--maxpeers")
.arg("0")
.arg("--txlookuplimit")
.arg("0")
.arg("--cache.blocklogs")
.arg("512")
.arg("--state.scheme")
.arg("hash")
.arg("--syncmode")
.arg("full")
.arg("--gcmode")
.arg("archive")
.stderr(stderr_logs_file.try_clone()?)
.stdout(stdout_logs_file.try_clone()?)
.spawn()?
.into();
if let Err(error) = self.wait_ready() {
tracing::error!(?error, "Failed to start geth, shutting down gracefully");
self.shutdown()?;
return Err(error);
}
self.logs_file_to_flush
.extend([stderr_logs_file, stdout_logs_file]);
Ok(self)
}
/// Wait for the go-ethereum node child process to become ready.
///
/// [`Self::spawn_process`] must be called first.
#[tracing::instrument(skip_all, fields(geth_node_id = self.id))]
fn wait_ready(&mut self) -> anyhow::Result<&mut Self> {
let start_time = Instant::now();
let logs_file = OpenOptions::new()
.read(true)
.write(false)
.append(false)
.truncate(false)
.open(self.geth_stderr_log_file_path())?;
let maximum_wait_time = Duration::from_millis(self.start_timeout);
let mut stderr = BufReader::new(logs_file).lines();
loop {
if let Some(Ok(line)) = stderr.next() {
if line.contains(Self::ERROR_MARKER) {
anyhow::bail!("Failed to start geth {line}");
}
if line.contains(Self::READY_MARKER) {
return Ok(self);
}
}
if Instant::now().duration_since(start_time) > maximum_wait_time {
anyhow::bail!("Timeout in starting geth");
}
}
}
#[tracing::instrument(skip_all, fields(geth_node_id = self.id), level = Level::TRACE)]
fn geth_stdout_log_file_path(&self) -> PathBuf {
self.logs_directory.join(Self::GETH_STDOUT_LOG_FILE_NAME)
}
#[tracing::instrument(skip_all, fields(geth_node_id = self.id), level = Level::TRACE)]
fn geth_stderr_log_file_path(&self) -> PathBuf {
self.logs_directory.join(Self::GETH_STDERR_LOG_FILE_NAME)
}
fn provider(
&self,
) -> impl Future<
Output = anyhow::Result<
FillProvider<impl TxFiller<Ethereum>, impl Provider<Ethereum>, Ethereum>,
>,
> + 'static {
let connection_string = self.connection_string();
let wallet = self.wallet.clone();
// Note: We would like all providers to make use of the same nonce manager so that we have
// monotonically increasing nonces that are cached. The cached nonce manager uses `Arc`s
// internally, so a clone still references the same shared state.
let nonce_manager = self.nonce_manager.clone();
Box::pin(async move {
ProviderBuilder::new()
.disable_recommended_fillers()
.filler(FallbackGasFiller::new(500_000_000, 500_000_000, 1))
.filler(ChainIdFiller::default())
.filler(NonceFiller::new(nonce_manager))
.wallet(wallet)
.connect(&connection_string)
.await
.map_err(Into::into)
})
}
}
impl EthereumNode for GethNode {
#[tracing::instrument(level = "info", skip_all, fields(geth_node_id = self.id))]
async fn execute_transaction(
&self,
transaction: TransactionRequest,
) -> anyhow::Result<alloy::rpc::types::TransactionReceipt> {
let span = tracing::debug_span!("Submitting transaction", ?transaction);
let _guard = span.enter();
let provider = Arc::new(self.provider().await?);
let transaction_hash = *provider.send_transaction(transaction).await?.tx_hash();
// The following is a fix for the "transaction indexing is in progress" error that we
// used to get. You can find more information on this in the following GH issue in geth
// https://github.com/ethereum/go-ethereum/issues/28877. To summarize what's going on,
// before we can get the receipt of the transaction it needs to have been indexed by the
// node's indexer. Just because the transaction has been confirmed it doesn't mean that
// it has been indexed. When we call alloy's `get_receipt` it checks if the transaction
// was confirmed. If it has been, then it will call the `eth_getTransactionReceipt` method,
// which _might_ return the above error if the tx has not been indexed yet. So, we
// need to implement a retry mechanism for the receipt to keep retrying to get it until
// it eventually works, but we only do that if the error we get back is the "transaction
// indexing is in progress" error or if the receipt is None.
//
// Getting the transaction indexed and taking a receipt can take a long time especially
// when a lot of transactions are being submitted to the node. Thus, while initially we
// only allowed for 60 seconds of waiting with a 1 second delay in polling, we need to
// allow for a larger wait time. Therefore, in here we allow for 5 minutes of waiting
// with exponential backoff each time we attempt to get the receipt and find that it's
// not available.
poll(
Self::RECEIPT_POLLING_DURATION,
Default::default(),
move || {
let provider = provider.clone();
async move {
match provider.get_transaction_receipt(transaction_hash).await {
Ok(Some(receipt)) => Ok(ControlFlow::Break(receipt)),
Ok(None) => Ok(ControlFlow::Continue(())),
Err(error) => {
let error_string = error.to_string();
match error_string.contains(Self::TRANSACTION_INDEXING_ERROR) {
true => Ok(ControlFlow::Continue(())),
false => Err(error.into()),
}
}
}
}
},
)
.instrument(tracing::info_span!(
"Awaiting transaction receipt",
?transaction_hash
))
.await
}
#[tracing::instrument(level = "info", skip_all, fields(geth_node_id = self.id))]
async fn trace_transaction(
&self,
transaction: &TransactionReceipt,
trace_options: GethDebugTracingOptions,
) -> anyhow::Result<alloy::rpc::types::trace::geth::GethTrace> {
let provider = Arc::new(self.provider().await?);
poll(
Self::TRACE_POLLING_DURATION,
Default::default(),
move || {
let provider = provider.clone();
let trace_options = trace_options.clone();
async move {
match provider
.debug_trace_transaction(transaction.transaction_hash, trace_options)
.await
{
Ok(trace) => Ok(ControlFlow::Break(trace)),
Err(error) => {
let error_string = error.to_string();
match error_string.contains(Self::TRANSACTION_TRACING_ERROR) {
true => Ok(ControlFlow::Continue(())),
false => Err(error.into()),
}
}
}
}
},
)
.await
}
#[tracing::instrument(skip_all, fields(geth_node_id = self.id))]
async fn state_diff(&self, transaction: &TransactionReceipt) -> anyhow::Result<DiffMode> {
let trace_options = GethDebugTracingOptions::prestate_tracer(PreStateConfig {
diff_mode: Some(true),
disable_code: None,
disable_storage: None,
});
match self
.trace_transaction(transaction, trace_options)
.await?
.try_into_pre_state_frame()?
{
PreStateFrame::Diff(diff) => Ok(diff),
_ => anyhow::bail!("expected a diff mode trace"),
}
}
}
impl ResolverApi for GethNode {
#[tracing::instrument(skip_all, fields(geth_node_id = self.id))]
async fn chain_id(&self) -> anyhow::Result<alloy::primitives::ChainId> {
self.provider()
.await?
.get_chain_id()
.await
.map_err(Into::into)
}
#[tracing::instrument(skip_all, fields(geth_node_id = self.id))]
async fn block_gas_limit(&self, number: BlockNumberOrTag) -> anyhow::Result<u128> {
self.provider()
.await?
.get_block_by_number(number)
.await?
.ok_or(anyhow::Error::msg("Blockchain has no blocks"))
.map(|block| block.header.gas_limit as _)
}
#[tracing::instrument(skip_all, fields(geth_node_id = self.id))]
async fn block_coinbase(&self, number: BlockNumberOrTag) -> anyhow::Result<Address> {
self.provider()
.await?
.get_block_by_number(number)
.await?
.ok_or(anyhow::Error::msg("Blockchain has no blocks"))
.map(|block| block.header.beneficiary)
}
#[tracing::instrument(skip_all, fields(geth_node_id = self.id))]
async fn block_difficulty(&self, number: BlockNumberOrTag) -> anyhow::Result<U256> {
self.provider()
.await?
.get_block_by_number(number)
.await?
.ok_or(anyhow::Error::msg("Blockchain has no blocks"))
.map(|block| block.header.difficulty)
}
#[tracing::instrument(skip_all, fields(geth_node_id = self.id))]
async fn block_hash(&self, number: BlockNumberOrTag) -> anyhow::Result<BlockHash> {
self.provider()
.await?
.get_block_by_number(number)
.await?
.ok_or(anyhow::Error::msg("Blockchain has no blocks"))
.map(|block| block.header.hash)
}
#[tracing::instrument(skip_all, fields(geth_node_id = self.id))]
async fn block_timestamp(&self, number: BlockNumberOrTag) -> anyhow::Result<BlockTimestamp> {
self.provider()
.await?
.get_block_by_number(number)
.await?
.ok_or(anyhow::Error::msg("Blockchain has no blocks"))
.map(|block| block.header.timestamp)
}
#[tracing::instrument(skip_all, fields(geth_node_id = self.id))]
async fn last_block_number(&self) -> anyhow::Result<BlockNumber> {
self.provider()
.await?
.get_block_number()
.await
.map_err(Into::into)
}
}
impl Node for GethNode {
fn new(config: &Arguments) -> Self {
let geth_directory = config.directory().join(Self::BASE_DIRECTORY);
let id = NODE_COUNT.fetch_add(1, Ordering::SeqCst);
let base_directory = geth_directory.join(id.to_string());
let mut wallet = config.wallet();
for signer in (1..=config.private_keys_to_add)
.map(|id| U256::from(id))
.map(|id| id.to_be_bytes::<32>())
.map(|id| PrivateKeySigner::from_bytes(&FixedBytes(id)).unwrap())
{
wallet.register_signer(signer);
}
Self {
connection_string: base_directory.join(Self::IPC_FILE).display().to_string(),
data_directory: base_directory.join(Self::DATA_DIRECTORY),
logs_directory: base_directory.join(Self::LOGS_DIRECTORY),
base_directory,
geth: config.geth.clone(),
id,
handle: None,
network_id: config.network_id,
start_timeout: config.geth_start_timeout,
wallet,
// We know that we only need to be storing 2 files so we can specify that when creating
// the vector. It's the stdout and stderr of the geth node.
logs_file_to_flush: Vec::with_capacity(2),
nonce_manager: Default::default(),
}
}
#[tracing::instrument(skip_all, fields(geth_node_id = self.id))]
fn connection_string(&self) -> String {
self.connection_string.clone()
}
#[tracing::instrument(skip_all, fields(geth_node_id = self.id))]
fn shutdown(&mut self) -> anyhow::Result<()> {
// Terminate the processes in a graceful manner to allow for the output to be flushed.
if let Some(mut child) = self.handle.take() {
child
.kill()
.map_err(|error| anyhow::anyhow!("Failed to kill the geth process: {error:?}"))?;
}
// Flushing the files that we're using for keeping the logs before shutdown.
for file in self.logs_file_to_flush.iter_mut() {
file.flush()?
}
// Remove the node's database so that subsequent runs do not run on the same database. We
// ignore the error just in case the directory didn't exist in the first place and therefore
// there's nothing to be deleted.
let _ = remove_dir_all(self.base_directory.join(Self::DATA_DIRECTORY));
Ok(())
}
#[tracing::instrument(skip_all, fields(geth_node_id = self.id))]
fn spawn(&mut self, genesis: String) -> anyhow::Result<()> {
self.init(genesis)?.spawn_process()?;
Ok(())
}
#[tracing::instrument(skip_all, fields(geth_node_id = self.id))]
fn version(&self) -> anyhow::Result<String> {
let output = Command::new(&self.geth)
.arg("--version")
.stdin(Stdio::null())
.stdout(Stdio::piped())
.stderr(Stdio::null())
.spawn()?
.wait_with_output()?
.stdout;
Ok(String::from_utf8_lossy(&output).into())
}
#[tracing::instrument(skip_all, fields(geth_node_id = self.id))]
fn matches_target(&self, targets: Option<&[String]>) -> bool {
match targets {
None => true,
Some(targets) => targets.iter().any(|str| str.as_str() == "evm"),
}
}
}
impl Drop for GethNode {
#[tracing::instrument(skip_all, fields(geth_node_id = self.id))]
fn drop(&mut self) {
self.shutdown().expect("Failed to shutdown")
}
}
#[cfg(test)]
mod tests {
use revive_dt_config::Arguments;
use temp_dir::TempDir;
use crate::{GENESIS_JSON, Node};
use super::*;
fn test_config() -> (Arguments, TempDir) {
let mut config = Arguments::default();
let temp_dir = TempDir::new().unwrap();
config.working_directory = temp_dir.path().to_path_buf().into();
(config, temp_dir)
}
fn new_node() -> (GethNode, TempDir) {
let (args, temp_dir) = test_config();
let mut node = GethNode::new(&args);
node.init(GENESIS_JSON.to_owned())
.expect("Failed to initialize the node")
.spawn_process()
.expect("Failed to spawn the node process");
(node, temp_dir)
}
#[test]
fn init_works() {
GethNode::new(&test_config().0)
.init(GENESIS_JSON.to_string())
.unwrap();
}
#[test]
fn spawn_works() {
GethNode::new(&test_config().0)
.spawn(GENESIS_JSON.to_string())
.unwrap();
}
#[test]
fn version_works() {
let version = GethNode::new(&test_config().0).version().unwrap();
assert!(
version.starts_with("geth version"),
"expected version string, got: '{version}'"
);
}
#[tokio::test]
async fn can_get_chain_id_from_node() {
// Arrange
let (node, _temp_dir) = new_node();
// Act
let chain_id = node.chain_id().await;
// Assert
let chain_id = chain_id.expect("Failed to get the chain id");
assert_eq!(chain_id, 420_420_420);
}
#[tokio::test]
async fn can_get_gas_limit_from_node() {
// Arrange
let (node, _temp_dir) = new_node();
// Act
let gas_limit = node.block_gas_limit(BlockNumberOrTag::Latest).await;
// Assert
let gas_limit = gas_limit.expect("Failed to get the gas limit");
assert_eq!(gas_limit, u32::MAX as u128)
}
#[tokio::test]
async fn can_get_coinbase_from_node() {
// Arrange
let (node, _temp_dir) = new_node();
// Act
let coinbase = node.block_coinbase(BlockNumberOrTag::Latest).await;
// Assert
let coinbase = coinbase.expect("Failed to get the coinbase");
assert_eq!(coinbase, Address::new([0xFF; 20]))
}
#[tokio::test]
async fn can_get_block_difficulty_from_node() {
// Arrange
let (node, _temp_dir) = new_node();
// Act
let block_difficulty = node.block_difficulty(BlockNumberOrTag::Latest).await;
// Assert
let block_difficulty = block_difficulty.expect("Failed to get the block difficulty");
assert_eq!(block_difficulty, U256::ZERO)
}
#[tokio::test]
async fn can_get_block_hash_from_node() {
// Arrange
let (node, _temp_dir) = new_node();
// Act
let block_hash = node.block_hash(BlockNumberOrTag::Latest).await;
// Assert
let _ = block_hash.expect("Failed to get the block hash");
}
#[tokio::test]
async fn can_get_block_timestamp_from_node() {
// Arrange
let (node, _temp_dir) = new_node();
// Act
let block_timestamp = node.block_timestamp(BlockNumberOrTag::Latest).await;
// Assert
let _ = block_timestamp.expect("Failed to get the block timestamp");
}
#[tokio::test]
async fn can_get_block_number_from_node() {
// Arrange
let (node, _temp_dir) = new_node();
// Act
let block_number = node.last_block_number().await;
// Assert
let block_number = block_number.expect("Failed to get the block number");
assert_eq!(block_number, 0)
}
}
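The receipt and trace retries above lean on `revive_dt_common::futures::poll`, whose implementation is not shown in this diff. A minimal sketch of a capped-exponential-backoff poller with the assumed semantics (the cap, initial delay, and signature are all guesses, not the crate's actual API):
use std::future::Future;
use std::ops::ControlFlow;
use std::time::{Duration, Instant};
// Retry `f` until it breaks with a value, errors, or `timeout` elapses,
// doubling the wait between attempts up to a fixed maximum.
async fn poll_sketch<T, F, Fut>(timeout: Duration, mut f: F) -> anyhow::Result<T>
where
F: FnMut() -> Fut,
Fut: Future<Output = anyhow::Result<ControlFlow<T>>>,
{
let started = Instant::now();
let mut wait = Duration::from_millis(100);
const MAX_WAIT: Duration = Duration::from_secs(10);
loop {
if let ControlFlow::Break(value) = f().await? {
return Ok(value);
}
if started.elapsed() > timeout {
anyhow::bail!("polling timed out after {timeout:?}");
}
tokio::time::sleep(wait).await;
wait = (wait * 2).min(MAX_WAIT);
}
}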
File diff suppressed because it is too large
@@ -0,0 +1,39 @@
//! This crate implements the testing nodes.
use revive_dt_config::Arguments;
use revive_dt_node_interaction::EthereumNode;
pub mod common;
pub mod constants;
pub mod geth;
pub mod kitchensink;
pub mod pool;
/// The default genesis configuration.
pub const GENESIS_JSON: &str = include_str!("../../../genesis.json");
/// An abstract interface for testing nodes.
pub trait Node: EthereumNode {
/// Create a new uninitialized instance.
fn new(config: &Arguments) -> Self;
/// Spawns a node configured according to the genesis json.
///
/// Blocking until it's ready to accept transactions.
fn spawn(&mut self, genesis: String) -> anyhow::Result<()>;
/// Prune the node instance and related data.
///
/// Blocking until it's completely stopped.
fn shutdown(&mut self) -> anyhow::Result<()>;
/// Returns the node's connection string.
fn connection_string(&self) -> String;
/// Returns the node version.
fn version(&self) -> anyhow::Result<String>;
/// Given a list of targets from the metadata file, this function determines whether the
/// metadata file can be run on this node or not.
fn matches_target(&self, targets: Option<&[String]>) -> bool;
}
@@ -0,0 +1,68 @@
//! This crate implements concurrent handling of testing nodes.
use std::{
fs::read_to_string,
sync::atomic::{AtomicUsize, Ordering},
thread,
};
use anyhow::Context;
use revive_dt_config::Arguments;
use crate::Node;
/// The node pool starts one or more [Node]s which can then be accessed
/// in a round-robin fashion.
pub struct NodePool<T> {
next: AtomicUsize,
nodes: Vec<T>,
}
impl<T> NodePool<T>
where
T: Node + Send + 'static,
{
/// Create a new pool. This will start as many nodes as `config.number_of_nodes` specifies.
pub fn new(config: &Arguments) -> anyhow::Result<Self> {
let nodes = config.number_of_nodes;
let genesis = read_to_string(&config.genesis_file).context(format!(
"can not read genesis file: {}",
config.genesis_file.display()
))?;
let mut handles = Vec::with_capacity(nodes);
for _ in 0..nodes {
let config = config.clone();
let genesis = genesis.clone();
handles.push(thread::spawn(move || spawn_node::<T>(&config, genesis)));
}
let mut nodes = Vec::with_capacity(nodes);
for handle in handles {
nodes.push(
handle
.join()
.map_err(|error| anyhow::anyhow!("failed to spawn node: {:?}", error))?
.map_err(|error| anyhow::anyhow!("node failed to spawn: {error}"))?,
);
}
Ok(Self {
nodes,
next: Default::default(),
})
}
/// Get a handle to the next node.
pub fn round_robbin(&self) -> &T {
let current = self.next.fetch_add(1, Ordering::SeqCst) % self.nodes.len();
self.nodes.get(current).unwrap()
}
}
fn spawn_node<T: Node + Send>(args: &Arguments, genesis: String) -> anyhow::Result<T> {
let mut node = T::new(args);
tracing::info!("starting node: {}", node.connection_string());
node.spawn(genesis)?;
Ok(node)
}
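A hypothetical usage sketch, assuming `Arguments::default()` points at a valid genesis file and geth binary:
let config = Arguments::default();
let pool: NodePool<GethNode> = NodePool::new(&config)?; // spawns `config.number_of_nodes` nodes
let node = pool.round_robbin(); // successive calls rotate through the pool
println!("{}", node.version()?);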
@@ -0,0 +1,18 @@
[package]
name = "revive-dt-report"
version.workspace = true
authors.workspace = true
license.workspace = true
edition.workspace = true
repository.workspace = true
rust-version.workspace = true
[dependencies]
revive-dt-config = { workspace = true }
revive-dt-format = { workspace = true }
revive-dt-compiler = { workspace = true }
anyhow = { workspace = true }
tracing = { workspace = true }
serde = { workspace = true }
serde_json = { workspace = true }
@@ -0,0 +1,81 @@
//! The report analyzer enriches the raw report data.
use revive_dt_compiler::CompilerOutput;
use serde::{Deserialize, Serialize};
use crate::reporter::CompilationTask;
/// Provides insights into how well the compilers perform.
#[derive(Clone, Default, Debug, Serialize, Deserialize, PartialEq, PartialOrd)]
pub struct CompilerStatistics {
/// The sum of contracts observed.
pub n_contracts: usize,
/// The mean size of compiled contracts.
pub mean_code_size: usize,
/// The mean size of the optimized YUL IR.
pub mean_yul_size: usize,
/// Only a proxy, because the YUL also contains a lot of comments.
pub yul_to_bytecode_size_ratio: f32,
}
impl CompilerStatistics {
/// Cumulatively update the statistics with the next compiler task.
pub fn sample(&mut self, compilation_task: &CompilationTask) {
let Some(CompilerOutput { contracts }) = &compilation_task.json_output else {
return;
};
for (_solidity, contracts) in contracts.iter() {
for (_name, (bytecode, _)) in contracts.iter() {
// The EVM bytecode can be unlinked and thus is not necessarily a decodable hex
// string; for our statistics this is a good enough approximation.
let bytecode_size = bytecode.len() / 2;
// TODO: for the time being we set the yul_size to be zero. We need to change this
// when we overhaul the reporting.
self.update_sizes(bytecode_size, 0);
}
}
}
/// Updates the size statistics cumulatively.
fn update_sizes(&mut self, bytecode_size: usize, yul_size: usize) {
let n_previous = self.n_contracts;
let n_current = self.n_contracts + 1;
self.n_contracts = n_current;
self.mean_code_size = (n_previous * self.mean_code_size + bytecode_size) / n_current;
self.mean_yul_size = (n_previous * self.mean_yul_size + yul_size) / n_current;
if self.mean_code_size > 0 {
self.yul_to_bytecode_size_ratio =
self.mean_yul_size as f32 / self.mean_code_size as f32;
}
}
}
#[cfg(test)]
mod tests {
use super::CompilerStatistics;
#[test]
fn compiler_statistics() {
let mut received = CompilerStatistics::default();
received.update_sizes(0, 0);
received.update_sizes(3, 37);
received.update_sizes(123, 456);
let mean_code_size = 41; // rounding error from integer truncation
let mean_yul_size = 164;
let expected = CompilerStatistics {
n_contracts: 3,
mean_code_size,
mean_yul_size,
yul_to_bytecode_size_ratio: mean_yul_size as f32 / mean_code_size as f32,
};
assert_eq!(received, expected);
}
}
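Worked through the integer arithmetic of the test above: sampling sizes 0, 3 and 123 yields cumulative means 0, then (1 * 0 + 3) / 2 = 1, then (2 * 1 + 123) / 3 = 125 / 3 = 41, whereas the exact mean would be 126 / 3 = 42; the truncation at each step is what the `mean_code_size = 41` comment refers to.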
@@ -0,0 +1,4 @@
//! The revive differential tests reporting facility.
pub mod analyzer;
pub mod reporter;
@@ -0,0 +1,235 @@
//! The reporter is the central place observing test execution by collecting data.
//!
//! The data collected gives useful insights into the outcome of the test run
//! and helps identifying and reproducing failing cases.
use std::{
collections::HashMap,
fs::{self, File, create_dir_all},
path::PathBuf,
sync::{Mutex, OnceLock},
time::{SystemTime, UNIX_EPOCH},
};
use anyhow::Context;
use revive_dt_compiler::{CompilerInput, CompilerOutput};
use serde::{Deserialize, Serialize};
use revive_dt_config::{Arguments, TestingPlatform};
use revive_dt_format::{corpus::Corpus, mode::SolcMode};
use crate::analyzer::CompilerStatistics;
pub(crate) static REPORTER: OnceLock<Mutex<Report>> = OnceLock::new();
/// The `Report` data structure stores all relevant information required for generating reports.
#[derive(Clone, Debug, Default, Serialize, Deserialize)]
pub struct Report {
/// The configuration used during the test.
pub config: Arguments,
/// The observed test corpora.
pub corpora: Vec<Corpus>,
/// The observed test definitions.
pub metadata_files: Vec<PathBuf>,
/// The observed compilation results.
pub compiler_results: HashMap<TestingPlatform, Vec<CompilationResult>>,
/// The observed compilation statistics.
pub compiler_statistics: HashMap<TestingPlatform, CompilerStatistics>,
/// The directory this report is serialized into.
#[serde(skip)]
directory: PathBuf,
}
/// Contains a compiled contract.
#[derive(Clone, Debug, Serialize, Deserialize)]
pub struct CompilationTask {
/// The observed compiler input.
pub json_input: CompilerInput,
/// The observed compiler output.
pub json_output: Option<CompilerOutput>,
/// The observed compiler mode.
pub mode: SolcMode,
/// The observed compiler version.
pub compiler_version: String,
/// The observed error, if any.
pub error: Option<String>,
}
/// Represents a report about a compilation task.
#[derive(Clone, Debug, Serialize, Deserialize)]
pub struct CompilationResult {
/// The observed compilation task.
pub compilation_task: CompilationTask,
/// The linked span.
pub span: Span,
}
/// The [Span] struct indicates the context of what is being reported.
#[derive(Clone, Copy, Debug, Serialize, Deserialize)]
pub struct Span {
/// The corpus index this belongs to.
corpus: usize,
/// The index of the metadata file this belongs to.
metadata_file: usize,
/// The index of the case definition this belongs to.
case: usize,
/// The index of the case input this belongs to.
input: usize,
}
impl Report {
/// The file name where this report will be written to.
pub const FILE_NAME: &str = "report.json";
/// The [Span] is expected to initialize the reporter by providing the config.
const INITIALIZED_VIA_SPAN: &str = "requires a Span which initializes the reporter";
/// Create a new [Report].
fn new(config: Arguments) -> anyhow::Result<Self> {
let now = SystemTime::now()
.duration_since(UNIX_EPOCH)
.unwrap()
.as_millis();
let directory = config.directory().join("report").join(format!("{now}"));
if !directory.exists() {
create_dir_all(&directory)?;
}
Ok(Self {
config,
directory,
..Default::default()
})
}
/// Add a compilation task to the report.
pub fn compilation(span: Span, platform: TestingPlatform, compilation_task: CompilationTask) {
let mut report = REPORTER
.get()
.expect(Report::INITIALIZED_VIA_SPAN)
.lock()
.unwrap();
report
.compiler_statistics
.entry(platform)
.or_default()
.sample(&compilation_task);
report
.compiler_results
.entry(platform)
.or_default()
.push(CompilationResult {
compilation_task,
span,
});
}
/// Write the report to disk.
pub fn save() -> anyhow::Result<()> {
let Some(reporter) = REPORTER.get() else {
return Ok(());
};
let report = reporter.lock().unwrap();
if let Err(error) = report.write_to_file() {
anyhow::bail!("can not write report: {error}");
}
if report.config.extract_problems {
if let Err(error) = report.save_compiler_problems() {
anyhow::bail!("can not write compiler problems: {error}");
}
}
Ok(())
}
/// Write compiler problems to disk for later debugging.
pub fn save_compiler_problems(&self) -> anyhow::Result<()> {
for (platform, results) in self.compiler_results.iter() {
for result in results {
// ignore if there were no errors
if result.compilation_task.error.is_none() {
continue;
}
let path = &self.metadata_files[result.span.metadata_file]
.parent()
.unwrap()
.join(format!("{platform}_errors"));
if !path.exists() {
create_dir_all(path)?;
}
if let Some(error) = result.compilation_task.error.as_ref() {
fs::write(path.join("compiler_error.txt"), error)?;
}
if let Some(errors) = result.compilation_task.json_output.as_ref() {
let file = File::create(path.join("compiler_output.txt"))?;
serde_json::to_writer_pretty(file, &errors)?;
}
}
}
Ok(())
}
fn write_to_file(&self) -> anyhow::Result<()> {
let path = self.directory.join(Self::FILE_NAME);
let file = File::create(&path).context(path.display().to_string())?;
serde_json::to_writer_pretty(file, &self)?;
tracing::info!("report written to: {}", path.display());
Ok(())
}
}
impl Span {
/// Create a new [Span] with case and input index at 0.
///
/// Initializes the reporting facility on the first call.
pub fn new(corpus: Corpus, config: Arguments) -> anyhow::Result<Self> {
let report = Mutex::new(Report::new(config)?);
let mut reporter = REPORTER.get_or_init(|| report).lock().unwrap();
reporter.corpora.push(corpus);
Ok(Self {
corpus: reporter.corpora.len() - 1,
metadata_file: 0,
case: 0,
input: 0,
})
}
/// Advance to the next metadata file: resets the case and input indices to 0.
pub fn next_metadata(&mut self, metadata_file: PathBuf) {
let mut reporter = REPORTER
.get()
.expect(Report::INITIALIZED_VIA_SPAN)
.lock()
.unwrap();
reporter.metadata_files.push(metadata_file);
self.metadata_file = reporter.metadata_files.len() - 1;
self.case = 0;
self.input = 0;
}
/// Advance to the next case: increases the case index by one and resets the input index to 0.
pub fn next_case(&mut self) {
self.case += 1;
self.input = 0;
}
/// Advance to the next input.
pub fn next_input(&mut self) {
self.input += 1;
}
}
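A sketch of the intended lifecycle, with `corpus`, `config`, `platform`, and `compilation_task` assumed to be constructed elsewhere:
let mut span = Span::new(corpus, config)?; // the first call initializes the global REPORTER
span.next_metadata(PathBuf::from("tests/erc20.json")); // hypothetical metadata file
// ... compile, then record the outcome against the current span:
Report::compilation(span, platform, compilation_task);
span.next_case(); // move on to the next case within the same metadata file
Report::save()?; // finally persist report.json (and compiler problems, if configured)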
@@ -0,0 +1,21 @@
[package]
name = "revive-dt-solc-binaries"
description = "Download and cache solc binaries"
version.workspace = true
authors.workspace = true
license.workspace = true
edition.workspace = true
repository.workspace = true
rust-version.workspace = true
[dependencies]
revive-dt-common = { workspace = true }
anyhow = { workspace = true }
hex = { workspace = true }
tracing = { workspace = true }
tokio = { workspace = true }
reqwest = { workspace = true }
semver = { workspace = true }
serde = { workspace = true }
sha2 = { workspace = true }
@@ -0,0 +1,73 @@
//! Helper for caching the solc binaries.
use std::{
collections::HashSet,
fs::{File, create_dir_all},
io::{BufWriter, Write},
os::unix::fs::PermissionsExt,
path::{Path, PathBuf},
sync::LazyLock,
};
use tokio::sync::Mutex;
use crate::download::SolcDownloader;
pub const SOLC_CACHE_DIRECTORY: &str = "solc";
pub(crate) static SOLC_CACHER: LazyLock<Mutex<HashSet<PathBuf>>> = LazyLock::new(Default::default);
pub(crate) async fn get_or_download(
working_directory: &Path,
downloader: &SolcDownloader,
) -> anyhow::Result<PathBuf> {
let target_directory = working_directory
.join(SOLC_CACHE_DIRECTORY)
.join(downloader.version.to_string());
let target_file = target_directory.join(downloader.target);
let mut cache = SOLC_CACHER.lock().await;
if cache.contains(&target_file) {
tracing::debug!("using cached solc: {}", target_file.display());
return Ok(target_file);
}
create_dir_all(target_directory)?;
download_to_file(&target_file, downloader).await?;
cache.insert(target_file.clone());
Ok(target_file)
}
async fn download_to_file(path: &Path, downloader: &SolcDownloader) -> anyhow::Result<()> {
tracing::info!("caching file: {}", path.display());
let Ok(file) = File::create_new(path) else {
tracing::debug!("cache file already exists: {}", path.display());
return Ok(());
};
#[cfg(unix)]
{
let mut permissions = file.metadata()?.permissions();
permissions.set_mode(permissions.mode() | 0o111);
file.set_permissions(permissions)?;
}
let mut file = BufWriter::new(file);
file.write_all(&downloader.download().await?)?;
file.flush()?;
drop(file);
#[cfg(target_os = "macos")]
std::process::Command::new("xattr")
.arg("-d")
.arg("com.apple.quarantine")
.arg(path)
.stderr(std::process::Stdio::null())
.stdout(std::process::Stdio::null())
.spawn()?
.wait()?;
Ok(())
}
@@ -0,0 +1,191 @@
//! This module downloads solc binaries.
use std::{
collections::HashMap,
sync::{LazyLock, Mutex},
};
use revive_dt_common::types::VersionOrRequirement;
use semver::Version;
use sha2::{Digest, Sha256};
use crate::list::List;
pub static LIST_CACHE: LazyLock<Mutex<HashMap<&'static str, List>>> =
LazyLock::new(Default::default);
impl List {
pub const LINUX_URL: &str = "https://binaries.soliditylang.org/linux-amd64/list.json";
pub const WINDOWS_URL: &str = "https://binaries.soliditylang.org/windows-amd64/list.json";
pub const MACOSX_URL: &str = "https://binaries.soliditylang.org/macosx-amd64/list.json";
pub const WASM_URL: &str = "https://binaries.soliditylang.org/wasm/list.json";
/// Try to download the list from the given URL.
///
/// Caches the list retrieved from the `url` into [LIST_CACHE],
/// subsequent calls with the same `url` will return the cached list.
pub async fn download(url: &'static str) -> anyhow::Result<Self> {
if let Some(list) = LIST_CACHE.lock().unwrap().get(url) {
return Ok(list.clone());
}
let body: List = reqwest::get(url).await?.json().await?;
LIST_CACHE.lock().unwrap().insert(url, body.clone());
Ok(body)
}
}
/// Download solc binaries from the official SolidityLang site
#[derive(Clone, Debug)]
pub struct SolcDownloader {
pub version: Version,
pub target: &'static str,
pub list: &'static str,
}
impl SolcDownloader {
pub const BASE_URL: &str = "https://binaries.soliditylang.org";
pub const LINUX_NAME: &str = "linux-amd64";
pub const MACOSX_NAME: &str = "macosx-amd64";
pub const WINDOWS_NAME: &str = "windows-amd64";
pub const WASM_NAME: &str = "wasm";
async fn new(
version: impl Into<VersionOrRequirement>,
target: &'static str,
list: &'static str,
) -> anyhow::Result<Self> {
let version_or_requirement = version.into();
match version_or_requirement {
VersionOrRequirement::Version(version) => Ok(Self {
version,
target,
list,
}),
VersionOrRequirement::Requirement(requirement) => {
let Some(version) = List::download(list)
.await?
.builds
.into_iter()
.map(|build| build.version)
.filter(|version| requirement.matches(version))
.max()
else {
anyhow::bail!("Failed to find a version that satisfies {requirement:?}");
};
Ok(Self {
version,
target,
list,
})
}
}
}
pub async fn linux(version: impl Into<VersionOrRequirement>) -> anyhow::Result<Self> {
Self::new(version, Self::LINUX_NAME, List::LINUX_URL).await
}
pub async fn macosx(version: impl Into<VersionOrRequirement>) -> anyhow::Result<Self> {
Self::new(version, Self::MACOSX_NAME, List::MACOSX_URL).await
}
pub async fn windows(version: impl Into<VersionOrRequirement>) -> anyhow::Result<Self> {
Self::new(version, Self::WINDOWS_NAME, List::WINDOWS_URL).await
}
pub async fn wasm(version: impl Into<VersionOrRequirement>) -> anyhow::Result<Self> {
Self::new(version, Self::WASM_NAME, List::WASM_URL).await
}
/// Download the solc binary.
///
/// Errors out if the download fails or the digest of the downloaded file
/// mismatches the expected digest from the release [List].
pub async fn download(&self) -> anyhow::Result<Vec<u8>> {
tracing::info!("downloading solc: {self:?}");
let builds = List::download(self.list).await?.builds;
let build = builds
.iter()
.find(|build| build.version == self.version)
.ok_or_else(|| anyhow::anyhow!("solc v{} not found builds", self.version))?;
let path = build.path.clone();
let expected_digest = build
.sha256
.strip_prefix("0x")
.unwrap_or(&build.sha256)
.to_string();
let url = format!("{}/{}/{}", Self::BASE_URL, self.target, path.display());
let file = reqwest::get(url).await?.bytes().await?.to_vec();
if hex::encode(Sha256::digest(&file)) != expected_digest {
anyhow::bail!("sha256 mismatch for solc version {}", self.version);
}
Ok(file)
}
}
#[cfg(test)]
mod tests {
use crate::{download::SolcDownloader, list::List};
#[tokio::test]
async fn try_get_windows() {
let version = List::download(List::WINDOWS_URL)
.await
.unwrap()
.latest_release;
SolcDownloader::windows(version)
.await
.unwrap()
.download()
.await
.unwrap();
}
#[tokio::test]
async fn try_get_macosx() {
let version = List::download(List::MACOSX_URL)
.await
.unwrap()
.latest_release;
SolcDownloader::macosx(version)
.await
.unwrap()
.download()
.await
.unwrap();
}
#[tokio::test]
async fn try_get_linux() {
let version = List::download(List::LINUX_URL)
.await
.unwrap()
.latest_release;
SolcDownloader::linux(version)
.await
.unwrap()
.download()
.await
.unwrap();
}
#[tokio::test]
async fn try_get_wasm() {
let version = List::download(List::WASM_URL).await.unwrap().latest_release;
SolcDownloader::wasm(version)
.await
.unwrap()
.download()
.await
.unwrap();
}
}
@@ -0,0 +1,40 @@
//! This crate provides serializable Rust type definitions for the [solc binary lists][0]
//! and download helpers.
//!
//! [0]: https://binaries.soliditylang.org
use std::path::{Path, PathBuf};
use cache::get_or_download;
use download::SolcDownloader;
use revive_dt_common::types::VersionOrRequirement;
pub mod cache;
pub mod download;
pub mod list;
/// Downloads the solc binary for Wasm if `wasm` is set, otherwise for
/// the target platform.
///
/// Subsequent calls for the same version will use a cached artifact
/// and not download it again.
pub async fn download_solc(
cache_directory: &Path,
version: impl Into<VersionOrRequirement>,
wasm: bool,
) -> anyhow::Result<PathBuf> {
let downloader = if wasm {
SolcDownloader::wasm(version).await
} else if cfg!(target_os = "linux") {
SolcDownloader::linux(version).await
} else if cfg!(target_os = "macos") {
SolcDownloader::macosx(version).await
} else if cfg!(target_os = "windows") {
SolcDownloader::windows(version).await
} else {
unimplemented!()
}?;
get_or_download(cache_directory, &downloader).await
}
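A usage sketch from an async context; the cache path is hypothetical, and a `semver::VersionReq` works equally well since both convert into `VersionOrRequirement`:
let cache = std::path::Path::new("/tmp/solc-cache");
let solc = download_solc(cache, semver::Version::new(0, 8, 24), false).await?;
println!("solc binary cached at {}", solc.display());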
@@ -0,0 +1,26 @@
//! Rust type definitions for the solc binary lists.
use std::{collections::HashMap, path::PathBuf};
use semver::Version;
use serde::Deserialize;
#[derive(Debug, Deserialize, Clone, Eq, PartialEq)]
pub struct List {
pub builds: Vec<Build>,
pub releases: HashMap<Version, String>,
#[serde(rename = "latestRelease")]
pub latest_release: Version,
}
#[derive(Debug, Deserialize, Clone, Eq, PartialEq)]
pub struct Build {
pub path: PathBuf,
pub version: Version,
pub build: String,
#[serde(rename = "longVersion")]
pub long_version: String,
pub keccak256: String,
pub sha256: String,
pub urls: Vec<String>,
}
@@ -0,0 +1,37 @@
{
"config": {
"chainId": 420420420,
"homesteadBlock": 0,
"eip150Block": 0,
"eip155Block": 0,
"eip158Block": 0,
"byzantiumBlock": 0,
"constantinopleBlock": 0,
"petersburgBlock": 0,
"istanbulBlock": 0,
"berlinBlock": 0,
"londonBlock": 0,
"arrowGlacierBlock": 0,
"grayGlacierBlock": 0,
"shanghaiTime": 0,
"cancunTime": 0,
"terminalTotalDifficulty": 0,
"terminalTotalDifficultyPassed": true,
"blobSchedule": {
"cancun": {
"target": 3,
"max": 6,
"baseFeeUpdateFraction": 3338477
}
}
},
"coinbase": "0xffffffffffffffffffffffffffffffffffffffff",
"difficulty": "0x00",
"extraData": "",
"gasLimit": "0xffffffff",
"nonce": "0x0000000000000042",
"mixhash": "0x0000000000000000000000000000000000000000000000000000000000000000",
"parentHash": "0x0000000000000000000000000000000000000000000000000000000000000000",
"timestamp": "0x00",
"alloc": {}
}
Submodule polkadot-sdk added at dc3d0e5ab7