Compare commits

...

20 Commits

Author SHA1 Message Date
Omar Abdulla 372cd5c52b Update tests 2025-09-19 18:55:50 +03:00
Omar Abdulla e122bbd996 Update the default values of the cli 2025-09-19 18:15:50 +03:00
Omar Abdulla 6313ccb9b5 Resolve merge conflicts 2025-09-18 22:59:11 +03:00
Omar Abdulla 6b2516f639 Final set of renames 2025-09-18 22:44:39 +03:00
Omar Abdulla d4869deb68 Update the default values for the platforms 2025-09-18 20:16:57 +03:00
Omar Abdulla 52b21f8982 Remove an un-needed dependency 2025-09-18 20:11:33 +03:00
Omar Abdulla 13a5b5a7ee Remove the old traits 2025-09-18 20:10:32 +03:00
Omar Abdulla b962d032b9 Remoe all references to leader and follower 2025-09-18 20:03:33 +03:00
Omar Abdulla 496bc9a0ec Replace infra with the dyn infra 2025-09-18 19:59:52 +03:00
Omar Abdulla 92fc7894c0 Add a way to convert platform identifier into a platform 2025-09-17 21:27:33 +03:00
Omar Abdulla d7f69449af Add all of the platforms that we support 2025-09-17 21:06:29 +03:00
Omar Abdulla f0f59ad024 Provide a common node implementation for substrate chains 2025-09-17 20:23:31 +03:00
Omar Abdulla ac0f4e0cf2 Introduce a geth platform 2025-09-17 19:54:50 +03:00
Omar Abdulla 9e4f2e95f1 Support the dyn compiler in the builder pattern 2025-09-17 19:31:12 +03:00
Omar Abdulla 7aadd0a7f7 Implement the dyn compiler trait for compilers 2025-09-17 19:29:23 +03:00
Omar Abdulla 1a25c8e0ab Add more identifiers to the platform 2025-09-17 06:25:35 +03:00
Omar Abdulla 01d8042841 Allow for compilers to be created in the dyn trait 2025-09-17 06:10:44 +03:00
Omar Abdulla 8a05f8e6e8 Make the ethereum node trait object compatible 2025-09-17 06:01:13 +03:00
Omar Abdulla 9fc74aeea0 Groundwork for dyn traits 2025-09-17 05:47:13 +03:00
Omar Abdulla 49cbc51546 Generate schema for the metadata file 2025-09-08 17:09:35 +03:00
30 changed files with 2412 additions and 1824 deletions
Generated
+6
@@ -4468,10 +4468,13 @@ name = "revive-dt-common"
version = "0.1.0"
dependencies = [
 "anyhow",
+ "clap",
 "moka",
 "once_cell",
+ "schemars 1.0.4",
 "semver 1.0.26",
 "serde",
+ "strum",
 "tokio",
]
@@ -4503,6 +4506,7 @@ dependencies = [
 "alloy",
 "anyhow",
 "clap",
+ "revive-dt-common",
 "semver 1.0.26",
 "serde",
 "serde_json",
@@ -4584,6 +4588,8 @@ version = "0.1.0"
dependencies = [
 "alloy",
 "anyhow",
+ "revive-common",
+ "revive-dt-format",
]

[[package]]
+109 -78
@@ -52,122 +52,152 @@ All of the above need to be installed and available in the path in order for the
This tool is being updated quite frequently. Therefore, it's recommended that you don't install the tool and then run it, but rather that you run it from the root of the directory using `cargo run --release`. The help command of the tool gives you all of the information you need to know about each of the options and flags that the tool offers.

```bash
-$ cargo run --release -- --help
-Usage: retester [OPTIONS]
-
-Options:
-  -s, --solc <SOLC>
-          The `solc` version to use if the test didn't specify it explicitly
-          [default: 0.8.29]
-      --wasm
-          Use the Wasm compiler versions
-  -r, --resolc <RESOLC>
-          The path to the `resolc` executable to be tested.
-          By default it uses the `resolc` binary found in `$PATH`.
-          If `--wasm` is set, this should point to the resolc Wasm file.
-          [default: resolc]
-  -c, --corpus <CORPUS>
-          A list of test corpus JSON files to be tested
-  -w, --workdir <WORKING_DIRECTORY>
-          A place to store temporary artifacts during test execution.
-          Creates a temporary dir if not specified.
-  -g, --geth <GETH>
-          The path to the `geth` executable.
-          By default it uses `geth` binary found in `$PATH`.
-          [default: geth]
-      --geth-start-timeout <GETH_START_TIMEOUT>
-          The maximum time in milliseconds to wait for geth to start
-          [default: 5000]
-      --genesis <GENESIS_FILE>
-          Configure nodes according to this genesis.json file
-          [default: genesis.json]
-  -a, --account <ACCOUNT>
-          The signing account private key
-          [default: 0x4f3edf983ac636a65a842ce7c78d9aa706d3b113bce9c46f30d7d21715b23b1d]
-      --private-keys-count <PRIVATE_KEYS_TO_ADD>
-          This argument controls which private keys the nodes should have access to and be added to its wallet signers. With a value of N, private keys (0, N] will be added to the signer set of the node
-          [default: 100000]
-  -l, --leader <LEADER>
-          The differential testing leader node implementation
-          [default: geth]
-          Possible values:
-          - geth:        The go-ethereum reference full node EVM implementation
-          - kitchensink: The kitchensink runtime provides the PolkaVM (PVM) based node implementation
-  -f, --follower <FOLLOWER>
-          The differential testing follower node implementation
-          [default: kitchensink]
-          Possible values:
-          - geth:        The go-ethereum reference full node EVM implementation
-          - kitchensink: The kitchensink runtime provides the PolkaVM (PVM) based node implementation
-      --compile-only <COMPILE_ONLY>
-          Only compile against this testing platform (doesn't execute the tests)
-          Possible values:
-          - geth:        The go-ethereum reference full node EVM implementation
-          - kitchensink: The kitchensink runtime provides the PolkaVM (PVM) based node implementation
-      --number-of-nodes <NUMBER_OF_NODES>
-          Determines the amount of nodes that will be spawned for each chain
-          [default: 1]
-      --number-of-threads <NUMBER_OF_THREADS>
-          Determines the amount of tokio worker threads that will be used
-          [default: 16]
-      --number-concurrent-tasks <NUMBER_CONCURRENT_TASKS>
-          Determines the amount of concurrent tasks that will be spawned to run tests. Defaults to 10 x the number of nodes
-  -e, --extract-problems
-          Extract problems back to the test corpus
-  -k, --kitchensink <KITCHENSINK>
-          The path to the `kitchensink` executable.
-          By default it uses `substrate-node` binary found in `$PATH`.
-          [default: substrate-node]
-  -p, --eth_proxy <ETH_PROXY>
-          The path to the `eth_proxy` executable.
-          By default it uses `eth-rpc` binary found in `$PATH`.
-          [default: eth-rpc]
-  -i, --invalidate-compilation-cache
-          Controls if the compilation cache should be invalidated or not
-  -h, --help
-          Print help (see a summary with '-h')
+$ cargo run --release -- execute-tests --help
+Error: Executes tests in the MatterLabs format differentially on multiple targets concurrently
+
+Usage: retester execute-tests [OPTIONS]
+
+Options:
+  -w, --working-directory <WORKING_DIRECTORY>
+          The working directory that the program will use for all of the temporary artifacts needed at runtime.
+          If not specified, then a temporary directory will be created and used by the program for all temporary artifacts.
+          [default: ]
+  -p, --platform <PLATFORMS>
+          The set of platforms that the differential tests should run on
+          [default: geth-evm-solc,revive-dev-node-polkavm-resolc]
+          Possible values:
+          - geth-evm-solc:                  The Go-ethereum reference full node EVM implementation with the solc compiler
+          - kitchensink-polkavm-resolc:     The kitchensink node with the PolkaVM backend with the resolc compiler
+          - kitchensink-revm-solc:          The kitchensink node with the REVM backend with the solc compiler
+          - revive-dev-node-polkavm-resolc: The revive dev node with the PolkaVM backend with the resolc compiler
+          - revive-dev-node-revm-solc:      The revive dev node with the REVM backend with the solc compiler
+  -c, --corpus <CORPUS>
+          A list of test corpus JSON files to be tested
+  -h, --help
+          Print help (see a summary with '-h')
+
+Solc Configuration:
+      --solc.version <VERSION>
+          Specifies the default version of the Solc compiler that should be used if there is no override specified by one of the test cases
+          [default: 0.8.29]
+
+Resolc Configuration:
+      --resolc.path <resolc.path>
+          Specifies the path of the resolc compiler to be used by the tool.
+          If this is not specified, then the tool assumes that it should use the resolc binary that's provided in the user's $PATH.
+          [default: resolc]
+
+Geth Configuration:
+      --geth.path <geth.path>
+          Specifies the path of the geth node to be used by the tool.
+          If this is not specified, then the tool assumes that it should use the geth binary that's provided in the user's $PATH.
+          [default: geth]
+      --geth.start-timeout-ms <geth.start-timeout-ms>
+          The amount of time to wait upon startup before considering that the node timed out
+          [default: 5000]
+
+Kitchensink Configuration:
+      --kitchensink.path <kitchensink.path>
+          Specifies the path of the kitchensink node to be used by the tool.
+          If this is not specified, then the tool assumes that it should use the kitchensink binary that's provided in the user's $PATH.
+          [default: substrate-node]
+      --kitchensink.start-timeout-ms <kitchensink.start-timeout-ms>
+          The amount of time to wait upon startup before considering that the node timed out
+          [default: 5000]
+      --kitchensink.dont-use-dev-node
+          This configures the tool to use Kitchensink instead of using the revive-dev-node
+
+Revive Dev Node Configuration:
+      --revive-dev-node.path <revive-dev-node.path>
+          Specifies the path of the revive dev node to be used by the tool.
+          If this is not specified, then the tool assumes that it should use the revive dev node binary that's provided in the user's $PATH.
+          [default: revive-dev-node]
+      --revive-dev-node.start-timeout-ms <revive-dev-node.start-timeout-ms>
+          The amount of time to wait upon startup before considering that the node timed out
+          [default: 5000]
+
+Eth RPC Configuration:
+      --eth-rpc.path <eth-rpc.path>
+          Specifies the path of the ETH RPC to be used by the tool.
+          If this is not specified, then the tool assumes that it should use the ETH RPC binary that's provided in the user's $PATH.
+          [default: eth-rpc]
+      --eth-rpc.start-timeout-ms <eth-rpc.start-timeout-ms>
+          The amount of time to wait upon startup before considering that the node timed out
+          [default: 5000]
+
+Genesis Configuration:
+      --genesis.path <genesis.path>
+          Specifies the path of the genesis file to use for the nodes that are started.
+          This is expected to be the path of a JSON geth genesis file.
+
+Wallet Configuration:
+      --wallet.default-private-key <DEFAULT_KEY>
+          The private key of the default signer
+          [default: 0x4f3edf983ac636a65a842ce7c78d9aa706d3b113bce9c46f30d7d21715b23b1d]
+      --wallet.additional-keys <ADDITIONAL_KEYS>
+          This argument controls which private keys the nodes should have access to and be added to its wallet signers. With a value of N, private keys (0, N] will be added to the signer set of the node
+          [default: 100000]
+
+Concurrency Configuration:
+      --concurrency.number-of-nodes <NUMBER_OF_NODES>
+          Determines the amount of nodes that will be spawned for each chain
+          [default: 5]
+      --concurrency.number-of-threads <NUMBER_OF_THREADS>
+          Determines the amount of tokio worker threads that will be used
+          [default: 16]
+      --concurrency.number-of-concurrent-tasks <NUMBER_CONCURRENT_TASKS>
+          Determines the amount of concurrent tasks that will be spawned to run tests.
+          Defaults to 10 x the number of nodes.
+      --concurrency.ignore-concurrency-limit
+          Determines if the concurrency limit should be ignored or not
+
+Compilation Configuration:
+      --compilation.invalidate-cache
+          Controls if the compilation cache should be invalidated or not
+
+Report Configuration:
+      --report.include-compiler-input
+          Controls if the compiler input is included in the final report
+      --report.include-compiler-output
+          Controls if the compiler output is included in the final report
```

To run tests with this tool you need a corpus JSON file that defines the tests included in the corpus. The simplest corpus file looks like the following:
@@ -188,10 +218,11 @@ The simplest command to run this tool is the following:
```bash
RUST_LOG="info" cargo run --release -- execute-tests \
-    --follower geth \
-    --corpus path_to_your_corpus_file.json \
-    --working-directory path_to_a_temporary_directory_to_cache_things_in \
+    --platform geth-evm-solc \
+    --corpus corp.json \
+    --working-directory workdir \
    --concurrency.number-of-nodes 5 \
+    --concurrency.ignore-concurrency-limit \
    > logs.log \
    2> output.log
```
+3
@@ -10,10 +10,13 @@ rust-version.workspace = true
[dependencies]
anyhow = { workspace = true }
+clap = { workspace = true }
moka = { workspace = true, features = ["sync"] }
once_cell = { workspace = true }
semver = { workspace = true }
serde = { workspace = true }
+schemars = { workspace = true }
+strum = { workspace = true }
tokio = { workspace = true, default-features = false, features = ["time"] }

[lints]
+124
@@ -0,0 +1,124 @@
use clap::ValueEnum;
use schemars::JsonSchema;
use serde::{Deserialize, Serialize};
use strum::{AsRefStr, Display, EnumString, IntoStaticStr};

/// An enum of the platform identifiers of all of the platforms supported by this framework. This
/// can be thought of like the target triple from Rust and LLVM in that it specifies the platform
/// completely, starting with the node, then the VM, and finally the compiler used for this
/// combination.
#[derive(
    Clone,
    Copy,
    Debug,
    PartialEq,
    Eq,
    PartialOrd,
    Ord,
    Hash,
    Serialize,
    Deserialize,
    ValueEnum,
    EnumString,
    Display,
    AsRefStr,
    IntoStaticStr,
    JsonSchema,
)]
#[serde(rename_all = "kebab-case")]
#[strum(serialize_all = "kebab-case")]
pub enum PlatformIdentifier {
    /// The Go-ethereum reference full node EVM implementation with the solc compiler.
    GethEvmSolc,
    /// The kitchensink node with the PolkaVM backend with the resolc compiler.
    KitchensinkPolkavmResolc,
    /// The kitchensink node with the REVM backend with the solc compiler.
    KitchensinkRevmSolc,
    /// The revive dev node with the PolkaVM backend with the resolc compiler.
    ReviveDevNodePolkavmResolc,
    /// The revive dev node with the REVM backend with the solc compiler.
    ReviveDevNodeRevmSolc,
}

/// An enum of the identifiers of all of the compilers supported by this framework.
#[derive(
    Clone,
    Copy,
    Debug,
    PartialEq,
    Eq,
    PartialOrd,
    Ord,
    Hash,
    Serialize,
    Deserialize,
    ValueEnum,
    EnumString,
    Display,
    AsRefStr,
    IntoStaticStr,
    JsonSchema,
)]
pub enum CompilerIdentifier {
    /// The solc compiler.
    Solc,
    /// The resolc compiler.
    Resolc,
}

/// An enum representing the identifiers of the supported nodes.
#[derive(
    Clone,
    Copy,
    Debug,
    PartialEq,
    Eq,
    PartialOrd,
    Ord,
    Hash,
    Serialize,
    Deserialize,
    ValueEnum,
    EnumString,
    Display,
    AsRefStr,
    IntoStaticStr,
    JsonSchema,
)]
pub enum NodeIdentifier {
    /// The go-ethereum node implementation.
    Geth,
    /// The Kitchensink node implementation.
    Kitchensink,
    /// The revive dev node implementation.
    ReviveDevNode,
}

/// An enum representing the identifiers of the supported VMs.
#[derive(
    Clone,
    Copy,
    Debug,
    PartialEq,
    Eq,
    PartialOrd,
    Ord,
    Hash,
    Serialize,
    Deserialize,
    ValueEnum,
    EnumString,
    Display,
    AsRefStr,
    IntoStaticStr,
    JsonSchema,
)]
#[serde(rename_all = "lowercase")]
#[strum(serialize_all = "lowercase")]
pub enum VmIdentifier {
    /// The Ethereum virtual machine.
    Evm,
    /// The EraVM virtual machine.
    EraVM,
    /// Polkadot's PolkaVM RISC-V based virtual machine.
    PolkaVM,
}
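Since every identifier enum derives clap's `ValueEnum` alongside strum's `EnumString`/`Display` with kebab-case serialization, the same spelling works on the command line, in serde, and in plain string conversions. A minimal standalone sketch of that round-trip, using a trimmed-down stand-in for the enum above rather than the actual crate:

```rust
use std::str::FromStr;

use strum::{Display, EnumString}; // assumes strum with the "derive" feature

// Trimmed-down stand-in for `PlatformIdentifier`, kept to two variants so
// the example stays self-contained.
#[derive(Debug, PartialEq, EnumString, Display)]
#[strum(serialize_all = "kebab-case")]
enum PlatformIdentifier {
    GethEvmSolc,
    ReviveDevNodePolkavmResolc,
}

fn main() {
    // `EnumString` implements `FromStr` using the kebab-case names, which is
    // the same spelling clap accepts for `--platform`.
    let platform = PlatformIdentifier::from_str("geth-evm-solc").unwrap();
    assert_eq!(platform, PlatformIdentifier::GethEvmSolc);

    // `Display` round-trips back to the kebab-case form.
    assert_eq!(platform.to_string(), "geth-evm-solc");
}
```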
+2
@@ -1,5 +1,7 @@
+mod identifiers;
mod mode;
mod version_or_requirement;

+pub use identifiers::*;
pub use mode::*;
pub use version_or_requirement::*;
+8 -18
@@ -7,6 +7,7 @@ use std::{
    collections::HashMap,
    hash::Hash,
    path::{Path, PathBuf},
+    pin::Pin,
};

use alloy::json_abi::JsonAbi;
@@ -17,8 +18,6 @@ use serde::{Deserialize, Serialize};
use revive_common::EVMVersion;
use revive_dt_common::cached_fs::read_to_string;
-use revive_dt_common::types::VersionOrRequirement;
-use revive_dt_config::{ResolcConfiguration, SolcConfiguration, WorkingDirectoryConfiguration};

// Re-export this as it's a part of the compiler interface.
pub use revive_dt_common::types::{Mode, ModeOptimizerSetting, ModePipeline};
@@ -28,19 +27,7 @@ pub mod revive_resolc;
pub mod solc;

/// A common interface for all supported Solidity compilers.
-pub trait SolidityCompiler: Sized {
-    /// Instantiates a new compiler object.
-    ///
-    /// Based on the given [`Context`] and [`VersionOrRequirement`] this function instantiates a
-    /// new compiler object. Certain implementations of this trait might choose to cache the
-    /// compiler objects and return the same ones over and over again.
-    fn new(
-        context: impl AsRef<SolcConfiguration>
-            + AsRef<ResolcConfiguration>
-            + AsRef<WorkingDirectoryConfiguration>,
-        version: impl Into<Option<VersionOrRequirement>>,
-    ) -> impl Future<Output = Result<Self>>;
-
+pub trait SolidityCompiler {
    /// Returns the version of the compiler.
    fn version(&self) -> &Version;
@@ -48,7 +35,10 @@ pub trait SolidityCompiler: Sized {
    fn path(&self) -> &Path;

    /// The low-level compiler interface.
-    fn build(&self, input: CompilerInput) -> impl Future<Output = Result<CompilerOutput>>;
+    fn build(
+        &self,
+        input: CompilerInput,
+    ) -> Pin<Box<dyn Future<Output = Result<CompilerOutput>> + '_>>;

    /// Does the compiler support the provided mode and version settings.
    fn supports_mode(
@@ -74,7 +64,7 @@ pub struct CompilerInput {
/// The generic compilation output configuration.
#[derive(Debug, Clone, Default, Serialize, Deserialize)]
pub struct CompilerOutput {
-    /// The compiled contracts. The bytecode of the contract is kept as a string incase linking is
+    /// The compiled contracts. The bytecode of the contract is kept as a string in case linking is
    /// required and the compiled source has placeholders.
    pub contracts: HashMap<PathBuf, HashMap<String, (String, JsonAbi)>>,
}
@@ -164,7 +154,7 @@ impl Compiler {
        callback(self)
    }

-    pub async fn try_build(self, compiler: &impl SolidityCompiler) -> Result<CompilerOutput> {
+    pub async fn try_build(self, compiler: &dyn SolidityCompiler) -> Result<CompilerOutput> {
        compiler.build(self.input).await
    }
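Dropping the `Sized` bound, moving the `new` constructor out of the trait, and returning `Pin<Box<dyn Future>>` instead of `impl Future` is what makes the trait object-safe. A minimal sketch of the pattern with illustrative names (not the crate's real types), assuming a tokio runtime:

```rust
use std::{future::Future, pin::Pin};

// A dyn-compatible async trait: methods return a boxed, pinned future rather
// than `impl Future`, so the trait can be used behind `&dyn Compile`.
trait Compile {
    fn build(&self, input: String) -> Pin<Box<dyn Future<Output = String> + '_>>;
}

struct Upper;

impl Compile for Upper {
    fn build(&self, input: String) -> Pin<Box<dyn Future<Output = String> + '_>> {
        // The async block is boxed and pinned, mirroring the
        // `Box::pin(async move { ... })` bodies in the resolc and solc
        // implementations below.
        Box::pin(async move { input.to_uppercase() })
    }
}

// Callers can now hold heterogeneous compilers behind one trait object.
async fn run(compiler: &dyn Compile) -> String {
    compiler.build("hello".into()).await
}

#[tokio::main]
async fn main() {
    assert_eq!(run(&Upper).await, "HELLO");
}
```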
+190 -179
@@ -3,6 +3,7 @@
use std::{
    path::PathBuf,
+    pin::Pin,
    process::Stdio,
    sync::{Arc, LazyLock},
};
@@ -37,8 +38,8 @@ struct ResolcInner {
    resolc_path: PathBuf,
}

-impl SolidityCompiler for Resolc {
-    async fn new(
+impl Resolc {
+    pub async fn new(
        context: impl AsRef<SolcConfiguration>
            + AsRef<ResolcConfiguration>
            + AsRef<WorkingDirectoryConfiguration>,
@@ -65,11 +66,13 @@ impl SolidityCompiler for Resolc {
            })
            .clone())
    }
+}

+impl SolidityCompiler for Resolc {
    fn version(&self) -> &Version {
        // We currently return the solc compiler version since we do not support multiple resolc
        // compiler versions.
-        self.0.solc.version()
+        SolidityCompiler::version(&self.0.solc)
    }

    fn path(&self) -> &std::path::Path {
@@ -77,7 +80,7 @@ impl SolidityCompiler for Resolc {
    }

    #[tracing::instrument(level = "debug", ret)]
-    async fn build(
+    fn build(
        &self,
        CompilerInput {
            pipeline,
@@ -91,189 +94,196 @@ impl SolidityCompiler for Resolc {
            // resolc. So, we need to go back to this later once it's supported.
            revert_string_handling: _,
        }: CompilerInput,
-    ) -> Result<CompilerOutput> {
+    ) -> Pin<Box<dyn Future<Output = Result<CompilerOutput>> + '_>> {
+        Box::pin(async move {
            if !matches!(pipeline, None | Some(ModePipeline::ViaYulIR)) {
                anyhow::bail!(
                    "Resolc only supports the Y (via Yul IR) pipeline, but the provided pipeline is {pipeline:?}"
                );
            }

            let input = SolcStandardJsonInput {
                language: SolcStandardJsonInputLanguage::Solidity,
                sources: sources
                    .into_iter()
                    .map(|(path, source)| (path.display().to_string(), source.into()))
                    .collect(),
                settings: SolcStandardJsonInputSettings {
                    evm_version,
                    libraries: Some(
                        libraries
                            .into_iter()
                            .map(|(source_code, libraries_map)| {
                                (
                                    source_code.display().to_string(),
                                    libraries_map
                                        .into_iter()
                                        .map(|(library_ident, library_address)| {
                                            (library_ident, library_address.to_string())
                                        })
                                        .collect(),
                                )
                            })
                            .collect(),
                    ),
                    remappings: None,
                    output_selection: Some(SolcStandardJsonInputSettingsSelection::new_required()),
                    via_ir: Some(true),
                    optimizer: SolcStandardJsonInputSettingsOptimizer::new(
                        optimization
                            .unwrap_or(ModeOptimizerSetting::M0)
                            .optimizations_enabled(),
                        None,
                        &Version::new(0, 0, 0),
                        false,
                    ),
                    metadata: None,
                    polkavm: None,
                },
            };

-            let mut command = AsyncCommand::new(self.path());
+            let path = &self.0.resolc_path;
+            let mut command = AsyncCommand::new(path);
            command
                .stdin(Stdio::piped())
                .stdout(Stdio::piped())
                .stderr(Stdio::piped())
                .arg("--standard-json");
            if let Some(ref base_path) = base_path {
                command.arg("--base-path").arg(base_path);
            }
            if !allow_paths.is_empty() {
                command.arg("--allow-paths").arg(
                    allow_paths
                        .iter()
                        .map(|path| path.display().to_string())
                        .collect::<Vec<_>>()
                        .join(","),
                );
            }
            let mut child = command
                .spawn()
-                .with_context(|| format!("Failed to spawn resolc at {}", self.path().display()))?;
+                .with_context(|| format!("Failed to spawn resolc at {}", path.display()))?;

            let stdin_pipe = child.stdin.as_mut().expect("stdin must be piped");
            let serialized_input = serde_json::to_vec(&input)
                .context("Failed to serialize Standard JSON input for resolc")?;
            stdin_pipe
                .write_all(&serialized_input)
                .await
                .context("Failed to write Standard JSON to resolc stdin")?;

            let output = child
                .wait_with_output()
                .await
                .context("Failed while waiting for resolc process to finish")?;
            let stdout = output.stdout;
            let stderr = output.stderr;

            if !output.status.success() {
                let json_in = serde_json::to_string_pretty(&input)
                    .context("Failed to pretty-print Standard JSON input for logging")?;
                let message = String::from_utf8_lossy(&stderr);
                tracing::error!(
                    status = %output.status,
                    message = %message,
                    json_input = json_in,
                    "Compilation using resolc failed"
                );
                anyhow::bail!("Compilation failed with an error: {message}");
            }

            let parsed = serde_json::from_slice::<SolcStandardJsonOutput>(&stdout)
                .map_err(|e| {
                    anyhow::anyhow!(
                        "failed to parse resolc JSON output: {e}\nstderr: {}",
                        String::from_utf8_lossy(&stderr)
                    )
                })
                .context("Failed to parse resolc standard JSON output")?;

            tracing::debug!(
                output = %serde_json::to_string(&parsed).unwrap(),
                "Compiled successfully"
            );

            // Detecting if the compiler output contained errors and reporting them through logs and
            // errors instead of returning the compiler output that might contain errors.
            for error in parsed.errors.iter().flatten() {
                if error.severity == "error" {
                    tracing::error!(
                        ?error,
                        ?input,
                        output = %serde_json::to_string(&parsed).unwrap(),
                        "Encountered an error in the compilation"
                    );
                    anyhow::bail!("Encountered an error in the compilation: {error}")
                }
            }

            let Some(contracts) = parsed.contracts else {
                anyhow::bail!("Unexpected error - resolc output doesn't have a contracts section");
            };

            let mut compiler_output = CompilerOutput::default();
            for (source_path, contracts) in contracts.into_iter() {
                let src_for_msg = source_path.clone();
                let source_path = PathBuf::from(source_path)
                    .canonicalize()
                    .with_context(|| format!("Failed to canonicalize path {src_for_msg}"))?;

                let map = compiler_output.contracts.entry(source_path).or_default();
                for (contract_name, contract_information) in contracts.into_iter() {
                    let bytecode = contract_information
                        .evm
                        .and_then(|evm| evm.bytecode.clone())
                        .context("Unexpected - Contract compiled with resolc has no bytecode")?;
                    let abi = {
                        let metadata = contract_information
                            .metadata
                            .as_ref()
                            .context("No metadata found for the contract")?;
                        let solc_metadata_str = match metadata {
                            serde_json::Value::String(solc_metadata_str) => {
                                solc_metadata_str.as_str()
                            }
                            serde_json::Value::Object(metadata_object) => {
                                let solc_metadata_value = metadata_object
                                    .get("solc_metadata")
                                    .context("Contract doesn't have a 'solc_metadata' field")?;
                                solc_metadata_value
                                    .as_str()
                                    .context("The 'solc_metadata' field is not a string")?
                            }
                            serde_json::Value::Null
                            | serde_json::Value::Bool(_)
                            | serde_json::Value::Number(_)
                            | serde_json::Value::Array(_) => {
                                anyhow::bail!("Unsupported type of metadata {metadata:?}")
                            }
                        };
                        let solc_metadata = serde_json::from_str::<serde_json::Value>(
                            solc_metadata_str,
                        )
                        .context(
                            "Failed to deserialize the solc_metadata as a serde_json generic value",
                        )?;
                        let output_value = solc_metadata
                            .get("output")
                            .context("solc_metadata doesn't have an output field")?;
                        let abi_value = output_value
                            .get("abi")
                            .context("solc_metadata output doesn't contain an abi field")?;
                        serde_json::from_value::<JsonAbi>(abi_value.clone())
                            .context("ABI found in solc_metadata output is not valid ABI")?
                    };
                    map.insert(contract_name, (bytecode.object, abi));
                }
            }

            Ok(compiler_output)
+        })
    }

    fn supports_mode(
@@ -281,6 +291,7 @@ impl SolidityCompiler for Resolc {
        optimize_setting: ModeOptimizerSetting,
        pipeline: ModePipeline,
    ) -> bool {
-        pipeline == ModePipeline::ViaYulIR && self.0.solc.supports_mode(optimize_setting, pipeline)
+        pipeline == ModePipeline::ViaYulIR
+            && SolidityCompiler::supports_mode(&self.0.solc, optimize_setting, pipeline)
    }
}
+164 -158
@@ -3,6 +3,7 @@
use std::{
    path::PathBuf,
+    pin::Pin,
    process::Stdio,
    sync::{Arc, LazyLock},
};
@@ -36,8 +37,8 @@ struct SolcInner {
    solc_version: Version,
}

-impl SolidityCompiler for Solc {
-    async fn new(
+impl Solc {
+    pub async fn new(
        context: impl AsRef<SolcConfiguration>
            + AsRef<ResolcConfiguration>
            + AsRef<WorkingDirectoryConfiguration>,
@@ -75,7 +76,9 @@ impl SolidityCompiler for Solc {
            })
            .clone())
    }
+}

+impl SolidityCompiler for Solc {
    fn version(&self) -> &Version {
        &self.0.solc_version
    }
@@ -85,7 +88,7 @@ impl SolidityCompiler for Solc {
    }

    #[tracing::instrument(level = "debug", ret)]
-    async fn build(
+    fn build(
        &self,
        CompilerInput {
            pipeline,
@@ -97,170 +100,173 @@ impl SolidityCompiler for Solc {
            libraries,
            revert_string_handling,
        }: CompilerInput,
-    ) -> Result<CompilerOutput> {
+    ) -> Pin<Box<dyn Future<Output = Result<CompilerOutput>> + '_>> {
+        Box::pin(async move {
            // Be careful to entirely omit the viaIR field if the compiler does not support it,
            // as it will error if you provide fields it does not know about. Because
            // `supports_mode` is called prior to instantiating a compiler, we should never
            // ask for something which is invalid.
            let via_ir = match (pipeline, self.compiler_supports_yul()) {
                (pipeline, true) => pipeline.map(|p| p.via_yul_ir()),
                (_pipeline, false) => None,
            };

            let input = SolcInput {
                language: SolcLanguage::Solidity,
                sources: Sources(
                    sources
                        .into_iter()
                        .map(|(source_path, source_code)| (source_path, Source::new(source_code)))
                        .collect(),
                ),
                settings: Settings {
                    optimizer: Optimizer {
                        enabled: optimization.map(|o| o.optimizations_enabled()),
                        details: Some(Default::default()),
                        ..Default::default()
                    },
                    output_selection: OutputSelection::common_output_selection(
                        [
                            ContractOutputSelection::Abi,
                            ContractOutputSelection::Evm(EvmOutputSelection::ByteCode(
                                BytecodeOutputSelection::Object,
                            )),
                        ]
                        .into_iter()
                        .map(|item| item.to_string()),
                    ),
                    evm_version: evm_version.map(|version| version.to_string().parse().unwrap()),
                    via_ir,
                    libraries: Libraries {
                        libs: libraries
                            .into_iter()
                            .map(|(file_path, libraries)| {
                                (
                                    file_path,
                                    libraries
                                        .into_iter()
                                        .map(|(library_name, library_address)| {
                                            (library_name, library_address.to_string())
                                        })
                                        .collect(),
                                )
                            })
                            .collect(),
                    },
                    debug: revert_string_handling.map(|revert_string_handling| DebuggingSettings {
                        revert_strings: match revert_string_handling {
                            crate::RevertString::Default => Some(RevertStrings::Default),
                            crate::RevertString::Debug => Some(RevertStrings::Debug),
                            crate::RevertString::Strip => Some(RevertStrings::Strip),
                            crate::RevertString::VerboseDebug => Some(RevertStrings::VerboseDebug),
                        },
                        debug_info: Default::default(),
                    }),
                    ..Default::default()
                },
            };

-            let mut command = AsyncCommand::new(self.path());
+            let path = &self.0.solc_path;
+            let mut command = AsyncCommand::new(path);
            command
                .stdin(Stdio::piped())
                .stdout(Stdio::piped())
                .stderr(Stdio::piped())
                .arg("--standard-json");

            if let Some(ref base_path) = base_path {
                command.arg("--base-path").arg(base_path);
            }
            if !allow_paths.is_empty() {
                command.arg("--allow-paths").arg(
                    allow_paths
                        .iter()
                        .map(|path| path.display().to_string())
                        .collect::<Vec<_>>()
                        .join(","),
                );
            }
            let mut child = command
                .spawn()
-                .with_context(|| format!("Failed to spawn solc at {}", self.path().display()))?;
+                .with_context(|| format!("Failed to spawn solc at {}", path.display()))?;

            let stdin = child.stdin.as_mut().expect("should be piped");
            let serialized_input = serde_json::to_vec(&input)
                .context("Failed to serialize Standard JSON input for solc")?;
            stdin
                .write_all(&serialized_input)
                .await
                .context("Failed to write Standard JSON to solc stdin")?;

            let output = child
                .wait_with_output()
                .await
                .context("Failed while waiting for solc process to finish")?;

            if !output.status.success() {
                let json_in = serde_json::to_string_pretty(&input)
                    .context("Failed to pretty-print Standard JSON input for logging")?;
                let message = String::from_utf8_lossy(&output.stderr);
                tracing::error!(
                    status = %output.status,
                    message = %message,
                    json_input = json_in,
                    "Compilation using solc failed"
                );
                anyhow::bail!("Compilation failed with an error: {message}");
            }

            let parsed = serde_json::from_slice::<SolcOutput>(&output.stdout)
                .map_err(|e| {
                    anyhow::anyhow!(
                        "failed to parse resolc JSON output: {e}\nstderr: {}",
                        String::from_utf8_lossy(&output.stdout)
                    )
                })
                .context("Failed to parse solc standard JSON output")?;

            // Detecting if the compiler output contained errors and reporting them through logs and
            // errors instead of returning the compiler output that might contain errors.
            for error in parsed.errors.iter() {
                if error.severity == Severity::Error {
                    tracing::error!(?error, ?input, "Encountered an error in the compilation");
                    anyhow::bail!("Encountered an error in the compilation: {error}")
                }
            }

            tracing::debug!(
                output = %String::from_utf8_lossy(&output.stdout).to_string(),
                "Compiled successfully"
            );

            let mut compiler_output = CompilerOutput::default();
            for (contract_path, contracts) in parsed.contracts {
                let map = compiler_output
                    .contracts
                    .entry(contract_path.canonicalize().with_context(|| {
                        format!(
                            "Failed to canonicalize contract path {}",
                            contract_path.display()
                        )
                    })?)
                    .or_default();
                for (contract_name, contract_info) in contracts.into_iter() {
                    let source_code = contract_info
                        .evm
                        .and_then(|evm| evm.bytecode)
                        .map(|bytecode| match bytecode.object {
                            BytecodeObject::Bytecode(bytecode) => bytecode.to_string(),
                            BytecodeObject::Unlinked(unlinked) => unlinked,
                        })
                        .context("Unexpected - contract compiled with solc has no source code")?;
                    let abi = contract_info
                        .abi
                        .context("Unexpected - contract compiled with solc has no ABI")?;
                    map.insert(contract_name, (source_code, abi));
                }
            }

            Ok(compiler_output)
+        })
    }

    fn supports_mode(
@@ -278,6 +284,6 @@ impl SolidityCompiler for Solc {
impl Solc {
    fn compiler_supports_yul(&self) -> bool {
        const SOLC_VERSION_SUPPORTING_VIA_YUL_IR: Version = Version::new(0, 8, 13);
-        self.version() >= &SOLC_VERSION_SUPPORTING_VIA_YUL_IR
+        SolidityCompiler::version(self) >= &SOLC_VERSION_SUPPORTING_VIA_YUL_IR
    }
}
+4 -4
@@ -1,14 +1,14 @@
use std::path::PathBuf;

use revive_dt_common::types::VersionOrRequirement;
-use revive_dt_compiler::{Compiler, SolidityCompiler, revive_resolc::Resolc, solc::Solc};
-use revive_dt_config::ExecutionContext;
+use revive_dt_compiler::{Compiler, revive_resolc::Resolc, solc::Solc};
+use revive_dt_config::TestExecutionContext;
use semver::Version;

#[tokio::test]
async fn contracts_can_be_compiled_with_solc() {
    // Arrange
-    let args = ExecutionContext::default();
+    let args = TestExecutionContext::default();
    let solc = Solc::new(&args, VersionOrRequirement::Version(Version::new(0, 8, 30)))
        .await
        .unwrap();
@@ -49,7 +49,7 @@ async fn contracts_can_be_compiled_with_solc() {
#[tokio::test]
async fn contracts_can_be_compiled_with_resolc() {
    // Arrange
-    let args = ExecutionContext::default();
+    let args = TestExecutionContext::default();
    let resolc = Resolc::new(&args, VersionOrRequirement::Version(Version::new(0, 8, 30)))
        .await
        .unwrap();
+2
@@ -9,6 +9,8 @@ repository.workspace = true
rust-version.workspace = true

[dependencies]
+revive-dt-common = { workspace = true }
+
alloy = { workspace = true }
anyhow = { workspace = true }
clap = { workspace = true }
+118 -27
@@ -18,6 +18,7 @@ use alloy::{
    signers::local::PrivateKeySigner,
};
use clap::{Parser, ValueEnum, ValueHint};
+use revive_dt_common::types::PlatformIdentifier;
use semver::Version;
use serde::{Serialize, Serializer};
use strum::{AsRefStr, Display, EnumString, IntoStaticStr};
@@ -26,8 +27,8 @@ use temp_dir::TempDir;
#[derive(Clone, Debug, Parser, Serialize)]
#[command(name = "retester")]
pub enum Context {
-    /// Executes tests in the MatterLabs format differentially against a leader and a follower.
-    ExecuteTests(Box<ExecutionContext>),
+    /// Executes tests in the MatterLabs format differentially on multiple targets concurrently.
+    ExecuteTests(Box<TestExecutionContext>),
    /// Exports the JSON schema of the MatterLabs test format used by the tool.
    ExportJsonSchema,
}
@@ -45,8 +46,98 @@ impl Context {
impl AsRef<WorkingDirectoryConfiguration> for Context {
    fn as_ref(&self) -> &WorkingDirectoryConfiguration {
        match self {
-            Context::ExecuteTests(execution_context) => &execution_context.working_directory,
-            Context::ExportJsonSchema => unreachable!(),
+            Self::ExecuteTests(context) => context.as_ref().as_ref(),
+            Self::ExportJsonSchema => unreachable!(),
        }
    }
}
+
+impl AsRef<SolcConfiguration> for Context {
+    fn as_ref(&self) -> &SolcConfiguration {
+        match self {
+            Self::ExecuteTests(context) => context.as_ref().as_ref(),
+            Self::ExportJsonSchema => unreachable!(),
+        }
+    }
+}
+
+impl AsRef<ResolcConfiguration> for Context {
+    fn as_ref(&self) -> &ResolcConfiguration {
+        match self {
+            Self::ExecuteTests(context) => context.as_ref().as_ref(),
+            Self::ExportJsonSchema => unreachable!(),
+        }
+    }
+}
+
+impl AsRef<GethConfiguration> for Context {
+    fn as_ref(&self) -> &GethConfiguration {
+        match self {
+            Self::ExecuteTests(context) => context.as_ref().as_ref(),
+            Self::ExportJsonSchema => unreachable!(),
+        }
+    }
+}
+
+impl AsRef<KitchensinkConfiguration> for Context {
+    fn as_ref(&self) -> &KitchensinkConfiguration {
+        match self {
+            Self::ExecuteTests(context) => context.as_ref().as_ref(),
+            Self::ExportJsonSchema => unreachable!(),
+        }
+    }
+}
+
+impl AsRef<ReviveDevNodeConfiguration> for Context {
+    fn as_ref(&self) -> &ReviveDevNodeConfiguration {
+        match self {
+            Self::ExecuteTests(context) => context.as_ref().as_ref(),
+            Self::ExportJsonSchema => unreachable!(),
+        }
+    }
+}
+
+impl AsRef<EthRpcConfiguration> for Context {
+    fn as_ref(&self) -> &EthRpcConfiguration {
+        match self {
+            Self::ExecuteTests(context) => context.as_ref().as_ref(),
+            Self::ExportJsonSchema => unreachable!(),
+        }
+    }
+}
+
+impl AsRef<GenesisConfiguration> for Context {
+    fn as_ref(&self) -> &GenesisConfiguration {
+        match self {
+            Self::ExecuteTests(context) => context.as_ref().as_ref(),
+            Self::ExportJsonSchema => unreachable!(),
+        }
+    }
+}
+
+impl AsRef<WalletConfiguration> for Context {
+    fn as_ref(&self) -> &WalletConfiguration {
+        match self {
+            Self::ExecuteTests(context) => context.as_ref().as_ref(),
+            Self::ExportJsonSchema => unreachable!(),
+        }
+    }
+}
+
+impl AsRef<ConcurrencyConfiguration> for Context {
+    fn as_ref(&self) -> &ConcurrencyConfiguration {
+        match self {
+            Self::ExecuteTests(context) => context.as_ref().as_ref(),
+            Self::ExportJsonSchema => unreachable!(),
+        }
+    }
+}
+
+impl AsRef<CompilationConfiguration> for Context {
+    fn as_ref(&self) -> &CompilationConfiguration {
+        match self {
+            Self::ExecuteTests(context) => context.as_ref().as_ref(),
+            Self::ExportJsonSchema => unreachable!(),
+        }
+    }
+}
@@ -54,14 +145,14 @@ impl AsRef<WorkingDirectoryConfiguration> for Context {
impl AsRef<ReportConfiguration> for Context {
    fn as_ref(&self) -> &ReportConfiguration {
        match self {
-            Context::ExecuteTests(execution_context) => &execution_context.report_configuration,
-            Context::ExportJsonSchema => unreachable!(),
+            Self::ExecuteTests(context) => context.as_ref().as_ref(),
+            Self::ExportJsonSchema => unreachable!(),
        }
    }
}

#[derive(Clone, Debug, Parser, Serialize)]
-pub struct ExecutionContext {
+pub struct TestExecutionContext {
    /// The working directory that the program will use for all of the temporary artifacts needed at
    /// runtime.
    ///
@@ -75,13 +166,13 @@ pub struct ExecutionContext {
    )]
    pub working_directory: WorkingDirectoryConfiguration,

-    /// The differential testing leader node implementation.
-    #[arg(short, long = "leader", default_value_t = TestingPlatform::Geth)]
-    pub leader: TestingPlatform,
-
-    /// The differential testing follower node implementation.
-    #[arg(short, long = "follower", default_value_t = TestingPlatform::Kitchensink)]
-    pub follower: TestingPlatform,
+    /// The set of platforms that the differential tests should run on.
+    #[arg(
+        short = 'p',
+        long = "platform",
+        default_values = ["geth-evm-solc", "revive-dev-node-polkavm-resolc"]
+    )]
+    pub platforms: Vec<PlatformIdentifier>,

    /// A list of test corpus JSON files to be tested.
    #[arg(long = "corpus", short)]
@@ -132,79 +223,79 @@ pub struct ExecutionContext {
    pub report_configuration: ReportConfiguration,
}

-impl Default for ExecutionContext {
+impl Default for TestExecutionContext {
    fn default() -> Self {
        Self::parse_from(["execution-context"])
    }
}

-impl AsRef<WorkingDirectoryConfiguration> for ExecutionContext {
+impl AsRef<WorkingDirectoryConfiguration> for TestExecutionContext {
    fn as_ref(&self) -> &WorkingDirectoryConfiguration {
        &self.working_directory
    }
}

-impl AsRef<SolcConfiguration> for ExecutionContext {
+impl AsRef<SolcConfiguration> for TestExecutionContext {
    fn as_ref(&self) -> &SolcConfiguration {
        &self.solc_configuration
    }
}

-impl AsRef<ResolcConfiguration> for ExecutionContext {
+impl AsRef<ResolcConfiguration> for TestExecutionContext {
    fn as_ref(&self) -> &ResolcConfiguration {
        &self.resolc_configuration
    }
}

-impl AsRef<GethConfiguration> for ExecutionContext {
+impl AsRef<GethConfiguration> for TestExecutionContext {
    fn as_ref(&self) -> &GethConfiguration {
        &self.geth_configuration
    }
}

-impl AsRef<KitchensinkConfiguration> for ExecutionContext {
+impl AsRef<KitchensinkConfiguration> for TestExecutionContext {
    fn as_ref(&self) -> &KitchensinkConfiguration {
        &self.kitchensink_configuration
    }
}

-impl AsRef<ReviveDevNodeConfiguration> for ExecutionContext {
+impl AsRef<ReviveDevNodeConfiguration> for TestExecutionContext {
    fn as_ref(&self) -> &ReviveDevNodeConfiguration {
        &self.revive_dev_node_configuration
    }
}

-impl AsRef<EthRpcConfiguration> for ExecutionContext {
+impl AsRef<EthRpcConfiguration> for TestExecutionContext {
    fn as_ref(&self) -> &EthRpcConfiguration {
        &self.eth_rpc_configuration
    }
}

-impl AsRef<GenesisConfiguration> for ExecutionContext {
+impl AsRef<GenesisConfiguration> for TestExecutionContext {
    fn as_ref(&self) -> &GenesisConfiguration {
        &self.genesis_configuration
    }
}

-impl AsRef<WalletConfiguration> for ExecutionContext {
+impl AsRef<WalletConfiguration> for TestExecutionContext {
    fn as_ref(&self) -> &WalletConfiguration {
        &self.wallet_configuration
    }
}

-impl AsRef<ConcurrencyConfiguration> for ExecutionContext {
+impl AsRef<ConcurrencyConfiguration> for TestExecutionContext {
    fn as_ref(&self) -> &ConcurrencyConfiguration {
        &self.concurrency_configuration
    }
}

-impl AsRef<CompilationConfiguration> for ExecutionContext {
+impl AsRef<CompilationConfiguration> for TestExecutionContext {
    fn as_ref(&self) -> &CompilationConfiguration {
        &self.compilation_configuration
    }
}

-impl AsRef<ReportConfiguration> for ExecutionContext {
+impl AsRef<ReportConfiguration> for TestExecutionContext {
    fn as_ref(&self) -> &ReportConfiguration {
        &self.report_configuration
    }
}
+13 -15
@@ -9,9 +9,9 @@ use std::{
}; };
use futures::FutureExt; use futures::FutureExt;
use revive_dt_common::iterators::FilesWithExtensionIterator; use revive_dt_common::{iterators::FilesWithExtensionIterator, types::CompilerIdentifier};
use revive_dt_compiler::{Compiler, CompilerOutput, Mode, SolidityCompiler}; use revive_dt_compiler::{Compiler, CompilerOutput, Mode, SolidityCompiler};
use revive_dt_config::TestingPlatform; use revive_dt_core::Platform;
use revive_dt_format::metadata::{ContractIdent, ContractInstance, Metadata}; use revive_dt_format::metadata::{ContractIdent, ContractInstance, Metadata};
use alloy::{hex::ToHexExt, json_abi::JsonAbi, primitives::Address}; use alloy::{hex::ToHexExt, json_abi::JsonAbi, primitives::Address};
@@ -22,8 +22,6 @@ use serde::{Deserialize, Serialize};
use tokio::sync::{Mutex, RwLock}; use tokio::sync::{Mutex, RwLock};
use tracing::{Instrument, debug, debug_span, instrument}; use tracing::{Instrument, debug, debug_span, instrument};
use crate::Platform;
pub struct CachedCompiler<'a> { pub struct CachedCompiler<'a> {
/// The cache that stores the compiled contracts. /// The cache that stores the compiled contracts.
artifacts_cache: ArtifactsCache, artifacts_cache: ArtifactsCache,
@@ -57,21 +55,22 @@ impl<'a> CachedCompiler<'a> {
fields( fields(
metadata_file_path = %metadata_file_path.display(), metadata_file_path = %metadata_file_path.display(),
%mode, %mode,
platform = P::config_id().to_string() platform = %platform.platform_identifier()
), ),
err err
)] )]
pub async fn compile_contracts<P: Platform>( pub async fn compile_contracts(
&self, &self,
metadata: &'a Metadata, metadata: &'a Metadata,
metadata_file_path: &'a Path, metadata_file_path: &'a Path,
mode: Cow<'a, Mode>, mode: Cow<'a, Mode>,
deployed_libraries: Option<&HashMap<ContractInstance, (ContractIdent, Address, JsonAbi)>>, deployed_libraries: Option<&HashMap<ContractInstance, (ContractIdent, Address, JsonAbi)>>,
compiler: &P::Compiler, compiler: &dyn SolidityCompiler,
platform: &dyn Platform,
reporter: &ExecutionSpecificReporter, reporter: &ExecutionSpecificReporter,
) -> Result<CompilerOutput> { ) -> Result<CompilerOutput> {
let cache_key = CacheKey { let cache_key = CacheKey {
platform_key: P::config_id(), compiler_identifier: platform.compiler_identifier(),
compiler_version: compiler.version().clone(), compiler_version: compiler.version().clone(),
metadata_file_path, metadata_file_path,
solc_mode: mode.clone(), solc_mode: mode.clone(),
@@ -79,7 +78,7 @@ impl<'a> CachedCompiler<'a> {
let compilation_callback = || { let compilation_callback = || {
async move { async move {
compile_contracts::<P>( compile_contracts(
metadata metadata
.directory() .directory()
.context("Failed to get metadata directory while preparing compilation")?, .context("Failed to get metadata directory while preparing compilation")?,
@@ -96,7 +95,7 @@ impl<'a> CachedCompiler<'a> {
} }
.instrument(debug_span!( .instrument(debug_span!(
"Running compilation for the cache key", "Running compilation for the cache key",
cache_key.platform_key = %cache_key.platform_key, cache_key.compiler_identifier = %cache_key.compiler_identifier,
cache_key.compiler_version = %cache_key.compiler_version, cache_key.compiler_version = %cache_key.compiler_version,
cache_key.metadata_file_path = %cache_key.metadata_file_path.display(), cache_key.metadata_file_path = %cache_key.metadata_file_path.display(),
cache_key.solc_mode = %cache_key.solc_mode, cache_key.solc_mode = %cache_key.solc_mode,
@@ -179,12 +178,12 @@ impl<'a> CachedCompiler<'a> {
} }
} }
async fn compile_contracts<P: Platform>( async fn compile_contracts(
metadata_directory: impl AsRef<Path>, metadata_directory: impl AsRef<Path>,
mut files_to_compile: impl Iterator<Item = PathBuf>, mut files_to_compile: impl Iterator<Item = PathBuf>,
mode: &Mode, mode: &Mode,
deployed_libraries: Option<&HashMap<ContractInstance, (ContractIdent, Address, JsonAbi)>>, deployed_libraries: Option<&HashMap<ContractInstance, (ContractIdent, Address, JsonAbi)>>,
compiler: &P::Compiler, compiler: &dyn SolidityCompiler,
reporter: &ExecutionSpecificReporter, reporter: &ExecutionSpecificReporter,
) -> Result<CompilerOutput> { ) -> Result<CompilerOutput> {
let all_sources_in_dir = FilesWithExtensionIterator::new(metadata_directory.as_ref()) let all_sources_in_dir = FilesWithExtensionIterator::new(metadata_directory.as_ref())
@@ -332,9 +331,8 @@ impl ArtifactsCache {
#[derive(Clone, Debug, PartialEq, Eq, Hash, Serialize)] #[derive(Clone, Debug, PartialEq, Eq, Hash, Serialize)]
struct CacheKey<'a> { struct CacheKey<'a> {
/// The platform name that this artifact was compiled for. For example, this could be EVM or /// The identifier of the used compiler.
/// PVM. compiler_identifier: CompilerIdentifier,
platform_key: &'a TestingPlatform,
/// The version of the compiler that was used to compile the artifacts. /// The version of the compiler that was used to compile the artifacts.
compiler_version: Version, compiler_version: Version,
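The cache key now hashes on the compiler rather than the platform, so two platforms that share a compiler, version, and mode can reuse each other's artifacts, which the old platform-keyed key ruled out. A minimal sketch of the idea, using simplified stand-ins for `CompilerIdentifier`, `Mode`, and the artifact type (the real crate derives `Hash`/`Serialize` on its `CacheKey` and stores artifacts in its own `ArtifactsCache`):

```rust
use std::collections::HashMap;
use std::path::PathBuf;

#[derive(Clone, Debug, PartialEq, Eq, Hash)]
enum CompilerIdentifier {
    Solc,
    Resolc,
}

// Stand-in for the real `CacheKey<'a>`; version and mode are plain strings here.
#[derive(Clone, Debug, PartialEq, Eq, Hash)]
struct CacheKey {
    compiler_identifier: CompilerIdentifier,
    compiler_version: String,
    metadata_file_path: PathBuf,
    solc_mode: String,
}

#[derive(Default)]
struct ArtifactsCache {
    entries: HashMap<CacheKey, Vec<u8>>, // compiled artifact stand-in
}

impl ArtifactsCache {
    /// Returns the cached artifact, compiling (once) on a miss.
    fn get_or_compile(&mut self, key: CacheKey, compile: impl FnOnce() -> Vec<u8>) -> &Vec<u8> {
        self.entries.entry(key).or_insert_with(compile)
    }
}
```

Under this key, a geth/solc run and a kitchensink-REVM/solc run with the same solc version, metadata file, and mode resolve to the same entry, so the second run is a cache hit.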
+85 -97
@@ -1,14 +1,13 @@
//! The test driver handles the compilation and execution of the test cases. //! The test driver handles the compilation and execution of the test cases.
use std::collections::HashMap; use std::collections::HashMap;
use std::marker::PhantomData;
use std::path::PathBuf; use std::path::PathBuf;
use alloy::consensus::EMPTY_ROOT_HASH; use alloy::consensus::EMPTY_ROOT_HASH;
use alloy::hex; use alloy::hex;
use alloy::json_abi::JsonAbi; use alloy::json_abi::JsonAbi;
use alloy::network::{Ethereum, TransactionBuilder}; use alloy::network::{Ethereum, TransactionBuilder};
use alloy::primitives::U256; use alloy::primitives::{TxHash, U256};
use alloy::rpc::types::TransactionReceipt; use alloy::rpc::types::TransactionReceipt;
use alloy::rpc::types::trace::geth::{ use alloy::rpc::types::trace::geth::{
CallFrame, GethDebugBuiltInTracerType, GethDebugTracerConfig, GethDebugTracerType, CallFrame, GethDebugBuiltInTracerType, GethDebugTracerConfig, GethDebugTracerType,
@@ -19,8 +18,9 @@ use alloy::{
rpc::types::{TransactionRequest, trace::geth::DiffMode}, rpc::types::{TransactionRequest, trace::geth::DiffMode},
}; };
use anyhow::Context as _; use anyhow::Context as _;
use futures::TryStreamExt; use futures::{TryStreamExt, future::try_join_all};
use indexmap::IndexMap; use indexmap::IndexMap;
use revive_dt_common::types::PlatformIdentifier;
use revive_dt_format::traits::{ResolutionContext, ResolverApi}; use revive_dt_format::traits::{ResolutionContext, ResolverApi};
use revive_dt_report::ExecutionSpecificReporter; use revive_dt_report::ExecutionSpecificReporter;
use semver::Version; use semver::Version;
@@ -36,9 +36,7 @@ use revive_dt_node_interaction::EthereumNode;
use tokio::try_join; use tokio::try_join;
use tracing::{Instrument, info, info_span, instrument}; use tracing::{Instrument, info, info_span, instrument};
use crate::Platform; pub struct CaseState {
pub struct CaseState<T: Platform> {
/// A map of all of the compiled contracts for the given metadata file. /// A map of all of the compiled contracts for the given metadata file.
compiled_contracts: HashMap<PathBuf, HashMap<String, (String, JsonAbi)>>, compiled_contracts: HashMap<PathBuf, HashMap<String, (String, JsonAbi)>>,
@@ -54,14 +52,9 @@ pub struct CaseState<T: Platform> {
/// The execution reporter. /// The execution reporter.
execution_reporter: ExecutionSpecificReporter, execution_reporter: ExecutionSpecificReporter,
phantom: PhantomData<T>,
} }
impl<T> CaseState<T> impl CaseState {
where
T: Platform,
{
pub fn new( pub fn new(
compiler_version: Version, compiler_version: Version,
compiled_contracts: HashMap<PathBuf, HashMap<String, (String, JsonAbi)>>, compiled_contracts: HashMap<PathBuf, HashMap<String, (String, JsonAbi)>>,
@@ -74,7 +67,6 @@ where
variables: Default::default(), variables: Default::default(),
compiler_version, compiler_version,
execution_reporter, execution_reporter,
phantom: PhantomData,
} }
} }
@@ -82,7 +74,7 @@ where
&mut self, &mut self,
metadata: &Metadata, metadata: &Metadata,
step: &Step, step: &Step,
node: &T::Blockchain, node: &dyn EthereumNode,
) -> anyhow::Result<StepOutput> { ) -> anyhow::Result<StepOutput> {
match step { match step {
Step::FunctionCall(input) => { Step::FunctionCall(input) => {
@@ -113,8 +105,10 @@ where
&mut self, &mut self,
metadata: &Metadata, metadata: &Metadata,
input: &Input, input: &Input,
node: &T::Blockchain, node: &dyn EthereumNode,
) -> anyhow::Result<(TransactionReceipt, GethTrace, DiffMode)> { ) -> anyhow::Result<(TransactionReceipt, GethTrace, DiffMode)> {
let resolver = node.resolver().await?;
let deployment_receipts = self let deployment_receipts = self
.handle_input_contract_deployment(metadata, input, node) .handle_input_contract_deployment(metadata, input, node)
.await .await
@@ -124,14 +118,19 @@ where
.await .await
.context("Failed during transaction execution phase of input handling")?; .context("Failed during transaction execution phase of input handling")?;
let tracing_result = self let tracing_result = self
.handle_input_call_frame_tracing(&execution_receipt, node) .handle_input_call_frame_tracing(execution_receipt.transaction_hash, node)
.await .await
.context("Failed during callframe tracing phase of input handling")?; .context("Failed during callframe tracing phase of input handling")?;
self.handle_input_variable_assignment(input, &tracing_result) self.handle_input_variable_assignment(input, &tracing_result)
.context("Failed to assign variables from callframe output")?; .context("Failed to assign variables from callframe output")?;
let (_, (geth_trace, diff_mode)) = try_join!( let (_, (geth_trace, diff_mode)) = try_join!(
self.handle_input_expectations(input, &execution_receipt, node, &tracing_result), self.handle_input_expectations(
self.handle_input_diff(&execution_receipt, node) input,
&execution_receipt,
resolver.as_ref(),
&tracing_result
),
self.handle_input_diff(execution_receipt.transaction_hash, node)
) )
.context("Failed while evaluating expectations and diffs in parallel")?; .context("Failed while evaluating expectations and diffs in parallel")?;
Ok((execution_receipt, geth_trace, diff_mode)) Ok((execution_receipt, geth_trace, diff_mode))
@@ -142,7 +141,7 @@ where
&mut self, &mut self,
metadata: &Metadata, metadata: &Metadata,
balance_assertion: &BalanceAssertion, balance_assertion: &BalanceAssertion,
node: &T::Blockchain, node: &dyn EthereumNode,
) -> anyhow::Result<()> { ) -> anyhow::Result<()> {
self.handle_balance_assertion_contract_deployment(metadata, balance_assertion, node) self.handle_balance_assertion_contract_deployment(metadata, balance_assertion, node)
.await .await
@@ -158,7 +157,7 @@ where
&mut self, &mut self,
metadata: &Metadata, metadata: &Metadata,
storage_empty: &StorageEmptyAssertion, storage_empty: &StorageEmptyAssertion,
node: &T::Blockchain, node: &dyn EthereumNode,
) -> anyhow::Result<()> { ) -> anyhow::Result<()> {
self.handle_storage_empty_assertion_contract_deployment(metadata, storage_empty, node) self.handle_storage_empty_assertion_contract_deployment(metadata, storage_empty, node)
.await .await
@@ -175,7 +174,7 @@ where
&mut self, &mut self,
metadata: &Metadata, metadata: &Metadata,
input: &Input, input: &Input,
node: &T::Blockchain, node: &dyn EthereumNode,
) -> anyhow::Result<HashMap<ContractInstance, TransactionReceipt>> { ) -> anyhow::Result<HashMap<ContractInstance, TransactionReceipt>> {
let mut instances_we_must_deploy = IndexMap::<ContractInstance, bool>::new(); let mut instances_we_must_deploy = IndexMap::<ContractInstance, bool>::new();
for instance in input.find_all_contract_instances().into_iter() { for instance in input.find_all_contract_instances().into_iter() {
@@ -220,7 +219,7 @@ where
&mut self, &mut self,
input: &Input, input: &Input,
mut deployment_receipts: HashMap<ContractInstance, TransactionReceipt>, mut deployment_receipts: HashMap<ContractInstance, TransactionReceipt>,
node: &T::Blockchain, node: &dyn EthereumNode,
) -> anyhow::Result<TransactionReceipt> { ) -> anyhow::Result<TransactionReceipt> {
match input.method { match input.method {
// This input was already executed when `handle_input` was called. We just need to // This input was already executed when `handle_input` was called. We just need to
@@ -229,8 +228,9 @@ where
.remove(&input.instance) .remove(&input.instance)
.context("Failed to find deployment receipt for constructor call"), .context("Failed to find deployment receipt for constructor call"),
Method::Fallback | Method::FunctionName(_) => { Method::Fallback | Method::FunctionName(_) => {
let resolver = node.resolver().await?;
let tx = match input let tx = match input
.legacy_transaction(node, self.default_resolution_context()) .legacy_transaction(resolver.as_ref(), self.default_resolution_context())
.await .await
{ {
Ok(tx) => tx, Ok(tx) => tx,
@@ -250,11 +250,11 @@ where
#[instrument(level = "info", skip_all)] #[instrument(level = "info", skip_all)]
async fn handle_input_call_frame_tracing( async fn handle_input_call_frame_tracing(
&self, &self,
execution_receipt: &TransactionReceipt, tx_hash: TxHash,
node: &T::Blockchain, node: &dyn EthereumNode,
) -> anyhow::Result<CallFrame> { ) -> anyhow::Result<CallFrame> {
node.trace_transaction( node.trace_transaction(
execution_receipt, tx_hash,
GethDebugTracingOptions { GethDebugTracingOptions {
tracer: Some(GethDebugTracerType::BuiltInTracer( tracer: Some(GethDebugTracerType::BuiltInTracer(
GethDebugBuiltInTracerType::CallTracer, GethDebugBuiltInTracerType::CallTracer,
@@ -314,7 +314,7 @@ where
&self, &self,
input: &Input, input: &Input,
execution_receipt: &TransactionReceipt, execution_receipt: &TransactionReceipt,
resolver: &impl ResolverApi, resolver: &(impl ResolverApi + ?Sized),
tracing_result: &CallFrame, tracing_result: &CallFrame,
) -> anyhow::Result<()> { ) -> anyhow::Result<()> {
// Resolving the `input.expected` into a series of expectations that we can then assert on. // Resolving the `input.expected` into a series of expectations that we can then assert on.
@@ -362,7 +362,7 @@ where
async fn handle_input_expectation_item( async fn handle_input_expectation_item(
&self, &self,
execution_receipt: &TransactionReceipt, execution_receipt: &TransactionReceipt,
resolver: &impl ResolverApi, resolver: &(impl ResolverApi + ?Sized),
expectation: ExpectedOutput, expectation: ExpectedOutput,
tracing_result: &CallFrame, tracing_result: &CallFrame,
) -> anyhow::Result<()> { ) -> anyhow::Result<()> {
@@ -507,8 +507,8 @@ where
#[instrument(level = "info", skip_all)] #[instrument(level = "info", skip_all)]
async fn handle_input_diff( async fn handle_input_diff(
&self, &self,
execution_receipt: &TransactionReceipt, tx_hash: TxHash,
node: &T::Blockchain, node: &dyn EthereumNode,
) -> anyhow::Result<(GethTrace, DiffMode)> { ) -> anyhow::Result<(GethTrace, DiffMode)> {
let trace_options = GethDebugTracingOptions::prestate_tracer(PreStateConfig { let trace_options = GethDebugTracingOptions::prestate_tracer(PreStateConfig {
diff_mode: Some(true), diff_mode: Some(true),
@@ -517,11 +517,11 @@ where
}); });
let trace = node let trace = node
.trace_transaction(execution_receipt, trace_options) .trace_transaction(tx_hash, trace_options)
.await .await
.context("Failed to obtain geth prestate tracer output")?; .context("Failed to obtain geth prestate tracer output")?;
let diff = node let diff = node
.state_diff(execution_receipt) .state_diff(tx_hash)
.await .await
.context("Failed to obtain state diff for transaction")?; .context("Failed to obtain state diff for transaction")?;
@@ -533,7 +533,7 @@ where
&mut self, &mut self,
metadata: &Metadata, metadata: &Metadata,
balance_assertion: &BalanceAssertion, balance_assertion: &BalanceAssertion,
node: &T::Blockchain, node: &dyn EthereumNode,
) -> anyhow::Result<()> { ) -> anyhow::Result<()> {
let Some(instance) = balance_assertion let Some(instance) = balance_assertion
.address .address
@@ -562,11 +562,12 @@ where
expected_balance: amount, expected_balance: amount,
.. ..
}: &BalanceAssertion, }: &BalanceAssertion,
node: &T::Blockchain, node: &dyn EthereumNode,
) -> anyhow::Result<()> { ) -> anyhow::Result<()> {
let resolver = node.resolver().await?;
let address = Address::from_slice( let address = Address::from_slice(
Calldata::new_compound([address_string]) Calldata::new_compound([address_string])
.calldata(node, self.default_resolution_context()) .calldata(resolver.as_ref(), self.default_resolution_context())
.await? .await?
.get(12..32) .get(12..32)
.expect("Can't fail"), .expect("Can't fail"),
@@ -595,7 +596,7 @@ where
&mut self, &mut self,
metadata: &Metadata, metadata: &Metadata,
storage_empty_assertion: &StorageEmptyAssertion, storage_empty_assertion: &StorageEmptyAssertion,
node: &T::Blockchain, node: &dyn EthereumNode,
) -> anyhow::Result<()> { ) -> anyhow::Result<()> {
let Some(instance) = storage_empty_assertion let Some(instance) = storage_empty_assertion
.address .address
@@ -624,11 +625,12 @@ where
is_storage_empty, is_storage_empty,
.. ..
}: &StorageEmptyAssertion, }: &StorageEmptyAssertion,
node: &T::Blockchain, node: &dyn EthereumNode,
) -> anyhow::Result<()> { ) -> anyhow::Result<()> {
let resolver = node.resolver().await?;
let address = Address::from_slice( let address = Address::from_slice(
Calldata::new_compound([address_string]) Calldata::new_compound([address_string])
.calldata(node, self.default_resolution_context()) .calldata(resolver.as_ref(), self.default_resolution_context())
.await? .await?
.get(12..32) .get(12..32)
.expect("Can't fail"), .expect("Can't fail"),
@@ -667,7 +669,7 @@ where
deployer: Address, deployer: Address,
calldata: Option<&Calldata>, calldata: Option<&Calldata>,
value: Option<EtherValue>, value: Option<EtherValue>,
node: &T::Blockchain, node: &dyn EthereumNode,
) -> anyhow::Result<(Address, JsonAbi, Option<TransactionReceipt>)> { ) -> anyhow::Result<(Address, JsonAbi, Option<TransactionReceipt>)> {
if let Some((_, address, abi)) = self.deployed_contracts.get(contract_instance) { if let Some((_, address, abi)) = self.deployed_contracts.get(contract_instance) {
return Ok((*address, abi.clone(), None)); return Ok((*address, abi.clone(), None));
@@ -710,8 +712,9 @@ where
}; };
if let Some(calldata) = calldata { if let Some(calldata) = calldata {
let resolver = node.resolver().await?;
let calldata = calldata let calldata = calldata
.calldata(node, self.default_resolution_context()) .calldata(resolver.as_ref(), self.default_resolution_context())
.await?; .await?;
code.extend(calldata); code.extend(calldata);
} }
@@ -728,11 +731,7 @@ where
let receipt = match node.execute_transaction(tx).await { let receipt = match node.execute_transaction(tx).await {
Ok(receipt) => receipt, Ok(receipt) => receipt,
Err(error) => { Err(error) => {
tracing::error!( tracing::error!(?error, "Contract deployment transaction failed.");
node = std::any::type_name::<T>(),
?error,
"Contract deployment transaction failed."
);
return Err(error); return Err(error);
} }
}; };
@@ -763,36 +762,23 @@ where
} }
} }
pub struct CaseDriver<'a, Leader: Platform, Follower: Platform> { pub struct CaseDriver<'a> {
metadata: &'a Metadata, metadata: &'a Metadata,
case: &'a Case, case: &'a Case,
leader_node: &'a Leader::Blockchain, platform_state: Vec<(&'a dyn EthereumNode, PlatformIdentifier, CaseState)>,
follower_node: &'a Follower::Blockchain,
leader_state: CaseState<Leader>,
follower_state: CaseState<Follower>,
} }
impl<'a, L, F> CaseDriver<'a, L, F> impl<'a> CaseDriver<'a> {
where
L: Platform,
F: Platform,
{
#[allow(clippy::too_many_arguments)] #[allow(clippy::too_many_arguments)]
pub fn new( pub fn new(
metadata: &'a Metadata, metadata: &'a Metadata,
case: &'a Case, case: &'a Case,
leader_node: &'a L::Blockchain, platform_state: Vec<(&'a dyn EthereumNode, PlatformIdentifier, CaseState)>,
follower_node: &'a F::Blockchain, ) -> CaseDriver<'a> {
leader_state: CaseState<L>,
follower_state: CaseState<F>,
) -> CaseDriver<'a, L, F> {
Self { Self {
metadata, metadata,
case, case,
leader_node, platform_state,
follower_node,
leader_state,
follower_state,
} }
} }
@@ -805,42 +791,44 @@ where
.enumerate() .enumerate()
.map(|(idx, v)| (StepIdx::new(idx), v)) .map(|(idx, v)| (StepIdx::new(idx), v))
{ {
let (leader_step_output, follower_step_output) = try_join!( // Run this step concurrently across all platforms; short-circuit on first failure
self.leader_state let metadata = self.metadata;
.handle_step(self.metadata, &step, self.leader_node) let step_futs =
.instrument(info_span!( self.platform_state
"Handling Step", .iter_mut()
%step_idx, .map(|(node, platform_id, case_state)| {
target = "Leader", let platform_id = *platform_id;
)), let node_ref = *node;
self.follower_state let step_clone = step.clone();
.handle_step(self.metadata, &step, self.follower_node) let span = info_span!(
.instrument(info_span!( "Handling Step",
"Handling Step", %step_idx,
%step_idx, platform = %platform_id,
target = "Follower", );
)) async move {
)?; case_state
.handle_step(metadata, &step_clone, node_ref)
.await
.map_err(|e| (platform_id, e))
}
.instrument(span)
});
match (leader_step_output, follower_step_output) { match try_join_all(step_futs).await {
(StepOutput::FunctionCall(..), StepOutput::FunctionCall(..)) => { Ok(_outputs) => {
// TODO: We need to actually work out how/if we will compare the diff between // All platforms succeeded for this step
// the leader and the follower. The diffs are almost guaranteed to be different steps_executed += 1;
// from leader and follower and therefore without an actual strategy for this }
// we have something that's guaranteed to fail. Even a simple call to some Err((platform_id, error)) => {
// contract will produce two non-equal diffs because on the leader the contract tracing::error!(
// has address X and on the follower it has address Y. On the leader contract X %step_idx,
// contains address A in the state and on the follower it contains address B. So platform = %platform_id,
// this isn't exactly a straightforward thing to do and I'm not even sure that ?error,
// it's possible to do. Once we have an actual strategy for doing the diffs we "Step failed on platform",
// will implement it here. Until then, this remains empty. );
return Err(error);
} }
(StepOutput::BalanceAssertion, StepOutput::BalanceAssertion) => {}
(StepOutput::StorageEmptyAssertion, StepOutput::StorageEmptyAssertion) => {}
_ => unreachable!("The two step outputs can not be of a different kind"),
} }
steps_executed += 1;
} }
Ok(steps_executed) Ok(steps_executed)
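The rewritten loop above replaces the fixed leader/follower `try_join!` with a fan-out over an arbitrary number of platforms. A minimal sketch of that pattern, with hypothetical `PlatformId` and `run_step` stand-ins for the real case state and node types:

```rust
use futures::future::try_join_all;

#[derive(Clone, Copy, Debug)]
struct PlatformId(&'static str);

async fn run_step(platform: PlatformId, step: &str) -> anyhow::Result<()> {
    // Stand-in for `CaseState::handle_step` running one step on one node.
    println!("running {step:?} on {platform:?}");
    Ok(())
}

async fn run_step_on_all(platforms: &[PlatformId], step: &str) -> anyhow::Result<()> {
    let step_futures = platforms.iter().map(|&platform| async move {
        // Tag failures with the platform so the caller knows who failed.
        run_step(platform, step).await.map_err(|error| (platform, error))
    });
    match try_join_all(step_futures).await {
        Ok(_outputs) => Ok(()),
        Err((platform, error)) => {
            eprintln!("step failed on {platform:?}: {error:?}");
            Err(error)
        }
    }
}
```

Tagging the error with the platform before joining is what makes the failure attributable, since `try_join_all` short-circuits and only surfaces the first error.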
+350 -25
@@ -3,45 +3,370 @@
//! This crate defines the testing configuration and //! This crate defines the testing configuration and
//! provides a helper utility to execute tests. //! provides a helper utility to execute tests.
use revive_dt_compiler::{SolidityCompiler, revive_resolc, solc}; use std::{
use revive_dt_config::TestingPlatform; pin::Pin,
use revive_dt_format::traits::ResolverApi; thread::{self, JoinHandle},
use revive_dt_node::{Node, geth, kitchensink::KitchensinkNode}; };
use alloy::genesis::Genesis;
use anyhow::Context as _;
use revive_dt_common::types::*;
use revive_dt_compiler::{SolidityCompiler, revive_resolc::Resolc, solc::Solc};
use revive_dt_config::*;
use revive_dt_node::{Node, geth::GethNode, substrate::SubstrateNode};
use revive_dt_node_interaction::EthereumNode; use revive_dt_node_interaction::EthereumNode;
use tracing::info;
pub mod driver; pub mod driver;
/// One platform can be tested differentially against another. /// A trait that describes the interface for the platforms that are supported by the tool.
/// #[allow(clippy::type_complexity)]
/// For this we need a blockchain node implementation and a compiler.
pub trait Platform { pub trait Platform {
type Blockchain: EthereumNode + Node + ResolverApi; /// Returns the identifier of this platform. This is a combination of the node and the compiler
type Compiler: SolidityCompiler; /// used.
fn platform_identifier(&self) -> PlatformIdentifier;
/// Returns the matching [TestingPlatform] of the [revive_dt_config::Arguments]. /// Returns a full identifier for the platform.
fn config_id() -> &'static TestingPlatform; fn full_identifier(&self) -> (NodeIdentifier, VmIdentifier, CompilerIdentifier) {
(
self.node_identifier(),
self.vm_identifier(),
self.compiler_identifier(),
)
}
/// Returns the identifier of the node used.
fn node_identifier(&self) -> NodeIdentifier;
/// Returns the identifier of the vm used.
fn vm_identifier(&self) -> VmIdentifier;
/// Returns the identifier of the compiler used.
fn compiler_identifier(&self) -> CompilerIdentifier;
/// Creates a new node for the platform by spawning a new thread, creating the node object,
/// initializing it, spawning it, and waiting for it to start up.
fn new_node(
&self,
context: Context,
) -> anyhow::Result<JoinHandle<anyhow::Result<Box<dyn EthereumNode + Send + Sync>>>>;
/// Creates a new compiler for the provided platform
fn new_compiler(
&self,
context: Context,
version: Option<VersionOrRequirement>,
) -> Pin<Box<dyn Future<Output = anyhow::Result<Box<dyn SolidityCompiler>>>>>;
} }
#[derive(Default)] #[derive(Clone, Copy, Debug, PartialEq, Eq, PartialOrd, Ord, Default, Hash)]
pub struct Geth; pub struct GethEvmSolcPlatform;
impl Platform for Geth { impl Platform for GethEvmSolcPlatform {
type Blockchain = geth::GethNode; fn platform_identifier(&self) -> PlatformIdentifier {
type Compiler = solc::Solc; PlatformIdentifier::GethEvmSolc
}
fn config_id() -> &'static TestingPlatform { fn node_identifier(&self) -> NodeIdentifier {
&TestingPlatform::Geth NodeIdentifier::Geth
}
fn vm_identifier(&self) -> VmIdentifier {
VmIdentifier::Evm
}
fn compiler_identifier(&self) -> CompilerIdentifier {
CompilerIdentifier::Solc
}
fn new_node(
&self,
context: Context,
) -> anyhow::Result<JoinHandle<anyhow::Result<Box<dyn EthereumNode + Send + Sync>>>> {
let genesis_configuration = AsRef::<GenesisConfiguration>::as_ref(&context);
let genesis = genesis_configuration.genesis()?.clone();
Ok(thread::spawn(move || {
let node = GethNode::new(context);
let node = spawn_node::<GethNode>(node, genesis)?;
Ok(Box::new(node) as Box<_>)
}))
}
fn new_compiler(
&self,
context: Context,
version: Option<VersionOrRequirement>,
) -> Pin<Box<dyn Future<Output = anyhow::Result<Box<dyn SolidityCompiler>>>>> {
Box::pin(async move {
let compiler = Solc::new(context, version).await;
compiler.map(|compiler| Box::new(compiler) as Box<dyn SolidityCompiler>)
})
} }
} }
#[derive(Default)] #[derive(Clone, Copy, Debug, PartialEq, Eq, PartialOrd, Ord, Default, Hash)]
pub struct Kitchensink; pub struct KitchensinkPolkavmResolcPlatform;
impl Platform for Kitchensink { impl Platform for KitchensinkPolkavmResolcPlatform {
type Blockchain = KitchensinkNode; fn platform_identifier(&self) -> PlatformIdentifier {
type Compiler = revive_resolc::Resolc; PlatformIdentifier::KitchensinkPolkavmResolc
}
fn config_id() -> &'static TestingPlatform { fn node_identifier(&self) -> NodeIdentifier {
&TestingPlatform::Kitchensink NodeIdentifier::Kitchensink
}
fn vm_identifier(&self) -> VmIdentifier {
VmIdentifier::PolkaVM
}
fn compiler_identifier(&self) -> CompilerIdentifier {
CompilerIdentifier::Resolc
}
fn new_node(
&self,
context: Context,
) -> anyhow::Result<JoinHandle<anyhow::Result<Box<dyn EthereumNode + Send + Sync>>>> {
let genesis_configuration = AsRef::<GenesisConfiguration>::as_ref(&context);
let kitchensink_path = AsRef::<KitchensinkConfiguration>::as_ref(&context)
.path
.clone();
let genesis = genesis_configuration.genesis()?.clone();
Ok(thread::spawn(move || {
let node = SubstrateNode::new(
kitchensink_path,
SubstrateNode::KITCHENSINK_EXPORT_CHAINSPEC_COMMAND,
context,
);
let node = spawn_node(node, genesis)?;
Ok(Box::new(node) as Box<_>)
}))
}
fn new_compiler(
&self,
context: Context,
version: Option<VersionOrRequirement>,
) -> Pin<Box<dyn Future<Output = anyhow::Result<Box<dyn SolidityCompiler>>>>> {
Box::pin(async move {
let compiler = Resolc::new(context, version).await;
compiler.map(|compiler| Box::new(compiler) as Box<dyn SolidityCompiler>)
})
} }
} }
#[derive(Clone, Copy, Debug, PartialEq, Eq, PartialOrd, Ord, Default, Hash)]
pub struct KitchensinkRevmSolcPlatform;
impl Platform for KitchensinkRevmSolcPlatform {
fn platform_identifier(&self) -> PlatformIdentifier {
PlatformIdentifier::KitchensinkRevmSolc
}
fn node_identifier(&self) -> NodeIdentifier {
NodeIdentifier::Kitchensink
}
fn vm_identifier(&self) -> VmIdentifier {
VmIdentifier::Evm
}
fn compiler_identifier(&self) -> CompilerIdentifier {
CompilerIdentifier::Solc
}
fn new_node(
&self,
context: Context,
) -> anyhow::Result<JoinHandle<anyhow::Result<Box<dyn EthereumNode + Send + Sync>>>> {
let genesis_configuration = AsRef::<GenesisConfiguration>::as_ref(&context);
let kitchensink_path = AsRef::<KitchensinkConfiguration>::as_ref(&context)
.path
.clone();
let genesis = genesis_configuration.genesis()?.clone();
Ok(thread::spawn(move || {
let node = SubstrateNode::new(
kitchensink_path,
SubstrateNode::KITCHENSINK_EXPORT_CHAINSPEC_COMMAND,
context,
);
let node = spawn_node(node, genesis)?;
Ok(Box::new(node) as Box<_>)
}))
}
fn new_compiler(
&self,
context: Context,
version: Option<VersionOrRequirement>,
) -> Pin<Box<dyn Future<Output = anyhow::Result<Box<dyn SolidityCompiler>>>>> {
Box::pin(async move {
let compiler = Solc::new(context, version).await;
compiler.map(|compiler| Box::new(compiler) as Box<dyn SolidityCompiler>)
})
}
}
#[derive(Clone, Copy, Debug, PartialEq, Eq, PartialOrd, Ord, Default, Hash)]
pub struct ReviveDevNodePolkavmResolcPlatform;
impl Platform for ReviveDevNodePolkavmResolcPlatform {
fn platform_identifier(&self) -> PlatformIdentifier {
PlatformIdentifier::ReviveDevNodePolkavmResolc
}
fn node_identifier(&self) -> NodeIdentifier {
NodeIdentifier::ReviveDevNode
}
fn vm_identifier(&self) -> VmIdentifier {
VmIdentifier::PolkaVM
}
fn compiler_identifier(&self) -> CompilerIdentifier {
CompilerIdentifier::Resolc
}
fn new_node(
&self,
context: Context,
) -> anyhow::Result<JoinHandle<anyhow::Result<Box<dyn EthereumNode + Send + Sync>>>> {
let genesis_configuration = AsRef::<GenesisConfiguration>::as_ref(&context);
let revive_dev_node_path = AsRef::<ReviveDevNodeConfiguration>::as_ref(&context)
.path
.clone();
let genesis = genesis_configuration.genesis()?.clone();
Ok(thread::spawn(move || {
let node = SubstrateNode::new(
revive_dev_node_path,
SubstrateNode::REVIVE_DEV_NODE_EXPORT_CHAINSPEC_COMMAND,
context,
);
let node = spawn_node(node, genesis)?;
Ok(Box::new(node) as Box<_>)
}))
}
fn new_compiler(
&self,
context: Context,
version: Option<VersionOrRequirement>,
) -> Pin<Box<dyn Future<Output = anyhow::Result<Box<dyn SolidityCompiler>>>>> {
Box::pin(async move {
let compiler = Resolc::new(context, version).await;
compiler.map(|compiler| Box::new(compiler) as Box<dyn SolidityCompiler>)
})
}
}
#[derive(Clone, Copy, Debug, PartialEq, Eq, PartialOrd, Ord, Default, Hash)]
pub struct ReviveDevNodeRevmSolcPlatform;
impl Platform for ReviveDevNodeRevmSolcPlatform {
fn platform_identifier(&self) -> PlatformIdentifier {
PlatformIdentifier::ReviveDevNodeRevmSolc
}
fn node_identifier(&self) -> NodeIdentifier {
NodeIdentifier::ReviveDevNode
}
fn vm_identifier(&self) -> VmIdentifier {
VmIdentifier::Evm
}
fn compiler_identifier(&self) -> CompilerIdentifier {
CompilerIdentifier::Solc
}
fn new_node(
&self,
context: Context,
) -> anyhow::Result<JoinHandle<anyhow::Result<Box<dyn EthereumNode + Send + Sync>>>> {
let genesis_configuration = AsRef::<GenesisConfiguration>::as_ref(&context);
let revive_dev_node_path = AsRef::<ReviveDevNodeConfiguration>::as_ref(&context)
.path
.clone();
let genesis = genesis_configuration.genesis()?.clone();
Ok(thread::spawn(move || {
let node = SubstrateNode::new(
revive_dev_node_path,
SubstrateNode::REVIVE_DEV_NODE_EXPORT_CHAINSPEC_COMMAND,
context,
);
let node = spawn_node(node, genesis)?;
Ok(Box::new(node) as Box<_>)
}))
}
fn new_compiler(
&self,
context: Context,
version: Option<VersionOrRequirement>,
) -> Pin<Box<dyn Future<Output = anyhow::Result<Box<dyn SolidityCompiler>>>>> {
Box::pin(async move {
let compiler = Solc::new(context, version).await;
compiler.map(|compiler| Box::new(compiler) as Box<dyn SolidityCompiler>)
})
}
}
impl From<PlatformIdentifier> for Box<dyn Platform> {
fn from(value: PlatformIdentifier) -> Self {
match value {
PlatformIdentifier::GethEvmSolc => Box::new(GethEvmSolcPlatform) as Box<_>,
PlatformIdentifier::KitchensinkPolkavmResolc => {
Box::new(KitchensinkPolkavmResolcPlatform) as Box<_>
}
PlatformIdentifier::KitchensinkRevmSolc => {
Box::new(KitchensinkRevmSolcPlatform) as Box<_>
}
PlatformIdentifier::ReviveDevNodePolkavmResolc => {
Box::new(ReviveDevNodePolkavmResolcPlatform) as Box<_>
}
PlatformIdentifier::ReviveDevNodeRevmSolc => {
Box::new(ReviveDevNodeRevmSolcPlatform) as Box<_>
}
}
}
}
impl From<PlatformIdentifier> for &dyn Platform {
fn from(value: PlatformIdentifier) -> Self {
match value {
PlatformIdentifier::GethEvmSolc => &GethEvmSolcPlatform as &dyn Platform,
PlatformIdentifier::KitchensinkPolkavmResolc => {
&KitchensinkPolkavmResolcPlatform as &dyn Platform
}
PlatformIdentifier::KitchensinkRevmSolc => {
&KitchensinkRevmSolcPlatform as &dyn Platform
}
PlatformIdentifier::ReviveDevNodePolkavmResolc => {
&ReviveDevNodePolkavmResolcPlatform as &dyn Platform
}
PlatformIdentifier::ReviveDevNodeRevmSolc => {
&ReviveDevNodeRevmSolcPlatform as &dyn Platform
}
}
}
}
fn spawn_node<T: Node + EthereumNode + Send + Sync>(
mut node: T,
genesis: Genesis,
) -> anyhow::Result<T> {
info!(
id = node.id(),
connection_string = node.connection_string(),
"Spawning node"
);
node.spawn(genesis)
.context("Failed to spawn node process")?;
info!(
id = node.id(),
connection_string = node.connection_string(),
"Spawned node"
);
Ok(node)
}
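With every platform behind the same object-safe trait, call sites can go from a parsed `PlatformIdentifier` to a usable `&dyn Platform` without any generics. A short usage sketch relying only on the `From` impls and trait methods shown above (the exact `Display` output of the identifiers is an assumption):

```rust
use revive_dt_common::types::PlatformIdentifier;
use revive_dt_core::Platform;

fn describe(identifier: PlatformIdentifier) {
    // The conversion hands back a reference to one of the unit structs above.
    let platform: &dyn Platform = identifier.into();
    println!(
        "platform={} compiler={}",
        platform.platform_identifier(),
        platform.compiler_identifier(),
    );
}
```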
+299 -349
@@ -1,8 +1,9 @@
mod cached_compiler; mod cached_compiler;
mod pool;
use std::{ use std::{
borrow::Cow, borrow::Cow,
collections::{BTreeMap, HashMap}, collections::{BTreeSet, HashMap},
io::{BufWriter, Write, stderr}, io::{BufWriter, Write, stderr},
path::Path, path::Path,
sync::Arc, sync::Arc,
@@ -20,20 +21,19 @@ use futures::{Stream, StreamExt};
use indexmap::{IndexMap, indexmap}; use indexmap::{IndexMap, indexmap};
use revive_dt_node_interaction::EthereumNode; use revive_dt_node_interaction::EthereumNode;
use revive_dt_report::{ use revive_dt_report::{
NodeDesignation, ReportAggregator, Reporter, ReporterEvent, TestCaseStatus, ExecutionSpecificReporter, ReportAggregator, Reporter, ReporterEvent, TestCaseStatus,
TestSpecificReporter, TestSpecifier, TestSpecificReporter, TestSpecifier,
}; };
use schemars::schema_for; use schemars::schema_for;
use serde_json::{Value, json}; use serde_json::{Value, json};
use tokio::try_join;
use tracing::{debug, error, info, info_span, instrument}; use tracing::{debug, error, info, info_span, instrument};
use tracing_subscriber::{EnvFilter, FmtSubscriber}; use tracing_subscriber::{EnvFilter, FmtSubscriber};
use revive_dt_common::{iterators::EitherIter, types::Mode}; use revive_dt_common::{iterators::EitherIter, types::Mode};
use revive_dt_compiler::{CompilerOutput, SolidityCompiler}; use revive_dt_compiler::SolidityCompiler;
use revive_dt_config::{Context, *}; use revive_dt_config::{Context, *};
use revive_dt_core::{ use revive_dt_core::{
Geth, Kitchensink, Platform, Platform,
driver::{CaseDriver, CaseState}, driver::{CaseDriver, CaseState},
}; };
use revive_dt_format::{ use revive_dt_format::{
@@ -43,9 +43,9 @@ use revive_dt_format::{
metadata::{ContractPathAndIdent, Metadata, MetadataFile}, metadata::{ContractPathAndIdent, Metadata, MetadataFile},
mode::ParsedMode, mode::ParsedMode,
}; };
use revive_dt_node::{Node, pool::NodePool};
use crate::cached_compiler::CachedCompiler; use crate::cached_compiler::CachedCompiler;
use crate::pool::NodePool;
fn main() -> anyhow::Result<()> { fn main() -> anyhow::Result<()> {
let (writer, _guard) = tracing_appender::non_blocking::NonBlockingBuilder::default() let (writer, _guard) = tracing_appender::non_blocking::NonBlockingBuilder::default()
@@ -112,7 +112,7 @@ fn main() -> anyhow::Result<()> {
#[instrument(level = "debug", name = "Collecting Corpora", skip_all)] #[instrument(level = "debug", name = "Collecting Corpora", skip_all)]
fn collect_corpora( fn collect_corpora(
context: &ExecutionContext, context: &TestExecutionContext,
) -> anyhow::Result<HashMap<Corpus, Vec<MetadataFile>>> { ) -> anyhow::Result<HashMap<Corpus, Vec<MetadataFile>>> {
let mut corpora = HashMap::new(); let mut corpora = HashMap::new();
@@ -133,32 +133,35 @@ fn collect_corpora(
Ok(corpora) Ok(corpora)
} }
async fn run_driver<L, F>( async fn run_driver(
context: ExecutionContext, context: TestExecutionContext,
metadata_files: &[MetadataFile], metadata_files: &[MetadataFile],
reporter: Reporter, reporter: Reporter,
report_aggregator_task: impl Future<Output = anyhow::Result<()>>, report_aggregator_task: impl Future<Output = anyhow::Result<()>>,
) -> anyhow::Result<()> platforms: Vec<&dyn Platform>,
where ) -> anyhow::Result<()> {
L: Platform, let mut nodes = Vec::<(&dyn Platform, NodePool)>::new();
F: Platform, for platform in platforms.into_iter() {
L::Blockchain: revive_dt_node::Node + Send + Sync + 'static, let pool = NodePool::new(Context::ExecuteTests(Box::new(context.clone())), platform)
F::Blockchain: revive_dt_node::Node + Send + Sync + 'static, .inspect_err(|err| {
{ error!(
let leader_nodes = NodePool::<L::Blockchain>::new(context.clone()) ?err,
.context("Failed to initialize leader node pool")?; platform_identifier = %platform.platform_identifier(),
let follower_nodes = NodePool::<F::Blockchain>::new(context.clone()) "Failed to initialize the node pool for the platform."
.context("Failed to initialize follower node pool")?; )
})
.context("Failed to initialize the node pool")?;
nodes.push((platform, pool));
}
let tests_stream = tests_stream( let tests_stream = tests_stream(
&context, &context,
metadata_files.iter(), metadata_files.iter(),
&leader_nodes, nodes.as_slice(),
&follower_nodes,
reporter.clone(), reporter.clone(),
) )
.await; .await;
let driver_task = start_driver_task::<L, F>(&context, tests_stream) let driver_task = start_driver_task(&context, tests_stream)
.await .await
.context("Failed to start driver task")?; .context("Failed to start driver task")?;
let cli_reporting_task = start_cli_reporting_task(reporter); let cli_reporting_task = start_cli_reporting_task(reporter);
@@ -169,19 +172,12 @@ where
Ok(()) Ok(())
} }
async fn tests_stream<'a, L, F>( async fn tests_stream<'a>(
args: &ExecutionContext, args: &TestExecutionContext,
metadata_files: impl IntoIterator<Item = &'a MetadataFile> + Clone, metadata_files: impl IntoIterator<Item = &'a MetadataFile> + Clone,
leader_node_pool: &'a NodePool<L::Blockchain>, nodes: &'a [(&dyn Platform, NodePool)],
follower_node_pool: &'a NodePool<F::Blockchain>,
reporter: Reporter, reporter: Reporter,
) -> impl Stream<Item = Test<'a, L, F>> ) -> impl Stream<Item = Test<'a>> {
where
L: Platform,
F: Platform,
L::Blockchain: revive_dt_node::Node + Send + Sync + 'static,
F::Blockchain: revive_dt_node::Node + Send + Sync + 'static,
{
let tests = metadata_files let tests = metadata_files
.into_iter() .into_iter()
.flat_map(|metadata_file| { .flat_map(|metadata_file| {
@@ -231,35 +227,36 @@ where
stream::iter(tests.into_iter()) stream::iter(tests.into_iter())
.filter_map( .filter_map(
move |(metadata_file, case_idx, case, mode, reporter)| async move { move |(metadata_file, case_idx, case, mode, reporter)| async move {
let leader_compiler = <L::Compiler as SolidityCompiler>::new( let mut platforms = Vec::new();
args, for (platform, node_pool) in nodes.iter() {
mode.version.clone().map(Into::into), let node = node_pool.round_robbin();
) let compiler = platform
.await .new_compiler(
.inspect_err(|err| error!(?err, "Failed to instantiate the leader compiler")) Context::ExecuteTests(Box::new(args.clone())),
.ok()?; mode.version.clone().map(Into::into),
)
.await
.inspect_err(|err| {
error!(
?err,
platform_identifier = %platform.platform_identifier(),
"Failed to instantiate the compiler"
)
})
.ok()?;
let follower_compiler = <F::Compiler as SolidityCompiler>::new( let reporter = reporter
args, .execution_specific_reporter(node.id(), platform.platform_identifier());
mode.version.clone().map(Into::into), platforms.push((*platform, node, compiler, reporter));
) }
.await
.inspect_err(|err| error!(?err, "Failed to instantiate the follower compiler"))
.ok()?;
let leader_node = leader_node_pool.round_robbin(); Some(Test {
let follower_node = follower_node_pool.round_robbin();
Some(Test::<L, F> {
metadata: metadata_file, metadata: metadata_file,
metadata_file_path: metadata_file.metadata_file_path.as_path(), metadata_file_path: metadata_file.metadata_file_path.as_path(),
mode: mode.clone(), mode: mode.clone(),
case_idx: CaseIdx::new(case_idx), case_idx: CaseIdx::new(case_idx),
case, case,
leader_node, platforms,
follower_node,
leader_compiler,
follower_compiler,
reporter, reporter,
}) })
}, },
@@ -293,18 +290,10 @@ where
}) })
} }
async fn start_driver_task<'a, L, F>( async fn start_driver_task<'a>(
context: &ExecutionContext, context: &TestExecutionContext,
tests: impl Stream<Item = Test<'a, L, F>>, tests: impl Stream<Item = Test<'a>>,
) -> anyhow::Result<impl Future<Output = ()>> ) -> anyhow::Result<impl Future<Output = ()>> {
where
L: Platform,
F: Platform,
L::Blockchain: revive_dt_node::Node + Send + Sync + 'static,
F::Blockchain: revive_dt_node::Node + Send + Sync + 'static,
L::Compiler: 'a,
F::Compiler: 'a,
{
info!("Starting driver task"); info!("Starting driver task");
let cached_compiler = Arc::new( let cached_compiler = Arc::new(
@@ -327,23 +316,18 @@ where
let cached_compiler = cached_compiler.clone(); let cached_compiler = cached_compiler.clone();
async move { async move {
test.reporter for (platform, node, _, _) in test.platforms.iter() {
.report_leader_node_assigned_event( test.reporter
test.leader_node.id(), .report_node_assigned_event(
*L::config_id(), node.id(),
test.leader_node.connection_string(), platform.platform_identifier(),
) node.connection_string(),
.expect("Can't fail"); )
test.reporter .expect("Can't fail");
.report_follower_node_assigned_event( }
test.follower_node.id(),
*F::config_id(),
test.follower_node.connection_string(),
)
.expect("Can't fail");
let reporter = test.reporter.clone(); let reporter = test.reporter.clone();
let result = handle_case_driver::<L, F>(test, cached_compiler).await; let result = handle_case_driver(&test, cached_compiler).await;
match result { match result {
Ok(steps_executed) => reporter Ok(steps_executed) => reporter
@@ -449,230 +433,174 @@ async fn start_cli_reporting_task(reporter: Reporter) {
mode = %test.mode, mode = %test.mode,
case_idx = %test.case_idx, case_idx = %test.case_idx,
case_name = test.case.name.as_deref().unwrap_or("Unnamed Case"), case_name = test.case.name.as_deref().unwrap_or("Unnamed Case"),
leader_node = test.leader_node.id(),
follower_node = test.follower_node.id(),
) )
)] )]
async fn handle_case_driver<'a, L, F>( async fn handle_case_driver<'a>(
test: Test<'a, L, F>, test: &Test<'a>,
cached_compiler: Arc<CachedCompiler<'a>>, cached_compiler: Arc<CachedCompiler<'a>>,
) -> anyhow::Result<usize> ) -> anyhow::Result<usize> {
where let platform_state = stream::iter(test.platforms.iter())
L: Platform, // Compiling the pre-link contracts.
F: Platform, .filter_map(|(platform, node, compiler, reporter)| {
L::Blockchain: revive_dt_node::Node + Send + Sync + 'static, let cached_compiler = cached_compiler.clone();
F::Blockchain: revive_dt_node::Node + Send + Sync + 'static,
L::Compiler: 'a,
F::Compiler: 'a,
{
let leader_reporter = test
.reporter
.execution_specific_reporter(test.leader_node.id(), NodeDesignation::Leader);
let follower_reporter = test
.reporter
.execution_specific_reporter(test.follower_node.id(), NodeDesignation::Follower);
let ( async move {
CompilerOutput { let compiler_output = cached_compiler
contracts: leader_pre_link_contracts, .compile_contracts(
}, test.metadata,
CompilerOutput { test.metadata_file_path,
contracts: follower_pre_link_contracts, test.mode.clone(),
}, None,
) = try_join!( compiler.as_ref(),
cached_compiler.compile_contracts::<L>( *platform,
test.metadata, reporter,
test.metadata_file_path, )
test.mode.clone(), .await
None, .inspect_err(|err| {
&test.leader_compiler, error!(
&leader_reporter, ?err,
), platform_identifier = %platform.platform_identifier(),
cached_compiler.compile_contracts::<F>( "Pre-linking compilation failed"
test.metadata, )
test.metadata_file_path, })
test.mode.clone(), .ok()?;
None, Some((test, platform, node, compiler, reporter, compiler_output))
&test.follower_compiler,
&follower_reporter
)
)
.context("Failed to compile pre-link contracts for leader/follower in parallel")?;
let mut leader_deployed_libraries = None::<HashMap<_, _>>;
let mut follower_deployed_libraries = None::<HashMap<_, _>>;
let mut contract_sources = test
.metadata
.contract_sources()
.context("Failed to retrieve contract sources from metadata")?;
for library_instance in test
.metadata
.libraries
.iter()
.flatten()
.flat_map(|(_, map)| map.values())
{
debug!(%library_instance, "Deploying Library Instance");
let ContractPathAndIdent {
contract_source_path: library_source_path,
contract_ident: library_ident,
} = contract_sources
.remove(library_instance)
.context("Failed to find the contract source")?;
let (leader_code, leader_abi) = leader_pre_link_contracts
.get(&library_source_path)
.and_then(|contracts| contracts.get(library_ident.as_str()))
.context("Declared library was not compiled")?;
let (follower_code, follower_abi) = follower_pre_link_contracts
.get(&library_source_path)
.and_then(|contracts| contracts.get(library_ident.as_str()))
.context("Declared library was not compiled")?;
let leader_code = match alloy::hex::decode(leader_code) {
Ok(code) => code,
Err(error) => {
anyhow::bail!("Failed to hex-decode the byte code {}", error)
} }
}; })
let follower_code = match alloy::hex::decode(follower_code) { // Deploying the libraries for the platform.
Ok(code) => code, .filter_map(
Err(error) => { |(test, platform, node, compiler, reporter, compiler_output)| async move {
anyhow::bail!("Failed to hex-decode the byte code {}", error) let mut deployed_libraries = None::<HashMap<_, _>>;
} let mut contract_sources = test
}; .metadata
.contract_sources()
.inspect_err(|err| {
error!(
?err,
platform_identifier = %platform.platform_identifier(),
"Failed to retrieve contract sources from metadata"
)
})
.ok()?;
for library_instance in test
.metadata
.libraries
.iter()
.flatten()
.flat_map(|(_, map)| map.values())
{
debug!(%library_instance, "Deploying Library Instance");
// Getting the deployer address from the cases themselves. This is to ensure that we're let ContractPathAndIdent {
// doing the deployments from different accounts and therefore we're not slowed down by contract_source_path: library_source_path,
// the nonce. contract_ident: library_ident,
let deployer_address = test } = contract_sources.remove(library_instance)?;
.case
.steps
.iter()
.filter_map(|step| match step {
Step::FunctionCall(input) => Some(input.caller),
Step::BalanceAssertion(..) => None,
Step::StorageEmptyAssertion(..) => None,
})
.next()
.unwrap_or(Input::default_caller());
let leader_tx = TransactionBuilder::<Ethereum>::with_deploy_code(
TransactionRequest::default().from(deployer_address),
leader_code,
);
let follower_tx = TransactionBuilder::<Ethereum>::with_deploy_code(
TransactionRequest::default().from(deployer_address),
follower_code,
);
let (leader_receipt, follower_receipt) = try_join!( let (code, abi) = compiler_output
test.leader_node.execute_transaction(leader_tx), .contracts
test.follower_node.execute_transaction(follower_tx) .get(&library_source_path)
)?; .and_then(|contracts| contracts.get(library_ident.as_str()))?;
debug!( let code = alloy::hex::decode(code).ok()?;
?library_instance,
library_address = ?leader_receipt.contract_address,
"Deployed library to leader"
);
debug!(
?library_instance,
library_address = ?follower_receipt.contract_address,
"Deployed library to follower"
);
let leader_library_address = leader_receipt // Getting the deployer address from the cases themselves. This is to ensure
.contract_address // that we're doing the deployments from different accounts and therefore we're
.context("Contract deployment didn't return an address")?; // not slowed down by the nonce.
let follower_library_address = follower_receipt let deployer_address = test
.contract_address .case
.context("Contract deployment didn't return an address")?; .steps
.iter()
.filter_map(|step| match step {
Step::FunctionCall(input) => Some(input.caller),
Step::BalanceAssertion(..) => None,
Step::StorageEmptyAssertion(..) => None,
})
.next()
.unwrap_or(Input::default_caller());
let tx = TransactionBuilder::<Ethereum>::with_deploy_code(
TransactionRequest::default().from(deployer_address),
code,
);
let receipt = node
.execute_transaction(tx)
.await
.inspect_err(|err| {
error!(
?err,
%library_instance,
platform_identifier = %platform.platform_identifier(),
"Failed to deploy the library"
)
})
.ok()?;
leader_deployed_libraries.get_or_insert_default().insert( debug!(
library_instance.clone(), ?library_instance,
( platform_identifier = %platform.platform_identifier(),
library_ident.clone(), "Deployed library"
leader_library_address, );
leader_abi.clone(),
),
);
follower_deployed_libraries.get_or_insert_default().insert(
library_instance.clone(),
(
library_ident,
follower_library_address,
follower_abi.clone(),
),
);
}
if let Some(ref leader_deployed_libraries) = leader_deployed_libraries {
leader_reporter.report_libraries_deployed_event(
leader_deployed_libraries
.clone()
.into_iter()
.map(|(key, (_, address, _))| (key, address))
.collect::<BTreeMap<_, _>>(),
)?;
}
if let Some(ref follower_deployed_libraries) = follower_deployed_libraries {
follower_reporter.report_libraries_deployed_event(
follower_deployed_libraries
.clone()
.into_iter()
.map(|(key, (_, address, _))| (key, address))
.collect::<BTreeMap<_, _>>(),
)?;
}
let ( let library_address = receipt.contract_address?;
CompilerOutput {
contracts: leader_post_link_contracts, deployed_libraries.get_or_insert_default().insert(
}, library_instance.clone(),
CompilerOutput { (library_ident.clone(), library_address, abi.clone()),
contracts: follower_post_link_contracts, );
}, }
) = try_join!(
cached_compiler.compile_contracts::<L>( Some((
test.metadata, test,
test.metadata_file_path, platform,
test.mode.clone(), node,
leader_deployed_libraries.as_ref(), compiler,
&test.leader_compiler, reporter,
&leader_reporter, compiler_output,
), deployed_libraries,
cached_compiler.compile_contracts::<F>( ))
test.metadata, },
test.metadata_file_path,
test.mode.clone(),
follower_deployed_libraries.as_ref(),
&test.follower_compiler,
&follower_reporter
) )
) // Compiling the post-link contracts.
.context("Failed to compile post-link contracts for leader/follower in parallel")?; .filter_map(
|(test, platform, node, compiler, reporter, _, deployed_libraries)| {
let cached_compiler = cached_compiler.clone();
let leader_state = CaseState::<L>::new( async move {
test.leader_compiler.version().clone(), let compiler_output = cached_compiler
leader_post_link_contracts, .compile_contracts(
leader_deployed_libraries.unwrap_or_default(), test.metadata,
leader_reporter, test.metadata_file_path,
); test.mode.clone(),
let follower_state = CaseState::<F>::new( deployed_libraries.as_ref(),
test.follower_compiler.version().clone(), compiler.as_ref(),
follower_post_link_contracts, *platform,
follower_deployed_libraries.unwrap_or_default(), reporter,
follower_reporter, )
); .await
.inspect_err(|err| {
error!(
?err,
platform_identifier = %platform.platform_identifier(),
"Pre-linking compilation failed"
)
})
.ok()?;
let mut driver = CaseDriver::<L, F>::new( let case_state = CaseState::new(
test.metadata, compiler.version().clone(),
test.case, compiler_output.contracts,
test.leader_node, deployed_libraries.unwrap_or_default(),
test.follower_node, reporter.clone(),
leader_state, );
follower_state,
); Some((*node, platform.platform_identifier(), case_state))
}
},
)
// Collect
.collect::<Vec<_>>()
.await;
let mut driver = CaseDriver::new(test.metadata, test.case, platform_state);
driver driver
.execute() .execute()
.await .await
@@ -680,41 +608,43 @@ where
} }
async fn execute_corpus( async fn execute_corpus(
context: ExecutionContext, context: TestExecutionContext,
tests: &[MetadataFile], tests: &[MetadataFile],
reporter: Reporter, reporter: Reporter,
report_aggregator_task: impl Future<Output = anyhow::Result<()>>, report_aggregator_task: impl Future<Output = anyhow::Result<()>>,
) -> anyhow::Result<()> { ) -> anyhow::Result<()> {
match (&context.leader, &context.follower) { let platforms = context
(TestingPlatform::Geth, TestingPlatform::Kitchensink) => { .platforms
run_driver::<Geth, Kitchensink>(context, tests, reporter, report_aggregator_task) .iter()
.await? .copied()
} .collect::<BTreeSet<_>>()
(TestingPlatform::Geth, TestingPlatform::Geth) => { .into_iter()
run_driver::<Geth, Geth>(context, tests, reporter, report_aggregator_task).await? .map(Into::<&dyn Platform>::into)
} .collect::<Vec<_>>();
_ => unimplemented!(),
} run_driver(context, tests, reporter, report_aggregator_task, platforms).await?;
Ok(()) Ok(())
} }
/// This represents a single "test": a mode, path and collection of cases. /// This represents a single "test": a mode, path and collection of cases. #[allow(clippy::type_complexity)]
#[derive(Clone)]
struct Test<'a, L: Platform, F: Platform> { struct Test<'a> {
metadata: &'a MetadataFile, metadata: &'a MetadataFile,
metadata_file_path: &'a Path, metadata_file_path: &'a Path,
mode: Cow<'a, Mode>, mode: Cow<'a, Mode>,
case_idx: CaseIdx, case_idx: CaseIdx,
case: &'a Case, case: &'a Case,
leader_node: &'a <L as Platform>::Blockchain, platforms: Vec<(
follower_node: &'a <F as Platform>::Blockchain, &'a dyn Platform,
leader_compiler: L::Compiler, &'a dyn EthereumNode,
follower_compiler: F::Compiler, Box<dyn SolidityCompiler>,
ExecutionSpecificReporter,
)>,
reporter: TestSpecificReporter, reporter: TestSpecificReporter,
} }
impl<'a, L: Platform, F: Platform> Test<'a, L, F> { impl<'a> Test<'a> {
/// Checks if this test can be run with the current configuration. /// Checks if this test can be run with the current configuration.
pub fn check_compatibility(&self) -> TestCheckFunctionResult { pub fn check_compatibility(&self) -> TestCheckFunctionResult {
self.check_metadata_file_ignored()?; self.check_metadata_file_ignored()?;
@@ -743,74 +673,94 @@ impl<'a, L: Platform, F: Platform> Test<'a, L, F> {
} }
} }
/// Checks if the leader and the follower both support the desired targets in the metadata file. /// Checks if the platforms all support the desired targets in the metadata file.
fn check_target_compatibility(&self) -> TestCheckFunctionResult { fn check_target_compatibility(&self) -> TestCheckFunctionResult {
let leader_support = let mut error_map = indexmap! {
<L::Blockchain as Node>::matches_target(self.metadata.targets.as_deref()); "test_desired_targets" => json!(self.metadata.targets.as_ref()),
let follower_support = };
<F::Blockchain as Node>::matches_target(self.metadata.targets.as_deref()); let mut is_allowed = true;
let is_allowed = leader_support && follower_support; for (platform, ..) in self.platforms.iter() {
let is_allowed_for_platform = match self.metadata.targets.as_ref() {
None => true,
Some(targets) => {
let mut target_matches = false;
for target in targets.iter() {
if &platform.vm_identifier() == target {
target_matches = true;
break;
}
}
target_matches
}
};
is_allowed &= is_allowed_for_platform;
error_map.insert(
platform.platform_identifier().into(),
json!(is_allowed_for_platform),
);
}
if is_allowed { if is_allowed {
Ok(()) Ok(())
} else { } else {
Err(( Err((
"Either the leader or the follower do not support the target desired by the test.", "One of the platforms do do not support the targets allowed by the test.",
indexmap! { error_map,
"test_desired_targets" => json!(self.metadata.targets.as_ref()),
"leader_support" => json!(leader_support),
"follower_support" => json!(follower_support),
},
)) ))
} }
} }
// Checks for the compatibility of the EVM version with the leader and follower nodes. // Checks for the compatibility of the EVM version with the platforms specified.
fn check_evm_version_compatibility(&self) -> TestCheckFunctionResult { fn check_evm_version_compatibility(&self) -> TestCheckFunctionResult {
let Some(evm_version_requirement) = self.metadata.required_evm_version else { let Some(evm_version_requirement) = self.metadata.required_evm_version else {
return Ok(()); return Ok(());
}; };
let leader_support = evm_version_requirement let mut error_map = indexmap! {
.matches(&<L::Blockchain as revive_dt_node::Node>::evm_version()); "test_desired_evm_version" => json!(self.metadata.required_evm_version),
let follower_support = evm_version_requirement };
.matches(&<F::Blockchain as revive_dt_node::Node>::evm_version()); let mut is_allowed = true;
let is_allowed = leader_support && follower_support; for (platform, node, ..) in self.platforms.iter() {
let is_allowed_for_platform = evm_version_requirement.matches(&node.evm_version());
is_allowed &= is_allowed_for_platform;
error_map.insert(
platform.platform_identifier().into(),
json!(is_allowed_for_platform),
);
}
if is_allowed { if is_allowed {
Ok(()) Ok(())
} else { } else {
Err(( Err((
"EVM version is incompatible with either the leader or the follower.", "EVM version is incompatible for the platforms specified",
indexmap! { error_map,
"test_desired_evm_version" => json!(self.metadata.required_evm_version),
"leader_support" => json!(leader_support),
"follower_support" => json!(follower_support),
},
)) ))
} }
} }
/// Checks if the leader and follower compilers support the mode that the test is for. /// Checks if the platforms' compilers support the mode that the test is for.
fn check_compiler_compatibility(&self) -> TestCheckFunctionResult { fn check_compiler_compatibility(&self) -> TestCheckFunctionResult {
let leader_support = self let mut error_map = indexmap! {
.leader_compiler "mode" => json!(self.mode),
.supports_mode(self.mode.optimize_setting, self.mode.pipeline); };
let follower_support = self let mut is_allowed = true;
.follower_compiler for (platform, _, compiler, ..) in self.platforms.iter() {
.supports_mode(self.mode.optimize_setting, self.mode.pipeline); let is_allowed_for_platform =
let is_allowed = leader_support && follower_support; compiler.supports_mode(self.mode.optimize_setting, self.mode.pipeline);
is_allowed &= is_allowed_for_platform;
error_map.insert(
platform.platform_identifier().into(),
json!(is_allowed_for_platform),
);
}
if is_allowed { if is_allowed {
Ok(()) Ok(())
} else { } else {
Err(( Err((
"Compilers do not support this mode either for the leader or for the follower.", "Compilers do not support this mode either for the provided platforms.",
indexmap! { error_map,
"mode" => json!(self.mode),
"leader_support" => json!(leader_support),
"follower_support" => json!(follower_support),
},
)) ))
} }
} }
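All three compatibility checks now follow the same shape: iterate over every platform, AND its verdict into a single `is_allowed` flag, and record the per-platform verdict in an `indexmap` that becomes the error payload. A minimal sketch of that pattern, with a hypothetical `check` predicate in place of the real target/EVM-version/mode tests:

```rust
use indexmap::IndexMap;
use serde_json::{Value, json};

/// Runs `check` against every platform; on failure, returns a map of
/// per-platform verdicts suitable for attaching to the error report.
fn check_all<'a>(
    platforms: &[&'a str],
    mut check: impl FnMut(&'a str) -> bool,
) -> Result<(), IndexMap<&'a str, Value>> {
    let mut error_map = IndexMap::new();
    let mut is_allowed = true;
    for &platform in platforms {
        let allowed_for_platform = check(platform);
        is_allowed &= allowed_for_platform;
        error_map.insert(platform, json!(allowed_for_platform));
    }
    if is_allowed { Ok(()) } else { Err(error_map) }
}
```

The real checks differ only in the predicate and in the extra context keys (desired targets, required EVM version, mode) seeded into the map before the loop.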
+52
@@ -0,0 +1,52 @@
//! This crate implements concurrent handling of testing nodes.
use std::sync::atomic::{AtomicUsize, Ordering};
use anyhow::Context as _;
use revive_dt_config::*;
use revive_dt_core::Platform;
use revive_dt_node_interaction::EthereumNode;
/// The node pool starts one or more nodes which can then be accessed
/// in a round-robin fashion.
pub struct NodePool {
next: AtomicUsize,
nodes: Vec<Box<dyn EthereumNode + Send + Sync>>,
}
impl NodePool {
/// Create a new pool. This starts as many nodes as the concurrency configuration specifies.
pub fn new(context: Context, platform: &dyn Platform) -> anyhow::Result<Self> {
let concurrency_configuration = AsRef::<ConcurrencyConfiguration>::as_ref(&context);
let nodes = concurrency_configuration.number_of_nodes;
let mut handles = Vec::with_capacity(nodes);
for _ in 0..nodes {
let context = context.clone();
handles.push(platform.new_node(context)?);
}
let mut nodes = Vec::with_capacity(nodes);
for handle in handles {
nodes.push(
handle
.join()
.map_err(|error| anyhow::anyhow!("failed to spawn node: {:?}", error))
.context("Failed to join node spawn thread")?
.map_err(|error| anyhow::anyhow!("node failed to spawn: {error}"))
.context("Node failed to spawn")?,
);
}
Ok(Self {
nodes,
next: Default::default(),
})
}
/// Get a handle to the next node.
pub fn round_robbin(&self) -> &dyn EthereumNode {
let current = self.next.fetch_add(1, Ordering::SeqCst) % self.nodes.len();
self.nodes.get(current).unwrap().as_ref()
}
}
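A short usage sketch of the new pool, assuming a `Platform` implementation and a populated `Context` are in scope; the surrounding function and the `revive_dt_node_pool` import path are hypothetical, chosen only for illustration:

```rust
use revive_dt_config::Context;
use revive_dt_core::Platform;
use revive_dt_node_pool::NodePool; // assumed crate path for the new pool module

// Hypothetical call site: start the pool once, then hand nodes out per test.
fn run_tests(context: Context, platform: &dyn Platform) -> anyhow::Result<()> {
    let pool = NodePool::new(context, platform)?;
    // Each call yields the next node in round-robin order; with N nodes the
    // (N + 1)-th call wraps back around to the first node.
    let node = pool.round_robbin();
    println!("assigned node {} at {}", node.id(), node.connection_string());
    Ok(())
}
```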
+51 -28
@@ -308,7 +308,7 @@ impl Input {
     pub async fn encoded_input(
         &self,
-        resolver: &impl ResolverApi,
+        resolver: &(impl ResolverApi + ?Sized),
         context: ResolutionContext<'_>,
     ) -> anyhow::Result<Bytes> {
         match self.method {
@@ -377,7 +377,7 @@ impl Input {
     /// Parse this input into a legacy transaction.
     pub async fn legacy_transaction(
         &self,
-        resolver: &impl ResolverApi,
+        resolver: &(impl ResolverApi + ?Sized),
         context: ResolutionContext<'_>,
     ) -> anyhow::Result<TransactionRequest> {
         let input_data = self
@@ -466,7 +466,7 @@ impl Calldata {
     pub async fn calldata(
         &self,
-        resolver: &impl ResolverApi,
+        resolver: &(impl ResolverApi + ?Sized),
         context: ResolutionContext<'_>,
     ) -> anyhow::Result<Vec<u8>> {
         let mut buffer = Vec::<u8>::with_capacity(self.size_requirement());
@@ -478,7 +478,7 @@ impl Calldata {
     pub async fn calldata_into_slice(
         &self,
         buffer: &mut Vec<u8>,
-        resolver: &impl ResolverApi,
+        resolver: &(impl ResolverApi + ?Sized),
         context: ResolutionContext<'_>,
     ) -> anyhow::Result<()> {
         match self {
@@ -515,7 +515,7 @@ impl Calldata {
     pub async fn is_equivalent(
         &self,
         other: &[u8],
-        resolver: &impl ResolverApi,
+        resolver: &(impl ResolverApi + ?Sized),
         context: ResolutionContext<'_>,
     ) -> anyhow::Result<bool> {
         match self {
@@ -557,7 +557,7 @@ impl CalldataItem {
     #[instrument(level = "info", skip_all, err)]
     async fn resolve(
         &self,
-        resolver: &impl ResolverApi,
+        resolver: &(impl ResolverApi + ?Sized),
         context: ResolutionContext<'_>,
     ) -> anyhow::Result<U256> {
         let mut stack = Vec::<CalldataToken<U256>>::new();
@@ -662,7 +662,7 @@ impl<T: AsRef<str>> CalldataToken<T> {
     /// https://github.com/matter-labs/era-compiler-tester/blob/0ed598a27f6eceee7008deab3ff2311075a2ec69/compiler_tester/src/test/case/input/value.rs#L43-L146
     async fn resolve(
         self,
-        resolver: &impl ResolverApi,
+        resolver: &(impl ResolverApi + ?Sized),
         context: ResolutionContext<'_>,
     ) -> anyhow::Result<CalldataToken<U256>> {
         match self {
@@ -695,7 +695,7 @@ impl<T: AsRef<str>> CalldataToken<T> {
                 context
                     .transaction_hash()
                     .context("No transaction hash provided to get the transaction gas price")
-                    .map(|tx_hash| resolver.transaction_gas_price(tx_hash))?
+                    .map(|tx_hash| resolver.transaction_gas_price(*tx_hash))?
                     .await
                     .map(U256::from)
             } else if item == Self::GAS_LIMIT_VARIABLE {
@@ -799,7 +799,7 @@ mod tests {
     use alloy::{eips::BlockNumberOrTag, json_abi::JsonAbi};
     use alloy_primitives::{BlockHash, BlockNumber, BlockTimestamp, ChainId, TxHash, address};
     use alloy_sol_types::SolValue;
-    use std::collections::HashMap;
+    use std::{collections::HashMap, pin::Pin};

     use super::*;
     use crate::metadata::ContractIdent;
@@ -807,40 +807,63 @@ mod tests {
     struct MockResolver;

     impl ResolverApi for MockResolver {
-        async fn chain_id(&self) -> anyhow::Result<ChainId> {
-            Ok(0x123)
+        fn chain_id(&self) -> Pin<Box<dyn Future<Output = anyhow::Result<ChainId>> + '_>> {
+            Box::pin(async move { Ok(0x123) })
         }

-        async fn block_gas_limit(&self, _: BlockNumberOrTag) -> anyhow::Result<u128> {
-            Ok(0x1234)
+        fn block_gas_limit(
+            &self,
+            _: BlockNumberOrTag,
+        ) -> Pin<Box<dyn Future<Output = anyhow::Result<u128>> + '_>> {
+            Box::pin(async move { Ok(0x1234) })
         }

-        async fn block_coinbase(&self, _: BlockNumberOrTag) -> anyhow::Result<Address> {
-            Ok(Address::ZERO)
+        fn block_coinbase(
+            &self,
+            _: BlockNumberOrTag,
+        ) -> Pin<Box<dyn Future<Output = anyhow::Result<Address>> + '_>> {
+            Box::pin(async move { Ok(Address::ZERO) })
         }

-        async fn block_difficulty(&self, _: BlockNumberOrTag) -> anyhow::Result<U256> {
-            Ok(U256::from(0x12345u128))
+        fn block_difficulty(
+            &self,
+            _: BlockNumberOrTag,
+        ) -> Pin<Box<dyn Future<Output = anyhow::Result<U256>> + '_>> {
+            Box::pin(async move { Ok(U256::from(0x12345u128)) })
         }

-        async fn block_base_fee(&self, _: BlockNumberOrTag) -> anyhow::Result<u64> {
-            Ok(0x100)
+        fn block_base_fee(
+            &self,
+            _: BlockNumberOrTag,
+        ) -> Pin<Box<dyn Future<Output = anyhow::Result<u64>> + '_>> {
+            Box::pin(async move { Ok(0x100) })
         }

-        async fn block_hash(&self, _: BlockNumberOrTag) -> anyhow::Result<BlockHash> {
-            Ok([0xEE; 32].into())
+        fn block_hash(
+            &self,
+            _: BlockNumberOrTag,
+        ) -> Pin<Box<dyn Future<Output = anyhow::Result<BlockHash>> + '_>> {
+            Box::pin(async move { Ok([0xEE; 32].into()) })
         }

-        async fn block_timestamp(&self, _: BlockNumberOrTag) -> anyhow::Result<BlockTimestamp> {
-            Ok(0x123456)
+        fn block_timestamp(
+            &self,
+            _: BlockNumberOrTag,
+        ) -> Pin<Box<dyn Future<Output = anyhow::Result<BlockTimestamp>> + '_>> {
+            Box::pin(async move { Ok(0x123456) })
         }

-        async fn last_block_number(&self) -> anyhow::Result<BlockNumber> {
-            Ok(0x1234567)
+        fn last_block_number(
+            &self,
+        ) -> Pin<Box<dyn Future<Output = anyhow::Result<BlockNumber>> + '_>> {
+            Box::pin(async move { Ok(0x1234567) })
         }

-        async fn transaction_gas_price(&self, _: &TxHash) -> anyhow::Result<u128> {
-            Ok(0x200)
+        fn transaction_gas_price(
+            &self,
+            _: TxHash,
+        ) -> Pin<Box<dyn Future<Output = anyhow::Result<u128>> + '_>> {
+            Box::pin(async move { Ok(0x200) })
         }
     }
@@ -987,7 +1010,7 @@ mod tests {
     async fn resolve_calldata_item(
         input: &str,
         deployed_contracts: &HashMap<ContractInstance, (ContractIdent, Address, JsonAbi)>,
-        resolver: &impl ResolverApi,
+        resolver: &(impl ResolverApi + ?Sized),
     ) -> anyhow::Result<U256> {
         let context = ResolutionContext::default().with_deployed_contracts(deployed_contracts);
         CalldataItem::new(input).resolve(resolver, context).await
+5 -3
@@ -13,8 +13,10 @@ use serde::{Deserialize, Serialize};
 use revive_common::EVMVersion;
 use revive_dt_common::{
-    cached_fs::read_to_string, iterators::FilesWithExtensionIterator, macros::define_wrapper_type,
-    types::Mode,
+    cached_fs::read_to_string,
+    iterators::FilesWithExtensionIterator,
+    macros::define_wrapper_type,
+    types::{Mode, VmIdentifier},
 };
 use tracing::error;
@@ -81,7 +83,7 @@ pub struct Metadata {
     /// example, if we wish for the metadata file's cases to only be run on PolkaVM then we'd
     /// specify a target of "PolkaVM" in here.
     #[serde(skip_serializing_if = "Option::is_none")]
-    pub targets: Option<Vec<String>>,
+    pub targets: Option<Vec<VmIdentifier>>,

     /// A vector of the test cases and workloads contained within the metadata file. This is their
     /// primary description.
+29 -10
@@ -1,4 +1,5 @@
 use std::collections::HashMap;
+use std::pin::Pin;

 use alloy::eips::BlockNumberOrTag;
 use alloy::json_abi::JsonAbi;
@@ -12,36 +13,54 @@ use crate::metadata::{ContractIdent, ContractInstance};
 /// crate implements to go from string calldata and into the bytes calldata.
 pub trait ResolverApi {
     /// Returns the ID of the chain that the node is on.
-    fn chain_id(&self) -> impl Future<Output = Result<ChainId>>;
+    fn chain_id(&self) -> Pin<Box<dyn Future<Output = Result<ChainId>> + '_>>;

     /// Returns the gas price for the specified transaction.
-    fn transaction_gas_price(&self, tx_hash: &TxHash) -> impl Future<Output = Result<u128>>;
+    fn transaction_gas_price(
+        &self,
+        tx_hash: TxHash,
+    ) -> Pin<Box<dyn Future<Output = Result<u128>> + '_>>;

-    // TODO: This is currently a u128 due to Kitchensink needing more than 64 bits for its gas limit
+    // TODO: This is currently a u128 due to substrate needing more than 64 bits for its gas limit
     // when we implement the changes to the gas we need to adjust this to be a u64.
     /// Returns the gas limit of the specified block.
-    fn block_gas_limit(&self, number: BlockNumberOrTag) -> impl Future<Output = Result<u128>>;
+    fn block_gas_limit(
+        &self,
+        number: BlockNumberOrTag,
+    ) -> Pin<Box<dyn Future<Output = Result<u128>> + '_>>;

     /// Returns the coinbase of the specified block.
-    fn block_coinbase(&self, number: BlockNumberOrTag) -> impl Future<Output = Result<Address>>;
+    fn block_coinbase(
+        &self,
+        number: BlockNumberOrTag,
+    ) -> Pin<Box<dyn Future<Output = Result<Address>> + '_>>;

     /// Returns the difficulty of the specified block.
-    fn block_difficulty(&self, number: BlockNumberOrTag) -> impl Future<Output = Result<U256>>;
+    fn block_difficulty(
+        &self,
+        number: BlockNumberOrTag,
+    ) -> Pin<Box<dyn Future<Output = Result<U256>> + '_>>;

     /// Returns the base fee of the specified block.
-    fn block_base_fee(&self, number: BlockNumberOrTag) -> impl Future<Output = Result<u64>>;
+    fn block_base_fee(
+        &self,
+        number: BlockNumberOrTag,
+    ) -> Pin<Box<dyn Future<Output = Result<u64>> + '_>>;

     /// Returns the hash of the specified block.
-    fn block_hash(&self, number: BlockNumberOrTag) -> impl Future<Output = Result<BlockHash>>;
+    fn block_hash(
+        &self,
+        number: BlockNumberOrTag,
+    ) -> Pin<Box<dyn Future<Output = Result<BlockHash>> + '_>>;

     /// Returns the timestamp of the specified block.
     fn block_timestamp(
         &self,
         number: BlockNumberOrTag,
-    ) -> impl Future<Output = Result<BlockTimestamp>>;
+    ) -> Pin<Box<dyn Future<Output = Result<BlockTimestamp>> + '_>>;

     /// Returns the number of the last block.
-    fn last_block_number(&self) -> impl Future<Output = Result<BlockNumber>>;
+    fn last_block_number(&self) -> Pin<Box<dyn Future<Output = Result<BlockNumber>> + '_>>;
 }

 #[derive(Clone, Copy, Debug, Default)]
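The motivation for replacing `-> impl Future` with `-> Pin<Box<dyn Future>>` throughout this trait is object safety: a method returning `impl Trait` cannot be called through `dyn ResolverApi`, while a boxed future can. A minimal sketch of the difference, with a trait and names invented purely for illustration:

```rust
use std::future::Future;
use std::pin::Pin;

trait BoxedApi {
    // Object-safe: the return type is a concrete (boxed) type.
    fn value(&self) -> Pin<Box<dyn Future<Output = u64> + '_>>;
}

struct Fixed(u64);

impl BoxedApi for Fixed {
    fn value(&self) -> Pin<Box<dyn Future<Output = u64> + '_>> {
        Box::pin(async move { self.0 })
    }
}

// This compiles only because `BoxedApi` is object-safe; with
// `fn value(&self) -> impl Future<Output = u64>` it would not.
async fn through_dyn(api: &dyn BoxedApi) -> u64 {
    api.value().await
}
```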
+4
@@ -9,6 +9,10 @@ repository.workspace = true
 rust-version.workspace = true

 [dependencies]
+revive-common = { workspace = true }
+revive-dt-format = { workspace = true }
 alloy = { workspace = true }
 anyhow = { workspace = true }
+25 -7
@@ -1,35 +1,53 @@
 //! This crate implements all node interactions.

-use alloy::primitives::{Address, StorageKey, U256};
+use std::pin::Pin;
+use std::sync::Arc;
+
+use alloy::primitives::{Address, StorageKey, TxHash, U256};
 use alloy::rpc::types::trace::geth::{DiffMode, GethDebugTracingOptions, GethTrace};
 use alloy::rpc::types::{EIP1186AccountProofResponse, TransactionReceipt, TransactionRequest};
 use anyhow::Result;
+use revive_common::EVMVersion;
+use revive_dt_format::traits::ResolverApi;

 /// An interface for all interactions with Ethereum compatible nodes.
+#[allow(clippy::type_complexity)]
 pub trait EthereumNode {
+    fn id(&self) -> usize;
+
+    /// Returns the node's connection string.
+    fn connection_string(&self) -> &str;
+
     /// Execute the [TransactionRequest] and return a [TransactionReceipt].
     fn execute_transaction(
         &self,
         transaction: TransactionRequest,
-    ) -> impl Future<Output = Result<TransactionReceipt>>;
+    ) -> Pin<Box<dyn Future<Output = Result<TransactionReceipt>> + '_>>;

     /// Trace the transaction in the [TransactionReceipt] and return a [GethTrace].
     fn trace_transaction(
         &self,
-        receipt: &TransactionReceipt,
+        tx_hash: TxHash,
         trace_options: GethDebugTracingOptions,
-    ) -> impl Future<Output = Result<GethTrace>>;
+    ) -> Pin<Box<dyn Future<Output = Result<GethTrace>> + '_>>;

     /// Returns the state diff of the transaction hash in the [TransactionReceipt].
-    fn state_diff(&self, receipt: &TransactionReceipt) -> impl Future<Output = Result<DiffMode>>;
+    fn state_diff(&self, tx_hash: TxHash) -> Pin<Box<dyn Future<Output = Result<DiffMode>> + '_>>;

     /// Returns the balance of the provided [`Address`] back.
-    fn balance_of(&self, address: Address) -> impl Future<Output = Result<U256>>;
+    fn balance_of(&self, address: Address) -> Pin<Box<dyn Future<Output = Result<U256>> + '_>>;

     /// Returns the latest storage proof of the provided [`Address`].
     fn latest_state_proof(
         &self,
         address: Address,
         keys: Vec<StorageKey>,
-    ) -> impl Future<Output = Result<EIP1186AccountProofResponse>>;
+    ) -> Pin<Box<dyn Future<Output = Result<EIP1186AccountProofResponse>> + '_>>;
+
+    /// Returns the resolver that is to be used with this Ethereum node.
+    fn resolver(&self) -> Pin<Box<dyn Future<Output = Result<Arc<dyn ResolverApi + '_>>> + '_>>;
+
+    /// Returns the EVM version of the node.
+    fn evm_version(&self) -> EVMVersion;
 }
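With every method now returning a boxed future, the trait can be stored and awaited through trait objects, which is exactly what the new `NodePool` relies on. A small sketch of a caller driving the object-safe trait (the function and variable names are illustrative, not part of the crate):

```rust
use alloy::primitives::Address;
use revive_dt_node_interaction::EthereumNode;

// Query the balance of one address on every pooled node.
async fn balances(
    nodes: &[Box<dyn EthereumNode + Send + Sync>],
    address: Address,
) -> anyhow::Result<()> {
    for node in nodes {
        // `balance_of` returns Pin<Box<dyn Future + '_>>, so it can be
        // awaited directly even though `node` is a trait object.
        let balance = node.balance_of(address).await?;
        println!("node {}: {balance}", node.id());
    }
    Ok(())
}
```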
+2 -2
@@ -1,5 +1,5 @@
 /// This constant defines how much Wei accounts are pre-seeded with in genesis.
 ///
-/// Note: After changing this number, check that the tests for kitchensink work as we encountered
-/// some issues with different values of the initial balance on Kitchensink.
+/// Note: After changing this number, check that the tests for substrate work as we encountered
+/// some issues with different values of the initial balance on substrate.
 pub const INITIAL_BALANCE: u128 = 10u128.pow(37);
+331 -272
@@ -5,6 +5,7 @@ use std::{
     io::{BufRead, BufReader, Read, Write},
     ops::ControlFlow,
     path::PathBuf,
+    pin::Pin,
     process::{Child, Command, Stdio},
     sync::{
         Arc,
@@ -24,7 +25,7 @@ use alloy::{
         fillers::{CachedNonceManager, ChainIdFiller, FillProvider, NonceFiller, TxFiller},
     },
     rpc::types::{
-        EIP1186AccountProofResponse, TransactionReceipt, TransactionRequest,
+        EIP1186AccountProofResponse, TransactionRequest,
         trace::geth::{DiffMode, GethDebugTracingOptions, PreStateConfig, PreStateFrame},
     },
 };
@@ -92,6 +93,43 @@ impl GethNode {
     const RECEIPT_POLLING_DURATION: Duration = Duration::from_secs(5 * 60);
     const TRACE_POLLING_DURATION: Duration = Duration::from_secs(60);

+    pub fn new(
+        context: impl AsRef<WorkingDirectoryConfiguration>
+        + AsRef<WalletConfiguration>
+        + AsRef<GethConfiguration>
+        + Clone,
+    ) -> Self {
+        let working_directory_configuration =
+            AsRef::<WorkingDirectoryConfiguration>::as_ref(&context);
+        let wallet_configuration = AsRef::<WalletConfiguration>::as_ref(&context);
+        let geth_configuration = AsRef::<GethConfiguration>::as_ref(&context);
+
+        let geth_directory = working_directory_configuration
+            .as_path()
+            .join(Self::BASE_DIRECTORY);
+        let id = NODE_COUNT.fetch_add(1, Ordering::SeqCst);
+        let base_directory = geth_directory.join(id.to_string());
+        let wallet = wallet_configuration.wallet();
+
+        Self {
+            connection_string: base_directory.join(Self::IPC_FILE).display().to_string(),
+            data_directory: base_directory.join(Self::DATA_DIRECTORY),
+            logs_directory: base_directory.join(Self::LOGS_DIRECTORY),
+            base_directory,
+            geth: geth_configuration.path.clone(),
+            id,
+            handle: None,
+            start_timeout: geth_configuration.start_timeout_ms,
+            wallet: wallet.clone(),
+            chain_id_filler: Default::default(),
+            nonce_manager: Default::default(),
+            // We know that we only need to be storing 2 files so we can specify that when creating
+            // the vector. It's the stdout and stderr of the geth node.
+            logs_file_to_flush: Vec::with_capacity(2),
+        }
+    }
+
     /// Create the node directory and call `geth init` to configure the genesis.
     #[instrument(level = "info", skip_all, fields(geth_node_id = self.id))]
     fn init(&mut self, mut genesis: Genesis) -> anyhow::Result<&mut Self> {
@@ -289,320 +327,327 @@ impl GethNode {
 }

 impl EthereumNode for GethNode {
+    fn id(&self) -> usize {
+        self.id as _
+    }
+
+    fn connection_string(&self) -> &str {
+        &self.connection_string
+    }
+
     #[instrument(
         level = "info",
         skip_all,
         fields(geth_node_id = self.id, connection_string = self.connection_string),
         err,
     )]
-    async fn execute_transaction(
+    fn execute_transaction(
         &self,
         transaction: TransactionRequest,
-    ) -> anyhow::Result<alloy::rpc::types::TransactionReceipt> {
+    ) -> Pin<Box<dyn Future<Output = anyhow::Result<alloy::rpc::types::TransactionReceipt>> + '_>>
+    {
+        Box::pin(async move {
             let provider = self
                 .provider()
                 .await
                 .context("Failed to create provider for transaction submission")?;

             let pending_transaction = provider
                 .send_transaction(transaction)
                 .await
                 .inspect_err(
                     |err| tracing::error!(%err, "Encountered an error when submitting the transaction"),
                 )
                 .context("Failed to submit transaction to geth node")?;
             let transaction_hash = *pending_transaction.tx_hash();

             // The following is a fix for the "transaction indexing is in progress" error that we used
             // to get. You can find more information on this in the following GH issue in geth
             // https://github.com/ethereum/go-ethereum/issues/28877. To summarize what's going on,
             // before we can get the receipt of the transaction it needs to have been indexed by the
             // node's indexer. Just because the transaction has been confirmed it doesn't mean that it
             // has been indexed. When we call alloy's `get_receipt` it checks if the transaction was
             // confirmed. If it has been, then it will call the `eth_getTransactionReceipt` method which
             // _might_ return the above error if the tx has not been indexed yet. So, we need to
             // implement a retry mechanism for the receipt to keep retrying to get it until it
             // eventually works, but we only do that if the error we get back is the "transaction
             // indexing is in progress" error or if the receipt is None.
             //
             // Getting the transaction indexed and taking a receipt can take a long time especially when
             // a lot of transactions are being submitted to the node. Thus, while initially we only
             // allowed for 60 seconds of waiting with a 1 second delay in polling, we need to allow for
             // a larger wait time. Therefore, in here we allow for 5 minutes of waiting with exponential
             // backoff each time we attempt to get the receipt and find that it's not available.
             let provider = Arc::new(provider);
             poll(
                 Self::RECEIPT_POLLING_DURATION,
                 PollingWaitBehavior::Constant(Duration::from_millis(200)),
                 move || {
                     let provider = provider.clone();
                     async move {
                         match provider.get_transaction_receipt(transaction_hash).await {
                             Ok(Some(receipt)) => Ok(ControlFlow::Break(receipt)),
                             Ok(None) => Ok(ControlFlow::Continue(())),
                             Err(error) => {
                                 let error_string = error.to_string();
                                 match error_string.contains(Self::TRANSACTION_INDEXING_ERROR) {
                                     true => Ok(ControlFlow::Continue(())),
                                     false => Err(error.into()),
                                 }
                             }
                         }
                     }
                 },
             )
             .instrument(tracing::info_span!(
                 "Awaiting transaction receipt",
                 ?transaction_hash
             ))
             .await
+        })
     }

     #[instrument(level = "info", skip_all, fields(geth_node_id = self.id))]
-    async fn trace_transaction(
+    fn trace_transaction(
         &self,
-        transaction: &TransactionReceipt,
+        tx_hash: TxHash,
         trace_options: GethDebugTracingOptions,
-    ) -> anyhow::Result<alloy::rpc::types::trace::geth::GethTrace> {
+    ) -> Pin<Box<dyn Future<Output = anyhow::Result<alloy::rpc::types::trace::geth::GethTrace>> + '_>>
+    {
+        Box::pin(async move {
             let provider = Arc::new(
                 self.provider()
                     .await
                     .context("Failed to create provider for tracing")?,
             );
             poll(
                 Self::TRACE_POLLING_DURATION,
                 PollingWaitBehavior::Constant(Duration::from_millis(200)),
                 move || {
                     let provider = provider.clone();
                     let trace_options = trace_options.clone();
                     async move {
                         match provider
-                            .debug_trace_transaction(transaction.transaction_hash, trace_options)
+                            .debug_trace_transaction(tx_hash, trace_options)
                             .await
                         {
                             Ok(trace) => Ok(ControlFlow::Break(trace)),
                             Err(error) => {
                                 let error_string = error.to_string();
                                 match error_string.contains(Self::TRANSACTION_TRACING_ERROR) {
                                     true => Ok(ControlFlow::Continue(())),
                                     false => Err(error.into()),
                                 }
                             }
                         }
                     }
                 },
             )
             .await
+        })
     }

     #[instrument(level = "info", skip_all, fields(geth_node_id = self.id))]
-    async fn state_diff(&self, transaction: &TransactionReceipt) -> anyhow::Result<DiffMode> {
+    fn state_diff(
+        &self,
+        tx_hash: TxHash,
+    ) -> Pin<Box<dyn Future<Output = anyhow::Result<DiffMode>> + '_>> {
+        Box::pin(async move {
             let trace_options = GethDebugTracingOptions::prestate_tracer(PreStateConfig {
                 diff_mode: Some(true),
                 disable_code: None,
                 disable_storage: None,
             });
             match self
-                .trace_transaction(transaction, trace_options)
+                .trace_transaction(tx_hash, trace_options)
                 .await
                 .context("Failed to trace transaction for prestate diff")?
                 .try_into_pre_state_frame()
                 .context("Failed to convert trace into pre-state frame")?
             {
                 PreStateFrame::Diff(diff) => Ok(diff),
                 _ => anyhow::bail!("expected a diff mode trace"),
             }
+        })
     }

     #[instrument(level = "info", skip_all, fields(geth_node_id = self.id))]
-    async fn balance_of(&self, address: Address) -> anyhow::Result<U256> {
+    fn balance_of(
+        &self,
+        address: Address,
+    ) -> Pin<Box<dyn Future<Output = anyhow::Result<U256>> + '_>> {
+        Box::pin(async move {
             self.provider()
                 .await
                 .context("Failed to get the Geth provider")?
                 .get_balance(address)
                 .await
                 .map_err(Into::into)
+        })
     }

     #[instrument(level = "info", skip_all, fields(geth_node_id = self.id))]
-    async fn latest_state_proof(
+    fn latest_state_proof(
         &self,
         address: Address,
         keys: Vec<StorageKey>,
-    ) -> anyhow::Result<EIP1186AccountProofResponse> {
+    ) -> Pin<Box<dyn Future<Output = anyhow::Result<EIP1186AccountProofResponse>> + '_>> {
+        Box::pin(async move {
             self.provider()
                 .await
                 .context("Failed to get the Geth provider")?
                 .get_proof(address, keys)
                 .latest()
                 .await
                 .map_err(Into::into)
+        })
     }
+
+    // #[instrument(level = "info", skip_all, fields(geth_node_id = self.id))]
+    fn resolver(
+        &self,
+    ) -> Pin<Box<dyn Future<Output = anyhow::Result<Arc<dyn ResolverApi + '_>>> + '_>> {
+        Box::pin(async move {
+            let id = self.id;
+            let provider = self.provider().await?;
+            Ok(Arc::new(GethNodeResolver { id, provider }) as Arc<dyn ResolverApi>)
+        })
+    }
+
+    fn evm_version(&self) -> EVMVersion {
+        EVMVersion::Cancun
+    }
 }

-impl ResolverApi for GethNode {
+pub struct GethNodeResolver<F: TxFiller<Ethereum>, P: Provider<Ethereum>> {
+    id: u32,
+    provider: FillProvider<F, P, Ethereum>,
+}
+
+impl<F: TxFiller<Ethereum>, P: Provider<Ethereum>> ResolverApi for GethNodeResolver<F, P> {
     #[instrument(level = "info", skip_all, fields(geth_node_id = self.id))]
-    async fn chain_id(&self) -> anyhow::Result<alloy::primitives::ChainId> {
-        self.provider()
-            .await
-            .context("Failed to get the Geth provider")?
-            .get_chain_id()
-            .await
-            .map_err(Into::into)
+    fn chain_id(
+        &self,
+    ) -> Pin<Box<dyn Future<Output = anyhow::Result<alloy::primitives::ChainId>> + '_>> {
+        Box::pin(async move { self.provider.get_chain_id().await.map_err(Into::into) })
     }

     #[instrument(level = "info", skip_all, fields(geth_node_id = self.id))]
-    async fn transaction_gas_price(&self, tx_hash: &TxHash) -> anyhow::Result<u128> {
-        self.provider()
-            .await
-            .context("Failed to get the Geth provider")?
-            .get_transaction_receipt(*tx_hash)
-            .await?
-            .context("Failed to get the transaction receipt")
-            .map(|receipt| receipt.effective_gas_price)
+    fn transaction_gas_price(
+        &self,
+        tx_hash: TxHash,
+    ) -> Pin<Box<dyn Future<Output = anyhow::Result<u128>> + '_>> {
+        Box::pin(async move {
+            self.provider
+                .get_transaction_receipt(tx_hash)
+                .await?
+                .context("Failed to get the transaction receipt")
+                .map(|receipt| receipt.effective_gas_price)
+        })
     }

     #[instrument(level = "info", skip_all, fields(geth_node_id = self.id))]
-    async fn block_gas_limit(&self, number: BlockNumberOrTag) -> anyhow::Result<u128> {
-        self.provider()
-            .await
-            .context("Failed to get the Geth provider")?
-            .get_block_by_number(number)
-            .await
-            .context("Failed to get the geth block")?
-            .context("Failed to get the Geth block, perhaps there are no blocks?")
-            .map(|block| block.header.gas_limit as _)
+    fn block_gas_limit(
+        &self,
+        number: BlockNumberOrTag,
+    ) -> Pin<Box<dyn Future<Output = anyhow::Result<u128>> + '_>> {
+        Box::pin(async move {
+            self.provider
+                .get_block_by_number(number)
+                .await
+                .context("Failed to get the geth block")?
+                .context("Failed to get the Geth block, perhaps there are no blocks?")
+                .map(|block| block.header.gas_limit as _)
+        })
     }

     #[instrument(level = "info", skip_all, fields(geth_node_id = self.id))]
-    async fn block_coinbase(&self, number: BlockNumberOrTag) -> anyhow::Result<Address> {
-        self.provider()
-            .await
-            .context("Failed to get the Geth provider")?
-            .get_block_by_number(number)
-            .await
-            .context("Failed to get the geth block")?
-            .context("Failed to get the Geth block, perhaps there are no blocks?")
-            .map(|block| block.header.beneficiary)
+    fn block_coinbase(
+        &self,
+        number: BlockNumberOrTag,
+    ) -> Pin<Box<dyn Future<Output = anyhow::Result<Address>> + '_>> {
+        Box::pin(async move {
+            self.provider
+                .get_block_by_number(number)
+                .await
+                .context("Failed to get the geth block")?
+                .context("Failed to get the Geth block, perhaps there are no blocks?")
+                .map(|block| block.header.beneficiary)
+        })
     }

     #[instrument(level = "info", skip_all, fields(geth_node_id = self.id))]
-    async fn block_difficulty(&self, number: BlockNumberOrTag) -> anyhow::Result<U256> {
-        self.provider()
-            .await
-            .context("Failed to get the Geth provider")?
-            .get_block_by_number(number)
-            .await
-            .context("Failed to get the geth block")?
-            .context("Failed to get the Geth block, perhaps there are no blocks?")
-            .map(|block| U256::from_be_bytes(block.header.mix_hash.0))
+    fn block_difficulty(
+        &self,
+        number: BlockNumberOrTag,
+    ) -> Pin<Box<dyn Future<Output = anyhow::Result<U256>> + '_>> {
+        Box::pin(async move {
+            self.provider
+                .get_block_by_number(number)
+                .await
+                .context("Failed to get the geth block")?
+                .context("Failed to get the Geth block, perhaps there are no blocks?")
+                .map(|block| U256::from_be_bytes(block.header.mix_hash.0))
+        })
     }

     #[instrument(level = "info", skip_all, fields(geth_node_id = self.id))]
-    async fn block_base_fee(&self, number: BlockNumberOrTag) -> anyhow::Result<u64> {
-        self.provider()
-            .await
-            .context("Failed to get the Geth provider")?
-            .get_block_by_number(number)
-            .await
-            .context("Failed to get the geth block")?
-            .context("Failed to get the Geth block, perhaps there are no blocks?")
-            .and_then(|block| {
-                block
-                    .header
-                    .base_fee_per_gas
-                    .context("Failed to get the base fee per gas")
-            })
+    fn block_base_fee(
+        &self,
+        number: BlockNumberOrTag,
+    ) -> Pin<Box<dyn Future<Output = anyhow::Result<u64>> + '_>> {
+        Box::pin(async move {
+            self.provider
+                .get_block_by_number(number)
+                .await
+                .context("Failed to get the geth block")?
+                .context("Failed to get the Geth block, perhaps there are no blocks?")
+                .and_then(|block| {
+                    block
+                        .header
+                        .base_fee_per_gas
+                        .context("Failed to get the base fee per gas")
+                })
+        })
     }

     #[instrument(level = "info", skip_all, fields(geth_node_id = self.id))]
-    async fn block_hash(&self, number: BlockNumberOrTag) -> anyhow::Result<BlockHash> {
-        self.provider()
-            .await
-            .context("Failed to get the Geth provider")?
-            .get_block_by_number(number)
-            .await
-            .context("Failed to get the geth block")?
-            .context("Failed to get the Geth block, perhaps there are no blocks?")
-            .map(|block| block.header.hash)
+    fn block_hash(
+        &self,
+        number: BlockNumberOrTag,
+    ) -> Pin<Box<dyn Future<Output = anyhow::Result<BlockHash>> + '_>> {
+        Box::pin(async move {
+            self.provider
+                .get_block_by_number(number)
+                .await
+                .context("Failed to get the geth block")?
+                .context("Failed to get the Geth block, perhaps there are no blocks?")
+                .map(|block| block.header.hash)
+        })
     }

     #[instrument(level = "info", skip_all, fields(geth_node_id = self.id))]
-    async fn block_timestamp(&self, number: BlockNumberOrTag) -> anyhow::Result<BlockTimestamp> {
-        self.provider()
-            .await
-            .context("Failed to get the Geth provider")?
-            .get_block_by_number(number)
-            .await
-            .context("Failed to get the geth block")?
-            .context("Failed to get the Geth block, perhaps there are no blocks?")
-            .map(|block| block.header.timestamp)
+    fn block_timestamp(
+        &self,
+        number: BlockNumberOrTag,
+    ) -> Pin<Box<dyn Future<Output = anyhow::Result<BlockTimestamp>> + '_>> {
+        Box::pin(async move {
+            self.provider
+                .get_block_by_number(number)
+                .await
+                .context("Failed to get the geth block")?
+                .context("Failed to get the Geth block, perhaps there are no blocks?")
+                .map(|block| block.header.timestamp)
+        })
     }

     #[instrument(level = "info", skip_all, fields(geth_node_id = self.id))]
-    async fn last_block_number(&self) -> anyhow::Result<BlockNumber> {
-        self.provider()
-            .await
-            .context("Failed to get the Geth provider")?
-            .get_block_number()
-            .await
-            .map_err(Into::into)
+    fn last_block_number(&self) -> Pin<Box<dyn Future<Output = anyhow::Result<BlockNumber>> + '_>> {
+        Box::pin(async move { self.provider.get_block_number().await.map_err(Into::into) })
     }
 }

 impl Node for GethNode {
-    fn new(
-        context: impl AsRef<WorkingDirectoryConfiguration>
-        + AsRef<ConcurrencyConfiguration>
-        + AsRef<GenesisConfiguration>
-        + AsRef<WalletConfiguration>
-        + AsRef<GethConfiguration>
-        + AsRef<KitchensinkConfiguration>
-        + AsRef<ReviveDevNodeConfiguration>
-        + AsRef<EthRpcConfiguration>
-        + Clone,
-    ) -> Self {
-        let working_directory_configuration =
-            AsRef::<WorkingDirectoryConfiguration>::as_ref(&context);
-        let wallet_configuration = AsRef::<WalletConfiguration>::as_ref(&context);
-        let geth_configuration = AsRef::<GethConfiguration>::as_ref(&context);
-
-        let geth_directory = working_directory_configuration
-            .as_path()
-            .join(Self::BASE_DIRECTORY);
-        let id = NODE_COUNT.fetch_add(1, Ordering::SeqCst);
-        let base_directory = geth_directory.join(id.to_string());
-        let wallet = wallet_configuration.wallet();
-
-        Self {
-            connection_string: base_directory.join(Self::IPC_FILE).display().to_string(),
-            data_directory: base_directory.join(Self::DATA_DIRECTORY),
-            logs_directory: base_directory.join(Self::LOGS_DIRECTORY),
-            base_directory,
-            geth: geth_configuration.path.clone(),
-            id,
-            handle: None,
-            start_timeout: geth_configuration.start_timeout_ms,
-            wallet: wallet.clone(),
-            chain_id_filler: Default::default(),
-            nonce_manager: Default::default(),
-            // We know that we only need to be storing 2 files so we can specify that when creating
-            // the vector. It's the stdout and stderr of the geth node.
-            logs_file_to_flush: Vec::with_capacity(2),
-        }
-    }
-
-    #[instrument(level = "info", skip_all, fields(geth_node_id = self.id))]
-    fn id(&self) -> usize {
-        self.id as _
-    }
-
-    #[instrument(level = "info", skip_all, fields(geth_node_id = self.id))]
-    fn connection_string(&self) -> String {
-        self.connection_string.clone()
-    }
-
     #[instrument(level = "info", skip_all, fields(geth_node_id = self.id))]
     fn shutdown(&mut self) -> anyhow::Result<()> {
         // Terminate the processes in a graceful manner to allow for the output to be flushed.
@@ -645,17 +690,6 @@ impl Node for GethNode {
             .stdout;
         Ok(String::from_utf8_lossy(&output).into())
     }
-
-    fn matches_target(targets: Option<&[String]>) -> bool {
-        match targets {
-            None => true,
-            Some(targets) => targets.iter().any(|str| str.as_str() == "evm"),
-        }
-    }
-
-    fn evm_version() -> EVMVersion {
-        EVMVersion::Cancun
-    }
 }

 impl Drop for GethNode {
@@ -669,11 +703,11 @@ impl Drop for GethNode {
 mod tests {
     use super::*;

-    fn test_config() -> ExecutionContext {
-        ExecutionContext::default()
+    fn test_config() -> TestExecutionContext {
+        TestExecutionContext::default()
     }

-    fn new_node() -> (ExecutionContext, GethNode) {
+    fn new_node() -> (TestExecutionContext, GethNode) {
         let context = test_config();
         let mut node = GethNode::new(&context);
         node.init(context.genesis_configuration.genesis().unwrap().clone())
@@ -698,7 +732,7 @@ mod tests {
         let (_context, node) = new_node();

         // Act
-        let chain_id = node.chain_id().await;
+        let chain_id = node.resolver().await.unwrap().chain_id().await;

         // Assert
         let chain_id = chain_id.expect("Failed to get the chain id");
@@ -711,7 +745,12 @@ mod tests {
         let (_context, node) = new_node();

         // Act
-        let gas_limit = node.block_gas_limit(BlockNumberOrTag::Latest).await;
+        let gas_limit = node
+            .resolver()
+            .await
+            .unwrap()
+            .block_gas_limit(BlockNumberOrTag::Latest)
+            .await;

         // Assert
         let gas_limit = gas_limit.expect("Failed to get the gas limit");
@@ -724,7 +763,12 @@ mod tests {
         let (_context, node) = new_node();

         // Act
-        let coinbase = node.block_coinbase(BlockNumberOrTag::Latest).await;
+        let coinbase = node
+            .resolver()
+            .await
+            .unwrap()
+            .block_coinbase(BlockNumberOrTag::Latest)
+            .await;

         // Assert
         let coinbase = coinbase.expect("Failed to get the coinbase");
@@ -737,7 +781,12 @@ mod tests {
         let (_context, node) = new_node();

         // Act
-        let block_difficulty = node.block_difficulty(BlockNumberOrTag::Latest).await;
+        let block_difficulty = node
+            .resolver()
+            .await
+            .unwrap()
+            .block_difficulty(BlockNumberOrTag::Latest)
+            .await;

         // Assert
         let block_difficulty = block_difficulty.expect("Failed to get the block difficulty");
@@ -750,7 +799,12 @@ mod tests {
         let (_context, node) = new_node();

         // Act
-        let block_hash = node.block_hash(BlockNumberOrTag::Latest).await;
+        let block_hash = node
+            .resolver()
+            .await
+            .unwrap()
+            .block_hash(BlockNumberOrTag::Latest)
+            .await;

         // Assert
         let _ = block_hash.expect("Failed to get the block hash");
@@ -762,7 +816,12 @@ mod tests {
         let (_context, node) = new_node();

         // Act
-        let block_timestamp = node.block_timestamp(BlockNumberOrTag::Latest).await;
+        let block_timestamp = node
+            .resolver()
+            .await
+            .unwrap()
+            .block_timestamp(BlockNumberOrTag::Latest)
+            .await;

         // Assert
         let _ = block_timestamp.expect("Failed to get the block timestamp");
@@ -774,7 +833,7 @@ mod tests {
         let (_context, node) = new_node();

         // Act
-        let block_number = node.last_block_number().await;
+        let block_number = node.resolver().await.unwrap().last_block_number().await;

         // Assert
         let block_number = block_number.expect("Failed to get the block number");
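The receipt and trace paths in the geth node above both lean on the same retry shape: poll an operation until it yields `ControlFlow::Break`, sleeping between attempts and giving up after a deadline. A self-contained sketch of such a loop; the repository's actual `poll`/`PollingWaitBehavior` helpers live elsewhere, so the name and signature here are assumptions:

```rust
use std::future::Future;
use std::ops::ControlFlow;
use std::time::{Duration, Instant};

// Retry `operation` until it breaks with a value or `total` elapses.
async fn poll_until<T, F, Fut>(
    total: Duration,
    wait: Duration,
    mut operation: F,
) -> anyhow::Result<T>
where
    F: FnMut() -> Fut,
    Fut: Future<Output = anyhow::Result<ControlFlow<T>>>,
{
    let deadline = Instant::now() + total;
    loop {
        match operation().await? {
            ControlFlow::Break(value) => return Ok(value),
            ControlFlow::Continue(()) => {
                // A hard error aborts immediately; `Continue` means "not yet".
                anyhow::ensure!(Instant::now() < deadline, "timed out while polling");
                tokio::time::sleep(wait).await;
            }
        }
    }
}
```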
+1 -30
@@ -1,34 +1,15 @@
 //! This crate implements the testing nodes.

 use alloy::genesis::Genesis;
-use revive_common::EVMVersion;
-use revive_dt_config::*;
 use revive_dt_node_interaction::EthereumNode;

 pub mod common;
 pub mod constants;
 pub mod geth;
-pub mod kitchensink;
-pub mod pool;
+pub mod substrate;

 /// An abstract interface for testing nodes.
 pub trait Node: EthereumNode {
-    /// Create a new uninitialized instance.
-    fn new(
-        context: impl AsRef<WorkingDirectoryConfiguration>
-        + AsRef<ConcurrencyConfiguration>
-        + AsRef<GenesisConfiguration>
-        + AsRef<WalletConfiguration>
-        + AsRef<GethConfiguration>
-        + AsRef<KitchensinkConfiguration>
-        + AsRef<ReviveDevNodeConfiguration>
-        + AsRef<EthRpcConfiguration>
-        + Clone,
-    ) -> Self;
-
-    /// Returns the identifier of the node.
-    fn id(&self) -> usize;
-
     /// Spawns a node configured according to the genesis json.
     ///
     /// Blocking until it's ready to accept transactions.
@@ -39,16 +20,6 @@ pub trait Node: EthereumNode {
     /// Blocking until it's completely stopped.
     fn shutdown(&mut self) -> anyhow::Result<()>;

-    /// Returns the nodes connection string.
-    fn connection_string(&self) -> String;
-
     /// Returns the node version.
     fn version(&self) -> anyhow::Result<String>;
-
-    /// Given a list of targets from the metadata file, this function determines if the metadata
-    /// file can be run on this node or not.
-    fn matches_target(targets: Option<&[String]>) -> bool;
-
-    /// Returns the EVM version of the node.
-    fn evm_version() -> EVMVersion;
 }
-110
@@ -1,110 +0,0 @@
//! This crate implements concurrent handling of testing nodes.
use std::{
sync::atomic::{AtomicUsize, Ordering},
thread,
};
use alloy::genesis::Genesis;
use anyhow::Context as _;
use revive_dt_config::{
ConcurrencyConfiguration, EthRpcConfiguration, GenesisConfiguration, GethConfiguration,
KitchensinkConfiguration, ReviveDevNodeConfiguration, WalletConfiguration,
WorkingDirectoryConfiguration,
};
use tracing::info;
use crate::Node;
/// The node pool starts one or more [Node] which then can be accessed
/// in a round-robin fashion.
pub struct NodePool<T> {
next: AtomicUsize,
nodes: Vec<T>,
}
impl<T> NodePool<T>
where
T: Node + Send + 'static,
{
/// Create a new Pool. This will start as many nodes as there are workers in `config`.
pub fn new(
context: impl AsRef<WorkingDirectoryConfiguration>
+ AsRef<ConcurrencyConfiguration>
+ AsRef<GenesisConfiguration>
+ AsRef<WalletConfiguration>
+ AsRef<GethConfiguration>
+ AsRef<KitchensinkConfiguration>
+ AsRef<ReviveDevNodeConfiguration>
+ AsRef<EthRpcConfiguration>
+ Send
+ Sync
+ Clone
+ 'static,
) -> anyhow::Result<Self> {
let concurrency_configuration = AsRef::<ConcurrencyConfiguration>::as_ref(&context);
let genesis_configuration = AsRef::<GenesisConfiguration>::as_ref(&context);
let nodes = concurrency_configuration.number_of_nodes;
let genesis = genesis_configuration.genesis()?;
let mut handles = Vec::with_capacity(nodes);
for _ in 0..nodes {
let context = context.clone();
let genesis = genesis.clone();
handles.push(thread::spawn(move || spawn_node::<T>(context, genesis)));
}
let mut nodes = Vec::with_capacity(nodes);
for handle in handles {
nodes.push(
handle
.join()
.map_err(|error| anyhow::anyhow!("failed to spawn node: {:?}", error))
.context("Failed to join node spawn thread")?
.map_err(|error| anyhow::anyhow!("node failed to spawn: {error}"))
.context("Node failed to spawn")?,
);
}
Ok(Self {
nodes,
next: Default::default(),
})
}
/// Get a handle to the next node.
pub fn round_robbin(&self) -> &T {
let current = self.next.fetch_add(1, Ordering::SeqCst) % self.nodes.len();
self.nodes.get(current).unwrap()
}
}
fn spawn_node<T: Node + Send>(
context: impl AsRef<WorkingDirectoryConfiguration>
+ AsRef<ConcurrencyConfiguration>
+ AsRef<GenesisConfiguration>
+ AsRef<WalletConfiguration>
+ AsRef<GethConfiguration>
+ AsRef<KitchensinkConfiguration>
+ AsRef<ReviveDevNodeConfiguration>
+ AsRef<EthRpcConfiguration>
+ Clone
+ 'static,
genesis: Genesis,
) -> anyhow::Result<T> {
let mut node = T::new(context);
info!(
id = node.id(),
connection_string = node.connection_string(),
"Spawning node"
);
node.spawn(genesis)
.context("Failed to spawn node process")?;
info!(
id = node.id(),
connection_string = node.connection_string(),
"Spawned node"
);
Ok(node)
}
File diff suppressed because it is too large.
+17 -39
@@ -11,8 +11,9 @@ use std::{
 use alloy_primitives::Address;
 use anyhow::{Context as _, Result};
 use indexmap::IndexMap;
+use revive_dt_common::types::PlatformIdentifier;
 use revive_dt_compiler::{CompilerInput, CompilerOutput, Mode};
-use revive_dt_config::{Context, TestingPlatform};
+use revive_dt_config::Context;
 use revive_dt_format::{case::CaseIdx, corpus::Corpus, metadata::ContractInstance};
 use semver::Version;
 use serde::Serialize;
@@ -84,11 +85,8 @@ impl ReportAggregator {
             RunnerEvent::TestIgnored(event) => {
                 self.handle_test_ignored_event(*event);
             }
-            RunnerEvent::LeaderNodeAssigned(event) => {
-                self.handle_leader_node_assigned_event(*event);
-            }
-            RunnerEvent::FollowerNodeAssigned(event) => {
-                self.handle_follower_node_assigned_event(*event);
+            RunnerEvent::NodeAssigned(event) => {
+                self.handle_node_assigned_event(*event);
             }
             RunnerEvent::PreLinkContractsCompilationSucceeded(event) => {
                 self.handle_pre_link_contracts_compilation_succeeded_event(*event)
@@ -257,28 +255,15 @@ impl ReportAggregator {
         let _ = self.listener_tx.send(event);
     }

-    fn handle_leader_node_assigned_event(&mut self, event: LeaderNodeAssignedEvent) {
+    fn handle_node_assigned_event(&mut self, event: NodeAssignedEvent) {
         let execution_information = self.execution_information(&ExecutionSpecifier {
             test_specifier: event.test_specifier,
             node_id: event.id,
-            node_designation: NodeDesignation::Leader,
+            platform_identifier: event.platform_identifier,
         });
         execution_information.node = Some(TestCaseNodeInformation {
             id: event.id,
-            platform: event.platform,
-            connection_string: event.connection_string,
-        });
-    }
-
-    fn handle_follower_node_assigned_event(&mut self, event: FollowerNodeAssignedEvent) {
-        let execution_information = self.execution_information(&ExecutionSpecifier {
-            test_specifier: event.test_specifier,
-            node_id: event.id,
-            node_designation: NodeDesignation::Follower,
-        });
-        execution_information.node = Some(TestCaseNodeInformation {
-            id: event.id,
-            platform: event.platform,
+            platform_identifier: event.platform_identifier,
             connection_string: event.connection_string,
         });
     }
@@ -413,14 +398,11 @@ impl ReportAggregator {
         specifier: &ExecutionSpecifier,
     ) -> &mut ExecutionInformation {
         let test_case_report = self.test_case_report(&specifier.test_specifier);
-        match specifier.node_designation {
-            NodeDesignation::Leader => test_case_report
-                .leader_execution_information
-                .get_or_insert_default(),
-            NodeDesignation::Follower => test_case_report
-                .follower_execution_information
-                .get_or_insert_default(),
-        }
+        test_case_report
+            .platform_execution
+            .entry(specifier.platform_identifier)
+            .or_default()
+            .get_or_insert_default()
     }
 }
@@ -455,12 +437,8 @@ pub struct TestCaseReport {
     /// Information on the status of the test case and whether it succeeded, failed, or was ignored.
     #[serde(skip_serializing_if = "Option::is_none")]
     pub status: Option<TestCaseStatus>,

-    /// Information related to the execution on the leader.
-    #[serde(skip_serializing_if = "Option::is_none")]
-    pub leader_execution_information: Option<ExecutionInformation>,
-
-    /// Information related to the execution on the follower.
-    #[serde(skip_serializing_if = "Option::is_none")]
-    pub follower_execution_information: Option<ExecutionInformation>,
+    /// Information related to the execution on one of the platforms.
+    pub platform_execution: BTreeMap<PlatformIdentifier, Option<ExecutionInformation>>,
 }

 /// Information related to the status of the test. Could be that the test succeeded, failed, or that
@@ -488,18 +466,18 @@ pub enum TestCaseStatus {
     },
 }

-/// Information related to the leader or follower node that's being used to execute the step.
+/// Information related to the platform node that's being used to execute the step.
 #[derive(Clone, Debug, Serialize)]
 pub struct TestCaseNodeInformation {
     /// The ID of the node that this case is being executed on.
     pub id: usize,

     /// The platform of the node.
-    pub platform: TestingPlatform,
+    pub platform_identifier: PlatformIdentifier,

     /// The connection string of the node.
     pub connection_string: String,
 }

-/// Execution information tied to the leader or the follower.
+/// Execution information tied to the platform.
 #[derive(Clone, Debug, Default, Serialize)]
 pub struct ExecutionInformation {
     /// Information related to the node assigned to this test case.
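The per-platform lookup above composes two idioms: `entry(...).or_default()` yields `&mut Option<ExecutionInformation>`, and `get_or_insert_default()` turns that into `&mut ExecutionInformation`, creating the value on first access. The same shape with plain standard-library types (names illustrative; `Option::get_or_insert_default` needs a recent stable Rust):

```rust
use std::collections::BTreeMap;

// First access inserts a default; later accesses reuse the same entry.
fn execution_info<'a>(
    map: &'a mut BTreeMap<String, Option<u64>>,
    platform: &str,
) -> &'a mut u64 {
    map.entry(platform.to_owned()).or_default().get_or_insert_default()
}

fn main() {
    let mut map = BTreeMap::new();
    *execution_info(&mut map, "geth-evm-solc") += 1;
    assert_eq!(map["geth-evm-solc"], Some(1));
}
```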
+3 -9
@@ -2,7 +2,7 @@
 use std::{path::PathBuf, sync::Arc};

-use revive_dt_common::define_wrapper_type;
+use revive_dt_common::{define_wrapper_type, types::PlatformIdentifier};
 use revive_dt_compiler::Mode;
 use revive_dt_format::{case::CaseIdx, input::StepIdx};
 use serde::{Deserialize, Serialize};
@@ -22,18 +22,12 @@ pub struct TestSpecifier {
 }

 /// An absolute path for a test that also includes information about the node that it's assigned to
-/// and whether it's the leader or follower.
+/// and what platform it belongs to.
 #[derive(Clone, Debug, PartialEq, Eq, Hash)]
 pub struct ExecutionSpecifier {
     pub test_specifier: Arc<TestSpecifier>,
     pub node_id: usize,
-    pub node_designation: NodeDesignation,
-}
-
-#[derive(Clone, Debug, PartialEq, Eq, PartialOrd, Ord, Hash)]
-pub enum NodeDesignation {
-    Leader,
-    Follower,
+    pub platform_identifier: PlatformIdentifier,
 }

 #[derive(Clone, Debug, PartialEq, Eq, Hash)]
+8 -19
@@ -6,8 +6,8 @@ use std::{collections::BTreeMap, path::PathBuf, sync::Arc};
 use alloy_primitives::Address;
 use anyhow::Context as _;
 use indexmap::IndexMap;
+use revive_dt_common::types::PlatformIdentifier;
 use revive_dt_compiler::{CompilerInput, CompilerOutput};
-use revive_dt_config::TestingPlatform;
 use revive_dt_format::metadata::Metadata;
 use revive_dt_format::{corpus::Corpus, metadata::ContractInstance};
 use semver::Version;
@@ -412,14 +412,14 @@ macro_rules! define_event {
             pub fn execution_specific_reporter(
                 &self,
                 node_id: impl Into<usize>,
-                node_designation: impl Into<$crate::common::NodeDesignation>
+                platform_identifier: impl Into<PlatformIdentifier>
             ) -> [< $ident ExecutionSpecificReporter >] {
                 [< $ident ExecutionSpecificReporter >] {
                     reporter: self.reporter.clone(),
                     execution_specifier: Arc::new($crate::common::ExecutionSpecifier {
                         test_specifier: self.test_specifier.clone(),
                         node_id: node_id.into(),
-                        node_designation: node_designation.into(),
+                        platform_identifier: platform_identifier.into(),
                     })
                 }
             }
@@ -434,7 +434,7 @@ macro_rules! define_event {
         }

         /// A reporter that's tied to a specific execution of the test case such as execution on
-        /// a specific node like the leader or follower.
+        /// a specific node from a specific platform.
         #[derive(Clone, Debug)]
         pub struct [< $ident ExecutionSpecificReporter >] {
             $vis reporter: [< $ident Reporter >],
@@ -520,25 +520,14 @@ define_event! {
         /// A reason for the failure of the test.
         reason: String,
     },
-    /// An event emitted when the test case is assigned a leader node.
-    LeaderNodeAssigned {
+    /// An event emitted when the test case is assigned a platform node.
+    NodeAssigned {
         /// A specifier for the test that the assignment is for.
         test_specifier: Arc<TestSpecifier>,
         /// The ID of the node that this case is being executed on.
         id: usize,
-        /// The platform of the node.
-        platform: TestingPlatform,
-        /// The connection string of the node.
-        connection_string: String,
-    },
-    /// An event emitted when the test case is assigned a follower node.
-    FollowerNodeAssigned {
-        /// A specifier for the test that the assignment is for.
-        test_specifier: Arc<TestSpecifier>,
-        /// The ID of the node that this case is being executed on.
-        id: usize,
-        /// The platform of the node.
-        platform: TestingPlatform,
+        /// The identifier of the platform used.
+        platform_identifier: PlatformIdentifier,
         /// The connection string of the node.
         connection_string: String,
     },
+4 -1
@@ -89,10 +89,13 @@ echo "This may take a while..."
echo "" echo ""
# Run the tool # Run the tool
RUST_LOG="error" cargo run --release -- execute-tests \ RUST_LOG="info" cargo run --release -- execute-tests \
--platform geth-evm-solc \
--platform revive-dev-node-polkavm-resolc \
--corpus "$CORPUS_FILE" \ --corpus "$CORPUS_FILE" \
--working-directory "$WORKDIR" \ --working-directory "$WORKDIR" \
--concurrency.number-of-nodes 5 \ --concurrency.number-of-nodes 5 \
--concurrency.ignore-concurrency-limit \
--kitchensink.path "$SUBSTRATE_NODE_BIN" \ --kitchensink.path "$SUBSTRATE_NODE_BIN" \
--revive-dev-node.path "$REVIVE_DEV_NODE_BIN" \ --revive-dev-node.path "$REVIVE_DEV_NODE_BIN" \
--eth-rpc.path "$ETH_RPC_BIN" \ --eth-rpc.path "$ETH_RPC_BIN" \