mirror of
https://github.com/pezkuwichain/pezkuwi-subxt.git
synced 2026-05-09 10:37:58 +00:00
4c651637f2
521 lines
14 KiB
Rust
#![cfg_attr(not(feature = "std"), no_std)]

// Make the WASM binary available.
#[cfg(feature = "std")]
include!(concat!(env!("OUT_DIR"), "/wasm_binary.rs"));

/// Wasm binary unwrapped. If built with `SKIP_WASM_BUILD`, the function panics.
#[cfg(feature = "std")]
pub fn wasm_binary_unwrap() -> &'static [u8] {
    WASM_BINARY.expect(
        "Development wasm binary is not available. Testing is only supported when the \
         `SKIP_WASM_BUILD` flag is disabled.",
    )
}

#[cfg(not(feature = "std"))]
use sp_std::{vec, vec::Vec};

#[cfg(not(feature = "std"))]
use sp_core::{ed25519, sr25519};
#[cfg(not(feature = "std"))]
use sp_io::{
    crypto::{ed25519_verify, sr25519_verify},
    hashing::{blake2_128, blake2_256, sha2_256, twox_128, twox_256},
    storage, wasm_tracing,
};
#[cfg(not(feature = "std"))]
use sp_runtime::{
    print,
    traits::{BlakeTwo256, Hash},
};
#[cfg(not(feature = "std"))]
use sp_sandbox::{SandboxEnvironmentBuilder, SandboxInstance, SandboxMemory, Value};

extern "C" {
    #[allow(dead_code)]
    fn missing_external();

    #[allow(dead_code)]
    fn yet_another_missing_external();
}

/// Mutable static variables should always be observed to have
/// the initialized value at the start of a runtime call.
#[cfg(not(feature = "std"))]
static mut MUTABLE_STATIC: u64 = 32;

/// This is similar to `MUTABLE_STATIC`. The tests need `MUTABLE_STATIC` for testing that
/// non-null initialization data is properly restored during instance reuse.
///
/// `MUTABLE_STATIC_BSS`, on the other hand, focuses on zeroed data. This is important since there
/// may be differences in handling zeroed and non-zeroed data.
#[cfg(not(feature = "std"))]
static mut MUTABLE_STATIC_BSS: u64 = 0;

sp_core::wasm_export_functions! {
    fn test_calling_missing_external() {
        unsafe { missing_external() }
    }

    fn test_calling_yet_another_missing_external() {
        unsafe { yet_another_missing_external() }
    }

    fn test_data_in(input: Vec<u8>) -> Vec<u8> {
        print("set_storage");
        storage::set(b"input", &input);

        print("storage");
        let foo = storage::get(b"foo").unwrap();

        print("set_storage");
        storage::set(b"baz", &foo);

        print("finished!");
        b"all ok!".to_vec()
    }

    fn test_clear_prefix(input: Vec<u8>) -> Vec<u8> {
        storage::clear_prefix(&input, None);
        b"all ok!".to_vec()
    }

    fn test_empty_return() {}

    fn test_dirty_plenty_memory(heap_base: u32, heap_pages: u32) {
        // This piece of code will dirty multiple pages of memory. The number of pages is given
        // by `heap_pages`; its unit is a wasm page (64 KiB). The first page to be cleared is
        // the wasm page that follows the one holding the `heap_base` address.
        //
        // This function dirties the **host** pages, i.e. we dirty 4 KiB at a time, so it takes
        // 16 writes to process a single wasm page.

        let heap_ptr = heap_base as usize;

        // Find the next wasm page boundary.
        let heap_ptr = round_up_to(heap_ptr, 65536);

        // Make it an actual pointer.
        let heap_ptr = heap_ptr as *mut u8;

        // Traverse the host pages and make each one dirty.
        let host_pages = heap_pages as usize * 16;
        for i in 0..host_pages {
            unsafe {
                // Technically this is UB, but there is no way Rust can find this out.
                heap_ptr.add(i * 4096).write(0);
            }
        }

        /// Round `n` up to the next multiple of `divisor`.
        fn round_up_to(n: usize, divisor: usize) -> usize {
            (n + divisor - 1) / divisor * divisor
        }
    }

    fn test_exhaust_heap() -> Vec<u8> {
        // A 16 MiB allocation, intended to exhaust the runtime heap.
        Vec::with_capacity(16777216)
    }

    fn test_fp_f32add(a: [u8; 4], b: [u8; 4]) -> [u8; 4] {
        let a = f32::from_le_bytes(a);
        let b = f32::from_le_bytes(b);
        f32::to_le_bytes(a + b)
    }

    fn test_panic() {
        panic!("test panic")
    }

    fn test_conditional_panic(input: Vec<u8>) -> Vec<u8> {
        if !input.is_empty() {
            panic!("test panic")
        }

        input
    }

    fn test_blake2_256(input: Vec<u8>) -> Vec<u8> {
        blake2_256(&input).to_vec()
    }

    fn test_blake2_128(input: Vec<u8>) -> Vec<u8> {
        blake2_128(&input).to_vec()
    }

    fn test_sha2_256(input: Vec<u8>) -> Vec<u8> {
        sha2_256(&input).to_vec()
    }

    fn test_twox_256(input: Vec<u8>) -> Vec<u8> {
        twox_256(&input).to_vec()
    }

    fn test_twox_128(input: Vec<u8>) -> Vec<u8> {
        twox_128(&input).to_vec()
    }

    fn test_ed25519_verify(input: Vec<u8>) -> bool {
        // `input` layout: bytes 0..32 are the public key, bytes 32..96 the signature.
        let mut pubkey = [0; 32];
        let mut sig = [0; 64];

        pubkey.copy_from_slice(&input[0..32]);
        sig.copy_from_slice(&input[32..96]);

        let msg = b"all ok!";
        ed25519_verify(&ed25519::Signature(sig), &msg[..], &ed25519::Public(pubkey))
    }

    fn test_sr25519_verify(input: Vec<u8>) -> bool {
        // Same input layout as `test_ed25519_verify`.
        let mut pubkey = [0; 32];
        let mut sig = [0; 64];

        pubkey.copy_from_slice(&input[0..32]);
        sig.copy_from_slice(&input[32..96]);

        let msg = b"all ok!";
        sr25519_verify(&sr25519::Signature(sig), &msg[..], &sr25519::Public(pubkey))
    }

    fn test_ordered_trie_root() -> Vec<u8> {
        BlakeTwo256::ordered_trie_root(
            vec![b"zero"[..].into(), b"one"[..].into(), b"two"[..].into()],
            sp_core::storage::StateVersion::V1,
        )
        .as_ref()
        .to_vec()
    }

    fn test_offchain_index_set() {
        sp_io::offchain_index::set(b"k", b"v");
    }

    fn test_offchain_local_storage() -> bool {
        let kind = sp_core::offchain::StorageKind::PERSISTENT;
        assert_eq!(sp_io::offchain::local_storage_get(kind, b"test"), None);
        sp_io::offchain::local_storage_set(kind, b"test", b"asd");
        assert_eq!(sp_io::offchain::local_storage_get(kind, b"test"), Some(b"asd".to_vec()));

        let res = sp_io::offchain::local_storage_compare_and_set(
            kind,
            b"test",
            Some(b"asd".to_vec()),
            b"",
        );
        assert_eq!(sp_io::offchain::local_storage_get(kind, b"test"), Some(b"".to_vec()));
        res
    }

    fn test_offchain_local_storage_with_none() {
        let kind = sp_core::offchain::StorageKind::PERSISTENT;
        assert_eq!(sp_io::offchain::local_storage_get(kind, b"test"), None);

        let res = sp_io::offchain::local_storage_compare_and_set(kind, b"test", None, b"value");
        assert!(res);
        assert_eq!(sp_io::offchain::local_storage_get(kind, b"test"), Some(b"value".to_vec()));
    }

    fn test_offchain_http() -> bool {
        use sp_core::offchain::HttpRequestStatus;
        let run = || -> Option<()> {
            let id = sp_io::offchain::http_request_start(
                "POST",
                "http://localhost:12345",
                &[],
            )
            .ok()?;
            sp_io::offchain::http_request_add_header(id, "X-Auth", "test").ok()?;
            sp_io::offchain::http_request_write_body(id, &[1, 2, 3, 4], None).ok()?;
            sp_io::offchain::http_request_write_body(id, &[], None).ok()?;
            let status = sp_io::offchain::http_response_wait(&[id], None);
            assert!(
                status == vec![HttpRequestStatus::Finished(200)],
                "Expected Finished(200) status.",
            );
            let headers = sp_io::offchain::http_response_headers(id);
            assert_eq!(headers, vec![(b"X-Auth".to_vec(), b"hello".to_vec())]);
            let mut buffer = vec![0; 64];
            let read = sp_io::offchain::http_response_read_body(id, &mut buffer, None).ok()?;
            assert_eq!(read, 3);
            assert_eq!(&buffer[0..read as usize], &[1, 2, 3]);
            let read = sp_io::offchain::http_response_read_body(id, &mut buffer, None).ok()?;
            assert_eq!(read, 0);

            Some(())
        };

        run().is_some()
    }

    fn test_enter_span() -> u64 {
        wasm_tracing::enter_span(Default::default())
    }

    fn test_exit_span(span_id: u64) {
        wasm_tracing::exit(span_id)
    }

    fn test_nested_spans() {
        sp_io::init_tracing();
        let span_id = wasm_tracing::enter_span(Default::default());
        {
            sp_io::init_tracing();
            let span_id = wasm_tracing::enter_span(Default::default());
            wasm_tracing::exit(span_id);
        }
        wasm_tracing::exit(span_id);
    }

    fn returns_mutable_static() -> u64 {
        unsafe {
            MUTABLE_STATIC += 1;
            MUTABLE_STATIC
        }
    }

    fn returns_mutable_static_bss() -> u64 {
        unsafe {
            MUTABLE_STATIC_BSS += 1;
            MUTABLE_STATIC_BSS
        }
    }

    fn allocates_huge_stack_array(trap: bool) -> Vec<u8> {
        // Allocate a stack frame that is approx. 75% of the stack (assuming it is 1MB).
        // This will just decrease the stack pointer (stacks on wasm32-unknown-unknown
        // grow downwards). This won't trap on the current compilers.
        let mut data = [0u8; 1024 * 768];

        // Then make sure we actually write something to it.
        //
        // If:
        // 1. the stack area is placed at the beginning of the linear memory space, and
        // 2. the stack pointer points to an out-of-bounds area, and
        // 3. a write is performed around the current stack pointer,
        //
        // then a trap should happen.
        for (i, v) in data.iter_mut().enumerate() {
            *v = i as u8; // deliberate truncation
        }

        if trap {
            // There is a small chance of this being pulled up in theory. In practice
            // the probability of that is rather low.
            panic!()
        }

        data.to_vec()
    }

    // Check that the heap at `heap_base + offset` doesn't contain the test message.
    // After the check succeeds, the test message is written into the heap.
    //
    // It is expected that the given pointer is not allocated.
    fn check_and_set_in_heap(heap_base: u32, offset: u32) {
        let test_message = b"Hello invalid heap memory";
        let ptr = (heap_base + offset) as *mut u8;

        let message_slice = unsafe { sp_std::slice::from_raw_parts_mut(ptr, test_message.len()) };

        assert_ne!(test_message, message_slice);
        message_slice.copy_from_slice(test_message);
    }

    fn test_spawn() {
        let data = vec![1u8, 2u8];
        let data_new = sp_tasks::spawn(tasks::incrementer, data).join();

        assert_eq!(data_new, vec![2u8, 3u8]);
    }

    fn test_nested_spawn() {
        let data = vec![7u8, 13u8];
        let data_new = sp_tasks::spawn(tasks::parallel_incrementer, data).join();

        assert_eq!(data_new, vec![10u8, 16u8]);
    }

    fn test_panic_in_spawned() {
        sp_tasks::spawn(tasks::panicker, vec![]).join();
    }

    fn test_return_i8() -> i8 {
        -66
    }

    fn test_take_i8(value: i8) {
        assert_eq!(value, -66);
    }
}
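The page-dirtying logic in `test_dirty_plenty_memory` above rests on two pieces of arithmetic: one 64 KiB wasm page is covered by sixteen 4 KiB host-page writes, and `round_up_to` aligns the heap base to the next wasm-page boundary. A minimal std-only sketch of that arithmetic (the constants and names here are illustrative, spelled out from the comments above, not part of the runtime):

```rust
// Sketch of the page math used by `test_dirty_plenty_memory`.
const WASM_PAGE: usize = 65536; // 64 KiB wasm page
const HOST_PAGE: usize = 4096; // 4 KiB host page

/// Round `n` up to the next multiple of `divisor`.
fn round_up_to(n: usize, divisor: usize) -> usize {
    (n + divisor - 1) / divisor * divisor
}

fn main() {
    // Dirtying one wasm page takes 16 host-page-sized writes.
    assert_eq!(WASM_PAGE / HOST_PAGE, 16);

    // A heap base in the middle of a page rounds up to the next boundary,
    // while an already-aligned base stays put.
    assert_eq!(round_up_to(70_000, WASM_PAGE), 2 * WASM_PAGE);
    assert_eq!(round_up_to(WASM_PAGE, WASM_PAGE), WASM_PAGE);
}
```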

#[cfg(not(feature = "std"))]
mod tasks {
    use sp_std::prelude::*;

    pub fn incrementer(data: Vec<u8>) -> Vec<u8> {
        data.into_iter().map(|v| v + 1).collect()
    }

    pub fn panicker(_: Vec<u8>) -> Vec<u8> {
        panic!()
    }

    pub fn parallel_incrementer(data: Vec<u8>) -> Vec<u8> {
        let first = data.into_iter().map(|v| v + 2).collect::<Vec<_>>();
        sp_tasks::spawn(incrementer, first).join()
    }
}
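Since the task bodies in `mod tasks` are plain `Vec<u8> -> Vec<u8>` maps, their composition can be checked without the `sp_tasks` spawning machinery. A std-only sketch, where a direct call stands in for `sp_tasks::spawn(..).join()`:

```rust
fn incrementer(data: Vec<u8>) -> Vec<u8> {
    data.into_iter().map(|v| v + 1).collect()
}

fn parallel_incrementer(data: Vec<u8>) -> Vec<u8> {
    // +2 per element, then the nested "task" adds another +1: +3 overall.
    let first: Vec<u8> = data.into_iter().map(|v| v + 2).collect();
    incrementer(first) // stands in for sp_tasks::spawn(incrementer, first).join()
}

fn main() {
    // Mirrors the expectations asserted by `test_spawn` and `test_nested_spawn`.
    assert_eq!(incrementer(vec![1, 2]), vec![2, 3]);
    assert_eq!(parallel_incrementer(vec![7, 13]), vec![10, 16]);
}
```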

/// A macro to define a test entrypoint for each available sandbox executor.
macro_rules! wasm_export_sandbox_test_functions {
    (
        $(
            fn $name:ident<T>(
                $( $arg_name:ident: $arg_ty:ty ),* $(,)?
            ) $( -> $ret_ty:ty )? where T: SandboxInstance<$state:ty> $(,)?
            { $( $fn_impl:tt )* }
        )*
    ) => {
        $(
            #[cfg(not(feature = "std"))]
            fn $name<T>( $($arg_name: $arg_ty),* ) $( -> $ret_ty )? where T: SandboxInstance<$state> {
                $( $fn_impl )*
            }

            paste::paste! {
                sp_core::wasm_export_functions! {
                    fn [<$name _host>]( $($arg_name: $arg_ty),* ) $( -> $ret_ty )? {
                        $name::<sp_sandbox::host_executor::Instance<$state>>( $( $arg_name ),* )
                    }

                    fn [<$name _embedded>]( $($arg_name: $arg_ty),* ) $( -> $ret_ty )? {
                        $name::<sp_sandbox::embedded_executor::Instance<$state>>( $( $arg_name ),* )
                    }
                }
            }
        )*
    };
}
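For each declared test, the macro emits one generic body plus two exported wrappers whose names get a `_host` / `_embedded` suffix via `paste::paste!`. A hand-expanded, std-only sketch of that shape (the trait and the two executor types here are simplified stand-ins for the `sp_sandbox` ones, not the real API):

```rust
// Simplified stand-ins for the two sandbox executors.
trait SandboxInstance {
    const NAME: &'static str;
}
struct HostExecutor;
struct EmbeddedExecutor;
impl SandboxInstance for HostExecutor {
    const NAME: &'static str = "host";
}
impl SandboxInstance for EmbeddedExecutor {
    const NAME: &'static str = "embedded";
}

// The one generic body the macro keeps for both executors.
fn test_sandbox<T: SandboxInstance>() -> &'static str {
    T::NAME
}

// What the `paste::paste!` block expands to: thin per-executor shells.
fn test_sandbox_host() -> &'static str {
    test_sandbox::<HostExecutor>()
}
fn test_sandbox_embedded() -> &'static str {
    test_sandbox::<EmbeddedExecutor>()
}

fn main() {
    assert_eq!(test_sandbox_host(), "host");
    assert_eq!(test_sandbox_embedded(), "embedded");
}
```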

wasm_export_sandbox_test_functions! {
    fn test_sandbox<T>(code: Vec<u8>) -> bool
    where
        T: SandboxInstance<State>,
    {
        execute_sandboxed::<T>(&code, &[]).is_ok()
    }

    fn test_sandbox_args<T>(code: Vec<u8>) -> bool
    where
        T: SandboxInstance<State>,
    {
        execute_sandboxed::<T>(&code, &[Value::I32(0x12345678), Value::I64(0x1234567887654321)])
            .is_ok()
    }

    fn test_sandbox_return_val<T>(code: Vec<u8>) -> bool
    where
        T: SandboxInstance<State>,
    {
        matches!(
            execute_sandboxed::<T>(&code, &[Value::I32(0x1336)]),
            Ok(sp_sandbox::ReturnValue::Value(Value::I32(0x1337)))
        )
    }

    fn test_sandbox_instantiate<T>(code: Vec<u8>) -> u8
    where
        T: SandboxInstance<()>,
    {
        let env_builder = T::EnvironmentBuilder::new();
        match T::new(&code, &env_builder, &mut ()) {
            Ok(_) => 0,
            Err(sp_sandbox::Error::Module) => 1,
            Err(sp_sandbox::Error::Execution) => 2,
            Err(sp_sandbox::Error::OutOfBounds) => 3,
        }
    }

    fn test_sandbox_get_global_val<T>(code: Vec<u8>) -> i64
    where
        T: SandboxInstance<()>,
    {
        let env_builder = T::EnvironmentBuilder::new();
        let instance = match T::new(&code, &env_builder, &mut ()) {
            Ok(i) => i,
            Err(_) => return 20,
        };

        match instance.get_global_val("test_global") {
            Some(sp_sandbox::Value::I64(val)) => val,
            None => 30,
            _ => 40,
        }
    }
}

#[cfg(not(feature = "std"))]
struct State {
    counter: u32,
}

#[cfg(not(feature = "std"))]
fn execute_sandboxed<T>(
    code: &[u8],
    args: &[Value],
) -> Result<sp_sandbox::ReturnValue, sp_sandbox::HostError>
where
    T: sp_sandbox::SandboxInstance<State>,
{
    fn env_assert(
        _e: &mut State,
        args: &[Value],
    ) -> Result<sp_sandbox::ReturnValue, sp_sandbox::HostError> {
        if args.len() != 1 {
            return Err(sp_sandbox::HostError)
        }
        let condition = args[0].as_i32().ok_or(sp_sandbox::HostError)?;
        if condition != 0 {
            Ok(sp_sandbox::ReturnValue::Unit)
        } else {
            Err(sp_sandbox::HostError)
        }
    }

    fn env_inc_counter(
        e: &mut State,
        args: &[Value],
    ) -> Result<sp_sandbox::ReturnValue, sp_sandbox::HostError> {
        if args.len() != 1 {
            return Err(sp_sandbox::HostError)
        }
        let inc_by = args[0].as_i32().ok_or(sp_sandbox::HostError)?;
        e.counter += inc_by as u32;
        Ok(sp_sandbox::ReturnValue::Value(Value::I32(e.counter as i32)))
    }

    let mut state = State { counter: 0 };

    let env_builder = {
        let mut env_builder = T::EnvironmentBuilder::new();
        env_builder.add_host_func("env", "assert", env_assert);
        env_builder.add_host_func("env", "inc_counter", env_inc_counter);
        let memory = match T::Memory::new(1, Some(16)) {
            Ok(m) => m,
            Err(_) => unreachable!(
                "Memory::new() can return Err only if the parameters are borked; \
                 we pass the params here explicitly and they're correct; \
                 Memory::new() can't return an Error; qed",
            ),
        };
        env_builder.add_memory("env", "memory", memory);
        env_builder
    };

    let mut instance = T::new(code, &env_builder, &mut state)?;
    let result = instance.invoke("call", args, &mut state);

    result.map_err(|_| sp_sandbox::HostError)
}
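The host functions registered by `execute_sandboxed` follow a simple convention: the guest calls `env.inc_counter(n)` to bump the shared `State` and gets the new counter echoed back, while a wrong argument count surfaces as a host error. A std-only model of that convention (`Value`, `HostError`, and `State` here are simplified stand-ins for the `sp_sandbox` types, not the real API):

```rust
// Model of the host-side protocol `execute_sandboxed` sets up.
#[derive(Debug, PartialEq)]
enum Value {
    I32(i32),
}

#[derive(Debug, PartialEq)]
struct HostError;

struct State {
    counter: u32,
}

fn env_inc_counter(e: &mut State, args: &[Value]) -> Result<Value, HostError> {
    match args {
        [Value::I32(inc_by)] => {
            // Accumulate into the shared host state and echo the new value.
            e.counter += *inc_by as u32;
            Ok(Value::I32(e.counter as i32))
        }
        // Wrong arity is rejected, as in the real host function.
        _ => Err(HostError),
    }
}

fn main() {
    let mut state = State { counter: 0 };
    // Two guest calls accumulate in the shared host state.
    assert_eq!(env_inc_counter(&mut state, &[Value::I32(5)]), Ok(Value::I32(5)));
    assert_eq!(env_inc_counter(&mut state, &[Value::I32(3)]), Ok(Value::I32(8)));
    // Wrong arity is rejected with a host error.
    assert_eq!(env_inc_counter(&mut state, &[]), Err(HostError));
}
```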