Request-based availability distribution (#2423)

* WIP

* availability-distribution, still very WIP.

Work on the requesting side of things.

* Some docs on what I intend to do.

* Checkpoint of session cache implementation

as I will likely replace it with something smarter.

* More work, mostly on cache

and getting things to type check.

* Only derive MallocSizeOf and Debug for std.

* availability-distribution: Cache feature complete.

* Sketch out logic in `FetchTask` for actual fetching.

- Compile fixes.
- Cleanup.

* Format cleanup.

* More format fixes.

* Almost feature complete `fetch_task`.

Missing:

- Check for cancel
- Actual querying of peer ids.

* Finish FetchTask so far.

* Directly use AuthorityDiscoveryId in protocol and cache.

* Resolve `AuthorityDiscoveryId` on sending requests.

* Rework fetch_task

- also make it impossible to check the wrong chunk index.
- Export needed function in validator_discovery.

* From<u32> implementation for `ValidatorIndex`.
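The two bullets above lean on the newtype pattern: wrapping the raw `u32` in a `ValidatorIndex` struct means a chunk index can no longer be silently confused with some other integer. A minimal sketch follows; the derives and the `From<u32>` impl mirror what the commit describes, but this is an illustration, not the exact Polkadot definition.

```rust
// Sketch of a ValidatorIndex newtype; the real Polkadot type may carry
// additional derives and methods.
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
pub struct ValidatorIndex(pub u32);

impl From<u32> for ValidatorIndex {
    fn from(n: u32) -> Self {
        ValidatorIndex(n)
    }
}

fn main() {
    let idx: ValidatorIndex = 3u32.into();
    // Indexing a slice now requires an explicit `.0 as usize`, which is
    // exactly the change visible throughout the diff below.
    let keys = ["a", "b", "c", "d"];
    assert_eq!(keys[idx.0 as usize], "d");
}
```

The cost is the visible `.0` at every indexing site, but in exchange the compiler rejects any attempt to pass a plain integer where a validator index is expected.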

* Fixes and more integration work.

* Make session cache proper lru cache.

* Use proper lru cache.
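The two bullets above replace the ad-hoc session cache with proper LRU semantics. The real code presumably pulls in a dedicated LRU crate; the following std-only sketch (with O(n) lookups, names are illustrative) just demonstrates the eviction behaviour the cache relies on.

```rust
use std::collections::VecDeque;

/// Minimal LRU sketch; front of the deque is the most recently used entry.
struct Lru<K: PartialEq, V> {
    cap: usize,
    entries: VecDeque<(K, V)>,
}

impl<K: PartialEq, V> Lru<K, V> {
    fn new(cap: usize) -> Self {
        Lru { cap, entries: VecDeque::new() }
    }

    /// Look up a key, marking it most recently used on a hit.
    fn get(&mut self, k: &K) -> Option<&V> {
        if let Some(pos) = self.entries.iter().position(|(ek, _)| ek == k) {
            let e = self.entries.remove(pos).unwrap();
            self.entries.push_front(e);
        }
        self.entries.front().filter(|(ek, _)| ek == k).map(|(_, v)| v)
    }

    /// Insert a key, evicting the least recently used entry when full.
    fn put(&mut self, k: K, v: V) {
        if let Some(pos) = self.entries.iter().position(|(ek, _)| ek == &k) {
            self.entries.remove(pos);
        } else if self.entries.len() == self.cap {
            self.entries.pop_back();
        }
        self.entries.push_front((k, v));
    }
}

fn main() {
    let mut cache = Lru::new(2);
    cache.put(1u32, "session-1");
    cache.put(2, "session-2");
    cache.get(&1);             // touch 1, so 2 becomes least recently used
    cache.put(3, "session-3"); // evicts session 2
    assert!(cache.get(&2).is_none());
    assert!(cache.get(&1).is_some());
}
```

For session info the point of LRU is bounding memory: old sessions age out automatically instead of accumulating for the life of the node.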

* Requester finished.

* ProtocolState -> Requester

Also make sure to not fetch our own chunk.

* Cleanup + fixes.

* Remove unused functions

- FetchTask::is_finished
- SessionCache::fetch_session_info

* availability-distribution responding side.

* Cleanup + Fixes.

* More fixes.

* More fixes.

adder-collator is running!

* Some docs.

* Docs.

* Fix reporting of bad guys.

* Fix tests

* Make all tests compile.

* Fix test.

* Cleanup + get rid of some warnings.

* state -> requester

* Mostly doc fixes.

* Fix test suite.

* Get rid of now redundant message types.

* WIP

* Rob's review remarks.

* Fix test suite.

* core.relay_parent -> leaf for session request.

* Style fix.

* Decrease request timeout.

* Cleanup obsolete errors.

* Metrics + don't fail on non-fatal errors.

* requester.rs -> requester/mod.rs

* Panic on invalid BadValidator report.

* Fix indentation.

* Use typed default timeout constant.
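The bullet above swaps a bare integer for a typed constant. A sketch of the idea, where the constant's name and value are assumptions rather than the actual ones used by availability-distribution:

```rust
use std::time::Duration;

// Hypothetical typed timeout constant; a `Duration` cannot be mistaken
// for milliseconds-vs-seconds the way a bare integer can.
const DEFAULT_REQUEST_TIMEOUT: Duration = Duration::from_secs(3);

fn main() {
    assert_eq!(DEFAULT_REQUEST_TIMEOUT.as_millis(), 3_000);
}
```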

* Make channel size 0, as each sender gets one slot anyway.

* Fix incorrect metrics initialization.

* Fix build after merge.

* More fixes.

* Hopefully valid metrics names.

* Better metrics names.

* Some tests that already work.

* Slightly better docs.

* Some more tests.

* Fix network bridge test.
Author: Robert Klotzner
Date: 2021-02-26 18:58:07 +01:00
Committed by: GitHub
Parent: 241b1f12a7
Commit: 48409e5548
45 changed files with 2037 additions and 1523 deletions
@@ -239,7 +239,7 @@ impl RequestFromBackersPhase {
 // Request data.
 to_state.send(FromInteraction::MakeFullDataRequest(
-params.validator_authority_keys[validator_index as usize].clone(),
+params.validator_authority_keys[validator_index.0 as usize].clone(),
 params.candidate_hash.clone(),
 validator_index,
 tx,
@@ -279,8 +279,8 @@ impl RequestFromBackersPhase {
 }
 impl RequestChunksPhase {
-fn new(n_validators: ValidatorIndex) -> Self {
-let mut shuffling: Vec<_> = (0..n_validators).collect();
+fn new(n_validators: u32) -> Self {
+let mut shuffling: Vec<_> = (0..n_validators).map(ValidatorIndex).collect();
 shuffling.shuffle(&mut rand::thread_rng());
 RequestChunksPhase {
@@ -300,7 +300,7 @@ impl RequestChunksPhase {
 let (tx, rx) = oneshot::channel();
 to_state.send(FromInteraction::MakeChunkRequest(
-params.validator_authority_keys[validator_index as usize].clone(),
+params.validator_authority_keys[validator_index.0 as usize].clone(),
 params.candidate_hash.clone(),
 validator_index,
 tx,
@@ -347,7 +347,7 @@ impl RequestChunksPhase {
 if let Ok(anticipated_hash) = branch_hash(
 &params.erasure_root,
 &chunk.proof,
-chunk.index as usize,
+chunk.index.0 as usize,
 ) {
 let erasure_chunk_hash = BlakeTwo256::hash(&chunk.chunk);
@@ -415,7 +415,7 @@ impl RequestChunksPhase {
 if self.received_chunks.len() >= params.threshold {
 let concluded = match polkadot_erasure_coding::reconstruct_v1(
 params.validators.len(),
-self.received_chunks.values().map(|c| (&c.chunk[..], c.index as usize)),
+self.received_chunks.values().map(|c| (&c.chunk[..], c.index.0 as usize)),
 ) {
 Ok(data) => {
 if reconstructed_data_matches_root(params.validators.len(), &params.erasure_root, &data) {
@@ -852,7 +852,7 @@ async fn handle_network_update(
 chunk.is_some(),
 request_id,
 candidate_hash,
-validator_index,
+validator_index.0,
 );
 // Whatever the result, issue an
@@ -882,7 +882,7 @@ async fn handle_network_update(
 chunk.is_some(),
 request_id,
 awaited_chunk.candidate_hash,
-awaited_chunk.validator_index,
+awaited_chunk.validator_index.0,
 );
 // If there exists an entry under r_id, remove it.
@@ -1003,7 +1003,7 @@ async fn issue_request(
 request_id,
 peer_id,
 awaited_chunk.candidate_hash,
-awaited_chunk.validator_index,
+awaited_chunk.validator_index.0,
 );
 protocol_v1::AvailabilityRecoveryMessage::RequestChunk(
@@ -1019,7 +1019,7 @@ async fn issue_request(
 request_id,
 peer_id,
 awaited_data.candidate_hash,
-awaited_data.validator_index,
+awaited_data.validator_index.0,
 );
 protocol_v1::AvailabilityRecoveryMessage::RequestFullData(
@@ -184,7 +184,7 @@ impl TestState {
 validators: self.validator_public.clone(),
 discovery_keys: self.validator_authority_id.clone(),
 // all validators in the same group.
-validator_groups: vec![(0..self.validators.len()).map(|i| i as ValidatorIndex).collect()],
+validator_groups: vec![(0..self.validators.len()).map(|i| ValidatorIndex(i as _)).collect()],
 ..Default::default()
 }))).unwrap();
 }
@@ -272,10 +272,10 @@ impl TestState {
 virtual_overseer,
 AvailabilityRecoveryMessage::NetworkBridgeUpdateV1(
 NetworkBridgeEvent::PeerMessage(
-self.validator_peer_id[validator_index as usize].clone(),
+self.validator_peer_id[validator_index.0 as usize].clone(),
 protocol_v1::AvailabilityRecoveryMessage::Chunk(
 request_id,
-Some(self.chunks[validator_index as usize].clone()),
+Some(self.chunks[validator_index.0 as usize].clone()),
 )
 )
 )
@@ -317,10 +317,10 @@ impl TestState {
 virtual_overseer,
 AvailabilityRecoveryMessage::NetworkBridgeUpdateV1(
 NetworkBridgeEvent::PeerMessage(
-self.validator_peer_id[validator_index as usize].clone(),
+self.validator_peer_id[validator_index.0 as usize].clone(),
 protocol_v1::AvailabilityRecoveryMessage::Chunk(
 request_id,
-Some(self.chunks[validator_index as usize].clone()),
+Some(self.chunks[validator_index.0 as usize].clone()),
 )
 )
 )
@@ -457,7 +457,7 @@ fn derive_erasure_chunks_with_proofs_and_root(
 .enumerate()
 .map(|(index, (proof, chunk))| ErasureChunk {
 chunk: chunk.to_vec(),
-index: index as _,
+index: ValidatorIndex(index as _),
 proof,
 })
 .collect::<Vec<ErasureChunk>>();