Use same fmt and clippy configs as in Substrate (#7611)

* Use same rustfmt.toml as Substrate

Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
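The rustfmt settings adopted here are what produce the comment re-wrapping visible in the diffs below. A plausible fragment of a Substrate-style rustfmt.toml (illustrative only — these values are assumptions, not copied from this PR):

```toml
# Substrate-style formatting (assumed values, for illustration)
edition = "2021"
hard_tabs = true          # tabs, not spaces, for indentation
max_width = 100           # code lines wrap at 100 columns
wrap_comments = true      # re-wrap long comments automatically
comment_width = 100       # ... at the same 100-column limit
use_small_heuristics = "Max"
```

The `wrap_comments`/`comment_width` pair is what splits the long doc comments in the hunks below into two lines.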

* Format the rustfmt config file itself

Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>

* Format with new config

Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>

* Add Substrate Clippy config

Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>

* Print Clippy version in CI

Otherwise it's difficult to reproduce CI lint failures locally.

Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
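Printing the lint toolchain version in CI can be as simple as one extra step; a hypothetical sketch (the exact CI step name and location in this PR are not shown here):

```shell
# Print the Clippy version used by CI so lint failures can be reproduced
# locally with the same toolchain. The fallback keeps the step from
# failing on machines without Clippy installed.
cargo clippy --version || echo "clippy unavailable"
```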

* Make fmt happy

Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>

* Update node/core/pvf/src/error.rs

Co-authored-by: Tsvetomir Dimitrov <tsvetomir@parity.io>

* Update node/core/pvf/src/error.rs

Co-authored-by: Tsvetomir Dimitrov <tsvetomir@parity.io>

---------

Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
Co-authored-by: Tsvetomir Dimitrov <tsvetomir@parity.io>
This commit is contained in:
Oliver Tale-Yazdi
2023-08-14 16:29:29 +02:00
committed by GitHub
parent ac435c96cf
commit 342d720573
203 changed files with 1880 additions and 1504 deletions
@@ -321,7 +321,8 @@ where
 		return futures::pending!()
 	}
-	// If there are active requests, this will always resolve to `Some(_)` when a request is finished.
+	// If there are active requests, this will always resolve to `Some(_)` when a request is
+	// finished.
 	if let Some(Ok(Some(result))) = self.active_requests.next().await {
 		self.store_cache(result);
 	}
@@ -343,10 +344,10 @@ where
 {
 	loop {
 		// Let's add some back pressure when the subsystem is running at `MAX_PARALLEL_REQUESTS`.
-		// This can never block forever, because `active_requests` is owned by this task and any mutations
-		// happen either in `poll_requests` or `spawn_request` - so if `is_busy` returns true, then
-		// even if all of the requests finish before us calling `poll_requests` the `active_requests` length
-		// remains invariant.
+		// This can never block forever, because `active_requests` is owned by this task and any
+		// mutations happen either in `poll_requests` or `spawn_request` - so if `is_busy` returns
+		// true, then even if all of the requests finish before us calling `poll_requests` the
+		// `active_requests` length remains invariant.
 		if subsystem.is_busy() {
 			// Since we are not using any internal waiting queues, we need to wait for exactly
 			// one request to complete before we can read the next one from the overseer channel.
@@ -895,7 +895,8 @@ fn multiple_requests_in_parallel_are_working() {
 		receivers.push(rx);
 	}
-	// The backpressure from reaching `MAX_PARALLEL_REQUESTS` will make the test block, we need to drop the lock.
+	// The backpressure from reaching `MAX_PARALLEL_REQUESTS` will make the test block, we need
+	// to drop the lock.
 	drop(lock);
 	for _ in 0..MAX_PARALLEL_REQUESTS * 100 {