
improve http performance and ws readability #175

Merged (12 commits) Jan 25, 2024

17 changes: 13 additions & 4 deletions CHANGELOG.md
@@ -15,10 +15,6 @@
#### Added

* Add support for reporting peer client information
* Speed up parsing and serialization of requests and responses with
  [zerocopy](https://crates.io/crates/zerocopy)
* Store torrents with up to two peers without an extra heap allocation for the
  peers.

#### Changed

@@ -28,10 +24,16 @@
* Remove support for unbounded worker channels
* Add backpressure in socket workers. They will postpone reading from the
  socket if sending a request to a swarm worker fails
* Avoid a heap allocation for torrents with two or fewer peers. This can save
  a lot of memory if many torrents are tracked (see the storage sketch after
  this list)
* Improve announce performance by avoiding having to filter response peers
* In announce response statistics, don't include the announcing peer
* Distribute announce responses from swarm workers over socket workers to
  decrease performance loss due to underutilized threads
* Harden ConnectionValidator to make IP spoofing even more costly
* Remove config key `network.poll_event_capacity` (always use 1)
* Speed up parsing and serialization of requests and responses by using
  [zerocopy](https://crates.io/crates/zerocopy) (see the parsing sketch after
  this list)
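
A minimal sketch of the zerocopy approach named above, assuming a BEP 15-style
connect request; the type and field names are illustrative rather than
aquatic's actual definitions:

```rust
// Assumes zerocopy 0.7 with the "derive" feature enabled.
use zerocopy::byteorder::{BigEndian, I32, I64};
use zerocopy::{AsBytes, FromBytes, FromZeroes};

// Fixed-layout UDP connect request (BEP 15). The byteorder wrapper types
// have alignment 1, so the struct contains no padding and can be viewed
// directly as raw bytes.
#[derive(FromZeroes, FromBytes, AsBytes)]
#[repr(C)]
struct ConnectRequest {
    protocol_id: I64<BigEndian>,
    action: I32<BigEndian>,
    transaction_id: I32<BigEndian>,
}

fn parse_connect(packet: &[u8]) -> Option<&ConnectRequest> {
    // Reinterpret the buffer in place: no copying, no field-by-field
    // decoding. This is where the parsing speedup comes from.
    ConnectRequest::ref_from_prefix(packet)
}
```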

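And a sketch of storing up to two peers without a heap allocation, as
mentioned above; the enum below is one possible representation, not
necessarily the one aquatic uses:

```rust
// Store up to two peers inline; spill to a heap-allocated Vec only when
// a third peer arrives. `Peer` is a placeholder type parameter.
enum PeerStorage<Peer> {
    Small([Option<Peer>; 2]),
    Large(Vec<Peer>),
}

impl<Peer> PeerStorage<Peer> {
    fn insert(&mut self, peer: Peer) {
        match self {
            PeerStorage::Small(slots) => {
                if let Some(slot) = slots.iter_mut().find(|s| s.is_none()) {
                    *slot = Some(peer);
                } else {
                    // Both inline slots are occupied: move to the heap.
                    let mut peers: Vec<Peer> =
                        slots.iter_mut().filter_map(Option::take).collect();
                    peers.push(peer);
                    *self = PeerStorage::Large(peers);
                }
            }
            PeerStorage::Large(peers) => peers.push(peer),
        }
    }
}
```

If most tracked torrents hold only one or two peers, keeping them inline
avoids one Vec allocation per torrent, which is where the memory savings come
from.
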
### aquatic_http

@@ -41,6 +43,13 @@
* Support running without TLS
* Support running behind a reverse proxy (see the client-IP sketch below)
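
When running behind a reverse proxy, the peer's IP has to be recovered from a
forwarded header instead of the socket address. A minimal sketch of one common
approach, assuming an X-Forwarded-For-style header; this is illustrative, not
necessarily aquatic's exact policy:

```rust
use std::net::IpAddr;

// Take the right-most X-Forwarded-For entry (the one appended by the
// trusted reverse proxy); fall back to the socket address if the header
// is missing or malformed.
fn client_ip(x_forwarded_for: Option<&str>, socket_addr: IpAddr) -> IpAddr {
    x_forwarded_for
        .and_then(|header| header.rsplit(',').next())
        .and_then(|entry| entry.trim().parse::<IpAddr>().ok())
        .unwrap_or(socket_addr)
}
```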

#### Changed

* Index peers by packet source IP and provided port instead of by source IP
  and peer ID. This is likely slightly faster (see the key sketch below).
* Improve announce performance by avoiding having to filter response peers
* In announce response statistics, don't include the announcing peer
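
A sketch of what the new peer indexing might look like; the type names are
illustrative, not aquatic's actual definitions:

```rust
use std::collections::HashMap;
use std::net::IpAddr;

// Peers are keyed by packet source IP plus the port supplied in the
// announce request, instead of by (source IP, 20-byte peer_id). The key
// is small and Copy, so hashing and comparisons are slightly cheaper.
#[derive(Clone, Copy, PartialEq, Eq, Hash)]
struct PeerMapKey {
    source_ip: IpAddr,
    announced_port: u16,
}

// Placeholder for per-peer state (last announce time, seeder flag, ...).
struct PeerData;

type PeerMap = HashMap<PeerMapKey, PeerData>;
```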

#### Fixed

* Fix bug where cleanup after closing connections wasn't always done
116 changes: 14 additions & 102 deletions Cargo.lock


24 changes: 9 additions & 15 deletions TODO.md
@@ -2,22 +2,21 @@

## High priority

* if peer_clients is on, add task to generate prometheus exports on regular
  interval to clean up data
* general
  * add task to generate prometheus exports on regular interval to clean up
    data. this is important if peer_clients is activated

* http
  * consider storing a small number of peers without extra heap allocation
  * add CI transfer test for http without TLS

* aquatic_bench
  * Opentracker "slow to get up to speed": is it due to getting faster once
    inserts are rarely needed, since most ip-port combinations have been
    sent? In that case, a shorter duration (e.g., 30 seconds) would be a
    good idea.
  * Maybe investigate aquatic memory use.
    * Would it use significantly less memory to store peers in an ArrayVec
      if there are only, say, 2 of them?

* CI transfer test
  * add HTTP without TLS

* http
  * panic sentinel not working
* general
  * panic sentinel not working? at least seemingly not in http?

## Medium priority

@@ -42,10 +41,6 @@

* aquatic_ws
  * Add cleaning task for ConnectionHandle.announced_info_hashes?
  * RES memory still high after traffic stops, even if torrent maps and
    connection slabs go down to 0 len and capacity
    * replacing indexmap_amortized / simd_json with equivalents doesn't help
    * SinkExt::send maybe doesn't wake up properly?
      * related to https://github.com/sdroege/async-tungstenite/blob/master/src/compat.rs#L18 ?

* Performance hyperoptimization (receive interrupts on correct core)
  * If there is no network card RSS support, do eBPF XDP CpuMap redirect
    based on packet info, to
@@ -63,7 +58,6 @@
* thiserror?
* CI
  * uring load test?
  * what poll event capacity is actually needed?
* load test
  * move additional request sending to happen for each received response,
    maybe with probability 0.2