
feat: use Cubic CC by default #2295

Open · wants to merge 6 commits into main
Conversation

mxinden (Collaborator) commented Dec 21, 2024


Firefox uses Cubic by default:

```yaml
- name: network.http.http3.cc_algorithm
  type: RelaxedAtomicUint32
  value: 1
  mirror: always
  rust: true
```

https://searchfox.org/mozilla-central/rev/f9517009d8a4946dbdd3acd72a31dc34fca79586/modules/libpref/init/StaticPrefList.yaml

This commit updates Neqo to use Cubic instead of New Reno by default.
github-actions bot commented Dec 21, 2024

Failed Interop Tests

QUIC Interop Runner, client vs. server, differences relative to 7f8136e.

- neqo-latest as client
- neqo-latest as server
- All results

Succeeded Interop Tests

QUIC Interop Runner, client vs. server

- neqo-latest as client
- neqo-latest as server

Unsupported Interop Tests

QUIC Interop Runner, client vs. server

- neqo-latest as client
- neqo-latest as server

mxinden (Collaborator, Author) commented Jan 5, 2025

Cubic does not seem to grow the cwnd as fast as New Reno in the congestion avoidance phase. More specifically, while our New Reno implementation increases its cwnd by one SMSS in each iteration, our Cubic implementation only does so in every second iteration.

```rust
expected_cwnd += client.plpmtu();
assert_eq!(cwnd(&client), expected_cwnd);
```

That is surprising to me. I expected Cubic to grow faster than New Reno in the beginning, given its concave increase early in congestion avoidance.

I will give this more thought.

larseggert (Collaborator) commented

> Cubic does not seem to grow the cwnd as fast as New Reno in congestion avoidance phase.

That seems like a bug: https://datatracker.ietf.org/doc/html/rfc9438#section-1-5

> Specifically, CUBIC may increase the congestion window more aggressively than Reno during the congestion avoidance phase. According to [RFC5681], during congestion avoidance, the sender must not increment cwnd by more than Sender Maximum Segment Size (SMSS) bytes once per round-trip time (RTT), whereas CUBIC may increase cwnd much more aggressively.
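For reference, the RFC 9438 window function can be sketched directly. This is not the neqo implementation, just the formula from the RFC, in units of MSS, with the RFC's C = 0.4 and beta_cubic = 0.7: at t = 0 (just after a congestion event) the window sits at beta * W_max, returns to W_max at t = K, and only grows convexly after that, which is consistent with slow per-round growth right after a loss:

```rust
// W_cubic(t) = C * (t - K)^3 + W_max,  K = cbrt(W_max * (1 - beta) / C)
const C: f64 = 0.4;
const BETA_CUBIC: f64 = 0.7;

fn k(w_max: f64) -> f64 {
    (w_max * (1.0 - BETA_CUBIC) / C).cbrt()
}

fn w_cubic(t: f64, w_max: f64) -> f64 {
    C * (t - k(w_max)).powi(3) + w_max
}

fn main() {
    let w_max = 10.0; // window before the loss event, in MSS
    println!("W(0) = {:.2}", w_cubic(0.0, w_max)); // beta * W_max = 7.00
    println!("W(K) = {:.2}", w_cubic(k(w_max), w_max)); // back to 10.00
}
```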

mxinden (Collaborator, Author) commented Jan 13, 2025

My assessment above is wrong, for the following reason:

> Cubic does not seem to grow the cwnd as fast as New Reno in congestion avoidance phase.

Wrong.

> More particularly, while our New Reno implementation increases its cwnd in each iteration by the SMSS, our Cubic implementation only does so in every second iteration.

Correct.

BUT, while New Reno halves its congestion window on a congestion event:

```rust
(curr_cwnd / 2, acked_bytes / 2)
```

Cubic reduces it by only 30%:

```rust
// CUBIC_BETA = 0.7;
pub const CUBIC_BETA_USIZE_DIVIDEND: usize = 7;
pub const CUBIC_BETA_USIZE_DIVISOR: usize = 10;

curr_cwnd * CUBIC_BETA_USIZE_DIVIDEND / CUBIC_BETA_USIZE_DIVISOR,
acked_bytes * CUBIC_BETA_USIZE_DIVIDEND / CUBIC_BETA_USIZE_DIVISOR,
```

Thus, within the 5 iterations of the test, Cubic does not need to grow its congestion window as fast as New Reno, because it starts off with a larger congestion window after the congestion event, i.e. it has a head start.
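A tiny self-contained illustration of that head start (hypothetical numbers, not the neqo API; the constants mirror the code quoted above): starting from the same window, New Reno keeps half of it while Cubic keeps 70%:

```rust
// Hypothetical example: both algorithms hit a congestion event at the
// same window of 10 segments of 1337 bytes each.
const CUBIC_BETA_USIZE_DIVIDEND: usize = 7;
const CUBIC_BETA_USIZE_DIVISOR: usize = 10;

fn main() {
    let curr_cwnd: usize = 10 * 1337; // window at the congestion event

    let new_reno_cwnd = curr_cwnd / 2; // multiplicative decrease by 1/2
    let cubic_cwnd =
        curr_cwnd * CUBIC_BETA_USIZE_DIVIDEND / CUBIC_BETA_USIZE_DIVISOR; // by 0.7

    // Cubic resumes congestion avoidance 40% above New Reno, so it needs
    // fewer SMSS-sized increments to get back to the pre-loss window.
    println!("new_reno = {new_reno_cwnd}, cubic = {cubic_cwnd}");
}
```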

larseggert (Collaborator) commented

Aside: Is there any point in having `CUBIC_BETA_USIZE_DIVIDEND` and `CUBIC_BETA_USIZE_DIVISOR` vs. just `CUBIC_BETA = 0.7`? Are we ever using just one of them without the division?

mxinden (Collaborator, Author) commented Jan 13, 2025

> Aside: Is there any point at having CUBIC_BETA_USIZE_DIVIDEND and CUBIC_BETA_USIZE_DIVISOR vs. just CUBIC_BETA = 0.7? Are we ever using just one of them without the division?

I assume the split is due to its usage with the usizes `curr_cwnd` and `acked_bytes`. Splitting into dividend and divisor allows multiplying a usize by a fraction in pure integer arithmetic, without converting to and from floating point.

```rust
curr_cwnd * CUBIC_BETA_USIZE_DIVIDEND / CUBIC_BETA_USIZE_DIVISOR,
acked_bytes * CUBIC_BETA_USIZE_DIVIDEND / CUBIC_BETA_USIZE_DIVISOR,
```

I don't have an opinion on whether it is worth the complexity it introduces. Given our work ahead (#1912) I suggest we keep it as is for now.
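A quick sketch of why the dividend/divisor split works, and why the multiplication comes first (the window value here is arbitrary):

```rust
fn main() {
    let curr_cwnd: usize = 1339; // arbitrary window value in bytes

    // Multiply first, then divide: truncation happens once, at the end.
    let mul_first = curr_cwnd * 7 / 10;

    // Dividing first truncates before scaling and loses precision.
    let div_first = curr_cwnd / 10 * 7;

    println!("{mul_first} vs {div_first}"); // 937 vs 931
}
```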

codecov bot commented Jan 13, 2025

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 95.33%. Comparing base (6b87603) to head (4d6f34a).

Additional details and impacted files
```
@@            Coverage Diff             @@
##             main    #2295      +/-   ##
==========================================
+ Coverage   95.29%   95.33%   +0.03%     
==========================================
  Files         114      114              
  Lines       36856    36843      -13     
  Branches    36856    36843      -13     
==========================================
+ Hits        35123    35124       +1     
+ Misses       1727     1711      -16     
- Partials        6        8       +2     
```


```diff
-fn cc_cong_avoidance_recovery_period_to_cong_avoidance() {
+fn cc_cong_avoidance_recovery_period_to_cong_avoidance(cc_algorithm: CongestionControlAlgorithm) {
     let mut client = default_client();
     let mut server = default_server();
```
mxinden (Collaborator, Author) commented:

This is the only test in cc.rs that specifically depends on the underlying `CongestionControlAlgorithm` used. The other tests focus on the high-level `ClassicCongestionControl`.

For the above reason, I think it is worth running this unit test with both NewReno and Cubic.

Previously, this test asserted on the concrete individual NewReno window increases. That is difficult to do when supporting two `CongestionControlAlgorithm`s.

Instead, I suggest simply asserting that after x rounds, the congestion controller again reaches the congestion window it had before the loss event, i.e. that it switched from Recovery back to CongestionAvoidance.
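The relaxed assertion could look roughly like this toy model. It is not the cc.rs test code: it ignores that our Cubic grows only every second iteration, and only models "reduce on loss, grow by at most one MSS per round, assert recovery within a bound":

```rust
// Toy model: instead of checking each algorithm-specific increment, only
// check that within a bounded number of rounds the window is back at its
// pre-loss value. The `cwnd += mss` step stands in for driving one round
// of acknowledgments through the real stack.
fn rounds_until_recovered(
    mut cwnd: usize,
    target: usize,
    mss: usize,
    max_rounds: usize,
) -> Option<usize> {
    for round in 0..max_rounds {
        if cwnd >= target {
            return Some(round);
        }
        cwnd += mss; // at most one MSS of growth per round
    }
    None
}

fn main() {
    let mss = 1337;
    let target = 10 * mss; // window before the loss event

    // New Reno restarts from target / 2, Cubic from target * 7 / 10.
    let reno = rounds_until_recovered(target / 2, target, mss, 20);
    let cubic = rounds_until_recovered(target * 7 / 10, target, mss, 20);

    // Both recover within the bound; Cubic needs fewer rounds.
    assert!(reno.is_some() && cubic.is_some());
    println!("reno = {reno:?}, cubic = {cubic:?}");
}
```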

mxinden marked this pull request as ready for review January 13, 2025 17:50
mxinden (Collaborator, Author) commented Jan 14, 2025

Curious what the benchmark results will be.

mxinden enabled auto-merge January 15, 2025 20:10