feat: use Cubic CC by default #2295

Status: Open · wants to merge 6 commits into base: main
4 changes: 2 additions & 2 deletions neqo-bin/src/lib.rs

@@ -112,7 +112,7 @@ pub struct QuicParameters {
     /// The idle timeout for connections, in seconds.
     pub idle_timeout: u64,
 
-    #[arg(long = "cc", default_value = "newreno")]
+    #[arg(long = "cc", default_value = "cubic")]
     /// The congestion controller to use.
    pub congestion_control: CongestionControlAlgorithm,
 
@@ -141,7 +141,7 @@ impl Default for QuicParameters {
             max_streams_bidi: 16,
             max_streams_uni: 16,
             idle_timeout: 30,
-            congestion_control: CongestionControlAlgorithm::NewReno,
+            congestion_control: CongestionControlAlgorithm::Cubic,
             no_pacing: false,
             no_pmtud: false,
             preferred_address_v4: None,
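With the `default_value` flipped, running a neqo-bin binary without `--cc` now behaves as if `--cc cubic` had been passed, and `--cc newreno` restores the old behavior. A minimal self-contained sketch of that string-to-enum mapping (illustrative only: the names mirror neqo's, but this is not its actual parsing code, which clap generates from the flag definition above):

```rust
use std::str::FromStr;

// Stand-in for neqo's CongestionControlAlgorithm, to show how the
// `--cc` string value selects a variant.
#[derive(Debug, PartialEq, Clone, Copy)]
enum CongestionControlAlgorithm {
    NewReno,
    Cubic,
}

impl FromStr for CongestionControlAlgorithm {
    type Err = String;
    fn from_str(s: &str) -> Result<Self, Self::Err> {
        match s.to_ascii_lowercase().as_str() {
            "newreno" => Ok(Self::NewReno),
            "cubic" => Ok(Self::Cubic),
            other => Err(format!("unknown congestion controller: {other}")),
        }
    }
}

fn main() {
    // After this PR, an absent `--cc` flag behaves like `--cc cubic`.
    let default: CongestionControlAlgorithm = "cubic".parse().unwrap();
    assert_eq!(default, CongestionControlAlgorithm::Cubic);
    // The old default remains selectable explicitly.
    assert_eq!(
        "newreno".parse::<CongestionControlAlgorithm>().unwrap(),
        CongestionControlAlgorithm::NewReno
    );
}
```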
2 changes: 1 addition & 1 deletion neqo-transport/src/cc/tests/cubic.rs

@@ -180,7 +180,7 @@ fn tcp_phase() {
     assert!(num_acks2 < expected_ack_tcp_increase2);
 
     // The time needed to increase cwnd by MAX_DATAGRAM_SIZE using the cubic equation will be
-    // calculates from: W_cubic(elapsed_time + t_to_increase) - W_cubic(elapsed_time) =
+    // calculated from: W_cubic(elapsed_time + t_to_increase) - W_cubic(elapsed_time) =
     // MAX_DATAGRAM_SIZE => CUBIC_C * (elapsed_time + t_to_increase)^3 * MAX_DATAGRAM_SIZE +
     // CWND_INITIAL - CUBIC_C * elapsed_time^3 * MAX_DATAGRAM_SIZE + CWND_INITIAL =
     // MAX_DATAGRAM_SIZE => t_to_increase = cbrt((1 + CUBIC_C * elapsed_time^3) / CUBIC_C) -
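The comment in this hunk compresses a small algebraic step: setting W_cubic(elapsed_time + t_to_increase) - W_cubic(elapsed_time) = MAX_DATAGRAM_SIZE and cancelling CWND_INITIAL leaves CUBIC_C * ((t + dt)^3 - t^3) = 1, so dt = cbrt(t^3 + 1/CUBIC_C) - t. A quick numeric check of that identity (an illustrative sketch, not neqo code; it assumes CUBIC_C = 0.4 per RFC 8312 and measures the window in units of MAX_DATAGRAM_SIZE):

```rust
// CUBIC's C constant from RFC 8312, in (datagrams / s^3) here.
const CUBIC_C: f64 = 0.4;

// Cubic window in datagrams: C * t^3 + w_init (w_init standing in for
// the CWND_INITIAL term, which cancels in the difference below).
fn w_cubic(t: f64, w_init: f64) -> f64 {
    CUBIC_C * t.powi(3) + w_init
}

// Time until the window grows by exactly one datagram, solved from
// W_cubic(t + dt) - W_cubic(t) = 1  =>  dt = cbrt(t^3 + 1/C) - t.
fn t_to_increase(t: f64) -> f64 {
    (t.powi(3) + 1.0 / CUBIC_C).cbrt() - t
}

fn main() {
    let (elapsed, w_init) = (2.0, 10.0);
    let dt = t_to_increase(elapsed);
    // Advancing by dt grows the window by one datagram, as the test
    // comment's derivation claims.
    let growth = w_cubic(elapsed + dt, w_init) - w_cubic(elapsed, w_init);
    assert!((growth - 1.0).abs() < 1e-9);
    println!("dt = {dt:.6}, growth = {growth:.6}");
}
```

Because CWND_INITIAL cancels, the growth comes out as exactly one datagram regardless of `w_init` or the elapsed time chosen.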
2 changes: 1 addition & 1 deletion neqo-transport/src/connection/params.rs

@@ -89,7 +89,7 @@ impl Default for ConnectionParameters {
     fn default() -> Self {
         Self {
             versions: VersionConfig::default(),
-            cc_algorithm: CongestionControlAlgorithm::NewReno,
+            cc_algorithm: CongestionControlAlgorithm::Cubic,
             max_data: LOCAL_MAX_DATA,
             max_stream_data_bidi_remote: u64::try_from(RECV_BUFFER_SIZE).unwrap(),
             max_stream_data_bidi_local: u64::try_from(RECV_BUFFER_SIZE).unwrap(),
66 changes: 25 additions & 41 deletions neqo-transport/src/connection/tests/cc.rs

@@ -19,8 +19,7 @@ use crate::{
     recovery::{ACK_ONLY_SIZE_LIMIT, PACKET_THRESHOLD},
     sender::PACING_BURST_SIZE,
     stream_id::StreamType,
-    tracking::DEFAULT_ACK_PACKET_TOLERANCE,
-    ConnectionParameters,
+    CongestionControlAlgorithm, ConnectionParameters,
 };

@@ -211,12 +210,11 @@ fn single_packet_on_recovery() {
     assert!(dgram.is_some());
 }
 
-#[test]
 /// Verify that CC moves out of recovery period when packet sent after start
 /// of recovery period is acked.
-fn cc_cong_avoidance_recovery_period_to_cong_avoidance() {
-    let mut client = default_client();
-    let mut server = default_server();
+fn cc_cong_avoidance_recovery_period_to_cong_avoidance(cc_algorithm: CongestionControlAlgorithm) {
+    let mut client = new_client(ConnectionParameters::default().cc_algorithm(cc_algorithm));
+    let mut server = new_server(ConnectionParameters::default().cc_algorithm(cc_algorithm));
     let now = connect_rtt_idle(&mut client, &mut server, DEFAULT_RTT);
 
     // Create stream 0

Author's review comment (on the new function signature): This is the only test in cc.rs that specifically depends on the underlying CongestionControlAlgorithm; the other tests focus on the high-level ClassicCongestionControl. For that reason, it is worth running this unit test with both NewReno and Cubic. Previously the test asserted on the concrete individual NewReno window increases, which is difficult to do when supporting two CongestionControlAlgorithms. Instead, it now simply asserts that after x rounds the congestion controller again reaches the congestion window it had before the loss event, i.e. that it switched from Recovery to CongestionAvoidance.
@@ -233,59 +231,45 @@ fn cc_cong_avoidance_recovery_period_to_cong_avoidance() {
     now += DEFAULT_RTT / 2;
     let s_ack = ack_bytes(&mut server, stream_id, c_tx_dgrams, now);
 
+    let cwnd_before_loss = cwnd(&client);
+
     // Client: Process ack
     now += DEFAULT_RTT / 2;
     client.process_input(s_ack, now);
 
+    let cwnd_after_loss = cwnd(&client);
+
     // Should be in CARP now.
     now += DEFAULT_RTT / 2;
+    assert!(cwnd_before_loss > cwnd_after_loss);
     qinfo!("moving to congestion avoidance {}", cwnd(&client));
 
-    // Now make sure that we increase congestion window according to the
-    // accurate byte counting version of congestion avoidance.
-    // Check over several increases to be sure.
-    let mut expected_cwnd = cwnd(&client);
-    // Fill cwnd.
-    let (mut c_tx_dgrams, next_now) = fill_cwnd(&mut client, stream_id, now);
-    now = next_now;
-    for i in 0..5 {
+    for i in 0..6 {
         qinfo!("iteration {i}");
 
-        let c_tx_size: usize = c_tx_dgrams.iter().map(Datagram::len).sum();
+        let (c_tx_dgrams, next_now) = fill_cwnd(&mut client, stream_id, now);
         qinfo!(
-            "client sending {c_tx_size} bytes into cwnd of {}",
+            "client sending {} bytes into cwnd of {}",
+            c_tx_dgrams.iter().map(Datagram::len).sum::<usize>(),
             cwnd(&client)
         );
-        assert_eq!(c_tx_size, expected_cwnd);
 
-        // As acks arrive we will continue filling cwnd and save all packets
-        // from this cycle will be stored in next_c_tx_dgrams.
-        let mut next_c_tx_dgrams: Vec<Datagram> = Vec::new();
-
-        // Until we process all the packets, the congestion window remains the same.
-        // Note that we need the client to process ACK frames in stages, so split the
-        // datagrams into two, ensuring that we allow for an ACK for each batch.
-        let most = c_tx_dgrams.len() - usize::try_from(DEFAULT_ACK_PACKET_TOLERANCE).unwrap() - 1;
-        let s_ack = ack_bytes(&mut server, stream_id, c_tx_dgrams.drain(..most), now);
-        assert_eq!(cwnd(&client), expected_cwnd);
-        client.process_input(s_ack, now);
-        // make sure to fill cwnd again.
-        let (mut new_pkts, next_now) = fill_cwnd(&mut client, stream_id, now);
         now = next_now;
-        next_c_tx_dgrams.append(&mut new_pkts);
-
         let s_ack = ack_bytes(&mut server, stream_id, c_tx_dgrams, now);
-        assert_eq!(cwnd(&client), expected_cwnd);
         client.process_input(s_ack, now);
-        // make sure to fill cwnd again.
-        let (mut new_pkts, next_now) = fill_cwnd(&mut client, stream_id, now);
-        now = next_now;
-        next_c_tx_dgrams.append(&mut new_pkts);
-
-        expected_cwnd += client.plpmtu();
-        assert_eq!(cwnd(&client), expected_cwnd);
-        c_tx_dgrams = next_c_tx_dgrams;
     }
 
+    assert!(cwnd_before_loss < cwnd(&client));
 }
 
+#[test]
+fn cc_cong_avoidance_recovery_period_to_cong_avoidance_new_reno() {
+    cc_cong_avoidance_recovery_period_to_cong_avoidance(CongestionControlAlgorithm::NewReno);
+}
+
+#[test]
+fn cc_cong_avoidance_recovery_period_to_cong_avoidance_cubic() {
+    cc_cong_avoidance_recovery_period_to_cong_avoidance(CongestionControlAlgorithm::Cubic);
+}
+
 #[test]
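The two wrappers above drive the same scenario under each algorithm. The transition the helper asserts, that recovery ends once an ACK arrives for a packet sent after the recovery period started, can be sketched with a toy controller (purely illustrative, not neqo's API; the real logic lives in ClassicCongestionControl and is keyed on packet send times rather than bare packet numbers):

```rust
// Toy model of the RFC 9002-style recovery-exit rule the test exercises.
#[derive(Debug, PartialEq, Clone, Copy)]
enum State {
    CongestionAvoidance,
    Recovery,
}

struct ToyCc {
    state: State,
    // First packet number sent after recovery began.
    recovery_start: u64,
}

impl ToyCc {
    fn on_loss(&mut self, next_pn: u64) {
        // A loss starts a recovery period; packets >= next_pn are
        // considered "sent after recovery began".
        self.state = State::Recovery;
        self.recovery_start = next_pn;
    }

    fn on_ack(&mut self, pn: u64) {
        // Only an ACK for a packet sent after recovery began ends it.
        if self.state == State::Recovery && pn >= self.recovery_start {
            self.state = State::CongestionAvoidance;
        }
    }
}

fn main() {
    let mut cc = ToyCc { state: State::CongestionAvoidance, recovery_start: 0 };
    cc.on_loss(10); // packets 10 and up are post-recovery-start
    cc.on_ack(9); // pre-recovery packet: stays in recovery
    assert_eq!(cc.state, State::Recovery);
    cc.on_ack(10); // post-recovery packet: back to congestion avoidance
    assert_eq!(cc.state, State::CongestionAvoidance);
    println!("{:?}", cc.state);
}
```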