Pipelining with Netty #29414

Open

isaacrivriv opened this issue Aug 15, 2024

While working on HTTP over Netty and running builds, I've found occasional intermittent HTTP issues (timeouts waiting for data, unexpected EOF errors, missing data, closed connections). While looking into them, I've seen that at least part of the cause appears to be related to HTTP/1.1 pipelining.

Legacy

Once all the headers have been read and the request is passed to the WebContainer, we start reading body data only when the WebContainer asks for it, either because the application requested it or because a security filter wants to queue up data, or similar.

When writing data on a persistent connection, we ensure everything is written to the wire before queueing up another read. See the call hierarchy for pipelining here. In short, for HTTP pipelining the server queues up the requests and handles them sequentially, dispatching the next one as it "closes" the previous request.
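A rough sketch of the legacy shape, for illustration only; every type name below is a stand-in, not the real channel framework interface:

```java
import java.io.IOException;

// Stand-in interfaces for the real legacy channel framework types.
interface LegacyRequest {}
interface LegacyResponse {}

interface LegacyConnection {
    boolean isPersistent();
    LegacyRequest readRequestHeaders() throws IOException;   // body is read later, on demand
    void writeFully(LegacyResponse resp) throws IOException; // returns only once all bytes are on the wire
}

interface WebContainerDispatch {
    LegacyResponse dispatch(LegacyRequest req); // may pull body data as it needs it
}

class LegacyPipeliningLoop {
    // One request at a time per connection: the next read is queued only after
    // the previous response is fully written, so pipelined requests are
    // naturally handled sequentially and responses stay in request order.
    void serve(LegacyConnection conn, WebContainerDispatch container) throws IOException {
        while (conn.isPersistent()) {
            LegacyRequest req = conn.readRequestHeaders();
            LegacyResponse resp = container.dispatch(req);
            conn.writeFully(resp);
        }
    }
}
```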

Netty

Currently in Netty we have the ability to handle multiple pipelined requests at the same time, in parallel. However, the spec details that pipelining must only be used for "safe" methods (where no data is modified), and that the server must send the responses in the same order the requests were received. See the RFC section on pipelining (RFC 7230 §6.3.2). There is currently no logic in our Netty code that enforces this ordered write for pipelining.

Proposed solution

Ideally we could do the work in parallel and still write the responses out in the order the requests were received, but it's not yet clear what additional code changes that would require. To keep the same behavior as legacy and ensure spec compliance, I'll start by making the responses written out match the order of the requests received, and keep this issue open to see if we can do additional work on parallelizing.
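For illustration, ordered writes could look something like the sketch below: tag each inbound request with a sequence number and buffer any response that shows up ahead of its turn. The handler and wrapper names are hypothetical, and it assumes an HttpObjectAggregator sits earlier in the pipeline so requests arrive as FullHttpRequest:

```java
import java.util.Comparator;
import java.util.PriorityQueue;

import io.netty.channel.ChannelDuplexHandler;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelPromise;
import io.netty.handler.codec.http.FullHttpRequest;
import io.netty.handler.codec.http.FullHttpResponse;

public class OrderedResponseHandler extends ChannelDuplexHandler {

    // Hypothetical wrappers: the request is handed downstream with a sequence
    // number, and the application must echo the same number on its response.
    public record SequencedRequest(int sequence, FullHttpRequest request) {}
    public record SequencedResponse(int sequence, FullHttpResponse response) {}

    private record Pending(int sequence, FullHttpResponse response, ChannelPromise promise) {}

    private int readSequence;   // tag for the next inbound request
    private int writeSequence;  // next sequence allowed onto the wire
    private final PriorityQueue<Pending> held =
            new PriorityQueue<>(Comparator.comparingInt(Pending::sequence));

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        if (msg instanceof FullHttpRequest req) {
            ctx.fireChannelRead(new SequencedRequest(readSequence++, req));
        } else {
            ctx.fireChannelRead(msg);
        }
    }

    @Override
    public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise) {
        if (msg instanceof SequencedResponse resp) {
            held.add(new Pending(resp.sequence(), resp.response(), promise));
            // Drain every response whose turn has come, strictly in request
            // order; the flush that follows the triggering writeAndFlush()
            // pushes them all to the wire.
            while (!held.isEmpty() && held.peek().sequence() == writeSequence) {
                Pending next = held.poll();
                writeSequence++;
                ctx.write(next.response(), next.promise());
            }
        } else {
            ctx.write(msg, promise);
        }
    }
}
```

Since Netty marshals writes onto the channel's event loop, the counters and the queue are only ever touched from one thread even when the responses themselves are produced in parallel.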

Solution

After some discussions, we decided to go with a handler that keeps a queue of pipelined requests and only triggers the next request after the previous one finishes processing, by waiting for its final write. This is the easiest solution for now because of auto-read: we will continue reading requests while processing others. It has its disadvantages, though, since we have to keep in mind the size of the queue and the requests held in memory compared to legacy. I strongly believe that once we beta, a better approach would be to disable auto-read and read only when we need to.
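A minimal sketch of that queue-based handler, assuming requests are aggregated into FullHttpRequest further up the pipeline; the class name is illustrative, not the actual Liberty handler:

```java
import java.util.ArrayDeque;
import java.util.Queue;

import io.netty.channel.ChannelDuplexHandler;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelPromise;
import io.netty.handler.codec.http.FullHttpRequest;
import io.netty.handler.codec.http.LastHttpContent;

public class HttpPipeliningHandler extends ChannelDuplexHandler {

    private final Queue<FullHttpRequest> queued = new ArrayDeque<>();
    private boolean inFlight; // a request is currently being processed

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        if (msg instanceof FullHttpRequest req) {
            if (inFlight) {
                // Auto-read keeps delivering pipelined requests while one is
                // in flight, so they pile up here; a real implementation needs
                // a cap on queue depth to bound memory use.
                queued.add(req);
            } else {
                inFlight = true;
                ctx.fireChannelRead(req);
            }
        } else {
            ctx.fireChannelRead(msg);
        }
    }

    @Override
    public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise) {
        if (msg instanceof LastHttpContent) {
            // Final write of the current response (a FullHttpResponse is also
            // a LastHttpContent): once it completes, release the next queued
            // request. The listener runs on the event loop, so the handler
            // state stays single-threaded.
            promise.addListener(future -> {
                if (future.isSuccess()) {
                    FullHttpRequest next = queued.poll();
                    if (next != null) {
                        ctx.fireChannelRead(next);
                    } else {
                        inFlight = false;
                    }
                }
            });
        }
        ctx.write(msg, promise);
    }
}
```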

Since HTTP pipelining does not exist in HTTP/2 (multiplexing is used instead), this handler would only be used for HTTP/1.1.
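For illustration, wiring it in might look something like this; DispatchHandler is a hypothetical stand-in for whatever hands requests to the container, and the aggregator size is arbitrary:

```java
import io.netty.channel.ChannelInitializer;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.codec.http.HttpObjectAggregator;
import io.netty.handler.codec.http.HttpServerCodec;

public class Http11ChannelInitializer extends ChannelInitializer<SocketChannel> {
    @Override
    protected void initChannel(SocketChannel ch) {
        ch.pipeline()
          .addLast(new HttpServerCodec())                // HTTP/1.1 codec
          .addLast(new HttpObjectAggregator(64 * 1024))  // illustrative size limit
          .addLast(new HttpPipeliningHandler())          // sketch above
          .addLast(new DispatchHandler());               // hypothetical app dispatch
    }
}
```

An HTTP/2 pipeline, where each multiplexed stream gets its own handler chain, would simply omit the pipelining handler.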
