
Implement proper payload chunked encoding in HTTP api_v3 #6467

Open
yurishkuro opened this issue Jan 2, 2025 · 4 comments · May be fixed by #6479


@yurishkuro
Member

yurishkuro commented Jan 2, 2025

Chunked transfer encoding allows the HTTP endpoint to stream results back to the client incrementally, which works well with our Query Service v2 streaming interface. http.Server should support this automatically through the http.Flusher API:

func handler(w http.ResponseWriter, r *http.Request) {
        flusher, ok := w.(http.Flusher)
        if !ok {
                http.Error(w, "streaming unsupported", http.StatusInternalServerError)
                return
        }
        // Simulate a long-running process that produces data in chunks
        for i := 0; i < 5; i++ {
                fmt.Fprintf(w, "Chunk %d\n", i)
                flusher.Flush()             // Push the chunk to the client immediately
                time.Sleep(1 * time.Second) // Simulate work between chunks
        }
}

However, the handler example above may be incomplete, and we may need to do something special for chunked encoding, which according to the spec should look like this on the wire:

HTTP/1.1 200 OK
Transfer-Encoding: chunked
Content-Type: text/plain

18
This is the first chunk.
19
This is the second chunk.
18
This is the third chunk.
0

(i.e., each chunk is preceded by its length as a hex number on its own line, and the body is terminated by a zero-length chunk).

An example of reading the chunked response on the client side (Go's HTTP client decodes the chunked framing transparently, so Read sees only payload bytes):

resp, err := http.Get("http://localhost:8080/")
if err != nil {
        log.Fatal(err)
}
defer resp.Body.Close()

// Read the response body as it arrives; net/http strips the chunked
// framing, so buf receives only payload bytes.
buf := make([]byte, 1024)
for {
        n, err := resp.Body.Read(buf)
        if n > 0 {
                fmt.Print(string(buf[:n]))
        }
        if err == io.EOF {
                break
        }
        if err != nil {
                log.Fatal(err)
        }
}

Additional consideration: our HTTP server supports compression, so we need to figure out how that affects the response protocol.

@VaibhavMalik4187
Contributor

This looks like an interesting issue. Using chunks, we can send streams of unknown size as a sequence of length-delimited buffers. The HTTP/1.1 specification for chunked encoding is at https://datatracker.ietf.org/doc/html/rfc7230#page-36

Compression handling could be a little tricky to implement. I'll need to investigate how compression (if used) interacts with chunked encoding; then we can decide on an approach that handles both. I currently have two options in mind:

  1. Compress each chunk -> Adds additional processing overhead on the server side. But simplifies decompression on the client side.
  2. Wrap the chunked response in a compressed stream -> Reduces overall bandwidth usage but requires decompressing the response on the client side.

I'm more inclined towards the first approach, as it handles everything on the server side, but we'll need to analyze how it affects performance. Thoughts?

@VaibhavMalik4187
Contributor


https://stackoverflow.com/questions/5280633/gzip-compression-of-chunked-encoding-response suggests the following approach:

  • Compress the whole content using gzip and then apply the chunked encoding.

@yurishkuro
Member Author

yurishkuro commented Jan 3, 2025

Compress the whole content using gzip and then apply the chunked encoding.

To me that sounds like an approach that defeats the purpose of streaming: yes, the content is sent in smaller chunks, but the client cannot do anything with it until it has received all of it.

We used to use https://github.com/grpc-ecosystem/grpc-gateway to serve APIv3 over HTTP. I would recommend reading their docs to understand how they handle streaming responses. We borrowed the GRPCGatewayWrapper struct to approximate their behavior.

@VaibhavMalik4187
Contributor

@yurishkuro, thank you for the pointers.

@chahatsagarmain, I just saw that you raised a PR to implement this feature. As you can see, I was also interested in working on this. I believe it is common courtesy to at least ask the other person if they're working on it rather than just submitting a PR, knowing that somebody else has shown interest in working on the same topic.

Contributing to open-source projects is about collaboration, not competition. I don't have much experience with Jaeger's codebase, so it took me some time to figure things out. But great, now we already have an open PR targeting this enhancement.
