Signalling for rate-limits for a client? #39
Comments
The rate-limiting here refers to the number of CONNECT or CONNECT-UDP requests the proxy allows for a given client. This would translate into delayed or rejected requests, as with normal HTTP proxy behavior.
Yeah. I think there is nothing special to do here; implementations or deployments can leverage HTTP-layer mechanisms if they care. (A big advantage of MASQUE!)
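For illustration, here is a minimal sketch (Python, not from the draft) of the kind of plain HTTP-layer mechanism being described: the proxy counts CONNECT-UDP requests per client and rejects excess ones with a standard 429 response and a Retry-After header. The limits and the helper name are invented for this example.

```python
# Hypothetical sketch: per-client limit on CONNECT-UDP requests enforced with
# ordinary HTTP semantics (429 + Retry-After), no MASQUE-specific signalling.
import time
from collections import defaultdict, deque

MAX_REQUESTS = 10      # allowed CONNECT-UDP requests per client...
WINDOW_SECONDS = 60.0  # ...within this sliding window

_history = defaultdict(deque)  # client id -> timestamps of recent requests


def check_connect_udp_request(client_id, now=None):
    """Return None to allow the request, or (status, headers) to reject it."""
    now = time.monotonic() if now is None else now
    window = _history[client_id]
    # Drop timestamps that have aged out of the sliding window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) >= MAX_REQUESTS:
        retry_after = int(WINDOW_SECONDS - (now - window[0])) + 1
        return 429, {"Retry-After": str(retry_after)}
    window.append(now)
    return None  # proceed with normal CONNECT-UDP handling
```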
No, I meant any limits in bits or number of packets.
Unless something like this already exists for CONNECT in HTTP/1.1 or HTTP/2, I don't know that we need to do anything.
@gloinul I think strict limits on the number of packets or bytes that go through a proxy connection could be enforced by the proxy closing a given CONNECT request stream if it violates a policy. If you want explicit signaling about expected rates or datagram flow control, that would need to be a separate extension.
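A rough sketch of the policy check described above, with made-up names and limits: the proxy tracks bytes and packets per established CONNECT / CONNECT-UDP tunnel and, once a budget is exceeded, would close the request stream rather than signal an explicit rate to the client. How the stream is actually closed depends on the HTTP/3 stack and is not shown here.

```python
# Hypothetical per-tunnel budget; values and names are illustrative only.
from dataclasses import dataclass


@dataclass
class TunnelBudget:
    max_bytes: int = 50_000_000   # example per-tunnel byte cap
    max_packets: int = 100_000    # example per-tunnel packet cap
    bytes_used: int = 0
    packets_used: int = 0

    def record_datagram(self, payload_len: int) -> bool:
        """Account for one proxied datagram; return True if the tunnel may stay open."""
        self.bytes_used += payload_len
        self.packets_used += 1
        return (self.bytes_used <= self.max_bytes
                and self.packets_used <= self.max_packets)


# Usage: if record_datagram() returns False, the proxy would reset/close the
# associated CONNECT request stream.
budget = TunnelBudget()
if not budget.record_datagram(1200):
    pass  # close the request stream here
```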
I believe that there isn't anything to do here; I agree with Lucas's points. Please re-open if you disagree!
The draft does discuss the fact that the proxy may rate-limit a client. Should there actually be explicit signalling of these rates to the client?
I also assume these parameters are at the client level and thus apply across the different UDP flows.