
HTTP 413 errors are rarely seen

If you've ever done web development, you might have thought about limiting the size of requests. Many web frameworks have a simple configuration value for that, or you can slap a middleware on it. The server is supposed to return an HTTP 413 Content Too Large error so that the client can figure out what's going on and adjust, but in practice clients rarely see this response and usually get some sort of timeout error. What's going on?
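For example, in Flask a single config value is enough to make the framework reject oversized bodies with a 413 on its own (a minimal sketch; the 1 MiB limit is arbitrary):

```python
from flask import Flask

app = Flask(__name__)
# Reject request bodies larger than 1 MiB with a 413.
app.config["MAX_CONTENT_LENGTH"] = 1 * 1024 * 1024

@app.route("/upload", methods=["POST"])
def upload():
    return "ok"
```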

Most implementations of a request size limit fail fast: they send an HTTP 413 as soon as too many bytes have been read from the socket for the request, and then close the connection, or at least refuse to read from it any further.
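At the socket level, fail-fast looks something like this. This is a rough sketch with raw sockets rather than any particular framework, and request_is_complete is a hypothetical helper that checks whether the whole request has arrived:

```python
import socket

LIMIT = 1 * 1024 * 1024  # cap request size at 1 MiB

def handle(conn: socket.socket) -> None:
    received = b""
    while True:
        chunk = conn.recv(4096)
        if not chunk:
            break
        received += chunk
        if len(received) > LIMIT:
            # Fail fast: answer 413, then stop reading and hang up.
            conn.sendall(
                b"HTTP/1.1 413 Content Too Large\r\n"
                b"Connection: close\r\n"
                b"Content-Length: 0\r\n\r\n"
            )
            conn.close()
            return
        if request_is_complete(received):  # hypothetical helper
            break
    # ... parse `received` and handle the request as usual ...
```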

Unfortunately, almost every* HTTP client I've seen will try to write the entire request into the socket before reading anything from it. Since the server has stopped reading (or closed the connection outright), the buffers (TCP windows) fill up very quickly and the client blocks trying to send the rest of the request. Eventually something times out and the client sees a socket error, a timeout or a connection reset, instead of an HTTP error.
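Here's a sketch of that naive client pattern with a raw socket (the host, port, and 50 MiB body are made up for illustration):

```python
import socket

body = b"x" * (50 * 1024 * 1024)  # 50 MiB, well past the server's limit

sock = socket.create_connection(("example.test", 8080), timeout=30)
sock.sendall(
    b"POST /upload HTTP/1.1\r\n"
    b"Host: example.test\r\n"
    b"Content-Length: %d\r\n"
    b"\r\n" % len(body)
)
# The 413 may already be sitting in the receive buffer by now, but we never look.
sock.sendall(body)          # blocks once the TCP buffers fill, then times out
                            # or dies with a connection reset
response = sock.recv(4096)  # we never get this far
```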

If the client gets lucky, the remainder of the request actually fits within the TCP window, so the write doesn't block and the client gets to see the 413. What can you do if you're not so lucky?

I can think of a few options, like the one sketched below (reading from the socket while the request is still being sent), but they all require direct control over how bytes are read and written on the underlying socket, which isn't something your standard HTTP framework is going to give you.
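Here's a rough sketch of that read-while-sending approach with raw sockets (the function name, chunk size, and timeout are made up, and a real implementation would need proper response parsing): send the body in chunks and poll for an early response between chunks, stopping as soon as one shows up.

```python
import select
import socket

def post_with_early_response(host: str, port: int, head: bytes, body: bytes) -> bytes:
    sock = socket.create_connection((host, port), timeout=30)
    sock.sendall(head)
    for i in range(0, len(body), 16 * 1024):
        # Has the server already answered (say, with a 413)? Then stop sending.
        readable, _, _ = select.select([sock], [], [], 0)
        if readable:
            break
        sock.sendall(body[i:i + 16 * 1024])
    response = sock.recv(65536)  # read (the start of) the response
    sock.close()
    return response
```

The Expect: 100-continue mechanism is the protocol-level version of the same idea: the client sends the headers, waits briefly for the server's go-ahead (or an early 413), and only then sends the body. That's presumably part of why cURL fares better here.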

And yes, the server is spec-compliant when it does this.


*: Apparently cURL does the right thing
