Before diving deep into HTTP, we should take a look at the network stack. To reason about the various layers in a communication system, a reference model called the OSI model is defined. Simply put, it orders the layers from the least abstract (physical: electromagnetic signals) to the most abstract (application: HTTP, IMAP, etc.).
OSI Model Schema
HTTP is built on top of TCP (most of the time), which sits a layer below HTTP in the OSI model. Due to the physical nature of any communication medium, data can be lost or corrupted during transmission. TCP is one of the protocols that mitigate these reliability issues. In other words, TCP can handle the faults that occur and strives to deliver every single packet successfully.
Meanwhile, HTTP defines the communication between two parties, namely a client and a server. In this setup, only the client can initiate contact, by sending a request and receiving a response from the server.
HTTP is a text-based protocol. Every request (a message sent by the client) and every response (a message sent by the server) conforms to a basic structure. The message (also referred to as the payload) consists of the following: a start line (the request line or the status line), the headers, and the body. This structure holds for every HTTP version defined so far.
Here we specify a method, a resource URI, and the HTTP protocol version to initiate the request. The URI refers to a resource (any type of data, e.g. an HTML document, an image file, a document) and the method represents an action on that resource. The most basic example might be as follows:
GET /index.html HTTP/1.1
Headers are key-value pairs that alter the behavior of the client or the server, and the keys are case-insensitive. Although the header section resembles a hash map, it is not one: the same key can be defined more than once.
There are many standardized headers. These headers can define metadata about the resource, the request, or the client itself. Here is an example request header:
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:109.0) Gecko/20100101 Firefox/119.0
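The fact that the same key may repeat, and that lookups ignore case, is why a plain hash map is a poor fit. A minimal Python sketch of a header container (names and values below are illustrative):

```python
# Sketch: why headers are not a plain hash map. Keys are case-insensitive
# and the same key may legally appear more than once.

class Headers:
    def __init__(self):
        self._items = []                  # ordered list of (key, value) pairs

    def add(self, key: str, value: str):
        self._items.append((key.lower(), value))

    def get_all(self, key: str):
        key = key.lower()
        return [v for k, v in self._items if k == key]

    def get(self, key: str):
        values = self.get_all(key)
        return values[0] if values else None

h = Headers()
h.add("Accept", "text/html")
h.add("accept", "application/json")       # same key, different case
print(h.get("ACCEPT"))                    # text/html: lookup ignores case
print(h.get_all("Accept"))                # both values survive
```

Storing pairs in a list instead of a dict keeps duplicate keys and their order, which is what the protocol requires.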
The body can be in any format, but in a request message it is typically used only with the POST, PUT and PATCH methods specified in the request line. By using the
Content-Type header, the client tells the server the type of the body being sent.
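Putting the three parts together, a raw HTTP/1.1 request can be assembled by hand. A minimal Python sketch (the host, path, and body are illustrative; a real client handles much more):

```python
# Sketch: assembling a raw HTTP/1.1 request by hand.

def build_request(method: str, path: str, host: str, body: bytes = b"") -> bytes:
    """Join the request line, headers, and body with CRLF separators."""
    lines = [
        f"{method} {path} HTTP/1.1",             # the request line
        f"Host: {host}",                         # mandatory in HTTP/1.1
        f"Content-Length: {len(body)}",
        "Content-Type: text/plain",
        "Connection: close",
    ]
    head = "\r\n".join(lines) + "\r\n\r\n"       # a blank line ends the headers
    return head.encode("ascii") + body

print(build_request("POST", "/submit", "example.com", b"hello").decode())
```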
We mentioned that an HTTP request has a verb that represents an action. The server sends the result of that action in the status line as a status code. The status line also contains the protocol version and a reason phrase:
HTTP/1.1 200 OK
The status codes are 3-digit numbers with predefined meaning. All valid status codes can be found in the official specification document: https://www.rfc-editor.org/rfc/rfc9110.html#name-status-codes.
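Parsing a status line is straightforward, since its three parts are space-separated and only the reason phrase may itself contain spaces. A small Python sketch:

```python
# Sketch: splitting a status line into its three parts. Only the reason
# phrase may contain spaces, hence maxsplit=2.

def parse_status_line(line: str):
    protocol, code, reason = line.split(" ", 2)
    return protocol, int(code), reason

print(parse_status_line("HTTP/1.1 200 OK"))         # ('HTTP/1.1', 200, 'OK')
print(parse_status_line("HTTP/1.1 404 Not Found"))  # ('HTTP/1.1', 404, 'Not Found')
```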
Response headers work just like their request counterparts. The only thing worth mentioning is that some headers are only appropriate in responses and some only in requests.
Once again, the body can be in any format, and that format must be annotated with the Content-Type header.
The main difference between HTTP/1.0 and HTTP/1.1 is the ability to reduce request latency by eliminating the TCP handshake on subsequent requests. To initialize a TCP connection, the client and the server perform a 'handshake' consisting of 3 messages (SYN, SYN-ACK, ACK). This alone adds an overhead of roughly one RTT (round trip time) before any data can flow. In HTTP/1.0, every request closes the underlying TCP connection; even if the client requests another document from the same server, it has to initialize a new TCP connection. However, with the introduction of the
Connection: keep-alive header in HTTP/1.1, the client is allowed to reuse the same TCP connection for subsequent requests.
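The effect of keep-alive can be observed with Python's standard library. The sketch below starts a throwaway local HTTP/1.1 server, issues two requests over a single `http.client` connection, and checks that the underlying TCP socket is reused (the handler and paths are illustrative):

```python
import http.client
import http.server
import threading

# Sketch: observing keep-alive. A local server stands in for a real one.

class Handler(http.server.BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"            # enables keep-alive by default

    def do_GET(self):
        body = b"hello"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):            # keep the demo output quiet
        pass

server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_address[1])
conn.request("GET", "/first")
first_socket = conn.sock                     # socket opened for the first request
conn.getresponse().read()                    # drain the body before reusing

conn.request("GET", "/second")
reused = conn.sock is first_socket           # same TCP connection?
conn.getresponse().read()

conn.close()
server.shutdown()
print(reused)  # True: both requests traveled over one TCP connection
```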
Before we continue with the evolution of HTTP, we should take a look at TLS, which stands for Transport Layer Security. As mentioned before, HTTP is a text-based protocol, and it is not secure on its own. Anybody who can eavesdrop on the network can easily inspect the packets and see their content.
To counter this weakness, HTTP and TLS are used together to encrypt the data end to end. This is also known as HTTPS, and it is the backbone of internet security.
TLS commonly uses a key agreement algorithm called Diffie-Hellman key exchange. In short, this algorithm ensures that the client and the server can establish a shared secret over an insecure medium (here, TCP) without ever transmitting the secret itself; an eavesdropper who observes the exchanged values cannot reconstruct it.
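The idea can be illustrated with a toy Diffie-Hellman exchange in Python. The numbers below are deliberately tiny and NOT secure; real TLS uses large primes or elliptic curves:

```python
# Toy Diffie-Hellman exchange with deliberately tiny numbers.
# NOT secure: real TLS uses ~2048-bit primes or elliptic curves.

p, g = 23, 5                      # public parameters: prime modulus, generator

a = 6                             # client's private value (never transmitted)
b = 15                            # server's private value (never transmitted)

A = pow(g, a, p)                  # client sends A over the insecure channel
B = pow(g, b, p)                  # server sends B over the insecure channel

client_secret = pow(B, a, p)      # client combines B with its private value
server_secret = pow(A, b, p)      # server combines A with its private value

print(client_secret == server_secret)  # True: both sides derive the same secret
```

An eavesdropper sees p, g, A, and B, but recovering the secret from those requires solving the discrete logarithm problem, which is infeasible at real key sizes.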
In addition to the TCP handshake, with HTTPS the client and the server perform a TLS handshake to exchange their key material. Once this step is complete, the data can be delivered encrypted over TCP.
A simplified TLS 1.2 handshake: the client sends a ClientHello; the server answers with a ServerHello and its Certificate; the client replies with ClientKeyExchange, ChangeCipherSpec, and Finished; the server completes the handshake with its own ChangeCipherSpec and Finished.
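In Python, this layering is visible in the `ssl` module: TLS is wrapped around an ordinary TCP socket, and the handshake runs when the socket is wrapped. A sketch (no connection is actually opened here; the function merely shows the order of the two handshakes):

```python
import socket
import ssl

# Sketch: how TLS layers over TCP with Python's ssl module.

context = ssl.create_default_context()       # verifies certificates, modern TLS only

def open_https(host: str, port: int = 443) -> ssl.SSLSocket:
    """TCP connect first, then perform the TLS handshake on top of it."""
    raw = socket.create_connection((host, port))           # TCP handshake
    return context.wrap_socket(raw, server_hostname=host)  # TLS handshake

# The default context reflects what "secure by default" means:
print(context.verify_mode == ssl.CERT_REQUIRED)  # True
print(context.check_hostname)                    # True
```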
This major version of HTTP brings improvements in latency while remaining backwards compatible with HTTP/1 systems.
One of the key features of HTTP/2 is multiplexing. Before explaining multiplexing, it is important to understand one of the shortcomings of HTTP/1.1. Previously, we mentioned that HTTP/1.1 can reuse the same TCP connection for multiple requests to reduce latency. However, it doesn't allow making parallel requests over the same connection: the client has to wait for the transfer of the first resource to complete before receiving the data for the next one, a problem known as head-of-line blocking. For that reason, many browsers circumvent this by opening additional TCP connections (the usual default is a maximum of 6 connections per domain). But every additional connection adds overhead; it is resource intensive. Multiplexing in HTTP/2, on the other hand, allows clients to receive data for out-of-order requests over the same TCP connection.
In addition to multiplexing, HTTP/2 can compress HTTP headers efficiently. An algorithm called HPACK takes advantage of a property of HTTP headers: the set of commonly used header keys is known in advance. Unlike earlier approaches to header compression, HPACK is also safe to use in the presence of TLS.
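The core idea can be sketched with a toy static table: well-known header pairs are replaced by small integer indices. This is a heavy simplification (real HPACK, defined in RFC 7541, adds a dynamic table and Huffman coding), but the indices below match the RFC's static table:

```python
# Toy sketch of HPACK's static-table idea. Real HPACK (RFC 7541) adds a
# dynamic table and Huffman coding on top of this.

STATIC_TABLE = {
    (":method", "GET"): 2,
    (":path", "/"): 4,
    (":status", "200"): 8,
    ("accept-encoding", "gzip, deflate"): 16,
}

def encode(headers):
    """Replace well-known header pairs with a small index; pass others literally."""
    out = []
    for pair in headers:
        if pair in STATIC_TABLE:
            out.append(("idx", STATIC_TABLE[pair]))  # a few bits on the wire
        else:
            out.append(("lit", pair))                # full name and value
    return out

print(encode([(":method", "GET"), (":path", "/"), ("user-agent", "demo")]))
# [('idx', 2), ('idx', 4), ('lit', ('user-agent', 'demo'))]
```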
Server push is another major improvement that comes with HTTP/2. With multiplexing, clients can request documents in parallel, and server push takes this one level further: it allows the server to preemptively send related documents to the client before the client sends explicit requests for them. If a
Link response header is present with a
preload instruction, the server can opt in to send the resource referenced in the
Link header. As you can imagine, this is especially useful when an HTML document references assets like CSS, fonts and images that will be required to render it. It is even possible to do this for XHR requests, to fetch any type of content that should be loaded lazily.
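For example, a response header opting a stylesheet into preloading might look like this (the path is illustrative):

```
Link: </styles/main.css>; rel=preload; as=style
```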
Unlike its predecessors, HTTP/3 is not built on TCP but on UDP, via a transport protocol called QUIC. UDP provides no mechanism to ensure data reliability, as opposed to TCP. But this trait can be used to overcome one of the shortcomings of TCP and achieve true multiplexing. As previously stated, HTTP/2 made it possible to request many resources at the same time, out of order, in a parallel fashion. However, TCP has a weakness that reduces the efficiency of multiplexing controlled in the HTTP layer: TCP always ensures that a packet is delivered successfully, and if a packet is lost, it is retransmitted. While a retransmission is pending, the whole transfer stalls for all requests, even when multiplexing, because from the TCP point of view there is no way to differentiate packets that belong to separate resources on the same connection. HTTP/3 does not have this flaw: it multiplexes streams independently while still ensuring the integrity of each individual resource.
Another significant improvement is the handshake mechanism. Prior versions of HTTP over TLS relied on two handshakes at the start: one for TCP and one for TLS. HTTP/3 unifies the two into a single handshake, reducing the connection setup time significantly.