HTTP/2 frames are sent as one or more TCP packets, in the same way that TCP packets are ultimately sent as IP packets (or datagrams).
This means that even though HTTP/2 has multiplexing at the application layer (HTTP), it does not have truly independent streams at the transport layer (TCP). One issue with HTTP/2 is that it has merely moved the head of line (HOL) blocking problem from the HTTP layer to the TCP layer.
Let’s look at an example: a web page needs to download 10 images to display.
Under HTTP/1.1 the browser would open a TCP connection, fire off the first request, and then be stuck, as it could not use that TCP connection to make subsequent requests. This was despite the fact that the TCP connection was doing nothing while it waited for a response, and there was nothing stopping it at the TCP layer. It was purely an HTTP restriction, primarily because HTTP/1 was text based, so mixing up bits of different requests was not possible. HTTP/1.1 did have the concept of HTTP pipelining, which allowed subsequent requests to be sent, but the responses still had to come back in order, and it was very poorly supported. Instead, as a workaround, browsers opened multiple connections (typically 6), but that had many downsides too: the connections were slow to create and slow to get up to speed, and it was not possible to prioritise across them.
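To get a feel for why browsers bothered with 6 connections, here is a back-of-the-envelope sketch. The numbers are made up for illustration (10 images, one 100 ms round trip each, no pipelining), but the arithmetic shows the shape of the trade-off:

```python
import math

# Hypothetical numbers for illustration only: 10 images, each costing
# one 100 ms round trip, and each connection handling one request at a time.
RTT_MS = 100
IMAGES = 10

def http1_total_time(connections: int) -> int:
    # With no pipelining, requests run in rounds of `connections` at once.
    rounds = math.ceil(IMAGES / connections)
    return rounds * RTT_MS

print(http1_total_time(1))  # one connection: 10 sequential round trips -> 1000 ms
print(http1_total_time(6))  # six connections: 2 rounds -> 200 ms
```

Even in this idealised model, the 6-connection workaround only helps in batches, and it ignores the real-world costs mentioned above (connection setup and TCP slow start on every one of them).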
HTTP/2 allows those subsequent requests to be sent on the same TCP connection, with bits of all the responses coming back in any order and being pieced together for processing. So the first image requested might actually be the last received. This is especially useful on slow connections (where the delay in sending is a significant chunk of the total time taken) or when the server takes longer to process some requests than others (e.g. if the first image has to be fetched from disk but the second is already available in a cache, why not use the connection to send that second image first?). This is why HTTP/2 is generally faster and better than HTTP/1.1: it uses the TCP connection more efficiently and is less wasteful.
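The piecing-together works because every HTTP/2 frame carries a stream identifier, so frames from different responses can be interleaved on the wire and reassembled per stream at the other end. Here is a toy model of that idea (the frame contents and stream numbers are invented, and real HTTP/2 frames have headers, flags and length fields this ignores):

```python
# Toy model of HTTP/2 multiplexing: frames from different streams are
# interleaved on one connection, each tagged with a stream id, and the
# receiver reassembles them per stream. Payloads here are made up.
frames = [
    (1, "JPEG"),   # first chunk of the response on stream 1
    (3, "PN"),     # stream 3's response starts before stream 1 finishes
    (1, "DATA"),
    (3, "G"),
]

streams: dict[int, str] = {}
for stream_id, chunk in frames:
    streams[stream_id] = streams.get(stream_id, "") + chunk

print(streams)  # {1: 'JPEGDATA', 3: 'PNG'}
```

Note that the interleaving on the wire is invisible to the application: each stream comes out whole, regardless of the order the frames arrived in.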
However, because TCP is a guaranteed, in-order protocol that has no idea what the higher-level application (HTTP) is using it for, this introduces some problems for HTTP/2 if a TCP packet gets lost.
Let’s say those 10 images all come back in order, but a packet from the first image is lost. In theory, if HTTP/2 were made up of truly independent streams, the browser could display the last 9 images immediately, wait for the missing TCP packet to be retransmitted, and then display the first image. Instead, what happens is that all 10 images are held up waiting for that missing TCP packet to be resent, because TCP will not hand any later data up to the HTTP layer until the byte stream is back in order.
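This TCP-level head of line blocking can be sketched with a toy receive buffer. TCP delivers bytes to the application only in contiguous sequence order, so losing the very first segment strands everything behind it, even segments belonging to completely unrelated HTTP/2 streams (segment numbering and contents here are invented for the sketch):

```python
# Toy model of TCP's in-order delivery: segments carry sequence numbers,
# and the receiver hands data to the application only once it is contiguous.
# Segment 0 (part of the first image) is "lost"; segments 1-9 (the other
# nine images) have arrived but cannot be delivered to the HTTP layer.
arrived = {seq: f"segment-{seq}" for seq in range(1, 10)}  # seq 0 missing

def deliverable(buffer: dict[int, str], next_seq: int = 0) -> list[str]:
    out = []
    while next_seq in buffer:          # stop at the first gap
        out.append(buffer[next_seq])
        next_seq += 1
    return out

print(deliverable(arrived))        # [] - everything waits for segment 0
arrived[0] = "segment-0"           # the retransmission finally arrives
print(len(deliverable(arrived)))   # 10 - now the whole backlog flushes at once
```

The key point is that TCP has no notion of which bytes belong to which HTTP/2 stream; one gap blocks the lot.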
So in a lossy environment, HTTP/2 can perform significantly worse than HTTP/1.1 with its 6 separate connections, because a loss on one of those 6 connections only holds up that connection's response.
This was all known at the time HTTP/2 was being created but, in most cases, HTTP/2 was faster, so it was released anyway, with the remaining problem left to be fixed later.
HTTP/3 looks to solve this remaining case. It does this by moving away from TCP to a new protocol called QUIC, which has the idea of multiplexing built into it, unlike TCP. QUIC is built upon UDP, which is already well supported, rather than trying to create a whole new low-level protocol. But QUIC is very complicated and will take a while to get here, which is why HTTP/2 was not held up waiting for it and was instead released as a step along the way.
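It is worth being clear about what "built upon UDP" means: QUIC implements its streams, encryption and loss recovery in userspace, on top of plain UDP datagrams. The sketch below shows only that UDP substrate - one datagram sent and received over loopback - and is in no way QUIC itself (the payload is a placeholder):

```python
import socket

# QUIC runs its streams, encryption and loss recovery in userspace on
# top of plain UDP datagrams. This shows only the UDP substrate: a
# single datagram over loopback, with a placeholder payload where QUIC
# would put an encrypted QUIC packet.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))          # let the OS pick a free port
addr = receiver.getsockname()

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"quic-would-put-a-packet-here", addr)

data, _ = receiver.recvfrom(2048)
print(data)  # b'quic-would-put-a-packet-here'

sender.close()
receiver.close()
```

Because UDP itself guarantees nothing about ordering or delivery, QUIC is free to retransmit and reorder per stream, which is exactly what lets it avoid the TCP-level head of line blocking described above.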