1 vote

These days I'm evaluating HTTP/2 (on Nginx) as a possible candidate for boosting the performance of my application.

I was looking at this nice Akamai HTTP/2 demo. From this demo I can see that the "http2" part loads much faster, apparently thanks to HTTP/2's multiplexing feature.

So, I decided to look a bit closer. I opened Chrome (version 51) developer tools and examined the Network panel.

I expected to see one single network connection handling all the requests (i.e. multiplexing).

However, I see multiple connections issued, one per image tile:

[screenshot: Chrome Network panel waterfall]

Moreover, I see that there is a delay ("Stalled") for almost every request:

[screenshot: request timing breakdown showing the "Stalled" phase]

I expected that (contrary to HTTP/1.1) all requests would be issued in parallel, without delays. Could someone help me understand what is going on?


1 Answer

3 votes

What you see are not multiple connections, one per image tile, but multiple requests, one per image tile, all carried on a single TCP connection.

The fact that they are multiplexed is evident from the large number of requests (tens or even hundreds) sent at the same time. Note how the requests are all aligned vertically.

Compare this with an HTTP/1.1 profile and you will see a ladder-like, ziggurat-style waterfall, because typically only 6 requests per host can be in flight at a time. See for example this presentation I gave, at 39:54.

What you see is therefore the expected profile for a multiplexed protocol like HTTP/2.
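If you want to reproduce the two profiles outside the browser, here is a rough sketch using the Python httpx client (my choice of tool here, not something the demo itself uses; it assumes `httpx[http2]` is installed, and the URL, the request count, and the `fetch_all` helper are just placeholders for any HTTP/2-capable origin):

```python
import asyncio
import httpx

URL = "https://http2.akamai.com/"  # placeholder: any HTTP/2-capable origin
N = 50

async def fetch_all(client: httpx.AsyncClient) -> None:
    # Fire N requests concurrently and report which protocol carried them.
    responses = await asyncio.gather(*[client.get(URL) for _ in range(N)])
    print(responses[0].http_version, "-", len(responses), "responses")

async def main() -> None:
    # HTTP/2: the client should negotiate one TCP connection per origin and
    # multiplex all N requests on it, so their start times line up vertically,
    # just like in the demo.
    async with httpx.AsyncClient(http2=True) as h2:
        await fetch_all(h2)

    # HTTP/1.1 with a browser-like cap of 6 connections:
    # requests run in batches, giving the ladder/ziggurat waterfall.
    limits = httpx.Limits(max_connections=6)
    async with httpx.AsyncClient(http2=False, limits=limits) as h1:
        await fetch_all(h1)

asyncio.run(main())
```

The first client should print "HTTP/2" and show all requests starting together; the second staggers them in groups of at most 6, which is the same contrast you see between the two halves of the Akamai demo.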

The tiny "stalled" delay that you see for those requests may be due to internal implementation delays (such as queuing) as well as HTTP/2 protocol details such as flow control.
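To make the flow-control point concrete, here is a toy model, not a real HTTP/2 stack; the function and chunking are invented for illustration, though 65,535 bytes is the protocol's default initial window. The sender may only have a window's worth of bytes outstanding, so a stream pauses until the receiver grants more credit with a WINDOW_UPDATE, and those pauses can surface as small "stalled" gaps:

```python
# Toy illustration of HTTP/2 flow control (not a real implementation).
def send_with_flow_control(total_bytes: int, window: int, update_size: int) -> int:
    sent = 0
    credit = window   # bytes the sender is currently allowed to transmit
    stalls = 0
    while sent < total_bytes:
        chunk = min(credit, total_bytes - sent)
        if chunk == 0:
            # No credit left: the stream is stalled until a WINDOW_UPDATE arrives.
            stalls += 1
            credit += update_size
            continue
        sent += chunk
        credit -= chunk
    return stalls

# A 1 MiB response with the default 64 KiB-ish window needs many update rounds.
print(send_with_flow_control(1_048_576, 65_535, 65_535))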