2 votes

In several Internet sources I've seen a general recommendation to disable sending the Expect: 100-continue HTTP header in order to improve performance when the client is NOT actually going to send a large body.

However, testing with the following code reveals that sending the header makes the overall time decrease by ~50 ms on average.

var hc = new HttpClient();
hc.DefaultRequestHeaders.ExpectContinue = ?; // toggled between true and false for the two captures below
hc.BaseAddress = new Uri("http://XXX/api/");
var r = new HttpRequestMessage(HttpMethod.Post, new Uri("YYY", UriKind.Relative))
{
    Content = new StringContent("{}", Encoding.UTF8, @"application/json")
};

var tt = hc.SendAsync(r).Result;
tt.Content.ReadAsStringAsync().Result.Dump(); // Dump() is LINQPad's output extension
hc.Dispose();
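
For completeness, on the .NET Framework the header can also be switched off below the HttpClient level through ServicePointManager; a minimal sketch, using the same placeholder host as above:

// requires System.Net
ServicePointManager.Expect100Continue = false; // disable for all new service points...

var sp = ServicePointManager.FindServicePoint(new Uri("http://XXX/api/")); // ...or only for this endpoint
sp.Expect100Continue = false;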

Here is the Wireshark dump for the request with Expect: 100-continue:

  1 0.000000000    ss.ss.ss.176          dd.dd.dd.150         TCP      66     54515→80 [SYN] Seq=0 Win=8192 Len=0 MSS=1260 WS=4 SACK_PERM=1
  2 0.342137000    dd.dd.dd.150         ss.ss.ss.176          TCP      66     80→54515 [SYN, ACK] Seq=0 Ack=1 Win=8192 Len=0 MSS=1380 WS=1 SACK_PERM=1
  3 0.342687000    ss.ss.ss.176          dd.dd.dd.150         TCP      54     54515→80 [ACK] Seq=1 Ack=1 Win=66780 Len=0
  4 *REF*          ss.ss.ss.176          dd.dd.dd.150         HTTP     272    POST /XXX/api/YYY HTTP/1.1 
  5 0.361158000    dd.dd.dd.150         ss.ss.ss.176          HTTP     79     HTTP/1.1 100 Continue 
  6 0.361846000    ss.ss.ss.176          dd.dd.dd.150         TCP      56     54515→80 [PSH, ACK] Seq=219 Ack=26 Win=66752 Len=2
  7 0.705497000    dd.dd.dd.150         ss.ss.ss.176          HTTP     461    HTTP/1.1 200 OK  (application/json)
  8 0.726029000    ss.ss.ss.176          dd.dd.dd.150         TCP      54     54515→80 [FIN, ACK] Seq=221 Ack=433 Win=66348 Len=0
  9 1.067923000    dd.dd.dd.150         ss.ss.ss.176          TCP      54     80→54515 [FIN, ACK] Seq=433 Ack=222 Win=65535 Len=0
 10 1.068466000    ss.ss.ss.176          dd.dd.dd.150         TCP      54     54515→80 [ACK] Seq=222 Ack=434 Win=66348 Len=0

The same request without the header:

 11 9.300455000    ss.ss.ss.176          dd.dd.dd.150         TCP      66     54516→80 [SYN] Seq=0 Win=8192 Len=0 MSS=1260 WS=4 SACK_PERM=1
 12 9.640626000    dd.dd.dd.150         ss.ss.ss.176          TCP      66     80→54516 [SYN, ACK] Seq=0 Ack=1 Win=8192 Len=0 MSS=1380 WS=1 SACK_PERM=1
 13 9.641393000    ss.ss.ss.176          dd.dd.dd.150         TCP      54     54516→80 [ACK] Seq=1 Ack=1 Win=66780 Len=0
 14 *REF*          ss.ss.ss.176          dd.dd.dd.150         HTTP     250    POST /XXX/api/YYY HTTP/1.1 
 15 0.406794000    dd.dd.dd.150         ss.ss.ss.176          TCP      54     80→54516 [ACK] Seq=1 Ack=197 Win=65535 Len=0
 16 0.406963000    ss.ss.ss.176          dd.dd.dd.150         TCP      56     54516→80 [PSH, ACK] Seq=197 Ack=1 Win=66780 Len=2
 17 0.749589000    dd.dd.dd.150         ss.ss.ss.176          HTTP     461    HTTP/1.1 200 OK  (application/json)
 18 0.769053000    ss.ss.ss.176          dd.dd.dd.150         TCP      54     54516→80 [FIN, ACK] Seq=199 Ack=408 Win=66372 Len=0
 19 1.109276000    dd.dd.dd.150         ss.ss.ss.176          TCP      54     80→54516 [FIN, ACK] Seq=408 Ack=200 Win=65535 Len=0
 20 1.109742000    ss.ss.ss.176          dd.dd.dd.150         TCP      54     54516→80 [ACK] Seq=200 Ack=409 Win=66372 Len=0

The same results were obtained with IIS 7.5 and IIS 8.0.

The questions are:

  1. What makes the request with the Expect header execute faster, when in theory the opposite should be the case?
  2. Is it always the case that the body of a POST request is sent in a separate TCP packet? (I've looked through only a couple of samples, and there it holds.) In other words, why doesn't the TCP packet at line 14 of the dump contain the data (the POST body) that was sent in the TCP packet at line 16?
It's hard to answer the first question without seeing the data in the captures. It looks as if the actual POST data is contained within packets 4 and 14 respectively, since their size is 250+ bytes. Packets 15 and 16 are a bit suspect - there's no reason for an extra round trip. On a general level, no - HTTP clients normally do not separate POST data from metadata into distinct TCP packets in requests. - RomanK
One more observation: if I disable the Nagle algorithm (ServicePointManager.UseNagleAlgorithm = false), then providing or omitting the Expect header makes almost no difference, but the POST body is still pushed in a separate TCP packet (a sketch of that combination follows below). - Pavel Baravik
I have the same nasty issue with HttpWebRequest and the .NET Framework. - Dmitriy
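
A minimal sketch of the combination described in the comment above, assuming the .NET Framework ServicePointManager and the same placeholder endpoint as in the question:

// requires System, System.Net, System.Net.Http, System.Text
ServicePointManager.UseNagleAlgorithm = false;   // turn off Nagle's algorithm process-wide

var hc = new HttpClient();
hc.DefaultRequestHeaders.ExpectContinue = false; // with Nagle off, true/false reportedly makes little difference here
hc.BaseAddress = new Uri("http://XXX/api/");

var r = new HttpRequestMessage(HttpMethod.Post, new Uri("YYY", UriKind.Relative))
{
    Content = new StringContent("{}", Encoding.UTF8, "application/json")
};
var response = hc.SendAsync(r).Result;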

2 Answers

1 vote

I had the same problem (~50 ms of delay). I guess it's a bug in the HttpClient implementation in the .NET Framework.

I ran some tests with .NET Core 2.1 and was able to remove the Continue message without any performance penalty.
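
A minimal sketch of that kind of test on .NET Core 2.1 (the endpoint is a placeholder, not the exact code I ran):

// requires System, System.Net.Http, System.Text
var hc = new HttpClient();
hc.DefaultRequestHeaders.ExpectContinue = false; // the Expect header is not sent, so there is no 100-continue round trip

var content = new StringContent("{}", Encoding.UTF8, "application/json");
var response = hc.PostAsync("http://XXX/api/YYY", content).Result;
Console.WriteLine(response.Content.ReadAsStringAsync().Result);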

0 votes

I resolved the same problem by using TcpClient directly:

// requires System, System.IO, System.Net.Sockets, System.Text;
// postJsonBody and httpAddress are assumed to be defined elsewhere
var uri = new Uri(httpAddress);

var tcpClient = new TcpClient();
tcpClient.Connect(uri.Host, uri.Port);

string httpResponse = null;

using (NetworkStream networkStream = tcpClient.GetStream())
{
    // build the raw HTTP request by hand - no Expect: 100-continue header is added
    // (note: HTTP header lines must end with CRLF; AppendLine uses Environment.NewLine, which is CRLF on Windows)
    var httpRequestBuilder = new StringBuilder();

    httpRequestBuilder.AppendLine("POST / HTTP/1.1");
    httpRequestBuilder.Append("Host: ").AppendLine(uri.Host);
    httpRequestBuilder.AppendLine("Content-Type: application/json");
    // Content-Length must be the byte count of the body, not the character count
    httpRequestBuilder.Append("Content-Length: ").AppendLine(Encoding.UTF8.GetByteCount(postJsonBody).ToString());
    httpRequestBuilder.AppendLine("Connection: close"); // so ReadToEnd() returns once the server closes the connection
    httpRequestBuilder.AppendLine();
    httpRequestBuilder.Append(postJsonBody); // no trailing newline beyond Content-Length

    var httpRequest = httpRequestBuilder.ToString();
    var requestBytes = Encoding.UTF8.GetBytes(httpRequest);

    // headers and body go out in a single Write call, so the request is not
    // split into separate TCP segments and there is no 100-continue round trip
    networkStream.Write(requestBytes, 0, requestBytes.Length);

    // read the response
    using (var sr = new StreamReader(networkStream, Encoding.UTF8))
    {
        httpResponse = sr.ReadToEnd();
    }
}

tcpClient.Close();