15
votes

I have some doubts about increasing the TCP window size in my application. In my C++ application, we send data packets of around 1 KB from client to server over a blocking TCP/IP socket. Recently I came across the concept of TCP window size, so I tried increasing the value to 64 KB using setsockopt() for both SO_SNDBUF and SO_RCVBUF. After increasing this value, I see some performance improvement on the WAN connection, but not on the LAN connection.

As per my understanding of TCP window size:

The client sends data packets to the server. Once a full window's worth of data is in flight, it waits for an ACK of the first unacknowledged packet before sending more. On the WAN connection, the ACK from the server is delayed because of an RTT of around 100 ms. So in this case, increasing the TCP window size compensates for the ACK wait time and thereby improves performance.

I want to understand how the performance improves in my application.

In my application, even though the TCP window size (both the send and receive buffers) is increased using setsockopt() at the socket level, we still keep the same packet size of 1 KB (i.e., the bytes we send from client to server in a single socket send). We also disabled the Nagle algorithm (the built-in option that consolidates small packets into a larger one to avoid frequent socket calls).

My doubts are as follows:

  1. Since I am using a blocking socket, each 1 KB send should block until an ACK comes back from the server. How, then, does performance improve after increasing the TCP window size on the WAN connection alone? If I have misunderstood the concept of TCP window size, please correct me.

  2. For sending 64 KB of data, I believe I still need to call the socket send function 64 times (since I am sending 1 KB per send through a blocking socket) even though I increased my TCP window size to 64 KB. Please confirm this.

  3. What is the maximum TCP window size with window scaling (RFC 1323) enabled?

My English is not very good. If you can't understand any of the above, please let me know.

2
TCP can dynamically adjust its segment size to fit the smallest MTU of any router on the path, to optimize performance. If you don't want IP fragmentation to happen, you can prevent it by setting the "don't fragment" flag. – Philipp

2 Answers

34
votes

First of all, there is a big misconception evident from your question: that the TCP window size is what is controlled by SO_SNDBUF and SO_RCVBUF. This is not true.

What is the TCP window size?

In a nutshell, the TCP window size determines how much follow-up data (packets) your network stack is willing to put on the wire before receiving acknowledgement for the earliest packet that has not been acknowledged yet.

The TCP stack has to live with and account for the fact that once a packet has been determined to be lost or mangled during transmission, every packet sent, from that one onwards, has to be re-sent since packets may only be acknowledged in order by the receiver. Therefore, allowing too many unacknowledged packets to exist at the same time consumes the connection's bandwidth speculatively: there is no guarantee that the bandwidth used will actually produce anything useful.

On the other hand, not allowing multiple unacknowledged packets at the same time would simply kill the bandwidth of connections that have a high bandwidth-delay product. Therefore, the TCP stack has to strike a balance between using up bandwidth for no benefit and not driving the pipe aggressively enough (and thus allowing some of its capacity to go unused).

The TCP window size determines where this balance is struck.

What do SO_SNDBUF and SO_RCVBUF do?

They control the amount of buffer space that the network stack reserves for servicing your socket. These buffers accumulate, respectively, outgoing data that the stack has not yet been able to put on the wire, and data that has arrived from the wire but has not yet been read by your application.

If one of these buffers is full, you won't be able to send or receive more data until some space is freed. Note that these buffers only affect how the network stack handles data on the "near" side of the network interface (before it has been sent or after it has arrived), while the TCP window affects how the stack manages data on the "far" side of the interface (i.e. on the wire).

Answers to your questions

  1. No. If that were the case then you would incur a roundtrip delay for each packet sent, which would totally destroy the bandwidth of connections with high latency.

  2. Yes, but that has nothing to do with either the TCP window size or with the size of the buffers allocated to that socket.

  3. According to all sources I have been able to find (example), scaling allows the window to reach a maximum size of 1GB.

1
votes
  1. Since I am using blocking socket, for each data packet send of 1k, it should block if ACK doesn't come from the server.

Wrong. Sending in TCP is asynchronous. send() just transfers the data to the socket send buffer and returns. It only blocks while the socket send buffer is full.

Then how does the performance improve after improving the TCP window Size in WAN connection alone?

Because you were wrong about it blocking until it got an ACK.

  2. For sending 64K of data, I believe I still need to call socket send function 64 times

Why? You could just call it once with the 64k data buffer.

( since i am sending 1k per send through blocking socket)

Why? Or is this a repetition of your misconception under (1)?

even though I increased my TCP Window Size to 64K. Please confirm this.

No. You can send it all at once. No loop required.

What is the maximum limit of TCP window size with windows scaling enabled with RFC 1323 algorithm?

Much bigger than you will ever need.