
I would like to model the rate at which a Linux TCP receiver consumes data from its buffer. I know it depends on how the application is written (whether MSG_DONTWAIT is used, etc.), but what would be the most generic behavior? What is the mean time between the arrival of a packet in the buffer and the return of the associated recv() call?
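
To make the question concrete, here is a rough sketch of the kind of measurement I have in mind on the receiving side (the helper name and the 4 KiB read size are my own assumptions, not an established technique):

    #include <stdio.h>
    #include <time.h>
    #include <sys/socket.h>

    /* Sketch: time successive blocking recv() calls on a connected socket
     * to estimate how fast the application drains the kernel buffer.
     * The 4 KiB read size is an arbitrary choice. */
    static void measure_consumption(int fd)
    {
        char buf[4096];
        struct timespec prev, now;
        clock_gettime(CLOCK_MONOTONIC, &prev);

        for (;;) {
            ssize_t n = recv(fd, buf, sizeof buf, 0);   /* blocking read */
            if (n <= 0)
                break;                                  /* peer closed or error */
            clock_gettime(CLOCK_MONOTONIC, &now);
            double dt = (now.tv_sec - prev.tv_sec)
                      + (now.tv_nsec - prev.tv_nsec) / 1e9;
            if (dt > 0)
                printf("%zd bytes, %.6f s gap -> %.0f B/s\n", n, dt, n / dt);
            prev = now;
        }
    }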

I would like to find that rate through TCP pacing at the sender: I would adjust the data rate of the TCP sender until the receiver window remains stable, at which point the sending rate should equal the receiver's consumption rate. I would like to implement the pacing in userspace, but I am afraid the kernel would interfere (even with Nagle's algorithm disabled, etc.).
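
Something like the following is what I mean by userspace pacing (the 1448-byte chunk, roughly one MSS, is an assumption, and the kernel may still coalesce or delay segments underneath):

    #include <time.h>
    #include <sys/socket.h>

    /* Sketch: pace writes from userspace by sleeping between fixed-size
     * chunks. Chunk size and the sleep-based scheduling are illustrative
     * choices, not a guarantee of on-the-wire spacing. */
    static void paced_send(int fd, const char *data, size_t len,
                           double rate_bytes_per_sec)
    {
        const size_t chunk = 1448;                      /* ~one MSS; assumption */
        double interval = chunk / rate_bytes_per_sec;   /* seconds per chunk */

        for (size_t off = 0; off < len; off += chunk) {
            size_t n = len - off < chunk ? len - off : chunk;
            if (send(fd, data + off, n, 0) < 0)
                break;
            struct timespec ts = {
                .tv_sec  = (time_t)interval,
                .tv_nsec = (long)((interval - (long)interval) * 1e9),
            };
            nanosleep(&ts, NULL);                       /* crude pacing gap */
        }
    }

On Linux there is also the SO_MAX_PACING_RATE socket option (an unsigned value in bytes per second, honored by the fq qdisc and, on newer kernels, by TCP's internal pacing), which pushes the pacing into the kernel and avoids userspace timer granularity.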

I am looking for any hints or papers that could provide this kind of information.

Best regards

Comment from user207421: The sending rate can never exceed the receiving rate anyway in TCP, because of receive window control. All you have to do is send the data and track how long it took. You need to set a large positive linger timeout so that the final close() is synchronous rather than asynchronous.
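
A minimal sketch of that suggestion (the helper name, the 30-second linger value, and the omitted error handling are my own shortcuts):

    #include <time.h>
    #include <unistd.h>
    #include <sys/socket.h>

    /* Sketch: with a positive linger timeout, close() blocks until the
     * send buffer has been transmitted and acknowledged (or the timeout
     * expires), so the wall-clock time around send()+close() gives an
     * estimate of the receiver-limited transfer rate. */
    static double timed_transfer(int fd, const char *data, size_t len)
    {
        struct linger lg = { .l_onoff = 1, .l_linger = 30 };
        setsockopt(fd, SOL_SOCKET, SO_LINGER, &lg, sizeof lg);

        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        send(fd, data, len, 0);          /* error handling omitted for brevity */
        close(fd);                       /* blocks until drained or timeout */
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        return len / secs;               /* estimated bytes per second */
    }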

1 Answer


The sender window is given by min(congestion window, receiver window). Usually the congestion window increases slowly and stabilizes after a while. The faster the receiver application clears the kernel buffer, the faster the sender will push data. For example, with a congestion window of 20 segments but an advertised receiver window worth only 10 segments, at most 10 segments can be in flight.
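
For illustration, on Linux the live congestion window can be sampled with getsockopt(TCP_INFO) while sending; a minimal sketch (field names are from struct tcp_info in <netinet/tcp.h>):

    #include <stdio.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    /* Sketch: getsockopt(TCP_INFO) exposes the sender's live congestion
     * window (in segments) and RTT, so you can watch the effective
     * sending window evolve as the receiver drains its buffer. */
    static void print_windows(int fd)
    {
        struct tcp_info ti;
        socklen_t len = sizeof ti;
        if (getsockopt(fd, IPPROTO_TCP, TCP_INFO, &ti, &len) == 0)
            printf("cwnd=%u segs  ssthresh=%u  rtt=%u us\n",
                   ti.tcpi_snd_cwnd, ti.tcpi_snd_ssthresh, ti.tcpi_rtt);
    }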