I would like to model the rate at which a Linux TCP receiver consumes data from its receive buffer. I know it depends on how the application is written (whether it uses MSG_DONTWAIT, etc.), but what would be the most generic behavior? In particular, what is the mean time between the arrival of a packet in the receive buffer and the return of the recv() call that consumes it?
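For what it's worth, one rough way to estimate that delay on the receiver itself is via Little's law: the mean time data spends in the buffer is approximately the mean unread backlog divided by the rate at which the application drains it. Below is a minimal measurement sketch under that assumption (Linux, an already connected blocking TCP socket fd; estimate_buffer_delay and the variable names are mine, not a standard API). It samples the unread backlog with the SIOCINQ ioctl just before each recv():

    /*
     * Minimal sketch, assuming Linux and a connected, blocking TCP socket `fd`.
     * Samples the unread backlog (SIOCINQ) before each recv() and applies
     * Little's law: mean waiting time in the receive buffer
     * ~= mean backlog (bytes) / drain rate (bytes per second).
     */
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <sys/ioctl.h>
    #include <linux/sockios.h>   /* SIOCINQ: unread bytes in the receive queue, see tcp(7) */
    #include <time.h>

    static double now_sec(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec + ts.tv_nsec / 1e9;
    }

    /* Estimated mean buffering delay in seconds, or -1.0 on error. */
    double estimate_buffer_delay(int fd, unsigned samples)
    {
        char buf[64 * 1024];
        double backlog_sum = 0.0, bytes_read = 0.0;
        double t0 = now_sec();

        for (unsigned i = 0; i < samples; i++) {
            int pending = 0;
            if (ioctl(fd, SIOCINQ, &pending) < 0)
                return -1.0;
            backlog_sum += pending;              /* bytes waiting before this recv() */

            ssize_t n = recv(fd, buf, sizeof(buf), 0);
            if (n <= 0)
                break;
            bytes_read += (double)n;
        }

        double elapsed = now_sec() - t0;
        if (bytes_read <= 0.0 || elapsed <= 0.0)
            return -1.0;

        double mean_backlog = backlog_sum / samples;   /* bytes */
        double drain_rate   = bytes_read / elapsed;    /* bytes/s consumed by the app */
        return mean_backlog / drain_rate;              /* Little's law estimate */
    }

Note that this only samples the backlog at recv() times, so it gives a rough mean rather than a true per-packet sojourn time; per-packet arrival timestamps would need kernel timestamping (SO_TIMESTAMPING-style), which is more involved for TCP.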
I would like to estimate that rate through TCP pacing at the sender: I would adjust the sender's data rate until the receiver's advertised window remains stable, at which point the sending rate should equal the receiver's consumption rate. I would like to do the pacing in userspace, but I am afraid the kernel would interfere with it (even with Nagle's algorithm disabled via TCP_NODELAY, etc.).
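In case it helps frame the question, here is a minimal user-space pacing sketch under my stated assumptions (Linux, a connected TCP socket fd; pace_send, rate_bps and chunk are illustrative names, not a real API). It disables Nagle with TCP_NODELAY and spaces fixed-size send() calls with absolute-deadline sleeps, so data enters the socket at roughly rate_bps bytes per second. This only controls when data enters the kernel's send buffer; the congestion window and the qdisc still decide when segments actually leave the host, which is exactly the interference I am worried about.

    /*
     * Minimal user-space pacing sketch, assuming Linux and a connected TCP
     * socket `fd` (pace_send, rate_bps, chunk are illustrative names).
     * Nagle is disabled and send() calls are spaced with absolute deadlines.
     */
    #include <errno.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <netinet/in.h>     /* IPPROTO_TCP */
    #include <netinet/tcp.h>    /* TCP_NODELAY */
    #include <sys/socket.h>
    #include <time.h>

    int pace_send(int fd, const char *data, size_t len, uint64_t rate_bps, size_t chunk)
    {
        int one = 1;
        if (setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one)) < 0)
            return -1;

        struct timespec next;
        clock_gettime(CLOCK_MONOTONIC, &next);

        for (size_t off = 0; off < len; off += chunk) {
            size_t n = (len - off < chunk) ? len - off : chunk;

            /* Blocking send of one chunk; the stack may still coalesce segments. */
            if (send(fd, data + off, n, 0) < 0)
                return -1;

            /* Advance the deadline by n / rate_bps seconds and sleep until it. */
            uint64_t ns = (uint64_t)n * 1000000000ULL / rate_bps;
            next.tv_nsec += (long)(ns % 1000000000ULL);
            next.tv_sec  += (time_t)(ns / 1000000000ULL) + next.tv_nsec / 1000000000L;
            next.tv_nsec %= 1000000000L;
            while (clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL) == EINTR)
                ;
        }
        return 0;
    }

For kernel-assisted pacing, recent kernels also offer the SO_MAX_PACING_RATE socket option (typically used together with the fq qdisc). While adjusting the rate, the window advertised by the receiver can be observed in the ACKs (e.g. with tcpdump) or, on recent kernels, via the tcpi_snd_wnd field returned by getsockopt(TCP_INFO).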
I am looking for any hints or papers that could provide this kind of information.
Best regards