
I built a program that collects some statistics on TCP and UDP transfers. The client sends a packet containing 30 KB of data to the server, and this send is repeated 100 times back to back. The client and server in this case are connected over Ethernet. What's currently baffling me is that, in my results, TCP finishes almost 2x faster than UDP.

I've done some research and seen explanations involving the MTU and such, but I can't seem to connect it all in my mind. Can someone explain what's happening? Do my results make sense?
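
To make the setup concrete, here is a minimal sketch of the kind of timed send loops described above (Python for illustration only; the server address, port, and payload constant are placeholders, not my actual code):

    import socket
    import time

    PAYLOAD = b"x" * 30 * 1024            # 30 KB of dummy data per send
    SERVER = ("192.168.1.10", 5000)       # placeholder server address/port
    ROUNDS = 100

    def time_tcp():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.connect(SERVER)
            start = time.perf_counter()
            for _ in range(ROUNDS):
                s.sendall(PAYLOAD)        # stream semantics: the kernel may coalesce or split writes
            return time.perf_counter() - start

    def time_udp():
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            start = time.perf_counter()
            for _ in range(ROUNDS):
                s.sendto(PAYLOAD, SERVER) # one 30 KB datagram per call, IP-fragmented on Ethernet
            return time.perf_counter() - start

    print("TCP:", time_tcp(), "seconds")
    print("UDP:", time_udp(), "seconds")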

You could monitor the transmission with a network traffic monitoring tool like Ethereal or Wireshark to see what exactly is happening there. – Mikael Lepistö
Are you using a connected socket in UDP, or are you calling sendto? Also, are the client and the server on the same LAN? – mac

1 Answer


Most likely you are seeing the effect of Nagle's algorithm: http://en.wikipedia.org/wiki/Nagle's_algorithm

TCP will "wait" for more data for a short period of time and send it together in a single packet, whereas UDP will send each datagram individually.
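
If you want to confirm or rule this out, disable Nagle's algorithm on the TCP socket with the TCP_NODELAY option and rerun your measurement. A minimal sketch (Python shown for illustration; in C the same option is set with setsockopt on IPPROTO_TCP):

    import socket

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Disable Nagle's algorithm: each write goes out immediately instead of
    # being held back briefly and coalesced with subsequent small writes.
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    s.connect(("192.168.1.10", 5000))   # placeholder address

If the TCP timing changes noticeably with TCP_NODELAY set, coalescing was a factor; you can also compare the packet sizes on the wire in Wireshark, as suggested in the comments.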