I built a program that gathers some statistics on TCP and UDP transfers. The client sends a packet containing 30 KB of data, and it sends that packet 100 times back-to-back to the server. The client and server are connected over Ethernet. What's currently baffling me is that in my results, TCP finishes almost 2x faster than UDP.
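For reference, the sending side is essentially equivalent to the sketch below (Python used only for illustration, this is not my exact code; the host, port, and payload values are placeholders):

```python
# Simplified sketch of the test loop I'm describing (placeholders, not my real code).
import socket
import time

HOST, PORT = "192.168.1.10", 9000   # placeholder server address
PAYLOAD = b"x" * 30_000             # 30 KB per send
COUNT = 100                         # number of sends

def run_tcp() -> float:
    """Send the payload COUNT times over one TCP connection and time it."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.connect((HOST, PORT))
        start = time.perf_counter()
        for _ in range(COUNT):
            s.sendall(PAYLOAD)      # TCP: one byte stream, segmented by the stack
        return time.perf_counter() - start

def run_udp() -> float:
    """Send COUNT datagrams of 30 KB each and time it."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        start = time.perf_counter()
        for _ in range(COUNT):
            # UDP: each 30 KB datagram exceeds the Ethernet MTU, so it is
            # IP-fragmented into many ~1500-byte frames on the wire
            s.sendto(PAYLOAD, (HOST, PORT))
        return time.perf_counter() - start

if __name__ == "__main__":
    print(f"TCP: {run_tcp():.4f} s")
    print(f"UDP: {run_udp():.4f} s")
```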
I've done some research and seen explanations involving the MTU and related factors, but I can't seem to connect it all in my mind. Can someone explain what's happening? Do my results make sense?