3
votes

I am using boost::asio for both UDP and TCP communication between my client and server applications. I found that I can only transmit 65535 bytes at a time over UDP, since that appears to be the maximum packet size in UDP.

Does the same 65535-byte maximum packet size apply to TCP? I am able to send chunks larger than that using boost::asio::write and read them all fine on the client side. So with TCP I don't seem to have to worry about the max packet size, but with UDP I have to ensure that each socket.send_to call uses a buffer smaller than the max packet size.
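For the UDP case, right now I split the data into chunks myself before each send_to call. A rough sketch of just the chunking logic (with the actual boost::asio socket calls left out, and 65507 assumed as the usable IPv4 payload limit):

```cpp
#include <cstddef>
#include <vector>

// Assumed maximum UDP payload over IPv4: 65535 (IP total length)
// minus a 20-byte IPv4 header and an 8-byte UDP header.
constexpr std::size_t kMaxUdpPayload = 65507;

// Split a large buffer into chunk sizes that each fit in one datagram;
// each returned size would correspond to one socket.send_to() call.
std::vector<std::size_t> chunk_sizes(std::size_t total) {
    std::vector<std::size_t> sizes;
    while (total > 0) {
        std::size_t n = total < kMaxUdpPayload ? total : kMaxUdpPayload;
        sizes.push_back(n);
        total -= n;
    }
    return sizes;
}
```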

How does this work? Is it because TCP is stream-based and takes care of creating packets at a lower layer? Is there some way to increase the max packet size in UDP?

Is it possible that some of the bytes of a UDP packet I send from the server side are missing when I read it on the client side? If so, is there a way to detect that loss on the UDP client side?


2 Answers

3
votes

TCP takes care of transmission control (that's actually what the T and C stand for in TCP). You usually don't worry about how much data you write to a TCP socket, because the protocol manages on its own how much data to send in each packet. A TCP segment is ultimately limited by the 65535-byte maximum size of the underlying IP packet, but you usually don't have to think about it, because TCP is rather complex and handles a lot of this for you.

UDP, however, lacks any such control mechanism and is kept as simple as possible, so you need to decide how much data to send in each packet. The maximum size is 65535 bytes, because the UDP header has only a two-byte field to specify the length of a message (in practice the usable payload is slightly less, since the IP and UDP headers count against the limit too). Another thing to consider when deciding on a UDP packet size is that lower-level protocols have their own limits as well (65535 bytes for IP, ~1500 bytes for Ethernet).
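The header arithmetic works out like this (assuming a minimal 20-byte IPv4 header; IP options or IPv6 change the numbers):

```cpp
// The 16-bit IP total-length field caps the whole packet at 65535 bytes.
// Subtracting the minimum IPv4 header and the 8-byte UDP header gives
// the largest payload you can hand to a single send_to().
constexpr int kMaxIpPacket   = 65535;  // 2^16 - 1, from the 16-bit field
constexpr int kIpv4Header    = 20;     // minimum IPv4 header, no options
constexpr int kUdpHeader     = 8;      // source/dest port, length, checksum
constexpr int kMaxUdpPayload = kMaxIpPacket - kIpv4Header - kUdpHeader;
```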

You can't increase the maximum size of a UDP packet, and you generally don't want to, because large UDP packets can be dropped without any notice. Other answers on SO suggest using 512-byte to 8K payloads for datagrams sent over the internet.

It is possible for a UDP datagram to be damaged in transit (bytes corrupted rather than "missing"). But each datagram is covered by a checksum, so corruption is detected; in practice the receiving network stack silently discards datagrams that fail the check, so to your application a damaged datagram looks the same as a lost one.
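The checksum UDP uses is the standard Internet checksum (RFC 1071): a one's-complement sum of 16-bit words, complemented at the end. A minimal sketch of the algorithm (the real UDP checksum also covers a pseudo-header with the source and destination IP addresses, omitted here):

```cpp
#include <cstddef>
#include <cstdint>

// Internet checksum (RFC 1071) over a byte buffer.
uint16_t internet_checksum(const uint8_t* data, std::size_t len) {
    uint32_t sum = 0;
    // Sum the buffer as big-endian 16-bit words.
    for (std::size_t i = 0; i + 1 < len; i += 2)
        sum += (uint32_t(data[i]) << 8) | data[i + 1];
    if (len & 1)                       // pad an odd trailing byte with zero
        sum += uint32_t(data[len - 1]) << 8;
    while (sum >> 16)                  // fold carries back into the low 16 bits
        sum = (sum & 0xFFFF) + (sum >> 16);
    return uint16_t(~sum);             // one's complement of the sum
}
```

Any single-bit corruption changes the sum, which is how the receiver detects a damaged datagram.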

3
votes

The problem is not so much related to UDP and TCP as it is to IP. UDP and TCP are transport protocols, which do not themselves define a maximum packet (or segment) size. IP is a network protocol. An IP packet can contain at most 65535 (2^16 - 1) bytes, since two bytes in its header encode the packet size. Large IP packets are divided into fragments. If one of the fragments is lost or corrupted, the entire IP packet is lost. The size of the fragments depends on the link-layer protocol, usually Ethernet. For Ethernet the usual maximum is 1500 bytes, or more if jumbo frames are allowed.

So if you transmit UDP packets larger than 1500 bytes, each may be divided into several fragments. This is normally fine if there are no losses on the network. However, when there are losses, the impact grows with the number of fragments the packet depends on. For example, consider a network with 1% loss: if you transmit a UDP packet of 65536 bytes, it will most likely be divided into about 44 fragments. The packet is received only if every fragment arrives, so the probability of receiving it is (1 - 0.01)^44 = 64%...
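The arithmetic above can be sketched as two small helpers (a rough model: it ignores per-fragment header overhead and assumes fragment losses are independent):

```cpp
#include <cmath>

// Number of ~MTU-sized fragments needed for a datagram (ceiling division).
int fragment_count(int datagram_bytes, int mtu_bytes) {
    return (datagram_bytes + mtu_bytes - 1) / mtu_bytes;
}

// The datagram survives only if every fragment arrives, so with an
// independent per-fragment loss rate p the delivery probability is
// (1 - p)^fragments.
double delivery_probability(double loss_rate, int fragments) {
    return std::pow(1.0 - loss_rate, fragments);
}
```

With a 65536-byte datagram over 1500-byte links and 1% loss, this reproduces the ~44 fragments and ~64% delivery probability above.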

This is also why many TCP implementations and UDP-based applications keep their packets at 1500 bytes or less.

Detecting and extracting corrupted packets yourself is a nontrivial task; look into packet-capture libraries such as libpcap.