My understanding of UDP was that while there is a limitation of MTU size, if a datagram exceeds the MTU, it will get fragmented on the IP layer, transmitted as separate packets, and then reconstructed on the receiving end. If one of the fragments gets dropped, the UDP layer will drop the whole datagram. If everything arrives, the IP layer re-constructs the datagram and UDP should receive it as a whole.
This isn't, however, the behaviour I am experiencing. Here's a simple server loop:
var udp = new UdpClient(port);
while (true) {
    IPEndPoint remote = new IPEndPoint(IPAddress.Any, 0);
    byte[] payload = udp.Receive(ref remote);
    Console.WriteLine($"Received {payload.Length}, IP: {remote}");
}
and I am sending 2999 bytes of data via netcat as follows:

head -c 2999 /dev/urandom | nc -4u -w1 localhost 8999

The server loop receives three times, with payloads of 1024, 1024, and 951 bytes. Since 2*1024 + 951 = 2999, it seems obvious that the data I intended to send was actually sent, but the UdpClient is receiving it as three separate datagrams.
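For contrast, here is a quick sanity check showing that when the sender actually issues a single send call, the receiver gets the payload as one complete datagram. This is a Python sketch rather than C# so it is easy to run standalone; it binds a throwaway receiver on an ephemeral port instead of the 8999 used above, which is purely for self-containment:

```python
import os
import socket

# A throwaway receiver standing in for the C# server loop.
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", 0))  # any free port
port = recv.getsockname()[1]

# Send 2999 bytes as ONE datagram (a single sendto call). netcat,
# by contrast, appears to read its stdin in chunks and emit one
# datagram per chunk.
payload = os.urandom(2999)
send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send.sendto(payload, ("127.0.0.1", port))

data, _ = recv.recvfrom(65535)
print(len(data))  # 2999 -- one complete datagram
```

Here the datagram boundary is preserved end to end: one `sendto` on the sending side yields exactly one `recvfrom` (or `Receive` in C#) on the receiving side.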
This seems inconsistent with the fact that the UDP layer works on whole datagrams. Should one implement their own fragment-reconstruction logic when working directly with UDP? Or is there a way to receive only complete datagrams?
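The observed behaviour can be reproduced by a sender that does one send call per chunk read from its input. The 1024-byte chunk size below is an assumption inferred from the receive sizes reported above, not something documented for netcat; again a self-contained Python sketch:

```python
import os
import socket

recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", 0))  # any free port
port = recv.getsockname()[1]

send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
payload = os.urandom(2999)

# One sendto per 1024-byte chunk, mimicking a sender that reads its
# stdin in 1024-byte blocks (the sizes observed in the question).
# Each sendto produces a separate datagram on the wire.
for i in range(0, len(payload), 1024):
    send.sendto(payload[i:i + 1024], ("127.0.0.1", port))

sizes = [len(recv.recvfrom(65535)[0]) for _ in range(3)]
print(sizes)  # typically [1024, 1024, 951]
```

If this matches what netcat is doing, then nothing was fragmented at the IP layer at all: three distinct datagrams were sent, and the receiver correctly reports three distinct datagrams.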
[…] true, try setting it to false when receiving. But taking sockets as an example, I'd expect you are responsible for combining the data, as the Receive method will (as it seems) simply block until either data is received or the buffer is full; see this question for some hints. – Sinatr