3 votes

IMPORTANT NOTE: I'm aware that UDP is an unreliable protocol. But as I'm not the manufacturer of the device that delivers the data, I can only try to minimize the impact. Hence, please don't post any more statements about UDP being unreliable; I need suggestions for reducing the loss to a minimum instead.

I've implemented a C++ application which needs to receive a large number of UDP packets in a short time and has to run under Windows (Winsock). The program works, but seems to drop packets once the data rate (or packet rate) per UDP stream reaches a certain level... Note that I cannot change the camera interface to use TCP.

Details: It's a client for Gigabit Ethernet cameras which send their images to the computer in UDP packets. The data rate per camera is often close to the capacity of the network interface (~120 megabytes per second), which means that even with 8 KB jumbo frames the packet rate is 10'000 to 15'000 per camera. Currently we have 4 cameras connected to one computer... which means up to 60'000 packets per second.

The software handles all cameras at the same time; the stream receiver for each camera is implemented as a separate thread with its own receiving UDP socket. Above a certain frame rate, the software seems to miss a few UDP packets every few minutes, even though only ~60-70% of the network capacity is used.

Hardware Details

  • Cameras are from third-party manufacturers! They send UDP streams to a configurable UDP endpoint via Ethernet. No TCP support...
  • Cameras are connected via their own dedicated network interface (1GBit/s)
  • Direct connection, no switch used (!)
  • Cables are CAT6e or CAT7

Implementation Details

So far I set the SO_RCVBUF to a large value:

int32_t rbufsize = 4100 * 3100 * 2; // space for two 12 MP images
if (setsockopt(s, SOL_SOCKET, SO_RCVBUF, (char*)&rbufsize, sizeof(rbufsize)) == SOCKET_ERROR) {
    // perror() does not report Winsock errors; WSAGetLastError() does.
    fprintf(stderr, "SO_RCVBUF failed, Winsock error %d\n", WSAGetLastError());
    throw runtime_error("Could not set socket option SO_RCVBUF.");
}
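Since a successful setsockopt call does not necessarily prove which size is actually in effect, one sanity check is to read the value back (a small fragment; `s` is the receiving socket from above):

int32_t effective = 0;
int optlen = sizeof(effective);
if (getsockopt(s, SOL_SOCKET, SO_RCVBUF, (char*)&effective, &optlen) == 0) {
    // Compare the effective buffer size against the requested one.
    printf("Effective SO_RCVBUF: %d bytes\n", effective);
}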

setsockopt does not report an error, hence I assume the value was accepted. I also set the priority of the main process to HIGH_PRIORITY_CLASS using the following code:

SetPriorityClass(GetCurrentProcess(), HIGH_PRIORITY_CLASS); 

However, I haven't found a good way to raise the priorities of the individual receiver threads yet. The threads are created after the process priority is set...
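(Presumably the Win32 SetThreadPriority call could do this from inside each receiver thread; a one-line sketch, untested in this setup:)

// Raise the calling thread's scheduling priority;
// THREAD_PRIORITY_TIME_CRITICAL is the most aggressive non-realtime level.
SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_TIME_CRITICAL);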

The receiver threads use blocking I/O to receive one packet at a time (with a 1000 ms timeout so that the thread can react to a global shutdown signal). When a packet is received, it is stored in a buffer and the loop immediately continues to receive further packets.
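Roughly, each receiver thread runs a loop like the following sketch (`shutdownRequested` and `imageBuffer` are placeholders for the application's own shutdown flag and packet store):

DWORD timeoutMs = 1000;
setsockopt(s, SOL_SOCKET, SO_RCVTIMEO, (char*)&timeoutMs, sizeof(timeoutMs));

std::vector<char> pkt(9000); // room for one jumbo frame
while (!shutdownRequested) {
    int n = recv(s, pkt.data(), (int)pkt.size(), 0);
    if (n == SOCKET_ERROR) {
        if (WSAGetLastError() == WSAETIMEDOUT)
            continue;                 // timeout: just re-check the shutdown flag
        break;                        // real socket error
    }
    imageBuffer.store(pkt.data(), n); // keep the payload, loop immediately
}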

Questions

Is there any other way to reduce the probability of packet loss? Is there any possibility to receive all packets stored in the socket's buffer with one call? (I don't need any information about the sender side; just the contained payload.) Maybe you can also suggest some registry/network-card settings to check...

If you can't live with dropped packets, UDP is the wrong protocol to use. It's unreliable by intent and design, with no guarantee of delivery. – Shawn
"data rate per camera is often close to the capacity of the network interface" - "we have connected 4 cameras to one computer" - it makes little sense to ask about reducing UDP packet drop rate... Try switching to 10G cable first or employ separate 1G networks for each camera.user7860670
Maybe the OS's buffer size for UDP is too small. We had similar issues and fixed them by increasing the buffer size. Another way is to query the socket more frequently, although this is not always possible, depending on your application. – fdan
(1) Start with one thread per connected camera, just reading the message(s); don't do anything with the messages except monitor for dropped packets. (2) Now queue the incoming messages for processing, again monitoring for dropped packets. (3) Add processing (of the queue), checking for drops... (4) etc. Obviously, if you get dropped packets at any stage, you need to look closer at CPU/network usage and at how you implemented the problematic stage. – Richard Critten
One thing to keep in mind is that the UDP packets can get dropped anywhere along the path they travel (including inside the sending device's IP stack!) -- and if the packets are lost before they get to your computer, there is very little you can do about it. – Jeremy Friesner

2 Answers

1 vote

To increase the UDP Rx performance for GigE cameras on Windows, you may want to look into writing a custom NDIS filter driver. This allows you to intercept the packets in the kernel, stop them from reaching userspace individually, pack them into a buffer, and then hand that buffer to your application via a custom IOCTL. I have done this; it took about a week of work. There is a sample available from Microsoft which I used as the base for it.

It is also possible to use an existing generic capture driver such as pcap, which I also tried; that took about half a week. This is not as good, because pcap cannot determine where frames end, so packet grouping will be suboptimal.
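For reference, the capture loop of the pcap approach might look roughly like this (a sketch against the libpcap/Npcap API; the device name and filter string are placeholders):

#include <pcap.h>

int main() {
    char errbuf[PCAP_ERRBUF_SIZE];
    // Open the NIC the camera is attached to (use pcap_findalldevs to list names).
    pcap_t* p = pcap_open_live("\\Device\\NPF_{...}", 65536, 1, 1, errbuf);
    if (!p) return 1;

    // Only capture the camera's UDP stream.
    struct bpf_program prog;
    pcap_compile(p, &prog, "udp and src host 192.168.1.10", 1, PCAP_NETMASK_UNKNOWN);
    pcap_setfilter(p, &prog);

    struct pcap_pkthdr* hdr;
    const u_char* frame;
    int rc;
    while ((rc = pcap_next_ex(p, &hdr, &frame)) >= 0) {
        if (rc == 0) continue; // read timeout, no packet this time
        // frame points at the Ethernet header; for a plain IPv4/UDP packet the
        // payload starts after 14 (Ethernet) + 20 (IPv4) + 8 (UDP) bytes.
        // ... copy the payload into the image buffer ...
    }
    pcap_close(p);
}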

I would suggest first digging deep into the network stack settings and making sure that the PC is not starved for resources. Look at tuning guides (e.g. for Intel network cards) for this type of load; that could potentially have a larger impact than a custom driver.

(I know this is an older thread and you have probably solved your problem by now, but things like this are good to document for future adventurers...)

0 votes
  • Use IOCP and WSARecv in overlapped mode; you can keep a large number of receives posted at once (around ~60k WSARecv calls), as sketched below.
  • On the thread that handles GetQueuedCompletionStatus, process the data and also post another WSARecv to compensate for the one that was just consumed when the data arrived.
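A minimal sketch of that pattern (the port is a placeholder; error handling and cleanup are omitted, and far fewer than 60k receives are posted):

#include <winsock2.h>
#include <windows.h>
#pragma comment(lib, "ws2_32.lib")

struct PacketContext {
    WSAOVERLAPPED ov;         // must come first so the OVERLAPPED* maps back to the struct
    WSABUF        buf;
    char          data[9000]; // room for one jumbo frame
};

static void postRecv(SOCKET s, PacketContext* ctx) {
    ZeroMemory(&ctx->ov, sizeof(ctx->ov));
    ctx->buf.buf = ctx->data;
    ctx->buf.len = sizeof(ctx->data);
    DWORD flags = 0;
    // Returns immediately; the completion is delivered through the IOCP.
    WSARecv(s, &ctx->buf, 1, nullptr, &flags, &ctx->ov, nullptr);
}

int main() {
    WSADATA wsa;
    WSAStartup(MAKEWORD(2, 2), &wsa);

    SOCKET s = WSASocket(AF_INET, SOCK_DGRAM, IPPROTO_UDP,
                         nullptr, 0, WSA_FLAG_OVERLAPPED);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(50000);      // placeholder: the camera's target port
    addr.sin_addr.s_addr = INADDR_ANY;
    bind(s, (sockaddr*)&addr, sizeof(addr));

    HANDLE iocp = CreateIoCompletionPort(INVALID_HANDLE_VALUE, nullptr, 0, 0);
    CreateIoCompletionPort((HANDLE)s, iocp, (ULONG_PTR)s, 0);

    // Keep many receives outstanding so bursts are absorbed in the kernel.
    for (int i = 0; i < 1024; ++i)
        postRecv(s, new PacketContext{});

    for (;;) {
        DWORD bytes = 0;
        ULONG_PTR key = 0;
        OVERLAPPED* ov = nullptr;
        if (!GetQueuedCompletionStatus(iocp, &bytes, &key, &ov, INFINITE))
            continue;
        PacketContext* ctx = (PacketContext*)ov;
        // ... hand ctx->data[0..bytes) to the processing queue here ...
        postRecv(s, ctx); // immediately re-post, as described above
    }
}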

Please note that your UDP packet size should stay below the MTU; going above it will cause drops, depending on the network hardware between the camera and the software.

  • Write some UDP testers that mimic the camera, to test the network and make sure that the hardware will support the load (see the sketch below).
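Such a tester could be as simple as this sketch (address, port and payload size are placeholders; a sequence number is embedded so the receiver can count drops):

#include <winsock2.h>
#include <ws2tcpip.h>
#include <cstdint>
#include <cstring>
#include <vector>
#pragma comment(lib, "ws2_32.lib")

int main() {
    WSADATA wsa;
    WSAStartup(MAKEWORD(2, 2), &wsa);

    SOCKET s = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
    sockaddr_in dst{};
    dst.sin_family = AF_INET;
    dst.sin_port = htons(50000);                       // receiver's port
    inet_pton(AF_INET, "192.168.1.1", &dst.sin_addr);  // receiver's address

    std::vector<char> payload(8192); // jumbo-frame-sized, like the camera
    for (uint32_t seq = 0;; ++seq) {
        std::memcpy(payload.data(), &seq, sizeof(seq)); // for drop detection
        sendto(s, payload.data(), (int)payload.size(), 0,
               (sockaddr*)&dst, sizeof(dst));
        // Add pacing here to hit a specific packet rate instead of flooding.
    }
}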

https://www.winsocketdotnetworkprogramming.com/winsock2programming/winsock2advancediomethod5e.html