1 vote

(This is for a low latency system)

Assuming I have some code which transfers received UDP packets to a region of shared memory, how can I then notify the application (in user mode) that it is now time to read the shared memory? I do not want the application continuously polling, eating up CPU cycles.

Is it possible to insert some code in the network stack which can call my application code immediately after it has written to the shared memory?

EDIT: I added a C tag, but the application would be in C++.

Are you receiving the packets in user mode, e.g. via a network driver, or in kernel mode? If the former, you should simply be able to use normal (blocking) sockets. A bit more architectural context would be helpful. – JohnJ
@JohnJ I presume I would have to write my own network driver if I opt for kernel mode? Whatever would be lowest latency? I presume writing my own driver would do this? The packets are fixed size, so writing a driver may not be as difficult as it sounds, especially as I have no TCP connection to set up? – user997112
What is the relation between the two processes? Are they parent/child, or something different? – Valeri Atamaniouk
@ValeriAtamaniouk No particular relation. I have UDP packets arriving and I want to get them to my application ASAP (via shared memory), so I was asking what the fastest way is of notifying my application that the shared memory has been populated with new packet data. – user997112
@user997112 If you have somewhat standard network hardware, it will usually be hard to beat it and the kernel IP stack for performance; they are heavily optimized already. Generally it's easier to do things in user space and use the common idioms (signals, semaphores, pthreads, ...). And definitely avoid polling, as you wrote. – JohnJ

2 Answers

0 votes

One way to signal an event from one Unix process to another is with POSIX semaphores. You would use sem_open to initialize and open a named semaphore that you can use cross-process.

See How can I get multiple calls to sem_open working in C?.
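For illustration, a minimal sketch of that approach (the POSIX calls are plain C but compile fine in the C++ application you mentioned); the semaphore name "/pkt_ready" and the producer/consumer split are my assumptions, not part of your setup:

    #include <fcntl.h>      // O_CREAT
    #include <semaphore.h>  // sem_open, sem_post, sem_wait, sem_close
    #include <cstdio>

    int main() {
        // Both processes open the same named semaphore ("/pkt_ready" is made up).
        sem_t *sem = sem_open("/pkt_ready", O_CREAT, 0600, 0);
        if (sem == SEM_FAILED) {
            perror("sem_open");
            return 1;
        }

        // Producer side, after copying a packet into shared memory:
        //     sem_post(sem);
        //
        // Consumer side, blocking until the producer signals:
        //     sem_wait(sem);
        //     ... read the shared-memory region ...

        sem_close(sem);
        return 0;
    }

The consumer sleeps in sem_wait, so it uses no CPU until the producer posts.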

The lowest-latency method to signal an event between processes on the same host is to spin-wait, watching for a (shared) memory location to change; this avoids a system call. You expressly said you do not want the application polling; however, in a multi-threaded application running on a multi-core system it may not be a bad tradeoff if you really care about latency.
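A minimal sketch of the spin-wait idea, assuming (my assumption) that the shared region begins with an atomic sequence counter the producer bumps after each packet:

    #include <atomic>
    #include <cstdint>

    struct SharedRegion {
        std::atomic<uint64_t> seq;  // producer increments this after each packet write
        char packet[1500];          // payload; the size here is illustrative
    };

    // Consumer: burns a core spinning on the counter instead of making a syscall.
    // (std::atomic is only safe across processes when it is lock-free, which a
    // 64-bit counter is on common 64-bit platforms.)
    void consume(SharedRegion *region) {
        uint64_t last_seen = region->seq.load(std::memory_order_acquire);
        for (;;) {
            uint64_t now = region->seq.load(std::memory_order_acquire);
            if (now == last_seen)
                continue;           // nothing new yet; keep spinning
            last_seen = now;
            // ... read region->packet here ...
        }
    }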

0 votes

Unless you are planning to use a real-time OS, there is no "immediate" protocol. The CPU is scheduled in time slices of a few milliseconds, and it usually takes some time before your user thread is woken and scheduled so it can continue.

Considering all of the above, any form of IPC would do: local sockets, signals, pipes, event descriptors, etc. The practical difference in performance would be negligible.
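As one example, the "event descriptors" option could look roughly like the Linux eventfd sketch below; the descriptor would have to be shared between the two processes (via fork() or SCM_RIGHTS over a Unix socket), which is omitted here, and both sides are shown in one program purely for brevity:

    #include <sys/eventfd.h>
    #include <unistd.h>
    #include <cstdint>
    #include <cstdio>

    int main() {
        int efd = eventfd(0, 0);          // counter starts at 0; reads block
        if (efd == -1) {
            perror("eventfd");
            return 1;
        }

        // Producer: signal that a new packet sits in shared memory.
        uint64_t one = 1;
        write(efd, &one, sizeof(one));

        // Consumer: blocks until the counter is non-zero, then resets it.
        uint64_t count = 0;
        read(efd, &count, sizeof(count));
        printf("packets pending: %llu\n", (unsigned long long)count);

        close(efd);
        return 0;
    }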

Furthermore, the use of shared memory can lead to unnecessary complications in maintenance and debugging, but that is the designer's choice.