I have a backend process that does work on my database. It runs on a separate computer so that the frontend works miracles (in terms of speed, at least). That backend process creates a UDP server and listens for packets on it.
On the frontend computer, I create child processes from a server. Each child may create data in the database that requires the backend to do some more work. To let the backend know, I send a PING using a UDP client socket.
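For reference, the PING is just a tiny datagram whose arrival matters, not its content. The sender side looks more or less like this sketch (the send_ping() name and port 4004 are made up for this example):

    /* hypothetical PING sender; real port, address, and function
     * name are made up for illustration */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int send_ping(const char *backend_ip)
    {
        int s = socket(AF_INET, SOCK_DGRAM, 0);
        if(s < 0)
            return -1;

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4004);        /* hypothetical port */
        inet_pton(AF_INET, backend_ip, &addr.sin_addr);

        /* only the arrival of the datagram matters, not its content */
        ssize_t r = sendto(s, "PING", 4, 0,
                           (struct sockaddr *)&addr, sizeof(addr));
        close(s);
        return r == 4 ? 0 : -1;
    }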
Front End / Backend Setup

+-------+            +---------+
|       |            |         |
| Front |    PING    | Backend |
| End   |----------->|         |
|       |            |         |
+-------+            +---------+
    ^                     ^
    |                     |
    |                     |
    v                     v
+---------------------------+
|                           |
|    Cassandra Database     |
|                           |
+---------------------------+

Processing

+----------+
| Internet |
| Client   |
+----------+
     | HTTP Request
     v
+----------+
| FrontEnd |---------+
+----------+  PING   |
     |               v
     | HTTP     +---------+
     | Response | Backend |
     v          +---------+
+----------+
| Internet |
| Client   |
+----------+
Without a PING, the backend ends its work and falls asleep until the next PING wakes it up. As a failsafe, though, I put a timeout of 5 minutes on the wait so the backend wakes up once in a while no matter what.
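The waiting loop is roughly equivalent to this sketch, assuming a poll() based wait with the 5 minute failsafe (do_work() stands in for the real database processing):

    /* sketch of the backend waiting loop: block on the UDP socket,
     * but wake up after 5 minutes no matter what */
    #include <poll.h>
    #include <sys/socket.h>

    void do_work(void);   /* placeholder for the database processing */

    void backend_loop(int udp_socket)
    {
        char buf[64];
        for(;;)
        {
            struct pollfd fds = { .fd = udp_socket, .events = POLLIN };
            int r = poll(&fds, 1, 5 * 60 * 1000);  /* 5 min failsafe */
            if(r > 0)
            {
                /* drain one PING; its content is irrelevant */
                recv(udp_socket, buf, sizeof(buf), 0);
            }
            /* r == 0 means the timeout fired: run the work anyway */
            do_work();
        }
    }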
My question here is about the UDP stack. I understand it is a FIFO, but I am wondering about two parameters:
How many PINGs can I receive before the FIFO gets full?
Can I receive a PING and then lose it if I don't read it soon enough?
The answer to these questions will help me adjust the current waiting loop of the backend server. So far I have assumed that the FIFO has a limit and that I may lose some packets, but I have not implemented a way to account for packets disappearing (i.e. someone sends a PING, but the backend takes too long before checking the UDP stack again, so the network decides that the packet has timed out and removes it from under my feet.)
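The only related knob I have found so far is the socket receive buffer; this sketch shows how one could at least query its size with getsockopt() (the print_rcvbuf() helper is just for illustration):

    /* query the kernel's receive buffer size; each queued datagram
     * consumes its payload plus per-packet overhead, so this bounds
     * how many PINGs can pile up before the FIFO is full */
    #include <stdio.h>
    #include <sys/socket.h>

    void print_rcvbuf(int udp_socket)
    {
        int size = 0;
        socklen_t len = sizeof(size);
        if(getsockopt(udp_socket, SOL_SOCKET, SO_RCVBUF, &size, &len) == 0)
        {
            /* note: on Linux the kernel reports double the requested size */
            printf("SO_RCVBUF = %d bytes\n", size);
        }
    }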
Update: I added the simple Processing diagram above to show what happens and when (it is time based, from top to bottom).