0 votes

I am writing a Client program that sends a stream of UDP packets to the Server. Whenever the Server receives a UDP packet, it should do some processing and send the packet back to the Client.

For this scenario, do I have to implement a queue in the Server to handle the stream of UDP packets, or do the underlying protocol layers handle this?

In other words, is it sufficient if the Server waits for a UDP packet using the recvfrom() API, processes it when it arrives, and then waits for the next UDP packet with recvfrom() again?


3 Answers

1 vote

UDP has no built-in flow control. This means that unless the server is guaranteed to process datagrams faster than the client sends them, the receive buffer will eventually fill up and the network stack will discard incoming packets. This is true irrespective of setting a larger buffer size: that only delays the problem, it doesn't fix it.

Therefore, if you know for certain that your server can comfortably cope with the rate at which packets are being sent, say 50 per second or so (in the worst case, not in the best case!), you are fine with a simple blocking recvfrom.

Otherwise, unless losing large amounts of packets is a non-issue, you need to implement some form of flow control similar to what TCP does. A very simple algorithm, for example: the client is allowed to send at most 20 packets, plus one more packet for every answer packet it receives.

Another (apparent) solution would be to offload the processing and the sending of replies to worker threads, but like increasing buffer sizes, this only pushes the problem back a bit; it doesn't solve it, and it makes the server design considerably more complicated. If the client can send at virtually infinite speed (say, 12-14 million packets per second on 10G) and the worker threads cannot cope with that volume, it will still eventually outrun your processing capacity.

0 votes

The Linux network stack does buffer incoming UDP packets; the buffer size is tunable with sysctl -w net.core.rmem_max=... etc.

The quickest way to reduce UDP packet loss on your server would be simply to increase that maximum buffer size to a very large value, if that is acceptable to you and the server admin.

recvfrom() with a blocking socket is perfectly OK.

0 votes

I don't think you need to queue UDP packets yourself on the server side; the underlying network stack buffers them for you (most stacks do, but you should not depend on that buffering alone).

As for waiting for UDP packets on the server side: you can use a non-blocking socket and poll it repeatedly (for example from a while(1) loop). Each time you call the poll function, it receives any datagram that has arrived; since UDP is connectionless, there is no connection to accept — you just remember the sender's address so you can reply to it.

Let me show how the server side could receive packets.

Suppose a function pollUDPPackets() for you (serversock is a bound UDP socket put into non-blocking mode with O_NONBLOCK):

int serversock; /* bound UDP socket, O_NONBLOCK set */

struct sockaddr_in clients[5]; /* I have taken a maximum of 5 clients */
int nclients;

void pollUDPPackets(void)
{
    struct sockaddr_in client;
    socklen_t          clientsize = sizeof client;
    char               buf[512];
    ssize_t            n;
    int                i;

    /* Non-blocking: recvfrom() returns -1 immediately when no
       datagram is queued, so this drains whatever has arrived. */
    while ((n = recvfrom(serversock, buf, sizeof buf, 0,
                         (struct sockaddr *)&client, &clientsize)) > 0)
    {
        /* Remember the sender if it is new (up to 5 clients). */
        for (i = 0; i < nclients; i++)
            if (memcmp(&clients[i], &client, sizeof client) == 0)
                break;
        if (i == nclients && nclients < 5)
            clients[nclients++] = client;

        /* Process buf[0..n-1], then reply to the sender. */
        sendto(serversock, buf, n, 0,
               (struct sockaddr *)&client, clientsize);

        clientsize = sizeof client;
    }
}

I hope this pseudo-code-like sketch clears up the confusion.