6
votes

I am studying for my exams and found this question:

A typical UDP server can be implemented using a single socket. Explain why, for a TCP-driven server, two sockets are created - one where all clients approach the server, and one specific socket for each client for further communication between the server and that client.

This is (in my understanding) driven by concurrency issues - the wish not to tie up the contact-point address with a single client. I know that UDP is connectionless, but I can't picture how this works. I can see that a UDP-driven server could perform a single action (pump content repeatedly through a socket/port), which could then be listened to by multiple clients. But suppose the server can react to two tasks - a get and a put. How can a client give an instruction without creating a connection? The client (in my mind) needs to send the get-request on a known port, and get the feedback on the same port. This would block the server's ability to communicate with multiple clients at the same time. Then would it be nicer to create a second socket to communicate on between the two parties so that potential communication between the server and other clients is not hindered? (as in the case with TCP)


2 Answers

11
votes

For TCP there's no choice: the socket API maps one TCP connection to one socket, which connects exactly two endpoints.

For UDP, the socket API allows one socket to receive from many endpoints and to send to many endpoints - so many servers use just one socket, since there's no need for more.

In some cases, the protocol is a simple request and reply. There's no need to create another socket for that - just take note of the source address, and send the reply there - so that's what some servers do.
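As an illustration, here is a self-contained sketch of that request/reply pattern on localhost - the port numbers, message contents, and variable names are all made up for the example, not taken from any particular server:

```python
import socket

# One UDP "server" socket serving two "clients", all in one process.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
server_addr = server.getsockname()

client_a = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client_b = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client_a.sendto(b"GET a", server_addr)
client_b.sendto(b"GET b", server_addr)

# The server takes note of each datagram's source address
# and sends the reply there - no extra socket needed.
for _ in range(2):
    data, addr = server.recvfrom(1024)
    server.sendto(b"reply to " + data, addr)

reply_a = client_a.recvfrom(1024)[0]
reply_b = client_b.recvfrom(1024)[0]
print(reply_a, reply_b)
```

Whichever order the two requests arrive in, each reply is addressed to the source of the matching request, so each client gets its own answer back.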

For others, the protocol might require a longer-lived data exchange where it's more convenient to create a new socket, so some servers do that.

This would block the server's ability to communicate with multiple clients at the same time.

Not necessarily. If the server CPU is busy executing instructions, it can't service anyone else, regardless of whether it handles multiple clients on the same socket. If the server makes blocking calls (e.g. a database query), or you want to exploit multiple cores, you can handle that in multiple threads, or use a thread-pool pattern, even with just one socket. The server just needs to keep track of the source IP address and port of each packet so it knows where to send the reply.
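A minimal sketch of that one-socket, thread-pool approach (again on localhost with made-up messages; the uppercasing stands in for whatever blocking work the server really does):

```python
import socket
from concurrent.futures import ThreadPoolExecutor

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("127.0.0.1", 0))
server_addr = sock.getsockname()

def handle(data, addr):
    # Pretend this is a blocking call (database query, etc.).
    # The worker replies to the remembered source address.
    sock.sendto(data.upper(), addr)

pool = ThreadPoolExecutor(max_workers=4)

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"hello", server_addr)

data, addr = sock.recvfrom(1024)   # main loop: receive a request...
pool.submit(handle, data, addr)    # ...and hand it off to a worker
pool.shutdown(wait=True)           # wait for the worker to finish

reply = client.recvfrom(1024)[0]
print(reply)
```

The main loop only receives and dispatches; the slow work happens in the pool, so one socket never becomes the bottleneck.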

But if it makes more sense for a particular protocol/application to use multiple sockets, e.g. one per client, there's nothing wrong with doing so. The usual approach in that case is:

  • client sends a packet to the server's well-known port
  • server notes the source address and port of the client's packet
  • server creates a new socket and sends the reply on that socket
  • client notes the source port of the reply
  • client uses that port for further communication with the server, instead of the well-known port.
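The steps above can be sketched as follows - a localhost toy with illustrative payloads, where the second socket's ephemeral port plays the role of the per-client port:

```python
import socket

# Server's well-known socket (step 1 target).
well_known = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
well_known.bind(("127.0.0.1", 0))
wk_addr = well_known.getsockname()

# Step 1: client sends to the well-known port.
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"hello", wk_addr)

# Step 2: server notes the source address/port of the client packet.
data, client_addr = well_known.recvfrom(1024)

# Step 3: server creates a new socket and replies on it.
per_client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
per_client.bind(("127.0.0.1", 0))
per_client.sendto(b"welcome", client_addr)

# Step 4: client notes the source port of the reply.
reply, server_addr = client.recvfrom(1024)

# Step 5: client uses that port from now on, not the well-known one.
client.sendto(b"more data", server_addr)
msg = per_client.recvfrom(1024)[0]
print(msg)
```

The well-known socket is now free to accept first packets from other clients while `per_client` carries this client's exchange.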
3
votes

The client (in my mind) needs to send the get-request on a known port, and get the feedback on the same port. This would block the server's ability to communicate with multiple clients at the same time.

No, it wouldn't. This limitation is imaginary.

Then would it be nicer to create a second socket to communicate on between the two parties so that potential communication between the server and other clients is not hindered? (as in the case with TCP)

'Potential communication between the server and other clients' isn't 'hindered' anyway.

Creating a second socket gives no advantage, and it isn't mandated by the API. Moreover, your correct desire for the client to send to and receive from the same remote port contradicts your desire to create a second socket at the server: a second socket would have a different port.