I have a bunch of servers where files are being generated constantly. These files need to be sent to a central location. The files are never larger than 50MB. I am planning to use ZeroMQ to send these files (encapsulated in messages), so that writes at the central location do not happen concurrently (for example, using scp for the transfers would spawn many concurrent disk-writing processes on the destination).
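Concretely, I am thinking of wrapping each file in a single multipart message, something like this pyzmq-style sketch (the two-frame framing is just an assumption on my part, not settled):

```python
import os

def file_to_frames(path):
    """Wrap one file in a two-frame ZeroMQ message: [file name, raw contents].

    Files are never larger than 50MB, so reading the whole file into memory
    and sending it as a single message seems acceptable here.
    """
    with open(path, "rb") as f:
        data = f.read()
    return [os.path.basename(path).encode(), data]

# The frames would then be sent with sock.send_multipart(file_to_frames(path)),
# whatever socket type `sock` ends up being.
```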
I can see a few ways to do this with ZeroMQ:
- Use REQ sockets on the producers and a single REP socket on the consumer. This could work, but I think it would starve slower producers, as there is no fair queueing. Also, I am not sure if the REQ sockets would drop messages if the REP socket is not available.
- Use PUSH sockets on the producers and a PULL socket on the consumer. This gives fair queueing on the consumer, and the docs say that PUSH sockets never discard messages. However, is it fully reliable, for example when the consumer goes away for a while? A rough sketch of this variant is below.
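For what it is worth, here is roughly how I picture the PUSH/PULL variant (a minimal pyzmq sketch; the endpoint addresses and the `generate_files()` helper are placeholders, not real code I have):

```python
import zmq

# Producer side: connect a PUSH socket to the central collector and send each
# file as a two-frame message [file name, contents].
def producer(collector_addr="tcp://collector.example.com:5557"):
    ctx = zmq.Context.instance()
    push = ctx.socket(zmq.PUSH)
    push.connect(collector_addr)
    for name, data in generate_files():  # hypothetical source of (name, bytes)
        push.send_multipart([name.encode(), data])

# Consumer side: a single PULL socket fair-queues messages from all producers,
# so files are written one at a time, never concurrently.
def consumer(bind_addr="tcp://*:5557"):
    ctx = zmq.Context.instance()
    pull = ctx.socket(zmq.PULL)
    pull.bind(bind_addr)
    while True:
        name, data = pull.recv_multipart()
        with open(name.decode(), "wb") as f:
            f.write(data)
```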
My reliability requirements are:
- Messages (files, in my case) must not be lost, so I would like the consumer to acknowledge each message back to the producer (see the sketch after this list).
- Messages from a particular producer should be received in the same order as they were produced.
- Producers can come and go, and they should tolerate the consumer being unavailable for periods of time.
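The closest I have come to the acknowledgement and availability requirements on paper is a retrying REQ producer along the lines of the zguide's "Lazy Pirate" pattern; the endpoint and timeout below are made up:

```python
import zmq

REQUEST_TIMEOUT_MS = 5000  # how long to wait for an ack before retrying
SERVER_ENDPOINT = "tcp://collector.example.com:5558"  # placeholder address

def send_with_ack(frames):
    """Send one file message and block until the consumer acknowledges it.

    If no ack arrives within the timeout, the socket is discarded, recreated,
    and the message is resent, so a temporarily unavailable consumer does not
    cause the message to be lost.
    """
    ctx = zmq.Context.instance()
    while True:
        req = ctx.socket(zmq.REQ)
        req.setsockopt(zmq.LINGER, 0)
        req.connect(SERVER_ENDPOINT)
        req.send_multipart(frames)

        poller = zmq.Poller()
        poller.register(req, zmq.POLLIN)
        if poller.poll(REQUEST_TIMEOUT_MS):
            ack = req.recv()  # e.g. b"OK" from the consumer's REP socket
            req.close()
            return ack
        req.close()  # no ack in time: drop this socket and retry
```

The idea would be that the consumer's REP side replies only after the file has been written to disk; the obvious downside is that a lost ack causes a resend, which the consumer would presumably have to tolerate as a duplicate.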
What sort of sockets are appropriate for this kind of application? Any pointers to what kind of zmq pattern I should be looking at would be great.