I have an application (the "server") which updates a block of data in memory - around 100k bytes - every second.
There are one to four instances of a "client" application running on other workstations on the same network, and each of these needs to read that same 100k image every second.
This has been implemented up till now by writing the image to a file on the server and having the clients read that file across the network. This has worked with no problems for many years, but lately (coinciding with a move to Windows 8-based hardware) it has developed a problem where the file becomes inaccessible to every node except one. Exiting the client application running on that node frees up the file, and it then becomes accessible to everyone again.
I'm still perplexed as to the cause of this lockout, but I'm wondering if it may be the mechanism discussed here, where a file isn't closed because of a network glitch. I'm thinking that having the clients request the data over TCP/IP would avoid this.
There doesn't need to be any handshaking beyond the clients detecting a failure to connect or to read data - the server just needs to go about its business and respond to requests by grabbing the data and sending it. However, I'm pretty hazy about the best architecture to achieve this. Are TIdTCPClient and TIdTCPServer going to cut it? I'm assuming the clients would request the data in a thread, but does this mean the server needs to run a thread continuously to respond to requests?
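To make the question concrete, here's the rough shape I have in mind. All the names (FImage, FImageLock, TImagePollerThread, the "GET" request line, port 6000) are placeholders I made up, and I may well be misusing the Indy calls - this is just a sketch of the idea, not working code.

    // Server side: a TIdTCPServer with DefaultPort := 6000 and Active := True,
    // with this assigned as its OnExecute handler. As I understand it, OnExecute
    // runs in a worker thread per client connection and is called repeatedly
    // for as long as the connection stays open.
    // uses IdTCPServer, IdContext, IdGlobal, System.SyncObjs
    procedure TServerForm.IdTCPServer1Execute(AContext: TIdContext);
    var
      Snapshot: TIdBytes;
    begin
      AContext.Connection.IOHandler.ReadLn;   // block until the client sends a request line
      FImageLock.Acquire;                     // FImageLock: TCriticalSection
      try
        Snapshot := Copy(FImage);             // FImage: TIdBytes holding the 100k block,
      finally                                 // written under the same lock once a second
        FImageLock.Release;
      end;
      AContext.Connection.IOHandler.Write(Int32(Length(Snapshot)));  // length prefix
      AContext.Connection.IOHandler.Write(Snapshot);                 // then the data itself
    end;

And on each client, a background thread that polls once a second (exception handling and reconnect logic left out):

    // uses IdTCPClient, IdGlobal, System.Classes, System.SysUtils
    type
      TImagePollerThread = class(TThread)
      protected
        procedure Execute; override;
      end;

    procedure TImagePollerThread.Execute;
    var
      Client: TIdTCPClient;
      Len: Int32;
      Buffer: TIdBytes;
    begin
      Client := TIdTCPClient.Create(nil);
      try
        Client.Host := 'server-host';                      // placeholder host name
        Client.Port := 6000;
        Client.Connect;
        while not Terminated do
        begin
          Client.IOHandler.WriteLn('GET');                 // matches the ReadLn on the server
          Len := Client.IOHandler.ReadInt32;               // read the length prefix...
          Client.IOHandler.ReadBytes(Buffer, Len, False);  // ...then exactly that many bytes
          // hand Buffer over to the rest of the client app here
          Sleep(1000);                                     // poll once a second
        end;
      finally
        Client.Free;
      end;
    end;

Is this roughly the right division of labour, or am I missing something about how the server side is supposed to be threaded?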