My scenario is that I have a hundred small text files that I want to load, parse, and hold in memory inside a DLL. Clients of the DLL are transient command-line programs, and I would prefer not to reload the data on every invocation.
So I thought I would write a Windows server process to hold the data and have the clients query it over TCP. But the TCP performance was surprisingly slow. I wrote the following code, using Stopwatch to measure the socket setup and send time.
// time the TCP interaction to see where the time goes
var stopwatch = new Stopwatch();
stopwatch.Start();
// create and connect socket to remote host
client = new TcpClient(hostname, hostport); // this constructor connects automatically
Console.WriteLine("Connected to {0}", hostname);
// get a stream handle from the connected client
netstream = client.GetStream();
// send the command to the far end
netstream.Write(sendbuf, 0, sendbuf.Length);
Console.WriteLine("Sent command to far end: '{0}'", cmd);
stopwatch.Stop();
sendTime = stopwatch.ElapsedMilliseconds;
Much to my surprise, that little bit of code took 1,037 milliseconds (about 1 second) to execute. I expected the time to be far smaller. Is that a normal socket setup time between a client and a server both running on a modern Windows 10 localhost?
For comparison, I wrote a loop that loaded 10 files of 100 lines each, and that experiment took only 1 ms. So reading from disk (an SSD) was roughly 1000x faster than using sockets to reach the server.
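To see where the 1,037 ms actually goes, the single Stopwatch reading can be split into resolve, connect, and send phases. A self-contained sketch (the port number is made up, and an in-process listener stands in for the real server) that times each phase separately:

```csharp
using System;
using System.Diagnostics;
using System.Net;
using System.Net.Sockets;
using System.Text;

class TimingProbe
{
    static void Main()
    {
        int port = 15000; // hypothetical test port
        var listener = new TcpListener(IPAddress.Loopback, port);
        listener.Start();

        var sw = Stopwatch.StartNew();
        // Phase 1: name resolution ("localhost" may return ::1 before 127.0.0.1)
        IPAddress[] addrs = Dns.GetHostAddresses("localhost");
        long resolveMs = sw.ElapsedMilliseconds;

        // Phase 2: TCP connect, pinned to the IPv4 loopback address
        var client = new TcpClient();
        client.Connect(IPAddress.Loopback, port);
        long connectMs = sw.ElapsedMilliseconds - resolveMs;

        // Phase 3: send a small command
        byte[] sendbuf = Encoding.ASCII.GetBytes("status");
        client.GetStream().Write(sendbuf, 0, sendbuf.Length);
        long sendMs = sw.ElapsedMilliseconds - resolveMs - connectMs;
        sw.Stop();

        Console.WriteLine($"localhost resolves to {addrs.Length} address(es)");
        Console.WriteLine($"resolve={resolveMs}ms connect={connectMs}ms send={sendMs}ms");
        client.Close();
        listener.Stop();
    }
}
```

If most of the 1,037 ms lands in the connect phase when "localhost" is used, that points at name resolution or address-family fallback rather than the socket send itself.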
I know what to do in my scenario (use file reads on each invocation), but I would like to know whether anyone can confirm these kinds of socket setup times, or whether there are faster interprocess communication mechanisms for a local machine that compare favorably with file reads and parses. I really don't want to believe that File.ReadAllLines(filepath) is the fastest approach when spread over hundreds of command-line client invocations.
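One local-IPC alternative worth measuring against file reads is named pipes, which skip the TCP stack entirely. A minimal sketch, with the pipe name and messages invented for illustration (a real server would loop accepting connections rather than answering once):

```csharp
using System;
using System.IO;
using System.IO.Pipes;
using System.Threading.Tasks;

class PipeDemo
{
    static void Main()
    {
        // Hypothetical pipe name; any string unique on the machine works.
        const string pipeName = "MyDataServerPipe";

        // Server side: accept one connection and answer one query.
        var serverTask = Task.Run(() =>
        {
            using var server = new NamedPipeServerStream(pipeName);
            server.WaitForConnection();
            using var reader = new StreamReader(server);
            using var writer = new StreamWriter(server) { AutoFlush = true };
            string cmd = reader.ReadLine();
            writer.WriteLine($"result for '{cmd}'");
        });

        // Client side: connect, send a command, read the reply.
        using var client = new NamedPipeClientStream(".", pipeName, PipeDirection.InOut);
        client.Connect(2000); // fail if the server is not up within 2s
        using var w = new StreamWriter(client) { AutoFlush = true };
        using var r = new StreamReader(client);
        w.WriteLine("query files");
        Console.WriteLine(r.ReadLine());
        serverTask.Wait();
    }
}
```

Whether this beats File.ReadAllLines for hundreds of short-lived clients is exactly the kind of thing the Stopwatch approach above can settle.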
EDIT - Avoid the DNS lookup by using an explicit IPEndPoint address
Following the comments below, I replaced "localhost" with an IPEndPoint to set up the connection. The change reduced the 1037 ms to about 20 ms, but (1) the TcpClient no longer connected automatically, and (2) the sent text never reached the server. So there must be something different between the original and IPEndPoint methods.
// new IPEndPoint method
// fast at 20ms, but the server never sees the sent text
string serverIP = "127.0.0.1";
IPAddress address = IPAddress.Parse(serverIP);
IPEndPoint remoteEP = new IPEndPoint(address, hostport);
client = new TcpClient(remoteEP);
client.Connect(remoteEP); // new; required with the IPEndPoint method
// send text command to the far end
netstream = client.GetStream();
netstream.Write(sendbuf, 0, sendbuf.Length);
Console.WriteLine("Sent command to far end: '{0}'", cmd);
stopwatch.Stop();
sendTime = stopwatch.ElapsedMilliseconds;
Console.WriteLine($"Milliseconds for sending by TCP: '{sendTime}'");
// unfortunately, the server never sees the sent text now
I don't know why using an IPEndPoint as an argument to the TcpClient constructor requires an explicit Connect when TcpClient connected automatically before, and I don't know why the netstream.Write fails now too. Examples on the net always use socket.Connect and socket.Send with IPEndPoints.
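One likely explanation, hedged since I can only infer from the snippet: the TcpClient(IPEndPoint) constructor takes the local endpoint to bind the client socket to, not the remote endpoint to connect to. The code above therefore binds the client to the server's own address and port, which is why Connect behaves oddly and the server never sees the data. A self-contained sketch of the pattern that avoids both the DNS lookup and the local-bind mistake (port number and message are hypothetical; an in-process listener plays the server so the round trip can be verified):

```csharp
using System;
using System.Net;
using System.Net.Sockets;
using System.Text;

class ConnectFix
{
    static void Main()
    {
        int hostport = 15001; // hypothetical port
        var listener = new TcpListener(IPAddress.Loopback, hostport);
        listener.Start();

        // Wrong idea: new TcpClient(remoteEP) would BIND the local side
        // of the socket to 127.0.0.1:hostport -- it does not choose the
        // remote host to talk to.

        // Right idea: parameterless constructor, then Connect to the
        // remote endpoint. No DNS lookup, no local bind conflict.
        var remoteEP = new IPEndPoint(IPAddress.Parse("127.0.0.1"), hostport);
        var client = new TcpClient();
        client.Connect(remoteEP);

        byte[] sendbuf = Encoding.ASCII.GetBytes("hello");
        client.GetStream().Write(sendbuf, 0, sendbuf.Length);

        // Server side: confirm the bytes actually arrive.
        using var serverSide = listener.AcceptTcpClient();
        byte[] buf = new byte[16];
        int n = serverSide.GetStream().Read(buf, 0, buf.Length);
        Console.WriteLine($"server received: {Encoding.ASCII.GetString(buf, 0, n)}");

        client.Close();
        listener.Stop();
    }
}
```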
EDIT #2 - Use IPEndPoint with sockets, not streams
// use sockets, not streams
// This code takes 3 seconds to send text to the server
// But at least this code works. The original code was faster at 1 second.
string serverIP = "127.0.0.1";
IPAddress address = IPAddress.Parse(serverIP);
IPEndPoint remoteEP = new IPEndPoint(address, hostport);
socket = new Socket(AddressFamily.InterNetwork, SocketType.Stream,
                    ProtocolType.Tcp);
socket.Connect(remoteEP);
socket.Send(sendbuf);
EDIT #3 - Experiments based on Evk's comments:
Using the information provided by Evk, I ran several experiments with three clients and two servers.
Client 1: IPv4 only, using new TcpClient()
Client 2: IPv6 only, using new TcpClient(AddressFamily.InterNetworkV6)
Client 3: DNS returns both IPv4 and IPv6, using new TcpClient("localhost", port)
Server 1: IPv4, new TcpListener(IPAddress.Loopback, port)
Server 2: IPv6, new TcpListener(IPAddress.IPv6Loopback, port)
From worst to best, the six possible pairs returned the following results:
c4xs6 - Client 1 (IPv4) with Server 2 (IPv6): connection actively refused.
c6xs4 - Client 2 (IPv6) with Server 1 (IPv4): connection actively refused.
c46xs4 - Client 3 (both) with Server 1 (IPv4): always delayed 1000 ms, because the client tried IPv6 first, timed out, and then tried IPv4, which worked consistently. This was the original code in this post.
c46xs6 - Client 3 (both) with Server 2 (IPv6): after a fresh restart of both, fast on the first try (21 ms) and on closely-spaced subsequent tries. But after waiting a minute or three, the next try took 3000 ms, followed by fast 20 ms times on closely-spaced subsequent tries.
c4xs4 - Same behavior as above. The first try after a fresh restart was fast, as were closely-spaced subsequent tries. But after waiting a minute or two, the next try took 3000 ms, followed by fast (20 ms) closely-spaced subsequent tries.
c6xs6 - Same behavior as above. Fast after a fresh server restart, but after a minute or two, one delayed try (3000 ms) followed by fast (20 ms) responses to closely-spaced tries.
My experiments showed no consistently fast responses over time. There must be some kind of delay, timeout, or sleep behavior once the connections go idle. I use netstream.Close(); client.Close(); to close each connection on each try. (Is that right?) I don't know what could be causing the delayed responses after a minute or two of idle, no-active-connection time.
Any idea what might be causing the delay after a minute or two of idle listening time? The client is supposedly out of system memory, having exited the console program. The server is supposedly doing nothing new, just listening for another connection.
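For what it's worth, pinning both ends to the same address family at least removes the dual-stack fallback as a variable. A minimal sketch with a hypothetical port (this does not explain the 3000 ms idle delay, but it rules out IPv6-then-IPv4 retries on every connect):

```csharp
using System;
using System.Diagnostics;
using System.Net;
using System.Net.Sockets;

class Ipv4Pair
{
    static void Main()
    {
        int port = 15002; // hypothetical port
        // Server pinned to the IPv4 loopback address only.
        var listener = new TcpListener(IPAddress.Loopback, port);
        listener.Start();

        // Client pinned to IPv4 as well: no DNS lookup and no
        // dual-stack fallback, so no 1000ms IPv6 timeout.
        var sw = Stopwatch.StartNew();
        var client = new TcpClient(AddressFamily.InterNetwork);
        client.Connect(IPAddress.Loopback, port);
        sw.Stop();
        Console.WriteLine($"connect took {sw.ElapsedMilliseconds}ms");

        client.Close();
        listener.Stop();
    }
}
```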
client = new TcpClient(new IPEndPoint(new IPAddress(...), hostport)) – Eser
I used the new IPEndPoint() mechanism to set up the connection as described above, and the time eventually dropped from 1037 ms to 20 ms. But when I used the IPEndPoint method, the client did not connect automatically, and I received an error trying to GetStream on a "non-connected" socket. I had to call client.Connect after creating the new TcpClient object. Maybe that did something too, since the server never received the outgoing Send text. I am searching for examples to follow. – Kevin