2 votes

We're developing a .NET CF 3.5 application on a Windows Embedded CE 6 platform. We're trying to implement a small (HTTP 1.0) web server in .NET that should deliver a web app and respond to simple REST requests.

Our implementation follows the pattern shown in this MSDN article: http://msdn.microsoft.com/en-us/library/aa446537.aspx. We use a TCP listening socket and async callbacks in combination with BeginAccept, EndAccept, BeginReceive and EndReceive.

An incoming connection on the listening port is handled by an asynchronous accept callback (see http://msdn.microsoft.com/en-us/library/5bb431f9.aspx). By calling the EndAccept method within this callback, we tell the listening socket to hand the connection over to a new socket and free the listening port, so that new incoming connection requests can be accepted. The accepted request is then processed on its own thread (because it is handled inside the async callback).

We've already tried to minimize the time between BeginAccept and EndAccept, because during this period incoming connection requests are placed in the backlog queue of the listening socket. The length of this queue can be configured via the so-called backlog parameter, which has a platform-dependent maximum. If the backlog queue is exhausted, new TCP connection requests are rejected during the three-way handshake (the client/browser gets an RST in response to its SYN).

Now we've bumped into the problem that most modern browsers, such as Firefox, Chrome and Safari, use up to 15 (or more) concurrent connections to load data from a server (the maximum number of concurrent connections per host can be configured in Firefox via about:config -> network.http.max-connections-per-server). When a page is loaded, the browser establishes up to 15 connections as needed, depending on the number of resources that have to be loaded (e.g. images, JavaScript or CSS files).
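For reference, this burst behaviour can be reproduced without a browser with a small test client along the following lines (just a rough sketch, not our actual test code; Host, Port and ConnectionCount are placeholder values):

using System;
using System.Net.Sockets;
using System.Threading;

// Hypothetical test client: opens many TCP connections to the server at
// (almost) the same time, similar to a browser fetching a page with many resources.
class ConnectionBurstTest
{
    const string Host = "192.168.0.10"; // placeholder: address of the CE device
    const int Port = 80;                // placeholder: listening port of the web server
    const int ConnectionCount = 16;     // roughly what Firefox/Chrome open per host

    static void Main()
    {
        for (int i = 0; i < ConnectionCount; i++)
        {
            int id = i;
            ThreadPool.QueueUserWorkItem(delegate
            {
                try
                {
                    // Connect() throws a SocketException if the listener's backlog
                    // queue is full and the SYN is answered with an RST.
                    using (TcpClient client = new TcpClient())
                    {
                        client.Connect(Host, Port);
                        Console.WriteLine("connection " + id + " accepted");
                    }
                }
                catch (SocketException ex)
                {
                    Console.WriteLine("connection " + id + " rejected: " + ex.ErrorCode);
                }
            });
        }
        Console.ReadLine();
    }
}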

The .NET CF Socket.Listen method (see http://msdn.microsoft.com/en-us/library/system.net.sockets.socket.listen.aspx) allows the definition of a backlog number.
From our understanding we should use a backlog greater than 15, e.g. 20 or so, because all connection requests are triggered at (nearly) the same time by the browser, so our small web server gets hit by 15 simultaneous connection requests. A too-small backlog queue results in aborted TCP connections, because not all incoming connections can be queued until the listening socket gets around to accepting them. In Firebug or Chrome these requests are shown as "aborted". So we increased the backlog to 20 via Socket.Listen(20) and hoped that everything would be fine and ready to withstand even the greediest browsers.
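The change itself was nothing more than the following (sketched with the value we intended; the attached code further down still shows the original Listen(10)):

// intended backlog of 20 pending connections; as described below, the stack
// silently caps this value at SOMAXCONN anyway
listener.Listen(20);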

The problem is that the backlog parameter in the Socket.Listen() call is silently capped at SOMAXCONN (a maximum of 5 pending connections in our case). Setting a higher number has no effect. When a browser establishes e.g. 16 concurrent socket connections, some of them get lost, simply because they don't fit into the backlog queue of 5; those TCP connections get a TCP RST from the web server, and some resources are missing on the web page.

Is there any way to change SOMAXCONN in Windows Embedded CE 6.0? (We're able to change the platform image - we use Platform Builder.) Or is there an error in our understanding of the matter?

We've attached the source code that we're currently using:

using System;
using System.Net;
using System.Net.Sockets;
using System.Text;
using System.Threading;

public class StateObject
{
    // Client socket.
    public Socket workSocket = null;
    // Size of receive buffer.
    public const int BufferSize = 1024;
    // Receive buffer.
    public byte[] buffer = new byte[BufferSize];
    // Received data string.
    public StringBuilder sb = new StringBuilder();
}

public void StartListening()
{
    logger.Debug("Started Listening at : " + this.listeninghostIp + ":" + this.listeningport);
    IPEndPoint localEP = new IPEndPoint(IPAddress.Parse(this.listeninghostIp), Convert.ToInt32(this.listeningport));
    listener = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);

    listener.Bind(localEP);
    listener.Listen(10);
    ThreadPool.QueueUserWorkItem(new WaitCallback(CheckForConnections));
}

public void CheckForConnections(object state)
{
    try
    {
        logger.Debug("listening successfully started! Waiting for incoming connections...");
        listener.BeginAccept(new AsyncCallback(acceptCallback), listener);
    }
    catch (Exception ex)
    {
        logger.Error("Exception occurred while starting listening: " + ex.Message);
    }
}

private void acceptCallback(IAsyncResult ar)
{
    try
    {
        Socket listener = (Socket)ar.AsyncState;
        // Re-arm the listener immediately so the next pending connection can be
        // taken out of the backlog queue, then take over the accepted connection
        // on a new socket.
        listener.BeginAccept(new AsyncCallback(acceptCallback), listener);
        Socket handler = listener.EndAccept(ar);

        StateObject state = new StateObject();
        state.workSocket = handler;
        handler.BeginReceive(state.buffer, 0, StateObject.BufferSize, SocketFlags.None,
            new AsyncCallback(ReadCallback), state);

        logger.Debug("listening socket accepted connection...");
    }
    catch (Exception ex)
    {
        logger.Error("Error on acceptCallback. Error: " + ex.Message);
    }
}

public void ReadCallback(IAsyncResult ar)
{
    StateObject state = (StateObject)ar.AsyncState;
    Socket handler = state.workSocket;
    int bytesRead = handler.EndReceive(ar);

    if (bytesRead > 0)
    {
        state.sb.Append(Encoding.ASCII.GetString(state.buffer, 0, bytesRead));
    }
    ClientConnectionFactory.createConnection(ar.AsyncState);
}

1 Answer

-1 votes

I think you're headed down the wrong path - you can definitely handle this scenario without changing the backlog number.

The client requests come in on port 80, but the responses don't go back to the client on port 80. What that means is that you can use asynchronous socket handling to receive the request, then pass it off for parsing and replying, so that subsequent requests aren't waiting on complete handling of previous requests. We use this technique in our Padarn web server and have no problems handling multiple requests from a single client browser, or even multiple requests from multiple simultaneous clients.
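Roughly, that hand-off can look like this (just a sketch, not Padarn code; RequestQueue and ProcessRequest are placeholder names): the accept callback only enqueues the accepted socket and immediately goes back to accepting, while a worker thread drains the queue and does the actual parsing and response writing.

using System.Collections.Generic;
using System.Net.Sockets;
using System.Threading;

// Hypothetical hand-off: keep the accept path as short as possible and do all
// request parsing/response writing on a separate worker thread.
class RequestQueue
{
    private readonly Queue<Socket> pending = new Queue<Socket>();
    private readonly object sync = new object();

    public RequestQueue()
    {
        Thread worker = new Thread(Worker);
        worker.IsBackground = true;
        worker.Start();
    }

    // Called from the accept callback: just enqueue and return.
    public void Enqueue(Socket client)
    {
        lock (sync)
        {
            pending.Enqueue(client);
            Monitor.Pulse(sync);
        }
    }

    private void Worker()
    {
        while (true)
        {
            Socket client;
            lock (sync)
            {
                while (pending.Count == 0)
                    Monitor.Wait(sync);
                client = pending.Dequeue();
            }
            ProcessRequest(client); // placeholder: parse the request, send the response
        }
    }

    private void ProcessRequest(Socket client)
    {
        // placeholder for the actual request handling
        client.Close();
    }
}

In your acceptCallback you would then call something like requestQueue.Enqueue(handler) right after EndAccept, so the listening socket and its small backlog are drained as quickly as possible.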