5 votes

I have been working on a private project where I wanted to learn how to program on a Windows Phone. At some point I started to fiddle with sockets and the camera, and a great idea came to mind: a live video feed (dumb of me to even attempt it).

But now I'm here. I have something that, well, works like a charm, except that the Lumia 800 cannot chug through the for-loop fast enough. It sends roughly one frame every 7-8 seconds, which I think is strange since the phone should be powerful enough. It feels and looks like watching porn on a 56k modem, without the porn.

I also realized that a frame is about 317,000 pixels, which sums up to roughly 1 MB per frame. I'm also sending x/y coordinates, so mine takes up about 2.3 MB per frame (still working on a different way to keep that down). So I'm guessing I would need to do some magic to get both the positions and the pixel values down to an acceptable size, because even if I got it up to an acceptable speed right now, something like 30 fps would require at least 60 MB/s. But that's a problem for another day.
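
For reference, my rough arithmetic (back-of-the-envelope only; the loop below sends 7 bytes per pixel):

    //Rough bandwidth arithmetic (estimate, not measured)
    const int PixelsPerFrame = 317000;  //roughly what the preview buffer holds
    const int ColourBytes = 3;          //R, G, B
    const int CoordBytes = 4;           //16-bit x + 16-bit y
    long rgbOnly    = (long)PixelsPerFrame * ColourBytes;                //~0.9 MB per frame
    long withCoords = (long)PixelsPerFrame * (ColourBytes + CoordBytes); //~2.2 MB per frame
    long thirtyFps  = withCoords * 30;                                   //~66 MB/s

Here is the send loop as it stands: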

    //How many pixels to send per burst (1000 seems to be the best)
    const int PixelPerSend = 1000;
    int bSize = 7 * PixelPerSend;
    //Communication thread, UDP feed
    private void EthernetComUDP() //Runs in own thread
    {
        //Connect to Server
        clientUDP = new SocketClientUDP();
        int[] ImageContent = new int[(int)cam.PreviewResolution.Height * (int)cam.PreviewResolution.Width];
        byte[] PacketContent = new byte[bSize];
        string Pixel, l;
        while (SendingData)
        {
            cam.GetPreviewBufferArgb32(ImageContent);
            int x = 1, y = 1, SenderCount = 0;
            //In dire need of a speedup
            for (int a = 0; a < ImageContent.Length; a++) //this loop
            {
                Pixel = Convert.ToString(ImageContent[a], 2).PadLeft(32, '0');

                //A - removed to conserve bandwidth
                //PacketContent[SenderCount] = Convert.ToByte(Pixel.Substring(0, 8), 2);//0
                //R
                PacketContent[SenderCount] = Convert.ToByte(Pixel.Substring(8, 8), 2);//8
                //G
                PacketContent[SenderCount + 1] = Convert.ToByte(Pixel.Substring(16, 8), 2);//16
                //B
                PacketContent[SenderCount + 2] = Convert.ToByte(Pixel.Substring(24, 8), 2);//24

                //Coordinates
                //X
                l = Convert.ToString(x, 2).PadLeft(16, '0');
                //X bit(1-8)
                PacketContent[SenderCount + 3] = Convert.ToByte(l.Substring(0, 8), 2);
                //X bit(9-16)
                PacketContent[SenderCount + 4] = Convert.ToByte(l.Substring(8, 8), 2);

                //Y
                l = Convert.ToString(y, 2).PadLeft(16, '0');
                //Y bit(1-8)
                PacketContent[SenderCount + 5] = Convert.ToByte(l.Substring(0, 8), 2);
                //Y bit(9-16)
                PacketContent[SenderCount + 6] = Convert.ToByte(l.Substring(8, 8), 2);

                x++;
                if (x > cam.PreviewResolution.Width) //passed the last column, move to the next row
                {
                    y++;
                    x = 1;
                }

                SenderCount += 7;
                if (SenderCount == bSize)
                {
                    clientUDP.Send(ConnectToIP, PORT + 1, PacketContent);
                    SenderCount = 0;
                }
            }
        }
        //Close on finish
        clientUDP.Close();
    }

For simplicity I have also tried to just send the pixels individually using

    BitConverter.GetBytes(ImageContent[a]);

instead of the string-parsing mess I have created (to be fixed; I just wanted a proof of concept), but using the simple BitConverter did not speed it up much.
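
Roughly, the inner loop then becomes something like this (my shorthand; it still builds the packet one pixel at a time):

    //BitConverter variant of the colour bytes (still per-pixel)
    //GetPreviewBufferArgb32 packs each pixel as 0xAARRGGBB, so on the
    //phone's little-endian CPU GetBytes returns [B, G, R, A]
    byte[] argb = BitConverter.GetBytes(ImageContent[a]);
    PacketContent[SenderCount] = argb[2];     //R
    PacketContent[SenderCount + 1] = argb[1]; //G
    PacketContent[SenderCount + 2] = argb[0]; //B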

So now I'm on my last idea: the UDP sender socket, which is roughly identical to the one in MSDN's library.

    public string Send(string serverName, int portNumber, byte[] payload)
    {
        string response = "Operation Timeout";
        // We are re-using the _socket object that was initialized in the Connect method
        if (_socket != null)
        {
            // Create SocketAsyncEventArgs context object
            SocketAsyncEventArgs socketEventArg = new SocketAsyncEventArgs();
            // Set properties on context object
            socketEventArg.RemoteEndPoint = new DnsEndPoint(serverName, portNumber);
            // Inline event handler for the Completed event.
            // Note: This event handler was implemented inline in order to make this method self-contained.
            socketEventArg.Completed += new EventHandler<SocketAsyncEventArgs>(delegate(object s, SocketAsyncEventArgs e)
            {
                response = e.SocketError.ToString();
                // Unblock the thread waiting in Send
                _clientDone.Set();
            });
            socketEventArg.SetBuffer(payload, 0, payload.Length);
            // Sets the state of the event to nonsignaled, causing threads to block
            _clientDone.Reset();
            // Make an asynchronous Send request over the socket
            _socket.SendToAsync(socketEventArg);
            // Block the calling thread for a maximum of TIMEOUT_MILLISECONDS milliseconds.
            // If no response comes back within this time then proceed
            _clientDone.WaitOne(TIMEOUT_MILLISECONDS);
        }
        else
        {
            response = "Socket is not initialized";
        }
        return response;
    }

All in all I have ended up with 3 possible solutions:

  1. Accept defeat (but that won't happen, so let's look at 2).

  2. Work down the amount of data sent (destroys quality; 640x480 is already small enough, I think).

  3. Find the obvious problem (Google and friends ran out of good ideas, that's why I'm here).

Sure you are not just maxing out the bandwidth? What is your physical and logical network layer? – TomTom
Both via the USB cable and over WiFi I'm reading out a max of 500 Kb/s; should it not be more? – Thomas Andreè Wang
Break the problem down - What frame rate do you get if you ignore the camera and send the same single frame repeatedly? What frame rate do you get if you stream the data to a file? What if you send only the first 5k or so of the frame? Do you need to be using Argb since you don't have an alpha channel? – James Snell
I tried that: I sent the first pixel repeatedly and there was no change. Actually it's slower to send the same pixel over and over than to burst 1000 at a time as in the code example above. And yes, I need to use Argb; it's the only way to fetch the preview buffer aside from the YCbCr and Y-only preview buffers. – Thomas Andreè Wang

2 Answers

2 votes

The problem is almost certainly the messing about with the data. Converting a megabyte of binary data into several megabytes of text and then extracting and sending individual characters will add a massive overhead per byte of source data. Looping through individual pixels to build a send buffer will take (relatively speaking) geological timescales.

The fastest way to do this is likely to be to grab a buffer of binary data from the camera, and send it with one UDP write. Only process or break up the data on the phone if you have to, and be careful to access the original binary data directly - don't convert it all to strings and back to binary. Every extra method call you add into this process will just add overhead. If you have to use a loop, try to pre-calculate as much as you can outside the loop to minimise the work that is done on each iteration.
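
Something along these lines, as a rough sketch (it reuses the field names from your question; a single datagram cannot hold a whole frame, so it is still split into chunks, but nothing is done per pixel any more):

    int[] imageContent = new int[(int)cam.PreviewResolution.Height * (int)cam.PreviewResolution.Width];
    byte[] frameBytes = new byte[imageContent.Length * 4];

    cam.GetPreviewBufferArgb32(imageContent);
    //One bulk copy for the whole frame instead of per-pixel string parsing
    Buffer.BlockCopy(imageContent, 0, frameBytes, 0, frameBytes.Length);

    //A UDP datagram tops out below 64 KB, so send the frame in large slices
    const int ChunkSize = 60000;
    for (int offset = 0; offset < frameBytes.Length; offset += ChunkSize)
    {
        int count = Math.Min(ChunkSize, frameBytes.Length - offset);
        byte[] chunk = new byte[count];
        Buffer.BlockCopy(frameBytes, offset, chunk, 0, count);
        clientUDP.Send(ConnectToIP, PORT + 1, chunk);
    }

On the receiving side a small sequence number per datagram is enough to put the chunks back in order, which also removes the need to send per-pixel coordinates at all.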

0 votes

A couple of things come to mind:

  1. Break up the raw image array into pieces to be sent over the wire. Not sure if LINQ is available on Windows Phone, but even a plain loop over chunks would do.

  2. Converting from int to string to byte will be very inefficient because of the processing time and memory usage. A better approach would be to bulk copy chunks of the int array to a byte array directly, as in the sketch below.
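
For #2, something roughly like this (a sketch only; imageContent is the preview buffer from the question and the chunk size is arbitrary):

    const int PixelsPerChunk = 1000;
    for (int pixel = 0; pixel < imageContent.Length; pixel += PixelsPerChunk)
    {
        //Buffer.BlockCopy works in bytes, hence the sizeof(int) factors
        int byteCount = Math.Min(PixelsPerChunk, imageContent.Length - pixel) * sizeof(int);
        byte[] chunk = new byte[byteCount];
        Buffer.BlockCopy(imageContent, pixel * sizeof(int), chunk, 0, byteCount);
        clientUDP.Send(ConnectToIP, PORT + 1, chunk);
    }

This keeps the data binary end to end and does a few hundred bulk copies per frame instead of hundreds of thousands of string conversions.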