I'm having trouble doing something via V4L2; I'm also quite a beginner at this, so a link to a document that explains some of my questions here would be of great value.
So my situation is this: I'm making a computer vision module, and what I need from it is to have the camera continuously capture video without actually saving it to disk (or storing the whole thing in memory), while still letting me process individual frames. I assume I may not have the processing capacity to handle every single frame, so I need to skip some. When I finish running my logic on one frame, I want to grab whatever the camera has seen most recently and process that. As I said, I don't need the video to be stored, though it's fine if there is some kind of circular buffer of limited capacity.
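Here is roughly what I've pieced together so far for the capture side, mostly from the linuxtv capture example. It's a minimal sketch, not working code: the device path, resolution, and YUYV pixel format are all assumptions about my camera, and I'm especially unsure whether draining the queue like this is the right way to "always grab the latest frame":

```c
/* Rough sketch (untested): memory-mapped streaming with a handful of
 * driver buffers acting as the "circular buffer", plus a loop that
 * drains the queue and keeps only the newest frame, so stale frames
 * get skipped while I'm busy processing. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <sys/select.h>
#include <linux/videodev2.h>

#define NUM_BUFFERS 4  /* small ring of buffers owned by the driver */

struct buffer { void *start; size_t length; };

int main(void)
{
    int fd = open("/dev/video0", O_RDWR | O_NONBLOCK);  /* device path is an assumption */
    if (fd < 0) { perror("open"); return 1; }

    struct v4l2_format fmt;
    memset(&fmt, 0, sizeof(fmt));
    fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    fmt.fmt.pix.width = 640;
    fmt.fmt.pix.height = 480;
    fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_YUYV;  /* assuming the camera supports YUYV */
    fmt.fmt.pix.field = V4L2_FIELD_NONE;
    if (ioctl(fd, VIDIOC_S_FMT, &fmt) < 0) { perror("VIDIOC_S_FMT"); return 1; }

    struct v4l2_requestbuffers req;
    memset(&req, 0, sizeof(req));
    req.count = NUM_BUFFERS;
    req.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    req.memory = V4L2_MEMORY_MMAP;
    if (ioctl(fd, VIDIOC_REQBUFS, &req) < 0) { perror("VIDIOC_REQBUFS"); return 1; }

    struct buffer buffers[NUM_BUFFERS];
    for (unsigned i = 0; i < req.count; i++) {
        struct v4l2_buffer buf;
        memset(&buf, 0, sizeof(buf));
        buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        buf.memory = V4L2_MEMORY_MMAP;
        buf.index = i;
        if (ioctl(fd, VIDIOC_QUERYBUF, &buf) < 0) { perror("VIDIOC_QUERYBUF"); return 1; }
        buffers[i].length = buf.length;
        buffers[i].start = mmap(NULL, buf.length, PROT_READ | PROT_WRITE,
                                MAP_SHARED, fd, buf.m.offset);
        if (buffers[i].start == MAP_FAILED) { perror("mmap"); return 1; }
        if (ioctl(fd, VIDIOC_QBUF, &buf) < 0) { perror("VIDIOC_QBUF"); return 1; }
    }

    enum v4l2_buf_type type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    if (ioctl(fd, VIDIOC_STREAMON, &type) < 0) { perror("VIDIOC_STREAMON"); return 1; }

    for (;;) {
        /* Block until at least one frame is available. */
        fd_set fds;
        FD_ZERO(&fds);
        FD_SET(fd, &fds);
        if (select(fd + 1, &fds, NULL, NULL, NULL) < 0) break;

        /* Drain the queue: dequeue everything that is ready, re-queue
         * the older frames right away, keep only the newest one. */
        struct v4l2_buffer latest;
        memset(&latest, 0, sizeof(latest));
        int have_frame = 0;
        for (;;) {
            struct v4l2_buffer buf;
            memset(&buf, 0, sizeof(buf));
            buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
            buf.memory = V4L2_MEMORY_MMAP;
            if (ioctl(fd, VIDIOC_DQBUF, &buf) < 0) {
                if (errno == EAGAIN) break;  /* nothing left in the queue */
                perror("VIDIOC_DQBUF"); goto out;
            }
            if (have_frame)
                ioctl(fd, VIDIOC_QBUF, &latest);  /* discard the stale frame */
            latest = buf;
            have_frame = 1;
        }

        if (have_frame) {
            /* process_frame() is just a placeholder for my own (slow) logic: */
            /* process_frame(buffers[latest.index].start, latest.bytesused); */
            ioctl(fd, VIDIOC_QBUF, &latest);  /* hand the buffer back to the driver */
        }
    }
out:
    ioctl(fd, VIDIOC_STREAMOFF, &type);
    close(fd);
    return 0;
}
```

My thinking is that the driver's own small ring of mmap'ed buffers plays the role of the limited circular buffer I mentioned, so nothing ever has to be written to disk. Is that a sensible way to look at it?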
I've tried these tutorials:
http://jayrambhia.wordpress.com/2013/07/03/capture-images-using-v4l2-on-linux/
http://linuxtv.org/downloads/v4l-dvb-apis/capture-example.html
but the first one doesn't work (it hangs at the "retrieving frame" step with the camera LED on; any ideas why, by the way?), and the second one doesn't give me an understanding of how to accomplish my task. In general, my camera works fine on my Linux device; I've checked it with a GUI tool.
So my question can be roughly broken down into the following:

1. How do you access individual frames as described above? What is the general approach there? Are there any concrete examples online, or an article, that you know of?
2. Is it possible to get my frame as a 2D array in an easy-to-process format such as RGB? I basically want to access each pixel by its (x, y) coordinates and read its R, G, and B channels.
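For question 2 specifically: as far as I can tell, the camera hands me packed YUYV (YUV 4:2:2) rather than RGB, so I think I'd have to convert it myself. This is the kind of helper I have in mind; it's only a sketch using the usual BT.601 coefficients, and it assumes an even frame width and no per-line padding (please correct me if this is the wrong way to go about it):

```c
/* Sketch: convert one packed YUYV (YUV 4:2:2) frame into a plain RGB24
 * buffer so that each pixel can be read by (x, y).  Assumes width is
 * even and bytesperline == width * 2 (no padding at the end of rows). */
#include <stdint.h>
#include <stdlib.h>

static uint8_t clamp_u8(int v)
{
    return v < 0 ? 0 : (v > 255 ? 255 : (uint8_t)v);
}

/* Returns a freshly allocated width*height*3 RGB buffer (caller frees it). */
uint8_t *yuyv_to_rgb(const uint8_t *yuyv, int width, int height)
{
    uint8_t *rgb = malloc((size_t)width * height * 3);
    if (!rgb) return NULL;

    for (int i = 0; i < width * height; i += 2) {
        /* Two neighbouring pixels share one U and one V sample: Y0 U Y1 V */
        int y0 = yuyv[i * 2 + 0];
        int u  = yuyv[i * 2 + 1] - 128;
        int y1 = yuyv[i * 2 + 2];
        int v  = yuyv[i * 2 + 3] - 128;

        for (int k = 0; k < 2; k++) {
            int y = (k == 0) ? y0 : y1;
            uint8_t *p = rgb + (i + k) * 3;
            /* Fixed-point BT.601: 1.402, 0.344, 0.714, 1.772 scaled by 65536 */
            p[0] = clamp_u8(y + (91881 * v) / 65536);              /* R */
            p[1] = clamp_u8(y - (22554 * u + 46802 * v) / 65536);  /* G */
            p[2] = clamp_u8(y + (116130 * u) / 65536);             /* B */
        }
    }
    return rgb;
}

/* Pixel access by coordinates is then plain index arithmetic: */
static inline const uint8_t *pixel_at(const uint8_t *rgb, int width, int x, int y)
{
    return rgb + ((size_t)y * width + x) * 3;  /* p[0]=R, p[1]=G, p[2]=B */
}
```

With something like that, reading the red channel at (x, y) would just be `pixel_at(rgb, width, x, y)[0]`, which is the kind of access I'm after.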
I'd appreciate any help, including links; I'd rather stay away from raw API listings for now because I need to understand the general idea first, but if there is no other way, I'll read them, of course :)