Setting a frame rate doesn't always work like you expect. It depends on two things:
- What your camera is capable of outputting.
- Whether the current capture backend you're using supports changing frame rates.
So, point (1). Your camera has a list of formats it is capable of delivering to a capture device (e.g. your computer). This might be 1920x1080 @ 30 fps or 1920x1080 @ 60 fps, and each format also specifies a pixel format (e.g. MJPEG or YUYV). The vast majority of consumer cameras do not let you change their frame rates with any more granularity than that, and most capture libraries will refuse to switch to a capture format that the camera isn't advertising.
Even machine vision cameras, which give you much more control, typically only offer a selection of frame rates (e.g. 1, 2, 5, 10, 15, 25, 30, etc.). If you want an unsupported frame rate at the hardware level, usually the only way to get it is hardware triggering.
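To make the point concrete, here's a minimal sketch of what "pick from the camera's menu" looks like in software. The frame-rate list below is a made-up example, not something read from real hardware, and `nearest_supported_fps` is a hypothetical helper:

```python
# Hypothetical example: cameras advertise a fixed menu of frame rates,
# so the best you can do in software is pick the closest supported one.
SUPPORTED_FPS = [1, 2, 5, 10, 15, 25, 30]

def nearest_supported_fps(requested, supported=SUPPORTED_FPS):
    """Return the advertised frame rate closest to the requested one."""
    return min(supported, key=lambda fps: abs(fps - requested))

print(nearest_supported_fps(12))   # -> 10
print(nearest_supported_fps(27))   # -> 25
```

If you genuinely need, say, 12 fps, you'd capture at the nearest supported rate and drop frames in software, which is exactly the pattern shown later in this answer.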
And point (2). When you use `cv2.VideoCapture`, you're really calling a platform-specific library like DirectShow or V4L2; we call this a backend. You can specify exactly which backend is in use with something like:

```python
cv2.VideoCapture(0 + cv2.CAP_DSHOW)
```
There are lots of `CAP_X` constants defined, but only some apply to your platform (e.g. `CAP_V4L2` is Linux-only). On Windows, forcing DirectShow is a pretty good bet. However, as above, if your camera only reports that it can output 30 fps and 60 fps, requesting 10 fps will be meaningless. Worse, a lot of settings simply report `True` in OpenCV when they're not actually implemented. Reading parameters will give you sensible results most of the time, but if a parameter isn't implemented (exposure is a common one that isn't), you might get nonsense.
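One defence against this is to read the property back after setting it and compare. This is a sketch of that idea: `set_and_verify` and `FakeCap` are hypothetical names, and `FakeCap` stands in for a real `cv2.VideoCapture` (with which you'd pass a property id such as `cv2.CAP_PROP_FPS`, whose integer value is 5):

```python
# Sketch of the "read it back" check: set() returning True is not proof the
# camera accepted the value, so compare cap.get() against what you asked for.
def set_and_verify(cap, prop_id, value, tol=1e-3):
    """Set a capture property and report whether it actually took effect."""
    cap.set(prop_id, value)            # the return value alone is unreliable
    actual = cap.get(prop_id)
    return abs(actual - value) <= tol, actual

class FakeCap:
    """Stand-in for cv2.VideoCapture: claims to accept any value,
    but only ever delivers 30 fps."""
    def set(self, prop_id, value):
        return True                    # lies, like some backends do
    def get(self, prop_id):
        return 30.0

ok, actual = set_and_verify(FakeCap(), prop_id=5, value=10)
print(ok, actual)  # -> False 30.0
```

With a real camera, a `False` result here tells you to fall back to software throttling, as below.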
You're better off waiting for a period of time and then reading the last image.
Be careful with this strategy. Don't do this:
```python
while capturing:
    res, image = cap.read()
    time.sleep(1)
```
You need to make sure you're continually purging the camera's frame buffer, otherwise you will start to see lag in your videos. Something like the following should work:
```python
import time

frame_rate = 10
prev = 0

while capturing:
    time_elapsed = time.time() - prev
    res, image = cap.read()          # always read, so the buffer stays fresh

    if time_elapsed > 1. / frame_rate:
        prev = time.time()

        # Do something with your image here.
        process_image()
```
For an application like a hand detector, what works well is to have one thread capturing images and the detector running in another thread (which also controls the GUI). Your detector pulls the last image captured, runs, and displays the results (you may need to lock access to the image buffer while you're reading/writing it). That way your bottleneck is the detector, not the performance of the camera.
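The capture-thread pattern can be sketched with the standard library alone. This is a minimal, hedged illustration: `grab_frame` is a hypothetical stand-in for `cap.read()` (it just returns an increasing frame id), and the lock protects the shared "latest frame" slot that the detector reads:

```python
import itertools
import threading
import time

latest_frame = None
lock = threading.Lock()
stop = threading.Event()

def grab_frame(counter=itertools.count()):
    """Stand-in for cap.read(); returns an increasing frame id."""
    return next(counter)

def capture_loop():
    """Producer: keeps only the newest frame, so the consumer never lags."""
    global latest_frame
    while not stop.is_set():
        frame = grab_frame()          # with OpenCV: res, frame = cap.read()
        with lock:
            latest_frame = frame      # overwrite older, unread frames
        time.sleep(0.001)

t = threading.Thread(target=capture_loop, daemon=True)
t.start()
time.sleep(0.05)                      # let the producer run for a moment

with lock:                            # the detector thread reads the latest frame
    frame = latest_frame

stop.set()
t.join()
print(frame is not None)              # -> True
```

Because the producer overwrites rather than queues, the detector always sees the freshest frame and the camera's buffer never builds up a backlog.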