I read in this thread - Get most accurate image using OpenCV - that I can use the variance of an image to measure which of the input images is the sharpest. I can't seem to find a tutorial for this. I am very new to OpenCV. Right now, my code scans images from a folder and stores them in a vector of Mat:
for (int ct = 0; ct < images.size(); ct++) {
    // should I put the cvAvgSdv function here?
    waitKey(0);
}
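For reference, the loading part is roughly like this (simplified; the folder pattern and the use of cv::glob are just how I happen to do it, and cv::glob may not exist in older OpenCV versions):

#include <opencv2/opencv.hpp>
#include <vector>
using namespace cv;
using namespace std;

vector<Mat> images;
vector<String> filenames;
glob("images/*.jpg", filenames);       // pattern is just an example

for (size_t i = 0; i < filenames.size(); i++) {
    Mat img = imread(filenames[i]);
    if (!img.empty())                  // skip files that fail to load
        images.push_back(img);
}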
Thank you for any help!
Update: I called this function: cvAvgSdv(images[ct], &scalar_mean, &std_dev); and it gave me an error: No suitable conversion function from "cv::Mat" to "const CvArr *" exists.
Can I use the function without converting the Mat to an IplImage? If not, what's the easiest way to convert the Mat?
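In case it matters, this is what I'm trying now with the C++ API instead. I found cv::meanStdDev in the docs and I'm guessing it is the Mat-friendly counterpart of cvAvgSdv, but I'm not sure this is the right way to rank sharpness (variable names like sharpest_index are just mine):

// Guess: meanStdDev accepts cv::Mat directly, so no IplImage conversion needed?
double best_variance = -1.0;
int sharpest_index = -1;

for (size_t ct = 0; ct < images.size(); ct++) {
    Mat gray = images[ct];
    if (gray.channels() > 1)
        cvtColor(gray, gray, COLOR_BGR2GRAY);  // CV_BGR2GRAY in older versions

    Scalar mean, std_dev;
    meanStdDev(gray, mean, std_dev);

    double variance = std_dev[0] * std_dev[0]; // variance = (std deviation)^2
    if (variance > best_variance) {
        best_variance = variance;
        sharpest_index = (int)ct;
    }
}

Does that look like the right direction, or am I misunderstanding the linked answer?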