79 votes

I have searched the internet and Stack Overflow thoroughly, but I haven't found an answer to my question:

How can I get and set the RGB value of a certain pixel (given by its x,y coordinates) in OpenCV? Importantly, I'm writing in C++ and the image is stored in a cv::Mat variable. I know there is an IplImage() operator, but IplImage is not very comfortable to use; as far as I know, it comes from the C API.

Yes, I'm aware of the existing "Pixel access in OpenCV 2.2" thread, but it was only about black and white bitmaps.

EDIT:

Thank you very much for all your answers. I see there are many ways to get/set the RGB value of a pixel. I got one more idea from my close friend; thanks, Benny! It's very simple and effective. I think it's a matter of taste which one you choose.

Mat image;

(...)

Point3_<uchar>* p = image.ptr<Point3_<uchar> >(y,x);

And then you can read/write RGB values with:

p->x //B
p->y //G
p->z //R
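
For reference, here is a minimal, self-contained sketch of this pointer approach (the file name and pixel coordinates are made up for illustration):

#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    cv::Mat image = cv::imread("input.png");  // hypothetical file, loaded as an 8-bit BGR cv::Mat
    if (image.empty()) return 1;

    int x = 10, y = 20;  // arbitrary example coordinates (column x, row y)
    cv::Point3_<uchar>* p = image.ptr<cv::Point3_<uchar> >(y, x);

    std::cout << "B=" << (int)p->x << " G=" << (int)p->y << " R=" << (int)p->z << std::endl;
    p->z = 255;  // write: set the red channel

    cv::imwrite("output.png", image);
    return 0;
}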

6 Answers

100 votes

Try the following:

cv::Mat image = ...do some stuff...;

image.at<cv::Vec3b>(y,x) gives you the pixel as a vector of type cv::Vec3b (note that OpenCV typically stores the channels in BGR order rather than RGB). You can set new values in the same way:

image.at<cv::Vec3b>(y,x)[0] = newval[0];
image.at<cv::Vec3b>(y,x)[1] = newval[1];
image.at<cv::Vec3b>(y,x)[2] = newval[2];
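
A fuller, hedged sketch of how this looks end to end (the file name, coordinates, and new colour are placeholders):

#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat image = cv::imread("lena.png");      // hypothetical 8-bit, 3-channel BGR image
    if (image.empty()) return 1;

    int x = 5, y = 7;                            // example coordinates: column x, row y
    cv::Vec3b pixel = image.at<cv::Vec3b>(y, x); // read: pixel[0]=B, pixel[1]=G, pixel[2]=R

    pixel[2] = 255;                              // change the red channel
    image.at<cv::Vec3b>(y, x) = pixel;           // write the pixel back

    return 0;
}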
18 votes

The low-level way would be to access the matrix data directly. For a colour image (which OpenCV typically stores in BGR order), and assuming your cv::Mat variable is called frame, you could get the blue value at location (x, y) (measured from the top left) this way:

frame.data[frame.channels()*(frame.cols*y + x)];

Likewise, to get B, G, and R:

uchar b = frame.data[frame.channels()*(frame.cols*y + x) + 0];    
uchar g = frame.data[frame.channels()*(frame.cols*y + x) + 1];
uchar r = frame.data[frame.channels()*(frame.cols*y + x) + 2];

Note that this code assumes the row stride is equal to the width of the image, i.e. that the pixel data is continuous with no padding at the end of each row.
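
If the Mat is a view into a larger image (e.g. a ROI), the stride can be larger than cols * channels, so a safer variant of the same idea uses frame.step (a sketch, assuming frame is an 8-bit, 3-channel BGR image):

uchar b = frame.data[frame.step * y + frame.channels() * x + 0];
uchar g = frame.data[frame.step * y + frame.channels() * x + 1];
uchar r = frame.data[frame.step * y + frame.channels() * x + 2];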

2 votes

A piece of code is easier for people who have this problem, so I'm sharing mine; you can use it directly. Please note that OpenCV stores pixels in BGR order.

// vHeight_, vWidth_, src_ and windowName_ are defined elsewhere; src_ iterates over the
// source pixels, and (*src_)[0..2] are the channel values of one pixel.
cv::Mat vImage_(vHeight_, vWidth_, CV_32FC3); // allocate a 3-channel float image

if(src_)
{
    cv::Vec3f vec_;

    for(int i = 0; i < vHeight_; i++)
        for(int j = 0; j < vWidth_; j++)
        {
            // Normalise each channel to [0, 1]. Please note that OpenCV stores pixels as BGR.
            vec_ = cv::Vec3f((*src_)[0]/255.0f, (*src_)[1]/255.0f, (*src_)[2]/255.0f);

            // Row i of the source is written to row vHeight_-1-i, i.e. the image is flipped vertically.
            vImage_.at<cv::Vec3f>(vHeight_-1-i, j) = vec_;

            ++src_;
        }
}

if(! vImage_.data ) // Check for invalid input
    printf("Failed to read image by OpenCV.\n");
else
{
    cv::namedWindow( windowName_, cv::WINDOW_AUTOSIZE );
    cv::imshow( windowName_, vImage_ ); // Show the image.
}
0 votes

The current version of cv::Mat::at has an overload that takes three indices, so for a three-dimensional Mat object m, m.at<uchar>(0,0,0) should work.
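
A minimal sketch of that overload on a genuinely three-dimensional Mat (the sizes are chosen arbitrarily for illustration):

#include <opencv2/opencv.hpp>

int main()
{
    int sizes[] = { 480, 640, 3 };               // three dimensions: an arbitrary example
    cv::Mat m(3, sizes, CV_8U, cv::Scalar(0));   // 3-dimensional, 8-bit, single-channel matrix

    m.at<uchar>(0, 0, 0) = 255;                  // write via the three-index overload
    uchar v = m.at<uchar>(0, 0, 0);              // read it back

    return v == 255 ? 0 : 1;
}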

0 votes
uchar* value = img2.data; // pointer to the first byte of pixel data, laid out as B, G, R, B, G, R, ...
int r = 0;                // channel index within the current pixel: 0 = B, 1 = G, 2 = R
for (size_t i = 0; i < img2.cols * (img2.rows * img2.channels()); i++) // assumes continuous data
{
    if (r > 2) r = 0;

    if (r == 0) value[i] = 0;   // blue channel
    if (r == 1) value[i] = 0;   // green channel
    if (r == 2) value[i] = 255; // red channel -> the whole image becomes red

    r++;
}
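
For comparison, filling every pixel with one colour is usually done with cv::Mat::setTo, which avoids the manual byte arithmetic (assuming img2 is an 8-bit BGR image):

img2.setTo(cv::Scalar(0, 0, 255)); // B=0, G=0, R=255: a solid red image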
-6 votes
const double pi = boost::math::constants::pi<double>();

// Marks every pixel lying inside the given rotated ellipse by setting its green channel to 255.
cv::Mat distance2ellipse(cv::Mat image, cv::RotatedRect ellipse)
{
    float angle = ellipse.angle;
    cv::Point ellipse_center = ellipse.center;
    float major_axis = ellipse.size.width / 2;
    float minor_axis = ellipse.size.height / 2;

    for(int x = 0; x < image.cols; x++)
    {
        for(int y = 0; y < image.rows; y++)
        {
            // Rotate the pixel into the ellipse's coordinate frame.
            auto u =  cos(angle*pi/180)*(x-ellipse_center.x) + sin(angle*pi/180)*(y-ellipse_center.y);
            auto v = -sin(angle*pi/180)*(x-ellipse_center.x) + cos(angle*pi/180)*(y-ellipse_center.y);

            // Normalised ellipse equation: a value <= 1 means the pixel lies inside the ellipse.
            float distance = (u/major_axis)*(u/major_axis) + (v/minor_axis)*(v/minor_axis);

            if(distance <= 1)
            {
                image.at<cv::Vec3b>(y,x)[1] = 255; // green channel (BGR order)
            }
        }
    }
    return image;
}
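
A hedged usage sketch of the function above (the image and the ellipse parameters are invented for illustration):

#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat image = cv::Mat::zeros(200, 200, CV_8UC3);  // blank example image

    // Example ellipse: centred at (100, 100), 120x60 in size, rotated by 30 degrees.
    cv::RotatedRect e(cv::Point2f(100.f, 100.f), cv::Size2f(120.f, 60.f), 30.f);

    cv::Mat marked = distance2ellipse(image, e);         // sets the green channel of pixels inside the ellipse
    cv::imwrite("ellipse.png", marked);                   // hypothetical output file name
    return 0;
}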