I'm trying to use matplotlib
to read in an RGB image and convert it to grayscale.
In MATLAB I use this:
img = rgb2gray(imread('image.png'));
In the matplotlib tutorial they don't cover it. They just read in the image
import matplotlib.image as mpimg
img = mpimg.imread('image.png')
and then they slice the array, but that's not the same thing as converting RGB to grayscale from what I understand.
lum_img = img[:,:,0]
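To illustrate what I mean, here is a quick sketch of my own (not from the tutorial; 'image.png' is just a placeholder): slicing keeps only the red channel, while a grayscale conversion mixes all three channels with fixed weights.

import matplotlib.image as mpimg
import numpy as np

img = mpimg.imread('image.png')            # shape (H, W, 3) or (H, W, 4)
red_only = img[:, :, 0]                    # just the red channel, not grayscale
weights = np.array([0.2989, 0.5870, 0.1140])
gray = img[:, :, :3] @ weights             # weighted mix of R, G and B
print(np.allclose(red_only, gray))         # generally False for a color image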
I find it hard to believe that neither numpy nor matplotlib has a built-in function to convert from RGB to grayscale. Isn't this a common operation in image processing?
In five minutes I wrote a very simple function that works with an image imported using imread. It's horribly inefficient, but that's why I was hoping for a professional implementation to be built in.
Sebastian has improved my function, but I'm still hoping to find the built-in one.
MATLAB's (NTSC/PAL) implementation:
import numpy as np

def rgb2gray(rgb):
    # NTSC/Rec. 601 luma weights, the same coefficients MATLAB's rgb2gray uses
    r, g, b = rgb[:, :, 0], rgb[:, :, 1], rgb[:, :, 2]
    gray = 0.2989 * r + 0.5870 * g + 0.1140 * b
    return gray
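For reference, here's how I call it (a minimal sketch; 'image.png' is just a placeholder file name):

import matplotlib.pyplot as plt
import matplotlib.image as mpimg

img = mpimg.imread('image.png')   # PNGs come back as floats in [0, 1]
gray = rgb2gray(img)              # works for RGB or RGBA, since only channels 0-2 are read
plt.imshow(gray, cmap='gray')     # a 2-D array needs an explicit gray colormap
plt.show()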
gray = np.mean(rgb, -1). Maybe rgb[..., :3] there if it is actually rgba. – seberg
gray = np.mean(rgb, -1) works fine, thanks. Is there any reason not to use this? Why would I use the solutions in the answers below instead? – waspinator
np.mean(rgb, -1) […]. – unutbu
0.2989 * R + 0.5870 * G + 0.1140 * B: I'm assuming that it's the standard way of doing it. – waspinator
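Summarizing the comment thread, here is a sketch of the two variants side by side (my own summary; the weights match the function above, and rgb[..., :3] drops an alpha channel if present):

import numpy as np

def gray_mean(rgb):
    # unweighted average of R, G, B; alpha (if any) is dropped
    return rgb[..., :3].mean(axis=-1)

def gray_weighted(rgb):
    # luminosity-weighted average (the NTSC/Rec. 601 coefficients used above)
    return rgb[..., :3] @ np.array([0.2989, 0.5870, 0.1140])

The two give different results because the weighted form accounts for the eye being more sensitive to green than to blue, while the plain mean treats all three channels equally.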