
I am trying to extract the LAB a-channel of a 32-bit RGB image. However, I fail to read the image correctly and get unexpected results.

import cv2
org = cv2.imread('42.png', -1)
print(org.dtype)
# prints uint8
lab_image = cv2.cvtColor(org, cv2.COLOR_RGB2LAB)
l, a, b = cv2.split(lab_image)
cv2.imshow('', a)
cv2.waitKey(0)

Original image: http://labtools.ipk-gatersleben.de/images/42.png

Expected output (ImageJ): http://labtools.ipk-gatersleben.de/images/imagej_out.png

OpenCV output: http://labtools.ipk-gatersleben.de/images/python_out.png

I also tried to read/convert the image with skimage but the result is the same...

Try using cv2.COLOR_BGR2LAB, since OpenCV reads BGR, not RGB. You probably also need to make sure to load in BGR using cv2.imread('42.png', cv2.IMREAD_COLOR). In any case, use cv2.IMREAD_UNCHANGED instead of -1, which is quite cryptic. – Miki
Thanks for your suggestions. I tried them but unfortunately the result is the same... – honeymoon
Where did you derive the expected output from? It doesn't seem to be correct. – ZdaR
From ImageJ (convert to LAB space). – honeymoon
@snowflake I also agree with ZdaR. You must be mistaking something for something else. Please check and come back. – Jeru Luke

1 Answer


Your code has several issues. First, as Miki correctly pointed out, you have to swap the red and blue channels. According to OpenCV documentation (emphasis mine):

Note that the default color format in OpenCV is often referred to as RGB but it is actually BGR (the bytes are reversed)
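If you only need the red/blue swap (rather than a full color-space conversion), reversing the channel axis with NumPy gives the same result as cv2.cvtColor(img, cv2.COLOR_BGR2RGB). A minimal sketch on a single synthetic pixel:

```python
import numpy as np

# A single "pure blue" pixel stored in OpenCV's BGR channel order
bgr = np.zeros((1, 1, 3), dtype=np.uint8)
bgr[0, 0] = (255, 0, 0)

# Reversing the last axis swaps BGR <-> RGB without calling cvtColor
rgb = bgr[..., ::-1]
# rgb[0, 0] is now (0, 0, 255): blue in RGB order
```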

Then you need to cast the image to float32 (because float64 is not supported by cv2.cvtColor) and scale it down to fit the 0..1 range:

In case of linear transformations, the range does not matter. But in case of a non-linear transformation, an input RGB image should be normalized to the proper value range to get the correct results, for example, for RGB → Lu*v* transformation. For example, if you have a 32-bit floating-point image directly converted from an 8-bit image without any scaling, then it will have the 0..255 value range instead of 0..1 assumed by the function. So, before calling cvtColor, you need first to scale the image down
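In other words, a uint8 image cast to float must be divided by 255 before calling cvtColor. A toy illustration of the scaling (NumPy only, arbitrary sample values):

```python
import numpy as np

img8 = np.array([[0, 128, 255]], dtype=np.uint8)

# Without scaling, the float image keeps the 0..255 range
img_f = np.float32(img8)       # max is 255.0

# After dividing by 255, values fit the 0..1 range cvtColor expects
img_scaled = img_f / 255.      # max is 1.0
```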

The values of a returned by cv2.cvtColor are constrained to the range -127 <= a <= 127. For better visualization it is useful to stretch the contrast by subtracting a.min() from a and rescaling the result by a factor of 255./(a.max() - a.min()) to fit the range 0..255. If you do so, you should obtain the expected result. Here's the full code:

import cv2
import numpy as np

# Load unchanged and scale to the 0..1 range cvtColor expects for float input
org = np.float32(cv2.imread('42.png', cv2.IMREAD_UNCHANGED)) / 255.
lab_image = cv2.cvtColor(org, cv2.COLOR_BGR2LAB)
l, a, b = cv2.split(lab_image)
# Stretch the contrast of the a-channel to 0..255 for display
a_scaled = np.uint8(255.*(a - a.min())/(a.max() - a.min()))
cv2.imshow('', a_scaled)
cv2.waitKey(0)
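The min-max stretch used for a_scaled works on any array, not just an image. Here it is in isolation on a toy float32 array (values chosen arbitrarily for illustration):

```python
import numpy as np

a = np.float32([[-10., 0., 30.]])
# Shift so the minimum becomes 0, then rescale the span to 0..255;
# np.uint8 truncates the fractional part
a_scaled = np.uint8(255. * (a - a.min()) / (a.max() - a.min()))
# a_scaled is [[0, 63, 255]]
```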

Bonus

You can obtain the same result with scikit-image:

from skimage import io, color
import matplotlib.pyplot as plt

org = io.imread('http://labtools.ipk-gatersleben.de/images/42.png')
lab_image = color.rgb2lab(org)
a = lab_image[:, :, 1]

fig, ax = plt.subplots(1, 1)
plt.set_cmap('gray')
ax.imshow(a)
plt.show()

a-channel

† Actually, the results yielded by OpenCV and scikit-image are not exactly equal. There are slight differences due to numerical errors associated with floating-point arithmetic. Such discrepancies stem from the fact that cv2.cvtColor returns three 2D arrays of float32, whereas skimage.color.rgb2lab yields a 3D array of float64.
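The float32 vs float64 point can be illustrated without any image at all: casting a double-precision value down to single precision already introduces a small rounding error. A minimal, hypothetical demonstration:

```python
import numpy as np

x64 = np.float64(1.0) / 3.0    # double precision (about 16 significant digits)
x32 = np.float32(x64)          # single precision (about 7 significant digits)
diff = abs(float(x32) - x64)   # small but nonzero rounding error
```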