6
votes

I have implemented several functions to convert sRGB to the CIE-L*a*b* color space.

Now, I'd like to use that for dithering, but I'm unsure exactly how to discern which color is the "lower" color and which one is the "higher" color.

When dithering in a one-dimensional color space (grayscale), things are easy. With error diffusion dithering, I calculate the nearest grayscale value from my palette and add the error to the surrounding pixels, according to whatever diffusion matrix I use (for instance Floyd-Steinberg). Since the space is one-dimensional, there is just that one value. But now I have a three-dimensional color space; should I just add the error to each coordinate individually?
(That is the only way it makes sense to me at this point.)
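To make the question concrete, here is a minimal sketch (in Python/numpy, which I'm using as an assumed illustration language) of Floyd-Steinberg error diffusion where the error is simply diffused per channel, the approach I'm asking about:

```python
import numpy as np

def fs_dither(img, levels=2):
    """Floyd-Steinberg error diffusion, applied to each channel
    independently. `img` is a float array in [0, 1] of shape (H, W, C);
    the palette has `levels` evenly spaced entries per channel."""
    img = img.astype(np.float64).copy()
    h, w, _ = img.shape
    step = levels - 1
    for y in range(h):
        for x in range(w):
            old = img[y, x].copy()
            new = np.round(old * step) / step  # nearest palette level, per channel
            img[y, x] = new
            err = old - new                    # per-channel error vector
            # diffuse each channel's error to the neighbors (FS weights)
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return img
```

With a single channel this reduces to ordinary grayscale Floyd-Steinberg; with three channels each coordinate is treated as its own grayscale image.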

When dithering with an ordered dither matrix, things get even more complicated. An ordered dither matrix defines threshold values. For this, I need to know the "lower" and the "higher" palette value relative to the pixel I'm about to dither. I calculate the distance to both palette entries, and the threshold value from the dither matrix decides at which point between those two neighboring values the pixel is rounded down to the lower one or up to the higher one.
(An actual implementation would of course be more efficient than computing that explicitly, by using a matrix sensibly chosen for the number of grayscale values in my palette, by choosing a palette with evenly spaced values, and so on.)
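For reference, this is roughly what I mean in the one-dimensional case, sketched in Python/numpy (an assumed illustration, with a normalized 2×2 Bayer matrix):

```python
import numpy as np

# 2x2 Bayer matrix, normalized to thresholds in (0, 1)
BAYER2 = (np.array([[0, 2], [3, 1]]) + 0.5) / 4.0

def ordered_dither_gray(img, levels=2):
    """Ordered dithering for a single-channel image in [0, 1] with
    `levels` evenly spaced palette entries."""
    step = levels - 1
    scaled = img * step
    lower = np.floor(scaled)            # nearest palette level below the pixel
    frac = scaled - lower               # position between lower and upper level
    h, w = img.shape
    t = np.tile(BAYER2, (h // 2 + 1, w // 2 + 1))[:h, :w]
    return (lower + (frac > t)) / step  # pick the upper level where frac beats the threshold
```

The "lower"/"higher" decision here is just `floor` versus `floor + 1` on the scaled value, which is exactly what has no obvious analogue in a three-dimensional space.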
This is, again, pretty easy in a one-dimensional color space, but in CIE-L*a*b* there is no "higher" or "lower" value as such.

Applying the threshold matrix to just the luminance seems incorrect: I might have two colors with the same luminance in my palette, and then what?

2
Applying the color difference (ΔE) in the Lab color space might help you set a threshold based on the comparison between colors: the lower the ΔE value, the more closely two colors are related. – Sarthak Singhal
@SarthakSinghal Well, yes, I'm already using ΔE to calculate color distances. But that alone doesn't help, because I need the distinction between "higher" and "lower". If I simply dither from the nearest color to the second-nearest color, I might dither "down" to a lower luminance, etc., which would be incorrect. – polemon

2 Answers

3
votes

This paper describes a problem very close to your question, and then proceeds to provide an algorithm to solve it:

http://ira.lib.polyu.edu.hk/bitstream/10397/1494/1/A%20multiscale%20color%20error_05.pdf

Hopefully this helps.

1
votes

When you're dithering in more than one dimension, you would want to quantize the values and diffuse the error in each dimension independently.

Starting with a 3-channel RGB image as an example: separate the components into three greyscale images, dither them independently, and then combine them back into a color image. The error in one channel is ignorant of the error in the other channels; don't get caught up on ΔE or anything.
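A minimal sketch of that split/dither/recombine step, in Python/numpy (the helper name `dither_color` and the callback signature are my own illustration, not from any library):

```python
import numpy as np

def dither_color(img, dither_channel):
    """Dither each channel of a multi-channel image independently and
    stack the results back into one image. `img` has shape (H, W, C);
    `dither_channel` is any function that dithers a single 2-D channel."""
    channels = [dither_channel(img[..., c]) for c in range(img.shape[-1])]
    return np.stack(channels, axis=-1)
```

Any single-channel ditherer (error diffusion or ordered) can be dropped in as `dither_channel`, which is the point: no cross-channel distance is ever computed.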

The same concept applies to dithering in CIE-Lab, even for an ordered dither: apply the dithering independently to each channel. Don't worry about the Euclidean distance between pixels, just consider the delta on the individual channel.

"Higher" and "lower" are easy to reason about in a single dimension, and that includes the individual channels of CIE-Lab.

You are correct that you would not want to apply the threshold matrix to just the luminance channel! I believe you would want three threshold matrices, one for each channel, determined according to how you configured your palette. (These matrices might be the same or might differ, depending on how you distribute the palette values across the channels.)

In three channels, you can visualize your palette as a cube (x, y, z). So when the luminance channel is quantized to a certain value, that might determine the x coordinate, but you still have a whole range of values in the y and z directions; those coordinates are decided by how the other channels are quantized. Generate the palette in such a way that the channels can be varied independently. You don't even need the same number of quantization levels in each dimension: you might choose to have only 3 possible luminance values and use the rest of the palette to vary the a/b channels with more precision. (This is why your three threshold matrices might differ.)
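A sketch of such a per-channel palette in Python/numpy, assuming the usual L ∈ [0, 100] and a, b ∈ [-128, 127] ranges and an illustrative 3×5×5 grid (the function name and level counts are my own, not a standard API):

```python
import numpy as np

def quantize_lab(lab, levels=(3, 5, 5)):
    """Quantize each Lab channel to its own number of evenly spaced
    levels; the palette is the cross product (here 3*5*5 = 75 colors).
    Assumes L in [0, 100] and a, b in [-128, 127]."""
    lo = np.array([0.0, -128.0, -128.0])
    hi = np.array([100.0, 127.0, 127.0])
    steps = np.array(levels) - 1.0
    norm = (lab - lo) / (hi - lo)       # map each channel to [0, 1]
    q = np.round(norm * steps) / steps  # snap to that channel's own grid
    return q * (hi - lo) + lo           # map back to Lab ranges
```

Because each channel snaps to its own grid, dithering (threshold matrices included) can operate on each channel in isolation, and the number of levels per channel is a free design choice.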