1 vote

I have a truecolour image with 16 bits per channel (created by ImageMagick), which I'd like to read into separate R, G, B planes. The problem is that the resulting 16-bit pixel value simply duplicates the low and high bytes in the result. If the red component has an approximate value of 0x42, for example, my libpng code actually returns a value of 0x4242.

My first problem is that I'm not quite convinced that the input actually has 16-bit pixels. This is what identify -verbose returns:

   Type: TrueColor    
   Endianess: Undefined    
   Colorspace: RGB
   Depth: 16/8-bit    
   Channel depth:
      red: 8-bit
      green: 8-bit
      blue: 8-bit    
   Channel statistics:
      Red:
       min: 0 (0)
       max: 65535 (1)
   Properties:
      PNG:IHDR.bit_depth       : 16
      PNG:IHDR.color_type      : 2

Does this image actually have 16-bit data that I can extract? If so, how? My libpng read code basically does this (after confirming that png_get_color_type returns 2, and png_get_bit_depth returns 16):

   png_uint_32 rowbytes = png_get_rowbytes(png, info);
   image        = (png_byte *) xmalloc(height * rowbytes);
   row_pointers = (png_bytep*) xmalloc(sizeof(png_bytep) * height);

   for(int i = 0; i < (int)height; i++)
      row_pointers[i] = image + (i*rowbytes);

   png_read_image(png, row_pointers);

And my code which splits image into RGB planes does this:

   uint16_t *src = (uint16_t*) image;   // libpng composite image
   uint16_t *rplane, *gplane, *bplane;  // separate planar channels
   ...
   for(size_t i=0; i<nrows; i++) {
      for(size_t j=0; j<ncols; j++) {
         *rplane++ = *src++;
         *gplane++ = *src++;
         *bplane++ = *src++;
      }
   }

So this assumes that the 48-bit pixel data returned by libpng is 16b R, then 16b G, then 16b B.

If I read the image into Matlab, it reports that it is only reading 8-bit pixels because of limitations in ImageMagick. Matlab and the libpng code above agree on the RGB values of all pixels, except that Matlab reports each as a single 8-bit component while the libpng code duplicates that value in both bytes of each 16-bit component: if Matlab says pixel (x,y) has a red component of 0x42, for example, the libpng code reports 0x4242.

Any ideas? Thanks.

EDIT

non-verbose identify output:

> identify in.png
in.png PNG 2776x2776 2776x2776+0+0 16-bit DirectClass 14.35MB 0.000u 0:00.000

2 Answers

2 votes

The "Depth: 16/8-bit" line in the "identify" report means that the image is stored with 16-bit samples, but every pixel could be represented without loss using 8 bits. That is, every component of every pixel has its high byte equal to its low byte (i.e., has a value that is evenly divisible by 257).

For example, for this PPM image

P3 2 2 65535
    0     0     0
32896 32896 32896
32896 32896 32896
65535 65535 65535

the samples are all evenly divisible by 257 and identify -verbose file.ppm reports

Depth: 16/8-bit
  Channel depth:
    gray: 8-bit

But if you change the final row to "65535 65535 65534", then identify -verbose file.ppm reports

Depth: 16-bit
  Channel depth:
    red: 8-bit
    green: 8-bit
    blue: 16-bit

To know how the image was actually stored in the PNG file, you have to look at the PNG:IHDR properties shown by "identify". Or you can use "pngcheck" to get a truthful report about the PNG file contents.

There's an explanation of sample scaling in the PNG specification. ImageMagick implements it by multiplying by 257 when scaling 8-bit samples up to 16 bits, and dividing by 257.0 when scaling back down. See the "ScaleQuantumToShort()" and "ScaleShortToQuantum()" inline functions in the ImageMagick source.

1 vote

If I make an image 1000x1000 full of random data that is difficult to compress, it comes out at 5.7MB (as you would expect for 1 million pixels of 16-bit Red, 16-bit Green and 16-bit Blue) and shows up as 16-bit according to identify:

convert -size 1000x1000! xc:gray +noise random image.png

ls -lhrt
-rw-r--r--      1 mark  staff   5.7M 18 Jan 16:59 image.png

identify image.png
image.png PNG 1000x1000 1000x1000+0+0 16-bit sRGB 6.011MB 0.000u 0:00.000

If I now do the same but with 8-bit pixels:

convert -size 1000x1000! xc:gray +noise random -depth 8 image.png

It shows up as 8-bit in identify and takes half the space:

identify image.png
image.png PNG 1000x1000 1000x1000+0+0 8-bit sRGB 3.006MB 0.000u 0:00.000

ls -lhrt
-rw-r--r--      1 mark  staff   2.9M 18 Jan 17:01 image.png

So I deduce that identify is telling the truth and I am pretty sure your image is not actually 16-bit. How did you create it?

In the light of Glenn's answer, I guess one workaround might be to set the top-left or bottom-right pixel of your image to a prime number in all three channels, like 65,521. Since that value is not divisible by 257, it cannot be stored in a single byte, which should effectively stop the whole image being reducible to 8-bit. Then just ignore that pixel in subsequent processing, or, if you desperately need its value, add an extra row of dummy data instead.