0 votes

I have a PNG file. I decompressed the IDAT chunk, read the data as 16-bit color (16 bits per pixel), and saved the values in a one-dimensional array. The PNG is 126x128 (w x h), but the pixel count I have (the array length) is 16,192. Therefore:

        16192 <-- pixel I got
126x128=16128 <-- pixel using width and height
        -----
           64 <-- difference

What is this 64 pixel excess?

Edit

Thank you for your replies (comments and answers), especially @leonbloy.

The value 16,192 (the pixel count I got) actually comes from 32,384 bytes of data; I already divided it by the BytesPerPixel.

1
leonbloy has it right. There's an extra byte at the start of each line so that lines can be pre-filtered in different ways for different parts of the image. And BTW, you can blame me for that. :-) – Lee Daniel Crocker

1 Answer

1 vote

Hard to tell without more detail (how are you computing the number of pixels?).

One possible explanation is that you are forgetting to take into account that each PNG row is prepended with a byte that gives the filter type applied to that row (ref). Hence the total number of bytes inside the IDAT chunks (before ZLIB compression; and don't forget that there can be several IDAT chunks, you must concatenate them all) is

Bytes = Rows x (1 +  Cols x BytesPerPixel)
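To get that byte count in practice, you have to walk the chunk list, concatenate every IDAT payload, and only then inflate. A minimal stdlib-only Python sketch (the helper name `idat_payload` is mine, and CRC verification is skipped for brevity):

```python
import struct
import zlib

def idat_payload(path):
    """Concatenate all IDAT chunks of a PNG and return the inflated bytes."""
    with open(path, "rb") as f:
        # Every PNG starts with the same 8-byte signature.
        assert f.read(8) == b"\x89PNG\r\n\x1a\n"
        data = bytearray()
        while True:
            header = f.read(8)
            if len(header) < 8:
                break
            length, ctype = struct.unpack(">I4s", header)
            chunk = f.read(length)
            f.read(4)  # skip the CRC (not verified in this sketch)
            if ctype == b"IDAT":
                # IDAT payloads form ONE zlib stream; append before inflating.
                data += chunk
            if ctype == b"IEND":
                break
        return zlib.decompress(bytes(data))
```

The length of the returned buffer is the `Bytes` in the formula above (filter bytes included). For real work, a library such as Pillow or pypng already handles this.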

If your image is 16-bit grayscale (BytesPerPixel = 2) and you are computing Pixels = Bytes / BytesPerPixel, this explains the difference exactly: the Rows = 128 extra filter bytes, divided by 2 bytes per pixel, account for 64 spurious "pixels".
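Plugging your numbers into the formula shows where the 64 comes from (plain arithmetic, nothing assumed beyond BytesPerPixel = 2):

```python
width, height, bytes_per_pixel = 126, 128, 2

# Each row carries one filter-type byte before its pixel data.
total_bytes = height * (1 + width * bytes_per_pixel)  # = 32384

# Dividing ALL bytes (filter bytes included) by BytesPerPixel overcounts.
naive_pixels = total_bytes // bytes_per_pixel         # = 16192
true_pixels = width * height                          # = 16128

print(naive_pixels - true_pixels)                     # 64 == height // bytes_per_pixel
```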