I'm a bit puzzled by the internal representation of a Bitmap's pixels in a ByteBuffer (testing on ARM/little-endian):
1) In the Java layer I create an ARGB bitmap and fill it with the color 0xff112233:
Bitmap sampleBitmap = Bitmap.createBitmap(w, h, Bitmap.Config.ARGB_8888);
Canvas canvas = new Canvas(sampleBitmap);
Paint paint = new Paint();
paint.setStyle(Paint.Style.FILL);
paint.setColor(Color.rgb(0x11, 0x22, 0x33)); // Color.rgb() sets alpha to 0xFF
canvas.drawRect(0, 0, sampleBitmap.getWidth(), sampleBitmap.getHeight(), paint);
As a check, sampleBitmap.getPixel(0, 0) indeed returns 0xff112233, which matches the ARGB pixel format.
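For reference, getPixel() returns a packed ARGB int, so the individual channels can be pulled out like this (a minimal sketch of that check):
int pixel = sampleBitmap.getPixel(0, 0); // 0xFF112233
int a = (pixel >>> 24) & 0xFF; // 0xFF
int r = (pixel >>> 16) & 0xFF; // 0x11
int g = (pixel >>> 8) & 0xFF;  // 0x22
int b = pixel & 0xFF;          // 0x33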
2) The bitmap is packed into a direct ByteBuffer before being passed to the native layer:
final int byteSize = sampleBitmap.getAllocationByteCount();
ByteBuffer byteBuffer = ByteBuffer.allocateDirect(byteSize);
// byteBuffer.order(ByteOrder.LITTLE_ENDIAN); // see below
sampleBitmap.copyPixelsToBuffer(byteBuffer);
As a check, regardless of the buffer's order setting, the debugger shows a byte layout that doesn't quite match ARGB; it looks more like big-endian RGBA (or little-endian ABGR!?):
byteBuffer.rewind();
final byte[] out = new byte[4];
byteBuffer.get(out, 0, out.length);
out = {byte[4]@12852} 0 = (0x11) 1 = (0x22) 2 = (0x33) 3 = (0xFF)
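For what it's worth, reassembling an ARGB int from those four raw bytes does reproduce the getPixel() value (a sketch of that check):
byteBuffer.rewind();
int r = byteBuffer.get() & 0xFF; // 0x11
int g = byteBuffer.get() & 0xFF; // 0x22
int b = byteBuffer.get() & 0xFF; // 0x33
int a = byteBuffer.get() & 0xFF; // 0xFF
int argb = (a << 24) | (r << 16) | (g << 8) | b; // 0xFF112233, same as getPixel(0, 0)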
Now, I'm passing this buffer to the native layer, where I must extract the pixels, and I would expect Bitmap.Config.ARGB_8888 to be represented, depending on the buffer's byte order, as:
a) byteBuffer.order(ByteOrder.LITTLE_ENDIAN):
out = {byte[4]@12852} 0 = (0x33) 1 = (0x22) 2 = (0x11) 3 = (0xFF)
or
b) byteBuffer.order(ByteOrder.BIG_ENDIAN):
out = {byte[4]@12852} 0 = (0xFF) 1 = (0x11) 2 = (0x22) 3 = (0x33)
I can make the code which extracts the pixels work based on the output above, but I don't like that approach since I can't explain the behaviour, which I hope someone here can :)
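Roughly, the workaround I mean looks like this (a sketch, assuming little-endian int reads; the real code lives in the native layer):
byteBuffer.order(ByteOrder.LITTLE_ENDIAN);
byteBuffer.rewind();
int abgr = byteBuffer.asIntBuffer().get(0); // 0xFF332211, i.e. A-B-G-R packed in the int
int argb = (abgr & 0xFF00FF00)              // keep A and G in place
         | ((abgr >>> 16) & 0x000000FF)     // move B down to the low byte
         | ((abgr & 0x000000FF) << 16);     // move R up to bits 16..23
// argb == 0xFF112233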
Thanks!
Bitmap.Config.ARGB_8888 says "Use this formula to pack into 32 bits: int color = (A & 0xff) << 24 | (B & 0xff) << 16 | (G & 0xff) << 8 | (R & 0xff);" - Michael
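Working that quoted formula through with the test color does explain the observed layout (a sketch, using the values from the question):
// Packing A=0xFF, R=0x11, G=0x22, B=0x33 as the docs describe (A<<24 | B<<16 | G<<8 | R):
int packed = (0xFF & 0xff) << 24 | (0x33 & 0xff) << 16 | (0x22 & 0xff) << 8 | (0x11 & 0xff);
// packed == 0xFF332211; stored little-endian, that is the byte sequence
// 0x11, 0x22, 0x33, 0xFF -- exactly what copyPixelsToBuffer shows.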