5
votes

I've noticed in Apple's sample code that they often pass a value of 0 for the bytesPerRow parameter of CGBitmapContextCreate. For example, this comes from the Reflection sample project.

CGContextRef gradientBitmapContext = CGBitmapContextCreate(NULL, pixelsWide, pixelsHigh,  
                                                            8, 0, colorSpace, kCGImageAlphaNone);

That seemed odd to me, since I've always gone the route of multiplying the image width by the number of bytes per pixel. I tried swapping a zero into my own code and tested it out. Sure enough, it still works.

size_t bitsPerComponent = 8;
size_t bytesPerPixel = 4;
size_t bytesPerRow = reflectionWidth * bytesPerPixel;   

CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(NULL,
                                             reflectionWidth,
                                             reflectionHeight,
                                             bitsPerComponent,
                                             0, // bytesPerRow ??
                                             colorSpace,
                                             kCGImageAlphaPremultipliedLast);

According to the docs, bytesPerRow should be "The number of bytes of memory to use per row of the bitmap."

So what's the deal? When can I supply a zero and when must I calculate the exact value? Are there any performance implications of doing it one way or the other?

1
The example you posted, CGBitmapContextCreate(NULL, pixelsWide, pixelsHigh, 8, 0, colorSpace, kCGImageAlphaNone), is NOT valid. You CANNOT create a bitmap context WITHOUT an alpha channel. – PleaseHelp
By the way, if you look at the log output from your app (you may have to check the system log in Console.app), CGBitmapContextCreate will print an error message whenever you try to create a bitmap context with invalid parameters. – nielsbot

1 Answer

8
votes

My understanding is that if you pass in zero, Core Graphics calculates the bytes-per-row for you from the width and pixel format (and may round it up for alignment). You would only need to supply a value yourself if you wanted extra padding at the end of each row, say to satisfy an alignment requirement of your device or some other constraint; in that case you could pass a value larger than width * bytesPerPixel. I would imagine this is rarely needed in modern iOS/macOS development, except for some edge-case optimizations.
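
For what it's worth, here is a minimal sketch (plain C, using only documented Core Graphics calls) that passes 0 for bytesPerRow and then asks the context what stride it actually chose via CGBitmapContextGetBytesPerRow. The width and height values are just placeholders:

#include <CoreGraphics/CoreGraphics.h>
#include <stdio.h>

int main(void) {
    size_t width  = 300;   // placeholder dimensions
    size_t height = 200;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    // Pass 0 for bytesPerRow and let Core Graphics pick the row stride.
    CGContextRef context = CGBitmapContextCreate(NULL,
                                                 width,
                                                 height,
                                                 8,   // bitsPerComponent
                                                 0,   // bytesPerRow: let CG decide
                                                 colorSpace,
                                                 kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);

    if (context) {
        // Ask the context what stride it actually allocated; it may be
        // larger than width * 4 if CG padded the rows for alignment.
        size_t actualBytesPerRow = CGBitmapContextGetBytesPerRow(context);
        printf("width * 4 = %zu, actual bytesPerRow = %zu\n",
               width * 4, actualBytesPerRow);
        CGContextRelease(context);
    }
    return 0;
}

Comparing the two printed numbers is a quick way to see whether Core Graphics added any row padding for your particular width and pixel format.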