I've noticed that Apple's sample code often passes 0 for the bytesPerRow parameter of CGBitmapContextCreate. For example, this comes from the Reflection sample project.
CGContextRef gradientBitmapContext = CGBitmapContextCreate(NULL, pixelsWide, pixelsHigh,
                                                           8, 0, colorSpace, kCGImageAlphaNone);
That seemed odd to me, since I've always gone the route of multiplying the image width by the number of bytes per pixel. I tried passing zero in my own code and tested it out. Sure enough, it still works.
size_t bitsPerComponent = 8;
size_t bytesPerPixel = 4;
size_t bytesPerRow = reflectionWidth * bytesPerPixel;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

CGContextRef context = CGBitmapContextCreate(NULL,
                                             reflectionWidth,
                                             reflectionHeight,
                                             bitsPerComponent,
                                             0, // bytesPerRow ??
                                             colorSpace,
                                             kCGImageAlphaPremultipliedLast);
According to the docs, bytesPerRow should be "The number of bytes of memory to use per row of the bitmap."
So what's the deal? When can I supply a zero, and when must I calculate the exact value? Are there any performance implications of doing it one way or the other?
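For what it's worth, here's the minimal sketch I've been using to poke at this: pass 0 and then ask Core Graphics what row stride it actually picked via CGBitmapContextGetBytesPerRow. The 256x256 dimensions are just placeholder values.

#include <CoreGraphics/CoreGraphics.h>
#include <stdio.h>

// Create a context with bytesPerRow = 0 and report the row stride that
// Core Graphics chose. The 256x256 size is just a placeholder.
static void dumpChosenBytesPerRow(void)
{
    size_t width  = 256;
    size_t height = 256;

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(NULL,
                                                 width,
                                                 height,
                                                 8,   // bitsPerComponent
                                                 0,   // bytesPerRow: let CG decide
                                                 colorSpace,
                                                 kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);

    if (context) {
        // The stride CG calculated; it may be larger than width * 4 if
        // rows are padded for alignment.
        printf("bytesPerRow chosen by CG: %zu (width * 4 = %zu)\n",
               CGBitmapContextGetBytesPerRow(context), width * 4);
        CGContextRelease(context);
    }
}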
CGBitmapContextCreate will print an error message whenever you try to create a bitmap context with invalid parameters. – nielsbot
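Here's a quick sketch of the failure case nielsbot is describing, with made-up 256x256 dimensions and a deliberately too-small row stride; as I understand it, Core Graphics logs a complaint to the console and hands back NULL.

// Deliberately pass a bytesPerRow that's too small for the width and
// pixel format (16 bytes instead of at least 256 * 4).
CGColorSpaceRef cs = CGColorSpaceCreateDeviceRGB();
CGContextRef bad = CGBitmapContextCreate(NULL,
                                         256,  // width
                                         256,  // height
                                         8,    // bitsPerComponent
                                         16,   // far less than 256 * 4 bytes per row
                                         cs,
                                         kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(cs);

if (bad == NULL) {
    // Invalid parameters: check the console for the error message.
} else {
    CGContextRelease(bad);
}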