2
votes

In my iPad drawing app, I have a 768 x 1024 buffer context that I create and use to speed up my drawing.

Here is the code that currently runs when drawRect is called; it simply copies my already existing drawing onto the screen:

CGContextRef context = UIGraphicsGetCurrentContext();

CGImageRef image = CGBitmapContextCreateImage([DrawingState sharedState].context);
CGContextDrawImage(context, CGRectMake(0, 0, 768, 1024), image);
CGImageRelease(image);

This code works just as expected: it grabs an image from my background context, draws it into my current context, and then releases the image. It works!

However, I'm doing some performance tuning to speed up my drawing, so I'm trying this code instead, where "rect" is the rectangle passed into drawRect:

CGContextRef context = UIGraphicsGetCurrentContext();

CGImageRef image = CGBitmapContextCreateImage([DrawingState sharedState].context);
CGImageRef subImage = CGImageCreateWithImageInRect(image, rect);
CGContextDrawImage(context, rect, subImage);
CGImageRelease(subImage);
CGImageRelease(image);

This does not work; the image does not appear in the correct location, and it may not even be the correct image (I can't tell, because when the image appears in the wrong location, part of it falls outside drawRect's rectangle and never gets drawn). Any idea what is going on?

Below is how I am initializing the [DrawingState sharedState].context. This part should be fine, but I figured I'd include it for completeness.

if(context != NULL)
{
    CGContextRelease(context);
    context = NULL;
}
if(bitmapData != NULL)
{
    free(bitmapData);
    bitmapData = NULL;
}
if(colorSpace != NULL)
{
    CGColorSpaceRelease(colorSpace);
    colorSpace = NULL;
}

CGSize canvasSize = CGSizeMake(screenWidth, screenHeight);

int bitmapByteCount;
int bitmapBytesPerRow;

bitmapBytesPerRow   = (canvasSize.width * 4);
bitmapByteCount     = (bitmapBytesPerRow * canvasSize.height);

colorSpace = CGColorSpaceCreateDeviceRGB();

bitmapData = malloc( bitmapByteCount );

if( bitmapData == NULL ){
    NSLog(@"Buffer could not be alloc'd");
}

//Create the context
context = CGBitmapContextCreate(bitmapData, canvasSize.width, canvasSize.height, 8, bitmapBytesPerRow, colorSpace, kCGImageAlphaPremultipliedFirst);

As requested, the actual optimization I am trying to perform is below. I am drawing a buffer image (before doing additional drawing) into my buffer context, then later copying my buffer context into the current context.

Original Code:

CGContextClearRect([DrawingState sharedState].context, CGRectMake(0, 0, 768, 1024));
if(bufferImage != NULL)
{
    CGContextDrawImage([DrawingState sharedState].context, CGRectMake(0, 0, 768, 1024), bufferImage);
}

Optimized Code:

CGContextClearRect([DrawingState sharedState].context, rect);
if(bufferImage != NULL)
{
    CGImageRef subImage = CGImageCreateWithImageInRect(bufferImage, rect);
    CGContextDrawImage([DrawingState sharedState].context, rect, subImage);
    CGImageRelease(subImage);
}

2 Answers

0
votes

I doubt you want to use the same rect value for creating a sub image and for drawing.

If you read the docs, CGImageCreateWithImageInRect crops to the intersection of the rect parameter and the image's bounds, measured in the image's own coordinate space, while CGContextDrawImage draws an image into rect in the context's coordinate space.

I don't see how you plan to use this as an optimization, but your results sound like what I would expect from your code.
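
To make that distinction concrete, here is a hedged sketch (reusing the names from the question) of the two different jobs rect is doing in the failing drawRect code:

CGContextRef context = UIGraphicsGetCurrentContext();
CGImageRef image = CGBitmapContextCreateImage([DrawingState sharedState].context);

// Here rect selects *source pixels*: it is intersected with the image's bounds
// and interpreted in the image's own coordinate space.
CGImageRef subImage = CGImageCreateWithImageInRect(image, rect);
NSLog(@"sub-image is %zu x %zu pixels",
      CGImageGetWidth(subImage), CGImageGetHeight(subImage));

// Here rect is a *destination*: the cropped pixels are scaled to fill this
// rectangle in the context's current coordinate space, which is not the same
// space the pixels were cropped from.
CGContextDrawImage(context, rect, subImage);

CGImageRelease(subImage);
CGImageRelease(image);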

0
votes

The short version of the answer to this question is that I wanted to use CGContextClipToRect instead of trying to extract sub-images. CGContextClipToRect(CGContextRef, CGRect) clips the context so that any subsequent drawing outside the supplied rectangle is automatically discarded.
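
For reference, a minimal sketch of what that looks like in drawRect (assuming the same 768 x 1024 buffer from the question); the same idea applies when restoring bufferImage into the buffer context:

- (void)drawRect:(CGRect)rect
{
    CGContextRef context = UIGraphicsGetCurrentContext();

    // Restrict all subsequent drawing to the dirty rect; anything outside it is discarded.
    CGContextClipToRect(context, rect);

    // Draw the full buffer at its full size so coordinates still line up;
    // only the clipped region is actually rasterized.
    CGImageRef image = CGBitmapContextCreateImage([DrawingState sharedState].context);
    CGContextDrawImage(context, CGRectMake(0, 0, 768, 1024), image);
    CGImageRelease(image);
}

Because the full image is still drawn at its original rectangle, there is no coordinate-space bookkeeping to get wrong, and the clip keeps the work limited to the invalidated region.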