I am trying to dispatch_async the drawing code I posted at https://stackguides.com/questions/34430468/while-profiling-with-instruments-i-see-a-lot-of-cpu-consuming-task-happening-w. I get the error: "No matching function for call to 'dispatch_async'". Since this is an expensive operation, I am trying to run the rendering on a background queue and, when the image is ready, switch back to the main queue, because UI updates must happen on the main thread. So please guide me on this. I am posting the whole code.

#pragma mark Blurring the image
- (UIImage *)blurWithCoreImage:(UIImage *)sourceImage
{
    // Set up output context.
    dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0);
    dispatch_async(queue, ^{
        CIImage *inputImage = [CIImage imageWithCGImage:sourceImage.CGImage];

        // Apply Affine-Clamp filter to stretch the image so that it does not
        // look shrunken when gaussian blur is applied
        CGAffineTransform transform = CGAffineTransformIdentity;
        CIFilter *clampFilter = [CIFilter filterWithName:@"CIAffineClamp"];
        [clampFilter setValue:inputImage forKey:@"inputImage"];
        [clampFilter setValue:[NSValue valueWithBytes:&transform objCType:@encode(CGAffineTransform)] forKey:@"inputTransform"];

        // Apply gaussian blur filter with radius of 30
        CIFilter *gaussianBlurFilter = [CIFilter filterWithName: @"CIGaussianBlur"];
        [gaussianBlurFilter setValue:clampFilter.outputImage forKey: @"inputImage"];
        [gaussianBlurFilter setValue:@10 forKey:@"inputRadius"]; //30

        CIContext *context = [CIContext contextWithOptions:nil];
        CGImageRef cgImage = [context createCGImage:gaussianBlurFilter.outputImage fromRect:[inputImage extent]];
        UIGraphicsBeginImageContext(self.view.frame.size);
        CGContextRef outputContext = UIGraphicsGetCurrentContext();

        // Invert image coordinates
        CGContextScaleCTM(outputContext, 1.0, -1.0);
        CGContextTranslateCTM(outputContext, 0, -self.view.frame.size.height);

        // Draw base image.
        CGContextDrawImage(outputContext, self.view.frame, cgImage);

        // Apply white tint
        CGContextSaveGState(outputContext);
        CGContextSetFillColorWithColor(outputContext, [UIColor colorWithWhite:1 alpha:0.2].CGColor);
        CGContextFillRect(outputContext, self.view.frame);
        CGContextRestoreGState(outputContext);

        dispatch_async(dispatch_get_main_queue(), ^{
            UIImage *outputImage = UIGraphicsGetImageFromCurrentImageContext();
            UIGraphicsEndImageContext();

            return outputImage;
        })

    });



// Output image is ready.


}

It throws the error on dispatch_async(dispatch_get_main_queue(), ...), i.e. when I try to bring the work back to the main thread, because the UI work happens on the main thread. What am I missing?

You should not call this kind of procedure from the main thread in the first place. A better solution is to encapsulate the code that consumes the result of this procedure as a continuation, or a completion block in Objective-C terms. Asynchronously dispatched code never returns to the context it was fired from, unless extra synchronization steps are taken to block that thread from continuing, which is not desirable in this case anyway. - ZhangChn
You are missing a semicolon at the end of your dispatch_async call, e.g. dispatch_async(dispatch_get_main_queue(), ^{ ... });. - Rob

2 Answers

1 vote

See this answer to a similar question:

Is this Core Graphics code thread safe?

You start drawing on one thread, then finish it on another thread. That's a ticking time bomb.

In addition, the "return outputImage" performed on the main thread isn't going to do you any good, because there is nobody to receive that return value. You should do all your drawing on the same thread, extract the image, and then call something on the main thread that processes the completed image.
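That advice can be sketched as a completion-block version of the method. The names blurImage:completion: and renderBlurredImage: here are illustrative, not from the question; renderBlurredImage: stands in for the drawing code the question already contains:

- (void)blurImage:(UIImage *)sourceImage completion:(void (^)(UIImage *blurredImage))completion
{
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
        // Do ALL of the drawing, including UIGraphicsGetImageFromCurrentImageContext()
        // and UIGraphicsEndImageContext(), here on the one background queue.
        UIImage *outputImage = [self renderBlurredImage:sourceImage]; // hypothetical helper holding the drawing code
        dispatch_async(dispatch_get_main_queue(), ^{
            completion(outputImage); // hand the finished image back on the main thread
        });
    });
}

The caller then consumes the image inside the block instead of expecting a return value:

[self blurImage:anImage completion:^(UIImage *blurredImage) {
    self.imageView.image = blurredImage; // safe: this runs on the main thread
}];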

0 votes

I think your code itself is fine, but the way you are using it may be wrong. Please try it as below.

Create a method like this:

- (UIImage *)blurWithCoreImage:(UIImage *)sourceImage
{
    CIImage *inputImage = [CIImage imageWithCGImage:sourceImage.CGImage];

    // Apply Affine-Clamp filter to stretch the image so that it does not
    // look shrunken when the gaussian blur is applied
    CGAffineTransform transform = CGAffineTransformIdentity;
    CIFilter *clampFilter = [CIFilter filterWithName:@"CIAffineClamp"];
    [clampFilter setValue:inputImage forKey:@"inputImage"];
    [clampFilter setValue:[NSValue valueWithBytes:&transform objCType:@encode(CGAffineTransform)] forKey:@"inputTransform"];

    // Apply gaussian blur filter with a radius of 10
    CIFilter *gaussianBlurFilter = [CIFilter filterWithName:@"CIGaussianBlur"];
    [gaussianBlurFilter setValue:clampFilter.outputImage forKey:@"inputImage"];
    [gaussianBlurFilter setValue:@10 forKey:@"inputRadius"];

    CIContext *context = [CIContext contextWithOptions:nil];
    CGImageRef cgImage = [context createCGImage:gaussianBlurFilter.outputImage fromRect:[inputImage extent]];

    // Set up the output context.
    UIGraphicsBeginImageContext(self.view.frame.size);
    CGContextRef outputContext = UIGraphicsGetCurrentContext();

    // Invert image coordinates
    CGContextScaleCTM(outputContext, 1.0, -1.0);
    CGContextTranslateCTM(outputContext, 0, -self.view.frame.size.height);

    // Draw base image.
    CGContextDrawImage(outputContext, self.view.frame, cgImage);
    CGImageRelease(cgImage); // createCGImage returns a +1 CGImageRef; release it to avoid a leak

    // Apply white tint
    CGContextSaveGState(outputContext);
    CGContextSetFillColorWithColor(outputContext, [UIColor colorWithWhite:1 alpha:0.2].CGColor);
    CGContextFillRect(outputContext, self.view.frame);
    CGContextRestoreGState(outputContext);

    UIImage *outputImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return outputImage;
}

and use this method like this:

dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0);
dispatch_async(queue, ^{
    UIImage *img = [self blurWithCoreImage:[UIImage imageNamed:@"imagename.png"]];
    dispatch_async(dispatch_get_main_queue(), ^{
        [self.view addSubview:[[UIImageView alloc] initWithImage:img]];
    });
});

I just tried it like this for testing and it gave me the proper result, so give it a try.

Result of above code

Let me know if you face any issues. All the best!