
For the iPhone, I am clipping an image with CGContextClipToMask.

In 320x480, it is looking beautiful. But in the retina display/simulator the masking looks like it has been done in 320x480, and then scaled up to 640x960 - it looks a bit tatty.

The correct 640x960 images are being used (I've marked them to make sure).

My code is below. Does anyone know what the problem might be? I'd be super grateful for some help. Many thanks.

-(id)makeMainImage:(UIImage*)initMaskImg initMainImage:(UIImage*)initMainImage{

    //get images
    UIImage *mainImg = initMainImage;
    UIImage *maskImg = initMaskImg;

    //new context, to draw image into
    UIGraphicsBeginImageContext(mainImg.size);
    CGContextRef context = UIGraphicsGetCurrentContext();

    //position context
    CGContextTranslateCTM(context, 0,480);
    CGContextScaleCTM(context, 1.0, -1.0);//flip coordinates

    //rect
    CGRect imageRect = CGRectMake(0, 0, 320, 480);

    //set mask
    CGContextClipToMask(context, imageRect, maskImg.CGImage);

    //main image
    CGContextDrawImage(context, imageRect, mainImg.CGImage);

    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();    

    return newImage;
}
1 Answer


Don't use UIGraphicsBeginImageContext; use

UIGraphicsBeginImageContextWithOptions(size, opaque, 0); // the last argument is the scale

Passing 0 for the scale tells UIKit to use the device's main screen scale (2.0 on Retina displays), so the bitmap context is created at 640x960 instead of 320x480. UIGraphicsBeginImageContext always creates a context at a scale of 1.0, which is why your masked result looks like it was rendered at 320x480 and scaled up. This is how you create Retina-ready images.
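As a sketch, here is how the method from the question could look with the scale-aware context. Note that the hard-coded 320x480 values and the translate by 480 are also replaced with the image's own size, so the method works at any resolution (this assumes the mask and main images are the same size, as in the question):

-(UIImage *)makeMainImage:(UIImage *)maskImg initMainImage:(UIImage *)mainImg{

    // 0 as the scale means "use the main screen's scale" (2.0 on Retina)
    UIGraphicsBeginImageContextWithOptions(mainImg.size, NO, 0);
    CGContextRef context = UIGraphicsGetCurrentContext();

    // flip coordinates so CGContextDrawImage isn't upside down
    CGContextTranslateCTM(context, 0, mainImg.size.height);
    CGContextScaleCTM(context, 1.0, -1.0);

    // use the image's own dimensions instead of hard-coded values
    CGRect imageRect = CGRectMake(0, 0, mainImg.size.width, mainImg.size.height);

    // set mask, then draw the main image through it
    CGContextClipToMask(context, imageRect, maskImg.CGImage);
    CGContextDrawImage(context, imageRect, mainImg.CGImage);

    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    return newImage;
}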