I'm trying to code a method which detects when two UIImages collide taking into account only the non-transparent pixels. Just to be clear, a method that returns TRUE when a pixel with its alpha component greater than 0 on a UIImageView overlaps with a pixel also with its alpha component greater than 0 on the other UIImageView.
The method should be something like:
- (void)checkCollisionBetweenImage:(UIImage *)img1 inFrame:(CGRect)frame1 andImage:(UIImage *)img2 inFrame:(CGRect)frame2;
So it receives both images to be checked with its frame passed independently since the coordinate positions must be converted to match (UIImageView.frame won't do).
[UPDATE 1] I'll update with a piece of code I used in a previous question I made, this code however doesn't always work. I guess the problem lies in the fact that the UIImages used aren't necessarily in the same superview.
Detect pixel collision/overlapping between two images
if (!CGRectIntersectsRect(frame1, frame2)) return NO;
NSLog(@"OverlapsPixelsInImage:withImage:> Images Intersect");
UIImage *img1 = imgView1.image;
UIImage *img2 = imgView2.image;
CGImageRef imgRef1 = [img1 CGImage];
CGImageRef imgRef2 = [img2 CGImage];
float minx = MIN(frame1.origin.x, frame2.origin.x);
float miny = MIN(frame1.origin.y, frame2.origin.y);
float maxx = MAX(frame1.origin.x + frame1.size.width, frame2.origin.x + frame2.size.width);
float maxy = MAX(frame1.origin.y + frame1.size.height, frame2.origin.y + frame2.size.height);
CGRect canvasRect = CGRectMake(0, 0, maxx - minx, maxy - miny);
size_t width = floorf(canvasRect.size.width);
size_t height = floorf(canvasRect.size.height);
NSUInteger bitsPerComponent = 8;
NSUInteger bytesPerRow = 4 * width;
unsigned char *rawData = malloc(canvasRect.size.width * canvasRect.size.height * 4 * bitsPerComponent);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(rawData, width, height, bitsPerComponent, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(colorSpace);
CGContextTranslateCTM(context, 0, canvasRect.size.height);
CGContextScaleCTM(context, 1.0, -1.0);
CGContextClipToMask(context, CGRectMake(frame2.origin.x - minx, frame2.origin.y - miny, frame2.size.width, frame2.size.height), imgRef2);
CGContextDrawImage(context, CGRectMake(frame1.origin.x - minx, frame1.origin.y - miny, frame1.size.width, frame1.size.height), imgRef1);
CGContextRelease(context);
int byteIndex = 0;
for (int i = 0; i < width * height; i++)
{
int8_t alpha = rawData[byteIndex + 3];
if (alpha > 64)
{
NSLog(@"collided in byte: %d", i);
free(rawData);
return YES;
}
byteIndex += 4;
}
free(rawData);
return NO;
[UPDATE 2] I used another line supposedly required just before drawing the masked image.
CGContextSetBlendMode(context, kCGBlendModeCopy);
Still doesn't work. Curiously, I've been checking the alpha values detected on collision and they are random numbers. It's curious because the images I'm using have only full opacity or full transparency, nothing in between.