I'm using Emgu.CV to perform some basic image manipulation and composition. My images are loaded as `Image<Bgra, Byte>`.
Question #1: When I use the `Image<,>.Add()` method, the images are always blended together, regardless of the alpha value. Instead, I'd like them to be composited one atop the other, using the included alpha channel to determine how the images should be blended. So if I call `image1.Add(image2)`, any fully opaque pixels in `image2` would completely cover the pixels from `image1`, while semi-transparent pixels would be blended based on the alpha value.
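(In compositing terms, I believe this is the standard "over" operation: for each pixel, `out = top * alpha + bottom * (1 - alpha)`, where `alpha` is the top pixel's alpha channel scaled to the 0–1 range.)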
Here's what I'm trying to do in visual form. There's a city image with some "transparent holes" cut out, and a frog behind. This is what it should look like:
And this is what OpenCV produces.
How can I get this effect with OpenCV? And will it be as fast as calling `Add()`?
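For reference, here's a naive per-pixel version of the blend I'm describing. It produces the output I want but is far too slow for my purposes; the method name and the direct `Data`-array access are just my own sketch, not an existing Emgu.CV API:

```csharp
using Emgu.CV;
using Emgu.CV.Structure;

static class Compositor
{
    // Naive "over" composite: blends 'top' onto 'bottom' using top's alpha.
    // Writes the result directly into 'bottom' (which also covers Question #2).
    public static void CompositeOver(Image<Bgra, byte> bottom, Image<Bgra, byte> top)
    {
        byte[,,] dst = bottom.Data;   // indexed [row, col, channel]; channels are B, G, R, A
        byte[,,] src = top.Data;

        for (int y = 0; y < bottom.Rows; y++)
        {
            for (int x = 0; x < bottom.Cols; x++)
            {
                double a = src[y, x, 3] / 255.0;    // top pixel's alpha in 0..1

                for (int c = 0; c < 3; c++)         // blend the B, G, R channels
                    dst[y, x, c] = (byte)(src[y, x, c] * a + dst[y, x, c] * (1 - a));

                // Porter-Duff alpha: aOut = aTop + aBottom * (1 - aTop)
                dst[y, x, 3] = (byte)(src[y, x, 3] + dst[y, x, 3] * (1 - a));
            }
        }
    }
}
```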
Question #2: Is there a way to perform this composition in-place, instead of creating a new image with each call to `Add()`? (e.g. `image1.AddImageInPlace(image2)` modifies the bytes of `image1`?)
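For what it's worth, the naive sketch above already has the in-place semantics I'm after, since it writes directly into the first image's pixel buffer; I'm hoping there's a fast, built-in equivalent:

```csharp
// Desired usage: image1 is modified directly, no new image allocated.
Compositor.CompositeOver(image1, image2);
```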
NOTE: Looking for answers within Emgu.CV, which I'm using because of how well it handles perspective warping.