1 vote

Following this formula for alpha blending two color values, I wish to apply it to n numpy arrays of RGBA image data (though the expected use case will, in practice, have a very low upper bound on the number of arrays, probably no more than 5). In context, this process will be constrained to arrays of identical shape.

I could in theory achieve this through iteration, but I expect that would be computationally intensive and terribly inefficient.

What is the most efficient way to apply a function between elements at the same position in two arrays, across the entire arrays?
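(As far as I understand, NumPy arithmetic between identically shaped arrays is already applied element-wise, so I suspect the answer is some expression of that rather than a Python-level loop; a toy illustration of the kind of operation I mean, with made-up values:)

import numpy as np

# two identically shaped toy "image" arrays with arbitrary values
a = np.random.rand(5, 6, 4)
b = np.random.rand(5, 6, 4)

# any expression built from numpy arithmetic is evaluated element-wise
# across the whole arrays, with no explicit Python loop
def blend(x, y):
    return x + y * (1.0 - x)

out = blend(a, b)
print(out.shape)   # (5, 6, 4)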

A loose example:

# in context, the numpy arrays come from here, as either numpy data in the
# first place or a path; add_alpha_channel is a helper defined elsewhere
# in the package
from os.path import exists

import numpy as np
from PIL import Image

def import_data(source):
    # an existing numpy array is passed straight through
    if isinstance(source, np.ndarray):
        return source

    # otherwise, source should be a path to an image file
    if not exists(source):
        raise IOError("Cannot identify image data in file '{0}'".format(source))
    try:
        return add_alpha_channel(np.array(Image.open(source)))
    except IOError:
        raise IOError("Cannot identify image data in file '{0}'".format(source))
    except TypeError:
        raise TypeError("Cannot identify image data from source.")

# and here is the in-progress method that will, in theory, composite the stack
# of arrays; in context this is a bit more elaborate; self.width & self.height
# are just what they appear to be: the final size of the composited output of
# all layers

def render(self):
    render_surface = np.zeros((self.height, self.width, 4))
    for l in self.__layers:
        foreground = l.render()  # basically this just returns an np array
        # the next four lines just find the region shared by the two layers
        # that is to be composited
        l_x1, l_y1 = l.origin
        l_x2 = l_x1 + foreground.shape[1]
        l_y2 = l_y1 + foreground.shape[0]
        background = render_surface[l_y1:l_y2, l_x1:l_x2]

        # at this point, foreground & background contain two identically
        # shaped arrays to be composited; the next line is where the function
        # I'm seeking ought to go
        render_surface[l_y1:l_y2, l_x1:l_x2] = ?
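For illustration, what I picture at the ? is a single vectorized call; compose_rgba below is a placeholder name for whatever function answers this question, not an existing API:

# hypothetical usage only; compose_rgba is exactly the element-wise
# blending function I'm looking for
render_surface[l_y1:l_y2, l_x1:l_x2] = compose_rgba(foreground, background)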
@MarkSetchell So, this solution may be what I fall back to, but I'd prefer not to add a dependency to the project for just one method (it's an extensive package for distribution in academic research). I'm reasonably confident there is a way to zip the numpy arrays into a single dimension and do some array manipulation to achieve this; I'm just not remotely competent enough with this sort of programming to get there myself. But I appreciate this, because yes, failing a numpy-only solution, this will work. – Jonline
Questions like this tend to get better responses if you give some sample data - maybe show a few lines of code to generate 3 images of 5 rows, 6 columns and 4 RGBA values, so everyone can see what is getting multiplied by what, as all the dimensions are different. – Mark Setchell
@MarkSetchell I could, but the real-world context is too complex to show; it's a minor graphics library for a particular context, and, as my question states, I will always be working with arrays of identical shape. It amounts to a list of ndarrays, and then... the entire rest is my question. But point taken; I'll throw something ecologically valid together. – Jonline
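(Per the suggestion above, something like this is the kind of synthetic sample data I have in mind; the shapes and values are arbitrary:)

import numpy as np

# three synthetic RGBA "images": 5 rows, 6 columns, 4 channels, values 0..255
rng = np.random.default_rng(0)
layers = [rng.integers(0, 256, size=(5, 6, 4)).astype(np.float64) for _ in range(3)]
print(layers[0].shape)   # (5, 6, 4)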

1 Answer

7 votes

Starting with these two RGBA images:

[input image: a.png]

[input image: b.png]

I implemented the formula you linked to and came up with this:

#!/usr/local/bin/python3
from PIL import Image
import numpy as np

# Open input images, and make Numpy array versions
src  = Image.open("a.png")
dst  = Image.open("b.png")
nsrc = np.array(src, dtype=np.float64)
ndst = np.array(dst, dtype=np.float64)

# Extract the RGB channels
srcRGB = nsrc[...,:3]
dstRGB = ndst[...,:3]

# Extract the alpha channels and normalise to range 0..1
srcA = nsrc[...,3]/255.0
dstA = ndst[...,3]/255.0

# Work out resultant alpha channel
outA = srcA + dstA*(1-srcA)

# Work out resultant RGB
outRGB = (srcRGB*srcA[..., np.newaxis] +
          dstRGB*dstA[..., np.newaxis]*(1 - srcA[..., np.newaxis])) / outA[..., np.newaxis]

# Merge RGB and alpha (scaled back up to 0..255) back into single image
outRGBA = np.dstack((outRGB,outA*255)).astype(np.uint8)

# Make into a PIL Image, just to save it
Image.fromarray(outRGBA).save('result.png')

Output image:

[output image: result.png]
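The question asks about n arrays rather than two; assuming all layers share the same shape and a 0..255 RGBA range, one possible extension (an untested sketch; over and composite are names chosen here, not library functions) is to wrap the blend in a function and fold it over a list of arrays:

import numpy as np

def over(src, dst):
    # same formula as above: src composited over dst, both float RGBA in 0..255
    srcRGB, dstRGB = src[..., :3], dst[..., :3]
    srcA = src[..., 3:4] / 255.0
    dstA = dst[..., 3:4] / 255.0

    outA = srcA + dstA * (1 - srcA)
    # guard against division by zero where both alphas are zero
    safeA = np.where(outA == 0, 1.0, outA)
    outRGB = (srcRGB * srcA + dstRGB * dstA * (1 - srcA)) / safeA

    return np.concatenate((outRGB, outA * 255.0), axis=-1)

def composite(layers):
    # layers is a list of identically shaped (h, w, 4) float arrays,
    # ordered topmost first
    result = layers[0]
    for layer in layers[1:]:
        result = over(result, layer)
    return result

# e.g. composite([nsrc, ndst]).astype(np.uint8) should reproduce the two-image case above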