My problem is as follows:
import numpy as np
# two arrays of arbitrary, different shapes and sizes are given
a = np.array(...).reshape(?)
b = np.array(...).reshape(?)
# if broadcasting a and b works
c = a * b
# I want to guarantee that the assignment of the result works too
a += c
This obviously works if a.shape == (a * b).shape, but fails if b is the larger array. In the case that the broadcast result is larger than a, I therefore want to reduce over the broadcast axes using some reduce operator.
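For reference, here is one way such a function could look. This is only a sketch, assuming the target shape is reachable from c's shape by NumPy's broadcasting rules, and that reduce_fun accepts the axis and keepdims keywords (as np.sum and np.prod do):

```python
import numpy as np

def broadcast_inverse(c, shape, reduce_fun=np.sum):
    """Reduce c back down to `shape` along the axes broadcasting added."""
    c = np.asarray(c)
    # Broadcasting aligns shapes from the right, so any extra leading
    # axes of c do not exist in the target shape at all: reduce them away.
    n_excess = c.ndim - len(shape)
    if n_excess:
        c = reduce_fun(c, axis=tuple(range(n_excess)))
    # Axes that were length 1 in the target but grew during broadcasting
    # are reduced with keepdims=True so they stay length 1.
    axes = tuple(i for i, s in enumerate(shape) if s == 1 and c.shape[i] != 1)
    if axes:
        c = reduce_fun(c, axis=axes, keepdims=True)
    return c
```

With this sketch, both of the examples that follow produce a d with a.shape == d.shape.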
An example of the expected behaviour of the broadcast_inverse() function I need:
a = np.array(2)
b = np.ones(6).reshape(2, 3)
# broadcasting a to the shape of b
c = a * b
# now invert the broadcast by accumulating the excess axes
d = broadcast_inverse(c, a.shape, reduce_fun=np.sum)
assert a.shape == d.shape
print('a=', a) # a= 2
print('b=', b) # b= [[1 1 1] [1 1 1]]
print('c=', c) # c= [[2 2 2] [2 2 2]]
print('d=', d) # d= 12
A second example using 2D arrays and a different reduce function:
a = np.array([[2], [3]])
b = np.arange(2*3).reshape(2, 3) + 1
# broadcasting a to the shape of b
c = a * b
# broadcast inversion using the product over the excess axes
d = broadcast_inverse(c, a.shape, reduce_fun=np.prod)
assert a.shape == d.shape
print('a=', a) # a= [[2] [3]]
print('b=', b) # b= [[1 2 3] [4 5 6]]
print('c=', c) # c= [[ 2 4 6] [12 15 18]]
print('d=', d) # d= [[ 48] [3240]]
I tried doing it by iterating over the shapes, but it turned out to be quite difficult to get bug-free. So I was hoping somebody knows whether NumPy exposes which axes it will broadcast over, or perhaps some other efficient trick. I would actually also be happy with a function that doesn't support scalar arrays or different reduce functions: broadcast_inverse only needs to support arrays larger than 0-D and the sum reduce function.
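As far as I know, NumPy does not directly expose the broadcast axes, but they can be recomputed from the shapes. A minimal sketch, assuming NumPy >= 1.20 for np.broadcast_shapes (the helper name broadcast_axes is my own):

```python
import numpy as np

def broadcast_axes(shape, out_shape):
    """Axes of the result over which `shape` was broadcast to reach `out_shape`."""
    # Broadcasting aligns shapes from the right: the broadcast axes are the
    # extra leading axes of the result, plus any axis where the input had
    # length 1 but the output did not.
    n_excess = len(out_shape) - len(shape)
    leading = tuple(range(n_excess))
    grown = tuple(
        n_excess + i
        for i, s in enumerate(shape)
        if s == 1 and out_shape[n_excess + i] != 1
    )
    return leading + grown

a_shape, b_shape = (2, 1), (2, 3)
out = np.broadcast_shapes(a_shape, b_shape)   # (2, 3)
print(broadcast_axes(a_shape, out))           # (1,)
print(broadcast_axes((), out))                # (0, 1)
```

Reducing over exactly these axes (with keepdims=True for the non-leading ones) would undo the broadcast.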
You at least need to make sure that either your c or b shape is fixed, because from that you can easily compute the other's shape. – Vaibhav gusain