I have implemented a TCB spline in Python with NumPy. The critical piece of the code appears below:
np.einsum('km,km,kl,lm->m',xdiffpow_knot, h_pow_knot[:,i], hermite_matrix, lag_knot[:,i])
where k and l are always 4 (k being powers of 0 to 3 and l being the 4 control points used by TCB splines) and m is the length of the array x I want to interpolate.
I used np.einsum at the time because I couldn't figure out how to express the necessary matrix operations without it. It seems as though I'm left with an extra m in the result (note that the first two km terms are simply element-wise multiplication).
Now I'm reimplementing this in Julia without einsum (so I can take advantage of algorithmic differentiation via ForwardDiff, ReverseDiff, etc.). How do I replicate the above einsum with plain matrix operations?
What have I tried?
Thinking solely about the dimensions involved and making the dot product work, it feels as though I'm missing an m-element vector. The only m-element vector that makes sense is an all-ones vector, which I believe would act as a summation. But I'd like to validate that this is correct in theory before I rely on it.
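For what it's worth, the ones-vector intuition can be checked numerically with made-up arrays of the right shapes (these random stand-ins are not the real spline data): the einsum reduces to an element-wise product followed by a sum over k, and that sum is exactly a left-multiplication by a ones vector.

```python
import numpy as np

rng = np.random.default_rng(0)
k, l, m = 4, 4, 7
X = rng.normal(size=(k, m))   # stands in for xdiffpow_knot
Hp = rng.normal(size=(k, m))  # stands in for h_pow_knot[:, i]
M = rng.normal(size=(k, l))   # stands in for hermite_matrix
L = rng.normal(size=(l, m))   # stands in for lag_knot[:, i]

ref = np.einsum('km,km,kl,lm->m', X, Hp, M, L)

# Equivalent matrix form: contract l via M @ L, multiply element-wise, sum over k
alt = np.sum(X * Hp * (M @ L), axis=0)

# The sum over k is the same as left-multiplying by an all-ones vector
alt_ones = np.ones(k) @ (X * Hp * (M @ L))

assert np.allclose(ref, alt)
assert np.allclose(ref, alt_ones)
```

Both forms use only element-wise products, matrix multiplication, and a reduction, so they translate directly to Julia.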
Full Code. It's Ugly...
import numpy as np

# Hermite basis matrix (rows correspond to the powers t^3, t^2, t, 1)
HERMITE_MATRIX = np.array([[ 2., -2.,  1.,  1.],
                           [-3.,  3., -2., -1.],
                           [ 0.,  0.,  1.,  0.],
                           [ 1.,  0.,  0.,  0.]])

def hermite(x_knot, y_knot, tension=0.0, continuity=0.0, bias=0.0,
            weight_fwd_knot=None, hermite_matrix=HERMITE_MATRIX):
    order = 3
    h_knot = np.diff(x_knot)  # interval widths
    h_pow_knot = (1.0 / h_knot) ** np.arange(order, -1, -1)[:, None]
    is_first_knot = np.isclose(x_knot, x_knot[0])
    is_last_knot = np.isclose(x_knot, x_knot[-1])
    y_next_knot = np.roll(y_knot, -1)
    y_prev_knot = np.roll(y_knot, 1)
    x_next_knot = np.roll(x_knot, -1)
    x_prev_knot = np.roll(x_knot, 1)
    if weight_fwd_knot is None:
        weight_fwd_knot = np.where(is_last_knot, 0.0,
                                   np.where(is_first_knot, 1.0,
                                            (x_next_knot - x_knot) / (x_next_knot - x_prev_knot)))
    weight_bak_knot = 1.0 - weight_fwd_knot
    # One-sided slopes, falling back to the interior difference at the end knots
    dydxfwd_knot = np.where(is_last_knot,
                            (y_knot - y_prev_knot) / (x_knot - x_prev_knot),
                            (y_next_knot - y_knot) / (x_next_knot - x_knot))
    dydxbak_knot = np.where(is_first_knot,
                            (y_next_knot - y_knot) / (x_next_knot - x_knot),
                            (y_knot - y_prev_knot) / (x_knot - x_prev_knot))
    # Kochanek-Bartels incoming/outgoing tangents from tension/continuity/bias
    dy_in_knot = (1 - tension) * ((1 + continuity) * (1 - bias) * dydxfwd_knot * weight_fwd_knot
                                  + (1 - continuity) * (1 + bias) * dydxbak_knot * weight_bak_knot)
    dy_out_knot = (1 - tension) * ((1 - continuity) * (1 - bias) * dydxfwd_knot * weight_fwd_knot
                                   + (1 + continuity) * (1 + bias) * dydxbak_knot * weight_bak_knot)
    lag_knot = np.array([y_knot[:-1], y_next_knot[:-1],
                         dy_out_knot[:-1] * h_knot, dy_in_knot[1:] * h_knot])

    def f(x):
        # Index of the interval containing each x, clamped to valid segments
        i = np.maximum(np.minimum(np.searchsorted(x_knot, x, side="right") - 1,
                                  x_knot.size - 2), 0)
        xdiffpow_knot = (x - x_knot[i]) ** np.arange(order, -1, -1)[:, None]
        return np.einsum('km,km,kl,lm->m', xdiffpow_knot, h_pow_knot[:, i],
                         hermite_matrix, lag_knot[:, i])

    return f
Comments:

…x arrays, which are the ones that impact or are impacted by the non-differentiable functions. It is almost always the case that the jacobian is something like `dy_knot/dy`. Thus far I've implemented a linear, cubic, and exponential tension spline in Julia and all `dy_knot/dy` seem to work as expected. – MikeRand

`dy/dy_knot` is the jacobian. – MikeRand