I have a rather unusual problem: a multivariate linear regression in which I need to find the intercept while ensuring that the sum of the coefficients is <= 1 and that each coefficient is non-negative. After a lot of searching online I found a great answer here:
The code below shows how I override the coefficients of the regression with the output of the code shared in that answer. My question at this point is: how do I calculate the new intercept value given the custom coefficients?
import numpy as np
from scipy.optimize import minimize
from sklearn.datasets import load_boston  # note: removed in scikit-learn 1.2

X, Y = load_boston(return_X_y=True)
# Keep only the first three features, since the model below uses X[:,0..2]
X = X[:, :3]
# Define the model (note that it has no intercept term)
model = lambda b, X: b[0] * X[:,0] + b[1] * X[:,1] + b[2] * X[:,2]
# The objective function to minimize (least-squares regression)
obj = lambda b, Y, X: np.sum((Y - model(b, X)) ** 2)
# Bounds: b[0], b[1], b[2] >= 0
bnds = [(0, None), (0, None), (0, None)]
# Constraint: b[0] + b[1] + b[2] <= 1 ("ineq" constraints in scipy mean fun(b) >= 0)
cons = [{"type": "ineq", "fun": lambda b: 1 - (b[0] + b[1] + b[2])}]
# Initial guess for b[0], b[1], b[2]:
xinit = np.array([0, 0, 1])
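# With constraints given and no explicit method, minimize falls back to SLSQP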
res = minimize(obj, args=(Y, X), x0=xinit, bounds=bnds, constraints=cons)
print(f"b1={res.x[0]}, b2={res.x[1]}, b3={res.x[2]}")
# Save the coefficients for further goodness-of-fit analysis
beta1 = res.x[0]
beta2 = res.x[1]
beta3 = res.x[2]
from sklearn.linear_model import LinearRegression
# Fit an ordinary unconstrained OLS model for comparison
model2 = LinearRegression()
model2.fit(X, Y)
print("Regression intercept = {}".format(model2.intercept_))
print("Regression coefficient(s) -> \n{}".format(model2.coef_))
r_sq_model2 = model2.score(X, Y)
print("Regression R-squared = {}".format(r_sq_model2))
model2.coef_ = np.array([ beta1, beta2, beta3 ])
print("\n* Overriden Regression coefficient(s) -> \n{}".format(model2.coef_))
r_sq_model2 = model2.score(X, Y)
print("Regression R-squared with adj coeff(s) = {}".format(r_sq_model2))
# HOW DO I FIND THE NEW INTERCEPT?
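One idea I had, though I am not sure it is correct: for fixed slope coefficients, the intercept that minimizes the sum of squared errors is the mean residual, i.e. mean(Y - X @ beta). A minimal sketch of that idea, reusing the variables from above:

# Candidate approach: set the intercept to the mean residual of the fixed-slope model
beta = np.array([beta1, beta2, beta3])
model2.intercept_ = np.mean(Y - X @ beta)
print("Overridden intercept = {}".format(model2.intercept_))
r_sq_model2 = model2.score(X, Y)
print("Regression R-squared with adj coeffs and intercept = {}".format(r_sq_model2))

Is the mean-residual intercept the right way to do this, or should the intercept instead be estimated as an extra free parameter inside the minimize call?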
Thanks for your help!