I am attempting to convert a sum of absolute deviations into a linear programming problem so that I can use CPLEX (or another solver). I am stuck on how to set up the matrices. The problem is as follows:
minimize abs(x1 - 5) + abs(x2 - 3)
s.t. x1 + x2 = 10
To transform the problem into linear form, I have introduced auxiliary variables t1 and t2 and set up the following constraints:
x1 - 5 <= t1
-(x1 - 5) <= t1
x2 - 3 <= t2
-(x2 - 3) <= t2
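Rearranged so that only variables appear on the left-hand side (which, as I understand it, is the form needed for Ax <= b), these become:

x1 - t1 <= 5
-x1 - t1 <= -5
x2 - t2 <= 3
-x2 - t2 <= -3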
I've set up the objective function as
c = [0,0,1,1]
but I am lost on how to set up
Ax <= b
in matrix form. What I have so far is:
A = [[ 1, -1,  0,  0],
     [-1, -1,  0,  0],
     [ 0,  0,  1, -1],
     [ 0,  0, -1, -1]]
b = [5, -5, 3, -3]
I have set up the equality constraint in matrix form as:
B = [1, 1, 0, 0]
b2 = [10]
When I run the following:
linprog(c,A_ub=A,b_ub=b,A_eq=B,b_eq=b2,bounds=[(0,None),(0,None)])
I get the following error message back:
ValueError: Invalid input for linprog: A_eq must have exactly two dimensions, and the number of columns in A_eq must be equal to the size of c
I know there is a solution because when I use scipy.optimize.minimize it finds [6, 4]. I'm sure the issue is that I am not formulating the input matrices correctly, but I am not sure how to set them up so that it runs.
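For reference, the minimize version was roughly the following sketch (the exact starting point may have been different; the optimum it reports for me is [6, 4]):

from scipy.optimize import minimize

# sum of absolute deviations
def objective(x):
    return abs(x[0] - 5) + abs(x[1] - 3)

# equality constraint x1 + x2 = 10
cons = ({'type': 'eq', 'fun': lambda x: x[0] + x[1] - 10},)

res = minimize(objective, x0=[1.0, 1.0],
               bounds=[(0, None), (0, None)],
               constraints=cons)
print(res.x)  # [6, 4] for me; any x1 in [5, 7] with x2 = 10 - x1 is optimal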
Edit - here is the code that does not run:
import numpy as np
from scipy.optimize import linprog, minimize
c = np.block([np.zeros(2),np.ones(2)])
print("c =>",c)
A = [[ 1, -1,  0,  0],
     [-1, -1,  0,  0],
     [ 0,  0,  1, -1],
     [ 0,  0, -1, -1]]
b = [[5, -5, 3, -3]]
print(A)
print(np.multiply(A,b))
B = [ 1, 1, 0, 0]
b2 = [10]
print(np.multiply(B,b2))
linprog(c, A_ub=A, b_ub=b, A_eq=B, b_eq=b2, bounds=[(0, None), (0, None)],
        options={'disp': True})