
(I'm using Eigen C++ library)

I need to solve a system of equations with the form of Ax = 0 in order to find the x vector (A is sparse).

EDIT: A is SPD (symmetric positive definite)

Because some of the x values are known, I removed the corresponding columns from A to create Af, moved them into another matrix Ac with the same dimensions as A, and multiplied -Ac by a vector that holds the known x values (and zeros everywhere else) to create a vector b.

Now I try to solve Af * x = b using SparseLU<SparseMatrix<float>>, but the factorization fails with this error:

THE MATRIX IS STRUCTURALLY SINGULAR ... ZERO COLUMN AT 480

Why is it a problem that I have a zero column? A zero row would clearly be a problem, but a zero column? All I did was change something like this:

x_1 + x_2 = 0

x_2 = 3

to

x_1 + 0 = -3

The solution is x_1 = -3 and x_2 = 3, even though putting these equations in a matrix would have produced a zero column.

How can I solve this problem?


3 Answers

2
votes

You have A and b = 0 given, and you know A is symmetric positive definite. Because A is SPD, it has no nontrivial null space (this is an important property). That is, the only x satisfying Ax = b = 0 is x = 0.

So you won't find a nonzero solution: it will always be x = 0. I say this to emphasize that Eigen can't give you anything else, the way the problem is formulated.

1
vote

Why is it a problem that I have a zero column?

Because it means that your equation ignores one of the components of the vector of unknowns, which results in the solution being non-unique (any value of the ignored unknown could satisfy the equation).

Because some of the x values are known, I removed them out of A to create Af and into another matrix Ac with the same dimensions as A, and multiplied -Ac with a vector that has the known x values and zeros in all other places to create a vector b.

I'm not sure how exactly you removed the x values from the matrix. They aren't components of the matrix, so they can't be removed from it. But if they are indeed known, you should substitute them into the system of equations, then remove the corresponding columns from the matrix, and finally remove extraneous rows to make the matrix square.

E.g. suppose you have A and b defined as follows:

    ⎛1 2 3⎞
A = ⎜2 4 5⎟,
    ⎝3 5 6⎠
b = (3,-5,8)ᵀ,

and your equation is

Ax=b.

Let x₂ be known to be -32. We can substitute it, yielding the following equation:

A'x'+c=b,

where

     ⎛1 3⎞
A' = ⎜2 5⎟,
     ⎝3 6⎠
x' = (x₁,x₃)ᵀ,
c = (-64,-128,-160)ᵀ.

c here is the product of the second column of A with x₂. After moving c to the RHS, yielding k = b - c, you'll get an overdetermined (albeit consistent) system

A'x'=k,

so you should remove one of the rows from A' and the same row from k. E.g. removing the last row will result in the equation

A"x'=k',

where

     ⎛1 3⎞
A" = ⎝2 5⎠,
k' = (67,123)ᵀ,

and whose solution is x'=(x₁,x₃)ᵀ=(34,11)ᵀ. This equation can be solved using the usual linear solvers that work with square matrices.

0
votes

My linear algebra is a bit rusty.

In order for the system to have a unique solution, the rank of the matrix needs to equal its dimension. If you have a zero column, then the rank is at most one less than the dimension and you have a free variable.

If you are trying to solve Ax = 0, then it is the null space that you are after.