
I have a symmetric positive definite matrix "A" of dimension n×n. I want to compute its inverse and its square root. My questions are:

  1. I can compute the inverse using the LAPACK subroutine "dpotri", which returns the upper/lower triangular part of the inverse of A. Can I compute the square root of A from the information obtained from dpotri, or do I need to use "dpotrf" to compute the square root separately? The order is not important; that is, can we use "dpotrf" first to compute A = LL' (where L is the Cholesky factor, i.e. a square root of A) and from that compute the inverse of A without using dpotri?

  2. I only have the upper triangular part of A; the rest of the elements are initially set to 0. I could fill in the lower part by copying elements from the upper part, but I want to avoid this operation. Can we use "dpotri" or "dpotrf" on a matrix "A" that has only its upper part populated (with the rest of the elements set to 0)?
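A minimal sketch of the Cholesky route, using SciPy's low-level LAPACK wrappers (the 3×3 matrix is illustrative). With `lower=0`, dpotrf and dpotri read only the upper triangle, so a matrix whose lower part is zero is fine; and dtrtri shows how the inverse can be obtained from the dpotrf factor alone, without dpotri:

```python
import numpy as np
from scipy.linalg.lapack import dpotrf, dpotri, dtrtri

# Example SPD matrix. Only the upper triangle is stored;
# the strictly lower part is left as zeros.
A_full = np.array([[4.0, 2.0, 0.6],
                   [2.0, 5.0, 1.0],
                   [0.6, 1.0, 3.0]])
A_upper = np.triu(A_full)

# dpotrf with lower=0 reads only the upper triangle, so the
# zeroed lower part is never touched. Returns U with A = U' U,
# i.e. U is a (Cholesky) square root of A.
U, info = dpotrf(A_upper, lower=0)
assert info == 0

# Route 1: dpotri reuses the dpotrf factor and overwrites it
# with the upper triangle of inv(A) -- one factorization
# serves both the square root and the inverse.
inv_upper, info = dpotri(U, lower=0)
assert info == 0
A_inv = np.triu(inv_upper) + np.triu(inv_upper, 1).T  # symmetrize

# Route 2: invert the triangular factor directly with dtrtri;
# since A = U' U, we have inv(A) = inv(U) inv(U)'.
U_inv, info = dtrtri(U, lower=0)
assert info == 0
A_inv2 = U_inv @ U_inv.T
```

Both routes give the same inverse, so dpotri is a convenience, not a necessity, once the dpotrf factor is in hand.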

Just to make sure that I understand you correctly: you need the Cholesky factorisation of both the matrix and its square root, right? – Kaveh Vahedipour
No, I just need the Cholesky factorization of the matrix "A". So A becomes LL', where L' is the square root. – user402940
OK. The other route is via dsyevr, i.e. eigenvalues and eigenvectors. But technically it depends on how well conditioned your matrices are. If they are well conditioned, dsyevr converges very quickly and should easily outperform the Cholesky approach; a poorly conditioned matrix, however, will behave better in dpotrf. – Kaveh Vahedipour
Sounds great. My matrix is a correlation matrix and is symmetric positive definite. It gets created during each iteration of an algorithm and slowly becomes ill-conditioned in later iterations. I think "dsyevr" can be used instead of dpotrf in the initial iterations, where it outperforms the Cholesky factorization. – user402940
Would you mind accepting this as an answer then? You'd be totally awesome. This not only helps my credit, but also makes it easier for folks to find the answer later. – Kaveh Vahedipour

1 Answer


OK. The other route is via dsyevr, i.e. eigenvalues and eigenvectors. But technically it depends on how well conditioned your matrices are. If they are well conditioned, dsyevr converges very quickly and should easily outperform the Cholesky approach; a poorly conditioned matrix, however, will behave better in dpotrf.
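The eigenvalue route can be sketched as follows (using `scipy.linalg.eigh`, whose `driver='evr'` option dispatches to the dsyevr family; the matrix is illustrative). One decomposition yields both the square root and the inverse:

```python
import numpy as np
from scipy.linalg import eigh

A = np.array([[4.0, 2.0, 0.6],
              [2.0, 5.0, 1.0],
              [0.6, 1.0, 3.0]])

# Symmetric eigendecomposition A = V diag(w) V'; driver='evr'
# selects the dsyevr path in LAPACK.
w, V = eigh(A, driver='evr')

# For an SPD matrix all eigenvalues are positive, so apply
# sqrt and reciprocal to the eigenvalues:
A_sqrt = V @ np.diag(np.sqrt(w)) @ V.T   # symmetric square root
A_inv  = V @ np.diag(1.0 / w) @ V.T      # inverse of A
```

Note that this gives the *symmetric* square root (A_sqrt @ A_sqrt == A, with A_sqrt itself symmetric), which differs from the triangular Cholesky factor; both are valid square roots of an SPD matrix.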