This question is about the precision of computation using NumPy vs. Octave/MATLAB (the MATLAB code below has only been tested with Octave, however). I am aware of a similar question on Stack Overflow, namely this one, but it seems somewhat far from what I'm asking below.
Setup
Everything is running on Ubuntu 14.04.
Python version 3.4.0.
NumPy version 1.8.1 compiled against OpenBLAS.
Octave version 3.8.1 compiled against OpenBLAS.
Sample Code
Sample Python code.
    import numpy as np
    from scipy import linalg as la

    def build_laplacian(n):
        # n x n tridiagonal matrix with ones on the first sub- and
        # superdiagonals and zeros on the diagonal.
        lap = np.zeros([n, n])
        for j in range(n-1):
            lap[j+1][j] = 1
            lap[j][j+1] = 1
        lap[n-1][n-2] = 1  # redundant: the loop already sets these
        lap[n-2][n-1] = 1
        return lap

    def evolve(s, lap):
        # Apply exp(-1j*s*lap) to the first standard basis vector...
        wave = la.expm(-1j*s*lap).dot([1]+[0]*(lap.shape[0]-1))
        # ...then replace each amplitude by its squared modulus.
        for i in range(len(wave)):
            wave[i] = np.linalg.norm(wave[i])**2
        return wave
We now run the following.
    np.min(evolve(2, build_laplacian(500)))
which gives something on the order of `1e-34`.
We can produce similar code in Octave/MATLAB:
    function lap=build_laplacian(n)
      % n x n tridiagonal matrix with ones on the first sub- and
      % superdiagonals and zeros on the diagonal.
      lap=zeros(n,n);
      for i=1:(n-1)
        lap(i+1,i)=1;
        lap(i,i+1)=1;
      end
      lap(n,n-1)=1;  % redundant: the loop already sets these
      lap(n-1,n)=1;
    end

    function result=evolve(s, lap)
      % Apply expm(-1i*s*lap) to the first standard basis vector,
      % then replace each amplitude by its squared modulus.
      d=zeros(length(lap(:,1)),1); d(1)=1;
      result=expm(-1i*s*lap)*d;
      for i=1:length(result)
        result(i)=norm(result(i))^2;
      end
    end
We then run
    min(evolve(2, build_laplacian(500)))
and get `0`. In fact, `evolve(2, build_laplacian(500))(60)` gives something around `1e-100` or less (as expected).
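For a side-by-side check of that same entry from the Python code (Octave's 1-based entry 60 is index 59 in NumPy's 0-based indexing), one could run something like the following sketch, assuming the definitions above; the values in the comments are the ones reported in this question, not new measurements:

    wave = evolve(2, build_laplacian(500))
    print(wave[59].real)   # Octave's entry 60; Octave reports ~1e-100 or less
    print(np.min(wave))    # overall minimum; ~1e-34 as reported above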
The Question
Does anyone know what is responsible for such a large discrepancy between NumPy and Octave? (Again, I haven't tested the code with MATLAB, but I'd expect to see similar results.)
Of course, one can also compute the matrix exponential by first diagonalizing the matrix. I have done this and have gotten similar or worse results (with NumPy); the procedure is spelled out in the EDITS below, with a code sketch.
EDITS
My `scipy` version is 0.14.0. I am aware that Octave/MATLAB use the Padé approximation scheme, and am familiar with this algorithm. I am not sure what `scipy` does, but we can try the following.

1. Diagonalize the matrix with `numpy`'s `eig` or `eigh` (in our case the latter works fine since the matrix is Hermitian). As a result we get an array `D` of the eigenvalues of the original matrix and a matrix `U` whose columns are the corresponding eigenvectors, so that the original matrix is given by `U.dot(np.diag(D)).dot(U.T)`.

2. Exponentiate `D` (this is now easy since `D` holds the diagonal).

3. Now, if `M` is the original matrix and `d` is the original vector `d=[1]+[0]*n`, we get `scipy.linalg.expm(-1j*s*M).dot(d) = U.dot(np.exp(-1j*s*D) * U.T.dot(d))` (elementwise product of `np.exp(-1j*s*D)` with the vector `U.T.dot(d)`).
Unfortunately, this produces the same result as before. Thus this probably has something to do either with the way `numpy.linalg.eig` and `numpy.linalg.eigh` work, or with the way `numpy` does arithmetic internally.

So the question is: how do we increase `numpy`'s precision? Indeed, as mentioned above, Octave seems to do a much finer job in this case.
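One avenue, not tried in the post above, would be arbitrary-precision arithmetic via `mpmath`, which ships its own matrix `expm`. This is only a hedged sketch on a small matrix (it assumes `build_laplacian` from above; `mpmath`'s dense `expm` would be very slow at `n=500`):

    from mpmath import mp, matrix, expm

    mp.dps = 50  # work with 50 significant decimal digits

    n = 60  # small size: mpmath's expm is far slower than LAPACK
    A = matrix(build_laplacian(n).tolist())
    d = matrix([1] + [0]*(n-1))          # column vector e_1

    wave = expm(mp.mpc(0, -2) * A) * d   # exp(-1i*s*A)*d with s = 2
    probs = [abs(wave[i])**2 for i in range(n)]
    print(min(probs))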
Comments

- The code uses `expm` from `scipy`. What version of scipy are you using? – Warren Weckesser

- `expm` is a tricky function. mathworks.com/help/matlab/ref/expm.html cites an article by Moler about "19 dubious ways". The Padé approximation depends strongly on the preconditioning strategy. The Octave doc for `expm` also discusses this approximation issue. – hpaulj

- `scipy` calls `numpy`, no? Anyway, my version is `0.14.0`. – user2321808

- `scipy`/`numpy` is off by quite a few orders of magnitude. We can actually diagonalize the matrix first with `numpy`'s `eig` or `eigh`, and then take the exponential of the diagonal matrix. This produces the same result. I don't know which algorithm `scipy` uses, but it seems this either has to do with the way `numpy` diagonalizes matrices, or with the internal arithmetic workings. The question is, how can we increase `numpy`'s precision? – user2321808

- `expm` is implemented in scipy. The source code for the development version is in github.com/scipy/scipy/blob/master/scipy/linalg/matfuncs.py; the `expm` function there is just a thin wrapper of the real implementation in github.com/scipy/scipy/blob/master/scipy/sparse/linalg/… – Warren Weckesser