I'm currently working on an augmented reality application. Since the targeted device is an optical see-through HMD, I need to calibrate its display to achieve correct registration of virtual objects. I used this implementation of SPAAM for Android to do it, and the results are precise enough for my purpose.
My problem is that the calibration application outputs a 4x4 projection matrix, which I could use directly with OpenGL, for example. But the augmented reality framework I use only accepts optical calibration parameters in the following format: a field of view parameter + an aspect ratio parameter + a 4x4 view matrix.
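For context, a framework that takes a field of view and an aspect ratio presumably rebuilds a symmetric perspective matrix from them internally, gluPerspective-style. A minimal sketch of that reconstruction (the function name and row-major layout are my own assumptions, not the framework's actual code):

```python
import math

def perspective(fov_y_deg, aspect, near, far):
    """Build a symmetric gluPerspective-style projection matrix (row-major)
    from a vertical field of view in degrees and a width/height aspect ratio."""
    f = 1.0 / math.tan(math.radians(fov_y_deg) / 2.0)  # cotangent of fov/2
    return [
        [f / aspect, 0.0, 0.0,                           0.0],
        [0.0,        f,   0.0,                           0.0],
        [0.0,        0.0, (far + near) / (near - far),   2.0 * far * near / (near - far)],
        [0.0,        0.0, -1.0,                          0.0],
    ]
```

This only produces an on-axis (symmetric) frustum, which is part of why squeezing a full calibration matrix into fov + aspect loses information.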
Here is what I have:
Correct calibration result in the wrong format:
6.191399, 0.114267, -0.142429, -0.142144
-0.100027, 11.791289, 0.05604, 0.055928
0.217304, -0.486923, -0.990243, -0.988265
0.728104, 0.005347, -0.197072, 0.003122
You can take a look at the code that generates this result here.
What I understand is that the Single Point Active Alignment Method produces a 3x4 matrix; the program then multiplies this matrix by an orthographic projection matrix to get the result above. Here are the parameters used to build the orthographic matrix:
near : 0.1, far : 100.0, right : 960, left : 0, top : 540, bottom: 0
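If I understand that step correctly, the orthographic matrix built from those parameters would be a standard glOrtho-style matrix. A sketch of that construction (my own, not the calibration app's actual code), using the values quoted above:

```python
def ortho(left, right, bottom, top, near, far):
    """Build a glOrtho-style orthographic projection matrix (row-major)."""
    return [
        [2.0 / (right - left), 0.0,                  0.0,                  -(right + left) / (right - left)],
        [0.0,                  2.0 / (top - bottom), 0.0,                  -(top + bottom) / (top - bottom)],
        [0.0,                  0.0,                  -2.0 / (far - near),  -(far + near) / (far - near)],
        [0.0,                  0.0,                  0.0,                  1.0],
    ]

# the parameters quoted above
O = ortho(0.0, 960.0, 0.0, 540.0, 0.1, 100.0)
```

The left/right/top/bottom values (0, 960, 540) suggest the SPAAM matrix works in screen-pixel coordinates, and this matrix maps them into normalized device coordinates.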
Bad calibration result in the right format:
Param 1 : 12.465418
Param 2 : 1.535465
0.995903, -0.046072, 0.077501, 0.000000
0.050040, 0.994671, -0.047959, 0.000000
-0.075318, 0.051640, 0.992901, 0.000000
114.639359, -14.115030, -24.993097, 1.000000
I don't have any information on how these results were obtained.
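One sanity check that can be done without that information: if the matrix above really is a rigid view matrix, its upper-left 3x3 block should be (close to) a rotation, i.e. R·Rᵀ ≈ I with det(R) ≈ +1. A quick check with the values copied from above (helper name is mine):

```python
def is_near_rotation(R, tol=1e-2):
    """Check that R * R^T is close to the identity and det(R) is close to +1."""
    RRt = [[sum(R[i][k] * R[j][k] for k in range(3)) for j in range(3)]
           for i in range(3)]
    orthonormal = all(abs(RRt[i][j] - (1.0 if i == j else 0.0)) < tol
                      for i in range(3) for j in range(3))
    det = (R[0][0] * (R[1][1] * R[2][2] - R[1][2] * R[2][1])
         - R[0][1] * (R[1][0] * R[2][2] - R[1][2] * R[2][0])
         + R[0][2] * (R[1][0] * R[2][1] - R[1][1] * R[2][0]))
    return orthonormal and abs(det - 1.0) < 0.1

# upper-left 3x3 block of the view matrix quoted above
R = [[ 0.995903, -0.046072,  0.077501],
     [ 0.050040,  0.994671, -0.047959],
     [-0.075318,  0.051640,  0.992901]]
```

With a loose tolerance the block above does pass as a rotation, which at least supports reading that matrix as rotation + translation (the last row being the translation part).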
I read these parameters from binary files, and I don't know whether the matrices are stored in row-major or column-major order, so the two matrices may need to be transposed.
My question is: is it possible, and if so, how, to get these three parameters from the first projection matrix I have?
fov = 2.0*atan(1.0/prjM[1][1])*180.0/PI; aspect = prjM[1][1]/prjM[0][0]
- see How to recover view space position given view space depth value and ndc xy – Rabbid76
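Applying those formulas to the calibration matrix quoted at the top of the question (note that both entries used sit on the diagonal, so the row- vs column-major ambiguity doesn't matter here; also, that matrix has non-zero off-diagonal terms, so this only recovers an approximate symmetric frustum):

```python
import math

# diagonal entries of the calibration projection matrix quoted above
p00 = 6.191399
p11 = 11.791289

fov_y = 2.0 * math.atan(1.0 / p11) * 180.0 / math.pi  # vertical FOV, ~9.70 degrees
aspect = p11 / p00                                     # width/height ratio, ~1.90
```

The off-axis terms of the calibrated matrix (the skew and the off-center principal point) are exactly what this fov + aspect representation throws away, which would explain a loss of registration accuracy after the conversion.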