I am working on a project that involves creating a 3D model of an object using the Microsoft Kinect. My plan is to use the Microsoft Kinect SDK and OpenNI to capture point clouds of the object from different angles, and then use ICP to register them and build the 3D model of the object. Please correct me if I am wrong in the plan above. Since I am an amateur at this, I really don't know if I am going in the right direction.
My hardware and software details are: Microsoft Kinect, Windows 7 64-bit, Microsoft Visual Studio 2010, Microsoft Kinect SDK, OpenNI, PrimeSense, NITE (all installed using .exe self-extractors; I did not use CMake, I have kind of gotten fed up with it since I run into so many errors!)
As of now, I have been able to connect my Kinect, and using some demo tutorials online I was able to view the RGB data and the depth map. I have also been reading about OpenNI but was not able to make much progress there either (there is sample code in both C++ and C#). Now the questions:
How do I get the point cloud for each view I capture of the object? Should I use OpenNI for this?
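From reading around, my understanding is that each depth pixel (u, v, depth) can be back-projected to a 3D point with the depth camera's pinhole intrinsics (I believe OpenNI's `ConvertProjectiveToRealWorld` does this conversion for you). Here is a rough sketch I put together of the math, with the intrinsics passed in as parameters since I don't have calibrated values:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Point3 { float x, y, z; };

// Placeholder pinhole intrinsics; real values come from calibration
// or from the driver's own projective-to-world conversion.
struct Intrinsics { float fx, fy, cx, cy; };

// depth[v * width + u] is depth in millimetres; 0 means "no reading".
std::vector<Point3> depthToCloud(const std::vector<unsigned short>& depth,
                                 int width, int height,
                                 const Intrinsics& K)
{
    std::vector<Point3> cloud;
    for (int v = 0; v < height; ++v) {
        for (int u = 0; u < width; ++u) {
            unsigned short d = depth[v * width + u];
            if (d == 0) continue;          // skip invalid pixels
            Point3 p;
            p.z = d / 1000.0f;             // millimetres -> metres
            p.x = (u - K.cx) * p.z / K.fx; // pinhole back-projection
            p.y = (v - K.cy) * p.z / K.fy;
            cloud.push_back(p);
        }
    }
    return cloud;
}
```

Is this roughly the right idea, or is there a ready-made call in the Kinect SDK / OpenNI that I should use instead of doing the math myself?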
After getting the point cloud for each view, I plan to run the ICP algorithm. Any details or links I can use to learn about it and implement it?
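From what I have read so far, ICP alternates between matching each source point to its nearest target point and then solving for the transform that best aligns the matches, repeating until convergence. To check my understanding I sketched one iteration restricted to translation only (a real ICP also solves for rotation, e.g. via SVD of the cross-covariance of the matched pairs, and I gather libraries like PCL ship a ready-made `IterativeClosestPoint`):

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <limits>
#include <vector>

struct Pt { float x, y, z; };

// Brute-force nearest neighbour of p in `target` (squared distance).
static std::size_t nearest(const Pt& p, const std::vector<Pt>& target) {
    std::size_t best = 0;
    float bestD = std::numeric_limits<float>::max();
    for (std::size_t i = 0; i < target.size(); ++i) {
        float dx = p.x - target[i].x;
        float dy = p.y - target[i].y;
        float dz = p.z - target[i].z;
        float d = dx * dx + dy * dy + dz * dz;
        if (d < bestD) { bestD = d; best = i; }
    }
    return best;
}

// One translation-only ICP iteration: match each source point to its
// nearest target point, then shift the source by the mean residual.
void icpTranslationStep(std::vector<Pt>& source, const std::vector<Pt>& target) {
    float sx = 0, sy = 0, sz = 0;
    for (std::size_t i = 0; i < source.size(); ++i) {
        const Pt& q = target[nearest(source[i], target)];
        sx += q.x - source[i].x;
        sy += q.y - source[i].y;
        sz += q.z - source[i].z;
    }
    float n = static_cast<float>(source.size());
    for (std::size_t i = 0; i < source.size(); ++i) {
        source[i].x += sx / n;
        source[i].y += sy / n;
        source[i].z += sz / n;
    }
}
```

Is that the right mental model, and would you recommend implementing full ICP myself or just using an existing library?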
After running the ICP algorithm, I need to display the reconstructed 3D model. Should I do that from Visual Studio 2010 itself?
I came across software like MeshLab which can build a 3D model from .ply files; the .ply data is obtained from the Kinect depth map. Is this another direction I could look at?
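If it helps explain what I mean by the .ply route: my understanding is that the format is simple enough to write directly from a point cloud, and MeshLab can then open the file for viewing and meshing. Here is a rough sketch of an ASCII PLY writer I tried (vertex positions only, no faces or colours):

```cpp
#include <cassert>
#include <cstddef>
#include <fstream>
#include <string>
#include <vector>

struct Vert { float x, y, z; };

// Write a point cloud as an ASCII PLY file (header + one vertex per line).
bool writePly(const char* path, const std::vector<Vert>& pts) {
    std::ofstream out(path);
    if (!out) return false;
    out << "ply\n"
        << "format ascii 1.0\n"
        << "element vertex " << pts.size() << "\n"
        << "property float x\n"
        << "property float y\n"
        << "property float z\n"
        << "end_header\n";
    for (std::size_t i = 0; i < pts.size(); ++i)
        out << pts[i].x << " " << pts[i].y << " " << pts[i].z << "\n";
    return true;
}
```

Would dumping each frame's cloud like this and then doing the alignment and meshing inside MeshLab be a reasonable workflow, or is it better to keep everything in my own program?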
Thanks, Aditya