2 votes

My question is about a school project that I'm working on. It involves mapping 3D models of clothing (like a pair of jeans) onto a skeleton generated by my Kinect camera.

An example can be found here: http://www.youtube.com/watch?v=p70eDrvy-uM.

I have searched on this subject and found some related threads, like this one: http://forums.create.msdn.com/forums/t/93396.aspx - it demonstrates an approach using Brekel for motion capture. However, I have to present it in XNA.

I believe that the answer lies in the skeleton data of the 3D model (properly exported as a .FBX file). Is there a way to align or match that skeleton with a skeleton that the Kinect camera generates?

Thanks in advance!

Edit: I am making some progress. I have been playing around with some different models, trying to move some bones upward (a very simple use of CreateTranslation with a float calculated from the elapsed game time). It works if I choose the root bone, but it doesn't work on some bones (a hand or an arm, for example). If I track all the Transform properties of such a bone, including the X, Y, and Z values, I can see that something is changing; however, the chosen bone stays in its place. Does anyone have any thoughts?


4 Answers

7 votes

If you are interested, you'll find a demo here. It includes source code for real-time motion capture using the Kinect and XNA.

3 votes

I have been working on this off and on for a while now. A simple solution I'm using right now is to match the joint positions the NUI skeleton tracks to the rotations of the corresponding joints in a .fbx model. The .fbx model will most likely have many more joints than are tracked, and for those you can just iterate a rotation.

The fun part:

The Kinect tracks skeleton joint positions in skeleton space (coordinates from -1 to 1), while your model needs rotations in model space. Both represent a child's position or rotation relative to its parent bone in the hierarchy. Also, the rotations you need for a .fbx model are around an arbitrary axis.

What you need is the change from the .fbx model's bind pose to the pose represented by the Kinect data. To get this, you can do some vector math to find the rotation of a parent joint around an arbitrary axis, rotate it accordingly, and then move on down the skeleton chain.

Say the shoulder is point a and the elbow is point b in the bind pose of the .fbx model, with corresponding points a' and b' on the Kinect skeleton. For the bind pose we then have a vector V from the shoulder to the elbow.

V = b - a

The skeleton model will also have a vector V'

V' = b' - a'

So the axis of rotation for the shoulder will be

Vhat = V cross-product V'   (normalize Vhat before using it as a rotation axis)

The angle of rotation for the shoulder around Vhat is

Theta = acos( (V dot-product V') / (magnitude(V) * magnitude(V')) )

So you will want to rotate the shoulder joint of the .fbx model by Theta around Vhat.
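The axis/angle math above can be sketched in plain Python (tuples instead of XNA's Vector3, and illustrative names like `joint_rotation`), just to make the steps easy to check:

```python
import math

def sub(p, q):
    # Vector from q to p, component-wise.
    return (p[0] - q[0], p[1] - q[1], p[2] - q[2])

def cross(u, v):
    # Cross product of two 3-vectors.
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def dot(u, v):
    return u[0]*v[0] + u[1]*v[1] + u[2]*v[2]

def magnitude(v):
    return math.sqrt(dot(v, v))

def joint_rotation(a, b, a2, b2):
    """Axis and angle that rotate bind-pose bone a->b onto Kinect bone a2->b2."""
    V  = sub(b, a)       # bind-pose bone vector
    V2 = sub(b2, a2)     # Kinect-pose bone vector
    axis = cross(V, V2)  # rotation axis (normalize it before building a rotation)
    cos_theta = dot(V, V2) / (magnitude(V) * magnitude(V2))
    theta = math.acos(max(-1.0, min(1.0, cos_theta)))  # clamp for float error
    return axis, theta

# Example: a bone pointing along +X rotated to point along +Y
# is a 90-degree turn about the +Z axis.
axis, theta = joint_rotation((0, 0, 0), (1, 0, 0), (0, 0, 0), (0, 1, 0))
```

In XNA, the resulting axis and angle map onto Matrix.CreateFromAxisAngle (or Quaternion.CreateFromAxisAngle) after normalizing the axis.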

The model may seem to flop around a bit, so you may have to add some constraints or use quaternions or other tricks to keep it from looking like a dead fish.

Also, I'm not sure if XNA has a built-in function to rotate around an arbitrary axis. And if someone could double-check my math I'd appreciate it; I'm a little rusty.

0 votes

The Kinect SDK delivers only the positions of body parts, such as the head position or right-hand position. Separately, you can also read the depth stream from the Kinect sensor.

But currently the Kinect doesn't generate a 3D model of the body; you would have to build that yourself.

0 votes

I eventually settled for the Digital Runes option (with Kinect demo), which was almost perfect apart from a few problems that I wasn't able to solve.

But because I had to hurry for school, we decided to turn the project around completely, and our new solution did not involve the Kinect. The Digital Runes examples did help a lot, though.