There have been previous questions (here, here and here) related to mine; however, my question has a different aspect that I have not seen addressed in any of them.
I have acquired a dataset for my research using a Kinect depth sensor. The dataset consists of .png images for both the depth and RGB streams, captured at the same instant. To give you a better idea, below are the frames:
EDIT: I am adding the edge detection output here; a sketch of how such edge maps can be computed follows the images.
Sobel edge detection output for:
RGB Image
Depth Image
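For reference, this is roughly how edge maps like the ones above can be produced with OpenCV. The file names are placeholders for my frames, and the exact preprocessing (kernel size, depth scaling) is an assumption rather than exactly what I used:

    import cv2
    import numpy as np

    def sobel_edges(gray):
        """Return a normalized Sobel gradient-magnitude image."""
        gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
        gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
        mag = np.sqrt(gx ** 2 + gy ** 2)
        return cv2.normalize(mag, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

    # Placeholder file names for the two frames shown above.
    rgb = cv2.imread("rgb_frame.png", cv2.IMREAD_GRAYSCALE)
    depth = cv2.imread("depth_frame.png", cv2.IMREAD_ANYDEPTH)  # keep 16-bit depth if present

    # Scale depth to 8-bit before edge detection so both edge maps are comparable.
    depth_8u = cv2.convertScaleAbs(depth, alpha=255.0 / max(depth.max(), 1))

    rgb_edges = sobel_edges(rgb)
    depth_edges = sobel_edges(depth_8u)

    cv2.imwrite("rgb_edges.png", rgb_edges)
    cv2.imwrite("depth_edges.png", depth_edges)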
What I am trying to do now is align these two frames to produce a combined RGBZ image.
I do not know the underlying camera parameters (intrinsics) or the baseline distance between the RGB and infrared sensors.
Is there a method that can be applied to match the RGB values to their corresponding Z values?
One idea I have is to detect edges in both images and try to match them, as in the sketch below.
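A minimal sketch of that idea, assuming OpenCV and the edge maps from the snippet above: it estimates a single 2D Euclidean warp between the two edge images with ECC (cv2.findTransformECC) and resamples the depth frame onto the RGB frame. Note that without the intrinsics and the sensor baseline this can only be approximate, because the true RGB-to-depth registration depends on the depth itself, so one global 2D warp cannot be exact at every distance:

    import cv2
    import numpy as np

    def align_depth_to_rgb(rgb_edges, depth_edges, depth):
        """Estimate a Euclidean warp between the two edge maps with ECC
        and apply it to the depth frame so it overlays the RGB frame."""
        template = rgb_edges.astype(np.float32) / 255.0
        moving = depth_edges.astype(np.float32) / 255.0

        warp = np.eye(2, 3, dtype=np.float32)  # start from the identity transform
        criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 200, 1e-6)
        _, warp = cv2.findTransformECC(template, moving, warp,
                                       cv2.MOTION_EUCLIDEAN, criteria)

        h, w = rgb_edges.shape
        # Nearest-neighbour resampling avoids blending depth values across discontinuities.
        aligned_depth = cv2.warpAffine(depth, warp, (w, h),
                                       flags=cv2.INTER_NEAREST | cv2.WARP_INVERSE_MAP)
        return aligned_depth

    # Usage sketch (rgb_bgr = the original colour frame loaded with cv2.imread):
    # aligned = align_depth_to_rgb(rgb_edges, depth_edges, depth)
    # rgbz = np.dstack([rgb_bgr, aligned])  # 4-channel RGBZ-style array

If the two sensors mostly differ by a shift, a simpler alternative would be phase correlation (cv2.phaseCorrelate) between the edge maps, which only recovers a translation but is less sensitive to initialization.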