I have a point cloud from a depth image that was taken with the camera 30 degrees above the horizontal (rotated 30 degrees about the z-axis). I want to transform all of the points back to the positions they would have if the camera had been at 0 degrees, which I believe I can do with the following rotation matrix:
| cos(30)  -sin(30)  0 |
| sin(30)   cos(30)  0 |
|    0         0     1 |
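For reference, this is roughly how I am building that matrix in Eigen (just a sketch; converting the angle to radians and using UnitZ as the axis are my assumptions):

#include <Eigen/Geometry>
#include <cmath>

// 30 degrees converted to radians, rotating about the z-axis.
const float angle = 30.0f * static_cast<float>(M_PI) / 180.0f;
Eigen::Matrix3f rotation =
    Eigen::AngleAxisf(angle, Eigen::Vector3f::UnitZ()).toRotationMatrix();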
However, when looking at the PCL method for transforming a point cloud, I found this:
pcl::transformPointCloud (const PointCloud<PointT> &cloud_in,
                          PointCloud<PointT> &cloud_out,
                          const Eigen::Matrix<Scalar, 4, 4> &transform)
But why does it take a 4x4 matrix rather than the 3x3 rotation matrix above?
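In case it clarifies what I am attempting, this is roughly the call I expect to end up with (a sketch only; the point type, the identity-plus-rotation-block construction, and the z-axis choice are my guesses):

#include <pcl/common/transforms.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <Eigen/Geometry>
#include <cmath>

// Guess: put the 3x3 rotation into the top-left block of a 4x4 matrix,
// leaving the last column and bottom row as in the identity.
Eigen::Matrix4f transform = Eigen::Matrix4f::Identity();
transform.block<3,3>(0,0) =
    Eigen::AngleAxisf(30.0f * static_cast<float>(M_PI) / 180.0f,
                      Eigen::Vector3f::UnitZ()).toRotationMatrix();

pcl::PointCloud<pcl::PointXYZ>::Ptr cloud_in(new pcl::PointCloud<pcl::PointXYZ>);
pcl::PointCloud<pcl::PointXYZ> cloud_out;
pcl::transformPointCloud(*cloud_in, cloud_out, transform);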