This was an assignment in ME 592 - Data Analytics and Machine Learning for Cyber-Physical Systems at Iowa State University.
The Computer Vision Group at the Technical University of Munich (TUM) explored using an Asus Xtion, a sensor similar to the Microsoft Kinect v1, to scan a large cabinet. The scanner has an RGB camera as well as an IR depth camera. We were assigned three tasks, using parts of example code provided by TUM.
The RGB and depth images were captured at slightly (and I do mean slightly) different times, so our first task was to associate each RGB image with its corresponding depth image by comparing the difference between their timestamps against a threshold.
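The association step can be sketched roughly like this (a simplified version of the matching logic in TUM's `associate.py` tool; the 0.02-second default threshold and the greedy closest-first matching are assumptions, not our exact submission):

```python
def associate(rgb_stamps, depth_stamps, max_difference=0.02):
    """Match each RGB timestamp to the closest depth timestamp within
    max_difference seconds, greedily taking the smallest gaps first.
    Timestamps are floats in seconds; returns sorted (rgb, depth) pairs."""
    # Enumerate every candidate pair under the threshold, best matches first.
    candidates = sorted(
        (abs(a - b), a, b)
        for a in rgb_stamps
        for b in depth_stamps
        if abs(a - b) < max_difference
    )
    matches, used_rgb, used_depth = [], set(), set()
    for _, a, b in candidates:
        # Each timestamp may participate in at most one association.
        if a not in used_rgb and b not in used_depth:
            used_rgb.add(a)
            used_depth.add(b)
            matches.append((a, b))
    return sorted(matches)
```

For example, with RGB stamps `[1.00, 1.05, 1.10]` and depth stamps `[1.004, 1.06, 1.30]`, the first two pairs associate and the stragglers are dropped.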
The second task was to use a script from the TUM website to create point cloud data for each associated RGB and depth pair found in task one. The script took a single RGB file and a single depth file as input, which was tedious given the thousand or so image pairs we had. To get around this, we combined the scripts from tasks one and two: as each timestamp association was made in task one, the corresponding point cloud was generated immediately. The resulting .ply files can be visualized in a program like Meshlab, seen below.
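The core of the point cloud step is back-projecting each depth pixel through the pinhole camera model and attaching the RGB color. A minimal sketch is below; the intrinsics (fx = fy = 525.0, cx = 319.5, cy = 239.5) and the depth scale factor of 5000 units per metre are the defaults documented for the TUM RGB-D tools, and may not match the actual Xtion calibration:

```python
import numpy as np

# Assumed TUM RGB-D defaults, not calibrated values for our sensor.
FX, FY, CX, CY = 525.0, 525.0, 319.5, 239.5
FACTOR = 5000.0  # raw 16-bit depth value per metre

def depth_to_points(rgb, depth):
    """rgb: (H, W, 3) uint8, depth: (H, W) uint16 aligned to rgb.
    Returns an (N, 6) array of x, y, z, r, g, b rows for every pixel
    with valid (non-zero) depth."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.astype(np.float64) / FACTOR          # depth in metres
    valid = z > 0                                  # 0 means "no reading"
    x = (u - CX) * z / FX                          # pinhole back-projection
    y = (v - CY) * z / FY
    pts = np.stack([x[valid], y[valid], z[valid]], axis=1)
    return np.hstack([pts, rgb[valid].astype(np.float64)])

def write_ply(points, path):
    """Write an ASCII .ply file that Meshlab can open directly."""
    header = (
        "ply\nformat ascii 1.0\n"
        f"element vertex {len(points)}\n"
        "property float x\nproperty float y\nproperty float z\n"
        "property uchar red\nproperty uchar green\nproperty uchar blue\n"
        "end_header\n"
    )
    with open(path, "w") as f:
        f.write(header)
        for x, y, z, r, g, b in points:
            f.write(f"{x} {y} {z} {int(r)} {int(g)} {int(b)}\n")
```

Calling `depth_to_points` on each associated pair and writing the result with `write_ply` reproduces the one-file-per-pair output we batched in our combined script.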
The last task we were assigned was essentially to repeat the first two steps, this time using a ROS bag file. This was our first (ever) experience with ROS, which has a steep learning curve. The additional challenge was that the provided sample code is now quite old: ROS has changed considerably since the original program was written, and there were many dependency issues. Nevertheless, after a great deal of research and debugging, we were ultimately successful. The video below shows the visualization made using RViz.