
3-Dimensional Reconstruction from Monocular Image Sequences

Some Results

Description

Image-driven environment perception is one of the main research topics in the field of autonomous robot applications. The thesis of Sascha Jockel investigated and implemented an image-based three-dimensional reconstruction system for such robot applications in everyday table-top scenarios. The scene is perceived at two spatially and temporally varying positions by a micro-head camera mounted on the six-degree-of-freedom robot arm of our service robot TASER. The epipolar geometry and the fundamental matrix are computed via user interaction: ten corresponding corners, proposed by a Harris corner detector, are selected in both input images. The images are then rectified using the computed fundamental matrix so that corresponding scanlines share the same vertical image coordinates. Afterwards, stereo correspondence is established with the fast Birchfield algorithm, which yields a 2.5-dimensional depth map of the scene. Based on the depth map, a three-dimensional textured point cloud is presented as an interactive OpenGL scene model.
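
A minimal sketch of this pipeline, written with OpenCV rather than the original TASER implementation, is given below. The specific choices (the eight-point method for the fundamental matrix, stereoRectifyUncalibrated, an SGBM matcher whose Birchfield-Tomasi matching cost stands in for the Birchfield algorithm, and the placeholder reprojection matrix Q) are illustrative assumptions, and the user-selected correspondences are passed in as arrays instead of being picked interactively.

    import cv2
    import numpy as np

    def reconstruct(left_path, right_path, pts_left, pts_right):
        """pts_left / pts_right: Nx2 float32 arrays of user-chosen corresponding corners."""
        img_l = cv2.imread(left_path, cv2.IMREAD_GRAYSCALE)
        img_r = cv2.imread(right_path, cv2.IMREAD_GRAYSCALE)
        h, w = img_l.shape

        # Harris corner response; in the interactive system the user picks
        # about ten correspondences from these candidate corners
        harris_l = cv2.cornerHarris(np.float32(img_l), blockSize=2, ksize=3, k=0.04)

        # Fundamental matrix from the selected correspondences (eight-point method)
        F, inlier_mask = cv2.findFundamentalMat(pts_left, pts_right, cv2.FM_8POINT)

        # Uncalibrated rectification: homographies that map corresponding
        # epipolar lines onto the same scanline in both images
        _, H_l, H_r = cv2.stereoRectifyUncalibrated(pts_left, pts_right, F, (w, h))
        rect_l = cv2.warpPerspective(img_l, H_l, (w, h))
        rect_r = cv2.warpPerspective(img_r, H_r, (w, h))

        # Dense stereo correspondence; SGBM with its Birchfield-Tomasi matching
        # cost stands in here for the fast Birchfield algorithm of the thesis
        matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=9)
        disparity = matcher.compute(rect_l, rect_r).astype(np.float32) / 16.0

        # Back-project the disparity map to a 3-D point cloud; Q would normally
        # come from camera calibration and is only a placeholder guess here
        f = 0.8 * w
        Q = np.float32([[1, 0, 0, -0.5 * w],
                        [0, -1, 0, 0.5 * h],
                        [0, 0, 0, f],
                        [0, 0, 1, 0]])
        points_3d = cv2.reprojectImageTo3D(disparity, Q)
        return harris_l, rect_l, rect_r, disparity, points_3d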

The pictures below demonstrate the different steps of the reconstruction system. The pictures in the first row are the originally captured images. The second row presents the results of the corner detection algorithm, from which the user chooses a few corresponding points in both images. The third row shows the rectified images, and the fourth row shows the resulting depth map. The depth map is used to compute the depth of each pixel, and the resulting three-dimensional model is presented in the last row.
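
For a rectified pair, the per-pixel depth follows the standard relation Z = f * B / d (focal length times baseline over disparity). The sketch below, again an illustration under assumed values rather than the thesis code, turns a disparity map into a textured point cloud that could be handed to an OpenGL viewer; f_px and baseline_m are placeholder calibration values.

    import numpy as np

    def disparity_to_point_cloud(disparity, color_img, f_px=700.0, baseline_m=0.1):
        """Turn a disparity map into an N x 6 array (X, Y, Z, R, G, B).
        f_px and baseline_m are assumed placeholder calibration values."""
        h, w = disparity.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        valid = disparity > 0                      # skip pixels without a match
        Z = f_px * baseline_m / disparity[valid]   # depth from disparity: Z = f * B / d
        X = (u[valid] - w / 2.0) * Z / f_px        # back-project into camera coordinates
        Y = (v[valid] - h / 2.0) * Z / f_px
        rgb = color_img[valid]                     # texture each point with its pixel colour
        return np.column_stack([X, Y, Z, rgb])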

Images

Input images: left and right input image
Feature extraction: left and right image with selected corners
Epipolar lines: left and right image with epipolar lines
Rectified images: left and right rectified image
Depth image: computed depth map
3-D views: reconstructed 3-D scene (perspective 1)

Further Information