Technion - Israel Institute of Technology - The Graduate School
Ph.D. Thesis
Ph.D. Student: Ronen Lerner
Subject: Constrained Pose and Motion Estimation
Department: Computer Science
Supervisor: Professor Ehud Rivlin
Full thesis text: English version


Abstract

Developing a fully autonomous navigation system for vehicles and UAVs is a challenging task. Most navigation systems today either integrate inertial/odometry measurements, which yields a drift that grows over time, or rely on GPS measurements, which may be unavailable in some circumstances. Vision-based navigation algorithms compute the camera pose either from a set of landmarks or by ego-motion integration.

For the first alternative, correspondences between the image features and the 3D landmarks must be established, which is a challenging task. The ego-motion integration alternative suffers from the same drift that appears in inertial navigation, as illustrated by the sketch below.
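To make the drift issue concrete, here is a minimal Python sketch (an illustration, not code from the thesis) of planar dead reckoning: noisy relative motions are composed one after another, and because the errors are never corrected by an external reference, the position estimate drifts without bound.

    import numpy as np

    def compose(pose, delta):
        # Compose an absolute planar pose (x, y, heading) with a relative
        # motion (dx, dy, dtheta) expressed in the vehicle frame.
        x, y, th = pose
        dx, dy, dth = delta
        return (x + np.cos(th) * dx - np.sin(th) * dy,
                y + np.sin(th) * dx + np.cos(th) * dy,
                th + dth)

    rng = np.random.default_rng(0)
    true_pose = est_pose = (0.0, 0.0, 0.0)
    for _ in range(1000):
        step = (1.0, 0.0, 0.01)                                 # true relative motion
        noisy = tuple(v + rng.normal(0.0, 0.01) for v in step)  # measured motion
        true_pose = compose(true_pose, step)                    # ground truth
        est_pose = compose(est_pose, noisy)                     # integrated estimate

    # Accumulated position error after 1000 uncorrected integration steps.
    print(np.hypot(est_pose[0] - true_pose[0], est_pose[1] - true_pose[1]))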


What unifies all pose estimation algorithms is that they exploit the available data - both the captured images and the information about the surrounding environment - to define a system of constraints on the navigation parameters. The accuracy and robustness of the different algorithms follow directly from the quality of this constraint system. The more information at our disposal, the better conditioned the constraint system may become, which in turn yields stronger algorithms. However, there may be several ways to use this information for the navigation task: some methods use it sub-optimally, while others fully exploit the available data. The core of this research is to examine how to properly utilize the available data in order to achieve navigation results that are as accurate and robust as possible.


A novel algorithm for pose and motion estimation using an image sequence and a Digital Terrain Map (DTM) is presented. Using the DTM as a global reference enables recovery of the absolute position and orientation of the camera with respect to an external reference frame. Since the full structure of the observed terrain is encoded in the DTM, no specific features need to be detected and matched (in contrast to the landmark-based approach).
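A basic operation such an approach relies on is back-projecting an image pixel onto the terrain: the viewing ray of the pixel, placed at the hypothesized camera pose, is intersected with the surface encoded by the DTM. The following is a minimal sketch under assumed conventions (ray marching with bisection refinement, and a DTM exposed as a height function); it is illustrative only and not necessarily the formulation used in the thesis.

    import numpy as np

    def backproject_to_dtm(cam_pos, R, pixel_dir, dtm_height,
                           max_range=10000.0, step=10.0):
        # Intersect the viewing ray of one pixel with the terrain surface.
        #   cam_pos    -- camera position in the terrain frame (x, y, z)
        #   R          -- 3x3 rotation from the camera frame to the terrain frame
        #   pixel_dir  -- unit viewing ray of the pixel, in the camera frame
        #   dtm_height -- callable (x, y) -> terrain elevation from the DTM
        d = R @ pixel_dir                        # ray direction in the terrain frame
        prev_t = 0.0
        prev_above = cam_pos[2] > dtm_height(cam_pos[0], cam_pos[1])
        for t in np.arange(step, max_range + step, step):
            p = cam_pos + t * d
            above = p[2] > dtm_height(p[0], p[1])
            if prev_above and not above:         # ray crossed the surface
                lo, hi = prev_t, t               # refine the crossing by bisection
                for _ in range(20):
                    mid = 0.5 * (lo + hi)
                    q = cam_pos + mid * d
                    if q[2] > dtm_height(q[0], q[1]):
                        lo = mid
                    else:
                        hi = mid
                return cam_pos + 0.5 * (lo + hi) * d
            prev_t, prev_above = t, above
        return None                              # ray never met the terrain

    # Toy usage: flat terrain at 100 m, camera at 500 m looking slightly down.
    ground = backproject_to_dtm(np.array([0.0, 0.0, 500.0]), np.eye(3),
                                np.array([0.995, 0.0, -0.0995]),
                                lambda x, y: 100.0)
    print(ground)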


Two alternative methods are proposed for utilizing the available data to construct the constraints on the navigation parameters. The first computes feature correspondences between two selected images and then uses them, together with the 3D model, to construct the constraints. The second incorporates the well-known brightness constancy constraint, which relates directly to the brightness of the image pixels. This yields a "direct-method" scheme that spares us the need to compute feature correspondences, which can be very difficult in some scenarios.
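For reference, the brightness constancy constraint mentioned above is commonly written as follows (this is the standard form; the exact parameterization used in the thesis may differ):
\[
I(x + u,\; y + v,\; t + 1) = I(x, y, t),
\]
and its first-order linearization, which direct methods typically use, is
\[
I_x\, u + I_y\, v + I_t = 0,
\]
where $(u, v)$ is the image motion of the pixel and $I_x$, $I_y$, $I_t$ are the spatial and temporal derivatives of the image brightness.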