Ph.D. Student: Amir Geva
Subject: Sensory Routines for Indoor Autonomous Quad-Copter
Department: Department of Computer Science
Supervisors: Prof. Ehud Rivlin, Dr. Hector Rotstein
Quad-copters are rapidly becoming industrial and military tools that perform a myriad of tasks. Today these craft are manually controlled, at least in part, but for the purposes of scalability, autonomous behavior will inevitably become essential.
To facilitate autonomous operation of a quad-copter, it is necessary to know where the craft is located with respect to its environment. When flying outdoors, the position can be sensed using the Global Positioning System (GPS), but since this system may fail, an alternative is necessary. By combining structure from motion with information about the environment, in the form of a sampled digital terrain map (DTM), the position of the quad-copter can be calculated with only a monocular camera as a sensor. This research describes means of integrating DTM information into a state-of-the-art structure-from-motion method called Bundle Adjustment.
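As a rough illustration of the idea (not the implementation developed in the thesis), a DTM constraint can be folded into a bundle-adjustment-style least-squares problem as an extra residual on the camera state. Everything here is hypothetical: the smooth terrain model `h_dtm`, the assumed above-ground altitude `agl`, and the weight `w_dtm`; a real DTM is a sampled grid, and real bundle adjustment also optimizes rotations and 3D points.

```python
import numpy as np
from scipy.optimize import least_squares

def h_dtm(x, y):
    # Hypothetical smooth terrain height model (stand-in for a sampled DTM).
    return 0.1 * x + 0.05 * y

def project(cam_pos, point):
    # Simplified pinhole projection: identity rotation, unit focal length.
    d = point - cam_pos
    return d[:2] / d[2]

def residuals(cam_pos, points, observations, w_dtm=1.0):
    # Standard reprojection residuals over all observed landmarks...
    res = []
    for p, obs in zip(points, observations):
        res.extend(project(cam_pos, p) - obs)
    # ...plus one DTM residual: camera altitude should equal the terrain
    # height at (x, y) plus an assumed above-ground altitude.
    agl = 10.0
    res.append(w_dtm * (cam_pos[2] - (h_dtm(cam_pos[0], cam_pos[1]) + agl)))
    return np.array(res)

# Synthetic data: known ground points observed from a camera whose true
# position is consistent with the terrain model.
true_cam = np.array([2.0, 3.0, h_dtm(2.0, 3.0) + 10.0])
points = np.array([[0.0, 0.0, 0.0], [4.0, 1.0, 0.0],
                   [1.0, 5.0, 0.0], [3.0, 4.0, 0.0]])
observations = [project(true_cam, p) for p in points]

sol = least_squares(residuals, x0=np.array([1.0, 2.0, 8.0]),
                    args=(points, observations))
print(np.round(sol.x, 3))
```

In this toy setup the optimizer recovers the camera position; the point of the sketch is only to show where the DTM term sits relative to the reprojection terms.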
Indoors, GPS is unavailable, and an alternative method is required. The first part of the research presented in this thesis extends prior work on the outdoor scenario, introducing new constraint types and a new smooth function model for the DTM. The method is compared to a previously available method called C-DTM and is shown to be superior. The thesis also introduces a localization method based on a combination of a LIDAR sensor and the DTM, for cases where poor visibility renders the camera useless.
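A minimal sketch of one way LIDAR and a DTM could be combined (the grid size, altitude, footprint, and exhaustive search are invented for illustration, not taken from the thesis): match a downward range profile against the sampled terrain map and pick the position with the smallest mismatch.

```python
import numpy as np

# Hypothetical sampled DTM: a 50x50 grid of terrain heights.
rng = np.random.default_rng(0)
dtm = rng.uniform(0.0, 5.0, size=(50, 50))

# Simulate ideal downward LIDAR ranges over a 5x5 footprint at a known
# position and altitude.
true_pos = (17, 23)
altitude = 30.0
ranges = altitude - dtm[true_pos[0]:true_pos[0] + 5,
                        true_pos[1]:true_pos[1] + 5]

# Exhaustive grid search: predicted ranges at each candidate position
# versus the measured profile.
best, best_err = None, np.inf
for i in range(dtm.shape[0] - 5):
    for j in range(dtm.shape[1] - 5):
        pred = altitude - dtm[i:i + 5, j:j + 5]
        err = np.sum((pred - ranges) ** 2)
        if err < best_err:
            best, best_err = (i, j), err
print(best)
```

With noiseless ranges the search recovers the true grid cell; a practical system would use a coarse-to-fine search or a filter rather than a full scan.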
For the indoor environment, a new method based on integrating bundle adjustment with a building floor plan is presented, and its performance is analyzed. In addition, because the processing power available on the quad-copter is limited, research has been done to reduce the processing load, including feature filtering and new lightweight methods for calculating camera orientation and position from single frames. Together, these methods provide the means for real-time control and navigation.
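One classical lightweight option for single-frame pose estimation (named here for illustration; not necessarily the method developed in the thesis) is the Direct Linear Transform: given six or more known 3D points and their image projections, solve linearly for the projection matrix and read the camera center off its null space.

```python
import numpy as np

def dlt_camera_center(world_pts, image_pts):
    # Standard 2n x 12 DLT system for the 3x4 projection matrix P.
    A = []
    for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    _, _, Vt = np.linalg.svd(np.array(A))
    P = Vt[-1].reshape(3, 4)
    # The camera center c is the right null vector of P: P @ [c; 1] = 0.
    _, _, Vt = np.linalg.svd(P)
    c = Vt[-1]
    return c[:3] / c[3]

# Synthetic check: unit-focal camera with identity rotation at a known center,
# observing six non-coplanar points.
center = np.array([1.0, 2.0, -5.0])
world = np.array([[0, 0, 0], [1, 0, 1], [0, 2, 1],
                  [2, 1, 3], [1, 2, 0], [3, 0, 2]], float)
image = []
for X in world:
    d = X - center
    image.append(d[:2] / d[2])

print(np.round(dlt_camera_center(world, image), 3))
```

A single linear solve like this is far cheaper than iterative optimization, which is the kind of trade-off that matters under the processing constraints described above.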