Ph.D. Student: Barzilay Ouriel
Subject: Active Vision: From Biokinematics and Animal Behavior to
Department: Department of Mechanical Engineering
Supervisors: Professor Alon Wolf, Professor Lihi Zelnik-Manor
Active vision is a process used by human beings and most animals to improve their visual recognition and avoid ill-posed visual problems. By combining motion with their visual senses and perception capabilities, active observers can solve basic visual problems more efficiently than passive observers and, as many studies have demonstrated, can address complex problems more easily. Autonomous robotic systems, therefore, should imitate this process for improved visual perception. Detecting and characterizing active vision strategies in biological systems and implementing them on a robotic system is a challenging and important task, which this study addressed through the exploration of active vision mechanisms in the owl. We investigated how, by imitating the barn owl's repertoire of motor behaviors, an autonomous agent could obtain augmented information on the structure of the environment and thereby achieve improved scan accuracy in object modeling.
Barn owls virtually lack eye movements, but are known to possess highly developed stereopsis capabilities. Their long and flexible necks allow them to perform stereotypic head movements while focusing on objects of interest. Prominently, barn owls perform conspicuous side-to-side movements, called peering, when introduced to a new environment. These movements, also performed by various other species, are believed to play an essential role in distance estimation and visual perception, by inducing motion parallax.
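The geometric principle behind peering is motion parallax: a known lateral head translation turns the image shift of a static point into a depth estimate. The following is a minimal sketch of that relation, not code from the thesis; the function name, the pinhole-camera assumption, and the numeric values are illustrative only.

```python
import numpy as np

def depth_from_parallax(baseline_m, focal_px, disparity_px):
    """Depth from lateral translation (motion parallax), pinhole model.

    A camera (or head) translating sideways by `baseline_m` meters sees a
    static point shift by `disparity_px` pixels in the image; with focal
    length `focal_px` (in pixels), depth is Z = baseline * focal / disparity.
    This is the same triangulation as two-view stereo, with the side-to-side
    peering motion supplying the baseline.
    """
    disparity_px = np.asarray(disparity_px, dtype=float)
    return baseline_m * focal_px / disparity_px

# Illustrative numbers: a 5 cm side-step, 600 px focal length, 10 px shift.
print(depth_from_parallax(0.05, 600.0, 10.0))  # 3.0 (meters)
```

Nearer objects produce larger image shifts for the same head excursion, which is why the side-to-side movement is informative about distance.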
After a novel experimental setup allowing simultaneous head motion capture and estimation of the incoming visual signal in barn owls was developed, we performed a thorough kinematic analysis and characterized the peering motions. Subsequently, a robotic agent equipped with a Kinect camera was constructed to investigate active vision mechanisms, with a focus on scan accuracy. The robotic platform produced 3D reconstructions of static environments in real time by means of Microsoft Research's KinectFusion algorithm. Scans performed with the extracted peering motions were compared to point clouds obtained from other types of scanning trajectories. The principal objective of this interdisciplinary research was to improve the scan accuracy of autonomous robots by enriching them with bio-inspired viewpoint manipulation.
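Comparing scans from different trajectories requires a scalar accuracy score per point cloud. One common choice, shown below as a hedged sketch (the thesis does not specify its metric; the function name and toy clouds are illustrative), is the mean nearest-neighbor distance from each scanned point to a reference cloud:

```python
import numpy as np

def mean_nn_error(scan, reference):
    """Mean distance from each scanned point to its nearest reference point.

    A lower score means the scan lies closer to the reference geometry.
    Brute-force pairwise distances; fine for small clouds (for large ones a
    k-d tree, e.g. scipy.spatial.cKDTree, would be the idiomatic choice).
    """
    d = np.linalg.norm(scan[:, None, :] - reference[None, :, :], axis=-1)
    return d.min(axis=1).mean()

# Toy example: one scanned point is off by 10 cm, the other is exact.
reference = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
scan = np.array([[0.0, 0.0, 0.1], [1.0, 0.0, 0.0]])
print(mean_nn_error(scan, reference))  # 0.05
```

Scoring each trajectory's point cloud against the same reference makes the comparison between peering-derived and alternative scanning motions quantitative.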