|M.Sc. Student||Shalev Omer|
|Subject||Robot Navigating in Orchards using Top-View Imagery|
|Department||Department of Autonomous Systems and Robotics||Supervisor||Professor Amir Degani|
Mobile robots are becoming common in agriculture and are used for a variety of purposes. In orchards, a mobile robot can perform various tasks, such as sensing plant stress, detecting pests, monitoring yield, or selective spraying. A common requirement across these tasks is autonomous and accurate navigation. While a rough location estimate suffices for some tasks, others require centimeter-level accuracy to enable precise sensing or manipulation capabilities, i.e., precision agriculture. For example, accurate localization of an Unmanned Ground Vehicle (UGV) enables accurate and selective pesticide spraying based on each tree's status and history. The ultimate objective of this thesis is to provide navigation solutions that can serve precision agriculture use cases.
Navigation of ground vehicles in orchards is a complex problem that has yet to be fully addressed. Typical navigation approaches are not suited to the characteristics of the orchard environment, such as its large dimensions, difficult terrain, and homogeneous scenery. In addition, Global Positioning System (GPS) localization is usually not applicable in orchards due to signal occlusions. To alleviate these difficulties, we propose to use top-view images of the orchard acquired in real time. This auxiliary sensing provides the ground vehicle with additional information about its surroundings.
Our navigation approaches rely on computer vision techniques applied to the top-view images. By extracting “canopy masks” from the images, we form a heterogeneous yet compact representation of the orchard. These techniques also allow us to form a semantic tree map that distinguishes between the individual trees and labels them. Using these representations, we tackle the navigation challenges in new ways that are GPS-independent and avoid the use of artificial landmarks.
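As an illustration only (not the thesis's actual pipeline), a canopy mask can be extracted from a top-view RGB image by thresholding a vegetation index such as excess green (ExG = 2g − r − b on chromaticity-normalized channels); the threshold value and the synthetic image below are assumptions:

```python
import numpy as np

def canopy_mask(rgb, threshold=0.05):
    """Segment vegetation ("canopy") pixels in a top-view RGB image by
    thresholding the excess-green index ExG = 2g - r - b, computed on
    chromaticity-normalized channels. `threshold` is a tuning parameter."""
    img = rgb.astype(np.float64)
    total = img.sum(axis=2)
    total[total == 0] = 1.0  # avoid division by zero on black pixels
    r, g, b = (img[..., i] / total for i in range(3))
    return (2.0 * g - r - b) > threshold

# Synthetic 4x4 top-view patch: a green canopy block on brown soil.
soil = np.array([120, 90, 60], dtype=np.uint8)   # brownish soil pixel
leaf = np.array([40, 140, 50], dtype=np.uint8)   # green canopy pixel
img = np.tile(soil, (4, 4, 1))
img[1:3, 1:3] = leaf
mask = canopy_mask(img)
print(mask.sum())  # 4 canopy pixels detected
```

In a real system the raw mask would typically be cleaned with morphological operations and then clustered into connected components to separate and label individual trees for the semantic tree map.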
In this work, we suggest two families of applicable navigation architectures that leverage top-view observations from different altitudes. For continuous low-altitude video streams, we present a novel way to address the “kidnapped robot problem” using the canopy masks extracted from the images. For high-altitude images, we propose a semantic global path planner that plans trajectories between the labeled trees based on a cost map derived from the canopy mask. As these high-altitude images are acquired periodically and opportunistically, we also suggest using them for periodic pose updates of the ground vehicle.
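To illustrate the cost-map idea (a minimal sketch, not the planner developed in the thesis), canopy cells can be assigned infinite traversal cost and a shortest-cost route planned between them with Dijkstra's algorithm; the grid layout and costs below are assumptions:

```python
import heapq
import numpy as np

def plan_path(cost_map, start, goal):
    """Dijkstra planner over a 2D cost map. Cells with infinite cost
    (e.g., tree canopies) are impassable; returns a list of (row, col)
    cells from start to goal, or None if no route exists."""
    rows, cols = cost_map.shape
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, cell = heapq.heappop(pq)
        if cell == goal:  # reconstruct route by walking back through prev
            path = [cell]
            while cell in prev:
                cell = prev[cell]
                path.append(cell)
            return path[::-1]
        if d > dist.get(cell, float("inf")):
            continue  # stale queue entry
        r, c = cell
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost_map[nr, nc]
                if np.isfinite(nd) and nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = cell
                    heapq.heappush(pq, (nd, (nr, nc)))
    return None

# Toy cost map: two impassable "tree rows" with a free corridor between them.
INF = float("inf")
cmap = np.ones((5, 5))
cmap[1, 1:4] = INF  # first tree row
cmap[3, 1:4] = INF  # second tree row
path = plan_path(cmap, (2, 0), (2, 4))
print(path)  # straight route along the corridor in row 2
```

In practice the cost map would be derived by inflating the canopy mask around each labeled tree, so that planned trajectories keep a safety margin from the trunks while still passing between the rows.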
The proposed approaches are supported by field experiments conducted in several real orchards, during different seasons and at different times of day. The data collected in the field was used in numerous offline experiments and analyses, which demonstrate the effectiveness of our approaches in terms of both accuracy and repeatability.