M.Sc. Student: Josef Shirel
Subject: Reinforcement Learning for Autonomous Vehicle Navigation in Unknown Rough Terrain
Department: Department of Autonomous Systems and Robotics
Supervisor: Associate Prof. Amir Degani
Safe unmanned ground vehicle navigation in unknown rough terrain is crucial for various tasks such as exploration, search and rescue, and agriculture. One of the properties of a fully autonomous system is the ability to move around its operating environment without assistance and without harming its surroundings or itself. To achieve this capability, a vast amount of work has been done in the field of robotic motion planning. Motion planning algorithms receive a representation of the sensed environment and find a path for a robot from a start configuration to a goal configuration while avoiding obstacles. Depending on the navigation problem and on the available representations of the environment and the robot, a suitable motion planning algorithm can be chosen. Some planners are offline, meaning the path is constructed from a known representation of the environment and later followed by a path- or trajectory-following controller. In contrast, online planners construct the path incrementally from sensor information gathered while executing the plan.
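To make the start-to-goal formulation concrete, the following sketch runs a breadth-first search over a binary occupancy grid. This is a generic textbook planner shown only for illustration, not the method developed in this thesis.

```python
from collections import deque

def plan_path(grid, start, goal):
    """Breadth-first search on a 4-connected occupancy grid.

    grid: 2D list, 0 = free space, 1 = obstacle.
    Returns a list of (row, col) cells from start to goal, or None.
    """
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}          # also serves as the visited set
    while frontier:
        current = frontier.popleft()
        if current == goal:
            # Reconstruct the path by walking parents back to the start.
            path = []
            while current is not None:
                path.append(current)
                current = came_from[current]
            return path[::-1]
        r, c = current
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from:
                came_from[(nr, nc)] = current
                frontier.append((nr, nc))
    return None                        # goal unreachable

grid = [
    [0, 0, 0],
    [1, 1, 0],
    [0, 0, 0],
]
print(plan_path(grid, (0, 0), (2, 0)))
# → [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0)]
```

An offline planner would run such a search once on a known map; an online planner must instead replan as sensing reveals new cells.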
Many motion planners require a binarized representation of the environment, dividing the space into free space and obstacles. Structured environments, such as indoor spaces, contain objects and surfaces that can easily be estimated as traversable or non-traversable and binarized into obstacles or free space. Rough terrain, however, presents a variety of objects that are hard to classify, as they may be traversable only at a certain angle or velocity, depending on the vehicle's dynamics. Furthermore, offline global planning is often impossible when operating in harsh, unknown environments, so online local planning must be used. Most online rough-terrain local planners require heavy computational resources to search for optimal trajectories and to estimate the vehicle's orientation at positions within sensor range.
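The kind of hard binarization described above can be illustrated with a simple slope cut on a heightmap. The threshold and cell size below are made-up values; the example also shows what such a cut ignores, since a single threshold cannot account for heading, speed, or vehicle dynamics.

```python
import numpy as np

def binarize_heightmap(heights, cell_size, max_slope_deg):
    """Mark a cell as an obstacle (True) when the local step to any
    4-neighbor exceeds a fixed slope threshold -- the hard cut that
    structured-environment planners rely on.

    heights: 2D array of terrain elevations [m]; cell_size in metres.
    """
    max_rise = cell_size * np.tan(np.radians(max_slope_deg))
    obstacle = np.zeros_like(heights, dtype=bool)
    # Height differences between horizontally / vertically adjacent cells.
    dz_x = np.abs(np.diff(heights, axis=1))
    dz_y = np.abs(np.diff(heights, axis=0))
    # Mark both sides of any step steeper than the allowed rise.
    obstacle[:, :-1] |= dz_x > max_rise
    obstacle[:, 1:] |= dz_x > max_rise
    obstacle[:-1, :] |= dz_y > max_rise
    obstacle[1:, :] |= dz_y > max_rise
    return obstacle

terrain = np.array([
    [0.0, 0.0, 0.0],
    [0.0, 0.5, 0.0],   # a 0.5 m step across a 0.25 m cell
    [0.0, 0.0, 0.0],
])
print(binarize_heightmap(terrain, cell_size=0.25, max_slope_deg=30.0))
```

The central bump and its four neighbors are all cut as obstacles, even though a real vehicle might traverse such a feature at the right heading and speed.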
The objective of this thesis is to provide an online local planner for autonomous navigation in unknown rough terrain. The planner avoids the need to binarize the environment or to estimate the traversability of the terrain, as its input is raw sensing data. We present a deep reinforcement learning approach for local planning in unknown rough terrain with zero-range to local-range sensing, depending on the available sensors. By using neural networks, we gain a low memory footprint and the ability to process large amounts of data. To evaluate our approach, we implemented two baseline algorithms: one inspired by potential functions, a directional method, and one inspired by ego-graphs, a local motion planning search-space method. Our approach achieved superior results compared to both baselines, in both the percentage of successful plans and the planning time.
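As a rough illustration of the ego-graph baseline idea, the sketch below evaluates a fan of fixed candidate arcs rooted at the vehicle, discards arcs blocked by a terrain step, and picks the survivor ending closest to the goal. All parameters and the step-based blocking rule are illustrative assumptions, not the settings of the thesis' actual baseline.

```python
import math

def ego_graph_step(pos, heading, goal, height_at, arc_len=2.0,
                   n_arcs=7, max_turn=math.pi / 4, max_step_h=0.3):
    """Choose the best of a fan of candidate arcs (a minimal ego-graph).

    height_at: callable (x, y) -> terrain elevation [m].
    Returns ((x, y, heading), cost) of the chosen arc end, or None
    if every candidate is blocked by a step taller than max_step_h.
    """
    best = None
    for i in range(n_arcs):
        # Constant-curvature arcs spanning [-max_turn, +max_turn].
        turn = -max_turn + 2 * max_turn * i / (n_arcs - 1)
        x, y, th = pos[0], pos[1], heading
        prev_h = height_at(x, y)
        blocked = False
        for _ in range(10):                    # sample along the arc
            th += turn / 10
            x += (arc_len / 10) * math.cos(th)
            y += (arc_len / 10) * math.sin(th)
            h = height_at(x, y)
            if abs(h - prev_h) > max_step_h:   # untraversable step
                blocked = True
                break
            prev_h = h
        if blocked:
            continue
        cost = math.hypot(goal[0] - x, goal[1] - y)  # distance to goal
        if best is None or cost < best[1]:
            best = ((x, y, th), cost)
    return best

flat = lambda x, y: 0.0
end_pose, cost = ego_graph_step((0.0, 0.0), 0.0,
                                goal=(10.0, 0.0), height_at=flat)
```

On flat terrain with the goal straight ahead, the straight arc wins; the search cost grows with the number of arcs and samples, which hints at why such exhaustive candidate evaluation becomes expensive at scale.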
We also validate our approach in a dynamic simulation, Gazebo, navigating an Unmanned Ground Vehicle (UGV) on a continuous terrain with a variety of discrete obstacles. The generated terrain presents hazardous paths that can lead to flipping (pitch direction), rolling over (roll direction), sliding, and falling into pits. Our local planner navigates the UGV safely from the start position to the goal position while avoiding different types of obstacles and traversing dynamically challenging areas with increased slip.
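Flip and roll-over hazards of the kind mentioned above are often detected in simulation with simple orientation limits. The sketch below is a hypothetical episode-termination check with made-up limit angles, not the criteria used in the thesis; real limits depend on the UGV's geometry and centre of mass.

```python
import math

def pose_is_safe(roll, pitch, roll_limit_deg=30.0, pitch_limit_deg=35.0):
    """Return True while the UGV's orientation (radians) stays inside
    hypothetical roll-over / flip limits; crossing either limit would
    typically end a simulation episode as a failure."""
    return (abs(math.degrees(roll)) < roll_limit_deg
            and abs(math.degrees(pitch)) < pitch_limit_deg)

print(pose_is_safe(roll=math.radians(10), pitch=math.radians(5)))  # → True
print(pose_is_safe(roll=math.radians(40), pitch=0.0))              # → False
```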