|M.Sc. Student||Tcenov Ilya|
|Subject||Error Prediction for Adaptive Depth Sampling|
|Department||Department of Electrical and Computer Engineering||Supervisor||Associate Prof. Guy Gilboa|
Autonomous systems require extensive and accurate information about their surrounding environment. Efficient data gathering is fundamental infrastructure for correct decision making, e.g. in navigation and obstacle avoidance. Today, most autonomous systems are equipped with various sensing systems. Although visual stereo systems can be used for depth sensing, their accuracy deteriorates sharply with distance. Moreover, they are vulnerable to camouflaged objects (with low contrast relative to the background), as well as to poor weather conditions.
Light Detection and Ranging (LiDAR) sensors are widely used for depth sensing. They are based on active infrared illumination and are therefore more robust than stereo methods. However, LiDARs sample depth very sparsely, due to system and power constraints. This requires additional post-processing algorithms to generate dense depth estimates of the scene.
Our goal is to investigate ways to improve the sampling pattern in order to considerably enhance reconstruction accuracy. Achieving accurate depth maps with fewer samples can result in an increased frame rate or a reduced sensor price. Recent technological developments, based on solid-state phased arrays and MEMS, enable LiDARs to steer the illumination, allowing flexibility in the sampling pattern.
We introduce our framework for image-guided depth sampling, aimed at reducing depth reconstruction error by generating sampling patterns based on camera imagery. First, we compute an Importance Map for each RGB image of a scene by applying a set of random sampling patterns and calculating the average per-pixel reconstruction error. Next, since this computation involves extensive sampling, we train a network to predict an Importance Map for any given RGB image. We then use these maps to construct adaptive patterns that are denser in regions that are harder to reconstruct. Finally, we train a depth reconstruction network to predict dense depth maps based on RGB images and our importance-based sampling patterns.
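The first step, computing an empirical Importance Map, can be sketched as follows. This is an illustrative NumPy sketch, not the thesis implementation: the learned depth predictor is replaced by a simple nearest-neighbor depth completion, the function names (`reconstruct_nn`, `importance_map`) are hypothetical, and per-pixel L1 error stands in for whichever reconstruction metric is being minimized.

```python
import numpy as np

def reconstruct_nn(depth, mask):
    """Nearest-neighbor depth completion from sparse samples.
    A simple stand-in for the learned depth reconstruction network."""
    ys, xs = np.nonzero(mask)
    samples = depth[ys, xs]
    H, W = depth.shape
    gy, gx = np.mgrid[0:H, 0:W]
    # Squared distance from every pixel to every sample (fine for small grids).
    d2 = (gy[..., None] - ys) ** 2 + (gx[..., None] - xs) ** 2
    return samples[np.argmin(d2, axis=-1)]

def importance_map(depth, n_patterns=32, n_samples=50, rng=None):
    """Average per-pixel reconstruction error over random sampling patterns.
    High values mark regions that are hard to reconstruct."""
    rng = np.random.default_rng(rng)
    H, W = depth.shape
    err = np.zeros((H, W))
    for _ in range(n_patterns):
        # Draw a random sparse sampling pattern of n_samples pixels.
        flat = np.zeros(H * W, dtype=bool)
        flat[rng.choice(H * W, size=n_samples, replace=False)] = True
        recon = reconstruct_nn(depth, flat.reshape(H, W))
        err += np.abs(recon - depth)  # per-pixel L1 error for this pattern
    return err / n_patterns
```

On a synthetic depth map with a sharp discontinuity, the resulting map is highest near the depth edge, which is exactly the region an adaptive pattern should sample more densely.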
The sampling strategy of our modular framework can be adjusted according to hardware limitations, the type of depth predictor, and any custom reconstruction error metric to be minimized. As the results show, our method of adaptive depth sampling outperforms other sampling strategies.
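The step of turning an Importance Map into an adaptive sampling pattern can be illustrated with one simple strategy: draw samples with probability proportional to importance, mixed with a small uniform floor so that easy, low-importance regions retain some coverage. This is a hedged sketch, not the thesis's pattern-construction method; `uniform_frac` is an illustrative knob introduced here.

```python
import numpy as np

def adaptive_pattern(importance, n_samples, uniform_frac=0.1, rng=None):
    """Draw a boolean sampling mask that is denser where importance is high.

    A fraction `uniform_frac` of the probability mass is spread uniformly
    to keep minimal coverage in flat, easy-to-reconstruct regions."""
    rng = np.random.default_rng(rng)
    p = importance.ravel().astype(float)
    p = p / p.sum() if p.sum() > 0 else np.full_like(p, 1.0 / p.size)
    p = (1.0 - uniform_frac) * p + uniform_frac / p.size
    # Sample pixel indices without replacement, weighted by importance.
    idx = rng.choice(p.size, size=n_samples, replace=False, p=p)
    mask = np.zeros(p.size, dtype=bool)
    mask[idx] = True
    return mask.reshape(importance.shape)
```

The resulting mask can then be fed, together with the RGB image, to the depth reconstruction network; hardware constraints (e.g. a scan-line steering limit) would be enforced at this stage by restricting which index sets are admissible.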