|M.Sc. Student||Rotman Daniel|
|Subject||RGB Based Temporal Depth Restoration|
|Department||Department of Electrical Engineering|
|Supervisor||Professor Guy Gilboa|
|Full Thesis text|
Depth restoration, the task of correcting noise and artifacts in depth images, has recently risen in popularity due to the proliferation of commodity depth cameras. The recent development of depth sensors intended for integration into hand-held devices pushes the boundaries of the technology and relies heavily on trading off depth quality for sensor size and power usage. It is therefore critical to raise the quality of the depth output from the sensor using algorithmic methods rather than hardware solutions.
In this work we present a new temporal depth restoration method. Utilizing multiple frames and a coupled RGB camera, we generate several candidate values for an initially degraded depth map, which allows a more informed decision when refining depth images according to color consistency. In addition, our temporal depth restoration method allows estimation of a missing frame by creating an initial guess for a depth frame that was never captured. This enables both temporal and spatial up-sampling, and can help overcome current limitations of depth-sensor technology, which may require lowered frame rates due to power consumption.
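The core idea of fusing several depth candidates under a color-consistency criterion can be illustrated with a minimal sketch. This is not the thesis's actual algorithm: the function `refine_depth`, the Gaussian color weighting, and the zero-as-invalid depth encoding are all illustrative assumptions.

```python
import numpy as np

def refine_depth(candidates, rgb, sigma_c=10.0):
    """Fuse several candidate depth maps for one frame.

    Each candidate (e.g. the current frame or a warped neighboring frame)
    is weighted per pixel by how well its associated color matches the
    current RGB frame -- a hypothetical stand-in for the thesis's
    color-consistency refinement.

    candidates : list of (depth, color) pairs, shapes (H, W) and (H, W, 3)
    rgb        : (H, W, 3) registered color image of the current frame
    """
    h, w = candidates[0][0].shape
    acc = np.zeros((h, w))
    wsum = np.zeros((h, w))
    for depth, color in candidates:
        # Color-consistency weight: Gaussian on the RGB distance between
        # the candidate's color and the current frame's color.
        diff = np.linalg.norm(color.astype(float) - rgb.astype(float), axis=2)
        wt = np.exp(-(diff ** 2) / (2.0 * sigma_c ** 2))
        valid = depth > 0  # assume 0 marks a missing depth measurement
        acc += wt * depth * valid
        wsum += wt * valid
    # Weighted average where at least one valid candidate exists, else 0.
    return np.where(wsum > 0, acc / np.maximum(wsum, 1e-9), 0.0)
```

Because the fusion degrades gracefully (pixels with no valid candidate stay empty), the same machinery can seed an initial guess for a depth frame that was never captured, using only warped neighbors as candidates.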
To evaluate depth restoration, we present the Depth Restoration Occlusionless Temporal (DROT) dataset. This dataset offers real depth-sensor input coupled with pixel-to-pixel registered color images and the ground-truth depth against which we compare. We obtain the ground-truth depth using a state-of-the-art depth scanner and a novel method for determining the true depth of the scene. Our dataset includes not only Kinect 1 and Kinect 2 data, but also data from an Intel R200 sensor intended for integration into hand-held devices. The data from this sensor contains extreme artifacts and noise; we show that depth-enhancement methods become especially necessary as technology moves towards hand-held devices. Evaluating our temporal depth restoration method on this dataset shows significant benefits, particularly in overcoming real sensor-noise artifacts. We also present an extension to the dataset: a sensor-analysis dataset containing objects made of reflective and translucent materials, placed at varying distances from the cameras. This extension allows identification of depth artifacts produced by the given cameras in difficult depth-sensing situations.
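A common way to score a restored depth map against scanner ground truth is a masked root-mean-square error that ignores pixels the scanner could not measure. The sketch below assumes this metric and a zero-as-invalid encoding; it is not necessarily the exact evaluation protocol used with DROT.

```python
import numpy as np

def masked_rmse(pred, gt):
    """RMSE between a restored depth map and ground truth, evaluated
    only where the ground-truth scanner returned a measurement
    (assumed here to be encoded as depth > 0)."""
    mask = gt > 0
    err = pred[mask].astype(float) - gt[mask].astype(float)
    return float(np.sqrt(np.mean(err ** 2)))
```

Masking matters in practice: reflective and translucent surfaces, like those in the sensor-analysis extension, leave holes in both the sensor data and the scan, and including those pixels would distort the comparison.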