Technion - Israel Institute of Technology - Graduate School
Ph.D. Thesis
Ph.D. Student: Bogomjakov Alexander
Subject: GPU-Assisted Geometry Processing for Novel View Synthesis from Depth Video
Department: Department of Computer Science
Supervisor: Professor Chaim Craig Gotsman


Abstract

Depth cameras, which provide color and depth information per pixel at video rates, offer exciting new opportunities in computer graphics. We address the challenge of supporting free-viewpoint video of dynamic 3D scenes using live data captured and streamed from widely-spaced viewpoints by a handful of synchronized depth cameras. We introduce the concept of the depth hull, which is a generalization of the well-known visual hull. The depth hull reflects all the dense depth information as observed from several centers of projection around the scene. It is the best approximation of the scene geometry that can be obtained from a given set of depth camera recordings.
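As a rough illustration of the definition (a minimal sketch, not the GPU implementation developed in this thesis), the Python fragment below tests whether a world-space point belongs to the depth hull induced by a set of calibrated depth maps: the point is rejected as soon as any camera has observed free space in front of it. The `Camera` class and `point_in_depth_hull` function are hypothetical names introduced only for this sketch.

```python
import numpy as np

class Camera:
    """Hypothetical calibrated depth camera: intrinsics K, world-to-camera
    rotation R and translation t, and a dense map of observed depths."""
    def __init__(self, K, R, t, depth_map):
        self.K = np.asarray(K)                  # 3x3 intrinsic matrix
        self.R = np.asarray(R)                  # 3x3 rotation, world -> camera
        self.t = np.asarray(t)                  # translation, world -> camera
        self.depth_map = np.asarray(depth_map)  # (H, W) observed depths

    def project(self, p_world):
        """World point -> (pixel u, pixel v, camera-space depth z)."""
        p_cam = self.R @ np.asarray(p_world) + self.t
        z = p_cam[2]
        if z <= 0:                    # behind the camera: no valid projection
            return None
        uv = self.K @ p_cam
        return uv[0] / z, uv[1] / z, z

def point_in_depth_hull(p_world, cameras):
    """A point belongs to the depth hull iff no camera has observed free space
    in front of it, i.e. every camera sees it at or behind the recorded surface."""
    for cam in cameras:
        proj = cam.project(p_world)
        if proj is None:
            return False              # outside this camera's frustum
        u, v, z = proj
        row, col = int(round(v)), int(round(u))
        h, w = cam.depth_map.shape
        if not (0 <= row < h and 0 <= col < w):
            return False              # projects outside the image
        if z < cam.depth_map[row, col]:
            return False              # seen in front of the surface: known empty space
    return True
```

Evaluating this test over a voxel grid would give a slow CPU approximation of the depth hull; the rendering method described below instead performs the analogous per-camera depth comparisons per fragment on the GPU, in the spirit of shadow mapping.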


Contemporary graphics hardware, built around the Graphics Processing Unit (GPU), has evolved to compete successfully with the CPU on many specialized tasks. Its highly parallel streaming architecture and large number of concurrent processing units allow the graphics hardware to carry out computations far beyond those needed for mere polygonal rendering. We take advantage of this computational power to perform real-time processing in the different stages of free-viewpoint video.


We first present a GPU-based adaptation of a simplification algorithm used to preprocess the depth data. This non-rendering computation maps well to the graphics hardware because the data is organized on a regular grid. We then present a general improvement to the best existing visual hull rendering algorithm, which is of independent interest, and use it to contribute a hardware-accelerated method for rendering novel views from depth hulls in real time. The method combines techniques from projective shadow mapping and constructive solid geometry (CSG). It achieves high-quality results even when only a modest number of depth cameras are deployed, and it is applicable to any set of images with accompanying dense depth maps corresponding to arbitrary viewing positions around the scene. We provide experimental results using a system incorporating two depth cameras recording a dynamic scene. We also provide an adaptation of the depth hull and the visual hull to rendering the geometry of complex scenes consisting of multiple objects.
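To convey why depth data on a regular grid lends itself to such preprocessing, the following sketch shows one simple simplification scheme for a single depth map: tiles whose depths are reproduced, within a tolerance, by interpolating their four corners become two coarse triangles, while detailed tiles are triangulated per pixel. The function name `depth_map_to_mesh` and the scheme itself are illustrative assumptions, not the algorithm adapted in the thesis.

```python
import numpy as np

def depth_map_to_mesh(depth, block=8, max_error=0.01):
    """Hypothetical regular-grid simplification of one depth map: a tile becomes
    two coarse triangles if bilinear interpolation of its four corner depths
    matches the tile to within max_error; otherwise it is meshed per pixel."""
    h, w = depth.shape
    triangles = []                                   # each vertex is (row, col, depth)

    def vertex(r, c):
        return (r, c, float(depth[r, c]))

    def emit_quad(r0, c0, r1, c1):
        triangles.append((vertex(r0, c0), vertex(r1, c0), vertex(r1, c1)))
        triangles.append((vertex(r0, c0), vertex(r1, c1), vertex(r0, c1)))

    for r0 in range(0, h - 1, block):
        for c0 in range(0, w - 1, block):
            r1, c1 = min(r0 + block, h - 1), min(c0 + block, w - 1)
            tile = depth[r0:r1 + 1, c0:c1 + 1]
            # bilinear interpolation of the four corner depths over the tile
            ys = np.linspace(0.0, 1.0, tile.shape[0])[:, None]
            xs = np.linspace(0.0, 1.0, tile.shape[1])[None, :]
            approx = (tile[0, 0] * (1 - ys) * (1 - xs) + tile[0, -1] * (1 - ys) * xs +
                      tile[-1, 0] * ys * (1 - xs) + tile[-1, -1] * ys * xs)
            if np.max(np.abs(tile - approx)) <= max_error:
                emit_quad(r0, c0, r1, c1)            # flat tile: two coarse triangles
            else:
                for r in range(r0, r1):              # detailed tile: per-pixel triangles
                    for c in range(c0, c1):
                        emit_quad(r, c, r + 1, c + 1)
    return triangles
```

Because each tile is decided from a fixed, local neighborhood of the grid, this kind of computation parallelizes naturally across GPU threads, which is the property the abstract alludes to when noting that the preprocessing maps well to graphics hardware.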