M.Sc Thesis | Department of Industrial Engineering and Management
Supervisor: Prof. Gotsman Chaim Craig
View Dependent Texture Projection Mapping (VDTPM) is a technique for rendering photorealistic novel views of a scene from static real imagery and an authentic 3D model of the scene.
Unlike standard texture mapping, where the texture is "pasted" onto the 3D polygons, here the texture is applied to the object using texture projection. Texture projection uses the camera model of the photograph to construct a projective transformation, which then maps a world coordinate to an image/texture coordinate.
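The world-to-texture mapping described above can be sketched as follows. This is a minimal illustration, assuming a pinhole camera described by a hypothetical 3x4 projection matrix `P` (the thesis's actual camera model and calibration are not specified here):

```python
import numpy as np

def world_to_texture(P, X, width, height):
    """Map a 3D world point X to normalized texture coordinates.

    P is a hypothetical 3x4 camera (projection) matrix recovered from
    the photograph's camera model; width/height are the image size.
    """
    Xh = np.append(X, 1.0)            # homogeneous world coordinate
    x = P @ Xh                        # projective transformation
    u, v = x[0] / x[2], x[1] / x[2]   # perspective divide -> pixel coords
    return u / width, v / height      # normalize to [0, 1] texture coords

# Example: a simple pinhole camera at the origin looking down +Z,
# focal length 500 px, principal point at the image center
P = np.array([[500.0, 0.0, 320.0, 0.0],
              [0.0, 500.0, 240.0, 0.0],
              [0.0,   0.0,   1.0, 0.0]])
print(world_to_texture(P, np.array([0.0, 0.0, 2.0]), 640, 480))  # (0.5, 0.5)
```

In a GPU renderer this transform is typically evaluated per fragment, with the projection matrix passed to the shader, rather than per point on the CPU as in this sketch.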
The main difference between the two methods is that standard texture mapping is static: for every fragment (pixel), the texture is predefined before rendering, for all possible views. View Dependent Texture Projection Mapping extends the technique by choosing the texture for each fragment dynamically, based on heuristics.
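One common heuristic for the per-fragment choice is to prefer the photograph whose viewing direction best agrees with the novel view's direction at that fragment. The sketch below illustrates this angle-based selection; it is an assumption for illustration, and the exact heuristics used in the thesis may differ:

```python
import numpy as np

def best_view(fragment_pos, novel_cam_pos, photo_cam_positions):
    """Pick the index of the photograph whose viewing direction at this
    fragment is most aligned with the novel view's direction.

    A common VDTPM-style heuristic, shown here for illustration only.
    """
    def unit(v):
        return v / np.linalg.norm(v)

    view_dir = unit(fragment_pos - novel_cam_pos)   # novel-view ray
    scores = [np.dot(view_dir, unit(fragment_pos - c))  # cosine of angle
              for c in photo_cam_positions]             # between the rays
    return int(np.argmax(scores))
```

In practice the scores of the top few photographs are often blended rather than picking a single winner, which reduces popping when the novel camera moves.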
The thesis includes a detailed explanation of VDTPM, specifically for urban scenes, and our contribution both in algorithmic improvements and in defining a new rendering framework that is scalable in the amount of imagery inserted into the scene. We suggest a framework for polygonal models as well as for point clouds, which are rendered as splats.
We address an artifact that appears around silhouettes and suggest a novel solution for semi-automatic obstruction removal from the scene.
Finally, for invisible parts of the scene, we use an "exemplar-search" hole-filling technique. We show how depth information, which represents the surface behavior, can be used as a search criterion.
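The idea of using depth as a search criterion can be sketched as a patch-scoring rule that combines color similarity with depth-profile similarity. The function name, cost form, and weighting below are illustrative assumptions, not the thesis's exact formulation:

```python
import numpy as np

def find_exemplar(target_patch, target_depth, src_patches, src_depths, w=0.5):
    """Toy exemplar search: score each candidate patch by a weighted sum
    of color SSD and depth SSD, so candidates whose depth profile
    (surface behavior) matches the hole's surroundings are preferred.

    Illustrative only; the weighting w and the cost are assumptions.
    """
    best, best_cost = -1, np.inf
    for i, (p, d) in enumerate(zip(src_patches, src_depths)):
        color_cost = np.sum((p - target_patch) ** 2)   # appearance term
        depth_cost = np.sum((d - target_depth) ** 2)   # surface-shape term
        cost = color_cost + w * depth_cost
        if cost < best_cost:
            best, best_cost = i, cost
    return best
```

Adding the depth term steers the search away from patches that merely look similar in color but lie on a differently shaped surface, which is the failure mode a purely photometric exemplar search is prone to.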