M.Sc. Student: Avidar David
Subject: Local-to-Global 3D Point Cloud Registration using a
Department: Department of Electrical and Computer Engineering
Supervisors: Professor Emeritus David Malah, Dr. Meir Bar-Zohar
Local-to-global point cloud registration is a challenging task due to the substantial differences between these two types of data and the different techniques used to acquire them. Global clouds cover large-scale environments and are usually acquired aerially, e.g., 3D modeling of a city using Airborne Laser Scanning (ALS) or photogrammetry (based on aerial images). In contrast, local clouds are often acquired from ground level and at a much smaller range, for example using Terrestrial Laser Scanning (TLS) or stereo reconstruction. The differences are often manifested in point density distribution, the nature of occlusions, and measurement noise characteristics. As a result of these differences, existing point cloud registration approaches, such as keypoint-based registration, tend to fail, because existing 3D features capture only local geometric information around each keypoint.
We propose a novel registration method that is robust to the differing characteristics of such global and local point clouds. The method is based on converting the global cloud into a viewpoint-based dictionary. For that purpose, a viewpoint grid is defined over the global cloud. We seek to associate each grid viewpoint with the global geometric information visible from it, rather than only with its immediate locality. We explore associating each viewpoint with a small set of “dictionary clouds”, which capture the geometry of the visible environment. Plausible local-to-global transformations can then be found via a dictionary search, i.e., finding the best matches between the local cloud and the dictionary clouds. We further demonstrate that the dictionary’s memory requirements and the search runtime can be substantially reduced by replacing each viewpoint’s dictionary clouds with a single panoramic range-image, used as a viewpoint descriptor. This allows efficient dictionary search in the Discrete Fourier Transform (DFT) domain, using phase correlation. To avoid defining dictionary viewpoints inside buildings or on rooftops, a flood-based algorithm for ground detection in large-scale 3D point clouds is presented.
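The DFT-domain matching step can be illustrated with classic 2D phase correlation. The sketch below is a minimal, generic implementation (not the thesis code): it recovers the circular shift between two equally sized images, such as panoramic range-image descriptors, by normalizing the cross-power spectrum and locating the correlation peak.

```python
import numpy as np

def phase_correlation(a, b):
    """Estimate the 2D circular shift between two equally sized images
    via phase correlation in the DFT domain. Returns the (row, col)
    shift that, applied to `b` with np.roll, best aligns it to `a`."""
    Fa = np.fft.fft2(a)
    Fb = np.fft.fft2(b)
    cross = Fa * np.conj(Fb)
    cross /= np.maximum(np.abs(cross), 1e-12)   # keep phase only
    corr = np.fft.ifft2(cross).real             # sharp peak at the shift
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap shifts larger than half the image size into negative offsets
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))
```

Because the spectrum is whitened, the correlation peak is sharp even when the two images differ in overall intensity, which is one reason phase correlation is attractive for fast dictionary search.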
In addition, a registration refinement algorithm, suitable for urban environments, is proposed. The algorithm is based on projection of the global and local clouds on a plane perpendicular to the gravity direction, for which an efficient estimation method is presented. The projected clouds are transformed into 2D edge-maps, whose alignment is used to find the necessary refinement in registration between the original 3D point clouds.
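A toy version of the projection step can be sketched as follows. This is an illustrative simplification, not the thesis algorithm: it projects a point cloud onto the plane perpendicular to a given gravity direction, rasterizes the result into a coarse occupancy grid, and marks occupied cells bordering empty ones as a simple stand-in for the 2D edge maps described above. The function name and cell size are assumptions.

```python
import numpy as np

def edge_map(points, gravity, cell=0.5):
    """Project an (N, 3) point cloud onto the plane perpendicular to the
    unit `gravity` vector, rasterize into a 2D occupancy grid with cells
    of size `cell`, and return a boolean map of boundary (edge) cells."""
    n = gravity / np.linalg.norm(gravity)
    # build an orthonormal basis (u, v) spanning the projection plane
    helper = np.array([1.0, 0.0, 0.0]) if abs(n[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    u = np.cross(n, helper); u /= np.linalg.norm(u)
    v = np.cross(n, u)
    uv = points @ np.stack([u, v], axis=1)        # (N, 2) planar coordinates
    ij = np.floor((uv - uv.min(axis=0)) / cell).astype(int)
    grid = np.zeros(ij.max(axis=0) + 1, dtype=bool)
    grid[ij[:, 0], ij[:, 1]] = True               # occupancy
    # an occupied cell is an "edge" if any 4-neighbour is empty
    pad = np.pad(grid, 1)
    interior = pad[:-2, 1:-1] & pad[2:, 1:-1] & pad[1:-1, :-2] & pad[1:-1, 2:]
    return grid & ~interior
```

Aligning two such edge maps reduces the refinement to a 2D problem, which is what makes the edge-based approach cheaper than full 3D ICP on the raw clouds.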
We demonstrate significant improvement in registration performance, achieved by the viewpoint-dictionary-based method, in comparison to state-of-the-art keypoint-based methods (FPFH, Fast Point Feature Histogram; RoPS, Rotational Projection Statistics). For the evaluation, we use a challenging dataset of 104 TLS local clouds and an ALS large-scale global cloud of a 1 km² urban environment. The proposed flood-based ground detection algorithm is shown to achieve accuracy comparable to that of a commonly used method, based on a progressive morphological filter, while significantly reducing runtime requirements. It is also shown that in an urban environment, the proposed edge-based 2D-ICP registration refinement method outperforms the commonly used 3D ICP (Iterative Closest Point) refinement method in terms of runtime, while also slightly improving the registration accuracy.