|M.Sc Student|Ben-Yaacov Hilla|
|Subject|3D Object Description and Classification by Implicit Polynomials|
|Department|Department of Electrical and Computer Engineering|
|Supervisors|Professor Emeritus David Malah, Dr. Meir Bar-Zohar|
Implicit polynomials (IPs) are used to represent 2D curves and 3D surfaces specified by discrete data. We explore the description capabilities of existing 3D implicit polynomial fitting algorithms, Gradient1, Min-Max, and Min-Var, and suggest a modification of the Min-Max and Min-Var algorithms that makes them rotation invariant. Using a tensor representation of the IP, we develop a set of 3D rotation invariants that are linear combinations of the IP coefficients, and, using trigonometric identities, two 3D quadratic rotation invariants. We also explore the quaternion representation as an alternative method for deriving rotation invariants, analogous to the complex representation used in 2D.

We describe the pre-processing stages required to improve classification performance: locating the center of mass at the origin, scaling, mirroring, and selecting 2D projections. We then present a 3D classification method based on the Multi Order and Fitting Errors Technique (MOFET), proposed earlier for 2D object classification. This approach fits several polynomials of different degrees to the object surface and uses their fitting errors. Its advantage is that it does not require computationally expensive registration (pose estimation) between the representation of a new object to be classified and the representations of the objects in the dictionary; instead, it uses a rotation-invariant feature vector for classification. The classification features are the 3D IP rotation invariants and fitting errors; 2D IP rotation invariants and fitting errors, derived from the most informative 2D projections of the 3D objects; and 3D PCA eigenvalues. We demonstrate the classification results on both a rigid-objects database and a faces database (acquired in a cooperative setting).
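To make the fitting-error idea concrete, the following is a minimal illustrative sketch in NumPy, not the thesis's Gradient1, Min-Max, or Min-Var algorithms: it fits an implicit polynomial of a given total degree to 3D points by a plain algebraic least-squares fit and reports the RMS fitting error. All function names here are our own. Note that this naive fit is not rotation invariant for degrees above one, which is exactly the shortcoming the modified algorithms address.

```python
import numpy as np

def monomials(points, degree):
    """Column matrix of all monomials x^i * y^j * z^k with i+j+k <= degree."""
    x, y, z = points.T
    cols = []
    for total in range(degree + 1):
        for i in range(total + 1):
            for j in range(total - i + 1):
                k = total - i - j
                cols.append(x**i * y**j * z**k)
    return np.stack(cols, axis=1)

def fit_error(points, degree):
    """RMS algebraic fitting error of the best degree-`degree` implicit polynomial.

    Minimizes ||M c|| subject to ||c|| = 1, whose solution is the right
    singular vector of M with the smallest singular value.
    """
    M = monomials(points, degree)
    _, _, Vt = np.linalg.svd(M, full_matrices=False)
    c = Vt[-1]                      # coefficients of the best-fitting IP
    return np.sqrt(np.mean((M @ c) ** 2))

# Points sampled on the unit sphere: a degree-2 IP (x^2 + y^2 + z^2 - 1 = 0)
# describes it exactly, while no degree-1 IP (a plane) can.
rng = np.random.default_rng(0)
pts = rng.normal(size=(500, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
print(fit_error(pts, 1))   # large: a plane cannot fit a sphere
print(fit_error(pts, 2))   # near zero: the sphere is a quadric
```

The gap between the two errors is the kind of per-degree signature that a MOFET-style classifier exploits as a feature, without ever estimating the object's pose.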
Simulation results show that our method outperforms both classification based on IP fitting after pose estimation and the Shape Spectrum Descriptor (SSD) classification adopted by the MPEG-7 standard.
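As a brief illustration of two ingredients mentioned above, the sketch below (our own code, not taken from the thesis) centers a point cloud at its center of mass, as in the pre-processing stage, and shows why 3D PCA eigenvalues make good rotation-invariant classification features: rotating the cloud leaves them unchanged.

```python
import numpy as np

def pca_eigenvalues(points):
    """Sorted eigenvalues of the covariance of a 3D point cloud."""
    centered = points - points.mean(axis=0)     # center of mass at the origin
    cov = centered.T @ centered / len(points)
    return np.sort(np.linalg.eigvalsh(cov))

rng = np.random.default_rng(1)
cloud = rng.normal(size=(200, 3)) * np.array([3.0, 2.0, 1.0])  # anisotropic cloud
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))    # random orthogonal matrix
rotated = cloud @ Q.T

# Rotation maps the covariance to Q cov Q^T, so the eigenvalues are preserved.
print(np.allclose(pca_eigenvalues(cloud), pca_eigenvalues(rotated)))
```

The same invariance argument is what makes these eigenvalues usable in a feature vector that sidesteps registration.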