|M.Sc Student||Talmi Itamar|
|Subject||Template Matching with Deformable Diversity Similarity|
|Department||Department of Computer Science||Supervisor||Professor Lihi Zelnik-Manor|
The task of seeking a given template in a given image is called Template Matching. A template matching algorithm takes two inputs. The first input is a small image representing a pattern or template. Such a template can be any object, like a face, a logo, or any other item. The second input is a larger image, referred to as the target image. The output is a bounding box locating the template inside the target image. Template Matching is a key component in many computer vision applications such as object detection, tracking, surveillance, medical imaging, and image stitching.
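To make the input/output structure concrete, here is a minimal sketch of the classical baseline: sliding the template over every window of the target and scoring each window, here with the sum of squared differences (SSD). This is a toy illustration of the task, not the method proposed in this work; function names and the grayscale/NumPy setup are our own choices.

```python
import numpy as np

def match_template_ssd(target, template):
    """Slide the template over the target image and return the
    top-left corner of the best-matching window (lowest SSD).
    A toy baseline illustrating the task definition."""
    th, tw = template.shape
    H, W = target.shape
    best_score, best_loc = np.inf, (0, 0)
    for y in range(H - th + 1):
        for x in range(W - tw + 1):
            window = target[y:y + th, x:x + tw]
            score = np.sum((window - template) ** 2)  # sum of squared differences
            if score < best_score:
                best_score, best_loc = score, (y, x)
    return best_loc  # (row, col) of the matched box

# tiny example: plant the template inside a larger (otherwise zero) image
target = np.zeros((10, 10))
template = np.arange(9, dtype=float).reshape(3, 3)
target[4:7, 2:5] = template
print(match_template_ssd(target, template))  # → (4, 2)
```

Pixel-wise scores like SSD assume the template appears almost unchanged in the target, which is exactly the assumption that breaks down "in the wild".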
Our interest is in Template Matching ``in the wild'', i.e., when no prior information is available on the target image or the template. Moreover, the template might undergo severe appearance changes in the target image, including out-of-plane rotation, complex deformations, significant background clutter, and occlusions. All of these changes are hard to model. Hence, many existing methods that assume a deformation model or a template prior fail under these conditions.
We propose a novel measure for template matching ``in the wild'', named Deformable Diversity Similarity, that is based on the diversity of feature matches between a target image window and the template.
We rely on both local appearance and geometric information that jointly lead to a powerful approach for matching.
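The intuition behind a diversity-based score can be illustrated with a small sketch (a simplified toy version, not the exact formulation proposed in this work; the feature representation and function names here are our own assumptions). Each patch of the candidate window votes for its nearest-neighbor patch in the template, and the score counts how many *distinct* template patches received votes. A true appearance of the template yields diverse, roughly one-to-one matches, while background clutter tends to collapse onto a few template patches and thus scores low.

```python
import numpy as np

def diversity_similarity(window_feats, template_feats):
    """Toy diversity score: each row is one patch's feature vector.
    Every window patch votes for its nearest template patch; the
    score is the number of distinct template patches that received
    at least one vote. Many-to-one matches (typical of clutter)
    contribute little."""
    # pairwise squared distances: window patches x template patches
    d = ((window_feats[:, None, :] - template_feats[None, :, :]) ** 2).sum(-1)
    nn = d.argmin(axis=1)       # nearest template patch per window patch
    return len(np.unique(nn))   # diversity of the match set

# 4 distinct template features; a faithful window matches all of them,
# while a cluttered window of identical patches matches only one
t = np.eye(4)
good = np.eye(4) + 0.01          # slightly perturbed copies of the template patches
bad = np.tile(t[0], (4, 1))      # clutter: all patches look alike
print(diversity_similarity(good, t))  # → 4
print(diversity_similarity(bad, t))   # → 1
```

The full measure additionally weighs each match by its spatial deformation, which is how the geometric information enters alongside local appearance.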
Our key contribution is a similarity measure that is robust to all the challenges stated above. Empirical evaluation on the most up-to-date benchmark shows that our method outperforms the current state-of-the-art in detection accuracy while reducing computational cost.