|Ph.D. Student||Nakhmani Arie|
|Subject||Visual Tracking: A Particle Filter/Template Matching|
|Department||Department of Electrical and Computer Engineering|
|Supervisors||PROF. Allen Robert Tannenbaum|
|PROFESSOR EMERITUS Ezra Zeheb|
Visual tracking is an important task that has received considerable attention in recent years. Robust general tracking tools are of major interest for applications ranging from surveillance and security to image-guided surgery and nanotechnology. In these applications, the targets of interest may be translated, rotated, scaled, or non-rigidly deformed.
In our research, we investigate the problem of tracking arbitrary targets in video sequences. Our main assumptions are that the mutual motion of the camera and the target is sufficiently smooth, and that the target's deformations between consecutive video frames are not significant. We propose a computationally efficient particle filtering framework based on low-dimensional state-space modeling of the template dynamics. The proposed framework adapts to changes in the target's appearance and is able to deal with cluttered, noisy scenes and occluded targets. Specifically, we use a nonstandard particle filtering method consisting of two steps: the first step employs the normalized cross-correlation function as the likelihood; the second step resamples, and fuses the results of multiple cross-correlations of different patches of the given target, in order to refine the likelihood for the particle filter. We propose several modifications to the template matching approach, such as correlations robust to scaling and partial occlusion, and adaptive Sobolev active contours (appropriate for multiple targets), which overcome the problems of non-rigidly deforming targets and occlusions by modifying the contour parameters online.
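The first step of the likelihood described above — normalized cross-correlation scoring of candidate patches inside a particle filter — can be illustrated with a minimal sketch. The function names, the random-walk motion model, and the likelihood gain below are our own illustrative assumptions, not the thesis implementation:

```python
import numpy as np

def ncc(patch, template):
    """Normalized cross-correlation between a candidate patch and the template."""
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.sqrt((p ** 2).sum() * (t ** 2).sum())
    return float((p * t).sum() / denom) if denom > 0 else 0.0

def particle_filter_step(frame, template, particles, weights, rng,
                         motion_std=2.0, gain=5.0):
    """One predict/update/resample cycle with an NCC likelihood.

    particles: (N, 2) array of top-left (row, col) template positions.
    """
    h, w = template.shape
    H, W = frame.shape
    # Predict: random-walk motion model (smooth-motion assumption).
    particles = particles + rng.normal(0.0, motion_std, particles.shape)
    particles[:, 0] = np.clip(particles[:, 0], 0, H - h)
    particles[:, 1] = np.clip(particles[:, 1], 0, W - w)
    # Update: the NCC of each particle's patch serves as its likelihood.
    scores = np.array([ncc(frame[y:y + h, x:x + w], template)
                       for y, x in particles.astype(int)])
    weights = weights * np.exp(gain * scores)  # gain sharpens the likelihood
    weights = weights / weights.sum()
    # Systematic resampling: draw particles proportionally to their weights.
    n = len(weights)
    positions = (rng.random() + np.arange(n)) / n
    idx = np.minimum(np.searchsorted(np.cumsum(weights), positions), n - 1)
    return particles[idx], np.full(n, 1.0 / n)
```

This sketch shows only a single-template likelihood; the second, refinement step of the proposed framework would additionally fuse the correlation scores of multiple patches of the target before resampling.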
The proposed tools were successfully applied to three different categories of targets:
1. Targets (including people and cars) filmed by a regular handheld or stationary camera; the targets were manually selected in the first frame, and tracked automatically, without any prior learning stage.
2. Deforming targets filmed by an infrared camera; the target’s contour was obtained for each video frame, after a manual selection of the initial contour.
3. Nanofluids filmed by a camera attached to a microscope. Multiple targets were detected and tracked to obtain their centroid trajectories. Analysis of the velocity, direction, and size of the nanoplatforms brought new insights into the field of nanotechnology, and enabled the development of a physical model for the tested nanoplatforms.
The proposed algorithms work in a variety of scenarios and deal naturally with clutter and noise in the scenes, target deformations, partial and full target occlusions, and low-contrast targets. Experimental results show the advantages of our approach compared to state-of-the-art visual trackers.