Technion - Israel Institute of Technology
The Graduate School

M.Sc. Thesis
M.Sc. Student: Chikishev Zahar
Subject: The Maximum Likelihood Estimate of the Single Source Localization Problem
Department: Department of Applied Mathematics
Supervisor: Professor Amir Beck


Abstract

In this thesis we consider the problem of locating a single radiating source from several noisy measurements using the maximum likelihood (ML) criterion. The resulting optimization problem is nonconvex and nonsmooth, and thus finding its global solution is in principle a hard task.
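Although the abstract does not state the formulation explicitly, a standard ML formulation consistent with this description (with assumed symbols a_i for the sensor locations, d_i for the noisy range measurements, and x for the unknown source location) is

\[
\min_{x \in \mathbb{R}^n} \; f(x) \;=\; \sum_{i=1}^{m} \left( \|x - a_i\| - d_i \right)^2,
\]

which is nonconvex, and nonsmooth at the sensor locations x = a_i.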

An existing method for finding an approximate solution to the problem is the semidefinite relaxation approach. Additionally, the source localization problem can be viewed as a special instance of sensor network localization problems in which several sources are present. For this class of problems, a semidefinite relaxation has also been developed. We show that although the two formulations are different, their optimal values are equal, and we provide theoretical and numerical comparisons of the two approaches.
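As an illustration of the relaxation idea only (the exact relaxations compared in the thesis may differ), the ML problem above can be rewritten with auxiliary variables g_i standing for the distances \|x - a_i\|:

\[
\min_{x,\, g} \; \sum_{i=1}^{m} (g_i - d_i)^2 \quad \text{s.t.} \quad g_i^2 = \|x\|^2 - 2 a_i^{T} x + \|a_i\|^2, \;\; g_i \ge 0,
\]

after which the nonconvex quadratic equalities are lifted to matrix variables (representing $xx^{T}$ and $g_i^2$) and the rank-one coupling is relaxed to positive semidefinite constraints, yielding a semidefinite program.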

In the second part of the thesis we depart from the semidefinite relaxation approach and seek a method that solves the ML problem exactly. Exploiting the special structure of the ML problem objective function, we introduce and analyze two iterative schemes for solving the problem. The first algorithm is a simple explicit fixed-point-based formula, and the second is based on solving at each iteration a nonlinear least squares problem, which can be solved globally and efficiently after transforming it into an equivalent quadratic minimization problem with a single quadratic constraint. We show that the nonsmoothness of the problem can be avoided by choosing a specific “good” starting point for both algorithms. More importantly, we prove the convergence of the two schemes to stationary points.
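To make the first scheme concrete, a minimal sketch of a fixed-point iteration in Python is given below. It assumes the sum-of-squared-range-errors objective written above and rearranges its stationarity condition into a fixed-point map; the function and parameter names are illustrative and this is not the thesis implementation.

import numpy as np

def fixed_point_localization(anchors, dists, x0, max_iter=1000, tol=1e-8):
    # anchors: (m, n) array of sensor locations a_i
    # dists:   (m,) array of noisy range measurements d_i
    # x0:      starting point, assumed distinct from every anchor
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        diffs = x - anchors                    # x - a_i
        norms = np.linalg.norm(diffs, axis=1)  # ||x - a_i||
        # stationarity of sum_i (||x - a_i|| - d_i)^2 rearranged as
        # x = (1/m) * sum_i ( a_i + d_i * (x - a_i) / ||x - a_i|| )
        x_new = (anchors + dists[:, None] * diffs / norms[:, None]).mean(axis=0)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

In this sketch the iterate must not coincide with any sensor location a_i, where the objective is nonsmooth; the thesis addresses this by the choice of a "good" starting point.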

The second algorithm is more involved, but the additional computations are shown to be worthwhile, since it usually possesses a much larger region of convergence to the global minimum. We present empirical results that support this tendency to converge to the global minimum and suggest that, despite its nonconvexity, the ML problem can effectively be solved globally using the devised schemes.