Technion - Israel Institute of Technology - School of Graduate Studies
M.Sc. Thesis
M.Sc. Student: Chernoi Jacob
Subject: The Chebyshev Center and Iterative MMSE: Dominating Least-Squares Estimation
Department: Electrical Engineering
Supervisor: Professor Yonina Eldar
Full Thesis Text: English version


Abstract

 

We treat the linear regression problem in which we seek to estimate a deterministic parameter vector x, observed through a linear transformation H and corrupted by colored Gaussian noise w. We distinguish between two setups: one in which the norm of x is bounded, and one in which x is unrestricted. Instead of using the conventional least-squares (LS) approach, we explore several alternative estimation techniques. For both scenarios we propose new estimators that dominate LS in terms of mean-squared error (MSE).
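For concreteness, the regression model referred to above can be written as y = Hx + w, where w is zero-mean Gaussian with covariance C. The following minimal NumPy sketch (all dimensions and numerical values are illustrative assumptions, not taken from the thesis) draws one such observation and computes the conventional weighted LS estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: m observations of an n-dimensional parameter vector.
m, n = 20, 5
H = rng.standard_normal((m, n))              # known linear transformation
x_true = rng.standard_normal(n)              # deterministic (but unknown) parameter
C = 0.5 * np.eye(m) + 0.1 * np.ones((m, m))  # noise covariance ("colored" noise)

# Draw one realization of the colored Gaussian noise and form the observation.
w = rng.multivariate_normal(np.zeros(m), C)
y = H @ x_true + w

# Conventional (weighted) least-squares estimate:
#   x_LS = argmin_x (y - Hx)^T C^{-1} (y - Hx) = (H^T C^{-1} H)^{-1} H^T C^{-1} y
Cinv = np.linalg.inv(C)
x_ls = np.linalg.solve(H.T @ Cinv @ H, H.T @ Cinv @ y)

print("squared error of LS:", np.sum((x_ls - x_true) ** 2))
```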

First we focus on the constrained regression model. We present and analyze an estimation approach based on the recently proposed Chebyshev center (CC) method. The CC estimator is constructed by minimizing the worst-case squared error under the assumption that the noise is bounded. We extend the CC approach to the case in which the noise is Gaussian, and prove that the resulting estimate dominates the constrained LS (CLS) method when H = I and the dimension of the problem is at least 2. Thus, the MSE of the resulting CC estimate is smaller than that of the CLS estimate regardless of the true parameter value.
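For the H = I case mentioned above, the CLS estimate has a simple closed form: it is the projection of the observation onto the norm ball defined by the constraint. The sketch below shows this baseline together with a bare-bones Monte Carlo loop of the kind one could use to compare empirical MSEs; the CC estimator itself, and all numerical values here, are not reproduced from the thesis (the dimensions, bound L, and noise level are assumptions).

```python
import numpy as np

def cls_estimate(y, L):
    """Constrained LS for the H = I case: project the observation onto
    the norm ball {x : ||x|| <= L}.  (Standard projection formula; the
    Chebyshev-center estimate from the thesis is not reconstructed here.)"""
    norm_y = np.linalg.norm(y)
    return y if norm_y <= L else (L / norm_y) * y

# Illustrative Monte Carlo skeleton (all values are assumptions):
rng = np.random.default_rng(1)
n, L, sigma = 5, 2.0, 1.0
x_true = rng.standard_normal(n)
x_true *= min(1.0, L / np.linalg.norm(x_true))   # enforce the norm constraint

errs = []
for _ in range(10_000):
    y = x_true + sigma * rng.standard_normal(n)  # H = I, white Gaussian noise
    errs.append(np.sum((cls_estimate(y, L) - x_true) ** 2))
print("empirical MSE of CLS:", np.mean(errs))
```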

Next we focus on the unconstrained regression model and develop a pre-test type estimator of x. In contrast to conventional pre-test strategies, which do not dominate LS in terms of MSE, our technique is shown to dominate LS when the effective dimension is greater than or equal to 4. Our estimator is based on a simple and intuitive approach in which we first determine the linear minimum MSE (MMSE) estimate, namely the linear estimate that minimizes the MSE. Since the MMSE solution is a function of the unknown vector x, we propose applying the linear MMSE strategy with the LS estimate substituted for the true value of x, to obtain a new estimate. We then use the current estimate in conjunction with the linear MMSE solution and continue iterating until convergence. As we show, the limit is a pre-test type method that is zero when the norm of the data is small, and is otherwise a non-linear shrinkage of LS.
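As a rough illustration of the iteration described above (a sketch of one plausible reading, not the estimator derived in the thesis), consider the simplified case H = I with white noise of variance sigma2. In that case the MSE-optimal linear scaling of y is ||x||^2 / (||x||^2 + n*sigma2); plugging the LS estimate in for x and repeating gives the fixed-point iteration below. All symbols and values here are assumptions introduced for illustration.

```python
import numpy as np

def iterative_mmse_shrinkage(y, sigma2, tol=1e-10, max_iter=1000):
    """Iterated linear-MMSE shrinkage for the simplified case H = I with
    white noise of variance sigma2 (an assumption; the thesis treats the
    general model).  The linear MMSE estimate of x from y is a*y with
    a = ||x||^2 / (||x||^2 + n*sigma2); since ||x||^2 is unknown, start
    from the LS estimate (a = 1), substitute the current estimate for x,
    and repeat until the scaling converges."""
    n = y.size
    s = np.dot(y, y)           # ||y||^2, the squared norm of the data
    c = n * sigma2
    a = 1.0                    # start from the LS estimate x_LS = y
    for _ in range(max_iter):
        a_new = (a * a * s) / (a * a * s + c)
        if abs(a_new - a) < tol:
            break
        a = a_new
    return a * y               # zero when ||y||^2 is small, shrunken LS otherwise

# Illustrative use (all numbers are assumptions):
rng = np.random.default_rng(2)
n, sigma2 = 6, 1.0
x_true = rng.standard_normal(n)
y = x_true + np.sqrt(sigma2) * rng.standard_normal(n)
print(iterative_mmse_shrinkage(y, sigma2))
```

In this simplified setting the iteration can be checked to converge to zero whenever ||y||^2 falls below a threshold proportional to n*sigma2, and otherwise to a nonzero scaling of y, mirroring the pre-test behavior described above.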