|M.Sc. Student||Nemets Simona|
|Subject||Info-Gap Approach to Regularization|
|Department||Department of Mechanical Engineering||Supervisor||Prof. Miriam Zacksenhouse|
|Full Thesis text|
In this work, the problem of determining the coefficients of a linear model from observations is considered, with the goal of presenting an algorithm that copes with the noise and uncertainty intrinsic to such data. In particular, the algorithm should allow prediction of future responses based on a set of training data. A common strategy is to use the least-squares estimator, which minimizes the residual error. However, in many practical situations the problem is ill-posed: realistic measurements are often redundant, causing near linear dependence in the mathematical model. The least-squares estimator then becomes numerically unstable and yields a meaningless solution.
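The instability described above can be reproduced numerically. The following sketch (illustrative data, not from the thesis) builds two nearly redundant regressors and shows that ordinary least squares becomes ill-conditioned, while the well-determined part of the model is still recovered:

```python
import numpy as np

# Illustrative example: two nearly collinear regressors make the
# least-squares problem ill-posed, so small noise can produce wildly
# unstable individual coefficients.
rng = np.random.default_rng(0)
n = 50
x1 = rng.normal(size=n)
x2 = x1 + 1e-6 * rng.normal(size=n)   # nearly redundant measurement
A = np.column_stack([x1, x2])
y = x1 + 0.01 * rng.normal(size=n)    # true model: coefficient 1 on x1

w, *_ = np.linalg.lstsq(A, y, rcond=None)
# The condition number of A is enormous; the individual entries of w are
# unreliable, although their sum (the coefficient along the shared
# direction x1) is still estimated accurately.
print(np.linalg.cond(A))
print(w, w.sum())
```

Only the combined coefficient `w[0] + w[1]` is well determined by the data; the split between the two columns is dominated by noise, which is exactly the instability regularization is meant to tame.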
In order to cope with this instability, regularization methods have been developed. These methods replace the original ill-posed problem with a nearby well-posed one. The extent of fidelity to the original problem is controlled by a regularization parameter.
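As a concrete instance of such a method, the standard Tikhonov (ridge) regularization penalizes the solution norm; this is a generic sketch, not the specific procedure developed in the thesis:

```python
import numpy as np

def ridge(A, y, lam):
    """Tikhonov (ridge) regularization: minimize ||A w - y||^2 + lam * ||w||^2.
    Solved via the regularized normal equations. Any lam > 0 makes the
    problem well-posed even when A is rank-deficient; larger lam trades
    fidelity to the original least-squares problem for stability."""
    p = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(p), A.T @ y)
```

The solution norm shrinks monotonically as the regularization parameter grows, which is how the parameter controls the distance from the original problem.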
The application of regularization to a given problem comprises two stages: first, a specific regularization procedure is chosen, and then the regularization parameter is adjusted, with the latter being crucial for the success of the procedure. A pertinent problem, therefore, is to formulate an algorithm in which the relevant parameters are chosen automatically.
We present a robust-satisficing approach, based on satisficing rather than optimizing a performance criterion, while maximizing the robustness to uncertainties. Given a desired level of performance, the algorithm yields the solution with maximal robustness to uncertainties. An analytic expression is derived for the relation between the required level of performance and the resulting maximal robustness of the solution.
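The thesis's analytic expression is not reproduced here, but the robust-satisficing idea can be sketched under one common info-gap uncertainty model (an assumption for illustration): perturbations of the model matrix bounded in spectral norm, ||dA|| <= alpha. The worst-case residual of a candidate w is then ||A w - y|| + alpha * ||w||, so the robustness — the largest tolerable alpha for a required residual level s — has a closed form:

```python
import numpy as np

def robustness(A, y, w, s):
    """Info-gap robustness of candidate w for required residual level s,
    under the assumed illustrative uncertainty model ||dA||_2 <= alpha.
    Worst case over the uncertainty set: ||A w - y|| + alpha * ||w||,
    so the largest tolerable alpha is (s - ||A w - y||) / ||w||,
    and 0 when the performance level s is unreachable."""
    r = np.linalg.norm(A @ w - y)
    if s <= r:
        return 0.0
    return (s - r) / np.linalg.norm(w)
```

The trade-off is visible in the formula: demanding better performance (smaller s) lowers the robustness, while shrinking ||w|| — which is what regularization does — raises it.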
Given the inherent trade-off between performance and robustness, we propose to use the consistency between the observations and the linear model to determine a unique, parameter-free regression. In particular, the best residual error that the robust-satisficing regression can achieve possesses a unique minimum with respect to the regularization parameter. The algorithm locates this minimum and returns the corresponding parameter value, thus completing the solution of the problem.
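The parameter-selection step can be sketched generically. The exact criterion — the best achievable residual of the robust-satisficing regression as a function of the regularization parameter — follows the thesis and is represented here only by a placeholder callable; a simple grid search then locates its unique minimum:

```python
import numpy as np

def select_lambda(criterion, lambdas):
    """Return the regularization parameter minimizing a scalar criterion
    over a candidate grid. `criterion(lam)` stands in for the best
    achievable residual of the robust-satisficing regression at lam
    (its exact form is given in the thesis and assumed unimodal here)."""
    values = np.array([criterion(lam) for lam in lambdas])
    return lambdas[int(np.argmin(values))]
```

With a unimodal criterion, the grid search can of course be replaced by a faster one-dimensional method such as golden-section search; the grid form is shown for clarity.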
It is shown that in many cases the proposed criterion yields better results, in terms of residual error, than existing methods. The algorithm is applied to the challenging problem of linear regression for neural decoding in brain-machine interfaces and compared with existing methods, including the L-curve and generalized cross-validation.
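For reference, one of the baselines mentioned above — generalized cross-validation (GCV) for ridge regression — has a standard closed form, sketched here on illustrative data (the thesis's own neural-decoding data are not reproduced):

```python
import numpy as np

def gcv(A, y, lam):
    """Generalized cross-validation score for ridge parameter lam:
    GCV(lam) = n * ||(I - H) y||^2 / trace(I - H)^2,
    with hat matrix H = A (A^T A + lam * I)^{-1} A^T.
    The GCV-selected parameter is the minimizer of this score."""
    n, p = A.shape
    H = A @ np.linalg.solve(A.T @ A + lam * np.eye(p), A.T)
    r = y - H @ y
    return n * (r @ r) / np.trace(np.eye(n) - H) ** 2
```

Scanning `gcv` over a grid of candidate parameters and taking the minimizer gives the GCV baseline against which the proposed consistency-based criterion is compared.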