|Ph.D. Student||Zlochin Mark|
|Subject||Adaptive Monte Carlo Methods for Bayesian Inference|
|Department||Department of Computer Science||Supervisor||Professor Emeritus Yoram Baram|
Monte Carlo approximation methods have become an important computational tool in many branches of science, such as statistics, computer science, physics and chemistry. Over the last decade there has been tremendous growth both in the number of areas to which Monte Carlo methods are applied and in the number of new algorithms, general-purpose and task-specific alike. Despite this rapid development of the field, to date there has been no rigorous analysis of the formal setting of the Monte Carlo approximation problem, nor of the properties required of approximation algorithms.
In this work we present such a formal analysis, with particular emphasis on Bayesian inference applications. The analysis reveals several common misconceptions regarding the general Monte Carlo approximation problem, as well as some dependencies specific to Bayesian inference applications. Using the insights gained from this analysis, we propose several modifications of existing state-of-the-art Monte Carlo algorithms, which produce significantly better estimates than their existing counterparts.
We also observe that most advanced Monte Carlo methods employ sampling distributions that are automatically adapted to the target distribution, in order to ensure that the sampling distribution remains close to the target one. We distinguish between two types of adaptation used in Monte Carlo algorithms. The first is local adaptation, where the next sample point is generated using locally available information, such as the value of the target density function or its derivatives. The second is global adaptation, where global information about the target distribution, such as the location of its high-probability regions, is extracted from the collection of previously sampled points and subsequently used to generate new sample points.
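The two types of adaptation can be illustrated with minimal sketches (these are standard textbook samplers, not the algorithms developed in this thesis): a Metropolis-adjusted Langevin (MALA) step adapts locally by drifting the proposal along the gradient of the log target density, while a simple adaptive random-walk step adapts globally by scaling its proposal to the empirical spread of previously sampled points. A one-dimensional standard normal serves as a toy target.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D target: standard normal; log-density and its gradient.
log_p = lambda x: -0.5 * x**2
grad_log_p = lambda x: -x

def mala_step(x, step=0.5):
    """Local adaptation: proposal drifts along the gradient of log p at x."""
    mean_fwd = x + 0.5 * step * grad_log_p(x)
    y = mean_fwd + np.sqrt(step) * rng.standard_normal()
    mean_bwd = y + 0.5 * step * grad_log_p(y)
    # Metropolis-Hastings correction for the asymmetric Gaussian proposal.
    log_q_fwd = -((y - mean_fwd) ** 2) / (2 * step)
    log_q_bwd = -((x - mean_bwd) ** 2) / (2 * step)
    log_alpha = log_p(y) + log_q_bwd - log_p(x) - log_q_fwd
    return y if np.log(rng.uniform()) < log_alpha else x

def adaptive_rw_step(x, history, base=0.1):
    """Global adaptation: proposal scale tracks the spread of past samples."""
    scale = base if len(history) < 10 else 2.4 * np.std(history)
    y = x + scale * rng.standard_normal()
    log_alpha = log_p(y) - log_p(x)  # symmetric proposal, plain Metropolis
    return y if np.log(rng.uniform()) < log_alpha else x

# Run each chain on the same target.
x, samples = 0.0, []
for _ in range(5000):
    x = mala_step(x)
    samples.append(x)

x2, hist = 0.0, []
for _ in range(5000):
    x2 = adaptive_rw_step(x2, hist)
    hist.append(x2)
```

Both chains recover the target's moments; the difference lies in the information each exploits, which is what the local/global distinction captures.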
For local adaptation, we show that in many Bayesian learning problems a richer source of local information is available, owing to the particular differential-geometric structure of these problems. Consequently, we present a new algorithm that, by exploiting this structure, significantly outperforms other locally adaptive algorithms. We also note that the strengths and weaknesses of the two types of adaptation are complementary, and we present a hybrid method that combines both local and global adaptation mechanisms. The new method yields a considerable improvement over existing algorithms that employ only one type of adaptation.
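The idea of combining the two mechanisms can be sketched as follows (purely illustrative, not the thesis's hybrid method): a gradient-driven MALA proposal (local adaptation) whose step size is preconditioned by the empirical variance of previously sampled points (global adaptation). The toy target here is N(0, 4), so a well-adapted preconditioner should approach 4.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy target: N(0, 4); log-density and its gradient.
log_p = lambda x: -x**2 / 8.0
grad_log_p = lambda x: -x / 4.0

def hybrid_step(x, history, base_step=0.5):
    """Hybrid adaptation: local gradient drift, globally adapted step size.

    The preconditioner is fixed at the start of the step, so the forward and
    backward proposal densities in the MH ratio use the same step size.
    """
    # Global part: scale by the empirical variance of past samples
    # (floored to avoid collapse early in the run).
    precond = 1.0 if len(history) < 20 else max(np.var(history), 0.5)
    step = base_step * precond
    # Local part: drift along the gradient of log p, as in MALA.
    mean_fwd = x + 0.5 * step * grad_log_p(x)
    y = mean_fwd + np.sqrt(step) * rng.standard_normal()
    mean_bwd = y + 0.5 * step * grad_log_p(y)
    log_q_fwd = -((y - mean_fwd) ** 2) / (2 * step)
    log_q_bwd = -((x - mean_bwd) ** 2) / (2 * step)
    log_alpha = log_p(y) + log_q_bwd - log_p(x) - log_q_fwd
    return y if np.log(rng.uniform()) < log_alpha else x

x, samples = 0.0, []
for _ in range(8000):
    x = hybrid_step(x, samples)
    samples.append(x)
```

Because the adaptation uses the full sample history, this is an adaptive chain rather than a fixed-kernel one; making such schemes valid and effective is exactly the kind of question the hybrid design above must address.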