Date(s): 16/01/2017
3:30 pm - 4:30 pm
Sampling over high-dimensional spaces has become a prerequisite in the application of Bayesian statistics to machine learning problems. The most common methods for this task are Markov chain Monte Carlo (MCMC) methods. In this talk, I will present new insights into the computational complexity of these algorithms. First, I will discuss the optimal scaling problem for high-dimensional random walk Metropolis algorithms targeting densities which are differentiable in Lp mean but which may be irregular at some points (like the Laplace density, for example) and/or are supported on an interval. The scaling limit is established under assumptions which are much weaker than those used in the original derivation of (Roberts, Gelman, Gilks, 1997). This result has important practical implications for the use of random walk Metropolis algorithms in Bayesian frameworks based on sparsity-inducing priors.
In the second part of the talk, we will present a method based on the Euler discretization of the Langevin diffusion with either constant or decreasing stepsizes. We will give several new results establishing convergence to stationarity under different conditions on the log-density. Particular attention will be paid to the dependence of these bounds on the dimension of the state space.
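The Euler discretization of the Langevin diffusion with constant stepsize (often called the unadjusted Langevin algorithm) can be sketched as below. This is a generic illustration rather than the speaker's method: the Gaussian target, the stepsize `gamma`, and the iteration count are assumptions chosen here for the example.

```python
import numpy as np

def ula(grad_log_pi, x0, gamma, n_iter, seed=0):
    """Unadjusted Langevin algorithm: Euler discretization of the
    Langevin diffusion dX_t = grad log pi(X_t) dt + sqrt(2) dB_t,
    with constant stepsize gamma."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        # One Euler step: drift along the gradient plus Gaussian noise
        x = x + gamma * grad_log_pi(x) + np.sqrt(2.0 * gamma) * rng.standard_normal(x.shape)
    return x

# Example: standard Gaussian target, for which grad log pi(x) = -x
sample = ula(lambda x: -x, x0=np.zeros(10), gamma=0.05, n_iter=2000)
```

With a constant stepsize the chain converges to a biased stationary distribution whose bias shrinks with gamma; decreasing stepsizes remove this bias at the cost of slower mixing, which is one trade-off the convergence bounds in the talk quantify.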