Julien Sénégas
July 26, 2002
The goal of this tutorial is to provide an introduction to the concepts of Markov chain Monte Carlo (MCMC) as well as a description of some sampling algorithms. A reference book on this subject is [5].
Sampling methods which rely on Markov chain theory are iterative: the principle is to build a succession of states such that, once convergence is reached, the consecutive states can be regarded as draws from the target probability distribution. With these methods, it is possible to sample from very general probability distributions, whereas direct sampling algorithms only apply to specific ones such as the Gaussian distribution. In particular, the target distribution can be a posterior distribution in a Bayesian context, which makes MCMC methods very attractive for Bayesian computation.
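As a preview of this iterative principle, here is a minimal sketch (not taken from the tutorial) of a random-walk Metropolis sampler: the chain moves by proposing a perturbed state and accepting it with a probability that depends on the ratio of target densities. The target chosen here, a standard normal, and all function names are illustrative assumptions.

```python
import math
import random

def metropolis_sample(log_target, x0, n_steps, step=1.0, seed=0):
    """Build a Markov chain whose states, after convergence,
    are (approximately) draws from the target distribution.
    log_target: log of the target density, up to an additive constant."""
    rng = random.Random(seed)
    x = x0
    lp = log_target(x)
    chain = []
    for _ in range(n_steps):
        y = x + rng.gauss(0.0, step)          # propose a random-walk move
        lq = log_target(y)
        # accept with probability min(1, target(y)/target(x))
        if math.log(rng.random()) < lq - lp:
            x, lp = y, lq
        chain.append(x)
    return chain

# Illustrative target: standard normal density (log, up to a constant)
chain = metropolis_sample(lambda x: -0.5 * x * x, x0=0.0, n_steps=20000)
samples = chain[5000:]  # discard the initial iterations as burn-in
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```

After discarding the burn-in, the empirical mean and variance of the retained states should be close to those of the target (0 and 1 here), illustrating how the chain's states serve as samples.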
The outline of this tutorial is as follows: in section 1, we recall the definition and properties of Markov chains on countable state spaces; in section 2, we present the principles of Monte Carlo integration; examples of sampling algorithms relying on Markov chain theory are given in section 3.