Notes on the KL-divergence between a Markov chain and its equilibrium distribution
Iain Murray and Ruslan Salakhutdinov, 2008.
After drawing a sample from a distribution, further correlated samples can be obtained by simulating a Markov chain that leaves the target distribution stationary. Often drawing even one sample from a distribution of interest is intractable, so the Markov chain is initialized arbitrarily. This note considers the marginal distribution over the Markov chain's position at each time step. We show that this marginal never moves further away from the chain's stationary distribution, as measured by the KL-divergence in either direction. This is a known result (Cover and Thomas, 1991); the presentation here is for review purposes only.
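The result lends itself to a quick numerical check. The sketch below is not part of the note itself: it simulates the marginal distribution of a small three-state chain and prints the KL-divergence to the stationary distribution, in both directions, at each step. The transition matrix T and the initial distribution p are arbitrary choices made purely for illustration.

import numpy as np

def kl(p, q):
    # KL(p || q) for discrete distributions; terms with p[i] = 0 contribute zero.
    mask = p > 0
    return np.sum(p[mask] * np.log(p[mask] / q[mask]))

# Row-stochastic transition matrix: T[i, j] = P(next state j | current state i).
# Chosen arbitrarily; any irreducible, aperiodic chain would do.
T = np.array([[0.90, 0.05, 0.05],
              [0.10, 0.80, 0.10],
              [0.20, 0.30, 0.50]])

# Stationary distribution: the left eigenvector of T with eigenvalue 1.
evals, evecs = np.linalg.eig(T.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi /= pi.sum()

# Arbitrary, strictly positive initial marginal distribution.
p = np.array([0.98, 0.01, 0.01])

for t in range(15):
    print(f"t = {t:2d}   KL(p_t || pi) = {kl(p, pi):.6f}   KL(pi || p_t) = {kl(pi, p):.6f}")
    p = p @ T  # marginal distribution after one more transition

Both printed columns should be non-increasing in t, matching the statement that the marginal never moves further from the stationary distribution under either direction of the KL-divergence.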