An alternative proof of DA convergence

I came across a curio today while looking at recent postings on arXiv, namely a different proof of the convergence of the Data Augmentation algorithm, more than twenty years after it was proposed by Martin Tanner and Wing Hung Wong in a 1987 JASA paper… The convergence under the positivity condition is of course a direct consequence of the ergodic theorem, as shown for instance by Tierney (1994), but the note by Yaming Yu uses instead a Kullback divergence

K(p^{(t)},\pi) = \int \log\{ p^{(t)}(x)/\pi(x) \} p^{(t)}(\text{d}x)

and shows, as Liu, Wong and Kong (Biometrika, 1994) do for the variance, that this divergence is monotonically decreasing in t. The proof is interesting in that only functional (i.e., non-ergodic) arguments are used, even though I am a wee bit surprised at IEEE Transactions on Information Theory publishing this type of arcane mathematics… Note that the above divergence is the “wrong” one in that it measures the divergence from p^{(t)}, not from \pi. The convergence thus involves a sequence of divergences rather than a single one. (Of course, this has no consequence for the corollary that the total variation distance goes to zero.)
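
As a quick numerical illustration of this monotone decrease (a toy sketch of my own, not taken from Yu’s note): take the target \pi(x,y) to be a standard bivariate normal with correlation \rho, for which both DA conditionals are Gaussian and the X-marginal of the chain stays Gaussian after every sweep, so that the divergence from the current marginal to \pi_X is available in closed form. The values of \rho and of the starting distribution below are arbitrary.

# toy two-stage DA / Gibbs on a standard bivariate normal target with
# correlation rho: one sweep is y | x ~ N(rho x, 1 - rho^2) followed by
# x | y ~ N(rho y, 1 - rho^2), so an X-marginal N(m, v) is mapped exactly
# to N(rho^2 m, rho^4 v + 1 - rho^4)
import numpy as np

rho = 0.8          # correlation of the toy target (arbitrary)
m, v = 5.0, 4.0    # mean and variance of the starting X-marginal (arbitrary)

def kl_to_target(m, v):
    # K(N(m, v), N(0, 1)), divergence from the current X-marginal to pi_X
    return 0.5 * (v + m**2 - 1.0 - np.log(v))

for t in range(10):
    print(f"t = {t:2d}   K(p^(t), pi) = {kl_to_target(m, v):.6f}")
    m = rho**2 * m                     # mean after one full DA sweep
    v = rho**4 * v + (1.0 - rho**4)    # variance after one full DA sweep

The printed divergences decrease monotonically towards zero, in line with the result.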

6 Responses to “An alternative proof of DA convergence”

  1. I believe the proof of Lemma 3.1 relies on the fact that p^{(t)}(y|x) = \pi(y|x) and p^{(t)}(x|y) = \pi(x|y).

    For the case t odd, we have (using equation (1))

    D(p^{(t)}|p^{(t+1)}) + D(p^{(t+1)}|\pi) = \int p^{(t)}(x)\, p^{(t)}(y|x) \left[ \log\left( \frac{p^{(t)}(y|x)}{\pi(y|x)} \right) + \frac{\pi(y|x)}{p^{(t)}(y|x)} \log\left( \frac{p^{(t)}(x)}{\pi(x)} \right) \right] \text{d}x\,\text{d}y

    which is equal to

    D(p^{(t)}|\pi) = \int p^{(t)}(x,y) \log \frac{p^{(t)}(x,y)}{\pi(x,y)} \,\text{d}x\,\text{d}y

    since the first logarithm vanishes and the factor \frac{\pi(y|x)}{p^{(t)}(y|x)} equals 1.

  2. I had a further look at the equality of Lemma 3.1, following The Dude’s comments, and I am also unable to prove it:

    D(p^{(t)}|\pi) = D(p^{(t)}|p^{(t+1)}) + \int p^{(t)}(x) \log \dfrac{p^{(t+1)} (x) }{ \pi(x) }\,\text{d}x

    involves an integral against p^{(t)} instead of p^{(t+1)}… I have the same trouble with equation (3). But looking further at the specific shape of the p^{(t)}’s, I think there is a simplification of the conditionals in the ratio within the logarithm: for one parity of t,

    \int p^{(t)}(z) \log \dfrac{p^{(t+1)}(z)}{\pi(z)}\,\text{d}z = \int p_X^{(t-1)}(x)\, \pi_{Y|X}(y|x) \log \dfrac{p_X^{(t)}(x)}{\pi_X(x)}\,\text{d}x\,\text{d}y

    and

    \int p^{(t)}(z) \log \dfrac{p^{(t+1)}(z)}{\pi(z)}\,\text{d}z = \int p_X^{(t)}(x) \log \dfrac{p_X^{(t)}(x)}{\pi_X(x)}\,\text{d}x

    which is equal to D(p^{(t+1)}|\pi). This is a very special feature of the Data Augmentation algorithm (a short sketch of this collapse of the joint divergence onto a marginal one is given after the comments).

  3. Well, I would rather agree with xi’an there: the form of divergence used here is not the form used in ML estimation, which goes just the opposite way (the expectation is always taken with respect to the actual, fixed, distribution, not the empirical one).

    Whether one would get somewhere using the divergence in the opposite direction is an open question, but looking at Cover and Thomas, Eq. (2.125) (page 36), one sees that the actual result is

    D(\mu_n|\mu'_n) \geq D(\mu_{n+1}|\mu'_{n+1})

    where \mu_0 and \mu_0' are two different initial distributions for the chain. Hence, D(\pi|\mu_n) is also decreasing, but that doesn’t mean that it goes to zero… (A small numerical illustration of this monotonicity is given after the comments.)

    Looking at the paper by Yu, I had some problems with Lemma 3.1, whose “proof is simple and hence omitted”! In fact, the first unnumbered expression on page 36 of Cover and Thomas gives the general expression for the difference between the two divergences at successive time instants (which incidentally involves time-reversed kernels). It is hard to imagine that it can be expressed as something that does not depend on \pi. This is certainly not true for Markov chains in general, not even for \pi-reversible ones. So, if anyone sees why this is true…

  4. Applied_Probabilist Says:

    Dear Xi’an,

    Just two quick comments on your interesting comment regarding this paper, which is itself a comment on the data augmentation algorithm :-)

    1. “divergence is monotonically decreasing in t”: I wouldn’t attribute this fact to the Yu paper. It is a general phenomenon for Markov chains.

    2. “Note that the above divergence is the ‘wrong’ one”: I’d argue that the other divergence, i.e., the one from pi to p^(t), is the “wrong” one. For one thing, I doubt the proof of the theorem would go through if the other divergence were used. Secondly, when you express the maximum likelihood principle as minimizing the divergence between the empirical distribution and the theoretical distribution, you use the divergence from the empirical to the theoretical (data to model), not the other way around. The same thing here.

    • Thanks. Yaming Yu does note the fact that the divergence is decreasing for all Markov chains; I had missed this comment. As for point 2, I completely agree this is a convenient mathematical perspective and I note the MLE/EM analogy (spelled out after the comments), but, as a decision theorist looking at distances as loss functions, I find less than compelling the fact that the loss is induced by the current approximation to the target (truth) rather than by the target. In other words, it seems to me that the successive distances are not commensurable, i.e. that we are not using the same metric at each step. Obviously this does not undermine the total variation result.
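
To spell out the simplification pointed at in the first two comments (my own reading, sketched under the assumption that at time t the y-coordinate has just been refreshed from the exact conditional, so that p^{(t)}(x,y) = p_X^{(t)}(x)\,\pi(y|x)): the chain rule for the Kullback divergence then gives

D(p^{(t)}|\pi) = \int p_X^{(t)}(x)\,\pi(y|x) \log \dfrac{p_X^{(t)}(x)\,\pi(y|x)}{\pi_X(x)\,\pi(y|x)}\,\text{d}x\,\text{d}y = D(p_X^{(t)}|\pi_X)

and, since p_Y^{(t+1)}(y) = \int \pi(y|x)\,p_X^{(t)}(x)\,\text{d}x and \pi_Y(y) = \int \pi(y|x)\,\pi_X(x)\,\text{d}x are images of the two X-marginals through the same kernel \pi(y|x), the data-processing inequality gives D(p_Y^{(t+1)}|\pi_Y) \le D(p_X^{(t)}|\pi_X). Exchanging the roles of x and y at the next half-step yields the monotone decrease of the joint divergence.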
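
As for the inequality of Cover and Thomas quoted in the third comment, here is a small numerical illustration (a toy finite-state example of my own, with an arbitrary transition matrix and arbitrary starting distributions, not taken from either paper):

# toy check: for two initial distributions pushed through the same transition
# matrix P, the divergence D(mu_n | mu_n') never increases in n
import numpy as np

P = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.4, 0.3],
              [0.1, 0.3, 0.6]])      # arbitrary 3-state transition matrix
mu = np.array([1.0, 0.0, 0.0])       # two different starting distributions
nu = np.array([0.1, 0.1, 0.8])

def kl(p, q):
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

prev = np.inf
for n in range(20):
    d = kl(mu, nu)
    assert d <= prev + 1e-12          # monotone decrease, as in Eq. (2.125)
    prev = d
    mu, nu = mu @ P, nu @ P           # one step of the chain for both laws

print("final divergence after 20 steps:", prev)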
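
Finally, the maximum likelihood analogy invoked in the last comment and its reply amounts to the standard identity (for an i.i.d. sample x_1,\ldots,x_n with empirical distribution \hat p_n):

D(\hat p_n|p_\theta) = -H(\hat p_n) - \dfrac{1}{n} \sum_{i=1}^n \log p_\theta(x_i)

so that minimising the divergence from the empirical distribution to the model over \theta is the same as maximising the likelihood, the entropy term being free of \theta.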
