**W**ith Jim Hobert and Vivekananda Roy, we have just completed an arXived paper on the finer convergence properties of two data augmentation algorithms. Every reversible Markov chain defines a self-adjoint operator on an L² space, whose spectrum encodes the convergence properties of the chain. When the state space of the chain is finite, the spectrum is simply the set of eigenvalues of the corresponding Markov transition matrix. When the state space is infinite, however, the spectrum may be uncountable and is nearly always impossible to calculate. This is the case for most applications of the data augmentation (DA) algorithm, where the state space of the DA Markov chain is infinite. We show in this paper that, under regularity conditions that include the finiteness of the augmented space, the operators defined by the DA chain and by Hobert and Marchev's (2008) alternative chain are both compact, and the corresponding spectra are both finite subsets of [0,1). The main result of the paper is that the spectrum of Hobert and Marchev's (2008) chain dominates the spectrum of the DA chain, in the sense that *the ordered elements of the former are all less than or equal to the corresponding elements of the latter*. As a concrete example, we study a widely used DA algorithm for the exploration of posterior densities associated with standard mixture models, introduced in an earlier paper of mine with Jean Diebolt (Diebolt and Robert, 1994). In particular, we compare this mixture DA algorithm with an alternative algorithm proposed by Sylvia Frühwirth-Schnatter (2001) that is based on random label switching. I am very glad to be part of this paper, as it provides an exact assessment of the convergence speed of the DA algorithm for normal mixtures, a problem I discussed at length with Richard Tweedie when I visited him in Colorado in 1993…
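To see the structure behind these spectral statements, here is a minimal numerical sketch of a DA kernel on a *finite* space. The joint distribution `J` over (x, z) below is entirely made up for illustration, not the mixture model of the paper; the point is only that composing the two conditional draws x → z → x' yields a reversible, positive kernel whose eigenvalues are real and lie in [0, 1], with the largest equal to 1:

```python
import numpy as np

# Hypothetical joint pmf over (x, z) on a small finite space,
# chosen arbitrarily for illustration (not the mixture posterior
# studied in the paper).
J = np.array([
    [0.10, 0.05, 0.05],
    [0.05, 0.20, 0.10],
    [0.05, 0.10, 0.30],
])
J /= J.sum()

# The two conditional kernels used by the DA algorithm:
p_z_given_x = J / J.sum(axis=1, keepdims=True)       # draw z given x
p_x_given_z = (J / J.sum(axis=0, keepdims=True)).T   # draw x given z

# One DA step is x -> z -> x'; the resulting kernel is reversible
# with respect to the x-marginal and positive, so its spectrum is
# real and contained in [0, 1].
P = p_z_given_x @ p_x_given_z

eigs = np.sort(np.real(np.linalg.eigvals(P)))[::-1]
print(eigs)  # eigs[0] is 1 (stationarity); eigs[1] drives convergence
```

The second-largest eigenvalue is the quantity tracked in the graphs below: the closer it sits to 1, the slower the DA chain mixes.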

The above graph shows how the dominant eigenvalue, the one that drives convergence, moves with the number of observations in the case of a Bernoulli mixture, for different values of the Bernoulli probabilities. As expected, the convergence speed quickly decreases with the sample size, even in this toy example. The gain resulting from using Sylvia Frühwirth-Schnatter's second scheme is obvious in the above graph, obtained for a normal mixture.