Archive for MCMC convergence

far south

Posted in Books, Statistics, Travel, University life on February 23, 2022 by xi'an

21w5107 [day 2]

Posted in Books, Mountains, pictures, Statistics, Travel, University life on December 1, 2021 by xi'an

After a rich and local (if freezing) dinner on a rooftop facing the baroque Oaxaca cathedral, and an early invigorating outdoor swim in my case!, the morning session was mostly on mixtures, with Helen Ogden exploring X validation for (estimating the number k of components for) finite mixtures, when using the likelihood as an objective function. I was however unclear about the goal, considering that the data supporting the study was Uniform(0,1), nothing like a mixture of Normal distributions. And about the consistency attached to the objective function. The session ended with Diana Cai presenting a counter-argument in the sense that she proved, along with Trevor Campbell and Tamara Broderick, that the posterior on k diverges to infinity with the number n of observations if a mixture model is misspecified for said data. Which does not come as a major surprise since there is no properly defined value of k when the data is not generated from the adopted mixture. I would love to see an extension to the case when the k component mixture contains a non-parametric component! In-between, Alexander Ly discussed Bayes factors for multiple datasets, with some asymptotics showing consistency for some (improper!) priors if one sample size grows to infinity. With actually the same rate attained under both hypotheses. Luis Nieto-Barajas presented an approach to uncertainty assessment through KL divergence for random probability measures, which requires a calibration of the KL in this setting, as KL does not enjoy a uniform scale, and a prior on a Pólya tree. And Chris Holmes presented a recent work with Edwin Fong and Steven Walker on a prediction approach to Bayesian inference. Which I had had on my reading list for a while. It is a very original proposal where likelihoods and priors are replaced by the sequence of posterior predictives and only parameters of interest get simulated. The Bayesian flavour of the approach is delicate to assess though, albeit a form of non-parametric Bayesian perspective… (I still need to read the paper carefully.)

In the afternoon session, Judith Rousseau presented her recent foray into cut posteriors for semi-parametric HMMs. With interesting outcomes for efficiently estimating the transition matrix, the component distributions, and the smoothing distribution. I wonder at the connection with safe Bayes in that cut posteriors induce a loss of information. Sinead Williamson spoke on distributed MCMC for BNP. Going back to the “theme of the day”, namely clustering and finding the correct (?) number of clusters. With a collapsed versus uncollapsed division that reminded me of the marginal vs. conditional divide María Gil-Leyva discussed yesterday. Plus a decomposition of a random measure into a finite mixture and an infinite one that also reminded me of the morning talk of Diana Cai. (And making me wonder at the choice of the number K of terms in the finite part.) Michele Guindani spoke about clustering distributions (with firecrackers as a background!). Using the nDP mixture model, which was shown to suffer from degeneracy (as discussed by Federico Camerlenghi et al. in BA). The subtle difference stands in using the same (common) atoms in all random distributions at the top of the hierarchy, with independent weights. Making the partitions partially exchangeable. The approach relies on Sylvia’s generalised mixtures of finite mixtures. With interesting applications to microbiome and calcium imaging (including a mouse brain in action!). And Giovanni Rebaudo presented a generalised notion of clustering aligned on a graph, with some observations located between the nodes corresponding to clusters. Represented as a random measure with common parameters for the clusters and separate parameters outside. Interestingly playing on random partitions, Pólya urns, and species sampling.

21w5107 [day 1]

Posted in pictures, Statistics, Travel, University life on November 30, 2021 by xi'an

The workshop started with the bad news of our friend Michele Guindani being hit and mugged upon arrival in Oaxaca, Saturday night. Fortunately, he was not hurt, but he lost both phone and wallet, always a major bummer when abroad… Still, this did not cast a lasting pall on the gathering of long-time-no-see friends, whom I had indeed not seen for at least two years. Except for those who came to the CIRMirror!

A few hours later, we got woken up by fairly loud firecrackers (palomas? cohetes?) at 5am, for no reason I can fathom (the Mexican Revolution day was a week ago), although it seemed correlated with the nearby church bells going on at full blast (for Lauds? Hanukkah? Cyber Monday? Chirac’s birthdate?). The above picture was taken in the town of Santa María del Tule, with its super-massive Montezuma cypress tree and remaining decorations from the Día de los Muertos.

Without launching (much) the debate on whether or not Bayesian non-parametrics qualified as “objective Bayesian” methods, Igor Prünster started the day with a non-parametric presentation of dependent random probability measures. With the always fascinating notion that a random discrete non-parametric prior induces a distribution on the partitions (EPPF). And applicability in mixtures and their generalisations. Realising that the highly discrete nature of such measures is not such an issue for a given sample size n, since there are at most n elements in the partition. Beatrice Franzolini discussed specific ways to create dependent distributions based on independent samples, although her practical example based on one N(-10,1) sample and another (independently) N(10,1) sample seemed to fit in several of the dependent random measures she compared. And Marta Catalano (Warwick) presented her work on partial exchangeability and optimal transportation (which I had also heard in CIRM last June and in Warwick last week). One thing I had not realised earlier was the dependence of the Wasserstein distance on the parameterisation, although it now makes perfect sense. If only for the coupling. I alas had to miss Isadora Antoniano-Villalobos’ talk as I had to teach my undergrad class in Paris Dauphine at the same time… This non-parametric session was quite homogeneous and rich in perspectives.

In an all-MCMC afternoon, Julyan Arbel talked about reference priors for extreme value distributions, with the “shocking” case of a restriction on the support of one parameter, ξ. Which means in fact that the Jeffreys prior is then undefined. This reminded me somewhat of the work of Clara Grazian on Jeffreys priors for mixtures, where the Fisher information does not exist for some models. The second part of this talk was about modified local versions of Gelman & Rubin (1992) R hats. And the recent modification proposed by Aki and co-authors. Where I thought that the multivariate challenge of defining ranks could be alleviated by directly considering the likelihood values of the chains. And Trevor Campbell gradually built an involved parallel tempering method where the powers of a geometric mixture are optimised as spline functions of the temperature. Next, María Gil-Leyva presented her original and ordered approach to mixture estimation, which I discussed in a blog post published two days ago (!). She corrected my impressions that (i) the methods were all impervious to label switching and (ii) required some conjugacy to operate. The final talk of the day was by Anirban Bhattacharya on high-D Bayesian regression and coupling techniques for checking convergence, a paper that had been on my reading list for a long while. A very elaborate construct of coupling strategies within a Gibbs sampler, with some steps relying on optimal coupling and others on the use of common random numbers.

reproducibility check [Nature]

Posted in Statistics on September 1, 2021 by xi'an

While reading the Nature article Swarm Learning, by Warnat-Herresthal et [many] al., which goes beyond federated learning by removing the need for a central coordinator [if resorting to naïve averaging of the neural network parameters], I came across this reporting summary on the statistics checks made by the authors. With a specific box on Bayesian analysis and MCMC implementation!

more air for MCMC

Posted in Books, R, Statistics on May 30, 2021 by xi'an

Aki Vehtari, Andrew Gelman, Dan Simpson, Bob Carpenter, and Paul-Christian Bürkner have just published a Bayesian Analysis paper about using an improved R factor for MCMC convergence assessment. From the early days of MCMC, convergence assessment has been a recurring (and recurrent!) question in the community. First leading to a flurry of proposals [which Kerrie, Chantal, and myself reviewed in the Valencia 1998 proceedings], and then slowly disintegrating under the onslaughts of reality, i.e., that none could be 100% foolproof in full generality… This included the (possibly now forgotten) single-versus-multiple-chains debate between Charlie Geyer [for single] and Andrew Gelman and Don Rubin [for multiple]. The latter introduced an analysis-of-variance R factor, which remains quite popular to this day, in part for being included in most MCMC software, like BUGS. That this R may fail to identify convergence issues, even in the more recent split version, does not come as a major surprise, since in any situation with a long-term influence of the starting distribution it may well fail to identify missing (significant) parts of the posterior support. (It is thus somewhat disconcerting to me to see that the main recommendation is to move the bound on R from 1.1 to 1.01, reminding me to some extent of a recent proposal to move the null rejection boundary from 0.05 to 0.005…) Similarly, the ESS may prove a poor signal for convergence or lack thereof, especially because the approximation of the asymptotic variance relies on stationarity assumptions. While multiplying the monitoring tools (as in CODA) helps with identifying convergence issues, looking at a single convergence indicator is somewhat like looking only at a frequentist estimator! (And with greater automation comes greater responsibility, in keeping a critical perspective.)
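As an aside, here is a minimal sketch in R of the rank-normalised split-R construction the paper builds on, only meant to spell the steps out for a single scalar parameter; it is not the reference implementation (a proper one is available in the authors' posterior package, e.g. posterior::rhat()), and the folded variant aimed at tail behaviour is omitted.

```r
# Minimal sketch of rank-normalised split-R for one scalar parameter.
# Not the reference implementation: see the 'posterior' package for that.
split_rhat <- function(chains) {
  # chains: an (iterations x chains) matrix of draws
  n <- floor(nrow(chains) / 2)
  # split every chain in two halves, so that within-chain trends show up
  # as disagreement between (half-)chains
  halves <- cbind(chains[1:n, , drop = FALSE],
                  chains[(nrow(chains) - n + 1):nrow(chains), , drop = FALSE])
  # rank-normalise the pooled draws, making the diagnostic robust to
  # heavy tails and non-existing moments
  z <- matrix(qnorm((rank(halves) - 3/8) / (length(halves) + 1/4)), nrow = n)
  # classic Gelman & Rubin analysis-of-variance step on the normal scores
  W <- mean(apply(z, 2, var))   # average within-chain variance
  B <- n * var(colMeans(z))     # between-chain variance
  sqrt(((n - 1) / n * W + B / n) / W)
}

# toy check: four stationary N(0,1) chains should return a value close to 1
set.seed(1)
split_rhat(matrix(rnorm(4000), ncol = 4))
```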

Looking for a broader perspective, I thus wonder at what we would instead need to assess the lack of convergence of an MCMC chain without much massaging of the said chain. An evaluation of the (Kullback-Leibler, Wasserstein, or other) distance between the distribution of the chain at iteration n, or across iterations, and the true target? A percentage of the mass of the posterior visited so far, which relates to estimating the normalising constant, with a relatively vast array of solutions made available in recent years? I remain perplexed and frustrated by the fact that, 30 years later, the computed values of the visited likelihoods are not better exploited. Through, for instance, machine-learning approximations of the target, which could themselves be utilised for approximating the normalising constant and potential divergences from other approximations.
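For the sake of illustration only, here is what such a distance evaluation could look like on a toy example where the target can be simulated from (which obviously begs the question, the target being precisely what is unavailable in practice), using the fact that the 1-Wasserstein distance between two equal-size samples is the average gap between their order statistics.

```r
# Purely illustrative: 1-Wasserstein distance between the empirical
# distribution of a chain and a sample from the (here known) target.
w1 <- function(x, y) mean(abs(sort(x) - sort(y)))  # equal sample sizes assumed

set.seed(2)
target <- rnorm(1e4)   # stand-in for the true target, here N(0,1)
# a sticky random-walk Metropolis chain deliberately started far in the tail
chain <- numeric(1e4)
chain[1] <- 10
for (t in 2:length(chain)) {
  prop <- chain[t - 1] + rnorm(1, sd = 0.5)
  accept <- log(runif(1)) < dnorm(prop, log = TRUE) - dnorm(chain[t - 1], log = TRUE)
  chain[t] <- if (accept) prop else chain[t - 1]
}
w1(chain, target)                           # inflated by the burn-in excursion
w1(chain[-(1:2000)], sample(target, 8000))  # smaller once burn-in is discarded
```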
