## Archive for cut models

## what a party!

Posted in pictures, Statistics, Travel, University life, Wines with tags ABC, Approximate Bayesian computation, birthday, Cancún, COVID-19, cut models, ethiopian food, IHP, injera, Institut Henri Poincaré, Laplace, MCMC, Panthéon, unbiased MCMC, WoT on September 13, 2021 by xi'an

**W**e ended up having a terrific birthday party last Thursday afternoon, with about 30 friends listening at the Institut Henri Poincaré to Florence, Pierre, and Sylvia giving lectures on my favourite themes, namely ABC, MCMC, and mixture inference. Including subtle allusions to my many idiosyncrasies in three different flavours! And a limited number of anecdotes, including the unavoidable Cancún glasses disaster! We later headed to a small Ethiopian restaurant located on the other side of the Panthéon, rue de l'École Polytechnique (rather than on the nearby rue Laplace!), which was going to be too tiny for us, especially in these COVID times, until the sky cleared up and the restaurant set enough tables in the small street for us to enjoy their injeras and wots till almost midnight. The most exciting episode of the evening came when someone tried to steal some of our bags that had been stored in a back room and Tony spotted the outlier and chased him till the thief dropped the bags..! Thanks to Tony for saving the evening and our computers!!! To Éric, Jean-Michel and Judith for organising this 9/9 event (after twisting my arm just a wee bit). And to all my friends who joined the party, some from far away…

## EM degeneracy

Posted in pictures, Statistics, Travel, University life with tags ABC, BayesComp 2020, Bernstein-von Mises theorem, clustering, compatible conditional distributions, conference, cut models, cycle path, EM algorithm, Gibbs sampling, hidden Markov models, Institut de Mathématique d'Orsay, MCMC, MHC 2021, mixtures, particle filters, physical attendance, Rao-Blackwellisation, SEM, SMC, smoothing, Université Paris-Sud on June 16, 2021 by xi'an

**A**t the MHC 2021 conference today (which I biked to attend in person, a first since BayesComp!) I listened to Christophe Biernacki exposing the dangers of EM applied to mixtures in the presence of missing data, namely that the probability of the algorithm reaching a degenerate solution, a component supported by a single observation, rises with the proportion of missing data. This is not hugely surprising as there is a real (global) mode at this solution. If single-observation components are prohibited, they should not be accepted in the EM update. Just as in Bayesian analyses with improper priors, the likelihood should bar single or double observation components… Which of course makes EM harder to implement. Or not?! MCEM, SEM and Gibbs are obviously straightforward to modify in this case.
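To see why such degenerate solutions are genuine (global) modes, here is a quick numerical illustration, a toy example of my own rather than anything from Christophe Biernacki's talk: in a two-component Gaussian mixture, pinning one component mean on a single observation and letting its scale shrink sends the log-likelihood to +∞, even though this corresponds to no meaningful fit.

```python
import numpy as np

def mixture_loglik(x, mu1, sigma1, mu2, sigma2, w=0.5):
    """Log-likelihood of a two-component Gaussian mixture."""
    def comp(mu, sigma):
        return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (np.sqrt(2 * np.pi) * sigma)
    return np.sum(np.log(w * comp(mu1, sigma1) + (1 - w) * comp(mu2, sigma2)))

rng = np.random.default_rng(0)
x = rng.normal(size=50)  # data actually drawn from a plain N(0,1)

# pin the first component on a single observation and shrink its scale:
# the likelihood is unbounded, so EM can drift towards this degenerate mode
for sigma1 in (1.0, 1e-2, 1e-4, 1e-6):
    print(sigma1, mixture_loglik(x, x[0], sigma1, 0.0, 1.0))
```

The divergence only kicks in once the scale is small relative to the spacing of the observations, which is why a well-initialised EM run may never notice the degenerate mode, while runs with heavy missingness wander into it.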

Judith Rousseau also gave a fascinating talk on the properties of non-parametric mixtures, from a surprisingly light set of conditions for identifiability to posterior consistency. With an interesting simultaneous use of several priors that is a particular case of cut models: a well-defined joint distribution that cannot be a posterior, although this does not impact simulation issues. And a nice trick turning a hidden Markov chain into a fully finite hidden Markov chain, as this is sufficient to recover a Bernstein-von Mises asymptotic, if inefficient. Sylvain Le Corff presented a pseudo-marginal sequential sampler for smoothing, where the transition densities are replaced by unbiased estimators, with connections to approximate Bayesian computation smoothing. This proves harder than I first imagined because of the backward-sampling operations…
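The sequential smoothing construction is beyond a short snippet, but the underlying pseudo-marginal principle is easy to sketch: replace the target density in the Metropolis-Hastings ratio by a non-negative unbiased estimator, recycled along the chain, and the marginal in x remains exactly the target. A minimal sketch on a standard normal target, with a unit-mean log-normal weight standing in for the unbiased estimator (all choices here are mine, not Sylvain Le Corff's):

```python
import numpy as np

rng = np.random.default_rng(1)

def target(x):
    return np.exp(-0.5 * x * x)  # N(0,1) density, unnormalised

def noisy_target(x, s=0.5):
    # unbiased estimator of target(x): multiply by a LogNormal(-s^2/2, s^2)
    # weight, whose expectation is exactly 1
    return target(x) * rng.lognormal(-0.5 * s * s, s)

def pseudo_marginal_mh(n_iter=20_000, step=1.0):
    x, px = 0.0, noisy_target(0.0)
    out = np.empty(n_iter)
    for i in range(n_iter):
        xp = x + step * rng.normal()
        pxp = noisy_target(xp)        # fresh estimate at the proposal only
        if rng.uniform() < pxp / px:  # current estimate is recycled, never refreshed
            x, px = xp, pxp
        out[i] = x
    return out

draws = pseudo_marginal_mh()
print(draws.mean(), draws.var())
```

The crucial (and easily violated) detail is that the estimate at the current state is carried along rather than re-drawn, which is what makes the extended chain exact.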

## two Parisian talks by Pierre Jacob in January

Posted in pictures, Statistics, University life with tags coupling, CREST, cut models, ENSAE, Gibbs sampling, MCMC, Paris-Saclay campus, Pierre Jacob, prior construction, Université Paris Dauphine on December 21, 2017 by xi'an

**W**hile back in Paris from Harvard in early January, Pierre Jacob will give two talks on works of his:

January 09, 10:30, séminaire d’Analyse-Probabilités, Université Paris-Dauphine: Unbiased MCMC

*Markov chain Monte Carlo (MCMC) methods provide consistent approximations of integrals as the number of iterations goes to infinity. However, MCMC estimators are generally biased after any fixed number of iterations, which complicates both parallel computation and the construction of confidence intervals. We propose to remove this bias by using couplings of Markov chains and a telescopic sum argument, inspired by Glynn & Rhee (2014). The resulting unbiased estimators can be computed independently in parallel, and confidence intervals can be directly constructed from the Central Limit Theorem for i.i.d. variables. We provide practical couplings for important algorithms such as the Metropolis-Hastings and Gibbs samplers. We establish the theoretical validity of the proposed estimators, and study their variances and computational costs. In numerical experiments, including inference in hierarchical models, bimodal or high-dimensional target distributions, logistic regressions with the Pólya-Gamma Gibbs sampler and the Bayesian Lasso, we demonstrate the wide applicability of the proposed methodology as well as its limitations. Finally, we illustrate how the proposed estimators can approximate the "cut" distribution that arises in Bayesian inference for misspecified models.*
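The coupling idea in the abstract can be illustrated on a toy target. The sketch below (my own illustration, not Pierre Jacob's code) couples two random-walk Metropolis-Hastings chains on a standard normal via a maximal coupling of the proposals and a shared acceptance uniform, then forms the lag-1 telescoping estimator: once the chains meet they stay equal, the sum truncates, and the estimator is unbiased at any finite computing time.

```python
import numpy as np

def log_target(x):
    return -0.5 * x * x  # standard normal target, up to a constant

def mh_step(rng, x, sigma):
    """One random-walk Metropolis-Hastings step."""
    xp = x + sigma * rng.normal()
    return xp if np.log(rng.uniform()) < log_target(xp) - log_target(x) else x

def max_coupling(rng, mu1, mu2, sigma):
    """Maximal coupling of N(mu1, sigma^2) and N(mu2, sigma^2)."""
    logq = lambda z, m: -0.5 * ((z - m) / sigma) ** 2
    x = mu1 + sigma * rng.normal()
    if np.log(rng.uniform()) + logq(x, mu1) <= logq(x, mu2):
        return x, x  # proposals coincide with maximal probability
    while True:
        y = mu2 + sigma * rng.normal()
        if np.log(rng.uniform()) + logq(y, mu2) > logq(y, mu1):
            return x, y

def coupled_mh_step(rng, x, y, sigma):
    """Advance both chains jointly, sharing the acceptance uniform."""
    xp, yp = max_coupling(rng, x, y, sigma)
    logu = np.log(rng.uniform())
    x_new = xp if logu < log_target(xp) - log_target(x) else x
    y_new = yp if logu < log_target(yp) - log_target(y) else y
    return x_new, y_new

def unbiased_estimate(rng, h, sigma=1.0, max_iter=10_000):
    """Telescoping estimator with lag 1 and burn-in k = 0."""
    x0, y0 = rng.normal(), rng.normal()
    est = h(x0)
    x, y = mh_step(rng, x0, sigma), y0  # X runs one step ahead of Y
    for _ in range(max_iter):
        if x == y:  # chains have met and remain equal forever after
            break
        est += h(x) - h(y)
        x, y = coupled_mh_step(rng, x, y, sigma)
    return est

rng = np.random.default_rng(2)
estimates = [unbiased_estimate(rng, lambda u: u * u) for _ in range(4000)]
print(np.mean(estimates))  # averages independent unbiased estimates of E[X^2] = 1
```

Averaging such independent replicates in parallel is exactly what restores i.i.d. confidence intervals; in practice one also uses a burn-in k > 0 and time-averaging to tame the variance of this bare-bones version.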

January 11, 10:30, CREST-ENSAE, Paris-Saclay: Better together? Statistical learning in models made of modules *[Warning: Paris-Saclay is not in Paris!]*

*In modern applications, statisticians are faced with integrating heterogeneous data modalities relevant for an inference or decision problem. It is convenient to use a graphical model to represent the statistical dependencies, via a set of connected “modules”, each relating to a specific data modality, and drawing on specific domain expertise in their development. In principle, given data, the conventional statistical update then allows for coherent uncertainty quantification and information propagation through and across the modules. However, misspecification of any module can contaminate the update of others. In various settings, particularly when certain modules are trusted more than others, practitioners have preferred to avoid learning with the full model in favor of “cut distributions”. In this talk, I will discuss why these modular approaches might be preferable to the full model in misspecified settings, and propose principled criteria to choose between modular and full-model approaches. The question is intertwined with computational difficulties associated with the cut distribution, and new approaches based on recently proposed unbiased MCMC methods will be described*.

Long enough after the New Year festivities (if any) to be fully operational for them!

## better together?

Posted in Books, Mountains, pictures, Statistics, University life with tags Bayesian Analysis, better together, Chamonix-Mont-Blanc, cut models, decision theory, diode, Martyn Plummer, MCMSki IV, Scottish independence referendum on August 31, 2017 by xi'an

**Y**esterday came out on arXiv a joint paper by Pierre Jacob, Lawrence Murray, Chris Holmes and myself, *Better together? Statistical learning in models made of modules*, a paper that was conceived during the MCMski meeting in Chamonix, 2014! Indeed it is mostly due to Martyn Plummer's talk at this meeting about the cut issue that we started to work on this topic at the fringes of the [standard] Bayesian world. Fringes because a standard Bayesian approach to the problem would always lead to using the entire dataset and the entire model to infer about a parameter of interest. *[Disclaimer: the use of the very slogan of the anti-secessionists during the Scottish Independence Referendum of 2014 in our title is by no means a measure of support of their position!]* Comments and suggested applications most welcome!

The setting of the paper is inspired by realistic situations where a model is made of several modules, connected within a graphical model that represents the statistical dependencies, each module relating to a specific data modality. In a standard Bayesian analysis, given data, a conventional statistical update allows for coherent uncertainty quantification and information propagation through and across the modules. However, misspecification of, or even massive uncertainty about, any module in the graph can contaminate the estimates and updates of parameters in other modules, often in unpredictable ways, particularly when certain modules are trusted more than others. Hence the appearance of cut models, where practitioners prefer to skip the full model and to limit the information propagation between the modules, for example by restricting propagation to only one direction along the edges of the graph. (Which is sometimes represented as a diode on the edge.) The paper investigates, by developing the appropriate decision-theoretic framework, in which situations and under which formalism such modular approaches can outperform the full-model approach in misspecified settings, meaning we can choose between [several] modular and full-model approaches.

## cut, baby, cut!

Posted in Books, Kids, Mountains, R, Statistics, University life with tags BUGS, Chamonix, CREST, cut models, decompression, flu, graphical models, JAGS, Martyn Plummer, MCMC, MCMSki IV, Monte Carlo Statistical Methods, OpenBUGS, The BUGS book on January 29, 2014 by xi'an

**A**t MCMSki IV, I attended (and chaired) a session where Martyn Plummer presented some developments on cut models. As I was not sure I had gotten the idea *[although this happened to be one of those few sessions where the flu had not yet completely taken over!]* and as I wanted to check a potential explanation for the lack of convergence discussed by Martyn during his talk, I decided to (re)present the talk at our "MCMSki decompression" seminar at CREST. Martyn sent me his slides and also kindly pointed me to the relevant section of the BUGS book, reproduced above. *(Disclaimer: do not get me wrong here, the title is a pun on the infamous "drill, baby, drill!" and not connected in any way to Martyn's talk or work!)*

**I** cannot say I get the idea any clearer from this short explanation in the BUGS book, although it gives a literal meaning to the word "cut". From this description I only understand that a *cut* is the removal of an edge in a probabilistic graph, however there must/may be some arbitrariness in building the resulting (wrong) conditional distribution. In the Poisson-binomial case treated in Martyn's talk, I interpret the cut as simulating from

$$\pi_{\text{cut}}(\varphi,\theta)=\pi(\varphi\mid z)\,\pi(\theta\mid\varphi,y)$$

instead of

$$\pi(\varphi,\theta\mid y,z)\propto\pi(\varphi\mid z)\,\pi(\theta\mid\varphi)\,f(y\mid\theta,\varphi),$$

hence losing some of the information about φ… Now, this cut version is a function of φ and θ that can be fed to a Metropolis-Hastings algorithm, assuming we can handle the posterior on φ and the conditional on θ given φ. If we build a Gibbs sampler instead, we face a difficulty with the normalising constant m(y|φ) of the second factor: said Gibbs sampler does not generate from the "cut" target. Maybe an alternative could borrow from the rather large if disparate missing-constant toolbox. (In any case, we *do not* simulate from the original joint distribution.) The natural solution would then be to make an independent proposal on φ with target the posterior given z, and then any scheme that preserves the conditional of θ given φ and y; "any" is rather wishful thinking at this stage since the only practical solution that I see is to run a Metropolis-Hastings sampler long enough to "reach" stationarity… I also remain with a lingering although not life-threatening question of whether or not BUGS code using cut distributions provides the "right" answer. Here are my five slides used during the seminar (with a random walk implementation that did not diverge from the true target…):
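To make the two-stage simulation concrete, here is a toy two-module Gaussian setup, entirely my own and not Martyn's Poisson-binomial example: z observations inform φ, y observations inform φ+θ, with flat priors. The cut distribution draws φ from its posterior given z alone, then θ from its conditional given (φ, y), so the feedback from y to φ is severed:

```python
import numpy as np

rng = np.random.default_rng(3)

# module 1: z informs phi;  module 2: y informs theta (and, if uncut, phi too)
phi_true, theta_true = 2.0, -1.0
z = rng.normal(phi_true, 1.0, size=100)
y = rng.normal(phi_true + theta_true, 1.0, size=100)

n_draws = 50_000
# stage 1: phi from its posterior given z only (flat prior): N(z_bar, 1/n)
phi = rng.normal(z.mean(), 1.0 / np.sqrt(len(z)), size=n_draws)
# stage 2: theta from its conditional given (phi, y): N(y_bar - phi, 1/m)
theta = rng.normal(y.mean() - phi, 1.0 / np.sqrt(len(y)))

print(theta.mean())  # concentrates on y_bar - z_bar under the cut
```

Stage 2 is exact here thanks to conjugacy; in the settings discussed above, the conditional of θ given (φ, y) is itself intractable, whence the need for nested or long-enough Metropolis-Hastings runs within stage 2.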