Archive for 21w5107

street food [jatp]

Posted in pictures, Travel on December 4, 2021 by xi'an

Street food, Oaxaca

21w5107 [½day 4]

Posted in Statistics on December 3, 2021 by xi'an

Final ½ day of the 21w5107 workshop for me, as our initial plans were to stop today due to the small number of participants on site. And I had booked plane tickets early, too early. I will thus sadly miss the four afternoon talks, mea culpa! However, I did attend Noirrit Chandra’s talk on Bayesian factor analysis. Which has always been a bit of a mystery to me in the sense that the number q of factors needs to be specified, which is a prior input one rarely controls. Here the goal is to estimate a covariance matrix with a sparse representation. And q is estimated by empirical likelihood ahead of the estimation of the matrix. The focus was on minimaxity and MCMC implementation rather than objective Bayes per se! Then, Daniele Durante spoke about analytical posteriors for probit models using unified skew-Normal priors (following a 2019 Biometrika paper). Including marginal posteriors and marginal likelihood. And for various extensions like dynamic probit models. Opening other computational issues such as simulating high-dimensional truncated Normal distributions. (Potential use of delayed acceptance there?) This second talk was also drifting away from objective Bayes! In the first half of his talk, Filippo Ascolani introduced us to trees of random probability measures, each mother node being the distribution of the atoms of the children nodes. (Interestingly, Kingman is connected both to (coalescent) trees and to completely random measures.) My naïve first impression was that the distributions would get more and more degenerate as the number of levels in the tree increased; however, I am unsure this is correct as Filippo mentioned getting observations on all nodes. The talk also made me wonder how this could be related to Radford Neal’s Dirichlet trees. (Which I discovered at my first ICMS workshop about 20 years ago.) Yang Ni concluded the morning with a talk on causality that provided (to me) a very smooth (re)introduction to Bayesian causal graphs.
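As an aside on that last computational point, here is a minimal sketch (my own toy illustration, not Durante’s algorithm) of what simulating from a high-dimensional truncated Normal may involve, namely a coordinate-wise Gibbs sampler on the positive orthant, each full conditional being a univariate truncated Normal:

```python
import numpy as np
from scipy.stats import truncnorm

def gibbs_truncated_mvn(mu, Sigma, n_iter=1000, seed=None):
    """Gibbs sampler for N(mu, Sigma) restricted to the positive orthant."""
    rng = np.random.default_rng(seed)
    mu = np.asarray(mu, dtype=float)
    d = len(mu)
    P = np.linalg.inv(Sigma)                 # precision matrix
    x = np.abs(mu) + 1.0                     # starting point inside the orthant
    out = np.empty((n_iter, d))
    for t in range(n_iter):
        for i in range(d):
            mask = np.arange(d) != i
            s = 1.0 / np.sqrt(P[i, i])       # conditional standard deviation
            m = mu[i] - (P[i, mask] @ (x[mask] - mu[mask])) / P[i, i]
            a = (0.0 - m) / s                # standardised truncation bound
            x[i] = truncnorm.rvs(a, np.inf, loc=m, scale=s, random_state=rng)
        out[t] = x
    return out

# e.g., 50 dimensions with equi-correlated components
samples = gibbs_truncated_mvn(np.zeros(50), np.eye(50) + 0.5, n_iter=500)
```

Mixing of such one-at-a-time updates degrades as dimension and correlation grow, which is exactly why smarter (or delayed-acceptance) schemes become attractive.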

Even more than last time, I enormously enjoyed the workshop, its location, the fantastic staff at the hotel, and the reconnection with dear friends!, just regretting we could not be a few more. I appreciate the efforts made by on-line participants to stay connected and intervene (thanks, Ed!), but the quality of interactions is sadly of another magnitude when spending all our time together. Hopefully there will be a next time and hopefully we’ll then be back to a larger size (and hopefully the location will remain the same). Hasta luego, Oaxaca!

21w5107 [day 2]

Posted in Books, Mountains, pictures, Statistics, Travel, University life on December 1, 2021 by xi'an

After a rich and local (if freezing) dinner on a rooftop facing the baroque Oaxaca cathedral, and an early invigorating outdoor swim in my case!, the morning session was mostly on mixtures, with Helen Ogden exploring X validation for (estimating the number k of components for) finite mixtures, when using the likelihood as an objective function. I was however unclear about the goal when considering that the data supporting the study was Uniform(0,1), nothing like a mixture of Normal distributions. And about the consistency attached to the objective function. The session ended with Diana Cai presenting a counter-argument in the sense that she proved, along with Trevor Campbell and Tamara Broderick, that the posterior on k diverges to infinity with the number n of observations if a mixture model is misspecified for said data. Which does not come as a major surprise since there is no properly defined value of k when the data is not generated from the adopted mixture. I would love to see an extension to the case when the k component mixture contains a non-parametric component! In-between, Alexander Ly discussed Bayes factors for multiple datasets, with some asymptotics showing consistency for some (improper!) priors if one sample size grows to infinity. With the same rate actually attained under both hypotheses. Luis Nieto-Barajas presented an approach to uncertainty assessment through KL divergence for random probability measures, which requires a calibration of the KL in this setting, as KL does not enjoy a uniform scale, and a prior on a Pólya tree. And Chris Holmes presented a recent work with Edwin Fong and Stephen Walker on a prediction approach to Bayesian inference. Which I had had on my reading list for a while. It is a very original proposal where likelihoods and priors are replaced by the sequence of posterior predictives and only parameters of interest get simulated. The Bayesian flavour of the approach is delicate to assess though, albeit a form of non-parametric Bayesian perspective… (I still need to read the paper carefully.)
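While I still need to read the paper, my rough understanding of the predictive-resampling mechanism fits in a toy sketch: pick a one-step-ahead predictive, forward-simulate a long run of future observations from it, and record the functional of interest. Below I use the Pólya urn predictive of a Dirichlet process in its vanishing-concentration limit (where the scheme reduces to the Bayesian bootstrap); all names are illustrative and nothing here is taken from the authors’ code:

```python
import numpy as np

def predictive_resampling_mean(y, n_future=2000, n_draws=500, seed=None):
    """Posterior-like draws of the mean obtained by predictive resampling."""
    rng = np.random.default_rng(seed)
    draws = np.empty(n_draws)
    for b in range(n_draws):
        atoms = list(y)
        for _ in range(n_future):
            # Polya urn step: the next "observation" is a uniformly drawn
            # past atom (DP predictive with concentration -> 0)
            atoms.append(atoms[rng.integers(len(atoms))])
        draws[b] = np.mean(atoms)  # functional of the limiting predictive
    return draws

y = np.random.default_rng(0).normal(1.0, 1.0, size=30)
print(predictive_resampling_mean(y, seed=1).std())  # ≈ posterior sd of the mean
```

Only the parameter of interest (here the mean) gets simulated, with no likelihood or prior ever written down, which is both the appeal and the delicate point noted above.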

In the afternoon session, Judith Rousseau presented her recent foray into cut posteriors for semi-parametric HMMs. With interesting outcomes for efficiently estimating the transition matrix, the component distributions, and the smoothing distribution. I wonder at the connection with safe Bayes in that cut posteriors induce a loss of information. Sinead Williamson spoke on distributed MCMC for BNP. Going back to the “theme of the day”, namely clustering and finding the correct (?) number of clusters. With a collapsed versus uncollapsed division that reminded me of the marginal vs. conditional divide María Gil-Leyva discussed yesterday. Plus a decomposition of a random measure into a finite mixture and an infinite one that also reminded me of the morning talk of Diana Cai. (And making me wonder at the choice of the number K of terms in the finite part.) Michele Guindani spoke about clustering distributions (with firecrackers as a background!). Using the nDP mixture model, which was shown to suffer from degeneracy (as discussed by Federico Camerlenghi et al. in BA). The subtle difference lies in using the same (common) atoms in all random distributions at the top of the hierarchy, with independent weights. Making the partitions partially exchangeable. The approach relies on Sylvia’s generalised mixtures of finite mixtures. With interesting applications to microbiome and calcium imaging (including a mouse brain in action!). And Giovanni Rebaudo presented a generalised notion of clustering aligned on a graph, with some observations located between the nodes corresponding to clusters. Represented as a random measure with common parameters for the clusters and separated parameters outside. Interestingly playing on random partitions, Pólya urns, and species sampling.
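The cut idea itself fits in a few lines: in a two-module model, the first parameter is only updated from its own module, so misspecification in the second module cannot feed back. A toy Gaussian sketch of mine (nothing like the semi-parametric HMM setting, and with made-up names):

```python
import numpy as np

rng = np.random.default_rng(1)
z = rng.normal(2.0, 1.0, size=50)         # module 1 data, informs phi only
y = rng.normal(2.0 * 3.0, 1.0, size=50)   # module 2 data, mean phi * theta

def cut_posterior(z, y, n_draws=1000):
    """Draws from the cut posterior p(phi | z) p(theta | y, phi)."""
    n, m = len(z), len(y)
    draws = np.empty((n_draws, 2))
    for b in range(n_draws):
        # stage 1: phi | z alone (flat prior), no feedback from y
        phi = rng.normal(z.mean(), 1.0 / np.sqrt(n))
        # stage 2: theta | y, phi (flat prior)
        theta = rng.normal(y.mean() / phi, 1.0 / (np.sqrt(m) * abs(phi)))
        draws[b] = phi, theta
    return draws

print(cut_posterior(z, y).mean(axis=0))   # close to (2, 3) here
```

The contrast with the full posterior, where phi would also borrow strength from y, is precisely the information loss (and the safe-Bayes connection) I was wondering about.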

21w5107 [day 1]

Posted in pictures, Statistics, Travel, University life on November 30, 2021 by xi'an

The workshop started with the bad news of our friend Michele Guindani being hit and mugged upon arrival in Oaxaca, Saturday night. Fortunately, he was not hurt, but lost both phone and wallet, always a major bummer when abroad… Still, this did not cast a lasting pall on the gathering of long-time no-see friends, whom I had indeed not seen for at least two years. Except for those who came to the CIRMirror!

A few hours later, we got woken up by fairly loud firecrackers (palomas? cohetes?) at 5am, for no reason I can fathom (Mexican Revolution day was a week ago), although it seemed correlated with the nearby church bells going on at full blast (for Lauds? Hanukkah? Cyber Monday? Chirac’s birthdate?). The above picture was taken in the town of Santa María del Tule, with its super-massive Montezuma cypress tree and remaining decorations from the Día de los Muertos.

Without launching (much) the debate on whether or not Bayesian non-parametrics qualified as “objective Bayesian” methods, Igor Prünster started the day with a non-parametric presentation of dependent random probability measures. With the always fascinating notion that a random discrete non-parametric prior induces a distribution on partitions (the EPPF). And applicability in mixtures and their generalisations. Realising that the highly discrete nature of such measures is not such an issue for a given sample size n, since there are at most n elements in the partition. Beatrice Franzolini discussed specific ways to create dependent distributions based on independent samples, although her practical example based on one N(-10,1) sample and another (independently) N(10,1) sample seemed to fit several of the dependent random measures she compared. And Marta Catalano (Warwick) presented her work on partial exchangeability and optimal transportation (which I had also heard in CIRM last June and in Warwick last week). One thing I had not realised earlier was the dependence of the Wasserstein distance on the parameterisation, although it now makes perfect sense. If only for the coupling. I alas had to miss Isadora Antoniano-Villalobos’ talk as I had to teach my undergrad class at Paris Dauphine at the same time… This non-parametric session was quite homogeneous and rich in perspectives.
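That induced distribution on partitions is easy to visualise: the Chinese restaurant process attached to a Dirichlet process (whose EPPF is the Ewens sampling formula) produces a random partition of n items in a few lines. A toy sketch of mine, with illustrative names:

```python
import numpy as np

def crp_partition(n, alpha, seed=None):
    """Sample a partition of {0,...,n-1} from a CRP with concentration alpha."""
    rng = np.random.default_rng(seed)
    sizes = []                        # cluster ("table") sizes
    labels = np.empty(n, dtype=int)
    for i in range(n):
        probs = np.array(sizes + [alpha], dtype=float)
        k = rng.choice(len(probs), p=probs / probs.sum())
        if k == len(sizes):
            sizes.append(1)           # open a new cluster
        else:
            sizes[k] += 1             # join an existing one
        labels[i] = k
    return labels

print(crp_partition(20, alpha=1.0, seed=0))
# at most n clusters ever appear, illustrating why the discreteness of the
# underlying random measure is harmless at a fixed sample size n
```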

In an all-MCMC afternoon, Julyan Arbel talked about reference priors for extreme value distributions, with the “shocking” case of a restriction on the support of one parameter, ξ. Which means in fact that the Jeffreys prior is then undefined. This reminded me somewhat of the work of Clara Grazian on Jeffreys priors for mixtures, where the Fisher information does not exist for some models. The second part of this talk was about modified local versions of the Gelman & Rubin (1992) R̂. And the recent modification proposed by Aki Vehtari and co-authors. Where I thought the multivariate challenge of defining ranks could be alleviated by directly considering the likelihood values of the chains. And Trevor Campbell gradually built an involved parallel tempering method where the powers of a geometric mixture are optimised as spline functions of the temperature. Next, María Gil-Leyva presented her original and ordered approach to mixture estimation, which I discussed in a blog post published two days ago (!). She corrected my impressions that (i) the methods were all impervious to label switching and (ii) required some conjugacy to operate. The final talk of the day was by Anirban Bhattacharya on high-dimensional Bayesian regression and coupling techniques for checking convergence, a paper that had been on my reading list for a long while. A very elaborate construct of coupling strategies within a Gibbs sampler, with some steps relying on optimal coupling and others on the use of common random numbers.
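To make the coupling idea concrete, here is a toy sketch of mine on a simple AR(1) kernel (not Anirban’s high-dimensional Gibbs sampler): couple two chains through a maximal coupling of their Gaussian transition kernels and record when they merge, since the tail of the meeting-time distribution bounds the distance to stationarity:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

def maximal_coupling(mu1, mu2, sigma):
    """Sample (X, Y), X~N(mu1, sigma²), Y~N(mu2, sigma²), maximising P(X=Y)."""
    x = rng.normal(mu1, sigma)
    if rng.uniform() * norm.pdf(x, mu1, sigma) <= norm.pdf(x, mu2, sigma):
        return x, x                       # the two chains meet
    while True:                           # otherwise reject until Y ~ N(mu2, .)
        y = rng.normal(mu2, sigma)
        if rng.uniform() * norm.pdf(y, mu2, sigma) > norm.pdf(y, mu1, sigma):
            return x, y

def meeting_time(rho=0.9, sigma=1.0, max_iter=10_000):
    """Couple two AR(1) chains x' = rho * x + N(0, sigma²) until they merge."""
    x, y = 10.0, -10.0                    # over-dispersed starting points
    for t in range(1, max_iter + 1):
        x, y = maximal_coupling(rho * x, rho * y, sigma)
        if x == y:
            return t
    return max_iter

print(np.mean([meeting_time() for _ in range(100)]))  # mean meeting time
```

In the actual construct, as mentioned above, some Gibbs steps rely on such (near-)optimal couplings while others simply share common random numbers.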
