Archive for Bayesian nonparametrics

Bayesian Nonparametrics Networking Workshop 2023

Posted in Statistics, Travel, University life on December 3, 2023 by xi'an

Natural statistical science [#2]

Posted in Statistics on November 23, 2023 by xi'an

A rare occurrence of a Bayesian statistics paper in Nature, with “State estimation of a physical system with unknown governing equations” by Course and Nair. A variational Bayes modelling of a state system observed with noise, but without a physical model of the state (SDE) evolution itself. Which means a prior is set on a non-parametric or neural representation of the drift, and a linear approximation is used for the variational approximation, leading to a Gaussian process as the approximate distribution. While this applies to highly complex models, like orbiting black holes, it is somewhat of a surprise to meet this application of variational inference in a prestigious general science journal like Nature. (The picture above was taken on the train from Marseille at the end of the Bayes Fall school.)

“The approach is based on a technique called Bayesian inference, which is used widely, but which can be computationally challenging for complex systems.” B. Keith
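
For a flavour of the idea, if not of the paper's actual variational scheme, here is a toy Python sketch that recovers an unknown drift nonparametrically from a noisy SDE trajectory, swapping the variational approximation for plain GP regression on finite differences (all settings are illustrative, not taken from the paper):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# simulate a 1-d SDE dx = f(x) dt + sigma dW with "unknown" drift f(x) = sin(x)
dt, T, sigma = 0.01, 500, 0.3
x = np.zeros(T)
for t in range(T - 1):
    x[t + 1] = x[t] + np.sin(x[t]) * dt + sigma * np.sqrt(dt) * rng.normal()
y = x + 0.05 * rng.normal(size=T)            # noisy observations of the state

# crude drift "data": finite differences of the noisy observations
dy = (y[1:] - y[:-1]) / dt

# nonparametric (GP) drift estimate; the WhiteKernel absorbs the differencing noise
gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(y[:-1, None], dy)

grid = np.linspace(y.min(), y.max(), 100)[:, None]
f_hat, f_sd = gp.predict(grid, return_std=True)   # posterior mean and sd of the drift
```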

Familial inference

Posted in Statistics, University life on October 3, 2023 by xi'an

An ISBA-BNP webinar on Wednesday, 4 October, at 17:00 UTC by my friend Steve MacEachern:

Familial inference: Tests for hypotheses on a family of centers

Many scientific disciplines face a replicability crisis. While these crises have many drivers, we focus on one. Statistical hypotheses are translations of scientific hypotheses into statements about one or more distributions. The most basic tests focus on the centers of the distributions. Such tests implicitly assume a specific center, e.g., the mean or the median. Yet, scientific hypotheses do not always specify a particular center. This ambiguity leaves a gap between scientific theory and statistical practice that can lead to rejection of a true null. The gap is compounded when we consider deficiencies in the formal statistical model. Rather than testing a single center, we propose testing a family of plausible centers, such as those induced by the Huber loss function (the Huber family). Each center in the family generates a point null hypothesis and the resulting family of hypotheses constitutes a familial null hypothesis. A Bayesian nonparametric procedure is devised to test the familial null. Implementation for the Huber family is facilitated by a novel pathwise optimization routine. Along the way, we visit the question of what it means to be the center of a distribution. The favorable properties of the new test are demonstrated theoretically and in case studies.
This is joint work with Ryan Thompson (University of New South Wales), Catherine Forbes (Monash University), and Mario Peruggia (The Ohio State University).
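
As a concrete illustration of the Huber family of centers (not of the Bayesian nonparametric test itself), here is a minimal Python sketch computing the center of a sample for a range of Huber parameters, sweeping from the median towards the mean:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def huber_center(x, delta):
    """Minimiser over c of the Huber loss sum_i rho_delta(x_i - c)."""
    def loss(c):
        r = np.abs(x - c)
        return np.sum(np.where(r <= delta, 0.5 * r**2, delta * (r - 0.5 * delta)))
    return minimize_scalar(loss, bounds=(x.min(), x.max()), method="bounded").x

rng = np.random.default_rng(1)
x = rng.exponential(size=200)            # skewed sample: mean and median differ

# the Huber family sweeps from (near) the median to (near) the mean as delta grows
for delta in (0.01, 0.1, 1.0, 10.0):
    print(f"delta = {delta:5.2f}  center = {huber_center(x, delta):.3f}")
print(f"median = {np.median(x):.3f}  mean = {np.mean(x):.3f}")
```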

ABC in Lapland²

Posted in Mountains, pictures, Statistics, University life on March 16, 2023 by xi'an

On the second day of our workshop, Aki Vehtari gave a short talk about his recent work on speeding up post-processing, by importance sampling from a simulation of an imprecise version of the likelihood until the desired precision is attained, with the importance weights corrected by Pareto smoothing. A very interesting foray into the meaning of practical models and the hard constraints of computer precision. Grégoire Clarté (formerly a PhD student of ours at Dauphine) stayed on similar ground, using sparse GP versions of the likelihood and post-processing by variational Bayes, then stir and repeat!
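
A minimal sketch of such an importance sampling correction, assuming ArviZ's psislw helper for the Pareto smoothing and a Gaussian toy example standing in for the imprecise-likelihood posterior:

```python
import numpy as np
from scipy import stats
import arviz as az

rng = np.random.default_rng(2)

# cheap/imprecise approximation: N(0, 1.5²); exact posterior: N(0, 1)
draws = rng.normal(0.0, 1.5, size=4000)
log_w = stats.norm.logpdf(draws, 0, 1) - stats.norm.logpdf(draws, 0, 1.5)

log_w_s, khat = az.psislw(log_w)   # Pareto-smoothed, self-normalised log weights
w = np.exp(log_w_s)
print("khat =", float(khat), "(importance sampling deemed reliable if < 0.7)")
print("corrected posterior mean:", np.sum(w * draws))
```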

Riccardo Corradin did model-based clustering when the nonparametric mixture kernel is missing a normalising constant, using ABC with a Wasserstein distance and an adaptive proposal, with some flavour of ABC-Gibbs (and no issue of label switching since this is clustering). Mixtures of g&k models, yay! Tommaso Rigon reconsidered clustering via a (generalised Bayes à la Bissiri et al.) discrepancy measure rather than a true model, summing over all clusters and observations a discrepancy between said observation and said cluster. Very neat, if possibly costly since it involves distances to or within clusters. Although she considered post-processing and the Bayesian bootstrap, Judith (formerly [?] Dauphine) acknowledged that she had somewhat drifted from the theme of the workshop by considering BvM theorems for functionals of unknown functions, with a form of Laplace correction. (Enjoying Lapland so much that I thought “Lap” in Judith’s talk was for Lapland rather than Laplace!!!) And applications to causality.
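
For the ABC-with-Wasserstein part, here is a bare-bones rejection sampler on a single g-and-k model, with a quantile-based tolerance standing in for Corradin's adaptive proposal (prior ranges and sample sizes are illustrative):

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(3)

def gnk_sample(a, b, g, k, n, rng, c=0.8):
    """Draw from the g-and-k distribution via its quantile function at z ~ N(0,1)."""
    z = rng.standard_normal(n)
    return a + b * (1 + c * np.tanh(g * z / 2)) * (1 + z**2) ** k * z

obs = gnk_sample(3.0, 1.0, 2.0, 0.5, 500, rng)   # pseudo-observed data

# plain rejection ABC: flat prior box over (a, b, g, k), Wasserstein discrepancy
thetas = rng.uniform([0, 0, -5, 0], [10, 5, 5, 2], size=(20000, 4))
dists = np.array([wasserstein_distance(obs, gnk_sample(*t, 500, rng))
                  for t in thetas])
keep = thetas[dists <= np.quantile(dists, 0.01)]  # retain the closest 1%
print(keep.mean(axis=0))                          # crude posterior mean estimate
```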

After the (cross-country skiing) break, Lorenzo Pacchiardi presented his adversarial approach to ABC, differing from Ramesh et al. (2022) by the use of scoring rule minimisation, where unbiased estimators of the gradients are available; Ayush Bharti argued for involving experts in selecting the summary statistics, especially for misspecified models; and Ulpu Remes presented a Jensen-Shannon divergence for selecting models likelihood-freely, using a test statistic as summary statistic.
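
A toy rendering of a Jensen-Shannon comparison between two candidate simulators through a summary statistic (not Remes' actual procedure), using SciPy's jensenshannon distance on histograms:

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

rng = np.random.default_rng(4)

def jsd_between_models(s1, s2, bins=30):
    """Histogram-based Jensen-Shannon divergence between two simulated summaries."""
    lo, hi = min(s1.min(), s2.min()), max(s1.max(), s2.max())
    p, _ = np.histogram(s1, bins=bins, range=(lo, hi))
    q, _ = np.histogram(s2, bins=bins, range=(lo, hi))
    return jensenshannon(p, q) ** 2     # scipy returns the distance, so square it

# summary statistic (here, the median) simulated under two candidate models
s1 = np.array([np.median(rng.normal(size=100)) for _ in range(2000)])
s2 = np.array([np.median(rng.standard_t(df=3, size=100)) for _ in range(2000)])
print("estimated JSD between the two models:", jsd_between_models(s1, s2))
```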

Sam Duffield made a case for generalised Bayesian inference to correct errors in quantum computers; Joshua Bon went back to scoring rules for correcting the ABC approximation, with an importance step; while Trevor Campbell, Iuri Marocco and Hector McKimm nicely concluded the workshop with lightning-fast talks in place of the cancelled poster session. Great workshop, in my most objective opinion, and with new directions!

Saint-Flour, 2023

Posted in Statistics, Travel, University life on February 4, 2023 by xi'an