Archive for variational Bayes methods

day four at ISBA 22

Posted in Mountains, pictures, Running, Statistics, Travel, University life on July 3, 2022 by xi'an

Woke up an hour later today! Which left me time to work on [shortening] my slides for tomorrow, run to Mon(t) Royal, and bike to St-Viateur Bagels for freshly baked bagels. (Which seemed to be missing salt, even to my generally salt-averse taste.)

Terrific plenary lecture by Pierre Jacob, delivering the Susie Bayarri Lecture, about cut models! Offering a very complete picture of the reasons for seeking modularisation, the theoretical and practical difficulties with the approach, and some asymptotics as well. Followed by a great discussion from Judith on cut posteriors separating interest parameters from nuisance parameters, especially in semi-parametric models. Even introducing two priors on the same parameters! And by Jim Berger, who coauthored with Susie the major cut paper inspiring this work, and who illustrated the concept on computer experiments (without falling into the fallacy pointed out by Martyn Plummer at MCMski IV in Chamonix!).
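For readers unfamiliar with the notion, here is a minimal sketch of two-stage sampling from a cut posterior, on a made-up pair of conjugate Gaussian modules (my own toy example, not the models discussed in the lecture):

```python
# A minimal sketch of two-stage sampling from a cut posterior, on a
# made-up pair of conjugate Gaussian modules (toy example only).
import numpy as np

rng = np.random.default_rng(0)

# module 1: y1 ~ N(theta1, 1) with prior theta1 ~ N(0, 1)
y1 = 1.2
# module 2: y2 ~ N(theta1 + theta2, 1) with prior theta2 ~ N(0, 1)
y2 = 3.4

n = 10_000
# stage 1: theta1 is drawn from its module-1 posterior only,
# cutting any feedback from y2
var1 = 1 / (1 + 1)
theta1 = rng.normal(var1 * y1, np.sqrt(var1), n)

# stage 2: theta2 is drawn conditionally on each theta1 draw
var2 = 1 / (1 + 1)
theta2 = rng.normal(var2 * (y2 - theta1), np.sqrt(var2))

# the pairs target p(theta1 | y1) p(theta2 | theta1, y2), which is not
# the joint posterior p(theta1, theta2 | y1, y2)
print(theta1.mean(), theta2.mean())
```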

Speaking of meetings, the Scientific Committee for the upcoming BayesComp²³ in Levi, Finland, held a working meeting, in which I took part, towards building the programme as the event is getting near. Those interested in proposing a session should start preparing and take advantage of being together in Mon(t)réal, as the call will come out pretty soon!

Attended a session on divide-and-conquer methods for dependent data, with Sanvesh Srivastava considering the case of hidden Markov models and block processing the observed sequence. Which is sort of justified by the chain forgetting long-past observations. I wonder if better performances could be achieved otherwise, as the data on a given time interval also carries information on the hidden chain at other time periods.
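To make the block-processing idea concrete, here is a toy sketch on a two-state Gaussian-emission HMM, where each block is filtered independently from the stationary distribution (my own illustration, not Srivastava's actual construction):

```python
# A toy sketch of block processing for a two-state Gaussian-emission HMM:
# the observed sequence is cut into blocks and each block is filtered
# independently, restarting from the stationary distribution.
import numpy as np

A = np.array([[0.95, 0.05], [0.10, 0.90]])  # transition matrix
means = np.array([-1.0, 1.0])               # emission means, unit variance
stat = np.array([2 / 3, 1 / 3])             # stationary distribution of A

def forward_filter(y, init):
    """Normalised forward (filtering) probabilities."""
    p, out = init.copy(), []
    for obs in y:
        p = np.exp(-0.5 * (obs - means) ** 2) * (p @ A)
        p /= p.sum()
        out.append(p)
    return np.array(out)

rng = np.random.default_rng(1)
x = [0]
for _ in range(299):                        # simulate the hidden chain
    x.append(rng.choice(2, p=A[x[-1]]))
y = rng.normal(means[np.array(x)], 1.0)     # and the observations

full = forward_filter(y, stat)
# block processing: filter each stretch of 50 observations separately
block = np.vstack([forward_filter(y[i:i + 50], stat)
                   for i in range(0, 300, 50)])
# the discrepancy concentrates right after each restart, where the
# block filter has forgotten the past
print(np.abs(full - block).max())
```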

I was informed this morn that Jackie Wong, one of the speakers in our session tomorrow, could not make it to Mon(t)réal for visa reasons. Which is unfortunate for him, the audience, and everyone involved in the organisation. This reinforces my call for all-time hybrid conferences that avoid penalising (or even discriminating against) participants who cannot physically attend for ethical, political (visa), travel, health, financial, parental, or any other, reasons… I am often met with objections about lower attendance, the risk of a deficit, or the dilution of the community, but there are answers to those, existing or to be invented, and the huge audience at ISBA demonstrates a need for “real” meetings that could be made more inclusive by mirror (low-key, low-cost) meetings.

Finished the day at Isle de Garde with a Pu-erh-flavoured beer, in a particularly lively (if not jazzy) part of the city…

day three at ISBA 22

Posted in Mountains, pictures, Running, Statistics, Travel, University life on July 1, 2022 by xi'an

Still woke up too early [to remain operational for the poster session], finalised the selection of our MASH 2022/3 students, then returned to the Jean-Drapeau pool, which was even more enjoyable on a crisp, bright blue morning (and with hardly anyone in my lane).

Attended a talk by Li Ma, who reviewed complexifying stick-breaking priors on the weights and introduced a balanced-tree stick mechanism (why the same depth?) (with links to Jara & Hanson 2010 and Stefanucci & Canale 2021). Then I listened to Giovanni Rebaudo creating clustering Gibbs-type processes along graphs, but I sort of dozed off and missed the point, as it felt as if the graph had turned from a conceptual connection into a physical one! Catherine Forbes talked about a sequential version of stochastic variational approximation (published in Statistics & Computing) exploiting the update-one-at-a-time feature of the Bayesian construction, except that each step relies on the previous approximation, meaning that the final—if an end there is!—approximation can end up far away from the optimal stochastic variational approximation. Assessing the divergence away from the target (in real time and on a tight budget) would be nice.
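To illustrate how this error may accumulate, here is a toy caricature where each sequential update moment-matches a Gaussian and carries it forward as the next prior (entirely my own illustration, unrelated to the actual algorithm in the paper):

```python
# A toy caricature of the accumulation-of-error issue: each sequential
# update moment-matches a Gaussian and carries it forward as the next
# prior, so the final approximation can drift from the exact posterior.
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(2)
y = rng.random(100) < 0.1                   # Bernoulli(0.1) data

grid = np.linspace(1e-4, 1 - 1e-4, 2000)
post = np.ones_like(grid)                   # flat prior on (0,1)
for yi in y:
    post *= grid if yi else 1 - grid        # one-datum Bayes update
    post /= post.sum()
    m = (grid * post).sum()                 # moment-match a Gaussian...
    s2 = ((grid - m) ** 2 * post).sum()
    post = np.exp(-0.5 * (grid - m) ** 2 / s2)  # ...and carry it forward
    post /= post.sum()

# exact posterior under the flat prior is Beta(1 + successes, 1 + failures)
exact = beta.pdf(grid, 1 + y.sum(), 1 + (~y).sum())
exact /= exact.sum()
print("sequential mean:", (grid * post).sum(),
      "exact mean:", (grid * exact).sum())
```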

After a quick lunch where I tasted seaweed-shell gyozas (!), I went to the generalised Bayesian inference session on Gibbs posteriors, [sort of] making up for the missed SAVI workshop! With Alice Kirichenko (Warwick) deriving information complexity bounds under misspecification, plus an optimal value for the [vexing] coefficient η [in the Gibbs posterior, recalled below], and Jack Jewson (ex-Warwick) raising the issue of improper models within Gibbs posteriors, although the reference or dominating measure is a priori arbitrary in these settings. But I missed the third talk, about Gibbs posteriors again, and Chris Holmes’ discussion, to attend part of the Savage (thesis) Award session, with finalists Marta Catalano (Warwick faculty), Aditi Shenvi (Warwick student), and John O’Leary (an academic grandchild of mine, as Pierre Jacob was his advisor). What a disappointment to have to wait for Friday night to hear the outcome!
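As a reminder, and in generic notation rather than the speakers' own, a Gibbs posterior replaces the negative log-likelihood with a loss ℓ scaled by a learning rate η,

π_η(θ ∣ x) ∝ π(θ) exp{−η ℓ(θ, x)},

so that taking ℓ as the negative log-likelihood and η = 1 recovers the standard posterior, while calibrating η away from this default is the vexing issue alluded to above.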

I must confess to some (French-speaker) énervement at hearing Mon(t)-réal massacred as Mon-t-real..! A very minor hindrance though, when put in perspective with my friend and Warwick colleague Gareth Roberts being forced to evacuate his hotel last night due to a fire in the basement, fortunately unscathed but with his Day 3 ruined… (Making me realise the conference hotel itself underwent a similar event 14 years ago.)

the surprisingly overlooked efficiency of SMC

Posted in Books, Statistics, University life on December 15, 2020 by xi'an

At the Laplace demon’s seminar today (whose cool name I cannot tire of!), Nicolas Chopin gave a webinar with the above, equally cool, title, opening with a first slide that debunked a series of myths about SMC.

The second part of the talk was about a recent arXival I had missed, written by Nicolas with his student Hai-Dang Dau, about increasing the number of MCMC steps when moving the particles. Called waste-free SMC. Where only a fraction of the particles is updated, but this is enough to create a sort of independence from previous iterations of the SMC. (Hai-Dang Dau and Nicolas Chopin had to tailor their own convergence proof for this modification of the usual SMC, producing a single-run assessment of the asymptotic variance.)
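My understanding of the waste-free mechanism is that, at each step, only M seeds are resampled, each is moved through k MCMC steps, and all M×k iterates are kept as the new particle set. A minimal sketch on a toy tempering problem (tuning choices are mine, not Dau & Chopin's):

```python
# A minimal sketch of the waste-free idea on a toy tempering problem,
# bridging from a N(0,1) prior to a N(3, 0.5^2) target: resample M seeds,
# move each through k MCMC steps, keep all M*k iterates as particles.
import numpy as np

rng = np.random.default_rng(3)

def log_target(x, lam):
    # geometric bridge between N(0, 1) and N(3, 0.25)
    return (1 - lam) * (-0.5 * x ** 2) + lam * (-0.5 * (x - 3) ** 2 / 0.25)

N, M = 1000, 100                 # total particles, resampled seeds
k = N // M                       # MCMC steps per seed
x = rng.normal(size=N)           # start from the prior
lambdas = np.linspace(0, 1, 11)

for lam0, lam1 in zip(lambdas[:-1], lambdas[1:]):
    logw = log_target(x, lam1) - log_target(x, lam0)
    w = np.exp(logw - logw.max())
    xs = x[rng.choice(N, size=M, p=w / w.sum())]  # resample M seeds only
    new = []
    for _ in range(k):                            # k random-walk MH moves
        prop = xs + 0.5 * rng.normal(size=M)
        acc = (np.log(rng.random(M))
               < log_target(prop, lam1) - log_target(xs, lam1))
        xs = np.where(acc, prop, xs)
        new.append(xs.copy())
    x = np.concatenate(new)                       # waste-free: keep every iterate

print("final mean and sd:", x.mean(), x.std())    # should be near 3 and 0.5
```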

On the side, I heard about a very neat (if possibly toyish) example on estimating the number of Latin squares.

And the other item of information is that Nicolas’ and Omiros’ book, An Introduction to Sequential Monte Carlo, has now appeared! (Looking forward to reading the parts I had not yet read.)

computational advances in approximate Bayesian methods [at JSM]

Posted in Statistics on August 5, 2020 by xi'an

Another broadcast for an ABC (or rather ABM, for approximate Bayesian methods) session at JSM, organised and chaired by Robert Kohn, taking place tomorrow at 10am ET, i.e., 2pm GMT, with variational and ABC talks:

454 * Thu, 8/6/2020, 10:00 AM – 11:50 AM Virtual
Computational Advances in Approximate Bayesian Methods — Topic Contributed Papers
Section on Bayesian Statistical Science
Organizer(s): Robert Kohn, University of New South Wales
Chair(s): Robert Kohn, University of New South Wales
10:05 AM Sparse Variational Inference: Bayesian Coresets from Scratch
Trevor Campbell, University of British Columbia
10:25 AM Fast Variational Approximation for Multivariate Factor Stochastic Volatility Model
David Gunawan, University of Wollongong; Robert Kohn, University of New South Wales; David Nott, National University of Singapore
10:45 AM High-Dimensional Copula Variational Approximation Through Transformation
Michael Smith, University of Melbourne; Ruben Loaiza-Maya, Monash University; David Nott, National University of Singapore
11:05 AM Mini-Batch Metropolis-Hastings MCMC with Reversible SGLD Proposal
Rachel Wang, University of Sydney; Tung-Yu Wu, Stanford University; Wing Hung Wong, Stanford University
11:25 AM Weighted Approximate Bayesian Computation via Large Deviations Theory
Cecilia Viscardi, University of Florence; Michele Boreale, University of Florence; Fabio Corradi, University of Florence; Antonietta Mira, Università della Svizzera Italiana (USI)
11:45 AM Floor Discussion

distortion estimates for approximate Bayesian inference

Posted in pictures, Statistics, University life on July 7, 2020 by xi'an

A few days ago, Hanwen Xing, Geoff Nicholls, and Jeong Eun Lee arXived a paper with the above title, to be presented at UAI 2020. Towards assessing the fit of the approximation for the actual posterior, given the available data. This covers of course ABC methods (which seem to be the primary focus of the paper) but also variational inference and synthetic likelihood versions. For a parameter of interest, the difference between the exact and approximate marginal posterior distributions is seen as a distortion map, D = F o G⁻¹, interpreted as in optimal transport and estimated by normalising flows. The interpretation remains valid even when the approximate distribution G is poorly estimated, since D remains the cdf of G(X) when X is distributed from F. The marginal posterior approximate cdf G can be estimated by ABC or another approximate technique. The distortion function D is itself restricted to be a Beta cdf, with parameters estimated by a neural network (although based on which input is unclear to me, unless the weights in (5) are the neural weights). The assessment is based on the estimated distortion at the dataset, as a significant difference from the identity signals a poor fit for the approximation. Overall, the procedure seems implementable rather easily and, while depending on calibrating choices (other than the number of layers in the neural network), a realistic version of the simulation-based diagnostic of Talts et al. (2018).
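As a check on my reading of the paper, here is a toy sketch of the distortion idea: if X ~ F and G is the approximate cdf, then G(X) has cdf D = F o G⁻¹, so fitting a Beta cdf to values of G(X) and comparing it with the identity diagnoses the approximation (the setup below is entirely made up):

```python
# A toy check of the distortion-map idea: when x ~ F (exact posterior)
# and G is the approximate marginal cdf, the values G(x) have cdf
# D = F o G⁻¹; fitting a Beta cdf to them, as in the paper, and comparing
# with the identity diagnoses the approximation. Made-up setup.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# pretend the exact marginal posterior F is N(0, 1) while the
# approximation G is a shifted, overdispersed N(0.3, 1.5^2)
x = rng.normal(0.0, 1.0, 5000)                    # draws from the "exact" F
u = stats.norm.cdf(x, loc=0.3, scale=1.5)         # u = G(x), with cdf D

# restrict D to a Beta cdf and fit its two parameters
a, b, _, _ = stats.beta.fit(u, floc=0, fscale=1)
print(f"fitted Beta({a:.2f}, {b:.2f}); (1, 1) would signal a perfect fit")

# departure of D from the identity (uniformity of u) flags the mismatch
print("KS distance from uniformity:", stats.kstest(u, "uniform").statistic)
```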
