## what a party!

Posted in pictures, Statistics, Travel, University life, Wines on September 13, 2021 by xi'an

We ended up having a terrific b'day party last Thursday afternoon, with about 30 friends gathered at the Institut Henri Poincaré to hear Florence, Pierre, and Sylvia give lectures on my favourite themes, namely ABC, MCMC, and mixture inference. Incl. subtle allusions to my many idiosyncrasies in three different flavours! And a limited number of anecdotes, incl. the unavoidable Cancún glasses disaster! We later headed to a small Ethiopian restaurant located on the other side of the Panthéon, rue de l'Ecole Polytechnique (rather than on the nearby rue Laplace!), which was going to be too tiny for us, especially in these COVID times, until the sky cleared up and the restaurant set enough tables in the small street for us to enjoy their injeras and wots till almost midnight. The most exciting episode of the evening came when someone tried to steal some of our bags stored in a back room, and Tony spotted the outlier and chased him till the thief dropped the bags..! Thanks to Tony for saving the evening and our computers!!! To Éric, Jean-Michel and Judith for organising this 9/9 event (after twisting my arm just a wee bit). And to all my friends who joined the party, some from far away…

## congrats [IMS related]

Posted in Statistics on July 21, 2021 by xi'an

When I read through the June-July issue of the IMS Bulletin, I saw many causes for celebration and congratulations!, from Richard Samworth's award of an Advanced ERC grant, to the new IMS fellows, including my friends Ismael Castillo, Steve MacEachern, and Natesh Pillai, as well as my current or former associate editors, Johan Segers (JRSS B) and Changbao Wu (Biometrika). To my friends Alicia Carriquiry, David Dunson, and Tamara Broderick receiving 2021 COPSS awards, along with others, including Wing Hung Wong (of the precursor Tanner & Wong, 1987 fame!). Natesh also figures among the "Quadfecta 23", the exclusive club of authors having published at least one paper in each of the four Annals published by the IMS!

## black box MCMC

Posted in Books, Statistics on July 17, 2021 by xi'an

“…black-box methods, despite using no information of the proposal distribution, can actually give better estimation accuracy than the typical importance sampling [methods]…”

Earlier this week I was pointed to Liu & Lee’s black box importance sampling, published at AISTATS 2017 (which I did not attend). Already found in Briol et al. (2015) and Oates, Girolami, and Chopin (2017), the method starts from Charles Stein‘s “unbiased estimator of the loss” (which was a fundamental tool in my own PhD thesis!), a variation on integration by parts:

$\mathbb E_p[\nabla\log p(X) f(X)+\nabla f(X)]=0$

for differentiable functions f and p cancelling at the boundaries. It also holds for the kernelised extension

$\mathbb E_p[k_p(X,x')]=0$

for all x’, where the kernelised integrand k_p is built from an arbitrary kernel k(x,x’) and from the score function ∇log p. This null expectation happens to be a minimum since

$\mathbb E_{X,X'\sim q}[k_p(X,X')]\ge 0$

and hence importance weights can be obtained by minimising

$\sum_{ij} w_i w_j k_p(x_i,x_j)$

in w (over the unit simplex), for a sample of iid realisations from a possibly unknown distribution with density q. Liu & Lee show that this approximation converges faster than the standard Monte Carlo rate of 1/√n, by exploiting Hilbertian properties of the kernel through control variates. Actually, the same holds when using a (leave-one-out) non-parametric kernel estimate of q rather than q itself. At least in theory.
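The weight-minimisation step above is easy to sketch numerically. Below is a minimal, hypothetical illustration (not the authors' code): the target p is taken as a standard normal, so the score ∇log p(x) = −x is available in closed form, the Stein kernel k_p is built from an RBF base kernel, and the quadratic form is minimised over the unit simplex with scipy's SLSQP routine; the proposal, bandwidth, and choice of solver are all my own assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Target p: standard normal, whose score is simply -x.
def score(x):
    return -x

def stein_kernel(x, bandwidth=1.0):
    # Stein (KSD) kernel built from an RBF base kernel k(x, x'):
    # k_p(x,x') = s(x)s(x')k + s(x)∂k/∂x' + s(x')∂k/∂x + ∂²k/∂x∂x'
    d = x[:, None] - x[None, :]
    k = np.exp(-0.5 * d**2 / bandwidth**2)
    dk_dx = -d / bandwidth**2 * k                         # ∂k/∂x
    dk_dxp = d / bandwidth**2 * k                         # ∂k/∂x'
    d2k = (1.0 / bandwidth**2 - d**2 / bandwidth**4) * k  # ∂²k/∂x∂x'
    s = score(x)
    return (s[:, None] * s[None, :] * k
            + s[:, None] * dk_dxp
            + s[None, :] * dk_dx
            + d2k)

# Sample from a "wrong" proposal q (here a shifted, rescaled normal).
x = rng.normal(loc=0.5, scale=1.2, size=100)
K = stein_kernel(x)

# Minimise w' K w over the unit simplex (w ≥ 0, ∑ w = 1).
n = len(x)
res = minimize(
    lambda w: w @ K @ w,
    np.full(n, 1.0 / n),           # start from uniform weights
    jac=lambda w: 2 * K @ w,       # gradient of the quadratic form
    constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],
    bounds=[(0.0, 1.0)] * n,
    method="SLSQP",
)
w = res.x

# Weighted estimate of E_p[X] (true value 0) vs the naive sample mean.
print(w @ x, x.mean())
```

With the weights in hand, any expectation under p is estimated as ∑ w_i f(x_i); here the weighted mean of the shifted sample should be pulled back towards the target mean of zero, without q ever being evaluated.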

“…simulating n parallel MCMC chains for m steps, where the length m of the chains can be smaller than what is typically used in MCMC, because it just needs to be large enough to bring the distribution `roughly’ close to the target distribution”

A practical application of the concept is suggested in the above quote: as a weight correction for interrupted MCMC, or for an unadjusted Langevin algorithm. Provided the minimisation of the objective quadratic form is fast enough, the method can thus be used as a benchmark for regular MCMC implementations.

## ISBA 2021 grand finale

Posted in Kids, Mountains, pictures, Running, Statistics, Travel, University life, Wines on July 3, 2021 by xi'an

Last day of ISBA (and ISB@CIRM), or maybe half-day, since there are only five groups of sessions we can attend in Mediterranean time.

My first session was one on priors for mixtures, with 162⁺ attendees at 5:15am! (well, at 11:15 Wien or Marseille time), Gertrud Malsiner-Walli distinguishing between priors on the number of components [in the model] vs the number of clusters [in the data], with a minor question of mine whether or not a “prior” is appropriate for a data-dependent quantity. And Deborah Kunkel presenting [very early in the US!] anchor models for fighting label switching, which reminded me of the talk she gave at the mixture session of JSM 2018 in Vancouver. (With extensions to consistency and mixtures of regressions.) And Clara Grazian debating objective priors for the number of components in a mixture [in the Sydney evening], using loss functions to build them. Overall it seems there were many talks on mixtures and clustering this year.

After the lunch break, when several ISB@CIRM participants were about to leave, we ran the Objective Bayes contributed session, which actually included several Stein-like minimaxity talks. Plus one by Théo Moins from the patio of CIRM, with cicadas in the background. Impeccably chaired by my friend Gonzalo, who had a question at the ready for each and every speaker! And then the Savage Awards II session. Which ceremony is postponed till Montréal next year. And which nominees are uniformly impressive!!! The winner will only be announced in September, via the ISBA Bulletin. Missing the ISBA general assembly for a dinner in Cassis. And being back for the Bayesian optimisation session.

I would have expected more talks at the boundary of BS & ML (as well as COVID and epidemic decision making), the dearth of which should be a cause for concern if researchers at this boundary do not prioritise ISBA meetings over more generic meetings like NeurIPS… (An exception was George Papamakarios’ talk on variational autoencoders in the Savage Awards II session.)

Many many thanks to the group of students at UConn involved in setting up most of the Whova site and running the support throughout the conference. It indeed went on very smoothly and provided a worthwhile substitute for the 100% on-site version. Actually, I both hope for the COVID pandemic (or at least the restrictions attached to it) to abate and for the hybrid structure of meetings to stay, along with the multiplication of mirror workshops. Being together is essential to the DNA of conferences, but travelling to a single location is not so desirable, for many reasons. Looking forward to ISBA 2022, a year from now, either in Montréal, Québec, or in one of the mirror sites!

## ISBA 2021 low key

Posted in Kids, Mountains, pictures, Running, Statistics, Travel, University life, Wines on July 2, 2021 by xi'an

Fourth day of ISBA (and ISB@CIRM), which was a bit low key for me as I had a longer hike with my wife in the morning, including a swim in a sea as cold as the Annecy lake last month!, but nonetheless enjoyable and crystal clear, then attacked my pile of Biometrika submissions that had accumulated beyond the reasonable since last week, chased late participants who hadn’t paid yet, reviewed a paper that was due two weeks ago, chatted with participants before they left, discussed a research problem, and as a result ended up attending only four sessions over the whole day. These included one about Models and Methods for Networks and Graphs, with interesting computational challenges, esp. in block models, and the session in memoriam of Hélène Massam, where Gérard Letac (part of ISB@CIRM!), Jacek Wesolowski, and Reza Mohammadi, all coauthors of Hélène, presented their joint advances. Hélène was born in Marseille, actually, in 1949, and even though she did not stay in France after her École Normale studies, it made for a further commemoration to attend this session in her birthplace. I also found out about their work on approximating a ratio of normalising constants for the G-Wishart. The last session of my day was the Susie Bayarri memorial lecture, with Tamara Broderick as the lecturer, reporting on an impressive bunch of tricks to reduce computing costs for hierarchical models with Gaussian processes.