Archive for misspecification

BayesComp²³ [aka MCMski⁶]

Posted in Books, Mountains, pictures, Running, Statistics, Travel, University life on March 20, 2023 by xi'an

The main BayesComp meeting started right after the ABC workshop and went on at a grueling pace, offering a constant conundrum as to which of the four parallel sessions to attend, the more when trying to enjoy some outdoor activity during the lunch breaks. My overall feeling is that it went on too fast, too quickly! Here are some quick and haphazard notes from some of the talks I attended, as for instance the practical parallelisation of an SMC algorithm by Adrien Corenflos, the advances made by Giacomo Zanella on using Bayesian asymptotics to assess the robustness of Gibbs samplers to the dimension of the data (although with no assessment of the ensuing time requirements), a nice session on simulated annealing, from black holes to Alps (if the wrong mountain chain for Levi), and the central role of contrastive learning à la Geyer (1994) in the GAN talks of Veronika Rockova and Éric Moulines. Victor Elvira delivered an enthusiastic talk on our on-going massively recycled importance sampling project, which we need to complete asap!

While their earlier arXived paper was already on my reading list, I was quite excited by Nicolas Chopin’s (joint with Mathieu Gerber) work on a quadrature stabilisation that is not QMC (but not too far from it either), with stratification over the unit cube (after a possible reparameterisation) requiring more evaluations, plus a sort of pulled-by-its-own-bootstrap control variate, but beating regular Monte Carlo in terms of convergence rate and practical precision (if accepting a large simulation budget from the start). A difficulty common to all (?) stratification proposals is that they do not readily apply to highly concentrated functions.
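
For the record, here is a minimal sketch of plain stratification over the unit cube (in one dimension, and without the control-variate correction that makes the Chopin & Gerber proposal stand out); the integrand and all names are illustrative choices of mine:

```python
import numpy as np

# Stratified Monte Carlo on [0,1]: a few uniform draws per stratum,
# which removes the between-stratum component of the Monte Carlo variance.
rng = np.random.default_rng(42)
f = lambda u: np.exp(-u) * np.cos(10 * u)        # illustrative integrand

def stratified_mc(f, m, n, rng):
    edges = np.arange(m)[:, None] / m            # left edge of each stratum
    u = edges + rng.uniform(size=(m, n)) / m     # n uniforms inside each stratum
    return f(u).mean()                           # equal-width strata: plain average

def plain_mc(f, N, rng):
    return f(rng.uniform(size=N)).mean()

m, n = 100, 10                                   # same total budget m*n for both
print(stratified_mc(f, m, n, rng), plain_mc(f, m * n, rng))
```

On smooth integrands the stratified estimate enjoys a faster convergence rate in the number of strata, which also echoes the concluding caveat: if f lives in a tiny corner of the cube, most strata contribute nothing and the gain evaporates.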

I chaired the lightning talks session, made of 3mn one-slide snapshots of some incoming posters selected by the scientific committee. While I appreciated this entry into the poster session, the more because it was quite crowded and busy, if full of interesting results, and enjoyed the slide solely made of “0.234”, I regret that the other poster presenters were not given the same opportunity (although I am unclear about which format would have permitted this) and that it did not attract more attendees, as it took place in parallel with other sessions.

In a not-solely-ABC session, I appreciated Sirio Legramanti speaking on comparing different distance measures via Rademacher complexity, highlighting that some distances are not robust, including for instance some (all?) Wasserstein distances that are not defined for heavy-tailed distributions like the Cauchy distribution. And using the mean as a summary statistic in such heavy-tailed settings comes as an issue, since the distance between simulated and observed means does not decrease in variance with the sample size (see the quick check below), with the practical difficulty that the problem is hard to detect on real (misspecified) data since the true distribution behind (if any) is unknown. Would that imply that intrinsic distances like the maximum mean discrepancy or Kolmogorov-Smirnov are the only reasonable choices in misspecified settings?! Later, in the ABC session, Jeremiah went back to this role of distances for generalised Bayesian inference, replacing the likelihood with a scoring rule, with the attendant requirement for Monte Carlo approximation (but is approximating an approximation such a terrible thing?!). I also discussed briefly with Alejandra Avalos her use of pseudo-likelihoods in Ising models, which, while not the original model, is nonetheless a model and therefore to be taken as such rather than as an approximation.
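
As a quick (and purely illustrative) check of the heavy-tail issue, the sample mean of a Cauchy sample remains Cauchy distributed whatever n, so its spread never shrinks, while the sample median concentrates at the usual √n rate:

```python
import numpy as np

# Replicated Cauchy samples: compare the spread (IQR, since the variance is
# infinite) of the sample mean and the sample median across replications.
rng = np.random.default_rng(0)
for n in (10**2, 10**3, 10**4):
    x = rng.standard_cauchy(size=(200, n))        # 200 replications of size n
    iqr = lambda t: np.subtract(*np.percentile(t, [75, 25]))
    print(n, iqr(x.mean(axis=1)), iqr(np.median(x, axis=1)))
```

The interquartile range of the means should stay essentially constant as n grows, while that of the medians keeps shrinking, which is exactly why a mean-based distance cannot detect the misspecification.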

I also enjoyed Gregor Kastner’s work on Bayesian prediction for an agent-based model of city (Milano) planning relying on cell phone activities, which reminded me, at a superficial level, of a similar exploitation of cell usage in an attraction park in Singapore that Steve Fienberg told me about during his last sabbatical in Paris.

In conclusion, an exciting meeting that should have stretched a whole week (or taken place in a less congenial environment!). The call for organising BayesComp 2025 is still open, by the way.

 

day four at ISBA 22

Posted in Mountains, pictures, Running, Statistics, Travel, University life on July 3, 2022 by xi'an

Woke up an hour later today! Which left me time to work on [shortening] my slides for tomorrow, run to Mon(t) Royal, and bike to St-Viateur Bagels for freshly baked bagels. (Which seemed to be missing salt, despite my low tolerance for salt in general.)

Terrific plenary lecture by Pierre Jacob, delivering the Susie Bayarri Lecture about cut models! He offered a very complete picture of the reasons for seeking modularisation, the theoretical and practical difficulties with the approach, and some asymptotics as well. It was followed by a great discussion by Judith on cut posteriors separating interest parameters from nuisance parameters, especially in semi-parametric models, even introducing two priors on the same parameters! And by Jim Berger, who coauthored with Susie the major cut paper inspiring this work, and who illustrated the concept on computer experiments (not falling into the fallacy pointed out by Martyn Plummer at MCMski (iv) in Chamonix!).
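
For context (in my notation, not necessarily the speakers’), in a two-module setting where θ is informed by data y and φ by data z given θ, the cut posterior replaces the joint posterior with

$$\pi_{\text{cut}}(\theta,\varphi\mid y,z) \;=\; \pi(\theta\mid y)\,\pi(\varphi\mid\theta,z),$$

thereby cutting the feedback from the second module (z) towards θ, at the price of losing standard Bayesian coherence and of complicating the computation.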

Speaking of which, the Scientific Committee for the incoming BayesComp²³ in Levi, Finland, had a working meeting, in which I took part, towards building the programme as the deadline is getting near. Those interested in building a session should start preparing and take advantage of being together in Mon(t)réal, as the call is coming out pretty soon!

Attended a session on divide-and-conquer methods for dependent data, with Sanvesh Srivastava considering the case of hidden Markov models and block processing the observed sequence. Which is sort of justified by the forgettability of long-past observations. I wonder if better performance could be achieved otherwise, as the data on a given time interval essentially brings information on the hidden chain at other time periods as well.

I was informed this morn that Jackie Wong, one of the speakers in our session tomorrow, could not make it to Mon(t)réal for visa reasons. Which is unfortunate for him, the audience, and everyone involved in the organisation. This reinforces my call for all-time hybrid conferences that avoid penalising (or even discriminating against) participants who cannot physically attend for ethical, political (visa), travel, health, financial, parental, or any other reasons… I am often confronted with the drawbacks of lower attendance, the risk of a deficit, and the dilution of the community as counter-arguments, but there are answers to those, existing or to be invented, and the huge audience at ISBA demonstrates a need for “real” meetings that could be made more inclusive by mirror (low-key, low-cost) meetings.

Finished the day at Isle de Garde with a Pu-erh flavoured beer, in a particularly lively (if not jazzy) part of the city…

day three at ISBA 22

Posted in Mountains, pictures, Running, Statistics, Travel, University life on July 1, 2022 by xi'an

Still woke up too early [to remain operational for the poster session], finalised the selection of our MASH 2022/3 students, then returned to the Jean-Drapeau pool, which was even more enjoyable in a crisp bright blue morning (and with hardly anyone in my lane).

Attended a talk by Li Ma, who reviewed complexifying stick-breaking priors on the weights and introduced a balanced-tree stick mechanism (why the same depth?) (with links to Jara & Hanson 2010 and Stefanucci & Canale 2021). Then I listened to Giovanni Rebaudo creating clustering Gibbs-type processes along graphs, where I sort of dozed and missed the point, as it felt as if the graph turned from a conceptual connection into a physical one! Catherine Forbes talked about a sequential version of stochastic variational approximation (published in Statistics & Computing) exploiting the update-one-at-a-time feature of the Bayesian construction, except that each step relies on the previous approximation, meaning that the final—if fin there is!—approximation can end up far away from the optimal stochastic variational approximation. Assessing the divergence away from the target (in real time and within a tight budget) would be nice.
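
As a quick reminder (my notation, not the speaker’s), the standard stick-breaking construction behind these priors defines the weights as

$$w_k \;=\; v_k \prod_{j<k} (1-v_j), \qquad v_k \overset{\text{iid}}{\sim} \mathrm{Beta}(1,\alpha),$$

which the tree variants replace with breaks allocated along a binary tree (here balanced, hence the same-depth question above).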

After a quick lunch where I tasted seaweed-shell gyozas (!), I went to the generalised Bayesian inference session on Gibbs posteriors, [sort of] making up for the missed SAVI workshop! With Alice Kirichenko (Warwick) deriving information complexity bounds under misspecification, along with an optimal value for the [vexing] coefficient η [in the Gibbs posterior], and Jack Jewson (ex-Warwick) raising the issue of improper models within Gibbs posteriors, although the reference or dominating measure is a priori arbitrary in these settings. But I missed the third talk, about Gibbs posteriors again, and Chris Holmes’ discussion, to attend part of the Savage (thesis) Award session, with finalists Marta Catalano (Warwick faculty), Aditi Shenvi (Warwick student), and John O’Leary (an academic grand-child of mine, as Pierre Jacob was his advisor). What a disappointment to have to wait until Friday night to hear the outcome!
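
For reference, the Gibbs posterior central to this session takes the generic form

$$\pi_\eta(\theta\mid x) \;\propto\; \pi(\theta)\,\exp\{-\eta\,\ell(\theta,x)\},$$

where the loss ℓ replaces the negative log-likelihood and η is the vexing learning rate mentioned above.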

I must confess to some (French-speaker) énervement at hearing Mon(t)-réal massacred as Mon-t-real…! A very minor hindrance, though, when put in perspective with my friend and Warwick colleague Gareth Roberts being forced to evacuate his hotel last night due to a fire in the basement, fortunately unscathed but with his Day 3 ruined… (Making me realise the conference hotel itself underwent a similar event 14 years ago.)

robust inference using posterior bootstrap

Posted in Books, Statistics, University life on February 18, 2022 by xi'an

The famous 1994 Read Paper by Michael Newton and Adrian Raftery was entitled Approximate Bayesian inference with the weighted likelihood bootstrap, where the bootstrap aspect lies in randomly (exponentially) weighting each observation in the iid sample through a power of the corresponding density, a proposal that happened at about the same time as Tony O’Hagan suggested the related fractional Bayes factor. (The paper may be equally famous for suggesting the harmonic mean estimator of the evidence!, although it only appeared as an appendix to the paper.) What is unclear to me is the nature of the distribution g(θ) associated with the weighted bootstrap sample, conditional on the original sample, since the outcome is the result of both a random Exponential sample and an optimisation step. With no impact of the prior (which could have been used as a penalisation factor), a feature corrected by Michael and Adrian via an importance step involving the estimation of g(·).
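
In algorithmic terms, here is a minimal sketch of the weighted likelihood bootstrap, for an assumed Normal(θ,1) toy model with illustrative names of my own:

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

# Weighted likelihood bootstrap sketch: each draw reweights the observations
# with iid Exp(1) weights and maximises the weighted log-likelihood, i.e.
# raises each density to a random power, as in Newton & Raftery (1994).
rng = np.random.default_rng(0)
x = rng.normal(1.0, 1.0, size=100)                # observed iid sample

def wlb_draw(x, rng):
    w = rng.exponential(size=len(x))              # random Exponential weights
    obj = lambda t: -np.sum(w * norm.logpdf(x, loc=t, scale=1.0))
    return minimize_scalar(obj).x                 # weighted MLE = one draw

draws = np.array([wlb_draw(x, rng) for _ in range(1000)])
print(draws.mean(), draws.std())                  # proxy posterior mean and sd
```

Note how the prior never enters a draw, which is exactly the issue raised above, and which both the importance step and the log-prior penalisation discussed next aim to address.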

At the Algorithm Seminar today in Warwick, Emilie Pompe presented recent research, some of it written jointly with Pierre Jacob [and which I have not yet read], that does exactly that: include the log prior as a penalisation factor, along with an extra weight different from one, as motivated by the possibility of a misspecification. Including a new approach to cut models. An alternative mentioned during the talk, which reminds me of GANs, is to generate a pseudo-sample from the prior predictive and add it to the original sample. (Some attendees commented on the dependence of the latter version on the chosen parameterisation, an issue that had X’ed my mind as well.)

focused Bayesian prediction

Posted in Books, pictures, Statistics, Travel, University life on June 3, 2020 by xi'an

In this fourth session of our One World ABC Seminar, my friend and coauthor Gael Martin gave an after-dinner talk on focused Bayesian prediction, more in the spirit of Bissiri et al. than following a traditional ABC approach, because, along with Ruben Loaiza-Maya and [my friend and coauthor] David Frazier, they consider the possibility of a (mild?) misspecification of the model. Using thus scoring rules à la Gneiting and Raftery. Gael had in fact presented an earlier version at our workshop in Oaxaca, in November 2018. As in other solutions of that kind, there is a difficulty in weighting the score into a distribution. And although asymptotically irrelevant, the weighting has a direct impact on the current predictions, at least for the early dates in the time series… Plus a further calibration of the set of interest A, or focus of the prediction. As a side note, the talk perfectly fits the One World likelihood-free seminar as it does not use the likelihood function!

“The very premise of this paper is that, in reality, any choice of predictive class is such that the truth is not contained therein, at which point there is no reason to presume that the expectation of any particular scoring rule will be maximized at the truth or, indeed, maximized by the same predictive distribution that maximizes a different (expected) score.”

This approach requires the proxy class to be close enough to the true data generating model, or, in the words of the authors, to be made of plausible predictive models. And to produce the true distribution via the score, as it is proper. Or at least the closest member to the true model in the misspecified family. I thus wonder at a possible extension with a non-parametric version, the prior being then on functionals rather than parameters, if I understand properly the meaning of Π(Pθ). (Could the score function be misspecified itself?!) Since the score is replaced with its empirical version, the implementation resorts to off-the-shelf MCMC, as sketched below. (I wondered for a few seconds if the approach could be seen as a pseudo-marginal MCMC, but the estimation is always based on the same observed sample, hence does not directly fit the pseudo-marginal framework.)
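
Here is a minimal sketch of such an off-the-shelf implementation, assuming for illustration a Normal(θ,1) predictive class scored by the log score and a random-walk Metropolis step; none of these choices is prescribed by the paper:

```python
import numpy as np
from scipy.stats import norm

# Score-based (Gibbs-type) posterior: prior times exponentiated empirical score.
# Illustrative choices: Normal(θ,1) predictive class, log score, vague prior,
# and a deliberately misspecified heavy-tailed data generating process.
rng = np.random.default_rng(1)
y = rng.standard_t(df=2, size=200)                    # true DGP outside the class

def log_target(theta):
    score = np.sum(norm.logpdf(y, loc=theta, scale=1.0))     # empirical score
    return score + norm.logpdf(theta, loc=0.0, scale=10.0)   # plus log prior

theta, draws = 0.0, []                                # random-walk Metropolis
for _ in range(5000):
    prop = theta + 0.2 * rng.standard_normal()
    if np.log(rng.uniform()) < log_target(prop) - log_target(theta):
        theta = prop
    draws.append(theta)
print(np.mean(draws[1000:]))                          # posterior mean after burn-in
```

With the log score this coincides with the usual (misspecified) posterior, which is precisely why other proper scoring rules, focused on a set A of interest, change the answer.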

[Notice: Next talk in the series is tomorrow, 11:30am GMT+1.]
