Archive for Casa Matemática Oaxaca

ABC by classification

Posted in pictures, Statistics, Travel, University life on December 21, 2021 by xi'an

As a(nother) coincidence, we had a reading group discussion at Paris Dauphine yesterday, a few days after Veronika Rockova presented the paper in person in Oaxaca. The idea in ABC by classification, which she co-authored with Yuexi Wang and Tetsuya Kaji, is to use the empirical Kullback-Leibler divergence as a substitute to the intractable likelihood at the parameter value θ, in the generalised Bayes setting of Bissiri et al. Since this quantity is not available, it is estimated as well, by a classification method that somehow relates to Geyer's 1994 inverse logistic proposal, using the (ABC) pseudo-data generated from the model associated with θ. The convergence of the algorithm obviously depends on the choice of the discriminator used in practice. The paper also makes a connection with GANs as a potential alternative for the generalised Bayes representation. It mostly focuses on the frequentist validation of the ABC posterior, in the sense of exhibiting a posterior concentration rate in n, the sample size, while requiring performances of the discriminators that may prove hard to check in practice. This expands our 2018 result to this setting, with the tolerance decreasing more slowly than the Kullback-Leibler estimation error.
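
To fix ideas, here is a minimal sketch of the classification estimate of the Kullback-Leibler divergence, in the spirit of Geyer's logistic trick: train a discriminator between the observed sample and pseudo-data simulated at θ, then average its log-odds over the observed sample. The `simulate` routine and the logistic-regression discriminator are illustrative placeholders, not the authors' implementation.

```python
# minimal sketch, assuming a generic simulate(theta, m) routine for the
# intractable model; the logistic-regression discriminator is an
# illustrative stand-in for whatever classifier is used in practice
import numpy as np
from sklearn.linear_model import LogisticRegression

def kl_hat(x_obs, theta, simulate):
    """Estimate KL(p_data || p_theta) by logistic discrimination."""
    m = len(x_obs)
    x_sim = simulate(theta, m)              # (ABC) pseudo-data at theta
    X = np.vstack([x_obs, x_sim])
    y = np.r_[np.ones(m), np.zeros(m)]      # 1 = observed, 0 = simulated
    clf = LogisticRegression().fit(X, y)
    # with balanced classes, the log-odds of "observed" estimate
    # log p_data(x) - log p_theta(x), so their average over the
    # observed sample estimates the Kullback-Leibler divergence
    return clf.decision_function(x_obs).mean()
```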

Besides the shared appreciation that working with the Kullback-Leibler divergence was a nice and under-appreciated direction, one point that came out of our discussion is that using the (estimated) Kullback-Leibler divergence as a form of distance (attached with a tolerance) is less prone to variability (or more robust) than using the estimate directly (and without tolerance) as a substitute to the intractable likelihood, if we interpreted the discrepancy in Figure 3 properly. Another item was about the discriminator function itself: while a machine learning methodology such as neural networks could be used, albeit with unclear theoretical guarantees, it was unclear to us whether or not a new discriminator needed to be constructed for each value of the parameter θ, even when the simulations are run by a deterministic transform.
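
On the first point, the two uses of the estimated divergence can be contrasted in a couple of lines, reusing the kl_hat sketch above; the tolerance eps and scaling lam are illustrative tuning constants, not values from the paper.

```python
import numpy as np

def abc_accept(theta, x_obs, simulate, eps=0.1):
    # KL as a distance: keep theta only when within the tolerance eps
    return kl_hat(x_obs, theta, simulate) <= eps

def pseudo_likelihood(theta, x_obs, simulate, lam=1.0):
    # KL as a likelihood substitute: exponential weighting, no tolerance
    return np.exp(-lam * len(x_obs) * kl_hat(x_obs, theta, simulate))
```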

21w5107 [½day 4]

Posted in Statistics on December 3, 2021 by xi'an

Final ½ day of the 21w5107 workshop for me, as our initial plans were to stop today due to the small number of participants on site. And I had booked plane tickets early, too early. I will thus sadly miss the four afternoon talks, mea culpa! However I did attend Noirrit Chandra's talk on Bayesian factor analysis. Which has always been a bit of a mystery to me in the sense that the number q of factors need be specified, which is a prior input one rarely controls. Here the goal is to estimate a covariance matrix with a sparse representation. And q is estimated by empirical likelihood ahead of the estimation of the matrix. The focus was on minimaxity and MCMC implementation rather than objective Bayes per se! Then, Daniele Durante spoke about analytical posteriors for probit models using unified skew-Normal priors (following a 2019 Biometrika paper). Including marginal posteriors and marginal likelihood. And for various extensions like dynamic probit models. Opening other computational issues such as simulating high dimensional truncated Normal distributions (a sketch of that bottleneck follows this paragraph). (Potential use of delayed acceptance there?) This second talk was also drifting away from objective Bayes! In the first half of his talk, Filippo Ascolani introduced us to trees of random probability measures, each mother node being the distribution of the atoms of the children nodes. (Interestingly, Kingman is both connected to (coalescent) trees and to completely random measures.) My naïve first impression was that the distributions would get more and more degenerate as the number of levels in the tree would increase, however I am unsure this is correct as Filippo mentioned getting observations on all nodes. The talk also made me wonder at how this could be related to Radford Neal's Dirichlet trees. (Which I discovered at my first ICMS workshop about 20 years ago.) Yang Ni concluded the morning with a talk on causality that provided (to me) a very smooth (re)introduction to Bayesian causal graphs.
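
Regarding the truncated-Normal simulation issue in Daniele's talk, here is a minimal sketch of the standard coordinate-wise Gibbs sampler for a Normal vector restricted to the positive orthant. This is not Durante's algorithm, just the textbook approach whose deteriorating mixing in high dimension is precisely the computational bottleneck mentioned above.

```python
# minimal sketch of Gibbs sampling for N(mu, Sigma) restricted to z >= 0;
# mixing degrades as the dimension grows, hence the computational issue
import numpy as np
from scipy.stats import truncnorm

def gibbs_tmvn(mu, Sigma, n_iter=1000, rng=None):
    rng = rng or np.random.default_rng()
    d = len(mu)
    P = np.linalg.inv(Sigma)            # precision matrix
    z = np.maximum(mu, 0.1)             # feasible starting point
    for _ in range(n_iter):
        for i in range(d):
            others = np.arange(d) != i
            # full conditional of z_i given the rest is univariate Normal
            v = 1.0 / P[i, i]
            m = mu[i] - v * P[i, others] @ (z[others] - mu[others])
            a = (0.0 - m) / np.sqrt(v)  # standardised truncation at zero
            z[i] = truncnorm.rvs(a, np.inf, loc=m, scale=np.sqrt(v),
                                 random_state=rng)
    return z
```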

Even more than last time, I enormously enjoyed the workshop, its location, the fantastic staff at the hotel, and the reconnection with dear friends, just regretting we could not be a few more! I appreciate the efforts made by on-line participants to stay connected and intervene (thanks, Ed!), but the quality of interactions is sadly of another magnitude when spending all our time together. Hopefully there will be a next time and hopefully we'll then be back to a larger size (and hopefully the location will remain the same). Hasta luego, Oaxaca!

21w5107 [½day 3]

Posted in pictures, Statistics, Travel, University life on December 2, 2021 by xi'an

Day [or half-day] three started without firecrackers and with David Rossell (formerly Warwick) presenting an empirical Bayes approach to generalised linear model choice with a high degree of confounding, using approximate Laplace approximations. With considerable improvements in the experimental RMSE. Making me feel sorry there was no apparent fully (and objective?) Bayesian alternative! (Two more papers on my reading list that I should have read way earlier!) Then Veronika Rockova discussed her work on approximate Metropolis-Hastings by classification. (With only a slight overlap with her One World ABC seminar.) Making me once more think of Geyer's n°564 technical report, namely the estimation of a marginal likelihood by a logistic discrimination representation. Her ABC resolution replaces the tolerance step by an exponential of minus the estimated Kullback-Leibler divergence between the data density and the density associated with the current value of the parameter. (I wonder if there is a residual multiplicative constant there… Presumably not. Great idea!) The classification step need be run at every iteration, which could be sped up by subsampling.
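
A minimal sketch of the resulting Metropolis-Hastings step, reusing the kl_hat classification estimate sketched earlier on this page and assuming a symmetric proposal; log_prior and propose are placeholder routines, not taken from the talk.

```python
# hedged sketch: the estimated divergence defines a pseudo-likelihood
# exp(-n * kl_hat), so the classifier must be retrained at both the
# current and proposed parameter values on every iteration
import numpy as np

def mh_step(theta, x_obs, simulate, log_prior, propose, rng):
    n = len(x_obs)
    theta_prop = propose(theta, rng)        # symmetric proposal assumed
    ll_cur = -n * kl_hat(x_obs, theta, simulate)
    ll_prop = -n * kl_hat(x_obs, theta_prop, simulate)
    log_alpha = ll_prop + log_prior(theta_prop) - ll_cur - log_prior(theta)
    if np.log(rng.uniform()) < log_alpha:
        return theta_prop
    return theta
```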

On the always fascinating theme of loss-based posteriors, à la Bissiri et al., Jack Jewson (formerly Warwick) exposed his work on generalised Bayes and improper models (from Birmingham!). Using data to decide between model and loss, which sounds highly unorthodox! A first difficulty is that losses are unscaled. Or even not integrable after an exponential transform. Hence the notion of improper models. As in the case of Tukey's robust loss, which is bounded by an arbitrary κ. I immediately wondered if the fact that the pseudo-likelihood does not integrate is important beyond the (obvious) absence of a normalising constant. And the fact that this is not a generative model. And the answer came a few slides later with the use of the Hyvärinen score. Rather than the likelihood score. Which can itself be turned into an H-posterior, very cool indeed! Although I wonder at the feasibility of finding an [objective] prior on κ.
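
To illustrate the improperness, here is a small sketch of Tukey's biweight loss: since it is capped at κ²/6, the pseudo-density exp(−ρ) stays bounded away from zero in the tails and cannot integrate over the sample space. The default κ = 4.685 is a conventional robustness choice, not a value from the talk.

```python
import numpy as np

def tukey_loss(u, kappa=4.685):
    """Tukey's biweight loss, capped at kappa**2 / 6 outside [-kappa, kappa]."""
    u = np.asarray(u, dtype=float)
    rho = np.full_like(u, kappa**2 / 6)
    inside = np.abs(u) <= kappa
    rho[inside] = (kappa**2 / 6) * (1 - (1 - (u[inside] / kappa) ** 2) ** 3)
    return rho

u = np.linspace(-20, 20, 5)
print(np.exp(-tukey_loss(u)))  # bounded below in the tails: infinite total mass
```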

Rajesh Ranganath completed the morning session with a talk on [the difficulty of] connecting Bayesian models and complex prediction models. Using instead a game theoretic approach with Brier scores under censoring. While there was a connection with Veronika’s use of a discriminator as a likelihood approximation, I had trouble catching the overall message…

San Jeronimo Tlacochahuaya [jatp]

Posted in pictures, Travel on November 28, 2021 by xi'an

snapshot from Mitla, Oaxaca

Posted in Mountains, pictures, Travel on November 28, 2021 by xi'an
