## Archive for Bayesian nonparametrics

## Judith’s colloquium at Warwick

Posted in Statistics with tags Bayesian inference, Bayesian nonparametrics, Bayesian tests of hypotheses, colloquium, Hawkes processes, Judith Rousseau, seminar, University of Oxford, University of Warwick on February 21, 2020 by xi'an

## BayesComp 2020 at a glance

Posted in Statistics, Travel, University life with tags ABC, BayesComp 2020, Bayesian computation, Bayesian nonparametrics, conference, Gainesville, Gaussian processes, HMC, ISBA, MCMC, non-reversible diffusion, poster session, reversible Markov chain, simulation, University of Florida, USA, Wasserstein distance on December 18, 2019 by xi'an

## Bayesian probabilistic numerical methods

Posted in Books, pictures, Statistics, University life with tags ANOVA, Bayesian nonparametrics, probabilistic numerics, SIAM, Siam Review, Society for Industrial and Applied Mathematics, University of Warwick on December 5, 2019 by xi'an

“…in isolation, the error of a numerical method can often be studied and understood, but when composed into a pipeline the resulting error structure may be non-trivial and its analysis becomes more difficult. The real power of probabilistic numerics lies in its application to pipelines of numerical methods, where the probabilistic formulation permits analysis of variance (ANOVA) to understand the contribution of each discretisation to the overall numerical error.”

**J**on Cockayne (Warwick), Chris Oates (formerly Warwick), T.J. Sullivan, and Mark Girolami (formerly Warwick) got their survey on Bayesian probabilistic numerical methods in the SIAM (Society for Industrial and Applied Mathematics) Review, which is quite a feat given the non-statistical flavour of the journal (although Art Owen is now one of the editors of the review). As already reported in some posts on the ‘Og, the concept relies on the construction of a prior measure over a set of potential solutions, and numerical methods are assessed against the associated posterior measure. Not only is this framework more compelling in a conceptual sense, but it also leads to novel probabilistic numerical methods managing to solve quite challenging numerical tasks. Congrats to the authors!
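To make the idea of a posterior measure over solutions concrete, here is a minimal sketch of Bayesian quadrature, a standard instance of the framework: a Gaussian process prior on the integrand yields a Gaussian posterior on the integral itself, with a variance quantifying the numerical error. All names, the kernel, the length-scale, and the toy integrand below are my own illustrative choices, not the authors' code.

```python
import numpy as np
from math import erf, exp, sqrt, pi

ell = 0.3  # kernel length-scale (arbitrary choice for the sketch)

def k(x, y):
    """Squared-exponential covariance between design points."""
    return np.exp(-(x[:, None] - y[None, :]) ** 2 / (2 * ell**2))

def kernel_mean(x):
    """z_i = ∫_0^1 k(t, x_i) dt, in closed form via the error function."""
    c = ell * sqrt(pi / 2)
    return np.array([c * (erf((1 - xi) / (ell * sqrt(2)))
                          + erf(xi / (ell * sqrt(2)))) for xi in x])

# ∫_0^1 ∫_0^1 k(s, t) ds dt, also available in closed form
kk = 2 * (ell * sqrt(pi / 2) * erf(1 / (ell * sqrt(2)))
          - ell**2 * (1 - exp(-1 / (2 * ell**2))))

f = lambda x: np.sin(3 * x)           # toy integrand, true ∫ = (1 - cos 3)/3
x = np.linspace(0, 1, 8)              # quadrature nodes
K = k(x, x) + 1e-10 * np.eye(len(x))  # jitter for numerical stability
z = kernel_mean(x)
w = np.linalg.solve(K, z)             # quadrature weights K^{-1} z

post_mean = w @ f(x)                      # posterior mean of the integral
post_var = kk - z @ np.linalg.solve(K, z) # posterior variance = numerical error
print(post_mean, (1 - np.cos(3)) / 3, sqrt(max(post_var, 0.0)))
```

The point of the pipeline argument in the quote above is that such posterior variances can then be propagated through downstream computations, rather than being studied one numerical routine at a time.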

## noninformative Bayesian prior with a finite support

Posted in Statistics, University life with tags Bayesian nonparametrics, data dependent priors, minimal description length principle, minimaxity, noninformative priors, objective Bayes, PNAS on December 4, 2018 by xi'an

**A** few days ago, Pierre Jacob pointed me to a PNAS paper published earlier this year on a form of noninformative Bayesian analysis by Henry Mattingly and coauthors. They consider a prior that “maximizes the mutual information between parameters and predictions”, which sounds very much like José Bernardo’s notion of reference priors. With the rather strange twist of having the prior depend on the data size m even though they work under an iid assumption. Here information is defined as the difference between the entropy of the prior and the conditional entropy, which is not precisely defined in the paper but looks like the expected [in the data x] Kullback-Leibler divergence between prior and posterior. (I have general issues with the paper in that I often find it hard to read, for a lack of precision and of definitions of the main notions.)

One highly specific (and puzzling to me) feature of the proposed priors is that they are supported by a finite number of atoms, which reminds me very much of the (minimax) least favourable priors over compact parameter spaces, as for instance in the iconic paper by Casella and Strawderman (1984), for the same mathematical reason that non-constant analytic functions must have separated maxima. This is conducted under the assumption and restriction of a compact parameter space, which must in most cases be chosen somewhat arbitrarily and not without consequences. I can somehow relate to the notion that a finite support prior translates the limited precision in the estimation brought by a finite sample. In other words, given a sample size of m, there is a maximal precision one can hope for, and producing further decimals is silly. Still, the fact that the support of the prior is fixed *a priori*, completely independently of the data, is both unavoidable (for the prior to be *prior*!) and very dependent on the choice of the compact set. I would certainly prefer to see a maximal degree of precision expressed *a posteriori*, meaning that the support would then depend on the data. And handling finite-support posteriors is rather awkward, in that many notions like confidence intervals do not make much sense in that setup. (Similarly, one could argue that Bayesian non-parametric procedures lead to estimates with a finite number of support points, but these are determined based on the data, not *a priori*.)

Interestingly, the derivation of the “optimal” prior proceeds by iterations in which the next prior is the renormalised version of the current prior times the exponentiated Kullback-Leibler divergence, an update “guaranteed to converge to the global maximum” for a discretised parameter space. The authors acknowledge that this resolution is poorly suited to multidimensional settings and hence to complex models, and indeed the paper only covers a few toy examples of moderate and even humble dimension.
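For the curious, this fixed-point update is easy to sketch on a discretised grid; it is the same multiplicative scheme as the Blahut-Arimoto algorithm for channel capacity. The binomial model, grid, and iteration count below are my own illustrative choices, not the example of the paper, but the run does exhibit the finite-support phenomenon discussed above: the limiting prior piles up on a handful of atoms.

```python
import numpy as np
from math import comb

m = 10                                            # data size, as in the post's m
theta = np.linspace(0.01, 0.99, 99)               # discretised parameter space
x = np.arange(m + 1)
# p(x | θ) for a binomial model; each row sums to one
lik = np.array([[comb(m, k) * t**k * (1 - t)**(m - k) for k in x]
                for t in theta])

prior = np.full(len(theta), 1 / len(theta))       # uniform starting prior
for _ in range(5000):
    marginal = prior @ lik                        # p_t(x) = Σ_θ π_t(θ) p(x|θ)
    kl = np.sum(lik * np.log(lik / marginal), axis=1)  # KL(p(·|θ) ‖ p_t)
    prior = prior * np.exp(kl)                    # multiply by exp(KL)…
    prior /= prior.sum()                          # …and renormalise

# mass concentrates on a few clusters of grid points: the finite-support effect
support = theta[prior > 1e-3]
print(len(support), support)
```

On the grid, each atom of the limiting prior shows up as a small cluster of neighbouring grid points carrying essentially all the mass, which also illustrates why the authors' acknowledged reliance on discretisation scales poorly with dimension.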

Another difficulty with the paper is the absence of temporal consistency: since the prior depends on the sample size, the posterior for n i.i.d. observations is no longer the prior for the (n+1)th observation.

“Because it weights the irrelevant parameter volume, the Jeffreys prior has strong dependence on microscopic effects invisible to experiment”

I simply do not understand the above sentence, which apparently counts as a criticism of Jeffreys (1939). And would appreciate anyone enlightening me! The paper goes on to compare priors through Bayes factors, which ignores the main difficulty with an automated solution such as Jeffreys priors, namely its inability to handle infinite parameter spaces by being almost invariably improper.

## BNP12

Posted in pictures, Statistics, Travel, University life with tags Bayesian nonparametrics, BNP12, Coventry, England, ISBA, Midlands, O'Bayes 2019, objective Bayes, Oxford, support, UK, University of Oxford, University of Warwick on October 9, 2018 by xi'an

**T**he next BNP (Bayesian nonparametric) conference is taking place in Oxford (UK), prior to the O’Bayes 2019 conference in Warwick, on June 24-28 and June 29-July 2, respectively. At this stage, the Scientific Committee of BNP12 invites submissions for possible contributed talks. The deadline for submitting a title/abstract is 15th December 2018. And the submission of applications for travel support also closes on 15th December 2018. Currently, there are 35 awards that could be either travel awards or accommodation awards. The support is for junior researchers (students currently enrolled in a DPhil (PhD) programme or having graduated after 1st October 2015). The applicant agrees to present her/his work at the conference as a poster or orally if awarded the travel support.

As for O’Bayes 2019, we are currently composing the programme, following the 20-year tradition of these O’Bayes meetings of having the Scientific Committee (Marilena Barbieri, Ed George, Brunero Liseo, Luis Pericchi, Judith Rousseau and myself) invite about 25 speakers to present their recent work and 25 discussants to… discuss these works. With a first day of introductory tutorials to Bayes, O’Bayes and beyond. I (successfully) proposed this date and location to the O’Bayes board to take advantage of the nonparametric Bayes community present in the vicinity, so that they could attend both meetings at limited cost and carbon impact.

## yes, another potential satellite to ISBA 2018!

Posted in Statistics with tags Bayesian nonparametrics, Bordeaux, Edinburgh, flight, France, French wine, image analysis, satellite workshop, signal, vineyard, workshop on May 22, 2018 by xi'an

**O**n July 2-4, 2018, there will be an ISBA sponsored workshop on Bayesian non-parametrics for signal and image processing, in Bordeaux, France. This is a wee bit further than Warwick (BAYsm) or Rennes (MCqMC), but still manageable from Edinburgh with direct flights (if on Ryanair). Deadline for free (yes, free!) registration is May 31.

## BimPressioNs [BNP11]

Posted in Books, pictures, Statistics, Travel, University life, Wines with tags École Normale Supérieure, Bayesian nonparametrics, BNP11, empirical likelihood, French cheese, Hamiltonian, IHP, Noel Cressie, NPR, Paris on June 29, 2017 by xi'an

**W**hile my participation in BNP 11 has so far been more at the janitor level [although not gaining George Casella’s reputation on NPR!] than at the scientific one, since we had decided in favour of the least expensive and unstaffed option for coffee breaks to keep the registration fees at a minimum [although I would have gladly gone all the way to removing all coffee breaks!, if only because such breaks produce much garbage], I had fairly good chats at the second poster session, in particular around empirical likelihood and HMC for discrete parameters, the first based on the general Cressie-Read formulation and the second on the recently arXived paper of Nishimura et al., which I wanted to read. Plus many other good chats full stop, around terrific cheese platters!

This morning, the coffee breaks were much more under control and I managed to enjoy [and chair] the entire session on empirical likelihood, with absolutely fantastic talks from Nils Hjort and Art Owen (the third speaker having gone AWOL, possibly a direct consequence of Trump’s travel ban).