Archive for Bayesian nonparametrics

ABC webinar, first!

Posted in Books, pictures, Statistics, University life on April 13, 2020 by xi'an


The première of the ABC World Seminar last Thursday was most successful! It took place at the scheduled time, with no technical interruption, and allowed 130+ participants from most of the World [sorry, West Coast friends!] to listen to the first speaker, Dennis Prangle, presenting normalising flows and distilled importance sampling, and answering questions. As I had already commented on an earlier version of his paper, I will not reproduce those comments here. In short, I remain uncertain, albeit not skeptical, about the notions of normalising flows and variational encoders for estimating densities, when perceived as non-parametric estimators due to the large number of parameters they involve, and I wonder about the availability of convergence rates. Incidentally, I had forgotten about the remarkable link between KL divergence and importance sampling variability. Adding Müller et al. (2018) on neural importance sampling to the to-read list.
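That link can be illustrated with a toy Gaussian sketch (my own illustration, not taken from the talk): in the spirit of the Chatterjee and Diaconis analysis, the effective sample size of importance sampling collapses roughly exponentially fast in the KL divergence between target and proposal.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
x = rng.normal(size=n)                 # draws from the proposal q = N(0, 1)
ess_ratios = []
for mu in (0.5, 1.0, 2.0, 3.0):
    # target p = N(mu, 1); the log importance weight is log p(x) - log q(x)
    logw = mu * x - 0.5 * mu**2
    w = np.exp(logw)
    kl = 0.5 * mu**2                   # KL(p||q) between unit-variance Gaussians
    ess = w.sum()**2 / (w**2).sum()    # Kong's effective sample size
    ess_ratios.append(ess / n)
    print(f"KL(p||q)={kl:.2f}  ESS/n={ess / n:.4f}")
```

As the KL divergence grows, the fraction of effectively useful samples drops from about e⁻⁰·²⁵ to around e⁻⁹, which is the variability blow-up the link refers to.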


Judith’s colloquium at Warwick

Posted in Statistics on February 21, 2020 by xi'an

the most important statistical ideas of the past 50 years

Posted in Books, pictures, Statistics, Travel on January 10, 2020 by xi'an

[A grand building entrance near the train station in Helsinki]

Aki and Andrew are celebrating the New Year in advance by composing a list of the most important statistics ideas occurring (roughly) since they were born (or since Fisher died)! Like

  • substitution of computing for mathematical analysis (incl. bootstrap)
  • fitting a model with a large number of parameters, using some regularization procedure to get stable estimates and good predictions (e.g., Gaussian processes, neural networks, generative adversarial networks, variational autoencoders)
  • multilevel or hierarchical modelling (incl. Bayesian inference)
  • advances in statistical algorithms for efficient computing (with a long list of innovations since 1970, including ABC!), pointing out that a large fraction was of the divide & conquer flavour (in connection with large—if not necessarily Big—data)
  • statistical decision analysis (e.g., Bayesian optimization and reinforcement learning, getting beyond classical experimental design)
  • robustness (under partial specification, misspecification or in the M-open world)
  • EDA à la Tukey and statistical graphics (and R!)
  • causal inference (via counterfactuals)
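The first bullet, substituting computation for mathematical analysis, is epitomised by Efron's bootstrap: rather than deriving the sampling distribution of a statistic analytically, one resamples the data and recomputes. A minimal sketch on synthetic data (the exponential sample and the choice of the median are mine, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)
data = rng.exponential(scale=2.0, size=50)    # a hypothetical sample

# bootstrap: resample with replacement, recompute the statistic each time
boot = np.array([
    np.median(rng.choice(data, size=data.size, replace=True))
    for _ in range(5000)
])
se = boot.std(ddof=1)                          # bootstrap standard error
lo, hi = np.percentile(boot, [2.5, 97.5])      # percentile 95% interval
print(f"median={np.median(data):.3f}  se={se:.3f}  CI=({lo:.3f}, {hi:.3f})")
```

No closed-form distribution of the sample median is needed; the computer does the analysis the mathematics would otherwise have to.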

Now, had I been painfully arm-bent into coming up with such a list, it would have certainly been shorter, for lack of opinion about some of these directions (even the Biometrika deputy editorship has certainly helped in reassessing the popularity of different branches!), and I would have presumably been biased towards Bayes as well as more mathematical flavours. Hence objecting to the witty comment that “theoretical statistics is the theory of applied statistics” (p.10) and including Ghosal and van der Vaart (2017) as a major reference. Also bemoaning the lack of long-term structure and theoretical support of a branch of the machine-learning literature.

Maybe more space and analysis could also have been spent on “debates remain regarding appropriate use and interpretation of statistical methods” (p.11), in that a major difficulty with the latest in data science is not so much the method(s) as the data on which they are based, which, in a large fraction of cases, is not representative and is poorly, if at all, corrected for this bias. The “replication crisis” is thus only one (tiny) aspect of the challenge.

BayesComp 2020 at a glance

Posted in Statistics, Travel, University life on December 18, 2019 by xi'an

Bayesian probabilistic numerical methods

Posted in Books, pictures, Statistics, University life on December 5, 2019 by xi'an

“…in isolation, the error of a numerical method can often be studied and understood, but when composed into a pipeline the resulting error structure may be non-trivial and its analysis becomes more difficult. The real power of probabilistic numerics lies in its application to pipelines of numerical methods, where the probabilistic formulation permits analysis of variance (ANOVA) to understand the contribution of each discretisation to the overall numerical error.”

Jon Cockayne (Warwick), Chris Oates (formerly Warwick), T.J. Sullivan, and Mark Girolami (formerly Warwick) got their survey on Bayesian probabilistic numerical methods in the SIAM (Society for Industrial and Applied Mathematics) Review, which is quite a feat given the non-statistical flavour of the journal (although Art Owen is now one of the editors of the review). As already reported in some posts on the ‘Og, the concept relies on the construction of a prior measure over a set of potential solutions, and numerical methods are assessed against the associated posterior measure. Not only is this framework more compelling in a conceptual sense, but it also leads to novel probabilistic numerical methods managing to solve quite challenging numerical tasks. Congrats to the authors!
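To make the prior-over-solutions idea concrete, here is a toy Bayesian quadrature sketch (my own, not taken from the paper): place a zero-mean GP prior with an RBF kernel on the integrand, condition on a handful of evaluations, and the integral ∫₀¹f(x)dx inherits a Gaussian posterior whose mean is the estimate and whose variance quantifies the numerical error. The lengthscale and design points below are arbitrary choices for illustration.

```python
import numpy as np
from scipy.special import erf

def bq_posterior(f, xs, ell=0.2, jitter=1e-9):
    """Posterior mean and variance of ∫₀¹ f(x)dx under a GP(0, RBF) prior on f."""
    X = np.asarray(xs, float)
    K = np.exp(-0.5 * (X[:, None] - X[None, :])**2 / ell**2)
    K += jitter * np.eye(len(X))               # numerical stabilisation
    # kernel mean embedding z_i = ∫₀¹ k(x, x_i) dx, closed form for the RBF kernel
    z = ell * np.sqrt(np.pi / 2) * (erf((1 - X) / (ell * np.sqrt(2)))
                                    + erf(X / (ell * np.sqrt(2))))
    # prior variance of the integral, c = ∫₀¹∫₀¹ k(x, x') dx dx'
    c = (np.sqrt(2 * np.pi) * ell * erf(1 / (ell * np.sqrt(2)))
         + 2 * ell**2 * (np.exp(-0.5 / ell**2) - 1))
    mean = z @ np.linalg.solve(K, f(X))        # posterior mean of the integral
    var = c - z @ np.linalg.solve(K, z)        # posterior variance
    return mean, var

xs = np.linspace(0, 1, 12)
m, v = bq_posterior(np.sin, xs)
print(f"estimate={m:.6f} ± {np.sqrt(max(v, 0.0)):.2e}; truth={1 - np.cos(1):.6f}")
```

The same evaluations a deterministic rule would use thus yield not just a point estimate but a posterior measure over the solution, which is what allows error to be propagated through pipelines of such methods.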