## Archive for London

## London snapshot [jatp]

Posted in pictures, Running, Statistics, Travel with tags Britain, Cartwright Gardens, Errol Street, Journal of the Royal Statistical Society, London, Read paper, St Pancras on April 13, 2017 by xi'an

## beyond objectivity, subjectivity, and other ‘bjectivities

Posted in Statistics with tags Andrew Gelman, Christian Hennig, discussion paper, Errol Street, frequentist inference, London, objectivism, Read paper, Royal Statistical Society, RSS, Series A, statistical modelling, subjective versus objective Bayes, subjectivity on April 12, 2017 by xi'an

**H**ere is my discussion of Gelman and Hennig at the Royal Statistical Society, which I am about to deliver!

## objective and subjective RSS Read Paper next week

Posted in Books, pictures, Statistics, Travel, University life, Wines with tags Andrew Gelman, Christian Hennig, discussion paper, England, frequentist inference, London, objective Bayes, objectivism, Philosophy of Science, Read paper, Royal Statistical Society, RSS, Series A, subjective versus objective Bayes, subjectivity on April 5, 2017 by xi'an

**A**ndrew Gelman and Christian Hennig will give a Read Paper presentation next Wednesday, April 12, 5pm, at the Royal Statistical Society, London, on their paper “Beyond subjective and objective in statistics”. Which I hope to attend, and otherwise to write a discussion of. Since the discussion (to be published in Series A) is open to everyone, I strongly encourage ‘Og’s readers to take a look at the paper and the “radical” views therein, to hopefully contribute to this discussion, either as a written discussion or as comments on this very post.

## The Hanging Tree

Posted in Books, Kids, Travel with tags Banff, Ben Aaronovitch, England, English magic, Hyde Park, London, Rivers of London, Thames on March 25, 2017 by xi'an

**T**his is the ~~fifth~~ sixth volume of Ben Aaronovitch’s Rivers of London series. Which features PC Peter Grant from London’s Metropolitan Police, specialising in paranormal crime. Joining a line of magicians that was started by Isaac Newton. And with the help of water deities. Although this English magic sleuthing series does not compare with the superlative Jonathan Strange & Mr. Norrell single book, The Hanging Tree remains highly enjoyable, maybe more for its style and vocabulary than for the detective story itself, which does not sound completely coherent (unless I read it too quickly during the wee hours in Banff last week). And it does not tell much about this part of London. Still a pleasure to read as the long-term pattern of Aaronovitch’s universe slowly unfolds and some characters gain more substance and depth.

## art brut [reposted]

Posted in pictures with tags art brut, Harry Pearce, London, remains of the day, The Guardian on December 14, 2016 by xi'an

## a Bayesian criterion for singular models [discussion]

Posted in Books, Statistics, University life with tags ABC in London, Bayesian principles, BIC, discussion paper, effective dimension, information criterion, judicial system, latex2wp, London, non-regular models, Ockham's razor, penalised likelihood, Read paper, Royal Statistical Society, sBIC, Series B, singular models on October 10, 2016 by xi'an

*[Here is the discussion Judith Rousseau and I wrote about the paper by Mathias Drton and Martyn Plummer, a Bayesian criterion for singular models, which was discussed last week at the Royal Statistical Society. There is still time to send a written discussion! Note: this post was written using the latex2wp converter.]*

**I**t is a well-known fact that the BIC approximation of the marginal likelihood in a given irregular model fails or may fail. The BIC approximation has the form

$$\mathrm{BIC}(k) = \log p(x^n \mid \hat\theta_k) - \frac{d_k}{2}\,\log n$$

where $d_k$ corresponds to the number of parameters to be estimated in model $k$. In irregular models the dimension $d_k$ typically does not provide a good measure of complexity for model $k$, at least in the sense that it does not lead to an approximation of the log marginal likelihood $\log m_k(x^n)$.

A way to understand the behaviour of $\log m_k(x^n)$ is through the *effective dimension*

$$\tilde d_k = -2\,\lim_{n\to\infty} \frac{\log \pi_k\big(\{\theta:\ \mathrm{KL}(p(\cdot\mid\theta^*),\,p(\cdot\mid\theta)) \le 1/n\}\big)}{\log n}$$

when it exists, see for instance the discussions in Chambaz and Rousseau (2008) and Rousseau (2007). Watanabe (2009) provided a more precise formula, which is the starting point of the approach of Drton and Plummer:

$$\log m_k(x^n) = \log p(x^n \mid \hat\theta_k) - \lambda_k(\theta^*)\,\log n + [m_k(\theta^*)-1]\,\log\log n + O_p(1)$$

where $\theta^*$ is the true parameter. The authors propose a clever algorithm to approximate the marginal likelihood. Given the popularity of the BIC criterion for model choice, obtaining a relevant penalized likelihood when the models are singular is an important issue and we congratulate the authors for it. Indeed, a major advantage of the BIC formula is that it is an off-the-shelf criterion implemented in many software packages, and thus can be used easily by non-statisticians. In the context of singular models, a more refined approach needs to be considered, and although the algorithm proposed by the authors remains quite simple, it requires that the learning coefficients $\lambda_j(\theta^*)$ and their multiplicities $m_j(\theta^*)$ be known in advance, which so far limits the number of problems that can be thus processed. In this regard their equation (3.2) is both puzzling and attractive. Attractive because it invokes nonparametric principles to estimate the underlying distribution; puzzling because, why should we engage in deriving an approximation like (3.1) and call for Bayesian principles when (3.1) is at best an approximation? In this case, why not just use the true marginal likelihood?
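To make the BIC-versus-marginal-likelihood comparison above concrete, here is a minimal Python sketch (ours, not from the paper) for one toy case where everything is available in closed form: a regular Gaussian mean model $x_i \sim N(\theta,1)$ with conjugate prior $\theta \sim N(0,\tau^2)$. The function names and the choice $\tau^2 = 1$ are purely illustrative.

```python
import math
import random

def log_marginal_normal(x, tau2=1.0):
    """Exact log marginal likelihood log m(x^n) for x_i ~ N(theta, 1)
    with conjugate prior theta ~ N(0, tau2)."""
    n = len(x)
    A = n + 1.0 / tau2          # posterior precision of theta
    s = sum(x)
    return (-0.5 * n * math.log(2 * math.pi)
            - 0.5 * math.log(tau2) - 0.5 * math.log(A)
            - 0.5 * sum(xi * xi for xi in x)
            + s * s / (2 * A))

def bic_normal(x):
    """BIC approximation: log p(x^n | theta_hat) - (d/2) log n, with d = 1."""
    n = len(x)
    xbar = sum(x) / n
    loglik = (-0.5 * n * math.log(2 * math.pi)
              - 0.5 * sum((xi - xbar) ** 2 for xi in x))
    return loglik - 0.5 * math.log(n)

random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(500)]
print(log_marginal_normal(data), bic_normal(data))
```

For a regular model like this one the two values differ only by an $O_p(1)$ term; in singular models the discrepancy grows with $n$, which is the issue the paper addresses.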

**1. Why do we want to use a BIC-type formula?**

The BIC formula can be viewed from a purely frequentist perspective, as an example of penalised likelihood. The difficulty then lies in choosing the penalty, and a common view on these approaches is to choose the smallest possible penalty that still leads to consistency of the model choice procedure, since it then enjoys better separation rates. In this case a penalty much smaller than the BIC one is sufficient, as proved in Gassiat et al. (2013). Now whether or not this is a desirable property is entirely debatable, and one might advocate that, for a given sample size, if the data fit the smallest model (almost) equally well, then this model should be chosen. But unless one specifies what *equally well* means, it does not add much to the debate. This also explains the popularity of the BIC formula (in regular models), since it approximates the marginal likelihood and thus benefits from the Bayesian justification of the measure of fit of a model for a given data set, often described as a Bayesian Ockham’s razor. But then why should we not compute instead the marginal likelihood? Typical answers to this question in favour of BIC-type formulas include: (1) BIC is supposedly easier to compute and (2) BIC does not call for a specification of the prior on the parameters within each model. Given that the latter is a difficult task and that the prior can be highly influential in non-regular models, this may sound like a good argument. However, it is only apparently so, since the only justification of BIC is purely asymptotic, namely, in such a regime the difficulties linked to the choice of the prior disappear. This is even more the case for the sBIC criterion, which is only valid if the parameter space is compact. Then the impact of the prior becomes less of an issue, as non-informative priors can typically be used.
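The penalised-likelihood view of model choice described above can be illustrated with a toy consistency check (our own construction, not from the paper or the discussion): choosing between $M_0: x_i \sim N(0,1)$ and $M_1: x_i \sim N(\theta,1)$ by maximising the log-likelihood minus the BIC penalty $(d/2)\log n$.

```python
import math
import random

def bic_choice(x):
    """Pick between M0: x_i ~ N(0,1) (d = 0 free parameters) and
    M1: x_i ~ N(theta,1) (d = 1) by maximising log-likelihood - (d/2) log n."""
    n = len(x)
    xbar = sum(x) / n
    ll0 = -0.5 * n * math.log(2 * math.pi) - 0.5 * sum(xi * xi for xi in x)
    ll1 = -0.5 * n * math.log(2 * math.pi) - 0.5 * sum((xi - xbar) ** 2 for xi in x)
    # keep M0 unless the fit improvement beats the BIC penalty
    return 0 if ll0 >= ll1 - 0.5 * math.log(n) else 1

random.seed(1)
null_data = [random.gauss(0.0, 1.0) for _ in range(1000)]  # M0 holds
alt = [random.gauss(1.0, 1.0) for _ in range(1000)]        # M1 holds
print(bic_choice(null_data), bic_choice(alt))
```

With a sample of this size the procedure typically recovers the true model in both cases, which is the consistency property the penalty is calibrated for.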
With all due respect, the solution proposed by the authors, namely to use the posterior mean or the posterior mode to allow for non-compact parameter spaces, does not seem to make sense in this regard, since these depend on the prior. The same comments apply to the authors’ discussion on *Priors matter for sBIC*. Indeed, variations of the sBIC could be obtained by penalizing bigger models via the prior on the weights, for instance as in Mengersen and Rousseau (2011), or by considering repulsive priors as in Petralia et al. (2012), but then it becomes more meaningful to (again) directly compute the marginal likelihood. There remains, as an argument in its favour, the relative computational ease of sBIC when compared with the marginal likelihood. This simplification is however achieved at the expense of requiring a deeper knowledge of the behaviour of the models, and it therefore loses the off-the-shelf appeal of the BIC formula, along with the range of applications of the method, at least so far. Although the dependence of the approximation of $m_k(x^n)$ on the submodels $m_j(x^n)$, $j \leq k$, is strange, this does not seem crucial, since marginal likelihoods in themselves bring little information and are only meaningful when compared to other marginal likelihoods. It becomes much more of an issue in the context of a large number of models.
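To see numerically why the size of the penalty matters in singular settings, here is a deliberately artificial comparison: the fitted log-likelihoods and the coefficient $\lambda$ below are invented for illustration and do not come from any actual model.

```python
import math

n = 1000
# Hypothetical fitted log-likelihoods for a small model and a bigger,
# singular model (these numbers are made up for illustration)
loglik_small, d_small = -1410.0, 1
loglik_big, d_big = -1405.0, 4

# Regular BIC penalty (d/2) log n: the small model wins here
bic_small = loglik_small - 0.5 * d_small * math.log(n)
bic_big = loglik_big - 0.5 * d_big * math.log(n)

# Watanabe-type penalty lambda log n with an invented lambda < d/2:
# the bigger model now wins the comparison
lam_big = 1.0
sbic_big = loglik_big - lam_big * math.log(n)
print(bic_small, bic_big, sbic_big)
```

The same likelihood values thus lead to opposite choices depending on the penalty, which is why getting the learning coefficient right, rather than using $d/2$, matters in singular models.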

**2. Should we care so much about penalized or marginal likelihoods?**

Marginal or penalized likelihoods are exploratory tools in a statistical analysis, as one is trying to define a reasonable model to fit the data. An unpleasant feature of these tools is that they provide numbers which in themselves do not have much meaning, can only be used in comparison with others, and carry no notion of uncertainty attached to them. A somewhat richer approach to exploratory analysis is to *interrogate* the posterior distributions by varying either the priors or the loss functions. The former has been proposed in van Havre et al. (2016) in mixture models, using the prior tempering algorithm. The latter has been used for instance by Yau and Holmes (2013) for segmentation based on hidden Markov models. Introducing a decision-analytic perspective in the construction of information criteria sounds to us like a reasonable requirement, especially when accounting for the current surge in studies of such aspects.

*[Posted as arXiv:1610.02503]*

## advanced computational methods for complex models in Biology [talk]

Posted in Books, pictures, Statistics, Travel, University life with tags ABC, Bayesian computing, Biology, coalescent, computational biology, England, EPSRC, expectation-propagation, London, random forests, UCL, University College London, Wright-Fisher model on September 29, 2016 by xi'an

**H**ere are the slides of the presentation I gave at the EPSRC Advanced Computational Methods for Complex Models in Biology meeting at University College London last week, introducing random forests as proper summaries for both model choice and parameter estimation (with considerable overlap with earlier slides, obviously!). The other talks of that highly interesting day on computational biology were mostly about ancestral graphs, using Wright-Fisher diffusions for coalescents, plus a comparison of expectation-propagation and ABC on a genealogy model by Mark Beaumont, and the decision-theoretic approach to HMM order estimation by Chris Holmes. In addition, it gave me the opportunity to come back to the Department of Statistics at UCL more than twenty years after my previous visit, at a time when my friend Costas Goutis was still there. And to realise it had moved from its historical premises years ago. (I wonder what happened to the two staircases built, if I remember correctly, to reduce frictions between Fisher and Pearson…)