## Archive for the Books Category

## Eagle and Child

Posted in Books, Kids, pictures, Travel, University life, Wines with tags ale, Eagle and Child, Leuven or yet again in Oxford, Lewis, pub, steak and ale pie, Tolkien on February 4, 2016 by xi'an

## expectation-propagation from Les Houches

Posted in Books, Mountains, pictures, Statistics, University life with tags belief propagation, book review, Chamonix, CHANCE, expectation-propagation, French Alps, Lasso, Les Houches, Oxford University Press, sparsity, summer school on February 3, 2016 by xi'an

**A**s CHANCE book editor, I received the other day from Oxford University Press the proceedings of an École de Physique des Houches on Statistical Physics, Optimisation, Inference, and Message-Passing Algorithms that took place there on September 30 – October 11, 2013. While it is mostly unrelated to Statistics, and since Igor Carron already reviewed the book more than a year ago, I skimmed through the few chapters connected to my interests, from Devavrat Shah's chapter on graphical models and belief propagation, to Andrea Montanari's denoising and sparse regression, including the LASSO, and only read in some detail Manfred Opper's expectation-propagation chapter. This chapter made me realise (or re-realise, as I had presumably forgotten an earlier explanation!) that expectation propagation can be seen as a sort of variational approximation that produces, by a sequence of iterations, the distribution within a certain parametric (exponential) family that is closest to the distribution of interest. By writing the Kullback-Leibler divergence the opposite way from the usual variational approximation, the solution equates the expectations of the natural sufficient statistics under both models… Another interesting aspect of this chapter is the connection with estimating normalising constants. (I noticed a slight typo on p.269 in the final form of the Kullback approximation q() to p().)
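
To make the moment-matching point concrete: minimising the Kullback-Leibler divergence KL(p‖q) over a Gaussian family amounts to matching the expectations of the natural sufficient statistics (x,x²), i.e., the mean and variance. A quick numerical check in R, with an arbitrary bimodal p of my own choosing (nothing from the book):

```r
#KL(p||q) over a Gaussian family is minimised by matching the expectations
#of the natural sufficient statistics (x,x^2), i.e., by moment matching
p<-function(x){.5*dnorm(x,-2,1)+.5*dnorm(x,2,1)} #arbitrary bimodal p
x=seq(-10,10,by=.01)
mu=sum(x*p(x))*.01          #E_p[x] by quadrature (here 0)
sig2=sum((x-mu)^2*p(x))*.01 #Var_p(x) (here 1+4=5)
kl<-function(m,s2){         #KL(p||q) for q=N(m,s2), up to quadrature error
  sum(p(x)*log(p(x)/dnorm(x,m,sqrt(s2))))*.01}
kl(mu,sig2)<kl(mu+.5,sig2)  #the moment-matched Gaussian beats a shifted one
```

The usual variational approximation minimises KL(q‖p) instead, which typically locks onto a single mode rather than matching the overall moments.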

## Le Monde puzzle [#947]

Posted in Books, Kids, pictures, R, Statistics with tags aperiodicity, chequerboard, coupling from the past, dead leaves, Le Monde, mathematical puzzle, R, Wilfrid Kendall on February 2, 2016 by xi'an

**A**nother boardgame in Le Monde mathematical puzzle:

Given an 8×8 chequerboard, consider placing 2×2 tiles over this chequerboard until (a) the entire surface is covered and (b) removing a single 2×2 tile exposes some of the original chequerboard. What is the maximal number of 2×2 tiles one can set according to this scheme? And for a 10×10 chequerboard?

**T**his puzzle reminded me of Wilfrid Kendall’s introduction to perfect sampling, with leaves falling from above, seen through a ceiling window, until the sky was no longer visible. The representation was used by Wilfrid to explain that coupling from the past did not need to go all the way back to infinity:

Defining a random coverage of the chequerboard by those 2×2 tiles amounts to selecting a random permutation þ of 1:(n-1)² and finding the subvector of þ producing a full coverage:

```r
grid=matrix(0,n,n)
path=sample(1:(n-1)^2)        #random permutation of tile positions
path=path+((path-1)%/%(n-1))  #account for border shift
i=1
neigh=c(0,1,n,n+1)            #cells covered by the tile with corner path[i]
while (min(grid)==0){         #until all entries are covered
  grid[path[i]+neigh]=grid[path[i]+neigh]+1
  i=i+1}
i=i-1
```

Then removing superfluous tiles:

```r
off=rep(TRUE,i)               #indicators of the tiles kept
for (k in sample(1:i)){
  krid=grid
  krid[path[k]+neigh]=krid[path[k]+neigh]-1
  if (min(krid)>0){           #tile k is a useless tile
    off[k]=FALSE;grid=krid}
}
```

And checking the remaining ones are truly essential:

```r
mingrid=0
for (k in (1:i)[off]){        #each remaining tile must be essential
  krid=grid
  krid[path[k]+neigh]=krid[path[k]+neigh]-1
  mingrid=max(mingrid,min(krid))
}
sky=(mingrid>0)               #rejection of the grid
```

leads to a maximum number of tiles of *[at least]* M=16, 25, 37, 54 for n=6, 8, 10, 12…
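
Wrapping the three snippets above into a single function gives a replicated search for the largest achievable number of tiles (the number of replications nrep is an arbitrary choice, and the output is only a Monte Carlo lower bound):

```r
maxtiles<-function(n,nrep=1e3){  #nrep replications, an arbitrary choice
  neigh=c(0,1,n,n+1);best=0
  for (r in 1:nrep){
    grid=matrix(0,n,n)
    path=sample(1:(n-1)^2)       #random coverage
    path=path+((path-1)%/%(n-1))
    i=1
    while (min(grid)==0){
      grid[path[i]+neigh]=grid[path[i]+neigh]+1
      i=i+1}
    i=i-1
    off=rep(TRUE,i)
    for (k in sample(1:i)){      #removing superfluous tiles
      krid=grid
      krid[path[k]+neigh]=krid[path[k]+neigh]-1
      if (min(krid)>0){off[k]=FALSE;grid=krid}}
    mingrid=0
    for (k in (1:i)[off]){       #checking remaining tiles are essential
      krid=grid
      krid[path[k]+neigh]=krid[path[k]+neigh]-1
      mingrid=max(mingrid,min(krid))}
    if (mingrid==0) best=max(best,sum(off))}
  best}
```

For instance, maxtiles(8,1e4) should eventually return 25 on the 8×8 chequerboard, in agreement with the figures above.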

## Goodness-of-fit statistics for ABC

Posted in Books, Statistics, University life with tags ABC, ABC model choice, Bayesian p-values, goodness of fit, posterior predictive, summary statistics on February 1, 2016 by xi'an

“Posterior predictive checks are well-suited to Approximate Bayesian Computation”

Louisiane Lemaire and her coauthors from Grenoble have just arXived a new paper on designing a goodness-of-fit statistic from ABC outputs. The statistic is constructed from a comparison between the observed (summary) statistics and replicated summary statistics generated from the posterior predictive distribution. This is a major difference with the standard ABC distance, where the replicated summary statistics are generated from the prior predictive distribution. The core of the paper is about calibrating a posterior predictive p-value derived from this distance, since it is not properly calibrated in the frequentist sense that it is not uniformly distributed “under the null”, a point I discussed in an ‘Og entry about Andrews’ book a few years ago.

The paper contrasts the average distance between the ABC acceptable summary statistics and the observed realisation with the average distance between ABC posterior predictive simulations of the summary statistics and the observed realisation. In the simplest case (e.g., without a post-processing of the summary statistics), the main difference between both average distances is that the summary statistics are used twice in the first version: once to select the acceptable values of the parameters and a second time for the average distance, which makes it biased downwards. The second version is more computationally demanding, especially when deriving the associated p-value, but it produces a higher power under the alternative. Obviously, this depends on how the alternative is defined, since goodness-of-fit is only related to the null, i.e., to a specific model.
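
The gap between the two average distances can be seen on a toy normal-mean example with the sample mean as summary statistic (a sketch of mine, not the authors' implementation; tolerance and prior scale are arbitrary):

```r
#toy ABC on a normal mean with the sample mean as summary statistic
set.seed(1)
n=50;xobs=rnorm(n)             #data generated under the null model N(0,1)
sobs=mean(xobs)
N=1e4;eps=.05
theta=rnorm(N,0,10)            #simulations from the prior
ssim=rnorm(N,theta,1/sqrt(n))  #prior predictive summary statistics
keep=(abs(ssim-sobs)<eps)      #ABC acceptance step
#version 1: summaries used twice (selection, then distance), biased downwards
d1=mean(abs(ssim[keep]-sobs))
#version 2: fresh posterior predictive summaries
spost=rnorm(sum(keep),theta[keep],1/sqrt(n))
d2=mean(abs(spost-sobs))
d1<d2                          #the double use shrinks the first distance
```

By construction d1 cannot exceed the tolerance eps, while d2 reflects the genuine posterior predictive spread of the summary.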

From a general perspective, I do not completely agree with the conclusions of the paper in that (a) this is a frequentist assessment that shares the shortcomings of p-values, and (b) the choice of summary statistics has a huge impact on the decision about the fit, since hardly varying statistics are more likely to lead to a good fit than appropriately varying ones.

## read paper [in Bristol]

Posted in Books, pictures, Statistics, Travel, University life with tags Bayes factors, Bayesian hypothesis testing, Bayesian model choice, Bristol, cake, England, improper priors, mixtures of distributions, Neyman-Pearson, non-informative priors, parametrisation, Pima Indians, Read paper, seminar, University of Bristol on January 29, 2016 by xi'an

**I** went to give a seminar in Bristol last Friday and I chose to present the testing with mixture paper. As we are busy working on the revision, I was eagerly looking for comments and criticisms that could strengthen this new version. As it happened, the (Bristol) Bayesian Cake (Reading) Club had chosen our paper for discussion, two weeks in a row!, hence the title!, and I got invited to join the group the morning prior to the seminar! This was, of course, most enjoyable and relaxed, including a home-made cake!, but also quite helpful in assessing our arguments in the paper. One point of contention, or at least of discussion, was the common parametrisation between the components of the mixture. Although all parametrisations are equivalent from a *single* component point of view, I can [almost] see why using a mixture with the same parameter value on all components may impose some unsuspected constraint on that parameter. Even when the parameter is *the same moment* for both components. This still sounds like a minor counterpoint in that the weight should converge to either zero or one and hence eventually favour the posterior on the parameter corresponding to the “true” model.

Another point raised during the discussion is the behaviour of the method under misspecification, or in an M-open framework: when neither model is correct, does the weight still converge to the boundary associated with the closest model (as I believe), or does a convexity argument produce a non-zero weight as its limit (as hinted by one example in the paper)? I had thought very little about this and hence had just as little to argue, though, as this does not sound to me like the primary reason for conducting tests, especially in a Bayesian framework. If one is uncertain about both models to be compared, one should have an alternative at the ready! Or use a non-parametric version, which is a direction we need to explore further before deciding it is coherent and convergent!

A third point of discussion was my argument that mixtures allow us to rely on the same parameter and hence the same prior, whether proper or not, while Bayes factors are less clearly open to this interpretation. This was not uniformly accepted!

Thinking afresh about this approach also led me to broaden my perspective on the use of the posterior distribution of the weight(s) α: while previously I had taken those weights mostly as a proxy to the posterior probabilities, to be calibrated by pseudo-data experiments, as for instance in Figure 9, I now perceive them primarily as the portion of the data in agreement with the corresponding model [or hypothesis], and more importantly as a solution for staying away from a Neyman-Pearson-like decision, or error evaluation. Usually, when asked about the interpretation of the output, my answer is to compare the behaviour of the posterior on the weight(s) with a posterior associated with a sample from each model, which does sound somewhat similar to posterior predictives if the samples are simulated from the associated predictives. But the issue was not raised during the visit to Bristol, which possibly reflects on how unfrequentist the audience was [the Statistics group is], as it apparently accepted with no further ado the use of a posterior distribution as a soft assessment of the comparative fits of the different models. If not necessarily agreeing on the need for conducting hypothesis testing (especially in the case of the Pima Indian dataset!).
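
The convergence of the weight towards a corner of the unit interval can be illustrated by a minimal Gibbs sampler encompassing two *fixed* candidate densities into a mixture (a toy sketch of mine, far from the full setting of our paper, with arbitrary choices of models, data, and prior):

```r
#toy version: mixture of two fixed densities, Be(1/2,1/2) prior on the weight
set.seed(7)
x=rnorm(100)                   #data generated from model 1, N(0,1)
f1=dnorm(x);f2=dnorm(x,3)      #two fixed candidate densities
a=.5;T=1e3;post=rep(0,T)
for (t in 1:T){                #Gibbs on allocations and weight
  z=rbinom(100,1,a*f1/(a*f1+(1-a)*f2)) #latent allocations to model 1
  a=rbeta(1,.5+sum(z),.5+100-sum(z))   #conjugate update of the weight
  post[t]=a}
mean(post[-(1:100)])           #posterior mean of the weight, near 1
```

The posterior on the weight indeed piles up near one, i.e., near the portion of the data in agreement with the generating model.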

## love-hate Metropolis algorithm

Posted in Books, pictures, R, Statistics, Travel with tags auxiliary variable, doubly intractable problems, Metropolis-Hastings algorithm, Monte Carlo Statistical Methods, multimodality, normalising constant, parallel tempering, pseudo-marginal MCMC, The night of the hunter, unbiased estimation on January 28, 2016 by xi'an

**H**yungsuk Tak, Xiao-Li Meng and David van Dyk just arXived a paper on a multiple-step proposal in Metropolis-Hastings algorithms towards dealing with multimodal targets, called “A repulsive-attractive Metropolis algorithm for multimodality” *[although I wonder why XXL did not jump at the opportunity to use the “love-hate” denomination!]*. The proposal distribution includes a [forced] downward Metropolis-Hastings move that uses the inverse of the target density π as its own target, namely 1/{π(x)+ε}, followed by a [forced] Metropolis-Hastings upward move whose target is {π(x)+ε}. The +ε is just there to avoid handling ratios of zeroes (although I wonder why using the convention 0/0=1 would not work), and is chosen as 10⁻³²³ by default, in connection with the smallest positive number representable in R. Whether or not the “downward” move is truly downwards and the “upward” move truly upwards obviously depends on the generating distribution: I find it rather surprising that the authors consider the *same* random walk density in both cases, as I would have imagined relying on a more dispersed distribution for the downward move in order to reach other modes more easily. For instance, the downward move could have been based on an *anti*-Langevin proposal, relying on the gradient to proceed further down…

This special choice of a single proposal however simplifies the acceptance ratio (and keeps the overall proposal symmetric). The final acceptance ratio still requires a ratio of intractable normalising constants, which the authors bypass via the auxiliary variable trick of Møller et al. (2006). While the authors mention the alternative pseudo-marginal approach of Andrieu and Roberts (2009), they do not try to implement it, although this would be straightforward here, since the normalising constants are the probabilities of accepting a downward and an upward move, respectively, which can easily be evaluated at a cost similar to the use of the auxiliary variables. That is,

– generate a few moves from the current value and record the proportion *p* of accepted downward moves;

– generate a few moves from the final proposed value and record the proportion *q* of accepted downward moves;

and replace the ratio of intractable normalising constants with *p/q*. It is not even clear that one needs those extra moves, since the algorithm itself requires an acceptance in the downward and upward moves and hence generates Geometric variates associated with those probabilities *p* and *q*, variates that can be used to estimate them. From a theoretical perspective, I also wonder whether forcing the downward and upward moves truly leads to an improved convergence speed: considering the case when the random walk is poorly calibrated for either the downward or the upward move, the number of failed attempts before an acceptance may get beyond the reasonable.
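
A sketch of this estimation of *p* and *q* (my own illustration, with an arbitrary bimodal target, an arbitrary random walk scale, and an arbitrary number of trial moves):

```r
#estimating the probability of accepting a forced downward move
set.seed(42)
pi<-function(x){.25*dnorm(x)+.75*dnorm(x,mean=5)} #arbitrary bimodal target
accdown<-function(x,m=1e3){    #acceptance probability of a downward move
  prop=rnorm(m,x,2)            #symmetric random walk (arbitrary scale)
  mean(pmin(1,pi(x)/pi(prop)))}#downward move targets 1/pi
p=accdown(0)   #proportion of accepted downward moves from the current value
q=accdown(3)   #same proportion from the (final) proposed value
p/q            #Monte Carlo substitute for the ratio of normalising constants
```

Alternatively, the number of attempts before the forced acceptance is a Geometric variate with success probability p (or q), so 1/(number of attempts) already provides a crude estimate at no extra cost.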

As XXL and David pointed out to me, the unusual aspect of the approach is that here the proposal density is intractable, rather than the target density itself. This makes using Andrieu and Roberts (2009) seemingly less straightforward. However, as I was reminded this afternoon at the statistics and probability seminar in Bristol, the argument for the pseudo-marginal based on an unbiased estimator is that w Q(w|x) has a marginal in x equal to π(x) when the expectation of w is π(x). In the current problem, the proposal in x can be extended into a proposal in (x,w), w P(w|x), whose marginal is the proposal on x.

If we complement the target π(x) with the conditional P(w|x), the acceptance probability would then involve

{π(x’) P(w’|x’) / π(x) P(w|x)} / {w’ P(w’|x’) / w P(w|x)} = {π(x’) / π(x)} {w/w’}

so it seems the pseudo-marginal (or auxiliary variable) argument also extends to the proposal. Here is a short experiment that shows no discrepancy between target and histogram:

```r
nozero=1e-300
#love-hate move
move<-function(x){
  #forced downward move, with target 1/{pi(.)+nozero}
  bacwa=1;prop1=rnorm(1,x,2)
  while (runif(1)>{pi(x)+nozero}/{pi(prop1)+nozero}){
    prop1=rnorm(1,x,2);bacwa=bacwa+1}
  #forced upward move from prop1, with target pi(.)+nozero
  prop2=rnorm(1,prop1,2)
  while (runif(1)>{pi(prop2)+nozero}/{pi(prop1)+nozero})
    prop2=rnorm(1,prop1,2)
  y=x
  if (runif(1)<pi(prop2)*bacwa/pi(x)/fowa){
    y=prop2;fowa<<-bacwa}      #global update of the attempt count
  return(y)}
#arbitrary bimodal target
pi<-function(x){.25*dnorm(x)+.75*dnorm(x,mean=5)}
#running the chain
T=1e5
x=5*rnorm(1);luv8=rep(x,T)
fowa=1;prop1=rnorm(1,x,2)      #initial estimate of the attempt count
while (runif(1)>{pi(x)+nozero}/{pi(prop1)+nozero}){
  fowa=fowa+1;prop1=rnorm(1,x,2)}
for (t in 2:T) luv8[t]=move(luv8[t-1])
```
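
As a basic sanity check on the above, the bimodal target pi() integrates to one and has mean .25×0+.75×5=3.75 (the quadrature range and step are arbitrary):

```r
#numerical check of the bimodal target used in the experiment above
pi<-function(x){.25*dnorm(x)+.75*dnorm(x,mean=5)}
x=seq(-10,15,by=.01)
mass=sum(pi(x))*.01  #total mass, close to 1
mu=sum(x*pi(x))*.01  #mean, close to .25*0+.75*5=3.75
```

The empirical average mean(luv8) of the simulated chain should then fall near 3.75, on top of the histogram matching the target.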

## R typos

Posted in Books, Kids, R, Statistics, Travel, University life with tags Amsterdam, Bayesian Analysis, MCMskv, Metropolis-Hastings algorithm, mixtures, Monte Carlo Statistical Methods, R, random walk, testing as mixture estimation on January 27, 2016 by xi'an

**A**t MCMskv, Alexander Ly (from Amsterdam) pointed out to me some R programming mistakes I made in the introduction to Metropolis-Hastings algorithms I wrote a few months ago for the Wiley on-line encyclopedia! While the outcome (the Monte Carlo posterior) of the corrected version is only moderately changed, this is nonetheless embarrassing! The example (if not the R code) was a mixture of Poisson and Geometric distributions borrowed from our testing as mixture paper. Among other things, I used a flat prior on the mixture weights instead of a Beta(1/2,1/2) prior *and* a simple log-normal random walk on the mean parameter instead of the more elaborate second-order expansion discussed in the text. And I also inverted the probabilities of success and failure for the Geometric density. The new version is now available on arXiv, and hopefully soon on the Wiley site, but one (the?) fact worth mentioning here is that the (right) corrections in the R code first led to overflows, because I was using the Beta random walk Be(εp,ε(1-p)), whose major drawback I discussed here a few months ago. With the drag that values of the weight parameter near zero or one produced infinite values of the density… Adding 1 (or 1/2) to each parameter of the Beta proposal solved the problem, and led to a posterior on the weight still concentrating on the correct corner of the unit interval. In any case, a big thank you to Alexander for testing the R code and spotting the several mistakes…
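
To recall the overflow issue, here is a minimal illustration of the Beta random walk Be(εp,ε(1-p)) near a corner of the unit interval, and of the +1 fix (the values of ε and p are arbitrary):

```r
eps=100                            #arbitrary precision parameter
p=1e-12                            #current weight, almost at the corner
#original proposal Be(eps*p,eps*(1-p)): shape1 is almost zero, hence
#proposals collapse onto the boundary, where the density is infinite
dbeta(0,eps*p,eps*(1-p))           #infinite density at the boundary
set.seed(1)
prop=rbeta(1,1+eps*p,1+eps*(1-p))  #the +1 fix keeps both shapes above 1
dbeta(prop,1+eps*p,1+eps*(1-p))    #finite density, no overflow
```

With both shape parameters above one, the Beta density vanishes at 0 and 1 instead of diverging, which removes the infinite Metropolis-Hastings ratios.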