**P**anayiota Touloupou (Warwick), Naif Alzahrani, Peter Neal, Simon Spencer (Warwick) and Trevelyan McKinley arXived a paper yesterday on Model comparison with missing data using MCMC and importance sampling, where they propose an importance sampling strategy based on an early MCMC run to approximate the marginal likelihood, a.k.a. the evidence. Another instance of estimating a constant. It is thus similar to our Frontier paper with Jean-Michel, as well as to the recent Pima Indian survey of James and Nicolas. The authors give the difficulty of calibrating reversible jump MCMC as the starting point of their research. The importance sampler they use is the natural choice of a Gaussian or *t* distribution centred at some estimate of θ, with covariance matrix associated with Fisher's information or derived from the warmup MCMC run. The comparison between the different approximations to the evidence is first conducted over longitudinal epidemiological models, involving 11 parameters in the example processed therein. The competitors to the 9 versions of importance samplers investigated in the paper are the raw harmonic mean [rather than our HPD-truncated version], Chib's, path sampling, and RJMCMC [which does not make much sense when comparing only two models], but neither bridge sampling nor nested sampling. Without any surprise (!), harmonic means do not converge to the right value, but more surprisingly Chib's method happens to be less accurate than most importance solutions studied therein. This may be due to the fact that Chib's approximation requires three MCMC runs and hence is quite costly. The fact that the mixture (or defensive) importance sampling [with 5% weight on the prior] did best begs for a comparison with bridge sampling, no? The difficulty with such a study is obviously that the results only apply in the setting of the simulation, hence that, e.g., another mixture importance sampler or Chib's solution could behave differently in another model. In particular, it is hard to judge the impact of the dimensions of the parameter and of the missing data.
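The winning defensive-mixture strategy can be sketched in a few lines. The toy model below is my own choice (a one-parameter conjugate normal model, nothing like the epidemic models of the paper, but one where the evidence is available in closed form for checking): fit a Gaussian proposal to a warmup run, mix it with 5% mass on the prior, and average the importance weights.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Toy model (my assumption, not the paper's): y_i ~ N(theta, 1), theta ~ N(0, 1),
# so the evidence is available in closed form for comparison.
y = rng.normal(0.5, 1.0, size=20)

def log_prior(theta):
    return stats.norm.logpdf(theta, 0.0, 1.0)

def log_lik(theta):
    return stats.norm.logpdf(y[:, None], theta, 1.0).sum(axis=0)

# Stand-in for a warmup MCMC run: the posterior is Gaussian here, so sample it directly.
post_var = 1.0 / (1.0 + len(y))
post_mean = post_var * y.sum()
warmup = rng.normal(post_mean, np.sqrt(post_var), size=2_000)

# Defensive mixture proposal: 95% Gaussian fitted to the warmup run, 5% prior.
mu, sd = warmup.mean(), warmup.std(ddof=1)
n = 100_000
comp = rng.random(n) < 0.95
draws = np.where(comp, rng.normal(mu, sd, n), rng.normal(0.0, 1.0, n))
log_q = np.logaddexp(np.log(0.95) + stats.norm.logpdf(draws, mu, sd),
                     np.log(0.05) + stats.norm.logpdf(draws, 0.0, 1.0))

# Evidence estimate: average of prior x likelihood over proposal density.
log_w = log_prior(draws) + log_lik(draws) - log_q
log_Z = np.logaddexp.reduce(log_w) - np.log(n)

# Closed-form evidence for this conjugate model: y ~ N(0, I + J).
true_log_Z = stats.multivariate_normal.logpdf(
    y, np.zeros(len(y)), np.eye(len(y)) + np.ones((len(y), len(y))))
```

The 5% prior component bounds the importance weights from blowing up in the tails, which is the whole point of the defensive mixture.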

## Archive for harmonic mean

## approximating evidence with missing data

Posted in Books, pictures, Statistics, University life with tags Bayes factor, Bayesian Choice, Bayesian model comparison, bridge sampling, Chib's approximation, defensive mixture, harmonic mean, importance sampling, MCMC algorithms, mixture, Monte Carlo Statistical Methods, nested sampling, Pima Indians, reversible jump MCMC, simulation, University of Warwick on December 23, 2015 by xi'an

## rediscovering the harmonic mean estimator

Posted in Kids, Statistics, University life with tags cross validated, estimating a constant, harmonic mean, HPD region, importance sampling, infinite variance estimators, normalising constant, Radford Neal on November 10, 2015 by xi'an

**W**hen looking at unanswered questions on X validated, I came across a question where the author wanted to approximate a normalising constant

$$\mathfrak{c}=\int \tilde g(x)\,\text{d}x\,,\qquad g(x)=\tilde g(x)/\mathfrak{c}\,,$$

while simulating from the associated density, *g*. While seemingly unaware of the (huge) literature in the area, he re-derived [a version of] the harmonic mean estimate by considering the [inverted importance sampling] identity

$$\int \frac{\alpha(x)}{\tilde g(x)}\,g(x)\,\text{d}x=\frac{1}{\mathfrak{c}}\,,$$

when α is a probability density, and by using for α the uniform distribution over the whole range of the simulations from *g*. This choice of α obviously leads to an estimator with infinite variance when the support of *g* is unbounded, but the idea can easily be salvaged by using instead another uniform distribution, for instance one over a highest posterior density (HPD) region, as we studied in our papers with Darren Wraith and Jean-Michel Marin. (Unfortunately, the originator of the question no longer seems interested in the problem.)
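The salvaged estimator can be sketched on a toy standard normal target (a minimal sketch, my own example): with α uniform over an HPD-style region [-q, q], the importance ratio α/g̃ is bounded and the harmonic mean estimate has finite variance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy target: unnormalised density g~(x) = exp(-x^2/2), true constant sqrt(2*pi).
log_g_tilde = lambda t: -0.5 * t**2
x = rng.normal(size=200_000)          # draws from the normalised density g

# alpha: uniform over a 50% HPD region [-q, q], estimated from the sample itself.
q = np.quantile(np.abs(x), 0.5)
inside = np.abs(x) <= q               # indicator of the HPD region

# E_g[alpha(X)/g~(X)] = 1/c, with bounded weights thanks to the truncation.
inv_Z_hat = np.mean(inside / (2 * q) * np.exp(-log_g_tilde(x)))
Z_hat = 1.0 / inv_Z_hat
```

With the raw harmonic mean (α uniform over the whole sample range on an unbounded support), the weights exp(x²/2) explode in the tails; truncating to the HPD region is what restores a finite variance.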

## a remarkably simple and accurate method for computing the Bayes factor &tc.

Posted in Statistics with tags Bayes factors, harmonic mean, HPD region, Lebesgue integration, MaxEnt2009, quadrature, Riemann integration, San Antonio, Thomas Bayes on February 13, 2013 by xi'an

**T**his recent arXiv posting by Martin Weinberg and co-authors was pointed out to me by friends because of its title! It indeed sounded a bit inflated. And it also reminded me of old-style papers where the title was somehow the abstract, like *An Essay towards Solving a Problem in the Doctrine of Chances*… So I had a look at it on my way to Gainesville. The paper starts from the earlier paper by Weinberg (2012) in Bayesian Analysis, where he uses an HPD region to determine the Bayes factor by a safe harmonic mean estimator (an idea we had already advocated with Jean-Michel Marin in the San Antonio volume and with Darren Wraith in the MaxEnt volume). An extra idea is to try to optimise [against the variance of the resulting evidence] the region over which the integration is performed: “choose a domain that results in the most accurate integral with the smallest number of samples” (p.3). The authors proceed by volume peeling, using some quadrature formula for the posterior coverage of the region, by either Riemann or Lebesgue approximations (p.5). I was fairly lost at this stage and the third proposal, based on adaptively managing hyperrectangles (p.7), went completely over my head! The sentence “the results are clearly worse with O(**∞**) errors, but are still remarkably better for high dimensionality” (p.11) did not make sense either… The method may thus be remarkably simple, but the paper is not written in a way that conveys this impression!

## estimating a constant (not really)

Posted in Books, Statistics, University life with tags All of Statistics, Bayes factor, Bayesian inference, Cédric Villani, cross validated, digits of π, harmonic mean, Henri Poincaré, Law of Large Numbers, MCMC algorithms, Monte Carlo methods, numerical analysis, StackExchange, Statistics on October 12, 2012 by xi'an

**L**arry Wasserman wrote a blog entry on the normalising constant paradox, where he repeats that he does not understand my earlier point… Let me try to recap that point here, along with the various comments I made on StackExchange *(while keeping in mind all this is for intellectual fun!)*

**T**he entry is somehow paradoxical in that Larry acknowledges (in that post) that the analysis in his book, **All of Statistics**, is wrong. The fact that *“g(x)/c is a valid density only for one value of c”* (and hence cannot lead to a notion of likelihood on *c*) is the very reason why I stated that there can be no statistical inference nor prior distribution about *c*: a sample from *f* does not bring *statistical information* about *c* and there can be no *statistical* estimate of *c* based on this sample. (In case you did not notice, I insist upon *statistical*!)

**T**o me this problem is completely different from a statistical problem, at least in the modern sense: if I need to approximate the constant *c* (as I do in fact when computing Bayes factors), I can produce an arbitrarily long sample from a certain importance distribution and derive a converging (and sometimes unbiased) approximation of *c*. Once again, this is Monte Carlo integration, a numerical technique based on the Law of Large Numbers and the stabilisation of frequencies. (Call it a *frequentist* method if you wish. I completely agree that MCMC methods are inherently *frequentist* in that sense, and I see no problem with this because they are not *statistical* methods. Of course, this may be the core of the disagreement with Larry and others, namely that they call the Law of Large Numbers statistics, while I do not. This lack of separation between the two notions also shows up in a recent general-public talk on Poincaré’s mistakes by Cédric Villani! All this may just mean I am irremediably Bayesian, seeing anything motivated by frequencies as non-statistical!) But that process does not mean that *c* can take a range of values that would index a family of densities compatible with a given sample. In this Monte Carlo integration approach, the distribution of the sample is completely under control (modulo the errors induced by pseudo-random generation). This approach is therefore outside the realm of Bayesian analysis *“that puts distributions on fixed but unknown constants”*, because those unknown constants parameterise the distribution of an observed sample. Ergo, *c* is not a parameter of the sample, and the sample Larry argues about (*“we have data sampled from a distribution”*) contains no information whatsoever about *c* that is not already in the function *g*. (It is not “data” in this respect, but a stochastic sequence that can be used for approximation purposes.)
Which gets me back to my first argument, namely that *c* is known (and at the same time difficult or impossible to compute)!
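The Monte Carlo integration point is easy to illustrate on a toy constant of my own choosing (nothing from Larry's post): the “known but hard to compute” quantity c = ∫ exp(−|x|) dx = 2, approximated by averaging importance weights under a heavy-tailed proposal, converging by the Law of Large Numbers.

```python
import numpy as np

rng = np.random.default_rng(42)

# Target: c = ∫ exp(-|x|) dx = 2, a fixed constant, "estimated" by Monte Carlo.
g = lambda t: np.exp(-np.abs(t))

# Cauchy proposal: heavier tails than g, so the weights g/phi have finite variance.
x = rng.standard_cauchy(size=500_000)
phi = 1.0 / (np.pi * (1.0 + x**2))

c_hat = np.mean(g(x) / phi)   # unbiased, converges to 2 by the LLN
```

Note the Cauchy choice matters: with, say, a Gaussian proposal the weight g/φ would explode in the tails and the estimator would have infinite variance, even though it would still converge.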

**L**et me also answer here the comments *“why is this any different from estimating the speed of light c?”* and *“why can’t you do this with the 100th digit of π?”* made on the earlier post and on StackExchange. Estimating the speed of light means to me (who repeatedly flunked Physics exams after leaving high school!) that we have a physical experiment measuring the speed of light (like the original one by Rœmer at the Observatoire de Paris, which I visited last week) and that the statistical analysis infers about *c* by using those measurements and the impact of the imprecision of the measuring instruments (as we do when analysing astronomical data). If, now, there exists a physical formula of the kind

$$\mathfrak{c}=\int h(x)\,\varphi(x)\,\text{d}x\,,$$

where φ is a probability density, I can imagine stochastic approximations of *c* based on this formula, but I no longer consider it a statistical problem. The case is even clearer for the 100th digit of *π*: it is also a fixed number, which I can approximate by a stochastic experiment but to which I cannot attach a statistical tag. (It is 9, by the way.) Throwing darts at random, as I did during my Oz tour, is not a statistical procedure, but simple Monte Carlo à la Buffon…
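The dart-throwing aside corresponds to the standard Monte Carlo approximation of π (a toy sketch, not Buffon's needle proper): throw uniform darts at the unit square and count those landing inside the quarter disc — an approximation of a fixed number, not a statistical estimate.

```python
import numpy as np

rng = np.random.default_rng(7)

# Monte Carlo "darts": the fraction of uniform points in the unit square that
# fall inside the quarter disc x^2 + y^2 <= 1 converges to pi/4 by the LLN.
n = 1_000_000
u, v = rng.random(n), rng.random(n)
pi_hat = 4.0 * np.mean(u**2 + v**2 <= 1.0)
```

The standard error here decreases as 1/√n, so the scheme is a (very inefficient) way of approximating a constant that is, again, perfectly known.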

**O**verall, I still do not see this as a paradox for our field (and certainly not as a critique of Bayesian analysis), because there is no reason a statistical technique should be able to address any and every numerical problem. (Once again, Persi Diaconis would almost certainly differ, as he defended a Bayesian perspective on numerical analysis in the early days of MCMC…) There may be a “Bayesian” solution to this particular problem (and that would be nice) and there may be none (and that would be OK too!), but I am not even convinced I would call this solution “Bayesian”! *(Again, let us remember this is mostly for intellectual fun!)*

## Harmonic means for reciprocal distributions

Posted in Statistics with tags harmonic mean, inverse normal distribution, reciprocal on November 17, 2011 by xi'an

**A**n interesting post on ExploringDataBlog on the properties of the distribution of *1/X*. Hmm, maybe not the most enticing way of presenting it, since there does not seem to be anything special in a generic inversion! What attracted me to this post (via R-bloggers) is the fact that a picture shown there was one I had obtained about twenty years ago when looking for a particular conjugate prior in astronomy, a distribution I dubbed *the inverse normal distribution* (to distinguish it from *the inverse Gaussian distribution*). The author, Ron Pearson *[who manages to mix the first name and the last name of two arch-enemies of 20th Century statistics!]*, points out that well-behaved distributions usually lead to heavy-tailed reciprocal distributions. Of course, the arithmetic mean of a variable *X* is the inverse of the harmonic mean of the inverse variable *1/X*, so looking at those distributions makes sense. The post shows that, for the inverse normal distribution, depending on the value of the normal mean, the distribution of *1/X* has tails that vary between those of a Cauchy and of a normal distribution…
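The arithmetic/harmonic duality mentioned above is an exact algebraic identity, easily checked on simulated values (a minimal sketch; the N(5,1) choice is mine, keeping *X* well away from zero so that *1/X* is well behaved):

```python
import numpy as np

rng = np.random.default_rng(3)

# X ~ N(5, 1), kept away from zero so that the reciprocal 1/X is well behaved.
x = rng.normal(5.0, 1.0, size=10_000)
y = 1.0 / x                               # the reciprocal variable 1/X

harm_mean_y = len(y) / np.sum(1.0 / y)    # harmonic mean of 1/X
arith_mean_x = x.mean()                   # arithmetic mean of X

# The harmonic mean of 1/X is exactly the inverse of the arithmetic mean of X:
# n / sum(1/(1/x_i)) = n / sum(x_i) = 1 / mean(x).
```

The identity holds for any sample without zeros; the heavy tails of *1/X* only matter for the distributional behaviour of the reciprocal, not for this deterministic relation.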