Archive for guest post

Bayes Comp 2018 [call for guest posts]

Posted in Mountains, pictures, Statistics, Travel, University life on March 26, 2018 by xi'an

As the next MCMski conference, now called Bayes Comp, is starting in Barcelona, Spain, March 26-29, I welcome all guest posts covering the conference, since I am not going to be there! Enjoy!

Bayesian model averaging in astrophysics [guest post]

Posted in Books, pictures, Statistics, University life on August 12, 2015 by xi'an

[Following my posting of a misfiled 2013 blog entry, Ewan Cameron told me of the impact of this paper in starting his own blog, and I asked him for a guest post, resulting in this analysis, much deeper than mine. No warning necessary this time!]

Back in February 2013, when “Bayesian Model Averaging in Astrophysics: A Review” by Parkinson & Liddle (hereafter PL13) first appeared on the arXiv, I was a keen, young(ish) postdoc eager to get stuck into debates about anything and everything ‘astro-statistical’, and with its seemingly glaring flaws, PL13 was more grist to the mill. However, despite my best efforts on various forums I couldn’t get a decent fight started over the right way to do Bayesian model averaging (BMA) in astronomy, so out of sheer frustration two months later I made my own soapbox to shout from at Another Astrostatistics Blog. Having seen PL13 reviewed recently here on Xian’s Og, it feels like the right time to revisit the subject and reflect on where BMA in astronomy stands today.

As pointed out to me back in 2013 by Tom Loredo, the act of Bayesian model averaging has been around much longer than its name; indeed an early astronomical example appears in Gregory & Loredo (1992) in which the posterior mean representation of an unknown signal is constructed for an astronomical “light-curve”, averaging over a set of constant and periodic candidate models. Nevertheless the wider popularisation of model averaging in astronomy has only recently taken place through a variety of applications in cosmology: e.g. Liddle, Mukherjee, Parkinson & Wang (2006) and Vardanyan, Trotta & Silk (2011).

In contrast to earlier studies like Gregory & Loredo (1992), or the classic BMA review by Hoeting et al. (1999), in which the target of model averaging is typically a utility function, a set of future observations, or a latent parameter of the observational process (e.g. the unknown “light-curve” shape) shared naturally by all competing models, the proposal of the cosmological BMA studies is to produce a model-averaged version of the posterior for a given ‘shared’ parameter: a so-called “model-averaged PDF”.

This proposal didn’t sit well with me back in 2013, and it still doesn’t sit well with me today. Philosophically: without a model a parameter has no meaning, so why should we seek meaning in the marginalised distribution of a parameter over an entire set of models? And practically: without knowing the model ‘label’ to which a given mass of model-averaged parameter probability belongs, there’s nothing much useful we can do with this ‘PDF’: nothing much we can say about the data we’ve just analysed, and nothing much we can say about future experiments. Whereas the space of the observed data is shared automatically by all competing models, it seems to me somehow “un-Bayesian” to impose the further restriction that the parameters of separate models share the same scale and topology. I say “un-Bayesian” since this mode of model averaging presupposes a formulation of the parameter-space-plus-prior pairing stronger than the statement of one’s prior beliefs about the distribution of observable data given the model. But I would be happy to hear arguments from the other side in the comments box below … !
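
For concreteness, the “model-averaged PDF” at issue is, in my own notation (not necessarily PL13’s),

\bar{p}(\theta \mid y) \;=\; \sum_k p(\theta \mid y, M_k)\, P(M_k \mid y), \qquad P(M_k \mid y) \;=\; \frac{p(y \mid M_k)\,\pi(M_k)}{\sum_j p(y \mid M_j)\,\pi(M_j)},

with p(y|M_k) the evidence of model M_k. The outer sum is well defined only if θ carries a common meaning, scale, and support across every M_k, which is exactly the restriction I am querying.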

Bayesian optimization for likelihood-free inference of simulator-based statistical models [guest post]

Posted in Books, Statistics, University life on February 17, 2015 by xi'an

[The following comments are from Dennis Prangle, about the second half of the paper by Gutmann and Corander I commented on last week.]

Here are some comments on the paper by Gutmann and Corander. My brief skim concentrated on the second half of the paper, the applied methodology, so my comments should be quite complementary to Christian’s on the theoretical part!

ABC algorithms generally follow the template of proposing parameter values, simulating datasets and accepting/rejecting/weighting the results based on similarity to the observations. The output is a Monte Carlo sample from a target distribution, an approximation to the posterior. The most naive proposal distribution for the parameters is simply the prior, but this is inefficient if the prior is highly diffuse compared to the posterior. MCMC and SMC methods can be used to provide better proposal distributions. Nevertheless they often still seem quite inefficient, requiring repeated simulations in parts of parameter space which have already been well explored.
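
As a point of reference for what follows, here is a minimal rejection-ABC sketch in R on a toy normal-mean problem of my own (none of the names or numbers come from the paper):

    ## Toy problem: infer the mean theta of a N(theta, 1) sample by rejection ABC
    set.seed(1)
    y_obs <- rnorm(20, mean = 2)              # "observed" data
    s_obs <- mean(y_obs)                      # summary statistic
    n_sim <- 1e5
    theta <- runif(n_sim, -10, 10)            # naive proposal: the (diffuse) prior
    s_sim <- vapply(theta, function(t) mean(rnorm(20, mean = t)), numeric(1))
    eps   <- 0.1                              # acceptance threshold
    post  <- theta[abs(s_sim - s_obs) < eps]  # approximate posterior sample
    mean(abs(s_sim - s_obs) < eps)            # acceptance rate: tiny

The tiny acceptance rate under the diffuse prior is exactly the inefficiency that MCMC, SMC, and the present paper try to mitigate.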

The strategy of this paper is instead to fit a non-parametric model to the target distribution (or in fact to a slight variation of it), in the hope that this will require far fewer simulations. The approach is quite similar to that of Richard Wilkinson’s recent paper, in which Richard fitted a Gaussian process to the ABC analogue of the log-likelihood. Gutmann and Corander introduce two main novelties:

  1. They model the expected discrepancy (i.e. distance) Δθ between the simulated and observed summary statistics, which is then transformed to estimate the likelihood. This is in contrast to Richard, who transformed the discrepancy before modelling it, following the standard ABC approach of weighting the discrepancy according to how close to 0 it is. The drawback of the latter approach is that it requires picking a tuning parameter (the ABC acceptance threshold or bandwidth) in advance of the algorithm. The new approach still requires a tuning parameter, but its choice can be delayed until the transformation is performed.
  2. They generate the θ values on-line using “Bayesian optimisation”. The idea is to pick θ so as to concentrate on the region near the minimum of the objective function, and also to reduce uncertainty in the Gaussian process; well-explored regions can thus usually be neglected. This is in contrast to Richard, who chose his θs using a space-filling design prior to performing any simulations. (A minimal sketch of both ideas follows this list.)
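
To make the contrast concrete, here is a minimal sketch in R of both novelties on the toy problem above: a Gaussian-process model of the expected discrepancy, with new θ values chosen by a lower-confidence-bound rule standing in for the paper’s Equation (45), and hyperparameters fixed by hand rather than estimated:

    ## GP surrogate of the ABC discrepancy + on-line acquisition of theta
    set.seed(2)
    y_obs <- rnorm(20, mean = 2)
    s_obs <- mean(y_obs)
    disc  <- function(theta) abs(mean(rnorm(20, mean = theta)) - s_obs)

    se_kern <- function(a, b, ell = 1)        # squared-exponential kernel, unit variance
      exp(-outer(a, b, "-")^2 / (2 * ell^2))
    gp_fit <- function(x, d, grid, noise = 0.05) {
      Ki <- solve(se_kern(x, x) + noise^2 * diag(length(x)))
      Ks <- se_kern(grid, x)
      mu <- drop(Ks %*% Ki %*% d)                       # posterior mean
      s2 <- pmax(1 - rowSums((Ks %*% Ki) * Ks), 1e-10)  # posterior variance
      list(mu = mu, sd = sqrt(s2))
    }

    grid  <- seq(-10, 10, length.out = 401)
    theta <- runif(5, -10, 10)                # small initial design
    d     <- vapply(theta, disc, numeric(1))
    for (it in 1:30) {                        # acquisition loop
      dn    <- (d - mean(d)) / sd(d)          # standardise for the unit-variance GP
      fit   <- gp_fit(theta, dn, grid)
      lcb   <- fit$mu - 2 * fit$sd            # favour low mean OR high uncertainty
      t_new <- grid[which.min(lcb)]
      theta <- c(theta, t_new)
      d     <- c(d, disc(t_new))
    }
    theta[which.min(d)]                       # evaluations pile up near theta = 2

Thirty-five simulator calls, against the hundred thousand of the rejection sketch above, is the kind of saving being claimed; the remaining step of transforming the fitted discrepancy into a likelihood estimate, novelty (1), is left out here.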

I didn’t read the paper’s theory closely enough to decide whether (1) is a good idea, though the results for the paper’s examples certainly look convincing. Also, one issue with Richard’s approach was that, because the log-likelihood varied over such a wide range of magnitudes, he needed to fit several “waves” of GPs. It would be nice to know whether modelling the discrepancy instead has removed this problem, or whether a single GP is still sometimes an insufficiently flexible model.

Novelty (2) is a very nice and natural approach to take here. I did wonder why the particular criterion in Equation (45) was used to decide on the next θ. Does this correspond to optimising some information-theoretic quantity? Other practical questions were whether it’s possible to parallelise the method (I seem to remember talking to Michael Gutmann about this at NIPS but can’t remember his answer!), and how well the approach scales with the dimension of the parameters.

3,000 posts and 1,000,000 views so far…

Posted in Books, Kids, Statistics on September 12, 2014 by xi'an

As the ‘Og has gone over its [first] million views and 3,000 posts since its first post in October 2008, here are the most popular entries (lots of book reviews, too many obituaries, and several guest posts):

In{s}a(ne)!! 9,330
“simply start over and build something better” 8,514
George Casella 6,712
About 4,853
Bayesian p-values 4,468
Sudoku via simulated annealing 4,150
Julien on R shortcomings 3,673
Solution manual to Bayesian Core on-line 3,040
Solution manual for Introducing Monte Carlo Methods with R 2,954
#2 blog for the statistics geek?! 2,706
Of black swans and bleak prospects 2,596
Gelman’s course in Paris, next term! 2,451
the Art of R Programming [guest post] 2,242
Parallel processing of independent Metropolis-Hastings algorithms 2,208
Bayes’ Theorem 1,925
Bayes on the Beach 2010 [2] 1,778
Do we need an integrated Bayesian/likelihood inference? 1,742
Théorème vivant 1,617
Dennis Lindley (1923-2013) 1,613
Coincidence in lotteries 1,543
The mistborn trilogy 1,532
Julian Besag 1945-2010 1,529
Frequency vs. probability 1,448
Bayes’ Theorem in the 21st Century, really?! 1,401
the cartoon introduction to statistics 1,398
understanding computational Bayesian statistics 1,369
The Search for Certainty 1,274
Bayesian modeling using WinBUGS 1,273
Particle MCMC discussion 1,256
Reference prior for logistic regression 1,215
Tornado in Central Park 1,142
Harmonic mean estimators 1,138
A ridiculous email 1,134
Andrew gone NUTS! 1,132
Top 15 all-timers? 1,130
Millenium 1 [movie] 1,121
Monte Carlo Statistical Methods third edition 1,102
Introducing Monte Carlo Methods with R: a first course 1,090


Shravan’s comments on “Valen in Le Monde” [guest post]

Posted in Books, Statistics, University life on November 22, 2013 by xi'an

[These are comments sent yesterday by Shravan Vasishth in connection with my post. Since they are rather lengthy, I made them into a guest post. Shravan is also the author of The Foundations of Statistics, and we got in touch through my review of the book. I may address some of his points later but, for now, I find the perspective of a psycholinguist quite interesting to hear.]

Christian, is the problem for you that the p-value, however low, is only going to tell you the probability of your data (roughly speaking) assuming the null is true? It’s not going to tell you anything about the probability of the alternative hypothesis, which is the real hypothesis of interest.

However, limiting the discussion to (Bayesian) hierarchical models (linear mixed models), which is the type of model people often fit in repeated-measures studies in psychology (or at least in psycholinguistics), as long as the problem is about figuring out P(θ>0) or P(θ<0), the decision (to act as if θ>0) is going to be the same regardless of whether one uses p-values or a fully Bayesian approach. This is because the likelihood is going to dominate in the Bayesian model.
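
A toy R illustration of this point (my own, assuming a plain normal mean with a flat prior rather than a full hierarchical model): the one-sided p-value and the posterior probability P(θ<0) all but coincide, so the decision to act as if θ>0 is the same either way.

    ## One-sided p-value vs posterior P(theta < 0) under a flat prior
    set.seed(3)
    y <- rnorm(100, mean = 0.3)              # data with a modest positive effect
    p_freq  <- t.test(y, alternative = "greater")$p.value
    ## flat prior => posterior for theta is approximately N(mean(y), var(y)/n)
    p_bayes <- pnorm(0, mean = mean(y), sd = sd(y) / sqrt(length(y)))
    c(p_freq, p_bayes)                       # numerically almost identical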

Andrew has objected to this line of reasoning by saying that making a decision like θ>0 is not a reasonable one in the first place. That is true in some cases, where the result of one experiment never replicates because of study effects or whatever. But there are a lot of effects which are robust and replicable, and where it makes sense to ask these types of questions.

One central issue for me is: in situations like these, using a low p-value to make such a decision is going to yield pretty similar outcomes to doing inference using the posterior distribution. The machinery needed to do a fully Bayesian analysis is very intimidating; you need to know a lot, and you need to do a lot more coding and checking than when you fit an lmer type of model.

It took me 1.5 to 2 years of hard work (=evenings spent not reading novels) to get to the point that I knew roughly what I was doing when fitting Bayesian models. I don’t blame anyone for not wanting to put their life on hold to get to such a point. I find the Bayesian method attractive because it actually answers the question I really asked, namely is θ>0 or θ<0? This is really great: I don’t have to beat around the bush any more! (there; I just used an exclamation mark). But for the researcher unwilling (or, more likely, unable) to invest the time in the maths and probability theory and the world of BUGS, the distance between a heuristic like a low p-value and the more sensible Bayesian approach is not that large.

run my code [guest post]

Posted in Statistics on July 18, 2012 by xi'an

(This guest post has been written by Nicolas Chopin.)

I have been contacted by Christophe Pérignon, a prof. of Finance at HEC and co-founder of RunMyCode.org, a very interesting initiative that deserves to be publicised widely. Essentially, it’s an arXiv for scientific code: you can create a “companion web-site” for each of your projects, post your code (with links to the corresponding paper), and let users run your code in the “cloud”, with their own data, all of this through a simple web-page interface.

I’ve not tried it yet, and I still wonder whether it is too good to be true; for instance, I wonder what happens if too many people post computer-intensive programs that take hours to complete. But Christophe tells me there is some badass hardware behind the project (a big server from CNRS), and that they are also backed by prestigious institutions (Columbia, NSF, CNRS, etc.).

But certainly the idea is excellent, and looks like the next step in reproducible research. (One of the co-founders is Victoria Stodden, an assistant prof in stat at Columbia and a well-known advocate of reproducible and open research.) One could also use it to illustrate an idea at a conference, or during a course.

The project was started by people in Economics and Business, and there are still some references to this on the website, indirectly through the list of currently implemented languages (Matlab, R, and … Rats!). But Christophe tells me that they want to reach further: they already have projects in image analysis, for instance, and are apparently open to other computer languages (e.g. Python), if there is some demand.

It is going to be really interesting to see how much steam this project gathers, in our field and beyond. Perhaps this is the start of a new trend where we will run more and more of our programs “in the cloud”, with the added benefits of openness and simplicity. We live in exciting times!